05. Artificial (or not) Intelligence, Randomness and Free Will

In this series of articles about humans and machines, we have shown how machines are more capable than humans at many, many levels (from driving, to decision making, and in many instances, even at making sense of human language), but we have also shown how ‘machines’ are part of who we already are (we use cars and smartphones, many people have been provided mechanical body parts, and so on).  More importantly, we have showcased how this perceived separation between machines and humans is highly erroneous, as we are also machines (made up of tiny structures that work together similarly to how a computer/machine works).

Could any of this so-called ‘advanced’ stuff (robots, nanobots, mechanical spleens, digital ‘eyes’, telescopes, phones, printed organs, artificial DNA alterations, etc.) become harmful to us in a way that we can no longer ‘fix’ or otherwise control?  Could we humans arrive at our end as a species (extinction) because we do not understand how to play with these ‘toys’?

This last part of the series will be huge, as we cannot explain it thoroughly enough without touching upon many different topics and giving plenty of examples.  It focuses on Artificial Intelligence (which many seem to fear today), on what ‘random’ really means, and on whether there is such a thing as ‘free will’.  These are rather difficult topics that are highly interconnected and extremely important for the future we are heading into, so it may take reading this article 2-3 times to fully absorb it all.  Have I scared you already?  If so, I’ll also tell you that it will be a fun read, because I have some ‘mental’ games (challenges) for you to play with: we will try to build a robo-chicken to finally understand why it crossed the road, we will look at how today’s news and movies have a completely wrong understanding of A.I., see why the concept of “free will” may be complete nonsense, and so much more.

Buckle up and let’s get started!

A week or so ago, a friend and I buckled up for a car ride to visit a place we’d never been before; somewhere on a mountain, yet close to the sea.  We found the place on Google Maps and set a phone’s navigation system to guide us there.  On the map, it looked like a lighthouse at the edge of the mountain, and Google Maps showed us that we could reach the place by car.  There were no warnings of any ‘dangerous’ roads, or anything like that.

So what happened?  Google created very dangerous directions for us to follow.  First, the road was so narrow that you could barely fit one car, even though it was marked as a two-way road.  Add to the mix many ‘sharp’ curves, trees all around providing very low visibility, and the fact that all of this was on the side of a mountain, and you have a perfect recipe for either falling off of the mountain or experiencing a head-on collision, with no option for turning back.  The situation grew worse as the road played host to more and more ‘tiny rocks’ and became more and more steeply inclined as we progressed.  The car’s tires and brakes did not manage that combination well at all, and we started slipping down a sloping ravine, even with the brakes to the floor.  In that direction, the road ended at the edge of a steep cliff, but we had no choice other than to ‘ride it out’ until the car, thankfully, stopped near the edge of the cliff.  We tried to drive back up, as there was no other way out, but the car couldn’t climb more than half of that slope before we started to slip down again, backwards, towards the cliff’s edge.  Second try – full speed, lots of rocks shooting out from beneath the wheels, along with lots of dust and smoke.  It was like drifting near the edge of a mountain, where any sideways move, because of the slippery road, could throw you off the ledge and into the sea below.  Thankfully, we did manage to climb it and return home, but it wasn’t a pretty experience.  The car smelled like melted plastic, with dust covering the interior, and our hearts pumping far too fast.

Did Google try to kill us?  Whose fault was it?  What if we had slipped off that road into the sea and died?  Should Google be responsible for it?  What if we were using a self-driving car and that car drove us directly into the sea, based on Google’s directions?

We’ll get back to that at the end of the article.

When people say that they fear technology, or specifically ‘robots’, what are they really talking about?  Technology is a knife, a gun, a washing machine, a piece of furniture or a laptop.  I can kill with any of those by shooting or cutting someone, putting them in a washing machine or hitting them with the laptop :).  Although rare, a laptop could ‘explode’ due to extreme battery overheating, without anyone ‘animating’ it.  A knife (or even a stick) can become a lethal weapon if you slip and fall onto it.  Even a piece of furniture can become deadly if you fall and hit your head on it hard enough.  Almost all such technologies (tools) have the potential to be harmful: whether animated by humans, just ‘reacting’ to different environmental factors (like the laptop exploding), or when someone is simply in the ‘wrong’ place at the ‘wrong’ time.

It seems to me that there are only two reasons to ever ‘fear’ machines/tools:

  • Intentionality and Money
  • Unpredictability

1. Intentionality and Money

No matter how ‘stupid’ a machine is, humans can make it dangerous.  It is relatively easy today to attach a smartphone to a machine gun and program it to kill only people who smile.  Having a smartphone camera detect a ‘smiling’ face (something it’s already doing while you’re taking photos) and then follow your added programming to activate the trigger of a machine gun is not that complex.  No matter what technology/tool we talk about, humans can ‘intentionally’ make it dangerous towards others or the environment.

To ‘control’ the outcome of any tools we use, time, resources and energy must be dedicated to the development of safer tools and smarter approaches in dealing with them.  But today’s monetary race mentality keeps people from focusing enough on the tools they supervise.  If there’s not enough money allocated to monitor and very thoroughly test new technologies, then those technologies will be released to the world without proper testing, all because someone wants to make a business out of them before someone else does.  Money is also the primary factor behind the creation of many useless or dangerous tools: guns, drones used for killing people, spying A.I.’s, and so much more.  So, money is not only a limitation on the testing and managing of new technologies; it’s also a core driver for intentionally creating things that are very harmful.

2. Unpredictability

No matter how educated people are, or how well-designed the environment is to not produce the abhorrent behavior that ‘animates’ tools into becoming dangerous, a piece of technology can still be dangerous when you cannot predict its ‘behavior’, its outcome.

Let’s first consider something simple: FIRE.

Fire may not seem like much of a tool to most of us now.  But while many may see illumination, furnaces, and stoves as tools today, they are just modern versions of what fire was used for not long ago.  When humans started to ‘tame’ fire, and then learned how to ‘make’ fire out of wooden sticks or via other methods, their lives and societies changed forever.  It became possible to cook, get warm, see in the dark, and even protect themselves from other animals.  That happened hundreds of thousands of years ago, so it’s interesting to note that fire was still at the center of ‘modern’ societies as recently as 100 or so years ago.  From purifying water (boiling) to illuminating streets, producing heat and transforming its energy into mechanical power, it was a ‘must-have’ tool.  But it’s also true that fire has been responsible for innumerable villages being burnt to the ground, bringing about the deaths of millions, with many more injured and frightened.

150 or so years ago, humans started to understand tiny ‘electrons’ and how to use their movement to create energy for human use, ushering in light bulbs in place of fire-lamps, electric heaters instead of fireplaces, and so on.  Today, we have power plugs, powerwalls, computers, refrigerators, microwaves, A/C’s and so many other electricity-fueled utility devices within the modern home.  Someone who lived 150 years ago would be terrified to learn that our ‘modern’ homes, especially those made entirely of wood, would be ‘plated’ with ‘invisible energy’ that could possibly start a fire inside the house.  Trying to explain to those people how to ‘tame’ power (this time ‘electrons’, rather than fire) for so many uses would cause them to think you’re crazy, as to them there would appear to be far too many risks involved with such ‘magic’.  While we still have the issue of houses, building complexes, or even entire villages or small cities that occasionally burn down because of these installments, and people can still suffer or die from that (as well as electrocution), the overall impact of ‘taming’ electrons is much safer than, and insanely advantageous over, the direct use of fire for these needs.

Playing with fire and electrons is a very dangerous business, but safety measures ‘evolve’ alongside these technologies to make them safer and safer.  For example, you could insert two metal nails into a European power socket with your bare hands, and I can guarantee that it won’t feel good.  You are probably not ‘stupid’ enough to try it (you have learned and understand the consequences).  Then again, you might not yet be aware that many newer power sockets have safety features built into them that do not allow people (or children) to do such harmful things to themselves.  For instance, a power socket can be designed to communicate with electronics using just a small amount of energy when you first plug in a smart device, so that it has to recognize the electronic device before providing the full power needed.  The power socket thereby ‘knows’ that you just plugged in a ‘proper’ device, instead of two metal nails to test your body’s capacity to absorb a jolt of lightning :), and only then will it allow the full power to ‘leak’ out into the device that you plugged in.
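To make this idea more concrete, here’s a tiny sketch (completely hypothetical names and numbers, not a real socket protocol) of that ‘recognize before powering’ logic:

```python
# Toy sketch of the 'recognize before powering' idea (hypothetical, not a
# real socket protocol): full power flows only after the plugged-in device
# identifies itself over a tiny probe current.

class SmartSocket:
    PROBE_WATTS = 0.1     # just enough to wake a device's ID chip
    FULL_WATTS = 2300.0   # a typical European socket's capacity

    def __init__(self, known_devices):
        self.known_devices = set(known_devices)

    def plug_in(self, thing):
        device_id = thing.identify(self.PROBE_WATTS)  # two nails can't answer
        if device_id in self.known_devices:
            return self.FULL_WATTS   # recognized: release full power
        return 0.0                   # unrecognized: stay safe

class Kettle:
    def identify(self, probe_watts):
        return "kettle-42"           # the ID chip answers using probe power

socket = SmartSocket({"kettle-42"})
print(socket.plug_in(Kettle()))      # 2300.0
```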

But many would argue that the fear of ‘machines’ and technology is mostly about the A.I. stuff – Artificial Intelligence – and not the more ‘predictable’ technologies.

These days you hear all kinds of stories about how A.I. can understand human language, can understand human emotions, run scientific experiments, or even dream.  It sounds a lot like anthropomorphism, and that may very well be the case.

So let’s focus on a much clearer understanding of what Artificial Intelligence really is and how it works.  If there is a single grouping of two words that you are sure to hear more and more these days, it’s “Artificial Intelligence”, and without understanding how ‘it’ works, you may both gain and project a completely distorted view of this emergent technological field.

When I search on Google for “TROM” (a documentary I made a few years ago), Google tries to correct me by asking “Did you mean ‘TRON’?”.  It does that because TRON is a popular movie.  When I search “stars this week” to learn what astronomical stars I might be able to see more brightly with my telescope this week (depending on my location, weather, etc.), I get suggestions for “Dancing with the Stars”, a TV show that I have no interest in, or news about Hollywood ‘stars’ that happen to be most popular this week.  Google doesn’t do this to change my views of the world, dumb me down, or annoy me, but more simply because Google’s A.I. is nothing more than a very long string of code (instructions) as to how it should display results.  It may take into consideration what is more popular in the area I am connecting from, combine that with the hour and date of my search, or what was searched for by other people from my area of the world that day.  It will also take into account which websites are more popular and display results from those websites first.  All of this is part of an algorithm that makes Google ‘behave’ in a particular way for each individual search.

Google A.I. is not what Hollywood depicts in movies.  It is powerful, but only for a very specific task.  ‘IT’ does not wonder about stars in the night sky, nor is it at all curious about my documentary.  ‘IT’ just is, and ‘IT’ has some specific functions that people created and continue to refine for ‘IT’.

If, for instance, the Google search engine is programmed to take into account your previous search when you search for something new, then the results will become very different from what they might be without that ‘feature’.  And you can already see this.  If you ask Google “Who is Charles Darwin?”, Google will answer that.  But then, if you ask “When did he die?”, Google automatically recognizes that you are still asking about Darwin and answers “April 19, 1882”.  It gets even more interesting when you then ask “Who was his wife?”, as Google keeps the conversation going by telling you Darwin’s wife’s name.  You can continue to ask “When did she die?”, and you will again get a relevant answer based on the chain of previous questions and answers.  By this time, you may be sensing some ‘smarter’ A.I. with whom you want to have a conversation, but remember, it is very limited to the rules that this software follows.  Google Now is not really ‘wondering’ what your interest in Darwin might be, or whether you are an ‘atheist’ (since you searched for Darwin).  Even if you program IT to ‘create’ such associations, IT still cannot ‘think’ like a human.  IT can only make those associations and display some results based upon them.
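As a hedged sketch (nothing like Google’s actual code, of course), the core of that ‘feature’ could be as simple as remembering the last entity you searched for and attaching it to any follow-up question that contains a pronoun:

```python
# Minimal sketch of conversational context (not Google's real algorithm):
# remember the last entity searched for, and attach it to follow-up
# questions that contain a pronoun.

PRONOUNS = {"he", "she", "him", "her", "his"}

def interpret(query, context):
    words = query.lower().rstrip("?").split()
    if any(w in PRONOUNS for w in words) and "entity" in context:
        return f"{query} [about: {context['entity']}]"
    context["entity"] = query.rstrip("?").replace("Who is ", "")
    return query

context = {}
print(interpret("Who is Charles Darwin?", context))  # stores the entity
print(interpret("When did he die?", context))
# -> When did he die? [about: Charles Darwin]
```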

Super complex A.I. comes about when the series of instructions that it follows are many, widely varied, and dynamic (meaning that the software can add new rules and overwrite the old ones).  This is when an A.I. may become unpredictable, and potentially dangerous.  To understand this aspect, we need to first talk about the idea of ‘random’ to help better understand and weigh what ‘unpredictable’ really means.  This will take up a big chunk of the article, but bear with me, because understanding this concept will make you say “Aha, I get it!” at the end of the article (at least I hope so).

Random

If I were to challenge you to find a random process in nature or invent software that creates random numbers, do you think you would be able to complete that challenge?  Let’s see.

“ 93 ??  dada ?|/\ so what?!  “ – This seems ‘random’, right?  Well, what if it isn’t?  The lottery, weather, ideas in your head – they all appear to be random.  But are they?

“Random” is a term used to describe something that seems to use no rules to arrive at a result; an outcome with no way of understanding how it was arrived at.  You may have come across numerous ‘random’ websites where you can generate ‘random’ numbers (random number generators).  They claim to generate ‘random’ numbers using computer software.  How true is that statement?  How in the world can you program something to give you a result that is fully unpredictable, or otherwise impossible to understand how it was arrived at, when you are the one who programs it to create those numbers?

Here are some numbers: 145 → 42 → 20 → 4 → 16 → 37 → 58

Are they random or not?  How can you tell?

If I were to tell you that they are not ‘random’, and that all of them are related to each other by a single ‘rule’, would you be able to figure out the rule that unites them all?

145 → 42 → 20 → 4 → 16 → 37 → 58 → X

The only way to ‘predict’ X is to understand the rule governing the string (the rule of the game).

They are all connected in the following way: the rule is applied to ‘145’ to arrive at ‘42’.  The same rule is then applied to ‘42’ to create ‘20’, and so on.  A single rule is applied to each number in order to provide you with the next one.  So, if you figure out the rule being used and then apply it to ‘58’, you will find the “X” number.  And to test that you are correct, the same rule must bring you back to 145 when it is applied to “X”.  Get it?

Give it a try and see if you can figure out the rule.  Take all the time you want.  Let’s see how ‘smart’ you are, because if you crack this problem, you are a bit like an A.I.  🙂

You can check the answer here.

The rule is to add together the squares of each digit within the current result to arrive at the next number in the series.

145 → 1² + 4² + 5² = 42

42 → 4² + 2² = 20

20 → 2² + 0² = 4

4 → 4² = 16

16 → 1² + 6² = 37

37 → 3² + 7² = 58

58 → 5² + 8² = 89 (the ‘unknown’ number)

89 → 8² + 9² = 145

You see, once you know the rule behind the puzzle, it becomes so easy – but it’s very hard for a human brain to figure out what the rule is.
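In fact, once known, the whole ‘riddle’ collapses into a couple of lines of code:

```python
# The puzzle's rule in code: the next number is the sum of the squares of
# the current number's digits.

def next_number(n):
    return sum(int(digit) ** 2 for digit in str(n))

n = 145
for _ in range(8):
    print(n, end=" -> ")
    n = next_number(n)
print(n)   # 145 -> 42 -> 20 -> 4 -> 16 -> 37 -> 58 -> 89 -> 145
```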

This is the basic idea behind generating ‘random’ numbers: sequentially apply a rule to a number to create a string of numbers so different from each other that they seem ‘random’ to you and me.  But of course, with enough computational power and time, testing millions or billions of formulas very quickly, you can eventually crack the rule behind any such string of numbers, no matter how complex it may be, which highlights the fact that all such seemingly ‘random’ strings of numbers are not ‘random’ at all.  The only difficulty lies in cracking the rule that creates them.  Depending on a rule’s complexity, it can take present-day computers hundreds, if not millions, of years to crack complex ‘riddles’, so no worries if you were unable to figure out the rule behind the string of numbers above, even though it was a very simple one compared to how complex these rules can be made.

The puzzle above is intended to help you understand how ‘tricky’ such relationships can be, making it so hard for us to understand what rules are being applied to those relationships, even when the rules themselves are not that complicated.  Now, to more thoroughly understand this idea, let’s make up a rule that will give us ‘random’ numbers from a computer.

To do this, we’ll first need a ‘seed’ (an initial starting number).  We’ll then multiply it by itself, retain only the digits at the center of the resulting number, and repeat the process on those retained digits.  That will serve as the rule we just invented.  It’s relatively simple, but it will give rise to more complex and ‘unpredictable’ numbers than the previous rule we played with.  The formula looks something like this: SEED*SEED = xxxxxx.  The next ‘seed’ will be the digits from the middle of the newly generated number.  And then we repeat the entire thing.

So, let’s start with 46 as our initial seed.

46*46 = 2116

The new seed is “11”.

Let’s go on and generate a few more such numbers:

11*11 = 121

2*2 = 4

4*4 = 16

16*16 = 256

5*5 = 25

25*25 = 625

2*2 = 4

4*4 = 16

16*16 = 256

…..

So, the ‘random’ numbers generated are 11, 2, 4, 16, 5, 25, 2, 4, 16, 5….

As you can see, for those who do not know the rule, there seems to be no obvious relationship between these numbers and their order, but the results also start ‘repeating’ (patterns), which makes it easier to ‘crack’ the rule used to generate them.  To make them not repeat so often, you would need to start with a bigger seed, and perhaps a more complex rule set.  This example illustrates how computers work to generate the ‘ugly’ and ‘unpredictable’ numbers that we call ‘random’.  But remember, if any patterns repeat, that is called a ‘bias’, a hint about the rule, and it makes these numbers more easily predictable.
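Here is a small sketch of our toy generator (a cousin of von Neumann’s classic ‘middle-square’ method), reproducing the numbers above:

```python
# A sketch of our toy generator: square the seed and keep the middle
# digit(s) of the result as the next seed.

def middle(digits):
    n = len(digits)
    if n <= 2:
        return digits                        # nothing to trim
    if n % 2:
        return digits[n // 2]                # odd length: one middle digit
    return digits[n // 2 - 1 : n // 2 + 1]   # even length: middle two digits

seed = 46
for _ in range(10):
    seed = int(middle(str(seed * seed)))
    print(seed, end=" ")
# 11 2 4 16 5 25 2 4 16 5  -- the repeating pattern (bias) noted above
```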

What is more ‘random’: a real coin toss or a virtual one (simulation)?

13 – 18 – 23 – 30 – 37 – 4 – 7

Can you guess the rule behind this new string of numbers?  This one is nearly impossible (even for computers) to guess, as its creation was wildly different from the other ones.  How different?  It is not based on any mathematical formula.  It was rendered using this kind of sound (listen to it): sound

I generated these numbers with random.org (app here), and they do not use mathematical rules to generate new number strings; they use a natural phenomenon to do that.  More precisely, they use atmospheric noise (the sound you just heard).  If you tune an analog radio ‘in between’ live broadcast stations, you will hear a similar noise.  What you are hearing is actually more amazing than most people realize, because you are listening to the effects of thunderstorms across very wide distances.  This noise is primarily created by lightning discharges (40 per second, worldwide) that are nearly impossible to completely predict with today’s knowledge and technology.  The noise varies according to where lightning discharges happen (distance), along with their intensity, frequency, and so on.

So, they record this noise and analyze it with software that creates a visual representation of it.  Then they basically pick up variations in this graph (little bumps), transforming them into 0’s and 1’s (binary strings).

For instance: big and wide bumps are interpreted as 0’s, while flat (no bumps) areas are interpreted as 1’s.  And as you may know, computers use 0’s and 1’s to create all that you see on a computer (photos, videos, text, formulas, etc.).  Finally, using the binary strings created from that noise, you can connect the thunderstorms to the computer, and they can now ‘communicate’.

So, for instance, they can then add a rule in the computer saying 0=tails and 1=heads for a coin toss game.  Now, when you go to their website to play the coin toss game and click the button to flip a virtual coin, the software looks at this atmospheric noise, and if it first reads a 0 in the ‘noise’ graph (big bump), then the coin will end up tails.  If it reads a 1 (flat), then it will be heads.  Get it?

10010 means: Heads – Tails – Tails – Heads – Tails.
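Here’s a crude sketch of that reading process (random.org’s real pipeline is far more careful about removing bias than this):

```python
# Crude sketch of turning noise into coin flips: read a 0 where the signal
# 'bumps' above a threshold, a 1 where it stays flat.

samples = [0.9, 0.1, 0.05, 0.8, 0.02, 0.7, 0.1, 0.9]  # made-up noise levels
THRESHOLD = 0.5

bits = "".join("0" if s > THRESHOLD else "1" for s in samples)
print(bits)                                   # 01101010

flips = ["Heads" if b == "1" else "Tails" for b in bits]
print(flips)   # the virtual coin, driven by (simulated) thunderstorms
```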

This is really amazing, because it really means that your virtual coin will ‘land’ on heads or tails based on lightning strikes.  Go ahead and play with some of their games, as they are all based on this concept.

It should be mentioned that this system is still far from perfect.  They need to pick up this atmospheric noise within a special environment, since even a computer fan’s electrical properties can disturb it by introducing an additional, unintended pattern in the noise (bias), rendering the resulting noise ‘less random’.  But even after very carefully applying methods to eliminate those influences, if the software analyzes a piece of that ‘thunderstorm’ sound and ‘sees’ that bumps (0’s) occur more often than non-bumps (1’s), that means that the system is again somehow non-randomly ‘favoring’ one of the two.  That’s not good for such a system, because it needs to produce as close as possible to a 50-50 chance for either 0’s or 1’s to be picked up from that noise.  If something is influencing this noise so that it ‘favors’ one of the two, the system becomes more predictable.  As mentioned earlier, when patterns repeat, it is easier to crack the ‘rule’ of the game or to otherwise predict outcomes.  In this case, if the noise favors arriving at a 1, for instance, then it will be more likely for Heads to come up as a result in such a coin flip game.

To understand how patterns (‘biases’) can make even complex systems predictable, roll a die.  A real one.  What do you get?  You can only ‘land on’ a 1, 2, 3, 4, 5 or 6.  You have a 1 in 6 chance of guessing the number, which is pretty ‘random’.  Roll two dice and the story changes, as some numbers now have more chances to come up as a result of combinations.

If you roll two dice, it turns out that your best ‘bet’ would be on 6, 7, or 8, because you will statistically land on those more often (with 7 more often than any other).  The two-dice system favors certain numbers as you roll them multiple times, making it more predictable.  So, the system of two dice has a BIAS.  If you were to simulate the roll of two dice millions of times, you would see more clearly how those 3 results come up far more often than any other numbers.  The same thing applies as you add more dice to the mix, with some sums occurring more frequently than others (source).
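You can see this bias for yourself with a few lines of simulation:

```python
# Rolling two dice a million times: the sums 6, 7 and 8 dominate.
import random
from collections import Counter

rolls = Counter(random.randint(1, 6) + random.randint(1, 6)
                for _ in range(1_000_000))
for total in range(2, 13):
    print(total, rolls[total])
# 7 shows up about 1 roll in 6; 2 and 12 only about 1 in 36 each.
```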

Random.org is striving to eliminate all biases to get closer and closer to 50-50 chances for arriving at 0’s and 1’s.  They have not attained ‘the ultimate’ 50-50 yet, but they are very close to it.

Random.org is the most well-known of such ‘random’ number generators and games based on ‘randomness’ that output unpredictable results.  But even their system is based on physical phenomena that have a deterministic value.  In other words, something creates those thunderstorms and, if we gain enough information on that and pull together enough computational power, in theory, we could learn how to predict the outcomes.  There are some other systems that ‘claim’ to produce even more ‘random’ outcomes, which you can check out here, but for the ‘purpose’ of this article, we will stick with random.org’s powerful example.

So, is a real coin flip more ‘random’ than a virtual one at random.org?

Rolling a single ‘real’ die appears ‘random’, until you employ a high-speed camera and some mathematical formulas to help you predict the outcome (read about it here).  And while a real coin toss may also seem ‘random’, it can also be predicted with the same high-speed cameras and some maths (source).  Rolling a real die and flipping a real coin involve numerous biases that you can figure out to predict outcomes: the die’s edges, corner radius, material, center of gravity, launch speed, etc.; the coin’s weight, balance point, metallurgy, and so on.  All of these biases can make such ‘events’ predictable through statistical probabilities (you roll a die many times to learn if it ‘favors’ some outcomes, and then work to figure out why – the same goes for a coin toss).

(see this video for more on this)

So, it may seem ‘counter-intuitive’ to opt for a virtual coin toss on random.org rather than a real one, but the random.org toss produces much more unpredictable results than the one influenced directly by humans.  Interesting, isn’t it?

In reality, it seems that nothing is truly ‘random’, because all events have a deterministic value.  Many events, such as the weather, the movements of billions of atoms, or even a lottery draw, can be based on a complex series of mechanistic events and reactions, appearing ‘random’ to us only because we are not recognizing the exact deterministic values at work.  Many scientists say that spontaneous reactions occur in the quantum world (the world beneath the atoms), where some particles seem to just ‘pop’ into existence, but we will leave that for another article, as it requires a more detailed presentation.

All of this discussion about ‘random’ was not that ‘randomly’ chosen for this article, as it is not just about numbers; it is also very relevant to how a huge chunk of the internet works.  Whenever you send an email, buy something online or chat via Skype, you are doing all of this ‘securely’ because of the idea of ‘random’.

Computers manage data in ‘binary’, which is simply strings of 0’s and 1’s that represent the data stored on their hard drives, as well as whatever data is being worked on at any given moment (text files, video, apps, system commands, online communications, etc.).  If I send you an email with the text “How are you?”, what is actually ‘sent’ over the internet is 010010000110111101110111001000000110000101110010011001010010000001111001011011110111010100111111.  If someone ‘intercepts’ that binary string, that person can easily read the email that I just sent to you.  To make this communication secure, the idea is basically to create a secret formula (rule set) that ‘scrambles’ the content of my email message so that the result makes no sense to anyone.  This is very similar to how you create ‘random’ numbers using a mathematical formula, so that no one understands how they were created.
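You can reproduce that binary string yourself in a couple of lines:

```python
# "How are you?" exactly as it travels: each character as its 8-bit code.
message = "How are you?"
binary = "".join(format(ord(ch), "08b") for ch in message)
print(binary)   # 010010000110111101110111... (the string quoted above)
```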

So, imagine this simple rule:

Replace all a, o, h, e, r, u, y and w letters with 1, 2, 3, 4, 5, 6, 7 and 8, respectively.  An email that says “How are you?” will then look like “328 154 726?”.  If we then add another rule to our code to eliminate spaces, the email will look like 328154726.  The computer then sends out a string of 0’s and 1’s that represent that scrambled message.  No matter who might intercept my message, he/she won’t be able to understand what that message actually says.  However, if the person receiving the message (you) has the rule we just described, you can unscramble the message and understand what I sent in my email.  This scrambling and unscrambling of data is the basic concept behind encryption.  Most modern encryption is secure because, as mentioned earlier, it would take hundreds of years for today’s supercomputers to crack the rule sets in order to decode the data/messages that they mask.  To understand more about this process, watch this video – https://www.youtube.com/watch?v=M7kEpw1tn50
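And here is that toy rule as code (a simple substitution cipher – real encryption is vastly stronger, but the scramble/unscramble idea is the same):

```python
# The toy 'scrambling' rule above, as code: a simple substitution cipher.

KEY = {"a": "1", "o": "2", "h": "3", "e": "4",
       "r": "5", "u": "6", "y": "7", "w": "8"}
REVERSE = {v: k for k, v in KEY.items()}

def scramble(text):
    substituted = "".join(KEY.get(ch, ch) for ch in text.lower())
    return substituted.replace(" ", "")      # second rule: drop the spaces

def unscramble(text):
    return "".join(REVERSE.get(ch, ch) for ch in text)

print(scramble("How are you?"))   # 328154726?  (the '?' passes through)
print(unscramble("328154726?"))   # howareyou?
```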

CRACKING

Over time, the term ‘hacking’ has come to be associated with people who intentionally breach the security of software or take advantage of discovered ‘bugs’ to cause harm.  However, that term is continuously being misused (mostly by ‘the media’), as ‘cracking’ is the proper technical term for that.  Hacking means “to take something apart to learn how it works”, often with the intention of positively improving on its design.  So, keep that in mind.

Speaking of encryption, let’s talk a bit about software security, because even if everything is encrypted, security is not only about people intentionally trying to crack encryption systems; it is also about unpredictability (perhaps even more so).

You may have heard of computer viruses, but have you ever wondered what they really are?  They are basically software apps or scripts (0’s and 1’s) that can wreak havoc on a system in one way or another.  Viruses are just one type of malicious software (others include trojans, worms, spyware, rootkits, etc.), so to incorporate them all, we will use the more all-encompassing term ‘malware’ (source).

To simplify this concept: if you construct a building, you cannot possibly be fully aware of the ‘health’ of every part used for it.  For example, cracks may have formed in some parts of the cement, and if water is able to infiltrate the building there, the cement may eventually weaken to the point where the entire building (or a section of it) collapses.  Holes in the building negatively affect environmental systems (A/C, heating, humidity, etc.) and may also allow rats, insects, or whatever else to get in (stealing or contaminating the food), creating significant discomforts for the people making use of the building.  So, malware is to computers what cracks, water, and rats/insects are to buildings.  Some can collapse your software/operating system (like erosion for buildings), some will add annoying advertising popups to your browser, or steal your passwords or credit card numbers (like rats disturbing the ‘peace’ of the ones living in the building, or stealing the food).

When you construct an operating system (like Windows, Mac OS, Linux), you cannot be fully aware of all that you have built.  There are simply far too many inter-connections and rules that you need to include: recognizing and fully supporting so many combinations of thousands of hardware possibilities, allowing this code to do that, but not this, recognizing and properly loading the USB flash drive that you just inserted into your computer, recognizing that you have a video card so that your monitor works, allowing you to open and then make use of a browser – and so on.  All of these are coded rules designed into the operating system (OS).  Then consider all that’s needed for the OS to properly handle all of the other pieces of software that you choose to install within your OS (apps, programs), to allow for all of those to work together.  Just imagine how many rules it takes for a typical operating system to work.

So, there may be some rules that you were not able to completely develop (unaware that someone might try doing something you didn’t anticipate) or rules that conflict with other rules, or missing rules.  And this happens quite often.  If some ‘rats’ (crackers) discover any improperly coded software (holes in the software), they can then infiltrate and mess with the Operating System (or any software) and may add new, undesirable rules to it.  For instance, you may have a rule that does not allow specific files and folders to be deleted, as they are critical to the proper functioning of your OS, but if there is a ‘crack’ that can allow the ability to delete them, imagine someone taking advantage of it and doing just that.

The most robust approach is to quickly patch newly discovered holes to keep your Operating System (OS) as trouble-free as possible.  As an ‘open-source’ operating system, Linux is able to manage this approach via the support of a huge community of open-source programmers that continuously seek out and fix these holes.  Proprietary (closed) operating systems, such as Windows (and sometimes Mac OS), are created and supported by companies that mainly rely on a handful of paid employees to work on the code, so updates to their operating systems (hole patching) are issued much more slowly than with Linux’s open-source approach.  As a result, closed OS’s have to try to ‘kill the rats after they have entered the building’.  This is why Windows (and sometimes Mac) users usually have some kind of anti-virus system installed on their computers, designed to search for particular ‘rats’ to kill.  Unlike Windows or Mac OS, the code that makes up the free, open-source Linux system is open for inspection by everyone, so anyone can report or fix holes the moment they are discovered.  This ultra-fast and widespread response network is why you do not need an anti-virus system on Linux.  This doesn’t make Linux perfect, of course, but it does make it extremely robust.

Even if you create a much saner environment, where no one wants to cause ‘harm’ via such holes in any software (A.I., your smartphone, a health app, etc.), the presence of software holes can still be dangerous without intention (like rain drops that gradually weaken the cement).  You can basically write an app (a piece of code) and, without being aware of any potential problems, program the app to access some parts of the Operating System that happen to have holes in them, unintentionally causing the OS to ‘misbehave’ (errors).  So imagine that you construct a complex software that carefully monitors one’s health and, as needed, injects insulin into the body, but because the software (yours, the OS, another running app on the system, or any combination) has holes in it, the commands issued by your app may not be carried out properly, unintentionally endangering someone’s life.

This is why unpredictability can become dangerous.  Linux provides a good counter-example of how, when many eyes are on the code and many hands are on the keyboard (without monetary pressure), you can more quickly and accurately fix these holes to make the software safer.  If it weren’t for the monetary system’s ‘profit-motive’ demand encouraging businesses to minimize these efforts, we could easily automate these testing processes, simulating hundreds of thousands of different software checks to find such holes and correct them.  Nevertheless, it’s important to recognize that unpredictability is a big part of why some pieces of code have errors or ‘allow’ others to take advantage of these holes and mess with them.  The more robust the code, the less likelihood there is for anyone to crack it.  And the saner a society becomes, the fewer ‘attackers’ emerge out of it.

Artificial (or not) Intelligence:

We are finally ready to focus on Artificial Intelligence, and I hope that all that was presented so far will start to make more sense now.

Machine Learning and Complex Rules

What category does the cat fit in?  Can you find the clue?

acerous: parrot, humpback whale, chimp.

non-acerous: goat, giraffe, cow.

So, is a cat acerous or non-acerous?

You don’t need to already know what words like ‘acerous’ mean.  Instead, look to find patterns within the groups of animals to then figure out by what criteria they are being organized, like we did for the numbers earlier.  This is how you can ‘learn’ where the cat fits in.

So, what do a goat, a giraffe, and a cow have in common?  And what do a parrot, a humpback whale, and a chimp have in common?  In order to classify the cat in this system, you must first figure out the ‘rule’ behind the game.  Sounds familiar, right?

If I told you that a cat is acerous, would that help you work out the rule?  Is this classification based on genetic relationships, skeleton types, brains, diets?  It will be quite hard for you to figure it out with only three examples of each, but it would become more obvious if you were given thousands of examples; you would realize that ‘acerous’ means ‘animals without horns’.  The more clues you have, the more obvious the rule becomes, as in the case of the ‘random’ numbers.  This example illustrates how machines ‘learn’.

You provide huge amounts of data and program the system to simulate and search for patterns.  If the computer program has a huge database of animals and their features to work with, it can then arrive at a statistic as to what the acerous or non-acerous animals have in common with each other.  If the statistics show that 60% of non-acerous animals share the same number of legs, but 100% of them grow horns, it will record the 100% statistic as the more reliable criterion.  IT does not understand what animals are or what that group is all about, as IT is all about working with statistics.  So, now that the software has the statistic that shows by what criterion the acerous and non-acerous groups are divided, and if the computer has access to a lot of biological information about cats and that information shows that cats do not grow horns (cat ≠ horns), then it can fit the cat into the correct category.

The more data you feed to such algorithms, the better they become at statistically predicting ‘patterns’, because they can quickly find ‘biases’ in the ‘riddle’.  Statistics smartly applied.
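As a toy sketch of that ‘learning’ (hypothetical features and data, just to illustrate the statistics):

```python
# Toy sketch of the 'learning' described above (hypothetical features and
# data): score each feature by how well it splits the two groups, keep the
# strongest, then use it to classify the cat.

features = {  # horns: 1 = grows horns; legs: number of legs
    "parrot":  {"horns": 0, "legs": 2},
    "whale":   {"horns": 0, "legs": 0},
    "chimp":   {"horns": 0, "legs": 2},
    "goat":    {"horns": 1, "legs": 4},
    "giraffe": {"horns": 1, "legs": 4},
    "cow":     {"horns": 1, "legs": 4},
}
labels = {"parrot": "acerous", "whale": "acerous", "chimp": "acerous",
          "goat": "non-acerous", "giraffe": "non-acerous", "cow": "non-acerous"}

def accuracy(feature):
    # How often does 'feature present' line up with 'non-acerous'?
    hits = sum((features[a][feature] > 0) == (labels[a] == "non-acerous")
               for a in features)
    return hits / len(features)

best = max(["horns", "legs"], key=accuracy)
print(best, accuracy(best))        # horns 1.0 -- a perfect split

cat = {"horns": 0, "legs": 4}
print("cat ->", "non-acerous" if cat[best] > 0 else "acerous")  # acerous
```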

True story: a well-known author wrote a book but credited a fictitious author name to publish it anonymously.  The book was later scanned with a statistical program like the one described above and, by comparison to other writings from multiple well-known authors, it correctly connected the ‘anonymous’ writing with the real author’s name.

So how did that happen?

Let’s say Jack is very interested in science and astronomy, but his friend Emma is more ‘romantic’ and interested in movies.

Imagine now that we build software that can make associations between words, so we add the rules:

stars, moon, night, telescope, math = astronomy/science

actor, night, Titanic, love = movies/romantic

If this software analyzes many of the emails that both send out to the world over a period of time, it can show that Jack is interested in astronomy/science and Emma in romantic movies.  This would only serve as a simple statistic, programmed by humans and based on the number of unique words used.  But if you also weigh the repetition rate of the words used by each of them, then the resulting statistics can become more complex and accurate.  Why?

Could you guess which one wrote this: “Did you see Apollo 13 last night?  I loved the shot of the Moon!”?  It could be Jack, who is interested in astronomy and just saw a movie, or it may happen to be the more ‘romantic’ Emma, who loves movies and just happened to see one that is about ‘space/astronomy/science’.  You and I may be terrible at guessing who wrote the email, but the software will have a much better chance at ‘guessing’, and I will explain why.

The text has the following words that both use in their writings: Apollo 13, night, moon, and love.  If Jack rarely uses “night” and almost never uses “Apollo 13”, but mentions the Moon in 50% of his emails, while Emma has never used the words “moon” or “Apollo 13” at all so far in her emails, but “night” and “love” have come up 80% of the time (combined), then you can use some mathematical formulas to work out which one, statistically, is more likely to have written that message.  You may not get how I described their word use – I don’t either :) – but that’s not the point.  The point is that rules like these are used to better predict who has written a particular text.  So, after analyzing and ‘weighing’ all of the statistical data, the software may say: There is an 86.4% chance that Jack wrote this email.
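As a hedged sketch (with made-up usage rates, not the actual method used on that book), the ‘weighing’ could look like this:

```python
# Sketch of the authorship statistics (invented usage rates): multiply how
# often each author uses each word of the message, then compare the scores.

usage_rate = {   # fraction of each author's emails containing the word
    "Jack": {"moon": 0.50, "night": 0.05, "apollo 13": 0.01, "love": 0.02},
    "Emma": {"moon": 0.001, "night": 0.40, "apollo 13": 0.001, "love": 0.40},
}

def score(author, words):
    p = 1.0
    for w in words:
        p *= usage_rate[author].get(w, 0.001)  # tiny floor for unseen words
    return p

words = ["apollo 13", "night", "moon", "love"]
scores = {a: score(a, words) for a in usage_rate}
total = sum(scores.values())
for author, s in scores.items():
    print(f"{author}: {100 * s / total:.1f}% likely")  # Jack wins here
```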

There is nothing ‘esoteric’ about a piece of software ‘recognizing’ you by your writing style.  It is strictly rules and formulas that give birth to statistics and probabilities.

Face Recognition

Do you know why Facebook’s face algorithm is so good at recognizing faces (except mine)?  It’s because people tag other people, and if a person is tagged enough times, the software can recognize recurring patterns in the pixels (the face), like in the ‘acerous’ animals example, and associate those patterns with a name.  Then it can ‘recognize’ you (the pixel patterns of your face) in new photos, based on the multiple examples and associations it has been exposed to.  You are basically ‘teaching’ this software how to ‘recognize’ faces every time you tag someone.  But since I do not have a personal Facebook account (so there are no photos of my face with tags on them), the Facebook algorithm does not have any data about my face on which to apply that pattern recognition formula.  If you tell Facebook’s algorithm to analyze the face of a human who is not in its database, it will have no clue who that human is.

An example of the process:

– the software can assess the overall texture of skin to help determine age.  It can also detect moles and other features

– it searches for shadows and wrinkles to help determine age

– the software ‘reads’ the shape of lips to determine mood and gender

– eyebrow shapes are key to determining mood

– jewelry can help determine gender

– shadows cast by hair help determine gender

https://www.youtube.com/watch?t=434&v=8wHZ3oso618

Language and Voice Recognition

IBM’s Watson supercomputer does not really ‘understand’ human language.  It generates statistics that allow it to arrive at probabilistic outcomes.  Watson’s software analyzes language in the same way that Facebook’s system analyzes faces: it records millions of writings and statistically graphs how words are used.  For example, 85.5% of the occurrences of the word “are” might be found to be associated with “you” (written one after the other, in either order), and always in the form of “are you” when the sentence ends with a question mark.

So Watson just learned an important aspect of English grammar: that within a question, the correct form is “are you”, rather than “you are”, and that learning is based completely on what it statistically deduced from analyzing a lot of English text.  You see, you don’t have to enter any grammatical rules into Watson.  Allow it to use similar statistical measurements and huge amounts of data, and IT will statistically figure out the rules on its own.  Pretty awesome, and super important!
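Here’s a tiny sketch of that statistical deduction (nothing near Watson’s real scale or sophistication):

```python
# Tiny sketch of the idea: count which word order appears in questions,
# and the counts themselves become the 'grammar rule'.

from collections import Counter

corpus = ["How are you?", "Where are you?", "Are you there?",
          "You are late.", "You are kind."]

question_bigrams = Counter()
for sentence in corpus:
    words = sentence.lower().strip("?.").split()
    if sentence.endswith("?"):
        question_bigrams.update(zip(words, words[1:]))

print(question_bigrams[("are", "you")])  # 3 -- questions favor "are you"
print(question_bigrams[("you", "are")])  # 0
```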

In the same way, you don’t have to tell a similar software what ‘acerous’ means.  You just have to provide multiple examples and it will figure it out, again, based on statistics.  This is why more advanced autocorrect software works quite well.  Even if you intentionally write “How you are?” or “How is you?”, the software statistically knows that both of them should be “How are you?”.

Watch this video to better understand how IBM Watson works – https://www.youtube.com/watch?v=DywO4zksfXw

The same thing goes for voice recognition software that can ‘understand’ and process multiple accents as it transforms the sound into a graphical representation (like random.org does with atmospheric noise) and then analyzes the various patterns within it.  If thousands of people are recorded saying the word “you”, then visual representations of the sounds can become associated with that word.  “You” becomes a graphical sound pattern.  Do that for all words and you can recognize and process voice.  Do that in multiple accents and you can recognize voices even more accurately.

Automated Suggestions

Whenever you see any recommendations: from Netflix to YouTube, Spotify to Amazon, they are ALL based on the principles of statistics and patterns (what you previously bought, listened to, searched for, etc.), and the more data you provide to them (the more you buy, the more songs you listen to, and so on), the more accurate they become at ‘predicting’ what you want/like.  If you buy ‘stuff’ from Amazon that is associated with ‘astronomy’ (pillows with stars design, a book about Mars, etc.), then Amazon may recommend a telescope to you.  If you buy make-up, purses, clothes or other ‘fashion’-related items, then the recommendations of shopping websites will be based on that.  If you typically buy songs that have a particular bitrate, then songs with similar bitrates will be recommended to you.  All of this is based on pre-programmed rules implemented by Amazon.
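A minimal ‘people who bought X also bought Y’ sketch (toy data, not any shop’s real recommender) of that principle:

```python
# Recommend the item that co-occurs most often with what the user owns.

from collections import Counter

purchases = [
    {"star pillow", "book about Mars", "telescope"},
    {"star pillow", "telescope"},
    {"book about Mars", "telescope"},
    {"purse", "make-up"},
]

def recommend(owned):
    scores = Counter()
    for basket in purchases:
        if owned & basket:                 # this shopper shares an interest
            scores.update(basket - owned)  # count what else they bought
    return scores.most_common(1)

print(recommend({"star pillow"}))  # [('telescope', 2)]
```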

Playing Games

Check out this video of a software playing a computer game – https://www.youtube.com/watch?v=Q70ulPJW3Gk

For anyone who is not aware of the information we’ve presented so far in this article, it may seem really ‘mysterious’ how a computer (an A.I.) could learn to play a video game, and then become better the more it plays, undisturbed by any human intervention.  But you might now recognize that it uses the same methods as Facebook or IBM Watson: statistics and pattern recognition.  IT starts with a bunch of ‘random’ moves (pre-programmed diverse moves that seem to follow no purpose, similar to basic ‘random’ number generators), but the moves that lead to scoring more points are also scored as statistically better and adopted into its memory for use in future moves.  And so, the software adopts more and more of the moves that score the most points and will eventually be able to play that particular game extremely well.  If we were to anthropomorphize the process, it uses continuous ‘reinforcers’ to learn and perfect what works best for the task it was programmed for (source).

This kind of ‘reinforcement’ algorithm is the latest big thing in A.I., and it will make these systems appear to us as though they ‘learn’ in the same way that humans do.  If you make a four-legged robot whose software only allows it to move its legs in a ‘random’ manner, but you also program it to adopt the leg movements that move it forward the most, then, over numerous tries, the robot will continually re-write its software toward whatever patterns of four-legged forward movement it arrives at.  The way it moves forward may end up looking completely new to us humans, but again, that’s only because of the way this robot was programmed (adopt new moves that propel it forward).  This robot may ‘invent’ a better four-legged walking method than we have ever witnessed.
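Here’s a toy version of that ‘adopt what works’ loop, with a made-up scoring function standing in for a physics simulation:

```python
# Toy 'keep what scores better' loop (random search, not DeepMind's actual
# algorithm): tweak a movement pattern at random and keep any tweak that
# moves the robot further forward.

import random

def distance_travelled(pattern):
    # Stand-in for a physics simulation: some invented scoring of a pattern.
    return -sum((p - 0.7) ** 2 for p in pattern)

pattern = [random.random() for _ in range(4)]   # one number per leg
best = distance_travelled(pattern)

for _ in range(10_000):
    candidate = [p + random.gauss(0, 0.05) for p in pattern]
    score = distance_travelled(candidate)
    if score > best:                 # reinforce: adopt what works better
        pattern, best = candidate, score

print([round(p, 2) for p in pattern])  # converges toward the best pattern
```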

Doing Research

There is a sea worm that, if you cut off its head, grows it back again.  Nuts!  How in the world is it able to do this?  No one properly understood how the genes of this flatworm are linked and work together to make this possible.  That is, not until a machine came up with the answer to this 100-year-old mystery in just three days.  Ain’t that amazing?  🙂  It is, but if you’re not aware of the details of the story, and perhaps overly influenced by sensationalistic movie plots, your projection may be far off.  That computer didn’t sit down to ponder what a flatworm is, or how the ‘damn’ thing grows back its head every time you cut it off.  The computer is basically a bunch of algorithms made by humans to simulate and test ‘random’ (different) scenarios about how the flatworm regenerates itself, similar to the other examples showcased so far.  In this case, scientists had to invent a custom programming language so that the software could simulate many ways that genes might work in a flatworm in order to arrive at the most statistically probable answer.  It created ‘random’ simulations, and those that turned out closer to ‘regenerating the virtual worm’ were kept, while the others were discarded.  Repeating this process many times, similar to how the other software ‘learned’ how to play a game, the best scenarios were adopted and implemented.  In this way, it took only three days for the software to arrive at the best explanation of that process ever produced, so far (source).

Recognizing Objects

Facebook’s face recognition works mainly based on what people tag (what pixels/what faces) and correlates the pixels with the tag, but there is another, more sophisticated and useful feature that such complex algorithms can perform: you throw a bunch of photos (thousands) at them and the algorithm sorts them out by what is in the photos.  So, dogs become grouped with other dogs, cars with cars, and faces with faces.  How do they do that?

Here’s an example: how did Google’s A.I. recognize a cat from YouTube videos, without anyone telling it what a cat looks like?  Well, they intentionally fed it thousands of videos with cats after programming it to recognize patterns in videos (pixels, for example), to statistically ‘understand’ which patterns are similar across all of those videos.  Analyzing pixels and then highlighting the most probabilistic features it found (let’s say all videos have two semi-ellipses – cat’s ears – and two circles – cat’s eyes), it could create a pixel-based image that looks to us like a cat, placing those ellipses and circles in relation to each other based on how it detected them in the videos it analyzed.  In the same way, if fed a lot of male porn, that software might draw a picture of a penis.  That may be harder to accomplish, however, as the software may become confused between the shape of an arm vs. a penis :).  So it may draw something that looks like either a small and weird arm, or an overly exaggerated long penis with tentacles (fingers) :).  Software does not discriminate between the two.  It just identifies shapes and draws sketches based on how it is programmed and what data it is fed.

The better such pattern recognition software becomes, the more accurately it will be able to ‘recognize’, tag, and sort more and more objects.  But it also depends on the characteristics of what you feed it, as a cat pattern is more easily distinguished in video materials than a penis attached to a body.

Watch this 2015 video presentation if you want to learn more about the pattern recognition process.

Recently, some news titles sensationalized the concept of “What computers dream of”, showing these images:

Interestingly, it is the same kind of software that drew a cat.

This time, the software was ‘forced’ to recognize patterns (like buildings, faces, etc.) within ‘random’ photos and, once that was done, it was programmed to modify each photo with the details that it found (to emphasize the image with those details every time it finds them).  This causes the software to overemphasize these features with each edit, resulting in a ‘weird’ photo.  A.I. “dreams” only if we can also say that cats meditate when they lie down.

But this type of feedback loop built into the software, based on complex rules and statistics, is super powerful, as it can be thrown at anything when provided with proper data and programming for each task.  The examples I’ve shown here are oversimplifications of the actual systems behind these A.I.’s, as they have a ton of rules, but the basics are still the same.


Future Implementations

This complex software, which many refer to as “A.I.”, will make a huge difference, as it will be able to deal with numerous dynamic scenarios, even if only for specific tasks.

One example of how these systems will significantly impact our world is weather prediction.  To predict the weather, people mainly use statistical software based on past data: what were the chances of rain for a given temperature, humidity, etc., and then apply that toward future predictions.  So, as emerging patterns are compared to similar patterns in the past, they are able to ‘predict’ what will happen.  That works rather well, and you can predict the weather several days in advance.  With the new kind of statistical and programming software (A.I.) that we’ve been discussing, you can do a kind of ‘reverse engineering’, putting all past data into the software so it can run multiple simulations (like they did for the flatworm).  New patterns will emerge that will make weather predictions much more reliable, perhaps for weeks in advance rather than days.  This works because we are unable to recognize nearly as many varied patterns in weather as a computer simulation can, and those may be crucial for future predictions.  The difference with this new kind of software is that it constantly feeds on huge amounts of new data, continually recognizing new patterns by performing multiple simulations on it.  Once you get the idea behind this new approach, you will be mind-blown by the many important applications it can be used for.

Imagine understanding cancer, global warming, resource management, all kinds of diseases, virus proliferation, important patterns in DNA code, and a vast number of other applications!

Building a Robo-Chicken

Let’s use all that we’ve learned so far to build a robo-chicken because, why not….  🙂

Imagine that we want to build a robot chicken and apply an A.I. to it to allow it to cross the road, so we can finally answer that “Why did the chicken cross the road?” riddle :).

We first ‘teach’ the chicken how to walk using the ‘machine learning’ method, allowing it to adopt the best leg movements that ‘propel’ it forward and keep it level (straight).  After many tests, we have a robo-chicken that can walk forward.  We’ll then add more complexity to the movement, on the same principle of ‘machine learning’, to allow the chicken to change direction.  Next, we put a camera on the robo-chicken’s head and program the software to associate certain shapes of cars with stopping the chicken from moving, but only when these shapes are of a certain speed and size, so that the chicken can cross the road if the cars are far away (small shapes) and moving slowly.  Now the chicken is ready to try crossing the road, as it will wait for the proper (programmed) time: cars far away, meaning no ‘big’ shapes in sight.
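As a sketch (all thresholds invented), the chicken’s crossing rule might boil down to something like this:

```python
# Sketch of the crossing rule described above (thresholds are made up):
# cross only when every car-like shape seen by the camera is small (far
# away) and slow.

def safe_to_cross(shapes):
    """shapes: list of (apparent_size, apparent_speed) pairs from the camera."""
    MAX_SIZE = 0.05    # fraction of the camera frame a shape may occupy
    MAX_SPEED = 2.0    # how fast the shape grows/moves between frames
    return all(size < MAX_SIZE and speed < MAX_SPEED
               for size, speed in shapes)

print(safe_to_cross([(0.01, 0.5)]))   # True  -- one distant, slow car
print(safe_to_cross([(0.30, 5.0)]))   # False -- a big, fast shape: wait
```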

We can add a plethora of additional rules, such as taking into account the width of the road, or calculating what speed it should move at in order to cross more safely.  We might even add software and a microphone so it will ‘listen’ for particular sounds (like a honk) to re-evaluate its moves (supposing it made a ‘wrong’ move and someone is about to run over the chicken).  Adding many relevant rules will increase the complexity of this chicken’s ‘behavior’, perhaps bringing it close to that of a normal one.  But this robo-chicken will not head for a gun shop to buy a pistol and hijack a car like a frustrated human might do, unless we program it to do so.  In other words, this chicken must be programmed accordingly for any new tasks beyond crossing a road.

But we’re not done yet.  If we do not program or test it thoroughly enough, our chicken could cause an accident if, say, it crosses the street in front of an oncoming bicyclist.  Neglecting to put that data in (what a bicyclist might look like) may allow our chicken to cause havoc if it crosses streets without paying attention to cyclists (or other potential obstacles, such as people).  We could even fail to understand that when it rains, the chicken’s leg movements change, so it may not properly calculate how fast it should move to successfully cross the road, increasing its likelihood of getting ‘killed’.  You see, making a robo-chicken that can cross the road is quite difficult, and many tests and adjustments must be conducted before allowing the robo-chicken to run autonomously on the streets.  Imagine, then, how many tests were required to allow for autonomous self-driving cars.  They are all required in order to deal with unpredictability: holes in the software (bugs), unknown and new situations that such systems will encounter, and so on.

Can the robo-chicken really become like a real chicken, or perhaps even human-like in its ‘intelligence’, if enough complexity is designed into it?

Human-Like

So, are A.I.’s actually dreaming, painting, writing songs, communicating, or whatever?  Well, yes and no.  No, because the association with ‘dreaming’ is mainly about humans in a sleep state and, even if the result is similar (a song, a sentence, etc.), the ways that humans and software arrive at those outcomes are not at all similar.  So the ‘yes’ can only make some sort of sense when you look at the results, but not at how they were arrived at.

The way that they ‘reinforce’ the statistical software that many refer to as A.I. may seem similar to how a human learns, but there is hardly any similarity at all.  Humans have far more input mechanisms than any machine with any software.  Humans do not adopt what is good for them or what proves to work best and then incorporate that new knowledge like a simple line of code in a database.  Humans may learn how to ski better when it happens to be their birthday, but may grow bored of skiing on a Sunday.  Humans can eat ice cream and something about the taste can cause them to ‘connect some dots’ and come up with new ideas seemingly unrelated to ice cream.  Humans are constantly bombarded with multiple stimuli (ideas, smells, sounds, memories, etc.) and they unconsciously take bits of that in, with the brain then creating massively complex associations that constantly weaken or strengthen its neural connections.

Even if you were to combine all the world’s complex statistical algorithms into one, so that it recognizes objects, language and sounds, identifies emotions, communicates, and so much more, all of those ‘abilities’ are still limited by the data you feed into it and the rules that you include to manage them.  Even if the rules and data are super complex, such that we can talk to this robot and feel like we’re talking to a human, and the robot has attitudes, tells jokes and laughs at yours (maybe only the ‘good’ ones), then for this robot to become ‘more like a human’, it must be constantly exposed to an environment from which it can pick up continuous data in the form of new ‘experiences’.  Even with all that, I doubt you could simulate a human, who has so many inputs and feelings (multiple chemical discharges), fuzzy memories and moods, who ‘falls in love’, and so on.  A robot can make a ‘sad’ face, but is IT truly sad the way a human understands that emotion?

The ‘marriage’ of A.I. with human appearance is due to the interface that is often built around these tools.  They give IT a human voice, along with human-like responses, and most of us tend to be easily fooled into believing that these tools are like humans.  Google Now or Siri may say things like “Hey Jack, how was the meeting?  What did Emma think about your designs?” because, as we explained earlier, this type of software statistically ‘understands’ how to match words in a way that is comprehensible to us humans.  IT does not actually ‘wonder’ like a human does, nor is IT ‘curious’ about what happened at your meeting like a human might be.  IT doesn’t ‘care’ about any of that, as IT only looked over your emails and came up with a good match of words for the interaction with you, and this match of words merely appeared in the form of a ‘question’ to you.  If a robot were programmed to simulate a sneeze and ‘leak’ water from tiny holes in its metal body, would we say that the robot has the flu?  Of course not.  It only simulates some outward flu-like symptoms.  In very much the same way, a robot cannot be ‘sad’.  IT can only simulate some outward symptoms of being sad, like facial expressions.  When humans experience ‘sad’, they also experience many thoughts and other feelings surrounding those thoughts.  They may have trouble focusing, become angry and curse, or may even vomit from the inner turmoil they are experiencing.

Perhaps at some point, humans will be able to make a machine that has as many inputs as humans, and that can learn alongside humans similar to how a child does (here is one recent path towards an artificial neural system).  But even then, the strong suspicion is that it will still be wildly different from a human, because humans differ from each other based on their “total environment”.  It’s the environment that makes a human.  So this imaginary future complex statistical algorithm could only reflect some characteristics of some humans under certain circumstances and periods of time.  Of course, all of that is pure imagination, as today’s software is still very far away from anything like that, and it may never reach that point, as the future that our technology is directed towards seems to be on a completely different path: not simulating humans, but developing something much better that humans can use.

No one has even created “artificial stupidity” that resembles a human, let alone artificial intelligence.  What humans have created so far is much more accurately described as complex pieces of software and sensors that, based on statistics and predetermined mathematical rules, can arrive at very ‘educated’ guesses in a very short amount of time.

Another news headline I saw recently: “Google’s artificial-intelligence bot says the purpose of living is ‘to live forever’”.  But of course, the software that Google is using can only output some matching words based on statistics and smart algorithms, and only in response to the collection of words that is fed into it as questions.

Human: What is the purpose of life?

Machine: To serve the greater good.

I would say that such responses sound more like artificial stupidity, and I would recommend to Google that they feed their algorithm with our article on Purpose and Evolution :), or some articles on the evolution of words and general semantics.  Perhaps after feeding it relevant data and relevant algorithms, their statistical software would reply “I do not think the question makes sense.”

The Turing Test is supposedly the ultimate test for ‘intelligence’ (or at least it was until recently).  Here’s how it works: imagine you are communicating via a text chat with ‘someone’, not knowing whether it is a human or a machine.  Based only on the conversation, would you be able to tell which one you are communicating with?  Well, it probably depends on whether you watch too many movies, or whether you read science, or listen to music.  Basically, it depends on who you are, what questions you choose to ask, and what you deduce from the whole conversation.

There may be people who will ask more relevant and complex questions and will figure out rather quickly that they are talking to a machine instead of a human being, and perhaps some that would ask relatively simple questions and be easily fooled by a much simpler A.I. system.

What if you ask “What is 3454*4546?”  Would it look like a software program if it answers quickly?  What if it’s a human being with a calculator, or one messing with you by giving you a ‘random’ result that you can’t quickly check?  Or perhaps a rare human who can multiply very quickly?  Or a program that does not have multiplication built into it and is unable to answer?  Being able to ‘talk’ with software the same way that you talk to your friends does not mean the software is ‘intelligent’, nor does it mean that you are :), or me, or anyone else.  It only means that both parties agree, to a certain degree, on a set of rules that they use to exchange information.  It’s rather hard to tell how well that process works because, for example, this article is being explained by a human (me), but is being ‘understood’ by a wide variety of people, each in their own unique way, and perhaps not at all by some.  Maybe some know far more than me about programming and, as a consequence, understand what is being explained in this article better than I do, while others may know very little about technology and simply cannot comprehend most of the points I am trying to make.

The Turing Test can only test some software for certain ‘abilities’, e.g. making statistical sense of some human language rules.  If a software passes the Turing Test, that does not mean that it can drive a car, wonder about exoplanets, get angry, care about your meeting, and so on.

Next time you ‘talk’ with Google Now, Siri, Cortana or, in the future, with IBM Watson, ask them: “Then why how, is it you that can be around?” to see if any of them think of you as nuts, or treat it as a riddle, or are not in the ‘mood’ to answer that question.  You could probably simulate all of those reactions in A.I. software, but they would only be simulations that can only go so far.  A human has ‘moods’, and responds or reacts to questions and statements based on their environment (internal as well as external), culture, and whatever complexities may be associated with the situation they happen to be in at the moment (both conscious and subconscious).  In other words, a human is massively more dynamic and unknowable.  A human might slap you in the face or grow angry if s/he finds your question annoying or offensive.  Software has none of that, as it’s pre-programmed to follow a particular path, no matter how complex that path may have been made for it.

If a human being learns how to play a game, that experience will help him/her in playing other games, as well as influence his/her overall behavior.  Software, if it learns a game, has no idea how to play another one.  You would need to reprogram the software before it could learn a different game, even if the software is based on similar ‘machine learning’ concepts.

A human’s many input mechanisms are to A.I. what random.org is to a simple random generator website/app.  Humans are so complex in the ways that they pick up information from the environment that we may call their associations ‘random’.  For example, if you happen to drink orange juice while learning maths, the taste of orange juice may influence you to learn better for the rest of your life.  That’s how ‘sensitive’ humans are to these inputs.  They are like random.org’s atmospheric noise in the way they ‘ingest’ information from the environment, and in how that information is then processed.

But what is the point of ‘simulating’ a human or, more to the point, a human brain?  How can you refer to a simulated brain as being human-like when it would be fully devoid of all other inputs – feelings as chemicals, the sense of balance, low or high energy levels of the brain induced by the state of the rest of the body, etc.?

Airplanes do not have wings like birds do, trains and cars do not have legs, and the fastest boats make use of propellers instead of fins.  There are many, many cases in which it makes little sense to try to mimic nature, because you can invent much more efficient systems to manage the specific goals you wish to accomplish.  Today’s A.I.’s are not being designed to simulate the human brain.  As a result, they are becoming increasingly extraordinary at performing complex tasks that no human brain could do, becoming an extension of us humans.

Can these systems become dangerous?

Is it possible for the varied samples of A.I. that we have presented so far to become unpredictable to the point that they become dangerous?

When we tried to drive to that lighthouse, Google did not try to kill us by mapping out a very dangerous road for us to traverse.  It was more like the robo-chicken or an operating system: the situations you may encounter on today’s roads are so vast and varied, and there are so many roads, that Google’s system cannot predict them all.  In our case, that road may have been closed or otherwise re-classified after Google ‘indexed’ it, and Google’s data was never updated.  That is a ‘hole’ in the software, just like the holes in your Operating System that cause errors.  Such systems cannot be ‘perfect’, a word that makes little sense here, since such systems are measured in percentages of accuracy.  For instance, Google’s self-driving car may only be 97% safe, but that qualifies as a great achievement, considering how complex it is to make such an autonomous vehicle.  Some people may find themselves lost after following Google Maps recommendations, or perhaps even involved in accidents, but for the vast majority, there are no issues with using the system.

Such systems are so complex that there will always be some degree of unpredictability.  This is why these systems need to be constantly updated to become better, and that is also the key to making them ever more predictable.  It’s the Linux approach, but on steroids, and becoming increasingly automated all the time.

Some scientists dealing with such A.I. (complex statistical machines and algorithms) admit that they do not fully understand how the software arrived at a decision or ‘learned’ a new skill (playing a game, writing a sentence, etc.).  This is directly related to all that we have explained so far about ‘random’: from initially basic rules, these algorithms grow into very complex sets of outcomes that become very difficult, if not impossible, to reverse engineer.  Nevertheless, even when you find no patterns in how such a system arrives at a result, you can still understand how it works by discovering ‘biases’ and patterns in what it outputs (its behavior).

The key factor behind understanding why such systems can be considered ‘safe’, even when not fully predictable, is the following: these systems are in a closed loop!  The self-driving car A.I. ‘knows’ how to operate its car and avoid obstacles, but IT has no idea how to play a video game or answer questions.  If you put that amazing software that ‘learned’ how to play a game into a car, it will not be able to start it, let alone attempt to drive it.  Even if you want to use the software that determined how a flatworm regenerates to understand how other animals regenerate, you will have zero success.  You would first have to add new data to that software, plus new methods of testing regenerative processes that are relevant to the new ‘creature’ you want it to analyze.  This is why Hollywood depictions of A.I. are so far off and primitive, as they imagine A.I. behaving like a human, rather than what it really is.  “Powerful Statistics” sounds rather ‘lame’, doesn’t it?  But that’s much, much closer to reality than calling it “Artificial Intelligence”.

There are army drones that are programmed to kill humans within particular areas (members of other tribes), but you will never hear of a US army drone turning back to attack US troops because it felt it was unfair to kill those people from Iraq (or wherever).  Such systems are far from being like humans.  They can manage complex and powerful associations when they are programmed correctly and vast amounts of relevant data are fed into them, but that’s all they can do.  They can arrive at new data, and then work further with those results, but only within a closed loop and based on their programmed rulesets.  Google’s self-driving car will not, no matter how many tests and simulations you run with it, come to understand human language, or turn its cameras to the sky and decide that a better use for them would be to ‘hunt’ for alien species instead of avoiding obstacles.  Those are just primitive human concepts promoted vividly by a world-wide, dumbed-down media.  Fearing that such A.I. will try to ‘take over’ is like fearing that your smartphone will mess up your messages today in order to sabotage your relationships.  It’s nothing more than a ridiculous, uneducated fear.

Even in cases where such pieces of software become too complex and hard to predict, regardless of whether they are ‘in a box’ (a closed system) or not, we can already find ways to predict systems that are based on ‘randomness’.  Let’s look at some human behavior, since what could be more unpredictable than that, right?  After all, humans try to make A.I. so complex that it performs similarly to humans (which appears to be of no practical use).  So, let’s see how we can cope with humans from the perspective of ‘unpredictability’, as humans seem to exhibit a lot of ‘random’ behaviors.  Is that true, though?

Here’s a simple game (please don’t cheat!).  Quickly choose a ‘random’ number between 1 and 10.  It should be the first number that comes into your head.  Got it?  My guess: you chose 7.

Most people choose that number, a result drawn statistically by simply asking many people to pick a ‘random’ number between 1 and 10.  I may have no idea how you arrived at your answer, yet for most of you, I still guessed it.  And if you did not choose that number, go and ask around, and you will find that most people do.

You see, even when we deal with very unpredictable systems (like a human being), there are methods for predicting some of their outcomes (perhaps many), and this is enormously important.  Here’s why:

Let’s look at random.org again, since it produces sets of results whose deterministic origins are nearly impossible to trace because… well… just try to predict how thunderstorms (40 lightning discharges a second) influence the overall atmospheric noise.  See how good you are at that :).  Yet even with that amount of complexity, we can still ‘beat’ random.org at some of its games.

If you go on their website and play the coin flip game, or if you have their app on your phone and open up the same game, you will (of course) find it impossible to guess the series of coin tosses (whether they come up tails or heads).  So, what if I dare you to play a game with me using the same system: we each pick a sequence of three tosses, keep flipping until one of our sequences shows up, and whoever’s sequence appears first wins?  Guessing three coin tosses in a row seems even harder to do.  However, if you are the first to choose your ‘random’ sequence of three consecutive coin flips, I can then make my pick more ‘mathematically educated’, allowing my sequence to show up first more often than yours.  My chances of winning this game are far superior to yours, ALL THE TIME, even when using random.org.

You can play this game to test it, and the rules of my strategy are very simple.  If the other person picks, for instance, Heads Tails Tails (HTT), all you have to do is take the first two of their choices (HT) and make them your last two, and then make your first ‘letter’ the opposite of the second one in the other person’s guess (T becomes H in our case).  So against HTT, you end up choosing HHT.

Examples:

If you choose TTT – I choose HTT

If you choose THH – I choose TTH

If you choose HHH – I choose THH

and so on.
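If you’d rather verify the claim than take my word for it, here is a small Python sketch (hypothetical code written just for this article) that simulates the game many times and tallies how often the ‘educated’ pick wins:

```python
import random

def counter_pick(their_seq):
    # Take the opponent's first two choices as your last two, and lead
    # with the opposite of their second choice: 'HTT' -> 'H' + 'HT' = 'HHT'.
    lead = 'T' if their_seq[1] == 'H' else 'H'
    return lead + their_seq[:2]

def play(seq_a, seq_b):
    # Flip a fair coin until one of the two sequences appears; return it.
    flips = ''
    while True:
        flips += random.choice('HT')
        if flips.endswith(seq_a):
            return seq_a
        if flips.endswith(seq_b):
            return seq_b

def win_rate(their_seq, trials=20000):
    mine = counter_pick(their_seq)
    wins = sum(play(their_seq, mine) == mine for _ in range(trials))
    return wins / trials

for their_seq in ('TTT', 'THH', 'HHH', 'HTT'):
    print(f"You pick {their_seq}, I pick {counter_pick(their_seq)}: "
          f"I win ~{win_rate(their_seq):.0%} of the time")
```

If the simulation matches the theory, the ‘educated’ side should win somewhere between roughly two thirds and seven eighths of the games, depending on the opponent’s pick, and never less than half.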

To better understand this approach, see this video explanation of the game – https://www.youtube.com/watch?v=IMsa-qBlPIE

You can also read about it on Wikipedia, where it is known as Penney’s game.

You will be surprised how often you will win using this method.

Remember, random.org is quite ‘random’, meaning quite unpredictable in how it arrives at its outcomes.  Even so, we can make sense of the results in certain circumstances, discovering patterns and predicting future results.

So, let’s get back to you.  How did I predict your number (if I did)?  If you use random.org to generate a ‘random’ number between 1 and 10, it won’t statistically be 7 any more often than any of the other possibilities.  But for humans, it is.  There are polls where thousands of people from around the world played this game and, overwhelmingly, they chose 7.  In attempting to explain why, people have come up with many different guesses: because there are 7 dwarfs in Snow White, because of the James Bond character’s “007” handle, because there are 7 days in a week, and so on.  Maybe you chose it because I mentioned earlier in the article that 7 is an important number when rolling two dice.  No one truly understands why most people pick that number, but it seems to be purely cultural, even if we cannot trace the exact process by which the decision to pick it is made.

There are people in the world who, because they were never exposed to the idea of numbers, do not understand them.  In this documentary, someone asks a man from a tribe to say how many children he has, and the man replies with “Jimmy, John, Clara, Max” (not the actual names), because he does not have the concept of ‘four’.  The most he can do is draw four lines in the sand to show how many children he has, but he cannot understand why you and I would call them ‘four lines’.  People are definitely not born with the ability to recognize numbers; they are taught what these symbols are by their culture.  It is more likely that people choose ‘7’ because we live in very similar environments in many respects: the days of the week are basically the same in all ‘modern’ tribes, maths is universal, and the same stories and movies are well-known to most of us.

Human behavior is similar to the ‘random’ numbers generated by random.org, in that you can’t guess all that well how a person will behave or what numbers will be generated.  But the more humans you study, just as with the more numbers you collect, the easier it becomes to spot patterns (biases) and predict outcomes, rendering their behavior (like those ‘random’ numbers) non-random.  Both human behavior and certain number strings only seem ‘random’ until you are able to see their patterns (biases).
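Here is a small, contrived Python sketch of that idea: a ‘human-like’ picker with a built-in (entirely invented) pull towards 7.  With ten samples the bias is easy to miss; with a hundred thousand, it is impossible to miss:

```python
import random
from collections import Counter

def human_pick():
    # Contrived model of a human picking 'randomly' between 1 and 10,
    # with an invented extra pull towards 7 (about 1 in 3 overall).
    return 7 if random.random() < 0.25 else random.randint(1, 10)

small_sample = Counter(human_pick() for _ in range(10))
large_sample = Counter(human_pick() for _ in range(100000))

print(small_sample.most_common(3))  # the bias is often invisible here
print(large_sample.most_common(3))  # 7 clearly dominates (~32%)
```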

If you take five people from five different tribes and tell them “Do something random for five seconds”, it is very likely that they will do very different things: facial expressions, dance moves, sounds…  Given that experience, you would be inclined to say that people do indeed do ‘random’ things.  But take 10 million people and you will definitely find numerous patterns that you can track back to their individual tribal cultures (perhaps similar facial expressions, dance moves or sounds).

The thing is, when rolling two or more dice just once, your chances of predicting the outcome are no better (and possibly worse) than with a single die (source).  It’s only when you roll two or more dice many times that you will discover the pattern favoring certain numbers, as we explained (source).  The same goes for human behavior: the more sampling and testing you use to reveal patterns, the more predictability you gain.
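A quick Python sketch makes the point: a single roll of two dice tells you almost nothing, but tally a hundred thousand rolls and the bias towards 7 is unmistakable.

```python
import random
from collections import Counter

N = 100000
totals = Counter(random.randint(1, 6) + random.randint(1, 6)
                 for _ in range(N))

for total in range(2, 13):
    bar = '#' * round(100 * totals[total] / N)
    print(f"{total:2d} {bar}")  # 7 gets the longest bar (~17%)
```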

Why do you think most people want to get married, but only to one other human being if they are not of the Islamic ‘faith’?  Why do you think heterosexual men are attracted to certain women?  Or that they like women at all?  Why do you think women tend to have long hair?  All of those ‘patterns’ serve as examples (proofs) of how humans are anything but ‘random’.  They are ‘biases’ that you can find when you look at many humans (lots of data), and these ‘biases’ make such systems predictable (remember?).

We can predict to certain degrees how humans will behave under many circumstances.  For example, if someone goes running down a street naked, we can predict what facial expressions the witnessing men or women are likely to make, and how offended they may (or may not) become.  And all of that is based on observing how multiple humans behave across various cultures to discover the ‘biases’ within their culture.  If they are raised in the US, Spain or Romania, then most will feel offended by the sight of a naked human in a public space.  Assuming the ‘streaker’ is male, they may try to beat him up in Romania, while in the US, they may arrest him instead.  All of that is predictable to a reasonable degree.  Of course it’s true that ‘some’ observers may react in a more unpredictable way, just as every roll of two dice won’t produce a 6, 7 or 8 result, but the statistical majority will behave according to our ‘predictions’.

If we move our ‘naked man’ from one of the ‘modern’ tribes to a tribe that is used to nudity (perhaps because everyone is nude there), then we can also predict how his witnesses will react.  As you can imagine, they will not pay much attention to the naked guy.  However, they will if the guy’s skin color is significantly different from what they are used to seeing.

What I am trying to express here is that the notion of “free will” is completely bogus.  If people were to behave more ‘randomly’ than not, we would be unable to communicate with each other, or have any kind of society for that matter, as all people would be doing very different things.

The interesting thing with humans is that you can even predict their ‘individual’ behavior.  You don’t need that many samples, but larger samples are always useful to help increase the accuracy of your ‘predictions’.

We can even use reinforcers to manipulate human behavior; the same kinds of reinforcers that are behind the idea of ‘machine learning’.  But we will go into greater detail on that in a separate article.

People are essentially machines, even in the way they think.  Human behavior functions deterministically within groups, but is more like atmospheric noise when sampled individually.

The key to greater predictability in both A.I. systems and human behavior is more sampling, testing and simulating.  And even if you are unable to fully understand what exactly allowed an A.I. to come up with a new cancer treatment, or a human to kill another human, you can still predict and manipulate such complex systems, be it by trying to reverse engineer them (like decoding an encrypted message), or by increased sampling and testing (analyzing more humans to learn the patterns in their violent behavior – it may turn out to be related to a lack of money, stress, too little love and care from parents, and/or any number of other environmental factors throughout their lives).

Think again about fire.  For thousands of years, people had to ‘tame’ it without any real understanding of how it works or how best to manage its dangers.  With today’s newer technologies and sophisticated software, we are able to deeply understand and ‘tame’ such dangers, and perhaps predict even the most unpredictable pieces of software.  For example, when IBM Watson comes up with sophisticated suggestions for medical treatments, it records every move/decision/association it makes, so that you can track all of the processes it went through to arrive at those decisions.

Worrying about A.I. is like worrying about Genetically Modified Organisms (GMOs).  You have to specifically point out just what it is that you’re worried about, as genetically modified organisms come in many forms and from many techniques, and A.I. likewise comes in many different flavors and systems.  Both A.I. and GMOs may be dangerous when unpredictable, which is why you have to make sure that the world you live in is not putting up roadblocks to testing, experimentation and safety implementations, as it does today due to its controlling profit motive.

So, the next time you hear something about “Artificial Intelligence“, replace those two words with the phrase “Powerful Statistics” and see how ‘sensational’ those news articles end up sounding.

There are already numerous A.I. systems that influence your life, perhaps even endangering it.  For example, Google’s ‘search’ A.I. may be set up to recommend whatever is currently popular each time you perform a search or use a service like Google Now.  Where society is dumbed down, the results it provides transform its ‘customers’ (you and me) into yet more dumbed-down creatures, and the loop continues, feeding even more dumbed-down content into the system, supported by the dumbed-down people.  This may seem like no real concern or danger to you or other people’s lives, but if you continually feed unreal news to people (like the news about A.I. – ironically fed to them through another A.I.), then “Powerful Statistics” easily shows that people are much more likely to develop strong, unreal fears about adopting new systems that aim to save many lives (cancer research, nanobots in medicine, genetic engineering, stem cell research, etc.).

Your life is also in danger when such A.I.’s are misused by surveillance systems to detect, from whatever you write, watch, download or visit, whether you fit into a profile category that present-day ‘justice’ systems have deemed ‘criminal’, and arrest you for watching particular movies that are considered illegal, or accessing certain websites, or just for using some words in a certain order that have been labeled as implying some kind of ‘terrorist threat’.  Additionally, autonomous drones may kill innocent people because of how they are programmed.  And even when they more accurately kill just those they were programmed to target, such abhorrent misuse of technology only creates more enemies and more fear of such A.I.’s, as wars are never the answer to solving differences or conflicts.  But of course, all of that reflects severely corrupted aspects of the culture, not the software.

A complex decision for software to make:

Imagine that we now have a global TVP-like society in place.  How would A.I. systems function in a rare situation like the following: let’s imagine an autonomous airplane transporting 100 people over a crowded city when, unfortunately, it runs into a flock of birds, causing some of its engines to fail.  The only options available to the A.I. are to either crash-land the airplane in an empty field outside the city, or try to land at the city’s airport, with an 80% chance of not reaching the destination and crashing into the core of the city, killing an estimated 1,000 people.  How would you program the software to handle such a situation?

First of all, we must put a very serious emphasis on the following: you will always strive to avoid such situations as much as possible.  Thinking about such scenarios without taking this focus into account is irresponsibly unreal.  As a result, you will never have systems flying airplanes over crowded cities, or airplanes without backups for all important systems (landing, propulsion, etc.).  You can easily reduce fatal occurrences to such a low level that it becomes more probable for someone to be killed by a wayward 3 cm meteorite (and then you develop systems to prevent that, too).  To really understand how unreal such imaginative, fear-based scenarios are, let’s look again at Google’s self-driving car.  These cars run at the full rated speed limit only when there is no possible hazard ‘in sight’ within the radius that would allow them to stop from that speed.  So, if its radars and all other systems detect a ‘clear’ road with nothing to adversely affect tire traction (bad weather, for example), then the car can run at the full speed that the road allows; but when the car ‘sees’ (statistically) risks such as traffic crowding, rain or icy conditions, it is programmed to reduce its speed, so that even if something unexpected runs in front of it, it has plenty of time to stop.

You can watch this recent TED video explaining how Google’s self-driving car deals with such situations – https://www.youtube.com/watch?v=tiwVMrTLUWg
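In toy form, the speed rule described above might look something like the following Python sketch; the deceleration figure and the function name are assumptions made for illustration, not anything taken from Google’s actual system:

```python
def safe_speed(speed_limit_kmh, clear_radius_m, traction=1.0):
    # Pick the highest speed (up to the limit) from which the car could
    # still stop within the distance its sensors report as clear.
    # Braking distance grows with the square of speed and worsens as
    # traction drops (rain, ice); 7 m/s^2 is an assumed deceleration.
    for speed_kmh in range(speed_limit_kmh, 0, -5):
        v = speed_kmh / 3.6                        # km/h -> m/s
        braking_distance = v * v / (2 * 7.0 * traction)
        if braking_distance <= clear_radius_m:
            return speed_kmh
    return 0

print(safe_speed(100, 80))                # dry, clear road -> 100
print(safe_speed(100, 80, traction=0.5))  # rain -> a lower speed
```

The point is that the ‘choice’ is not a moral deliberation at all, just a conservative rule applied continuously.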

That being said, and assuming you understand that significant safety measures are paramount for autonomous systems, in a case where software (A.I.) has to choose between 100 and 1,000 lives, I for one see no answer, because I refuse to think that such situations would ever happen in the world that TVP proposes.  Even if one ever does happen, the human race will very quickly develop the necessary solutions to prevent such a scenario from reoccurring, a scenario that, again, is extremely unlikely if such systems are properly implemented.  This is similar to worrying that tall buildings might fall over onto people’s heads…  Well, you do everything possible at the time to make sure that never happens and, if it ever does, then you learn how to make better buildings.  There is no point in hunting down the designer of a building to accuse him, as that never addresses the issue.

Summary

Let’s try to summarize this mammoth of an article, and the rest of the entire series.

Artificial Intelligence is a severely hyped notion: complex statistical software with complex mathematics behind it, all contained within a closed loop.  The good news is that it is hyped in the dumbed-down direction, one that tries to compare it to humans and anthropomorphizes it, a lot.  Today’s A.I. software is more capable than a human brain at so many levels that it seems very counter-intuitive (backward) to try to make it work like the human brain.  But even if someone tries that, we are nowhere near achieving it, nor is there any visible path towards achieving that level of complexity or mechanical dynamics.  It also seems to me that less-educated people who think that A.I. (this software) refers to machines becoming more and more human-like completely ignore that human thinking is fully created by the environment people are exposed to throughout their lives.  There is no inbuilt, inherited ‘reason’ in humans, and the same applies to intelligence.

There was a fictional book (Douglas Adams’ The Hitchhiker’s Guide to the Galaxy) about some fictional people who built the smartest fictional Artificial Intelligence (a computer) and asked it the question: “What is the meaning of life?”.  The machine replied that it would take millions of years to fully calculate the answer.  The people waited, passing the story of this machine from generation to generation, making IT like a God from whom people awaited the ultimate answer.  And then the moment arrived: everyone gathered to hear the answer, with lots of emotion and a huge cult having developed around IT.  And IT finally said: I have the answer to your question “What is the meaning of life?”.  The answer is 42!  🙂

Perhaps after reading this article, it now makes more sense how such a fictional story is not far from the truth.  More than likely, that fictional A.I. was either fed insufficient information for what it was programmed to do; or it was asked a not-so-relevant question and the associations it made were so nonsensical that it arrived at 42 as the answer; or it made so many complex associations, because of its complex software, that its answer may be very scientifically relevant, yet too ‘random’ for people who had no idea how it arrived at that answer, or where and how to apply it.

Intentionality and Money are the issues we are right to fear when it comes to such complex machines, which is why it is essential to tackle those first.  The only way I am aware of doing that is to move towards The Venus Project’s global approach: a society where very few (if any) would be motivated to intentionally ‘animate’ these tools in a harmful direction, and in which there are no artificial barriers to developing and testing such tools.  Dealing with unpredictability seems to be much less of a concern within such a society.

I hope you understand that ‘our’ relationship with the tools we invent is very complex, and extremely dependent on the culture, as these tools can be used to cure diseases, create abundance and positively affect our lives overall, or they can be ‘used’ to make our lives miserable and keep them in perpetual danger.  I also hope you now better understand the blurry line between humans and machines, yet still see the differences where they are obvious.  But the most important takeaway from this special TVPM issue is the huge discrepancy between what most media outlets present about technology and ourselves, and what the reality actually is.  Nanobots, ‘cyborgs’, A.I.’s, 3-D printed organs, and so on, have a much deeper and more important impact for everyone when they are properly understood in their scientific light, rather than being twisted, exaggerated and bleached for ‘entertaining’ presentations, or turned into sensationalistic items intended to increase a news channel’s viewer ratings.

All of the technologies presented in this special issue represent what they are capable of ‘as of this writing’, which means that a month from now, a year, or 100 years, they will have been refined that much further.  Technology continually moves forward at an exponential rate and, if it’s managed by a saner society, the prospect of these technologies becoming extraordinarily important to our lives is enormous.  This is why The Venus Project is striving to change the structure of society: to allow these technologies (and much more) to develop quicker, safer, and more relevantly to the needs of all life.

Additional resources:

  1. Courses on Artificial Intelligence: Intro to Machine Learning, Intro to Artificial Intelligence, Knowledge-Based AI: Cognitive Systems
  2. Special TVP Magazine on how to automate the entire world: AA WORLD
  3. All about the technology developed by The Venus Project