01. Human vs Machine

Author: Tio

Human beings are extraordinary creatures. Just think of the machines they have built, the discoveries they have made, and the continual, steady progress of this thing they call ‘science’. They can look back billions of years into the abyss of the universe through telescopes and mathematical formulas, manipulate atoms and even enhance their own biology. However, the human being, the individual, is extremely obsolete without the tools he invented. And when I say obsolete, I mean in terms of the kinds of jobs that are required in today’s monetary system. From their arms and legs to their brains and varied skills, it seems obvious that humans have been surpassed by machines that can do far better jobs, often without any human control or involvement.

So, what if we take all of the top tools humans have invented and compare them to the bare-naked human creature? From vision to dexterity, from memory to creativity, would humans stand any chance against their machines?

Hearing and Sniffing

If you currently rely on humans, with their little ears and tiny noses, to detect any sort of sounds or odors, then you would be better off hiring a cow, as it hears and detects odors better than any human can. In fact, this is the same reason why dogs, and not humans, are often used to detect odors (dangerous chemicals, drugs, gunpowder, etc.). But even well-trained dogs are being systematically replaced with robots that are continually getting better at ‘sniffing’ a variety of ‘smells’.

Gasbot is one such robot, used for detecting and mapping bio-gas emissions at landfill sites.

It can:

  • Localize itself and navigate in semi-structured environments, both indoor and outdoor
  • Produce models of the gas distribution
  • Detect and localize gas sources

When it comes to hearing, check out this auditory illusion to see how easily humans are tricked by what they hear, depending on what they watch while they hear it.

Today, a plethora of devices exist that can detect even the slightest sounds, or remain unharmed by the loudest of them. The human ear can be easily damaged by loud noise, and is completely deaf to most of the sound frequencies that human-made devices can detect. Even when compared to other animals, humans are quite deaf.

Thus, relying on humans’ hearing and ‘sniffing’ abilities is either antiquated, or was never really relied upon in the first place.

Arms and Touch

Human arms are fantastic tools. Because of them, we have mice and keyboards, space shuttles and supermarkets, clothes and written language. However, over the past 50 years, since the development of modern-day technologies, human arms have been systematically replaced by a variety of mechanized arms: from construction to writing, from the production of any sort of product to machinery control.

We already have robots that can manufacture pretty much anything, from the microscopic to the macroscopic. Looking at the huge variety of robot arms that currently exist, exhibiting so many sophisticated movements and such fine control, human hands already look like ‘old’ tools. We have robot hands with 360-degree joint rotation and any number of fingers with fine sensitivity to pressure and temperature, simulating our sense of touch. They are extremely robust, and come in many shapes, forms and materials. You can read our special TVPM edition on automation to see many examples that currently exist, so we won’t go through all of those examples again in this article.

When it comes to relying on human hands to handle complex tasks, you can easily replace them with mechanical arms and tools. Bare human hands cannot drive a screw, yet an automated screwdriver can do it without any human hands at all. In today’s world, human arms are almost useless without tools, and many of those tools can be controlled automatically by various systems or robot arms.

But we can also write with our mouths or control devices with our brains. You don’t need ‘a human hand’ these days to create something.

Stephen Hawking, a very influential scientist who has a rare form of ALS that leaves him unable to move, manages to write books and scientific papers, develop new formulas, and ‘talk’, using only the movements of his cheek and very slight movement of one of his hands.

Voice

Speaking of voice :), text-to-speech software has gradually been acquiring a more and more natural voice over time. Sometimes it is hard to tell the difference between a synthesized voice and a human one. One example is the IVONA voices collection. Listen to this short demo to hear for yourself. You can also go to ivona.com to listen to demos in more languages.

Imagine such software reading a story to your children, narrating documentaries in any language, or providing a voice for a character in an animated movie or game, all of that available in both male and female voices, in multiple languages and accents.

Mobility and Reaction

Humans generally have no problem standing up. They can climb stairs, run, climb trees and react extremely quickly. Imagining a robot that can do all of that is a bit difficult, since even the best robots at performing such tasks, tasks that are small and easy for a human, are extremely slow and inflexible by comparison. However, robots are continually improving, as this series of DARPA robots attests while showing great mobility in many different circumstances: https://www.youtube.com/watch?v=9qCbCpMYAe4

Robots can now walk, run, climb stairs, maintain their equilibrium in tough situations, and more. Do not forget, though, that when we think of robots as clumsy, it’s because we so often test them in our human-centric world, a world full of chairs and stairs, doors and floors, and lots of walls. The mobility of a robot can thus be made substantially better, considering that a robot can be given various types of propulsion, such as wheels, legs, wings, the ability to hover in the air, and more.

Try to swim faster than, or otherwise out-perform, a robot designed to move through water. Or try to outrun a robot with wheels. There is even a robot with ‘legs’ that can outrun the fastest man on Earth.

Human reaction time may seem very quick, but just take a look at this experiment to see what our reactions look like in slow motion. Then watch this one, featuring a robotic hand that is far superior in reaction time and dexterity to any human hand.

EPFL recently developed a robot hand that is 3-6 times faster than the average human eye-hand reaction. The robot uses a high-speed camera to detect objects and is programmed simply by manually guiding the hand toward an object. The robot then recognizes the movement and adapts in order to catch objects tossed at it. Watch a demo video.

Strength and Durability

The strongest man on Earth can lift around 3 times his own weight. A dung beetle can lift a thousand times its own weight. A machine we know how to build can lift… well, perhaps an unlimited amount of weight. The days when humanity had to rely on human muscle power are long gone. Humans are also prone to disease, and need breaks and food. A machine can work non-stop, without breaks, and is far more durable than any human.

On land, NASA’s crawler-transporter can carry loads of over 9,000 tons, meaning it could transport the entire Eiffel Tower. The crawler-transporter is designed to be very slow, but this truck is much faster and can transport 400 tons at once; that is, it can carry two huge blue whales at the same time.

This huge monster is almost 100 meters (328 feet) tall and 225 meters (738 feet) long. It is used for digging and transporting earth (materials), and it can move 4 times the volume of the largest swimming pool on Earth every day. https://www.youtube.com/watch?v=cocg1u0nwbI

The largest swimming pool in the world is so big that you can sail small boats on it.

This machine, known as a ‘mole’, can drill holes up to 19 meters in diameter through solid rock.

On water, machines can transport even bigger loads. This ship is four football fields long and can carry not just one Eiffel Tower, but 66 of them! And in the air, the largest aircraft can carry not 2 blue whales, but 3 big ones, plus 6 or so large African elephants.

Vision

Our vision is not limited to the eyes alone; it is a product of both the eyes and the brain. The same is true of our other senses, but for the sake of example, let’s keep this simple.

Have you been out today? If so, I bet you came across many people. How many of their faces do you remember? Perhaps none, because the way we see is quite poor. Our eyes can only focus on their center point, and our overall attention is very limited. Watch this video to test your selective attention.

If you stretch your arms out at 180 degrees and look straight ahead, you will probably no longer see your arms. To push the point further: if you focus on a single word in this text, you will soon notice how the words around it become more and more blurry the farther they are from the word you are fixating on, until they simply disappear from your field of view.

With all that you ‘see’ every day, only a very small spot in your field of vision is sharp, while the rest is blurry and parts of it are colorless. (source)

Even a relatively cheap camera nowadays can capture 360-degree video, with no blind spots and no loss of color. You can get a sense of this 360-degree capability by watching this short video: https://vimeo.com/91509966

How much can you zoom in on this photo with your eyes?  Can you spot the yellow kayaks?

http://www.gigapan.com/galleries/11203/gigapans/152220

Focus hard; they are in there somewhere.

There are drones that survey areas from altitudes above 5 km (around 3 miles) and, from there, can spot a pigeon flying close to the ground. They can also stream live footage to the ground, detecting and tracking all moving objects, from cars to people.

https://www.youtube.com/watch?v=QGxNyaXfJsA

The human eye also does a pretty bad job in low-light conditions. It takes a while for our eyes to adjust and, even once they do, on a very dark night we can maybe spot 2-5 thousand stars under near-perfect conditions (low light pollution, no clouds, no mountains, etc.). Think about how many stars you see when you look up, and then look at this photo taken with a relatively affordable camera. I’m sure your eyes do not come anywhere near close to seeing that many stars and that much detail.

This is what your room may look like to your eyes in low-light conditions, once they have adjusted. And this is what it looks like to a small $2,500 camera that is 8 times more sensitive than the human eye.

In fact, any night security camera is far better than the human eye in low-light environments. Not to mention that humans see/sense only a tiny fraction of the existing light spectrum, while cameras and other devices can be designed to cover a huge range of such frequencies (perhaps all of them, when combined), including infrared, which allows you to ‘see’ in complete darkness, since it ‘senses’ the heat emitted by individual elements of ‘the world’ (creatures, rocks, etc.).

Have you ever tried to catch a fly with your hand? If so, you probably know that it’s very difficult, and that’s because a fly sees in a different way than you do. A fly can see around 10 times faster than a human. When you watch a movie, you typically experience 30 photos (frames) per second, which your eyes and brain interpret as continuous movement (a movie). A fly would not enjoy such a movie, because it needs around 300 frames per second to perceive it as a movie rather than a photo slideshow.

If 300 frames per second seems like a lot, there is now a camera that captures 100 billion frames per second.  Think about that!

https://www.youtube.com/watch?v=Y_9vd4HWlVA

So, would you prefer to hire a human being for his visual abilities? Can a human still be a better security guard than modern-day technologies? Or better at observing any kind of event and spotting the relevant information in what he sees? Of course not. Human vision may have been the greatest tool on the planet 100 years ago, but with the advent of photo/video cameras and other devices that can capture different light wavelengths, at much higher resolution and speed, human vision has been completely surpassed for this kind of duty.

But still, humans are better at recognizing objects and situations, right?  Well, yes.  They are still better at differentiating between cats and mice, types of cars, maybe even faces and other such ‘objects’/shapes – or are they?

So, let’s look at the brain.

Brain and Creativity

Our brains are fantastic. No other creature has a brain that can match our capabilities. However, we are already surpassed by computers in many areas where the human brain once reigned supreme.

In school, we are told to memorize information, yet the internet ‘stores’ far more than any brain can. When was the last time you searched for something on Google? Why didn’t you search inside your brain? Because you simply don’t know most things. Let me emphasize that again: most of the information and knowledge that has been discovered through science, you and I are not at all aware of. That is simply because it is far too much information for anyone to retain and recall. Long gone are the days when an advanced human society relied on people to retain information for a particular job. Or at least those days should be long gone, as only an obsolete system would still require such skills.

How long does it take you to read an average-sized book? A couple of days, maybe? What if the book had 10 billion pages? Even if you read 1,000 pages a day (which is insane), it would take you 10 million days to finish the book. That’s around 27 thousand years of continuous reading. You would have had to start back at a time when there were few or no humans in North and South America in order to finish that book today. The IBM Watson computer can do that in 43 minutes. Not only can this computer scan 10 billion pages in 43 minutes, but it can also draw very powerful conclusions from them, helping to diagnose diseases, understanding natural language, and even coming up with unique recipes. (source)
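For readers who want to check that arithmetic, here is a minimal sketch of the calculation in Python; the page count, the 1,000-pages-a-day reading pace and Watson’s 43-minute figure are all taken from the paragraph above.

```python
# Back-of-the-envelope check of the reading-time comparison above.
pages = 10_000_000_000      # 10 billion pages
pages_per_day = 1_000       # an insanely fast human reading pace

human_days = pages / pages_per_day      # 10,000,000 days
human_years = human_days / 365          # roughly 27,397 years

watson_minutes = 43                     # figure quoted for IBM Watson
speedup = (human_days * 24 * 60) / watson_minutes

print(f"Human: ~{human_years:,.0f} years of non-stop reading")
print(f"Watson: {watson_minutes} minutes, roughly {speedup:,.0f} times faster")
```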

The trend with computers today is the big data that is gathered daily. From smart health-tracking devices to Facebook posts, YouTube videos, blogs, security cameras, and smart fridges, a huge amount of data is created every day. So huge that if you added a 100 GB hard drive to your computer, you would need 25 million more of them to store all of the data produced in a single day. (source) Imagine the entire population of Australia, every single person living there, each holding a 100 GB hard drive full of data. That is roughly how much new data is produced every day.
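To make that scale a little more tangible, here is the same estimate worked out in a few lines; the 100 GB drive size and the 25-million-drives figure come from the paragraph above, and Australia’s population (roughly 23-24 million at the time of writing) is the only outside assumption.

```python
# Rough scale of the 'data produced per day' figure above.
drive_gb = 100                    # one ordinary hard drive
drives_per_day = 25_000_000       # drives needed to hold one day of new data

daily_data_gb = drive_gb * drives_per_day             # 2,500,000,000 GB
daily_data_exabytes = daily_data_gb / 1_000_000_000   # about 2.5 exabytes

australia_population = 23_500_000   # assumption: mid-2010s estimate
drives_per_person = drives_per_day / australia_population

print(f"~{daily_data_exabytes:.1f} exabytes of new data per day")
print(f"~{drives_per_person:.1f} full 100 GB drives per Australian, i.e. about one each")
```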

That is the key to how smart computers have become: big data. The type of computing that can mine all of this data is called cognitive computing, and many consider it the start of a new era in computers. First came mechanical systems that counted things (around 1900). Those machines evolved into electro-mechanical devices over time. Around 1950 there was a major shift, when these systems gave way to programmable systems, the ones we still use now: you program these machines to do tasks (like apps on your smartphone), and they do them. However, many experts claim that in 2011 another switch happened, and we are now in its embryonic phase: an era in which computers actually learn, becoming smarter with time. The interesting thing about this new kind of computing is that it learns somewhat like a human being, through examples and repetition. The more data you feed into it and the more you allow it to learn, the ‘smarter’ it becomes. There is nothing ‘magical’ about this, since it is basically following a bunch of statistics and rules, coupled with the ability to understand natural language. These computers read, literally, billions of documents, looking for patterns to highlight.
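To give a rough feel for what ‘learning through examples and statistics’ means at its most basic, here is a deliberately tiny, hypothetical sketch: a program that counts which words appear with which label in example sentences, then guesses the label of new text from those counts. Real cognitive systems are vastly more sophisticated; this only illustrates the ‘more examples, better statistics’ idea, and all the sentences and labels below are invented.

```python
from collections import Counter, defaultdict

# Toy "learning from examples": count which words appear with which label,
# then label new text by whichever label its words have co-occurred with most.
word_label_counts = defaultdict(Counter)

def learn(sentence, label):
    for word in sentence.lower().split():
        word_label_counts[word][label] += 1

def guess(sentence):
    score = Counter()
    for word in sentence.lower().split():
        score.update(word_label_counts.get(word, {}))
    return score.most_common(1)[0][0] if score else None

# The more examples we feed it, the better its statistics become.
learn("the beach was warm and sunny", "holiday")
learn("coral reefs and scuba diving", "holiday")
learn("fever cough and sore throat", "medical")

print(guess("warm water and coral reefs"))   # -> 'holiday'
print(guess("persistent cough and fever"))   # -> 'medical'
```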

The only way to adequately explain these new computer systems is through an example: let’s say you want to book a trip to a place where the temperature is not too hot, but not too cold. You want the trip to happen in 2 months’ time. You want the hotel to have a swimming pool and sushi on the menu, and you’ll bring your wife and 2 kids with you. You also want to go scuba diving to see some coral reefs while you’re there, and the kids want to enjoy a rollercoaster ride. For the sake of a present-day example where we still use money for trade, you also have a budget in mind for your trip.

In today’s world, how would you go about finding such a location? Maybe you would start by asking the people around you, although they know very little about the world and such places, or by hunting through holiday-planner websites where you can select certain keywords and categories, but never get anywhere near as specific as what you have in mind for this trip.

Now here comes cognitive computing, with an IBM Watson-like app where all you need to do is say, in natural language, what you want from the trip, as described above. The app searches through Wikipedia, Facebook and Twitter posts, TripAdvisor and other digital sources, interprets the data in a comprehensive way, and finds the perfect location for your holiday. It’s as simple as that.
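To make the idea a bit more concrete, here is a minimal, hypothetical sketch of the kind of structured query such an app might extract from your spoken request and then match against its sources. All field names, destinations and prices below are invented purely for illustration and say nothing about how Watson actually works internally.

```python
# Hypothetical sketch: a natural-language trip request reduced to a structured
# query, then matched against a (made-up) catalogue of destinations.

trip_request = {
    "months_from_now": 2,
    "temperature_c": (18, 28),        # "not too hot, not too cold"
    "hotel_needs": {"swimming pool", "sushi"},
    "activities": {"scuba diving", "coral reefs", "rollercoaster"},
    "travellers": {"adults": 2, "children": 2},
    "max_budget_usd": 5000,           # illustrative number only
}

destinations = [   # toy data standing in for Wikipedia, TripAdvisor, etc.
    {"name": "Resort A", "temperature_c": 26,
     "hotel_features": {"swimming pool", "sushi"},
     "activities": {"scuba diving", "coral reefs", "rollercoaster"},
     "family_price_usd": 4200},
    {"name": "Resort B", "temperature_c": 35,
     "hotel_features": {"swimming pool"},
     "activities": {"scuba diving"},
     "family_price_usd": 2900},
]

def matches(req, dest):
    lo, hi = req["temperature_c"]
    return (lo <= dest["temperature_c"] <= hi
            and req["hotel_needs"] <= dest["hotel_features"]      # subset check
            and req["activities"] <= dest["activities"]
            and dest["family_price_usd"] <= req["max_budget_usd"])

print([d["name"] for d in destinations if matches(trip_request, d)])  # -> ['Resort A']
```

The hard part, of course, is the step this sketch skips entirely: turning free-form human speech into such a structured request, which is exactly where natural-language understanding comes in.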

You can apply the same approach to finding a diagnosis for your symptoms, learning about anything you want, or simply asking any kind of question and being provided with relevant advice.

These systems are already tested and functional, but not yet widely available for public use.

Understanding natural human language (how we speak) is the key to the fast development of such computers, as natural language is the main source of unstructured information. 80% of the ‘25 million 100 GB hard drives’ worth of data produced daily is in this kind of untapped, unstructured form. (source)

As the original inventor of the software behind the IBM Watson computer pointed out in this TED talk, even though the software has not changed much over the past several years, the big change has been in the data the software can tap into. The more data it is given, the more associations and connections it can make, resulting in better statistics. Computers can now understand naturally written human language, translate it from one language to another, and even recognize human speech. And while they are not perfect, the rate at which they keep improving is phenomenally quick.

At present, they are within about 1% of expert-level accuracy at recognizing objects in photos, and are over 97% accurate at recognizing human faces (better than humans).

There are computers today with millions of nodes and billions of connections, while the human brain has billions of nodes (neurons) and trillions of connections. However, based on Moore’s law (the observation that the number of transistors in a dense integrated circuit doubles approximately every two years, something we have been experiencing for decades), we could reach the human brain’s capacity of nodes and connections within just another 25 years or so. You and I, if you are not too old 🙂 and don’t get hit by a car and die, may still be alive to take advantage of this huge computational power.
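As a rough illustration of that projection, here is the doubling arithmetic spelled out. The starting figures are just the rounded orders of magnitude from the paragraph above (‘billions’ of machine connections, ‘trillions’ in the brain), so the result only needs to land in the same ballpark as the ~25-year estimate.

```python
import math

# Rounded orders of magnitude from the paragraph above.
machine_connections = 1e9    # "billions of connections" in today's systems
brain_connections = 1e13     # "trillions of connections" in the human brain

# Moore's law: one doubling roughly every two years.
doublings = math.ceil(math.log2(brain_connections / machine_connections))
years = doublings * 2

print(f"{doublings} doublings, i.e. about {years} years")   # 14 doublings, ~28 years
```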

Learn more about the Watson computer and its amazing present day capabilities in this talk.

Human vs Machine – One on One

Hands down, machines beat humans on so many levels when it comes to memory, decision making, or face recognition (and they are getting close on object recognition). They still have difficulties with translation and speech recognition, but they are literally getting better at those every single day.

Computers can also write stories and news articles (quickly and accurately), compose songs and poetry, or even paint.

Keep in mind that when a human writes, he uses his pointy ‘tentacles’ (fingers) to physically push buttons on a keyboard, or to press the point of a stick while dragging it across a piece of paper. A machine needs none of that.

IF: from vision to hearing and odor (and other kinds of) sensing; from strength and durability to speed, mobility, decision making and voice recognition/translation/replication; from memory to data mining; robots/machines/computers/software are already better than, or close to, human capabilities,

THEN: what jobs are left for humans, given that these machines can drive, act as doctors or assistants in perhaps any domain, function as managers, create unique recipes, songs or articles, build things and maintain them, and make new, important discoveries faster than all of humanity combined?

It is now easier to think of what humans are still better at handling, meaning what jobs cannot be replaced thus far, than to think of what jobs can be replaced.

There are still some domains where humans are better than robots, and these domains tend not to be ‘jobs’ in today’s world, which is a positive note. Humans seem to be very good at interacting with other humans: providing moral support, teaching, being creative and inventing new things. Even though robots are starting to become good at reading human emotions, making discoveries on their own out of big data and lab research, replacing teachers’ interaction with children, or even at the art of ‘debate’, we are far from becoming useless creatures. Technology is like a piano, and we are the ones making the music. Over the past 50 years, ‘jobs’ have become an outdated and obsolete idea, but the concept of ‘work’ is something quite different.

While the use of sophisticated computer systems will surely continue to expand, controlling complex systems like transportation or production, mining big data to arrive at better decisions, discovering new things (from medical treatments to, perhaps, important mathematical formulas), composing original work (from documentary scripts to music), and more, we humans are the ones for whom all of this is made, and we will be part of it: discovering right alongside these systems, creating and innovating, enjoying and educating. We are still the only ones who can look at all of this and inject meaning. No robot will look at the stars in awe and wonder about its place in the universe, at least not for many years to come (or maybe never). No robot will fight to create an equal society for all, or to take better care of the environment.

Computers, robots, devices and machines are tools, our tools, and we need to take advantage of their abilities without being afraid of them.

Four or five years ago, you could barely find people talking about robots replacing jobs. Today, this has become a major concern for many people around the world. From Bill Gates to Google, from Jeremy Rifkin to M.I.T. professors, Peter Diamandis, well-known YouTubers, and thousands of news headlines, the world may finally be recognizing that we, as humans, have been surpassed on so many levels by machinery that is massively more efficient and better suited for these jobs. As a result, we must think of a different way of organizing a global society that still relies completely on human labor (jobs), just so that people can ‘afford to live’ and so that the people benefiting most from the current approach can keep on living better than the rest. The only thing I am afraid of is that there seem to be no real alternatives in any of these people’s minds; they do not seem to think about the bigger picture, and so they keep trying to solve new problems with the same old, outdated tools and solutions that created the problems, perhaps eventually resulting in total chaos.

We must try our best to make The Venus Project more visible, because we are already in the midst of a massive technological change that is racing toward us very quickly, no matter what laws, beliefs, tribal separations or opinions may exist, and the world still seems largely unaware of any sustainable solution as all-encompassing as The Venus Project.

We humans are not becoming obsolete creatures.

It’s just that it’s about time that we start learning how to be fully human,

since for most of recorded human history, we have been doing repetitive machine-like tasks.
