It is quite a challenge to define either “goods” or “services”. Goods can be anything from 3D printers to furniture, gadgets, clothing and so much more, while services can be anything from medical services to entertainment and different kinds of maintenance services. Still, I will try to make sense of these concepts and show you how goods can be made in a fully automated way and how services can become completely autonomous.
GOODS – PRODUCTS
– Complexity and Mass Production
– Resources and the Zero Marginal Cost
Before you think about the notion of goods, it is a must for you to read our article on “Abundance”. It is quite short, concentrated, and of course, free.
In short, it is quite erroneous to think that the same goods will be produced and people’s wants will be the same in a TVP-like society. Also, for hugely complex projects such as the Large Hadron Collider, you may think that if you cannot automate all the processes of its construction, no one will want to get their hands dirty and help with the process. If you think like that, you are missing the “motivation” factor. Consider that if there is something that cannot be built in a fully automated fashion with today’s technology, it does not stop it from being built. People can still get involved here and there, although they will also likely be replaced by machines in the years to come, regardless of sector, thus giving them the opportunity to focus on whatever else they might like to do.
So, supposing you have read that article about Abundance, let’s start our journey with this one.
Complexity and Mass Production
Have you ever seen the Discovery Channel’s “How It’s Made” TV series? If not and you are curious about how products are made in automated factories, then you should take a look at it. I have to warn you, though – it spans 23 seasons, simply because so many products are manufactured today.
From toothpaste to umbrellas, cars, shoes, bolts, cookies, cakes, laptops, furniture, and everything else you can imagine, nearly all are created in automated factories already. To illustrate just some of the complexity of what can be done today, I’ll introduce you to one friendly robot, a car that is built in 3 to 5 days, and 3D printers that may print your next smartphone. I then promise you a wonderful video mashup of mechanical brothers and sisters creating a wide variety of products.
The Friendly Robot called Baxter.
Why is ‘he’ different from other robots? Well, it’s because you program it simply by showing it what to do. That’s all it takes! If you want it to do something complex, you just show it how, step by step: take its arm and move it where you want it, then work the gripper, and so on – much easier than teaching a kid to do something. You can take its arm, grab a bottle with it, and then put the bottle in a box. The robot memorizes all of these actions and can repeat them indefinitely, or until you teach it some new tricks. The robot “gets it” and does the work. (read more about it)
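This ‘show, memorize, repeat’ workflow can be sketched in a few lines of code. Everything below – the Arm class and its methods – is a hypothetical stand-in, not Baxter’s actual API; it only illustrates the record-and-replay idea of programming by demonstration.

```python
# A minimal sketch of "programming by demonstration": record the poses
# a person guides the arm through, then replay them. The Arm class and
# its methods are invented for illustration, not Baxter's real API.

class Arm:
    """Toy arm that just remembers its last commanded pose."""
    def __init__(self):
        self.pose = (0.0, 0.0, 0.0)   # x, y, z of the gripper
        self.gripper_closed = False

    def move_to(self, pose):
        self.pose = pose

    def set_gripper(self, closed):
        self.gripper_closed = closed

def record_demonstration(guided_steps):
    """Each guided step is (pose, gripper_closed), captured while a
    person physically moves the arm. We simply store the sequence."""
    return list(guided_steps)

def replay(arm, program, times=1):
    """Repeat the memorized actions as often as asked."""
    for _ in range(times):
        for pose, closed in program:
            arm.move_to(pose)
            arm.set_gripper(closed)

# Teach once: grab a bottle at (0.2, 0.1, 0.0), release it in a box
# at (0.5, 0.4, 0.0) -- then the robot can repeat it indefinitely.
program = record_demonstration([
    ((0.2, 0.1, 0.0), True),    # close the gripper on the bottle
    ((0.5, 0.4, 0.0), False),   # open it over the box
])

arm = Arm()
replay(arm, program, times=100)
```

The point of the sketch is that ‘teaching’ reduces to recording a sequence of states, which is why no programming knowledge is needed to retask such a robot.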
You can program this robot to do pretty much anything you can imagine. I suppose the only limitations may be its hardware. But also consider that grippers are getting extremely complex, as we have shown in our previous AA World article about Construction, and with the advent and continual expansion of 3D printing, many products will be produced in a completely new way – not even requiring robots for assembly or other tasks.
Baxter’s software can be easily updated so that its speed and complex behaviour can evolve.
Imagine teaching a Baxter-like robot to prepare many kinds of foods (a robot chef), for instance. There are actually plenty of such robot-chefs, and other kinds of machinery that make the process of creating food an automated one.
Now it is time to actually meet Baxter:
Building a car in 3-5 days.
Tesla’s Model S is not only the most efficient electric car and one of the safest cars on the road today, but the way it is built is also almost fully automated. It takes only 3 to 5 days to get from raw material to a finished car. Tesla has 160 robots continuously working on almost all aspects of the car’s construction.
The same robot can put the seats in the car, change its own tools, and then put some glue around the windshield and fit the windshield onto the car. The same robot then does that for the rear glass of the vehicle – all done by one robot. Think about that! They also have robots that paint the cars, others that handle welding and yet others that actually transport the vehicles inside the factory in a completely autonomous way.
Here it is! Take a look at the factory. I bet every single human you see there helping the robots build these cars could be replaced with today’s technology.
You can also watch the Tesla Model S documentary for more details on this.
Can we print a smartphone?
3D printers are becoming a common tool among enthusiasts, schools and even manufacturers. The great thing about 3D printers is the layer-by-layer additive process. It means, for instance, that a printer can make a tool with all of its functional parts, all at once. This is amazing, since it means that you do not need multiple techniques and factories to create different parts of the same thing. Because of this process, a printer can also create extremely complex structures while using even fewer resources than traditional manufacturing.
You may have already heard of complex 3D printed “stuff” like functional prostheses, houses with electrical and plumbing systems embedded, edible food, complex and functional tools, clothes, toys and even functional organs. This cluster of products is limited only by the accuracy of the printer, its updateable software and the materials used.
Do we dare think that we could print a full-featured smartphone, for instance, with the processor, the screen, and all its parts, together? We may not be there yet, but let’s examine some current technologies that allow us to think realistically about this ambition.
When it comes to such devices, it may still be far easier, smarter, less resource consuming and more efficient overall to build their respective parts separately, and then just put them together. So maybe print the parts, then assemble them.
It is already possible to print electrical circuits. Nearly all common electronic materials, including conductor, dielectric, resistor, and semiconductor inks, can be processed and printed. We can also print conformal sensors, antennas, shielding and other active and passive components, as Aerosol Jet has proven.
This is an example of putting this idea into use – a fully functional game controller printed with one single printer (plastic plus electrical circuit all-in-one) – http://www.computerworld.com/s/article/9247934/This_3D_printer_technology_can_print_a_game_controller_electronics_and_all
A fully functional loudspeaker was also created a few months ago using only 3D printers. http://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/first-3d-printed-loudspeaker-hints-at-future-of-consumer-electronics
Interestingly, Aerosol Jet can also print on non-flat surfaces. This may mean more than you realize. You see, your home PC, smartphone, tablet, and other electronic devices have this thing called a “motherboard” or “mainboard”, which may be the biggest thing inside your device. This core component is a smart circuit board, regulating the flow of energy between nearly all of your device’s components: processor, memory, etc. The ability to print circuits on non-flat surfaces may mean that processors, memory chips, graphics cards, etc. could be connected together in devices of any shape or format. We could even print the motherboard’s functionality right on the inside case of the device, getting rid of the “motherboard” as a separate component. (source) This approach reduces the resources consumed and simplifies the method, while potentially increasing the complexity of the motherboard’s functionality.
Aerosol Jet printing on a 3D surface: https://www.youtube.com/watch?v=ioSN2jG49Gw
Although we did not show how to print a CPU or a graphics card, such examples show us that there is already progress when it comes to printing electronic devices. For instance, not long ago, the same Aerosol Jet system printed a smart-wing for a drone, with all of its electronic parts included. (http://www.forbes.com/sites/tjmccue/2012/03/23/hybrid-created-with-3d-printing-and-printed-electronics/)
Printing electronics will be a huge change in the way we view 3D printing, because electronics are far more capable, and so the variety of their uses can expand quite rapidly.
Today’s printers can use around 100 materials: from food ingredients to waxes, ceramics, plastics and even metals, and the list is expanding. Combining those materials with new, more accurate techniques means that 3D printers can become the main technology that produces the goods we need.
For a more extensive read about 3D printers, check the Wikipedia article.
The above Tesla Model S factory example and the increasing complexity of 3D printers are proof that very complex goods can already be produced in an automated and autonomous fashion, while the Baxter robot opens the window to the use of a new kind of programmable assembly robot, in a way that is far more varied, complex and easy to maintain.
Just take a look at the wonder of our mechanical brothers and sisters and what they can do today:
Resources and the Zero Marginal Cost
It is very important that all of these goods and products are produced in the most resource and energy-efficient manner. We live on a finite planet, after all…
To make the full case for that, it would take at least two ‘special’ TVP Magazine issues, perhaps going through the latest in nanotechnology.
But when it comes to resources and producing any kind of product, here’s the way I think of them:
– You make a PRODUCT out of a MATERIAL(s).
– The MATERIAL(s) is created using autonomous technologies and the least energy consumed.
– The MATERIAL is recyclable.
– The PRODUCT is recyclable
So, let’s say you make a 3D model of a house out of a type of plastic. Once you are finished with it, you should be able to easily, and in an automated fashion, recycle that house model into a different kind of object without much waste, or no waste at all, with very little energy consumption (renewable energy, perhaps). So, that house model might become a car model, a toy, a tool, or whatever.
This is what atoms do. When you die, the parts that made up “you” will not disappear. They will become part of other things: mountains, grass, stars, frisbees, chocolate, and so on. We can already manage these atoms to a certain extent and create stuff with them. We organize atoms by their properties and divide them into categories: iron, carbon, hydrogen, etc. We then take these categories of clusters of atoms, each with multiple properties, and we model them like we do with plasticine. Of course, it’s much more complicated than that, but this is the entire process of making anything. We make things out of selected materials.
The more intelligently and efficiently we learn to play with this stuff, the more things we will be able to mold them into: electronics, foods, wheels, houses, bicycles, airplanes, socks, and so on.
We have already shown that we have the methods of building all of this stuff in automated ways, meaning arranging all of these materials that we find in nature and/or isolate, but how much can we efficiently recycle them and reuse them to build other kinds of stuff?
Imagine a bunch of kids playing with Lego pieces. There is a finite number of Lego pieces, but they build many toy-things with them: buildings, cars, houses, food plates, forks, and so on. By arranging the pieces in different ways, they can create a multitude of things. The more they reuse the pieces when they want to build new things, the fewer resources (pieces) they consume. If they want to build a Lego car and no longer need the Lego house, they can disassemble the house and turn it into a car, and maybe even have some spare Lego pieces left over to be used later on to build other objects.
If we replace the children’s work with some machines that build these Lego parts, and these machines are powered by the Sun, then the cost of building new Lego things is zero (we don’t have to pay the kids to do that anymore 🙂 ). The kids would need food for the energy to build all of these toys, but the machines can use unlimited power from the Sun and then recycle (reuse) the pieces. In the end, it won’t cost any resources or energy to build a continually recyclable world of Lego-like products.
The same goes for the idea of Zero Marginal Cost, which here refers to resources rather than money. It means that it may initially cost you to build a thing, but it won’t cost you more to build replicas. For instance, if you buy a 3D printer, the cost is only for building the first one, because this 3D printer can “print” other 3D printers. Sure, you still need material to build more of them, but nothing more than that. It is essentially a self-replicator and, more and more, people are printing 3D items with materials that are easily recyclable (like Lego pieces). So it may become like a Lego game: Earth’s ‘big kids’ have a finite amount of material (say, some sort of plastic) with which they build, recycle, and rebuild all sorts of ‘toys’ with their 3D printers, without consuming more resources than we already have, simply because they reuse them all the time, very much like Lego pieces.
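The zero-marginal-cost arithmetic can be made concrete with a toy calculation. All the numbers below are invented for illustration; the point is only that with full recycling, the one-time cost of the first printer is amortized over every unit ever produced, so the average cost per unit falls toward zero.

```python
# Toy arithmetic for zero marginal cost: a one-time cost for the first
# 3D printer, after which replicas only consume material; full recycling
# plus free (solar) energy drives the per-unit cost toward zero.
# All numbers are made up for illustration.

def average_cost(units, first_printer_cost=1000.0,
                 material_per_unit=5.0, recycling_rate=1.0):
    """Average resource cost per unit produced. With full recycling
    (recycling_rate=1.0), no fresh material is ever consumed, so the
    only lasting cost is the first printer, spread over all units."""
    fresh_material = material_per_unit * (1.0 - recycling_rate) * units
    return (first_printer_cost + fresh_material) / units

print(average_cost(1))        # 1000.0 -> the first unit bears the whole cost
print(average_cost(100000))   # 0.01   -> the marginal cost approaches zero
```

Change `recycling_rate` to less than 1.0 and the average cost flattens out at the fresh-material cost instead of falling to zero, which is why recyclable materials are the key ingredient of the argument.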
Making these materials act like Lego pieces may seem difficult to grasp at first, but take a look at the Filabot, for example, which can recycle plastic that you already have in your home, including plastic models you previously printed, directly into your ‘ink’ supply. So, if you have plastic bottles at home, or old toys, or whatever is made from plastic, you can let this machine transform it into ‘ink’ for your 3D printer so that you can create other things with that plastic, exactly like the Lego pieces thought experiment. And this machine uses just a fraction of the energy that conventional recycling requires.
This way you greatly reduce plastic waste, as well as the energy consumption for recycling plastic or other materials.
This is just one example of how we can deal with resources in a Lego-like way, and not only for 3D printers. This approach may be applied to all other manufacturing processes, as well.
So, automated factories and complex robots (including 3D printer-like machinery), along with reusable materials, will reduce the resource and construction cost of new products toward a zero margin.
On the other hand, goods/products are becoming more like information technology – abundant and free. We can already see them more as “services” than products. Let me explain.
SERVICES
It’s difficult to enclose ‘services’ in a single concept for discussion, because almost anything can be viewed as a service. However, I will try to showcase some of the complex services we use (or will use in the near future), and in doing so I hope to show that their complexity is no obstacle: almost everything we call a ‘service’ can be automated.
I mentioned earlier that some goods – more and more of them – are becoming information-products. If I have a picture file on my computer, it does not cost me anything to copy and send it so that you also have it on your own computer. The “production” of a new digital photo is free. It may move a few electrons here and there, but the energy consumption is so small that it is basically a free process. That is, the information is free.
Now, think about the 3D printing that we have already talked about, and combine that with the digital world. Let’s say that the picture file I sent to you was instead a 3D project file. So imagine the scene: I send you a 3D project file which doesn’t cost us a thing, you open it with any relevant free 3D printer software out there, and then you “print” it – using recyclable plastic and a printer that was printed with another printer. Then further consider all of that being powered by renewable energy. How does that sound to you?
Such typical usage transforms the 3D printing process into an information-technology. There are already tons of websites where anyone can download 3D model files for all kinds of things: toys, tools, shoes, parts to build a new 3D printer, and more. As a sample, Thingiverse is one of the websites where you can go and download tens of thousands of 3D models for free.
On the other hand, there are two services that seem more ‘complex and needy’ that we all use, whether we want to or not: health services and food services. We all eat and we all want to be healthy.
Today, many people may prefer to ‘eat out’ at a restaurant. There’s no need to prepare the food or personally clean up afterwards. Plus, you cannot easily make at home all of these delicacies you tend to find in restaurants :).
However, before we dive into how to get to the food, let’s briefly highlight how food is made. In the “How It’s Made” series that I mentioned at the beginning of this article, you will see a plethora of automated ways to make any type of food; from cakes to animal products, from salads to fried potatoes.
One recent example of food production is the vertical farm system. Watch this video to better grasp the idea. We will then replace the ‘human workers’ they describe with the robots I will show you after you watch the video: https://www.youtube.com/watch?v=1clRcxZS52s#t=107
You can also read more about these kinds of farms on wikipedia.
There are a variety of ways to get food to people. Here are two methods.
- Vending machines.
I find vending machines very useful. They are open 24 hours a day and you simply press a button to get what you want. There shouldn’t be much need to go into details about them, since it’s already a widely used technology, but you can watch this documentary about vending machines to see how many products are delivered or even made with them. To stir your curiosity, though: they can make pizza and other hot food, as well as dispense a wide variety of ice creams, hot dogs and sandwiches.
[I will add pictures and descriptions here to be a bit more detailed – the design will make more sense]
- Automated restaurants.
There are already many restaurants that are automated, at least in the way you order your food and/or the delivery process. For example, there are robots that can cook up to 80 bowls of ramen a day, and there are restaurants where you order from a touchscreen “menu”, either inside the restaurant or from home via an app. There is little need for waiters or cooks anymore, as this entire concept has already been proven to work.
So, as a service, getting and making food is already becoming more and more automated, even for complex types of dishes.
Here’s how you can order food in a fully automated way: https://www.youtube.com/watch?v=1BBJshlo4vk
And this is how robots can be chefs: https://www.youtube.com/watch?v=zeWnPSMvJGQ
But what if you eat something and you feel sick? What do you do?
Well, it depends, but you should not ask me – you should ask your smartphone. Artificial intelligence software like Watson, developed by IBM and capable of reading millions of documents in just a few seconds, can help you with your problem. It understands human language almost perfectly, which helped it win Jeopardy!, a game of ‘words and knowledge’, against the world’s top two ‘champions’ back in 2011. Watson is currently working in medicine, recommending treatments for various symptoms. It is still in testing, but has already proven itself to be a huge step forward in both speed and accuracy.
For instance, a few months ago, Watson prescribed a treatment for a certain type of tumor that was better than any doctor had proposed. It is also being used right now to discover new treatments and cures for cancer, and the more it learns, the smarter it becomes. (source)
Such AI’s, combined with cheap, yet powerful smartphones (devices), can analyze your symptoms and arrive at a highly educated conclusion; perhaps the most educated conclusion available in the world.
Some sensors can also replace a visit to the doctor. From the smartphone camera that can track your heart rhythm and detect skin cancer, to the GPS tracking your fitness, to special, small and non-intrusive devices that analyze blood or urine samples, to other, more sophisticated sensors – they are already here. There are so many sensors and apps for them already available that I find it impossible to point to specific ones. Maybe this clip will give you an idea of how advanced they have already become:
These are not toys, however. They are already very accurate, often more so than a visit to your doctor, and are continually improving. It may not be long before they completely replace most family doctors. So then, imagine that your health is continuously monitored by such non-intrusive devices and made sense of by apps that constantly feed this data to a Watson-like AI. The AI can then recommend what kinds of foods to eat, whether and what kinds of physical exercise to do, and much more to help you achieve and retain optimum health. That sounds great, but what if you need some pills that Watson recommends?
Well, let’s print it!
Really, let’s print your medicine with your 3D printer. You don’t have to blindly believe me – watch this short three-minute TED talk https://www.youtube.com/watch?v=mAEqvn7B2Qg
You can also watch a more detailed 13-minute talk by the same person here.
Printing medicine is still just an educated idea right now, but fully automated pharmacies, where you can pick up your medicine, already exist and are in limited use.
But what if you need some kind of surgery?
The da Vinci is a robot that has already been in service for well over a decade (source). Its arms are very precise and it even enables surgeons to operate from long distances. So, if you need surgery, a surgeon from across the world can do the job. However, the significance of this robot-surgeon is actually far greater than that. What happens when it can learn from these surgeries and then operate without human help?
Some parts of surgeries, and even full surgeries that are not extremely complicated, can already be fully automated – certain types of eye procedures, for example. The field of urology has integrated robotics into many procedures, including radical cystectomies, surgical nerve grafting and pyeloplasty, and robotic surgery has almost entirely taken over radical prostatectomy. The role of surgical robotics is continuously expanding: it helps improve patient outcomes by filtering out the surgeon’s natural hand tremor, increasing range of motion, and decreasing blood loss, length of hospital stay and postoperative pain. Since urology deals with very difficult and delicate procedures, robotics offers a significant advantage through far greater accuracy, flexibility and smoothness of motion.
Integrating sensors in the human body can provide a tighter feedback loop with the robot than is possible with a human surgeon, allowing the robot greater surgical accuracy. For instance, if it were removing a tumor, the tumor could be injected with a fluorescent fluid that the robot’s cameras can identify, letting it see which cells are tumorous and remove them with much greater precision. If we manage to create a more accurate 3D map (or sensorial map – tissue texture, etc.) of the patient’s body, then perhaps a robot can do the job that a surgeon does. (source1)(source2)(source3)
But we might not need surgery in the future. Tiny nanobots might learn how to ‘fix’ us from the inside out, or keep track of our health and provide the right treatment at the right time and place, in a personalized manner that could reduce or remove the need for many of today’s surgical procedures.
The last one on my list of health services is ‘nursing’. Some people, especially old folks, need assistance when it comes to health-related issues. There are already certain kinds of robots used in hospitals to keep an eye on patients, though it may take more to fully replace the human factor. https://www.youtube.com/watch?v=dx0zxr3D_zU
Similar to surgical procedures, technologies that monitor one’s health in non-intrusive ways can reduce the need for nursing.
Of course there are many health-related issues that still require human assistance, so this is only intended to showcase how rapidly technology is advancing and how health services are becoming more and more automated, accurate and efficient. It’s not science-fiction anymore to monitor your health from home, using small and inexpensive devices, or to be assisted by AI when you get a diagnosis and treatment.
Creativity and Media
When I was in school, I thought ‘how cool would it be to have my own TV Channel’, because I could add so many great movies and documentaries. At another point, I was reading an interesting monthly magazine and thought to myself ‘if I had such a magazine, I would write about so many amazing things’.
Well, only a few years have passed and I have already managed to create my own documentary, develop my own documentary/movie/lecture-based website, and manage a magazine for which I also write (this one) – and all of them are far more lovable and enjoyable than I originally imagined.
All of that was made possible due to the fact that so many complicated things have become more and more automated and user friendly. With a laptop and an internet connection, I am able to edit photos, videos and music, build websites and manage this magazine (and more).
A few years ago such things could only be handled by huge teams of expensive professionals.
I can now, on my own, even remove background noise from audio recordings, stabilize shaky video footage, record with a cheap camera in front of a green screen and then add my own background, improve the image quality, and anything else you might imagine: making 3D animations, radio shows, video shows, slideshows :), websites, programs, etc.
Software plays the most important role when it comes to automating a process: a robot without software is a mechanical corpse. Often, it’s the hardware, the robot itself, limiting the capabilities of the software (although the opposite can also be true). When it comes to the internet and the digital world, software is rarely held back by hardware, which is why we can all create and use so many tools.
Want to carry an orchestra in your pocket? You can add a violin, piano, guitar, drums, and all the musical instruments you can imagine from a single app – and that’s only one category of things you can do with a smartphone.
Your smartphone and computer have become the gateway to a plethora of services: from communication to entertainment, work to health, collaboration and management.
But let’s think big. Damn big. Huge even!
We are all somewhat aware of how much information is online. Just consider Wikipedia, with its millions of articles. But many people may find it hard to digest this great amount of information when it was not written for their personal education level, or is not presented in a way that they find entertaining and engaging. This is why I am suggesting the following scenario:
If you want to know more about lions, just say that to your computer and it will teach you about lions in a way that you will find extremely entertaining and educational.
That sentence probably seems very simplistic and almost devoid of meaning, but it is way, way more interesting and profound than you may realize.
Before I explain the awesomeness of this idea, I want to make you aware that we have already published an extensive article about such new ways of rethinking education in one of our previous issues (link here) and I recommend that you go back and read that article after you finish this one. I bet you will find it very interesting: it is about games and linux, friends and Watson, Darwin and viruses, and much more.
Back to our story, let me explain to you the beauty behind this idea.
Computers already understand you to some degree, even if not perfectly. Google personalizes search results depending on where you are from, what you have searched for before, and so on. Understanding human language is not something new and, as we have already seen, IBM’s Watson is working on mastering it.
Understanding language is only one part of the picture, since computers can also examine pictures, videos and audio and make sense of them.
IBM’s Watson can already search through millions of videos, audio recordings and photos and display results based on those sources. Let’s say you want to know what Jacque Fresco has to say about politics. A Watson-like system could show you a video clip of him talking about politics, or play an audio portion of a lecture, or both combined. Quite amazing, isn’t it?
Let’s go even further. Check out this picture. A bad picture of a cat, isn’t it? Not very impressive? Well, it was BUILT by a computer that knew nothing about cats. It watched 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 different items, it began to recognise pictures of cats using a “deep learning” algorithm. On its own, it deduced what a cat looks like and ‘drew’ it.
Google developed this computer that can look at videos and photos and understand what it sees. That’s impressive, and although Facebook’s face recognition is now as accurate as the human brain, Google’s computer can even play games based only on visual cues, learning much like a human does. So basically, this computer watches the game and understands the game’s rules based only on that. Google is working hard on creating software that mimics the way humans learn and understand. The project is called the Google Brain. Of course, they are not the only ones focusing on cognitive AI abilities.
Computer learns how to play games by just observing (from min 02:46): https://www.youtube.com/watch?v=mArrNRWQEso&feature=youtu.be&t=2m46s
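The principle behind the cat experiment – finding structure in data that nobody labeled – can be shown at toy scale. This is nothing like Google’s deep network; it is just a tiny k-means clustering sketch, with made-up 2-D points, that separates two groups of unlabeled data entirely on its own:

```python
# Unsupervised learning in miniature: no one tells the algorithm what
# the groups are; it discovers them from the data alone. Google's system
# used deep networks on millions of images -- this toy uses k-means on
# six 2-D points, purely to show the "no labels" principle.
import random

def kmeans(points, k=2, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)          # start from random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # move each center to the mean of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return clusters

# Two unlabeled blobs: one near (0, 0), one near (10, 10).
data = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
a, b = kmeans(data, k=2)
print(sorted(map(len, (a, b))))   # the algorithm separated the two blobs
```

Swap the points for image features and scale everything up enormously, and you have the flavor of how a machine can start grouping ‘cat pictures’ together without ever being told what a cat is.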
All of these tools can understand you: your level of education, emotions, focus level, what you like and don’t like, and more. They can also understand what they are looking at, from videos and photos to audio and text.
Now, what if, when you search for something, the search engine already knows your current level of understanding of that topic and only displays the results that you will understand and prefer? Even better, what if the results are not writings that were previously written, but are written by the software in direct answer to your question, using the most up-to-date knowledge on the subject?
Almost all smartphones running iOS, Android, and even Windows 8 can do this to a limited extent. Just go over to Google.com, click on the ‘mic’ icon and say “What is the distance to the moon?” – Google will ‘tell’ you with a voice, not only in text.
More than that, there is software that can actually “create news articles”. In March of this year, an earthquake hit California. Three minutes later, a robot posted a short article about the earthquake, with all the important information in it. Read the article here and see if you can tell whether it was written by a robot or a human. This is not an isolated case: many websites and companies use such software to write their news, and these robots can even track events and provide updates. Some research shows that many people could not tell the difference between articles written by robots and those written by humans. (source)
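At its simplest, such ‘robot journalism’ is structured event data poured into a template – reportedly how that quake story was generated within minutes. The sketch below is only an illustration of the principle; the field names and numbers are invented, not taken from any real system.

```python
# A minimal sketch of template-based news writing: a sensor feed hands
# over structured event data, and the "reporter" fills in a template.
# All field names and figures here are illustrative, not real data.

def quake_report(event):
    # Derive a descriptive word from the raw number, like real systems do.
    size = "shallow" if event["depth_km"] < 10 else "deep"
    return (
        "A magnitude {magnitude} earthquake struck {distance_km} km from "
        "{place} at {time}, seismologists report. ".format(**event)
        + "The quake occurred at a {} depth of {} km.".format(
            size, event["depth_km"])
    )

event = {
    "magnitude": 4.4,
    "place": "Westwood, California",
    "distance_km": 9,
    "time": "6:25 a.m.",
    "depth_km": 7.9,
}
print(quake_report(event))
```

Real systems add many templates, synonyms and follow-up logic on top of this, but the core is the same: no creativity is required to turn clean, structured data into a readable article.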
So, as you can see, the idea of a computer understanding you quite well, and writing articles specifically for you, is not science fiction at all.
Now, if computers can master video games and so many other controls (e.g. your smartphone’s speech recognition setting an alarm or sending a message), they could also control all kinds of software. So it is not far-fetched to learn that they can also create videos, as this company showcases using similar software.
So, you want to know more about lions, you just say that to your computer and it teaches you about lions in a way that you will find extremely entertaining and educational.
Since it knows you and what you prefer (for instance, short videos, no background music and a male voice), it then searches across millions of articles, creates a relevant ‘script’ and then transforms that script into a customized documentary (video) using photos, audio and videos from the internet or, even better, drawing the story for you as Google’s computer drew that cat (ok, better than that, but you get the point).
So again, the computer searches for what you asked for and understands what it finds. Then writes a script and creates a video. The end result is a very personalized one, custom made for you, since the same computer understands you, your level of existing knowledge of the topic and what you like.
How does it sound now? Awesome, isn’t it!? You will be able to learn about anything in completely customized, original and personal ways.
I think that in the next few years, you will be able to talk to your computer as you do with any other human being. The difference will be that the computer can do many things for you that your friend can’t. Just think of telling it what kind of website you want to build, and it simply creates it for you, as it understands every programming language; or just say what food you want and it cooks it for you; and so much more… Just think of the possibilities. Couldn’t we make any service fully automated and extremely easy to use and interact with?
I hope I have demonstrated that almost any kind of goods can be created in fully automated ways, using far fewer resources and far less energy, and that services can be made very smart and complex by software that learns in ways similar to humans.
I know I’m unable to talk about all goods and services but, with the examples provided and using your imagination, try to automate in your mind the production of other goods and the deployment of other services. See if you can automate everything. 😉