Cities are just complex villages – clusters of people with a bunch of ‘stuff’ around them. In our case, the ‘stuff’ is mostly technology that allows for comfort, knowledge, research, and more.
A city may be thought of as a ‘total enclosure system’ – a self-sustaining community, somewhat like a cruise ship, that can provide for all the needs and wants of its occupants, independent of other ships or any external supply.
The clusters of people + technology that we call cities can be very different from one another, from their functionality to their local environment. A city can be round, or perhaps more square, depending on what its functionality will be. For instance, when it comes to access, it is much better to build a circular city with the important facilities in the center, so that access to them is easier. However, if you build a city in a very hot, dry area, you may opt for a different shape to better circulate the wind through the city and cool it down. The same goes for locations, as a city in the sea may be built very differently than a city in an area with high hills. The size of cities also depends on these factors.
Land, water, and even space are locations where humans can create cities. While space is a recently explored environment and it may take many years of technological development to create cities there, land and water are already environments that humans have the knowledge to control and, thus, the ability to create complex cities in such areas.
However, saying that you want to build a city on land does not say very much, since there are so many factors that can substantially change a city plan: climate, soil, landscape, elevation, etc. This is why it is impossible to imagine such cities without knowing a great deal about their position on planet Earth. Each city is unique.
Since each city can be very different from another, making it impractical to categorize them by ‘type’, I will instead try to highlight key components of any city, such as:
– DNS: Digital Nervous System
– TES: Total Enclosure System
– CDL: Colonizing Different Landscapes
If you have been following the AA WORLD series of articles, you may have noticed that from construction to transportation, production to delivery, and even the ‘home’, all can become fully automated and autonomous, while the interface between these technologies and humans can be made very intuitive. We have also discussed ways of making education decentralized and how schools as we think of them today may not be needed in the future, along with how we will create and deal with abundance. Since there won’t be nearly as many special places in these future cities as we see today – offices, police stations, banks, petrol/gas stations, so many parking lots, town halls and others – cities of the future might serve many other purposes, as I will try to describe in this article.
Therefore, when it comes to cities, you have to incorporate the other AA World articles to be able to visualize a more complete picture of how the cities might look in the future. Once this series ends, we will release a special TVP Magazine issue containing all of them.
DNS: Digital Nervous System
A smart city needs to sense and react to ever-changing conditions.
In order for this kind of systems approach to work properly, there are 3 key components:
– Sensing the Environment
– High Connectivity and Massive Data Storage
– Computational Power for Arriving at Decisions
To sense, the city has to have sensors at key points to record localized temperatures, production flows, air quality and water consumption, monitor bridges and other structures, and so on.
This mesh of sensors, connections, and the software that interprets them is called ‘the internet of things’ today, and it is something we are experiencing more and more. What this means is that gadgets, electronics, pipes, walls, entire houses and buildings, and almost all of the physical objects around us are gaining digital awareness.
For instance, a system of water pipes with simple flow sensors and pressure valves becomes far more intelligent once it is digitized: the data is uploaded into the cloud (the internet) and interpreted by smart software that then communicates with and manages the valves and sensors autonomously. Thus, by monitoring a city’s water consumption, we can program the pipes to adjust the water flow to minimize waste. In this way, we can automate a huge network of pipes very simply. This is just one example of how digitizing these objects/things will make a city intelligent and responsive.
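The control logic behind such a self-adjusting pipe network can be sketched in a few lines. This is only an illustration – the segment fields, target flow, and correction factor below are all hypothetical:

```python
# Illustrative sketch of a self-adjusting water network (all names and
# thresholds are hypothetical, not from any real utility system).

def adjust_valves(segments, target_flow=10.0):
    """Nudge each pipe segment's valve toward the target flow rate.

    `segments` is a list of dicts with a measured 'flow' (litres/min)
    and a valve 'opening' between 0.0 (closed) and 1.0 (fully open).
    """
    for seg in segments:
        error = seg["flow"] - target_flow
        # Proportional correction: close the valve a little if flow is
        # above target, open it a little if below, clamped to [0, 1].
        seg["opening"] = min(1.0, max(0.0, seg["opening"] - 0.05 * error))
    return segments

network = [
    {"id": "pipe-A", "flow": 14.0, "opening": 0.8},  # over target -> closes slightly
    {"id": "pipe-B", "flow": 6.0, "opening": 0.4},   # under target -> opens slightly
]
adjust_valves(network)
```

Run in a loop against live sensor readings, even this naive rule would keep flows near the target across thousands of segments without human attention – which is the whole point of digitizing the network.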
Imagine bridges that communicate with the traffic flow, or entire transportation systems that can do that. Then consider food production lines that are able to ‘understand’ what is needed and where it is needed. In the words of IBM: “Hospitals can monitor and regulate pacemakers long distance, factories can automatically address production line issues and hotels can adjust temperature and lighting according to a guest’s preferences”. The possibilities seem to be limited only by our imagination.
Let’s look at some present-day examples to show more exactly how this technology works, and what is already possible today.
In Amsterdam, engineers are working to deploy a smart public lighting system by 2018, connecting energy-efficient LEDs with each other and with a smart network that can not only save energy, but also light the streets and other public places exactly when and as much as needed, and automatically report failures. Considering that roughly 19% of all electricity use goes to lighting, an independent global trial of LED technology in 12 of the world’s largest cities found that LEDs can generate energy savings of 50 to 70 percent – with savings reaching 80 percent when LED lighting is coupled with smart controls. (source)
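The arithmetic behind those percentages is easy to check. Using the article’s own figures (19% of electricity for lighting, 50–80% savings), a rough city-wide estimate – for a hypothetical city consuming 1,000 GWh a year – looks like this:

```python
# Back-of-the-envelope check of the lighting-savings figures quoted above.
city_electricity_gwh = 1000.0      # hypothetical annual consumption of a city
lighting_share = 0.19              # ~19% of all electricity goes to lighting

led_savings_low = 0.50             # plain LED retrofit, low end
led_with_controls = 0.80           # LEDs coupled with smart controls

lighting_gwh = city_electricity_gwh * lighting_share   # 190 GWh on lighting

saved_low = lighting_gwh * led_savings_low       # 95 GWh saved per year
saved_high = lighting_gwh * led_with_controls    # 152 GWh saved per year

# Smart-controlled LEDs alone would cut TOTAL city consumption by ~15%:
total_cut = lighting_share * led_with_controls   # 0.152
```

In other words, lighting alone offers a double-digit percentage cut in a city’s entire electricity use – which is why street lighting is such a common first target for smart-city projects.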
City24/7 created smart screens placed at key points of a city – bus stops, train stations, major entryways, etc. – that display relevant information about that particular location. These smart screens have a dedicated emergency communication network, battery backup, a ruggedized structure (ATM strength), high-speed network access, and various sensors (chemical, bio-hazard, environmental); they offer powerful processing, can direct people in the area (what to do, where to go), can be monitored remotely, and more. Each is basically a highly durable box (screen) that senses the environment and is smart enough to be extremely helpful to the inhabitants of a city.
Cisco, an important name when it comes to the internet of things, is collaborating with many cities to make them smarter. One example is Barcelona, where they laid out a plan to transform it into a smart city by 2020 by deploying sensors in various parts of the city and making sense of the data through smart computer programs. The sectors of improvement include: transportation, real estate, safety and security, utilities, learning, health, sports and entertainment, and government. (source)
Although there will be no government or real estate in the Venus Project, and the notion of ‘security’ may change a lot, we should stress here that the technologies presented in the AA WORLD series are highlighted strictly for their technical capabilities, and are not intended to present or argue for any particular societal implementation.
Barcelona has already implemented smart parking, smart bus stations, and even smart garbage cans. Sensors inside these garbage cans can detect if the garbage is full and/or emitting bad odors, and then direct garbage trucks to empty only those that need it. This is much more efficient than emptying garbage cans one after another, some of them empty or barely used. The parking lots in Barcelona have light and metal detectors to detect empty parking spots and direct drivers to those spots via an app. (source) Also, a city-wide network of sensors provides valuable real-time information on the flow of citizens, noise and other forms of environmental pollution, as well as traffic and weather conditions. (source) This is a video showcasing the Barcelona smart city project.
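The bin-selection logic behind such a system can be sketched very simply. The sensor fields and threshold below are invented for illustration, not taken from Barcelona’s actual deployment:

```python
# Hypothetical sketch: pick only the bins that actually need emptying.

def bins_to_service(bins, fill_threshold=0.75):
    """Return the ids of bins needing a pickup: nearly full, or smelly."""
    return [b["id"] for b in bins
            if b["fill_level"] >= fill_threshold or b["odor_detected"]]

city_bins = [
    {"id": "bin-1", "fill_level": 0.90, "odor_detected": False},  # nearly full
    {"id": "bin-2", "fill_level": 0.10, "odor_detected": False},  # skip this one
    {"id": "bin-3", "fill_level": 0.30, "odor_detected": True},   # bad odor
]
route = bins_to_service(city_bins)  # only bin-1 and bin-3 get a truck visit
```

A real system would then feed this shortlist into a route optimizer for the trucks, but even this simple filter already eliminates the wasted stops.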
Controlling a city seems to be as easy as controlling a game, as this Cisco demo video shows.
HP CeNSE is another example of this idea of ‘sensing’ the environment. CeNSE consists of a highly intelligent network of billions of nanoscale sensors designed to feel, taste, smell, see, and hear what is going on in the world. These sensors can analyze earthquakes, “smell” a gas leak, sense wear and tear on a bridge, track the spread of the next flu virus, and more.
However, it is not only about putting sensors in cars, roads, buildings, and so on; it is also about putting sensors into the soil, atmosphere, space, etc., thus ‘sensing’ all of nature to better manage the natural resources, predict the weather and more.
NASA has been doing this for quite a while, analyzing global waters, clouds, wind, precipitation, temperatures, land and sea elevations, vegetation and a bunch more. The system allows for better prediction of hurricanes, tsunamis, floods, etc. Have a look at this amazing NASA map showcasing the technology behind the project.
This video highlights near-future improvements of the current satellite system – https://www.youtube.com/watch?v=BLG2qnGunyg&list=UUjWM2q7LQQpTQ23QhDa7t0Q
NASA’s future plans include the expansion of this cluster of satellites, making them increasingly autonomous. They also plan on adding flocks of drones, surveying the weather from inside the Earth’s atmosphere. – https://www.youtube.com/watch?v=w_fLMtkCntY
Such complex arrays of sensors can monitor Earth and its resources from inside Earth, on ground level, in the atmosphere and from space. That, combined with sensors in traffic, buildings, and objects will make our environment very sensitive. Yet, there is one more approach that will make Earth extremely sensitive and intelligent: the human network.
You see, putting sensors at key points within cities and around the planet can cover much of our needs, but it is quite a challenge – and looking at people as ‘sensor carriers’ changes this picture a lot. Billions of people already carry around a device (the smartphone) that has become much more than a phone and, in some situations, even smarter than a computer. These smartphones can include multiple sensors to detect pollution, location, movement and orientation, atmospheric pressure, temperature, etc. Many of them already host light sensors, humidity sensors, and a bunch more. (source) Since humans travel all around the world and inhabit places from Africa’s deserts to Alaska, they can become dynamic sensors that help map the world.
Then consider how all of that, combined with sensors inside the human body that can monitor one’s health, would create a highly detailed map of our world that can detect and track virus outbreaks & treatment resistance, better understand disease propensities and much more.
Sensing the world around and inside us is something that is already happening on a planetary scale. Billions, perhaps trillions of sensors are already functioning to track almost every aspect of the Earth, including people’s health, climate, buildings, and pretty much everything else.
The interesting fact is that, once these objects, buildings, and resources are connected with each other and connected together via smart networks, a huge amount of fine tuning and smart automation can be done.
IBM, Intel and many cities are already adopting this idea of connecting physical objects to the internet, digitizing them, and transforming cities into ‘living organisms’ that can sense and respond.
This is proof that the idea of connecting objects with each other through smart networks is not only feasible, but very efficient.
High Connectivity and Massive Data Storage:
To be able to cope with this huge influx of data (information), we need super high-speed connections. The fastest ‘wired’ broadband connection achieved up to now is 1.4 terabits per second. At that speed, we could download 44 HD movies in one second. (source)
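The ‘44 HD movies per second’ figure checks out if we assume an HD movie of roughly 4 GB (the movie size is my assumption, not stated in the source):

```python
# Sanity check of the quoted download figure.
link_speed_bits = 1.4e12           # 1.4 terabits per second
hd_movie_bytes = 4e9               # assume ~4 GB per HD movie

movies_per_second = link_speed_bits / (hd_movie_bytes * 8)
# ~43.75, i.e. roughly 44 movies every second
```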
When it comes to wireless connectivity, “The South Korean government announced a new initiative to introduce a next-generation 5G wireless connection within six years. The new mobile standard would offer connections around 1,000 times faster than current 4G services” (source)
These speeds are impressive and the amount of data collected will grow bigger and bigger as we connect more devices and sensors to the system. Perhaps a new type of data storage will be required.
Current hard drives store a single ‘bit’ of data (the smallest unit) using about 1 million atoms. IBM has since proved that one bit of data can be stored in just 12 atoms – a huge increase in storage density. (source) Now imagine storing digital data within DNA structures. Harvard’s Wyss Institute has successfully stored 5.5 petabits of data – around 700 terabytes – in a single gram of DNA, smashing the previous DNA data density record a thousandfold. Theoretically, we could store a copy of the entire 2011 Internet – 1.8 ZB (zettabytes) – in just 4 grams of DNA. This new approach to long-term data storage seems to be completely feasible, efficient and extremely durable. (source)
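These figures can be sanity-checked with simple unit conversions. Note that the ‘4 grams’ claim relies on DNA’s theoretical maximum density (roughly 450 exabytes per gram), far beyond the ~700 TB per gram actually demonstrated:

```python
# Unit-conversion check of the DNA storage claims above.
achieved_bits_per_gram = 5.5e15                # 5.5 petabits in one gram
achieved_tb_per_gram = achieved_bits_per_gram / 8 / 1e12   # ~687.5 TB ("~700 TB")

internet_2011_bytes = 1.8e21                   # 1.8 zettabytes
theoretical_bytes_per_gram = 4.5e20            # ~450 EB/g, DNA's theoretical ceiling
grams_needed = internet_2011_bytes / theoretical_bytes_per_gram   # ~4.0 grams
```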
Computational Power for Arriving at Decisions
Ok, so let’s assume that we now have trillions of sensors on and around planet Earth, analyzing every aspect of it, along with sensors inside our bodies or carried with us, and that huge amounts of data are being collected from all of them. Even if we also assume that we have massive storage units for all this data, all of this potential is worthless without powerful computers and smart software.
The power of a computer is measured by the number of calculations per second it can perform. To date, the most powerful one is Milky Way 2, which can do around 33 quadrillion calculations per second. That number is a bit difficult to grasp, so consider that one hour of this machine’s calculations is roughly equivalent to 1,000 years of difficult sums by 1.3 billion people. Imagine the entire population of China continuously doing complex calculations for 1,000 years. That’s the power of one hour of calculations for this machine. It is very, very impressive. (source1, 2)
But the Milky Way 2 supercomputer is just one of hundreds, if not thousands, of supercomputers out there. Imagine the combined power of the top 500, which would be around 250 quadrillion calculations per second. An hour of calculations done by these machines is roughly equivalent to 1,400 years of continuous intense sums done by 7 billion people (the entire world population), with no breaks for sleep, bathroom, eating, etc. Imagine that.
If you are impressed by those numbers, your jaw will likely drop when you learn the next amazing fact. Bitcoin is a decentralized digital currency (a software-based payment system), with no one in control. To verify and record payments (transactions), users (you, me, everyone with a computer who agrees to help) have put their personal computer’s power to work. Together, people from around the world have created the equivalent of a gigantic supercomputer that is 256 times more powerful than the top 500 supercomputers in the world combined. What do you think about that? 256 times!
Let’s put it in perspective again: one hour of calculations done by the Bitcoin computers is like the current global population doing complex sums for the next 350,000 years or so. You might also imagine these 7+ billion people starting back when Archaic Homo sapiens, the forerunner of anatomically modern humans, evolved. Once again, that is just to match one hour of the Bitcoin network’s computations.
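These human-versus-machine comparisons are roughly consistent if we assume each person manages about 3 calculations per second without ever stopping – a generous rate for ‘difficult sums’. The rate is my assumption, chosen to reproduce the article’s figures:

```python
# Rough consistency check of the "human computation" comparisons above.
SECONDS_PER_YEAR = 3.1536e7
HUMAN_RATE = 3.0   # assumed difficult sums per second per person (generous)

def years_to_match_one_hour(machine_calcs_per_sec, people):
    """Years the given number of humans need to match one machine-hour."""
    machine_hour_calcs = machine_calcs_per_sec * 3600
    return machine_hour_calcs / (people * HUMAN_RATE * SECONDS_PER_YEAR)

mw2_years = years_to_match_one_hour(33e15, 1.3e9)           # ~970 (article: 1,000)
top500_years = years_to_match_one_hour(250e15, 7e9)         # ~1,360 (article: 1,400)
bitcoin_years = years_to_match_one_hour(250e15 * 256, 7e9)  # ~348,000 (article: 350,000)
```

All three quoted figures come out within a few percent of the article’s numbers under the same assumed human rate, so the comparisons hold together.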
HP is currently developing a very powerful computer (The Machine) with the ‘internet of things’ idea in mind. It uses photons instead of electrons, is six times more powerful than existing servers and requires 80 times less energy. HP claims The Machine is capable of managing 160 petabytes in 250 nanoseconds. They also claim that this computer will be a huge shift in computer systems, able to cope with the huge influx of data coming from what is called ‘the internet of things’. (source) You can watch a half hour presentation of this technology by HP – https://www.youtube.com/watch?v=JzbMSR9vA-c
These numbers are almost impossible to make sense of, but the point is that our present computational power is huge. Many researchers are using these supercomputers for a wide variety of investigations.
Scientists use supercomputers to explore the chemical properties of materials in physically realistic environments; investigate various processes at the quantum scale; and combine experiment with large molecular simulations to understand how, at a molecular level, mutations enable antibiotic resistance in the bacteria that cause, among other diseases, bacterial meningitis. Supercomputers are also used for climate modelling research (atmospheric, ocean and land models); climate researchers are able to run full Earth system models with the additional complexity required for, say, modelling evaporation from land and the associated plant transpiration. Further uses include running biomechanical models to understand how dinosaurs moved; simulating the energy production of future fusion reactors; exploring new renewable energy technologies such as dye-sensitised solar cells; and designing quieter, more efficient aeroplanes. (source)
For instance, Tianhe-1A, the second most powerful supercomputer in the world, ran a simulation involving 110 billion atoms through 500,000 time-steps. In every one of these steps, Tianhe-1A had to analyze the relationships between each and every atom. These calculations took three hours to complete and accounted for 0.116 nanoseconds of simulated time – and this is on a computer capable of processing two quadrillion calculations per second. (source)
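A quick back-of-the-envelope calculation shows what those numbers imply about the simulation’s resolution, taking the quoted two quadrillion calculations per second at face value:

```python
# Derived quantities from the Tianhe-1A simulation figures quoted above.
atoms = 110e9                     # 110 billion atoms
timesteps = 5e5                   # 500,000 time-steps
wall_seconds = 3 * 3600           # three hours of real time
machine_rate = 2e15               # two quadrillion calculations per second
simulated_seconds = 0.116e-9      # 0.116 nanoseconds of simulated time

step_duration = simulated_seconds / timesteps            # ~2.3e-16 s per step
calcs_total = machine_rate * wall_seconds                # ~2.16e19 calculations
calcs_per_atom_step = calcs_total / (atoms * timesteps)  # ~390 per atom per step
```

So each simulated time-step covers a fraction of a femtosecond, and the machine budgets a few hundred calculations per atom per step – which is why 0.116 nanoseconds of simulated physics costs three hours on a petascale computer.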
You can do a Google search to see many more uses of supercomputers.
Now think about the various types of research done by distributed computing. As in the case of Bitcoin, this means people donating their personal computer’s power to form a network of computers that bring together immense computational power. Here is a list of such projects.
So, with the ability to handle quadrillions of calculations per second, computers are already more than capable of doing tremendous work, and we haven’t even touched on the quantum computer model, which seems poised to completely revolutionize the computer as we know it today, making a huge leap in computational power. I highly recommend watching this 3-minute video about one company that is already using quantum computing technology – http://www.dwavesys.com/
Perhaps more important than computational power and accumulated data storage is how we can arrive at relevant decisions. How might we automate the process of arriving at decisions?
Well, this is already happening in almost all aspects of society: construction, food production, management, etc. When you are dealing with huge amounts of data, you need computers coupled with smart software to search through all of it and arrive at conclusions. Of course, the software is written by humans, but it serves as proof that such systems are not only useful, but necessary.
If you want to detect supernovas (massive explosions of stars), you need computers to ‘watch’ the night sky 24/7 to reveal them (source). If you want to do medical drug research, you need robots (computers connected to external devices) to analyze huge amounts of data and arrive at decisions (source). Robots now dominate many leading bioscience laboratories, doing in just hours what once took days or weeks.
Adam and Eve are two robotic computers that form hypotheses, select efficient experiments to discriminate between them, execute the experiments using laboratory automation equipment, and then analyze the results. Both Adam and Eve have made actual discoveries. Adam was developed to investigate the functional genomics of yeast and the robot succeeded in autonomously identifying the genes that encode locally “orphan” enzymes in yeast. From Eve, scientists have discovered lead compounds for confronting malaria, Chagas, African sleeping sickness and other conditions. (source)
“Advanced laboratory robotics can be used to completely automate the process of science.” Wikipedia
When it comes to research, perhaps the most impressive robot discovery and overall ‘arriving at decisions’ is IBM’s Watson. I have mentioned IBM Watson in the AA WORLD series more than God is mentioned in the Bible :), but there is good reason for that. The way that Watson was built allows for a huge array of uses.
Watson can ‘read’ hundreds of millions of articles in a very short amount of time, look through videos and photos, understand human language and shapes (objects, images), and then arrive at a decision focusing on whatever you are requesting from it.
From the IBM website:
- When asked a question, Watson relies on hypothesis generation and evaluation to rapidly parse relevant evidence and evaluate responses from disparate data.
- Watson can read and understand natural language, important in analyzing unstructured data, which makes up as much as 80 percent of data today.
- Through repeated use, Watson literally gets smarter by tracking feedback from its users and learning from both successes and failures.
- Watson is a cognitive technology that processes information more like a human than a computer—by understanding natural language, generating hypotheses based on evidence, and learning as it goes.
What makes Watson so amazing is its capacity to combine 3 extraordinary features: natural language processing, hypothesis generation and evaluation, and dynamic learning.
Here’s how Watson can work in healthcare:
“First, the physician might describe symptoms and other related factors to the system.
Watson can then identify the key pieces of information and mine the patient’s data to find relevant facts about family history, current medications and other existing conditions.
It combines this information with current findings from tests, and then forms and tests hypotheses by examining a variety of data sources—treatment guidelines, electronic medical record data and doctors’ and nurses’ notes, as well as peer-reviewed research and clinical studies. From here, Watson can provide potential treatment options and its confidence rating for each suggestion.” IBM
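The hypothesis-generation-and-scoring loop IBM describes can be caricatured in a few lines. This is a deliberately naive toy, not Watson’s actual algorithm; every condition name and the scoring rule are invented for illustration:

```python
# Toy illustration of evidence-based hypothesis ranking (NOT IBM's algorithm).

def rank_hypotheses(symptoms, knowledge_base):
    """Score each candidate condition by the fraction of its known
    symptoms present in the patient, as a crude 'confidence' value."""
    scores = {}
    for condition, known_symptoms in knowledge_base.items():
        matches = len(set(symptoms) & set(known_symptoms))
        scores[condition] = matches / len(known_symptoms)
    # Highest-confidence hypotheses first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

kb = {  # invented mini knowledge base
    "flu":     ["fever", "cough", "fatigue", "aches"],
    "allergy": ["sneezing", "itchy eyes"],
    "cold":    ["cough", "sneezing", "sore throat"],
}
ranked = rank_hypotheses(["fever", "cough", "fatigue"], kb)
# 'flu' ranks highest, with a 0.75 confidence (3 of its 4 symptoms matched)
```

Watson’s real pipeline scores hundreds of evidence dimensions drawn from medical literature and patient records, and it learns the weights from feedback; but the basic shape – generate candidates, score them against evidence, report ranked confidences – is the same.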
Watson could also replace government with smarter, scientific decision-making:
“Cognitive computing can help improve the service of the public sector in several ways, improving on slow and manual decision-making processes by employing such capabilities as decision-management, predictive and content analytics, planning, discovery, information integration and data management.
Watson learns like a human. As it refines its own knowledge from its findings in vast sets of data and its interactions with the employees using it, it helps public employees improve process and policy. Watson helps deliver personal service to citizens navigating complex processes. From these interactions, Watson learns the priorities of the public and helps inform policies that serve those interests. And with threats to security an ongoing problem, Watson can uncover patterns of activity that can help an agency interpret and address abnormal usage that may suggest an emerging problem.” IBM
The IBM Watson Discovery Advisor is a research assistant that helps researchers collect information and synthesize insights to stay updated on recent findings and share information with colleagues. New York’s Genome Center plans to use the IBM Watson cognitive computing system to analyze the genomic data of patients diagnosed with a highly aggressive and malignant brain cancer, and to more rapidly deliver personalized, life-saving treatment to patients of this disease. Learn more about how Watson can accelerate and help clinicians personalize treatments.
What is even more ‘out of this world’ about Watson is its recently presented Watson Debater technology. Imagine you ask Watson about any topic – for instance, the influence of violent games on human behavior. Watson will read through millions of scientific papers on the subject and present both the PROS and CONS of the matter.
Here’s a video showcasing this technology:
With all that being said about Watson, it is feasible to think that an AI can make scientific decisions of all sorts: city planning, food production, people’s health and comfort, environmental decisions, and so on.
The Digital Nervous System (DNS), as I showed you, can be extremely complex; capable of analyzing infinitely varied data from all around the world by making quadrillions of calculations per second and fully capable of complex decision making. Without this DNS, a city would be just a pile of cement & metal and, for us, the planet would be an environment full of unpredictable and uncontrollable surprises.
TES: Total Enclosure System
Since we have already covered the construction, transportation, production & services, and ‘home’ aspect of this automated and autonomous future, we are left with two important topics to cover in order to think of these cities as being fully autonomous: food and energy.
If a city can produce all of the food necessary for its occupants and produce all of the energy consumed within the city, then we can rightly say that this city can be seen as a total-enclosure system.
These two topics are huge, so we will cover them in the next TVP Magazine issue.
Recommended DOCUMENTARIES for this article:
The Age of Big Data – http://www.videoneat.com/documentaries/2920/the-age-of-big-data-bbc-horizon-online
How Satellites Rule Our World – http://www.videoneat.com/documentaries/2878/how-satellites-rule-our-world
Life in a Networked Society – http://www.videoneat.com/documentaries/2673/estonia-life-in-a-networked-society
Smartest Machine on Earth – http://www.videoneat.com/documentaries/2727/smartest-machine-on-earth-video
Science Under Attack – http://www.videoneat.com/documentaries/1114/science-under-attack-bbc-horizon-watch-online
City Under the Sea – http://www.videoneat.com/documentaries/2533/city-under-the-sea-national-geographic
Earth From Space – http://www.videoneat.com/documentaries/3203/earth-from-space-nova-video