If I had told you 25 years ago that, in a quarter century’s time, one-third of the human race would be communicating with one another in huge global networks of hundreds of millions of people—exchanging audio, video, and text—and that the combined knowledge of the world would be accessible from a cellphone, that any single individual could post a new idea, introduce a product, or pass a thought to a billion people simultaneously, and that the cost of doing so would be nearly free, you would have shaken your head in disbelief. All are now reality.

But what if I were to say to you that 25 years from now, the bulk of the energy you use to heat your home and run your appliances, power your business, drive your vehicle, and operate every part of the global economy will likewise be nearly free? That’s already the case for several million early adopters who have transformed their homes and businesses into micropower plants to harvest renewable energy on site. Even before any of the fixed costs for installation of solar and wind are paid back—often in as little as two to eight years—the marginal cost of the harvested energy is nearly free.1 Unlike fossil fuels and uranium for nuclear power, in which the commodity itself always costs something, the sun collected on your rooftop, the wind traveling up the side of your building, the heat coming up from the ground under your office, and the garbage anaerobically decomposing into biomass energy in your kitchen are all nearly free.

And what if nearly free information were to begin managing nearly free green energy, creating an intelligent communication/energy matrix and infrastructure that would allow any business in the world to connect, share energy across a continental Energy Internet, and produce and sell goods at a fraction of the price charged by today’s global manufacturing giants? That too is beginning to evolve on a small scale as hundreds of start-up businesses establish 3D printing operations, infofacturing products at near zero marginal cost, powering their Fab Labs with their own green energy, marketing their goods for nearly free on hundreds of global websites, and delivering their products in electric and fuel-cell vehicles powered by their own green energy. (We will discuss the up-front fixed capital costs of establishing the collaborative infrastructure shortly.)

And what if millions of students around the world who had never before had access to a college education were suddenly able to take courses taught by the most distinguished scholars on the planet and receive credit for their work, all for free? That’s now happening.

And finally, what if the marginal cost of human labor in the production and distribution of goods and services were to plummet to near zero as intelligent technology substitutes for workers across every industry and professional and technical field, allowing businesses to conduct much of the commercial activity of civilization more intelligently, efficiently, and cheaply than with conventional workforces? That too is occurring as tens of millions of workers have already been replaced by intelligent technology in industries and professional bodies around the world. What would the human race do, and more importantly, how would it define its future on Earth, if mass and professional labor were to disappear from economic life over the course of the next two generations? That question is now being seriously raised for the first time in intellectual circles and public policy debates.


Getting to near zero marginal cost and nearly free goods and services is a function of advances in productivity. Productivity is “a measure of productive efficiency calculated as the ratio of what is produced to what is required to produce it.”2 If the cost of producing an additional good or service is nearly zero, that would be the optimum level of productivity.
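The logic here can be made concrete with a toy model (my own illustration, not from the text): when the marginal cost of each additional unit is near zero, the average cost per unit collapses toward zero as volume grows, because the fixed costs are spread over ever more units.

```python
# Toy illustration (hypothetical numbers, not from the text): average cost
# per unit for a producer with fixed costs and a near-zero marginal cost.
def average_cost(fixed_cost: float, marginal_cost: float, units: int) -> float:
    """Total cost divided by units produced."""
    return (fixed_cost + marginal_cost * units) / units

# With a $10,000 fixed cost and a $0.001 marginal cost, the average cost
# per unit falls toward the marginal cost as volume grows.
for units in (1_000, 100_000, 10_000_000):
    print(f"{units:>12,} units -> ${average_cost(10_000, 0.001, units):.4f} each")
```

At a million units the fixed cost contributes only a penny per unit; the price floor is set almost entirely by the near-zero marginal cost.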

Here again, we come face-to-face with the ultimate contradiction at the heart of capitalism. The driving force of the system is greater productivity, brought on by increasing thermodynamic efficiencies. The process is unsparing as competitors race to introduce new, more productive technologies that will lower their production costs and the price of their products and services to lure in buyers. The race continues to pick up momentum until it approaches the finish line, where the optimum efficiency is reached and productivity peaks. That finish line is where the marginal cost of producing each additional unit is nearly zero. When that finish line is crossed, goods and services become nearly free, profits dry up, the exchange of property in markets shuts down, and the capitalist system dies.

Until very recently, economists were content to measure productivity by two factors: machine capital and labor performance. But when Robert Solow—who won the Nobel Prize in economics in 1987 for his growth theory—tracked the Industrial Age, he found that machine capital and labor performance accounted for only approximately 14 percent of all economic growth, raising the question of what was responsible for the other 86 percent. This mystery led economist Moses Abramovitz, former president of the American Economic Association, to admit what other economists were afraid to acknowledge—that the other 86 percent is a “measure of our ignorance.”3

Over the past 25 years, a number of analysts, including physicist Reiner Kümmel of the University of Würzburg, Germany, and economist Robert Ayres at INSEAD business school in Fontainebleau, France, have gone back and retraced the economic growth of the industrial period using a three-factor analysis of machine capital, labor performance, and thermodynamic efficiency of energy use. They found that it is “the increasing thermodynamic efficiency with which energy and raw materials are converted into useful work” that accounts for most of the rest of the gains in productivity and growth in industrial economies. In other words, “energy” is the missing factor.4

A deeper look into the First and Second Industrial Revolutions reveals that the leaps in productivity and growth were made possible by the communication/energy matrix and accompanying infrastructure that comprised the general-purpose technology platform that firms connected to. For example, Henry Ford could not have enjoyed the dramatic advances in efficiency and productivity brought on by electrical power tools on the factory floor without an electricity grid. Nor could businesses reap the efficiencies and productivity gains of large, vertically integrated operations without the telegraph and, later, the telephone providing them with instant communication, both upstream to suppliers and downstream to distributors, as well as instant access to chains of command in their internal and external operations. Nor could businesses significantly reduce their logistics costs without a fully built-out road system across national markets. Likewise, the electricity grid, telecommunications networks, and cars and trucks running on a national road system were all powered by fossil fuel energy, which required a vertically integrated energy infrastructure to move the resource from the wellhead to the refineries and gasoline stations.

This is what President Barack Obama was trying to get at in his now-famous utterance during the 2012 presidential election campaign: “You didn’t build that.” While the Republican Party opportunistically took the quote out of context, what Obama meant was that successful businesses require infrastructure—electricity transmission lines, oil and gas pipelines, communication networks, roads, schools, etc.—if they are to be productive.5 No business in an integrated market economy can succeed without an infrastructure. Infrastructures are public goods and require government enablement as well as market facilitation. Common sense, yes, but it was lost in the fury that followed President Obama’s remarks, in a country where the prevailing myth is that all economic success is a result of entrepreneurial acumen alone and that government involvement is always a deterrent to growth.

Public infrastructure is, for the most part, paid for or subsidized by taxes and overseen and regulated by the government, be it on the local, state, or national level. The general-purpose technology infrastructure of the Second Industrial Revolution provided the productive potential for a dramatic increase in growth in the twentieth century. Between 1900 and 1929, the United States built out an incipient Second Industrial Revolution infrastructure—the electricity grid, telecommunications network, road system, oil and gas pipelines, water and sewer systems, and public school systems. The Depression and World War II slowed the effort, but after the war the laying down of the interstate highway system and the completion of a nationwide electricity grid and telecommunications network provided a mature, fully integrated infrastructure. The Second Industrial Revolution infrastructure advanced productivity across every industry, from automobile production to suburban commercial and residential building developments along the interstate highway exits.

During the period from 1900 to 1980 in the United States, aggregate energy efficiency—the ratio of useful to potential physical work that can be extracted from materials—steadily rose along with the development of the nation’s infrastructure, from 2.48 percent to 12.3 percent. The aggregate energy efficiency leveled off in the late 1990s at around 13 percent with the completion of the Second Industrial Revolution infrastructure.6 Despite a significant increase in efficiency, which gave the United States extraordinary productivity and growth, nearly 87 percent of the energy we used in the Second Industrial Revolution was wasted during transmission.7

Even if we were to upgrade the Second Industrial Revolution infrastructure, it’s unlikely to have any measurable effect on efficiency, productivity, and growth. Fossil fuel energies have matured and are becoming more expensive to bring to market. And the technologies designed and engineered to run on these energies, like the internal-combustion engine and the centralized electricity grid, have exhausted their productivity, with little potential left to exploit.

Needless to say, 100 percent thermodynamic efficiency is impossible. New studies, however, including one conducted by my global consulting group, show that with the shift to a Third Industrial Revolution infrastructure, it is conceivable to increase aggregate energy efficiency to 40 percent or more in the next 40 years, amounting to a dramatic increase in productivity beyond what the economy experienced in the twentieth century.8


The enormous leap in productivity is possible because the emerging Internet of Things is the first smart-infrastructure revolution in history: one that will connect every machine, business, residence, and vehicle in an intelligent network made up of a Communications Internet, Energy Internet, and Logistics Internet, all embedded in a single operating system. In the United States alone, 37 million digital smart meters are now providing real-time information on electricity use.9 Within ten years, every building in America and Europe, as well as other countries around the world, will be equipped with smart meters. And every device—thermostats, assembly lines, warehouse equipment, TVs, washing machines, and computers—will have sensors connected to the smart meter and the Internet of Things platform. In 2007, there were 10 million sensors connecting every type of human contrivance to the Internet of Things. In 2013, that number was set to exceed 3.5 billion, and even more impressive, by 2030 it is projected that 100 trillion sensors will connect to the IoT.10 Other sensing devices, including aerial sensory technologies, software logs, radio frequency identification readers, and wireless sensor networks, will assist in collecting Big Data on a wide range of subjects, from the changing price of electricity on the grid to logistics traffic across supply chains, production flows on the assembly line, services in the back and front office, and up-to-the-moment tracking of consumer activities.11 As mentioned in chapter 1, the intelligent infrastructure, in turn, will feed a continuous stream of Big Data to every business connected to the network, which those businesses can then process with advanced analytics to create predictive algorithms and automated systems to improve their thermodynamic efficiency, dramatically increase their productivity, and reduce their marginal costs across the value chain to near zero.

Cisco Systems forecasts that by 2022, the Internet of Everything will generate $14.4 trillion in cost savings and revenue.12 A General Electric study published in November 2012 concludes that the efficiency gains and productivity advances made possible by a smart industrial Internet could resound across virtually every economic sector by 2025, impacting “approximately one half of the global economy.” It’s when we look at each industry, however, that we begin to understand the productive potential of establishing the first intelligent infrastructure in history. For example, in the aviation industry alone, a mere 1 percent improvement in fuel efficiency, brought about by using Big Data analytics to more successfully route traffic, monitor equipment, and make repairs, would generate savings of $30 billion over 15 years.13

The health-care field is still another poignant example of the productive potential that comes with being embedded in an Internet of Things. Health care accounted for 10 percent of global GDP, or $7.1 trillion in 2011, and 10 percent of the expenditures in the sector “are wasted from inefficiencies in the system,” amounting to at least $731 billion per year. Moreover, according to the GE study, 59 percent of the health-care inefficiencies, or $429 billion, could be directly impacted by the deployment of an industrial Internet. Big Data feedback, advanced analytics, predictive algorithms, and automation systems could cut the cost in the global health-care sector by 25 percent according to the GE study, for a savings of $100 billion per year. Just a 1 percent reduction in cost would result in a savings of $4.2 billion per year, or $63 billion over a 15-year period.14 Push these gains in efficiency from 1 percent, to 2 percent, to 5 percent, to 10 percent, in the aviation and health-care sectors and across every other sector, and the magnitude of the economic change becomes readily apparent.

The term Internet of Things was coined by Kevin Ashton, one of the founders of the MIT Auto-ID Center, back in 1999. In the years that followed, the IoT languished, in part, because the cost of sensors and actuators embedded in “things” was still relatively expensive. In an 18-month period between 2012 and 2013, however, the cost of radio-frequency identification (RFID) chips, which are used to monitor and track things, plummeted by 40 percent. These tags now cost less than ten cents each.15 Moreover, the tags don’t require a power source because they are able to transmit their data using the energy from the radio signals that are probing them. The price of microelectromechanical systems (MEMS), including gyroscopes, accelerometers, and pressure sensors, has also dropped by 80 to 90 percent in the past five years.16

The other obstacle that slowed the deployment of the IoT has been the Internet protocol, IPv4, which allows only 4.3 billion unique addresses on the Internet (every device on the Internet must be assigned an Internet protocol address). With most of the IP addresses already gobbled up by the more than 2 billion people now connected to the Internet, few addresses remain available to connect millions and eventually trillions of things to the Internet. Now, a new Internet protocol version, IPv6, has been developed by the Internet Engineering Task Force; it will expand the number of available addresses to a staggering 340 trillion trillion trillion—more than enough to accommodate the projected 2 trillion devices expected to be connected to the Internet in the next ten years.17
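The address-space arithmetic behind these figures is easy to check: IPv4 uses 32-bit addresses and IPv6 uses 128-bit addresses, which is where the 4.3 billion and the “340 trillion trillion trillion” (3.4 × 10^38) numbers come from.

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_addresses = 2 ** 32    # ~4.3 billion unique addresses
ipv6_addresses = 2 ** 128   # ~3.4 x 10**38, i.e., 340 trillion trillion trillion

print(f"IPv4: {ipv4_addresses:,}")     # 4,294,967,296
print(f"IPv6: {ipv6_addresses:.2e}")   # ~3.40e+38
```

Even the projected 2 trillion connected devices would consume a vanishingly small fraction of the IPv6 space.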

Nick Valéry, a columnist at The Economist, breaks down these incomprehensibly large numbers, making sense of them for the average individual. To reach the threshold of 2 trillion devices connected to the Internet in less than ten years, each person would only need to have “1,000 of their possessions talking to the Internet.”18 In developed economies, most people have approximately 1,000 to 5,000 possessions.19 That might seem like an inordinately high number, but when we start to look around the house, garage, automobile, and office, and count up all the things from electric toothbrushes to books to garage openers to electronic pass cards to buildings, it’s surprising how many devices we have. Many of these devices will be tagged over the next decade or so, using the Internet to connect our things to other things.

Valéry is quick to point out a number of big unresolved issues that are beginning to dog the widespread rollout of the IoT, potentially impeding its rapid deployment and public acceptance. He writes:

The questions then become: Who assigns the identifier? Where and how is the information in the database made accessible? How are the details, in both the chip and the database, secured? What is the legal framework for holding those in charge accountable?

Valéry warns that

glossing over such matters could seriously compromise any personal or corporate information associated with devices connected to the internet. Should that happen through ignorance or carelessness, the internet of things could be hobbled before it gets out of the gate.20

Connecting everyone and everything in a neural network brings the human race out of the age of privacy, a defining characteristic of modernity, and into the era of transparency. While privacy has long been considered a fundamental right, it has never been an inherent right. Indeed, for all of human history, until the modern era, life was lived more or less publicly, as befits the most social species on Earth. As late as the sixteenth century, if an individual were to wander alone aimlessly for long periods of time in daylight, or hide away at night, he or she was likely to be regarded as possessed. In virtually every society that we know of before the modern era, people bathed together in public, often urinated and defecated in public, ate at communal tables, frequently engaged in sexual intimacy in public, and slept huddled together en masse.

It wasn’t until the early capitalist era that people began to retreat behind locked doors. The bourgeois life was a private affair. Although people took on a public persona, much of their daily lives were pursued in cloistered spaces. At home, life was further isolated into separate rooms, each with their own function—parlors, music rooms, libraries, etc. Individuals even began to sleep alone in separate beds and bedrooms for the very first time.

The enclosure and privatization of human life went hand-in-hand with the enclosure and privatization of the commons. In the new world of private property relations, where everything was reduced to “mine” versus “thine,” the notion of the autonomous agent, surrounded by his or her possessions and fenced off from the rest of the world, took on a life of its own. The right to privacy came to be the right to exclude. The notion that every man’s home is his castle accompanied the privatization of life. Successive generations came to think of privacy as an inherent human quality endowed by nature rather than a mere social convention fitting a particular moment in the human journey.

Today, the evolving Internet of Things is ripping away the layers of enclosure that made privacy sacrosanct and a right regarded as important as the right to life, liberty, and the pursuit of happiness. For a younger generation growing up in a globally connected world where every moment of their lives is eagerly posted and shared with the world via Facebook, Twitter, YouTube, Instagram, and countless other social media sites, privacy has lost much of its appeal. For them, freedom is not bound up in self-contained autonomy and exclusion, but rather, in enjoying access to others and inclusion in a global virtual public square. The watchword of the younger generation is transparency, its modus operandi is collaboration, and its self-expression is exercised by way of peer production in laterally scaled networks.

Whether future generations living in an increasingly interconnected world—where everyone and everything is embedded in the Internet of Things—will care much about privacy is an open question.

Still, in the long passage from the capitalist era to the Collaborative Age, privacy issues will continue to be a pivotal concern, determining, to a great extent, both the speed of the transition and the pathways taken into the next period of history.

The central question is: When every human being and every thing is connected, what boundaries need to be established to ensure that an individual’s right to privacy will be protected? The problem is that third parties with access to the flow of data across the IoT, and armed with sophisticated software skills, can penetrate every layer of the global nervous system in search of new ways to exploit the medium for their own ends. Cyber thieves can steal personal identities for commercial gain, social media sites can sell data to advertisers and marketers to enhance their profits, and political operatives can pass on vital information to foreign governments. How then do we ensure an open, transparent flow of data that can benefit everyone while guaranteeing that information concerning every aspect of one’s life is not used without their permission and against their wishes in ways that compromise and harm their well-being?

The European Commission has begun to address these issues. In 2012, the Commission held an intensive three-month consultation, bringing together more than 600 leaders from business associations, civil society organizations, and academia, in search of a policy approach that will “foster a dynamic development of the Internet of Things in the digital single market while ensuring appropriate protection and trust of EU citizens.”21

The Commission established a broad principle to guide all future developments of the Internet of Things:

In general, we consider that privacy & data protection and information security are complementary requirements for IoT services. In particular, information security is regarded as preserving the confidentiality, integrity and availability (CIA) of information. We also consider that information security is perceived as a basic requirement in the provision of IoT services for the industry, both with a view to ensure information security for the organization itself, but also for the benefit of citizens.22

To advance these protections and safeguards, the Commission proposed that mechanisms be put in place

to ensure that no unwanted processing of personal data takes place and that individuals are informed of the processing, its purposes, the identity of the processor and how to exercise their rights. At the same time processors need to comply with the data protection principles.23

The Commission further proposed specific technical means to safeguard user privacy, including technology to secure data protection. The Commission concluded with a declaration that “it should be ensured, that individuals remain in control of their personal data and that IoT systems provide sufficient transparency to enable individuals to effectively exercise their data subject rights.”24

No one is naïve regarding the difficulty of turning theory to practice when it comes to securing everyone’s right to control and dispose of their own data in an era that thrives on transparency, collaboration, and inclusivity. Yet there is a clear understanding that if the proper balance is not struck between transparency and the right to privacy, the evolution of the Internet of Things is likely to be slowed, or worse, irretrievably compromised and lost, thwarting the prospects of a Collaborative Age. (These questions of privacy, security, access, and governance will be examined at length throughout the book.)

Although the specter of connecting everyone and everything in a global neural network is a bit scary, it’s also exciting and liberating at the same time, opening up new possibilities for living together on Earth, which we can only barely envision at the outset of this new saga in the human story.

The business community is quickly marshaling its resources, determined to wrest value from a technological revolution whose effects are likely to match and even exceed the advent of electricity at the dawn of the Second Industrial Revolution. In 2013, The Economist’s Intelligence Unit published the first global business index on the “quiet revolution” that’s beginning to change society. The Economist surveyed business leaders across the world, concentrating on the key industries of financial services, manufacturing, health care, pharmaceuticals, biotechnology, IT and technology, energy and natural resources, and construction and real estate.

The report started off by observing that the rapid drop in technology costs and new developments in complementary fields, including mobile communication and cloud computing, along with an increase in government support, are pushing the IoT to center stage of the global economy. Thirty-eight percent of the corporate leaders surveyed forecast that the IoT would have a “major impact in most markets and most industries” within the next three years, and an additional 40 percent of respondents said it would have “some impact on a few markets or industries.” Only 15 percent of corporate executives felt that the IoT would have “a big impact for only a small number of global players.”25 Already, more than 75 percent of global companies are exploring or using the IoT in their businesses to some extent, and two in five CEOs, CFOs, and other C-suite-level respondents say they have “a formal meeting or conversation about the IoT at least once a month.”26

Equally interesting, 30 percent of the corporate leaders interviewed said that the IoT will “unlock new revenue opportunities for existing products/services.” Twenty-nine percent said the IoT “will inspire new working practices or business processes.” Twenty-three percent of those surveyed said the IoT “will change our existing business model or business strategy.” Finally, 23 percent of respondents said that the IoT “will spark a new wave of innovation.” Most telling, more than 60 percent of executives “agree that companies that are slow to integrate the IoT will fall behind the competition.”27

The central message in The Economist survey is that most corporate leaders are convinced that the potential productivity gains of using the Internet of Things across the value chain are so compelling and disruptive to the old ways of doing business that they have no choice but to try to get ahead of the game by embedding their business operations in the IoT platform.

However, the IoT is a double-edged sword. The pressure to increase thermodynamic efficiency and productivity and to reduce marginal costs will be irresistible. Companies that don’t forge ahead by taking advantage of the productivity potential will be left behind. Yet the unrelenting forward thrust in productivity let loose by an intelligent force operating at every link and node across the entire Third Industrial Revolution infrastructure is going to take the marginal cost of generating green electricity and producing and delivering an array of goods and services to near zero within a 25-year span. The evolution of the Internet of Things will likely follow roughly the same time line that occurred from the takeoff stage of the World Wide Web in 1990 to now, when an exponential curve resulted in the plummeting costs of producing and sending information.


Admittedly, such claims appear overstated until we take a closer look at the meaning of the word exponential. I remember when I was a kid—around 13 years old—a friend offered me an interesting hypothetical choice. He asked whether I would accept $1 million up front or, instead, one dollar the first day and a doubling of that amount every day for one month. I initially said, “You have to be kidding . . . anyone in their right mind would take the million.” He said, “Hold on, do the math.” So I took out a paper and pencil and began doubling the dollar. After 31 days of doubling, I was at over one billion dollars. That’s one thousand millions. I was blown away.

Exponential growth is deceptive; it creeps up on you. On day 15, the doubling process had only reached $16,384, leaving me confident that I had struck the right deal going for one million cash in hand. The next six days of doubling were a shocker. With just six more doublings, the figure had already topped $1 million. The next ten days knocked my socks off. By the thirty-first day of the month, the doubling of that dollar had topped $1 billion. I had just been introduced to exponential growth.
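The arithmetic of the anecdote is simple to verify: a dollar doubled daily is 2 to the power of (day − 1) dollars, so day 15 yields $16,384, day 21 is the first day past $1 million, and day 31 tops $1 billion.

```python
# One dollar on day 1, doubled every day thereafter for a 31-day month.
amount = 1
milestones = {}
for day in range(2, 32):
    amount *= 2
    if day in (15, 21, 31):
        milestones[day] = amount

print(milestones[15])  # day 15: $16,384
print(milestones[21])  # day 21: first day past $1 million
print(milestones[31])  # day 31: over $1 billion
```

The first half of the month looks unremarkable; all the drama is packed into the final doublings, which is exactly why linear intuition fails here.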

Most of us have a difficult time grasping exponential growth because we are so used to thinking in linear terms. The concept itself received very little attention in the public mind until Gordon Moore, cofounder of Intel, the world’s largest semiconductor chip maker, noted a curious phenomenon, which he described in a now-famous paper published in 1965. Moore observed that the number of components in an integrated circuit had been doubling every year since its invention in 1958:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase.28

Moore slightly modified his earlier projection in 1975, saying that the doubling was occurring every two years. That doubling process continued for another 37 years, although recently scientists have begun to predict a slowdown in the rate at which additional transistors can be packed onto a computer chip. The physicist Michio Kaku says we’re already beginning to see a slowdown and that Moore’s Law, at least in regard to chips, will peter out in another ten years using conventional silicon technology. Anticipating the slowdown, Intel is introducing its 3D processors, confident it can keep the doubling in place a bit longer.

Kaku points out that there is an upper limit on how much computing power can be squeezed out of silicon. He adds, however, that newer technologies like 3D chips, optical chips, parallel processing, and eventually molecular computing and even quantum computing will likely ensure an exponential growth curve in computing power well into the future.29

Moore’s Law has since been observed in a wide range of information technologies. Hard-disk storage capacity is experiencing a similar exponential growth curve. Network capacity—the amount of data going through an optical fiber—has achieved an even steeper exponential curve: the amount of data transmitted on an optical network is doubling every nine months or so.30

It’s the exponential factor that allowed computing costs to plummet for more than 50 years. When the first giant mainframe computers were being developed, the cost of computing was huge and out of commercial reach. The conventional wisdom was that, at best, only the military and a few research institutions could ever cover the costs. What experts failed to take into account was exponential growth in capacity and the falling costs of production. The invention of the integrated circuit (the microchip) changed the equation. Where 50 years ago a computer might cost millions of dollars, today hundreds of millions of people are equipped with relatively cheap smartphones with thousands of times more computing capacity than the most powerful mainframe computers of the 1960s.31 In the year 2000, one gigabyte of hard-drive space cost in the neighborhood of 44 dollars. By 2012, the cost had plunged to seven cents. In 2000, it cost $193 per gigabyte to stream video. Ten years later, that cost had dropped to three cents.32
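Assuming a smooth exponential decline between the two storage-cost data points (my assumption; the text gives only the endpoints), the drop from $44 to 7 cents per gigabyte over 12 years implies the cost halved roughly every 15 months:

```python
import math

# The text's two data points: $44 per gigabyte in 2000, $0.07 in 2012.
cost_2000, cost_2012 = 44.00, 0.07
years = 12

# Number of halvings needed to get from $44 down to $0.07.
halvings = math.log2(cost_2000 / cost_2012)   # ~9.3 halvings
print(f"{halvings:.1f} halvings in {years} years, "
      f"one every {years / halvings:.2f} years")
```

That implied halving time is in the same neighborhood as Moore's two-year doubling period, which is the parallel the chapter is drawing.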

To appreciate the significance of the exponential curve in computing power and cost reduction, consider this: the first commercially successful mass-produced business computer, the IBM 1401, often referred to as the Model T of the computer industry, debuted in 1959. The machine was five feet high and three feet wide and came with 4,096 characters of memory. It could perform 193,000 additions of eight-digit numbers in 60 seconds. The cost to rent IBM’s computer was $30,000 per year.33 In 2012, the Raspberry Pi, the world’s cheapest computer, went on sale for 25 dollars.34 The Raspberry Pi Foundation is being swamped with orders from buyers in developing countries as well as in first-world markets.

Today’s cell phones weigh a few ounces, can fit into a coat pocket, and cost a few hundred dollars. Sometimes they are even given away for free if the customer buys the carrier’s service plan. Yet they have thousands of times as much memory as the original Cray-1A computer of the late 1970s, which cost close to $9 million and weighed over 12,000 pounds.35 The marginal cost of computing power is heading to zero.

The exponential curve in generating information has fundamentally altered the way we live. As mentioned earlier, much of the human race is connecting with one another on the Internet and sharing information, entertainment, news, and knowledge for nearly free. They have already passed into the zero marginal cost society.

The exponential curve has migrated from the world of computing to become a standard for measuring economic success across a range of technologies and a new benchmark for commercial performance and returns on investment.


Nowhere is exponentiality more discussed today than in the renewable-energy industry. Many of the key players have come over from the information technology and Internet sectors to apply experience they garnered there to the new energy paradigm. They correctly sense two uncanny parallels.

First, the harvesting power of renewable energy technology is experiencing its own exponential growth curve in solar and wind, with geothermal, biomass, and hydro expected to follow. Like the computer industry, the renewable energy industry has had to reckon with initially high capital costs in the research, development, and market deployment of each new generation of the technology. Companies are also forced to stay two to three generations ahead of their competitors in anticipating when to bring new innovations online, or risk being crushed by the force of the exponential curve. A number of market leaders have gone belly-up in recent years because they were tied into old technologies and were swept away by the speed of innovation. Industry analysts forecast that the harvesting technology for solar and small wind power will be as cheap as cell phones and laptops within 15 years.

Second, like the Communications Internet where the up-front costs of establishing the infrastructure were considerable, but the marginal cost of producing and distributing information is negligible, the up-front costs of establishing an Energy Internet are likewise significant, but the marginal cost of producing each unit of solar and wind power is nearly zero. Renewable energy, like information, is nearly free after accounting for the fixed costs of research, development, and deployment.

Internet technology and renewable energies are beginning to merge to create an Energy Internet that will change the way power is generated and distributed in society. In the coming era, hundreds of millions of people will produce their own renewable energy in their homes, offices, and factories and share green electricity with each other on an Energy Internet, just as we now generate and share information online. When Internet communications manage green energy, every human being on Earth becomes his or her own source of power, both literally and figuratively. The creation of a renewable-energy regime, collected by buildings, partially stored in the form of hydrogen, distributed via a green electricity Internet, and connected to plug-in, zero-emission transport, establishes the five pillar mechanism that will allow billions of people to share energy at near zero marginal cost in an IoT world.

The scientific community is abuzz over the exponential curves in renewable-energy generation. Scientific American published an article in 2011 asking whether Moore’s Law applies to solar energy, and if so, might we already be on the course of a paradigm shift in energy similar to what has occurred in computing. The answer is an unqualified yes.

The impact on society is all the more pronounced when we consider the vast potential of solar as a future energy source. The sun beams 470 exajoules of energy to Earth every 88 minutes—equaling the amount of energy human beings use in a year. If we could grab hold of one-tenth of 1 percent of the sun’s energy that reaches Earth, it would give us six times the energy we now use across the global economy.36
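The arithmetic behind that claim is easy to verify: a year contains roughly 5,980 88-minute intervals, so capturing one-tenth of 1 percent of the incoming flux would indeed cover about six times current annual use (a back-of-the-envelope sketch using only the figures cited above):

```python
# If the sun delivers one year's worth of human energy use every
# 88 minutes, then 0.1% of the total annual solar flux amounts to
# roughly six times what humanity currently consumes.
minutes_per_year = 365.25 * 24 * 60        # ~525,960 minutes
budgets_per_year = minutes_per_year / 88   # ~5,977 annual energy budgets
multiple = 0.001 * budgets_per_year        # ~6x current global energy use
```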

Although the sun is clearly the universal energy source from which all our fossil fuels and other energies are derived, it makes up less than 0.2 percent of the current energy mix, primarily because, until recently, it has been expensive to capture and distribute. That is no longer the case.

Richard Swanson, the founder of SunPower Corporation, observed the same doubling phenomena in solar that Moore did in computer chips. Swanson’s law holds that the price of solar photovoltaic (PV) cells tends to drop by 20 percent for every doubling of industry capacity. Crystalline silicon photovoltaic cell prices have fallen dramatically, from $60 a watt in 1976 to $0.66 a watt in 2013.37
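Swanson’s law is a standard learning-curve formula. The sketch below (function names are illustrative) computes how many capacity doublings the 1976-to-2013 price decline implies, assuming a constant 20 percent price drop per doubling:

```python
import math

def swanson_price(initial_price, doublings, learning_rate=0.20):
    """Price per watt after n doublings of cumulative capacity,
    assuming each doubling cuts the price by `learning_rate`."""
    return initial_price * (1 - learning_rate) ** doublings

# Doublings implied by the fall from $60/W (1976) to $0.66/W (2013):
doublings = math.log(60 / 0.66) / math.log(1 / (1 - 0.20))  # ~20 doublings
```

Roughly 20 doublings of cumulative industry capacity over 37 years is consistent with the dramatic price collapse the text describes.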

Solar cells are capturing more of the solar energy that strikes them while reducing the cost of harvesting that energy. Solar efficiencies for triple-junction solar cells in the laboratory have reached 41 percent. Thin film has hit 20 percent efficiency in the laboratory.38

If this trend continues at the current pace—and most studies actually show an acceleration in exponentiality—solar energy will be as cheap as today’s average retail price of electricity by 2020 and half the price of today’s coal-generated electricity by 2030.39

The German power market is just beginning to experience the commercial impact of near zero marginal cost renewable energy. In 2013, Germany was already generating 23 percent of its electricity from renewable sources and is expected to generate 35 percent from renewables by 2020.40 The problem is that during certain times of day, the surge of solar and wind power flooding into the grid exceeds the demand for electricity, resulting in negative prices. Nor is Germany alone. Negative prices for electricity are popping up in places as diverse as Sicily and Texas.41

This is a wholly new reality in the electricity market and a harbinger of the future as renewable energy comes to make up an increasing percentage of electricity generation. Negative prices are disrupting the entire energy industry. Utilities are balking at investing in “backup” gas- and coal-fired power plants because they can no longer guarantee a reliable return on their investments. In Germany, a gas- or coal-fired power plant that might cost $1 billion to build, but that will no longer run at full capacity because of the onslaught of renewable energies into the grid, can only pay for itself on days when there is no wind or heavy cloud cover. This extends the payback time for new coal- and gas-fired plants, making the investments unfeasible. As a result, renewable energy is already beginning to push fossil-fuel-powered plants off the grid, even at this early stage of the Third Industrial Revolution.42

Global energy companies are being pummeled by the exponentiality of renewable energy. BP released a global energy study in 2011, reporting that solar generating capacity grew by 73.3 percent in 2011, reaching 63.4 gigawatts—ten times its level just five years earlier.43 Installed solar capacity has been doubling every two years for the past 20 years with no end in sight.44

Even in the United States, where the transition to new green energies has been tepid compared to Europe, the power sector is reeling. David Crane, president and CEO of NRG Energy, noted in November 2011 that “in the last two years, the delivered cost of energy from PV was cut in half.” NRG expects the cost to fall by half again in the next two years, which would make solar power less expensive than retail electricity in roughly 20 states—a shift that will revolutionize the energy industry.45

Like solar radiation, wind is ubiquitous and blows everywhere in the world—although its strength and frequency vary. A Stanford University study on global wind capacity concluded that if 20 percent of the world’s available wind was harvested, it would generate seven times more electricity than we currently use to run the entire global economy.46 Wind capacity has been growing exponentially since the early 1990s and has already reached parity with conventionally generated electricity from fossil fuels and nuclear power in many regions of the world. In the past quarter century, wind turbine productivity increased 100-fold and the average capacity per turbine grew by more than 1,000 percent. Increased performance and productivity has significantly reduced the cost of production, installation, and maintenance, leading to a growth rate of more than 30 percent per year between 1998 and 2007, or a doubling of capacity every two and a half years.47
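The relationship between an annual growth rate and a doubling time follows directly from compound growth; a minimal check in Python (variable names are illustrative):

```python
import math

def doubling_time(annual_growth_rate):
    """Years for capacity to double at a given compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# 30% annual growth implies a doubling roughly every 2.6 years,
# consistent with the figure of about two and a half years cited above.
wind_doubling = doubling_time(0.30)
```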

Naysayers argue that subsidies for green energy, in the form of feed-in tariffs, artificially prop up the growth curve. The reality is that they merely speed up adoption and scale, encourage competition, and spur innovation, which further increases the efficiency of renewable energy harvesting technologies and lowers the cost of production and installation. In country after country, solar and wind energy are nearing parity with conventional fossil fuel and nuclear power, allowing governments to begin phasing out feed-in tariffs. Meanwhile, the older fossil fuel energies and nuclear power, although mature and well past their prime, continue to be subsidized at levels that far exceed the subsidies extended to renewable energy.

A study prepared by the Energy Watch Group predicts four different future market-share scenarios of new wind- and solar-power-plant installations, estimating 50 percent market share by 2033, with a more optimistic estimate of reaching the same goal as early as 2017.48 While solar and wind are on a seemingly irreversible exponential path to near zero marginal costs, geothermal energy, biomass, and wave and tidal power are likely to reach their own exponential takeoff stage within the next decade, bringing the full sweep of renewable energies into an exponential curve in the first half of the twenty-first century.

Still, the powers that be continually lowball their projections of renewable energy’s future share of the global energy market, in part because, like the IT and telecommunications industry in the 1970s, they aren’t anticipating the transformative nature of exponential curves, even when faced with the cumulative doubling evidence of several decades.

Ray Kurzweil, the MIT inventor and entrepreneur who is now head of engineering at Google and has spent a lifetime watching the powerful disruptive impact of exponential growth on the IT industry, did the math just on solar alone. Based on the past 20 years of doubling, Kurzweil concluded that “after we double eight more times and we’re meeting all of the world’s energy needs through solar, we’ll be using one part in 10,000 of the sunlight that falls on earth.”49 Eight more doublings will take just 16 years, putting us into the solar age by 2028.
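Kurzweil’s back-of-the-envelope calculation can be reproduced in a few lines: eight doublings at one doubling every two years take 16 years and multiply capacity 256-fold (a sketch of the arithmetic only, not a forecast):

```python
# Kurzweil's solar projection, as quoted above: eight more doublings
# of installed capacity, at one doubling every two years.
doublings = 8
years_per_doubling = 2
years = doublings * years_per_doubling   # 16 years: roughly 2012 -> 2028
growth_factor = 2 ** doublings           # a 256-fold increase in capacity
```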

Kurzweil may be a bit optimistic. My own read is that we’ll reach nearly 80 percent renewable energy generation well before 2040, barring unforeseen circumstances.


Skeptics legitimately argue that nothing we exchange is ever really free. Even after the IoT is fully paid for and plugged in, there will always be some costs in generating and distributing information and energy. For that reason, we always use the term near zero when referring to the marginal cost of delivering information, green energy, and goods and services.

Although the marginal costs of delivering information are already tiny, there is a considerable effort afoot to reduce them even further, to get as close as possible to zero marginal cost. It is estimated that the Internet service providers (ISPs) that connect users to the Internet enjoyed revenues of $196 billion in 2011.50 All in all, an amazingly low cost for connecting nearly 40 percent of the human race and the entire global economy.51 Besides paying for service providers, everyone using the Internet pays for the electricity used to send and access information. It is estimated that the online delivery of a one-megabyte file costs only $0.001.52 However, the megabytes add up. The Internet uses up to 1.5 percent of the world’s electricity, costing $8.5 billion—again a small cost for enjoying global communication.53 That’s equivalent to the price of building four to five new gambling casinos in Las Vegas. Still, with ever-increasing interconnectivity and more powerful computing devices, electricity use is escalating. Google, for example, uses enough energy to power 200,000 homes.54

Much of the electricity generated is consumed by servers and data centers around the world. In 2011 in the United States alone, the electricity used to run servers and data centers cost approximately $7.5 billion.55 The number of federal data centers grew from 432 in 1998 to 2,094 in 2010.56 By 2011 there were more than 509,000 data centers on Earth taking up 285 million square feet of space, or the equivalent of 5,955 football fields.57 Because most of the electrical power drawn by IT equipment in these data centers is converted to heat energy, even more power is needed to cool the facilities. Often between 25 and 50 percent of the power is used for cooling the equipment.58

A large amount of electricity is also wasted just to keep the servers idling and ready in case a surge in activity slows down or crashes the system. The consulting firm McKinsey found that, on average, data centers are using only 6 to 12 percent of their electricity to power their servers during computation—the rest is used to keep them up and ready.59 New power-management applications are being put in place to lower the power mode when idle or to run at lower frequencies and voltages. Slowing down the actual computation also saves electricity. Another approach to what the industry calls energy-adaptive computing is to reduce energy requirements by minimizing overdesign and waste in the way IT equipment itself is built and operated.60

Cutting energy costs at data centers will ultimately come from powering the facilities with renewable energy. Although the up-front fixed cost of powering data centers with renewable energy will be significant, the payback time will continue to narrow as the costs of constructing positive power facilities continue to fall. And once the facilities and harvesting technologies are up and running, the marginal cost of generating solar and wind power and other renewable energies will be nearly zero, making the electricity almost free. This reality has not been lost on the big players in the data-storage arena.

Apple announced in 2012 that its huge new data center in North Carolina will be powered by a massive 20-megawatt solar-power facility and include a five-megawatt fuel-cell storage system powered by biogas to store intermittent solar power to ensure a reliable 24/7 supply of electricity.61 McGraw-Hill’s data center in East Windsor, New Jersey, will be powered by a 14-megawatt solar array. Other companies are planning to construct similar data-center facilities that will run on renewable energy.62

Apple’s data center is also installing a free cooling system in which cool nighttime outside air is incorporated into a heat exchange to provide cold water for the data center cooling system.63 Providing data centers with onsite renewable energy whose marginal cost is nearly free is going to dramatically reduce the cost of electricity in the powering of a global Internet of Things, getting us ever closer to nearly free electricity in organizing economic activity.

Reducing the cost of electricity in the management of data centers goes hand in hand with cutting the cost of storing data, an ever larger part of the data-management process. And the sheer volume of data is mushrooming faster than the capacity of hard drives to save it.

Researchers are just beginning to experiment with a new way of storing data that could eventually drop the marginal cost to near zero. In January 2013 scientists at the European Bioinformatics Institute in Cambridge, England, announced a revolutionary new method of storing massive electronic data by embedding it in synthetic DNA. Two researchers, Nick Goldman and Ewan Birney, took the text of five computer files—which included an MP3 recording of Martin Luther King Jr.’s “I Have a Dream” speech, a paper by James Watson and Francis Crick describing the structure of DNA, and all of Shakespeare’s sonnets and plays—and converted the ones and zeros of digital information into the letters that make up the alphabet of the DNA code. The code was then used to create strands of synthetic DNA. Machines read the DNA molecules and returned the encoded information.64

This innovative method opens up the possibility of virtually unlimited information storage. Harvard researcher George Church notes that the information currently stored in all the disk drives in the world could fit in a tiny bit of DNA the size of the palm of one’s hand. Researchers add that DNA information can be preserved for centuries, as long as it is kept in a dark, cool environment.65

At this early stage of development, the cost of reading the code is high and the time it takes to decode information is substantial. Researchers, however, are reasonably confident that an exponential rate of change in bioinformatics will drive the marginal cost to near zero over the next several decades.

A NEAR ZERO MARGINAL COST communication/energy infrastructure for the Collaborative Age is now within sight. The technology needed to make it happen is already being deployed. At present, it’s all about scaling up and building out. When we compare the increasing expenses of maintaining an old Second Industrial Revolution communication/energy matrix of centralized telecommunications and centralized fossil fuel energy generation, whose costs are rising with each passing day, with a Third Industrial Revolution communication/energy matrix whose costs are dramatically shrinking, it’s clear that the future lies with the latter. Internet communication is already being generated and shared at near zero marginal cost and so too is solar and wind power for millions of early adopters.

The stalwart supporters of fossil fuels argue that tar sands and shale gas are readily available, making it unnecessary to scale up renewable energies, at least in the short term. But it’s only because crude oil reserves are dwindling, forcing a rise in price on global markets, that these other more costly fossil fuels are even being introduced. Extracting oil from sand and rock is an expensive undertaking when compared to the cost of drilling a hole and letting crude oil gush up from under the ground. Tar sands are not even commercially viable when crude oil prices dip below $80 per barrel, and recall that just a few years ago, $80-per-barrel oil was considered prohibitively expensive. As for shale gas, while prices are currently low, troubling new reports from the field suggest that the promise of shale gas independence has been overhyped by the financial markets and the energy industry. Industry analysts are voicing growing concern that the shale gas rush, like the gold rushes of the nineteenth century, is already creating a dangerous bubble, with potentially damaging consequences for the American economy because too much investment has moved too quickly into shale gas fields.66

Andy Hall, an oil trader known in the sector as “God,” owing to his remarkably accurate trend forecasts on oil futures, shook up the industry in May 2013 with his declaration that shale gas will only “temporarily” boost energy production. Hall informed investors in his $4.5 billion Astenbeck hedge fund that, although shale gas gushes at first, production rapidly declines because each well only taps a single pool of oil in a large reservoir. The quick exhaustion of existing shale gas reservoirs requires producers to continuously find new shale gas deposits and dig new wells, which jack up the cost of production. The result, says Hall, is that it is “impossible to maintain production . . . without constant new wells being drilled [which would] require high oil prices.” Hall believes that shale gas euphoria will be a short-lived phenomenon.67 The International Energy Agency (IEA) agrees. In its annual 2013 World Energy Outlook report, the IEA forecast that “light tight oil,” a popular term for shale gas, will peak around 2020 and then plateau, with production falling by the mid-2020s. The U.S. shale gas outlook is even more bearish. The U.S. Energy Department’s Energy Information Administration expects higher shale gas levels to continue only to the late teens (another five years or so) and then slow.68

What hasn’t yet sunk in is that fossil fuel energies are never going to approach zero marginal cost, or even come close. Renewable energies, however, are already at near zero marginal cost for millions of early adopters. Scaling them so that everyone on Earth can produce green energy and share it across the Internet of Things, again, at near zero marginal cost, is the next great task for a civilization transitioning from a capitalist market to a Collaborative Commons.

