The Fourth Industrial Revolution
Technology will change the world so dramatically in the coming years that economists expect a Fourth Industrial Revolution. From breakthroughs in nanotechnology to genetic engineering, it seems the question is no longer if these technologies will see widespread adoption but when. However, even among such impressive company, one field stands out. In the past decade Artificial Intelligence (AI) has advanced by leaps and bounds. Already, products you may use on a daily basis, from Google searches to Netflix recommendations, utilize these newly developed capabilities. But such data analysis tasks are merely a breakout role; there is much more to come. Most of it will be good, but as in any period of great change there will be growing pains. Such a warning may conjure up images from the silver screen of machines like HAL 9000 or the Terminator turning on their masters. The truth of the situation is much less spectacular, but no less worthy of our attention. The real risk is to those swept away in the wake of rapid social and economic change.
Lessons from the Past
What is an “industrial revolution,” really? The phrase sounds like it must be hyperbole, given that we usually reserve the word “revolution” for violent social upheavals. Of course, industrial revolutions are kicked off by new technologies that change or create industries (and often destroy others). But such events happen regularly as economies grow. What sets industrial revolutions apart is that the reach of the change quickly extends beyond a particular industry and causes long-term shifts in society as a whole. To understand exactly why and how that might happen, let’s consider the First Industrial Revolution, the Second Industrial Revolution (a.k.a. the Technological Revolution), and the Third Industrial Revolution (a.k.a. the Digital Revolution).
In the 18th century the First Industrial Revolution was spurred by the development of machine tools, factories, chemical manufacturing, steam power and many other innovations. This marked the first time in history that there was sustained exponential economic growth. Yet at the same time, the Industrial Revolution saw millions working and living in squalor. By the mid-19th century the United States had emerged as an industrialized power, and the pace of innovation from the First Industrial Revolution was slowing. Then, in the 1870s, a second burst of innovation brought renewed growth. Electric power, the internal combustion engine, the petroleum industry and the telephone were but a few of the defining forces of what came to be known as the Second Industrial Revolution. Yet for all that progress, this period is remembered more for robber barons than for improved standards of living. Indeed, inequality skyrocketed as the government consistently ruled against those who challenged the system.
“…the costs of this indifference to the victims of capital were high. For millions, living and working conditions were poor, and the hope of escaping from a lifetime of poverty slight. As late as the year 1900, the United States had the highest job-related fatality rate of any industrialized nation in the world. Most industrial workers still worked a 10-hour day (12 hours in the steel industry), yet earned from 20 to 40 percent less than the minimum deemed necessary for a decent life. The situation was only worse for children, whose numbers in the work force doubled between 1870 and 1900.”1
It was not until the early 20th century that reform efforts began to take hold in the United States. The tide of public opinion increasingly turned against the robber barons as strikes ended in bloodshed and writers cast light on social injustices. The Jungle by Upton Sinclair exposed unsanitary working conditions in Chicago meatpacking plants. Lincoln Steffens called the public to combat corruption in his series of articles entitled The Shame of the Cities. It was around this time of Progressive reforms that the term middle class was first given the meaning it holds today—a group of professional or managerial workers between the upper class and the low-wage working class.2 The middle class began as a small minority but would come to be the single largest economic class in the United States as the country emerged from World War II.
It was against the post-war backdrop of suburban middle-class life that in 1947 John Bardeen, Walter Brattain and William Shockley of AT&T’s Bell Labs invented the first solid-state transistor. This invention is often considered the beginning of the Third Industrial Revolution, as it ultimately made modern computing possible. However, in the 20 years following the invention, technical progress was steady but its impact modest. The high costs of early computer systems limited their adoption to well-funded government projects and large corporations. The Third Industrial Revolution began in earnest in the 1970s with the introduction of personal computers, which were smaller and more affordable than anything that had come before. The following decades would see the digitization of music, the first mobile phones and digital cameras, the proliferation of robotic arms in manufacturing, and in 1991 the public debut of the internet. But this progress came at some cost. Many industries were eroded with incredible speed as digital innovations displaced jobs. Travel websites replaced human travel agents, print newspapers struggled to reach readers in a world of digital media, and encyclopedia salesmen were wiped out by free online information.
A few key themes arise from the history of industrial revolutions. Firstly, many industries simultaneously experience unusually strong creative destruction, the “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.”3 Secondly, there is a lasting shift in the demographics of the affected population: levels of education, regional economic opportunity, and the skill level and number of jobs in the economy all shift. Thirdly, inequality often increases as the gap widens between those with the skills and education to adapt and those who struggle to do so. Lastly, although there are hardships during the transition, the technologies introduced ultimately improve the population’s standard of living.
The Rise of Machine Learning
Why would AI, a field that has existed for over 60 years, suddenly trigger changes on the scale of an industrial revolution? If you are old enough to remember the 1980s, you may recall a general excitement about the potential of AI. These high expectations existed not only in business and government circles but also in the entertainment industry, with films such as Tron (1982), WarGames (1983) and The Terminator (1984). The field of Artificial Intelligence was booming, and there had been breakthroughs in the creation of “expert systems.” The goal was to translate the judgment of a domain expert into a series of logical rules that could be executed by a computer. While these approaches saw early success, each expert system was limited to the extremely specific set of knowledge it had been programmed with, and writing and maintaining those specialized programs took tremendous amounts of skill and resources. By the late 1980s expert systems had not lived up to expectations, interest waned, and funding dried up.
In the past decade a subfield of AI called Machine Learning (ML) has overcome the problems that made expert systems impractical. Put simply, ML attempts to get a computer to understand something not by explicitly programming it to do so, but by allowing it to observe examples from which it can form generalizations, or models. For example, Google Image SafeSearch distinguishes between normal images and obscene ones. No human programmed Google with a set of rules to test if an image is obscene; in fact, it might not be possible for a human to do so at all.
In Jacobellis v. Ohio, a case concerning whether a film shown at the Heights Arts Theatre in Cleveland Heights, Ohio was art or pornography, the United States Supreme Court abandoned trying to give a legal definition of obscenity. Justice Potter Stewart wrote:
“I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [pornography], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”
This kind of “I know it when I see it” approach is how SafeSearch was trained. It learned to distinguish between obscene and normal images by running hundreds of thousands of each type of image through a machine learning algorithm. The result is a learned model of what images probably contain obscene content. Clearly, an algorithm that can tackle a problem that a Supreme Court justice sidestepped is something special. But what makes ML revolutionary is that the learning proceeded on its own; it was not the result of meticulous effort by some sort of domain expert. The capabilities provided by Machine Learning are very general; with different training data the same approach could learn to classify cats vs. dogs or discriminate between the faces of different individuals. In fact, ML is not limited to images, nor even to pre-recorded data. ML algorithms merely require some set of observations. For example, ML algorithms have learned to play video games in real time. They begin with what appear to be random moves, but gradually learn from their mistakes and come to avoid the situations that have caused them to lose in the past. A more detailed explanation of some of the algorithms used in ML, along with specific examples, can be found in this article from the Economist.
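The idea of learning from labeled examples rather than hand-written rules can be sketched in a few lines of Python. This is only a toy illustration, not Google's actual method: a nearest-neighbor classifier over made-up two-number "feature vectors," where real systems would use millions of images and far richer features. Nobody writes a rule for what makes an example "obscene"; the program simply sides with whichever training example a new input most resembles.

```python
# Toy "learning from examples": classify a new point by the label of
# its nearest labeled training example (1-nearest-neighbor).
def classify(point, examples):
    """Return the label of the training example closest to `point`."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Pick the (features, label) pair with the smallest distance.
    return min(examples, key=lambda ex: dist(point, ex[0]))[1]

# Hypothetical labeled training data: (feature vector, label) pairs.
training = [
    ((0.1, 0.2), "normal"),
    ((0.2, 0.1), "normal"),
    ((0.9, 0.8), "obscene"),
    ((0.8, 0.9), "obscene"),
]

# No rule for "obscene" was ever written; the answer is generalized
# from the examples alone.
print(classify((0.15, 0.15), training))  # → normal
print(classify((0.85, 0.85), training))  # → obscene
```

Swap in a different set of labeled examples (cats vs. dogs, faces, anything) and the very same code "learns" a different task, which is the generality the paragraph above describes.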
Changes We Can Expect
ML makes it possible for computers to complete tasks that no human-written program could achieve. Such capabilities spurred Dr. Mike Osborne, co-lead of the Machine Learning Research Group at Oxford, to reconsider traditional models of how technology affects employment. Fittingly, he used techniques from ML to find an answer. He analyzed over 700 job descriptions from the O*NET database and gave each a score in three “bottlenecks”: types of tasks that are difficult to automate even with recent advancements in ML. The first bottleneck, “perception and manipulation,” concerns the fact that computers are bad at understanding the large variety of objects we see on a daily basis, a shortcoming that limits their ability to properly grasp and move objects. The second bottleneck is creativity: it is very difficult to write a program that can think outside the box and come up with novel, unexpected ideas. The final bottleneck is social intelligence, meaning capabilities like understanding human emotions, responding intelligently in conversation, and grasping the subtext of social situations.
Dr. Osborne’s study found that 47% of jobs in the United States are at risk of being automated in the next couple of decades. To be clear, “at risk” means that the technology necessary to automate those jobs could be built; the study does not consider whether it would be financially practical to do so, nor does it account for regulatory roadblocks or liability concerns. Even so, the result is startling. The study was published in 2013. At that time only a few professions were clearly threatened; for example, there was speculation that many truck drivers would be phased out in favor of autonomous vehicles. Other predictions from the study seemed less believable, such as the idea that wait staff at restaurants might be replaced. And yet in the intervening three years Dr. Osborne has been vindicated, as major restaurant chains have adopted systems like Ziosk, which removes the need for a human to take orders or accept payments. Other at-risk jobs include transportation and logistics workers, office and administrative support, labor in production occupations, many service and sales workers, construction workers, and paralegals and legal assistants. What these and other at-risk jobs have in common is that they do not contain a high proportion of tasks requiring creativity, social intelligence, or difficult perception and manipulation.4
Another takeaway is that jobs requiring less skill and education tend to be easier to automate. This may mean the loss of many low-skill, low-wage jobs. Creative destruction will create some new jobs, but they are unlikely to be suitable for displaced low-skill workers. As we observed in past revolutions, this will tend to increase inequality in the short term. Of course, in the long term people will always adapt and the income gap will narrow, right? Thomas Piketty might disagree. In his book Capital in the Twenty-First Century he argues that in order to have the kind of inclusive capitalist society that existed in America over the past half century, there must be continuous creative destruction and economic growth. Such growth lessens the importance of old money by constantly creating new generations of innovators and entrepreneurs who overtake the old guard. But in a world where the return on capital outpaces economic growth, the old guard will never be outshone. This is a troubling thought considering that the changes we see today will only give further advantages to capital; specifically, computer capital can increasingly replace labor. Economists like Erik Brynjolfsson and Andrew McAfee argue that such a situation has already existed for some time. In the past, increased productivity brought increased compensation and increased employment. But recently these trends have become decoupled, suggesting that while businesses are benefiting, workers see minimal improvements to their situations.
The drive to make things better is a fundamental part of the human experience. We are often asked to focus this drive very narrowly on the task at hand. Such focus is useful, but we must not lose sight of the greater arc of our actions. Much of this essay has been a discussion of risks, negative trends, and possible adverse outcomes. To be clear, I am an optimist. History tells us that innovation inevitably brings with it an increased standard of living. For example, the majority of people would rather be middle class today than wildly wealthy in the year 1900.5 I am hopeful that technologies like robotics, artificial intelligence and machine learning will free us from mundane and fundamentally unfulfilling work to focus on the types of creative, social, and skillful activities we love. My warnings are not meant to alarm; they are a call to action. We are in this together. There are solutions to these growing pains in education, in fiscal and monetary policy, and in the modernization of aging government institutions. I can’t say for certain which changes are best, but by being proactive we can get ahead of these problems. The question is not what circumstances we will find ourselves in, but what circumstances we will put ourselves in.
P.S. This blog is named 4thRev.com in reference to the Fourth Industrial Revolution. As a roboticist, I feel a particular responsibility to engage with issues of education, equality, technology and the economy. The logo is derived from a visual pun: a solid is formed by the revolution of the number four around a vertical axis, a slice is cut from the solid, and the 2D logo is taken from a view of the exposed edges.