In a recent article, Paul Krugman pointed out the fallacies in the widely held belief that more education for all will lead to better jobs, lower unemployment and reduced inequality in the economy. The underlying thesis in Krugman's argument (drawn from Autor, Levy and Murnane) is fairly straightforward and compelling: advances in computerisation do not increase the demand for all "skilled" labour. Instead, they reduce the demand for routine tasks, including many tasks that we currently perceive as skilled and that require significant formal education for a human being to carry out effectively.

This post is my take on what advances in technology, in particular artificial intelligence, imply for the nature of employment and education in our economy. In a nutshell, advances in artificial intelligence and robotics mean that the type of education and employment that has been dominant throughout the past century is now almost obsolete. The routine jobs of 20th century manufacturing and services that were so amenable to creating mass employment are increasingly a thing of the past. This does not imply that college education is irrelevant. But it does imply that our current educational system, which is geared towards imparting routine and systematic skills and knowledge, needs a radical overhaul.

As Autor et al note, routine human tasks have gradually been replaced by machinery and technology since at least the advent of the Industrial Revolution. What has changed in the last twenty years with the advent of computerisation is that the sphere of human activities that can be replaced by technology has broadened significantly. But there are still some significant holes. The skills that Autor et al identify as complementary to, rather than substitutable by, computerisation are those that have proved most challenging for AI scientists to replicate. The inability to automate many tasks that require human sensory and motor skills is an example of what AI researchers call Moravec's paradox. Hans Moravec identified that it is much easier to engineer apparently complex computational tasks such as the ability to play chess than it is to engineer the sensorimotor ability of a one-year-old child. In a sense, computers find it hard to mimic some of our animalistic skills and relatively easy to mimic many of the abilities that we have long thought of as separating us from other animals. Moravec's paradox explains why many manual jobs such as driving a car have so far resisted automation. At the same time, AI has also found it hard to engineer the ability to perform some key non-routine cognitive tasks, such as the ability to generate creative and novel solutions under conditions of significant irreducible uncertainty.

One of the popular misconceptions about the limits of AI/technology is the notion that the engineered alternative must mimic the human skillset completely in order to replace it. In many tasks the human method may not be the only way or even the best way to achieve the task. For example, the Roomba, built on the subsumption architecture, does not need to operate like a human being to get the job done. Similarly, a chess program can compete with a human player even though the brute-force method of the computer has very little in common with the pattern-recognising, intuitive method of the grandmaster. Moreover, automating and replacing human intervention frequently involves a redesign of the operating environment in which the task is performed to reduce uncertainty, so that the underlying task can be transformed into a routine and automatable one. Herbert Simon identified this long ago when he noted: "If we want an organism or mechanism to behave effectively in a complex and changing environment, we can design into it adaptive mechanisms that allow it to respond flexibly to the demands the environment places on it. Alternatively, we can try to simplify and stabilize the environment. We can adapt organism to environment or environment to organism". To hazard a guess, the advent of the "car that drives itself" will probably involve a significant redesign of the rules and physical layout of our roads.
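The contrast between the computer's brute-force method and the grandmaster's intuition can be made concrete with a toy sketch. The game below (take 1 or 2 stones from a pile; whoever takes the last stone wins) is a hypothetical stand-in for chess, chosen only so the full game tree fits in a few lines: the program mechanically enumerates every continuation rather than "seeing" the right move.

```python
# A minimal brute-force game search (minimax). The game is illustrative:
# players alternately remove 1 or 2 stones from a pile; the player who
# takes the last stone wins. Scores are from the first player's view.

def best_move(pile, maximizing=True):
    """Exhaustively search the game tree; return (score, move)."""
    if pile == 0:
        # The previous player took the last stone, so the side to move lost.
        return (-1 if maximizing else 1), None
    best = None
    for move in (1, 2):
        if move > pile:
            continue
        score, _ = best_move(pile - move, not maximizing)
        if best is None \
           or (maximizing and score > best[0]) \
           or (not maximizing and score < best[0]):
            best = (score, move)
    return best

# With 4 stones, taking 1 leaves the opponent a losing 3-stone pile.
score, move = best_move(4)  # → (1, 1)
```

The point of the sketch is that nothing here resembles human pattern recognition: the "skill" is exhaustive enumeration, which is why automating the task did not require replicating the grandmaster's method.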

This redesign of the work environment to reduce uncertainty lies at the heart of the Taylorist/Fordist logic that brought us the assembly-line production system and has now been applied to many white-collar office jobs. Of course this uncertainty is not eliminated. As Richard Langlois notes, it is "pushed up the hierarchy to be dealt with by adaptable and less-specialized humans" or, in many cases, it can even be pushed out of the organisation itself. Either way, what is indisputable is that for the vast majority of employees, whether on an assembly line at Foxconn or in a call center in India, the job content is strictly codified and routine. Ironically, the very process of transforming a job into one amenable to mass employment makes that job much more likely to be automated in the future, as the sphere of activities thwarted by Moravec's paradox shrinks. For example, we may prefer competent customer service from our bank but have long since reconciled ourselves to sub-standard customer service as the price we pay for cheap banking. Once we have replaced the "tacit knowledge" of the "expert" customer service agent with an inexperienced agent who needs to be provided with clear rules, we are that much closer to replacing the agent in the process altogether.
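The last step of that argument can be sketched in code, under assumed rules: once a service interaction has been reduced to a fixed script, the script itself is already a program, and the human reading it aloud is replaceable by a few lines of lookup logic. The intents and responses below are hypothetical examples, not any real bank's script.

```python
# A codified customer-service script as a lookup table. The keywords and
# responses are invented for illustration.
SCRIPT = {
    "lost": "We have blocked your card. A replacement will be mailed.",
    "balance": "Your balance is available in the mobile app under Accounts.",
    "password": "A reset link has been sent to your registered email.",
}

def handle(request):
    """Route a request exactly as a scripted, novice agent would."""
    for keyword, response in SCRIPT.items():
        if keyword in request.lower():
            return response
    # The irreducible uncertainty is pushed up the hierarchy, as Langlois notes.
    return "Escalating to a human supervisor."
```

Everything the scripted agent does is captured by the table; only the escalation branch, the non-routine residue, still needs a human.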

The implication of my long-winded argument is that even Moravec's paradox will not shield otherwise close-to-routine activities from automation in the long run. That leaves us with employment opportunities necessarily being concentrated in significantly non-routine tasks (cognitive or otherwise) that are hard to replicate effectively through computational means. It is easy to understand why the generation of novel and creative solutions is difficult to replicate in a systematic manner, but this is not the only class of activities that falls under this umbrella. Also relevant are many activities that require what Hubert and Stuart Dreyfus call expert know-how. In their study of skill acquisition and training, which was to form the basis of their influential critique of AI, they note that as one moves from being a novice at an activity to being an expert, the role of rules and algorithms in guiding our actions diminishes and is replaced by an intuitive, tacit understanding. As Hubert Dreyfus notes, "a chess grandmaster not only sees the issues in a position almost immediately, but the right response just pops into his or her head."

The irony of course is that the Taylorist logic of the last century has been focused precisely on eliminating the need for such expert know-how, in the process driving our educational system to de-emphasise it. What we need is not so much more education as a radically different kind of education. Frank Levy himself made this very point in an article a few years ago, but the need to overhaul our industrial-age education system has been most eloquently championed by Sir Ken Robinson [1,2]. To say that our educational system needs to focus on "creativity" is not to claim that we all need to become artists and scientists. Creativity here is defined simply as the ability to explore effectively rather than follow an algorithmic routine, a capacity that many of our current methods of "teaching" are not set up to cultivate. It applies as much to the intuitive, unpredictable nature of biomedical research detailed by James Austin as it does to the job of an expert motorcycle mechanic that Matthew Crawford describes so eloquently. The need to move beyond a simple, algorithmic level of expertise is driven not by sentiment but increasingly by necessity, as the scope of tasks that can be performed by AI agents expands. A corollary of this line of thought is that jobs that can provide "mass" employment will likely be increasingly hard to find. This does not mean that full employment is impossible, simply that any job that is routine enough to employ a large number of people doing a very similar role is likely to be automated sooner or later.



Wednesday links: routine jobs Abnormal Returns

[...] Ashwin Parameswaran, “The routine jobs of 20th century manufacturing and services that were so amenable to creating mass employment are increasingly a thing of the past.”  (Macroeconomic Resilience) [...]


What about tech support, or roles that require critical thinking? Surely these types of jobs are widespread, but can a computer troubleshoot new problems like an experienced tech support agent can?


Chris - I think we're a long way away from computers replacing "expert" experienced tech support agents. But we've already replaced so many experienced service agents with less experienced novice agents, a trend that many customers have accepted due to the reduced cost of doing so. And a computer is not that far away from being able to replace an average tech support agent - both follow rules and algorithms that can be easily codified. If I had to hazard a guess, you'd have free/cheap automated support combined with expensive support from experienced agents if needed. Either way, the current model of a large workforce of service workers working off a predetermined, strictly codified script is unlikely to survive much longer.


Ashwin, take a look at Peter Voss's company a2i2. Voss's goal is human-level+ AGI, but he is using his computational engine to power automated call centers to raise revenue for his research. As you noted in your blog posts, basic call center operators are the types of jobs that are routine enough to be handled by clever narrow AI. Regarding the use of AI in non-routine tasks, you should take a look at the work of Monica Anderson, the founder of Syntience, and her AI paradigm of artificial intuition.


Ptolemaios - Thanks for the comment and links. Monica Anderson's work sounds interesting.

More Education Might Not Help Unemployment and Future Jobs « Working for Liberty

[...] Here. [...]

Bruce Wilder

Another outstanding post, although I shouldn't be surprised.

To get an assembly-line factory to work as a complete system required a lot of humans to fill in for machines which had not been invented yet. It created a paradigm, which the advance of computing tech in our day has simply extended, permitting digital models to replace analog schemes. And that paradigm calls for using people as substitutes for machines in creating a technically efficient system, and then improving the efficiency of the system by replacing the machine-substitute humans. That role for human workers, of filling in for a machine which had not been invented, is apparently fascinating to humans looking for the meaning of it all. But the function was clearly just a stop-gap, to get a system of many parts up and running, with the full intention of replacing those humans playing a machine's role with . . . machines.

As for the "mass employment" role: the archetypal mass-employment role wasn't the machine-substitute, but the less glorified machine-tender, the bottom rung on the human hierarchy of system engineers, designers and architects. If variation is to be kept within bounds, someone has to be there to observe and evaluate as the machine's own entropy takes it out of bounds, and fix the problem; someone else has to be able to figure out what those bounds should be; and someone above all of them has to understand why those bounds matter in the overall scheme.

And there is an overall scheme. The thing about Fordist production is that it required Ford, because there was, above all, a system. Ford's genius was seeing that the reduction in variation of standardized parts made possible a closely-coordinated system. He didn't see -- at least not immediately -- that the organization he created would itself have to learn and adapt. Redesigning the auto factory to accommodate regular changes in the design of the product was a very big deal. To a large extent, that's why G.M. is as big as Ford. And recognizing that the auto workers were machine-tenders, whose primary role should be monitoring "quality" and diagnosing and fixing the problem when the bounds on variation were exceeded in detail, was another very big deal. The machine-substitute is following algorithms; the machine-tender is monitoring whether the algorithms being followed are having the desired results, and intervening either when the rules are broken or when the rules are being followed but not having the desired results. It is why Toyota is as big as GM and Ford.

Your pessimism about customer service representatives is telling. I was reminded of the excellent essays by the late Tanta, at Calculated Risk, about how loan officers work, and how the rules and bounds that govern their activity were eroded by securitization and the growth of corrupt organizations like Countrywide and Golden West and WaMu. We have these vast systems, and when a system crashes, as mortgage-securitization has crashed, there's a lot of work to do.

I think it is easy to see that systems need architects. Taylorist and Fordist schemes required Taylor and Ford, no? But they also need machine-tenders and machine-designers, in abundance. The machines have to be minded and re-invented, with great regularity.


Bruce - Thanks for that excellent comment. I can't disagree with any of it. As you say, we need machine tenders and designers. I think even tenders may get automated with some of the more nuanced approaches in AI, but we're a long way away from automating the job of a designer. My concern is that our educational system is still geared to churn out algorithm implementers rather than designers. The other point is how automation has reduced the quality of service in so many instances by substituting the human expert with a cheap algorithm-following novice, and now we're rapidly replacing these novices with machines. At least in the short run, it's not clear that we can make this transition without some economic pain, unemployment etc.


Okay, it's been explained that machines will still need us humans for a while longer, but no one has addressed why we started out at a dead run in the first place. The algorithms needed for the first automated machines were not the simple call-center routines of today's answering machines! Auto-industry welders and painters were what used the programming back in the day, not answering services... We've been working backward and not taking baby steps like we should have, because big money, the auto industry, had the capital to invest in systems that required the programming, not your personal answering machine!

Artificial intelligence will start making its biggest advances when someone sits down and takes the time to teach a computer to do the first things we learned how to do: walk and talk... Then let it get out and explore its surrounding environment on its own, while letting it write its own routines (store to memory) on what does what and why, and how things react to external forces (running into things), etc. Programming curiosity into the machine would be a good start after that, so that it would be curious enough to go out and explore, but it would need the capacity to learn and store everything possible along the way, or why bother?!

I don't know what it takes to write any of the programming I've mentioned, but I can guess it's quite a bit, and a program that can record its own experiences would be even more massive. Storage capacity at its current levels probably doesn't come close to supporting all of this, so we are limited by that. Once storage capacity has reached the levels of the human brain, maybe we'll be able to build machines with human-like traits and abilities, but we'll still need a more thorough understanding of how and why animals and humans learn the way they do...

When was the last time you saw or heard of a monkey using a stove? It might know that fire can burn you if it's been exposed to it in its habitat, but would the monkey know that a stove is a controlled situation for the use of fire? If you turned on a burner, how would it react? How do you program that into a machine? I'd guess pretty much the same way both the monkey and we have learned about fire, including the differences in the limits of our knowledge, and someday the machine, if given enough processing and reasoning power along with storage room, may be teaching us some new uses for fire! It'll come down to teaching and learning, and we seem to have a problem doing that for ourselves right now... Maybe we should be looking at how to teach ourselves to learn better, or maybe learn how to become better teachers!?... What came first, the chicken or the egg? What we need in the field of AI is an egghead of sorts...


Regarding implications for politics, see also: Yannick Rumpala, Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks, Technology in Society, Volume 34, Issue 1, 2012, Older version available at: