Revisiting the Jobs Artificial Intelligence Will Create

This audio conversation digs into research on the emerging job categories spurred by AI and what leaders should be thinking about next.

MIT SMR editor in chief Paul Michelman recently spoke with authors Paul Daugherty and H. James Wilson to discuss the research that went into their 2017 article “The Jobs That Artificial Intelligence Will Create” and to find out what new developments in this space have their attention.

Key takeaways from the discussion include:

Soft skills will be crucial for collaborating with smart machines.

A major focus of the Daugherty and Wilson article is what kinds of skills will enable job transformation in the age of AI. Surprisingly, they’ve found that STEM skills, while important, are certainly not a prerequisite for transformation. In fact, their research shows that four soft skills are becoming much more valuable as human-machine collaboration advances. These skills are complex reasoning, creativity, social and emotional intelligence, and sensory perception.

Training for AI will require shifts in learning.

Humans have never trained for many of the new jobs that AI will create — so the question becomes, how do companies develop their workforce? Daugherty and Wilson cite the importance of experiential learning through hands-on apprenticeships and using technology to advance skills in new categories of jobs. In addition, the burden for skilling and reskilling must shift from the individual employee to the collective institution or organization. If learning is going to become a core competency for global businesses, then organizations will need to step up to help build learning platforms for employees.

Responsible AI marks a new frontier.

Daugherty and Wilson point out that responsible AI, which was hinted at in their original research and spelled out in more detail in their recent article “Using Artificial Intelligence to Promote Diversity,” has become a much more important question. Thinking about bias, transparency, and accountability will be crucial in the future development of AI systems. As the authors describe, “These are going to be really important issues for businesses and organizations to grasp and to make sure that as we have increasing numbers of people working in AI and more powerful solutions delivered with the AI, how do we make sure we deliver the right outcomes in all cases?”

Transcript

Paul Michelman: Whether we cheer it or fear it, artificial intelligence has landed hard on the organizational doorstep, and its arrival as a business technology has generated a whole lot of noise. There’s a raging debate crossing the public and private sectors on whether robots, algorithms, and other increasingly intelligent technologies are more likely to be a boon or spell doom for the world of work.

It’s easy to get lost amidst all the uncertainty and speculation, but when we do, we may fail to see what’s happening right in front of us right now. AI is already creating new forms of employment. In fact, researchers at Accenture have identified several new categories of jobs spurred by AI. This research is featured in the MIT Sloan Management Review article “The Jobs That Artificial Intelligence Will Create.” I’m joined by authors H. James Wilson and Paul Daugherty for a look at the findings from their first round of research and what they have learned since about the new roles that AI is creating in the organization. Jim and Paul, welcome, and thanks for taking the time to talk about the work you and your colleagues are doing to help us understand AI’s impact on employment.

Paul Daugherty: It’s great to be here today with you, Paul. This is Paul Daugherty, and I’m looking forward to the conversation.

James Wilson: Great to be here, Paul.

Paul Michelman: I think we agree that we should all be concerned about the high risk for job displacement through intelligent automation. It’s a real threat — not just on the factory floor. But you note that that is not the full picture. AI is also creating fully new types of jobs. So first, help get us situated — will you introduce us to your findings at a high level?

Paul Daugherty: Yeah, we started this about two-and-a-half years ago when Jim and I were looking at the advance of AI and the current state of a lot of the discussions around AI. And we became concerned about the type of dialogue that was happening. As you say, there certainly is a massive impact on the way work is done, brought on by AI. But in our early experience, we saw a lot of promise for AI to change jobs and create jobs and make more human jobs — or make jobs more human — in many steps. So Jim and I launched this research project to look at 1,500 organizations and how they were using AI and how it was impacting their business, their workforce, and the things that they did in the company. And the finding was that contrary to what a lot of people think, we believe AI will create a lot of new jobs. It will certainly eliminate some jobs, but we believe that the net effect will be creating a lot of jobs — and jobs that are good jobs that leverage our human capability in different ways. Broadly speaking, we came up with three categories of jobs that we call trainer, explainer, and sustainer — three categories of new jobs where we’re using our human capability in different ways to allow AI to have a positive impact on the way we work, the way we live, and overall a positive impact on outcomes.

Paul Michelman: Thanks, Paul. Let’s walk through each of these categories, beginning with trainer.

James Wilson: So we initially did that research of about 1,500 companies, and we didn’t initially see these three job categories, but when we started to dig down into the research, when we started to do follow-up case studies, that’s where we really started to see these jobs surface — managers that we were interviewing were talking about writing fundamentally new job descriptions. And we actually saw recurring job titles [and] job categories that they were writing for. One of those job categories is the trainer role. And these are the people that are quite often doing the data science. They’re doing the machine learning engineering. They’re the ones that are actively building the AI systems. One of the things that we see is that even within the same company, there can be a lot of variety within a particular job category, like a trainer job. So, for instance, Tesla: You can see that the carmaker is recruiting line managers with experience in robotics, and robot engineers and computer vision researchers, and deep learning scientists and machine learning systems experts. So really rich variety — even within that one trainer category within a single company.

Paul Michelman: And Jim, trainers are exclusively technology experts?

James Wilson: No, not necessarily. And we can talk some more about that. You know, it’s important to have functional experts on your team, as well. It might be that you have a person with a marketing background or an operations background on your team helping identify problems that the technical experts — for instance, the data scientists — will then go in and solve.

Paul Daugherty: And just to add on, one specific type of job we see here in the trainer category is the AI personality trainer — somebody who can behaviorally train the chatbots and intelligent virtual agents that so many companies are deploying right now. Companies deploy those solutions, often with voice-driven interaction, to engage with their consumers. What they’re realizing is that AI becomes the brand. And so you need to train it to behave in the right way, to operate the right way, to have the right answers, the right tone, etc. And that’s a nontechnical type of job that’s needed to shape that type of behavior and [to] work with the engineers to get that behavior implemented in the right way in the solutions.

Paul Michelman: Great. Let’s move on to the second category, which you label explainers.

Paul Daugherty: Explainers is one that I think is getting to the fact that AI is embedded in very complex systems and business processes. And so there’s an issue both of explaining AI itself and how it’s working, but more broadly explaining the kinds of outcomes that are being generated by the systems that are being developed. For example, if you think about a self-driving car, it has a lot of AI embedded in it, but there’s lots of other driving systems and things included. So when you think about autonomous vehicles and what’s happening, what we’re seeing is companies creating roles [for] people to understand the overall context of the system — the environmental conditions, the road conditions, lots of things in addition to the AI itself and how it was behaving, so that they can understand and tune the systems to operate more effectively…. Understanding that impact is the [type of job] we’re seeing in the explainer category.

James Wilson: In some cases, these explainer roles are actually being encouraged through regulation. So this year, by some estimates, there were about 75,000 new explainer roles created related to the GDPR’s right to [explanation]. And these are analysts in banks, for instance, and in customer service centers and that sort of thing, [who] answer customers’ questions about an algorithmic decision.

Paul Michelman: So are explainers always an interface between the organization and the public? Or are they also interfacing within parts of the organization?

James Wilson: They quite often are interfacing with parts of the organization as well. So, for instance, in health care we’re seeing a lot of early evidence that explainers are working with physicians in explaining why an AI system is making a particular recommendation and whether then the doctor can go on and make a medical recommendation to a patient as a result. They often are working in health care settings, making interpretations and sharing insights with medical professionals, not necessarily patients or customers.

Paul Michelman: So let’s move to the third category: sustainers.

Paul Daugherty: This is really speaking to the roles that are needed to manage AI (the use of AI) and to make sure that it not only behaves right at the outset, but it continues to behave properly to produce the desired outcomes over time, because the technology changes, the data changes, the situation changes, the business changes. And sustainer roles are people who really understand the outcomes that need to be driven to make sure that that outcome and that impact is sustained.

James Wilson: They also spend a good deal of their day thinking about unintended consequences from AI systems and how those end up being received by the public. So, for instance, surge pricing. Is a surge pricing model going to be something that is sustainable for a company? That was an issue, obviously, that some of the firms like Uber and Lyft had to deal with initially. How do you come up with a surge-pricing model that’s algorithm-driven but also is sustainable? Things like biased algorithms, discriminatory facial recognition systems — these are things that [the] first wave of trainers didn’t necessarily think about, but now sustainers think about whether these unanticipated, unintended consequences are something that can be managed. Or maybe they might even recommend that an AI system has to be taken out of operation until the company figures out how to get it right.

Paul Michelman: What’s an example of a title that a sustainer might have in the organization?

Paul Daugherty: I think sustainers can manifest themselves in a number of ways. We’re seeing this often as augmenting the team or the work that’s being done in different situations. For example, in manufacturing or factory types of situations where they’re using collaborative robots and different types of technology that need to be continually configured and rearranged to meet the dynamic needs of the supply chain and what they’re producing — sustainer roles in that sense would be the technician who’s reorganizing and managing the interface between the robots and the production process that’s being performed. So those are the types of roles that we see there.

James Wilson: You know, just driving up and down the streets of San Francisco, you’re going to pass a number of autonomous vehicles. But of course, sitting behind that robo-car is an AI safety trainer. And so you see a lot of those roles in autonomous vehicle situations. In general, any company that’s building robotic systems is going to be hiring these AI safety or AI compliance officers who really make sure, at a basic level, that the systems they’re deploying are safe for the public.

Paul Michelman: You did this research, originally, two years ago. And I guess in terms of the longevity of management ideas, two years is really not that long a period of time. But in the world of AI, we almost should be talking about dog years, I think — two years seems like a long time. So I’m wondering: When you’re looking at the market today, when you’re looking at employment trends today, would you stick to these three categories? Have they evolved? How has your thinking shifted, if at all?

Paul Daugherty: Yeah, I think there’s a little bit of both. I’ll talk first about what we see with the categories we identified. If you look at trainers, explainers, and sustainers, I think we see more evidence every day of how these roles are growing and increasing. For example, if you look at job postings, which we were researching a little while ago, you can find explainer in job titles now — Algorithm Explainability Engineer and Financial Services Explainability Specialist and things like that — the need to explain the algorithms and the AI. We’re seeing this accelerate, I think, as you said, in this dog-year type of fashion. We’ve also seen some compelling examples from some of the early entrants of why you need these roles. I think Facebook is an instructive story. What they’ve done, following all the focus on them around Cambridge Analytica, is they’ve created tens of thousands of new jobs to add humans in to manage the algorithms and produce the results that people really want, in a more responsible fashion. And those are sustainer jobs — it’s people added in. I think Facebook’s comment was along the lines of: We’ve concluded algorithms can’t manage the algorithms, we need people to manage the algorithms. And those aren’t isolated incidents. I think those are examples of the roles that all companies are going to need as they deploy the technology.

James Wilson: Yeah, our article focused on unprecedented new job categories where people are out there developing and responsibly managing AI systems. But while AI is certainly creating new jobs, it’s also changing old jobs by augmenting them. And we didn’t get into that much in that initial article. For example, at one bioscience company that we’ve been looking at — it’s based out here in the Bay Area — scientists use robotic lab equipment to help on certain experimental tasks. The robotic helpers precisely squirt liquids and they plate cells and they count microbe colonies in a way that augments and accelerates scientific work. And as a result of this robot augmentation, scientists are now able to complete about 400 times more experiments each week. So if you think about that, a scientist now has the potential to make a hundred years’ worth of scientific discovery in a single year through AI augmentation. But you know the lab scientist’s job content has really changed quite a bit. She now does things that are quite different from what she was doing before and has different ways of doing them. And we didn’t get into that topic as much. We were much more focused on the job creation, not the job content change.

Paul Michelman: When we’re looking at the three fully new categories of jobs, how equally and evenly distributed are these roles going to be? Are there particular industries or types of organizations for which these roles are going to emerge earlier? Are there other organizations that should take kind of a sit-back-and-wait approach?

James Wilson: I would make two points here. The first is that companies really need all three roles. For instance, a few years ago many of the most advanced AI firms — the major technology companies, for instance — focused exclusively on staffing AI trainers. But now they’re playing catch-up. So you really do need to have all three. But I think one insight here is that the AI talent war is quite a bit different and broader than a lot of people initially thought. My second point is that the distribution of the roles is going to vary quite a bit by industry and customer and regulatory context.

Paul Michelman: When we’re looking at these new categories, it would seem that one of the fundamental challenges organizations face is that these are jobs that no one has done before. No one has ever trained to be an AI trainer. How do we solve for that?

Paul Daugherty: That’s one of the biggest challenges that I think we have to face as we look at how we prepare people for these new roles and how businesses and organizations prepare for these new roles. We’ve done some follow-on research on this, and we think there are three things that we really need to focus on to get this right. One is focusing more on experiential learning. If you look at traditional training, the research shows that people forget 80% of what they learn within about a day of learning it through traditional training methods. So how do you get people engaged in the learning process in an experiential way? We think apprenticeships are very important — hands-on learning, learning injected at different points in the process. For example, we’ve done an interesting training and learning approach with a large aircraft manufacturer, where we used AI and mixed-reality technology to equip workers with a mixed-reality headset that helps them understand the job they’re doing and do higher-skilled jobs faster by providing them guidance along the way. And that’s an example of using technology plus experiential learning to advance people’s skills into these new categories.

A second thing we found is important is shifting the burden from just the person needing to learn to looking [at] the responsibility [that] different institutions — businesses, etc. — have for the training. One thing we firmly believe is that every organization needs to look at learning as a core competency in a really new and fresh way. And you need to think about learning platforms for lifelong learning as a core part of what you do. Because to your point, you can’t go hire people for some of these roles; you may need to build people to do them. For example, we worked with an oil company on a new drilling technology that uses visualization and AI and gaming engines to create a whole different way for a technician to operate a drill (oil drilling, operating miles underground). So where are you going to hire the gaming-engine, visualization-inspired driller? You’re not going to find people on the market with those skills. You’re going to have to take your current technicians and develop these new digital skills in them, which is why we believe that these learning platforms are going to be a critical component for companies. It’s going to be differentiating for those who can get it right.

And then finally, from an overall societal and multi-stakeholder perspective, we need to look at how we enable vulnerable people in the population [who] are already maybe separated by a digital divide — who don’t have the right baseline skills to operate in this environment — and do more to make sure that everybody’s got the base of skills that [they] need to participate in these jobs.

Paul Michelman: So this is really interesting. On the one hand, a focus on on-the-job learning, experiential learning, certainly promises or would seem to promise a shorter time frame and maybe more stickiness to get people trained up for these new roles. And yet, that’s still a major organizational undertaking — maybe not as great as relying on academia to fill the void, which will take decades — but still these jobs need to be done. They may not be fully at scale, but as you guys have noted, they’re very much real and happening right now. So as we look at these three categories, where should that first crop of people come from?

James Wilson: Well, I think one thing that we can do today is to make it easier for people to become trainers, explainers, and sustainers by basically lowering the barrier to building or improving an AI system — what Paul and I call AI democratization. We’re already beginning to see point-and-click AI training tools out there. And many of the cloud AI services providers, for instance, are quite easy to use. If you have a data set, you can just upload the data set to one of these services and then start playing around with the data. So I think the complement to what Paul was just talking about, which was raising the skill level, is also at the same time to lower the barrier to using these systems. I think that’s a really important thing. And it’s often an untapped opportunity, but we’re beginning to see more and more companies migrating toward that model as well.

Paul Michelman: In terms of global impact for these new categories of jobs in particular — and I realize this is going to be a difficult question to answer in particulars, so general trends would be fine — I think a lot of people would like some help in sizing the opportunity that your research suggests, especially as we think about potential job loss at the hands of AI, machine learning, and automation. Are the new jobs we’re discussing here a relative drop in the bucket, reserved for the highly specialized or well-trained few? What is this going to look like at scale?

Paul Daugherty: These jobs certainly are not a drop in the bucket, but you have to put them in context. We think these jobs will have a major impact going forward on employment and opportunity for people. That said, there will be a lot of disruption in the labor force, and there will be categories of jobs that are at risk for automation. But you have to look at the broad spectrum of how that’ll happen. From the research we’ve done, if you look across categories of jobs at the content of work, about 10% of work is human-only — work that only humans can do. About 35% of work is automatable by machines, algorithms, etc. And the rest of the work — which is the majority of it — is really augmentable, which means you can improve the way humans do it, but it’s largely going to need to be done by humans. And I think the context around these new jobs is [that] most jobs become transformed in different ways. So how do we use AI and other technology to transform the jobs and prepare people for those changed jobs? That’s a big impact, and I would say almost every job will change as a result of the technology. Many new jobs will be created, and some will be eliminated.

One good data point, having just come from a G7 meeting recently: Canada announced that, through the investments it’s making in AI, it expects a $16 billion increase in economic output. That’s significant output. They talk about 16,000 jobs being created through the focus on AI. And we see similar types of impacts and results around the world, and larger impacts in terms of GDP increase (economic output increase) by countries. And that’s where the opportunity is — in envisioning how we prepare people for these new types of jobs that will be created.

James Wilson: Yeah, just building on Paul’s point, I think you can get a good quantitative sense of the size of the opportunity by looking at business leaders’ investment expectations, especially around growth. In our research, for instance, we found that firms that invest in their AI workforce at the same rate as top-performing businesses in their sector are going to grow both revenues and their workforce.

Paul Daugherty: There’s another impact on jobs that I think we need to think about, which is the fact that it’s hard to anticipate where the new jobs are coming from and what the new jobs will look like. That’s why we try to be prescriptive and talk about trainers and explainers and sustainers. One historical observation I’d offer is that if you look back at prior technology waves we’ve had — 20 years ago, people wouldn’t have anticipated that we’d have large categories of people employed in things like search engine optimizers, web designers, eBay retail merchants, etc. In a similar fashion, we’re already seeing this creation of the new jobs going forward, and they’re the unanticipated, new things that we need to continue to be creative about and look for as time goes on.

Paul Michelman: So what’s next in your research?

James Wilson: In our research, we see that about 69% of executives believe that their industry is going to be completely transformed between now and 2022 as a result of AI. But we continue to try to understand not only the jobs that are going to be created, but also the skills that are going to help this transformation — that are going to enable this transformation. And I think this is an important area for our research. Paul’s already set it up very nicely. A lot of our findings thus far have been surprising to us. For instance, you might think that STEM skills are the be-all and end-all for the age of AI. But our research is showing that four distinctively soft skills are becoming much more valuable as we begin collaborating with smart machines and using smart machines: These are complex reasoning, creativity, social/emotional intelligence, and certain forms of sensory perception…. So interestingly, one thing that we’re tracking now is how skills are becoming softer. And what does that look like on an AI team?

Paul Daugherty: Yeah, and I think we’re going further on those human skills, because one question we get a lot is exactly that: “OK, Paul and Jim, we get you, we believe what you’re laying out here. What do I do tomorrow? What do I do next month to start preparing my people and my workforce?” Getting to that next level of specificity — the human skills and how we get people ready — I think is really important. There are a couple of other fronts we’ve launched. One is responsible AI, which we hinted at in the original article but which has really become more important: How do we make sure we get the right outcomes from AI? Speaking of things like transparency and explainability, which one of our job categories addresses; thinking about bias, which is an issue that many have run into when they apply AI — creating biased outcomes rather than inclusive outcomes; thinking about accountability; thinking about trustworthiness and issues like that. So we’re doing a lot of further work on that. In fact, we have a new article in MIT SMR on fairness and approaches to fairness with AI and some work we’ve done in that area. These are going to be really important issues for businesses and organizations to grasp and to make sure that as we have increasing numbers of people working in AI and more powerful solutions delivered with the AI, how do we make sure we deliver the right outcomes in all cases?

Paul Michelman: Terrific. Paul Daugherty, Jim Wilson, thank you both very much.

Paul Daugherty: Thank you, Paul.

James Wilson: Thank you, Paul.


SOURCE: MIT Sloan Management Review
