AI does not care about the color of your collar!

Oh, those AI nerds are at it again; this time they are armed with Deep Learning and are professing that the end is nigh. CGP Grey has been saying this for a while in “Humans Need Not Apply”, essentially insinuating that up to 45% of jobs will be lost because people will simply not be employable. The Economist seems to be backing up the techies with an Oxford study that looked at 702 occupations and found 47% of US jobs susceptible to automation (the comparable figures were 35% for the UK and 49% for Japan). And doesn’t the stagnation of median wages, driven by automation and offshoring, already suggest that the techies are right?

Yet looking at the long-term picture, technology has always resulted in more jobs. Even the ATM, which was supposed to end the branch teller’s job, created more teller jobs because banks opened more branches, even though tellers per branch dropped from about 20 (in 1988) to 13 (in 2004). But there was also a negative impact: the job became more routine, requiring less training. Today the branch teller is arguably a lower-skilled job with almost 50% turnover.

The techies also use Instagram (13 employees) killing Kodak (145,000 employees) as an example. However, it was not Instagram that killed Kodak; it was the smartphone, and the smartphone market employs more people than Kodak ever did.

So who is right? And, more importantly, which roles are the most susceptible? In the true spirit of AI, let’s have a look at the data.

Cognitive vs Manual

Economists typically split occupations into four broad groups:

  • Nonroutine cognitive occupations, which include management and professional occupations
  • Nonroutine manual occupations, which include service occupations related to assisting or caring for others
  • Routine cognitive occupations, which include sales and office occupations
  • Routine manual occupations, which include construction, transportation, production and repair occupations

What we see is a separation around the concept of “routine”, as opposed to skill (or the old notion of blue collar vs. white collar). The pessimist argues that this can be broken down even further, since every job has routine components. But this is not just about process automation (remember those Six Sigma/LEAN teams that drew thousands of workflow charts to feed into workflow engines, with marginal success?).

The new world is not about process re-engineering; the new world is about data-driven processes. So as long as data exists in the form of inputs and outputs for your job, it is likely a machine will outperform you over time. Essentially, it does not discriminate by experience, knowledge or skill (a toy sketch of this idea follows the quotes below). This means an Auditor in a $150k job is more likely to be in the “cross-hairs” of AI-based automation than their Personal Assistant. The overall league table looks like this.

League Table

You will see the concepts of “routine” and “data” come through, as opposed to current remuneration, skill or experience. What about the so-called domain expert, the person who combines skill with experience? You know – the insurance agent and the underwriter? What about the C-Suite exec? The argument from this group is that machines suck at negotiating, being creative, motivating and leading a team. This is all true, of course, but in a McKinsey interview Jeremy Howard argues that top executives should not feel “safe”:

“It’s striking how little data you need before you would want to switch over and start being data driven instead of intuition driven. Right now, there are a lot of leaders of organizations who say, “Of course I’m data driven. I take the data and I use that as an input to my final decision-making process.” But there’s a lot of research showing that, in general, this leads to a worse outcome than if you rely purely on the data. Now, there are a ton of wrinkles here. But on average, if you second-guess what the data tells you, you tend to have worse results. And it’s very painful—especially for experienced, successful people—to walk away quickly from the idea that there’s something inherently magical or unsurpassable about our particular intuition.”

Kaggle founder Anthony Goldbloom is even more brutal on the domain experts: “Two pieces are required to be able to do a really good job in solving a machine-learning problem. The first is somebody who knows what problem to solve and can identify the data sets that might be useful in solving it. Once you get to that point, the best thing you can possibly do is to get rid of the domain expert who comes with preconceptions about what are the interesting correlations or relationships in the data and to bring in somebody who’s really good at drawing signals out of data.”

Howard and Goldbloom are right to a large extent, but this also brings up the main economic point that automation (AI or otherwise) is not just about losing jobs; it is also about increasing the value of the tasks that remain with humans. Remember, the remaining jobs are not necessarily the ones that need the most experience or skill; they are the ones with the least historical data and the least routine.
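To make the “inputs and outputs” point concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn, with entirely made-up feature names and synthetic data standing in for historical case files): a generic model learns to reproduce a routine accept/decline decision purely from past records, with no domain expertise encoded anywhere.

```python
# A toy, hypothetical sketch: learn a routine decision from historical
# inputs and outputs. The features, data and the "hidden rule" below are
# all invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 5_000

# Synthetic "case files": the inputs a human expert once reviewed.
X = np.column_stack([
    rng.normal(45, 12, n),          # applicant age
    rng.normal(60_000, 20_000, n),  # declared income
    rng.integers(0, 5, n),          # number of prior claims
    rng.uniform(0, 1, n),           # third-party risk score
])

# Synthetic "outcomes": the expert's past accept/decline calls, generated
# here by a hidden rule plus noise to stand in for real decision records.
y = ((0.02 * X[:, 0] + 0.6 * X[:, 2] + 2.0 * X[:, 3]
      + rng.normal(0, 0.5, n)) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A generic learner: no underwriting knowledge is encoded anywhere.
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Agreement with past decisions: "
      f"{accuracy_score(y_test, model.predict(X_test)):.1%}")
```

The point is not the specific model; it is that once a decision leaves a trail of structured inputs and outputs, reproducing it becomes an exercise in drawing signal out of data rather than in accumulating experience.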

The other great “savior” for humans has been regulation and fragmentation. This is why tech firms increasingly seed their inventions outside of the US and then sweep up the 50 states once they have perfected them. However, regulation that does not help the end consumer of the product or service will eventually be only a speed bump to AI-driven progress.

So where does that leave us? A lot of arguments predicting the end of human work, but a long-term macro trend of technology creating more jobs (if you had said 25 years ago that your son or daughter would be a drone operator or an augmented reality game designer, they would have put you in an institution).

In my opinion, it is the latter. The AI-based workforce is no doubt very different, and most likely better. Aptitude will be important, but so will Attitude. This latter piece could be the key point: thinking linearly vs. thinking exponentially. As Erik Brynjolfsson (The Second Machine Age) put it nicely, “(t)he greatest failing of the human mind is the inability to understand the exponential function.” Meaning we always over-estimate what a technology will do in 2-3 years but under-estimate what it will do in years 5-10, where it really matters (which probably means any business case you have done is wrong!).
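As a back-of-the-envelope illustration of that last point, here is a tiny sketch with purely made-up numbers: assume people expect a technology to improve linearly (+100% of today’s capability per year) while it actually compounds at 40% per year. The linear expectation overshoots in the first three years and is then left far behind by year ten.

```python
# Toy comparison of a linear expectation vs. an exponential reality.
# Both growth rates are assumptions chosen purely for illustration.
def linear_expectation(years, annual_gain=1.0):
    return 1 + annual_gain * years          # +100% of today's capability per year

def exponential_reality(years, annual_growth=0.40):
    return (1 + annual_growth) ** years     # compounding at 40% per year

for year in (3, 10):
    expected = linear_expectation(year)
    actual = exponential_reality(year)
    verdict = "over-estimated" if expected > actual else "under-estimated"
    print(f"Year {year:2d}: expected {expected:4.1f}x, actual {actual:4.1f}x -> we {verdict}")

# Year  3: expected  4.0x, actual  2.7x -> we over-estimated
# Year 10: expected 11.0x, actual 28.9x -> we under-estimated
```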

The good news is that if you got this far in this blog, you have probably mixed up your routine and your job is safe… for now 🙂

So do you agree that AI will impact the labor market? Will the impact be positive or negative? Can your job be automated by AI? More importantly, can your boss’s job be automated? 🙂

Next week we combine some of the concepts from the Tech blog and the Jobs blog and put our game face on for the Impact on Insurance.

Author: Lakshan
Lakshan is an experienced global executive who has worked across technology, venture capital, insurance, wealth management, construction, manufacturing and mining.

He has worked in corporates across the globe and has rounded it off with an MBA and a couple of Exec Programs, but please don't hold that against him, as he is busily unlearning everything he learnt over the last 20 years to stay relevant for the next 20 years 🙂

As CTO, he is bringing the exponential technologies that will define the next 10 years into the Intellect SEEC products. His current projects revolve around AI and Blockchain.

He holds global patents in several technologies and is an investor and advisor to numerous Fintech startups.

He is a sports fan, music aficionado and animal lover and his claim to fame is that he has trained his pet bulldog Mortimer to obey him whenever the bulldog wants to 🙂
