Embracing the new jobs the future AI-driven world will offer

AIs are far from perfect and tens of thousands of human employees will be needed to ensure their effective and efficient use

While many have written about the jobs that may be lost in the rollout of artificial intelligence, I prefer to reflect on the thousands of new jobs being created to (i) design and feed AIs, (ii) assess the technical and economic viability of fintech projects, (iii) enhance the quality of third-party datasets, or (iv) simply communicate a realistic view of what AIs can and, especially, cannot do, and then structure business models that mitigate the weaknesses of current AIs.

These types of roles are already emerging and will likely grow into the tens if not hundreds of thousands. That has crucial implications for the educational needs of society.

It is important to understand that AI can be general or narrow (i.e. built for all purposes or for just one very narrow use case) and that AI abbreviates two related but different concepts: artificial intelligence and augmented intelligence. The former, in its general form, is in theory Hollywood's vision of an entirely autonomous Skynet, while the latter empowers human experts with strong machinery to exponentially increase the value they can add within a given period of time.

Based on all the evidence I have seen, the theory of a general, fully autonomous artificial intelligence appears to be just that: a theory, hyped by those who believe they will benefit from its dissemination, such as entrepreneurs or venture capitalists. To borrow the words of a senior European AI academic: “It is funny that we academics are actually not part of the AI hype.”

Artificial intelligence applied to narrow use cases, such as passport checks at airports, can however work in an exceptionally resource-efficient manner. Such narrow AI is very likely to continue growing strongly, in tandem with the share prices of the firms supplying the computing power.

In simplistic terms, narrow artificial intelligence solutions work very well if a use case involves a frequently repeated, real-time recognition task, such as passport checks.

In contrast, if the use case involves few repetitions, the economics of designing and coding the AI are debatable. If the use case is recognition of an observable fact but not in real time, AI competes with skilled humans in low-income countries, who are much faster at learning such tasks and can continue learning outside the data space while executing the task itself.

If the use case is, however, not recognition of an observable fact (eg does an image show the person in the passport?) but instead prediction of some unobservable outcome, the three significant weaknesses of so-called “deep learning” manifest themselves.

First, the results of deep learning neural networks can vary depending on how the initial input weights are selected. Worse, neural networks to date lack a robust measure of uncertainty, such as the confidence intervals of classical statistics.
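
To make this concrete, here is a minimal sketch, assuming Python and the scikit-learn library (my choice for illustration; no particular tools are named above). Two otherwise identical networks, trained on identical data, can disagree on the same case purely because of their random initial weights, and each returns only a point estimate with no confidence interval attached.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A small synthetic classification dataset stands in for real-world data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
x_new = X[:1]  # one case we would like a prediction for

for seed in (1, 2, 3):
    # Same architecture, same data; only the random initial weights differ.
    net = MLPClassifier(hidden_layer_sizes=(16,), random_state=seed,
                        max_iter=2000)
    net.fit(X, y)
    # predict_proba returns a bare point estimate: unlike an estimate in
    # classical statistics, it carries no standard error or confidence interval.
    p = net.predict_proba(x_new)[0, 1]
    print(f"seed={seed}: predicted probability of class 1 = {p:.3f}")
```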

Second, deep learning really emphasises the imitation part of learning, effectively copying successful behaviour displayed in big datasets. It does not involve much conceptual understanding or knowledge-transfer type learning. Hence, there is an argument that it would be more accurately named “deep imitation” than “deep learning”.
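
As a hypothetical illustration of the imitation point, again assuming scikit-learn: a network taught y = x² only on the range [-2, 2] imitates that range well, but because it has copied behaviour rather than learned the underlying concept, it fails badly when asked about x = 5, where a human who knows the rule would not.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train only on inputs between -2 and 2; the network sees nothing beyond.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(500, 1))
y_train = X_train.ravel() ** 2

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# Inside the training range the imitation is good; far outside it is not,
# because copying observed behaviour is not the same as grasping the concept.
for x in (1.5, 5.0):
    pred = net.predict([[x]])[0]
    print(f"x={x}: network predicts {pred:.2f}, the concept y = x**2 gives {x**2:.2f}")
```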

Third and most crucially, deep learning neural networks are so complex that I have yet to meet even one scientist who understands exactly what happens deep in the hidden layers of her or his neural network. This lack of full understanding, combined with the overemphasis on the imitation part of learning and the imperfect ability to measure uncertainty precisely, means that human experts often do not fully trust deep neural networks in prediction-based use cases, where the outcomes cannot easily be quality-checked.

Understanding this context is crucial to realising why AIs are far from perfect and why tens of thousands of human employees are needed to ensure their effective and efficient use. So which jobs will emerge?

First, there are the designers and feeders of AIs. These roles include both highly skilled PhD level data scientists originating from departments of computer science, statistics, finance or other quantitative (social) sciences, as well as less highly trained individuals who feed the AIs with ground truths of observable outcomes via services such as Amazon’s Mechanical Turk.

Second, there are the quality checkers of so-called AI solutions. Many startups, especially in fintech, market the supposed abilities of their narrow AIs very aggressively, but one often wonders how startups in which the majority of executives lack technical PhDs actually build such wonder-machines. Cases such as that of Theranos have made those aiming to engage with startups rather cautious. Consequently, it is no surprise that conducting due diligence on artificial intelligence startups is establishing itself as a viable niche in the field of technology consulting.

Third, there are the data quality enhancers. Narrow artificial intelligence approaches, and especially augmented intelligence approaches that empower experts, need tons of data, often supplied by third parties. But these datasets are not always supplied in pristine quality, to put it politely. In reaction, institutional investors have teamed up with academics to launch deep data delivery standards (www.DeepData.ai).

These are, however, just standards that third-party data providers can voluntarily sign up to. It still often takes several human analysts to enhance the quality of third-party data. In other words, much like the thousands of SAP consultants out there, we can expect to see thousands of data specialists with expertise in the most popular third-party datasets used in their respective sectors.

Fourth, given the weaknesses of narrow artificial intelligence discussed above, augmented intelligence solutions are becoming increasingly popular. These use the same technologies but do not aim for a 100 per cent automated process; instead, they employ experts to harness the technological powers of the AI age.

Such expert-led processes allow for more agile business models than insisting on fully artificial processes. However, the expertise to decide which approach to follow, and how to structure the technology into a viable business model, is a key competence in itself and has hence emerged as a relevant field of technology consulting, with many jobs on the horizon.

The final consequence of the emergence of augmented intelligence, which one could summarise in the equation ‘AI = humans^machines’ (that is, machines exponentially amplifying the value that human experts can deliver), is the crucial role of PhD level experts going forward.

While overly theoretical economists may no longer be popular with subjective electorates, employing and retaining PhD level computer scientists and financial data scientists is a necessary, though not sufficient, condition for success in AI. If a company further manages to place such data-driven expertise among the majority of its senior decision makers, that may prove a sufficient condition for success.

For any country to have sufficient human resources with such PhDs, it needs to reflect on an innovative model to fund doctoral studies. For any country to benefit from all these emerging jobs in the age of AI, it needs to reflect on how to reform its educational system so that children start seriously learning the languages of computers at around the same time as they start learning foreign languages.

Most crucially perhaps, we may want to reflect on how to fund further education for the millions who are in jobs, years beyond full-time education, but who have real concerns that their roles may no longer be much needed well before they are due to reach retirement.

Professor Andreas Hoepner is chair in operational risk at UCD Michael Smurfit Graduate Business School