Impact, Volume 2023, Issue 1

You will doubtless have come across CAPTCHA tests on the internet, where you are asked to prove that you are human by ticking a checkbox or recognising an image. Have you noticed that they are getting harder? Have you wondered why?

Captcha tests have become more difficult for people to complete due to advances in artificial intelligence and machine learning algorithms. As bots become more sophisticated, captcha tests have had to evolve to become more complex and difficult for bots to solve, which unfortunately has also made them harder for people to complete.

Unless you have been living on a remote island, cut off from the outside world, the rise of what is commonly termed artificial intelligence (AI) will not have escaped you. For example, there has been much in the news lately about ChatGPT, an advanced AI chatbot designed to mimic a human conversationalist. It can give structured textual answers to questions, drawing on a huge base of information – but it can also write computer programs, compose music, and even write poetry. Microsoft, a major investor (to the tune of $10 billion) behind ChatGPT, is already using it in its Bing search engine and plans to embed it in Word, PowerPoint, and Excel.

SCENARIOS FOR AI

What are the implications of this accelerating development of AI for work, for workers, or indeed for humanity? Such issues are addressed in the book Life 3.0 by Max Tegmark, MIT physics professor and co-founder of the Future of Life Institute.

Tegmark sees the evolution of life on Earth as having three stages: Life 1.0 (biological), Life 2.0 (cultural), and Life 3.0 (technological). The emergence of intelligent machines, machines that break free of biology and can improve their own software and hardware, marks the beginning of Life 3.0.

In this book AI is viewed as an ever-rising sea, steadily reaching the more elevated parts of the landscape of human abilities. AI could therefore affect society in an ever-growing number of ways, from creating new jobs and enhancing human capabilities to displacing workers and exacerbating economic inequality. It already has the potential to transform areas such as transportation, communication, energy, finance, and medicine.

But Tegmark notes that as well as the potential for huge gains there are major ethical and legal issues surrounding the use of AI, such as privacy and security concerns, problems of bias and discrimination, maldistribution of generated wealth - and worries about autonomous weapons. (He points out that there have been at least two incidents where the unleashing of a nuclear weapon was narrowly averted by humans making unconventional judgements in unforeseen situations; would humanity have been as fortunate if an autonomous AI-controlled system had been making protocol-based decisions?)

In later parts of the book Tegmark considers the possibility that superintelligent AI (AI that surpasses human intelligence and control) eventually emerges, with outcomes that may be existential for humans. Life 3.0 presents a spectrum of associated scenarios, some utopian, some dystopian - and some arguably in between.

Other books on AI provide a range of perspectives and stances on AI’s capabilities and merits. Some, for example the short book Novacene: The Coming Age of Hyperintelligence by the late James Lovelock (best known for his Gaia hypothesis that the Earth is a self-regulating system), take a view of AI’s potential similar to that of Life 3.0. (Lovelock suggests that humans should embrace a new symbiotic relationship with the machines they create. Hopefully more “All Watched Over by Machines of Loving Grace” than “The Matrix”.) However, a more sceptical view is taken in How to Stay Smart in a Smart World by Gerd Gigerenzer.

In this book Gerd Gigerenzer, a German psychologist and expert in decision-making, discusses the impact of artificial intelligence on human cognition and decision-making. He argues that the hype around AI has led to an overestimation of its capabilities, and an underestimation of the value of human intuition and experience. Gigerenzer emphasizes that while AI can be useful in certain contexts, it is not a substitute for human intelligence.

Gigerenzer explains that the “certain contexts” where AI can be useful are those which are well defined, stable, and for which large amounts of historical data are available. Human intelligence, on the other hand, has evolved to cope with situations where the future can be highly uncertain, may not resemble the past, and data can be scarce. Such situations are commonplace.

He also points to deeper problems with AI. Humans have mental concepts; when they see, for example, a bus they know what it is and what it is for. AI neural networks have no concept of what an image (or, for that matter, anything) represents in the world; they learn to recognise an object purely by finding associations between pixels. AI needs to be trained on thousands of images to recognise a bus, while children can recognise one after seeing only a few, and AI can sometimes be tricked into completely misidentifying an object (for example, a bus as an ostrich!) by a change of just a few pixels in a picture - a change people could not even detect, let alone be fooled by.
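
The arithmetic behind such trickery is simple: a change far too small for the eye to notice in any one pixel can, applied to thousands of pixels at once in just the right directions, add up to a large shift in the network’s overall score. The short Python sketch below is my own toy illustration of this effect (it is not an example from Gigerenzer’s book): a simple linear “classifier” confidently labels an image as a bus, yet shifting every pixel by one hundredth, each in the direction that most lowers the score, flips the label.

```python
import numpy as np

# Toy illustration (invented for this article, not from the book) of an
# adversarial perturbation. A linear "classifier" labels an image "bus"
# when w.x + b > 0. Nudging every pixel by a tiny amount eps against the
# gradient of the score (which for a linear model is just w) lowers the
# score by eps * sum(|w|): small per pixel, large in total.

rng = np.random.default_rng(0)
n_pixels = 784                        # a flattened 28x28 "image"
w = rng.normal(size=n_pixels)         # classifier weights
x = rng.uniform(0, 1, size=n_pixels)  # the original picture
b = 1.0 - w @ x                       # set so the image scores "bus" by a small margin

def label(image):
    return "bus" if w @ image + b > 0 else "ostrich"

eps = 0.01                    # far too small a change for a human to see
x_adv = x - eps * np.sign(w)  # the classic "fast gradient sign" step

print(label(x), "->", label(x_adv))                             # bus -> ostrich
print("largest single-pixel change:", np.abs(x_adv - x).max())  # 0.01
```

Real image networks are vastly more complex, but the same additive effect is what lets an imperceptible per-pixel change turn a bus into an ostrich.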

How can we “stay smart” in a world transformed by AI, and not be beguiled, bamboozled - or bludgeoned - by it? Gigerenzer argues that this requires understanding AI’s potential and risks, and that humans need to develop their own decision-making skills, such as using “fast and frugal” heuristics that allow people to make quick decisions based on limited information and which, in the real world, can be more effective than more complex decision-making methods. (A minimal sketch of one such heuristic follows.)
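
To make “fast and frugal” concrete, here is a short Python sketch of take-the-best, one of the heuristics Gigerenzer and colleagues have studied: compare two options on one cue at a time, starting with the cue that has proved most reliable, and decide on the first cue that distinguishes them, ignoring all the rest. The cities, cues and cue ordering below are invented for illustration.

```python
# A sketch of the "take-the-best" heuristic. The task, cue names and cue
# values are invented for illustration: which of two cities is larger?
# Cues are checked best-first; the first cue that discriminates decides.

# Each cue maps a city to True, False, or None (None = value unknown).
CUES = [
    ("has_major_airport", {"Springfield": True, "Shelbyville": False}),
    ("has_university",    {"Springfield": True, "Shelbyville": True}),
    ("on_a_river",        {"Springfield": None, "Shelbyville": True}),
]

def take_the_best(a, b):
    """Choose between a and b on the first cue that discriminates."""
    for cue_name, values in CUES:
        va, vb = values.get(a), values.get(b)
        if va is True and vb is False:
            return a, cue_name
        if vb is True and va is False:
            return b, cue_name
    return a, "guess"  # no cue discriminates: fall back to guessing

choice, reason = take_the_best("Springfield", "Shelbyville")
print(f"Choose {choice} (decided by cue: {reason})")
```

Gigerenzer’s point is that in uncertain environments such one-reason decision-making can match, or beat, methods that attempt to weigh all the available information.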

AI, ANALYSTS AND THEIR CLIENTS

Whatever the precise future trajectory for AI may be, it is clearly becoming more capable and will have considerable impact on individuals and on society as a whole. In working life, some occupations will be affected more than others - for instance, the impact on the work of artisans may be much less than on that of analysts. What sorts of relationships might emerge between AI and analysts (and their clients)?

First, research analysts of various kinds have, of course, already been playing a crucial role in the fundamental development of what might be termed the soft infrastructure of AI – optimising algorithms, data structuring systems, neural network training methods and so on.

Second, building on that infrastructure, analysts working in or with all sorts of organisations have been using, or looking at using, AI as an additional tool to support their work. AI looks set to become a powerful business tool - Microsoft executives clearly think so, as do those at Google, as evidenced by the launch of their own AI chatbot, Bard. Tegmark thinks that AI is poised to move beyond mastery of strategic games such as chess and to be able to challenge, or help, humans in strategy development in areas such as business, defence or politics. That might or might not reduce client demand for analysts but, either way, considerably increased care would be needed over issues such as accuracy, bias and transparency. (The launch of Bard was embarrassingly marred by some factual errors in the chatbot’s responses, alerting us to the risks of, say, error-strewn but smooth-talking successors to Wikipedia.)

A third possible role for analysts is in helping to provide their clients with insights into the future of AI. As the books above illustrate, AI is a prime example of the need to consider alternative possible futures. Preparing for a future world in which AI features strongly is an important case of decision-making under high uncertainty, where techniques such as scenario planning and systems thinking and modelling should have a useful contribution to make.

THE REVEAL

Two paragraphs of the preceding text were not written by me; they were (part of) the responses of ChatGPT to my questions: “Why have Captcha tests got harder for people to do?” and “Write a short summary, focusing on AI, of the book How to Stay Smart in a Smart World by Gerd Gigerenzer”. (In each case, ChatGPT’s response forms the first paragraph of the associated text.) Judge for yourself how well it performed.

Worried that AI is about to take over? Let’s see what ChatGPT thinks: “ChatGPT, please write an 8-line poem about AI.”

In circuits and code, we've created a mind,
An AI of learning, programmed to find,
Patterns and insights in data so vast,
It learns from experience, it learns fast.
It processes, predicts, and analyzes with ease,
And in its abilities, we stand in unease,
For as we build it smarter each day,
What fate for humanity will AI portray?

The Poet Laureate can probably rest easy. For now.

NOTES ON CONTRIBUTORS

Geoff Royston

Dr Geoff Royston is a former president of the OR Society and a former chair of the UK Government Operational Research Service. He was head of strategic analysis and operational research in the Department of Health for England, where for almost two decades he was the professional lead for a large group of health analysts.

