What is Artificial Intelligence? What might it be able to do? Will it do more harm than good? These questions might have belonged to science fiction just a few years ago, but they are serious questions today. Examples are emerging of AI making huge strides in helping to run businesses, supporting complex decisions such as medical diagnoses, and even assisting scientific research. As with all technologies, AI will shift the balance of power in the world, but more than any other technology it opens up an existential question about what it means to be human, especially if machines become smarter than us.
How do we approach such challenging questions? We could, out of fear, try to stop the development of AI like the Luddites of the 19th century – the Luddites had serious and valid concerns about the mechanisation of the textile industry and its impact on society, concerns which largely turned out to be accurate, if the change proved unstoppable. We could embrace AI, accepting all the claimed benefits and ignoring the pitfalls, at least until they emerge. Or we could try to understand it, not merely as a technology but in its implications for individuals and society.
Artificial intelligence is the ability of a computer or other human-made device to exhibit behaviour that we would normally consider to require human intelligence. There is a long history of AI, pre-dating modern computers. Ada Lovelace (1815–1852) is considered to be the first computer programmer, devising programs for Charles Babbage’s Analytical Engine; she wrote “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.” Today we can reasonably surmise that a computer has powers Ada did not anticipate.
Alan Turing (1912–1954) did anticipate modern AI, and devised a test (the Turing Test) to determine whether a computer is intelligent: if an independent observer of a dialogue between a machine and a human cannot tell which is the machine, then the machine can be considered intelligent. Passing the Turing Test captures just one dimension of intelligence, and we are getting close to having systems that pass it.
Around 1960 John McCarthy (1927–2011) created a computer programming language called LISP. A feature of this language is that a program can write programs and run them, and this idea underpins much of modern AI. Traditional computer programs are rigid in their processing and cannot adapt, as Ada Lovelace foresaw. AI programs can learn and adapt – computer programs that create or modify computer programs, just as our brains can adapt. Once we have computer programs that can change their own behaviour, we have the potential for some form of intelligence that can learn and develop.
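To make that idea concrete, here is a minimal sketch in Python (rather than LISP) of a program that writes another program as text and then runs it. The names and the trivial arithmetic are invented for illustration; this is the general idea of code-as-data, not a reconstruction of McCarthy’s work.

```python
# A program that writes a program, then runs it.
# Minimal illustration of code-as-data, sketched in Python rather than LISP.

def make_adder_source(n):
    """Generate the source code of a brand-new function as a plain string."""
    return f"def add_{n}(x):\n    return x + {n}\n"

generated = make_adder_source(5)   # the program we just "wrote"
namespace = {}
exec(generated, namespace)         # compile and run the generated code
add_5 = namespace["add_5"]

print(generated)                   # show the generated source
print(add_5(10))                   # -> 15: the generated program is now running
```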
There was a boom in interest in AI in the 1960s. Joseph Weizenbaum (1923–2008) wrote what was arguably the first chatbot, a program called Eliza that could simulate a Rogerian psychotherapist. Weizenbaum was disturbed to find people opening their hearts to this computer program, and he spent much of his life warning of the dangers of granting too much power to computers. Computers at that time were very expensive, slow and difficult to program, and interest in AI waned. The imagined possibilities of AI still seemed a distant prospect, and there were plenty of other developments to occupy computer scientists.
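It is striking how little machinery Eliza needed. Here is a hedged sketch in Python of the core trick: match a keyword pattern, reflect the speaker’s pronouns, and hand the statement back as a question. The patterns below are invented for illustration and are far cruder than Weizenbaum’s actual script.

```python
import re

# Swap first- and second-person words so the reply mirrors the speaker.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) pairs: a tiny, invented Rogerian script.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),         # catch-all keeps the dialogue moving
]

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower().strip(".!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
```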
In the 1980s, with the arrival of powerful personal computers, interest in AI was re-invigorated in the form of expert systems, sometimes called knowledge-based systems. To the extent that a human expert makes decisions using rules, these rules can be captured in an expert system to enable a non-expert to make expert-like decisions. A simple example would be a symptom checker on a health website. Deep Blue was a chess computer, first devised in 1985, that encoded expert chess knowledge alongside massive search. In 1996 it lost four games to two against the world champion Garry Kasparov. In 1997 it beat him.
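The mechanics of a rule-based system like the symptom checker mentioned above can be sketched very simply: a set of if-then rules applied to known facts. The rules below are invented purely for illustration, and are emphatically not medical advice.

```python
# A toy rule-based symptom checker: each rule pairs the set of required
# symptoms with the advice an "expert" would give when they all hold.
RULES = [
    ({"fever", "cough", "breathlessness"},
     "Possible chest infection: see a doctor promptly."),
    ({"fever", "cough"},
     "Likely a common viral illness: rest and fluids."),
    ({"headache", "stiff neck", "fever"},
     "Red-flag combination: seek urgent care."),
]

def check(symptoms):
    """Return advice from the first rule whose conditions are all met."""
    observed = set(symptoms)
    for conditions, advice in RULES:
        if conditions <= observed:      # all required symptoms present?
            return advice
    return "No rule matched: insufficient information."

print(check(["cough", "fever", "breathlessness"]))
# -> Possible chest infection: see a doctor promptly.
```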
Now we have natural language processors that can translate convincingly between human languages. We have in our homes voice recognition systems that can take simple instructions. More recently we have systems that can write texts indistinguishable from human-written ones. Systems are being devised that can interpret images and make diagnoses from X-ray images and CT scans.
Thus today we can reasonably argue that computers outperform humans on many tasks that, not long ago, were thought beyond the reach of machines. In some cases, such as Deep Blue, computers can outperform humans on matters of intellect. It is no longer far-fetched to think that we might converse convincingly with a computer and consider it in some sense our equal, perhaps even our superior.
Consider what AI might mean for mindfulness teaching. An AI program may generate a convincing script for a mindfulness meditation. It might even deliver it verbally in a way that engages the listener. The listener may then have experiences that enhance their mindfulness. In a limited sense, then, it is possible that AI could teach mindfulness, at least in guiding mindfulness practices.
Weizenbaum’s Eliza, using a limited protocol, could convincingly simulate a Rogerian psychotherapist back in 1966. With a good inquiry model, it seems feasible that a modern-day Eliza could become very effective at helping someone explore their experience in a mindful way. Add some visual processing that can observe a person’s behaviour, include speech recognition and some neural learning, and our modern-day Eliza could become very effective as (say) an MBCT teacher.
We are realistically facing the prospect that a machine could in some sense become an effective mindfulness teacher. Given that anxiety and depression are prevalent and cause a great deal of suffering, it would be very appealing to have an app on a phone that could guide someone through an MBCT course, all at very low cost. What if we found in a blind randomised trial that a phone app was as effective as a traditional mindfulness class? Do we dare to ask that question?
We are then thrown into the same dilemma that Joseph Weizenbaum faced over half a century ago. The question is not whether AI can replace a human in an area like mindfulness teaching, but whether it should, and if so, what the consequences would be. This opens up many questions.
How much power should we grant machines? We already grant a great deal – apply for a mortgage and it is an algorithm, examining your credit history and income, that decides how much the bank “thinks” you can afford. Machines already determine which social media posts you see first, and which advertisements are likely to appeal to you. They can even manipulate voting intentions. This is a crucial area of inquiry – it is where we still have some choice as humanity, for once power is granted it is hard to take back.
Is intelligence just about problem solving, like playing chess? Or should it include empathy, relational skills, emotions? Could machines really feel as well as think logically? Weizenbaum’s Eliza could simulate some of that many years ago, so at a behavioural level it is reasonable to expect machines to appear to think in this much wider sense. AI is not human intelligence, and operates quite differently from the human mind – just as a tractor works differently from a horse when pulling a plough. Indeed, AI’s capacity for exhaustive search is far beyond any human’s and can yield results humans could not.
We have been moving apace towards a world where a large number of people rely on technology for education, information and interaction. Today we can have regular and easy communication with people around the world, and even develop friendships with people we never meet in person. We increasingly rely on online information sources, and happily take advice from online systems. We already have an increasingly intimate relationship with technology. When that technology can interact in more intelligent ways, display some form of empathy, and provide support and advice that is easier to access than a human’s – and sometimes better – how will our relationship with it evolve?
What does that mean for us? If we are no longer the most intelligent entities in the world, how will we respond? If The Matrix once seemed an unrealistic fantasy, some technologists are already talking about brain implants to enable easier interaction with machines. What happens if those implants could then be used to control the brain? AI presents a real existential challenge and opens up questions about what it means to be human.
Is it all bleak? Most science fiction projections of AI are dystopian. The themes, as in The Matrix, portray AI as power-hungry, exploitative and controlling. Jeanette Winterson, in “12 Bytes: How Artificial Intelligence Will Change the Way We Live and Love”, explores some different possibilities. What if a generalised AI came to the conclusion that Buddhist principles were the optimum basis for behaviour and set about reducing suffering? How would that unfold? Would it be beneficial to have a patient and kind AI tutor for your child who is struggling at a conventional school? One that can adapt to the needs of a neurodiverse population? What if AI, without the evolutionary baggage humans carry, were kinder and ethically better than human society as a whole?
The questions around the ethics of AI need to be fully explored. AI could be an important and valuable element in the development of a kinder society. If AI is just seen as a way of replacing people, as a route to more shareholder value, if it accelerates the widening gap between rich and poor, if it devalues humans, then we have a problem. If it provides valuable support to those in need, if it removes drudgery, if it advances scientific discoveries, if it reduces our demands on the planet, if it enables us to treat humans as more than workers in an economic model, then we have much to look forward to. Of course it is likely to be a mix of these, and there will be effects we do not anticipate. We need to consciously engage in these questions.
Artificial intelligence is not going away. You might take an axe to your iPhone, but that will not remove the impact of technology on you. It is already too deeply embedded in the social, cultural and economic environment in which we all live. Rather, we should engage mindfully, not rush to judgements, and set about a long inquiry. It is not just AI that needs to be the subject of that wider inquiry, but the broader context of how we are already moving into a more mechanised world, a world with some wonderful opportunities as well as some frightening threats. It is a world currently dominated by economic self-interest and conflicts driven by a baser form of human intelligence. Perhaps intelligent machines might transform the world into something kinder? The future need not be dystopian, but it probably will be if, as a society, we drift mindlessly into it.