Apocalypse (not) now: is AI an existential threat?

 


In this episode

Depending on who you speak to, AI is either going to plunge us into the abyss or improve every aspect of our lives immeasurably. The hype around AI can be disorientating, so let the RTBC team steer you away from the grim end-of-humanity inevitability, as we explore a more nuanced version of the AI story. Our guests Mustafa Suleyman, Dr Mhairi Aitken and Lauren M. E. Goodlad discuss whether the benefits of AI will ever outweigh the risks, why AI hype can serve as a distraction from some very pressing issues, and whether Geoff can ever replace Ed as a more obedient podcast host.

Plus: Despite the technological advances of AI, why are Ed and Geoff still hung up on Ceefax?

Guests

Mustafa Suleyman, Co-founder of Inflection AI and author of The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma (@mustafasuleyman)

Dr Mhairi Aitken, Ethics Fellow, Alan Turing Institute (@mhairi_aitken / @turinginst)

Lauren M. E. Goodlad, Professor of English and Comparative Literature and Chair of the Critical AI Initiative at Rutgers University (@CriticalAI)

More info

Buy a copy of Mustafa’s book here

Learn more about Inflection AI here

Learn more about the Alan Turing Institute and the work Mhairi is doing on children’s rights and AI

Learn more about Rutgers University’s Critical AI Initiative with the journal’s inaugural issue to follow in October 2023

We love hearing from you. If you have views on this episode, or ideas for future shows, you can contact us via our website, our social media (@cheerfulpodcast), or by email (reasons@cheerfulpodcast.com)

Episode transcript

Mustafa Suleyman

Ed: Now, to begin our conversation, I'm delighted to say that we're joined by Mustafa Suleyman, who is co-founder and CEO of Inflection AI, and author of a new book: The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma. Mustafa, thanks so much for joining us.

Mustafa: Thanks for having me. It's great to be here.

Ed: Now, let's start by asking, what is the Coming Wave, and what is the 21st century's greatest dilemma?

Mustafa: The coming wave is AI and synthetic biology: two new general-purpose technologies that are bursting onto the scene. I think people have a better understanding of AI since the arrival of ChatGPT over the last year. Synthetic biology is the ability to engineer new compounds that are precisely tailored to the specific tasks that we care about.

Maybe they're making a crop drought-resistant or pest-resistant. Or they're making a new kind of material used in construction that is more carbon efficient. I think there are some characteristics of these two new waves that are different from previous technologies, which mean that over the next couple of decades we have to figure out how to contain – that is, limit – their consequences. And in some sense limit access to those technologies in ways that enable us to prevent the potential threat that they pose to the future of the nation state.

Ed: What is the greatest dilemma then?

Mustafa: So, the dilemma is that these technologies are essential. We must have them, and they're going to deliver the greatest productivity boost we have seen in the history of our species. I mean, these are the tools that allow us to replicate what has made us so successful as a species. Our intelligence, our ability to synthesise information, use that to make predictions about how the world around us is likely to unfold, and then invent new things, create things, solve hard scientific problems, build businesses.

And that is a unique facet of our species, and it has been the engine of progress for millennia, and we're now taking some parts of that process and distilling it into a piece of software. I predict over the next two decades we're going to see a really incredible boost to our creative process and our general well-being, and I think there are huge reasons to be incredibly optimistic about those outcomes. But the dilemma is that we can only benefit from those things if we also mitigate the harms, just as we have faced a similar dilemma with the arrival of many other technologies in the past: we've done a great job of mitigating the downsides of air travel or cars or nuclear power, for example.

Geoff: But isn't there an inherent difference in that this goes beyond a tool, and AI will have an autonomy and a decision-making capability that's different to any development in the past?

Mustafa: Yeah, I think you're right. The difference with this kind of technology is that it has some potential characteristics which we may choose to design into these systems. These are not inevitable, techno-deterministic trajectories. There is no emergence that naturally happens with these models. It is super important to reframe the default expectation that has been set with AI, which is the Terminator framing. It has led people to think that this is just going to naturally, recursively self-improve: 'There'll be an intelligence explosion.' 'The AI will be able to update its own code.' That it will be able to create autonomy for itself and set its own goals. This is all completely wrong. People may design those capabilities into the models, but there's nothing inherent in the technology which means that is inevitable.

Geoff: And talk to us about this statement that you signed. One of the people they call the 'Godfather of AI', Geoffrey Hinton, got together some leading minds in the field, people like yourself, Bill Gates, Sam Altman, and it was only 22 words long. I'll read it. It says: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' So why did you feel it was necessary to sign that statement?

Mustafa: I have huge respect for Geoff. I mean, he was one of our first advisors at DeepMind in 2011. And I think it is certainly true that if this wave continues to proliferate the way every other technology has proliferated, getting cheaper and easier to use, and if everybody is able to run very powerful open-source AIs on their laptops in ten years' time, then the question is: how do we prevent people from dangerously experimenting with capabilities like recursive self-improvement, or capabilities like autonomy?

And in that framing, we do have to take seriously the threats that arise from that. These models will lower the barrier to entry for most tasks. They will act as a coach, an advisor, a sort of research assistant, a teacher. And you can think of all the amazing benefits that arise from that. But you can also think of reducing the barrier to entry to being able to manufacture a biological or chemical weapon. You know, we've seen those capabilities in these new large language models, my own at Inflection AI included. But if our technologies end up, in 10 or 20 years, fully open source, then how do we know that some people won't actually use those models to try and do those kinds of things? Those are the sorts of concerns that we all have.

Geoff: What does regulation look like, Mustafa? I mean, just so we can get our heads around this question.

Mustafa: I think the first thing is that regulators have to be technically competent. Which means they have to be able to ask the right questions and probe in sensible technical ways. Take the development of nuclear power or nuclear weapons for example, I mean regulators are extremely technically competent.

I'm not saying we need that today in AI, but I'm saying that we don't need to reinvent the wheel. There's no need to panic. There are lots of good precedents for regulatory frameworks which have been incredibly successful. Take aircraft safety, for example. You know, the industry is forced to share insights among competitors if there has been a safety incident.

We do need international consensus because if we have groups in less regulated areas who are able to experiment without the kinds of oversight that we would want to hold ourselves accountable to in the West and elsewhere, then that does present a risk.

Ed: Let's talk about your history in this area. So, you were the co-founder of DeepMind, which eventually became Google DeepMind. I came and visited you after I lost the election in 2015, just to see what it was all about. What did that experience teach you in relation to these questions we're discussing?

Mustafa: Well, I co-founded DeepMind in 2010 precisely because I could see the potential impact of this technology. And obviously I didn't know that everything was going to be as successful as it has been, but I could certainly see from that point that even if AI was only mildly successful, it was going to fundamentally shape our values and what it means to be human. And so right from the outset, I have been focused on trying to introduce the language and ideas of safety and ethics to the field of artificial intelligence.

Ed: Is one way of thinking about this that at DeepMind you designed AlphaGo, which turned out to be able to play Go not just much better than a human, but was making moves and 'thinking' in ways that humans just didn't? If you think about medicine, for example, presumably that is the potential? That it can recognise patterns around cancer, just as an example, that maybe humans have never recognised.

Mustafa: Yeah, the quest of artificial intelligence is the discovery or invention of new knowledge. That's what we're trying to do. I mean, productivity and civilisation are built on us inventing and creating new strategies that help us to live life in a healthier and more productive way.

And that's what will ultimately help us in health care, in education and transport, because there are efficiencies to be had absolutely everywhere! I mean, we're in a sort of pocket of inefficiency as a species. And there are huge leaps that we can make, and we've actually demonstrated it in health care multiple times. We've published many Nature papers now in the field, on identifying cancers in mammograms and identifying blinding diseases in OCT scans (which is work that I did with the NHS at Moorfields Eye Hospital). So there are lots of examples now of AI having a very practical impact.

Ed: Eight or nine years ago, people were saying ‘self-driving cars are about to take over, what’s going to happen to all the truck drivers…?’ It turned out that the hype about self-driving cars was at least premature. Is there hype around AI which is premature?

Mustafa: I think people always overhype things in the short term and underappreciate them in the long term. You know, I think that's just our bias as a species. I think two years ago, AI was probably underhyped because people hadn't wrapped their heads around what was going on. Whereas now, everyone's seen ChatGPT and they've had a chance to play with these language models. I think there's been a huge shift, but there's always a risk of overhype, for sure.

Ed: Why don't we sort of end with a bit of optimism. If we get this right, what can the benefits be for humanity in the 10, 15, 20 year view?

Mustafa: Yeah, I mean over a 10-year period for sure: billions of people are going to get near equal access to the most capable teacher, coach, chief of staff, personal assistant, confidant in history. So just as every one of us in the more developed world has broadly equal access to the best laptop and smartphone that humanity has ever invented, we're going to see the same trajectory with respect to access to intelligence.

And that means that everyone is going to get an incredible lawyer, an incredible doctor, an amazing teacher, all, broadly speaking, equally, within the decade. And that's why I say I think that's going to be one of the most meritocratic moments in the history of our species. It's going to be incredibly empowering because suddenly your network, your social privilege, your family, your class, all the things that we have all been concerned about for centuries, are going to matter much less. And what is going to matter is your willingness to adopt and use new technologies to get entirely personalised, neatly synthesised access to information really quickly and use those tools to help you do great things in the world and be great inventors.

Ed: Maybe you will even be able to find a more technologically adept podcast co-host through AI!

Geoff: Oh my god, that would be so amazing! Like an AI Ed who knew how to plug his headphones in and mute his microphone when there's a din in the background?

Mustafa: That's exactly what AI is going to do. It's going to augment your weaknesses so that you can spend time being the best Ed, and the kind of fumbling-headphone Ed is going to be a thing of the past.

Ed: Mustafa Suleyman, author of the new book, The Coming Wave, thank you so much for joining us.

Mustafa: Thanks for having me, it's been great to be with you both.

Mhairi Aitken

Geoff: Next, we welcome back to the podcast Ethics Fellow from the Alan Turing Institute, Dr. Mhairi Aitken. Hello.

Mhairi: Hi, how are you?

Geoff: Thanks for coming back. It's been a few years. I guess what we're talking about today is AI hype, both good and bad. So depending on who you listen to, it's either going to be this huge existential risk, or it's going to be completely transformative for society. Is that a real binary? Can you place yourself at either ends of that spectrum, or is it more nuanced than that?

Mhairi: Yeah, I guess I'm far more in the more nuanced, middle position. There's definitely a lot of hype and sensationalism around AI at the moment, and a lot of that, I would say, is really serving as a distraction from discussing the realities of what AI currently is, what it's capable of, and what the real, present, concrete risks around AI are at the moment.

Certainly over the last year, there have been lots of announcements, lots of statements about existential risks related to AI. And in most cases those statements have come out at precisely the moments when we were having really important discussions around emerging regulatory frameworks, particularly the EU AI Act.

In most cases, I think that is a deliberate distraction technique, designed to pull the discussion away from concrete, real risks around AI, or from how we hold Big Tech companies and developers accountable for the decisions they're making in designing and developing AI. It shifts the focus to hypothetical, far-fetched scenarios where the risk is the technology itself, rather than the decisions of developers or of organisations or of people. And that's really what we should be focusing on when we're talking about AI, and especially risks related to AI.

Ed: So what's an example of something that's being overlooked in the more grandiose kind of warnings about AI?

Mhairi: AI is already all around us. We're all interacting with AI daily, maybe even on an hourly basis. AI is integrated into the systems that we're interacting with all the time. So at the moment we're hearing a lot of excitement around generative AI, particularly around large language models. But that's just one form of AI. That's just one category of AI. And AI is already integrated into systems that make decisions about our access to services in the public sector, as well as in the private sector. AI is already in the technologies, in our smartphones, online, that we're interacting with all the time. Of course, we need to be anticipating what the risks are further down the line and what kind of advances in AI might be enabled in the future. But we also really need to be scrutinising the way that AI is already being used today.

Geoff: And can you tell us about some of those decisions it is making? Give us some examples.

Mhairi: Yeah, there are AI systems that are used to process information about who gets access to finance. So in banking, it's used not necessarily to make those decisions, but to inform them. It's used in the public sector to make decisions about who gets access to services. It's used in areas such as immigration, policing, education, healthcare, and of course, in all of these areas, there are very valid and legitimate things that AI could do in terms of informing decision making or processing complex information. But the risks arise where these systems are used to inform decision making without accountability, without scrutiny. If they're relied on to make those decisions, or if we don't have insight into what data or potentially what biases are involved in the decision-making processes, then it could be very, very dangerous.

Geoff: And just to take one of those as an example, you mentioned immigration. So I'm thinking about when my wife applied for her indefinite leave to remain visa, we had to compile a huge amount of information and supporting documents and then human eyes needed to go through all that to check that she met the criteria, right?

Ed: That's not the one where you did the wrong Dropbox and then ruined your holiday, is it?

Geoff: I don't want to talk about that.

Ed: Maybe that wouldn't have happened if we'd had an AI Geoff, actually.

Geoff: Well, maybe AI could replace me as a husband somewhat successfully. But presumably it has the potential to save the time that human eyes would take to look through and parse and double check all that information and then give it to a human to oversee the final decision. So that is a time saving use of AI. Are there any examples of AI being able to do things that aren't just time saving, that are creative and beyond what we would be able to do at our own speed?

Mhairi: Yeah. And that's where it starts to get really risky, because a lot of the benefits that AI currently presents are in that realm of efficiency improvements. But to ensure that that efficiency isn't coming at the cost of safety, or producing inappropriate or inaccurate results, it needs a high degree of scrutiny and oversight.

It needs to be checked that there aren't biases within those processes, that it's not perpetuating biases. You mentioned creativity, and there's an interesting discussion around the extent to which AI can do something creative, whether it has the capacity to do that. I don't think AI itself can ever be creative. It doesn't have the intellectual capacity to think for itself. But it can be used creatively. These are tools, and they can definitely be used creatively, but it's really important that we always think of them in that way: as tools that can be used by people, by humans. And it's always humans who are accountable, or who should be held accountable, for the outputs that they produce or for the decisions that are made based on uses of AI.

Ed: How is the UK doing on these issues? I mean, it does strike me listening to you that the government tends to be very underpowered on these issues, partly because technology moves so fast and government moves so slowly.

Mhairi: Yeah. I often hear the argument: 'can regulation or government keep up with the pace of innovation?' And I think sometimes that argument is used as part of the Big Tech rhetoric of, 'we need to have Big Tech players to understand what's happening, because government can't keep up.'

There's no reason why government couldn't keep up! We can regulate AI. We need to regulate AI. And it's not impossible. The UK approach is quite different from the EU approach. In the EU, there's the EU AI Act, which will be a single piece of legislation regulating AI. In the UK, there are no current plans for a single AI regulator or a single piece of legislation relating to AI. Instead, the UK has a 'pro-innovation', principles-based approach to regulating AI, which seeks to equip existing regulators to grapple with the challenges of regulating AI across their sectors.

Ed: You work a lot on thinking about the people who often aren't included in discussions around AI, but could be negatively impacted by it. Talk to us a little bit about this.

Mhairi: Yeah, this is something I'm really passionate about. I think a lot of the challenges, a lot of the problems we face at the moment, stem from the fact that the discussions around AI are really dominated by Big Tech players. Increasingly we're hearing the voices of Big Tech players, and that's where a lot of these sensational narratives around existential risk are coming from. But when we think about the risks associated with AI, it's absolutely crucial that we focus on the voices of impacted communities, people who are really affected by these technologies, people who aren't typically involved in decision making around how those technologies are designed, developed and deployed. One area I work on is engaging children in decisions around AI.

So at the Alan Turing Institute, I lead a programme of work around children's rights and AI. We're working with primary school children across Scotland to involve them in discussions around AI: to find out, first of all, what they already know about AI and how they feel about AI in their lives, but also how they would like to be involved in the future. It's often suggested that AI is so technical or so complex that most everyday people can't really understand it or can't really get involved in these discussions. That's nonsense. I think it's really quite dangerous nonsense, because when people say that, it's really a way of closing down or shutting down debates, shutting down public discussions around AI.

If anybody comes into the classrooms with these eight-year-old children who are talking about AI, you'll see that they can definitely understand it and they have a lot to say about it. Children are probably the group within our society that will be most impacted by these technologies in their future lives, but there are many, many groups who are underrepresented in design and development processes, and it's really important that we bring them into these conversations.

Ed: Just to end with, we have something on the podcast called the Geoffocracy, in which Geoff is the benign ruler. If you became the minister for AI, what would be your first act?

Mhairi: Well, yes, I have lots of ideas for this one. The main one would be that I would want to create a citizens' panel, a citizens' assembly on AI. At the moment the government is planning a global AI safety summit, but from what we've seen so far it really centres the voices of industry, of Big Tech, and really focuses on the safety of future advances in AI. What I would want to see is something which is really focused on the real, current risks of AI, looking at the ethics around AI; understanding what matters to people when we think about the future of AI; how we can harness the value and the benefits of AI and ensure that they're equitably distributed across society; and also understanding what wider public concerns are around AI and how we can address those.

Ed: Did she get the job?

Geoff: Well, I will feed that into the AI and see whether it thinks you should get the job or not!

Ed: Mhairi Aitken, it's been a pleasure to talk to you. Thank you so much for joining us.

Mhairi: Thank you.

Lauren Goodlad

Ed: So to complete our conversation, I'm delighted to say that we're joined now by Lauren Goodlad, who is Professor of English and Comparative Literature and Chair of the Critical AI Initiative at Rutgers University in the United States. Lauren, thank you so much for joining us.

Lauren: Thank you for having me.

Ed: Maybe you could just start by telling us a little bit about yourself and how an English professor finds herself chairing a Critical AI Initiative at Rutgers.

Lauren: So I'm going to give you the very quick version of this, because it could go on forever. But I'm essentially a historian. I'm a Victorianist. I started working on the history of statistics and realised that many of the basic tools that are fundamental to how AI is designed today were Victorian-era innovations in statistical modelling. They were the work of famous eugenicists like Francis Galton and other people who were interested in what are now considered to be pseudoscientific ways of thinking about biometrics and modelling people. And I realised that people who work in the technology sector, in fields like AI ethics, had become interested in this history too. So I began collaborating with technologists. I worked very closely with several AI researchers, and here I am!

Geoff: So you're saying that AI is built on a bad foundation of extrapolating poorly about human beings from data?

Lauren: I don't want to call it a bad foundation, because I actually think that it is sometimes exactly the foundation that we want for some things. But it all depends upon the quality of the data and its suitability to model what it is that you're looking to model. And it is a statistical model.

And in fact, what we're calling AI today should really be called 'data-driven predictive analytics', because that's what it is! The world is full of data, or at least the parts of the world that use the internet are, and that data is taken from the internet, from the internet of things, from devices, from sensors and cameras that are all over the place, largely without our consent, and often in violation of copyright. So that is one ethical dilemma.

Ed: And Lauren, tell us what your critical AI initiative at Rutgers does.

Lauren: Well, we do a lot of different things. We're highly interdisciplinary. We're also very international. The reason that we're working with data scientists at Pretoria is that one of the many problems with the language models about which there is so much enthusiasm today is that they are preponderantly trained on data that comes from English speakers or speakers of major European languages.

I mean, bear in mind that roughly 30 percent of people in the world have never used the internet. And what this means is that in spite of the fact that the amount of data on the internet seems enormous, it enormously over-represents the people on the internet, who are preponderantly white, preponderantly male, preponderantly North American.

And that is one of the reasons why bots mimic and pick up so much – to use the technical expression – 'garbage in', right? And that is why they need so much 'instruction' – that's a term used by the industry – from people, usually very low-paid people working in the Global South, in places like the Philippines or Kenya, labelling and annotating at industrial scale the conspiracy theories, the pornography, the disturbing stories that would embarrass the companies that build these tools.

We're producing a monoculture. And when you put that together with the first thing I said – that statistical modelling is really the genie, the secret sauce behind being able to predict what the next word or sentence might be when I put in a prompt – then you realise that this works by picking whatever is the most probable. You realise the degree to which this is normativising language, to the point, actually, even of potential model collapse. It lacks the diversity to simulate how people talk about the world. It's just a model! And it's a probabilistic model mimicking how a certain swath of people, who are hugely over-represented, are going to talk about the world, in answers that are the most probable of all possible answers.
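To make the 'most probable answer' point concrete, here is a minimal Python sketch of greedy next-word prediction over an invented toy corpus. The prompt, the candidate words and the counts below are assumptions made up purely for illustration; they are not drawn from any real model or dataset, but they show how the most frequent pattern in the training data always wins under this kind of selection.

from collections import Counter

# Toy counts (invented for illustration): continuations observed after
# the prompt "the scientist said" in a pretend training corpus.
observed_continuations = Counter({
    "he": 70,     # the over-represented pattern
    "she": 25,
    "they": 5,
})

def predict_next(counts: Counter) -> str:
    """Greedy decoding: return the single most probable next word."""
    total = sum(counts.values())
    probabilities = {word: n / total for word, n in counts.items()}
    # Whatever pattern dominates the data is always chosen.
    return max(probabilities, key=probabilities.get)

print(predict_next(observed_continuations))  # prints "he", every single time

However skewed the underlying data, greedy selection of the most probable continuation reproduces the majority pattern and nothing else, which is the normativising effect described above.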

Ed: Lauren, listening to you, it's quite a pessimistic account of the impact of AI. But you did say earlier that you thought there might be some benefits. I mean, do you see a well-regulated advance in AI as potentially offering big benefits, like the medical benefits, for example?

Lauren: Absolutely. I am not what is conventionally called a Luddite. Although, let me put in a plug for the Luddites! They were not against technology either; they were actually fighting for economic justice, not against technology. So in that sense, I guess I am a Luddite. But the use of predictive analytics to help with problems for which it is suited could be amazing, right? And certainly it can help us to build better drugs, say, through technologies like protein folding; better weather prediction; hopefully climate models that get us to do the right thing; possibly making energy use more efficient.

The issue is that the world's most powerful and lucrative tech companies, who are dominating the research in this area, are not doing those things. These companies used to do a lot more 'blue sky' work, but mostly what they're all in on now is language models. I find it very unfortunate that these powerful tech companies, which have in the past tried so hard to portray themselves as public-spirited companies that really want to do right by the world, are so completely uncritical about the copyright infringement, the carbon footprint…

I just read a story yesterday. Microsoft does the training for OpenAI, and Microsoft's use of water to cool data centres for training AI has gone up by 34% in one year, and Google's by 20%. And there are too many of them. So of the many issues that I've discussed, one of the ones that most needs regulation is that these companies are just too large and too powerful.

Geoff: Well, can we talk a bit about creativity and AI? Because I've been thinking about this quite a lot in the light of the Hollywood writers’ strike. Because one of the issues is that there is a fear that studios will use AI to generate a show or a script. And that way they don't have to pay a writer for the IP, the intellectual property: it belongs to the studio, and then they would hire in good writers to make sense of the garbled mess that the AI churned out.

So what about if AI could write a show better than Succession? Because you mentioned the Luddites before, and I was thinking, my problem isn't with the machine loom. That's great. It feels like an efficiency that stops humans having to do something mundane. It's about those people getting to continue living a life of purpose and financial security. But it feels different with creativity, because it doesn't feel like you're freeing a writer from a mundane task so they can live their best life.

Lauren: Well, that's a great philosophical question for a creative writer to ponder. I'm sure Ishiguro is busy at work on this very novel, or something like it. But we're talking about the most gifted writers in the world. These systems will never be able to do that.

Now, could there be some kind of wholly different technology? Possibly – one that tries to figure out the many different processes involved in what makes us what we are: embodied people who feel and think and process information through an entire body, not just the brain. I think it's a philosophical question, but you're absolutely right: nobody asked for machines that would write things. And right now, you've put your finger on where the stakes are. A studio knows that an AI system is going to produce a garbage screenplay. But if they can hire a writer to revise rather than to do a first draft, and save money, then why not?

Ed: Let's end, Lauren. There's clearly a lot of dangers and risks here that you perceive. Is there something cheerful you'd like to leave us with about the work you do?

Lauren: Well, people support us. People want to be empowered. There are a lot of people who do not buy the narrative, which is: this is going to do great things at some point, but it could also do terrible things, and we've got to regulate it because the stakes are huge – but as for what's actually going on right now? 'Don't worry about that. We're not even there yet.'

And if this begins to seem confusing and disabling, it is! Because it's an incoherent mess of warnings and hypings, hypings that seem like warnings and warnings that hype, and all of it makes people feel dumb and disempowered. Anybody can understand how AI works. It is not – to use the cliché – rocket science; it is statistics!

I think that it's going to take time, as with any new technology. I can't tell you right off the top of my head how long it took to make trains safer. Airlines, think about how heavily regulated they are. Think about what it would be like if Boeing or Airbus said, ‘we've got this great plane. It's new. It's better than ever. But we're not going to tell you how we built it. We're not going to let other engineers come in.’ I think that's good news because it means that we're going to need to work together, especially younger people, to empower ourselves as citizens and demand the kind of regulation that says ‘this is not the AI that we need. We need AI that serves the public interest.’

Ed: Well look, Lauren Goodlad, it's so interesting to talk to you and to hear your perspective, looking at AI from a completely different point of view. Thank you so much for joining us.

Lauren: Thank you.