[imagesource:wikimediacommons]
Christopher Nolan has drawn a dramatic comparison between rapidly developing artificial intelligence and the creation of the atomic bomb, the subject of his new movie.
In a conversation following a preview screening of Oppenheimer in New York, Nolan expressed his concern about the rapid rise of AI alongside other panellists, including Los Alamos National Laboratory director Dr Thom Mason, physicists Dr Carlo Rovelli and Dr Kip Thorne, and author Kai Bird, who co-wrote American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer, the book on which Nolan’s film is based. Variety notes his concerns:
“The rise of companies in the last 15 years bandying words like algorithm — not knowing what they mean in any kind of meaningful, mathematical sense — these guys don’t know what an algorithm is,” Nolan shared at the screening. “People in my business talking about it, they just don’t want to take responsibility for whatever that algorithm does.”
“Applied to AI, that’s a terrifying possibility. Terrifying,” Nolan continued. “Not least because, AI systems will go into defensive infrastructure ultimately. They’ll be in charge of nuclear weapons. To say that that is a separate entity from the person wielding, programming, putting that AI to use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”
In Nolan’s film, Cillian Murphy plays the theoretical physicist J. Robert Oppenheimer, and the story chronicles the moment he was tapped by the US military to develop the atomic bomb during World War II.
When asked, “Do you think we’ll keep re-examining Oppenheimer? As our understanding of quantum physics continues, as our taming of the atom continues?”, Nolan answered:
“I hope so,” Nolan stated. “When I talk to the leading researchers in the field of AI right now, for example, they literally refer to this — right now — as their Oppenheimer moment. They’re looking to history to say, ‘What are the responsibilities for scientists developing new technologies that may have unintended consequences?’”
The director’s warning comes at a crucial time, as Hollywood writers are on strike over the use of AI, bringing the entertainment industry to a near-complete halt:
“With the labor disputes going on in Hollywood right now, a lot of it — when we talk about AI, when we talk about these issues — they’re all ultimately born from the same thing, which is when you innovate with technology, you have to maintain accountability,” Nolan stated.
Nolan admitted that he doesn’t think Oppenheimer’s story offers any easy answers to these questions about AI, “but at least it can show where some of those responsibilities lie and how people take a breath and think, ‘Okay, what is the accountability?’”
Even AI pioneer Dr Geoffrey Hinton has expressed regret about his life’s work, having laid the groundwork for the tech industry’s investment in generative artificial intelligence, which powers chatbots such as ChatGPT.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr Hinton told the Times. “It is hard to see how you can prevent the bad actors from using it for bad things.”
Now, everyone in Silicon Valley and beyond needs to figure out who is taking responsibility before AI starts wreaking havoc.
The Guardian wrote a listicle, ‘Five ways AI might destroy the world’, noting ominously that “Everyone on Earth could fall over dead in the same second”, while another article lists the five most disturbing ways AI is currently being used.
“Terrifying” might just be an understatement.
[source:variety]