From tomorrow, the UK government is hosting the first global AI Safety Summit, bringing together about 100 people from industry and government to develop a shared understanding of the emerging risks of leading-edge AI while unlocking its benefits. 

The event will be held at Bletchley Park, a site in Milton Keynes that became the home of Britain’s code breakers during World War II and saw the development of Colossus, the world’s first programmable digital electronic computer, used to decrypt the German high command’s Lorenz-enciphered messages; that code-breaking effort is credited with shortening the war by at least two years.

“AI will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve problems we once thought beyond us. But it also brings new dangers and new fears,” said UK Prime Minister Rishi Sunak in a speech last week, adding that one of the aims of the summit will be to attempt agreement on the first ever international statement about the nature of the risks posed by AI.

In September, the UK government released an agenda ahead of the summit, which included the development of a shared understanding of the risks posed by frontier AI, alongside calls for a process of international collaboration on AI safety, including how best to support national and international frameworks.

These talking points were reinforced by a discussion paper published by the government last week, which is due to be distributed to summit attendees to inform discussions.

“The UK wants to be seen as an innovation hub and [AI technologies are] clearly going to be a massive area of growth and development, both for the economy and the workforce,” said Philip Blows, CEO of StreaksAI, a UK-based developer of AI technology.

However, while the consensus seems to favor an event where the risks of the technology are discussed, the format of the AI Safety Summit has faced some criticism. While some high-profile attendees have been announced, such as US Vice President Kamala Harris, confirmation of the full guest list has not yet been made public.

Who gets to sit at the table and make decisions about the most important safety issues and potential harms is really critical, said Michael Bak, executive director of the Forum on Information and Democracy.

“If that's a close-knit group of people, dominated by the private sector… that would concern me,” Bak said. “My desire would be that there would be recognition of the value that civil society brings to the table, in addition to the benefit of technologists who are developing these products for private interests.”

Hosting an AI Safety Summit is a “positive first step” as it means governments are “acknowledging that there are risks attached to this technology,” said Shweta Singh, assistant professor at the University of Warwick, whose research includes ethical and responsible AI.

There’s a concern, however, that ahead of the summit the talking points have focused on some of the more headline-grabbing existential threats of AI, threats the government itself has said are very unlikely to materialize, with less discussion of harms such as bias and disinformation, which are already playing out in real time.

For example, when the large language models (LLMs) behind popular generative AI tools scrape the internet to form the building blocks of their learning, they bring with them the biases that already exist within that content. In one instance, an Asian woman posted on social media that when she asked AI image generator Playground AI to turn a selfie she’d taken into “a professional LinkedIn profile photo,” it made her look like a white woman.

The current lack of any global consensus on how to regulate AI demonstrates just how complex an issue it is.

When it comes to regulating technology, getting the balance right is really important, said Sarah Pearce, partner at Hunton Andrews Kurth, who has seen incoming requests related to AI governance triple in the past year.

“When you hear people like [Prime Minister] Rishi Sunak say that the legislators need to learn to understand the technology more in order to be able to put together the appropriate regulation, for me that makes sense at this moment in time and I think it's the right approach.”

In March, the UK government published a white paper outlining its AI strategy, stating that it would seek to avoid what it called “heavy-handed legislation” and would instead call on existing regulatory bodies to apply current regulations to ensure that AI applications adhere to guidelines, rather than drafting new laws.

However, Pearce added, that’s also not to say that the EU’s “more advanced… and more prescriptive” approach to AI regulation, as set out in its draft AI Act, is wrong or will stifle innovation.

“I'm a realist as well as an idealist, and while I think that having global regulation would be the ideal, if I were to put on my realist hat, I can acknowledge that that might not be possible. However, that’s where this kind of summit could prove highly useful, and I do hope we see a lot more harmony as a result,” she said.

There has to be more cooperation and more coordination across tech companies and governments, said the University of Warwick's Singh, arguing that while we wait for the law to catch up, there needs to be more of a consensus around developing a set of ethical principles and guidelines that focus on the prevention of harm.

With elections set to take place in the UK and US next year, Singh said tackling this issue is “the need of the hour,” adding that the kind of harm this technology could do to the democratic process is something that should worry us all.

However, while more clearly needs to be done, Singh said the fact that the summit is happening at all is itself an acknowledgment that the risk exists and something needs to be done about it. She noted that the role played by deepfakes and disinformation during an election campaign will likely be a turning point for many politicians in getting serious about tackling this issue.

While no one is under any illusions that global governments are going to suddenly announce a unified regulatory framework for AI in the aftermath of the summit, there does seem to be widespread consensus that this summit shouldn’t be a "one and done" event.

This is something that could kickstart a series of summits and ultimately lead to a form of regulation, Pearce said, adding that she’d like to see it pave the way for future global summits that ensure some kind of global alignment on our approach to AI development and use.

Future summits that focus on growth and innovation would also be welcome, said Blows, who acknowledged that while discussions around the risks and concerns of AI are valid, it would be nice to see the conversation balance out in the future via events and media headlines that focus on the technology’s potential for good.

“We do need to look at what impact AI is going to have on the current economy and the jobs that we currently do, and hopefully balance that with what opportunities, new industries, and new jobs AI is going to create,” Blows said.

Leadership in this space also needs to emerge in the coming months, said the Forum on Information and Democracy's Bak. While he applauded the UK government for trying to grasp this particular nettle, he said any future policy or regulatory work addressing the impact of these frontier technologies needs to reflect more than just the views of those who can afford a seat at the table, and to focus on the power imbalances that exist between civil society and the corporate world.

“We need to understand that even though the technology may be developed in the global north, its impacts are felt across the world and there's an added responsibility for those who are creating it, those who are implementing it, and therefore governments who want to take an active role in it,” he said.
