
Frontier model risks will top AI Safety Summit agenda

The UK government says the delegates will examine the best way to use advanced AI models as safely as possible.

By Ryan Morrison

The first day of the UK’s AI Safety Summit will see delegates focus on the risks posed by next-generation frontier models. In two roundtable discussions, the attendees will talk about how developers should safely scale models, what the international community should do, and how policymakers can mitigate risk. The summit has been billed as the UK’s opportunity to put itself at the centre of the global debate on AI safety, but many in the industry have criticised the way it has been organised, saying that it feeds the agenda of Big Tech companies.

The summit will be held at Bletchley Park in Milton Keynes, home of the Second World War British code-breakers. (Photo by Gordon Bell/Shutterstock)

The summit is designed to bring countries, academics and the biggest AI labs together to discuss how to safely utilise the most capable AI models. Microsoft, OpenAI, Google, and Anthropic are expected to be at the conference, which is being held at Bletchley Park on 1 and 2 November.

Details have been drip-fed for the past few months, and the latest update, released today by the Department of Science, Innovation and Technology (DSIT), includes the agenda for the first day, confirming the focus on models currently in development rather than AI tools already in use. This will include GPT-5 from OpenAI, Claude 3 from Anthropic, and Google’s Gemini.

This approach has been criticised by civil society groups, AI start-ups and privacy campaigners. They argue that current AI technologies present a real danger, including around the use of facial recognition and other forms of biometric analysis. The focus on future risk ties into the government’s approach to AI regulation, with a look at ways to safely allow innovation using AI rather than direct regulation.

DSIT argues that next-generation models present the biggest risk and are therefore the most important place to start. “These are the most advanced generation of highly capable AI models, most often foundation models, that could exhibit dangerous capabilities,” a spokesperson said. “It is at the frontier where the risks are most urgent given how fast it is evolving, but also where the vast promise of the future economy lies.”

Digital ministers from around the world, civil society groups, and the largest AI companies will begin with a discussion of the risks emerging from the rapid advances of AI, before moving on to examine how to capitalise on its benefits safely.

“AI holds enormous potential to power economic growth, drive scientific progress and deliver wider public benefits, but there are potential safety risks from frontier AI if not developed responsibly,” summit organisers warned.

The day will begin with sessions on understanding the national security risks frontier AI presents as well as the dangers a loss of control over the model could bring. There will also be a discussion on issues surrounding misinformation, election disruption and an erosion of social trust as a result of the ability of AI to create fake material.


The second half of the day will involve a study into how to utilise the models safely, with delegates considering how risk thresholds, effective safety assessments, and robust governance and accountability mechanisms can be defined. The delegates will then look at how national policymakers can better manage the risks and harness the opportunities of AI to deliver economic and social benefits. 

The final session of the first day will be a panel discussion on the transformative opportunities of AI for the “public good” in the short and long term. This will include a look at how teachers and students can use AI in education. 

New £400k AI risk challenge

The agenda comes as DSIT also unveiled a £400,000 investment fund called the Fairness Innovation Challenge, designed to support schemes offering solutions to AI bias and discrimination. Winners will receive investments of up to £130,000 for offering a new approach to the bias problem that considers the wider social context of model development.

Fairness in AI systems is one of the government’s key principles for AI, as set out in the AI Regulation White Paper and part of the agenda for the upcoming summit. DSIT said AI is a powerful tool for good, presenting near-limitless opportunities to grow the global economy and deliver better public services.

In the UK, the NHS is already trialling AI to help medical professionals identify cases of breast cancer, develop new drugs and improve patient outcomes. The government is also using it to tackle climate change and other challenges, but the risks have to be identified and solutions found if the technology is to remain viable and scale.

Participants in the challenge will have access to King’s College London’s generative AI model, which was built on anonymised records of ten million British NHS patients to predict possible health outcomes. Part of the challenge will see them work on potential bias in that model; the second part asks them to present solutions to tackle discrimination in their own models and focus areas.
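The article does not describe how challenge entrants will measure bias, and the King’s College London model’s internals are not public, so the following is only a minimal, hypothetical sketch of the kind of group-fairness check a bias audit of a predictive model might include. The data, group labels and metrics here are illustrative assumptions, not details from the challenge.

from collections import defaultdict

# Entirely synthetic (group, actual_outcome, model_prediction) triples.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

stats = defaultdict(lambda: {"n": 0, "predicted_pos": 0, "actual_pos": 0, "true_pos": 0})
for group, actual, predicted in records:
    s = stats[group]
    s["n"] += 1
    s["predicted_pos"] += predicted
    s["actual_pos"] += actual
    s["true_pos"] += actual and predicted  # counts correct positive predictions

for group, s in stats.items():
    selection_rate = s["predicted_pos"] / s["n"]
    tpr = s["true_pos"] / s["actual_pos"] if s["actual_pos"] else float("nan")
    print(f"{group}: selection rate={selection_rate:.2f}, true positive rate={tpr:.2f}")

A large gap between groups on either measure, how often the model flags people and how often it catches genuine cases, is one simple signal of the sort of bias and discrimination the Fairness Innovation Challenge asks participants to address.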

Secretary of State for Science, Innovation and Technology, Michelle Donelan, said it’s important to face up to the risks of frontier AI in order to “reap the enormous benefits this transformative technology has to offer”. She added: “AI presents an immense opportunity to drive economic growth and transformative breakthroughs in medicine, clean energy and education. Tackling the risk of AI misuse, so we can adopt this technology safely, needs global collaboration.”

Read more: Concern as AI Safety Summit ‘limited to 100 delegates’
