UK AI Safety Lab Ventures Across the Pond with New US Location
The United Kingdom's AI Safety Institute plans to expand internationally by opening a new location in the United States.
On May 20, UK Technology Secretary Michelle Donelan announced that the Institute would open its first overseas office in San Francisco this summer.
According to the announcement, the strategic choice of San Francisco will allow the UK to engage with some of the world’s largest AI labs, which are headquartered between London and San Francisco, while “harnessing the wealth of technical talent available in the Bay Area.”
It also said the move would help “strengthen” relationships with major US players to drive global AI safety “for the public good.”
The AI Safety Institute’s London office already has a team of 30 people, which is set to expand and gain further expertise, particularly in risk assessment of cutting-edge AI models.
Donelan said the expansion reflects the UK’s leadership and vision for safe AI practices.
“This is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global perspective, strengthening our partnership with the US and paving the way for other countries to leverage our expertise in AI safety.”
The expansion follows the UK’s landmark AI Safety Summit, held at Bletchley Park in November 2023, the first global summit focused on AI safety.
The event featured leaders from around the world, including from the United States and China, and leading voices in AI, including Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Elon Musk.
In the latest announcement, the UK said it is also publishing the Institute’s latest results from safety tests conducted on five publicly available advanced AI models.
The Institute said the models were anonymized and that the results provide a “snapshot” of their capabilities rather than designating them as “safe” or “unsafe.”
Findings included that several models were able to complete cybersecurity challenges, though others struggled with more advanced ones. Several models were also found to have doctoral-level knowledge of chemistry and biology.
All of the tested models were found to be “highly vulnerable” to basic jailbreaks, and the Institute concluded that none were able to complete “complex and time-consuming tasks” without human supervision.
Institute chair Ian Hogarth said the assessments would contribute to an empirical understanding of the models’ capabilities.
“AI safety is still a very young and emerging field. These results represent only a small portion of the assessment approaches that AISI is developing.”