28 July 2023

Tech firms form body to ensure safe development of AI models

Although the Frontier Model Forum currently has only four members, the collective said it is open to new members…reports Asian Lite News

Four major tech companies — Google, OpenAI, Microsoft, and Anthropic — have come together to form a new industry body designed to ensure the “safe and responsible development” of “frontier AI” models.

In response to growing calls for regulatory oversight, the firms have announced the formation of the “Frontier Model Forum”, which will draw on the technical and operational expertise of its member companies to benefit the entire AI ecosystem and to develop a public library of solutions supporting industry best practices and standards.

The Forum aims to advance AI safety research to promote responsible development of frontier models and minimise potential risks; identify safety best practices for frontier models; share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and support efforts to leverage AI to address society’s biggest challenges.

Although the Frontier Model Forum currently has only four members, the collective said it is open to new members.

Qualifying organisations must be developing and deploying frontier AI models, as well as showing a “strong commitment to frontier model safety”.

“We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone,” said Kent Walker, President, Global Affairs, Google & Alphabet.

Over the coming months, the Frontier Model Forum will establish an Advisory Board, representing a diversity of backgrounds and perspectives, to help guide its strategy and priorities.

The founding companies will also establish key institutional arrangements, including a charter, governance and funding, with a working group and executive board to lead these efforts.

“We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a joint statement on Wednesday.

Over the coming year, the Forum will focus on three key areas to support the safe and responsible development of frontier AI models: promoting knowledge sharing and best practices among industry, governments, civil society, and academia; supporting the AI safety ecosystem by identifying the most important open research questions on AI safety; and facilitating information sharing among companies and governments.
