Anthropic, Google, Microsoft and OpenAI Appoint Executive Director for the Frontier Model Forum and Unveil $10M AI Safety Fund
In Brief
Anthropic, Google, Microsoft, and OpenAI have introduced Chris Meserole as the inaugural Executive Director of the Frontier Model Forum.
Simultaneously, they have announced a substantial AI Safety Fund, with over $10 million in funding.
Anthropic, Google, Microsoft, and OpenAI have appointed Chris Meserole as the inaugural Executive Director of the Frontier Model Forum, an industry consortium dedicated to promoting the secure and ethical development of cutting-edge AI models globally. Alongside the appointment, the Forum introduced an AI Safety Fund of more than $10 million, which aims to drive advancements in AI safety research.
Executive Director Chris Meserole brings an extensive background in technology policy and the governance of emerging technologies. Before his appointment, he served as the Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution. In his new capacity, Meserole will lead efforts to advance AI safety research, identify best practices for secure AI model development, disseminate knowledge to stakeholders, and support initiatives that harness AI to address societal challenges.
“The most powerful AI models hold enormous promise for society, but to realize their potential, we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum,” said Meserole.
The AI Safety Fund was created in response to the rapid progression of AI capabilities over the past year, which has heightened the need for academic research into AI safety. A collaboration between the Frontier Model Forum and philanthropic partners, the fund will support independent researchers worldwide who are affiliated with academic institutions, research organizations, and startups.
The primary contributors to this initiative are Anthropic, Google, Microsoft, and OpenAI, along with philanthropic organizations such as the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. Their combined contributions exceed $10 million, with the anticipation of additional donations from other partners.
Earlier this year, Forum members pledged voluntary commitments at the White House, including enabling third-party discovery and reporting of vulnerabilities in their AI systems. The AI Safety Fund aligns with this commitment by offering external communities funding to scrutinize frontier AI systems. A diversity of voices and perspectives will enrich the global discourse on AI safety and expand the general AI knowledge base.
Boosting AI Safety and Collaboration
The AI Safety Fund primarily concentrates on bolstering the development of new evaluation techniques and red teaming approaches for AI models, aiming to uncover potential hazards. Red teaming is a structured process for scrutinizing AI systems to identify harmful capabilities, outputs, or infrastructural threats. Increased funding in this domain could elevate safety and security standards and furnish valuable insights into mitigating challenges posed by AI systems.
The Fund will issue a call for research proposals in the coming months. It will be administered by the Meridian Institute, with advice from an advisory committee of independent external experts, AI industry professionals, and grantmaking specialists.
A responsible disclosure process is also being developed, through which frontier AI labs can share information about vulnerabilities and hazardous capabilities discovered in frontier AI models, along with their mitigations. Some Forum companies have already identified such issues in the context of national security; these serve as case studies to guide other labs in implementing responsible disclosure procedures.
Looking ahead, the Frontier Model Forum plans to establish an Advisory Board in the coming months, tasked with steering its strategic priorities and representing diverse expertise and perspectives. The Forum will regularly release updates, including the addition of new members. Simultaneously, the AI Safety Fund will issue its first call for proposals, with grants awarded shortly thereafter. In addition, the Forum will continue to release technical findings as they become available.
The Forum’s overarching goal is to collaborate with Chris Meserole and expand engagement with the broader research community. This includes partnerships with organizations such as the Partnership on AI, MLCommons, and other leading NGOs, government entities, and multinational organizations. Together, they aim to harness the potential of AI while ensuring its secure and ethical development and utilization.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Agne is a journalist who covers the latest trends and developments in the metaverse, AI, and Web3 industries for the Metaverse Post. Her passion for storytelling has led her to conduct numerous interviews with experts in these fields, always seeking to uncover exciting and engaging stories. Agne holds a Bachelor’s degree in literature and has an extensive background in writing about a wide range of topics including travel, art, and culture. She has also volunteered as an editor for the animal rights organization, where she helped raise awareness about animal welfare issues. Contact her on agnec@mpost.io.