© 2025 WGCU News
PBS and NPR for Southwest Florida

The challenges of ensuring 'Frontier' AI models are safe & beneficial to humans

ChatGPT launched on Nov. 30, 2022, as a free website and app that let users converse with what seemed like an extremely knowledgeable, easy-to-use chatbot.

While this advanced level of AI may have seemed to come out of nowhere, ChatGPT was actually just the next version of the Large Language Models (LLMs) that OpenAI had been developing. Similar models built on essentially the same technologies were in development elsewhere, but none had been released to the public with the easy access OpenAI provided through ChatGPT. GPT-2 came out in early 2019 and was publicly available, but it lacked such an accessible interface and was far less capable.

At release, ChatGPT ran on GPT-3.5. OpenAI has since continued releasing more advanced models, and many other companies and organizations have released their own LLMs built on essentially the same technologies. There are dozens of them, some more powerful than others.

Frontier AI models are the most highly capable ones, such as OpenAI's GPT-4o, Google's Gemini 1.5, Anthropic's Claude 3.5, and Meta's Llama 3. They represent advances in language processing, reasoning, and multimodal capabilities, and they have turned out to be able to perform functions beyond what their creators originally envisioned. These are the models on the cutting edge of AI development, and many experts warn that frontier models could pose risks to public safety and could have dangerous capabilities.

The Frontier Model Forum is an industry-supported non-profit focused on addressing these significant risks to public safety and even national security. Its members currently include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI. Its core mandates are to identify best practices and support standards development, and to advance science and independent research in the field of AI.

We meet the Executive Director of the Frontier Model Forum, Chris Meserole. He is an expert on AI safety, international governance, and global cooperation. His background is in what’s called ‘interpretable machine learning’ and in ‘computational social science.’ Prior to joining the Frontier Model Forum, he served as Director of the AI and Emerging Technology Initiative at the Brookings Institution and was a fellow in its Foreign Policy program.

We spoke while he was in town to give a talk for the Naples Council on World Affairs. You can hear his talk on WGCU-FM on Sunday, March 30 at 8pm.
