The OpenAI Board has formed a Safety and Security Committee to make recommendations on safety and security decisions for all OpenAI projects. The committee’s first task will be to evaluate and further develop OpenAI’s existing processes and safeguards over the next 90 days. At the end of that period, the committee will share its recommendations with the full OpenAI Board for review, after which OpenAI will publicly release an update on the process changes it adopts.

Security leaders weigh in

Stephen Kowski, Field CTO at SlashNext Email Security+:

“OpenAI creating a new AI safety committee, and starting to train its next major AI model, is no surprise, especially after the recent agreement in Seoul where global leaders committed to responsible AI development. Nearly every government is laser-focused on AI governance right now, and OpenAI's own partners, like Microsoft, just signed onto international AI safety pledges. So, really, OpenAI and similar vendors have little choice but to put these kinds of controls and oversight in place if they want to keep operating and innovating in today's environment. By being proactive, OpenAI gets a strong voice in shaping these controls, so there is a strong incentive for the company to initiate this type of governance on its own.”

Narayana Pappu, CEO at Zendata: 

“This news puts OpenAI on par with institutions like Google, which has had a dedicated safety and security board since 2019. Although the field of AI is fairly new, there are parallel institutions of equal significance in other industries, such as the institutional review boards that govern medical research on human subjects. Their structure, which includes non-technical as well as outside, unaffiliated members, is certainly relevant and applicable to AI security and safety and should be considered by OpenAI moving forward.”

John Bambenek, President at Bambenek Consulting:

“The one obviously concerning issue is that, based on this announcement, I don’t see any outside involvement; the committee appears to consist entirely of OpenAI employees and executives. It’ll be difficult to prevent an echo chamber effect from taking hold that may overlook risks from more advanced models.”

Nicole Carignan, Vice President of Strategic Cyber AI at Darktrace:

“As AI innovation continues to unfold at a rapid pace, we hope to see similar commitments for data science and data integrity. Data integrity, testing, evaluation and verification, as well as accuracy benchmarks, are key components in the accurate and effective use of AI. Encouraging diversity of thought in AI teams is also crucial to help combat bias and harmful training data and/or outputs. Most importantly, AI should be used responsibly, safely and securely. The risk AI poses is often in the way it is adopted.”