OpenAI is sounding the alarm on the cyber abilities of its own products

Editorial Team

OpenAI says the cyber capabilities of its frontier AI systems are advancing rapidly and cautioned on Wednesday that upcoming models are likely to reach a “high” risk level, according to Axios.

OpenAI noted that its most recent releases already show substantial jumps in capability, particularly as the models become better at operating autonomously for extended periods, which opens the door to brute-force-style activity.

The company reported that GPT-5 achieved a 27% score on a capture-the-flag challenge in August, while GPT-5.1-Codex-Max reached 76% last month.
“We expect upcoming AI models to follow this same trajectory,” the report says. “In anticipation, we are preparing and evaluating as if every new model could reach ‘high’ levels of cybersecurity capability as defined by our Preparedness Framework.”

As these models become more capable, the pool of individuals who could carry out cyberattacks may widen significantly.

This is not the company’s first capability-risk warning; it issued a similar one in June about bioweapons risks, and the release of ChatGPT Agent in July was ultimately rated “high” in that category. “High” is the second-most severe rating, positioned just below the “critical” category, at which point models would not be safe for public deployment.

Frontier AI systems, not only OpenAI’s, are becoming increasingly adept at identifying security weaknesses. The trend is clear: as AI becomes more advanced, it quickly adds to the arsenal of cybercriminals eager to use it for harm.

Recently, OpenAI said it has expanded its industry-wide collaborations to address cybersecurity risks, including the Frontier Model Forum, founded in 2023 with other leading labs. The company also plans to launch a separate Frontier Risk Council, an advisory body intended to bring experienced cyber defenders and security experts into direct collaboration with OpenAI teams.

Additionally, OpenAI is privately testing Aardvark, a tool designed to help developers uncover vulnerabilities in their products. Access requires an application, and according to the company, Aardvark has already identified critical security issues.

Still, it is worth noting that testimonies from former employees paint a darker picture of safety practices at OpenAI and other AI companies. Safety teams have been cut across the industry, and a culture of breakneck advancement at any cost appears to dominate the field.

MENA TECH – The leading Arabic-language media platform for technology and business