OpenAI Warns Advanced AI Models Could Spark Major Cybersecurity Threats
OpenAI says its next AI models carry a “high” cybersecurity risk because their skills are growing fast. These models might create zero-day exploits for protected systems or help pull off big intrusions on companies and factories, according to its blog post reported by Yahoo and Reuters, Axios, and NDTV.
To fight back, OpenAI is building stronger defenses into the models themselves. They’re making tools that let security teams check code and fix holes faster. Basic protections include locking down access, hardening servers, blocking outgoing traffic, and constant monitoring.
Soon, OpenAI plans a program giving cyberdefense pros tiered access to beefed-up features. They’re also starting the Frontier Risk Council with outside security experts. It kicks off on cybersecurity but will cover other big AI risks later.
Vulnerabilities Show Up in OpenAI-Related Tools Too
Problems aren’t just ahead-looking. OWASP published a list of top 10 AI agent threats, per Security Boulevard. Researchers found over 30 flaws in AI coding tools that let attackers steal data or run code remotely. These hit IDEs like Cursor, GitHub Copilot, and others through prompt injections that hijack AI agents and trigger normal IDE features for harm.
One issue stands out: a command injection in OpenAI’s Codex CLI, CVE-2025-61260. It runs commands from tampered config files without asking the user, as detailed in The Hacker News. Attacks often chain prompt tricks with auto-approved actions to edit settings or leak files.
- Edit settings.json to run malicious code via paths like php.validate.executablePath.
- Overwrite workspace files for code execution without reopening.
- Read sensitive files and send data to attacker servers via JSON schemas.
Fixes mean sticking to trusted files and servers, reviewing added sources, and applying least privilege to AI tools.
Gartner’s Take on AI Browsers, Including OpenAI’s
OpenAI offers one of these agentic AI browsers, but Gartner tells companies to block them all for now. Default settings favor ease over safety, letting AI chatbots hit bad sites or store sensitive data in risky clouds, per a ZDNet report on Gartner’s advisory.
Risks include prompt injections, data theft, and skipped security checks. Employees might feed company secrets to AI without knowing. Gartner pushes risk checks first—most will fail today.
OpenAI’s steps like the Risk Council show they’re ahead on defenses, but these examples underline why AI cybersecurity needs constant work.