OpenAI Releases GPT-5.1-Codex-Max, Boosting Coding Tasks in Codex
OpenAI announced GPT-5.1-Codex-Max on November 19, 2025, as the new default model in its Codex coding environments. This update replaces the previous GPT-5.1-Codex and focuses on handling large-scale software development tasks more reliably. According to a Yahoo! News report from that day, the model is trained to work across multiple context windows, making it suited to extended sessions without losing track of details.
What Makes GPT-5.1-Codex-Max Different
The core upgrade is a process called compaction, which compresses key information when the context window fills up. This lets the model manage millions of tokens across long-running jobs such as full project refactorings or multi-hour debugging sessions. WEEL’s coverage on November 20 explains how this enables 24-hour-plus autonomous work, something earlier models struggled with. OpenAI’s tests show it maintains consistent performance in these scenarios.
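OpenAI has not published how compaction works internally. As a rough illustration of the general idea only (rolling summarization once a token budget is exceeded), here is a minimal Python sketch; `count_tokens`, `summarize`, and `compact` are hypothetical stand-ins, not OpenAI APIs:

```python
# Illustrative sketch of context compaction: when a session's history
# exceeds a token budget, the oldest entries are collapsed into a
# summary so the session can keep running. All names are hypothetical.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer.
    return len(text.split())

def summarize(entries: list[str]) -> str:
    # Stand-in for a model-generated summary of older turns.
    return "SUMMARY(" + "; ".join(e[:20] for e in entries) + ")"

def compact(history: list[str], budget: int) -> list[str]:
    """Fold the two oldest entries into one summary until under budget."""
    while sum(count_tokens(e) for e in history) > budget and len(history) > 2:
        history = [summarize(history[:2])] + history[2:]
    return history

# A long session: ten verbose steps, far over a 200-token budget.
history = [f"step {i}: " + "token " * 50 for i in range(10)]
history = compact(history, budget=200)
```

The real mechanism presumably preserves far more structure (open files, task state, recent diffs) than a naive text summary, but the budget-then-collapse loop captures why the model can keep going past a single context window.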
On efficiency, GPT-5.1-Codex-Max uses about 30% fewer thinking tokens than GPT-5.1-Codex for similar results at the medium reasoning setting, as noted in the Yahoo! News report. It also introduces an “xhigh” reasoning-effort option for tougher problems, allowing deeper analysis when needed.
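If the new level is selected the way existing reasoning levels are in the Codex CLI, the configuration might look like the following sketch. Note the `~/.codex/config.toml` path and the `model_reasoning_effort` key are assumptions drawn from the CLI’s current configuration conventions, not details confirmed by the cited coverage; check `codex --help` or the CLI docs for the exact key names:

```toml
# ~/.codex/config.toml (hypothetical example)
model = "gpt-5.1-codex-max"
model_reasoning_effort = "xhigh"   # deeper analysis for tougher problems
```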
Benchmark scores back up the gains. On SWE-Lancer IC SWE, a test mimicking real dev tasks, it hit 79.9% accuracy, up from 66.3% on the old model, per WEEL. On SWE-Bench Verified (500 samples), it scored 77.9%, beating GPT-5.1-Codex’s 73.7% and Google’s Gemini 3 Pro’s 76.2%. Terminal-Bench 2.0 gave it 58.1%, ahead of GPT-5.1-Codex (52.8%) and Gemini 3 Pro (54.2%). A WWWhat’sNew piece from November 21 confirms these results, highlighting its edge in complex reasoning.
- Trained for Windows environments, a first for OpenAI models (Yahoo).
- Supports terminal operations and code reviews with better context retention (WWWhat’sNew).
- Reduces costs through token savings while keeping or improving output quality (all sources).
How to Get Started with It in Codex
It’s available now in Codex via ChatGPT’s Plus, Pro, Business, Edu, and Enterprise plans. You can use it in the Codex CLI—install with `npm i -g @openai/codex`, then run `codex` and select the model—or through IDE extensions for VS Code, Cursor, and similar tools, as detailed in WEEL’s guide. API access is coming soon for direct integration.
OpenAI’s own engineers use it weekly, and it has helped increase their pull requests by 70% (WWWhat’sNew). For security, it runs in an isolated local sandbox by default, with monitoring to catch issues, though it doesn’t reach the highest cybersecurity tier yet.
This release pushes Codex toward more agent-like coding support, where the AI sticks with you through entire workflows.