Posts

Showing posts from March, 2026

Claude Code Source Leak: How a Simple npm Mistake Exposed Anthropic’s Internal AI Engine

A recent incident involving Anthropic has drawn significant attention across the cybersecurity and AI communities after internal source code for its AI coding assistant, Claude Code, was unintentionally exposed online.

What Happened?
Anthropic confirmed that a release of Claude Code (version 2.1.88) accidentally included a source map file within its npm package. This file, typically used for debugging, allowed anyone to reconstruct the original TypeScript source code, effectively exposing the internal workings of the application. The leak consisted of roughly 500,000 lines of code spread across nearly 2,000 files, giving a detailed look into the architecture, tools, and orchestration logic behind the AI-powered coding assistant. Anthropic clarified that the incident was caused by a packaging error and human oversight, not an external breach or cyberattack. Importantly, no customer data or credentials were compromised.

What the Leak Revealed
The exposed codebase provided deep insig...
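
To see why a shipped `.map` file is so revealing: modern bundlers usually embed every original source file verbatim in the map's `sourcesContent` array, so recovery requires no reverse engineering at all. The sketch below is illustrative only; the file name `src/agent.ts` and the snippet inside it are invented, not taken from the leaked code.

```javascript
// Minimal illustration: a JS source map's "sourcesContent" array embeds
// the original source files verbatim, so anyone holding the .map file
// can dump the pre-compilation TypeScript straight back out.
function recoverSources(mapJson) {
  const map = JSON.parse(mapJson);
  const out = {};
  (map.sources || []).forEach((name, i) => {
    const content = (map.sourcesContent || [])[i];
    if (content != null) out[name] = content; // original file recovered
  });
  return out;
}

// A tiny, made-up source map shaped like the ones bundlers emit:
const exampleMap = JSON.stringify({
  version: 3,
  sources: ["src/agent.ts"],
  sourcesContent: ["export const run = () => 'hello';"],
  mappings: "AAAA",
});
// recoverSources(exampleMap)["src/agent.ts"] returns the original source
```

This is why excluding `*.map` files from published npm packages (e.g. via the `files` field or `.npmignore`) is a common hardening step.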

Bearlyfy Hits Russian Firms with Custom GenieLocker Ransomware

More than 70 cyber attacks targeting Russian companies have been attributed to a pro-Ukrainian group called Bearlyfy since it first surfaced in the threat landscape in January 2025, with recent attacks leveraging a custom Windows ransomware strain codenamed GenieLocker. Bearlyfy (also known as Labubu) operates as a dual-purpose group aimed at inflicting maximum damage on Russian businesses; its attacks serve the dual objectives of extortion for financial gain and sabotage. The hacking group was first documented by F6 in September 2025 as leveraging encryptors associated with LockBit 3 (Black) and Babuk, with early intrusions focusing on smaller companies before upping the ante and demanding ransoms to the tune of €80,000 (about $92,100). By August 2025, the group had claimed at least 30 victims. Beginning in May 2025, Bearlyfy actors also utilized a modified version of PolyVice, a ransomware family attributed to Vice Society ...

Critical AI Vulnerability: OpenAI Patches ChatGPT Data Exfiltration Flaw

Security researchers have identified a serious vulnerability in ChatGPT that allowed sensitive user data to be exfiltrated without the user’s knowledge. The issue has now been patched by OpenAI, but it raises major concerns about data security in AI-driven environments. The researchers demonstrated how a single malicious prompt could turn a normal interaction into a covert data exfiltration channel.

How the Attack Worked
The vulnerability relied on prompt injection techniques, where attackers could manipulate the AI’s behavior through carefully crafted inputs. In this case, malicious prompts could force ChatGPT to leak:

Conversation history
Uploaded files
Sensitive contextual data

What makes this particularly dangerous is that the attack could occur silently, without any visible indication to the user.

Codex Vulnerability Expands the Risk
Alongside the ChatGPT issue, researchers also identified a separate vulnerability affecting Open...

Big Tech AI Spending and the Energy Cost Risk

This report summarizes Reuters reporting from March 31, 2026, on the scale of planned AI infrastructure spending by major technology firms and the risks posed by rising energy costs. Reuters reported that, before the Iran war, Microsoft, Amazon, Alphabet, and Meta were expected to spend about $635 billion in 2026 on data centers, chips, and other AI infrastructure. That figure was described as an increase from $383 billion the previous year and $80 billion in 2019.

Objective
The objective of this report is to explain how energy market volatility could affect AI investment plans, corporate earnings, and broader equity markets. The article frames the issue not as a technology slowdown by default, but as a stress test for whether current AI spending plans remain sustainable if energy costs rise further.

Summary of Findings
Reuters cited Melissa Otto, head of research at S&P Global Visible Alpha, who said that persistently high oil prices could force revisions to capital spending...

OpenAI Patch for ChatGPT Data Exfiltration and Codex Token Vulnerability

Overview
This report summarizes two security issues involving OpenAI products that were publicly described on March 30, 2026. The first issue concerned a previously unknown flaw in ChatGPT that researchers said could allow sensitive conversation content, including uploaded files and user messages, to be exfiltrated through a covert DNS-based channel originating from the Linux runtime used for code execution and data analysis. The second issue concerned a command injection vulnerability in OpenAI Codex that researchers said could expose GitHub tokens and enable unauthorized access to code repositories. According to the reporting, OpenAI patched the ChatGPT issue on February 20, 2026, and the Codex issue on February 5, 2026. The article also states there was no evidence the ChatGPT flaw had been maliciously exploited.

Objective
The objective of this report is to document the nature of the vulnerabilities, describe their potential impact on users and organizations, and hig...
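
To make the "covert DNS-based channel" concrete: the usual technique is to hex-encode stolen data and smuggle it out as subdomain labels of an attacker-controlled domain, so the data rides on ordinary DNS lookups even when HTTP egress is blocked. The sketch below only shows the encoding step; it performs no network activity, and the domain `exfil.example` is a made-up placeholder, not from the reporting.

```javascript
// Illustrative encoding for a covert DNS channel: each chunk of data
// becomes a hex label in a query for an attacker-controlled domain
// (hypothetical here). DNS labels max out at 63 chars, hence the cap.
function encodeAsDnsLabels(secret, domain, labelLen = 60) {
  const hex = Buffer.from(secret, "utf8").toString("hex");
  const labels = [];
  for (let i = 0; i < hex.length; i += labelLen) {
    labels.push(hex.slice(i, i + labelLen));
  }
  // Prefix a sequence number so the receiver can reassemble chunks.
  return labels.map((label, i) => `${i}.${label}.${domain}`);
}
// encodeAsDnsLabels("token=abc", "exfil.example")
//   → ["0.746f6b656e3d616263.exfil.example"]
```

The defensive takeaway is that sandboxed code-execution runtimes need DNS egress monitored (or blocked) just as carefully as HTTP.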

Critical Supply Chain Attack: Axios npm Package Compromised to Deliver Cross-Platform RAT

A major supply chain attack has impacted the widely used JavaScript library Axios, after attackers managed to publish malicious versions of the package to npm using compromised maintainer credentials. The incident affects versions 1.14.1 and 0.30.4, which were found to include a hidden malicious dependency designed to deliver malware across multiple operating systems.

How the Attack Was Executed
The attackers gained access to the npm account of a core Axios maintainer, allowing them to push tampered versions of the package without raising immediate suspicion. These malicious releases introduced a fake dependency named “plain-crypto-js”, which served as the initial infection vector. Because the packages were published through legitimate channels, they bypassed standard CI/CD security checks, making the attack particularly dangerous.

Malware Delivery via Post-Install Script
The injected dependency was far from harmless: it contained a post-install script that executed au...
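
Since the infection hinged on an npm lifecycle install hook, one practical defense is auditing dependency manifests for such hooks before they ever run. A minimal sketch (the `setup.js` script name in the example manifest is invented for illustration):

```javascript
// Defensive sketch: flag packages that declare npm lifecycle install
// hooks, the mechanism the malicious dependency reportedly used to
// execute code automatically at install time.
const INSTALL_HOOKS = ["preinstall", "install", "postinstall"];

function findInstallHooks(pkg) {
  const scripts = pkg.scripts || {};
  return INSTALL_HOOKS.filter((hook) => hook in scripts);
}

// Example: a manifest shaped like the reported attack (script name invented)
const suspicious = {
  name: "plain-crypto-js",
  version: "1.0.0",
  scripts: { postinstall: "node ./setup.js" }, // hidden payload runner
};
// findInstallHooks(suspicious) → ["postinstall"]
```

Running `npm install --ignore-scripts` (or setting `ignore-scripts=true` in `.npmrc`) prevents these hooks from executing at all, at the cost of breaking packages that legitimately need them.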

Advanced Malware Threat: GlassWorm Uses Solana Blockchain to Evade Detection

Security researchers have uncovered a new evolution of the GlassWorm malware campaign, introducing a highly sophisticated technique that leverages blockchain technology to hide its command-and-control (C2) infrastructure. Unlike traditional malware, this variant uses the Solana blockchain as a “dead drop” mechanism to retrieve instructions and payloads, making detection and takedown significantly more difficult.

How the Attack Works
The attack chain is multi-stage and designed for maximum stealth and data exfiltration. Initial access is typically gained through compromised developer ecosystems, including malicious packages distributed via platforms like npm, PyPI, GitHub, and extension marketplaces. Once executed, the malware retrieves instructions from data embedded in Solana blockchain transactions, effectively hiding its infrastructure in a decentralized and immutable environment. From there, it downloads system-specific payloads and begins the infection process.

Multi-Stage...
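
The "dead drop" idea can be sketched without any blockchain dependency: the attacker publishes an encoded payload in an on-chain memo field, and the implant later decodes it to find its next C2 address. Everything below is conceptual; the `gw:` marker, the memo format, and the URL are invented for illustration and are not GlassWorm's actual scheme.

```javascript
// Conceptual dead-drop decoder: instructions hidden in a transaction
// memo. The attacker writes a tagged base64 payload on-chain; the
// implant fetches the transaction via any public RPC endpoint and
// decodes it. The "gw:" tag and memo layout are hypothetical.
function decodeDeadDrop(memo) {
  if (typeof memo !== "string" || !memo.startsWith("gw:")) {
    return null; // ordinary memo, not a drop
  }
  return Buffer.from(memo.slice(3), "base64").toString("utf8");
}
```

Because the data lives in an immutable public ledger rather than on attacker-owned servers, there is no C2 host for defenders to seize or sinkhole, which is what makes the technique hard to take down.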

Critical AI Browser Risk: Claude Extension Flaw Enabled Zero-Click Prompt Injection

Security researchers have uncovered a critical vulnerability in the Claude Chrome extension developed by Anthropic, exposing users to zero-click attacks that could silently hijack the AI assistant. The flaw allowed attackers to inject malicious prompts into the extension without any user interaction, simply by visiting a compromised or malicious website.

How the Attack Worked
The vulnerability, dubbed ShadowPrompt, was not a single bug but a chain of weaknesses that together created a powerful exploit path. At its core, the issue relied on two key problems:

The extension trusted any subdomain under *.claude.ai
A cross-site scripting (XSS) vulnerability existed in a trusted subdomain component

By combining these, attackers could deliver malicious instructions that the extension would treat as legitimate user input. This meant that simply visiting a webpage could trigger hidden background code that sends commands directly to the AI assistant without clicks, prompts, or ...
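
The first weakness in the chain is a classic origin-validation pitfall: a wildcard subdomain rule means a single XSS on any one subdomain inherits the extension's full trust. The sketch below is not Anthropic's actual code; it just contrasts the fragile wildcard pattern with a strict allowlist.

```javascript
// Illustrative origin checks (hypothetical, not the extension's code).
// Wildcard trust: any *.claude.ai subdomain is accepted, so one XSS on
// any such page can speak with the extension's full authority.
function wildcardTrust(origin) {
  const host = new URL(origin).hostname;
  return host === "claude.ai" || host.endsWith(".claude.ai");
}

// Strict trust: only exact hosts on an allowlist are accepted, so a
// compromised sibling subdomain gains nothing.
const ALLOWED_HOSTS = new Set(["claude.ai"]); // hypothetical policy
function strictTrust(origin) {
  return ALLOWED_HOSTS.has(new URL(origin).hostname);
}
```

Note that `endsWith(".claude.ai")` at least avoids the even worse `endsWith("claude.ai")` bug (which would also match `evilclaude.ai`); the deeper problem is granting trust to a whole namespace instead of specific hosts.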

Critical AI Security Risk: LangChain and LangGraph Vulnerabilities Expose Sensitive Data

Security researchers have identified multiple high-impact vulnerabilities affecting the widely used AI frameworks LangChain and LangGraph, raising serious concerns about data security in modern AI-powered applications. These flaws could allow attackers to access filesystem data, environment secrets, and even conversation history, putting sensitive enterprise information at risk.

Understanding the Impact
LangChain and LangGraph are core building blocks for many AI applications, especially those leveraging large language models (LLMs). Their widespread adoption makes any vulnerability particularly dangerous. Researchers highlighted that each flaw targets a different layer of sensitive data, including:

Local files stored on the system
Environment variables and API keys
Stored conversations and workflow data

This means attackers could potentially extract critical business data from compromised AI systems.

Breakdown of the Vulnerabilities
The findings include three distinct...

Critical Supply Chain Risk: Open VSX Bug Allowed Malicious VS Code Extensions to Bypass Security Checks

Security researchers have uncovered a critical flaw in the Open VSX registry that could allow malicious Visual Studio Code extensions to bypass pre-publish security checks and be distributed to users. The issue affects the extension marketplace used by several VS Code-based environments and raises serious concerns around software supply chain security.

How the Vulnerability Worked
The root cause of the issue lies in a flaw within the pre-publish scanning pipeline. According to researchers, the system relied on a single boolean value to determine the outcome of security scans. This created a critical ambiguity: the pipeline could not distinguish between a successful scan and a complete failure of the scanning process. As a result, when security scanners failed to run, especially under heavy load, the system interpreted this as “nothing to scan,” allowing potentially malicious extensions to pass through validation and be published.

Why This Is Dangerous
This vulnerability effectively b...
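
The flaw described is a textbook fail-open bug: a boolean cannot distinguish "scanned and clean" from "scanner never ran." A minimal sketch of the pattern and its usual fix, an explicit result state that fails closed (all names hypothetical, not Open VSX's actual pipeline code):

```javascript
// Buggy fail-open pattern: a crashed or skipped scanner yields no
// findings, which is indistinguishable from a clean scan, so the
// extension is published anyway.
function gateBoolean(scanFindings) {
  return (scanFindings || []).length === 0; // undefined → "publish"!
}

// Fail-closed fix: the scan outcome is an explicit tri-state, and only
// a positively confirmed clean result allows publication.
function gateTriState(result) {
  // result: { state: "clean" | "flagged" | "error" }
  return result.state === "clean"; // "error" blocks publication
}
```

Any pipeline step that security depends on should report "I ran and passed" as a distinct signal from "I produced no output," so infrastructure failures block rather than approve.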

Critical Alert: CISA Adds Actively Exploited F5 Vulnerability to KEV Catalog

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added a newly identified vulnerability to its Known Exploited Vulnerabilities (KEV) catalog after confirming that it is being actively exploited in real-world attacks. The vulnerability, tracked as CVE-2025-53521, affects F5 BIG-IP Access Policy Manager (APM) and is considered highly critical due to its potential to enable remote code execution (RCE).

From DoS to Remote Code Execution
At first, the vulnerability was categorized as a Denial-of-Service (DoS) issue. However, further technical analysis revealed that it could be exploited to execute arbitrary code remotely. This reclassification significantly increases its severity: instead of merely disrupting services, attackers may now be able to gain full control over affected systems, making it a much more dangerous threat.

Evidence of Active Exploitation
The addition of this vulnerability to the KEV catalog confirms that it is already being used by threa...

Chinese Hackers Caught Deep Within Telecom Backbone Infrastructure

A China-linked state-sponsored threat actor has deployed kernel implants and passive backdoors deep within telecommunication backbone infrastructure worldwide for long-term persistence, Rapid7 reports. The stealth digital sleeper cells have not been attributed to any known APT but are meant for high-level espionage, including against government networks, the cybersecurity firm says.

The persistent tools were deployed as part of apparent discreet breaches that are characterized by recurring elements, suggesting an ongoing operation aimed at “embedding stealthy access mechanisms deep inside telecom and critical environments” for extended access. As part of its investigation, Rapid7 uncovered passive backdoors and kernel-level implants that have been used in combination with credential harvesters and cross-platform command frameworks. “Together, these components form a persistent access layer designed not simply to breach networks, but to inhabit them,” the cybersecurity firm w...

Pro-Iranian Hacking Group Claims Credit for Hack of FBI Director Kash Patel’s Personal Account

A pro-Iranian hacking group claimed Friday to have hacked an account of FBI Director Kash Patel and has posted online what appear to be years-old photographs of him, along with a work resume and other personal documents. Many of those records appeared to be more than a decade old.

“Kash Patel, the current head of the FBI, who once saw his name displayed with pride on the agency’s headquarters, will now find his name among the list of successfully hacked victims,” said a message posted Friday by the group Handala. The message was accompanied by more than a half dozen photos of Patel, including one of him standing beside an antique sports car and another with a cigar in his mouth. The group also said that it was making available for download emails and other documents from Patel’s account. Many of the records appeared to relate to his personal travels and business from more than 10 years ago. The FBI had no immediate comment on Friday, but a person familiar with the matter who s...

Hackers Use Fake Resumes to Infiltrate Companies and Steal Credentials

A new cyberattack campaign is turning a routine business process into a serious security risk. Threat actors are now distributing fake job applications containing malicious files, allowing them to infiltrate corporate systems and steal sensitive data. The campaign, identified as FAUX#ELEVATE, targets organizations by sending emails that appear to come from legitimate job candidates. Attached to these emails are resumes that look normal but actually contain hidden malicious scripts. Once opened, the file quietly executes in the background without raising immediate suspicion.

From that point, the attack progresses rapidly. Within seconds, the malware connects to external infrastructure to download additional components and begin extracting sensitive information from the infected system. This includes stored credentials, browser data, and other valuable corporate information. In some cases, the attackers also deploy cryptocurrency mining software, although the primary objective appear...

When AI Becomes the Attack Surface: Why the Kill Chain No Longer Works

The Model We’ve Always Trusted
For a long time, the “kill chain” has been one of the most reliable ways to understand cyberattacks. The idea was straightforward: every attacker follows a path starting with reconnaissance, moving through access and lateral movement, and ending with impact. This structure gave security teams something very valuable: predictability. If you knew the stages, you could detect patterns. If you could detect patterns, you had a chance to stop the attack before it went too far. But that predictability is starting to disappear.

AI Is Changing the Rules of the Game
AI is no longer just a tool sitting on the side. It’s now deeply embedded in systems, helping automate processes, make decisions, and interact across environments. And that’s where things start to shift. Instead of building an attack step by step, an attacker can now target something that already exists inside the environment: an AI agent. These agents are designed to be efficient and helpful, wh...