Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2026-03-16T05:00:05.670697+00:00
- January 2026 CVE Landscape: 23 Critical Vulnerabilities Mark 5% Increase, APT28 Exploits Microsoft Office Zero-Day (www.recordedfuture.com, 2026-02-24T00:00:00)
Score: 11.832
January 2026 saw 23 actively exploited CVEs, including APT28’s Microsoft Office zero-day and critical auth bypass flaws impacting enterprise systems.
- Google patches two Chrome zero-days under active attack. Update now (www.malwarebytes.com, 2026-03-13T12:58:37)
Score: 10.755
Google has released an out-of-band Chrome update to patch two zero-day vulnerabilities that are already being actively exploited.
- March 2026 Patch Tuesday fixes two zero-day vulnerabilities (www.malwarebytes.com, 2026-03-11T10:47:32)
Score: 9.907
Microsoft patched 79 security vulnerabilities this month, including bugs that could let attackers escalate privileges or crash critical services.
- Watch out for fake Malwarebytes renewal notices in your calendar (www.malwarebytes.com, 2026-03-13T15:48:16)
Score: 7.775
Scammers are sending fake calendar “renewal” notices impersonating Malwarebytes to trick victims into calling a fake billing number.
- Attackers impersonate Temu in ClickFix $Temu airdrop scam (www.malwarebytes.com, 2026-03-13T09:30:43)
Score: 7.731
A fake $TEMU crypto airdrop uses the ClickFix trick to make victims run malware themselves and quietly installs a remote-access backdoor.
- Apple patches Coruna exploit kit flaws for older iOS versions (www.malwarebytes.com, 2026-03-12T17:49:44)
Score: 7.622
Apple issued security updates for older iOS and iPadOS versions to close vulnerabilities exploited by the Coruna exploit kit.
- This Android vulnerability can break your lock screen in under 60 seconds (www.malwarebytes.com, 2026-03-12T13:13:59)
Score: 7.59
Researchers showed how attackers could pull encryption keys, recover the PIN, and access sensitive data from affected devices.
- Latin America's Cybersecurity Turning Point: From Reactive Defense to Threat Intelligence (www.recordedfuture.com, 2026-03-03T00:00:00)
Score: 7.299
Latin America's threat landscape is evolving fast, and reactive defense is no longer enough. PIX fraud, ransomware, and targeted attacks are outpacing overstretched security teams. Recorded Future provides LATAM-specific intelligence, automation, and seamless integrations to help your team get ahead of threats before they hit.
- 2025 Cloud Threat Hunting and Defense Landscape (www.recordedfuture.com, 2026-02-19T00:00:00)
Score: 7.299
Threat actors are doubling down on cloud infrastructure, exploiting misconfigurations, abusing native services, and pivoting through hybrid environments to maximize impact. See how attack patterns are evolving across exploitation, ransomware, credential abuse, and AI service targeting in this latest cloud threat roundup.
- February 2026 CVE Landscape: 13 Critical Vulnerabilities Mark 43% Drop from January (www.recordedfuture.com, 2026-03-12T00:00:00)
Score: 6.999
February 2026 saw a 43% decrease in high-impact vulnerabilities, with Recorded Future's Insikt Group® identifying 13 vulnerabilities requiring immediate remediation, down from 23 in January 2026.
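The month-over-month figures in the two CVE Landscape items above are internally consistent; as a quick sanity check using only the counts stated in the summaries:

```python
# CVE counts stated in the January and February 2026 items above
january, february = 23, 13

# Relative drop from January to February
drop = (january - february) / january
print(f"{drop:.0%}")  # → 43%
```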
Top 10 AI / LLM-Related Threats
Generated 2026-03-16T06:00:22.754084+00:00
- Uncovering Security Threats and Architecting Defenses in Autonomous Agents: A Case Study of OpenClaw (arxiv.org, 2026-03-16T04:00:00)
Score: 20.78
arXiv:2603.12644v1 Announce Type: new
Abstract: The rapid evolution of Large Language Models (LLMs) into autonomous, tool-calling agents has fundamentally altered the cybersecurity landscape. Frameworks like OpenClaw grant AI systems operating-system-level permissions and the autonomy to execute complex workflows. This level of access creates unprecedented security challenges. Consequently, traditional content-filtering defenses have become obsolete. This report presents a comprehensive securit
- Depth Charge: Jailbreak Large Language Models from Deep Safety Attention Heads (arxiv.org, 2026-03-16T04:00:00)
Score: 19.78
arXiv:2603.05772v2 Announce Type: replace
Abstract: Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights are made public, they are exposed to jailbreak attacks even after alignment. Existing attacks operate primarily at shallow levels, such as the prompt or embedding level, and often fail to expose vulnerabilities rooted in deeper model components, which creates a false sense of security for successfu
- Announcing Pwn2Own Berlin for 2026 (www.thezdi.com, 2026-03-12T16:25:15)
Score: 19.551
If you just want to read the contest rules, click here. Willkommen zurück, meine Damen und Herren, zu unserem zweiten Wettbewerb in Berlin! (“Welcome back, ladies and gentlemen, to our second competition in Berlin!”) That’s correct (if Google translate didn’t steer me wrong). After our inaugural competition last year, Pwn2Own returns to Berlin and OffensiveCon. Outside of our shipping troubles, we had an amazing time and can’t wait to get back. Last year, we added Artificial Intelligence as a category with great results. This year, we’re expanding this and splitting i
- PILOT: Command-line Interface Fuzzing via Path-Guided, Iterative Large Language Model Prompting (arxiv.org, 2026-03-16T04:00:00)
Score: 17.78
arXiv:2511.20555v2 Announce Type: replace
Abstract: Command-line interface (CLI) fuzzing tests programs by mutating both command-line options and input file contents, thus enabling discovery of vulnerabilities that only manifest under specific option-input combinations. Prior work on CLI fuzzing faces the challenge of generating semantics-rich option strings and input files, and so cannot reach deeply embedded target functions. This often leads to a misdetection of such a deep vulnerability usi
- Knowing without Acting: The Disentangled Geometry of Safety Mechanisms in Large Language Models (arxiv.org, 2026-03-16T04:00:00)
Score: 17.78
arXiv:2603.05773v2 Announce Type: replace
Abstract: Safety alignment is often conceptualized as a monolithic process wherein harmfulness detection automatically triggers refusal. However, the persistence of jailbreak attacks suggests a fundamental mechanistic decoupling. We propose the Disentangled Safety Hypothesis (DSH), positing that safety computation operates on two distinct subspaces: a Recognition Axis
- Evolving Deception: When Agents Evolve, Deception Wins (arxiv.org, 2026-03-16T04:00:00)
Score: 17.78
arXiv:2603.05872v2 Announce Type: replace
Abstract: Self-evolving agents offer a promising path toward scalable autonomy. However, in this work, we show that in competitive environments, self-evolution can instead give rise to a serious and previously underexplored risk: the spontaneous emergence of deception as an evolutionarily stable strategy. We conduct a systematic empirical study on the self-evolution of large language model (LLM) agents in a competitive Bidding Arena, where agents iterat
- Superficial Safety Alignment Hypothesis (arxiv.org, 2026-03-16T04:00:00)
Score: 17.78
arXiv:2410.10862v3 Announce Type: replace-cross
Abstract: As large language models (LLMs) are increasingly integrated into various applications, ensuring they generate safe responses is a pressing need. Previous studies on alignment have largely focused on general instruction-following but have often overlooked the distinct properties of safety alignment, such as the brittleness of safety mechanisms. To bridge the gap, we propose the Superficial Safety Alignment Hypothesis (SSAH
- Accelerate Attack Surface Discovery with new AI-Powered Connectors (www.rapid7.com, 2026-03-09T16:28:20)
Score: 17.337
Discovery: The foundation of exposure management. To understand your attack surface and all related exposures, Rapid7's Command Platform provides Attack Surface Management (included in Surface Command, Exposure Command, and Incident Command). It provides a 360° view of all assets in the organization, their associated risks, and how they relate to one another. This gives teams the attack surface visibility they can trust to detect security issues from endpoint to cloud. This blog wil
- Prompt Injection as Role Confusion (arxiv.org, 2026-03-16T04:00:00)
Score: 15.78
arXiv:2603.12277v1 Announce Type: cross
Abstract: Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models infer roles from how text is written, not where it comes from. We design novel role probes to capture how models internally identify "who is speaking." These reveal why prompt injection works: untrusted text that imitates a role inherits that role's authority. We test this insight by inje
- Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild (unit42.paloaltonetworks.com, 2026-03-03T11:00:30)
Score: 15.754
Uncover real-world indirect prompt injection attacks and learn how adversaries weaponize hidden web content to exploit LLMs for high-impact fraud.
- Verifying LLM Inference to Detect Model Weight Exfiltration (arxiv.org, 2026-03-16T04:00:00)
Score: 14.78
arXiv:2511.02620v3 Announce Type: replace
Abstract: As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker controlling an inference server may exfiltrate model weights by hiding them within ordinary model responses, a strategy known as steganography. This work investigates how to verify LLM model inference to defend against such attacks and, more broadly, to detect anomalous or buggy behavior during inf
- A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring (arxiv.org, 2026-03-16T04:00:00)
Score: 14.78
arXiv:2602.23163v2 Announce Type: replace-cross
Abstract: Large language models are beginning to show steganographic capabilities. Such capabilities could allow misaligned models to evade oversight mechanisms. Yet principled methods to detect and quantify such behaviours are lacking. Classical definitions of steganography, and detection methods based on them, require a known reference distribution of non-steganographic signals. For the case of steganographic reasoning in LLMs, knowing such a re
- PISmith: Reinforcement Learning-based Red Teaming for Prompt Injection Defenses (arxiv.org, 2026-03-16T04:00:00)
Score: 13.48
arXiv:2603.13026v1 Announce Type: cross
Abstract: Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been proposed, their robustness against adaptive attacks remains insufficiently evaluated, potentially creating a false sense of security. In this work, we propose PISmith, a reinforcement learning (RL)-based red-teaming framework that systematically assesses existing prompt-injection defenses by training an a
- Proactive Preparation and Hardening Against Destructive Attacks: 2026 Edition (cloud.google.com, 2026-03-06T14:00:00)
Score: 13.098
Written by: Matthew McWhirt, Bhavesh Dhake, Emilio Oropeza, Gautam Krishnan, Stuart Carrera, Greg Blaum, Michael Rudden. UPDATE (March 13): Added guidance around abuse or misuse of endpoint/MDM platforms. Background: Threat actors leverage destructive malware to destroy data, eliminate evidence of malicious activity, or manipulate systems in a way that renders them inoperable. Destructive cyberattacks can be a powerful means to achieve strategic or tactical objectives; however, the risk of repr
- Coruna: The Mysterious Journey of a Powerful iOS Exploit Kit (cloud.google.com, 2026-03-03T14:00:00)
Score: 12.384
Introduction: Google Threat Intelligence Group (GTIG) has identified a new and powerful exploit kit targeting Apple iPhone models running iOS version 13.0 (released in September 2019) up to version 17.2.1 (released in December 2023). The exploit kit, named “Coruna” by its developers, contained five full iOS exploit chains and a total of 23 exploits. The core technical value of this exploit kit lies in its comprehensive collection of iOS exploits, with the most advanced ones using non-public expl
- Towards Contextual Sensitive Data Detection (arxiv.org, 2026-03-16T04:00:00)
Score: 11.78
arXiv:2512.04120v2 Announce Type: replace
Abstract: The emergence of open data portals necessitates more attention to protecting sensitive data before datasets get published and exchanged. To do so effectively, we observe the need to refine and broaden our definitions of sensitive data, and argue that the sensitivity of data depends on its context. Following this definition, we introduce a contextual data sensitivity framework building on two core concepts: 1) type contextualization, which cons
- When Trusted Websites Turn Malicious: WordPress Compromises Advance Global Stealer Operation (www.rapid7.com, 2026-03-10T13:00:00)
Score: 11.141
Overview: Rapid7 Labs has identified and analyzed an ongoing, widespread compromise of legitimate, potentially highly trusted WordPress websites, misused by an unidentified threat actor to inject a ClickFix implant impersonating a Cloudflare human verification challenge (CAPTCHA). The lure is designed to infect visitors with a multi-stage malware chain that ultimately steals and exfiltrates credentials and digital wallets from Windows systems. The stolen credentials can subsequently be used for f
- Accelerate custom LLM deployment: Fine-tune with Oumi and deploy to Amazon Bedrock (aws.amazon.com, 2026-03-10T15:42:16)
Score: 11.068
In this post, we show how to fine-tune a Llama model using Oumi on Amazon EC2 (with the option to create synthetic data using Oumi), store artifacts in Amazon S3, and deploy to Amazon Bedrock using Custom Model Import for managed inference.
- Expert Selections In MoE Models Reveal (Almost) As Much As Text (arxiv.org, 2026-03-16T04:00:00)
Score: 10.78
arXiv:2602.04105v3 Announce Type: replace-cross
Abstract: We present a text-reconstruction attack on mixture-of-experts (MoE) language models that recovers tokens from expert selections alone. In MoE models, each token is routed to a subset of expert subnetworks; we show these routing decisions leak substantially more information than previously understood. Prior work using logistic regression achieves limited reconstruction; we show that a 3-layer MLP improves this to 63.1% top-1 accuracy, and
- Protect What Matters Most: Aligning Sensitive Data with Exposure Risk (www.rapid7.com, 2026-03-11T12:22:07)
Score: 10.773
This blog was written in collaboration with Symmetry Systems' Claude Mandy. Rapid7 and Symmetry Systems are partnering to help organizations reduce breach impact by aligning sensitive data intelligence with real-world exposure paths across both human and machine identities. Breaches are measured in data, not vulnerabilities: vulnerabilities are one thing, but the breaches that follow are rarely just technical incidents. More often, they become business events with far-reaching consequences,
- Rapid7 Detection Coverage for Iran-Linked Cyber Activity (www.rapid7.com, 2026-03-11T17:31:06)
Score: 10.424
The tension arising out of the conflict in Iran is beginning to show signs of expanding beyond a strictly regional crisis. Following our recent published advisories, this communication is intended to outline and summarize the detection and enrichment coverage available to Rapid7 customers, broadly assess the macro cyber threat landscape, and demonstrate the specific actions undertaken within the Rapid7 portfolio to assure our customers of the protection they receive and can expect moving forward.
- Iran’s Cyber Playbook in the Escalating Regional Conflict (www.rapid7.com, 2026-03-11T17:30:58)
Score: 10.424
Following our recent published advisories, this publication summarizes the cyber activities associated with the conflict. Based on the available information, we believe the conflict is beginning to show signs of expanding beyond a strictly regional crisis. Initial threat reporting pointed to a measurable increase in cyber activity linked to the crisis, predominantly focused on hacktivist mobilization, with reports of phishing campaigns and claims of data theft and disrupt
- Auditing the Gatekeepers: Fuzzing "AI Judges" to Bypass Security Controls (unit42.paloaltonetworks.com, 2026-03-10T10:00:29)
Score: 10.411
Unit 42 research reveals AI judges are vulnerable to stealthy prompt injection. Benign formatting symbols can bypass security controls.
- Malicious AI Assistant Extensions Harvest LLM Chat Histories (www.microsoft.com, 2026-03-05T16:02:12)
Score: 9.98
Malicious AI browser extensions collected LLM chat histories and browsing data from platforms such as ChatGPT and DeepSeek. With nearly 900,000 installs and activity across more than 20,000 enterprise tenants, the campaign highlights the growing risk of data exposure through browser extensions.
- Building custom model provider for Strands Agents with LLMs hosted on SageMaker AI endpoints (aws.amazon.com, 2026-03-05T16:15:41)
Score: 9.883
This post demonstrates how to build custom model parsers for Strands agents when working with LLMs hosted on SageMaker that don't natively support the Bedrock Messages API format. We'll walk through deploying Llama 3.1 with SGLang on SageMaker using awslabs/ml-container-creator, then implementing a custom parser to integrate it with Strands agents.
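The custom-parser idea in that last post can be sketched in miniature. The snippet below is a hypothetical illustration, not the actual Strands Agents or Amazon Bedrock API: it flattens a Messages-API-style payload into the single prompt string a raw-text endpoint would expect, with the function name and role tags invented for the example.

```python
def messages_to_prompt(messages):
    """Flatten a Bedrock Messages-style payload (hypothetical shape) into
    one prompt string for an endpoint that only accepts raw text."""
    parts = []
    for msg in messages:
        role = msg["role"]  # e.g. "user" or "assistant"
        # Each message carries a list of content blocks; join their text
        text = " ".join(block["text"] for block in msg["content"])
        parts.append(f"<|{role}|>\n{text}")
    parts.append("<|assistant|>\n")  # cue the model to generate a reply
    return "\n".join(parts)

prompt = messages_to_prompt(
    [{"role": "user", "content": [{"text": "Summarize this week's threats."}]}]
)
print(prompt)
```

A real adapter would also need the inverse direction, parsing the endpoint's raw completion back into the structured response shape the agent framework expects.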
Auto-generated 2026-03-16
