Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2026-03-23T05:00:05.740269+00:00
- January 2026 CVE Landscape: 23 Critical Vulnerabilities Mark 5% Increase, APT28 Exploits Microsoft Office Zero-Day (www.recordedfuture.com, 2026-02-24T00:00:00)
Score: 10.665
January 2026 saw 23 actively exploited CVEs, including APT28’s Microsoft Office zero-day and critical auth bypass flaws impacting enterprise systems. - [updated] Google patches two Chrome zero-days under active attack (www.malwarebytes.com, 2026-03-13T12:58:37)
Score: 9.589
Google has released an out-of-band Chrome update to patch two zero-day vulnerabilities that are already being actively exploited. - That “job brief” on Google Forms could infect your device (www.malwarebytes.com, 2026-03-20T11:38:40)
Score: 7.746
Fake job offers on Google Forms are spreading PureHVNC malware that can take over your device. - Your tax forms sell for $20 on the dark web (www.malwarebytes.com, 2026-03-19T11:33:30)
Score: 7.579
Tax season is also peak season for identity theft. Malwarebytes researchers spotted criminals trading stolen tax records on dark web forums. - 2025 Year in Review: Malicious, Infrastructure (www.recordedfuture.com, 2026-03-19T00:00:00)
Score: 7.499
Explore Insikt Group’s 2025 Malicious Infrastructure Report. Gain insights into Cobalt Strike, Vidar infostealers, and AI-driven threats to secure your 2026 strategy. - Apple patches WebKit bug that could let sites access your data (www.malwarebytes.com, 2026-03-18T11:19:59)
Score: 7.411
Apple has released a Background Security Improvement that silently fixes a WebKit vulnerability (CVE-2026-20643). - 2025 Identity Threat Landscape Report: Inside the Infostealer Economy: Credential Threats in 2025 (www.recordedfuture.com, 2026-03-16T00:00:00)
Score: 7.299
Recorded Future's 2025 Identity Threat Landscape Report analyzes hundreds of millions of compromised credentials to reveal how infostealer malware is evolving, which systems attackers are targeting, and what security teams must do to get ahead of credential-based breaches. - Google cracks down on Android apps abusing accessibility (www.malwarebytes.com, 2026-03-17T09:59:12)
Score: 7.235
Malware has been abusing Android’s accessibility features for years. Google just made that a lot harder. - Zombie ZIP method can fool antivirus during the first scan (www.malwarebytes.com, 2026-03-16T16:09:08)
Score: 7.111
Researchers published about the Zombie ZIP vulnerability (or not a vulnerability, that's up for debate) that can bypass a first AV inspection. - CrowdStrike Innovates to Modernize National Security and Protect Critical Systems (www.crowdstrike.com, 2026-03-18T05:00:00)
Score: 6.867
Top 10 AI / LLM-Related Threats
Generated 2026-03-23T06:00:22.746906+00:00
- Automated Membership Inference Attacks: Discovering MIA Signal Computations using LLM Agents (arxiv.org, 2026-03-23T04:00:00)
Score: 20.78
arXiv:2603.19375v1 Announce Type: new
Abstract: Membership inference attacks (MIAs), which enable adversaries to determine whether specific data points were part of a model's training dataset, have emerged as an important framework to understand, assess, and quantify the potential information leakage associated with machine learning systems. Designing effective MIAs is a challenging task that usually requires extensive manual exploration of model behaviors to identify potential vulnerabili - Evolving Jailbreaks: Automated Multi-Objective Long-Tail Attacks on Large Language Models (arxiv.org, 2026-03-23T04:00:00)
Score: 20.78
arXiv:2603.20122v1 Announce Type: new
Abstract: Large Language Models (LLMs) have been widely deployed, especially through free Web-based applications that expose them to diverse user-generated inputs, including those from long-tail distributions such as low-resource languages and encrypted private data. This open-ended exposure increases the risk of jailbreak attacks that undermine model safety alignment. While recent studies have shown that leveraging long-tail distributions can facilitate su - A Framework for Formalizing LLM Agent Security (arxiv.org, 2026-03-23T04:00:00)
Score: 20.48
arXiv:2603.19469v1 Announce Type: new
Abstract: Security in LLM agents is inherently contextual. For example, the same action taken by an agent may represent legitimate behavior or a security violation depending on whose instruction led to the action, what objective is being pursued, and whether the action serves that objective. However, existing definitions of security attacks against LLM agents often fail to capture this contextual nature. As a result, defenses face a fundamental utility-secu - The Autonomy Tax: Defense Training Breaks LLM Agents (arxiv.org, 2026-03-23T04:00:00)
Score: 18.78
arXiv:2603.19423v1 Announce Type: new
Abstract: Large language model (LLM) agents increasingly rely on external tools (file operations, API calls, database transactions) to autonomously complete complex multi-step tasks. Practitioners deploy defense-trained models to protect against prompt injection attacks that manipulate agent behavior through malicious observations or retrieved content. We reveal a fundamental \textbf{capability-alignment paradox}: defense training designed to improve safety - Announcing Pwn2Own Berlin for 2026 (www.thezdi.com, 2026-03-12T16:25:15)
Score: 17.884
If you just want to read the contest rules, click here . Willkommen zurück, meine Damen und Herren, zu unserem zweiten Wettbewerb in Berlin! That’s correct (if Google translate didn’t steer me wrong). After our inaugural competition last year, Pwn2Own returns to Berlin and OffensiveCon . Outside of our shipping troubles , we had an amazing time and can’t wait to get back. Last year, we added Artificial Intelligence as a category with great results. This year, we’re expanding this and splitting i - MAPLE: Metadata Augmented Private Language Evolution (arxiv.org, 2026-03-23T04:00:00)
Score: 17.78
arXiv:2603.19258v1 Announce Type: cross
Abstract: While differentially private (DP) fine-tuning of large language models (LLMs) is a powerful tool, it is often computationally prohibitive or infeasible when state-of-the-art models are only accessible via proprietary APIs. In such settings, generating DP synthetic data has emerged as a crucial alternative, offering the added benefits of arbitrary reuse across downstream tasks and transparent exploratory data analysis without the opaque constrain - Prompt Injection as Role Confusion (arxiv.org, 2026-03-23T04:00:00)
Score: 15.78
arXiv:2603.12277v2 Announce Type: replace-cross
Abstract: Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models infer roles from how text is written, not where it comes from. We design novel role probes to capture how models internally identify "who is speaking." These reveal why prompt injection works: untrusted text that imitates a role inherits that role's authority. We test this insight - Accelerate Attack Surface Discovery with new AI-Powered Connectors (www.rapid7.com, 2026-03-09T16:28:20)
Score: 15.67
Discovery: The foundation of exposure management To understand your attack surface, and all related exposures, Rapid7's Command Platform provides Attack Surface Management, (included in Surface Command, Exposure Command and Incident Command). It provides a 360° view of all assets in the organization, their associated risks, and how they relate to one another. This provides teams with the attack surface visibility they can trust to detect security issues from endpoint to cloud. This blog wil - Open, Closed and Broken: Prompt Fuzzing Finds LLMs Still Fragile Across Open and Closed Models (unit42.paloaltonetworks.com, 2026-03-17T10:00:38)
Score: 15.411
Unit 42 research unveils LLM guardrail fragility using genetic algorithm-inspired prompt fuzzing. Discover scalable evasion methods and critical GenAI security implications. The post Open, Closed and Broken: Prompt Fuzzing Finds LLMs Still Fragile Across Open and Closed Models appeared first on Unit 42 . - The Verifier Tax: Horizon Dependent Safety Success Tradeoffs in Tool Using LLM Agents (arxiv.org, 2026-03-23T04:00:00)
Score: 14.78
arXiv:2603.19328v1 Announce Type: new
Abstract: We study how runtime enforcement against unsafe actions affects end-to-end task performance in multi-step tool using large language model (LLM) agents. Using tau-bench across Airline and Retail domains, we compare baseline Tool-Calling, planning-integrated (TRIAD), and policy-mediated (TRIAD-SAFETY) architectures with GPT-OSS-20B and GLM-4-9B. We identify model dependent interaction horizons (15 to 30 turns) and decompose outcomes into overall suc - Text-Based Personas for Simulating User Privacy Decisions (arxiv.org, 2026-03-23T04:00:00)
Score: 14.78
arXiv:2603.19791v1 Announce Type: new
Abstract: The ability to simulate human privacy decisions has significant implications for aligning autonomous agents with individual intent and conducting cost-effective, large-scale privacy-centric user studies. Prior approaches prompt Large Language Models (LLMs) with natural language user statements, data-sharing histories, or demographic attributes to simulate privacy decisions. These approaches, however, fail to balance individual-level accuracy, prom - LISAA: A Framework for Large Language Model Information Security Awareness Assessment (arxiv.org, 2026-03-23T04:00:00)
Score: 14.78
arXiv:2411.13207v3 Announce Type: replace
Abstract: The popularity of large language models (LLMs) continues to grow, and LLM-based assistants have become ubiquitous. Information security awareness (ISA) is an important yet underexplored area of LLM safety. ISA encompasses LLMs' security knowledge, which has been explored in the past, as well as their attitudes and behaviors, which are crucial to LLMs' ability to understand implicit security context and reject unsafe requests that may - The Phish, The Spam, and The Valid: Generating Feature-Rich Emails for Benchmarking LLMs (arxiv.org, 2026-03-23T04:00:00)
Score: 14.78
arXiv:2511.21448v5 Announce Type: replace
Abstract: In this paper, we introduce a metadata-enriched generation framework (PhishFuzzer) that seeds real emails into Large Language Models (LLMs) to produce 23,100 diverse, structurally consistent email variants across controlled entity and length dimensions. Unlike prior corpora, our dataset features strict three-class labels (Phishing, Spam, Valid), provides full URL and attachment metadata, and annotates each email with attacker intent. Using thi - PlanTwin: Privacy-Preserving Planning Abstractions for Cloud-Assisted LLM Agents (arxiv.org, 2026-03-23T04:00:00)
Score: 14.78
arXiv:2603.18377v2 Announce Type: replace
Abstract: Cloud-hosted large language models (LLMs) have become the de facto planners in agentic systems, coordinating tools and guiding execution over local environments. In many deployments, however, the environment being planned over is private, containing source code, files, credentials, and metadata that cannot be exposed to the cloud. Existing solutions address adjacent concerns, such as execution isolation, access control, or confidential inferen - Ransomware Under Pressure: Tactics, Techniques, and Procedures in a Shifting Threat Landscape (cloud.google.com, 2026-03-16T14:00:00)
Score: 14.313
Written by: Bavi Sadayappan, Zach Riddle, Ioana Teaca, Kimberly Goody, Genevieve Stark Introduction Since 2018, when many financially motivated threat actors began shifting their monetization strategy to post-compromise ransomware deployments, ransomware has become one of the most pervasive threats to organizations across almost every industry vertical and region. In recent years ransomware operations have evolved, creating a robust ecosystem that has lowered the barrier to entry via the commodi - Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild (unit42.paloaltonetworks.com, 2026-03-03T11:00:30)
Score: 14.088
Uncover real-world indirect prompt injection attacks and learn how adversaries weaponize hidden web content to exploit LLMs for high-impact fraud. The post Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild appeared first on Unit 42 . - Trojan's Whisper: Stealthy Manipulation of OpenClaw through Injected Bootstrapped Guidance (arxiv.org, 2026-03-23T04:00:00)
Score: 13.48
arXiv:2603.19974v1 Announce Type: new
Abstract: Autonomous coding agents are increasingly integrated into software development workflows, offering capabilities that extend beyond code suggestion to active system interaction and environment management. OpenClaw, a representative platform in this emerging paradigm, introduces an extensible skill ecosystem that allows third-party developers to inject behavioral guidance through lifecycle hooks during agent initialization. While this design enhance - The Attack Cycle is Accelerating: Announcing the Rapid7 2026 Global Threat Landscape Report (www.rapid7.com, 2026-03-18T13:00:00)
Score: 13.379
The predictive window has collapsed. In 2025, high-impact vulnerabilities weren’t quietly accumulating risk. They were operationalized, and often within days. Today, Rapid7 Labs released the 2026 Global Threat Landscape Report , an in-depth analysis of how attacker behavior is evolving across vulnerability exploitation, ransomware operations, identity abuse, and AI-driven tradecraft. The data shows a clear pattern: exposure is being identified and weaponized faster than most organizations are se - Introducing V-RAG: revolutionizing AI-powered video production with Retrieval Augmented Generation (aws.amazon.com, 2026-03-19T16:45:42)
Score: 12.554
This post introduces Video Retrieval-Augmented Generation (V-RAG), an approach to help improve video content creation. By combining retrieval augmented generation with advanced video AI models, V-RAG offers an efficient, and reliable solution for generating AI videos. - Retrieval-Augmented LLMs for Security Incident Analysis (arxiv.org, 2026-03-23T04:00:00)
Score: 12.48
arXiv:2603.18196v2 Announce Type: replace
Abstract: Investigating cybersecurity incidents requires collecting and analyzing evidence from multiple log sources, including intrusion detection alerts, network traffic records, and authentication events. This process is labor-intensive: analysts must sift through large volumes of data to identify relevant indicators and piece together what happened. We present a RAG-based system that performs security incident analysis through targeted query-based f - Introducing Nova Forge SDK, a seamless way to customize Nova models for enterprise AI (aws.amazon.com, 2026-03-18T16:06:21)
Score: 11.61
Today, we are launching Nova Forge SDK that makes LLM customization accessible, empowering teams to harness the full potential of language models without the challenges of dependency management, image selection, and recipe configuration and eventually lowering the barrier of entry. - Use RAG for video generation using Amazon Bedrock and Amazon Nova Reel (aws.amazon.com, 2026-03-19T16:45:50)
Score: 11.554
In this post, we explore our approach to video generation through VRAG, transforming natural language text prompts and images into grounded, high-quality videos. Through this fully automated solution, you can generate realistic, AI-powered video sequences from structured text and image inputs, streamlining the video creation process. - Improving Generalization on Cybersecurity Tasks with Multi-Modal Contrastive Learning (arxiv.org, 2026-03-23T04:00:00)
Score: 11.48
arXiv:2603.20181v1 Announce Type: new
Abstract: The use of ML in cybersecurity has long been impaired by generalization issues: Models that work well in controlled scenarios fail to maintain performance in production. The root cause often lies in ML algorithms learning superficial patterns (shortcuts) rather than underlying cybersecurity concepts. We investigate contrastive multi-modal learning as a first step towards improving ML performance in cybersecurity tasks. We aim at transferring knowl - ClawWorm: Self-Propagating Attacks Across LLM Agent Ecosystems (arxiv.org, 2026-03-23T04:00:00)
Score: 11.48
arXiv:2603.15727v2 Announce Type: replace
Abstract: Autonomous LLM-based agents increasingly operate as long-running processes forming densely interconnected multi-agent ecosystems, whose security properties remain largely unexplored. In particular, OpenClaw, an open-source platform with over 40,000 active instances, has stood out recently with its persistent configurations, tool-execution privileges, and cross-platform messaging capabilities. In this work, we present ClawWorm, the first self-r - Proactive Preparation and Hardening Against Destructive Attacks: 2026 Edition (cloud.google.com, 2026-03-06T14:00:00)
Score: 11.432
Written by: Matthew McWhirt, Bhavesh Dhake, Emilio Oropeza, Gautam Krishnan, Stuart Carrera, Greg Blaum, Michael Rudden UPDATE (March 13): Added guidance around abuse or misuse of endpoint / MDM platforms . Background Threat actors leverage destructive malware to destroy data, eliminate evidence of malicious activity, or manipulate systems in a way that renders them inoperable. Destructive cyberattacks can be a powerful means to achieve strategic or tactical objectives; however, the risk of repr
Auto-generated 2026-03-23
