Categories Uncategorized

Weekly Threat Report 2026-03-09

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2026-03-09T05:00:06.599423+00:00

  1. January 2026 CVE Landscape: 23 Critical Vulnerabilities Mark 5% Increase, APT28 Exploits Microsoft Office Zero-Day (www.recordedfuture.com, 2026-02-24T00:00:00)
    Score: 12.999
    January 2026 saw 23 actively exploited CVEs, including APT28’s Microsoft Office zero-day and critical auth bypass flaws impacting enterprise systems.
  2. Latin America's Cybersecurity Turning Point: From Reactive Defense to Threat Intelligence (www.recordedfuture.com, 2026-03-03T00:00:00)
    Score: 8.465
    Latin America's threat landscape is evolving fast — and reactive defense is no longer enough. PIX fraud, ransomware, and targeted attacks are outpacing overstretched security teams. Recorded Future provides LATAM-specific intelligence, automation, and seamless integrations to help your team get ahead of threats before they hit.
  3. 2025 Cloud Threat Hunting and Defense Landscape (www.recordedfuture.com, 2026-02-19T00:00:00)
    Score: 8.465
    Threat actors are doubling down on cloud infrastructure — exploiting misconfigurations, abusing native services, and pivoting through hybrid environments to maximize impact. See how attack patterns are evolving across exploitation, ransomware, credential abuse, and AI service targeting in this latest cloud threat roundup.
  4. Beware of fake OpenClaw installers, even if Bing points you to GitHub (www.malwarebytes.com, 2026-03-06T11:11:26)
    Score: 7.743
    Bing search results pointed victims to GitHub repositories claiming to host OpenClaw installers, but in reality they installed malware.
  5. Windows File Shredder: When deleting a file isn’t enough (www.malwarebytes.com, 2026-03-05T11:07:53)
    Score: 7.576
    File Shredder for Windows from Malwarebytes lets you truly, actually, really delete a file or folder from your hard drive or USB drive.
  6. Attackers abuse OAuth’s built-in redirects to launch phishing and malware attacks (www.malwarebytes.com, 2026-03-04T12:53:00)
    Score: 7.421
    Researchers have found that attackers are abusing OAuth to send users from legitimate Microsoft or Google login pages to phishing sites or malware downloads.
  7. High-severity Qualcomm bug hits Android devices in targeted attacks (www.malwarebytes.com, 2026-03-04T12:33:22)
    Score: 7.419
    Google has patched 129 Android vulnerabilities, including an actively exploited flaw in a widely used Qualcomm component.
  8. Chrome flaw let extensions hijack Gemini’s camera, mic, and file access (www.malwarebytes.com, 2026-03-03T12:10:19)
    Score: 7.25
    Researchers found a now-patched vulnerability in "Live in Chrome" that allowed a Chrome extension to inherit Gemini’s permissions.
  9. How to understand and avoid Advanced Persistent Threats (www.malwarebytes.com, 2026-02-26T18:52:11)
    Score: 6.963
    APT stands for Advanced Persistent Threat. But what does that actually mean, and how does it translate into the kind of threat you’re facing?
  10. Fake Zoom and Google Meet scams install Teramind: A technical deep dive (www.malwarebytes.com, 2026-02-26T22:40:00)
    Score: 6.489
    Attackers don’t always need custom malware. Sometimes they just need a trusted brand and a legitimate tool.

Top 10 AI / LLM-Related Threats

Generated 2026-03-09T06:00:18.538612+00:00

  1. Window-based Membership Inference Attacks Against Fine-tuned Large Language Models (arxiv.org, 2026-03-09T04:00:00)
    Score: 20.78
    arXiv:2601.02751v2 Announce Type: replace-cross
    Abstract: Most membership inference attacks (MIAs) against Large Language Models (LLMs) rely on global signals, like average loss, to identify training data. This approach, however, dilutes the subtle, localized signals of memorization, reducing attack effectiveness. We challenge this global-averaging paradigm, positing that membership signals are more pronounced within localized contexts. We introduce WBC (Window-Based Comparison), which exploits
  2. Depth Charge: Jailbreak Large Language Models from Deep Safety Attention Heads (arxiv.org, 2026-03-09T04:00:00)
    Score: 19.78
    arXiv:2603.05772v1 Announce Type: new
    Abstract: Currently, open-sourced large language models (OSLLMs) have demonstrated remarkable generative performance. However, as their structure and weights are made public, they are exposed to jailbreak attacks even after alignment. Existing attacks operate primarily at shallow levels, such as the prompt or embedding level, and often fail to expose vulnerabilities rooted in deeper model components, which creates a false sense of security for successful de
  3. Peak + Accumulation: A Proxy-Level Scoring Formula for Multi-Turn LLM Attack Detection (arxiv.org, 2026-03-09T04:00:00)
    Score: 19.48
    arXiv:2602.11247v2 Announce Type: replace
    Abstract: Multi-turn prompt injection attacks distribute malicious intent across multiple conversation turns, exploiting the assumption that each turn is evaluated independently. While single-turn detection has been extensively studied, no published formula exists for aggregating per-turn pattern scores into a conversation-level risk score at the proxy layer — without invoking an LLM. We identify a fundamental flaw in the intuitive weighted-average app
  4. SecureRAG-RTL: A Retrieval-Augmented, Multi-Agent, Zero-Shot LLM-Driven Framework for Hardware Vulnerability Detection (arxiv.org, 2026-03-09T04:00:00)
    Score: 17.78
    arXiv:2603.05689v1 Announce Type: new
    Abstract: Large language models (LLMs) have shown remarkable capabilities in natural language processing tasks, yet their application in hardware security verification remains limited due to scarcity of publicly available hardware description language (HDL) datasets. This knowledge gap constrains LLM performance in detecting vulnerabilities within HDL designs. To address this challenge, we propose SecureRAG-RTL, a novel Retrieval-Augmented Generation (RAG)-
  5. Knowing without Acting: The Disentangled Geometry of Safety Mechanisms in Large Language Models (arxiv.org, 2026-03-09T04:00:00)
    Score: 17.78
    arXiv:2603.05773v1 Announce Type: new
    Abstract: Safety alignment is often conceptualized as a monolithic process wherein harmfulness detection automatically triggers refusal. However, the persistence of jailbreak attacks suggests a fundamental mechanistic decoupling. We propose the \textbf{\underline{D}}isentangled \textbf{\underline{S}}afety \textbf{\underline{H}}ypothesis \textbf{(DSH)}, positing that safety computation operates on two distinct subspaces: a \textit{Recognition Axis} ($\mathbf
  6. Evolving Deception: When Agents Evolve, Deception Wins (arxiv.org, 2026-03-09T04:00:00)
    Score: 17.78
    arXiv:2603.05872v1 Announce Type: new
    Abstract: Self-evolving agents offer a promising path toward scalable autonomy. However, in this work, we show that in competitive environments, self-evolution can instead give rise to a serious and previously underexplored risk: the spontaneous emergence of deception as an evolutionarily stable strategy. We conduct a systematic empirical study on the self-evolution of large language model (LLM) agents in a competitive Bidding Arena, where agents iterativel
  7. ESAA-Security: An Event-Sourced, Verifiable Architecture for Agent-Assisted Security Audits of AI-Generated Code (arxiv.org, 2026-03-09T04:00:00)
    Score: 17.78
    arXiv:2603.06365v1 Announce Type: new
    Abstract: AI-assisted software generation has increased development speed, but it has also amplified a persistent engineering problem: systems that are functionally correct may still be structurally insecure. In practice, prompt-based security review with large language models often suffers from uneven coverage, weak reproducibility, unsupported findings, and the absence of an immutable audit trail. The ESAA architecture addresses a related governance probl
  8. Agent Tools Orchestration Leaks More: Dataset, Benchmark, and Mitigation (arxiv.org, 2026-03-09T04:00:00)
    Score: 17.78
    arXiv:2512.16310v2 Announce Type: replace
    Abstract: Driven by Large Language Models, the single-agent, multi-tool architecture has become a popular paradigm for autonomous agents. However, this architecture introduces a severe privacy risk, which we term Tools Orchestration Privacy Risk (TOP-R): an agent, to achieve a benign user goal, autonomously aggregates non-sensitive fragments from multiple tools and synthesizes unexpected sensitive information. We provide the first systematic study of th
  9. Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild (unit42.paloaltonetworks.com, 2026-03-03T11:00:30)
    Score: 17.421
    Uncover real-world indirect prompt injection attacks and learn how adversaries weaponize hidden web content to exploit LLMs for high-impact fraud. The post Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild appeared first on Unit 42 .
  10. SemFuzz: A Semantics-Aware Fuzzing Framework for Network Protocol Implementations (arxiv.org, 2026-03-09T04:00:00)
    Score: 14.78
    arXiv:2603.05989v1 Announce Type: new
    Abstract: Network protocols are the foundation of modern communication, yet their implementations often contain semantic vulnerabilities stemming from inadequate understanding of specification semantics. Existing gray-box and black-box testing approaches lack semantic modeling of protocols, making it difficult to precisely express testing intent and cover boundary conditions. Moreover, they typically rely on coarse-grained oracles such as crashes, which are
  11. Before You Hand Over the Wheel: Evaluating LLMs for Security Incident Analysis (arxiv.org, 2026-03-09T04:00:00)
    Score: 14.78
    arXiv:2603.06422v1 Announce Type: new
    Abstract: Security incident analysis (SIA) poses a major challenge for security operations centers, which must manage overwhelming alert volumes, large and diverse data sources, complex toolchains, and limited analyst expertise. These difficulties intensify because incidents evolve dynamically and require multi-step, multifaceted reasoning. Although organizations are eager to adopt Large Language Models (LLMs) to support SIA, the absence of rigorous benchma
  12. Information-Theoretic Privacy Control for Sequential Multi-Agent LLM Systems (arxiv.org, 2026-03-09T04:00:00)
    Score: 14.78
    arXiv:2603.05520v1 Announce Type: cross
    Abstract: Sequential multi-agent large language model (LLM) systems are increasingly deployed in sensitive domains such as healthcare, finance, and enterprise decision-making, where multiple specialized agents collaboratively process a single user request. Although individual agents may satisfy local privacy constraints, sensitive information can still be inferred through sequential composition and intermediate representations. In this work, we study \emp
  13. When Specifications Meet Reality: Uncovering API Inconsistencies in Ethereum Infrastructure (arxiv.org, 2026-03-09T04:00:00)
    Score: 14.78
    arXiv:2603.06029v1 Announce Type: cross
    Abstract: The Ethereum ecosystem, which secures over $381 billion in assets, fundamentally relies on client APIs as the sole interface between users and the blockchain. However, these critical APIs suffer from widespread implementation inconsistencies, which can lead to financial discrepancies, degraded user experiences, and threats to network reliability. Despite this criticality, existing testing approaches remain manual and incomplete: they require ext
  14. Good-Enough LLM Obfuscation (GELO) (arxiv.org, 2026-03-09T04:00:00)
    Score: 14.78
    arXiv:2603.05035v2 Announce Type: replace
    Abstract: Large Language Models (LLMs) are increasingly served on shared accelerators where an adversary with read access to device memory can observe KV caches and hidden states, threatening prompt privacy for open-source models. Cryptographic protections such as MPC and FHE offer strong guarantees but remain one to two orders of magnitude too slow for interactive inference, while static obfuscation schemes break under multi-run statistical attacks onc
  15. Proactive Preparation and Hardening Against Destructive Attacks: 2026 Edition (cloud.google.com, 2026-03-06T14:00:00)
    Score: 14.765
    Written by: Matthew McWhirt, Bhavesh Dhake, Emilio Oropeza, Gautam Krishnan, Stuart Carrera, Greg Blaum, Michael Rudden Background Threat actors leverage destructive malware to destroy data, eliminate evidence of malicious activity, or manipulate systems in a way that renders them inoperable. Destructive cyberattacks can be a powerful means to achieve strategic or tactical objectives; however, the risk of reprisal is likely to limit the frequency of use to very select incidents. Destructive cybe
  16. Coruna: The Mysterious Journey of a Powerful iOS Exploit Kit (cloud.google.com, 2026-03-03T14:00:00)
    Score: 14.051
    Introduction Google Threat Intelligence Group (GTIG) has identified a new and powerful exploit kit targeting Apple iPhone models running iOS version 13.0 (released in September 2019) up to version 17.2.1 (released in December 2023) . The exploit kit, named “Coruna” by its developers, contained five full iOS exploit chains and a total of 23 exploits. The core technical value of this exploit kit lies in its comprehensive collection of iOS exploits, with the most advanced ones using non-public expl
  17. Train CodeFu-7B with veRL and Ray on Amazon SageMaker Training jobs (aws.amazon.com, 2026-02-24T15:46:50)
    Score: 12.702
    In this post, we demonstrate how to train CodeFu-7B, a specialized 7-billion parameter model for competitive programming, using Group Relative Policy Optimization (GRPO) with veRL, a flexible and efficient training library for large language models (LLMs) that enables straightforward extension of diverse RL algorithms and seamless integration with existing LLM infrastructure, within a distributed Ray cluster managed by SageMaker training jobs. We walk through the complete implementation, coverin
  18. Traversal-as-Policy: Log-Distilled Gated Behavior Trees as Externalized, Verifiable Policies for Safe, Robust, and Efficient Agents (arxiv.org, 2026-03-09T04:00:00)
    Score: 12.48
    arXiv:2603.05517v1 Announce Type: cross
    Abstract: Autonomous LLM agents fail because long-horizon policy remains implicit in model weights and transcripts, while safety is retrofitted post hoc. We propose Traversal-as-Policy: distill sandboxed OpenHands execution logs into a single executable Gated Behavior Tree (GBT) and treat tree traversal — rather than unconstrained generation — as the control policy whenever a task is in coverage. Each node encodes a state-conditioned action macro mined
  19. SPARK: Jailbreaking T2V Models by Synergistically Prompting Auditory and Recontextualized Knowledge (arxiv.org, 2026-03-09T04:00:00)
    Score: 12.48
    arXiv:2511.13127v3 Announce Type: replace-cross
    Abstract: Jailbreak attacks can circumvent model safety guardrails and reveal critical blind spots. Prior attacks on text-to-video (T2V) models typically add adversarial perturbations to obviously unsafe prompts, which are often easy to detect and defend. In contrast, we show that benign-looking prompts containing rich, implicit cues can induce T2V models to generate semantically unsafe videos that both violate policy and preserve the original (bl
  20. Malicious AI Assistant Extensions Harvest LLM Chat Histories (www.microsoft.com, 2026-03-05T16:02:12)
    Score: 11.647
    Malicious AI browser extensions collected LLM chat histories and browsing data from platforms such as ChatGPT and DeepSeek. With nearly 900,000 installs and activity across more than 20,000 enterprise tenants, the campaign highlights the growing risk of data exposure through browser extensions. The post Malicious AI Assistant Extensions Harvest LLM Chat Histories appeared first on Microsoft Security Blog .
  21. Building custom model provider for Strands Agents with LLMs hosted on SageMaker AI endpoints (aws.amazon.com, 2026-03-05T16:15:41)
    Score: 11.549
    This post demonstrates how to build custom model parsers for Strands agents when working with LLMs hosted on SageMaker that don't natively support the Bedrock Messages API format. We'll walk through deploying Llama 3.1 with SGLang on SageMaker using awslabs/ml-container-creator, then implementing a custom parser to integrate it with Strands agents.
  22. Look What You Made Us Patch: 2025 Zero-Days in Review (cloud.google.com, 2026-03-05T14:00:00)
    Score: 11.527
    Written by: Casey Charrier, James Sadowski, Zander Work, Clement Lecigne, Benoît Sevens, Fred Plan Executive Summary Google Threat Intelligence Group (GTIG) tracked 90 zero-day vulnerabilities exploited in-the-wild in 2025. Although that volume of zero-days is lower than the record high observed in 2023 (100), it is higher than 2024’s count (78) and remained within the 60–100 range established over the previous four years, indicating a trend toward stabilization at these levels. In 2025, we cont
  23. Claude Code Security and the AI Market Reaction: What Security Leaders should Actually Focus on (www.rapid7.com, 2026-03-02T15:00:00)
    Score: 9.923
    When Anthropic announced Claude Code Security, the market reacted immediately. Several cybersecurity stocks saw sharp drops as speculation spread that AI-powered code security tools could displace traditional security platforms. The narrative moved quickly: AI is replacing AppSec. AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. T
  24. The Post-RAMP Era: Allegations, Fragmentation, and the Rebuilding of the Ransomware Underground (www.rapid7.com, 2026-02-25T13:56:38)
    Score: 9.722
    Executive summary The January 2026 seizure of RAMP disrupted a major ransomware coordination hub, but it did not dismantle the ecosystem behind it. Instead, it destabilized trust and accelerated fragmentation across the underground. Rather than consolidating around a single successor, ransomware actors are redistributing across both gated platforms like T1erOne and accessible forums such as Rehub. This shift reflects adaptation, not decline. For defenders, visibility into centralized coordinatio
  25. Efficiently serve dozens of fine-tuned models with vLLM on Amazon SageMaker AI and Amazon Bedrock (aws.amazon.com, 2026-02-25T20:56:13)
    Score: 9.691
    In this post, we explain how we implemented multi-LoRA inference for Mixture of Experts (MoE) models in vLLM, describe the kernel-level optimizations we performed, and show you how you can benefit from this work. We use GPT-OSS 20B as our primary example throughout this post.

Auto-generated 2026-03-09

Written By

More From Author

You May Also Like