Categories Uncategorized

Weekly Threat Report 2026-04-13

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2026-04-13T05:00:06.098267+00:00

  1. Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure (www.cisa.gov, 2026-04-06T11:03:58)
    Score: 13.375
    Advisory at a Glance Title Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure Original Publication April 7, 2026 Executive Summary Iran-affiliated advanced persistent threat (APT) actors are conducting exploitation activity targeting internet-facing operational technology (OT) devices, including programmable logic controllers (PLCs) manufactured by Rockwell Automation/Allen-Bradley. This activity has led to PLC disruptions across several U.S.
  2. Understanding and Anticipating Venezuelan Government Actions (www.recordedfuture.com, 2026-04-08T00:00:00)
    Score: 8.132
    Explore an in-depth analysis of Venezuela’s political landscape following the January 2026 US operation to capture Nicolás Maduro. This executive summary examines Acting President Delcy Rodríguez’s transition strategy, her pragmatic re-engagement with Washington, and the internal threats posed by PSUV rivals like Diosdado Cabello. Gain insights into the "three-phase" US plan for stabilization, the 2026 Organic Hydrocarbons Law reforms, and the outlook for economic recovery versus the e
  3. Fake Claude site installs malware that gives attackers access to your computer (www.malwarebytes.com, 2026-04-10T16:16:26)
    Score: 7.778
    We found a convincing fake site that installs a trojanized Claude app while quietly deploying PlugX malware.
  4. This fake Windows support website delivers password-stealing malware (www.malwarebytes.com, 2026-04-09T09:40:52)
    Score: 7.566
    A convincing Microsoft lookalike tricks users into downloading malware that steals passwords, payments, and account access.
  5. Apple expands “DarkSword” patches to iOS 18.7.7 (www.malwarebytes.com, 2026-04-02T14:13:44)
    Score: 6.431
    Apple has quietly expanded patches against the vulnerabilities in the DarkSword exploit kit to include iOS and iPadOS 18.7.7
  6. Malwarebytes Privacy VPN receives full third-party audit (www.malwarebytes.com, 2026-04-02T13:00:00)
    Score: 6.422
    We commissioned a third-party audit for the infrastructure behind our VPNs. Here are the results.
  7. ClickFix finds a new way to infect Macs (www.malwarebytes.com, 2026-04-10T15:02:18)
    Score: 5.77
    ClickFix campaigns have found a way around macOS Tahoe's warnings against pasting commands in the Terminal. They're using Script Editor instead.
  8. Scammers pose as Amazon support to steal your account (www.malwarebytes.com, 2026-04-09T13:05:44)
    Score: 5.59
    A new wave of Amazon refund scams is spreading, hitting both email inboxes and text messages.
  9. NSFW app leak exposes 70,000 prompts linked to individual users (www.malwarebytes.com, 2026-04-09T11:02:51)
    Score: 5.575
    MyLovelyAI leaked personal data, explicit prompts, and images of over 100,000 users, exposing many to sextortion and doxxing.
  10. 30,000 private Facebook images allegedly downloaded by Meta employee (www.malwarebytes.com, 2026-04-09T10:07:37)
    Score: 5.569
    The accused didn't just browse around; he built a custom script designed to circumvent Meta's internal detection systems.

Top 10 AI / LLM-Related Threats

Generated 2026-04-13T06:00:18.808591+00:00

  1. Exploiting Web Search Tools of AI Agents for Data Exfiltration (arxiv.org, 2026-04-13T04:00:00)
    Score: 28.78
    arXiv:2510.09093v2 Announce Type: replace
    Abstract: Large language models (LLMs) are now routinely used to autonomously execute complex tasks, from natural language processing to dynamic workflows like web searches. The usage of tool-calling and Retrieval Augmented Generation (RAG) allows LLMs to process and retrieve sensitive corporate data, amplifying both their functionality and vulnerability to abuse. As LLMs increasingly interact with external data sources, indirect prompt injection emerge
  2. M-Trends 2026: Data, Insights, and Strategies From the Frontlines (cloud.google.com, 2026-03-23T14:00:00)
    Score: 23.079
    Every year, the cyber threat landscape forces defenders to adapt to evolving adversary tactics, techniques, and procedures (TTPs). In 2025, Mandiant observed a clear divergence in adversary pacing that closely aligns with the trends we have been documenting for defenders over the past year. On one end of the spectrum, cyber criminal groups optimized for immediate impact and deliberate recovery denial. On the other end, sophisticated cyber espionage groups and insider threats optimized for extrem
  3. Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection (arxiv.org, 2026-04-13T04:00:00)
    Score: 20.78
    arXiv:2604.09024v1 Announce Type: cross
    Abstract: Multi-modal large language models (MLLMs) have emerged as powerful tools for analyzing Internet-scale image data, offering significant benefits but also raising critical safety and societal concerns. In particular, open-weight MLLMs may be misused to extract sensitive information from personal images at scale, such as identities, locations, or other private details. In this work, we propose ImageProtector, a user-side method that proactively pro
  4. DeepGuard: Secure Code Generation via Multi-Layer Semantic Aggregation (arxiv.org, 2026-04-13T04:00:00)
    Score: 17.78
    arXiv:2604.09089v1 Announce Type: cross
    Abstract: Large Language Models (LLMs) for code generation can replicate insecure patterns from their training data. To mitigate this, a common strategy for security hardening is to fine-tune models using supervision derived from the final transformer layer. However, this design may suffer from a final-layer bottleneck: vulnerability-discriminative cues can be distributed across layers and become less detectable near the output representations optimized f
  5. Unreal Thinking: Chain-of-Thought Hijacking via Two-stage Backdoor (arxiv.org, 2026-04-13T04:00:00)
    Score: 16.78
    arXiv:2604.09235v1 Announce Type: new
    Abstract: Large Language Models (LLMs) are increasingly deployed in settings where Chain-of-Thought (CoT) is interpreted by users. This creates a new safety risk: attackers may manipulate the model's observable CoT to make malicious behaviors. In open-weight ecosystems, such manipulation can be embedded in lightweight adapters that are easy to distribute and attach to base models. In practice, persistent CoT hijacking faces three main challenges: the d
  6. Kill-Chain Canaries: Stage-Level Tracking of Prompt Injection Across Attack Surfaces and Model Safety Tiers (arxiv.org, 2026-04-13T04:00:00)
    Score: 16.48
    arXiv:2603.28013v3 Announce Type: replace
    Abstract: Multi-agent LLM systems are entering production — processing documents, managing workflows, acting on behalf of users — yet their resilience to prompt injection is still evaluated with a single binary: did the attack succeed? This leaves architects without the diagnostic information needed to harden real pipelines. We introduce a kill-chain canary methodology that tracks a cryptographic token through four stages (EXPOSED -> PERSISTED -&gt
  7. Trans-RAG: Query-Centric Vector Transformation for Secure Cross-Organizational Retrieval (arxiv.org, 2026-04-13T04:00:00)
    Score: 15.48
    arXiv:2604.09541v1 Announce Type: new
    Abstract: Retrieval Augmented Generation (RAG) systems deployed across organizational boundaries face fundamental tensions between security, accuracy, and efficiency. Current encryption methods expose plaintext during decryption, while federated architectures prevent resource integration and incur substantial overhead. We introduce Trans-RAG, implementing a novel vector space language paradigm where each organization's knowledge exists in a mathematica
  8. Conversations Risk Detection LLMs in Financial Agents via Multi-Stage Generative Rollout (arxiv.org, 2026-04-13T04:00:00)
    Score: 14.78
    arXiv:2604.09056v1 Announce Type: new
    Abstract: With the rapid adoption of large language models (LLMs) in financial service scenarios, dialogue security detection under high regulatory risk presents significant challenges. Existing methods mainly rely on single-dimensional semantic judgments or fixed rules, making them inadequate for handling multi-turn semantic evolution and complex regulatory clauses; moreover, they lack models specifically designed for financial security detection. To addre
  9. BadSkill: Backdoor Attacks on Agent Skills via Model-in-Skill Poisoning (arxiv.org, 2026-04-13T04:00:00)
    Score: 13.48
    arXiv:2604.09378v1 Announce Type: new
    Abstract: Agent ecosystems increasingly rely on installable skills to extend functionality, and some skills bundle learned model artifacts as part of their execution logic. This creates a supply-chain risk that is not captured by prompt injection or ordinary plugin misuse: a third-party skill may appear benign while concealing malicious behavior inside its bundled model. We present BadSkill, a backdoor attack formulation that targets this model-in-skill thr
  10. Semantic Intent Fragmentation: A Single-Shot Compositional Attack on Multi-Agent AI Pipelines (arxiv.org, 2026-04-13T04:00:00)
    Score: 12.48
    arXiv:2604.08608v1 Announce Type: new
    Abstract: We introduce Semantic Intent Fragmentation (SIF), an attack class against LLM orchestration systems where a single, legitimately phrased request causes an orchestrator to decompose a task into subtasks that are individually benign but jointly violate security policy. Current safety mechanisms operate at the subtask level, so each step clears existing classifiers — the violation only emerges at the composed plan. SIF exploits OWASP LLM06:2025 thro
  11. Self-Sovereign Agent (arxiv.org, 2026-04-13T04:00:00)
    Score: 11.78
    arXiv:2604.08551v1 Announce Type: new
    Abstract: We investigate the emerging prospect of self-sovereign agents — AI systems that can economically sustain and extend their own operation without human involvement. Recent advances in large language models and agent frameworks have substantially expanded agents' practical capabilities, pointing toward a potential shift from developer-controlled tools to more autonomous digital actors. We analyze the remaining technical barriers to such deploym
  12. When an Attacker Meets a Group of Agents: Navigating Amazon Bedrock's Multi-Agent Applications (unit42.paloaltonetworks.com, 2026-04-03T22:00:38)
    Score: 11.578
    Unit 42 research on multi-agent AI systems on Amazon Bedrock reveals new attack surfaces and prompt injection risks. Learn how to secure your AI applications. The post When an Attacker Meets a Group of Agents: Navigating Amazon Bedrock's Multi-Agent Applications appeared first on Unit 42 .
  13. Introducing stateful MCP client capabilities on Amazon Bedrock AgentCore Runtime (aws.amazon.com, 2026-04-09T14:47:57)
    Score: 11.535
    In this post, you will learn how to build stateful MCP servers that request user input during execution, invoke LLM sampling for dynamic content generation, and stream progress updates for long-running tasks. You will see code examples for each capability and deploy a working stateful MCP server to Amazon Bedrock AgentCore Runtime.
  14. Building Intelligent Search with Amazon Bedrock and Amazon OpenSearch for hybrid RAG solutions (aws.amazon.com, 2026-04-06T17:49:32)
    Score: 10.851
    In this post, we show how to implement a generative AI agentic assistant that uses both semantic and text-based search using Amazon Bedrock, Amazon Bedrock AgentCore, Strands Agents and Amazon OpenSearch.
  15. What Project Glasswing Means for Security Leaders (www.rapid7.com, 2026-04-09T17:51:15)
    Score: 10.665
    Anthropic’s Project Glasswing matters because it offers an early look at how quickly software flaws may soon be found, validated, and potentially turned into viable attack paths, even if that capability is currently limited to a closed partner program. Anthropic says its restricted Claude Mythos Preview model has already identified thousands of high-severity vulnerabilities, including flaws in major operating systems and browsers, and in some cases developed related exploits autonomously. Some e
  16. What’s New in Rapid7 Products and Services: Q1 2026 in Review (www.rapid7.com, 2026-04-09T12:46:35)
    Score: 10.615
    If product releases had a runway moment, Q1 at Rapid7 would’ve walked out in Cloud Dancer; crisp, confident, and quietly powerful, before breaking into a full gallop in the Year of the Horse. At Rapid7, our first-quarter launches combined velocity with refinement: meaningful enhancements designed to move security teams faster without adding complexity. Let’s cover off the key launches, one by one. Detection and response MDR for Microsoft Getting more value from the tools you already have is an o
  17. XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers (arxiv.org, 2026-04-13T04:00:00)
    Score: 9.98
    arXiv:2604.09489v1 Announce Type: new
    Abstract: Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices. This ap
  18. Retrieval Augmented Classification for Confidential Documents (arxiv.org, 2026-04-13T04:00:00)
    Score: 9.48
    arXiv:2604.08628v1 Announce Type: new
    Abstract: Unauthorized disclosure of confidential documents demands robust, low-leakage classification. In real work environments, there is a lot of inflow and outflow of documents. To continuously update knowledge, we propose a methodology for classifying confidential documents using Retrieval Augmented Classification (RAC). To confirm this effectiveness, we compare RAC and supervised fine tuning (FT) on the WikiLeaks US Diplomacy corpus under realistic se
  19. Stringology-Based Cryptanalysis for EChaCha20 Stream Cipher (arxiv.org, 2026-04-13T04:00:00)
    Score: 9.48
    arXiv:2604.08862v1 Announce Type: new
    Abstract: Stringology-Based Cryptanalysis (SBC) offers a suitable and a structurally aligned approach for uncovering structural patterns in stream ciphers that traditional statistical tests may often fail to detect. Despite \texttt{EChaCha20}'s design enhancements, no systematic investigation has been performed to determine whether its expanded 6$\times$6 state matrix and modified Quarter-Round Function (\texttt{QR-F}) introduce subtle keystream patter
  20. ChatGPT, is this real? The influence of generative AI on writing style in top-tier cybersecurity papers (arxiv.org, 2026-04-13T04:00:00)
    Score: 9.48
    arXiv:2604.09316v1 Announce Type: new
    Abstract: With the release of ChatGPT in 2022, generative AI has significantly lowered the cost of polishing and rewriting text. Due to its widespread usage, conference organizers instated specific requirements researchers need to adhere to when using GenAI. When asked to rewrite text, GenAI can introduce stylistic changes, often concentrated to a handful of “marker words“ commonly associated with AI usage. Prior large-scale studies in preprints and biome
  21. A Deductive System for Contract Satisfaction Proofs (arxiv.org, 2026-04-13T04:00:00)
    Score: 9.48
    arXiv:2604.09165v1 Announce Type: cross
    Abstract: Hardware-software contracts are abstract specifications of a CPU's leakage behavior. They enable verifying the security of high-level programs against side-channel attacks without having to explicitly reason about the microarchitectural details of the CPU. Using the abstraction powers of a contract requires proving that the targeted CPU satisfies the contract in the sense that the contract over-approximates the CPU's leakage. Besides p
  22. Reasoning Hijacking: Subverting LLM Classification via Decision-Criteria Injection (arxiv.org, 2026-04-13T04:00:00)
    Score: 9.48
    arXiv:2601.10294v3 Announce Type: replace
    Abstract: Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We propose a new adversarial prompt attack paradigm: Reasoning Hijacking and instantiate it wit
  23. North Korea-Nexus Threat Actor Compromises Widely Used Axios NPM Package in Supply Chain Attack (cloud.google.com, 2026-03-31T14:00:00)
    Score: 9.384
    Written by: Austin Larsen, Dima Lenz, Adrian Hernandez, Tyler McLellan, Christopher Gardner, Ashley Zaya, Michael Rudden, Mon Liclican Introduction Google Threat Intelligence Group (GTIG) is tracking an active software supply chain attack targeting the popular Node Package Manager (NPM) package " axios ." Between March 31, 2026, 00:21 and 03:20 UTC, an attacker introduced a malicious dependency named " plain-crypto-js " into axios NPM releases versions 1.14.1 and 0.30.4. Axio
  24. New Whitepaper: Stealthy BPFDoor Variants are a Needle That Looks Like Hay (www.rapid7.com, 2026-04-02T13:00:00)
    Score: 8.95
    Executive Overview Advanced persistent threats (APTs) are constantly and consistently changing tactics as network defenders plug holes in defenses. Static indicators of compromise (IoCs) for the BPFDoor have been widely deployed, forcing threat actors to get creative in their use of this particular strain of malware. What they came up with is ingenious. New research from Rapid7 Labs has uncovered undocumented features leading to the discovery of 7 new BPFDoor variants: a stealthy kernel-level ba
  25. Metasploit Wrap-Up 04/10/2026 (www.rapid7.com, 2026-04-10T19:11:43)
    Score: 8.917
    Speedup Improvements of MSFVenom & New Modules This week, we have added new modules to Metasploit Framework targeting Cisco Catalyst SD-WAN controllers and osTicket as well as updates and improvements to Windows service-for-user persistence, and LDAP/ADCS-related modules to automatically report related services resulting in an improved data stream, which can be queried by using the services command. We also landed an improvement to msfvenom’s bootup time, thanks to bcoles , resulting in an a

Auto-generated 2026-04-13

Written By

More From Author

You May Also Like