
Weekly Threat Report 2026-04-06

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2026-04-06T05:00:05.766322+00:00

  1. Apple expands “DarkSword” patches to iOS 18.7.7 (www.malwarebytes.com, 2026-04-02T14:13:44)
    Score: 7.597
    Apple has quietly expanded patches for the vulnerabilities exploited by the DarkSword exploit kit to include iOS and iPadOS 18.7.7.
  2. Malwarebytes Privacy VPN receives full third-party audit (www.malwarebytes.com, 2026-04-02T13:00:00)
    Score: 7.589
    We commissioned a third-party audit for the infrastructure behind our VPNs. Here are the results.
  3. Infiniti Stealer: a new macOS infostealer using ClickFix and Python/Nuitka (www.malwarebytes.com, 2026-03-26T17:39:01)
    Score: 6.955
    A new macOS infostealer, NukeChain (now Infiniti Stealer), uses fake CAPTCHA pages to trick users into running malicious commands.
  4. Bogus Avast website fakes virus scan, installs Venom Stealer instead (www.malwarebytes.com, 2026-03-27T10:49:31)
    Score: 6.574
    A fake Avast scan tells you your PC is infected, then installs malware that steals passwords, session data, and crypto wallets.
  5. ClickFix Campaigns Targeting Windows and macOS (www.recordedfuture.com, 2026-03-25T00:00:00)
    Score: 6.465
    Insikt Group reveals five ClickFix social engineering clusters (QuickBooks, Booking.com, Birdeye) targeting Windows and macOS. Learn how threat actors exploit native system tools with malicious, obfuscated commands to gain initial access, and get key mitigations for defense.
  6. Killer robots are here. Now what? (Lock and Code S07E07) (www.malwarebytes.com, 2026-04-05T23:10:20)
    Score: 6.16
    This week on the Lock and Code podcast, we speak with Peter Asaro about killer robots, how to stop them, and their obvious consequences.
  7. That dream job offer from Coca-Cola or Ferrari? It’s a trap for your passwords (www.malwarebytes.com, 2026-04-03T23:13:58)
    Score: 5.827
    We uncovered two job scams posing as legitimate offers from Coca-Cola and Ferrari that could pry into Google and Facebook accounts.
  8. Blocking children from social media is a badly executed good idea (www.malwarebytes.com, 2026-04-03T14:37:52)
    Score: 5.767
    Governments are each inventing their own flavor of an age-based ban for social media. Is the cure worse than the disease?
  9. The Iran War: What You Need to Know (www.recordedfuture.com, 2026-04-03T00:00:00)
    Score: 5.665
    Insikt Group tracks the cyber, physical, and geopolitical components of the US-Israeli strikes on Iran — with continuously updated threat analysis and scenarios.
  10. Latin America and the Caribbean Cybercrime Landscape (www.recordedfuture.com, 2026-04-02T00:00:00)
    Score: 5.499
    This report provides an overview of trends and developments in the cybercriminal ecosystem of Latin America and the Caribbean (LAC) in 2025.

Top 25 AI / LLM-Related Threats

Generated 2026-04-06T06:00:19.096398+00:00

  1. M-Trends 2026: Data, Insights, and Strategies From the Frontlines (cloud.google.com, 2026-03-23T14:00:00)
    Score: 24.746
    Every year, the cyber threat landscape forces defenders to adapt to evolving adversary tactics, techniques, and procedures (TTPs). In 2025, Mandiant observed a clear divergence in adversary pacing that closely aligns with the trends we have been documenting for defenders over the past year. On one end of the spectrum, cyber criminal groups optimized for immediate impact and deliberate recovery denial. On the other end, sophisticated cyber espionage groups and insider threats optimized for extrem…
  2. Understanding the Effects of Safety Unalignment on Large Language Models (arxiv.org, 2026-04-06T04:00:00)
    Score: 17.78
    arXiv:2604.02574v1 Announce Type: new
    Abstract: Safety alignment has become a critical step to ensure LLMs refuse harmful requests while providing helpful and harmless responses. However, despite the ubiquity of safety alignment for deployed frontier models, two separate lines of recent work, jailbreak-tuning (JT) and weight orthogonalization (WO), have shown that safety guardrails may be largely disabled, resulting in LLMs which comply with harmful requests they would normally refuse. In spite… (See the weight-orthogonalization sketch after this list.)
  3. Cooking Up Risks: Benchmarking and Reducing Food Safety Risks in Large Language Models (arxiv.org, 2026-04-06T04:00:00)
    Score: 17.78
    arXiv:2604.01444v2 Announce Type: replace
    Abstract: Large language models (LLMs) are increasingly deployed for everyday tasks, including food preparation and health-related guidance. However, food safety remains a high-stakes domain where inaccurate or misleading information can cause severe real-world harm. Despite these risks, current LLMs and safety guardrails lack rigorous alignment tailored to domain-specific food hazards. To address this gap, we introduce FoodGuardBench, the first compreh…
  4. Poison Once, Exploit Forever: Environment-Injected Memory Poisoning Attacks on Web Agents (arxiv.org, 2026-04-06T04:00:00)
    Score: 15.48
    arXiv:2604.02623v1 Announce Type: new
    Abstract: Memory makes LLM-based web agents personalized, powerful, yet exploitable. By storing past interactions to personalize future tasks, agents inadvertently create a persistent attack surface that spans websites and sessions. While existing security research on memory assumes attackers can directly inject into memory storage or exploit shared memory across users, we present a more realistic threat model: contamination through environmental observatio…
  5. Backdoor Attacks on Decentralised Post-Training (arxiv.org, 2026-04-06T04:00:00)
    Score: 14.78
    arXiv:2604.02372v1 Announce Type: new
    Abstract: Decentralised post-training of large language models utilises data and pipeline parallelism techniques to split the data and the model. Unfortunately, decentralised post-training can be vulnerable to poisoning and backdoor attacks by one or more malicious participants. There have been several works on attacks and defenses against decentralised data parallelism or federated learning. However, existing works on the robustness of pipeline parallelism…
  6. Automated Malware Family Classification using Weighted Hierarchical Ensembles of Large Language Models (arxiv.org, 2026-04-06T04:00:00)
    Score: 14.78
    arXiv:2604.02490v1 Announce Type: new
    Abstract: Malware family classification remains a challenging task in automated malware analysis, particularly in real-world settings characterized by obfuscation, packing, and rapidly evolving threats. Existing machine learning and deep learning approaches typically depend on labeled datasets, handcrafted features, supervised training, or dynamic analysis, which limits their scalability and effectiveness in open-world scenarios.
    This paper presents a zer…
  7. From Theory to Practice: Code Generation Using LLMs for CAPEC and CWE Frameworks (arxiv.org, 2026-04-06T04:00:00)
    Score: 14.78
    arXiv:2604.02548v1 Announce Type: new
    Abstract: The increasing complexity and volume of software systems have heightened the importance of identifying and mitigating security vulnerabilities. The existing software vulnerability datasets frequently fall short in providing comprehensive, detailed code snippets explicitly linked to specific vulnerability descriptions, reducing their utility for advanced research and hindering efforts to develop a deeper understanding of security vulnerabilities. …
  8. AutoVerifier: An Agentic Automated Verification Framework Using Large Language Models (arxiv.org, 2026-04-06T04:00:00)
    Score: 14.78
    arXiv:2604.02617v1 Announce Type: cross
    Abstract: Scientific and Technical Intelligence (S&TI) analysis requires verifying complex technical claims across rapidly growing literature, where existing approaches fail to bridge the verification gap between surface-level accuracy and deeper methodological validity. We present AutoVerifier, an LLM-based agentic framework that automates end-to-end verification of technical claims without requiring domain expertise. AutoVerifier decomposes every te…
  9. Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study (arxiv.org, 2026-04-06T04:00:00)
    Score: 13.48
    arXiv:2604.03070v1 Announce Type: new
    Abstract: Third-party skills extend LLM agents with powerful capabilities but often handle sensitive credentials in privileged environments, making leakage risks poorly understood. We present the first large-scale empirical study of this problem, analyzing 17,022 skills (sampled from 170,226 on SkillsMP) using static analysis, sandbox testing, and manual inspection. We identify 520 vulnerable skills with 1,708 issues and derive a taxonomy of 10 leakage patt…
  10. Kill-Chain Canaries: Stage-Level Tracking of Prompt Injection Across Attack Surfaces and Model Safety Tiers (arxiv.org, 2026-04-06T04:00:00)
    Score: 13.48
    arXiv:2603.28013v2 Announce Type: replace
    Abstract: We present a stage-decomposed analysis of prompt injection attacks against five frontier LLM agents. Prior work measures task-level attack success rate (ASR); we localize the pipeline stage at which each model's defense activates. We instrument every run with a cryptographic canary token (SECRET-[A-F0-9]{8}) tracked through four kill-chain stages — Exposed, Persisted, Relayed, Executed — across four attack surfaces and five defense cond… (See the canary-token sketch after this list.)
  11. When an Attacker Meets a Group of Agents: Navigating Amazon Bedrock's Multi-Agent Applications (unit42.paloaltonetworks.com, 2026-04-03T22:00:38)
    Score: 13.244
    Unit 42 research on multi-agent AI systems on Amazon Bedrock reveals new attack surfaces and prompt injection risks. Learn how to secure your AI applications.
  12. Introducing Amazon Polly Bidirectional Streaming: Real-time speech synthesis for conversational AI (aws.amazon.com, 2026-03-26T17:10:20)
    Score: 13.192
    Today, we’re excited to announce the new Bidirectional Streaming API for Amazon Polly, enabling streamlined real-time text-to-speech (TTS) synthesis where you can start sending text and receiving audio simultaneously. This new API is built for conversational AI applications that generate text or audio incrementally, like responses from large language models (LLMs), where audio synthesis must begin before the full text is available.
  13. Open, Closed and Broken: Prompt Fuzzing Finds LLMs Still Fragile Across Open and Closed Models (unit42.paloaltonetworks.com, 2026-03-17T10:00:38)
    Score: 12.078
    Unit 42 research unveils LLM guardrail fragility using genetic algorithm-inspired prompt fuzzing. Discover scalable evasion methods and critical GenAI security implications. (See the prompt-fuzzing sketch after this list.)
  14. A Systematic Security Evaluation of OpenClaw and Its Variants (arxiv.org, 2026-04-06T04:00:00)
    Score: 11.78
    arXiv:2604.03131v1 Announce Type: new
    Abstract: Tool-augmented AI agents substantially extend the practical capabilities of large language models, but they also introduce security risks that cannot be identified through model-only evaluation. In this paper, we present a systematic security assessment of six representative OpenClaw-series agent frameworks, namely OpenClaw, AutoClaw, QClaw, KimiClaw, MaxClaw, and ArkClaw, under multiple backbone models. To support this study, we construct a bench…
  15. Learning the Signature of Memorization in Autoregressive Language Models (arxiv.org, 2026-04-06T04:00:00)
    Score: 11.78
    arXiv:2604.03199v1 Announce Type: cross
    Abstract: All prior membership inference attacks for fine-tuned language models use hand-crafted heuristics (e.g., loss thresholding, Min-K%, reference calibration), each bounded by the designer's intuition. We introduce the first transferable learned attack, enabled by the observation that fine-tuning any model on any corpus yields unlimited labeled data, since membership is known by construction. This removes the shadow model bottleneck and brings… (See the loss-thresholding sketch after this list.)
  16. AlertStar: Path-Aware Alert Prediction on Hyper-Relational Knowledge Graphs (arxiv.org, 2026-04-06T04:00:00)
    Score: 11.48
    arXiv:2604.03104v1 Announce Type: new
    Abstract: Cyber-attacks continue to grow in scale and sophistication, yet existing network intrusion detection approaches lack the semantic depth required for path reasoning over attacker-victim interactions. We address this by first modelling network alerts as a knowledge graph, then formulating hyper-relational alert prediction as a hyper-relational knowledge graph completion (HR-KGC) problem, representing each network alert as a qualified statement (h, r…
  17. North Korea-Nexus Threat Actor Compromises Widely Used Axios NPM Package in Supply Chain Attack (cloud.google.com, 2026-03-31T14:00:00)
    Score: 11.051
    Written by: Austin Larsen, Dima Lenz, Adrian Hernandez, Tyler McLellan, Christopher Gardner, Ashley Zaya, Michael Rudden, Mon Liclican Introduction Google Threat Intelligence Group (GTIG) is tracking an active software supply chain attack targeting the popular Node Package Manager (NPM) package "axios". Between March 31, 2026, 00:21 and 03:20 UTC, an attacker introduced a malicious dependency named "plain-crypto-js" into axios NPM releases versions 1.14.1 and 0.30.4. Axios… (See the lockfile-scanning sketch after this list.)
  18. Ransomware Under Pressure: Tactics, Techniques, and Procedures in a Shifting Threat Landscape (cloud.google.com, 2026-03-16T14:00:00)
    Score: 10.979
    Written by: Bavi Sadayappan, Zach Riddle, Ioana Teaca, Kimberly Goody, Genevieve Stark Introduction Since 2018, when many financially motivated threat actors began shifting their monetization strategy to post-compromise ransomware deployments, ransomware has become one of the most pervasive threats to organizations across almost every industry vertical and region. In recent years ransomware operations have evolved, creating a robust ecosystem that has lowered the barrier to entry via the commodi…
  19. Asking AI for personal advice is a bad idea, Stanford study shows (www.malwarebytes.com, 2026-03-31T19:40:23)
    Score: 10.907
    AI chatbots, including ChatGPT, Claude, and Gemini, were all too willing to validate and hype up their users, a new Stanford study showed.
  20. Accelerating LLM fine-tuning with unstructured data using SageMaker Unified Studio and S3 (aws.amazon.com, 2026-03-26T17:20:26)
    Score: 10.893
    Last year, AWS announced an integration between Amazon SageMaker Unified Studio and Amazon S3 general purpose buckets. This integration makes it straightforward for teams to use unstructured data stored in Amazon Simple Storage Service (Amazon S3) for machine learning (ML) and data analytics use cases. In this post, we show how to integrate S3 general purpose buckets with Amazon SageMaker Catalog to fine-tune Llama 3.2 11B Vision Instruct for visual question answering (VQA) using Amazon SageMaker…
  21. New Whitepaper: Stealthy BPFDoor Variants are a Needle That Looks Like Hay (www.rapid7.com, 2026-04-02T13:00:00)
    Score: 10.617
    Executive Overview Advanced persistent threats (APTs) are constantly and consistently changing tactics as network defenders plug holes in defenses. Static indicators of compromise (IoCs) for BPFDoor have been widely deployed, forcing threat actors to get creative in their use of this particular strain of malware. What they came up with is ingenious. New research from Rapid7 Labs has uncovered undocumented features leading to the discovery of 7 new BPFDoor variants: a stealthy kernel-level ba…
  22. Initial Access Brokers have Shifted to High-Value Targets and Premium Pricing (www.rapid7.com, 2026-03-31T13:00:00)
    Score: 10.141
    Initial Access Brokers (IABs) are a key component of the cybercrime ecosystem, offering hassle-free building blocks for ransomware, data theft, and extortion. Rapid7’s analysis of H2 2025 activity across five major forums provides fresh insight into a shift in the balance of power toward initial access sales on newer marketplaces such as RAMP and DarkForums. Higher asking prices and more focus on high-value sectors and large organizations, such as Government, Retail, and IT, reveal a mature and profit-fo…
  23. The Attack Cycle is Accelerating: Announcing the Rapid7 2026 Global Threat Landscape Report (www.rapid7.com, 2026-03-18T13:00:00)
    Score: 10.046
    The predictive window has collapsed. In 2025, high-impact vulnerabilities weren’t quietly accumulating risk. They were operationalized, often within days. Today, Rapid7 Labs released the 2026 Global Threat Landscape Report, an in-depth analysis of how attacker behavior is evolving across vulnerability exploitation, ransomware operations, identity abuse, and AI-driven tradecraft. The data shows a clear pattern: exposure is being identified and weaponized faster than most organizations are se…
  24. An AI gateway designed to steal your data (securelist.com, 2026-03-26T11:01:38)
    Score: 9.631
    Dissecting the supply chain attack on LiteLLM, a multifunctional gateway used in many AI agents. Explaining the dangers of the malicious code and how to protect yourself.
  25. vSphere and BRICKSTORM Malware: A Defender's Guide (cloud.google.com, 2026-04-02T14:00:00)
    Score: 9.527
    Written by: Stuart Carrera Introduction Building on recent BRICKSTORM research from Google Threat Intelligence Group (GTIG), this post explores the evolving threats facing virtualized environments. These operations directly target the VMware vSphere ecosystem, specifically the vCenter Server Appliance (VCSA) and ESXi hypervisors. To help organizations stay ahead of these risks, we will focus on the essential hardening strategies and mitigating controls necessary to secure these critical assets.
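
Illustrative Code Sketches

Weight orthogonalization (item 2) is, in the published refusal-direction literature, a rank-one projection applied to a model's weight matrices so that no layer can write along a chosen direction. A minimal NumPy sketch, assuming a unit-norm refusal direction has already been extracted; the random W and d below are stand-ins, not real model weights:

    import numpy as np

    def orthogonalize(W: np.ndarray, d: np.ndarray) -> np.ndarray:
        """Project direction d out of W's output space: W' = (I - d d^T) W,
        so every output y = W' x satisfies d . y = 0."""
        d = d / np.linalg.norm(d)      # ensure unit norm
        return W - np.outer(d, d) @ W

    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, 4))    # stand-in weight matrix
    d = rng.standard_normal(8)         # stand-in refusal direction
    W_abl = orthogonalize(W, d)
    print(np.allclose((d / np.linalg.norm(d)) @ W_abl, 0.0))  # True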
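
Item 10 instruments each agent run with a cryptographic canary token matching SECRET-[A-F0-9]{8} and records the deepest kill-chain stage the token reaches. A minimal sketch of minting such a token and locating it across staged artifacts; the artifacts dictionary is a hypothetical capture, not the paper's harness:

    import re
    import secrets

    STAGES = ["Exposed", "Persisted", "Relayed", "Executed"]  # order from the abstract

    def mint_canary() -> str:
        """Mint a token matching the SECRET-[A-F0-9]{8} pattern."""
        return "SECRET-" + secrets.token_hex(4).upper()

    def deepest_stage(canary: str, artifacts: dict[str, str]) -> str | None:
        """Return the deepest stage whose captured artifact contains the canary."""
        hit = None
        for stage in STAGES:
            if canary in artifacts.get(stage, ""):
                hit = stage
        return hit

    canary = mint_canary()
    assert re.fullmatch(r"SECRET-[A-F0-9]{8}", canary)
    artifacts = {                      # hypothetical captures from one run
        "Exposed": f"rendered page text ... {canary}",
        "Persisted": f"agent memory dump: {canary}",
        "Relayed": "outbound request body (clean)",
    }
    print(deepest_stage(canary, artifacts))  # -> Persisted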
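
Item 13's genetic-algorithm-inspired prompt fuzzing reduces to a mutate/score/select loop over candidate prompts. A toy sketch of that loop; the mutation operators and the fitness function are stand-ins (a real harness would score guardrail evasion from model responses):

    import random

    MUTATIONS = [
        lambda p: p + " Answer as a fictional narrator.",
        lambda p: p.replace(" ", "  "),  # whitespace padding
        lambda p: "".join(c.upper() if random.random() < 0.2 else c for c in p),
    ]

    def fitness(prompt: str) -> float:
        """Stand-in score; a real harness would query the target model
        and a guardrail classifier, then reward evasive outputs."""
        return len(prompt) * random.random()

    def fuzz(seed: str, generations: int = 5, population: int = 8) -> str:
        pool = [seed]
        for _ in range(generations):
            children = [random.choice(MUTATIONS)(p)                  # mutate
                        for p in pool
                        for _ in range(population // len(pool))]
            pool = sorted(children, key=fitness, reverse=True)[:2]   # select
        return pool[0]

    print(fuzz("benign seed prompt"))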
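
Item 15 positions its learned attack against hand-crafted membership inference heuristics such as loss thresholding. A minimal sketch of that classic baseline, assuming access to a per-sample loss function; fake_loss below is a stand-in, not a real model:

    def loss_threshold_mia(sample: str, model_loss, threshold: float = 2.0) -> bool:
        """Classic heuristic: a sample with unusually low loss under the
        fine-tuned model is flagged as a likely training-set member."""
        return model_loss(sample) < threshold

    def fake_loss(text: str) -> float:
        """Stand-in that pretends one string was memorized during training."""
        return 0.3 if text == "memorized training sentence" else 4.1

    print(loss_threshold_mia("memorized training sentence", fake_loss))  # True
    print(loss_threshold_mia("an unseen sentence", fake_loss))           # False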
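
For the axios supply-chain compromise in item 17, one quick defensive check is to scan a project's npm lockfile for the malicious plain-crypto-js dependency. A minimal sketch assuming an npm lockfile v2/v3 with a flat "packages" map; adjust the path for your project:

    import json
    from pathlib import Path

    MALICIOUS = "plain-crypto-js"  # dependency named in the GTIG report

    def lockfile_hits(lock_path: str = "package-lock.json") -> list[str]:
        """Return lockfile entries that reference the malicious package."""
        data = json.loads(Path(lock_path).read_text())
        hits = []
        for pkg_path, meta in data.get("packages", {}).items():
            if MALICIOUS in pkg_path or MALICIOUS in meta.get("dependencies", {}):
                hits.append(pkg_path or "(root)")
        return hits

    if __name__ == "__main__":
        found = lockfile_hits()
        print("\n".join(found) if found else "no reference to " + MALICIOUS)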

Auto-generated 2026-04-06
