Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2026-05-04T05:00:06.181324+00:00
- Defending Against China-Nexus Covert Networks of Compromised Devices (www.cisa.gov, 2026-04-21T15:12:37)
Score: 15.404
Defending against china-nexus covert networks of compromised devices executive summary Defending against China-nexus covert networks of compromised devices Explaining the widespread shift in tactics, techniques and procedures (TTPs) towards networks of compromised infrastructure, and how to defend against it Summary With support from the UK Cyber League , this advisory has been jointly released by the National Cyber Security Centre (NCSC-UK) and international partners: Australian Signals Directo - March 2026 CVE Landscape: 31 High-Impact Vulnerabilities Identified, Interlock Ransomware Group Exploits Cisco FMC Zero-Day (www.recordedfuture.com, 2026-04-13T00:00:00)
Score: 10.665
March 2026 saw a 139% increase in high-impact vulnerabilities, with Recorded Future's Insikt Group® identifying 31 vulnerabilities requiring immediate remediation, up from 13 in February 2026. - Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure (www.cisa.gov, 2026-04-06T11:03:58)
Score: 9.875
Advisory at a Glance Title Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure Original Publication April 7, 2026 Executive Summary Iran-affiliated advanced persistent threat (APT) actors are conducting exploitation activity targeting internet-facing operational technology (OT) devices, including programmable logic controllers (PLCs) manufactured by Rockwell Automation/Allen-Bradley. This activity has led to PLC disruptions across several U.S. - Actively exploited cPanel bug exposes millions of websites to takeover (www.malwarebytes.com, 2026-05-01T10:48:19)
Score: 9.74
A vulnerability in the cPanel/WHM admin interface lets attackers access websites without a username and password. - Fake CAPTCHA scam turns a quick click into a costly phone bill (www.malwarebytes.com, 2026-04-28T10:46:01)
Score: 7.74
Scammers are using fake CAPTCHA pages to rack up international SMS charges on victims’ phone bills, and then take a cut. - Tune In: The Future of AI-Powered Vulnerability Discovery (www.crowdstrike.com, 2026-05-01T05:00:00)
Score: 7.7 - Microsoft won’t patch PhantomRPC: Feature or bug? (www.malwarebytes.com, 2026-04-29T13:27:32)
Score: 7.425
A researcher has detailed five ways to exploit PhantomRPC, which Microsoft rates “moderate” and does not plan to fix. - Scam-checking just got a lot easier: Malwarebytes is now in Claude (www.malwarebytes.com, 2026-04-29T10:52:29)
Score: 7.407
We're in Claude! Now everyone can use our threat intel to check suspicious links, phone numbers, or email addresses. We're committed to helping you spot scams. - Apple fixes iOS bug that kept deleted notifications, including chat previews (www.malwarebytes.com, 2026-04-23T10:27:32)
Score: 6.405
A vulnerability in iPhones and iPads allowed law enforcement to recover deleted notifications, including Signal message previews. - Malicious trading website drops malware that hands your browser to attackers (www.malwarebytes.com, 2026-04-22T12:30:02)
Score: 6.252
A fake TradingView AI agent site leads to malware that can take over your browser, steal your accounts and financial data, and open the door to further attacks.
Top 10 AI / LLM-Related Threats
Generated 2026-05-04T06:00:20.115554+00:00
- Defending Your Enterprise When AI Models Can Find Vulnerabilities Faster Than Ever (cloud.google.com, 2026-04-16T14:00:00)
Score: 26.494
Introduction Advances in AI model-powered exploitation have demonstrated that general-purpose AI models can excel at vulnerability discovery, even without being purpose-built for the task. Eventually, capabilities such as these will be integrated directly into the development cycle, and code will be more difficult to exploit than ever; however, this transition creates a critical window of risk. As we harden existing software with AI, threat actors will use it to discover and exploit novel vulner - Sentra-Guard: A Real-Time Multilingual Defense Against Adversarial LLM Prompts (arxiv.org, 2026-05-04T04:00:00)
Score: 25.78
arXiv:2510.22628v2 Announce Type: replace
Abstract: This paper presents a real-time modular defense system named Sentra-Guard. The system detects and mitigates jailbreak and prompt injection attacks targeting large language models (LLMs). The framework uses a hybrid architecture with FAISS-indexed SBERT embedding representations that capture the semantic meaning of prompts, combined with fine-tuned transformer classifiers, which are machine learning models specialized for distinguishing between - Parasites in the Toolchain: A Large-Scale Analysis of Attacks on the MCP Ecosystem (arxiv.org, 2026-05-04T04:00:00)
Score: 18.78
arXiv:2509.06572v5 Announce Type: replace
Abstract: Large language models(LLMs) are increasingly integrated with external systems through the Model Context Protocol(MCP),which standardizes tool invocation and has rapidly become a backbone for LLM-powered applications. While this paradigm enhances functionality,it also introduces a fundamental security shift:LLMs transition from passive information processors to autonomous orchestrators of task-oriented toolchains,expanding the attack surface,el - Trident: Improving Malware Detection with LLMs and Behavioral Features (arxiv.org, 2026-05-04T04:00:00)
Score: 17.78
arXiv:2605.00297v1 Announce Type: new
Abstract: Traditionally, machine learning methods for PE malware detection have relied on static features like byte histograms, string information, and PE header contents. One barrier to incorporating dynamic analysis features has been the semi-structured nature of sandbox behavior reports. We show that, using the latest generation of large language models with reasoning, it is possible to efficiently process these behavior reports and utilize them as part - When RAG Chatbots Expose Their Backend: An Anonymized Case Study of Privacy and Security Risks in Patient-Facing Medical AI (arxiv.org, 2026-05-04T04:00:00)
Score: 17.48
arXiv:2605.00796v1 Announce Type: new
Abstract: Background: Patient-facing medical chatbots based on retrieval-augmented generation (RAG) are increasingly promoted to deliver accessible, grounded health information. AI-assisted development lowers the barrier to building them, but they still demand rigorous security, privacy, and governance controls. Objective: To report an anonymized, non-destructive security assessment of a publicly accessible patient-facing medical RAG chatbot and identify go - Sun Finance automates ID extraction and fraud detection with generative AI on AWS (aws.amazon.com, 2026-04-30T17:00:45)
Score: 16.857
In this post, we show how Sun Finance used Amazon Bedrock, Amazon Textract, and Amazon Rekognition to build an AI-powered identity verification (IDV) pipeline. The solution improved extraction accuracy from 79.7% to 90.8%, cut per-document costs by 91%, and reduced processing time from up to 20 hours to under 5 seconds. You'll learn how combining specialized OCR with large language model (LLM) structuring outperformed using either tool alone. You'll also learn how to architect a server - Block-wise Codeword Embedding for Reliable Multi-bit Text Watermarking (arxiv.org, 2026-05-04T04:00:00)
Score: 16.78
arXiv:2605.00348v1 Announce Type: new
Abstract: Recent multi-bit watermarking methods for large language models (LLMs) prioritize capacity over reliability, often conflating decoding with detection. Our analysis reveals that existing ECC-based extractors suffer from catastrophic false positive rates (FPR), and applying rejection thresholds merely collapses detection sensitivity (TPR) to random guessing. To resolve this structural limitation, we propose \textbf{BREW} (Block-wise Reliable Embeddi - CleanBase: Detecting Malicious Documents in RAG Knowledge Databases (arxiv.org, 2026-05-04T04:00:00)
Score: 15.48
arXiv:2605.00460v1 Announce Type: new
Abstract: Retrieval-augmented generation (RAG) is vulnerable to prompt injection attacks, in which an adversary inserts malicious documents containing carefully crafted injected prompts into the knowledge database. When a user issues a question targeted by the attack, the RAG system may retrieve these malicious documents, whose injected prompts mislead it into generating attacker-specified answers, thereby compromising the integrity of the RAG system. In th - Attention Is Where You Attack (arxiv.org, 2026-05-04T04:00:00)
Score: 14.78
arXiv:2605.00236v1 Announce Type: new
Abstract: Safety-aligned large language models rely on RLHF and instruction tuning to refuse harmful requests, yet the internal mechanisms implementing safety behavior remain poorly understood. We introduce the Attention Redistribution Attack (ARA), a white-box adversarial attack that identifies safety-critical attention heads and crafts nonsemantic adversarial tokens that redirect attention away from safety-relevant positions. Unlike prior jailbreak method - Skills as Verifiable Artifacts: A Trust Schema and a Biconditional Correctness Criterion for Human-in-the-Loop Agent Runtimes (arxiv.org, 2026-05-04T04:00:00)
Score: 14.78
arXiv:2605.00424v1 Announce Type: new
Abstract: Agent skills — structured packages of instructions, scripts, and references that augment a large language model (LLM) without modifying the model itself — have moved from convenience to first-class deployment artifact. The runtime that loads them inherits the same problem package managers and operating systems have always faced: a piece of content claims a behavior; the runtime must decide whether to believe it. We argue this paper's centra - Self-Adaptive Multi-Agent LLM-Based Security Pattern Selection for IoT Systems (arxiv.org, 2026-05-04T04:00:00)
Score: 14.78
arXiv:2605.00741v1 Announce Type: new
Abstract: The adoption of Internet of Things (IoT) systems at the network edge of smart architectures is increasing rapidly, intensifying the need for security mechanisms that are both adaptive and resource-efficient. In such environments, runtime defence mechanisms are no longer limited to detection alone but become a resource-constrained task of selecting mitigation actions. Security controls must be carefully selected, combined, and executed under latenc - Jailbroken Frontier Models Retain Their Capabilities (arxiv.org, 2026-05-04T04:00:00)
Score: 14.78
arXiv:2605.00267v1 Announce Type: cross
Abstract: As language model safeguards become more robust, attackers are pushed toward developing increasingly complex jailbreaks. Prior work has found that this complexity imposes a "jailbreak tax" that degrades the target model's task performance. We show that this tax scales inversely with model capability and that the most advanced jailbreaks effectively yield no reduction in model capabilities. Evaluating 28 jailbreaks on five benchmar - ML-Bench&Guard: Policy-Grounded Multilingual Safety Benchmark and Guardrail for Large Language Models (arxiv.org, 2026-05-04T04:00:00)
Score: 14.78
arXiv:2605.00689v1 Announce Type: cross
Abstract: As Large Language Models (LLMs) are increasingly deployed in cross-linguistic contexts, ensuring safety in diverse regulatory and cultural environments has become a critical challenge. However, existing multilingual benchmarks largely rely on general risk taxonomies and machine translation, which confines guardrail models to these predefined categories and hinders their ability to align with region-specific regulations and cultural nuances. To b - SoK: Security of Autonomous LLM Agents in Agentic Commerce (arxiv.org, 2026-05-04T04:00:00)
Score: 14.78
arXiv:2604.15367v2 Announce Type: replace
Abstract: Autonomous large language model (LLM) agents such as OpenClaw are pushing agentic commerce from human-supervised assistance toward machine actors that can negotiate, purchase services, manage digital assets, and execute transactions across on-chain and off-chain environments. Protocols such as the Trustless Agents standard (ERC-8004), Agent Payments Protocol (AP2), OKX Agent Payments Protocol (APP), the HTTP 402-based payment protocol (x402), - Project Glasswing and the Next Challenge for Defenders: Turning Faster Discovery into Faster Action (www.rapid7.com, 2026-04-20T16:20:32)
Score: 12.769
Anthropic’s Project Glasswing has sparked plenty of discussion about what AI might soon do for vulnerability discovery, but the more useful question for most security teams is how to prepare for, and more importantly seize the opportunity of, what comes next. As we wrote in our earlier blog, What Project Glasswing Means for Security Leaders , AI is becoming more capable of finding software flaws. The pressure that follows lands on the teams responsible for deciding what matters, validating risk, - Symbolic Execution Meets Multi-LLM Orchestration: Detecting Memory Vulnerabilities in Incomplete Rust CVE Snippets (arxiv.org, 2026-05-04T04:00:00)
Score: 12.48
arXiv:2605.00034v1 Announce Type: new
Abstract: This paper presents a system combining symbolic execution (KLEE) with a 4-agent multi-LLM architecture for detecting memory vulnerabilities in Rust unsafe code. A central challenge we address is the incomplete-code problem: CVE database entries provide only isolated code snippets that lack struct definitions, imports, and Cargo manifests, causing all existing formal verification tools to fail at compilation with zero output. Our system resolves th - ExCyTIn-Bench: Evaluating LLM agents on Cyber Threat Investigation (arxiv.org, 2026-05-04T04:00:00)
Score: 12.48
arXiv:2507.14201v3 Announce Type: replace
Abstract: We present ExCyTIn-Bench, the first benchmark to Evaluate an LLM agent X on the task of Cyber Threat Investigation through security questions derived from investigation graphs. Real-world security analysts must sift through a large number of heterogeneous security logs, follow multi-hop chains of evidence to investigate threats. With the developments of LLMs, building LLM-based agents for automatic threat investigation is a promising direction - XekRung Technical Report (arxiv.org, 2026-05-04T04:00:00)
Score: 11.78
arXiv:2605.00072v1 Announce Type: new
Abstract: We present XekRung, a frontier large language model for cybersecurity, designed to provide comprehensive security capabilities. To achieve this, we develop diverse data synthesis pipelines tailored to the cybersecurity domain, enabling the scalable construction of high-quality training data and providing a strong foundation for cybersecurity knowledge and understanding. Building on this foundation, we establish a complete training pipeline spannin - RoboKA: KAN Informed Multimodal Learning for RoboCall Surveillance System (arxiv.org, 2026-05-04T04:00:00)
Score: 11.48
arXiv:2605.00156v1 Announce Type: cross
Abstract: Wide exploration on robocall surveillance research is hindered due to limited access to public datasets, due to privacy concerns. In this work, we first curate Robo-SAr, a synthetic robocall dataset designed for robocall surveillance research. Robo-SAr comprises of ~200 unwanted and ~1200 legitimate synthetic robocall samples across three realistic adversarial axes: psycholinguistics-manipulated transcripts, emotion-eliciting speech, and cloned - DiffMI: Breaking Face Recognition Privacy via Diffusion-Driven Training-Free Model Inversion (arxiv.org, 2026-05-04T04:00:00)
Score: 11.48
arXiv:2504.18015v4 Announce Type: replace
Abstract: Face recognition poses serious privacy risks due to its reliance on sensitive and immutable biometric data. While modern systems mitigate privacy risks by mapping facial images to embeddings (commonly regarded as privacy-preserving), model inversion attacks reveal that identity information can still be recovered, exposing critical vulnerabilities. However, existing attacks are often computationally expensive and lack generalization, especially - Metasploit Wrap-Up 04/25/2026 (www.rapid7.com, 2026-04-24T20:17:56)
Score: 11.261
Check Method Visibility Metasploit has supported check methods for many years now. It’s not always desirable to jump straight into exploiting a vulnerability but instead to determine if the target is vulnerable. Metasploit tries to be very conservative with classifying a target as “vulnerable” unless the vulnerability is leveraged as part of the check method, reserving the “appears” status for version checks. The different check codes a module is capable of returning and the logic to select amon - Five Things we Took Away from Gartner SRM Sydney 2026 (www.rapid7.com, 2026-04-29T23:00:00)
Score: 10.278
At this year's Gartner Security and Risk Management Summit in Sydney, Rapid7 CISO Brian Castagna joined industry CISO Nigel Hedges for a fireside chat on the decisions security leaders are actually making right now. They discussed the real decisions being made right now about budgets, burnout, AI, and perspective on consolidation. The conversation reinforced what we see across many organizations: SecOps is very much focused on protecting business resilience, enabling confident decisions by - How Popsa used Amazon Nova to inspire customers with personalised title suggestions (aws.amazon.com, 2026-04-27T16:45:37)
Score: 9.84
In this post, we share how we applied Amazon Bedrock and the Amazon Nova family of models to reimagine our Title Suggestion feature. By combining metadata, computer vision, and retrieval-augmented generative AI, we now automatically generate creative, brand-aligned titles and subtitles across 12 languages. Using the unified API of Amazon Bedrock, Anthropic’s Claude 3 Haiku, and Amazon Nova Lite and Pro, we improved quality, reduced cost, and cut response times. This resulted in higher customer s - AWS Generative AI Model Agility Solution: A comprehensive guide to migrating LLMs for generative AI production (aws.amazon.com, 2026-04-30T17:04:41)
Score: 9.557
In this post, we introduce a systematic framework for LLM migration or upgrade in generative AI production, encompassing essential tools, methodologies, and best practices. The framework facilitates transitions between different LLMs by providing robust protocols for prompt conversion and optimization. - Unleashing Agentic AI Analytics on Amazon SageMaker with Amazon Athena and Amazon Quick (aws.amazon.com, 2026-04-30T16:52:40)
Score: 9.555
This post demonstrates how agentic AI assistant from Amazon Quick transform data analytics into a self-service capability by using Amazon Simple Storage Service (Amazon S3) as a storage, Amazon SageMaker and AWS Glue for lakehouse, Amazon Athena for serverless SQL querying across multiple storage formats (S3 Table, Iceberg, and Parquet).
Auto-generated 2026-05-04
