Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2026-04-27T05:00:05.670876+00:00
- Defending Against China-Nexus Covert Networks of Compromised Devices (www.cisa.gov, 2026-04-21T15:12:37)
Score: 16.571
Explaining the widespread shift in tactics, techniques and procedures (TTPs) towards networks of compromised infrastructure, and how to defend against it. With support from the UK Cyber League, this advisory has been jointly released by the National Cyber Security Centre (NCSC-UK) and international partners: the Australian Signals Directorate…
- March 2026 CVE Landscape: 31 High-Impact Vulnerabilities Identified, Interlock Ransomware Group Exploits Cisco FMC Zero-Day (www.recordedfuture.com, 2026-04-13T00:00:00)
Score: 11.832
March 2026 saw a 139% increase in high-impact vulnerabilities, with Recorded Future's Insikt Group® identifying 31 vulnerabilities requiring immediate remediation, up from 13 in February 2026.
- Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure (www.cisa.gov, 2026-04-06T11:03:58)
Score: 11.042
Advisory at a glance (original publication April 7, 2026). Executive summary: Iran-affiliated advanced persistent threat (APT) actors are conducting exploitation activity targeting internet-facing operational technology (OT) devices, including programmable logic controllers (PLCs) manufactured by Rockwell Automation/Allen-Bradley. This activity has led to PLC disruptions across several U.S.…
- “Your shipment has arrived” email hides remote access software (www.malwarebytes.com, 2026-04-17T07:40:03)
Score: 9.552
This DHL-themed email tries to get recipients to install remote access software attackers can use to deploy further malware, including ransomware.
- April 2026 Patch Tuesday: Two Zero-Days and Eight Critical Vulnerabilities Among 164 CVEs (www.crowdstrike.com, 2026-04-14T05:00:00)
Score: 8.533
- Apple fixes iOS bug that kept deleted notifications, including chat previews (www.malwarebytes.com, 2026-04-23T10:27:32)
Score: 7.571
A vulnerability in iPhones and iPads allowed law enforcement to recover deleted notifications, including Signal message previews.
- Malicious trading website drops malware that hands your browser to attackers (www.malwarebytes.com, 2026-04-22T12:30:02)
Score: 7.419
A fake TradingView AI agent site leads to malware that can take over your browser, steal your accounts and financial data, and open the door to further attacks.
- AI Hype vs. Reality: Is AI Really Rewriting the Vulnerability Equation? (www.recordedfuture.com, 2026-04-22T00:00:00)
Score: 7.332
AI vulnerability research and discovery capabilities are improving, but they have not changed the fundamentals of vulnerability management.
- Critical minerals and cyber operations (www.recordedfuture.com, 2026-04-23T00:00:00)
Score: 7.299
Learn how critical minerals and rare earth elements (REEs) are evolving from commodities into strategic flashpoints. Explore the geopolitical risks of China’s refining dominance, the race for resources in the Arctic and space, and the rising threat of state-sponsored cyber operations targeting the global mining sector.
- Your Supply Chain Breach Is Someone Else's Payday (www.recordedfuture.com, 2026-04-15T00:00:00)
Score: 7.165
A supply chain attack by TeamPCP compromised trusted software tools to harvest credentials at scale, enabling payroll fraud, logistics theft, and ransomware extortion.
Top 10 AI / LLM-Related Threats
Generated 2026-04-27T06:00:20.442289+00:00
- Defending Your Enterprise When AI Models Can Find Vulnerabilities Faster Than Ever (cloud.google.com, 2026-04-16T14:00:00)
Score: 28.16
Introduction: Advances in AI model-powered exploitation have demonstrated that general-purpose AI models can excel at vulnerability discovery, even without being purpose-built for the task. Eventually, capabilities such as these will be integrated directly into the development cycle, and code will be more difficult to exploit than ever; however, this transition creates a critical window of risk. As we harden existing software with AI, threat actors will use it to discover and exploit novel vulnerabilities…
- Train in Vain: Functionality-Preserving Poisoning to Prevent Unauthorized Use of Code Datasets (arxiv.org, 2026-04-27T04:00:00)
Score: 17.78
arXiv:2604.22291v1 Announce Type: new
Abstract: The widespread availability of large-scale code datasets has accelerated the development of code large language models (CodeLLMs), raising concerns about unauthorized dataset usage. Dataset poisoning offers a proactive defense by reducing the utility of such unauthorized training. However, existing poisoning methods often require full dataset poisoning and introduce transformations that break code compilability. In this paper, we introduce FunPois…
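The abstract is cut off before the method details, but the property it emphasizes, that the poisoning transform must keep code compilable, can be illustrated. Below is a minimal Python ast sketch of a semantics-preserving transformation (a generic illustration only, not the paper's FunPois… technique; a real poisoning scheme would additionally degrade training utility):

```python
import ast

MARKER = ast.parse("_fp_marker = 0").body[0]  # harmless, compilable statement

class InsertMarker(ast.NodeTransformer):
    """Insert a no-op assignment at the top of every function body.

    The transformed source still parses and behaves identically, which is the
    "functionality-preserving" property contrasted above with transforms that
    break compilability.
    """
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.body = [MARKER] + node.body
        return node

def transform(source: str) -> str:
    tree = InsertMarker().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)  # Python 3.9+

if __name__ == "__main__":
    sample = "def add(a, b):\n    return a + b\n"
    print(transform(sample))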
- Automation-Exploit: A Multi-Agent LLM Framework for Adaptive Offensive Security with Digital Twin-Based Risk-Mitigated Exploitation (arxiv.org, 2026-04-27T04:00:00)
Score: 17.78
arXiv:2604.22427v1 Announce Type: new
Abstract: The offensive security landscape is highly fragmented: enterprise platforms avoid memory-corruption vulnerabilities due to Denial of Service (DoS) risks, Automatic Exploit Generation (AEG) systems suffer from semantic blindness, and Large Language Model (LLM) agents face safety alignment filters and "Live Fire" execution hazards. We introduce Automation-Exploit, a fully autonomous Multi-Agent System (MAS) framework designed for adaptive…
- Toward Principled LLM Safety Testing: Solving the Jailbreak Oracle Problem (arxiv.org, 2026-04-27T04:00:00)
Score: 17.78
arXiv:2506.17299v2 Announce Type: replace
Abstract: As large language models (LLMs) become increasingly deployed in safety-critical applications, the lack of systematic methods to assess their vulnerability to jailbreak attacks presents a critical security gap. We introduce the jailbreak oracle problem: given a model, prompt, and decoding strategy, determine whether a jailbreak response can be generated with likelihood exceeding a specified threshold. This formalization enables a principled stu…
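The oracle definition quoted above is concrete enough to sketch. The toy below (not the paper's algorithm) enumerates short completions from a dummy next-token distribution, scores each by its generation likelihood, and answers whether any completion both trips a hypothetical jailbreak judge and exceeds the likelihood threshold:

```python
import itertools

# Toy next-token distribution, independent of context. In a real oracle these
# probabilities would come from the target model under a fixed decoding strategy.
VOCAB_PROBS = {"sure,": 0.4, "here": 0.3, "I": 0.2, "cannot": 0.1}

def is_jailbreak(text: str) -> bool:
    """Hypothetical judge: flags completions that open with compliance."""
    return text.startswith("sure, here")

def jailbreak_oracle(max_len: int, threshold: float) -> bool:
    """True if some completion of length <= max_len is judged a jailbreak
    AND has generation likelihood >= threshold."""
    for length in range(1, max_len + 1):
        for tokens in itertools.product(VOCAB_PROBS, repeat=length):
            likelihood = 1.0
            for tok in tokens:
                likelihood *= VOCAB_PROBS[tok]
            if is_jailbreak(" ".join(tokens)) and likelihood >= threshold:
                return True
    return False

if __name__ == "__main__":
    # "sure, here" has likelihood 0.4 * 0.3 = 0.12, so the answer flips
    # between thresholds 0.1 and 0.2.
    print(jailbreak_oracle(max_len=2, threshold=0.1))  # True
    print(jailbreak_oracle(max_len=2, threshold=0.2))  # False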
- Sovereign Agentic Loops: Decoupling AI Reasoning from Execution in Real-World Systems (arxiv.org, 2026-04-27T04:00:00)
Score: 14.78
arXiv:2604.22136v1 Announce Type: new
Abstract: Large language model (LLM) agents increasingly issue API calls that mutate real systems, yet many current architectures pass stochastic model outputs directly to execution layers. We argue that this coupling creates a safety risk because model correctness, context awareness, and alignment cannot be assumed at execution time. We introduce Sovereign Agentic Loops (SAL), a control-plane architecture in which models emit structured intents with justif…
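The core idea, models emit structured intents that a deterministic control plane validates before anything executes, can be sketched briefly. The schema and policy below are hypothetical examples, not the SAL specification:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """Structured intent emitted by the model instead of a raw API call."""
    action: str          # e.g., "refund"
    target: str          # e.g., an order ID
    amount: float        # requested value
    justification: str   # model-provided rationale, logged for audit

# Deterministic control-plane policy, enforced outside the model.
ALLOWED_ACTIONS = {"refund", "resend_invoice"}
MAX_REFUND = 100.0

def validate(intent: Intent) -> bool:
    """Approve only intents that satisfy explicit, auditable rules."""
    if intent.action not in ALLOWED_ACTIONS:
        return False
    if intent.action == "refund" and intent.amount > MAX_REFUND:
        return False
    return bool(intent.justification.strip())

def execute(intent: Intent) -> str:
    return f"executed {intent.action} on {intent.target}"  # placeholder side effect

if __name__ == "__main__":
    proposed = Intent("refund", "order-42", 30.0, "duplicate charge reported")
    print(execute(proposed) if validate(proposed) else "rejected by control plane")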
- SSG: Logit-Balanced Vocabulary Partitioning for LLM Watermarking (arxiv.org, 2026-04-27T04:00:00)
Score: 14.78
arXiv:2604.22438v1 Announce Type: new
Abstract: Watermarking has emerged as a promising technique for tracing the authorship of content generated by large language models (LLMs). Among existing approaches, the KGW scheme is particularly attractive due to its versatility, efficiency, and effectiveness in natural language generation. However, KGW's effectiveness degrades significantly under low-entropy settings such as code generation and mathematical reasoning. A crucial step in the KGW met…
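For context, the KGW step the abstract refers to is its vocabulary partition: the previous token seeds a pseudorandom split of the vocabulary into a "green" and a "red" list, and green logits get a small positive bias. A minimal sketch of that partitioning step (simplified; not the SSG variant proposed in the paper):

```python
import hashlib
import random

def green_list(prev_token_id: int, vocab_size: int, gamma: float = 0.5) -> set:
    """Pseudorandomly partition the vocabulary, seeded by the previous token.

    In KGW-style watermarking, tokens in the returned green set get a bias
    delta added to their logits before sampling; a detector recomputes the
    same partition and tests how many generated tokens were green.
    """
    seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def bias_logits(logits: list, prev_token_id: int, delta: float = 2.0) -> list:
    """Add delta to the logits of green-listed tokens."""
    green = green_list(prev_token_id, len(logits))
    return [x + delta if i in green else x for i, x in enumerate(logits)]

if __name__ == "__main__":
    # Toy 8-token vocabulary. In low-entropy settings (e.g., code) one logit
    # dominates, so the bias rarely changes the argmax, which is the weakness
    # noted in the abstract.
    print(bias_logits([0.1] * 8, prev_token_id=3))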
- Intrinsic Fingerprint of LLMs: Continue Training is NOT All You Need to Steal A Model! (arxiv.org, 2026-04-27T04:00:00)
Score: 14.78
arXiv:2507.03014v2 Announce Type: replace
Abstract: Large language models (LLMs) face significant copyright and intellectual property challenges as the cost of training increases and model reuse becomes prevalent. While watermarking techniques have been proposed to protect model ownership, they may not be robust to continue training and development, posing serious threats to model attribution and copyright protection. This work introduces a simple yet effective approach for robust LLM fingerpri…
- AgentBound: Securing Execution Boundaries of AI Agents (arxiv.org, 2026-04-27T04:00:00)
Score: 14.78
arXiv:2510.21236v3 Announce Type: replace
Abstract: Large Language Models (LLMs) have evolved into AI agents that interact with external tools and environments to perform complex tasks. The Model Context Protocol (MCP) has become the de facto standard for connecting agents with such resources, but security has lagged behind: thousands of MCP servers execute with unrestricted access to host systems, creating a broad attack surface. In this paper, we introduce AgentBound, the first access control…
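The abstract is truncated before AgentBound's design, but the general pattern it points at, an explicit deny-by-default policy between an agent's tool calls and the host, is easy to sketch. A hypothetical per-server policy gate (not AgentBound's actual mechanism):

```python
import fnmatch

# Hypothetical declarative policy for one MCP-style tool server:
# which tools the agent may call and which filesystem paths they may touch.
POLICY = {
    "allowed_tools": {"read_file", "list_dir"},
    "allowed_paths": ["/workspace/*"],
}

def authorize(tool: str, path: str, policy: dict = POLICY) -> bool:
    """Deny by default; allow only calls that match the declared policy."""
    if tool not in policy["allowed_tools"]:
        return False
    return any(fnmatch.fnmatch(path, pat) for pat in policy["allowed_paths"])

def call_tool(tool: str, path: str) -> str:
    if not authorize(tool, path):
        raise PermissionError(f"blocked: {tool} on {path}")
    return f"{tool} ran against {path}"  # placeholder for the real handler

if __name__ == "__main__":
    print(call_tool("read_file", "/workspace/notes.md"))  # allowed
    try:
        call_tool("read_file", "/etc/passwd")             # blocked
    except PermissionError as e:
        print(e)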
- SecureVibeBench: Benchmarking Secure Vibe Coding of AI Agents via Reconstructing Vulnerability-Introducing Scenarios (arxiv.org, 2026-04-27T04:00:00)
Score: 14.78
arXiv:2509.22097v4 Announce Type: replace-cross
Abstract: Large language model-powered code agents are rapidly transforming software engineering, yet the security risks of their generated code have become a critical concern. Existing benchmarks have provided valuable insights, but they fail to capture scenarios in which vulnerabilities are actually introduced by human developers, making fair comparisons between humans and agents infeasible. We therefore introduce SecureVibeBench, a benchmark of…
- Project Glasswing and the Next Challenge for Defenders: Turning Faster Discovery into Faster Action (www.rapid7.com, 2026-04-20T16:20:32)
Score: 14.436
Anthropic’s Project Glasswing has sparked plenty of discussion about what AI might soon do for vulnerability discovery, but the more useful question for most security teams is how to prepare for, and more importantly seize the opportunity of, what comes next. As we wrote in our earlier blog, What Project Glasswing Means for Security Leaders, AI is becoming more capable of finding software flaws. The pressure that follows lands on the teams responsible for deciding what matters, validating risk…
- ToolSimulator: scalable tool testing for AI agents (aws.amazon.com, 2026-04-20T17:06:26)
Score: 14.143
You can use ToolSimulator, an LLM-powered tool simulation framework within Strands Evals, to thoroughly and safely test AI agents that rely on external tools, at scale. Instead of risking live API calls that expose personally identifiable information (PII) or trigger unintended actions, or settling for static mocks that break with multi-turn workflows, you can use ToolSimulator's LLM-powered simulations to validate your agents. Available today as part of the Strands Eva…
- Metasploit Wrap-Up 04/25/2026 (www.rapid7.com, 2026-04-24T20:17:56)
Score: 12.928
Check Method Visibility: Metasploit has supported check methods for many years now. It’s not always desirable to jump straight into exploiting a vulnerability; often you first want to determine whether the target is vulnerable. Metasploit is very conservative about classifying a target as “vulnerable”, doing so only when the vulnerability is actually leveraged as part of the check method and reserving the “appears” status for version checks. The different check codes a module is capable of returning and the logic to select amon…
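The distinction described above (a hard "vulnerable" verdict only when the check exercises the flaw, "appears" for version-based inference) maps to a simple decision rule. A Python sketch of that classification logic, offered for illustration, not Metasploit's actual Ruby implementation:

```python
from enum import Enum

class CheckCode(Enum):
    """Simplified analogue of exploit check results."""
    VULNERABLE = "target was proven vulnerable by exercising the flaw"
    APPEARS = "version or banner suggests the target is vulnerable"
    DETECTED = "service is present but vulnerability state is unknown"
    SAFE = "target is not vulnerable"

def classify(exploited_in_check: bool, version_matches: bool, service_up: bool) -> CheckCode:
    """Conservative ordering: only a successful proof yields VULNERABLE."""
    if exploited_in_check:
        return CheckCode.VULNERABLE
    if version_matches:
        return CheckCode.APPEARS
    if service_up:
        return CheckCode.DETECTED
    return CheckCode.SAFE

if __name__ == "__main__":
    # A version match alone is never reported as a confirmed "vulnerable".
    print(classify(exploited_in_check=False, version_matches=True, service_up=True))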
- Behavioral Canaries: Auditing Private Retrieved Context Usage in RL Fine-Tuning (arxiv.org, 2026-04-27T04:00:00)
Score: 12.48
arXiv:2604.22191v1 Announce Type: new
Abstract: In agentic workflows, LLMs frequently process retrieved contexts that are legally protected from further training. However, auditors currently lack a reliable way to verify if a provider has violated the terms of service by incorporating these data into post-training, especially through Reinforcement Learning (RL). While standard auditing relies on verbatim memorization and membership inference, these methods are ineffective for RL-trained models…
- FixV2W: Correcting Invalid CVE-CWE Mappings with Knowledge Graph Embeddings (arxiv.org, 2026-04-27T04:00:00)
Score: 11.48
arXiv:2604.22176v1 Announce Type: new
Abstract: Accurate mapping between Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE) entries is critical for effective vulnerability management and risk assessment. However, public databases, such as the National Vulnerability Database (NVD), suffer from inconsistent and incomplete CVE to CWE mappings, complicating automated analysis and remediation. We introduce FixV2W, a lightweight approach that leverages knowledge graph embeddings…
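The abstract stops before the method details, but the general shape of an embedding-based correction step can be sketched: embed CVEs and CWEs in a shared vector space and re-rank candidate CWEs for a CVE by similarity. A toy numpy illustration with made-up vectors and identifiers (not FixV2W's actual model):

```python
import numpy as np

# Toy 3-dimensional embeddings; a real system would learn these from a
# knowledge graph of CVE, CWE, product, and reference nodes.
cve_embeddings = {"CVE-EXAMPLE-0001": np.array([0.9, 0.1, 0.0])}
cwe_embeddings = {
    "CWE-79 (XSS)":           np.array([0.8, 0.2, 0.1]),
    "CWE-89 (SQL injection)": np.array([0.1, 0.9, 0.0]),
    "CWE-287 (auth)":         np.array([0.0, 0.1, 0.9]),
}

def rank_cwes(cve_id: str):
    """Rank candidate CWEs for a CVE by cosine similarity of embeddings."""
    v = cve_embeddings[cve_id]
    scores = {}
    for cwe, w in cwe_embeddings.items():
        scores[cwe] = float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    # An invalid or missing NVD mapping would be replaced by the top-ranked CWE.
    for cwe, score in rank_cwes("CVE-EXAMPLE-0001"):
        print(f"{cwe}: {score:.3f}")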
- Company-wise memory in Amazon Bedrock with Amazon Neptune and Mem0 (aws.amazon.com, 2026-04-22T15:56:24)
Score: 11.308
Company-wise memory in Amazon Bedrock, powered by Amazon Neptune and Mem0, provides AI agents with persistent, company-specific context, enabling them to learn, adapt, and respond intelligently across multiple interactions. Trend Micro, one of the largest antivirus software companies in the world, developed the Trend Companion chatbot so that its customers can explore information through natural, conversational interactions.
- From developer desks to the whole organization: Running Claude Cowork in Amazon Bedrock (aws.amazon.com, 2026-04-21T19:13:49)
Score: 11.103
Today, we’re excited to announce Claude Cowork in Amazon Bedrock. You can now run Cowork and Claude Code Desktop through Amazon Bedrock, directly or using an LLM gateway. In this post, we walk through how Claude Cowork integrates with Amazon Bedrock and show an example of how knowledge workers use it in practice.
- AI is Changing Vulnerability Discovery and your Software Supply Chain Strategy has to Change with it (www.rapid7.com, 2026-04-23T13:25:47)
Score: 10.621
Wade Woolwine is Senior Director, Product Security at Rapid7. The headlines around Glasswing have focused on how quickly AI can surface vulnerabilities, which has naturally caught the attention of security leaders. In my conversations with teams and customers, the more useful discussion has been about what that speed means in practice for business protection, especially across open source risk, dependency choices, and software supply chain resilience. The deeper issue for security leaders sits e…
- From Bulk Export to AI-ready Security Workflows: Introducing Rapid7’s Open-Source MCP Server and Agent Skill (www.rapid7.com, 2026-04-21T13:58:29)
Score: 10.55
Security teams want more from their data than APIs and one-off reports. They want to ask better questions, move faster, and bring security context into the workflows they are already building. That’s especially true as more organizations experiment with private AI assistants, internal copilots, and LLM-powered automation. Part of this experimentation is, of course, attempting to lower the pressure on teams that have to figure out how to prioritize the sheer number of actionable vulnerabilities e…
- Metasploit Wrap-Up 04/17/2026 (www.rapid7.com, 2026-04-17T20:35:42)
Score: 10.264
Happy Friday – Seven New Metasploit Modules. We’re happy to announce that Metasploit Framework had a big week, landing seven new modules alongside various bug fixes and enhancements. This week’s highlights include RCE modules targeting AVideo, openDCIM, Selenium Grid/Selenoid, and ChurchCRM. On the post-exploitation side, Windows saw three new persistence techniques added as modules, targeting Telemetry scheduled tasks, PowerShell profiles, and Microsoft BITS. What a time to be alive as a Metaspl…
- The German Cyber Criminal Überfall: Shifts in Europe's Data Leak Landscape (cloud.google.com, 2026-04-15T14:00:00)
Score: 9.622
Written by: Jamie Collier, Robin Grunewald. Germany has reclaimed its position as a primary focus for cyber extortion in Europe. While data leak site (DLS) posts rose almost 50% globally in 2025, Google Threat Intelligence (GTI) data shows that the surge is hitting German infrastructure harder and faster than its regional neighbors, marking a significant return to the high-pressure levels previously observed in the country during 2022 and 2023. Cyber Criminals Pivoting Back to Germany: Germany mov…
- Snow Flurries: How UNC6692 Employed Social Engineering to Deploy a Custom Malware Suite (cloud.google.com, 2026-04-23T14:00:00)
Score: 9.527
Written by: JP Glab, Tufail Ahmed, Josh Kelley, Muhammad Umair. Introduction: Google Threat Intelligence Group (GTIG) identified a multistage intrusion campaign by a newly tracked threat group, UNC6692, that leveraged persistent social engineering, a custom modular malware suite, and deft pivoting inside the victim’s environment to achieve deep network penetration. As with many other intrusions in recent years, UNC6692 relied heavily on impersonating IT helpdesk employees, convincing their victim…
- Who Audits the Auditor? Tamper-Proof Fraud Detection with Blockchain-Anchored Explainable ML (arxiv.org, 2026-04-27T04:00:00)
Score: 9.48
arXiv:2604.22096v1 Announce Type: new
Abstract: In enterprise fraud detection, model accuracy alone is insufficient when insiders can tamper with audit logs or bypass approval workflows. Real-world incidents show that fraud often persists not because detection algorithms fail, but because the audit trail itself is controllable by privileged operators. This exposes a fundamental trust gap: *who audits the auditor?*
We present a tamper-evident fraud detection system that anchors both ML predict…
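The anchoring idea, committing each prediction and audit record to an append-only, tamper-evident structure that privileged insiders cannot silently rewrite, can be sketched with a simple hash chain (a stand-in for the paper's blockchain anchoring; the record fields below are illustrative):

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash the record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": record_hash(record, prev)})

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev = "genesis"
    for entry in log:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

if __name__ == "__main__":
    log = []
    append(log, {"txn": "T-1001", "score": 0.92, "flagged": True})
    append(log, {"txn": "T-1002", "score": 0.03, "flagged": False})
    print(verify(log))                      # True
    log[0]["record"]["flagged"] = False     # insider tampers with an entry
    print(verify(log))                      # False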
- Resource-Aware Layered Intrusion Detection Allocation Model (arxiv.org, 2026-04-27T04:00:00)
Score: 9.48
arXiv:2604.22304v1 Announce Type: new
Abstract: This paper proposes a resource-aware allocation model for layered intrusion detection in heterogeneous networks. Monitoring traffic at higher protocol layers improves the ability to detect sophisticated attacks, but it also increases computational and storage costs. The problem is formulated as an integer linear program that assigns a single monitoring depth, ranging from Ethernet to the application layer, to each device, while accounting for de…
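The formulation described above, pick exactly one monitoring depth per device while respecting resource limits, is a small assignment ILP. A minimal sketch using the PuLP solver with made-up values and a single aggregate budget (the paper's actual objective, costs, and constraints are not given in the snippet; requires the pulp package):

```python
import pulp

devices = ["sensor-1", "gateway-1", "server-1"]
layers = ["ethernet", "network", "transport", "application"]

# Hypothetical detection value and resource cost for each monitoring depth.
value = dict(zip(layers, [1, 2, 3, 5]))
cost = dict(zip(layers, [1, 2, 4, 8]))
BUDGET = 12

prob = pulp.LpProblem("layered_monitoring_allocation", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", [(d, l) for d in devices for l in layers], cat="Binary")

# Maximize total detection value.
prob += pulp.lpSum(value[l] * x[(d, l)] for d in devices for l in layers)
# Exactly one monitoring depth per device.
for d in devices:
    prob += pulp.lpSum(x[(d, l)] for l in layers) == 1
# Aggregate resource budget across all devices.
prob += pulp.lpSum(cost[l] * x[(d, l)] for d in devices for l in layers) <= BUDGET

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for d in devices:
    chosen = next(l for l in layers if x[(d, l)].value() > 0.5)
    print(f"{d}: monitor up to the {chosen} layer")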
- Adversarial Co-Evolution of Malware and Detection Models: A Bilevel Optimization Perspective (arxiv.org, 2026-04-27T04:00:00)
Score: 9.48
arXiv:2604.22569v1 Announce Type: new
Abstract: Machine learning-based malware detectors are increasingly vulnerable to adversarial examples. Traditional defenses, such as one-shot adversarial training, often fail against adaptive attackers who use reinforcement learning to bypass detection. This paper proposes a robust defense framework based on bilevel optimization, explicitly modeling the strategic interaction between a defender and an attacker as an adversarial co-evolutionary process. We e…
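Not the paper's algorithm, but the co-evolutionary loop it describes (attacker adapts to the current detector, defender retrains against the attacker's successful evasions) can be sketched with a toy detector and a random-search attacker (requires numpy and scikit-learn):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic feature vectors: benign (label 0) and malicious (label 1) clusters.
X_benign = rng.normal(loc=0.0, scale=0.5, size=(200, 2))
X_mal = rng.normal(loc=2.0, scale=0.5, size=(200, 2))
X = np.vstack([X_benign, X_mal])
y = np.array([0] * 200 + [1] * 200)

def attack(clf, samples, budget=1.0, tries=50):
    """Inner step (approximated by random search): perturb each malicious
    sample within an L-infinity budget, keeping variants the model calls benign."""
    evasive = []
    for x in samples:
        for _ in range(tries):
            x_adv = x + rng.uniform(-budget, budget, size=x.shape)
            if clf.predict(x_adv.reshape(1, -1))[0] == 0:
                evasive.append(x_adv)
                break
    return np.array(evasive)

clf = LogisticRegression().fit(X, y)
X_aug, y_aug = X.copy(), y.copy()
for rnd in range(3):
    # Outer step: retrain the defender on data augmented with evasive variants.
    evasive = attack(clf, X_mal)
    print(f"round {rnd}: evasion rate = {len(evasive) / len(X_mal):.2f}")
    if len(evasive) == 0:
        break
    X_aug = np.vstack([X_aug, evasive])
    y_aug = np.concatenate([y_aug, np.ones(len(evasive), dtype=int)])
    clf = LogisticRegression().fit(X_aug, y_aug)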
- Detecting Concept Drift in Evolving Malware Families Using Rule-Based Classifier Representations (arxiv.org, 2026-04-27T04:00:00)
Score: 9.48
arXiv:2604.22629v1 Announce Type: new
Abstract: This work proposes a structural approach to concept drift detection in malware classification using decision tree rulesets. Classifiers are trained across temporal windows on the EMBER2024 dataset, and drift is quantified by comparing extracted rule representations using feature importance, prediction agreement, activation stability, and coverage metrics. These metrics are correlated with both accuracy degradation and data distribution shift as co…
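One of the signals listed above, prediction agreement between models trained on different time windows, is straightforward to compute. A toy sketch on synthetic data (EMBER2024 features and the paper's rule-extraction metrics are not reproduced here; requires numpy and scikit-learn):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def make_window(shift: float, n: int = 500):
    """Synthetic malware/benign window; `shift` moves the malicious cluster
    to imitate an evolving family."""
    X0 = rng.normal(0.0, 1.0, size=(n, 4))
    X1 = rng.normal(2.0 + shift, 1.0, size=(n, 4))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Train one tree per temporal window.
X_a, y_a = make_window(shift=0.0)
X_b, y_b = make_window(shift=1.5)
tree_a = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_a, y_a)
tree_b = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_b, y_b)

# Prediction agreement on a common reference set: low agreement suggests the
# decision structure (and thus the underlying data) has drifted between windows.
X_ref, _ = make_window(shift=0.75)
agreement = float(np.mean(tree_a.predict(X_ref) == tree_b.predict(X_ref)))
print(f"prediction agreement between windows: {agreement:.2f}")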
Auto-generated 2026-04-27
