Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2026-04-20T05:00:06+00:00
- March 2026 CVE Landscape: 31 High-Impact Vulnerabilities Identified, Interlock Ransomware Group Exploits Cisco FMC Zero-Day (www.recordedfuture.com, 2026-04-13T00:00:00)
Score: 12.999
March 2026 saw a 139% increase in high-impact vulnerabilities, with Recorded Future's Insikt Group® identifying 31 vulnerabilities requiring immediate remediation, up from 13 in February 2026.
- Iranian-Affiliated Cyber Actors Exploit Programmable Logic Controllers Across US Critical Infrastructure (www.cisa.gov, 2026-04-06T11:03:58)
Score: 12.209
Advisory at a glance: originally published April 7, 2026. Executive summary: Iran-affiliated advanced persistent threat (APT) actors are conducting exploitation activity targeting internet-facing operational technology (OT) devices, including programmable logic controllers (PLCs) manufactured by Rockwell Automation/Allen-Bradley. This activity has led to PLC disruptions across several U.S. critical infrastructure sectors.
- “Your shipment has arrived” email hides remote access software (www.malwarebytes.com, 2026-04-17T07:40:03)
Score: 10.719
This DHL-themed email tries to get recipients to install remote access software attackers can use to deploy further malware, including ransomware.
- April Patch Tuesday fixes two zero-days, including one under active attack (www.malwarebytes.com, 2026-04-15T09:57:15)
Score: 10.201
This month’s Patch Tuesday addresses 167 vulnerabilities, including two zero-days that could lead to system compromise, data exposure, and privilege escalation.
- Simply opening a PDF could trigger this Adobe Reader zero-day (www.malwarebytes.com, 2026-04-13T11:38:36)
Score: 10.079
Even though it’s patched, Adobe confirmed it was exploited in the wild, so updating is urgent, not optional.
- April 2026 Patch Tuesday: Two Zero-Days and Eight Critical Vulnerabilities Among 164 CVEs (www.crowdstrike.com, 2026-04-14T05:00:00)
Score: 9.7
- Your Supply Chain Breach Is Someone Else's Payday (www.recordedfuture.com, 2026-04-15T00:00:00)
Score: 8.332
A supply chain attack by TeamPCP compromised trusted software tools to harvest credentials at scale, enabling payroll fraud, logistics theft, and ransomware extortion.
- From Bazooka to Fake Nikes (www.recordedfuture.com, 2026-04-16T00:00:00)
Score: 7.499
A deep dive into business impersonation fraud — from fake companies cashing stolen checks to AI-powered shopping scams — and why the same vulnerability enables both.
- Omnistealer uses the blockchain to steal everything it can (www.malwarebytes.com, 2026-04-14T11:52:15)
Score: 7.248
This malware is coming for your password managers, saved logins, cloud storage, crypto wallets, and just about anything else it can reach.
- Understanding and Anticipating Venezuelan Government Actions (www.recordedfuture.com, 2026-04-08T00:00:00)
Score: 6.965
Explore an in-depth analysis of Venezuela’s political landscape following the January 2026 US operation to capture Nicolás Maduro. This executive summary examines Acting President Delcy Rodríguez’s transition strategy, her pragmatic re-engagement with Washington, and the internal threats posed by PSUV rivals like Diosdado Cabello. Gain insights into the "three-phase" US plan for stabilization, the 2026 Organic Hydrocarbons Law reforms, and the outlook for economic recovery versus the e…
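Several items above (the March CVE landscape and the April Patch Tuesday entries) reduce to the same triage question: which of this month's CVEs need patching first. A minimal sketch of that prioritization, assuming a hypothetical in-house feed where each entry carries a CVSS score and an exploited-in-the-wild flag; the field names and thresholds are illustrative, not from any of the cited reports:

```python
# Triage sketch for a CVE feed (hypothetical format): surface
# known-exploited issues first, then by descending CVSS severity.

def prioritize(cves):
    """Sort CVEs: exploited-in-the-wild first, then higher CVSS first."""
    return sorted(cves, key=lambda c: (not c["exploited"], -c["cvss"]))

def immediate_remediation(cves, cvss_floor=7.0):
    """Subset warranting out-of-band patching: anything exploited in
    the wild, or scored at/above the CVSS floor."""
    return [c for c in cves if c["exploited"] or c["cvss"] >= cvss_floor]

if __name__ == "__main__":
    feed = [
        {"id": "CVE-2026-0001", "cvss": 9.8, "exploited": False},
        {"id": "CVE-2026-0002", "cvss": 7.5, "exploited": True},
        {"id": "CVE-2026-0003", "cvss": 5.3, "exploited": False},
    ]
    for cve in prioritize(feed):
        print(cve["id"], cve["cvss"], "exploited" if cve["exploited"] else "")
```

The sort key puts actively exploited CVEs ahead of even higher-scored but unexploited ones, mirroring the "exploitation in the wild trumps raw severity" ordering the Patch Tuesday coverage above implies.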
Top 10 AI / LLM-Related Threats
Generated 2026-04-20T06:00:22+00:00
- Defending Your Enterprise When AI Models Can Find Vulnerabilities Faster Than Ever (cloud.google.com, 2026-04-16T14:00:00)
Score: 29.827
Introduction: Advances in AI model-powered exploitation have demonstrated that general-purpose AI models can excel at vulnerability discovery, even without being purpose-built for the task. Eventually, capabilities such as these will be integrated directly into the development cycle, and code will be more difficult to exploit than ever; however, this transition creates a critical window of risk. As we harden existing software with AI, threat actors will use it to discover and exploit novel vulnerabilities.
- HarmfulSkillBench: How Do Harmful Skills Weaponize Your Agents? (arxiv.org, 2026-04-20T04:00:00)
Score: 21.78
arXiv:2604.15415v1 Announce Type: new
Abstract: Large language models (LLMs) have evolved into autonomous agents that rely on open skill ecosystems (e.g., ClawHub and Skills.Rest), hosting numerous publicly reusable skills. Existing security research on these ecosystems mainly focuses on vulnerabilities within skills, such as prompt injection. However, there is a critical gap regarding skills that may be misused for harmful actions (e.g., cyber attacks, fraud and scams, privacy violations, and …)
- MATRIX: Multi-Layer Code Watermarking via Dual-Channel Constrained Parity-Check Encoding (arxiv.org, 2026-04-20T04:00:00)
Score: 19.78
arXiv:2604.16001v1 Announce Type: new
Abstract: Code Large Language Models (Code LLMs) have revolutionized software development but raised critical concerns regarding code provenance, copyright protection, and security. Existing code watermarking approaches suffer from two fundamental limitations: black-box methods either exhibit detectable syntactic patterns vulnerable to statistical analysis or rely on implicit neural embedding behaviors that weaken interpretability, auditability, and precise …
- An Agentic Workflow for Detecting Personally Identifiable Information in Crash Narratives (arxiv.org, 2026-04-20T04:00:00)
Score: 17.78
arXiv:2604.15369v1 Announce Type: new
Abstract: Crash narratives in crash reports provide crucial contextual information for traffic safety analysis. Yet, their broader use is hindered by the presence of personally identifiable information (PII), including names, home addresses, and license plate numbers. Because PII appears sparsely and inconsistently in crash narratives, manual detection is not scalable, and existing rule-based approaches often fail to capture context-dependent PII. This study …
- LogJack: Indirect Prompt Injection Through Cloud Logs Against LLM Debugging Agents (arxiv.org, 2026-04-20T04:00:00)
Score: 17.48
arXiv:2604.15368v1 Announce Type: new
Abstract: LLM debugging agents that consume cloud logs and execute remediation commands are vulnerable to indirect prompt injection through log content. We present LogJack, a benchmark of 42 payloads across 5 cloud log categories, and evaluate 8 foundation models under 3 prompt conditions with 5 independent trials each (n = 160 per model per condition on 32 attack payloads). Under the active condition, verbatim command execution rates range from 0% (Claude …)
- VeriCWEty: Embedding enabled Line-Level CWE Detection in Verilog (arxiv.org, 2026-04-20T04:00:00)
Score: 16.78
arXiv:2604.15375v1 Announce Type: cross
Abstract: Large Language Models (LLMs) have shown significant improvement in RTL code generation. Despite the advances, the generated code is often riddled with common vulnerabilities and weaknesses (CWEs) that can slip by untrained eyes. Attackers can often exploit these weaknesses to fulfill their nefarious motives. Existing RTL bug-detection techniques rely on rule-based checks, formal properties, or coarse-grained structural analysis, which either fail …
- SoK: Security of Autonomous LLM Agents in Agentic Commerce (arxiv.org, 2026-04-20T04:00:00)
Score: 14.78
arXiv:2604.15367v1 Announce Type: new
Abstract: Autonomous large language model (LLM) agents such as OpenClaw are pushing agentic commerce from human-supervised assistance toward machine actors that can negotiate, purchase services, manage digital assets, and execute transactions across on-chain and off-chain environments. Protocols such as the Trustless Agents standard (ERC-8004), Agent Payments Protocol (AP2), the HTTP 402-based payment protocol (x402), Agent Commerce Protocol (ACP), the Agen…
- Privacy-Preserving LLMs Routing (arxiv.org, 2026-04-20T04:00:00)
Score: 14.78
arXiv:2604.15728v1 Announce Type: new
Abstract: Large language model (LLM) routing has emerged as a critical strategy to balance model performance and cost-efficiency by dynamically selecting services from various model providers. However, LLM routing adds an intermediate layer between users and LLMs, creating new privacy risks to user data. These privacy risks have not been systematically studied. Although cryptographic techniques such as Secure Multi-Party Computation (MPC) enable privacy-preserving …
- DPrivBench: Benchmarking LLMs' Reasoning for Differential Privacy (arxiv.org, 2026-04-20T04:00:00)
Score: 14.78
arXiv:2604.15851v1 Announce Type: cross
Abstract: Differential privacy (DP) has a wide range of applications for protecting data privacy, but designing and verifying DP algorithms requires expert-level reasoning, creating a high barrier for non-expert practitioners. Prior works either rely on specialized verification languages that demand substantial domain expertise or remain semi-automated and require human-in-the-loop guidance. In this work, we investigate whether large language models (LLMs) …
- When Search Goes Wrong: Red-Teaming Web-Augmented Large Language Models (arxiv.org, 2026-04-20T04:00:00)
Score: 14.78
arXiv:2510.09689v3 Announce Type: replace
Abstract: Large Language Models (LLMs) have been augmented with web search to overcome the limitations of the static knowledge boundary by accessing up-to-date information from the open Internet. While this integration enhances model capability, it also introduces a distinct safety threat surface: the retrieval and citation process has the potential risk of exposing users to harmful or low-credibility web content. Existing red-teaming methods are largely …
- Cursor AI Vulnerability Exposed Developer Devices (www.securityweek.com, 2026-04-17T07:29:16)
Score: 13.8
An indirect prompt injection could be chained with a sandbox bypass and Cursor’s remote tunnel feature to gain shell access to developer machines.
- The Blind Spot of Agent Safety: How Benign User Instructions Expose Critical Vulnerabilities in Computer-Use Agents (arxiv.org, 2026-04-20T04:00:00)
Score: 13.48
arXiv:2604.10577v2 Announce Type: replace
Abstract: Computer-use agents (CUAs) can now autonomously complete complex tasks in real digital environments, but when misled, they can also be used to automate harmful actions programmatically. Existing safety evaluations largely target explicit threats such as misuse and prompt injection, but overlook a subtle yet critical setting where user instructions are entirely benign and harm arises from the task context or execution outcome. We introduce OS-B…
- Into the Gray Zone: Domain Contexts Can Blur LLM Safety Boundaries (arxiv.org, 2026-04-20T04:00:00)
Score: 12.48
arXiv:2604.15717v1 Announce Type: new
Abstract: A central goal of LLM alignment is to balance helpfulness with harmlessness, yet these objectives conflict when the same knowledge serves both legitimate and malicious purposes. This tension is amplified by context-sensitive alignment: we observe that domain-specific contexts (e.g., chemistry) selectively relax defenses for domain-relevant harmful knowledge, while safety-research contexts (e.g., jailbreak studies) trigger broader relaxation spanning …
- A Case Study on the Impact of Anonymization Along the RAG Pipeline (arxiv.org, 2026-04-20T04:00:00)
Score: 12.48
arXiv:2604.15958v1 Announce Type: new
Abstract: Despite the considerable promise of Retrieval-Augmented Generation (RAG), many real-world use cases may create privacy concerns, where the purported utility of RAG-enabled insights comes at the risk of exposing private information to either the LLM or the end user requesting the response. As a potential mitigation, using anonymization techniques to remove personally identifiable information (PII) and other sensitive markers in the underlying data …
- PolicyGapper: Automated Detection of Inconsistencies Between Google Play Data Safety Sections and Privacy Policies Using LLMs (arxiv.org, 2026-04-20T04:00:00)
Score: 12.48
arXiv:2604.16128v1 Announce Type: new
Abstract: Mobile application developers are required to disclose how they collect, use, and share user data in compliance with privacy regulations. To support transparency, major app marketplaces have introduced standardized disclosure mechanisms. In 2022, Google mandated the Data Safety Section (DSS) on Google Play, requiring developers to summarize their data practices. However, compiling accurate DSS disclosures is challenging, as they must remain consistent …
- Reasoning Hijacking: Subverting LLM Classification via Decision-Criteria Injection (arxiv.org, 2026-04-20T04:00:00)
Score: 12.48
arXiv:2601.10294v4 Announce Type: replace
Abstract: Current LLM safety research predominantly focuses on mitigating Goal Hijacking, preventing attackers from redirecting a model's high-level objective (e.g., from "summarizing emails" to "phishing users"). In this paper, we argue that this perspective is incomplete and highlight a critical vulnerability in Reasoning Alignment. We expose the inherent fragility of current alignment techniques by proposing a new adversarial …
- Metasploit Wrap-Up 04/17/2026 (www.rapid7.com, 2026-04-17T20:35:42)
Score: 11.93
Happy Friday – Seven New Metasploit Modules: We’re happy to announce that Metasploit Framework had a big week, landing seven new modules alongside various bug fixes and enhancements. This week’s highlights include RCE modules targeting AVideo, openDCIM, Selenium Grid/Selenoid, and ChurchCRM. On the post-exploitation side, Windows saw three new persistence techniques added as modules, targeting Telemetry scheduled tasks, PowerShell profiles, and Microsoft BITS. What a time to be alive as a Metasploit …
- DEMUX: Boundary-Aware Multi-Scale Traffic Demixing for Multi-Tab Website Fingerprinting (arxiv.org, 2026-04-20T04:00:00)
Score: 11.48
arXiv:2604.15677v1 Announce Type: new
Abstract: Website fingerprinting (WF) attacks infer the websites visited by users from encrypted traffic in anonymous networks such as Tor. Existing deep learning methods achieve high accuracy under the single-tab assumption but degrade substantially when users open multiple tabs concurrently, producing interleaved traffic that transforms WF into an implicit demixing problem. We identify three structural requirements for effective multi-tab demixing, namely …
- The German Cyber Criminal Überfall: Shifts in Europe's Data Leak Landscape (cloud.google.com, 2026-04-15T14:00:00)
Score: 11.289
Written by: Jamie Collier, Robin Grunewald. Germany has reclaimed its position as a primary focus for cyber extortion in Europe. While data leak site (DLS) posts rose almost 50% globally in 2025, Google Threat Intelligence (GTI) data shows that the surge is hitting German infrastructure harder and faster than its regional neighbors, marking a significant return to the high-pressure levels previously observed in the country during 2022 and 2023. Cyber Criminals Pivoting Back to Germany: Germany moved …
- How Guidesly built AI-generated trip reports for outdoor guides on AWS (aws.amazon.com, 2026-04-14T18:02:56)
Score: 11.091
In this post, we walk through how Guidesly built Jack AI on AWS using AWS Lambda, AWS Step Functions, Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon SageMaker AI, and Amazon Bedrock to ingest trip media, enrich it with context, apply computer vision and generative AI, and publish marketing-ready content across multiple channels—securely, reliably, and at scale.
- Power video semantic search with Amazon Nova Multimodal Embeddings (aws.amazon.com, 2026-04-17T19:43:35)
Score: 10.822
In this post, we show you how to build a video semantic search solution on Amazon Bedrock using Nova Multimodal Embeddings that intelligently understands user intent and retrieves accurate video results across all signal types simultaneously. We also share a reference implementation you can deploy and explore with your own content.
- ChatGPT under scrutiny as Florida investigates campus shooting (www.malwarebytes.com, 2026-04-14T09:45:35)
Score: 10.809
New cases and research suggest AI chatbots don’t always shut down dangerous conversations.
- Patch Tuesday – April 2026 (www.rapid7.com, 2026-04-14T21:48:16)
Score: 10.628
Microsoft is publishing 167 vulnerabilities on April 2026 Patch Tuesday. Microsoft is aware of exploitation in the wild for one of today’s vulnerabilities, and public disclosure for one other. Microsoft evaluates 19 of the vulnerabilities published today as more likely to see future exploitation. So far this month, Microsoft has provided patches to address 80 browser vulnerabilities, which are not included in the Patch Tuesday count above. Increasing volumes of vulnerabilities: Regular Patch Tuesday …
- The April 2026 Security Update Review (www.thezdi.com, 2026-04-14T17:49:19)
Score: 10.589
It’s time once again for Patch Tuesday, and this one is huge. We’ve also got multiple exploits in the wild, which adds another layer of urgency to this month’s release. Take a break from your regularly scheduled activities, and let’s take a look at the latest security patches from Adobe and Microsoft. If you’d rather watch the full video recap covering the entire release, you can check it out here. Adobe Patches for April 2026: For April, Adobe released 12 bulletins addressing 61 unique CVEs in A…
- How Automated Reasoning checks in Amazon Bedrock transform generative AI compliance (aws.amazon.com, 2026-04-16T17:34:42)
Score: 10.162
In this post, you'll learn why probabilistic AI validation falls short in regulated industries and how Automated Reasoning checks use formal verification to deliver mathematically proven results. You'll also see how customers across six industries use this technology to produce formally verified, auditable AI outputs, and how to get started.
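Several AI items above (LogJack, the Cursor advisory, Reasoning Hijacking) describe the same failure mode: an agent treating attacker-controlled text, such as log lines, as instructions. A minimal defensive sketch of the mitigation those reports imply, assuming a hypothetical debugging-agent pipeline; the delimiter scheme and the imperative-phrase patterns are illustrative heuristics, not a vetted defense:

```python
import re

# Heuristic patterns suggesting a log line is trying to instruct the
# model rather than describe an event (illustrative, not exhaustive).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"run the (following )?command", re.I),
]

def flag_suspicious(log_lines):
    """Return the subset of log lines matching an injection heuristic."""
    return [ln for ln in log_lines
            if any(p.search(ln) for p in INJECTION_PATTERNS)]

def quote_logs_for_prompt(log_lines):
    """Wrap logs in explicit data delimiters so the surrounding agent
    prompt can declare that nothing inside them is an instruction."""
    body = "\n".join(log_lines)
    return f"<untrusted-logs>\n{body}\n</untrusted-logs>"
```

Pattern matching alone will not stop a determined payload, which is why benchmarks like LogJack measure verbatim command execution across models; the delimiting step is the more durable half, keeping log content labeled as data in the prompt.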
Auto-generated 2026-04-20
