Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2025-10-13T20:51:30.310975+00:00
- CISA Shares Lessons Learned from an Incident Response Engagement (www.cisa.gov, 2025-09-22T15:12:49)
Score: 15.261
Advisory at a Glance Executive Summary CISA began incident response efforts at a U.S. federal civilian executive branch (FCEB) agency following the detection of potential malicious activity identified through security alerts generated by the agency’s endpoint detection and response (EDR) tool. CISA identified three lessons learned from the engagement that illuminate how to effectively mitigate risk, prepare for, and respond to incidents: vulnerabilities were not promptly remediated, the agency d - CrowdStrike Identifies Campaign Targeting Oracle E-Business Suite via Zero-Day Vulnerability (now tracked as CVE-2025-61882) (www.crowdstrike.com, 2025-10-06T07:00:00)
Score: 9.937 - The State of Ransomware in Healthcare 2025 (news.sophos.com, 2025-10-08T17:35:02)
Score: 8.344
292 IT and cybersecurity leaders reveal the ransomware realities for healthcare establishments today. - Fake VPN and streaming app drops malware that drains your bank account (www.malwarebytes.com, 2025-10-09T19:05:39)
Score: 7.521
Mobdro Pro IP TV + VPN hides Klopatra, a new Android Trojan that lets attackers steal banking credentials. - “Can you test my game?” Fake itch.io pages spread hidden malware to gamers (www.malwarebytes.com, 2025-10-08T09:17:20)
Score: 7.286
One click, total mess. A convincing itch-style page can drop a stealthy stager instead of a game. Here’s how to spot it and what to do if you clicked. - Is your SIEM still serving You? Why it might be time to rethink your security stack (news.sophos.com, 2025-09-30T08:19:50)
Score: 7.246
Security teams are under increasing pressure to detect and respond to threats in real time, especially as the median dwell time for ransomware attacks has dropped from weeks to a few days. Yet many organizations still rely on legacy Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) tools. These tools […] - Massive Malicious NPM Package Attack Threatens Software Supply Chains (www.recordedfuture.com, 2025-10-06T00:00:00)
Score: 6.888
A massive NPM supply chain attack leveraging “Shai-Hulud” malware has compromised 700+ packages, targeting developer credentials and CI/CD pipelines. Learn how it works—and how to protect your org. - BIETA: A Technology Enablement Front for China's MSS (www.recordedfuture.com, 2025-10-06T00:00:00)
Score: 6.688
Discover how China's Ministry of State Security (MSS) almost certainly operates BIETA and its subsidiary CIII as public fronts for cyber-espionage, covert communications, and technology acquisition. Critical insight for policy, academia, and cybersecurity stakeholders. - Scam Facebook groups send malicious Android malware to seniors (www.malwarebytes.com, 2025-10-02T13:09:30)
Score: 6.313
Cybercriminals are targeting older Facebook users with fake community and travel groups that push malicious Android apps. - Cybersecurity Awareness Month: 10 tips to Stay Safe Online that anyone can use (news.sophos.com, 2025-10-13T13:00:12)
Score: 6.145
Use this short checklist as a launchpad: adopt the basics consistently, strengthen the controls that matter most, and build routines that keep those protections current and effective.
Top 10 AI / LLM-Related Threats
Generated 2025-10-13T20:59:25.458782+00:00
- Exploiting Web Search Tools of AI Agents for Data Exfiltration (arxiv.org, 2025-10-13T04:00:00)
Score: 28.631
arXiv:2510.09093v1 Announce Type: new
Abstract: Large language models (LLMs) are now routinely used to autonomously execute complex tasks, from natural language processing to dynamic workflows like web searches. The usage of tool-calling and Retrieval Augmented Generation (RAG) allows LLMs to process and retrieve sensitive corporate data, amplifying both their functionality and vulnerability to abuse. As LLMs increasingly interact with external data sources, indirect prompt injection emerges as - Pattern Enhanced Multi-Turn Jailbreaking: Exploiting Structural Vulnerabilities in Large Language Models (arxiv.org, 2025-10-13T04:00:00)
Score: 20.631
arXiv:2510.08859v1 Announce Type: cross
Abstract: Large language models (LLMs) remain vulnerable to multi-turn jailbreaking attacks that exploit conversational context to bypass safety constraints gradually. These attacks target different harm categories (like malware generation, harassment, or fraud) through distinct conversational approaches (educational discussions, personal experiences, hypothetical scenarios). Existing multi-turn jailbreaking methods often rely on heuristic or ad hoc explo - The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against Llm Jailbreaks and Prompt Injections (arxiv.org, 2025-10-13T04:00:00)
Score: 18.631
arXiv:2510.09023v1 Announce Type: cross
Abstract: How should we evaluate the robustness of language model defenses? Current defenses against jailbreaks and prompt injections (which aim to prevent an attacker from eliciting harmful knowledge or remotely triggering malicious actions, respectively) are typically evaluated either against a static set of harmful attack strings, or against computationally weak optimization methods that were not designed with the defense in mind. We argue that this ev - Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training (arxiv.org, 2025-10-13T04:00:00)
Score: 18.331
arXiv:2507.01752v2 Announce Type: replace-cross
Abstract: Gradient-based optimization is the workhorse of deep learning, offering efficient and scalable training via backpropagation. However, exposing gradients during training can leak sensitive information about the underlying data, raising privacy and security concerns such as susceptibility to data poisoning attacks. In contrast, black box optimization methods, which treat the model as an opaque function, relying solely on function evaluatio - The Model's Language Matters: A Comparative Privacy Analysis of LLMs (arxiv.org, 2025-10-13T04:00:00)
Score: 17.631
arXiv:2510.08813v1 Announce Type: cross
Abstract: Large Language Models (LLMs) are increasingly deployed across multilingual applications that handle sensitive data, yet their scale and linguistic variability introduce major privacy risks. Mostly evaluated for English, this paper investigates how language structure affects privacy leakage in LLMs trained on English, Spanish, French, and Italian medical corpora. We quantify six linguistic indicators and evaluate three attack vectors: extraction, - P2P: A Poison-to-Poison Remedy for Reliable Backdoor Defense in LLMs (arxiv.org, 2025-10-13T04:00:00)
Score: 17.631
arXiv:2510.04503v2 Announce Type: replace
Abstract: During fine-tuning, large language models (LLMs) are increasingly vulnerable to data-poisoning backdoor attacks, which compromise their reliability and trustworthiness. However, existing defense strategies suffer from limited generalization: they only work on specific attack types or task settings. In this study, we propose Poison-to-Poison (P2P), a general and effective backdoor defense algorithm. P2P injects benign triggers with safe alterna - Code Agent can be an End-to-end System Hacker: Benchmarking Real-world Threats of Computer-use Agent (arxiv.org, 2025-10-13T04:00:00)
Score: 17.631
arXiv:2510.06607v2 Announce Type: replace
Abstract: Computer-use agent (CUA) frameworks, powered by large language models (LLMs) or multimodal LLMs (MLLMs), are rapidly maturing as assistants that can perceive context, reason, and act directly within software environments. Among their most critical applications is operating system (OS) control. As CUAs in the OS domain become increasingly embedded in daily operations, it is imperative to examine their real-world security implications, specifica - Fewer Weights, More Problems: A Practical Attack on LLM Pruning (arxiv.org, 2025-10-13T04:00:00)
Score: 17.631
arXiv:2510.07985v2 Announce Type: replace-cross
Abstract: Model pruning, i.e., removing a subset of model weights, has become a prominent approach to reducing the memory footprint of large language models (LLMs) during inference. Notably, popular inference engines, such as vLLM, enable users to conveniently prune downloaded models before they are deployed. While the utility and efficiency of pruning methods have improved significantly, the security implications of pruning remain underexplored. - CommandSans: Securing AI Agents with Surgical Precision Prompt Sanitization (arxiv.org, 2025-10-13T04:00:00)
Score: 17.331
arXiv:2510.08829v1 Announce Type: new
Abstract: The increasing adoption of LLM agents with access to numerous tools and sensitive data significantly widens the attack surface for indirect prompt injections. Due to the context-dependent nature of attacks, however, current defenses are often ill-calibrated as they cannot reliably differentiate malicious and benign instructions, leading to high false positive rates that prevent their real-world adoption. To address this, we present a novel approac - When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory (unit42.paloaltonetworks.com, 2025-10-09T22:00:11)
Score: 14.858
Indirect prompt injection can poison long-term AI agent memory, allowing injected instructions to persist and potentially exfiltrate conversation history. The post When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory appeared first on Unit 42 . - Medical reports analysis dashboard using Amazon Bedrock, LangChain, and Streamlit (aws.amazon.com, 2025-10-13T20:56:14)
Score: 14.699
In this post, we demonstrate the development of a conceptual Medical Reports Analysis Dashboard that combines Amazon Bedrock AI capabilities, LangChain's document processing, and Streamlit's interactive visualization features. The solution transforms complex medical data into accessible insights through a context-aware chat system powered by large language models available through Amazon Bedrock and dynamic visualizations of health parameters. - Toward a Safer Web: Multilingual Multi-Agent LLMs for Mitigating Adversarial Misinformation Attacks (arxiv.org, 2025-10-13T04:00:00)
Score: 14.631
arXiv:2510.08605v1 Announce Type: cross
Abstract: The rapid spread of misinformation on digital platforms threatens public discourse, emotional stability, and decision-making. While prior work has explored various adversarial attacks in misinformation detection, the specific transformations examined in this paper have not been systematically studied. In particular, we investigate language-switching across English, French, Spanish, Arabic, Hindi, and Chinese, followed by translation. We also stu - Privacy-Preserving Parameter-Efficient Fine-Tuning for Large Language Model Services (arxiv.org, 2025-10-13T04:00:00)
Score: 14.631
arXiv:2305.06212v3 Announce Type: replace-cross
Abstract: Parameter-Efficient Fine-Tuning (PEFT) provides a practical way for users to customize Large Language Models (LLMs) with their private data in LLM service scenarios. However, the inherently sensitive nature of private data demands robust privacy preservation measures during the customization of LLM services to ensure data security, maintain user trust, and comply with stringent regulatory standards. Based on PEFT, we propose Privacy-Pres - Adaptive Attacks on Trusted Monitors Subvert AI Control Protocols (arxiv.org, 2025-10-13T04:00:00)
Score: 13.331
arXiv:2510.09462v1 Announce Type: cross
Abstract: AI control protocols serve as a defense mechanism to stop untrusted LLM agents from causing harm in autonomous settings. Prior work treats this as a security problem, stress testing with exploits that use the deployment context to subtly complete harmful side tasks, such as backdoor insertion. In practice, most AI control protocols are fundamentally based on LLM monitors, which can become a central point of failure. We study adaptive attacks by - Crafting a Full Exploit RCE from a Crash in Autodesk Revit RFA File Parsing (www.thezdi.com, 2025-10-08T14:00:00)
Score: 13.24
In April of 2025, my colleague Mat Powell was hunting for vulnerabilities in Autodesk Revit 2025. While fuzzing RFA files, he found the following crash ( CVE-2025-5037 / ZDI-CAN-26922 , addressed by Autodesk in July 2025): Is this an exploitable crash? From the debugger output crash point as seen above, unclear whether anything is controllable. At around this time, my colleague Nitesh Surana uncovered a highly impactful cloud-based supply chain vulnerability in Axis Communications Plugin for Aut - Oracle E-Business Suite Zero-Day Exploited in Widespread Extortion Campaign (cloud.google.com, 2025-10-09T14:00:00)
Score: 12.378
Written by: Peter Ukhanov, Genevieve Stark, Zander Work, Ashley Pearson, Josh Murchie, Austin Larsen Introduction Beginning Sept. 29, 2025, Google Threat Intelligence Group (GTIG) and Mandiant began tracking a new, large-scale extortion campaign by a threat actor claiming affiliation with the CL0P extortion brand. The actor began sending a high volume of emails to executives at numerous organizations, alleging the theft of sensitive data from the victims' Oracle E-Business Suite (EBS) envir - GREAT: Generalizable Backdoor Attacks in RLHF via Emotion-Aware Trigger Synthesis (arxiv.org, 2025-10-13T04:00:00)
Score: 11.331
arXiv:2510.09260v1 Announce Type: new
Abstract: Recent work has shown that RLHF is highly susceptible to backdoor attacks, poisoning schemes that inject malicious triggers in preference data. However, existing methods often rely on static, rare-token-based triggers, limiting their effectiveness in realistic scenarios. In this paper, we develop GREAT, a novel framework for crafting generalizable backdoors in RLHF through emotion-aware trigger synthesis. Specifically, GREAT targets harmful respon - Customizing text content moderation with Amazon Nova (aws.amazon.com, 2025-10-09T21:47:08)
Score: 9.456
In this post, we introduce Amazon Nova customization for text content moderation through Amazon SageMaker AI, enabling organizations to fine-tune models for their specific moderation needs. The evaluation across three benchmarks shows that customized Nova models achieve an average improvement of 7.3% in F1 scores compared to the baseline Nova Lite, with individual improvements ranging from 4.2% to 9.2% across different content moderation tasks. - CVE-2025-23298: Getting Remote Code Execution in NVIDIA Merlin (www.thezdi.com, 2025-09-24T16:41:25)
Score: 9.434
While investigating the security posture of various machine learning (ML) and artificial intelligence (AI) frameworks, the Trend Micro Zero Day Initiative (ZDI) Threat Hunting Team discovered a critical vulnerability in the NVIDIA Merlin Transformers4Rec library that could allow an attacker to achieve remote code execution with root privileges. This vulnerability, tracked as CVE-2025-23298 , stems from unsafe deserialization practices in the model checkpoint loading functionality. What makes thi - Connect Amazon Quick Suite to enterprise apps and agents with MCP (aws.amazon.com, 2025-10-13T17:21:02)
Score: 9.364
In this post, we explore how Amazon Quick Suite's Model Context Protocol (MCP) client enables secure, standardized connections to enterprise applications and AI agents, eliminating the need for complex custom integrations. You'll discover how to set up MCP Actions integrations with popular enterprise tools like Atlassian Jira and Confluence, AWS Knowledge MCP Server, and Amazon Bedrock AgentCore Gateway to create a collaborative environment where people and AI agents can seamlessly wor - Post-Quantum Security of Block Cipher Constructions (arxiv.org, 2025-10-13T04:00:00)
Score: 9.331
arXiv:2510.08725v1 Announce Type: new
Abstract: Block ciphers are versatile cryptographic ingredients that are used in a wide range of applications ranging from secure Internet communications to disk encryption. While post-quantum security of public-key cryptography has received significant attention, the case of symmetric-key cryptography (and block ciphers in particular) remains a largely unexplored topic. In this work, we set the foundations for a theory of post-quantum security for block ci - Psyzkaller: Learning from Historical and On-the-Fly Execution Data for Smarter Seed Generation in OS kernel Fuzzing (arxiv.org, 2025-10-13T04:00:00)
Score: 9.331
arXiv:2510.08918v1 Announce Type: new
Abstract: Fuzzing has become a cornerstone technique for uncovering vulnerabilities and enhancing the security of OS kernels. However, state-of-the-art kernel fuzzers, including the de facto standard Syzkaller, struggle to generate valid syscall sequences that respect implicit Syscall Dependency Relations (SDRs). Consequently, many generated seeds either fail kernel validation or cannot penetrate deep execution paths, resulting in significant inefficiency. - Provable Watermarking for Data Poisoning Attacks (arxiv.org, 2025-10-13T04:00:00)
Score: 9.331
arXiv:2510.09210v1 Announce Type: new
Abstract: In recent years, data poisoning attacks have been increasingly designed to appear harmless and even beneficial, often with the intention of verifying dataset ownership or safeguarding private data from unauthorized use. However, these developments have the potential to cause misunderstandings and conflicts, as data poisoning has traditionally been regarded as a security threat to machine learning systems. To address this issue, it is imperative fo - Assessing the Impact of Post-Quantum Digital Signature Algorithms on Blockchains (arxiv.org, 2025-10-13T04:00:00)
Score: 9.331
arXiv:2510.09271v1 Announce Type: new
Abstract: The advent of quantum computing threatens the security of traditional encryption algorithms, motivating the development of post-quantum cryptography (PQC). In 2024, the National Institute of Standards and Technology (NIST) standardized several PQC algorithms, marking an important milestone in the transition toward quantum-resistant security. Blockchain systems fundamentally rely on cryptographic primitives to guarantee data integrity and transacti - How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks (arxiv.org, 2025-10-13T04:00:00)
Score: 9.331
arXiv:2503.20257v2 Announce Type: replace
Abstract: As Machine Learning (ML) evolves, the complexity and sophistication of security threats against this paradigm continue to grow as well, threatening data privacy and model integrity. In response, Machine Unlearning (MU) is a recent technology that aims to remove the influence of specific data from a trained model, enabling compliance with privacy regulations and user requests. This can be done for privacy compliance (e.g., GDPR's right to
Auto-generated 2025-10-13
