Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2025-10-27T05:00:05.750252+00:00
- September 2025 CVE Landscape (www.recordedfuture.com, 2025-10-17T00:00:00)
Score: 10.299
Discover the top 16 exploited vulnerabilities from September 2025, including critical Cisco and TP-Link flaws, malware-linked CVEs, and actionable threat intelligence from Recorded Future’s Insikt Group. - October 2025 Patch Tuesday: Two Publicly Disclosed, Three Zero-Days, and Eight Critical Vulnerabilities Among 172 CVEs (www.crowdstrike.com, 2025-10-14T05:00:00)
Score: 8.533 - Ransomware Reality: Business Confidence Is High, Preparedness Is Low (www.crowdstrike.com, 2025-10-21T05:00:00)
Score: 8.2 - From Domain User to SYSTEM: Analyzing the NTLM LDAP Authentication Bypass Vulnerability (CVE-2025-54918) (www.crowdstrike.com, 2025-10-22T05:00:00)
Score: 7.367 - F5 network compromised (news.sophos.com, 2025-10-15T17:08:27)
Score: 6.584
On October 15, 2025, F5 reported that a nation-state threat actor had gained long-term access to some F5 systems and exfiltrated data, including source code and information about undisclosed product vulnerabilities. This information may enable threat actors to compromise F5 devices by developing exploits for these vulnerabilities. The UK National Cyber Security Centre also notes […] - Under the engineering hood: Why Malwarebytes chose WordPress as its CMS (www.malwarebytes.com, 2025-10-17T08:10:10)
Score: 6.555
It might surprise some that a security company would choose WordPress as the backbone of its digital content operations. Here's what we considered when choosing it. - How Falcon Exposure Management’s ExPRT.AI Predicts What Attackers Will Exploit (www.crowdstrike.com, 2025-10-17T05:00:00)
Score: 6.533 - Falcon Defends Against Git Vulnerability CVE-2025-48384 (www.crowdstrike.com, 2025-10-16T07:00:00)
Score: 6.381 - Is AI moving faster than its safety net? (www.malwarebytes.com, 2025-10-24T13:35:51)
Score: 5.76
From agentic browsers to chat assistants, the same tools built to help us can also expose us. - Locking it down: A new technique to prevent LLM jailbreaks (news.sophos.com, 2025-10-24T10:00:12)
Score: 5.735
Following on from our preview, here’s the full rundown on LLM salting: a novel countermeasure against LLM jailbreaks, developed by AI researchers at Sophos X-Ops
Top 10 AI / LLM-Related Threats
Generated 2025-10-27T06:00:15.195776+00:00
- Soft Instruction De-escalation Defense (arxiv.org, 2025-10-27T04:00:00)
Score: 20.78
arXiv:2510.21057v1 Announce Type: new
Abstract: Large Language Models (LLMs) are increasingly deployed in agentic systems that interact with an external environment; this makes them susceptible to prompt injections when dealing with untrusted data. To overcome this limitation, we propose SIC (Soft Instruction Control)-a simple yet effective iterative prompt sanitization loop designed for tool-augmented LLM agents. Our method repeatedly inspects incoming data for instructions that could compromi - The Trojan Example: Jailbreaking LLMs through Template Filling and Unsafety Reasoning (arxiv.org, 2025-10-27T04:00:00)
Score: 20.78
arXiv:2510.21190v1 Announce Type: new
Abstract: Large Language Models (LLMs) have advanced rapidly and now encode extensive world knowledge. Despite safety fine-tuning, however, they remain susceptible to adversarial prompts that elicit harmful content. Existing jailbreak techniques fall into two categories: white-box methods (e.g., gradient-based approaches such as GCG), which require model internals and are infeasible for closed-source APIs, and black-box methods that rely on attacker LLMs to - SBASH: a Framework for Designing and Evaluating RAG vs. Prompt-Tuned LLM Honeypots (arxiv.org, 2025-10-27T04:00:00)
Score: 20.78
arXiv:2510.21459v1 Announce Type: new
Abstract: Honeypots are decoy systems used for gathering valuable threat intelligence or diverting attackers away from production systems. Maximising attacker engagement is essential to their utility. However research has highlighted that context-awareness, such as the ability to respond to new attack types, systems and attacker agents, is necessary to increase engagement. Large Language Models (LLMs) have been shown as one approach to increase context awar - LLMs can hide text in other text of the same length (arxiv.org, 2025-10-27T04:00:00)
Score: 20.28
arXiv:2510.20075v2 Announce Type: replace-cross
Abstract: A meaningful text can be hidden inside another, completely different yet still coherent and plausible, text of the same length. For example, a tweet containing a harsh political critique could be embedded in a tweet that celebrates the same political leader, or an ordinary product review could conceal a secret manuscript. This uncanny state of affairs is now possible thanks to Large Language Models, and in this paper we present a simple - DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents (arxiv.org, 2025-10-27T04:00:00)
Score: 18.78
arXiv:2506.12104v2 Announce Type: replace
Abstract: Large Language Models (LLMs) are increasingly central to agentic systems due to their strong reasoning and planning capabilities. By interacting with external environments through predefined tools, these agents can carry out complex user tasks. Nonetheless, this interaction also introduces the risk of prompt injection attacks, where malicious inputs from external sources can mislead the agent's behavior, potentially resulting in economic - Security Logs to ATT&CK Insights: Leveraging LLMs for High-Level Threat Understanding and Cognitive Trait Inference (arxiv.org, 2025-10-27T04:00:00)
Score: 17.78
arXiv:2510.20930v1 Announce Type: new
Abstract: Understanding adversarial behavior in cybersecurity has traditionally relied on high-level intelligence reports and manual interpretation of attack chains. However, real-time defense requires the ability to infer attacker intent and cognitive strategy directly from low-level system telemetry such as intrusion detection system (IDS) logs. In this paper, we propose a novel framework that leverages large language models (LLMs) to analyze Suricata IDS - Adjacent Words, Divergent Intents: Jailbreaking Large Language Models via Task Concurrency (arxiv.org, 2025-10-27T04:00:00)
Score: 17.78
arXiv:2510.21189v1 Announce Type: new
Abstract: Despite their superior performance on a wide range of domains, large language models (LLMs) remain vulnerable to misuse for generating harmful content, a risk that has been further amplified by various jailbreak attacks. Existing jailbreak attacks mainly follow sequential logic, where LLMs understand and answer each given task one by one. However, concurrency, a natural extension of the sequential scenario, has been largely overlooked. In this wor - Enhanced MLLM Black-Box Jailbreaking Attacks and Defenses (arxiv.org, 2025-10-27T04:00:00)
Score: 17.78
arXiv:2510.21214v1 Announce Type: new
Abstract: Multimodal large language models (MLLMs) comprise of both visual and textual modalities to process vision language tasks. However, MLLMs are vulnerable to security-related issues, such as jailbreak attacks that alter the model's input to induce unauthorized or harmful responses. The incorporation of the additional visual modality introduces new dimensions to security threats. In this paper, we proposed a black-box jailbreak method via both te - Virus Infection Attack on LLMs: Your Poisoning Can Spread "VIA" Synthetic Data (arxiv.org, 2025-10-27T04:00:00)
Score: 17.78
arXiv:2509.23041v2 Announce Type: replace
Abstract: Synthetic data refers to artificial samples generated by models. While it has been validated to significantly enhance the performance of large language models (LLMs) during training and has been widely adopted in LLM development, potential security risks it may introduce remain uninvestigated. This paper systematically evaluates the resilience of synthetic-data-integrated training paradigm for LLMs against mainstream poisoning and backdoor att - Fundamental Limitations in Pointwise Defences of LLM Finetuning APIs (arxiv.org, 2025-10-27T04:00:00)
Score: 17.28
arXiv:2502.14828v2 Announce Type: replace-cross
Abstract: LLM developers have imposed technical interventions to prevent fine-tuning misuse attacks, attacks where adversaries evade safeguards by fine-tuning the model using a public API. Previous work has established several successful attacks against specific fine-tuning API defences. In this work, we show that defences of fine-tuning APIs that seek to detect individual harmful training or inference samples ('pointwise' detection) are - Dynamic Target Attack (arxiv.org, 2025-10-27T04:00:00)
Score: 15.48
arXiv:2510.02422v2 Announce Type: replace
Abstract: Existing gradient-based jailbreak attacks typically optimize an adversarial suffix to induce a fixed affirmative response. However, this fixed target usually resides in an extremely low-density region of a safety-aligned LLM's output distribution conditioned on diverse harmful inputs. Due to the substantial discrepancy between the target and the original output, existing attacks require numerous iterations to optimize the adversarial prom - REx86: A Local Large Language Model for Assisting in x86 Assembly Reverse Engineering (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.20975v1 Announce Type: new
Abstract: Reverse engineering (RE) of x86 binaries is indispensable for malware and firmware analysis, but remains slow due to stripped metadata and adversarial obfuscation. Large Language Models (LLMs) offer potential for improving RE efficiency through automated comprehension and commenting, but cloud-hosted, closed-weight models pose privacy and security risks and cannot be used in closed-network facilities. We evaluate parameter-efficient fine-tuned loc - A Reinforcement Learning Framework for Robust and Secure LLM Watermarking (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.21053v1 Announce Type: new
Abstract: Watermarking has emerged as a promising solution for tracing and authenticating text generated by large language models (LLMs). A common approach to LLM watermarking is to construct a green/red token list and assign higher or lower generation probabilities to the corresponding tokens, respectively. However, most existing watermarking algorithms rely on heuristic green/red token list designs, as directly optimizing the list design with techniques s - Quantifying CBRN Risk in Frontier Models (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.21133v1 Announce Type: new
Abstract: Frontier Large Language Models (LLMs) pose unprecedented dual-use risks through the potential proliferation of chemical, biological, radiological, and nuclear (CBRN) weapons knowledge. We present the first comprehensive evaluation of 10 leading commercial LLMs against both a novel 200-prompt CBRN dataset and a 180-prompt subset of the FORTRESS benchmark, using a rigorous three-tier attack methodology. Our findings expose critical safety vulnerabil - Securing AI Agent Execution (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.21236v1 Announce Type: new
Abstract: Large Language Models (LLMs) have evolved into AI agents that interact with external tools and environments to perform complex tasks. The Model Context Protocol (MCP) has become the de facto standard for connecting agents with such resources, but security has lagged behind: thousands of MCP servers execute with unrestricted access to host systems, creating a broad attack surface. In this paper, we introduce AgentBound, the first access control fra - LLM-Powered Detection of Price Manipulation in DeFi (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.21272v1 Announce Type: new
Abstract: Decentralized Finance (DeFi) smart contracts manage billions of dollars, making them a prime target for exploits. Price manipulation vulnerabilities, often via flash loans, are a devastating class of attacks causing significant financial losses. Existing detection methods are limited. Reactive approaches analyze attacks only after they occur, while proactive static analysis tools rely on rigid, predefined heuristics, limiting adaptability. Both de - FLAMES: Fine-tuning LLMs to Synthesize Invariants for Smart Contract Security (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.21401v1 Announce Type: new
Abstract: Smart contract vulnerabilities cost billions of dollars annually, yet existing automated analysis tools fail to generate deployable defenses. We present FLAMES, a novel automated approach that synthesizes executable runtime guards as Solidity "require" statements to harden smart contracts against exploits. Unlike prior work that relies on vulnerability labels, symbolic analysis, or natural language specifications, FLAMES employs domain-a - Actionable Cybersecurity Notifications for Smart Homes: A User Study on the Role of Length and Complexity (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.21508v1 Announce Type: cross
Abstract: The proliferation of smart home devices has increased convenience but also introduced cybersecurity risks for everyday users, as many devices lack robust security features. Intrusion Detection Systems are a prominent approach to detecting cybersecurity threats. However, their alerts often use technical terms and require users to interpret them correctly, which is challenging for a typical smart home user. Large Language Models can bridge this ga - DeepTx: Real-Time Transaction Risk Analysis via Multi-Modal Features and LLM Reasoning (arxiv.org, 2025-10-27T04:00:00)
Score: 14.78
arXiv:2510.18438v2 Announce Type: replace
Abstract: Phishing attacks in Web3 ecosystems are increasingly sophisticated, exploiting deceptive contract logic, malicious frontend scripts, and token approval patterns. We present DeepTx, a real-time transaction analysis system that detects such threats before user confirmation. DeepTx simulates pending transactions, extracts behavior, context, and UI features, and uses multiple large language models (LLMs) to reason about transaction intent. A conse - Self-Jailbreaking: Language Models Can Reason Themselves Out of Safety Alignment After Benign Reasoning Training (arxiv.org, 2025-10-27T04:00:00)
Score: 13.78
arXiv:2510.20956v1 Announce Type: new
Abstract: We discover a novel and surprising phenomenon of unintentional misalignment in reasoning language models (RLMs), which we call self-jailbreaking. Specifically, after benign reasoning training on math or code domains, RLMs will use multiple strategies to circumvent their own safety guardrails. One strategy is to introduce benign assumptions about users and scenarios to justify fulfilling harmful requests. For instance, an RLM reasons that harmful r - DPRK Adopts EtherHiding: Nation-State Malware Hiding on Blockchains (cloud.google.com, 2025-10-16T14:00:00)
Score: 11.86
Written by: Blas Kojusner, Robert Wallace, Joseph Dobson Google Threat Intelligence Group (GTIG) has observed the North Korea (DPRK) threat actor UNC5342 using ‘EtherHiding’ to deliver malware and facilitate cryptocurrency theft, the first time GTIG has observed a nation-state actor adopting this method. This post is part of a two-part blog series on adversaries using EtherHiding , a technique that leverages transactions on public blockchains to store and retrieve malicious payloads—notable for - FPT-Noise: Dynamic Scene-Aware Counterattack for Test-Time Adversarial Defense in Vision-Language Models (arxiv.org, 2025-10-27T04:00:00)
Score: 11.78
arXiv:2510.20856v1 Announce Type: new
Abstract: Vision-Language Models (VLMs), such as CLIP, have demonstrated remarkable zero-shot generalizability across diverse downstream tasks. However, recent studies have revealed that VLMs, including CLIP, are highly vulnerable to adversarial attacks, particularly on their visual modality. Traditional methods for improving adversarial robustness, such as adversarial training, involve extensive retraining and can be computationally expensive. In this pape - Revealing the True Indicators: Understanding and Improving IoC Extraction From Threat Reports (arxiv.org, 2025-10-27T04:00:00)
Score: 11.78
arXiv:2506.11325v2 Announce Type: replace
Abstract: Indicators of Compromise (IoCs) are critical for threat detection and response, marking malicious activity across networks and systems. Yet, the effectiveness of automated IoC extraction systems is fundamentally limited by one key issue: the lack of high-quality ground truth. Current extraction tools rely either on manually extracted ground truth, which is labor-intensive and costly, or on automated ground truth creation methods that include n - When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory (unit42.paloaltonetworks.com, 2025-10-09T22:00:11)
Score: 11.673
Indirect prompt injection can poison long-term AI agent memory, allowing injected instructions to persist and potentially exfiltrate conversation history. The post When AI Remembers Too Much – Persistent Behaviors in Agents’ Memory appeared first on Unit 42 . - Locking it down: A new technique to prevent LLM jailbreaks (news.sophos.com, 2025-10-24T10:00:12)
Score: 11.525
Following on from our preview, here’s the full rundown on LLM salting: a novel countermeasure against LLM jailbreaks, developed by AI researchers at Sophos X-Ops
Auto-generated 2025-10-27
