Categories Uncategorized

Weekly Threat Report 2025-11-03

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2025-11-03T05:00:05.471043+00:00

  1. BRONZE BUTLER exploits Japanese asset management software vulnerability (news.sophos.com, 2025-10-30T17:55:55)
    Score: 12.623
    The threat group targeted a LANSCOPE zero-day vulnerability (CVE-2025-61932)
  2. Windows Server Update Services (WSUS) vulnerability abused to harvest sensitive data (news.sophos.com, 2025-10-29T19:46:50)
    Score: 9.469
    Exploitation of CVE-2025-59287 began after public disclosure and the release of proof-of-concept code
  3. September 2025 CVE Landscape (www.recordedfuture.com, 2025-10-17T00:00:00)
    Score: 9.132
    Discover the top 16 exploited vulnerabilities from September 2025, including critical Cisco and TP-Link flaws, malware-linked CVEs, and actionable threat intelligence from Recorded Future’s Insikt Group.
  4. Ransomware gang claims Conduent breach: what you should watch for next [updated] (www.malwarebytes.com, 2025-10-30T15:16:18)
    Score: 8.605
    You could be one of more than 10 million people caught up in its recent data breach. Here's what to watch out for.
  5. How to Prevent Ransomware | Recorded Future (www.recordedfuture.com, 2025-10-24T00:00:00)
    Score: 7.499
    Learn to how to prevent ransomware attacks before they materialize with proactive threat intelligence
  6. Phishing scam uses fake death notices to trick LastPass users (www.malwarebytes.com, 2025-10-27T14:15:50)
    Score: 7.098
    LastPass is warning that phishers are exploiting the digital will feature to trick people into handing over their master passwords.
  7. Ransomware Reality: Business Confidence Is High, Preparedness Is Low (www.crowdstrike.com, 2025-10-21T05:00:00)
    Score: 7.033
  8. From Domain User to SYSTEM: Analyzing the NTLM LDAP Authentication Bypass Vulnerability (CVE-2025-54918) (www.crowdstrike.com, 2025-10-22T05:00:00)
    Score: 6.2
  9. Update Chrome now: 20 security fixes just landed (www.malwarebytes.com, 2025-10-31T11:33:48)
    Score: 5.746
    Google’s latest Chrome release fixes seven serious flaws that could let attackers run malicious code just by luring you to a compromised page.
  10. Phake phishing: Phundamental or pholly? (news.sophos.com, 2025-10-31T11:00:19)
    Score: 5.742
    Debates over the effectiveness of phishing simulations are widespread. Sophos X-Ops looks at the arguments for and against – and our own phishing philosophy

Top 10 AI / LLM-Related Threats

Generated 2025-11-03T06:00:14.434975+00:00

  1. LLM-based Multi-class Attack Analysis and Mitigation Framework in IoT/IIoT Networks (arxiv.org, 2025-11-03T05:00:00)
    Score: 23.29
    arXiv:2510.26941v1 Announce Type: new
    Abstract: The Internet of Things has expanded rapidly, transforming communication and operations across industries but also increasing the attack surface and security breaches. Artificial Intelligence plays a key role in securing IoT, enabling attack detection, attack behavior analysis, and mitigation suggestion. Despite advancements, evaluations remain purely qualitative, and the lack of a standardized, objective benchmark for quantitatively measuring AI-b
  2. Broken-Token: Filtering Obfuscated Prompts by Counting Characters-Per-Token (arxiv.org, 2025-11-03T05:00:00)
    Score: 20.79
    arXiv:2510.26847v1 Announce Type: new
    Abstract: Large Language Models (LLMs) are susceptible to jailbreak attacks where malicious prompts are disguised using ciphers and character-level encodings to bypass safety guardrails. While these guardrails often fail to interpret the encoded content, the underlying models can still process the harmful instructions. We introduce CPT-Filtering, a novel, model-agnostic with negligible-costs and near-perfect accuracy guardrail technique that aims to mitigat
  3. Adapting Large Language Models to Emerging Cybersecurity using Retrieval Augmented Generation (arxiv.org, 2025-11-03T05:00:00)
    Score: 20.79
    arXiv:2510.27080v1 Announce Type: new
    Abstract: Security applications are increasingly relying on large language models (LLMs) for cyber threat detection; however, their opaque reasoning often limits trust, particularly in decisions that require domain-specific cybersecurity knowledge. Because security threats evolve rapidly, LLMs must not only recall historical incidents but also adapt to emerging vulnerabilities and attack patterns. Retrieval-Augmented Generation (RAG) has demonstrated effect
  4. Prevalence of Security and Privacy Risk-Inducing Usage of AI-based Conversational Agents (arxiv.org, 2025-11-03T05:00:00)
    Score: 20.79
    arXiv:2510.27275v1 Announce Type: new
    Abstract: Recent improvement gains in large language models (LLMs) have lead to everyday usage of AI-based Conversational Agents (CAs). At the same time, LLMs are vulnerable to an array of threats, including jailbreaks and, for example, causing remote code execution when fed specific inputs. As a result, users may unintentionally introduce risks, for example, by uploading malicious files or disclosing sensitive information. However, the extent to which such
  5. Decoding Latent Attack Surfaces in LLMs: Prompt Injection via HTML in Web Summarization (arxiv.org, 2025-11-03T05:00:00)
    Score: 18.79
    arXiv:2509.05831v2 Announce Type: replace
    Abstract: Large Language Models (LLMs) are increasingly integrated into web-based systems for content summarization, yet their susceptibility to prompt injection attacks remains a pressing concern. In this study, we explore how non-visible HTML elements such as , aria-label, and alt attributes can be exploited to embed adversarial instructions without altering the visible content of a webpage. We introduce a novel dataset comprising 280 static web pages
  6. Layer of Truth: Probing Belief Shifts under Continual Pre-Training Poisoning (arxiv.org, 2025-11-03T05:00:00)
    Score: 17.79
    arXiv:2510.26829v1 Announce Type: cross
    Abstract: Large language models (LLMs) continually evolve through pre-training on ever-expanding web data, but this adaptive process also exposes them to subtle forms of misinformation. While prior work has explored data poisoning during static pre-training, the effects of such manipulations under continual pre-training remain largely unexplored. Drawing inspiration from the illusory truth effect in human cognition – where repeated exposure to falsehoods
  7. RepoMark: A Code Usage Auditing Framework for Code Large Language Models (arxiv.org, 2025-11-03T05:00:00)
    Score: 17.79
    arXiv:2508.21432v2 Announce Type: replace
    Abstract: The rapid development of Large Language Models (LLMs) for code generation has transformed software development by automating coding tasks with unprecedented efficiency.
    However, the training of these models on open-source code repositories (e.g., from GitHub) raises critical ethical and legal concerns, particularly regarding data authorization and open-source license compliance. Developers are increasingly questioning whether model trainers
  8. SmoothGuard: Defending Multimodal Large Language Models with Noise Perturbation and Clustering Aggregation (arxiv.org, 2025-11-03T05:00:00)
    Score: 16.79
    arXiv:2510.26830v1 Announce Type: cross
    Abstract: Multimodal large language models (MLLMs) have achieved impressive performance across diverse tasks by jointly reasoning over textual and visual inputs. Despite their success, these models remain highly vulnerable to adversarial manipulations, raising concerns about their safety and reliability in deployment. In this work, we first generalize an approach for generating adversarial images within the HuggingFace ecosystem and then introduce SmoothG
  9. Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels (arxiv.org, 2025-11-03T05:00:00)
    Score: 14.79
    arXiv:2510.27140v1 Announce Type: new
    Abstract: Large Language Models (LLMs) have transformed software development, enabling AI-powered applications known as LLM-based agents that promise to automate tasks across diverse apps and workflows. Yet, the security implications of deploying such agents in adversarial mobile environments remain poorly understood. In this paper, we present the first systematic study of security risks in mobile LLM agents. We design and evaluate a suite of adversarial ca
  10. Unvalidated Trust: Cross-Stage Vulnerabilities in Large Language Model Architectures (arxiv.org, 2025-11-03T05:00:00)
    Score: 14.79
    arXiv:2510.27190v1 Announce Type: new
    Abstract: As Large Language Models (LLMs) are increasingly integrated into automated, multi-stage pipelines, risk patterns that arise from unvalidated trust between processing stages become a practical concern. This paper presents a mechanism-centered taxonomy of 41 recurring risk patterns in commercial LLMs. The analysis shows that inputs are often interpreted non-neutrally and can trigger implementation-shaped responses or unintended state changes even wi
  11. LAFA: Agentic LLM-Driven Federated Analytics over Decentralized Data Sources (arxiv.org, 2025-11-03T05:00:00)
    Score: 14.79
    arXiv:2510.18477v2 Announce Type: cross
    Abstract: Large Language Models (LLMs) have shown great promise in automating data analytics tasks by interpreting natural language queries and generating multi-operation execution plans. However, existing LLM-agent-based analytics frameworks operate under the assumption of centralized data access, offering little to no privacy protection. In contrast, federated analytics (FA) enables privacy-preserving computation across distributed data sources, but lac
  12. On Selecting Few-Shot Examples for LLM-based Code Vulnerability Detection (arxiv.org, 2025-11-03T05:00:00)
    Score: 14.79
    arXiv:2510.27675v1 Announce Type: cross
    Abstract: Large language models (LLMs) have demonstrated impressive capabilities for many coding tasks, including summarization, translation, completion, and code generation. However, detecting code vulnerabilities remains a challenging task for LLMs. An effective way to improve LLM performance is in-context learning (ICL) – providing few-shot examples similar to the query, along with correct answers, can improve an LLM's ability to generate correct
  13. SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents (arxiv.org, 2025-11-03T05:00:00)
    Score: 14.79
    arXiv:2412.13178v5 Announce Type: replace
    Abstract: With the integration of large language models (LLMs), embodied agents have strong capabilities to understand and plan complicated natural language instructions. However, a foreseeable issue is that those embodied agents can also flawlessly execute some hazardous tasks, potentially causing damages in the real world. Existing benchmarks predominantly overlook critical safety risks, focusing solely on planning performance, while a few evaluate LL
  14. Crypto wasted: BlueNoroff’s ghost mirage of funding and jobs (securelist.com, 2025-10-28T03:00:11)
    Score: 12.042
    Kaspersky GReAT experts dive deep into the BlueNoroff APT's GhostCall and GhostHire campaigns. Extensive research detailing multiple malware chains targeting macOS, including a stealer suite, fake Zoom and Microsoft Teams clients and ChatGPT-enhanced images.
  15. AERO: Entropy-Guided Framework for Private LLM Inference (arxiv.org, 2025-11-03T05:00:00)
    Score: 11.79
    arXiv:2410.13060v3 Announce Type: replace-cross
    Abstract: Privacy-preserving computation enables language model inference directly on encrypted data yet suffers from prohibitive latency and communication overheads, primarily due to nonlinear functions. Removing nonlinearities, however, can trigger one of two failure modes restricting the potential for nonlinearity removal: entropy collapse in deeper layers, which destabilizes training, and entropic overload in early layers, causing under-utiliz
  16. Keys to the Kingdom: A Defender's Guide to Privileged Account Monitoring (cloud.google.com, 2025-10-28T14:00:00)
    Score: 11.551
    Written by: Bhavesh Dhake, Will Silverstone, Matthew Hitchcock, Aaron Fletcher The Criticality of Privileged Access in Today's Threat Landscape Privileged access stands as the most critical pathway for adversaries seeking to compromise sensitive systems and data. Its protection is not only a best practice, it is a fundamental imperative for organizational resilience. The increasing complexity of modern IT environments, exacerbated by rapid cloud migration, has led to a surge in both human a
  17. The 5 generative AI security threats you need to know about detailed in new e-book (www.microsoft.com, 2025-10-30T18:00:00)
    Score: 11.467
    In this blog post, we’ll highlight the key themes covered in the e-book, including the challenges organizations face, the top generative AI threats to organizations, and how companies can enhance their security posture to meet the dangers of today’s unpredictable AI environments. The post The 5 generative AI security threats you need to know about detailed in new e-book appeared first on Microsoft Security Blog .
  18. Introducing Amazon Bedrock cross-Region inference for Claude Sonnet 4.5 and Haiku 4.5 in Japan and Australia (aws.amazon.com, 2025-10-31T14:45:37)
    Score: 10.773
    こんにちは, G’day. The recent launch of Anthropic’s Claude Sonnet 4.5 and Claude Haiku 4.5, now available on Amazon Bedrock, marks a significant leap forward in generative AI models. These state-of-the-art models excel at complex agentic tasks, coding, and enterprise workloads, offering enhanced capabilities to developers. Along with the new models, we are thrilled to announce that […]
  19. DPRK Adopts EtherHiding: Nation-State Malware Hiding on Blockchains (cloud.google.com, 2025-10-16T14:00:00)
    Score: 10.194
    Written by: Blas Kojusner, Robert Wallace, Joseph Dobson Google Threat Intelligence Group (GTIG) has observed the North Korea (DPRK) threat actor UNC5342 using ‘EtherHiding’ to deliver malware and facilitate cryptocurrency theft, the first time GTIG has observed a nation-state actor adopting this method. This post is part of a two-part blog series on adversaries using EtherHiding , a technique that leverages transactions on public blockchains to store and retrieve malicious payloads—notable for
  20. Locking it down: A new technique to prevent LLM jailbreaks (news.sophos.com, 2025-10-24T10:00:12)
    Score: 9.859
    Following on from our preview, here’s the full rundown on LLM salting: a novel countermeasure against LLM jailbreaks, developed by AI researchers at Sophos X-Ops
  21. NSFW ChatGPT? OpenAI plans “grown-up mode” for verified adults (www.malwarebytes.com, 2025-10-28T11:39:22)
    Score: 9.828
    ChatGPT is about to get a whole lot more human. OpenAI will roll out a version that can flirt, joke, and even get steamy—with age checks in place.
  22. Build scalable creative solutions for product teams with Amazon Bedrock (aws.amazon.com, 2025-10-22T23:02:04)
    Score: 9.712
    In this post, we explore how product teams can leverage Amazon Bedrock and AWS services to transform their creative workflows through generative AI, enabling rapid content iteration across multiple formats while maintaining brand consistency and compliance. The solution demonstrates how teams can deploy a scalable generative AI application that accelerates everything from product descriptions and marketing copy to visual concepts and video content, significantly reducing time to market while enh
  23. Pwn2Own Ireland 2025 – Day Two Results (www.thezdi.com, 2025-10-22T10:19:24)
    Score: 9.686
    Welcome to Day Two of Pwn2Own Ireland 2025. Yesterday, we awarded $522,500 for 34 unique 0-day bugs. The Summoning Team took a slim lead in the Master of Pwn, but big changes could happen today as we have 19 more attempts today. We’ll be updating this blog with results as they come in, so refresh often! Day Two of Pwn2Own Ireland 2025 is complete! We saw some great work today, with the exploit of the Samsung Galaxy being the big highlight. So far, we have awarded $792,750 for 56 unique 0-days. T
  24. VISAT: Benchmarking Adversarial and Distribution Shift Robustness in Traffic Sign Recognition with Visual Attributes (arxiv.org, 2025-11-03T05:00:00)
    Score: 9.49
    arXiv:2510.26833v1 Announce Type: new
    Abstract: We present VISAT, a novel open dataset and benchmarking suite for evaluating model robustness in the task of traffic sign recognition with the presence of visual attributes. Built upon the Mapillary Traffic Sign Dataset (MTSD), our dataset introduces two benchmarks that respectively emphasize robustness against adversarial attacks and distribution shifts. For our adversarial attack benchmark, we employ the state-of-the-art Projected Gradient Desce
  25. SilhouetteTell: Practical Video Identification Leveraging Blurred Recordings of Video Subtitles (arxiv.org, 2025-11-03T05:00:00)
    Score: 9.49
    arXiv:2510.27179v1 Announce Type: cross
    Abstract: Video identification attacks pose a significant privacy threat that can reveal videos that victims watch, which may disclose their hobbies, religious beliefs, political leanings, sexual orientation, and health status. Also, video watching history can be used for user profiling or advertising and may result in cyberbullying, discrimination, or blackmail. Existing extensive video inference techniques usually depend on analyzing network traffic gen

Auto-generated 2025-11-03

Written By

More From Author

You May Also Like