Categories Uncategorized

Weekly Threat Report 2025-11-17

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2025-11-17T05:00:05.641427+00:00

  1. Samsung zero-day lets attackers take over your phone (www.malwarebytes.com, 2025-11-11T14:28:04)
    Score: 13.766
    A critical vulnerability that affects Samsung mobile devices was exploited in the wild to distribute LANDFALL spyware.
  2. October 2025 CVE Landscape (www.recordedfuture.com, 2025-11-06T00:00:00)
    Score: 12.632
    Discover the top 32 high-risk CVEs identified in October 2025 by Recorded Future’s Insikt Group, including active zero-day exploits, legacy system threats, and CL0P ransomware campaigns targeting Oracle EBS.
  3. Update now: November Patch Tuesday fixes Windows zero-day exploited in the wild (www.malwarebytes.com, 2025-11-12T11:53:39)
    Score: 10.415
    This month’s Windows update closes several major security holes, including one that’s already being used by attackers. Make sure your PC is up to date.
  4. How Malwarebytes stops the ransomware attack that most security software can’t see (www.malwarebytes.com, 2025-11-12T10:19:46)
    Score: 10.404
    Discover how Malwarebytes detects and blocks network-based ransomware attacks that bypass traditional ransomware protection.
  5. November 2025 Patch Tuesday: One Zero-Day and Five Critical Vulnerabilities Among 63 CVEs (www.crowdstrike.com, 2025-11-12T06:00:00)
    Score: 9.874
  6. Fake CAPTCHA sites now have tutorial videos to help victims install malware (www.malwarebytes.com, 2025-11-07T15:01:33)
    Score: 9.103
    ClickFix campaign pages now have embedded videos to helpfully walk users through the process of infecting their own systems.
  7. Be careful responding to unexpected job interviews (www.malwarebytes.com, 2025-11-14T16:30:38)
    Score: 7.78
    Contacted out of the blue for a virtual interview? Be cautious. Attackers are using fake interviews to slip malware onto your device.
  8. From Vulnerability Management to Exposure Management: The Platform Era Has Arrived (www.crowdstrike.com, 2025-11-13T06:00:00)
    Score: 7.54
  9. 5 ways to strengthen your firewall and endpoint’s defenses against ransomware (news.sophos.com, 2025-11-05T18:33:31)
    Score: 7.294
    Sophos Firewall uses intelligent TLS inspection and AI-powered analysts to reveal hidden threats — without compromising performance.
  10. Ransomware Detection With Real-Time Data (www.recordedfuture.com, 2025-11-04T00:00:00)
    Score: 6.999
    Learn why timely, relevant data is crucial for effective ransomware detection and what you can do to help prevent ransomware attacks and safeguard your organization.

Top 10 AI / LLM-Related Threats

Generated 2025-11-17T06:00:15.290139+00:00

  1. GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools (cloud.google.com, 2025-11-05T14:00:00)
    Score: 35.722
    Executive Summary Based on recent analysis of the broader threat landscape, Google Threat Intelligence Group (GTIG) has identified a shift that occurred within the last year: adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations . This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution. This report serves as an update to our January 2025
  2. SecInfer: Preventing Prompt Injection via Inference-time Scaling (arxiv.org, 2025-11-17T05:00:00)
    Score: 21.79
    arXiv:2509.24967v4 Announce Type: replace
    Abstract: Prompt injection attacks pose a pervasive threat to the security of Large Language Models (LLMs). State-of-the-art prevention-based defenses typically rely on fine-tuning an LLM to enhance its security, but they achieve limited effectiveness against strong attacks. In this work, we propose \emph{SecInfer}, a novel defense against prompt injection attacks built on \emph{inference-time scaling}, an emerging paradigm that boosts LLM capability by
  3. Privacy Challenges and Solutions in Retrieval-Augmented Generation-Enhanced LLMs for Healthcare Chatbots: A Review of Applications, Risks, and Future Directions (arxiv.org, 2025-11-17T05:00:00)
    Score: 20.79
    arXiv:2511.11347v1 Announce Type: new
    Abstract: Retrieval-augmented generation (RAG) has rapidly emerged as a transformative approach for integrating large language models into clinical and biomedical workflows. However, privacy risks, such as protected health information (PHI) exposure, remain inconsistently mitigated. This review provides a thorough analysis of the current landscape of RAG applications in healthcare, including (i) sensitive data type across clinical scenarios, (ii) the associ
  4. Data Poisoning Vulnerabilities Across Healthcare AI Architectures: A Security Threat Analysis (arxiv.org, 2025-11-17T05:00:00)
    Score: 19.79
    arXiv:2511.11020v1 Announce Type: new
    Abstract: Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight attack scenarios in four categories: architectural attacks on convolutional neural networks, large language models, and reinforcement learning agents; infrastructure attacks exploiting federated learning and medical documentation systems; critical resource allocation attacks affecting organ transplan
  5. PATCHEVAL: A New Benchmark for Evaluating LLMs on Patching Real-World Vulnerabilities (arxiv.org, 2025-11-17T05:00:00)
    Score: 17.79
    arXiv:2511.11019v1 Announce Type: new
    Abstract: Software vulnerabilities are increasing at an alarming rate. However, manual patching is both time-consuming and resource-intensive, while existing automated vulnerability repair (AVR) techniques remain limited in effectiveness. Recent advances in large language models (LLMs) have opened a new paradigm for AVR, demonstrating remarkable progress. To examine the capability of LLMs in AVR, several vulnerability benchmarks have been proposed recently.
  6. SEAL: Subspace-Anchored Watermarks for LLM Ownership (arxiv.org, 2025-11-17T05:00:00)
    Score: 17.79
    arXiv:2511.11356v1 Announce Type: new
    Abstract: Large language models (LLMs) have achieved remarkable success across a wide range of natural language processing tasks, demonstrating human-level performance in text generation, reasoning, and question answering. However, training such models requires substantial computational resources, large curated datasets, and sophisticated alignment procedures. As a result, they constitute highly valuable intellectual property (IP) assets that warrant robust
  7. Interpretable LLM Guardrails via Sparse Representation Steering (arxiv.org, 2025-11-17T05:00:00)
    Score: 17.79
    arXiv:2503.16851v2 Announce Type: replace
    Abstract: Large language models (LLMs) exhibit impressive capabilities in generation tasks but are prone to producing harmful, misleading, or biased content, posing significant ethical and safety concerns. To mitigate such risks, representation engineering, which steer model behavior toward desired attributes by injecting carefully designed steering vectors into LLM's representations at inference time, has emerged as a promising alternative to fine
  8. Automated Vulnerability Validation and Verification: A Large Language Model Approach (arxiv.org, 2025-11-17T05:00:00)
    Score: 17.79
    arXiv:2509.24037v2 Announce Type: replace
    Abstract: Software vulnerabilities remain a critical security challenge, providing entry points for attackers into enterprise networks. Despite advances in security practices, the lack of high-quality datasets capturing diverse exploit behavior limits effective vulnerability assessment and mitigation. This paper introduces an end-to-end multi-step pipeline leveraging generative AI, specifically large language models (LLMs), to address the challenges of
  9. Preparing for Threats to Come: Cybersecurity Forecast 2026 (cloud.google.com, 2025-11-04T14:00:00)
    Score: 16.684
    Every November, we make it our mission to equip organizations with the knowledge needed to stay ahead of threats we anticipate in the coming year. The Cybersecurity Forecast 2026 report, released today, provides comprehensive insights to help security leaders and teams prepare for those challenges. This report does not contain "crystal ball" predictions. Instead, our forecasts are built on real-world trends and data we are observing right now. The information contained in the report co
  10. PISanitizer: Preventing Prompt Injection to Long-Context LLMs via Prompt Sanitization (arxiv.org, 2025-11-17T05:00:00)
    Score: 16.49
    arXiv:2511.10720v1 Announce Type: new
    Abstract: Long context LLMs are vulnerable to prompt injection, where an attacker can inject an instruction in a long context to induce an LLM to generate an attacker-desired output. Existing prompt injection defenses are designed for short contexts. When extended to long-context scenarios, they have limited effectiveness. The reason is that an injected instruction constitutes only a very small portion of a long context, making the defense very challenging.
  11. Do Not Merge My Model! Safeguarding Open-Source LLMs Against Unauthorized Model Merging (arxiv.org, 2025-11-17T05:00:00)
    Score: 14.79
    arXiv:2511.10712v1 Announce Type: new
    Abstract: Model merging has emerged as an efficient technique for expanding large language models (LLMs) by integrating specialized expert models. However, it also introduces a new threat: model merging stealing, where free-riders exploit models through unauthorized model merging. Unfortunately, existing defense mechanisms fail to provide effective protection. Specifically, we identify three critical protection properties that existing methods fail to simul
  12. BadThink: Triggered Overthinking Attacks on Chain-of-Thought Reasoning in Large Language Models (arxiv.org, 2025-11-17T05:00:00)
    Score: 14.79
    arXiv:2511.10714v1 Announce Type: new
    Abstract: Recent advances in Chain-of-Thought (CoT) prompting have substantially improved the reasoning capabilities of large language models (LLMs), but have also introduced their computational efficiency as a new attack surface. In this paper, we propose BadThink, the first backdoor attack designed to deliberately induce "overthinking" behavior in CoT-enabled LLMs while ensuring stealth. When activated by carefully crafted trigger prompts, BadTh
  13. Prompt Engineering vs. Fine-Tuning for LLM-Based Vulnerability Detection in Solana and Algorand Smart Contracts (arxiv.org, 2025-11-17T05:00:00)
    Score: 14.79
    arXiv:2511.11250v1 Announce Type: new
    Abstract: Smart contracts have emerged as key components within decentralized environments, enabling the automation of transactions through self-executing programs. While these innovations offer significant advantages, they also present potential drawbacks if the smart contract code is not carefully designed and implemented. This paper investigates the capability of large language models (LLMs) to detect OWASP-inspired vulnerabilities in smart contracts bey
  14. Synthetic Voices, Real Threats: Evaluating Large Text-to-Speech Models in Generating Harmful Audio (arxiv.org, 2025-11-17T05:00:00)
    Score: 14.79
    arXiv:2511.10913v1 Announce Type: cross
    Abstract: Modern text-to-speech (TTS) systems, particularly those built on Large Audio-Language Models (LALMs), generate high-fidelity speech that faithfully reproduces input text and mimics specified speaker identities. While prior misuse studies have focused on speaker impersonation, this work explores a distinct content-centric threat: exploiting TTS systems to produce speech containing harmful content. Realizing such threats poses two core challenges:
  15. Automata-Based Steering of Large Language Models for Diverse Structured Generation (arxiv.org, 2025-11-17T05:00:00)
    Score: 14.79
    arXiv:2511.11018v1 Announce Type: cross
    Abstract: Large language models (LLMs) are increasingly tasked with generating structured outputs. While structured generation methods ensure validity, they often lack output diversity, a critical limitation that we confirm in our preliminary study. We propose a novel method to enhance diversity in automaton-based structured generation. Our approach utilizes automata traversal history to steer LLMs towards novel structural patterns. Evaluations show our m
  16. SCRUTINEER: Detecting Logic-Level Usage Violations of Reusable Components in Smart Contracts (arxiv.org, 2025-11-17T05:00:00)
    Score: 14.79
    arXiv:2511.11411v1 Announce Type: cross
    Abstract: Smart Contract Reusable Components(SCRs) play a vital role in accelerating the development of business-specific contracts by promoting modularity and code reuse. However, the risks associated with SCR usage violations have become a growing concern. One particular type of SCR usage violation, known as a logic-level usage violation, is becoming especially harmful. This violation occurs when the SCR adheres to its specified usage rules but fails to
  17. Powering enterprise search with the Cohere Embed 4 multimodal embeddings model in Amazon Bedrock (aws.amazon.com, 2025-11-11T20:59:54)
    Score: 13.12
    The Cohere Embed 4 multimodal embeddings model is now available as a fully managed, serverless option in Amazon Bedrock. In this post, we dive into the benefits and unique capabilities of Embed 4 for enterprise search use cases. We’ll show you how to quickly get started using Embed 4 on Amazon Bedrock, taking advantage of integrations with Strands Agents, S3 Vectors, and Amazon Bedrock AgentCore to build powerful agentic retrieval-augmented generation (RAG) workflows.
  18. Revealing Adversarial Smart Contracts through Semantic Interpretation and Uncertainty Estimation (arxiv.org, 2025-11-17T05:00:00)
    Score: 12.49
    arXiv:2509.18934v2 Announce Type: replace
    Abstract: Adversarial smart contracts, mostly on EVM-compatible chains like Ethereum and BSC, are deployed as EVM bytecode to exploit vulnerable smart contracts for financial gain. Detecting such malicious contracts at the time of deployment is an important proactive strategy to prevent losses from victim contracts. It offers a better cost-benefit ratio than detecting vulnerabilities on diverse potential victims. However, existing works are not generic
  19. The November 2025 Security Update Review (www.thezdi.com, 2025-11-11T18:30:42)
    Score: 11.596
    I’ve made it through Pwn2Own Ireland , and while many are celebrated those who served their country in the armed services, patch Tuesday stops for no one. So affix your poppy accordingly, and let’s take a look at the latest security offerings from Adobe and Microsoft. If you’d rather watch the full video recap covering the entire release, you can check it out here: Adobe Patches for November 2025 For November, Adobe released eight bulletins addressing 29 unique CVEs in Adobe InDesign, InCopy, Ph
  20. Grid-STIX: A STIX 2.1-Compliant Cyber-Physical Security Ontology for Power Grid (arxiv.org, 2025-11-17T05:00:00)
    Score: 11.49
    arXiv:2511.11366v1 Announce Type: new
    Abstract: Modern electrical power grids represent complex cyber-physical systems requiring specialized cybersecurity frameworks beyond traditional IT security models. Existing threat intelligence standards such as STIX 2.1 and MITRE ATT\&CK lack coverage for grid-specific assets, operational technology relationships, and cyber-physical interdependencies essential for power system security. We present Grid-STIX, a domain-specific extension of STIX 2.1 fo
  21. Democratizing AI: How Thomson Reuters Open Arena supports no-code AI for every professional with Amazon Bedrock (aws.amazon.com, 2025-11-07T21:51:22)
    Score: 10.176
    In this blog post, we explore how TR addressed key business use cases with Open Arena, a highly scalable and flexible no-code AI solution powered by Amazon Bedrock and other AWS services such as Amazon OpenSearch Service, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and AWS Lambda. We'll explain how TR used AWS services to build this solution, including how the architecture was designed, the use cases it solves, and the business profiles that use it.
  22. Time Travel Triage: An Introduction to Time Travel Debugging using a .NET Process Hollowing Case Study (cloud.google.com, 2025-11-13T14:00:00)
    Score: 9.527
    Written by: Josh Stroschein, Jae Young Kim The prevalence of obfuscation and multi-stage layering in today’s malware often forces analysts into tedious and manual debugging sessions. For instance, the primary challenge of analyzing pervasive commodity stealers like AgentTesla isn’t identifying the malware, but quickly cutting through the obfuscated delivery chain to get to the final payload. Unlike traditional live debugging, Time Travel Debugging (TTD) captures a deterministic, shareable record
  23. AFLGopher: Accelerating Directed Fuzzing via Feasibility-Aware Guidance (arxiv.org, 2025-11-17T05:00:00)
    Score: 9.49
    arXiv:2511.10828v1 Announce Type: new
    Abstract: Directed fuzzing is a useful testing technique that aims to efficiently reach target code sites in a program. The core of directed fuzzing is the guiding mechanism that directs the fuzzing to the specified target. A general guiding mechanism adopted in existing directed fuzzers is to calculate the control-flow distance between the current progress and the target, and use that as feedback to guide the directed fuzzing. A fundamental problem with th
  24. On the Information-Theoretic Fragility of Robust Watermarking under Diffusion Editing (arxiv.org, 2025-11-17T05:00:00)
    Score: 9.49
    arXiv:2511.10933v1 Announce Type: new
    Abstract: Robust invisible watermarking embeds hidden information in images such that the watermark can survive various manipulations. However, the emergence of powerful diffusion-based image generation and editing techniques poses a new threat to these watermarking schemes. In this paper, we investigate the intersection of diffusion-based image editing and robust image watermarking. We analyze how diffusion-driven image edits can significantly degrade or e
  25. Gynopticon: Consensus-Based Cheating Detection System for Competitive Games (arxiv.org, 2025-11-17T05:00:00)
    Score: 9.49
    arXiv:2511.10992v1 Announce Type: new
    Abstract: Cheating in online games poses significant threats to the gaming industry, yet most prior research has concentrated on Massively Multiplayer Online Role-Playing Games (MMORPGs). Competitive genres-such as Multiplayer Online Battle Arena (MOBA), First Person Shooter (FPS), Real Time Strategy (RTS), and Action games-remain underexplored due to the difficulty of detecting cheating users and the demand for complex data and techniques. To address this

Auto-generated 2025-11-17

Written By

More From Author

You May Also Like