
Weekly Threat Report 2025-11-24

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2025-11-24T05:00:05.738920+00:00

  1. October 2025 CVE Landscape (www.recordedfuture.com, 2025-11-06T00:00:00)
    Score: 11.465
    Discover the top 32 high-risk CVEs identified in October 2025 by Recorded Future’s Insikt Group, including active zero-day exploits, legacy system threats, and CL0P ransomware campaigns targeting Oracle EBS.
  2. Chrome zero-day under active attack: visiting the wrong site could hijack your browser (www.malwarebytes.com, 2025-11-18T18:09:13)
    Score: 10.291
    Google has released an update to patch two high-severity vulnerabilities, one of which is already under active exploitation.
  3. November 2025 Patch Tuesday: One Zero-Day and Five Critical Vulnerabilities Among 63 CVEs (www.crowdstrike.com, 2025-11-12T06:00:00)
    Score: 8.707
  4. Addressing the vulnerability prioritization challenge (www.recordedfuture.com, 2025-11-18T00:00:00)
    Score: 7.465
    Struggling with vulnerability overload? Learn why CVSS scores alone aren't enough—and how a three-pillar framework using real-world threat intel, environmental context, and organizational realities can help you prioritize what truly matters.
  5. Thieves order a tasty takeout of names and addresses from DoorDash (www.malwarebytes.com, 2025-11-18T14:24:54)
    Score: 7.265
    It was the way DoorDash handled the communication of the breach, as much as the data leaked, that has angered customers.
  6. Be careful responding to unexpected job interviews (www.malwarebytes.com, 2025-11-14T16:30:38)
    Score: 6.613
    Contacted out of the blue for a virtual interview? Be cautious. Attackers are using fake interviews to slip malware onto your device.
  7. From Vulnerability Management to Exposure Management: The Platform Era Has Arrived (www.crowdstrike.com, 2025-11-13T06:00:00)
    Score: 6.374
  8. Ransomware Detection With Real-Time Data (www.recordedfuture.com, 2025-11-04T00:00:00)
    Score: 5.832
    Learn why timely, relevant data is crucial for effective ransomware detection and what you can do to help prevent ransomware attacks and safeguard your organization.
  9. AI teddy bear for kids responds with sexual content and advice about weapons (www.malwarebytes.com, 2025-11-21T18:45:32)
    Score: 5.796
    FoloToy's AI teddy bear, Kumma, crossed serious lines, raising fresh concerns about how little oversight exists for AI toys marketed to children.
  10. Fake calendar invites are spreading. Here’s how to remove them and prevent more (www.malwarebytes.com, 2025-11-21T15:28:23)
    Score: 5.773
    Calendar spam is a growing problem, often arriving as email attachments or as download links in messaging apps.

Top 10 AI / LLM-Related Threats

Generated 2025-11-24T06:00:14.697140+00:00

  1. GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools (cloud.google.com, 2025-11-05T14:00:00)
    Score: 34.056
    Executive Summary: Based on recent analysis of the broader threat landscape, Google Threat Intelligence Group (GTIG) has identified a shift that occurred within the last year: adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains; they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution. This report serves as an update to our January 2025
  2. Password Strength Analysis Through Social Network Data Exposure: A Combined Approach Relying on Data Reconstruction and Generative Models (arxiv.org, 2025-11-24T05:00:00)
    Score: 17.79
    arXiv:2511.16716v1 Announce Type: new
    Abstract: Although passwords remain the primary defense against unauthorized access, users often tend to use passwords that are easy to remember. This behavior significantly increases security risks, also due to the fact that traditional password strength evaluation methods are often inadequate. In this discussion paper, we present SODA ADVANCE, a data reconstruction tool also designed to enhance evaluation processes related to password strength. In par
  3. Adaptive and Robust Data Poisoning Detection and Sanitization in Wearable IoT Systems using Large Language Models (arxiv.org, 2025-11-24T05:00:00)
    Score: 17.79
    arXiv:2511.02894v3 Announce Type: replace-cross
    Abstract: The widespread integration of wearable sensing devices in Internet of Things (IoT) ecosystems, particularly in healthcare, smart homes, and industrial applications, has required robust human activity recognition (HAR) techniques to improve functionality and user experience. Although machine learning models have advanced HAR, they are increasingly susceptible to data poisoning attacks that compromise the data integrity and reliability of
  4. Preparing for Threats to Come: Cybersecurity Forecast 2026 (cloud.google.com, 2025-11-04T14:00:00)
    Score: 15.017
    Every November, we make it our mission to equip organizations with the knowledge needed to stay ahead of threats we anticipate in the coming year. The Cybersecurity Forecast 2026 report, released today, provides comprehensive insights to help security leaders and teams prepare for those challenges. This report does not contain "crystal ball" predictions. Instead, our forecasts are built on real-world trends and data we are observing right now. The information contained in the report co
  5. Streamline AI operations with the Multi-Provider Generative AI Gateway reference architecture (aws.amazon.com, 2025-11-21T20:34:56)
    Score: 14.83
    In this post, we introduce the Multi-Provider Generative AI Gateway reference architecture, which provides guidance for deploying LiteLLM into an AWS environment to streamline the management and governance of production generative AI workloads across multiple model providers. This centralized gateway solution addresses common enterprise challenges including provider fragmentation, decentralized governance, operational complexity, and cost management by offering a unified interface that supports
  6. AutoBackdoor: Automating Backdoor Attacks via LLM Agents (arxiv.org, 2025-11-24T05:00:00)
    Score: 14.79
    arXiv:2511.16709v1 Announce Type: new
    Abstract: Backdoor attacks pose a serious threat to the secure deployment of large language models (LLMs), enabling adversaries to implant hidden behaviors triggered by specific inputs. However, existing methods often rely on manually crafted triggers and static data pipelines, which are rigid, labor-intensive, and inadequate for systematically evaluating modern defense robustness. As AI agents become increasingly capable, there is a growing need for more r
  7. Steering in the Shadows: Causal Amplification for Activation Space Attacks in Large Language Models (arxiv.org, 2025-11-24T05:00:00)
    Score: 14.79
    arXiv:2511.17194v1 Announce Type: new
    Abstract: Modern large language models (LLMs) are typically secured by auditing data, prompts, and refusal policies, while treating the forward pass as an implementation detail. We show that intermediate activations in decoder-only LLMs form a vulnerable attack surface for behavioral control. Building on recent findings on attention sinks and compression valleys, we identify a high-gain region in the residual stream where small, well-aligned perturbations a
  8. SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought (arxiv.org, 2025-11-24T05:00:00)
    Score: 14.79
    arXiv:2511.07772v2 Announce Type: replace
    Abstract: As Large Language Models (LLMs) evolve into personal assistants with access to sensitive user data, they face a critical privacy challenge: while prior work has addressed output-level privacy, recent findings reveal that LLMs often leak private information through their internal reasoning processes, violating contextual privacy expectations. These leaky thoughts occur when models inadvertently expose sensitive details in their reasoning traces
  9. Reason2Attack: Jailbreaking Text-to-Image Models via LLM Reasoning (arxiv.org, 2025-11-24T05:00:00)
    Score: 12.49
    arXiv:2503.17987v3 Announce Type: replace
    Abstract: Text-to-Image(T2I) models typically deploy safety filters to prevent the generation of sensitive images. Unfortunately, recent jailbreaking attack methods manually design instructions for the LLM to generate adversarial prompts, which effectively bypass safety filters while producing sensitive images, exposing safety vulnerabilities of T2I models. However, due to the LLM's limited understanding of the T2I model and its safety filters, exi
  10. Amazon Bedrock Guardrails expands support for code domain (aws.amazon.com, 2025-11-19T22:27:14)
    Score: 12.373
    Amazon Bedrock Guardrails now extends its safety controls to protect code generation across twelve programming languages, addressing critical security challenges in AI-assisted software development. In this post, we explore how to configure content filters, prompt attack detection, denied topics, and sensitive information filters to safeguard against threats like prompt injection, data exfiltration, and malicious code generation while maintaining developer productivity.
  11. GhostEI-Bench: Do Mobile Agents Resilience to Environmental Injection in Dynamic On-Device Environments? (arxiv.org, 2025-11-24T05:00:00)
    Score: 11.79
    arXiv:2510.20333v2 Announce Type: replace
    Abstract: Vision-Language Models (VLMs) are increasingly deployed as autonomous agents to navigate mobile graphical user interfaces (GUIs). Operating in dynamic on-device ecosystems, which include notifications, pop-ups, and inter-app interactions, exposes them to a unique and underexplored threat vector: environmental injection. Unlike prompt-based attacks that manipulate textual instructions, environmental injection corrupts an agent's visual per
  12. Beyond the Watering Hole: APT24's Pivot to Multi-Vector Attacks (cloud.google.com, 2025-11-20T14:00:00)
    Score: 11.527
    Written by: Harsh Parashar, Tierra Duncan, Dan Perez. Google Threat Intelligence Group (GTIG) is tracking a long-running and adaptive cyber espionage campaign by APT24, a People's Republic of China (PRC)-nexus threat actor. Spanning three years, APT24 has been deploying BADAUDIO, a highly obfuscated first-stage downloader used to establish persistent access to victim networks. While earlier operations relied on broad strategic web compromises of legitimate websites, APT24 has rece
  13. SHIELD: Secure Hypernetworks for Incremental Expansion Learning Defense (arxiv.org, 2025-11-24T05:00:00)
    Score: 11.49
    arXiv:2506.08255v3 Announce Type: replace-cross
    Abstract: Continual learning under adversarial conditions remains an open problem, as existing methods often compromise either robustness, scalability, or both. We propose a novel framework that integrates Interval Bound Propagation (IBP) with a hypernetwork-based architecture to enable certifiably robust continual learning across sequential tasks. Our method, SHIELD, generates task-specific model parameters via a shared hypernetwork conditioned s
  14. Constant-Size Cryptographic Evidence Structures for Regulated AI Workflows (arxiv.org, 2025-11-24T05:00:00)
    Score: 11.29
    arXiv:2511.17118v1 Announce Type: new
    Abstract: This paper introduces constant-size cryptographic evidence structures, a general abstraction for representing verifiable audit evidence for AI workflows in regulated environments. Each evidence item is a fixed-size tuple of cryptographic fields, designed to (i) provide strong binding to workflow events and configurations, (ii) support constant-size storage and uniform verification cost per event, and (iii) compose cleanly with hash-chain and Merkl
  15. The November 2025 Security Update Review (www.thezdi.com, 2025-11-11T18:30:42)
    Score: 9.929
    I’ve made it through Pwn2Own Ireland, and while many celebrated those who served their country in the armed services, Patch Tuesday stops for no one. So affix your poppy accordingly, and let’s take a look at the latest security offerings from Adobe and Microsoft. If you’d rather watch the full video recap covering the entire release, you can check it out here: Adobe Patches for November 2025 For November, Adobe released eight bulletins addressing 29 unique CVEs in Adobe InDesign, InCopy, Ph
  16. In Other News: ATM Jackpotting, WhatsApp-NSO Lawsuit Continues, CISA Hiring (www.securityweek.com, 2025-11-21T15:30:00)
    Score: 9.88
    Other noteworthy stories that might have slipped under the radar: surge in Palo Alto Networks scanning, WEL Companies data breach impacts 120,000 people, AI second-order prompt injection attack.
  17. Introducing AnyLanguageModel: One API for Local and Remote LLMs on Apple Platforms (huggingface.co, 2025-11-20T00:00:00)
    Score: 9.588
  18. Membership Inference Attacks Beyond Overfitting (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2511.16792v1 Announce Type: new
    Abstract: Membership inference attacks (MIAs) against machine learning (ML) models aim to determine whether a given data point was part of the model training data. These attacks may pose significant privacy risks to individuals whose sensitive data were used for training, which motivates the use of defenses such as differential privacy, often at the cost of high accuracy losses. MIAs exploit the differences in the behavior of a model when making predictions
  19. TICAL: Trusted and Integrity-protected Compilation of AppLications (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2511.17070v1 Announce Type: new
    Abstract: During the past few years, we have witnessed various efforts to provide confidentiality and integrity for applications running in untrusted environments such as public clouds. In most of these approaches, hardware extensions such as Intel SGX, TDX, AMD SEV, etc., are leveraged to provide encryption and integrity protection on process or VM level. Although all of these approaches increase the trust in the application at runtime, an often overlooked
  20. ThreadFuzzer: Fuzzing Framework for Thread Protocol (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2511.17283v1 Announce Type: new
    Abstract: With the rapid growth of IoT, secure and efficient mesh networking has become essential. Thread has emerged as a key protocol, widely used in smart-home and commercial systems, and serving as a core transport layer in the Matter standard. This paper presents ThreadFuzzer, the first dedicated fuzzing framework for systematically testing Thread protocol implementations. By manipulating packets at the MLE layer, ThreadFuzzer enables fuzzing of both v
  21. A Patient-Centric Blockchain Framework for Secure Electronic Health Record Management: Decoupling Data Storage from Access Control (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2511.17464v1 Announce Type: new
    Abstract: We present a patient-centric architecture for electronic health record (EHR) sharing that separates content storage from authorization and audit. Encrypted FHIR resources are stored off-chain; a public blockchain records only cryptographic commitments and patient-signed, time-bounded permissions using EIP-712. Keys are distributed via public-key wrapping, enabling storage providers to remain honest-but-curious without risking confidentiality. We f
  22. Compact and Selective Disclosure for Verifiable Credentials (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2506.00262v2 Announce Type: replace
    Abstract: Self-Sovereign Identity (SSI) is a novel identity model that empowers individuals with full control over their data, enabling them to choose what information to disclose, with whom, and when. This paradigm is rapidly gaining traction worldwide, supported by numerous initiatives such as the European Digital Identity (EUDI) Regulation or Singapore's National Digital Identity (NDI). For instance, by 2026, the EUDI Regulation will enable all
  23. T2I-RiskyPrompt: A Benchmark for Safety Evaluation, Attack, and Defense on Text-to-Image Model (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2510.22300v2 Announce Type: replace
    Abstract: Using risky text prompts, such as pornography and violent prompts, to test the safety of text-to-image (T2I) models is a critical task. However, existing risky prompt datasets are limited in three key areas: 1) limited risky categories, 2) coarse-grained annotation, and 3) low effectiveness. To address these limitations, we introduce T2I-RiskyPrompt, a comprehensive benchmark designed for evaluating safety-related tasks in T2I models. Specific
  24. AudAgent: Automated Auditing of Privacy Policy Compliance in AI Agents (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2511.07441v2 Announce Type: replace
    Abstract: AI agents can autonomously perform tasks and, often without explicit user consent, collect or disclose users' sensitive local data, which raises serious privacy concerns. Although AI agents' privacy policies describe their intended data practices, there remains limited transparency and accountability about whether runtime behavior matches those policies. To close this gap, we introduce AudAgent, a visual tool that continuously monito
  25. LLM-Agent-UMF: LLM-based Agent Unified Modeling Framework for Seamless Design of Multi Active/Passive Core-Agent Architectures (arxiv.org, 2025-11-24T05:00:00)
    Score: 9.49
    arXiv:2409.11393v3 Announce Type: replace-cross
    Abstract: In an era where vast amounts of data are collected and processed from diverse sources, there is a growing demand for sophisticated AI systems capable of intelligently fusing and analyzing this information. To address these challenges, researchers have turned towards integrating tools into LLM-powered agents to enhance the overall information fusion process. However, the conjunction of these technologies and the proposed enhancements in s
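As a purely illustrative aside on item 18 above: membership inference attacks exploit the gap in a model's behavior between training and non-training points, and in their simplest form reduce to thresholding a per-example loss. The sketch below uses toy record names, probabilities, and a hand-picked threshold (all hypothetical, not from the paper) to show the mechanic:

```python
# Illustrative loss-threshold membership inference attack (MIA).
# An overfit model assigns lower loss to points it saw during training;
# the attacker flags low-loss records as likely training members.
# All records, probabilities, and the threshold are toy values.
import math

def nll(p: float) -> float:
    """Negative log-likelihood of the true label given predicted prob p."""
    return -math.log(max(p, 1e-12))

# Hypothetical predicted probabilities for each record's true class:
# the model is far more confident on records it memorized in training.
train_probs = {"rec_a": 0.99, "rec_b": 0.97}   # members
test_probs = {"rec_c": 0.60, "rec_d": 0.55}    # non-members

def infer_membership(probs: dict, threshold: float) -> dict:
    """Flag a record as a training member if its loss falls below threshold."""
    return {k: nll(p) < threshold for k, p in probs.items()}

threshold = 0.2  # a real attack would calibrate this on shadow models
print(infer_membership(train_probs, threshold))  # members flagged True
print(infer_membership(test_probs, threshold))   # non-members flagged False
```

Defenses such as differential privacy, mentioned in the abstract, work precisely by shrinking this train/test loss gap, which is why they often cost accuracy.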

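Item 14 in the AI/LLM list describes fixed-size cryptographic evidence tuples that compose with hash chains for auditing AI workflows. As a loose, hedged sketch of the general hash-chaining idea only (field names and layout are hypothetical, not the paper's construction): each audit record carries a constant-size digest binding its event to every event before it, so verification cost per event stays uniform.

```python
# Illustrative hash-chained audit evidence: each commitment is a
# fixed-size SHA-256 digest over (previous commitment || event payload),
# so tampering with any earlier event invalidates all later commitments.
# Event names and the genesis value are hypothetical examples.
import hashlib

def chain_events(events: list[str]) -> list[str]:
    """Return one 32-byte (hex-encoded) commitment per event."""
    digests, prev = [], "00" * 32  # genesis value
    for payload in events:
        h = hashlib.sha256((prev + payload).encode()).hexdigest()
        digests.append(h)
        prev = h
    return digests

def verify_chain(events: list[str], digests: list[str]) -> bool:
    """Re-derive the chain and check every stored commitment."""
    return chain_events(events) == digests

log = ["model_loaded:v3", "prompt_filtered", "output_released"]
commitments = chain_events(log)
assert verify_chain(log, commitments)
# Rewriting any earlier event breaks every later commitment:
assert not verify_chain(["model_loaded:v4"] + log[1:], commitments)
```

Note the storage property the abstract emphasizes: every commitment is the same 32 bytes regardless of how long the workflow runs, which is what makes per-event storage and verification cost constant.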
Auto-generated 2025-11-24
