Weekly Threat Report 2025-12-01

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2025-12-01T05:00:05.430863+00:00

  1. October 2025 CVE Landscape (www.recordedfuture.com, 2025-11-06T00:00:00)
    Score: 10.299
    Discover the top 32 high-risk CVEs identified in October 2025 by Recorded Future’s Insikt Group, including active zero-day exploits, legacy system threats, and CL0P ransomware campaigns targeting Oracle EBS.
  2. Chrome zero-day under active attack: visiting the wrong site could hijack your browser (www.malwarebytes.com, 2025-11-18T18:09:13)
    Score: 9.125
    Google has released an update to patch two high-severity vulnerabilities, one of which is already under active exploitation.
  3. Millions at risk after nationwide CodeRED alert system outage and data breach (www.malwarebytes.com, 2025-11-27T14:40:32)
    Score: 8.601
    A ransomware attack against the CodeRED emergency alert platform has triggered warnings across the US.
  4. How CVSS v4.0 works: characterizing and scoring vulnerabilities (www.malwarebytes.com, 2025-11-28T12:42:35)
    Score: 7.754
    This blog explains why vulnerability scoring matters, how CVSS works, and what’s new in version 4.0 (a schematic vector-parsing sketch follows this list).
  5. November 2025 Patch Tuesday: One Zero-Day and Five Critical Vulnerabilities Among 63 CVEs (www.crowdstrike.com, 2025-11-12T06:00:00)
    Score: 7.54
  6. Fake LinkedIn jobs trick Mac users into downloading Flexible Ferret malware (www.malwarebytes.com, 2025-11-26T14:11:26)
    Score: 7.43
    Scammers are using fake jobs and a phony video update to infect Mac users with a multi-stage stealer designed for long-term access and data theft.
  7. Integrating Threat Intelligence and Vulnerability Management: A Modern Approach (www.recordedfuture.com, 2025-11-26T00:00:00)
    Score: 7.332
    Learn how combining threat intelligence and vulnerability management creates a modern approach to risk reduction and how Recorded Future integrates both.
  8. New ClickFix wave infects users with hidden malware in images and fake Windows updates (www.malwarebytes.com, 2025-11-25T16:08:03)
    Score: 7.277
    ClickFix just got more convincing, hiding malware in PNG images and faking Windows updates to make users run dangerous commands (a conceptual steganography sketch follows this list).
  9. Matrix Push C2 abuses browser notifications to deliver phishing and malware (www.malwarebytes.com, 2025-11-24T15:43:00)
    Score: 7.108
    Attackers can send highly realistic push notifications through your browser, including fake alerts that can lead to malware or phishing pages (a sketch of the abused notification API follows this list).
  10. Addressing the vulnerability prioritization challenge (www.recordedfuture.com, 2025-11-18T00:00:00)
    Score: 6.299
    Struggling with vulnerability overload? Learn why CVSS scores alone aren't enough, and how a three-pillar framework using real-world threat intel, environmental context, and organizational realities can help you prioritize what truly matters (a composite-scoring sketch follows this list).
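
Item 4 above explains how CVSS characterizes vulnerabilities. As a concrete illustration, here is a minimal sketch that parses a CVSS v4.0 base vector string. Only the metric abbreviations come from the published v4.0 specification; the parser and the sample vector are illustrative, not FIRST's reference tooling.

```typescript
// Minimal sketch: pulling apart a CVSS v4.0 base vector string.
// Metric names follow the v4.0 specification; everything else is illustrative.
const BASE_METRICS = [
  "AV", // Attack Vector
  "AC", // Attack Complexity
  "AT", // Attack Requirements (new in v4.0)
  "PR", // Privileges Required
  "UI", // User Interaction (now None/Passive/Active)
  "VC", "VI", "VA", // C/I/A impact on the Vulnerable system
  "SC", "SI", "SA", // C/I/A impact on Subsequent systems (new split in v4.0)
] as const;

function parseCvss4(vector: string): Record<string, string> {
  const [prefix, ...pairs] = vector.split("/");
  if (prefix !== "CVSS:4.0") throw new Error("not a CVSS v4.0 vector");
  const metrics: Record<string, string> = {};
  for (const pair of pairs) {
    const [key, value] = pair.split(":");
    metrics[key] = value;
  }
  // A valid base vector carries every base metric.
  for (const m of BASE_METRICS) {
    if (!(m in metrics)) throw new Error(`missing base metric ${m}`);
  }
  return metrics;
}

// Hypothetical vector for a network-reachable, low-complexity flaw:
console.log(parseCvss4(
  "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
));
```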
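
Item 8 mentions malware hidden inside PNG images. A common way to do that is least-significant-bit (LSB) steganography: payload bits are spread across the low bits of pixel bytes, leaving the image visually unchanged. The sketch below is a conceptual illustration of LSB extraction only; it assumes the image has already been decoded to a raw pixel buffer, and it does not reproduce any real ClickFix artifact.

```typescript
// Conceptual LSB-steganography extraction: recover one hidden byte from
// every eight pixel bytes by collecting their least significant bits.
// `pixels` is assumed to be an already-decoded RGBA buffer; real campaigns
// may use a different encoding entirely.
function extractLsbPayload(pixels: Uint8Array, payloadLength: number): Uint8Array {
  const out = new Uint8Array(payloadLength);
  for (let i = 0; i < payloadLength; i++) {
    let byte = 0;
    for (let bit = 0; bit < 8; bit++) {
      byte = (byte << 1) | (pixels[i * 8 + bit] & 1); // keep only the low bit
    }
    out[i] = byte;
  }
  return out;
}

// Toy round trip: hide the byte 0x41 ("A") in eight carrier bytes.
const carrier = new Uint8Array(8).map((_, i) => 0xfe | ((0x41 >> (7 - i)) & 1));
console.log(extractLsbPayload(carrier, 1)); // Uint8Array [ 65 ]
```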
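
Item 9 works because the Web Notifications API is a legitimate browser feature: once a user clicks "Allow", any page can raise alerts styled by the operating system. The sketch below shows that standard API surface; the lure text and URLs are invented examples of the kind of fake alert described above.

```typescript
// The legitimate browser API that Matrix Push-style campaigns abuse.
// Once permission is granted, notifications render with OS styling,
// which is why fake "update" or "security" alerts look convincing.
async function showLure(): Promise<void> {
  const permission = await Notification.requestPermission();
  if (permission !== "granted") return;
  const n = new Notification("Browser update required", { // hypothetical lure
    body: "Your browser is out of date. Click to install the update.",
    icon: "https://example.invalid/fake-vendor-icon.png", // attacker-controlled
  });
  // In a real attack, clicking would open a phishing or malware page.
  n.onclick = () => window.open("https://example.invalid/landing");
}
```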
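
Item 10's three-pillar framework can be made concrete as a weighted composite score over severity, threat intelligence, and environmental context. The function below is a hypothetical sketch; the field names and weights are invented for illustration and are not Recorded Future's model.

```typescript
// Hypothetical three-pillar priority score: severity (CVSS), real-world
// threat intel, and environmental/organizational context. Weights are
// illustrative only.
interface VulnContext {
  cvssBase: number;          // 0-10, from the advisory
  exploitedInWild: boolean;  // pillar 1: threat intelligence
  assetCriticality: number;  // pillar 2: environmental context, 0-1
  patchWindowDays: number;   // pillar 3: organizational reality
}

function priorityScore(v: VulnContext): number {
  const severity = v.cvssBase / 10;                           // normalize to 0-1
  const intel = v.exploitedInWild ? 1 : 0.2;                  // exploitation dominates
  const environment = v.assetCriticality;
  const urgency = Math.min(v.patchWindowDays / 90, 1);        // slow-to-patch assets rank higher
  return 100 * (0.4 * severity + 0.35 * intel + 0.15 * environment + 0.1 * urgency);
}

// A medium-CVSS bug that is actively exploited on a critical asset can
// outrank an unexploited critical-CVSS bug, which is the article's point.
console.log(priorityScore({ cvssBase: 6.5, exploitedInWild: true,  assetCriticality: 0.9, patchWindowDays: 60 })); // ~81
console.log(priorityScore({ cvssBase: 9.8, exploitedInWild: false, assetCriticality: 0.3, patchWindowDays: 10 })); // ~52
```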

Top 25 AI / LLM-Related Threats

Generated 2025-12-01T06:00:16.762166+00:00

  1. A Longitudinal Measurement of Privacy Policy Evolution for Large Language Models (arxiv.org, 2025-12-01T05:00:00)
    Score: 20.79
    arXiv:2511.21758v1 Announce Type: new
    Abstract: Large language model (LLM) services have been rapidly integrated into people's daily lives as chatbots and agentic systems. They are nourished by collecting rich streams of data, raising privacy concerns around excessive collection of sensitive personal information. Privacy policies are the fundamental mechanism for informing users about data practices in the modern information privacy paradigm. Although traditional web and mobile policies are…
  2. Distillability of LLM Security Logic: Predicting Attack Success Rate of Outline Filling Attack via Ranking Regression (arxiv.org, 2025-12-01T05:00:00)
    Score: 20.79
    arXiv:2511.22044v1 Announce Type: new
    Abstract: In the realm of black-box jailbreak attacks on large language models (LLMs), the feasibility of constructing a narrow safety proxy, a lightweight model designed to predict the attack success rate (ASR) of adversarial prompts, remains underexplored. This work investigates the distillability of an LLM's core security logic. We propose a novel framework that incorporates an improved outline filling attack to achieve dense sampling of the model's…
  3. Evaluating the Robustness of Large Language Model Safety Guardrails Against Adversarial Attacks (arxiv.org, 2025-12-01T05:00:00)
    Score: 19.79
    arXiv:2511.22047v1 Announce Type: new
    Abstract: Large Language Model (LLM) safety guardrail models have emerged as a primary defense mechanism against harmful content generation, yet their robustness against sophisticated adversarial attacks remains poorly characterized. This study evaluated ten publicly available guardrail models from Meta, Google, IBM, NVIDIA, Alibaba, and Allen AI across 1,445 test prompts spanning 21 attack categories. While Qwen3Guard-8B achieved the highest overall accuracy…
  4. AutoPatch: Multi-Agent Framework for Patching Real-World CVE Vulnerabilities (arxiv.org, 2025-12-01T05:00:00)
    Score: 17.79
    arXiv:2505.04195v2 Announce Type: replace
    Abstract: Large Language Models (LLMs) have emerged as promising tools in software development, enabling automated code generation and analysis. However, their knowledge is limited to a fixed cutoff date, making them prone to generating code vulnerable to newly disclosed CVEs. Frequent fine-tuning with new CVE sets is costly, and existing LLM-based approaches focus on oversimplified CWE examples and require providing explicit bug locations to LLMs, limiting…
  5. Standardized Threat Taxonomy for AI Security, Governance, and Regulatory Compliance (arxiv.org, 2025-12-01T05:00:00)
    Score: 16.99
    arXiv:2511.21901v1 Announce Type: new
    Abstract: The accelerating deployment of artificial intelligence systems across regulated sectors has exposed critical fragmentation in risk assessment methodologies. A significant "language barrier" currently separates technical security teams, who focus on algorithmic vulnerabilities (e.g., MITRE ATLAS), from legal and compliance professionals, who address regulatory mandates (e.g., EU AI Act, NIST AI RMF). This disciplinary disconnect prevents…
  6. Evaluating LLMs for One-Shot Patching of Real and Artificial Vulnerabilities (arxiv.org, 2025-12-01T05:00:00)
    Score: 16.79
    arXiv:2511.23408v1 Announce Type: new
    Abstract: Automated vulnerability patching is crucial for software security, and recent advancements in Large Language Models (LLMs) present promising capabilities for automating this task. However, existing research has primarily assessed LLMs using publicly disclosed vulnerabilities, leaving their effectiveness on related artificial vulnerabilities largely unexplored. In this study, we empirically evaluate the patching effectiveness and complementarity of…
  7. NegBLEURT Forest: Leveraging Inconsistencies for Detecting Jailbreak Attacks (arxiv.org, 2025-12-01T05:00:00)
    Score: 15.49
    arXiv:2511.11784v2 Announce Type: replace
    Abstract: Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content, despite alignment with ethical guidelines. Crafting universal filtering rules remains difficult due to their inherent dependence on specific contexts. To address these challenges without relying on threshold calibration or model fine-tuning, this work introduces a semantic consistency analysis between…
  8. Categorical Framework for Quantum-Resistant Zero-Trust AI Security (arxiv.org, 2025-12-01T05:00:00)
    Score: 15.29
    arXiv:2511.21768v1 Announce Type: new
    Abstract: The rapid deployment of AI models necessitates robust, quantum-resistant security, particularly against adversarial threats. Here, we present a novel integration of post-quantum cryptography (PQC) and zero trust architecture (ZTA), formally grounded in category theory, to secure AI model access. Our framework uniquely models cryptographic workflows as morphisms and trust policies as functors, enabling fine-grained, adaptive trust and micro-segmentation…
  9. Ghosting Your LLM: Without The Knowledge of Your Gradient and Data (arxiv.org, 2025-12-01T05:00:00)
    Score: 14.79
    arXiv:2511.22700v1 Announce Type: new
    Abstract: In recent years, large language models (LLMs) have achieved substantial advancements and are increasingly integrated into critical applications across various domains. This growing adoption underscores the need to ensure their security and robustness. In this work, we focus on the impact of Bit Flip Attacks (BFAs) on LLMs, which exploit hardware faults to corrupt model parameters, posing a significant threat to model integrity and performance. …
  10. PRISM: Privacy-Aware Routing for Adaptive Cloud-Edge LLM Inference via Semantic Sketch Collaboration (arxiv.org, 2025-12-01T05:00:00)
    Score: 14.79
    arXiv:2511.22788v1 Announce Type: new
    Abstract: Large Language Models (LLMs) demonstrate impressive capabilities in natural language understanding and generation, but incur high communication overhead and privacy risks in cloud deployments, while facing compute and memory constraints when confined to edge devices. Cloud-edge inference has emerged as a promising paradigm for improving privacy in LLM services by retaining sensitive computations on local devices. However, existing cloud-edge inference…
  11. Medical Malice: A Dataset for Context-Aware Safety in Healthcare LLMs (arxiv.org, 2025-12-01T05:00:00)
    Score: 14.79
    arXiv:2511.21757v1 Announce Type: cross
    Abstract: The integration of Large Language Models (LLMs) into healthcare demands a safety paradigm rooted in "primum non nocere". However, current alignment techniques rely on generic definitions of harm that fail to capture context-dependent violations, such as administrative fraud and clinical discrimination. To address this, we introduce Medical Malice: a dataset of 214,219 adversarial prompts calibrated to the regulatory and ethical complexities…
  12. AgentShield: Make MAS more secure and efficient (arxiv.org, 2025-12-01T05:00:00)
    Score: 14.79
    arXiv:2511.22924v1 Announce Type: cross
    Abstract: Large Language Model (LLM)-based Multi-Agent Systems (MAS) offer powerful cooperative reasoning but remain vulnerable to adversarial attacks, where compromised agents can undermine the system's overall performance. Existing defenses either depend on single trusted auditors, creating single points of failure, or sacrifice efficiency for robustness. To resolve this tension, we propose AgentShield, a distributed framework for efficient…
  13. iSeal: Encrypted Fingerprinting for Reliable LLM Ownership Verification (arxiv.org, 2025-12-01T05:00:00)
    Score: 14.79
    arXiv:2511.08905v2 Announce Type: replace
    Abstract: Given the high cost of large language model (LLM) training from scratch, safeguarding LLM intellectual property (IP) has become increasingly crucial. As the standard paradigm for IP ownership verification, LLM fingerprinting thus plays a vital role in addressing this challenge. Existing LLM fingerprinting methods verify ownership by extracting or injecting model-specific features. However, they overlook potential attacks during the verification…
  14. LockForge: Automating Paper-to-Code for Logic Locking with Multi-Agent Reasoning LLMs (arxiv.org, 2025-12-01T05:00:00)
    Score: 14.79
    arXiv:2511.18531v2 Announce Type: replace
    Abstract: Despite rapid progress in logic locking (LL), reproducibility remains a challenge as codes are rarely made public. We present LockForge, a first-of-its-kind, multi-agent large language model (LLM) framework that turns LL descriptions in papers into executable and tested code. LockForge provides a carefully crafted pipeline realizing forethought, implementation, iterative refinement, and a multi-stage validation, all to systematically bridge the…
  15. Threat Landscape of the Building and Construction Sector Part Two: Ransomware (www.rapid7.com, 2025-11-14T14:31:42)
    Score: 12.537
    In this second installment of our two-part series on the construction industry, Rapid7 is looking at the specific threat ransomware poses, why the industry is particularly vulnerable, and ways in which threat actors exploit its weaknesses to great effect. You can catch up on the first part here: Initial Access, Supply Chain, and the Internet of Things. Ransomware and the construction industry: The construction sector is increasingly vulnerable to ransomware attacks in 2025 due to its complex ecosystem…
  16. ShieldAgent: Shielding Agents via Verifiable Safety Policy Reasoning (arxiv.org, 2025-12-01T05:00:00)
    Score: 12.49
    arXiv:2503.22738v2 Announce Type: replace-cross
    Abstract: Autonomous agents powered by foundation models have seen widespread adoption across various real-world applications. However, they remain highly vulnerable to malicious instructions and attacks, which can result in severe consequences such as privacy breaches and financial losses. More critically, existing guardrails for LLMs are not applicable due to the complex and dynamic nature of agents. To tackle these challenges, we propose ShieldAgent…
  17. GEO-Detective: Unveiling Location Privacy Risks in Images with LLM Agents (arxiv.org, 2025-12-01T05:00:00)
    Score: 11.79
    arXiv:2511.22441v1 Announce Type: new
    Abstract: Images shared on social media often expose geographic cues. While early geolocation methods required expert effort and lacked generalization, the rise of Large Vision Language Models (LVLMs) now enables accurate geolocation even for ordinary users. However, existing approaches are not optimized for this task. To explore the full potential and associated privacy risks, we present Geo-Detective, an agent that mimics human reasoning and tool use for…
  18. The State of Security Today: Setting the Stage for 2026 (www.rapid7.com, 2025-11-18T16:07:34)
    Score: 11.505
    As we close out 2025, one thing is clear: the security landscape is evolving faster than most organizations can keep up. From surging ransomware campaigns and AI-enhanced phishing to data extortion, geopolitical fallout, and gaps in cyber readiness, the challenges facing security teams today are as varied as they are relentless. But with complexity comes clarity and insight. This year’s most significant breaches, breakthroughs, and behavioral shifts provide a critical lens through which we can…
  19. A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures (arxiv.org, 2025-12-01T05:00:00)
    Score: 11.49
    arXiv:2506.19676v4 Announce Type: replace
    Abstract: In recent years, Large-Language-Model-driven AI agents have exhibited unprecedented intelligence and adaptability. Nowadays, agents are undergoing a new round of evolution. They no longer act as an isolated island like LLMs. Instead, they start to communicate with diverse external entities, such as other agents and tools, to perform complex tasks. Under this trend, agent communication is regarded as a foundational pillar of the next communication…
  20. Amazon SageMaker AI introduces EAGLE based adaptive speculative decoding to accelerate generative AI inference (aws.amazon.com, 2025-11-26T00:29:42)
    Score: 11.455
    Amazon SageMaker AI now supports EAGLE-based adaptive speculative decoding, a technique that accelerates large language model inference by up to 2.5x while maintaining output quality. In this post, we explain how to use EAGLE 2 and EAGLE 3 speculative decoding in Amazon SageMaker AI, covering the solution architecture, optimization workflows using your own datasets or SageMaker's built-in data, and benchmark results demonstrating significant improvements in throughput and latency. (A toy sketch of speculative decoding follows this list.)
  21. Patch Tuesday – November 2025 (www.rapid7.com, 2025-11-11T20:58:18)
    Score: 11.287
    Microsoft is publishing 66 new vulnerabilities today, which is far fewer than we’ve come to expect in recent months. There’s a lone exploited-in-the-wild zero-day vulnerability, which Microsoft assesses as critical severity, although there’s apparently no public disclosure yet. Three critical remote code execution (RCE) vulnerabilities are patched today; happily, Microsoft currently assesses all three as less likely to see exploitation. Five browser vulnerabilities and a dozen or so fixes for Azure…
  22. How Condé Nast accelerated contract processing and rights analysis with Amazon Bedrock (aws.amazon.com, 2025-11-26T21:37:27)
    Score: 10.364
    In this post, we explore how Condé Nast used Amazon Bedrock and Anthropic’s Claude to accelerate their contract processing and rights analysis workstreams. The company’s extensive portfolio, spanning multiple brands and geographies, required managing an increasingly complex web of contracts, rights, and licensing agreements.
  23. Accelerate generative AI innovation in Canada with Amazon Bedrock cross-Region inference (aws.amazon.com, 2025-11-24T23:56:58)
    Score: 9.911
    We are excited to announce that customers in Canada can now access advanced foundation models including Anthropic's Claude Sonnet 4.5 and Claude Haiku 4.5 on Amazon Bedrock through cross-Region inference (CRIS). This post explores how Canadian organizations can use cross-Region inference profiles from the Canada (Central) Region to access the latest foundation models to accelerate AI initiatives. We will demonstrate how to get started with these new capabilities, provide guidance for migrating…
  24. Beyond the Watering Hole: APT24's Pivot to Multi-Vector Attacks (cloud.google.com, 2025-11-20T14:00:00)
    Score: 9.86
    Google Threat Intelligence Group (GTIG) is tracking a long-running and adaptive cyber espionage campaign by APT24, a People's Republic of China (PRC)-nexus threat actor. Spanning three years, APT24 has been deploying BADAUDIO, a highly obfuscated first-stage downloader used to establish persistent access to victim networks. While earlier operations relied on broad strategic web compromises of legitimate websites, APT24 has recently…
  25. Attackers accelerate, adapt, and automate: Rapid7’s Q3 2025 Threat Landscape Report (www.rapid7.com, 2025-11-12T13:55:11)
    Score: 9.555
    The Q3 2025 Threat Landscape Report, authored by the Rapid7 Labs team, paints a clear picture of an environment where attackers are moving faster, working smarter, and using artificial intelligence to stay ahead of defenders. The findings reveal a threat landscape defined by speed, coordination, and innovation. The quarter showed how quickly exploitation now follows disclosure: Rapid7 observed newly reported vulnerabilities weaponized within days, if not hours, leaving organizations little time…
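
Item 20 above relies on speculative decoding: a cheap draft model proposes several tokens at once, and the expensive target model verifies them, keeping the longest agreeing prefix, so output quality matches plain decoding while wall-clock time drops. The toy greedy sketch below uses stand-in draft/target functions over token-id arrays; it is not SageMaker's or EAGLE's actual implementation.

```typescript
// Toy greedy speculative decoding: `draft` proposes k tokens cheaply,
// `target` (the expensive model) checks them; we keep the longest agreeing
// prefix plus one corrected token per round. Both "models" are stand-ins.
type NextToken = (prefix: number[]) => number;

function speculativeDecode(
  draft: NextToken, target: NextToken,
  prompt: number[], k: number, maxNew: number,
): number[] {
  const tokens = [...prompt];
  while (tokens.length < prompt.length + maxNew) {
    // 1. Draft k candidate tokens autoregressively (cheap).
    const candidates: number[] = [];
    for (let i = 0; i < k; i++) {
      candidates.push(draft([...tokens, ...candidates]));
    }
    // 2. Verify with the target model; a real system scores all k
    //    positions in one batched forward pass, which is the speedup.
    for (const c of candidates) {
      const wanted = target(tokens);
      tokens.push(wanted);     // the target's token is always kept,
      if (wanted !== c) break; // but a mismatch ends this round early.
    }
  }
  return tokens.slice(0, prompt.length + maxNew);
}

// Example with trivial stand-ins: both models emit "previous token + 1".
const target: NextToken = (p) => p[p.length - 1] + 1;
const draft: NextToken = (p) => p[p.length - 1] + 1;
console.log(speculativeDecode(draft, target, [0], 4, 8)); // [0, 1, ..., 8]
```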

Auto-generated 2025-12-01
