Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2025-12-15T05:00:05.074919+00:00
- GrayBravo’s CastleLoader Activity Clusters Target Multiple Industries (www.recordedfuture.com, 2025-12-09T00:00:00)
Score: 14.965
Note: The analysis cut-off date for this report was November 10, 2025 Executive Summary Insikt Group continues to monitor GrayBravo (formerly tracked as TAG-150), a technically sophisticated and rapidly evolving threat actor first identified in September 2025. GrayBravo demonstrates strong adaptability, responsiveness to public exposure, and operates a large-scale, multi-layered infrastructure. Recent analysis of GrayBravo’s ecosystem uncovered four distinct activity clusters leveraging the grou - [updated]Another Chrome zero-day under attack: update now (www.malwarebytes.com, 2025-12-11T11:58:47)
Score: 10.582
If we’re lucky, this update will close out 2025’s run of Chrome zero-days. This one is a V8 type-confusion issue already being exploited in the wild. - Critical React2Shell Vulnerability Under Active Exploitation by Chinese Threat Actors (www.recordedfuture.com, 2025-12-08T00:00:00)
Score: 10.499
A critical vulnerability in React Server Components is allegedly being actively exploited by multiple Chinese threat actors, Recorded Future recommends organizations patch their systems immediately. - December 2025 Patch Tuesday: One Critical Zero-Day, Two Publicly Disclosed Vulnerabilities Among 57 CVEs (www.crowdstrike.com, 2025-12-09T06:00:00)
Score: 9.707 - November 2025 CVE Landscape: 10 Critical Vulnerabilities Show 69% Drop from October (www.recordedfuture.com, 2025-12-09T00:00:00)
Score: 8.665
November 2025 CVE landscape: 10 exploited critical vulnerabilities, a 69% drop from October, and why Fortinet and Samsung flaws need urgent patching. - GOLD SALEM tradecraft for deploying Warlock ransomware (news.sophos.com, 2025-12-11T10:00:59)
Score: 8.568
Analysis of the tradecraft evolution across 6 months and 11 incidents - December Patch Tuesday fixes three zero-days, including one that hijacks Windows devices (www.malwarebytes.com, 2025-12-10T16:06:14)
Score: 8.444
The update patches three zero-days and introduces a new PowerShell warning meant to help you avoid accidentally running unsafe code from the web. - Inside Shanya, a packer-as-a-service fueling modern attacks (news.sophos.com, 2025-12-07T02:57:18)
Score: 7.852
The ransomware scene gains another would-be EDR killer - React2Shell flaw (CVE-2025-55182) exploited for remote code execution (news.sophos.com, 2025-12-11T18:07:12)
Score: 7.624
The availability of exploit code will likely lead to more widespread opportunistic attacks - DroidLock malware locks you out of your Android device and demands ransom (www.malwarebytes.com, 2025-12-11T16:57:58)
Score: 7.616
Researchers have found Android malware that holds your files and your device hostage until you pay the ransom.
Top 10 AI / LLM-Related Threats
Generated 2025-12-15T06:00:15.355740+00:00
- CachePrune: Neural-Based Attribution Defense Against Indirect Prompt Injection Attacks (arxiv.org, 2025-12-15T05:00:00)
Score: 25.79
arXiv:2504.21228v2 Announce Type: replace
Abstract: Large Language Models (LLMs) are susceptible to indirect prompt injection attacks, in which the model inadvertently responds to task messages injected within the prompt context. This vulnerability stems from LLMs' inability to distinguish between data and instructions within a prompt. In this paper, we propose CachePrune, a defense method that identifies and prunes task-triggering neurons from the KV cache of the input prompt context. By - Network and Compiler Optimizations for Efficient Linear Algebra Kernels in Private Transformer Inference (arxiv.org, 2025-12-15T05:00:00)
Score: 17.79
arXiv:2512.11135v1 Announce Type: new
Abstract: Large language model (LLM) based services are primarily structured as client-server interactions, with clients sending queries directly to cloud providers that host LLMs. This approach currently compromises data privacy as all queries must be processed in the cloud and in the clear. Fully Homomorphic Encryption (FHE) is a solution to this data privacy issue by enabling computations directly upon encrypted queries. However, running encrypted transf - The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation (arxiv.org, 2025-12-15T05:00:00)
Score: 17.79
arXiv:2507.05578v2 Announce Type: replace-cross
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks, yet they also exhibit memorization of their training data. This phenomenon raises critical questions about model behavior, privacy risks, and the boundary between learning and memorization. Addressing these concerns, this paper synthesizes recent studies and investigates the landscape of memorization, the factors influencing it, and metho - Scaling MLflow for enterprise AI: What’s New in SageMaker AI with MLflow (aws.amazon.com, 2025-12-11T18:16:19)
Score: 17.369
Today we’re announcing Amazon SageMaker AI with MLflow, now including a serverless capability that dynamically manages infrastructure provisioning, scaling, and operations for artificial intelligence and machine learning (AI/ML) development tasks. In this post, we explore how these new capabilities help you run large MLflow workloads—from generative AI agents to large language model (LLM) experimentation—with improved performance, automation, and security using SageMaker AI with MLflow. - Automated Penetration Testing with LLM Agents and Classical Planning (arxiv.org, 2025-12-15T05:00:00)
Score: 14.79
arXiv:2512.11143v1 Announce Type: new
Abstract: While penetration testing plays a vital role in cybersecurity, achieving fully automated, hands-off-the-keyboard execution remains a significant research challenge. In this paper, we introduce the "Planner-Executor-Perceptor (PEP)" design paradigm and use it to systematically review existing work and identify the key challenges in this area. We also evaluate existing penetration testing systems, with a particular focus on the use of Larg - Super Suffixes: Bypassing Text Generation Alignment and Guard Models Simultaneously (arxiv.org, 2025-12-15T05:00:00)
Score: 14.79
arXiv:2512.11783v1 Announce Type: new
Abstract: The rapid deployment of Large Language Models (LLMs) has created an urgent need for enhanced security and privacy measures in Machine Learning (ML). LLMs are increasingly being used to process untrusted text inputs and even generate executable code, often while having access to sensitive system controls. To address these security concerns, several companies have introduced guard models, which are smaller, specialized models designed to protect tex - Towards Privacy-Preserving Code Generation: Differentially Private Code Language Models (arxiv.org, 2025-12-15T05:00:00)
Score: 14.79
arXiv:2512.11482v1 Announce Type: cross
Abstract: Large language models specialized for code (CodeLLMs) have demonstrated remarkable capabilities in generating code snippets, documentation, and test cases. However, despite their promising capabilities, CodeLLMs can inadvertently memorize and reproduce snippets from their training data, which poses risks of privacy breaches and intellectual property violations. These risks restrict the deployment of CodeLLMs in sensitive domains and limit their - Patch Tuesday – December 2025 (www.rapid7.com, 2025-12-10T07:50:42)
Score: 14.728
Microsoft is publishing a relatively light 54 new vulnerabilities this December 2025 Patch Tuesday , which is significantly lower than we have come to expect over the past couple of years. Today’s list includes two publicly disclosed remote code vulnerabilities, and a single exploited-in-the-wild vulnerability. Three critical remote code execution (RCE) vulnerabilities are also patched today; Microsoft currently assesses those as less likely or even unlikely to see exploitation. During December, - The December 2025 Security Update Review (www.thezdi.com, 2025-12-09T18:29:16)
Score: 14.595
It’s the final patch Tuesday of 2025, but that doesn’t make it any less exciting. Put aside your holiday planning for just a moment as we review the latest security offering from Adobe and Microsoft. If you’d rather watch the full video recap covering the entire release, you can check it out here: Adobe Patches for December 2025 For December, Adobe released five bulletins addressing 139 unique CVEs in Adobe Reader, ColdFusion, Experience Manager, Creative Cloud Desktop, and the Adobe DNG Softwar - How Swisscom builds enterprise agentic AI for customer support and sales using Amazon Bedrock AgentCore (aws.amazon.com, 2025-12-11T18:24:13)
Score: 14.571
In this post, we'll show how Swisscom implemented Amazon Bedrock AgentCore to build and scale their enterprise AI agents for customer support and sales operations. As an early adopter of Amazon Bedrock in the AWS Europe Region (Zurich), Swisscom leads in enterprise AI implementation with their Chatbot Builder system and various AI initiatives. Their successful deployments include Conversational AI powered by Rasa and fine-tuned LLMs on Amazon SageMaker, and the Swisscom Swisscom myAI assist - Streamline AI agent tool interactions: Connect API Gateway to AgentCore Gateway with MCP (aws.amazon.com, 2025-12-08T16:42:39)
Score: 14.14
AgentCore Gateway now supports API Gateway. As organizations explore the possibilities of agentic applications, they continue to navigate challenges of using enterprise data as context in invocation requests to large language models (LLMs) in a manner that is secure and aligned with enterprise policies. This post covers these new capabilities and shows how to implement them. - SCOUT: A Defense Against Data Poisoning Attacks in Fine-Tuned Language Models (arxiv.org, 2025-12-15T05:00:00)
Score: 13.79
arXiv:2512.10998v1 Announce Type: new
Abstract: Backdoor attacks create significant security threats to language models by embedding hidden triggers that manipulate model behavior during inference, presenting critical risks for AI systems deployed in healthcare and other sensitive domains. While existing defenses effectively counter obvious threats such as out-of-context trigger words and safety alignment violations, they fail against sophisticated attacks using contextually-appropriate trigger - ObliInjection: Order-Oblivious Prompt Injection Attack to LLM Agents with Multi-source Data (arxiv.org, 2025-12-15T05:00:00)
Score: 13.49
arXiv:2512.09321v2 Announce Type: replace
Abstract: Prompt injection attacks aim to contaminate the input data of an LLM to mislead it into completing an attacker-chosen task instead of the intended task. In many applications and agents, the input data originates from multiple sources, with each source contributing a segment of the overall input. In these multi-source scenarios, an attacker may control only a subset of the sources and contaminate the corresponding segments, but typically does n - New Prompt Injection Attack Vectors Through MCP Sampling (unit42.paloaltonetworks.com, 2025-12-05T23:00:59)
Score: 12.588
Model Context Protocol connects LLM apps to external data sources or tools. We examine its security implications through various attack vectors. The post New Prompt Injection Attack Vectors Through MCP Sampling appeared first on Unit 42 . - MiniScope: A Least Privilege Framework for Authorizing Tool Calling Agents (arxiv.org, 2025-12-15T05:00:00)
Score: 12.49
arXiv:2512.11147v1 Announce Type: new
Abstract: Tool calling agents are an emerging paradigm in LLM deployment, with major platforms such as ChatGPT, Claude, and Gemini adding connectors and autonomous capabilities. However, the inherent unreliability of LLMs introduces fundamental security risks when these agents operate over sensitive user services. Prior approaches either rely on manually written policies that require security expertise, or place LLMs in the confinement loop, which lacks rig - Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems (www.crowdstrike.com, 2025-12-04T06:00:00)
Score: 11.581 - Granite: Granular Runtime Enforcement for GitHub Actions Permissions (arxiv.org, 2025-12-15T05:00:00)
Score: 11.49
arXiv:2512.11602v1 Announce Type: new
Abstract: Modern software projects use automated CI/CD pipelines to streamline their development, build, and deployment processes. GitHub Actions is a popular CI/CD platform that enables project maintainers to create custom workflows — collections of jobs composed of sequential steps — using reusable components known as actions. Wary of the security risks introduced by fully-privileged actions, GitHub provides a job-level permission model for controlling - Geopolitics and Cyber Risk: How Global Tensions Shape the Attack Surface (www.rapid7.com, 2025-12-11T10:01:00)
Score: 10.587
Geopolitics has become a significant risk factor for today’s organizations, transforming cybersecurity into a technical and strategic challenge heavily influenced by state behavior. International tensions and the strategic calculations of major cyber powers, including Russia, China, Iran, and North Korea, significantly shape the current threat landscape. Businesses can no longer operate as isolated entities; they now function as interconnected global ecosystems where employees, suppliers, cloud - CrowdStrike Leverages NVIDIA Nemotron in Amazon Bedrock to Advance Agentic Security (www.crowdstrike.com, 2025-12-02T06:00:00)
Score: 10.105 - Multiple Threat Actors Exploit React2Shell (CVE-2025-55182) (cloud.google.com, 2025-12-12T14:00:00)
Score: 9.765
Written by: Aragorn Tseng, Robert Weiner, Casey Charrier, Zander Work, Genevieve Stark, Austin Larsen Introduction On Dec. 3, 2025, a critical unauthenticated remote code execution (RCE) vulnerability in React Server Components, tracked as CVE-2025-55182 (aka "React2Shell"), was publicly disclosed. Shortly after disclosure, Google Threat Intelligence Group (GTIG) had begun observing widespread exploitation across many threat clusters, ranging from opportunistic cyber crime actors to su - An LLVM-Based Optimization Pipeline for SPDZ (arxiv.org, 2025-12-15T05:00:00)
Score: 9.49
arXiv:2512.11112v1 Announce Type: new
Abstract: Actively secure arithmetic MPC is now practical for real applications, but performance and usability are still limited by framework-specific compilation stacks, the need for programmers to explicitly express parallelism, and high communication overhead. We design and implement a proof-of-concept LLVM-based optimization pipeline for the SPDZ protocol that addresses these bottlenecks. Our front end accepts a subset of C with lightweight privacy anno - A Scalable Multi-GPU Framework for Encrypted Large-Model Inference (arxiv.org, 2025-12-15T05:00:00)
Score: 9.49
arXiv:2512.11269v1 Announce Type: new
Abstract: Encrypted AI using fully homomorphic encryption (FHE) provides strong privacy guarantees; but its slow performance has limited practical deployment. Recent works proposed ASICs to accelerate FHE, but require expensive advanced manufacturing processes that constrain their accessibility. GPUs are a far more accessible platform, but achieving ASIC-level performance using GPUs has remained elusive. Furthermore, state-of-the-art approaches primarily fo - Leveraging FPGAs for Homomorphic Matrix-Vector Multiplication in Oblivious Message Retrieval (arxiv.org, 2025-12-15T05:00:00)
Score: 9.49
arXiv:2512.11690v1 Announce Type: new
Abstract: While end-to-end encryption protects the content of messages, it does not secure metadata, which exposes sender and receiver information through traffic analysis. A plausible approach to protecting this metadata is to have senders post encrypted messages on a public bulletin board and receivers scan it for relevant messages. Oblivious message retrieval (OMR) leverages homomorphic encryption (HE) to improve user experience in this solution by deleg - Clip-and-Verify: Linear Constraint-Driven Domain Clipping for Accelerating Neural Network Verification (arxiv.org, 2025-12-15T05:00:00)
Score: 9.49
arXiv:2512.11087v1 Announce Type: cross
Abstract: State-of-the-art neural network (NN) verifiers demonstrate that applying the branch-and-bound (BaB) procedure with fast bounding techniques plays a key role in tackling many challenging verification properties. In this work, we introduce the linear constraint-driven clipping framework, a class of scalable and efficient methods designed to enhance the efficacy of NN verifiers. Under this framework, we develop two novel algorithms that efficiently - Hypergraph based Multi-Party Payment Channel (arxiv.org, 2025-12-15T05:00:00)
Score: 9.49
arXiv:2512.11775v1 Announce Type: cross
Abstract: Public blockchains inherently offer low throughput and high latency, motivating off-chain scalability solutions such as Payment Channel Networks (PCNs). However, existing PCNs suffer from liquidity fragmentation-funds locked in one channel cannot be reused elsewhere-and channel depletion, both of which limit routing efficiency and reduce transaction success rates. Multi-party channel (MPC) constructions mitigate these issues, but they typically
Auto-generated 2025-12-15
