Categories Uncategorized

Weekly Threat Report 2025-12-08

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2025-12-08T05:00:05.386497+00:00

  1. Inside Shanya, a packer-as-a-service fueling modern attacks (news.sophos.com, 2025-12-07T02:57:18)
    Score: 9.019
    The ransomware scene gains another would-be EDR killer
  2. Leaks show Intellexa burning zero-days to keep Predator spyware running (www.malwarebytes.com, 2025-12-05T13:31:54)
    Score: 8.759
    A fresh investigation uncovers how Predator spyware still reaches victims through high-priced, newly bought zero-days.
  3. Sharpening the knife: GOLD BLADE’s strategic evolution (news.sophos.com, 2025-12-05T13:24:37)
    Score: 8.758
    Updates include novel abuse of recruitment platforms, modified infection chains, and expansion into a hybrid operation that combines data theft and ransomware deployment
  4. The State of Ransomware in Manufacturing and Production 2025 (news.sophos.com, 2025-12-03T14:30:34)
    Score: 8.433
    332 IT and cybersecurity leaders reveal the ransomware realities for manufacturing and production organizations today.
  5. How Ransomware Affects Business Operations, Revenue, and Brand Reputation (www.recordedfuture.com, 2025-12-01T00:00:00)
    Score: 7.999
    Learn how ransomware works, how it can impact operations, revenue, and brand reputation, and how to prevent ransomware from infecting your business.
  6. Intellexa’s Global Corporate Web (www.recordedfuture.com, 2025-12-03T00:00:00)
    Score: 7.632
    The author, Julian-Ferdinand Vögele, thanks Amnesty International's Security Lab for its ongoing reporting on the Intellexa and Predator spyware ecosystem. Today, Security Lab published a related report on Intellexa, which can be found here . Executive Summary Insikt Group identified several individuals and entities linked to Intellexa and its broader network of associated companies. These connections span technical, operational, and corporate roles, including backend development, infrastru
  7. Millions at risk after nationwide CodeRED alert system outage and data breach (www.malwarebytes.com, 2025-11-27T14:40:32)
    Score: 7.434
    A ransomware attack against the CodeRED emergency alert platform has triggered warnings across the US.
  8. How attackers use real IT tools to take over your computer (www.malwarebytes.com, 2025-12-03T14:12:59)
    Score: 7.431
    We’ve seen a new wave of attacks exploiting legitimate Remote Monitoring and Management (RMM) tools to remotely control victims’ systems.
  9. Google patches 107 Android flaws, including two being actively exploited (www.malwarebytes.com, 2025-12-02T11:37:46)
    Score: 7.246
    Google’s December update fixes two Android bugs that criminals are actively exploiting. Update as soon as you can.
  10. New Android malware lets criminals control your phone and drain your bank account (www.malwarebytes.com, 2025-12-01T15:33:14)
    Score: 7.107
    Albiriox now targets over 400 financial apps and lets criminals operate your phone almost exactly as if it were in their hands.

Top 10 AI / LLM-Related Threats

Generated 2025-12-08T06:00:16.184512+00:00

  1. ARGUS: Defending Against Multimodal Indirect Prompt Injection via Steering Instruction-Following Behavior (arxiv.org, 2025-12-08T05:00:00)
    Score: 22.79
    arXiv:2512.05745v1 Announce Type: new
    Abstract: Multimodal Large Language Models (MLLMs) are increasingly vulnerable to multimodal Indirect Prompt Injection (IPI) attacks, which embed malicious instructions in images, videos, or audio to hijack model behavior. Existing defenses, designed primarily for text-only LLMs, are unsuitable for countering these multimodal threats, as they are easily bypassed, modality-dependent, or generalize poorly. Inspired by activation steering researches, we hypoth
  2. When Ads Become Profiles: Uncovering the Invisible Risk of Web Advertising at Scale with LLMs (arxiv.org, 2025-12-08T05:00:00)
    Score: 19.59
    arXiv:2509.18874v2 Announce Type: cross
    Abstract: Regulatory limits on explicit targeting have not eliminated algorithmic profiling on the Web, as optimisation systems still adapt ad delivery to users' private attributes. The widespread availability of powerful zero-shot multimodal Large Language Models (LLMs) has dramatically lowered the barrier for exploiting these latent signals for adversarial inference. We investigate this emerging societal risk, specifically how adversaries can now e
  3. TeleAI-Safety: A comprehensive LLM jailbreaking benchmark towards attacks, defenses, and evaluations (arxiv.org, 2025-12-08T05:00:00)
    Score: 17.79
    arXiv:2512.05485v1 Announce Type: new
    Abstract: While the deployment of large language models (LLMs) in high-value industries continues to expand, the systematic assessment of their safety against jailbreak and prompt-based attacks remains insufficient. Existing safety evaluation benchmarks and frameworks are often limited by an imbalanced integration of core components (attack, defense, and evaluation methods) and an isolation between flexible evaluation frameworks and standardized benchmarkin
  4. IF-GUIDE: Influence Function-Guided Detoxification of LLMs (arxiv.org, 2025-12-08T05:00:00)
    Score: 17.79
    arXiv:2506.01790v3 Announce Type: replace-cross
    Abstract: We study how training data contributes to the emergence of toxic behaviors in large language models. Most prior work on reducing model toxicity adopts reactive approaches, such as fine-tuning pre-trained (and potentially toxic) models to align them with human values. In contrast, we propose a proactive approach, IF-GUIDE, that leverages influence functions to identify and suppress harmful tokens in the training data. To this end, we firs
  5. Please Don't Kill My Vibe: Empowering Agents with Data Flow Control (arxiv.org, 2025-12-08T05:00:00)
    Score: 14.79
    arXiv:2512.05374v1 Announce Type: new
    Abstract: The promise of Large Language Model (LLM) agents is to perform complex, stateful tasks. This promise is stunted by significant risks – policy violations, process corruption, and security flaws – that stem from the lack of visibility and mechanisms to manage undesirable data flows produced by agent actions. Today, agent workflows are responsible for enforcing these policies in ad hoc ways. Just as data validation and access controls shifted from th
  6. PrivCode: When Code Generation Meets Differential Privacy (arxiv.org, 2025-12-08T05:00:00)
    Score: 14.79
    arXiv:2512.05459v1 Announce Type: new
    Abstract: Large language models (LLMs) have presented outstanding performance in code generation and completion. However, fine-tuning these models on private datasets can raise privacy and proprietary concerns, such as the leakage of sensitive personal information. Differentially private (DP) code generation provides theoretical guarantees for protecting sensitive code by generating synthetic datasets that preserve statistical properties while reducing priv
  7. New Prompt Injection Attack Vectors Through MCP Sampling (unit42.paloaltonetworks.com, 2025-12-05T23:00:59)
    Score: 14.254
    Model Context Protocol connects LLM apps to external data sources or tools. We examine its security implications through various attack vectors. The post New Prompt Injection Attack Vectors Through MCP Sampling appeared first on Unit 42 .
  8. Beyond Detection: A Comprehensive Benchmark and Study on Representation Learning for Fine-Grained Webshell Family Classification (arxiv.org, 2025-12-08T05:00:00)
    Score: 13.79
    arXiv:2512.05288v1 Announce Type: new
    Abstract: Malicious WebShells pose a significant and evolving threat by compromising critical digital infrastructures and endangering public services in sectors such as healthcare and finance. While the research community has made significant progress in WebShell detection (i.e., distinguishing malicious samples from benign ones), we argue that it is time to transition from passive detection to in-depth analysis and proactive defense. One promising directio
  9. Indirect Prompt Injection Attacks: A Lurking Risk to AI Systems (www.crowdstrike.com, 2025-12-04T06:00:00)
    Score: 13.248
  10. Observational Auditing of Label Privacy (arxiv.org, 2025-12-08T05:00:00)
    Score: 12.49
    arXiv:2511.14084v2 Announce Type: replace-cross
    Abstract: Differential privacy (DP) auditing is essential for evaluating privacy guarantees in machine learning systems. Existing auditing methods, however, pose a significant challenge for large-scale systems since they require modifying the training dataset — for instance, by injecting out-of-distribution canaries or removing samples from training. Such interventions on the training data pipeline are resource-intensive and involve considerable
  11. Trusted AI Agents in the Cloud (arxiv.org, 2025-12-08T05:00:00)
    Score: 11.79
    arXiv:2512.05951v1 Announce Type: new
    Abstract: AI agents powered by large language models are increasingly deployed as cloud services that autonomously access sensitive data, invoke external tools, and interact with other agents. However, these agents run within a complex multi-party ecosystem, where untrusted components can lead to data leakage, tampering, or unintended behavior. Existing Confidential Virtual Machines (CVMs) provide only per binary protection and offer no guarantees for cross
  12. Concept-Guided Backdoor Attack on Vision Language Models (arxiv.org, 2025-12-08T05:00:00)
    Score: 11.79
    arXiv:2512.00713v2 Announce Type: replace
    Abstract: Vision-Language Models (VLMs) have achieved impressive progress in multimodal text generation, yet their rapid adoption raises increasing concerns about security vulnerabilities. Existing backdoor attacks against VLMs primarily rely on explicit pixel-level triggers or imperceptible perturbations injected into images. While effective, these approaches reduce stealthiness and remain vulnerable to image-based defenses. We introduce concept-guided
  13. CrowdStrike Leverages NVIDIA Nemotron in Amazon Bedrock to Advance Agentic Security (www.crowdstrike.com, 2025-12-02T06:00:00)
    Score: 11.771
  14. Self-Supervised Learning of Graph Representations for Network Intrusion Detection (arxiv.org, 2025-12-08T05:00:00)
    Score: 11.49
    arXiv:2509.16625v4 Announce Type: replace-cross
    Abstract: Detecting intrusions in network traffic is a challenging task, particularly under limited supervision and constantly evolving attack patterns. While recent works have leveraged graph neural networks for network intrusion detection, they often decouple representation learning from anomaly detection, limiting the utility of the embeddings for identifying attacks. We propose GraphIDS, a self-supervised intrusion detection model that unifies
  15. The State of Security Today: Setting the Stage for 2026 (www.rapid7.com, 2025-11-18T16:07:34)
    Score: 9.839
    As we close out 2025, one thing is clear: the security landscape is evolving faster than most organizations can keep up. From surging ransomware campaigns and AI-enhanced phishing to data extortion, geopolitical fallout, and gaps in cyber readiness, the challenges facing security teams today are as varied as they are relentless. But with complexity comes clarity and insight. This year’s most significant breaches, breakthroughs, and behavioral shifts provide a critical lens through which we can v
  16. Amazon SageMaker AI introduces EAGLE based adaptive speculative decoding to accelerate generative AI inference (aws.amazon.com, 2025-11-26T00:29:42)
    Score: 9.788
    Amazon SageMaker AI now supports EAGLE-based adaptive speculative decoding, a technique that accelerates large language model inference by up to 2.5x while maintaining output quality. In this post, we explain how to use EAGLE 2 and EAGLE 3 speculative decoding in Amazon SageMaker AI, covering the solution architecture, optimization workflows using your own datasets or SageMaker's built-in data, and benchmark results demonstrating significant improvements in throughput and latency.
  17. We Got Claude to Fine-Tune an Open Source LLM (huggingface.co, 2025-12-04T00:00:00)
    Score: 9.588
  18. A Practical Honeypot-Based Threat Intelligence Framework for Cyber Defence in the Cloud (arxiv.org, 2025-12-08T05:00:00)
    Score: 9.49
    arXiv:2512.05321v1 Announce Type: new
    Abstract: In cloud environments, conventional firewalls rely on predefined rules and manual configurations, limiting their ability to respond effectively to evolving or zero-day threats. As organizations increasingly adopt platforms such as Microsoft Azure, this static defense model exposes cloud assets to zero-day exploits, botnets, and advanced persistent threats. In this paper, we introduce an automated defense framework that leverages medium- to high-in
  19. Matching Ranks Over Probability Yields Truly Deep Safety Alignment (arxiv.org, 2025-12-08T05:00:00)
    Score: 9.49
    arXiv:2512.05518v1 Announce Type: new
    Abstract: A frustratingly easy technique known as the prefilling attack has been shown to effectively circumvent the safety alignment of frontier LLMs by simply prefilling the assistant response with an affirmative prefix before decoding. In response, recent work proposed a supervised fine-tuning (SFT) defense using data augmentation to achieve a \enquote{deep} safety alignment, allowing the model to generate natural language refusals immediately following
  20. Edge-Only Universal Adversarial Attacks in Distributed Learning (arxiv.org, 2025-12-08T05:00:00)
    Score: 9.49
    arXiv:2411.10500v2 Announce Type: replace
    Abstract: Distributed learning frameworks, which partition neural network models across multiple computing nodes, enhance efficiency in collaborative edge-cloud systems, but may also introduce new vulnerabilities to evasion attacks, often in the form of adversarial perturbations. In this work, we present a new threat model that explores the feasibility of generating universal adversarial perturbations (UAPs) when the attacker has access only to the edge
  21. Sanctioned but Still Spying: Intellexa’s Prolific Zero-Day Exploits Continue (cloud.google.com, 2025-12-03T14:00:00)
    Score: 9.289
    Introduction Despite extensive scrutiny and public reporting , commercial surveillance vendors continue to operate unimpeded. A prominent name continues to surface in the world of mercenary spyware, Intellexa. Known for its “Predator” spyware, the company was sanctioned by the US Government . New Google Threat Intelligence Group (GTIG) analysis shows that Intellexa is evading restrictions and thriving . Intellexa has adapted, evaded restrictions, and continues selling digital weapons to the high
  22. Evaluating Concept Filtering Defenses against Child Sexual Abuse Material Generation by Text-to-Image Models (arxiv.org, 2025-12-08T05:00:00)
    Score: 8.99
    arXiv:2512.05707v1 Announce Type: new
    Abstract: We evaluate the effectiveness of child filtering to prevent the misuse of text-to-image (T2I) models to create child sexual abuse material (CSAM). First, we capture the complexity of preventing CSAM generation using a game-based security definition. Second, we show that current detection methods cannot remove all children from a dataset. Third, using an ethical proxy for CSAM (a child wearing glasses, hereafter, CWG), we show that even when only a
  23. Metasploit Wrap-Up 12/05/2025 (www.rapid7.com, 2025-12-05T20:58:04)
    Score: 8.934
    Twonky Auth Bypass, RCEs and RISC-V Reverse Shell Payloads This was another fantastic week in terms of PR contribution to the Metasploit Framework. Rapid7’s very own Ryan Emmons recently disclosed CVE-2025-13315 and CVE-2025-13316 which exist in Twonky Server and allow decrypting admin credentials by reading logs without authentication (which contain them). The auxiliary module Ryan submitted which exploits both of these CVEs was released this week. Community contributor Valentin Lobsein aka Cho
  24. Lumia Security Raises $18 Million for AI Security and Governance (www.securityweek.com, 2025-12-05T11:02:08)
    Score: 8.836
    The startup will invest in expanding its engineering and research teams, deepening product integrations, and scaling go-to-market efforts. The post Lumia Security Raises $18 Million for AI Security and Governance appeared first on SecurityWeek .
  25. Analyzing PDFs like Binaries: Adversarially Robust PDF Malware Analysis via Intermediate Representation and Language Model (arxiv.org, 2025-12-08T05:00:00)
    Score: 8.79
    arXiv:2506.17162v2 Announce Type: replace
    Abstract: Malicious PDF files have emerged as a persistent threat and become a popular attack vector in web-based attacks. While machine learning-based PDF malware classifiers have shown promise, these classifiers are often susceptible to adversarial attacks, undermining their reliability. To address this issue, recent studies have aimed to enhance the robustness of PDF classifiers. Despite these efforts, the feature engineering underlying these studies

Auto-generated 2025-12-08

Written By

More From Author

You May Also Like