
Weekly Threat Report 2026-01-19

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2026-01-19T05:00:04.676271+00:00

  1. New ransomware tactics to watch out for in 2026 (www.recordedfuture.com, 2026-01-05T00:00:00)
    Score: 9.132
    Ransomware groups made less money in 2025 despite a 47% increase in attacks, driving new tactics: bundled DDoS services, insider recruitment, and gig worker exploitation. Learn the emerging trends defenders must prepare for in 2026.
  2. December 2025 CVE Landscape: 22 Critical Vulnerabilities Mark 120% Surge, React2Shell Dominates Threat Activity (www.recordedfuture.com, 2026-01-13T00:00:00)
    Score: 8.665
    December 2025 saw a 120% surge in critical CVEs, with 22 exploited flaws and React2Shell (CVE-2025-55182) dominating threat activity across Meta’s React framework.
  3. Laughter in the dark: Tales of absurdity from the cyber frontline and what they taught us (www.sophos.com, 2026-01-13T00:00:00)
    Score: 8.465
    From a quintuple-encryption ransomware attack to zany dark web schemes and AI fails, Sophos X-Ops looks back at some of our favorite weirdest incidents from the last few years – and the serious lessons behind them.
  4. January 2026 Patch Tuesday: 114 CVEs Patched Including 3 Zero-Days (www.crowdstrike.com, 2026-01-13T06:00:00)
    Score: 8.207
  5. Best Ransomware Detection Tools (www.recordedfuture.com, 2026-01-13T00:00:00)
    Score: 8.165
    Stop ransomware before encryption begins. Learn how intelligence-driven detection tools can help identify precursor behaviors and reduce false positives for faster response. (A generic precursor-detection sketch appears after this list.)
  6. The State of Ransomware in Enterprise 2025 (www.sophos.com, 2026-01-12T00:00:00)
    Score: 7.999
  7. Threat and Vulnerability Management in 2026 (www.recordedfuture.com, 2026-01-16T00:00:00)
    Score: 7.665
    Understand the future of threat and vulnerability management (TVM). Learn what TVM is, why traditional tools fail, and how intelligence is essential in today’s landscape.
  8. 5 ways your firewall can keep ransomware out — and lock it down if it gets in (www.sophos.com, 2026-01-08T00:00:00)
    Score: 7.332
  9. Why iPhone users should update and restart their devices now (www.malwarebytes.com, 2026-01-13T12:55:44)
    Score: 7.255
    Apple has confirmed active exploitation, but full protections are limited to iPhones running iOS 26+ (yes, the one with Liquid Glass).
  10. Celebrating reviews and recognitions for Malwarebytes in 2025 (www.malwarebytes.com, 2026-01-12T13:00:00)
    Score: 7.089
    In 2025, Malwarebytes was repeatedly tested against real-world threats. Here’s what those tests found.
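
Item 5 above centers on catching precursor behaviors before encryption begins. None of the vendors' detectors are documented here, so the following is a minimal, generic sketch of one such signal: a burst of high-entropy file writes, which mass encryption tends to produce. All thresholds and names are hypothetical.

```python
import math
from collections import Counter

HIGH_ENTROPY = 7.5    # bits/byte; well-encrypted data approaches 8.0
BURST_THRESHOLD = 20  # hypothetical count of suspicious writes per window

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string in bits per byte."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def flag_encryption_burst(written_blobs: list) -> bool:
    """Flag a window of file writes when too many look encrypted.

    Mass in-place encryption yields many near-random writes in a short
    window; compressed media can also score high, which is one reason
    real products pair this signal with others to cut false positives.
    """
    high = sum(1 for b in written_blobs if shannon_entropy(b) > HIGH_ENTROPY)
    return high >= BURST_THRESHOLD
```

A real pipeline would feed this from a file-system filter or EDR event stream and combine it with other precursors (shadow-copy deletion, ransom-note file names) before alerting.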

Top 25 AI / LLM-Related Threats

Generated 2026-01-19T06:00:16.873636+00:00

  1. AJAR: Adaptive Jailbreak Architecture for Red-teaming (arxiv.org, 2026-01-19T05:00:00)
    Score: 26.29
    arXiv:2601.10971v1 Announce Type: new
    Abstract: As Large Language Models (LLMs) evolve from static chatbots into autonomous agents capable of tool execution, the landscape of AI safety is shifting from content moderation to action security. However, existing red-teaming frameworks remain bifurcated: they either focus on rigid, script-based text attacks or lack the architectural modularity to simulate complex, multi-turn agentic exploitations. In this paper, we introduce AJAR (Adaptive Jailbreak
  2. SD-RAG: A Prompt-Injection-Resilient Framework for Selective Disclosure in Retrieval-Augmented Generation (arxiv.org, 2026-01-19T05:00:00)
    Score: 21.79
    arXiv:2601.11199v1 Announce Type: new
    Abstract: Retrieval-Augmented Generation (RAG) has attracted significant attention due to its ability to combine the generative capabilities of Large Language Models (LLMs) with knowledge obtained through efficient retrieval mechanisms over large-scale data collections. Currently, the majority of existing approaches overlook the risks associated with exposing sensitive or access-controlled information directly to the generation model. Only a few approaches
    (A hedged access-filtering sketch appears after this list.)
  3. Beyond Max Tokens: Stealthy Resource Amplification via Tool Calling Chains in LLM Agents (arxiv.org, 2026-01-19T05:00:00)
    Score: 17.79
    arXiv:2601.10955v1 Announce Type: new
    Abstract: The agent-tool communication loop is a critical attack surface in modern Large Language Model (LLM) agents. Existing Denial-of-Service (DoS) attacks, primarily triggered via user prompts or injected retrieval-augmented generation (RAG) context, are ineffective for this new paradigm. They are fundamentally single-turn and often lack a task-oriented approach, making them conspicuous in goal-oriented workflows and unable to exploit the compounding co
    (A sketch of an out-of-band tool-call budget appears after this list.)
  4. LLMs, You Can Evaluate It! Design of Multi-perspective Report Evaluation for Security Operation Centers (arxiv.org, 2026-01-19T05:00:00)
    Score: 17.79
    arXiv:2601.03013v2 Announce Type: replace
    Abstract: Security operation centers (SOCs) often produce analysis reports on security incidents, and large language models (LLMs) will likely be used for this task in the near future. We postulate that a better understanding of how veteran analysts evaluate reports, including their feedback, can help produce analysis reports in SOCs. In this paper, we aim to leverage LLMs for analysis reports. To this end, we first construct an Analyst-wise checklist to
  5. Hidden-in-Plain-Text: A Benchmark for Social-Web Indirect Prompt Injection in RAG (arxiv.org, 2026-01-19T05:00:00)
    Score: 17.49
    arXiv:2601.10923v1 Announce Type: new
    Abstract: Retrieval-augmented generation (RAG) systems increasingly ground their responses in user-generated content found on the Web, amplifying both their usefulness and their attack surface. Most notably, indirect prompt injection and retrieval poisoning target the web-native carriers that survive ingestion pipelines, and are particularly concerning. We provide OpenRAG-Soc, a compact, reproducible benchmark-and-harness for web-facing RAG evalu
  6. Too Helpful to Be Safe: User-Mediated Attacks on Planning and Web-Use Agents (arxiv.org, 2026-01-19T05:00:00)
    Score: 14.79
    arXiv:2601.10758v1 Announce Type: new
    Abstract: Large Language Models (LLMs) have enabled agents to move beyond conversation toward end-to-end task execution and become more helpful. However, this helpfulness introduces new security risks that stem less from direct interface abuse than from acting on user-provided content. Existing studies on agent security largely focus on model-internal vulnerabilities or adversarial access to agent interfaces, overlooking attacks that exploit users as unintended
  7. Multi-Agent Taint Specification Extraction for Vulnerability Detection (arxiv.org, 2026-01-19T05:00:00)
    Score: 14.79
    arXiv:2601.10865v1 Announce Type: new
    Abstract: Static Application Security Testing (SAST) tools using taint analysis are widely viewed as providing higher-quality vulnerability detection results compared to traditional pattern-based approaches. However, performing static taint analysis for JavaScript poses two major challenges. First, JavaScript's dynamic features complicate data flow extraction required for taint tracking. Second, npm's large library ecosystem makes it difficult to
  8. Understanding Help Seeking for Digital Privacy, Safety, and Security (arxiv.org, 2026-01-19T05:00:00)
    Score: 14.79
    arXiv:2601.11398v1 Announce Type: new
    Abstract: The complexity of navigating digital privacy, safety, and security threats often falls directly on users. This leads to users seeking help from family and peers, platforms and advice guides, dedicated communities, and even large language models (LLMs). As a precursor to improving resources across this ecosystem, our community needs to understand what help seeking looks like in the wild. To that end, we blend qualitative coding with LLM fine-tuning
  9. How Good is Post-Hoc Watermarking With Language Model Rephrasing? (arxiv.org, 2026-01-19T05:00:00)
    Score: 14.79
    arXiv:2512.16904v2 Announce Type: replace
    Abstract: Generation-time text watermarking embeds statistical signals into text for traceability of AI-generated content. We explore *post-hoc watermarking* where an LLM rewrites existing text while applying generation-time watermarking, to protect copyrighted documents, or detect their use in training or RAG via watermark radioactivity. Unlike generation-time approaches, which are constrained by how LLMs are served, this setting offers additional degre
  10. Sentiment Analysis with Text and Audio Using AWS Generative AI Services: Approaches, Challenges, and Solutions (aws.amazon.com, 2026-01-09T16:06:50)
    Score: 13.419
    This post, developed through a strategic scientific partnership between AWS and the Instituto de Ciência e Tecnologia Itaú (ICTi), an R&D hub maintained by Itaú Unibanco, the largest private bank in Latin America, explores the technical aspects of sentiment analysis for both text and audio. We present experiments comparing multiple machine learning (ML) models and services, discuss the trade-offs and pitfalls of each approach, and highlight how AWS services can be orchestrated to build robust,
  11. How Palo Alto Networks enhanced device security infra log analysis with Amazon Bedrock (aws.amazon.com, 2026-01-16T15:46:36)
    Score: 12.783
    Palo Alto Networks’ Device Security team wanted to detect early warning signs of potential production issues to give SMEs more time to react to these emerging problems. They partnered with the AWS Generative AI Innovation Center (GenAIIC) to develop an automated log classification pipeline powered by Amazon Bedrock. In this post, we discuss how Amazon Bedrock, through Anthropic’s Claude Haiku model, and Amazon Titan Text Embeddings work together to automatically classify and analyze log d
  12. Chatting with Confidants or Corporations? Privacy Management with AI Companions (arxiv.org, 2026-01-19T05:00:00)
    Score: 12.49
    arXiv:2601.10754v1 Announce Type: new
    Abstract: AI chatbots designed as emotional companions blur the boundaries between interpersonal intimacy and institutional software, creating a complex, multi-dimensional privacy environment. Drawing on Communication Privacy Management theory and Masur's horizontal (user-AI) and vertical (user-platform) privacy framework, we conducted in-depth interviews with fifteen users of companion AI platforms such as Replika and Character.AI. Our findings reveal
  13. LoRA as Oracle (arxiv.org, 2026-01-19T05:00:00)
    Score: 12.49
    arXiv:2601.11207v1 Announce Type: new
    Abstract: Backdoored and privacy-leaking deep neural networks pose a serious threat to the deployment of machine learning systems in security-critical settings. Existing defenses for backdoor detection and membership inference typically require access to clean reference models, extensive retraining, or strong assumptions about the attack mechanism. In this work, we introduce a novel LoRA-based oracle framework that leverages low-rank adaptation modules as a
  14. How Beekeeper by LumApps optimized user personalization with Amazon Bedrock (aws.amazon.com, 2026-01-09T16:10:52)
    Score: 12.42
    Beekeeper’s automated leaderboard approach and human feedback loop system for dynamic LLM and prompt pair selection addresses the key challenges organizations face in navigating the rapidly evolving landscape of language models.
  15. Differentially Private Subspace Fine-Tuning for Large Language Models (arxiv.org, 2026-01-19T05:00:00)
    Score: 11.79
    arXiv:2601.11113v1 Announce Type: cross
    Abstract: Fine-tuning large language models on downstream tasks is crucial for realizing their cross-domain potential but often relies on sensitive data, raising privacy concerns. Differential privacy (DP) offers rigorous privacy guarantees and has been widely adopted in fine-tuning; however, naively injecting noise across the high-dimensional parameter space creates perturbations with large norms, degrading performance and destabilizing training. To addr
    (A baseline DP-SGD sketch appears after this list.)
  16. Closing the Door on Net-NTLMv1: Releasing Rainbow Tables to Accelerate Protocol Deprecation (cloud.google.com, 2026-01-15T14:00:00)
    Score: 9.527
    Written by: Nic Losby. Mandiant is publicly releasing a comprehensive dataset of Net-NTLMv1 rainbow tables to underscore the urgency of migrating away from this outdated protocol. Despite Net-NTLMv1 being deprecated and known to be insecure for over two decades—with cryptanalysis dating back to 1999—Mandiant consultants continue to identify its use in active environments. This legacy protocol leaves organizations vulnerable to trivial credential theft, yet it remains prevalent due to
    (A single-host registry spot check appears after this list.)
  17. VidLeaks: Membership Inference Attacks Against Text-to-Video Models (arxiv.org, 2026-01-19T05:00:00)
    Score: 9.49
    arXiv:2601.11210v1 Announce Type: new
    Abstract: The proliferation of powerful Text-to-Video (T2V) models, trained on massive web-scale datasets, raises urgent concerns about copyright and privacy violations. Membership inference attacks (MIAs) provide a principled tool for auditing such risks, yet existing techniques – designed for static data like images or text – fail to capture the spatio-temporal complexities of video generation. In particular, they overlook the sparsity of memorization sig
    (A baseline loss-threshold MIA sketch appears after this list.)
  18. IMS: Intelligent Hardware Monitoring System for Secure SoCs (arxiv.org, 2026-01-19T05:00:00)
    Score: 9.49
    arXiv:2601.11447v1 Announce Type: new
    Abstract: In the modern Systems-on-Chip (SoC), the Advanced eXtensible Interface (AXI) protocol exhibits security vulnerabilities, enabling partial or complete denial-of-service (DoS) through protocol-violation attacks. The recent countermeasures lack a dedicated real-time protocol semantic analysis and evade protocol compliance checks. This paper tackles this AXI vulnerability issue and presents an intelligent hardware monitoring system (IMS) for real-time
  19. Who Shares What? An Empirical Analysis of Security Conference Content Across Academia and Industry (arxiv.org, 2026-01-19T05:00:00)
    Score: 9.49
    arXiv:2404.17989v2 Announce Type: replace
    Abstract: Security conferences are important venues for information sharing, where academics and practitioners share knowledge about new attacks and state-of-the-art defenses. Despite their importance, researchers have not systematically examined who shares information and which security topics are discussed. To address this gap, our paper characterizes the speakers, sponsors, and topics presented at prestigious academic and industry security conference
  20. Beyond Known Fakes: Generalized Detection of AI-Generated Images via Post-hoc Distribution Alignment (arxiv.org, 2026-01-19T05:00:00)
    Score: 9.49
    arXiv:2502.10803v2 Announce Type: replace
    Abstract: The rapid proliferation of highly realistic AI-generated images poses serious security threats such as misinformation and identity fraud. Detecting generated images in open-world settings is particularly challenging when they originate from unknown generators, as existing methods typically rely on model-specific artifacts and require retraining on new fake data, limiting their generalization and scalability. In this work, we propose Post-hoc D
  21. AuraInspector: Auditing Salesforce Aura for Data Exposure (cloud.google.com, 2026-01-12T14:00:00)
    Score: 8.813
    Written by: Amine Ismail, Anirudha Kanodia. Mandiant is releasing AuraInspector, a new open-source tool designed to help defenders identify and audit access control misconfigurations within the Salesforce Aura framework. Salesforce Experience Cloud is a foundational platform for many businesses, but Mandiant Offensive Security Services (OSS) frequently identifies misconfigurations that allow unauthorized users to access sensitive data including credit card numbers, identity document
  22. Deploy AI agents on Amazon Bedrock AgentCore using GitHub Actions (aws.amazon.com, 2026-01-16T15:37:37)
    Score: 8.781
    In this post, we demonstrate how to use a GitHub Actions workflow to automate the deployment of AI agents on AgentCore Runtime. This approach delivers a scalable solution with enterprise-level security controls, providing complete continuous integration and delivery (CI/CD) automation.
  23. Build a generative AI-powered business reporting solution with Amazon Bedrock (aws.amazon.com, 2026-01-15T15:53:15)
    Score: 8.546
    This post introduces generative AI guided business reporting—with a focus on writing achievements & challenges about your business—providing a smart, practical solution that helps simplify and accelerate internal communication and reporting.
  24. Scale creative asset discovery with Amazon Nova Multimodal Embeddings unified vector search (aws.amazon.com, 2026-01-15T15:45:02)
    Score: 8.544
    In this post, we describe how you can use Amazon Nova Multimodal Embeddings to retrieve specific video segments. We also review a real-world use case in which Nova Multimodal Embeddings achieved a recall success rate of 96.7% and a high-precision recall of 73.3% (returning the target content in the top two results) when tested against a library of 170 gaming creative assets. The model also demonstrates strong cross-language capabilities with minimal performance degradation across multiple langua
  25. SecMLOps: A Comprehensive Framework for Integrating Security Throughout the MLOps Lifecycle (arxiv.org, 2026-01-19T05:00:00)
    Score: 8.49
    arXiv:2601.10848v1 Announce Type: new
    Abstract: Machine Learning (ML) has emerged as a pivotal technology in the operation of large and complex systems, driving advancements in fields such as autonomous vehicles, healthcare diagnostics, and financial fraud detection. Despite its benefits, the deployment of ML models brings significant security challenges, such as adversarial attacks, which can compromise the integrity and reliability of these systems. To address these challenges, this paper bui
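
The sketches below illustrate a few of the mechanisms described in the items above; each is a hedged, generic reconstruction, not the cited work's actual implementation.

For item 2 (SD-RAG): the core risk is handing access-controlled passages to the generator at all. A minimal mitigation, assuming per-passage ACL labels exist, is to filter retrieval results against the requesting user's roles before the prompt is assembled. All names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Passage:
    text: str
    acl: frozenset  # roles permitted to read this passage

def build_context(passages, user_roles: set) -> str:
    """Drop passages the requesting user may not see, *before* prompting.

    Because the generator never receives the restricted text, no prompt
    injection downstream can coax it into disclosure.
    """
    return "\n\n".join(p.text for p in passages if p.acl & user_roles)

# Hypothetical usage: the HR-only passage is excluded for an 'eng' user.
docs = [Passage("Q3 roadmap details ...", frozenset({"eng", "pm"})),
        Passage("Salary bands ...", frozenset({"hr"}))]
print(build_context(docs, {"eng"}))
```

Note the design choice: enforcement lives outside the model, so it holds even if the model is fully steered by injected instructions.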
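
For item 3: chained tool calls that each look task-plausible can still amplify cost, so one defense suggested by the abstract's threat model is an out-of-band budget on the agent-tool loop. A minimal sketch, with all limits hypothetical:

```python
class ToolBudgetExceeded(RuntimeError):
    """Raised when an agent's per-task tool budget is exhausted."""

class BudgetedToolLoop:
    """Meter tool calls outside the model's control.

    The cap is enforced by the harness, not the prompt, so a
    resource-amplification chain is cut off no matter how plausible
    each individual call looks.
    """
    def __init__(self, dispatch, max_calls=10, max_cost=1.0):
        self.dispatch = dispatch  # callable: (tool, args) -> (result, cost)
        self.max_calls, self.max_cost = max_calls, max_cost
        self.calls, self.cost = 0, 0.0

    def call(self, tool, args):
        if self.calls >= self.max_calls or self.cost >= self.max_cost:
            raise ToolBudgetExceeded(
                f"stopped after {self.calls} calls, cost {self.cost:.2f}")
        result, cost = self.dispatch(tool, args)
        self.calls += 1
        self.cost += cost
        return result
```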
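
For item 15: the paper's subspace method is not reproduced here, but the baseline it builds on is standard DP-SGD: clip each per-example gradient, average, then add Gaussian noise calibrated to the clipping bound. A self-contained sketch on logistic regression:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, sigma=1.0, rng=None):
    """One DP-SGD step (baseline, not the paper's subspace variant).

    Per-example gradients are clipped to L2 norm `clip` so no single
    record can dominate, then Gaussian noise proportional to `clip`
    is added to the averaged gradient.
    """
    rng = rng or np.random.default_rng(0)
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    g = (p - y)[:, None] * X             # per-example gradients
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    noisy = g.mean(axis=0) + rng.normal(0.0, sigma * clip / len(X), w.shape)
    return w - lr * noisy
```

The abstract's point is that noise scaled to the full parameter dimension has a large norm; restricting updates to a low-dimensional subspace shrinks that perturbation.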
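
For item 16: on Windows, the enforcement knob for retiring Net-NTLMv1 is the LmCompatibilityLevel registry value (5 = send NTLMv2 only, refuse LM and NTLMv1). Below is a single-host spot check using the standard-library winreg module (Windows-only); it is an illustration, not a substitute for domain-wide policy auditing.

```python
import winreg  # standard library, Windows only

LSA_KEY = r"SYSTEM\CurrentControlSet\Control\Lsa"

def lm_compatibility_level() -> int:
    """Return LmCompatibilityLevel, or -1 if it is not explicitly set.

    An absent value means the OS default applies, which varies by
    Windows version, so treat -1 as 'needs explicit hardening'.
    """
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA_KEY) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "LmCompatibilityLevel")
            return int(value)
        except FileNotFoundError:
            return -1

if __name__ == "__main__":
    level = lm_compatibility_level()
    print("NTLMv2-only enforced" if level >= 5
          else f"review needed (level={level})")
```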
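
For item 17: VidLeaks' video-specific signals are not reproduced here, but a common baseline MIA is the loss-threshold attack: training members tend to incur lower loss, so predict "member" below a calibrated threshold. A sketch on synthetic losses:

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Balanced accuracy of the classic loss-threshold attack.

    Predict 'member' when loss < threshold; training members are
    typically fit better and therefore score lower loss.
    """
    tpr = np.mean(np.asarray(member_losses) < threshold)
    tnr = np.mean(np.asarray(nonmember_losses) >= threshold)
    return 0.5 * (tpr + tnr)

# Synthetic illustration: calibrate the threshold on non-member losses.
rng = np.random.default_rng(0)
members = rng.normal(0.5, 0.2, 1000)     # lower loss: seen in training
nonmembers = rng.normal(1.0, 0.3, 1000)
thr = float(np.quantile(nonmembers, 0.05))
print(f"balanced accuracy: {loss_threshold_mia(members, nonmembers, thr):.3f}")
```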

Auto-generated 2026-01-19
