Weekly Threat Intelligence Summary
Top 10 General Cyber Threats
Generated 2026-01-12T05:00:04.836833+00:00
- New ransomware tactics to watch out for in 2026 (www.recordedfuture.com, 2026-01-05T00:00:00)
Score: 10.299
Ransomware groups made less money in 2025 despite a 47% increase in attacks, driving new tactics: bundled DDoS services, insider recruitment, and gig worker exploitation. Learn the emerging trends defenders must prepare for in 2026. - CISA warns of active attacks on HPE OneView and legacy PowerPoint (www.malwarebytes.com, 2026-01-08T14:29:36)
Score: 7.599
Two actively exploited flaws—one brand new, one 16 years old—have been added to CISA’s KEV catalog, signaling urgent patching. - Fake WinRAR downloads hide malware behind a real installer (www.malwarebytes.com, 2026-01-08T10:36:15)
Score: 7.572
We unpack a trojanized WinRAR download that was hiding the Winzipper malware behind a real installer. - How CrowdStrike’s Malware Analysis Agent Detects Malware at Machine Speed (www.crowdstrike.com, 2026-01-06T06:00:00)
Score: 7.207 - Digital Threat Detection Tools & Best Practices (www.recordedfuture.com, 2026-01-09T00:00:00)
Score: 5.965
Threat intelligence practitioners from Global Payments, Adobe, and Superhuman reveal how mature CTI programs transform data overload into strategic business value. Learn proven approaches to automation, cross-functional collaboration, and executive communication. - Malware in 2025 spread far beyond Windows PCs (www.malwarebytes.com, 2025-12-29T11:48:34)
Score: 5.914
Windows isn’t the only target anymore. In 2025, malware increasingly targeted Android, macOS, and multiple platforms at once. - pcTattletale founder pleads guilty as US cracks down on stalkerware (www.malwarebytes.com, 2026-01-09T15:41:07)
Score: 5.774
After years of security failures and partner-spying marketing, pcTattletale’s founder has pleaded guilty in a rare US federal stalkerware case. - Are we ready for ChatGPT Health? (www.malwarebytes.com, 2026-01-09T12:26:55)
Score: 5.752
Linking your medical records to ChatGPT Health may give you personalized wellness answers, but it also comes with serious privacy implications. - AI Tool Poisoning: How Hidden Instructions Threaten AI Agents (www.crowdstrike.com, 2026-01-09T06:00:00)
Score: 5.707 - Lego’s Smart Bricks explained: what they do, and what they don’t (www.malwarebytes.com, 2026-01-08T13:35:52)
Score: 5.593
A smart toy doesn’t have to be a risky one. Lego’s Smart Bricks add sensors and sound without apps, accounts, or AI. We explain how it works.
Top 10 AI / LLM-Related Threats
Generated 2026-01-12T06:00:15.134246+00:00
- PromptScreen: Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline (arxiv.org, 2026-01-12T05:00:00)
Score: 24.79
arXiv:2512.19011v2 Announce Type: replace
Abstract: Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)-based systems. We present PromptScreen, an efficient and systematically evaluated defense architecture that mitigates these threats through a lightweight, multi-stage pipeline. Its core component is a semantic filter based on text normalization, TF-IDF representations, and a Linear SVM classifier. Despite its simplicity, this module achi - Multi-turn Jailbreaking Attack in Multi-Modal Large Language Models (arxiv.org, 2026-01-12T05:00:00)
Score: 23.29
arXiv:2601.05339v1 Announce Type: new
Abstract: In recent years, the security vulnerabilities of Multi-modal Large Language Models (MLLMs) have become a serious concern in the Generative Artificial Intelligence (GenAI) research. These highly intelligent models, capable of performing multi-modal tasks with high accuracy, are also severely susceptible to carefully launched security attacks, such as jailbreaking attacks, which can manipulate model behavior and bypass safety constraints. This paper - Knowledge-Driven Multi-Turn Jailbreaking on Large Language Models (arxiv.org, 2026-01-12T05:00:00)
Score: 20.79
arXiv:2601.05445v1 Announce Type: new
Abstract: Large Language Models (LLMs) face a significant threat from multi-turn jailbreak attacks, where adversaries progressively steer conversations to elicit harmful outputs. However, the practical effectiveness of existing attacks is undermined by several critical limitations: they struggle to maintain a coherent progression over long interactions, often losing track of what has been accomplished and what remains to be done; they rely on rigid or pre-d - The Echo Chamber Multi-Turn LLM Jailbreak (arxiv.org, 2026-01-12T05:00:00)
Score: 20.79
arXiv:2601.05742v1 Announce Type: new
Abstract: The availability of Large Language Models (LLMs) has led to a new generation of powerful chatbots that can be developed at relatively low cost. As companies deploy these tools, security challenges need to be addressed to prevent financial loss and reputational damage. A key security challenge is jailbreaking, the malicious manipulation of prompts and inputs to bypass a chatbot's safety guardrails. Multi-turn attacks are a relatively new form - Prompt Injection Vulnerability of Consensus Generating Applications in Digital Democracy (arxiv.org, 2026-01-12T05:00:00)
Score: 18.79
arXiv:2508.04281v3 Announce Type: replace-cross
Abstract: Large Language Models (LLMs) are gaining traction as a method to generate consensus statements and aggregate preferences in digital democracy experiments. Yet, LLMs could introduce critical vulnerabilities in these systems. Here, we examine the vulnerability and robustness of off-the-shelf consensus-generating LLMs to prompt-injection attacks, in which texts are injected to amplify particular viewpoints, erase certain opinions, or divert - Jailbreaking Large Language Models through Iterative Tool-Disguised Attacks via Reinforcement Learning (arxiv.org, 2026-01-12T05:00:00)
Score: 17.79
arXiv:2601.05466v1 Announce Type: new
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities across diverse applications, however, they remain critically vulnerable to jailbreak attacks that elicit harmful responses violating human values and safety guidelines. Despite extensive research on defense mechanisms, existing safeguards prove insufficient against sophisticated adversarial strategies. In this work, we propose iMIST (\underline{i}nteractive \underline{M}ulti-st - Continual Pretraining on Encrypted Synthetic Data for Privacy-Preserving LLMs (arxiv.org, 2026-01-12T05:00:00)
Score: 17.79
arXiv:2601.05635v1 Announce Type: new
Abstract: Preserving privacy in sensitive data while pretraining large language models on small, domain-specific corpora presents a significant challenge. In this work, we take an exploratory step toward privacy-preserving continual pretraining by proposing an entity-based framework that synthesizes encrypted training data to protect personally identifiable information (PII). Our approach constructs a weighted entity graph to guide data synthesis and applie - StriderSPD: Structure-Guided Joint Representation Learning for Binary Security Patch Detection (arxiv.org, 2026-01-12T05:00:00)
Score: 17.79
arXiv:2601.05772v1 Announce Type: cross
Abstract: Vulnerabilities severely threaten software systems, making the timely application of security patches crucial for mitigating attacks. However, software vendors often silently patch vulnerabilities with limited disclosure, where Security Patch Detection (SPD) comes to protect software assets. Recently, most SPD studies have targeted Open-Source Software (OSS), yet a large portion of real-world software is closed-source, where patches are distribu - SRAF: Stealthy and Robust Adversarial Fingerprint for Copyright Verification of Large Language Models (arxiv.org, 2026-01-12T05:00:00)
Score: 17.79
arXiv:2505.06304v2 Announce Type: replace
Abstract: The protection of Intellectual Property (IP) for Large Language Models (LLMs) has become a critical concern as model theft and unauthorized commercialization escalate. While adversarial fingerprinting offers a promising black-box solution for ownership verification, existing methods suffer from significant limitations: they are fragile against model modifications, sensitive to system prompt variations, and easily detectable due to high-perplex - Leveraging Large Language Models to Bridge On-chain and Off-chain Transparency in Stablecoins (arxiv.org, 2026-01-12T05:00:00)
Score: 17.79
arXiv:2512.02418v2 Announce Type: replace
Abstract: Stablecoins such as USDT and USDC aspire to peg stability by coupling issuance controls with reserve attestations. In practice, however, the transparency is split across two worlds: verifiable on-chain traces and off-chain disclosures locked in unstructured text that are unconnected. We introduce a large language model (LLM)-based automated framework that bridges these two dimensions by aligning on-chain issuance data with off-chain disclosure - Transferability of Adversarial Attacks in Video-based MLLMs: A Cross-modal Image-to-Video Approach (arxiv.org, 2026-01-12T05:00:00)
Score: 17.79
arXiv:2501.01042v4 Announce Type: replace-cross
Abstract: Video-based multimodal large language models (V-MLLMs) have shown vulnerability to adversarial examples in video-text multimodal tasks. However, the transferability of adversarial videos to unseen models – a common and practical real-world scenario – remains unexplored. In this paper, we pioneer an investigation into the transferability of adversarial video samples across V-MLLMs. We find that existing adversarial attack methods face sig - VIGIL: Defending LLM Agents Against Tool Stream Injection via Verify-Before-Commit (arxiv.org, 2026-01-12T05:00:00)
Score: 17.49
arXiv:2601.05755v1 Announce Type: new
Abstract: LLM agents operating in open environments face escalating risks from indirect prompt injection, particularly within the tool stream where manipulated metadata and runtime feedback hijack execution flow. Existing defenses encounter a critical dilemma as advanced models prioritize injected rules due to strict alignment while static protection mechanisms sever the feedback loop required for adaptive reasoning. To reconcile this conflict, we propose \ - Securing the AI Supply Chain: What Can We Learn From Developer-Reported Security Issues and Solutions of AI Projects? (arxiv.org, 2026-01-12T05:00:00)
Score: 16.29
arXiv:2512.23385v2 Announce Type: replace-cross
Abstract: The rapid growth of Artificial Intelligence (AI) models and applications has led to an increasingly complex security landscape. Developers of AI projects must contend not only with traditional software supply chain issues but also with novel, AI-specific security threats. However, little is known about what security issues are commonly encountered and how they are resolved in practice. This gap hinders the development of effective securi - Sentiment Analysis with Text and Audio Using AWS Generative AI Services: Approaches, Challenges, and Solutions (aws.amazon.com, 2026-01-09T16:06:50)
Score: 15.086
This post, developed through a strategic scientific partnership between AWS and the Instituto de Ciência e Tecnologia Itaú (ICTi), P&D hub maintained by Itaú Unibanco, the largest private bank in Latin America, explores the technical aspects of sentiment analysis for both text and audio. We present experiments comparing multiple machine learning (ML) models and services, discuss the trade-offs and pitfalls of each approach, and highlight how AWS services can be orchestrated to build robust, - Memory Poisoning Attack and Defense on Memory Based LLM-Agents (arxiv.org, 2026-01-12T05:00:00)
Score: 14.79
arXiv:2601.05504v1 Announce Type: new
Abstract: Large language model agents equipped with persistent memory are vulnerable to memory poisoning attacks, where adversaries inject malicious instructions through query only interactions that corrupt the agents long term memory and influence future responses. Recent work demonstrated that the MINJA (Memory Injection Attack) achieves over 95 % injection success rate and 70 % attack success rate under idealized conditions. However, the robustness of th - LLMs as verification oracles for Solidity (arxiv.org, 2026-01-12T05:00:00)
Score: 14.79
arXiv:2509.19153v2 Announce Type: replace
Abstract: Ensuring the correctness of smart contracts is critical, as even subtle flaws can lead to severe financial losses. While bug detection tools able to spot common vulnerability patterns can serve as a first line of defense, most real-world exploits and losses stem from errors in the contract business logic. Formal verification tools such as SolCMC and the Certora Prover address this challenge, but their impact remains limited by steep learning c - How Beekeeper optimized user personalization with Amazon Bedrock (aws.amazon.com, 2026-01-09T16:10:52)
Score: 14.087
Beekeeper’s automated leaderboard approach and human feedback loop system for dynamic LLM and prompt pair selection addresses the key challenges organizations face in navigating the rapidly evolving landscape of language models. - HogVul: Black-box Adversarial Code Generation Framework Against LM-based Vulnerability Detectors (arxiv.org, 2026-01-12T05:00:00)
Score: 11.79
arXiv:2601.05587v1 Announce Type: new
Abstract: Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable to adversarial attacks that exploit lexical and syntax perturbations, allowing critical flaws to evade detection. Existing black-box attacks on LM-based vulnerability detectors primarily rely on isolated perturbation strategies, limiting their ability to efficiently explore the adversarial code space - CyberGFM: Graph Foundation Models for Lateral Movement Detection in Enterprise Networks (arxiv.org, 2026-01-12T05:00:00)
Score: 11.79
arXiv:2601.05988v1 Announce Type: new
Abstract: Representing networks as a graph and training a link prediction model using benign connections is an effective method of anomaly-based intrusion detection. Existing works using this technique have shown great success using temporal graph neural networks and skip-gram-based approaches on random walks. However, random walk-based approaches are unable to incorporate rich edge data, while the GNN-based approaches require large amounts of memory to tra - PII-VisBench: Evaluating Personally Identifiable Information Safety in Vision Language Models Along a Continuum of Visibility (arxiv.org, 2026-01-12T05:00:00)
Score: 11.79
arXiv:2601.05739v1 Announce Type: cross
Abstract: Vision Language Models (VLMs) are increasingly integrated into privacy-critical domains, yet existing evaluations of personally identifiable information (PII) leakage largely treat privacy as a static extraction task and ignore how a subject's online presence–the volume of their data available online–influences privacy alignment. We introduce PII-VisBench, a novel benchmark containing 4000 unique probes designed to evaluate VLM safety thr - Agentic LLMs as Powerful Deanonymizers: Re-identification of Participants in the Anthropic Interviewer Dataset (arxiv.org, 2026-01-12T05:00:00)
Score: 11.49
arXiv:2601.05918v1 Announce Type: new
Abstract: On December 4, 2025, Anthropic released Anthropic Interviewer, an AI tool for running qualitative interviews at scale, along with a public dataset of 1,250 interviews with professionals, including 125 scientists, about their use of AI for research. Focusing on the scientist subset, I show that widely available LLMs with web search and agentic capabilities can link six out of twenty-four interviews to specific scientific works, recovering associate - Optimizing LLM inference on Amazon SageMaker AI with BentoML’s LLM- Optimizer (aws.amazon.com, 2025-12-24T17:17:44)
Score: 11.288
In this post, we demonstrate how to optimize large language model (LLM) inference on Amazon SageMaker AI using BentoML's LLM-Optimizer to systematically identify the best serving configurations for your workload. - Accelerating LLM inference with post-training weight and activation using AWQ and GPTQ on Amazon SageMaker AI (aws.amazon.com, 2026-01-09T18:09:22)
Score: 9.806
Quantized models can be seamlessly deployed on Amazon SageMaker AI using a few lines of code. In this post, we explore why quantization matters—how it enables lower-cost inference, supports deployment on resource-constrained hardware, and reduces both the financial and environmental impact of modern LLMs, while preserving most of their original performance. We also take a deep dive into the principles behind PTQ and demonstrate how to quantize the model of your choice and deploy it on Amazon Sag - Cybersecurity AI: A Game-Theoretic AI for Guiding Attack and Defense (arxiv.org, 2026-01-12T05:00:00)
Score: 9.49
arXiv:2601.05887v1 Announce Type: new
Abstract: AI-driven penetration testing now executes thousands of actions per hour but still lacks the strategic intuition humans apply in competitive security. To build cybersecurity superintelligence –Cybersecurity AI exceeding best human capability-such strategic intuition must be embedded into agentic reasoning processes. We present Generative Cut-the-Rope (G-CTR), a game-theoretic guidance layer that extracts attack graphs from agent's context, c - Descriptor: Multi-Regional Cloud Honeypot Dataset (MURHCAD) (arxiv.org, 2026-01-12T05:00:00)
Score: 9.49
arXiv:2601.05813v1 Announce Type: cross
Abstract: This data article introduces a comprehensive, high-resolution honeynet dataset designed to support standalone analyses of global cyberattack behaviors. Collected over a continuous 72-hour window (June 9 to 11, 2025) on Microsoft Azure, the dataset comprises 132,425 individual attack events captured by three honeypots (Cowrie, Dionaea, and SentryPeer) deployed across four geographically dispersed virtual machines. Each event record includes enric
Auto-generated 2026-01-12
