
Weekly Threat Report 2026-02-16

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2026-02-16T05:00:05.463157+00:00

  1. Apple patches zero-day flaw that could let attackers take control of devices (www.malwarebytes.com, 2026-02-12T11:40:35)
    Score: 10.58
    Apple issued security updates for all of its devices, including a patch for an actively exploited zero-day tracked as CVE-2026-20700.
  2. February 2026 Patch Tuesday includes six actively exploited zero-days (www.malwarebytes.com, 2026-02-11T12:32:20)
    Score: 10.419
    Microsoft’s February Patch Tuesday fixes 59 flaws—including six zero-days already under active attack. How bad are they?
  3. State of Security Report | Recorded Future (www.recordedfuture.com, 2026-02-12T00:00:00)
    Score: 8.799
    Download Recorded Future's 2026 State of Security report which provides comprehensive threat intelligence on geopolitical fragmentation, state-sponsored operations, ransomware evolution, and emerging technology risk.
  4. Threat Intelligence Executive Report – Volume 2025, Number 6 (www.sophos.com, 2026-02-10T00:00:00)
    Score: 8.465
    This issue of the Counter Threat Unit’s high-level bimonthly report discusses noteworthy updates in the threat landscape during September and October.
  5. February 2026 Patch Tuesday: Six Zero-Days Among 59 CVEs Patched (www.crowdstrike.com, 2026-02-10T06:00:00)
    Score: 8.207
  6. Child exploitation, grooming, and social media addiction claims put Meta on trial (www.malwarebytes.com, 2026-02-12T12:35:26)
    Score: 7.586
    Landmark trials now underway allege Meta failed to protect children from sexual exploitation, grooming, and addiction-driven design.
  7. Criminals are using AI website builders to clone major brands (www.malwarebytes.com, 2026-02-12T08:03:00)
    Score: 7.555
    AI-assisted website builders are making it far easier for scammers to impersonate well-known and trusted brands, including Malwarebytes.
  8. Malicious use of virtual machine infrastructure (www.sophos.com, 2026-02-04T00:00:00)
    Score: 7.465
    Bulletproof hosting providers are abusing the legitimate ISPsystem infrastructure to supply virtual machines to cybercriminals.
  9. Malwarebytes earns PCMag Best Tech Brand spot, scores 100% with MRG Effitas (www.malwarebytes.com, 2026-02-11T10:09:52)
    Score: 7.403
    Malwarebytes is not only one of PCMag's Best Tech Brands for 2026, it also scored 100% on the MRG Effitas consumer security product test.
  10. CrowdStrike Falcon Scores Perfect 100% in SE Labs’ Most Challenging Ransomware Test (www.crowdstrike.com, 2026-02-03T08:00:00)
    Score: 7.054

Top 25 AI / LLM-Related Threats

Generated 2026-02-16T06:00:18.233160+00:00

  1. GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use (cloud.google.com, 2026-02-12T14:00:00)
    Score: 42.527
    Introduction In the final quarter of 2025, Google Threat Intelligence Group (GTIG) observed threat actors increasingly integrating artificial intelligence (AI) to accelerate the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development. This report serves as an update to our November 2025 findings regarding the advances in threat actor usage of AI tools. By identifying these early indicators and offensive proofs of concept, GTIG aims to arm def
  2. Measuring AI Security: Separating Signal from Panic (www.rapid7.com, 2026-02-10T18:00:00)
    Score: 23.49
    The conversation around AI security is full of anxiety. Every week, new headlines warn of jailbreaks, prompt injection, agents gone rogue, and the rise of LLM-enabled cybercrime. It’s easy to come away with the impression that AI is fundamentally uncontrollable and dangerous, and therefore something we need to lock down before it gets out of hand. But as a security practitioner, I wasn’t convinced. Most of these warnings are based on hypothetical examples or carefully engineered demos. They rais
  3. Blind Gods and Broken Screens: Architecting a Secure, Intent-Centric Mobile Agent Operating System (arxiv.org, 2026-02-16T05:00:00)
    Score: 22.79
    arXiv:2602.10915v3 Announce Type: replace
    Abstract: The evolution of Large Language Models (LLMs) has shifted mobile computing from App-centric interactions to system-level autonomous agents. Current implementations predominantly rely on a "Screen-as-Interface" paradigm, which inherits structural vulnerabilities and conflicts with the mobile ecosystem's economic foundations. In this paper, we conduct a systematic security analysis of state-of-the-art mobile agents using Doubao Mo
  4. RAT-Bench: A Comprehensive Benchmark for Text Anonymization (arxiv.org, 2026-02-16T05:00:00)
    Score: 19.79
    arXiv:2602.12806v1 Announce Type: cross
    Abstract: Data containing personal information is increasingly used to train, fine-tune, or query Large Language Models (LLMs). Text is typically scrubbed of identifying information prior to use, often with tools such as Microsoft's Presidio or Anthropic's PII purifier. These tools have traditionally been evaluated on their ability to remove specific identifiers (e.g., names), yet their effectiveness at preventing re-identification remains uncle
  5. Sparse Autoencoders are Capable LLM Jailbreak Mitigators (arxiv.org, 2026-02-16T05:00:00)
    Score: 17.79
    arXiv:2602.12418v1 Announce Type: new
    Abstract: Jailbreak attacks remain a persistent threat to large language model safety. We propose Context-Conditioned Delta Steering (CC-Delta), an SAE-based defense that identifies jailbreak-relevant sparse features by comparing token-level representations of the same harmful request with and without jailbreak context. Using paired harmful/jailbreak prompts, CC-Delta selects features via statistical testing and applies inference-time mean-shift steering in
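The abstract's core mechanism, steering activations along a direction that separates jailbreak-context representations from plain ones, can be sketched in a few lines. This is a generic illustration with made-up two-dimensional "activations"; the paper's actual CC-Delta method selects sparse SAE features via statistical testing, which this sketch does not reproduce.

```python
# Toy sketch of inference-time mean-shift steering. All vectors here are
# hypothetical; a real model would supply high-dimensional activations.

def mean(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def steering_delta(jailbreak_acts, plain_acts):
    """Direction separating jailbreak-context from plain representations."""
    mj, mp = mean(jailbreak_acts), mean(plain_acts)
    return [a - b for a, b in zip(mj, mp)]

def steer(activation, delta, strength=1.0):
    """Shift an activation away from the jailbreak direction at inference time."""
    return [a - strength * d for a, d in zip(activation, delta)]

# Paired activations for the same harmful request with and without
# jailbreak context (fabricated numbers for illustration).
jail = [[2.0, 4.0], [4.0, 6.0]]
plain = [[0.0, 2.0], [2.0, 4.0]]
delta = steering_delta(jail, plain)   # [2.0, 2.0]
print(steer([5.0, 7.0], delta))       # [3.0, 5.0]
```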
  6. In-Context Autonomous Network Incident Response: An End-to-End Large Language Model Agent Approach (arxiv.org, 2026-02-16T05:00:00)
    Score: 17.79
    arXiv:2602.13156v1 Announce Type: new
    Abstract: Rapidly evolving cyberattacks demand incident response systems that can autonomously learn and adapt to changing threats. Prior work has extensively explored the reinforcement learning approach, which involves learning response strategies through extensive simulation of the incident. While this approach can be effective, it requires handcrafted modeling of the simulator and suppresses useful semantics from raw system logs and alerts. To address th
  7. Patch Tuesday and the Enduring Challenge of Windows’ Backwards Compatibility (www.rapid7.com, 2026-01-28T17:04:41)
    Score: 17.486
    Introduction If you received an email with the subject “I LOVE YOU” and an attachment called “LOVE-LETTER-FOR-YOU.TXT”, would you open it? Probably not, but back in the year 2000, plenty of people did exactly that. The internet learned a hard lesson about the disproportionate power available to a university dropout with some VBScript skills, and millions of ordinary people suffered the anguish of deleted family photos or even reputational damage as the worm propagated itself across their entire
  8. SmartGuard: Leveraging Large Language Models for Network Attack Detection through Audit Log Analysis and Summarization (arxiv.org, 2026-02-16T05:00:00)
    Score: 16.79
    arXiv:2506.16981v2 Announce Type: replace
    Abstract: End-point monitoring solutions are widely deployed in today's enterprise environments to support advanced attack detection and investigation. These monitors continuously record system-level activities as audit logs and provide deep visibility into security events. Unfortunately, existing methods of semantic analysis based on audit logs have low granularity, only reaching the system call level, making it difficult to effectively classify h
  9. Beyond the Battlefield: Threats to the Defense Industrial Base (cloud.google.com, 2026-02-10T14:00:00)
    Score: 16.551
    Introduction In modern warfare, the front lines are no longer confined to the battlefield; they extend directly into the servers and supply chains of the industry that safeguards the nation. Today, the defense sector faces a relentless barrage of cyber operations conducted by state-sponsored actors and criminal groups alike. In recent years, Google Threat Intelligence Group (GTIG) has observed several distinct areas of focus in adversarial targeting of the defense industrial base (DIB). While no
  10. TensorCommitments: A Lightweight Verifiable Inference for Language Models (arxiv.org, 2026-02-16T05:00:00)
    Score: 14.79
    arXiv:2602.12630v1 Announce Type: new
    Abstract: Most large language models (LLMs) run on external clouds: users send a prompt, pay for inference, and must trust that the remote GPU executes the LLM without any adversarial tampering. We critically ask how to achieve verifiable LLM inference, where a prover (the service) must convince a verifier (the client) that an inference was run correctly without rerunning the LLM. Existing cryptographic works are too slow at the LLM scale, while non-cryptog
  11. Favia: Forensic Agent for Vulnerability-fix Identification and Analysis (arxiv.org, 2026-02-16T05:00:00)
    Score: 14.79
    arXiv:2602.12500v1 Announce Type: cross
    Abstract: Identifying vulnerability-fixing commits corresponding to disclosed CVEs is essential for secure software maintenance but remains challenging at scale, as large repositories contain millions of commits of which only a small fraction address security issues. Existing automated approaches, including traditional machine learning techniques and recent large language model (LLM)-based methods, often suffer from poor precision-recall trade-offs. Frequ
  12. Watermarking Discrete Diffusion Language Models (arxiv.org, 2026-02-16T05:00:00)
    Score: 14.79
    arXiv:2511.02083v2 Announce Type: replace
    Abstract: Watermarking has emerged as a promising technique to track AI-generated content and differentiate it from authentic human creations. While prior work extensively studies watermarking for autoregressive large language models (LLMs) and image diffusion models, it remains comparatively underexplored for discrete diffusion language models (DDLMs), which are becoming popular due to their high inference throughput. In this paper, we introduce one of
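For context on the line of work the abstract extends, here is a hedged sketch of the green-list scheme commonly used to watermark autoregressive LLMs: a hash seeded by the previous token partitions the vocabulary, generation biases sampling toward the "green" partition, and detection measures the green-token rate. The hashing choice and token strings below are illustrative, not taken from the paper.

```python
import hashlib

def is_green(prev_token, token, green_fraction=0.5):
    """A token is 'green' when a hash seeded by the previous token
    places it in the green partition of the vocabulary."""
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] / 256 < green_fraction

def green_rate(tokens):
    """Fraction of green tokens in a text. A watermarked generator biases
    sampling toward green tokens, so a rate well above green_fraction is
    statistical evidence of a watermark; unwatermarked text sits near it."""
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

rate = green_rate("a plain unwatermarked sentence of several words".split())
print(f"green rate: {rate:.2f}")
```

Extending this idea to discrete diffusion models is non-trivial precisely because tokens are not generated left-to-right, so there is no fixed "previous token" to seed the partition, which is the gap the paper targets.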
  13. Provable Secure Steganography Based on Adaptive Dynamic Sampling (arxiv.org, 2026-02-16T05:00:00)
    Score: 13.79
    arXiv:2504.12579v3 Announce Type: replace
    Abstract: The security of private communication is increasingly at risk due to widespread surveillance. Steganography, a technique for embedding secret messages within innocuous carriers, enables covert communication over monitored channels. Provably Secure Steganography (PSS), which ensures computational indistinguishability between the normal model output and steganography output, is the state-of-the-art in this field. However, current PSS methods oft
  14. Patch Tuesday – February 2026 (www.rapid7.com, 2026-02-11T01:58:33)
    Score: 12.67
    Microsoft is publishing 55 vulnerabilities this February 2026 Patch Tuesday . Microsoft is aware of exploitation in the wild for six of today’s vulnerabilities, and notes public disclosure for three of those. Earlier in the month, Microsoft provided patches to address three browser vulnerabilities, which are not included in the Patch Tuesday count above. Windows/Office triple trouble: zero-day security feature bypass vulns All three of the publicly disclosed zero-day vulnerabilities published to
  15. Neighborhood Blending: A Lightweight Inference-Time Defense Against Membership Inference Attacks (arxiv.org, 2026-02-16T05:00:00)
    Score: 12.49
    arXiv:2602.12943v1 Announce Type: new
    Abstract: In recent years, the widespread adoption of Machine Learning as a Service (MLaaS), particularly in sensitive environments, has raised considerable privacy concerns. Of particular importance are membership inference attacks (MIAs), which exploit behavioral discrepancies between training and non-training data to determine whether a specific record was included in the model's training set, thereby presenting significant privacy risks. Although e
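As background, the membership inference attack class the abstract defends against can be illustrated with a toy confidence-threshold attacker: members of the training set tend to receive higher model confidence than non-members. The numbers below are fabricated, and the paper's "Neighborhood Blending" defense itself is not shown.

```python
# Toy membership inference via a confidence threshold. This illustrates
# the generic attack, not any specific paper's method.

def confidence_attack(confidences, threshold=0.9):
    """Guess 'member' when the model's confidence on a record exceeds
    the threshold -- training records are typically fit more tightly."""
    return [c >= threshold for c in confidences]

# Hypothetical model confidences: the first three records were in the
# training set (overfit, high confidence), the last three were not.
train_conf = [0.99, 0.97, 0.95]
holdout_conf = [0.62, 0.71, 0.55]

guesses = confidence_attack(train_conf + holdout_conf)
print(guesses)  # [True, True, True, False, False, False]
```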
  16. UNC1069 Targets Cryptocurrency Sector with New Tooling and AI-Enabled Social Engineering (cloud.google.com, 2026-02-09T14:00:00)
    Score: 11.613
    Written by: Ross Inman, Adrian Hernandez Introduction North Korean threat actors continue to evolve their tradecraft to target the cryptocurrency and decentralized finance (DeFi) verticals. Mandiant recently investigated an intrusion targeting a FinTech entity within this sector, attributed to UNC1069 , a financially motivated threat actor active since at least 2018. This investigation revealed a tailored intrusion resulting in the deployment of seven unique malware families, including a new set
  17. The OpenClaw experiment is a warning shot for enterprise AI security (www.sophos.com, 2026-02-13T00:00:00)
    Score: 11.426
    Agentic AI promises a lot – but it also introduces more risk. Sophos’ CISO explores the challenges and how to address them.
  18. How Amazon uses Amazon Nova models to automate operational readiness testing for new fulfillment centers (aws.amazon.com, 2026-02-10T18:34:09)
    Score: 11.096
    In this post, we discuss how Amazon Nova in Amazon Bedrock can be used to implement an AI-powered image recognition solution that automates the detection and validation of module components, significantly reducing manual verification efforts and improving accuracy.
  19. Agent-to-agent collaboration: Using Amazon Nova 2 Lite and Amazon Nova Act for multi-agent systems (aws.amazon.com, 2026-02-09T16:00:28)
    Score: 10.833
    This post walks through how agent-to-agent collaboration on Amazon Bedrock works in practice, using Amazon Nova 2 Lite for planning and Amazon Nova Act for browser interaction, to turn a fragile single-agent setup into a predictable multi-agent system.
  20. Carding-as-a-Service: The Underground Market of Stolen Cards (www.rapid7.com, 2026-02-12T14:00:00)
    Score: 10.627
    Rapid7 software engineer Eliran Alon also contributed to this post. Introduction Despite sustained efforts by the global banking and payments industry, credit card fraud continues to affect consumers and organizations on a large scale. Underground “dump shops” play a central role in this activity, selling stolen credit and debit card data to criminals who use it to conduct unauthorized transactions and broader fraud campaigns. Rather than fading under increased scrutiny, this illicit trade has e
  21. The February 2026 Security Update Review (www.thezdi.com, 2026-02-10T18:30:28)
    Score: 10.595
    I have survived the biggest Pwn2Own ever, but I’m back in Tokyo for the second Patch Tuesday of 2026. My location never stops Patch Tuesday from coming, so let’s take a look at the latest security patches from Adobe and Microsoft. If you’d rather watch the full video recap covering the entire release, you can check it out here: Adobe Patches for February 2026 For February, Adobe released nine bulletins addressing 44 unique CVEs in Adobe Audition, After Effects, InDesign, Substance 3D Designer,
  22. RADAR: Exposing Unlogged NoSQL Operations (arxiv.org, 2026-02-16T05:00:00)
    Score: 9.49
    arXiv:2602.12600v1 Announce Type: new
    Abstract: The widespread adoption of NoSQL databases has made digital forensics increasingly difficult as storage formats are diverse and often opaque, and audit logs cannot be assumed trustworthy when privileged insiders, such as DevOps or administrators, can disable, suppress, or manipulate logging to conceal activity. We present RADAR (Record & Artifact Detection, Alignment & Reporting), a log-adversary-aware framework that derives forensic groun
  23. Reliable Hierarchical Operating System Fingerprinting via Conformal Prediction (arxiv.org, 2026-02-16T05:00:00)
    Score: 9.49
    arXiv:2602.12825v1 Announce Type: new
    Abstract: Operating System (OS) fingerprinting is critical for network security, but conventional methods do not provide formal uncertainty quantification mechanisms. Conformal Prediction (CP) could be directly wrapped around existing methods to obtain prediction sets with guaranteed coverage. However, a direct application of CP would treat OS identification as a flat classification problem, ignoring the natural taxonomic structure of OSs and providing brit
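The split conformal prediction wrapper the abstract refers to is straightforward to sketch: calibrate a quantile of nonconformity scores (one minus the probability assigned to the true class) on held-out data, then return every label whose score falls below it. The calibration scores, OS labels, and probabilities below are hypothetical.

```python
import math

def conformal_set(cal_scores, test_probs, alpha=0.1):
    """Split conformal prediction: cal_scores are nonconformity scores
    (1 - prob of the true class) on held-out calibration data; return
    the set of labels whose nonconformity falls below the corrected
    (1 - alpha) quantile, giving ~90% coverage for alpha = 0.1."""
    n = len(cal_scores)
    # Finite-sample corrected quantile index.
    k = math.ceil((n + 1) * (1 - alpha))
    qhat = sorted(cal_scores)[min(k, n) - 1]
    return [label for label, p in test_probs.items() if 1 - p <= qhat]

# Hypothetical calibration scores and per-OS classifier probabilities.
cal = [0.05, 0.10, 0.12, 0.20, 0.30, 0.35, 0.40, 0.45, 0.60]
probs = {"Linux": 0.70, "Windows": 0.25, "BSD": 0.05}
print(conformal_set(cal, probs, alpha=0.1))  # ['Linux']
```

The paper's contribution is making such sets respect the OS taxonomy (family, version) rather than treating labels as flat, which this flat sketch does not capture.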
  24. Bloom Filter Look-Up Tables for Private and Secure Distributed Databases in Web3 (Revised Version) (arxiv.org, 2026-02-16T05:00:00)
    Score: 9.49
    arXiv:2602.13167v1 Announce Type: cross
    Abstract: The rapid growth of decentralized systems in the Web3 ecosystem has introduced numerous challenges, particularly in ensuring data security, privacy, and scalability [3, 8]. These systems rely heavily on distributed architectures, requiring robust mechanisms to manage data and interactions among participants securely. One critical aspect of decentralized systems is key management, which is essential for encrypting files, securing database segments
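A minimal Bloom filter, the data structure named in the title, can be sketched as follows. The key names and parameters are illustrative and unrelated to the paper's construction.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item in an m-bit array.
    Membership queries can yield false positives but never false negatives,
    which is what makes the structure attractive for private look-ups."""

    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k independent positions by salting the hash with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("segment-key-42")           # hypothetical database-segment key
print("segment-key-42" in bf)      # True
print("segment-key-99" in bf)      # almost certainly False
```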
  25. Mayfly: Private Aggregate Insights from Ephemeral Streams of On-Device User Data (arxiv.org, 2026-02-16T05:00:00)
    Score: 9.49
    arXiv:2412.07962v2 Announce Type: replace
    Abstract: This paper introduces Mayfly, a federated analytics approach enabling aggregate queries over ephemeral on-device data streams without central persistence of sensitive user data. Mayfly minimizes data via on-device windowing and contribution bounding through SQL-programmability, anonymizes user data via streaming differential privacy (DP), and mandates immediate in-memory cross-device aggregation on the server — ensuring only privatized aggreg

Auto-generated 2026-02-16
