Weekly Threat Report 2026-03-02

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2026-03-02T05:00:06.421160+00:00

  1. January 2026 CVE Landscape: 23 Critical Vulnerabilities Mark 5% Increase, APT28 Exploits Microsoft Office Zero-Day (www.recordedfuture.com, 2026-02-24T00:00:00)
    Score: 14.165
    January 2026 saw 23 actively exploited CVEs, including APT28’s Microsoft Office zero-day and critical auth bypass flaws impacting enterprise systems.
  2. 2025 Cloud Threat Hunting and Defense Landscape (www.recordedfuture.com, 2026-02-19T00:00:00)
    Score: 9.632
    Threat actors are doubling down on cloud infrastructure — exploiting misconfigurations, abusing native services, and pivoting through hybrid environments to maximize impact. See how attack patterns are evolving across exploitation, ransomware, credential abuse, and AI service targeting in this latest cloud threat roundup.
  3. How to understand and avoid Advanced Persistent Threats (www.malwarebytes.com, 2026-02-26T18:52:11)
    Score: 8.13
    APT stands for Advanced Persistent Threat. But what does that actually mean, and how does it translate into the kind of threat you’re facing?
  4. Fake Zoom and Google Meet scams install Teramind: A technical deep dive (www.malwarebytes.com, 2026-02-26T22:40:00)
    Score: 7.656
    Attackers don’t always need custom malware. Sometimes they just need a trusted brand and a legitimate tool.
  5. Facebook ads spread fake Windows 11 downloads that steal passwords and crypto wallets (www.malwarebytes.com, 2026-02-20T10:00:30)
    Score: 6.568
    Attackers are weaponizing Facebook ads to distribute password-stealing malware masked as a Windows download.
  6. GrayCharlie Hijacks Law Firm Sites in Suspected Supply-Chain Attack (www.recordedfuture.com, 2026-02-18T00:00:00)
    Score: 6.465
    GrayCharlie turns compromised WordPress sites into malware delivery machines. Discover how this threat actor chains fake browser updates and ClickFix lures to deploy NetSupport RAT, Stealc, and SectopRAT.
  7. State of Security Report | Recorded Future (www.recordedfuture.com, 2026-02-12T00:00:00)
    Score: 6.465
    Download Recorded Future's 2026 State of Security report, which provides comprehensive threat intelligence on geopolitical fragmentation, state-sponsored operations, ransomware evolution, and emerging technology risks.
  8. February 2026 Patch Tuesday: Six Zero-Days Among 59 CVEs Patched (www.crowdstrike.com, 2026-02-10T06:00:00)
    Score: 5.874
  9. Public Google API keys can be used to expose Gemini AI data (www.malwarebytes.com, 2026-02-27T12:33:22)
    Score: 5.752
    Researchers found that Google API keys long treated as harmless can now unlock access to Gemini.
  10. Inside a fake Google security check that becomes a browser RAT (www.malwarebytes.com, 2026-02-27T11:29:11)
    Score: 5.745
    Disguised as a security check, this fake Google alert uses browser permissions to harvest contacts, location data, and more.
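Item 9 above reports that Google API keys once treated as harmless can now unlock Gemini access. A minimal sketch of how a defender might triage a leaked key, assuming the public v1beta "list models" REST endpoint of the Generative Language API; `models_list_url` and `key_exposes_gemini` are hypothetical helper names, not part of any Google SDK:

```python
import urllib.error
import urllib.parse
import urllib.request

# Public REST path for the Generative Language API's model listing.
GLA_BASE = "https://generativelanguage.googleapis.com/v1beta/models"

def models_list_url(api_key: str) -> str:
    """Build the request URL that would enumerate Gemini models with a key.

    Unrestricted Google API keys authenticate via the ?key= query parameter.
    """
    return GLA_BASE + "?" + urllib.parse.urlencode({"key": api_key})

def key_exposes_gemini(api_key: str, timeout: float = 10.0) -> bool:
    """Probe whether a key is accepted (HTTP 200) by the Gemini API.

    Makes a live network call -- run only against keys you are
    authorized to test.
    """
    try:
        with urllib.request.urlopen(models_list_url(api_key), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 400/403 responses indicate the key is invalid or restricted.
        return False
```

A key that returns HTTP 200 here is worth rotating or restricting even if it was originally issued for an unrelated service such as Maps.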

Top 10 AI / LLM-Related Threats

Generated 2026-03-02T06:00:18.472040+00:00

  1. GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use (cloud.google.com, 2026-02-12T14:00:00)
    Score: 39.194
    Introduction In the final quarter of 2025, Google Threat Intelligence Group (GTIG) observed threat actors increasingly integrating artificial intelligence (AI) to accelerate the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development. This report serves as an update to our November 2025 findings regarding the advances in threat actor usage of AI tools. By identifying these early indicators and offensive proofs of concept, GTIG aims to arm def
  2. Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking (arxiv.org, 2026-03-02T05:00:00)
    Score: 20.79
    arXiv:2602.24009v1 Announce Type: new
    Abstract: Jailbreak techniques for large language models (LLMs) evolve faster than benchmarks, making robustness estimates stale and difficult to compare across papers due to drift in datasets, harnesses, and judging protocols. We introduce JAILBREAK FOUNDRY (JBF), a system that addresses this gap via a multi-agent workflow to translate jailbreak papers into executable modules for immediate evaluation within a unified harness. JBF features three core compon
  3. Manifold of Failure: Behavioral Attraction Basins in Language Models (arxiv.org, 2026-03-02T05:00:00)
    Score: 20.29
    arXiv:2602.22291v2 Announce Type: replace-cross
    Abstract: While prior work has focused on projecting adversarial examples back onto the manifold of natural data to restore safety, we argue that a comprehensive understanding of AI safety requires characterizing the unsafe regions themselves. This paper introduces a framework for systematically mapping the Manifold of Failure in Large Language Models (LLMs). We reframe the search for vulnerabilities as a quality diversity problem, using MAP-Elite
  4. Measuring AI Security: Separating Signal from Panic (www.rapid7.com, 2026-02-10T18:00:00)
    Score: 20.157
    The conversation around AI security is full of anxiety. Every week, new headlines warn of jailbreaks, prompt injection, agents gone rogue, and the rise of LLM-enabled cybercrime. It’s easy to come away with the impression that AI is fundamentally uncontrollable and dangerous, and therefore something we need to lock down before it gets out of hand. But as a security practitioner, I wasn’t convinced. Most of these warnings are based on hypothetical examples or carefully engineered demos. They rais
  5. Learning to Generate Secure Code via Token-Level Rewards (arxiv.org, 2026-03-02T05:00:00)
    Score: 17.79
    arXiv:2602.23407v1 Announce Type: new
    Abstract: Large language models (LLMs) have demonstrated strong capabilities in code generation, yet they remain prone to producing security vulnerabilities. Existing approaches commonly suffer from two key limitations: the scarcity of high-quality security data and coarse-grained reinforcement learning reward signals. To address these challenges, we propose Vul2Safe, a new secure code generation framework that leverages LLM self-reflection to construct hig
  6. On the Effectiveness of Membership Inference in Targeted Data Extraction from Large Language Models (arxiv.org, 2026-03-02T05:00:00)
    Score: 17.79
    arXiv:2512.13352v3 Announce Type: replace-cross
    Abstract: Large Language Models (LLMs) are prone to memorizing training data, which poses serious privacy risks. Two of the most prominent concerns are training data extraction and Membership Inference Attacks (MIAs). Prior research has shown that these threats are interconnected: adversaries can extract training data from an LLM by querying the model to generate a large volume of text and subsequently applying MIAs to verify whether a particular
  7. Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search (arxiv.org, 2026-03-02T05:00:00)
    Score: 17.79
    arXiv:2602.22983v2 Announce Type: replace-cross
    Abstract: As Large Language Models (LLMs) are increasingly used, their security risks have drawn increasing attention. Existing research reveals that LLMs are highly susceptible to jailbreak attacks, with effectiveness varying across language contexts. This paper investigates the role of classical Chinese in jailbreak attacks. Owing to its conciseness and obscurity, classical Chinese can partially bypass existing safety constraints, exposing notab
  8. Enhancing Continual Learning for Software Vulnerability Prediction: Addressing Catastrophic Forgetting via Hybrid-Confidence-Aware Selective Replay for Temporal LLM Fine-Tuning (arxiv.org, 2026-03-02T05:00:00)
    Score: 14.79
    arXiv:2602.23834v1 Announce Type: new
    Abstract: Recent work applies Large Language Models (LLMs) to source-code vulnerability detection, but most evaluations still rely on random train-test splits that ignore time and overestimate real-world performance. In practice, detectors are deployed on evolving code bases and must recognise future vulnerabilities under temporal distribution shift. This paper investigates continual fine-tuning of a decoder-style language model (microsoft/phi-2 with LoRA)
  9. Anansi: Scalable Characterization of Message-Based Job Scams (arxiv.org, 2026-03-02T05:00:00)
    Score: 14.79
    arXiv:2602.24223v1 Announce Type: new
    Abstract: Job-based smishing scams, where victims are recruited under the guise of remote job opportunities, represent a rapidly growing and understudied threat within the broader landscape of online fraud. In this paper, we present Anansi, the first scalable, end-to-end measurement pipeline designed to systematically engage with, analyze, and characterize job scams in the wild. Anansi combines large language models (LLMs), automated browser agents, and inf
  10. MPU: Towards Secure and Privacy-Preserving Knowledge Unlearning for Large Language Models (arxiv.org, 2026-03-02T05:00:00)
    Score: 14.79
    arXiv:2602.23798v1 Announce Type: cross
    Abstract: Machine unlearning for large language models often faces a privacy dilemma in which strict constraints prohibit sharing either the server's parameters or the client's forget set. To address this dual non-disclosure constraint, we propose MPU, an algorithm-agnostic privacy-preserving Multiple Perturbed Copies Unlearning framework that primarily introduces two server-side modules: Pre-Process for randomized copy generation and Post-Proce
  11. Train CodeFu-7B with veRL and Ray on Amazon SageMaker Training jobs (aws.amazon.com, 2026-02-24T15:46:50)
    Score: 14.368
    In this post, we demonstrate how to train CodeFu-7B, a specialized 7-billion parameter model for competitive programming, using Group Relative Policy Optimization (GRPO) with veRL, a flexible and efficient training library for large language models (LLMs) that enables straightforward extension of diverse RL algorithms and seamless integration with existing LLM infrastructure, within a distributed Ray cluster managed by SageMaker training jobs. We walk through the complete implementation, coverin
  12. Beyond the Battlefield: Threats to the Defense Industrial Base (cloud.google.com, 2026-02-10T14:00:00)
    Score: 13.217
    Introduction In modern warfare, the front lines are no longer confined to the battlefield; they extend directly into the servers and supply chains of the industry that safeguards the nation. Today, the defense sector faces a relentless barrage of cyber operations conducted by state-sponsored actors and criminal groups alike. In recent years, Google Threat Intelligence Group (GTIG) has observed several distinct areas of focus in adversarial targeting of the defense industrial base (DIB). While no
  13. Lifecycle-Integrated Security for AI-Cloud Convergence in Cyber-Physical Infrastructure (arxiv.org, 2026-03-02T05:00:00)
    Score: 12.79
    arXiv:2602.23397v1 Announce Type: new
    Abstract: The convergence of Artificial Intelligence (AI) inference pipelines with cloud infrastructure creates a dual attack surface where cloud security standards and AI governance frameworks intersect without unified enforcement mechanisms. AI governance, cloud security, and industrial control system standards intersect without unified enforcement, leaving hybrid deployments exposed to cross-layer attacks that threaten safety-critical operations. This pa
  14. Log Probability Tracking of LLM APIs (arxiv.org, 2026-03-02T05:00:00)
    Score: 12.49
    arXiv:2512.03816v2 Announce Type: replace-cross
    Abstract: When using an LLM through an API provider, users expect the served model to remain consistent over time, a property crucial for the reliability of downstream applications and the reproducibility of research. Existing audit methods are too costly to apply at regular time intervals to the wide range of available LLM APIs. This means that model updates are left largely unmonitored in practice. In this work, we show that while LLM log probab
  15. The Post-RAMP Era: Allegations, Fragmentation, and the Rebuilding of the Ransomware Underground (www.rapid7.com, 2026-02-25T13:56:38)
    Score: 11.388
    Executive summary The January 2026 seizure of RAMP disrupted a major ransomware coordination hub, but it did not dismantle the ecosystem behind it. Instead, it destabilized trust and accelerated fragmentation across the underground. Rather than consolidating around a single successor, ransomware actors are redistributing across both gated platforms like T1erOne and accessible forums such as Rehub. This shift reflects adaptation, not decline. For defenders, visibility into centralized coordinatio
  16. Efficiently serve dozens of fine-tuned models with vLLM on Amazon SageMaker AI and Amazon Bedrock (aws.amazon.com, 2026-02-25T20:56:13)
    Score: 11.358
    In this post, we explain how we implemented multi-LoRA inference for Mixture of Experts (MoE) models in vLLM, describe the kernel-level optimizations we performed, and show you how you can benefit from this work. We use GPT-OSS 20B as our primary example throughout this post.
  17. Building intelligent event agents using Amazon Bedrock AgentCore and Amazon Bedrock Knowledge Bases (aws.amazon.com, 2026-02-25T19:51:08)
    Score: 11.347
    This post demonstrates how to quickly deploy a production-ready event assistant using the components of Amazon Bedrock AgentCore. We'll build an intelligent companion that remembers attendee preferences and builds personalized experiences over time, while Amazon Bedrock AgentCore handles the heavy lifting of production deployment: Amazon Bedrock AgentCore Memory for maintaining both conversation context and long-term preferences without custom storage solutions, Amazon Bedrock AgentCore Ide
  18. Scaling data annotation using vision-language models to power physical AI systems (aws.amazon.com, 2026-02-23T23:20:37)
    Score: 10.205
    In this post, we examine how Bedrock Robotics tackles this challenge. By joining the AWS Physical AI Fellowship, the startup partnered with the AWS Generative AI Innovation Center to apply vision-language models that analyze construction video footage, extract operational details, and generate labeled training datasets at scale, to improve data preparation for autonomous construction equipment.
  19. Global cross-Region inference for latest Anthropic Claude Opus, Sonnet and Haiku models on Amazon Bedrock in Thailand, Malaysia, Singapore, Indonesia, and Taiwan (aws.amazon.com, 2026-02-24T15:38:22)
    Score: 10.067
    In this post, we are excited to announce the availability of Global CRIS for customers in Thailand, Malaysia, Singapore, Indonesia, and Taiwan, walk through the technical implementation steps, and cover quota-management best practices to maximize the value of your AI inference deployments. We also provide guidance on best practices for production deployments.
  20. Introducing Amazon Bedrock global cross-Region inference for Anthropic’s Claude models in the Middle East Regions (UAE and Bahrain) (aws.amazon.com, 2026-02-24T15:33:51)
    Score: 10.066
    We’re excited to announce the availability of Anthropic’s Claude Opus 4.6, Claude Sonnet 4.6, Claude Opus 4.5, Claude Sonnet 4.5, and Claude Haiku 4.5 through Amazon Bedrock global cross-Region inference for customers operating in the Middle East. In this post, we guide you through the capabilities of each Anthropic Claude model variant, the key advantages of global cross-Region inference including improved resilience, real-world use cases you can implement, and a code example to help you start
  21. Trump Orders All Federal Agencies to Phase Out Use of Anthropic Technology (www.securityweek.com, 2026-02-27T21:30:55)
    Score: 9.94
    OpenAI and Google, along with Elon Musk’s xAI, also have contracts to supply their AI models to the military.
  22. Large model inference container – latest capabilities and performance enhancements (aws.amazon.com, 2026-02-26T17:45:59)
    Score: 9.564
    AWS recently released significant updates to the Large Model Inference (LMI) container, delivering comprehensive performance improvements, expanded model support, and streamlined deployment capabilities for customers hosting LLMs on AWS. These releases focus on reducing operational complexity while delivering measurable performance gains across popular model architectures.
  23. I've Seen This IP: A Practical Intersection Attack Against Tor Introduction Circuits and Hidden Services (arxiv.org, 2026-03-02T05:00:00)
    Score: 9.49
    arXiv:2602.23560v1 Announce Type: new
    Abstract: Tor onion services rely on long-lived introduction circuits to support anonymous rendezvous between clients and services. Although Tor includes some defenses against traffic analysis, the introduction protocol retains deterministic routing structure that can be leveraged by an adversary. We describe a practical intersection attack on Tor introduction circuits that can, over repeated interactions, identify each hop from the introduction point towar
  24. PLA for Drone RID Frames via Motion Estimation and Consistency Verification (arxiv.org, 2026-03-02T05:00:00)
    Score: 9.49
    arXiv:2602.23760v1 Announce Type: new
    Abstract: Drone Remote Identification (RID) plays a critical role in low-altitude airspace supervision, yet its broadcast nature and lack of cryptographic protection make it vulnerable to spoofing and replay attacks. In this paper, we propose a consistency verification-based physical-layer authentication (PLA) algorithm for drone RID frames. A RID-aware sensing and decoding module is first developed to extract communication-derived sensing parameters, inclu
  25. Cybersecurity of Teleoperated Quadruped Robots: A Systematic Survey of Vulnerabilities, Threats, and Open Defense Gaps (arxiv.org, 2026-03-02T05:00:00)
    Score: 9.49
    arXiv:2602.23404v1 Announce Type: cross
    Abstract: Teleoperated quadruped robots are increasingly deployed in safety-critical missions — industrial inspection, military reconnaissance, and emergency response — yet the security of their communication and control infrastructure remains insufficiently characterized. Quadrupeds present distinct security challenges arising from dynamic stability constraints, gait-dependent vulnerability windows, substantial kinetic energy, and elevated operator cog
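Item 14's core idea is that per-token log probabilities for a fixed prompt act as a cheap fingerprint of a served model, so drift in those values signals an unannounced model update. A minimal sketch of that comparison, assuming you have already collected reference and current log-prob vectors from the API; the function names and the 0.05 threshold are illustrative, not from the paper:

```python
def logprob_drift(reference: list[float], current: list[float]) -> float:
    """Mean absolute difference between two per-token log-prob vectors
    recorded for the same fixed prompt at different times."""
    if len(reference) != len(current):
        raise ValueError("log-prob vectors must align token-for-token")
    n = len(reference)
    return sum(abs(r - c) for r, c in zip(reference, current)) / n

def model_changed(reference: list[float],
                  current: list[float],
                  threshold: float = 0.05) -> bool:
    """Flag a likely model update when drift exceeds the threshold.

    The threshold should be calibrated against normal run-to-run noise
    (e.g., nondeterministic serving) before being used in monitoring.
    """
    return logprob_drift(reference, current) > threshold
```

Because this only needs log probabilities for a handful of fixed prompts, it can be scheduled far more often than a full benchmark re-run.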
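Item 11 trains CodeFu-7B with Group Relative Policy Optimization (GRPO), whose defining step is scoring each sampled completion against the other completions in its own sampling group rather than against a learned value function. A minimal sketch of that group-relative advantage computation, assuming scalar rewards per completion; the epsilon guard is a standard numerical-stability choice, not a detail from the AWS post:

```python
import statistics

def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Group-relative advantages used by GRPO.

    Each completion's reward is z-scored against its sampling group:
    advantage_i = (r_i - mean(group)) / (std(group) + eps), so the
    policy update needs no separate critic network.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)  # population std over the group
    return [(r - mean) / (std + eps) for r in rewards]
```

With identical rewards the advantages collapse to zero, which is why GRPO depends on reward functions (such as test-case pass rates in competitive programming) that discriminate between completions in the same group.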

Auto-generated 2026-03-02
