Weekly Threat Report 2025-11-10

Weekly Threat Intelligence Summary

Top 10 General Cyber Threats

Generated 2025-11-10T05:00:05.583851+00:00

  1. BRONZE BUTLER exploits Japanese asset management software vulnerability (news.sophos.com, 2025-10-30T17:55:55)
    Score: 11.456
    The threat group targeted a LANSCOPE zero-day vulnerability (CVE-2025-61932).
  2. Fake CAPTCHA sites now have tutorial videos to help victims install malware (www.malwarebytes.com, 2025-11-07T15:01:33)
    Score: 10.27
    ClickFix campaign pages now have embedded videos to helpfully walk users through the process of infecting their own systems.
  3. 5 ways to strengthen your firewall and endpoint’s defenses against ransomware (news.sophos.com, 2025-11-05T18:33:31)
    Score: 8.461
    Sophos Firewall uses intelligent TLS inspection and AI-powered analysis to reveal hidden threats — without compromising performance.
  4. Windows Server Update Services (WSUS) vulnerability abused to harvest sensitive data (news.sophos.com, 2025-10-29T19:46:50)
    Score: 8.303
    Exploitation of CVE-2025-59287 began after public disclosure and the release of proof-of-concept code.
  5. Ransomware Detection With Real-Time Data (www.recordedfuture.com, 2025-11-04T00:00:00)
    Score: 8.165
    Learn why timely, relevant data is crucial for effective ransomware detection and what you can do to help prevent ransomware attacks and safeguard your organization.
  6. September 2025 CVE Landscape (www.recordedfuture.com, 2025-10-17T00:00:00)
    Score: 7.965
    Discover the top 16 exploited vulnerabilities from September 2025, including critical Cisco and TP-Link flaws, malware-linked CVEs, and actionable threat intelligence from Recorded Future’s Insikt Group.
  7. Malwarebytes scores 100% in AV-Comparatives Stalkerware Test 2025 (www.malwarebytes.com, 2025-11-07T18:03:08)
    Score: 7.791
    AV-Comparatives put 13 top Android security apps to the test against stalkerware. Malwarebytes caught them all.
  8. Android malware steals your card details and PIN to make instant ATM withdrawals (www.malwarebytes.com, 2025-11-06T16:48:11)
    Score: 7.615
    Forget card skimmers—this Android malware uses your phone’s NFC to help criminals pull cash straight from ATMs.
  9. Take control of your privacy with updates on Malwarebytes for Windows (www.malwarebytes.com, 2025-11-06T16:40:02)
    Score: 7.614
    Malwarebytes for Windows introduces powerful privacy controls, so you get to decide how Microsoft uses your data—all from one simple screen.
  10. Faster, safer, stronger: Sophos Firewall v22 security enhancements (news.sophos.com, 2025-11-05T19:01:52)
    Score: 7.464
    Hardened kernel, remote integrity monitoring, an enhanced anti-malware engine, and more.

Top 25 AI / LLM-Related Threats

Generated 2025-11-10T06:00:15.681309+00:00

  1. GTIG AI Threat Tracker: Advances in Threat Actor Usage of AI Tools (cloud.google.com, 2025-11-05T14:00:00)
    Score: 37.389
    Executive Summary: Based on recent analysis of the broader threat landscape, Google Threat Intelligence Group (GTIG) has identified a shift that occurred within the last year: adversaries are no longer leveraging artificial intelligence (AI) just for productivity gains; they are deploying novel AI-enabled malware in active operations. This marks a new operational phase of AI abuse, involving tools that dynamically alter behavior mid-execution. This report serves as an update to our January 2025
  2. Preparing for Threats to Come: Cybersecurity Forecast 2026 (cloud.google.com, 2025-11-04T14:00:00)
    Score: 18.351
    Every November, we make it our mission to equip organizations with the knowledge needed to stay ahead of threats we anticipate in the coming year. The Cybersecurity Forecast 2026 report, released today, provides comprehensive insights to help security leaders and teams prepare for those challenges. This report does not contain "crystal ball" predictions. Instead, our forecasts are built on real-world trends and data we are observing right now. The information contained in the report co
  3. S²LM: Towards Semantic Steganography via Large Language Models (arxiv.org, 2025-11-10T05:00:00)
    Score: 17.79
    arXiv:2511.05319v1 Announce Type: cross
    Abstract: Although steganography has made significant advancements in recent years, it still struggles to embed semantically rich, sentence-level information into carriers. However, in the era of AIGC, the capacity of steganography is more critical than ever. In this work, we present Sentence-to-Image Steganography, an instance of Semantic Steganography, a novel task that enables the hiding of arbitrary sentence-level messages within a cover image. Furthe
  4. Retrieval-Augmented Review Generation for Poisoning Recommender Systems (arxiv.org, 2025-11-10T05:00:00)
    Score: 15.49
    arXiv:2508.15252v2 Announce Type: replace
    Abstract: Recent studies have shown that recommender systems (RSs) are highly vulnerable to data poisoning attacks, where malicious actors inject fake user profiles, including a group of well-designed fake ratings, to manipulate recommendations. Due to security and privacy constraints in practice, attackers typically possess limited knowledge of the victim system and thus need to craft profiles that have transferability across black-box RSs. To maximize
  5. Trustworthiness Calibration Framework for Phishing Email Detection Using Large Language Models (arxiv.org, 2025-11-10T05:00:00)
    Score: 14.79
    arXiv:2511.04728v1 Announce Type: new
    Abstract: Phishing emails continue to pose a persistent challenge to online communication, exploiting human trust and evading automated filters through realistic language and adaptive tactics. While large language models (LLMs) such as GPT-4 and LLaMA-3-8B achieve strong accuracy in text classification, their deployment in security systems requires assessing reliability beyond benchmark performance. To address this, this study introduces the Trustworthiness
  6. XBreaking: Understanding how LLMs security alignment can be broken (arxiv.org, 2025-11-10T05:00:00)
    Score: 14.79
    arXiv:2504.21700v3 Announce Type: replace
    Abstract: Large Language Models are fundamental actors in the modern IT landscape dominated by AI solutions. However, security threats associated with them might prevent their reliable adoption in critical application scenarios such as government organizations and medical institutions. For this reason, commercial LLMs typically undergo a sophisticated censoring mechanism to eliminate any harmful output they could possibly produce. These mechanisms maint
  7. P-MIA: A Profiled-Based Membership Inference Attack on Cognitive Diagnosis Models (arxiv.org, 2025-11-10T05:00:00)
    Score: 12.49
    arXiv:2511.04716v1 Announce Type: new
    Abstract: Cognitive diagnosis models (CDMs) are pivotal for creating fine-grained learner profiles in modern intelligent education platforms. However, these models are trained on sensitive student data, raising significant privacy concerns. While membership inference attacks (MIA) have been studied in various domains, their application to CDMs remains a critical research gap, leaving their privacy risks unquantified. This paper is the first to systematicall
  8. AI Agentic Vulnerability Injection And Transformation with Optimized Reasoning (arxiv.org, 2025-11-10T05:00:00)
    Score: 12.49
    arXiv:2508.20866v3 Announce Type: replace
    Abstract: The increasing complexity of software systems and the sophistication of cyber-attacks have underscored the critical need for effective automated vulnerability detection and repair systems. Data-driven approaches using deep learning models show promise but critically depend on the availability of large, accurately labeled datasets. Yet existing datasets either suffer from noisy labels, limited range of vulnerabilities, or fail to reflect vulner
  9. Democratizing AI: How Thomson Reuters Open Arena supports no-code AI for every professional with Amazon Bedrock (aws.amazon.com, 2025-11-07T21:51:22)
    Score: 11.843
    In this blog post, we explore how TR addressed key business use cases with Open Arena, a highly scalable and flexible no-code AI solution powered by Amazon Bedrock and other AWS services such as Amazon OpenSearch Service, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and AWS Lambda. We'll explain how TR used AWS services to build this solution, including how the architecture was designed, the use cases it solves, and the business profiles that use it.
  10. Jailbreaking in the Haystack (arxiv.org, 2025-11-10T05:00:00)
    Score: 11.79
    arXiv:2511.04707v1 Announce Type: new
    Abstract: Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like computer-use agents. Yet, the safety implications of these extended contexts remain unclear. To bridge this gap, we introduce NINJA (short for Needle-in-haystack jailbreak attack), a method that jailbreaks aligned LMs by appending benign, model-generated content to harmful user goals. Critical to our metho
  11. Crypto wasted: BlueNoroff’s ghost mirage of funding and jobs (securelist.com, 2025-10-28T03:00:11)
    Score: 10.375
    Kaspersky GReAT experts dive deep into the BlueNoroff APT's GhostCall and GhostHire campaigns. Extensive research detailing multiple malware chains targeting macOS, including a stealer suite, fake Zoom and Microsoft Teams clients and ChatGPT-enhanced images.
  12. SesameOp: Novel backdoor uses OpenAI Assistants API for command and control (www.microsoft.com, 2025-11-03T17:00:00)
    Score: 9.942
    Microsoft Incident Response – Detection and Response Team (DART) researchers uncovered a new backdoor that is notable for its novel use of the OpenAI Assistants Application Programming Interface (API) as a mechanism for command-and-control (C2) communications. Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment (a hedged detection sketch follows this list). To do this, a c
  13. Keys to the Kingdom: A Defender's Guide to Privileged Account Monitoring (cloud.google.com, 2025-10-28T14:00:00)
    Score: 9.884
    Written by: Bhavesh Dhake, Will Silverstone, Matthew Hitchcock, Aaron Fletcher The Criticality of Privileged Access in Today's Threat Landscape Privileged access stands as the most critical pathway for adversaries seeking to compromise sensitive systems and data. Its protection is not only a best practice, it is a fundamental imperative for organizational resilience. The increasing complexity of modern IT environments, exacerbated by rapid cloud migration, has led to a surge in both human a
  14. The 5 generative AI security threats you need to know about detailed in new e-book (www.microsoft.com, 2025-10-30T18:00:00)
    Score: 9.8
    In this blog post, we’ll highlight the key themes covered in the e-book, including the challenges organizations face, the top generative AI threats to organizations, and how companies can enhance their security posture to meet the dangers of today’s unpredictable AI environments.
  15. Would you sext ChatGPT? (Lock and Code S06E22) (www.malwarebytes.com, 2025-11-03T15:30:23)
    Score: 9.628
    This week on the Lock and Code podcast, we speak with Deb Donig about OpenAI's stated desire to release "erotica" on ChatGPT.
  16. The Future of Fully Homomorphic Encryption System: from a Storage I/O Perspective (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.04946v1 Announce Type: new
    Abstract: Fully Homomorphic Encryption (FHE) allows computations to be performed on encrypted data, significantly enhancing user privacy. However, the I/O challenges associated with deploying FHE applications remain understudied. We analyze the impact of storage I/O on the performance of FHE applications and summarize key lessons from the status quo. Key results include that storage I/O can degrade the performance of ASICs by as much as 357× and red
  17. Chasing One-day Vulnerabilities Across Open Source Forks (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.05097v1 Announce Type: new
    Abstract: Tracking vulnerabilities inherited from third-party open-source components is a well-known challenge, often addressed by tracing the threads of dependency information. However, vulnerabilities can also propagate through forking: a repository forked after the introduction of a vulnerability, but before it is patched, may remain vulnerable in the fork well after being fixed in the original project. Current approaches for vulnerability analysis lack
  18. Quantifying the Risk of Transferred Black Box Attacks (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.05102v1 Announce Type: new
    Abstract: Neural networks have become pervasive across various applications, including security-related products. However, their widespread adoption has heightened concerns regarding vulnerability to adversarial attacks. With emerging regulations and standards emphasizing security, organizations must reliably quantify risks associated with these attacks, particularly regarding transferred adversarial attacks, which remain challenging to evaluate accurately.
  19. Cybersecurity AI in OT: Insights from an AI Top-10 Ranker in the Dragos OT CTF 2025 (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.05119v1 Announce Type: new
    Abstract: Operational Technology (OT) cybersecurity increasingly relies on rapid response across malware analysis, network forensics, and reverse engineering disciplines. We examine the performance of Cybersecurity AI (CAI), powered by the alias1 model, during the Dragos OT CTF 2025 — a 48-hour industrial control system (ICS) competition with more than 1,000 teams. Using CAI telemetry and official leaderboard data, we quantify CAI's trajector
  20. A Secured Intent-Based Networking (sIBN) with Data-Driven Time-Aware Intrusion Detection (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.05133v1 Announce Type: new
    Abstract: While Intent-Based Networking (IBN) promises operational efficiency through autonomous and abstraction-driven network management, a critical unaddressed issue lies in IBN's implicit trust in the integrity of intent ingested by the network. This inherent assumption of data reliability creates a blind spot exploitable by Man-in-the-Middle (MitM) attacks, where an adversary intercepts and alters intent before it is enacted, compelling the networ
  21. SmartSecChain-SDN: A Blockchain-Integrated Intelligent Framework for Secure and Efficient Software-Defined Networks (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.05156v1 Announce Type: new
    Abstract: With more and more existing networks being transformed to Software-Defined Networking (SDN), they need to be more secure and demand smarter ways of traffic control. This work, SmartSecChain-SDN, is a platform that combines machine learning based intrusion detection, blockchain-based storage of logs, and application-awareness-based priority in SDN networks. To detect network intrusions in a real-time, precision and low-false positives setup, the fr
  22. Optimization of Information Reconciliation for Decoy-State Quantum Key Distribution over a Satellite Downlink Channel (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2511.05196v1 Announce Type: cross
    Abstract: Quantum key distribution (QKD) is a cryptographic solution that leverages the properties of quantum mechanics to be resistant and secure even against an attacker with unlimited computational power. Satellite-based links are important in QKD because they can reach distances that the best fiber systems cannot. However, links between satellites in low Earth orbit (LEO) and ground stations have a duration of only a few minutes, resulting in the gene
  23. CompressionAttack: Exploiting Prompt Compression as a New Attack Surface in LLM-Powered Agents (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2510.22963v2 Announce Type: replace
    Abstract: LLM-powered agents often use prompt compression to reduce inference costs, but this introduces a new security risk. Compression modules, which are optimized for efficiency rather than safety, can be manipulated by adversarial inputs, causing semantic drift and altering LLM behavior. This work identifies prompt compression as a novel attack surface and presents CompressionAttack, the first framework to exploit it. CompressionAttack includes two
  24. A Multi-Stage Automated Online Network Data Stream Analytics Framework for IIoT Systems (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2210.01985v2 Announce Type: replace-cross
    Abstract: Industry 5.0 aims at maximizing the collaboration between humans and machines. Machines are capable of automating repetitive jobs, while humans handle creative tasks. As a critical component of Industrial Internet of Things (IIoT) systems for service delivery, network data stream analytics often encounter concept drift issues due to dynamic IIoT environments, causing performance degradation and automation difficulties. In this paper, we
  25. Measuring Ransomware Lateral Movement Susceptibility via Privilege-Weighted Adjacency Matrix Exponentiation (arxiv.org, 2025-11-10T05:00:00)
    Score: 9.49
    arXiv:2508.21005v2 Announce Type: replace-cross
    Abstract: Ransomware impact hinges on how easily an intruder can move laterally and spread to the maximum number of assets. We present a graph-theoretic formulation that casts lateral movement as a path-closure problem over a probability semiring to measure lateral-movement susceptibility and estimate blast radius. We build a directed multigraph where vertices represent assets and edges represent reachable services (e.g., RDP/SSH) between them. We
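
A small worked example may help make the formulation in item 25 concrete. The Python sketch below builds a toy privilege-weighted adjacency matrix and iterates a closure over it in a max-product semiring, so that each entry approximates the probability of the most likely lateral-movement path between two assets; a blast radius is then estimated by counting assets reachable above a threshold. The asset graph, the edge probabilities, the threshold, and the choice of a max-product (most-likely-path) semiring are illustrative assumptions, not details taken from the paper.

import numpy as np

# Toy environment with four assets. P[i, j] is the assumed probability that a
# foothold on asset i can compromise asset j over an exposed service
# (e.g., RDP/SSH), already weighted by the privileges available on asset i.
# All numbers are illustrative.
P = np.array([
    [0.0, 0.8, 0.3, 0.0],
    [0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.0, 0.0],
])

def maxprod_step(R, P):
    """One relaxation step in the (max, *) semiring: extend the best known
    compromise probability for each pair of assets by one more hop."""
    n = R.shape[0]
    out = R.copy()
    for i in range(n):
        for j in range(n):
            via = np.max(R[i, :] * P[:, j])   # best path i -> k -> j
            out[i, j] = max(out[i, j], via)
    return out

def closure(P, tol=1e-9):
    """Iterate to a fixpoint; R[i, j] then approximates the probability of the
    most likely multi-hop lateral-movement path from asset i to asset j."""
    R = P.copy()
    while True:
        nxt = maxprod_step(R, P)
        if np.max(np.abs(nxt - R)) < tol:
            return nxt
        R = nxt

R = closure(P)
foothold = 0                                      # assume asset 0 is compromised first
blast_radius = int(np.sum(R[foothold, :] > 0.5))  # assets reachable with p > 0.5
print(R[foothold, :], "blast radius:", blast_radius)

Because path weights are products of probabilities no greater than one, cycles never improve a path, so the iteration converges; on this toy graph the closure settles after a few steps.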

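Item 12 above describes SesameOp, a backdoor that tunnels its command-and-control traffic through the OpenAI Assistants API. One practical consequence for defenders is that unexpected egress to the OpenAI API from hosts that have no business calling it becomes a useful hunting signal. The Python sketch below illustrates that idea against a hypothetical proxy/egress log; the log path, column names, and allowlist are assumptions for illustration, and this is not the detection logic published by Microsoft.

import csv
from collections import defaultdict

# Hypothetical egress/proxy log with columns: timestamp, src_host, process, dest_host.
LOG_PATH = "egress_proxy.csv"                     # assumed CSV export with a header row
SUSPECT_DOMAINS = {"api.openai.com"}              # C2 endpoint reported for SesameOp
EXPECTED_AI_CLIENTS = {"approved-llm-gateway"}    # hosts allowed to call the API (assumption)

def find_unexpected_openai_egress(log_path):
    """Group connections to the OpenAI API made by hosts that are not on the
    allowlist; a simple hunting heuristic, not a definitive detection."""
    hits = defaultdict(list)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            dest = row["dest_host"].strip().lower()
            src = row["src_host"].strip().lower()
            if dest in SUSPECT_DOMAINS and src not in EXPECTED_AI_CLIENTS:
                hits[src].append((row["timestamp"], row["process"]))
    return hits

if __name__ == "__main__":
    for host, events in find_unexpected_openai_egress(LOG_PATH).items():
        print(f"{host}: {len(events)} unexpected connection(s) to the OpenAI API")
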
Auto-generated 2025-11-10
