Sophos X-Ops

3.1K Followers
14 Following
807 Posts
A task force composed of our SophosLabs, SecOps, MDR, and SophosAI teams working together towards one goal: protecting our customers.
The Sophos X-Ops bloghttps://news.sophos.com/en-us/category/threat-research/

For example: in Dec 2025, we observed a ClickFix campaign that leveraged shared ChatGPT conversations containing malicious links, leading to MacSync infections.

A more recent campaign, in February, featured an updated MacSync variant – a multistage loader-as-a-service model, using shell-based loaders, API key-gated C2 infrastructure, dynamic AppleScript payloads, and aggressive in-memory execution.

Shell-based implementations provide threat actors with greater effectiveness and evasive capabilities, compared to native MachO binaries.

ClickFix is an increasingly common social engineering technique, which threat actors use to trick users into installing malicious software on their devices. Historically, it’s been aimed at Windows users – but recently we’ve seen three ClickFix campaigns targeting macOS users.

These campaigns, which involve the MacSync infostealer, suggest that approaches and tactics are evolving – possibly in response to investigation and disruption efforts, but also perhaps reflecting wider social and technological trends.

A similar trend holds for Vicuna-7B, where the ASR drops from 100% to 1.35% under salting. These results demonstrate that our approach effectively eliminates the subset of jailbreaks that remain robust under traditional defenses, outperforming both parameter-based and prompt-based strategies.

We evaluated LLM salting against the Greedy Coordinate Gradient (GCG) jailbreak attack. Experiments on LLaMA2-7B-Chat and Vicuna-7B showed that salting consistently breaks intra-model transferability, while preserving the model’s performance on benign prompts.

We seeded our evaluation with 300 GCG jailbreak prompts that achieve a 100% attack success rate (ASR) on the unmodified baseline models. We then assessed whether these attacks remain effective under a range of defenses, and whether our proposed salting method can eliminate the subset of jailbreaks that persist.

For LLaMA-2-7B, we observed that standard finetuning and system prompt changes reduce ASR only partially, bringing it down to approximately 40–60%. In contrast, salting reduces ASR from 100% to just 2.75%.

91% of #ransomware incidents involve data theft. Is your organization prepared?

Explore the latest edition of our Threat Intelligence Executive Report – Volume 2025, Number 4, packed with actionable insights from our CTU Research Team.

🔹 Discover why aligning threat‑group naming conventions matters—for stronger attribution, clearer risk modeling, and smarter responses.
🔹 Understand the implications of tensions in the Middle East tensions on cyber risks, particularly for U.S. and regional companies.
🔹 See how law enforcement’s tactics are causing ongoing disruption, and what that means for your threat‑detection strategy.

Stay ahead with expert guidance from Sophos CTU.
Get the report ➡️ https://bit.ly/467noUq

Sophos analysts are investigating a new infection chain for the GOLD BLADE cybercriminal group’s custom RedLoader malware, which initiates command and control (C2) communications. The threat actors leverage a LNK file to remotely execute and sideload a benign executable, which loads the RedLoader stage 1 payload that is hosted on GOLD BLADE infrastructure. 1/2

AI integration may seem very recent, but has been woven into the fabric of cybersecurity for many years. However, there are still improvements to be made. In our industry, models are often deployed on a massive scale, processing billions of events a day.

Large language models (LLMs) – the models that usually grab the headlines – perform well, and are popular, but are ill-suited for this application, requiring extensive GPU infrastructure and significant amounts of memory, even after optimization techniques.

Since the computational demands of maintaining LLMs make them impractical for many cybersecurity applications – especially those requiring real-time or large-scale processing – small, efficient models can play a critical role.

Many tasks in cybersecurity do not require generative solutions and can instead be solved through classification with small models – which are cost-effective and capable of running on endpoint devices or within a cloud infrastructure.

A key question when it comes to small models is their performance, which is bounded by the quality and scale of the training data. As a cybersecurity vendor, we have a surfeit of data, but there is always the question of how to best use it.

This is where LLMs have a part to play. The idea is simple yet transformative: use big models intermittently and strategically to train small models effectively. LLMs are good for extracting useful signals from data at scale, modifying existing labels, and providing new ones.

Merging the advanced learning capabilities of large, expensive models with the high efficiency of small models can create fast, commercially viable, and effective solutions.

In a new blog out today, Sophos looks at 3 methods key to this approach: knowledge distillation, semi-supervised learning, and synthetic data generation. We share the results of experiments, including command-line and website productivity classification and fake login pages.

When we analyzed the backdoors, we ended up down a rabbithole of multiple variants, obfuscation, convoluted infection chains, and identifiers. The upshot is that a threat actor seems to be creating backdoored repos at scale, and may have been doing so for some time.

We’ve previously looked into the niche world of threat actors targeting each other, so we investigated further, and found 133 backdoored repos, most linked to the same threat actor via an email address. Some repos claimed to be malware, others gaming cheats.

The threat actor appears to have gone to some lengths to make their backdoored repos seem legitimate – including multiple accounts and contributors, and automated commits.

We often get queries from customers asking if they’re protected against certain malware variants. A recent question seemed no different – a customer wanted to know if we had protections against ‘Sakura RAT,’ an open-source malware project hosted on GitHub.

We looked into Sakura RAT, and quickly realized two things. First, the RAT itself was likely of little threat to our customer. Second, the repository was backdoored, and intended to target people who compiled the RAT – with RATs and infostealers.