[Heads Up] Crafty New Phishing Attacks Abuse Free Cloudflare Pages

Malwarebytes warns that threat actors are abusing the free Cloudflare Pages service to host phishing portals, helping the phishing sites avoid detection by security scanners. The attackers are building fake login pages impersonating banking, insurance and healthcare entities. The pages are designed to harvest credentials as well as security questions and multifactor authentication codes.

“From the victim’s point of view, nothing seems unusual beyond an odd-looking link and a failed sign-in,” the researchers write. “For the attackers, the mix of free hosting, compromised redirectors and Telegram-based exfiltration gives them speed, scale and resilience.

“The bigger trend behind this campaign is clear: by leaning on free web hosting and mainstream messaging platforms, phishing actors avoid many of the choke points defenders used to rely on, like single malicious IPs or obviously shady domains. Spinning up new infrastructure is cheap, fast and largely invisible to victims.”

Malwarebytes offers the following advice to help users avoid falling for these attacks:
- Always check the full domain name, not just the logo or page design. Banks and health insurers don’t host sign-in pages on generic developer domains like *.pages[.]dev, *.netlify[.]app, or on strange paths on unrelated sites (a minimal automated version of this check is sketched after this list).
- Don’t click sign-in or benefit links in unsolicited emails or texts. Instead, go to the institution’s site via a bookmark or by typing the address yourself.
- Treat surprise ‘extra security’ prompts after a failed login with caution, especially if they ask for answers to security questions, card numbers or email passwords.
- If anything about the link, timing or requested information feels wrong, stop and contact the provider using trusted contact information from their official site.
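For defenders who want to automate the first check, here is a minimal Python sketch that flags sign-in URLs hosted on the generic developer domains named above. The suffix list is illustrative only (the two domains Malwarebytes cites, with the defanging brackets removed) and would need to be much larger for real mail- or web-gateway use.

```python
# Minimal sketch: flag sign-in URLs hosted on generic free developer domains.
# The suffix list is illustrative, not exhaustive; a real deployment would
# maintain a far larger list of abused free-hosting services.
from urllib.parse import urlparse

SUSPECT_SUFFIXES = (".pages.dev", ".netlify.app")  # domains named in the campaign

def looks_like_free_hosted_login(url: str) -> bool:
    """Return True if the URL's hostname ends with a known free-hosting suffix."""
    host = (urlparse(url).hostname or "").lower()
    return host.endswith(SUSPECT_SUFFIXES)

# A "bank" sign-in page on Cloudflare Pages should raise suspicion:
print(looks_like_free_hosted_login("https://secure-bank-login.pages.dev/signin"))  # True
print(looks_like_free_hosted_login("https://www.example-bank.com/signin"))         # False
```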
WeChat Phishing Attacks a Growing Threat Outside China

By The KnowBe4 Threat Lab

“Super-app” WeChat offers a wealth of functionality, from instant messaging, text and voice messaging, and video calls to mobile payments, ride booking, ordering food deliveries, paying bills and even accessing government services. Provided by Chinese technology conglomerate Tencent, WeChat has become deeply integrated into daily life in China, and its usage has been spreading globally. The app now boasts over 1.4 billion active users (24.8% of total internet users).

As well as appealing to legitimate users, this breadth of functionality (particularly WeChat’s mobile payment capability) also makes the app attractive to cybercriminals. An initial phishing email is used to move victims into WeChat, where social engineering attacks and payment fraud can occur in an environment that is more difficult to monitor and secure.

Much like WeChat usage, these attacks are on the rise outside of China. In 2024, only 0.04% of phishing emails detected by KnowBe4 Defend in the U.S. and EMEA (including the UK) contained WeChat “Add Contact” QR codes. Since the start of 2025, this has risen to 1.43%, spiking at 5.1% in November. While the overall volume remains relatively low, this represents a 3,475% increase across these regions. Additionally, 61.7% of these phishing emails were written in English and a further 6.5% in languages other than Chinese or English, indicating a growing and targeted diversification.

This shift reflects law enforcement crackdowns on domestic and near-border fraud networks in China, which have pushed scam operators to branch out. GenAI has been a key accelerator, removing the need for language proficiency to send a convincing phishing email or instant message. Both global LLMs and domestic Chinese models (such as Baidu’s ERNIE Bot) make it easy for cybercriminals to create natural, high-quality phishing emails in non-Chinese languages.
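Defenders can hunt for this lure pattern directly. The sketch below, which assumes the third-party pyzbar and Pillow libraries, decodes QR codes found in an email's image attachments and flags payloads that point at WeChat links; the weixin.qq.com and u.wechat.com patterns are illustrative assumptions, not a definitive list of WeChat “Add Contact” URL formats.

```python
# Minimal sketch: flag possible WeChat "Add Contact" QR codes in email image
# attachments. Requires the third-party pyzbar and Pillow packages. The URL
# patterns below are illustrative guesses, not a complete list of WeChat formats.
from PIL import Image
from pyzbar.pyzbar import decode

WECHAT_HINTS = ("weixin.qq.com", "u.wechat.com")  # assumed link patterns

def flag_wechat_qr(image_path: str) -> list[str]:
    """Return decoded QR payloads that look like WeChat contact links."""
    hits = []
    for symbol in decode(Image.open(image_path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        if any(hint in payload.lower() for hint in WECHAT_HINTS):
            hits.append(payload)
    return hits

# Example usage against a saved attachment:
for url in flag_wechat_qr("attachment.png"):
    print("Suspicious WeChat QR payload:", url)
```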
Unmasking the Deepfake Threat: A Game-Changer for Reducing Human Risk

By Roger Grimes

Today, anyone can find a picture of absolutely anybody, and it is not difficult to find a sample of their voice. By combining the two, it is shockingly easy to create a realistic AI deepfake video of that person. The video may not be perfect, and an experienced AI deepfake enthusiast might spot signs that it is not real, but it will be good enough to fool 99% of people.

Cybercriminals have been creating and using AI-enabled deepfake technologies to socially engineer people since early 2024. Early on it was much harder to do, and scammers used it only on selected, high-value targets, as in this $25 million heist. Those days are gone: AI deepfake tools and services are now much more accessible. There are literally hundreds on the internet, and most of them have free versions available. It will take the average person longer to sign up for the free account than to make their first realistic, AI-enabled deepfake.

AI deepfakes are increasingly being deployed by cybercriminals. A 2025 Gartner survey of cybersecurity leaders revealed a significant threat: 62% of organizations reported experiencing a deepfake attack involving social engineering or automated process exploitation, while 32% faced attacks against AI applications leveraging malicious application prompts. In addition, chatbot assistants are susceptible to various adversarial prompting methods in which attackers manipulate large language models (LLMs) or multimodal models to produce biased or harmful output. Experts expect that by the end of 2026, these tactics will become commonplace against traditional, everyday targets.
Social Engineering and Ransomware Were Top Threats in H2 2025

ESET has published its Threat Report for the second half of 2025, finding that threat actors of all kinds are now using AI tools, to varying degrees, to assist their campaigns. The use of AI to improve social engineering attacks remains the most common application of these tools.

Juraj Jánošík, ESET’s Director of Automated Systems and Intelligent Solutions, stated, “We expect direct use of AI for generating malware and scripts to remain limited and specific, with the real transformation in the threat landscape happening in the area of social engineering.

“The most significant challenge will be the continuous surge in high-quality, AI-generated attack vectors, such as convincing deepfakes, emails and ads, enabling even low-skilled attackers to orchestrate sophisticated scams at scale and at low cost.

“As demonstrated by 2025’s investment scams, attackers increasingly rely on the appearance of trustworthiness rather than genuine functionality, leveraging AI to mimic professional-grade presentations and interactions, making social engineering one of the primary battlegrounds in cyberdefense.”

Attackers are also using deepfakes in scam ads, which serve as lures for phishing operations. “Looking closer at the scam ads, there have been notable upgrades over the last year,” ESET says. “Deepfakes of popular personalities, used as initial hooks for phishing forms or websites, now use higher resolution, have significantly reduced unnatural movements and breathing, and have also improved their A/V sync. All of these changes make it more difficult for potential victims to spot the deception.”

The researchers also observed a significant increase in ransomware attacks, as threat actors continue to refine their tactics. “On the ransomware scene, victim numbers surpassed 2024 totals well before year’s end, with ESET Research projections pointing to a 40% year-over-year increase,” they write. “Akira and Qilin now dominate the ransomware-as-a-service market, while low-profile newcomer Warlock introduced innovative evasion techniques. EDR killers continued to proliferate, highlighting that endpoint detection and response tools remain a significant obstacle for ransomware operators.”
81% of Small Businesses Sustained a Cyber Incident Over the Past Year

81% of small businesses suffered a security or data breach over the past year, and 38% of these businesses were forced to raise their prices as a result, a report from the Identity Theft Resource Center (ITRC) has found. The report notes that external hackers have overtaken malicious insiders as the most common root cause of these incidents, a trend driven in part by AI-assisted social engineering attacks, which were cited as a root cause by more than 41% of victims.

“The emergence of AI as a primary attack vector aligns with extensive industry analysis on the weaponization of generative AI for creating hyper-realistic phishing emails, deepfake audio and video, and adaptive malware,” the report says. “These tools are effectively democratizing advanced attack capabilities that were once the domain of highly skilled actors. The primary advantage of a malicious insider has always been their intimate knowledge of internal processes, communication styles and organizational hierarchies, allowing them to bypass defenses through trust and familiarity.

“AI tools now allow external actors to replicate this advantage at scale.”

Users should be aware of this trend, as many of the red flags traditionally associated with social engineering, such as typos or odd grammar, will no longer be present. “Employee security training must be updated to address these new threats,” the report says. “Staff should be educated on the tell-tale signs of AI-generated content, such as subtle visual artifacts in deepfake videos, the lack of emotional nuance in a cloned voice, or the unnaturally perfect grammar of an AI-crafted email.

“Fostering a culture of healthy skepticism, where employees feel empowered to question and verify unusual or urgent requests, is vitally important.”