The conversation about AI in business has largely focused on productivity — writing assistance, data analysis, customer service automation. What’s discussed less openly is the parallel adoption of AI by the criminal ecosystem, where the same technologies are being used to make cyberattacks faster, cheaper, more convincing, and harder to detect.

For small businesses, this represents a genuine shift in the threat landscape. Understanding what’s changed — and what it means for how you protect your business — is increasingly important.

What AI Has Changed About Cyberattacks

Volume and targeting. Generating a convincing phishing email used to require human effort and at least some language skill. AI language models can now generate thousands of contextually appropriate, grammatically perfect phishing emails per hour, each personalised for a specific target using data scraped from LinkedIn, company websites, and previous data breaches. The barrier to running a large-scale, targeted campaign has collapsed.

Persuasiveness. 68% of cyber threat analysts report that AI-generated phishing attempts are harder to detect than previous generations of attacks. The emails read like human-written communication because, in a meaningful sense, they are. The grammar mistakes and awkward phrasing that used to help people identify phishing no longer apply.

Deepfake audio and video. A relatively new but rapidly growing threat involves the use of AI-generated voice and video to impersonate people. There are documented cases of finance staff receiving phone calls from a convincing audio clone of their CEO authorising an urgent wire transfer. In video calls, AI-generated avatars have been used to impersonate executives in real time. These attacks are not yet common at the small business level, but the technology is becoming accessible.

Automated vulnerability discovery. AI tools can scan for vulnerabilities in websites, networks, and software at a speed and scale that would previously have required significant human effort and expertise. This means smaller, less prominent businesses are increasingly being scanned and probed — previously, the effort required meant attackers focused on higher-value targets.

Attack-as-a-Service. Dark web marketplaces now offer sophisticated cyberattack capabilities as subscription services, with AI-powered components included. This has dramatically lowered the skill barrier for criminal actors. The people targeting small businesses no longer need to be technically sophisticated.

What This Means for Small Businesses Specifically

The democratisation of attack capability means small businesses can no longer rely on relative obscurity as a defence. Previously, targeted attacks required resources that made large enterprises more attractive victims. AI has changed that calculus.

Three categories of SMB are at particular risk:

Businesses with accessible customer financial data. Retailers, hospitality businesses, professional services — any organisation that holds payment card data, bank details, or financial records that can be rapidly monetised.

Businesses with access to larger clients or supply chains. If you are a supplier, contractor, or service provider to a larger organisation, you may be targeted specifically because of that access. Your business is a potential stepping stone.

Businesses in regulated industries. Healthcare-adjacent businesses, legal firms, financial services — industries where data has both financial value and carries significant regulatory consequences if breached.

The Defences That Still Work

The good news — and it is genuine good news — is that the defences that work against AI-enhanced attacks are largely the same defences that have always worked. AI makes attacks faster, more convincing, and more scalable; it doesn’t fundamentally change how they operate.

MFA. Multi-factor authentication blocks the automated credential attacks that benefit most from AI scaling. A password stolen through an AI-generated phishing email still can't be used on its own if MFA is in place.

Network monitoring. AI-accelerated attack tools still need to behave on your network. Traffic that doesn’t match your normal patterns — unusual outbound connections, unfamiliar device activity, unexpected data volumes — is detectable with the right monitoring in place.
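The kind of baseline comparison a monitoring tool performs can be sketched in a few lines. This is a simplified, hypothetical example (the traffic figures are invented) that flags hours where outbound volume deviates sharply from a learned baseline; real products use far richer signals.

```python
from statistics import mean, stdev


def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observations more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(label, value) for label, value in observed
            if sigma and abs(value - mu) / sigma > threshold]


# Hypothetical hourly outbound traffic in MB for a small office network
baseline = [120, 130, 125, 118, 122, 128, 131, 119]
observed = [("02:00", 124), ("03:00", 910), ("04:00", 127)]

# The 03:00 spike (a possible data exfiltration) is the only hour flagged
print(flag_anomalies(baseline, observed))
```

The point is not the arithmetic but the principle: an attacker's tools, however automated, still generate traffic, and traffic that departs from your normal pattern is visible to anything watching for it.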

Verification procedures. Against deepfake audio attacks and executive impersonation, the defence is procedural: any request involving money transfer or sensitive data access requires verification through a pre-established channel, regardless of how convincing the request appears. One phone call to a known number ends the attack.

Staff awareness — updated for AI. Traditional phishing training is no longer sufficient. Staff need to understand that AI-generated emails look legitimate, that voice calls can be faked, and that urgency is a manipulation technique. A culture of verification and scepticism is the human-layer defence.

Supplier verification processes. Given the rise of AI-enhanced supplier impersonation fraud, any change to payment details — bank account numbers, payment addresses — should require verification through a known contact before action is taken. Full stop.

The businesses that weather this evolving threat landscape well will be those that have implemented systematic defences rather than relying on their ability to spot an attack. No single person can reliably identify a well-crafted AI-generated phishing email every time. Systems, procedures, and monitoring don’t get tired, distracted, or deceived.

W3IT helps small businesses implement the monitoring and procedural foundations that reduce exposure to AI-enhanced attacks. If your current security posture relies primarily on your staff being able to identify something suspicious, that posture needs reviewing.

Book a free security check →