How Russian Disinformation Campaigns Exploit Domain Registrars and AI
The Rise of AI-Powered Disinformation Campaigns
In this episode of Breaking Badness, cybersecurity researcher Scot Terban and DomainTools' Daniel Schwalbe unpack how Russian threat actors are evolving their disinformation playbook. The conversation centers on a recent DomainTools Investigations report uncovering a constellation of domains mimicking small-town US newspapers, all created to push propaganda.
These domains were not simple typosquats, but appeared as entirely new, believable publications with articles often scraped or AI-generated to resemble local journalism. Many followed a consistent pattern: publish plausible news stories, then subtly inject state-aligned narratives.
Doppelganger Domains and Their Infrastructure
The team highlights how these websites are part of a campaign dubbed “Doppelganger,” which leveraged domain registrars that are frequently abused in similar schemes. Registrars like Namecheap, REG.RU, and Epik were listed in the report due to their high volume of disinformation-related domain registrations.
Daniel emphasizes that because domain registrars operate on razor-thin margins and process registrations at high volume, proactive moderation is difficult. The solution may lie more in transparency and public awareness than in relying on takedown mechanisms.
Homoglyphs and Typo-Squatting: Cloaking the Lies
The researchers explain two common methods attackers use to deceive users:
- Typosquatting: Registering a domain with a minor spelling error (e.g. “bbcnewz[.]com”)
- Homoglyph Attacks: Swapping Latin characters for similar-looking characters in Cyrillic or Greek scripts
“At the face of it in the browser address bar… it basically looks indistinguishable.” -Daniel Schwalbe
This allows disinformation sites to closely mimic trusted outlets, adding a false sense of credibility.
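Mixed-script homoglyph domains can often be caught programmatically by checking whether a single domain label combines characters from multiple scripts. The sketch below is a minimal illustration using only Python's standard library; the function names are our own, not from any tool mentioned in the episode.

```python
import unicodedata

def script_of(ch: str) -> str:
    """Rough script bucket derived from the Unicode character name."""
    try:
        name = unicodedata.name(ch)
    except ValueError:
        return "UNKNOWN"
    for script in ("CYRILLIC", "GREEK", "LATIN"):
        if name.startswith(script):
            return script
    return "OTHER"

def is_mixed_script(domain: str) -> bool:
    """Flag a domain whose first label mixes Latin with Cyrillic/Greek lookalikes."""
    label = domain.split(".")[0]
    scripts = {script_of(ch) for ch in label if ch.isalpha()}
    scripts.discard("OTHER")
    return len(scripts) > 1

# The second example uses a Cyrillic 'а' (U+0430) in place of Latin 'a'
print(is_mixed_script("apple.com"))        # False
print(is_mixed_script("\u0430pple.com"))   # True
```

Note that internationalized domains usually travel on the wire as Punycode (`xn--` labels), so a real pipeline would decode those before running a check like this.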
AI Supercharges Propaganda at Scale
What once took teams of content creators now takes one person with access to ChatGPT or similar tools. The speed and realism of content, whether written or video, are increasing exponentially.
“You could do it with a couple guys… asymmetric warfare, really.” -Scot Terban
The team also warns of a coming wave of synthetic video disinformation, where generative AI may soon produce convincing deepfakes of political figures, adding yet another challenge for defenders and the public alike.
The U.S. and Global Cyber Posture
There is growing concern that U.S. federal agencies are deprioritizing active counter-disinformation efforts. According to Daniel and Scot, this shift places more pressure on private sector researchers and defenders.
They argue that without coordinated policy and federal support, response efforts risk fragmentation across state and local jurisdictions, many of which are under-resourced.
What Can Be Done?
Daniel and Scot agree that technical detection, including pattern recognition, AI-assisted clustering, and passive DNS correlation, is still viable. However, the most powerful defense might be public awareness and media literacy.
“We’re hoping to raise awareness… to make people a little more skeptical than they maybe have become over the last five years or so.” -Daniel Schwalbe
Scot also proposes tools that could help researchers detect disinformation patterns in near real time using content similarity, DNS history, and AI-assisted correlation.
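One of those signals, content similarity, can be approximated cheaply: doppelganger sites often republish near-identical articles across many domains. Below is a minimal stdlib-only sketch of that idea; the domain names and article snippets are invented for illustration, and a production system would use proper text embeddings rather than character-level diffing.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical scraped article snippets keyed by suspect domain (illustrative data)
articles = {
    "heartland-gazette.example": "City council approves new budget for road repairs this fall.",
    "prairie-times.example":     "City council approves new budget for road repair this fall.",
    "metro-daily.example":       "Local team wins championship after dramatic overtime finish.",
}

def similar_pairs(docs: dict, threshold: float = 0.8):
    """Return domain pairs whose article text is near-duplicate."""
    pairs = []
    for (a, ta), (b, tb) in combinations(docs.items(), 2):
        ratio = SequenceMatcher(None, ta, tb).ratio()
        if ratio >= threshold:
            pairs.append((a, b, round(ratio, 2)))
    return pairs

for a, b, score in similar_pairs(articles):
    print(a, "~", b, score)
```

Clusters of unrelated-looking domains serving the same text are exactly the kind of pattern that, combined with shared DNS history, can surface a coordinated campaign.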
Resources Mentioned in the Episode
- DomainTools Investigations Reports
- Domain Registrars Powering Russian Disinformation: A Deep Dive into Tactics and Trends
Watch on YouTube
That’s about all we have for this week. You can find us on Mastodon and Twitter/X @domaintools, and all of the articles mentioned in our podcast will always be included in our podcast recap. Catch us Wednesdays at 9 AM Pacific time when we publish our next podcast and blog.
*A special thanks to John Roderick for our incredible podcast music!