
The Fake Websites Behind Seemingly Legitimate News
Exposure to Fake News
When asked in a 2023 survey how often in the last seven days EU citizens had been exposed to disinformation and fake news, 13% responded “very often” and 22% said “often.” A similar share (33%) stated they “sometimes” came across disinformation and fake news. With the rapid evolution of AI tools, disinformation such as deepfakes is becoming more prevalent. As a result, maintaining trust and cohesion in democratic systems is becoming an increasingly difficult challenge. By blurring the boundaries of truth, the Russian “Doppelganger” campaign is a blueprint for a new era of information warfare.
Nature of the Doppelganger Disinformation Campaign
The campaign’s name, Doppelganger, reflects its modus operandi. By creating clones of authentic media outlet domains and social media pages, the actors behind the operation aim to appear as legitimate news sources. Seamless duplication allows them to disseminate disinformation under the names of official news outlets, deceiving readers who are not cautious enough to double-check the source. As researchers at EU DisinfoLab have reported, the Doppelganger campaign uses various channels. The campaign’s actors bought dozens of internet domains resembling those of authentic news outlets and copied their designs. The same pattern appears with social media accounts, such as those on Facebook and X, which use names that resemble those of news outlets. The campaign employs various content formats, ranging from website polls to Meta ads.
Two Russian Entities
The US Department of State, Meta, Correctiv, and VSquare have identified two companies that appear to be behind the Doppelganger campaign. The Russian-origin Structura National Technology and the Social Design Agency are both under sanctions by the EU and the United States. Meta described the Doppelganger network as the largest Russian-made malicious network, as well as the most aggressive and persistent one.
A Highly Targeted Approach
It is no coincidence that the Doppelganger campaign was launched in May 2022: its content falls in line with Russian propaganda narratives about the war against Ukraine. The malicious actors used several channels and languages to reach citizens of Germany, France, the UK, and Italy. The Meta ads used in the campaign revealed narrow and recurring target age ranges (20-25, 37-42, 50-65), and the ads were tailored to specific countries, with the language of each post corresponding to the target country. Content disseminated on websites and social media platforms refers to Ukrainians as Nazis, liars, and corrupt, while other messages warn the European population of the consequences of sanctions against Russia. An analysis of the articles and Meta ads posted on the duplicated outlets reveals three key topic clusters: the first focuses on EU or national economic distress (Spiegel, Spiegel II, T-Online, FAZ); the second stokes distrust in EU or national governments (Spiegel, Welt, T-Online II); and the third covers anti-Ukrainian statements (Süddeutsche Zeitung, Facebook, Facebook II, 20 Minutes).
AI, Automation, and Modern Disinformation
Evolving technologies and AI tools were crucial to the success of a campaign such as Doppelganger. Based on analyses carried out by two affected platforms, OpenAI and Meta, the campaign relies heavily on manual effort in addition to AI and automation to ensure a consistent stream of content. Manual input is used to establish a framework that enables streamlined automation later in the process: in-depth technical reports on Doppelganger suggest that real people operated the domains and uploaded the articles, while automation and AI enhancements simplified the content creation itself. In a 2024 report, OpenAI stated that Doppelganger actors used ChatGPT models to ensure both quality and quantity, correcting grammatical errors and creating comments in different languages. According to OpenAI’s reports, the articles’ texts were drafted, edited, and translated from Russian into English, German, French, and other languages using AI tools. In some cases, Doppelganger actors apparently tried to use ChatGPT models to generate cartoon images of prominent European politicians and critics of Russia.
High-Level Repercussions
The threat that disinformation poses to societies is being taken up on the world stage. The World Economic Forum highlights disinformation in its Global Risks Report 2024, and both NATO’s Strategic Concept 2022 and the U.S. National Security Strategy 2022 cite it as a serious threat to democratic institutions and national security. The Global Disinformation Index has warned of the growing risks posed by AI in amplifying disinformation. The strategic use of AI in disinformation campaigns blurs the lines of reality, making the content harder to interpret and its outputs harder to explain and predict. AI is becoming both a disinformation amplifier and a tool of geopolitical influence.
Media Literacy
True resilience against disinformation lies in empowering societies through media literacy and public awareness. Finland exemplifies this approach, integrating media education into its national curriculum from early childhood through secondary school; beyond formal schooling, initiatives such as Media Literacy Week extend this education to adults. The rise of AI-powered disinformation marks the early stages of a deeper transformation in information warfare. As technology evolves, forged photos, phoney robocalls, and convincing deepfakes could further erode societal trust. Addressing these threats early will demand a combination of technological innovation, smart regulatory oversight, and a renewed focus on media literacy to safeguard democratic societies.