

The impact of AI on disinformation


Ari Soonawalla

How does AI change how disinformation is created, spread and received?

AI can be used in a variety of ways at each stage of a disinformation campaign:

Understanding the information environment

Advances in ML make social media monitoring, combined with text and sentiment analysis, far more powerful. ML-powered analytics help threat actors identify and predict social issues, the virality of news events, and which narratives are most compelling to specific demographic groups. This also helps identify the groups most vulnerable to disinformation on particular topics.
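As a minimal illustration of this kind of monitoring, the sketch below averages post sentiment per topic to surface which topics a segment feels most negatively about. The lexicon, topics, and posts are hypothetical toys; real systems use trained sentiment models over large social media streams, not hand-written word lists.

```python
from collections import defaultdict

# Hypothetical toy lexicon: production monitoring uses trained sentiment
# models, not hand-written word lists.
LEXICON = {"love": 1, "safe": 1, "great": 1, "bad": -1, "angry": -1, "unsafe": -1}

def score(text):
    """Crude lexicon score: sum of word polarities in a post."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0) for word in text.split())

def sentiment_by_topic(posts):
    """Average sentiment per topic across a stream of (topic, text) posts."""
    scores = defaultdict(list)
    for topic, text in posts:
        scores[topic].append(score(text))
    return {topic: sum(s) / len(s) for topic, s in scores.items()}

posts = [
    ("topic_a", "I love how safe this is"),
    ("topic_a", "this is bad and unsafe"),
    ("topic_b", "angry about the bad result"),
]
print(sentiment_by_topic(posts))  # topic_b reads as the most negative
```

The point is not the scoring itself but the aggregation: mapping sentiment to topics and audiences is what lets an actor pick the narratives most likely to resonate with each group.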

Content Creation

The cost of training large language models is falling drastically: while it cost millions of dollars to train GPT-3 in 2020, by 2022 MosaicML was able to train a model from scratch to GPT-3-level performance for less than $500k. As LLMs become more available and cheaper, it becomes easier for threat actors to create more personalized and more effective content. Automated content creation reduces the financial and time costs of micro-targeting and hyper-personalization, and an improved understanding of the information environment allows threat actors to craft more compelling narratives for each target segment. It also makes campaigns harder to detect: fresh content is cheap to generate, removing the need for repetitive copypasta that platforms can fingerprint, and the quality of deepfakes is improving drastically.
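A simple way to see why automation defeats copypasta detection is the sketch below, which produces distinct per-segment message variants instead of repeating one string. The segments, issues, and templates are hypothetical placeholders; an LLM would generate far more varied text, but a template engine shows the mechanics.

```python
import random

# Hypothetical audience segments and message templates: placeholders for
# illustration, not real campaign content.
SEGMENTS = {
    "segment_a": "housing costs",
    "segment_b": "pension changes",
}

TEMPLATES = [
    "Nobody is talking about {issue}. Why is that?",
    "They knew about {issue} all along and said nothing.",
    "Ask your local representative what they did about {issue}.",
]

def generate_variants(segment, n=2, seed=0):
    """Emit n distinct message variants for one segment, so no two posts
    are identical text that a platform could fingerprint."""
    rng = random.Random(seed)
    issue = SEGMENTS[segment]
    return [template.format(issue=issue) for template in rng.sample(TEMPLATES, n)]

print(generate_variants("segment_a"))
```

Because every post is unique text tailored to a segment's issue, exact-match deduplication (the classic copypasta signal) no longer flags the campaign.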

Message Amplification

The spread of a campaign often relies on large numbers of accounts across social media, and the perceived authenticity of those accounts is key. ML techniques can generate increasingly realistic profile photos, reducing the need for image scraping and closing off reverse image search as a detection method. Combined with LLM-generated text for bios and posting histories, this enables the creation of credible accounts en masse to spread disinformation.

ML systems can also improve social engineering techniques used to target influencers, so-called "super-spreaders", who can organically amplify a message or campaign. Deepfakes also make it easier to impersonate experts or credible sources to amplify a message.


Advances in conversational AI could automate engagement with targeted individuals. Chatbots use large volumes of data, ML, and NLP to imitate human interaction, recognizing speech and text input and generating responses. They can take part in online discussions and respond to comments to stir controversy and disputes and increase polarization.
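The engagement loop described above can be sketched at its most basic: read a comment, match it to a topic, and respond to keep the thread active. Modern systems use LLMs rather than keyword rules, and every rule and reply below is a hypothetical placeholder.

```python
# Minimal rule-based sketch of automated engagement. All keywords and
# canned replies are hypothetical placeholders for illustration.
RULES = [
    ("election", "Plenty of people see that issue very differently."),
    ("economy", "The official numbers never tell the whole story."),
]
DEFAULT_REPLY = "You clearly have not looked into this properly."

def reply(comment):
    """Return the first matching canned reply, or a dismissive default."""
    text = comment.lower()
    for keyword, response in RULES:
        if keyword in text:
            return response
    return DEFAULT_REPLY

print(reply("The election results were verified twice."))
```

Even this trivial loop never tires, never disengages, and always answers, which is exactly what makes automated engagement effective at sustaining disputes at scale.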

As AI reduces the cost, increases the effectiveness, and lowers the detectability of disinformation campaigns, threat monitoring and early detection become increasingly important. Greater societal resilience to AI-enabled disinformation is also needed; however, current digital and media literacy efforts do not account for it.
