AI and malign information influence

AI has changed the way we manage and disseminate information. While the technology opens up new opportunities, it also presents a challenge because it enables the rapid creation and dissemination of realistic but misleading information.

Generative AI

Generative AI is a form of artificial intelligence that can rapidly produce new content in the form of text, images, audio or video. For example, the technology can create realistic news articles, images or deepfakes. Generative AI is used for both creative and commercial purposes, but it also carries risks, such as the dissemination of misleading information and difficulty distinguishing between genuine and manipulated material.


Often spread on social media

Foreign powers use AI to sway a population’s perceptions of various issues, undermine trust in societal functions and create instability. By generating AI-produced texts, images and videos, actors can, for example, set up fake news pages whose content spreads rapidly. For such information to spread, people must share it, and this happens mainly on the internet and over social media.

Even genuine material can be suspected of being AI-generated, making it harder to distinguish between what is real and what is fake. This, in turn, can erode trust in both the media and organisations.

What does the Psychological Defence Agency do?

The Psychological Defence Agency works actively to reinforce the resilience of the population in an era of rapid technological advancement in which AI plays a central role. It is important that society continually improves its ability to understand and critically scrutinise information, as manipulated and misleading information is becoming increasingly difficult to detect. By providing the population with knowledge resources that help people recognise malign information influence and understand how it is disseminated, trust in reliable sources is also strengthened. This combination of knowledge and trust is essential to upholding an open and democratic society.

The Agency monitors developments to detect potential threats, but also to understand any weaknesses in society that could be exploited.

Source criticism as a countermeasure

When it comes to AI, it is important for the population to understand how AI can be used to amplify and disseminate malign information influence, and how source criticism can serve as a countermeasure that reduces the risk of being influenced.

Read more about source criticism.

Examples of AI-generated information influence

False statements by leaders

AI can generate audio clips or videos that appear to show a political leader making controversial statements. These can be used to damage trust in a person or stir up unease ahead of an election.

Credible – but fake – news sites

Using AI, foreign powers can create entire news platforms that look like established media. Articles can be directed at specific groups and amplify ongoing conflicts.

Manipulated images, videos and audio

Deepfakes are a technique that can be used to manipulate images, videos or audio to show something that never happened, such as fake recordings of demonstrations, violent crimes or other events.

Automated social media campaigns

Botnets can use AI to mass-produce posts, comments and interactions on social media, creating the illusion of support for a particular opinion or movement.

Misleading crisis information

During natural disasters or other crises, AI-generated content can be used to spread false advice, mislead the general public or create panic, for example through false warnings or weather reports.