Tactics used in malign information influence

Antagonistic foreign powers use a number of different tactics to sway people’s decisions. In this section you can immerse yourself in these and sharpen your ability to recognise malign information influence.

The tactics are constantly evolving, but common ones can be divided into six groups.

  • Disinformation
  • Malevolent rhetoric
  • Social and cognitive hacking
  • Symbolic actions
  • Technical manipulation
  • Misleading and deceptive identities

In most cases, the tactics are neutral. The same tactics can be used as a natural part of democratic discourse (in cases where they are applied in a transparent and accepted way) or as a malign information influence tactic (when used to mislead the general public).

Disinformation is false or manipulated information that is spread deliberately to harm an individual, organisation or country.

Fabrication

False information published in a way that makes the recipient believe it is true.

Manipulation

Information that is manipulated to communicate a misleading and false message, such as by adding, removing or changing elements of text, images, video or audio clips.

False or erroneous context

Presentation of accurate facts in an unrelated context to portray an issue, event or person in a misleading way. For example, pictures taken in other contexts can be used to reinforce the narrative of a news article.

Satire and parody

Satire and parody are usually a harmless form of entertainment. But, in malign information influence, humour can be used as a tool to spread misleading information or mock people, narratives or opinions. Humour can also be used to make controversial opinions more accepted.

Rhetoric is an accepted and natural feature of democratic public debate, where everyone has the right to express their opinion. Malevolent rhetoric is pursued for a different purpose, which may involve exploiting public discourse to mislead or distract the audience. It may also involve strategies to deceive, mislead and deter certain actors from participating in the public debate. An actor that frequently engages in malevolent rhetoric is a troll. Trolls are social media users who intentionally provoke others through their online comments and actions. Their activity contributes to widening polarisation, silences critical voices and drowns out open debate.

Personal attacks

Attacking, discrediting and ridiculing the person behind an argument instead of criticising the argument itself. Personal attacks are often used to silence others and deter them from taking part in the discussion.

Whataboutism

Shifting the focus of an argument by calling attention to a similar phenomenon that has not attracted as much attention, but that is not actually relevant to the issue.

Gish gallop

Overwhelming the opponent with a barrage of arguments, facts and sources, many of which are false or unrelated to the issue.

Strawman

Assigning to the opponent arguments and stances that the opponent does not represent, and then arguing against these stances instead of the opponent’s actual stances.

Hijacking the debate

Taking over a debate and changing its direction. Especially effective on social media in relation to hashtags and memes.

Social and cognitive hacking exploits our social relationships and thought processes. It resembles hacking computer systems, for example, in the sense that a hostile actor tries to trick or “hack” a process by exploiting its vulnerabilities. For example, we prefer to adapt to what people similar to ourselves think and do, and sometimes find it difficult to think rationally when faced with emotionally charged content. These predictable behavioural patterns can be exploited by hostile actors who deliberately press on sore points, for example in sensitive social issues, to achieve their aim.

Dark ads

Messages tailored to an individual’s psychographic profile. Through data from social media, it is possible to create databases of individuals with specific views, interests or personality traits. Ads that can only be seen by specific individuals may contain messages that appeal to their particular preferences or opinions.
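The targeting described above can be illustrated with a toy sketch: selecting, from a profile database, only the users whose recorded traits match an ad's criteria. The field names and data here are invented for the example, not taken from any real platform.

```python
# Toy illustration of audience selection from psychographic profiles.
# All field names and values are invented for this example.
USERS = [
    {"id": 1, "interests": {"hunting", "cars"}, "region": "north"},
    {"id": 2, "interests": {"gardening"},       "region": "south"},
    {"id": 3, "interests": {"hunting"},         "region": "south"},
]

def audience_for(required_interests, region=None):
    """Return ids of users whose profile matches every targeting criterion."""
    return [u["id"] for u in USERS
            if required_interests <= u["interests"]          # has all interests
            and (region is None or u["region"] == region)]   # optional region filter

print(audience_for({"hunting"}))                  # → [1, 3]
print(audience_for({"hunting"}, region="south"))  # → [3]
```

Only the matching users ever see the ad, which is what makes dark ads hard for outsiders to scrutinise.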

Bandwagon effect

People who feel that they are part of a majority are more inclined to voice their opinion. Bots and trolls can be used to give more likes, comments or shares on social media to give the impression that some opinions are more popular than they really are. This creates social acceptance for a message or opinion, which plays on our cognitive need for social alignment.

Spiral of silence

People who feel that they are part of a minority are less inclined to share their opinions. In contrast to the bandwagon effect, the impression of being in the minority can lead to not wanting or daring to speak out. This plays on our fear of exclusion or being singled out as deviant.

Echo chambers and filter bubbles

Natural groupings within which people primarily communicate with others who share the same views and opinions. Echo chambers and filter bubbles can emerge both online and offline. People with similar opinions perhaps read the same newspapers or mainly spend time with like-minded people. They are rarely exposed to different opinions. Online, this can be used to spread targeted information to specific groups.

Actions speak louder than words. Sometimes an action is not just about what is done, but about what it signals. When an action is primarily aimed at communicating a message, it is called a symbolic action.

Unlike ordinary actions, which are often guided by practical objectives, symbolic actions are designed with a communicative and strategic logic. They can be clear to everyone – as in terrorist attacks, which have the purpose of spreading fear through brutal violence. However, they can also be more subtle, such as when cultural symbols are used to send a message to a specific target group.

Leaks

Leaks have a strong symbolic significance as they can divulge injustices and cover-ups that would otherwise not have come to light. In malign information influence, however, leaked information is often taken out of context and used to systematically undermine an actor’s credibility and distort the information landscape. The leaked information may have been obtained, for example, through data intrusion or theft.

Hacking

Hacking involves gaining unauthorised access to a computer or network, and is a crime in itself. In malign information influence, hacking sometimes works as a symbolic action in which the intrusion itself is secondary. The real objective is to raise suspicion that a system is exposed or insecure, which can undermine confidence in the system in question or an organisation responsible for it.

Public demonstrations

Demonstrations are symbolic actions that are used to express support for a particular political issue or stance. They are an important part of our democratic dialogue. Within malign information influence, demonstrations can be orchestrated to give a false impression of support for a particular issue at the grassroots level (known as astroturfing).

Information influence often uses modern technology to amplify its impact. With advanced technological methods, actors can manipulate the online flow of information, for example through automated accounts, algorithms or a combination of human and technical influences. Technology enables new ways of orchestrating classic influencing methods, such as creating fake identities online or spreading disinformation. However, technological advancement is proceeding faster than our ability to understand and manage its consequences.

More recently, threats from deepfakes, machine learning and artificial intelligence (AI) have become increasingly topical. In the future, we can expect that these technologies will be used even more in malign information influence.

Bots

Bots are computer programs that perform automated tasks, such as sharing certain types of information on social media or answering frequently asked questions on a customer services platform. In malign information influence, they can be used to amplify certain messages online, to spam forums and comments sections, to like or share social media posts or to commit cyber-attacks.
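One common first-pass signal for spotting automated accounts follows from the description above: simple bots post on a near-fixed schedule, while human posting intervals vary widely. The sketch below is a minimal, assumed heuristic (the threshold is arbitrary), not a method endorsed by any platform.

```python
from statistics import mean, pstdev

def looks_automated(timestamps, cv_threshold=0.1):
    """Flag an account whose posting intervals are suspiciously regular.

    Simple bots often post at near-fixed intervals; humans rarely do.
    The coefficient of variation (stdev / mean) of the gaps between
    posts is a crude but common first-pass signal.
    """
    if len(timestamps) < 3:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return True  # simultaneous or out-of-order posts
    return pstdev(gaps) / avg < cv_threshold

# An account posting almost exactly every 60 seconds:
print(looks_automated([0, 60, 121, 180, 240, 301]))    # → True
# An account posting at irregular, human-like intervals:
print(looks_automated([0, 45, 400, 520, 3600, 3700]))  # → False
```

Real bot detection combines many such signals (content similarity, account age, network structure); cadence alone is easy for a careful operator to randomise.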

Sock puppets

Fake accounts belonging to an individual who does not reveal their true identity or intentions. These fake identities are used to join groups and participate in online debates. Two or more sock puppets can be used simultaneously to simulate both sides of a debate.

Deepfakes

Modern machine-learning algorithms can be used to perform highly advanced manipulation of audio and video. For example, fake but highly credible video clips in which politicians read out fabricated speeches can be produced. People’s faces can also be replaced in video clips, or their voices can be digitally modified.

Phishing

Phishing is a technique that tricks users into entering passwords or other sensitive information online. Phishing also includes automated spamming through e-mails that appear to have been sent from a known sender, but actually belong to a fraudster in search of personal information. Spear-phishing is a sophisticated, targeted form of phishing used to access information on secure data systems.
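The "appears to come from a known sender" trick often relies on the display name, since many mail clients hide the real address behind it. A minimal defensive sketch, assuming a hypothetical allow-list of domains the recipient actually trusts:

```python
from email.utils import parseaddr

# Hypothetical allow-list for this example; a real client would use
# the recipient's own contacts or organisation policy.
TRUSTED_DOMAINS = {"example-bank.com"}

def suspicious_sender(from_header):
    """Return True when a From: header's actual address falls outside
    the trusted domains, whatever its display name claims.
    """
    _display_name, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    return domain not in TRUSTED_DOMAINS

print(suspicious_sender("Example Bank <security@example-bank.com>"))  # → False
print(suspicious_sender("Example Bank <security@examp1e-bank.net>"))  # → True
```

Note that the second address swaps a digit 1 for the letter l and changes the top-level domain, which is easy to miss when only the display name is shown.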

When assessing information, we often look at the source. Who is communicating with me, and why? What do they know about the issue? Who are they claiming to be? Actors can mimic credible sources of information such as individuals, organisations or platforms, and use misleading identities to exploit the reputational capital of such senders.

Decoy

A decoy is a person who gives the impression of being independent but who in fact cooperates with or receives payment from someone else. Decoys are sometimes used to write positive product reviews in online stores and to give credibility to a person or message. It can be equated with a paid audience that guarantees applause after a performance. In malign information influence, decoys can for example be a group of internet trolls who are paid to write comments.

Imitators and fraudsters

Imitators pretend to be someone other than who they really are and assume a fake identity. These can be fraudsters who claim to possess expertise or qualifications they actually lack, such as pretending to be a doctor or a lawyer without having obtained the necessary qualifications.

Forgeries

Fabricating and falsifying information is an effective way to make disinformation look like authentic information. False page headers, stamps, or signatures can be used to make outright forgeries look genuine.

Potemkin villages

Resourceful actors can go one step further and create fake and misleading institutions and networks. Fake companies, research institutes and think tanks are examples of what are known as Potemkin villages, which can be created and used to make disinformation look authentic.

Fake media

Disinformation can also be spread through fake news sites that mimic authentic ones. On the internet, for example, a fake website can be created that is largely identical to a real website, but with different content.
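Lookalike sites typically differ from the genuine domain by only a character or two, which a small edit distance can catch. The sketch below is a simple illustration against an invented list of "genuine" outlets, not a production check.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

# Invented list of genuine outlets for this example.
KNOWN_SITES = ["examplenews.com", "dailyexample.org"]

def lookalike_of(domain, max_distance=2):
    """Return the genuine site a domain seems to imitate, or None.

    An exact match (distance 0) is the real site, not a lookalike;
    a distance of 1-2 suggests a typosquatted imitation.
    """
    for site in KNOWN_SITES:
        d = edit_distance(domain, site)
        if 0 < d <= max_distance:
            return site
    return None

print(lookalike_of("examp1enews.com"))  # → examplenews.com
print(lookalike_of("examplenews.com"))  # → None (the real site)
```

A fuller check would also handle Unicode homographs (Cyrillic "а" for Latin "a"), which plain edit distance treats as a full substitution but readers cannot see at all.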
