2024, a year of elections… and deepfakes

Mar 13, 2024 | Cyber Security

After fake news, a combination of disinformation and deepfakes capable of deceiving even the experts threatens us on the eve of this election year, according to ESET.

Deepfakes, weapons of mass deception! Easier access to generative AI will accelerate their production and spread. In a recent analysis, ESET refers to their “democratisation” and warns of “large-scale damage”. We will see more and more state actors and hacktivists launching convincing disinformation campaigns. The World Economic Forum (WEF) recently ranked disinformation as the greatest global risk over the next two years, after surveying 1,490 experts from academia and business.

In 2024 in particular: some sixty elections will be held this year, including in five of the world’s most populous nations. A record number of voters, representing 49% of the world’s population, is expected to go to the polls, underlining the importance and scale of these elections.

A risk of democratic disaster

During these elections, which are likely to redefine the geopolitical landscape, we will see all kinds of attacks. Sweden, for example, has been the target of repeated DDoS attacks as it moves through the process of joining NATO. With deepfakes, the danger is more insidious. And, as the WEF notes, “there is a risk that some governments will be too slow to react, not knowing how to prevent disinformation and protect freedom of expression”.

The aim, explains ESET, is to erode voter confidence in a particular candidate. It is easier to convince someone not to do something than the other way round. If supporters of a political party or candidate can be swayed by fake audio or video recordings, that is a decisive win for their rivals. Rogue states may also seek to undermine confidence in the democratic process itself, so that whoever wins will find it difficult to govern legitimately.

At the heart of the challenge lies a simple truth: when we process information, we tend to value volume and ease of understanding. The more often we see content carrying a similar message, and the easier that content is to digest, the more likely we are to believe it. That is why marketing campaigns tend to consist of short messages repeated over and over again. Add to this that distinguishing deepfakes from genuine content is becoming increasingly difficult, and the result could be a democratic disaster.

Anchoring bias

We have seen that YouTube and Facebook were slow to react to certain deepfakes influencing recent elections. The EU’s recent Digital Services Act, which obliges social media companies to crack down on attempts at electoral manipulation, has so far remained a dead letter… Efforts are being made, however. OpenAI, for example, is going to implement the Coalition for Content Provenance and Authenticity (C2PA) digital credentials for images generated by DALL-E 3. This cryptographic provenance technology – also being tested by Meta and Google – is designed to make generated images easier to identify.
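To make the idea concrete, here is a minimal sketch of how signed provenance metadata works in principle: a claim binding an image hash to its declared origin is signed, and anyone holding the public key can later verify both the signature and the hash. This is not the actual C2PA format, which embeds signed manifests inside the image file and relies on certificate chains; the function names, the claim fields and the choice of Ed25519 via Python’s cryptography library are all illustrative assumptions.

```python
# Toy sketch of signed provenance metadata, loosely inspired by the C2PA
# approach. All names and fields here are hypothetical, for illustration only.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_provenance(image_bytes: bytes, generator: str,
                    private_key: Ed25519PrivateKey) -> dict:
    """Produce a signed claim binding an image hash to its declared origin."""
    claim = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,  # e.g. the model that produced the image
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": private_key.sign(payload).hex()}


def verify_provenance(image_bytes: bytes, manifest: dict,
                      public_key: Ed25519PublicKey) -> bool:
    """Check the signature, then check that the hash matches the image."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    return manifest["claim"]["sha256"] == hashlib.sha256(image_bytes).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    image = b"...raw image bytes..."
    manifest = sign_provenance(image, "example-image-model", key)
    print(verify_provenance(image, manifest, key.public_key()))        # True
    print(verify_provenance(b"tampered", manifest, key.public_key()))  # False
```

The point of the design is that any alteration of the image changes its hash and invalidates the claim, so a verified manifest tells you where an image came from; the hard part, as the next paragraph notes, is getting such checks adopted where content actually circulates.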

However, these are only small steps, and there are legitimate concerns that the technological response will be too little, too late as election fever grips the world, notes ESET. Especially when threats spread through relatively closed networks such as WhatsApp, or via automated calls, it is difficult to track down and quickly debunk fake audio or video.

The theory of “anchoring bias” suggests that the first information we hear is the one that sticks in our minds, even if it is wrong. If the deepfakers manage to sway the electorate first, all bets are off as to who the final winner will be.