Catalyst Campaign’s founder, Scott Goodstein, collaborated with UK communications consultant Stuart Thomson to discuss ways to combat deepfakes, fake news, and disinformation campaigns.
Nefarious audio and video content made to trick voters presents a clear and present danger to free and fair elections. Deepfake technology has advanced rapidly as computer processing has become faster and cheaper, and audio and video editing software has become widely available.
These advances have already been exploited in the U.S. primary elections: a fake robocall using a spoofed voice of Joe Biden was designed to confuse voters about the day of the election, all to suppress turnout among a targeted group of voters. With the UK General Election called and the US presidential election coming into view, we need to be more aware than ever of the dangers of deepfakes.
Political campaigns cannot stop the advance of technology; instead, they should accept the new reality of modern campaigning and recognize how dirty tricks have evolved alongside it. There are steps that every political campaign (and concerned citizen) can take to minimize the chances of being thrown off course or duped by a deepfake.
Deepfakes are realistic-looking content created without consent. They can use a person's voice, video, or image to create online content designed to deceive. They can cause significant harm to individuals, being used to blackmail, harass, commit fraud, exact revenge, and more. As AI advances, the quality of deepfakes only increases.
Read "10 Actions Every Campaign Can Take to Make A Difference Against Deepfakes" at CommonDreams.org
It is not that the technology is inherently bad. Some businesses have recently experimented with using AI to send personalized messages to their staff.
Similarly, some politicians have used the technology for light-hearted purposes, such as creating online games involving the candidates. But we have already seen deepfakes used in elections to try to trick voters. Joe Biden, Keir Starmer and Sadiq Khan have already been victims of deepfakes, and examples from all parts of the world, including Indonesia and France, are increasing in frequency.
It is not just about video but audio as well. What could be better than a slightly poor-quality ‘illicitly recorded’ phone conversation or comments from an event where a candidate says something outrageous? The more amateur the sound quality, the more damage it may do.
And with elections across the world, not least the US, EU, and UK, taking place this year, there is a focus on what can be done about the danger.
According to a new survey from the BCS, The Chartered Institute for IT in the UK, the influence of deepfakes on the UK General Election is a concern for most tech experts. 65% of IT professionals polled said they feared AI-generated fakes would affect the result.
But they also think the parties themselves will be involved: “92% of technologists said political parties should agree to publicize when and how they are using AI in their campaigns.” This suggests that they do not entirely trust politicians either…
According to another survey, 70% of UK MPs fear deepfakes.
Regulatory and legislative solutions are being proposed to deal with deepfakes, but campaigns can take real action now.
1) Avoid the void – problems arise when there is a space to fill. The more content a campaign produces, and the wider the range of topics it covers, the less room there is for a deepfake to fill a void.
2) Deal with controversy – rather than avoiding a position on a difficult issue of the day, a campaign needs to tackle it. Again, this denies a deepfake the chance to exploit an issue where voters hold firm views but the campaign has stayed silent.