Although deepfakes have been around for years, today’s versions are more realistic than ever, and even trained eyes and ears may fail to identify them. Harnessing the power of, and defending against, artificial intelligence depends on the ability to connect the conceptual with the tangible. If the security industry fails to demystify AI and its potential malicious use cases, 2024 will be a busy year for malicious actors targeting the election space.
Slovakia’s general elections in September could serve as a lesson in how deepfake technology can disrupt elections. In the run-up to the country’s highly contested legislative vote, deepfake videos attributed to supporters of the far-right Republika party circulated with the altered voice of Progressive Slovakia leader Michal Simecka announcing his intention to raise the price of beer and, more seriously, discussing how his party planned to rig the elections. Although it is unclear how much influence these deepfakes had on the final outcome, which saw the pro-Russian, Republika-aligned party finish first, the episode demonstrated the power of deepfakes.
Political deepfakes have already appeared on the American political scene. Earlier this year, an edited television interview with Democratic U.S. Senator Elizabeth Warren circulated on social media. In September, Google announced it would require political ads that use artificial intelligence to carry a prominent notice when images or audio have been synthetically altered, a move that could encourage lawmakers to pressure Meta and X, formerly Twitter, to follow suit.
Deepfakes are “pretty scary stuff”
Fresh from attending AWS’s re:Invent 2023 conference, Tony Pietrocola, president of AgileBlue, said the conference focused heavily on artificial intelligence as it relates to election interference. “When you think about what AI can do, you see a lot more misinformation, but also more fraud, deception and deepfakes,” he told CSO. “It’s pretty scary because it looks like the person, whether it’s a congressman, a senator, a presidential candidate, whatever, and they say something. Here’s the crazy part: someone sees it, and it gets millions of views. That’s what people see and remember; they never go back to find out that, oh, it was fake.”
Pietrocola believes that the massive amounts of data stolen in hacks and breaches, combined with improved AI technology, could make deepfakes a “perfect storm” of misinformation in the run-up to next year’s elections. “So it’s the perfect storm, but it’s not just the AI that makes it sound and behave real. It’s the social engineering data that (threat actors have) either stolen or that we have voluntarily given up, which they use to create a digital profile that is, to me, a double whammy. They know everything about us, and now they look and act like us.”
This worrying scenario is compounded by the fact that, given the open and increasingly widespread availability of AI technology, deepfakes may not be limited to traditional nation-state adversaries such as Russia, China, and Iran. “If we thought it was bad in 2020 and 2016, which mostly involved extremely sophisticated threat actors…people around the world can now use these tools,” Jared Smith, distinguished engineer for R&D strategy at SecurityScorecard, tells CSO. “In a way, we’re moving from one industrial age to another, where a lot more people now have tools to do things they couldn’t do before.”