As the GOP primary in South Carolina approaches, political campaigns are working out how to counter the growing threat of deep fakes. Deep fakes, which use artificial intelligence to create realistic but fabricated audio and video, have already made an impact in the political arena: during the New Hampshire primary, a deep fake of President Biden's voice was used in a robocall. In response to the rising concern, the Biden administration is running drills with national security leaders to develop strategies for countering misinformation.
A Chicago mayoral candidate, Paul Vallas, became the target of a deep fake attack during his closely contested campaign. The AI-generated audio depicted Vallas as so stridently pro-police that he appeared out of step with the Democratic Party. Vallas, who lost the election by a mere four points, acknowledged the damage the deep fake did. He expressed frustration at being painted as a conservative Republican, a label at odds with his actual political record.
The deep fake targeting Vallas was shared by an account named 'Chicago Lakefront News,' created explicitly for character assassination. Investigations revealed that no such news outlet existed, underscoring the malicious intent behind the attack. The tightness of the race, and the deep fake's potential influence on its outcome, highlight the urgent need to address this emerging threat.
Experts warn that deep fakes are no longer a hypothetical problem but a genuine threat to the integrity of elections. Audio deep fakes are a particular concern, because they can be highly persuasive without any visual cues; their impact has been described as visceral, creating the illusion of eavesdropping on a private conversation. Whether the US government and election officials are ready to respond effectively to this threat, however, remains questionable.
A recent analysis revealed that only 33 of the 50 states responded to inquiries about their preparations against deep fakes, and fewer than half of those had taken specific steps to mitigate AI threats. Election officials such as Nevada's Francisco Aguilar cite resource constraints and a general lack of preparedness. Technology has already made misinformation easier to spread over the past decade; the injection of AI-generated content amplifies those risks.
It is crucial to recognize that susceptibility to deep fakes cuts across political affiliations. The belief that only certain groups, such as Trump supporters, fall for online misinformation does not hold up; no one is immune to a convincing deep fake.
As deep fakes become increasingly prevalent, it is clear that the future is already upon us, and proactive measures to combat misinformation and protect the integrity of elections have become a pressing concern. The early efforts of officials like Aguilar to identify vulnerabilities and advocate for the necessary resources highlight the ongoing challenges of navigating this new landscape.