The Justice Department has expressed concern that artificial intelligence could be misused if an audio recording of President Joe Biden's interview with special counsel Robert Hur were released. The department cited the risk of deepfakes and disinformation that could mislead the American public, particularly in the lead-up to this year's election.
In a court filing, a senior Justice Department official highlighted the difficulty of preventing the misuse of AI technology and emphasized the potential impact on voting integrity. The Biden administration is seeking to block the release of the recording of the interview, which focused on the president's handling of classified documents.
A conservative group pushing for the recording's release dismissed the department's arguments as a diversion tactic. The group accused the Justice Department of shielding Biden from potential embarrassment, pointing to details in a transcript of the interview that showed the president struggling with certain recollections.
Despite concerns raised by the Justice Department, some lawmakers, including Democratic Senator Mark Warner, have called for the audio to be made public. Warner emphasized the importance of transparency while acknowledging the risks of AI manipulation.
Special counsel Robert Hur's report concluded that no criminal charges were warranted in Biden's handling of classified documents. The report described the president's memory as "hazy" and noted significant limitations in his recollection of key events.
The Justice Department's concerns about deepfakes were presented in response to legal action under the Freedom of Information Act by a coalition of media outlets and advocacy groups. The coalition argued that the public has a right to access the recording to assess the accuracy of the special counsel's findings.
While acknowledging that malicious actors could exploit AI technology, experts cautioned that withholding original recordings on those grounds could have broader implications for public access to government records. The debate over balancing transparency with the risks of AI manipulation is likely to continue in future cases.