Since ChatGPT's launch in November 2022, generative AI has grown significantly more powerful. It can do far more than write articles; it can create code and generate original music. And ChatGPT's connection to the internet through plugins has further pushed the envelope of what generative AI can do.
Many of these accomplishments are being met with little optimism. The music industry objects over copyright infringement, the education sector worries about cheating and now, despite huge tech-driven gains throughout 2023, the financial and news sectors are struggling to distinguish what is factual from what are AI-driven deepfakes.
An AI-generated image of an explosion at the U.S. Pentagon circulated on Twitter on the morning of May 22. The image was convincing enough that multiple breaking news accounts on Twitter tweeted it out, and as a result, the S&P 500 fell 30 points in mere minutes.
The S&P rebounded once it was clear that the image -- and the subsequent story that developed around that image -- was not real.
One chartered financial analyst argued that this is exactly why AI regulation is so vital.
"This AI-generated image of an explosion at the Pentagon tricked several breaking news accounts, and caused the stock market to drop temporarily," Genevieve Roch-Decter wrote. "@elonmusk this is why we need to regulate AI."
A Washington Post correspondent posted a photo of the undamaged Pentagon to demonstrate that the viral image was fake.