A new study suggests OpenAI's GPT-4 has surpassed human analysts at financial statement analysis and earnings forecasting. The findings could disrupt the financial services industry, which, like other sectors, is rapidly embracing generative AI technologies.
The study, conducted by the University of Chicago's Booth School of Business, found that the large language model (LLM) outperformed humans in analysing financial statements and generating earnings predictions based on that analysis.
"Even without any narrative or industry-specific information, the LLM outperforms financial analysts in its ability to predict earnings changes," the study said. "The LLM exhibits a relative advantage over human analysts in situations when the analysts tend to struggle."
'Chain-of-Thought' Prompt Mimics Human Reasoning for Predictions
Researchers employed "chain-of-thought" prompts to guide GPT-4 in recognising trends within financial statements and calculating various financial ratios. Equipped with this analysis, the LLM then forecasted future earnings.
The study found that the chain-of-thought prompt, designed to mimic human reasoning, lifted GPT-4's prediction accuracy to roughly 60%, well above that of human analysts, whose accuracy typically sits in the low 50% range.
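As a rough illustration of what such prompting looks like in practice, here is a minimal sketch using the OpenAI Python client. The prompt wording, the model choice, and the predict_earnings_direction helper are assumptions made for this example; they are not the researchers' actual materials.

```python
# Hypothetical sketch of a chain-of-thought prompt for earnings-direction
# prediction, loosely following the approach described in the study.
# Prompt text and helper names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_PROMPT = """You are a financial analyst. You are given a company's
anonymised, standardised balance sheet and income statement.

Step 1: Identify notable trends in the line items across the reported periods.
Step 2: Compute key ratios (e.g. operating margin, asset turnover, leverage).
Step 3: Interpret what these trends and ratios imply about future performance.
Step 4: Predict whether earnings will INCREASE or DECREASE next period,
state the expected magnitude, and give a confidence level.

Financial statements:
{statements}
"""

def predict_earnings_direction(statements_text: str) -> str:
    """Ask the model to reason step by step before giving a prediction."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": COT_PROMPT.format(statements=statements_text)}],
        temperature=0,  # keep the output as deterministic as possible
    )
    return response.choices[0].message.content
```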
The study's authors suggest the LLM's proficiency in identifying financial patterns and business concepts, even with limited data, highlights its potential to become a crucial tool for future financial decision-making.
The researchers also built trading strategies that leverage GPT-4's forecasts. These strategies proved more profitable than the broader stock market, delivering higher Sharpe ratios and alpha.
"We find that the long-short strategy based on GPT forecasts outperforms the market and generates significant alphas and Sharpe ratios," the study said. To those unaware, the GPT-4 technology also drives OpenAI's widely popular online chatbot, ChatGPT.
GPT-4 Successor Expected to Deliver 'Next Level of Capabilities'
Even as GPT-4 makes waves in financial forecasting, OpenAI, its creator, is already preparing to unveil its successor, an even more advanced AI system. In a recent blog post, the Sam Altman-led AI company expressed anticipation for their next model.
This successor to GPT-4 is expected to deliver "the next level of capabilities" on the path towards Artificial General Intelligence (AGI), a machine capable of human-level cognitive abilities. Last December, Altman denied speculation regarding a leaked document (shared by tipster @futuristflower) linked to GPT-4.5, the presumed successor to GPT-4.
Despite the denial, a user named Dimitris Papailiopoulos shared a screenshot of ChatGPT Plus suggesting it might be powered by "gpt5-feeltheAGI-this-is-what-ilya-saw-turbo-512k." OpenAI's recent announcement of a GPT-4 successor lends credence to this earlier speculation.
OpenAI further highlighted that the new model will be a foundation for various AI products, including chatbots, virtual assistants like Apple's Siri, search engines, and image generation tools.
The company also announced the formation of a new Safety and Security Committee tasked with investigating potential risks associated with the latest model and future AI technologies. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," OpenAI said.
OpenAI is spearheading the rapid advancement of AI technology, navigating concerns about potential risks like misinformation, job displacement, and even existential threats. While experts debate the arrival of AGI, companies like OpenAI, Google, Meta, and Microsoft have been consistently pushing the boundaries of AI capabilities, with significant progress evident every two to three years.
Building on its groundbreaking GPT series, OpenAI recently unveiled GPT-4o, a major stride in AI development. The "o" in GPT-4o stands for "omni," reflecting the model's ability to seamlessly handle text, audio, and image inputs and outputs, paving the way for a new era of human-computer interaction.