Hello and welcome to Eye on AI.
Last week, I was in Singapore at the first Fortune Brainstorm AI conference we’ve held in Asia. The conference attracted top regional executives from some of the largest tech companies and biggest names in AI—Microsoft, OpenAI, and Google—as well as IBM and hardware companies like Qualcomm and HP. There were also leaders from some of Asia’s most successful tech companies, including super-app Grab, Japanese e-commerce giant Rakuten, and Singapore’s DBS Bank, along with some of the region’s largest insurance firms, most prominent venture capital investment houses, and founders of some of the region’s hottest AI startups, not to mention key officials from Singapore’s tech-savvy government.
I want to share some impressions and highlights from a fascinating two days of panels and conversations.
It was clear that AI, and generative AI in particular, is diffusing rapidly across the world. If anything, Asia may be adopting the technology faster than elsewhere. I was struck by the large number of executives at the conference who raised their hands when asked if they had not only conducted proofs of concept using generative AI but actually had generative AI applications in full deployment. Far more hands went up than when I asked the same question at our Brainstorm AI conferences in San Francisco in December and in London in April. (When I pressed the executives in Singapore about what applications were in deployment, the leading answer involved chatbots or AI coaches that help human call center operators resolve customer questions faster and more accurately.)
Many of the issues preoccupying executives in Asia about how to build AI products successfully—particularly concerns about reliability and cost—are the same ones that are top of mind for executives in the U.S. and Europe. Debanjan Saha, the CEO of AI company DataRobot, said that businesses had to figure out how to close three essential “gaps” with AI before they could realize its full potential: the value gap, the confidence gap, and the expertise gap. It was clear from many sessions that businesses are struggling with AI’s lack of reliability, but that many companies are finding ways to use AI effectively through techniques such as retrieval augmented generation (RAG), fine-tuning, and mixing AI models of different sizes, with some models moderating the outputs of others.
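For readers curious about what RAG actually involves, here is a minimal, illustrative sketch. It is not any particular company’s implementation: a real system would use vector embeddings and a call to an LLM, whereas here simple keyword overlap stands in for the retriever and the “generation” step is just prompt assembly. All document text and function names are hypothetical.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Keyword overlap stands in for a real embedding-based retriever;
# the final step assembles a grounded prompt instead of calling an LLM.

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model answers from company data."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical call-center knowledge base.
docs = [
    "Refunds are processed within 5 business days of approval.",
    "Our call centers operate 24/7 in English and Mandarin.",
]
prompt = build_prompt("How long do refunds take?", docs)
print(prompt)
```

The point of the pattern is that the model is constrained to answer from retrieved, up-to-date company documents rather than from its training data alone, which is one way firms address the reliability concerns raised at the conference.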
But some issues are of particular salience in the region—ensuring that Gen AI solutions work well in a variety of local languages, for one. Singapore has helped to train its own LLM for Southeast Asian languages, called SEA-LION, although there was debate at the conference about whether this was a useful national project over the long term. It was also an open question whether open-source models, including small language models, from companies such as Meta or Mistral—or a Chinese company like ModelBest—could provide good coverage of these languages, with better overall performance across tasks and better optimizations over time to run on a variety of different devices, including mobile phones.
Another issue that takes on even more prominence in Southeast Asia is the computing and energy demands of Gen AI and how they may affect countries’ sustainability plans. Singapore, which is both power- and water-constrained, had for a time barred the construction of any new data centers in the country. But last year it allowed an additional 80 megawatts of data center capacity to be built, and it has approved an additional 300 megawatts to come online soon. Jacqueline Poh, managing director of Singapore’s Economic Development Board, said she was confident the city-state would have enough data center capacity to serve the AI needs of its economy. And Josephine Teo, Singapore’s minister for digital development and information, noted that the country already has some of the densest data center infrastructure on the planet.
In one fascinating session, Tim Rosenfield, from Australian company Sustainable Metal Cloud, explained how specialized immersion cooling (where the entire server rack is surrounded by flowing oil-based coolant that transfers heat away much more efficiently than air- or water-cooling alone) and reengineered server racks could reduce AI’s energy demands and carbon footprint by 50%. On the same panel, former IBM chief AI officer Seth Dobrin, who is now a general partner with venture capital fund 1infinity Ventures, said the idea that we will need ever-bigger models to deliver AI capabilities was essentially a dead end from a sustainability perspective, and that the industry should turn back toward smaller, more specialized models if it wanted to deploy AI without giving up on the climate.
I chaired two of the more optimistic sessions at the conference: one on how AI can accelerate our quest to find cures for diseases and perhaps even help us discover ways to slow or reverse natural aging, and the other on AI’s transformational, and overwhelmingly positive, impacts on education.
If you weren’t able to make it to Singapore and want to see what you missed, you can catch up on most of the conference sessions on Fortune’s website here. With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Before we get to the news... If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my new book, Mastering AI: A Survival Guide to Our Superpowered Future. It's out now in the U.S. from Simon & Schuster and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.