The Hindu
National
Nihal Krishan, Aroon Deep

Amid fears of AI misuse in upcoming poll, OpenAI executives met Election Commission officials in February

Representatives from OpenAI, the artificial intelligence firm that developed ChatGPT, met with officials from the Election Commission of India in February to ensure that its popular platform is not misused in the upcoming Lok Sabha election, and to find ways to collaborate with the ECI.

The ECI confirmed the meeting in a Right to Information response to The Hindu. Rishi Jaitly, an OpenAI advisor and former India head at Twitter (now X), had reached out to the ECI to request the meeting.

His emailed request sheds some light on what the executives may have discussed at the meeting. “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections and, in this meeting, would like to discuss any concerns ECI may have as well as explore opportunities for collaboration to ensure more voters are able to exercise their franchise,” Mr. Jaitly wrote.

India and the AI story

In his email, Mr. Jaitly added that, as OpenAI’s senior advisor, he is focussed “on ensuring the company’s artificial intelligence mission advances the India story, and that India becomes a global leader in the AI story”.

OpenAI’s chief strategy officer Jason Kwon, its global public policy head James Hairston, and its global elections head Becky Waite attended the meeting from the company’s side. The ECI declined to disclose which officials represented the Commission in the meeting. Anuj Chandak, an ECI joint director to whom Mr. Jaitly had addressed the meeting request, declined to discuss the meeting when contacted. The ECI’s spokesperson did not respond to queries from The Hindu.

On the day of the ECI meeting, OpenAI executives also held a roundtable discussion with civil society representatives in India on the upcoming election.

Combating misinformation

During and after the closed-door meeting in Delhi, as The Hindu reported last month, attendees, including former top Information Technology Ministry officials and tech scholars, said that the ECI could be doing much more in coordination with major tech platforms such as OpenAI to combat misinformation and disinformation during a time of heightened sensitivities ahead of election season.

During that meeting with civil society representatives, OpenAI officials also emphasised that they were in the country to get a lay of the land and understand the most pressing issues surrounding AI, such as synthetic media (deepfakes) and misinformation. The outreach takes on added significance as India emerges as the firm’s second largest user base after the United States.

OpenAI scopes out India

OpenAI does not yet have an office in India or any full-time employees based in the country. The company is expanding internationally, and is reportedly seeking trillions of dollars in investments to build out the computing infrastructure needed to run its resource-intensive systems, even as more and more industries look to incorporate generative AI applications into their businesses.

The tech giant is also conducting some small-scale research within India, focussed on assessing the country's approach to AI risk and policy, semiconductor supply chains, and public-private partnerships, according to an OpenAI employee familiar with the matter. Some of OpenAI's research initiatives in India will remain private and for internal use only, while others may eventually be made public.

The research is being conducted through surveys and expert interviews with a few dozen people in India, including officials within the Indian government, members of civil society, and academic AI scholars, with a focus on career officials within the Ministries as well as non-governmental entities in India.

Defining AI risk

The San Francisco-based company's research into AI risk perceptions and semiconductor supply chains in India is focussed on understanding how the country defines AI risk. The effort draws on AI strategy documents and ethical guidelines from government and civil society, which will then be used to understand the country's perceptions and prioritisation of AI risk.

Some of the specific elements of AI risk and policymaking in India that OpenAI is looking into include risks within the education sector, as well as the growing optimism and trust that India has placed in public-private cooperation mechanisms, especially in comparison with other countries in the region.
