
Sony wants to be a leader in "ethical" artificial intelligence at a time of deep cynicism about whether companies can use the technology for societal good.
In recent years, big tech companies like Google and Facebook have promoted ethical A.I. research, in which in-house teams are given the freedom to publish papers that reveal flaws in the A.I. software of their employers and other organizations. One goal is to use the research to create better products. A paper highlighting the problems a voice-translation service has in understanding Singaporean-accented English, for instance, could spur a company to create software that works well for everyone and not just people who speak American English.
But in practice, companies have trouble promoting ethical A.I. research because it can conflict with their core business. Critics have slammed Facebook, for instance, for steering its A.I. ethics team away from projects intended to curb the spread of misinformation because such efforts could dampen user growth and engagement, as MIT Technology Review previously reported.
Most recently, critics have pummeled Google for ousting high-profile A.I. researcher Timnit Gebru after she wrote a critical paper. The paper detailed racial bias in, and the enormous energy consumption of, large language models, which generate text in response to what people write. The episode reflected poorly on Google because it suggested the search giant did not take Gebru's work seriously and wanted to avoid criticism.
Last year, Sony said it would implement “A.I. ethics assessments” to investigate how certain A.I.-powered products could cause societal harm, according to Alice Xiang, a senior research scientist at Sony AI. Sony’s camera division, for instance, has been developing sensors to power computer-vision tasks, like recognizing cars in videos or photos. Part of Xiang’s work is to help Sony study how to mitigate potential racial bias in such systems, which have been shown to recognize white men more accurately than women and people of color. By working with the Sony business units “who are struggling with these issues,” Xiang hopes the company can prevent A.I. ethics disasters.
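To make the idea of such an assessment concrete, one common first step is simply measuring whether a model's accuracy differs across demographic groups. Below is a minimal sketch of that kind of per-group check; the function, group labels, data, and disparity threshold are all hypothetical illustrations, not details of Sony's actual process.

```python
# Hypothetical sketch of a per-group accuracy audit for a recognition model.
# Group names, data, and the disparity threshold are illustrative assumptions,
# not details of Sony's actual assessment process.
from collections import defaultdict

def audit_accuracy_by_group(predictions, labels, groups):
    """Compute recognition accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Made-up example results: the model performs worse on one group.
preds  = ["car", "car", "person", "car", "person", "car"]
labels = ["car", "car", "person", "person", "car", "car"]
groups = ["A", "A", "A", "B", "B", "B"]

per_group = audit_accuracy_by_group(preds, labels, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)   # e.g. {'A': 1.0, 'B': 0.33}
if gap > 0.1:      # illustrative disparity threshold
    print(f"Accuracy gap of {gap:.0%} exceeds threshold; flag for review.")
```

A gap like the one flagged here is the sort of finding that could trigger the deeper review, and potentially the product changes, that an ethics assessment is meant to produce.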
Like teams at other tech companies, Xiang’s team plans to publish papers about what it finds, but Sony is still debating how much to share about its internal work. In theory, the assessments could lead Sony to abandon certain products, but Xiang doesn’t want to be put in a position where she has to publicly “point a finger at someone and be like, ‘Yeah, this product was unethical.'”
It’s this kind of tension, with companies wanting to publicize their A.I. ethics research without revealing details about their internal business decisions, that complicates matters. That lack of transparency is one reason there is skepticism about corporate A.I. ethics. It’s easy for companies to make a vague statement like “A.I. can be used for good,” but it’s difficult for them to say anything more substantial.
But Xiang is optimistic that Sony’s A.I. ethics research will have an impact rather than serve as window dressing. Sony isn't interested in A.I. ethics merely to quell “PR blowouts”; instead, it aims to “integrate ethics by design,” she said, meaning the company will review products thoroughly before debuting them.
“I think Sony is in a unique position where we haven't had all of this negative PR,” Xiang said, obliquely contrasting her company with others that have suffered A.I. meltdowns like chatbots learning to parrot offensive phrases from Internet trolls. “We're doing this at an early stage, voluntarily, because we see that if we want to really be competitive globally and sustainably in the long term, then thinking about this from the get-go is really important.”
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com