Google fears inherent vulnerabilities in artificial intelligence chatbots could leak sensitive corporate information, raising questions over the suitability of its own Bard A.I. product for commercial use.
Citing unnamed sources briefed on the matter, Reuters reported on Thursday that the $1.6 trillion tech giant warned employees not to input certain types of data into advanced chatbots for fear it could be exploited. That warning included its own entrant in the A.I. race.
“Don’t include confidential or sensitive information in your Bard conversations,” stated a Google privacy notice that was updated at the start of June, according to the newswire.
Not only can human reviewers read the chats, but researchers have found that similar A.I. models can reproduce data they absorbed during training, creating a backdoor leak.
A.I. race
Big Tech has been locked in a race over who can develop the first killer applications based on generative artificial intelligence. Microsoft affiliate OpenAI struck the first blow against Google’s DeepMind by unveiling ChatGPT on Nov. 30, taking the world by storm.
Amid all the hype over generative A.I.’s ability to pass professional entrance tests like the bar exam, concerns have already begun to arise owing to its penchant for “hallucinating”—presenting inaccurate or flat-out incorrect information as incontrovertible fact.
Should it prove a security risk as well, there could be serious implications for its commercial use. Google is currently rolling Bard A.I. out in more than 180 countries and in 40 languages, including higher-priced versions for business clients that do not absorb data into public A.I. models.
Yet if the company cannot trust even its own chatbot to keep its corporate secrets from being reverse engineered by rivals, how ultimately can its customers?
Google’s A.I. track record has not exactly been reassuring, either.
The company’s perceived lackadaisical behavior toward ethical A.I. questions indirectly led Elon Musk to cofound its main rival, OpenAI, while Google’s own A.I. employees nearly revolted at one point over its decision to work for the Pentagon.
Last year Google fired a software engineer who falsely claimed the company’s artificial intelligence had achieved sentience. Large language models in fact merely mimic human intelligence, using probability to predict the most likely output.
Finally, in its haste to keep up with OpenAI, Google rushed out the presentation of Bard in February, including a promotional video that unwittingly revealed how error-prone it was. This early example of a chatbot’s tendency to hallucinate temporarily cost parent company Alphabet $100 billion in lost market value.
Google did not immediately respond to a request for comment, but in a statement made to Reuters, it did not deny the report.