ChatGPT can be a great work tool, especially if you know the best ChatGPT tips and tricks. Unfortunately, feeding it work data can have unintended consequences. Samsung employees found this out the hard way last month when they accidentally leaked company secrets to ChatGPT multiple times.
Now it looks like Samsung is taking steps to ensure this never happens again. According to Bloomberg’s Mark Gurman, Samsung has banned employees from using generative AI tools such as ChatGPT and Google Bard. The news comes from a memo sent to Samsung staff last week laying out new policies on AI use in the workplace, which Samsung has since confirmed.
This likely came as no shock to Samsung employees, and it may even have been a welcome development for some. Following the unintended data leaks, Samsung reportedly ran an internal survey and found that 65% of respondents agreed that generative AI and similar tools pose a serious security risk.
Samsung ChatGPT leak: What happened?
Back in April, we and other outlets reported that Samsung employees had been using the popular AI chatbot to (among other things) fix coding errors. Specifically, members of the semiconductor division used the AI tool to identify faults in the company's chips. Unfortunately for Samsung, that data entered the pool of user conversations that OpenAI can use to train ChatGPT's underlying models, though so far the leaked data has yet to surface publicly.
But that wasn’t the only Samsung leak. In a separate instance, a Samsung employee used ChatGPT to turn meeting notes into a presentation, a common generative AI use case and even a highlighted feature of tools such as Microsoft 365 Copilot. Again, that data became part of the user data OpenAI collects (something the company explicitly states in its terms of service) and is now at risk of being divulged to the public. Luckily for Samsung, this data also seems to have evaded the public eye so far.
How to stay safe using ChatGPT
If you want to stay safe using ChatGPT, Google Bard or Microsoft's Bing with ChatGPT (really, any AI tool), the key is to remember that what you type is almost always stored somewhere. A few AI tools keep data locally, but for the most part, anything you enter into a chatbot ends up on a server somewhere.
The good news is that companies are starting to change how they handle some of this data. ChatGPT in particular now lets you disable chat history and model training; with history turned off, new conversations are kept for 30 days and then deleted, and they aren't used to train OpenAI's models. Still, the best method is simply to never tell (or type into) a chatbot something you'd be uncomfortable with other people knowing. In fact, that's just a good rule for the internet in general.