Multiple employees of Samsung’s Korea-based semiconductor business plugged lines of confidential code into ChatGPT, effectively leaking corporate secrets that could be included in the chatbot’s future responses to other people around the world.
One employee copied buggy source code from a semiconductor database into the chatbot and asked it to identify a fix, according to The Economist Korea. Another employee did the same with code for a different piece of equipment, requesting “code optimization” from ChatGPT. After a third employee asked the AI model to summarize meeting notes, Samsung executives stepped in. The company limited each employee’s prompts to ChatGPT to 1,024 bytes.
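A 1,024-byte cap is not the same as a 1,024-character cap: in UTF-8, Korean text takes three bytes per syllable, so the limit bites much sooner for non-ASCII prompts. A minimal sketch of such a check (purely illustrative; Samsung's actual enforcement mechanism was not disclosed):

```python
# Hypothetical sketch of a prompt byte-limit check -- not Samsung's
# actual implementation, just an illustration of the reported cap.

MAX_PROMPT_BYTES = 1024  # the limit reported by The Economist Korea

def within_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt fits the cap once encoded as UTF-8 bytes."""
    return len(prompt.encode("utf-8")) <= limit

# ASCII text: 1 byte per character, so 1,024 characters just fit.
print(within_limit("a" * 1024))   # True
print(within_limit("a" * 1025))   # False

# Hangul: 3 bytes per syllable in UTF-8, so the same cap allows
# only about 341 characters of Korean text.
print(len("반도체".encode("utf-8")))  # 9 bytes for 3 syllables
```

The byte-versus-character distinction matters in practice: a policy stated in bytes is roughly three times stricter for Korean-language prompts than an equivalent character count would suggest.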
Just three weeks earlier, Samsung had lifted its ban on employees using ChatGPT, which it had imposed over concerns about exactly this kind of data leak. After the recent incidents, it’s considering reinstating the ban, as well as disciplinary action for the employees involved, The Economist Korea says.
Samsung Korea offices (Credit: Samsung)
“If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network,” reads an internal memo. “As soon as content is entered into ChatGPT, data is transmitted to and stored on an external server, making it impossible for the company to retrieve it.”
The OpenAI user guide warns users against this behavior: “We are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.” It says the system uses all questions and text submitted to it as training data.
The use of ChatGPT to find and fix buggy code has become pervasive in software engineering. If users ask a coding question, it attempts to identify the solution and provides a snippet of code with a one-click copy-and-paste button. The chatbot’s coding knowledge is apparently enough for it to get hired at Google as an entry-level engineer.
ChatGPT suggests a code fix and provides a snippet to copy and paste. (Credit: ChatGPT, Emily Dreibelbis)
This can replace the time (sometimes hours) engineers spend searching sites like Stack Overflow, a popular troubleshooting resource. As a testament to ChatGPT’s high usage rates among software engineers, Stack Overflow banned ChatGPT-generated responses just days after the chatbot’s Nov. 30, 2022 release, over concerns about inaccurate answers that look believable.
“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” Stack Overflow says in a post updated four days ago. “The volume of these answers (thousands)…has effectively swamped our volunteer-based quality curation infrastructure.”
Samsung is reportedly considering building its own AI to prevent future mishaps, though engineers could likely bypass any measures by using ChatGPT on personal devices. Microsoft Bing and Google Bard can also detect bugs in lines of code, so banning ChatGPT is not a bulletproof solution.