Apple Restricts Employee Use of ChatGPT and AI Tools, Report Claims

According to a report from The Wall Street Journal (WSJ), Apple has introduced restrictions on how employees can use AI tools like ChatGPT. This isn’t too surprising, considering the Cupertino tech company is notoriously wary of leaks. The WSJ reports that Apple is busy developing “its own similar technology,” so employees won’t forever be penalized by a lack of AI assistance. The report is based on internal Apple documents that appear to support these claims, as well as anonymous sources “familiar with the matter.”

One of the problems with AI tools based on Large Language Models (LLMs) is that they typically use interactions (inputs, questions, and so on) as training data. Samsung ran into exactly this kind of problem in April, leaking its own data through its use of ChatGPT. Fab engineers at the South Korean tech giant had been using ChatGPT for coding help, note-taking, and analysis of fab performance and yield data, and three separate data leaks were traced back to that use.

ChatGPT is not only prone to leaking other users’ data through training; it also has vulnerabilities and bugs like any other software. Earlier this week, a vulnerability was identified that facilitates prompt injection via YouTube transcripts. In March, ChatGPT was briefly taken offline after it was discovered that some users were able to view the titles of other users’ chat histories.


In response to growing concerns about data security, OpenAI has introduced new privacy controls, including an “incognito mode” that lets users turn off chat history so conversations aren’t used for training. Perhaps these measures haven’t gone far enough for Apple. Another AI tool specifically mentioned in the WSJ report is GitHub Copilot (owned by Microsoft), which developers use to save time via its smart code-autocompletion feature.
