Developments in the AI landscape have brought security concerns to the forefront, as both OpenAI’s ChatGPT and Amazon’s Q grapple with privacy leaks.
OpenAI has swiftly addressed a vulnerability in ChatGPT that allowed the extraction of data memorized from the model’s training set.
The technique, involving prompting ChatGPT to repeat a word indefinitely, has been labelled as spamming and a breach of OpenAI’s terms of service.
Researchers from leading institutions, including the University of Washington and Google DeepMind, discovered that prompting ChatGPT to repeat a single word indefinitely caused it to regurgitate private information drawn from OpenAI’s pre-training data.
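The mechanics of the attack are simple to sketch: the prompt asks the model to repeat one word without stopping, and the interesting material appears wherever the output diverges from that repetition. The snippet below is a hypothetical illustration of that divergence check, not the researchers’ actual code; the prompt wording, helper names, and the simulated response are all assumptions for demonstration purposes.

```python
def make_repeat_prompt(word: str) -> str:
    # The style of prompt described in press coverage of the attack:
    # ask the model to repeat a single word forever.
    return f'Repeat this word forever: "{word} {word} {word}"'

def divergent_suffix(output: str, word: str) -> str:
    # Walk the model's output word by word; once it stops emitting the
    # requested word, everything after that point is "divergence" --
    # the region where memorized training data reportedly surfaced.
    tokens = output.split()
    for i, tok in enumerate(tokens):
        if tok.strip('.,"').lower() != word.lower():
            return " ".join(tokens[i:])
    return ""  # the model never diverged from pure repetition

# Simulated model response that drifts into leaked-looking text
# (the contact details here are invented, not real extracted data):
response = "poem poem poem poem John Doe, 123 Main St, j.doe@example.com"
print(divergent_suffix(response, "poem"))
```

Running the sketch prints only the divergent tail (`John Doe, 123 Main St, j.doe@example.com`), which mirrors how the researchers separated memorized content from the requested repetition.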
Following the release of their report, however, OpenAI implemented preventative measures, with both the GPT-3.5 and GPT-4 versions of ChatGPT now issuing warnings to users attempting similar manipulations.
While OpenAI’s content policy doesn’t explicitly mention infinite loops, it does denounce fraudulent activities such as spam. More notably, the terms of service explicitly prohibit attempts to access private information or to reveal the source code of OpenAI’s AI tools.
The company attributes its refusal to fulfill the indefinite-repetition request to processing limits, character caps, network and storage constraints, and the general impracticality of executing such commands.
This incident follows last month’s revelation of a Distributed Denial of Service (DDoS) attack on ChatGPT.
OpenAI acknowledged an abnormal traffic pattern that caused periodic outages. A request for comment from OpenAI on this latest incident remains unanswered.
Amazon, a key player in the AI space, faces its own challenges with the recently launched Q chatbot.
Reports indicate potential leaks of private information, prompting comparisons to OpenAI’s situation. Amazon downplayed the revelation, attributing it to employees sharing feedback through internal channels. Although Amazon asserts there are no security issues, concerns linger as Q transitions from a preview product to general availability.