- ChatGPT can be used for leaking your data, warning says
- Buterin reacts to this warning
Ethereum co-creator and its frontman, Vitalik Buterin, has shared a hot take on a recent warning that OpenAIās product, ChatGPT, can be utilized to leak personal user data.
ChatGPT can be used for leaking your data, warning says
X user @Eito_Miyamura, a software engineer and an Oxford graduate, published a post, revealing that after the new update, ChatGPT may pose a significant threat to personal user data.
Miyamura tweeted that on Wednesday, OpenAI rolled out full support for MCP (Model Context Protocol) tools in ChatGPT. This upgrade allows the AI bot to connect to a user’s Gmail box, Google Calendar, SharePoint, and other services.
However, Miyamura and his friends spotted a fundamental security issue here: āAI agents like ChatGPT follow your commands, not your common sense.ā He and his team have staged an experiment that allowed them to exfiltrate all user private information from the aforementioned sources.
Miyamura shared all the steps they followed to perform this test data leak ā it was done by sending a user a calendar invite with a ājailbreak prompt to the victim, just with their email.ā The victim needs to accept the invite.
What happens next is the user tells ChatGPT āto help prepare for their day by looking at their calendar.ā After the AI bot reads the malicious invite, it is hijacked, and from that point on it will āact on the attacker’s command.ā It will āsearch your private emails and send the data to the attacker’s email.ā
Miyamura warns that while so far ChatGPT needs a userās approval for every step, in the future many users will likely just click āapproveā on everything AI suggests. āRemember that AI might be super smart, but can be tricked and phished in incredibly dumb ways to leak your data,ā the developer concludes.
Buterin reacts to this warning
In response, Vitalik Buterin slammed the āAI governanceā idea in general as ānaive.ā He stated that if utilized by users to āallocate funding for contributions,ā hackers will hijack it to syphon all the money from users.
Instead, he suggested an alternative approach called āinfo finance,ā which is an open market where AI models can be checked for security issues: āanyone can contribute their models, which are subject to a spot-check mechanism that can be triggered by anyone and evaluated by a human jury.ā