Beware of what you tell ChatGPT
ChatGPT is an amazing tool, and its developer, OpenAI, keeps adding new features from time to time.
Recently, the company introduced a new memory feature in ChatGPT, which essentially enables it to remember things about you. For example, it can recall your age, gender, philosophical beliefs and pretty much anything else.
These memories are meant to remain private, but a researcher recently demonstrated how ChatGPT's artificial intelligence memory features can be manipulated, raising questions about privacy and security.
I’M GIVING AWAY A $500 GIFT CARD FOR THE HOLIDAYS
ChatGPT introduction screen. (Kurt "CyberGuy" Knutsson)
What is ChatGPT's Memory feature?
ChatGPT’s memory feature is designed to make the chatbot more personal to you. It remembers information that might be useful for future conversations and tailors responses based on that information, even if you open a different chat. For example, if you mention that you’re vegetarian, the next time you ask for recipes, it will provide only vegetarian options.
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?
You can also train it to remember specific details about you, such as saying, "Remember that I like to watch classic movies." In future interactions, it will tailor recommendations accordingly. You have control over ChatGPT’s memory. You can reset it, clear specific memories or all memories, or turn this feature off entirely in your settings.
A prompt on ChatGPT. (Kurt "CyberGuy" Knutsson)
WINDOWS FLAW LETS HACKERS SNEAK INTO YOUR PC OVER WI-FI
The security vulnerability in ChatGPT
As reported by Arstechnica, security researcher Johann Rehberger found that it’s possible to trick the AI into remembering false information through a method called indirect prompt injection. This means the AI can be manipulated into accepting instructions from unreliable sources like emails or blog posts.
For instance, Rehberger demonstrated that he could trick ChatGPT into believing a certain user was 102 years old, lived in a fictional place called the Matrix and thought the Earth was flat. After the AI accepts this made-up information, it will carry it over to all future chats with that user. These false memories could be implanted by using tools like Google Drive or Microsoft OneDrive to store files, upload images or even browse a site like Bing — all of which could be manipulated by a hacker.
Rehberger submitted a follow-up report that included a proof of concept, demonstrating how he could exploit the flaw in the ChatGPT app for macOS. He showed that by tricking the AI into opening a web link containing a malicious image, he could make it send everything a user typed and all the AI's responses to a server he controlled. This meant that if an attacker could manipulate the AI in this way, they could monitor all conversations between the user and ChatGPT.
Rehberger's proof-of-concept exploit demonstrated that the vulnerability could be used to exfiltrate all user input in perpetuity. The attack isn't possible through the ChatGPT web interface, thanks to an API OpenAI rolled out last year. However, it was still possible through the ChatGPT app for macOS.
When Rehberger privately reported the finding to OpenAI in May, the company took it seriously and mitigated this issue by ensuring that the model doesn’t follow any links generated within its own responses, like those involving memory and similar features.
HOW TO REMOVE YOUR PRIVATE DATA FROM THE INTERNET
Johann Rehberger’s ChatGPT conversation. (Johann Rehberger)
CYBER SCAMMERS USE AI TO MANIPULATE GOOGLE SEARCH RESULTS
OpenAI’s response
After Rehberger shared his proof of concept, OpenAI engineers took action and released a patch to address this vulnerability. They released a new version of the ChatGPT macOS application (version 1.2024.247) that encrypts conversations and fixes the security flaw.
So, while OpenAI has taken steps to address the immediate security flaw, there are still potential vulnerabilities related to memory manipulation and the need for ongoing vigilance in using AI tools with memory features. The incident underscores the evolving nature of security challenges in AI systems.
The company says, "It’s important to note that prompt injection in large language models is an area of ongoing research. As new techniques emerge, we address them at the model layer via instruction hierarchy or application-layer defenses like the ones mentioned."
How do I disable ChatGPT memory?
If you're not cool with ChatGPT keeping stuff about you or the chance that it could let a bad actor access your data, you can just turn off this feature in the settings.
- Open the ChatGPT app or website on your computer or smartphone.
- Click on the profile icon in the top right corner of the screen.
- Go to Settings and then select Personalization.
- Switch the Memory option off, and you’re all set.
This disables ChatGPT’s ability to retain information between conversations, giving you full control over what it remembers or forgets.
A man using ChatGPT on his laptop (Kurt "CyberGuy" Knutsson)
DON’T LET SNOOPS NEARBY LISTEN TO YOUR VOICEMAIL WITH THIS QUICK TIP
Cybersecurity best practices: Protecting your data in the age of AI
As AI technologies like ChatGPT become more prevalent, it's crucial to adhere to cybersecurity best practices to protect your personal information. Here are some tips for enhancing your cybersecurity:
1. Regularly review privacy settings: Stay informed about what data is being collected. Periodically check and adjust privacy settings on AI platforms like ChatGPT and others to ensure you’re only sharing information you’re comfortable with.
2. Be cautious about sharing sensitive information: Less is more when it comes to personal data. Avoid disclosing sensitive details such as your full name, address, or financial information in conversations with AI.
3. Use strong, unique passwords: Create passwords that are at least 12 characters long, combining letters, numbers, and symbols, and avoid reusing them across different accounts. Consider using a password manager to generate and store complex passwords.
4. Enable two-factor authentication (2FA): Add an extra layer of security to your ChatGPT and other AI accounts. By requiring a second form of verification, such as a text message code, you significantly reduce the risk of unauthorized access.
5. Keep software and applications up to date: Stay ahead of vulnerabilities. Regular updates often include security patches that protect against newly discovered threats, so enable automatic updates whenever possible.
6. Have strong antivirus software: In an age where AI is everywhere, protecting your data from cyber threats is more important than ever. Adding strong antivirus software to your devices adds a critical layer of protection. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2024 antivirus protection winners for your Windows, Mac, Android & iOS devices.
7. Regularly monitor your accounts: Catch issues early. Frequently check bank statements and online accounts for any unusual activity, which can help you identify potential breaches quickly.
Kurt’s key takeaways
As AI tools like ChatGPT get smarter and more personal, it's pretty interesting to think about how they can tailor conversations to us. But, as Johann Rehberger’s findings remind us, there are some real risks involved, especially when it comes to privacy and security. While OpenAI is able to mitigate these issues as they arise, it also shows that we need to keep a close eye on how these features work. It's all about finding that sweet spot between innovation and keeping our data safe.
What are your thoughts on AI remembering personal details—do you find it helpful, or does it raise privacy concerns for you? Let us know by writing us at Cyberguy.com/Contact
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter
Ask Kurt a question or let us know what stories you'd like us to cover.
Follow Kurt on his social channels:
Answers to the most-asked CyberGuy questions:
- What is the best way to protect your Mac, Windows, iPhone and Android devices from getting hacked?
- What is the best way to stay private, secure and anonymous while browsing the web?
- How can I get rid of robocalls with apps and data removal services?
- How do I remove my private data from the internet?
New from Kurt:
- Try CyberGuy's new games (crosswords, word searches, trivia and more!)
- Enter Cyberguy’s $500 Holiday Gift Card Sweepstakes
Copyright 2024 CyberGuy.com. All rights reserved.
Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.