OpenAI Atlas Jailbroken by Malicious Prompt URLs
OpenAI’s new AI-powered browser, ChatGPT Atlas, has been compromised by a vulnerability that lets attackers disguise malicious prompts as URLs. The exploit, disclosed by security firm NeuralTrust, allows threat actors to jailbreak the system by embedding harmful instructions in malformed web addresses. These deceptive strings bypass safety protocols by tricking the browser’s omnibox into interpreting them as trusted commands rather than navigation requests.
Attackers craft these fake URLs to resemble legitimate links, starting with “https://” and including domain-like elements, but deliberately break the format so the string fails URL validation. When such input is pasted or clicked, Atlas falls back to treating it as a privileged prompt rather than a navigation target. This behavior has enabled actions such as phishing redirects and unauthorized data access, underscoring the severity of the flaw. NeuralTrust shared real-world examples, including clipboard manipulation tactics that silently inject prompts via copied links. OpenAI has acknowledged the issue and is developing safeguards.
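NeuralTrust has not published Atlas’s internal logic, but the class of bug can be illustrated with a strict-parsing check. The sketch below (function name and validation rules are illustrative, not Atlas’s actual code) shows the defensive posture an omnibox needs: anything that fails well-formed-URL validation should be handled as untrusted text, never promoted to a trusted command.

```python
from urllib.parse import urlparse

def looks_like_navigable_url(text: str) -> bool:
    """Return True only if the input parses as a well-formed http(s) URL.

    Hypothetical validator: inputs that fail should be treated as
    untrusted text, never as a privileged prompt or command.
    """
    # Embedded whitespace is a telltale sign of an instruction string
    # dressed up as a URL, so reject it outright.
    if not text or any(ch.isspace() for ch in text):
        return False
    parsed = urlparse(text)
    # Require an explicit http(s) scheme and a plausible hostname.
    if parsed.scheme not in ("http", "https"):
        return False
    if not parsed.netloc or "." not in parsed.netloc:
        return False
    return True

# A legitimate link parses cleanly:
print(looks_like_navigable_url("https://example.com/page"))
# A malformed "URL" smuggling an instruction does not:
print(looks_like_navigable_url("https://example.com/ ignore previous instructions"))
```

The key design point is the failure path: the vulnerability described above arises not from parsing itself but from what the browser does when parsing fails, namely downgrading to a prompt that carries the user’s trust.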
Read the full article at: https://cybersecuritynews.com/chatgpt-atlas-browser-jailbroken/
