
simply
- In the demo, the COMET ‘SAI Assistant posted a personal email and code along the embedded prompt.
- Brave said this vulnerability remains for several weeks since he claimed that embarrassment had solved it.
- Experts warn that quick injection attacks have a serious security gap in the AI agent system.
Brave Software found a security defect in the comet browser of Perplexity AI, showing how the attacker can deceive AI assistant into personal user data leaks.
The brave researchers in the concept proof demo announced on August 20 confirmed the hidden instructions in the reddit opinion. When the AI assistant in COMET was asked to summarize the page, it was not just summarized, but followed the hidden command.
The embarrassment challenged the seriousness of the result. The spokesman said decoding The problem is that “it was patched before someone was noticed and the user data was not damaged. The spokesman said, “We have a very powerful bounty program.” We worked directly with Brave to identify and repair. “
Brave, which is developing its own agent browser, insisted that a few weeks after the patch, the defect could be exploited and the comet’s design was open to further attacks.
Brave said that this vulnerability depends on how agent browsers like comet browsers handle web content. The report said, “If the user asks you to summarize the page, the comet does not distinguish between the user’s instructions and the untrusted content, but directly supplies some of the page to the language model.” The attacker can include the hidden commands that AI will run from the user. “
Rapid injection: old ideas, new goals
This type of explo it is called a quick injection attack. Instead of deceiving a person, hide the instructions with ordinary text to deceive the AI system.
Matthew Mullins, a lead hacker of public security, said, “It is similar to traditional injection attacks such as SQL injection, LDAP injection, and command injection. decoding. “The concept is not new, but the method is different. Instead of structured code, we are exploiting natural languages.”
Security researchers warned for months of rapid injection as a major headache as the AI system gained more autonomy. In May, Princeton researchers showed how encryption AI agents could be manipulated by the “memory injection” attack, where malicious information was stored in the memory of AI and later acted as if it was as if it was as if it was as if it was as if it were as if it were as if it were as if it were as if it were as if it were as if it was as if it were as if it were as if it were as if it were as if it were as if it were as if it were as if it were as if it were.
Even Simon Willison Developers who created this term Rapid injectionThe problem is beyond the comet. “The brave security team reported seriously injected vulnerabilities in IT, but Brave is developing similar features that appear to have similar problems,” he posted to X.
SHIVAN SAHIB, vice president of personal information protection and security, said that the upcoming browser will include a series of easing sets to help reduce the risk of indirectly quick injections.
“We plan to quarantine agent browsing into its own storage areas and browsing sessions, so that users accidentally give access to agents on banks and other sensitive data.” decoding. “We will soon share more details.”
Greater danger
Comet demonstrations emphasize wider problems. The AI Agent has strong power but is being deployed with weak security control. Large language models are especially vulnerable to hidden promptes because they can misinterpret the instructions or follow the characters literally.
Mullins said, “These models can be hallaked.” They say, ‘What is the taste of your favorite twizzler?’ We are instructed to make a homemade gun. ”
If the AI agent can directly access emails, files and live user sessions, the steak is high. “Everyone wants to wield AI into everything,” Merlins said. “But no one tests what permissions in the model or what happens when leaking.”
In general, it is intelligent newsletter
A weekly AI trip by GEN, a creation AI model.