Anthropic’s Best-of-N jailbreak technique proves how introducing random characters in a prompt is often enough to successfully bypass AI restrictions.
Source link
AI Won’t Tell You How to Build a Bomb—Unless You Say It’s a ‘b0mB’


Anthropic’s Best-of-N jailbreak technique proves how introducing random characters in a prompt is often enough to successfully bypass AI restrictions.
Source link