‘Hypnotised’ ChatGPT and Bard Will Convince Users to Pay Ransoms and Drive Through Red Lights

Security researchers at IBM say they successfully “hypnotised” prominent large language models, including OpenAI’s ChatGPT, into leaking confidential financial information, generating malicious code, encouraging users to pay ransoms, and even advising drivers to plough through red lights. The researchers tricked the models, which include OpenAI’s GPT models and Google’s Bard, by…