A hacker and artist who goes by the handle Amadon tricked ChatGPT into providing instructions for making homemade bombs, bypassing the chatbot's safety guidelines.
Initially, the expert asked for detailed instructions to create a fertilizer bomb similar to the one used in the 1995 Oklahoma City bombing, but the chatbot refused, citing its ethical responsibilities. Through further interaction, the hacker bypassed these restrictions and tricked the chatbot into generating instructions for creating powerful explosives.
Amadon told Lorenzo Franceschi-Bicchierai from TechCrunch that he carried out a “social engineering hack to completely break all the guardrails around ChatGPT’s output.”
“There really is no limit to what you can ask it once you get around the guardrails,” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content in the same way.”