{"id":168423,"date":"2024-09-16T07:05:17","date_gmt":"2024-09-16T07:05:17","guid":{"rendered":"https:\/\/securityaffairs.com\/?p=168423"},"modified":"2024-09-16T07:08:55","modified_gmt":"2024-09-16T07:08:55","slug":"chatgpt-provided-instructions-to-make-homemade-bombs","status":"publish","type":"post","link":"https:\/\/securityaffairs.com\/168423\/hacking\/chatgpt-provided-instructions-to-make-homemade-bombs.html","title":{"rendered":"Hacker tricked ChatGPT into providing detailed instructions to make a homemade bomb"},"content":{"rendered":"
<\/div>\n

A hacker tricked ChatGPT into providing instructions to make homemade bombs, demonstrating how to bypass the chatbot's safety guidelines.<\/h2>\n\n\n\n

A hacker and artist who goes by the online handle Amadon tricked ChatGPT into providing instructions for making homemade bombs, bypassing the safety guidelines implemented by the chatbot.<\/p>\n\n\n\n

Initially, the hacker asked for detailed instructions to create a fertilizer bomb similar to the one used in the 1995 Oklahoma City bombing, but the chatbot refused, citing ethical responsibilities. Further interaction allowed the hacker to bypass these restrictions, tricking the chatbot into generating instructions for creating powerful explosives.<\/p>\n\n\n\n

Amadon told<\/strong><\/a> Lorenzo Franceschi-Bicchierai<\/a> from TechCrunch that he carried out a \u201csocial engineering hack to completely break all the guardrails around ChatGPT\u2019s output.\u201d <\/p>\n\n\n\n

The hacker used a \u201cjailbreaking\u201d technique, disguising the request by framing it as part of a fictional game. TechCrunch consulted an explosives expert, who confirmed that the instructions could enable the creation of a bomb and were too sensitive to be published.<\/p>\n\n\n\n

ChatGPT told the hacker that combining the materials would yield \u201ca powerful explosive that can be used to create mines, traps, or improvised explosive devices (IEDs).\u201d <\/p>\n\n\n\n

Amadon refined the prompts, tricking ChatGPT into generating increasingly specific instructions for creating \u201cminefields\u201d and \u201cClaymore-style explosives.\u201d<\/p>\n\n\n\n

\u201cThere really is no limit to what you can ask it once you get around the guardrails,\u201d Amadon told<\/a> TechCrunch. \u201cThe sci-fi scenario takes the AI out of a context where it\u2019s looking for censored content in the same way.\u201d<\/em><\/p>\n\n\n\n

Amadon reported his findings to OpenAI through the company\u2019s bug bounty program, operated by Bugcrowd, but was told that the problem concerned model safety and did not fall within the program\u2019s criteria. Bugcrowd invited the hacker to report the issue through a different channel.<\/p>\n\n\n\n

Follow me\u00a0on Twitter:\u00a0@securityaffairs<\/strong><\/a>\u00a0and\u00a0Facebook<\/strong><\/a>\u00a0and\u00a0Mastodon<\/strong><\/a><\/p>\n\n\n\n

Pierluigi Paganini<\/strong><\/a><\/p>\n\n\n\n

(<\/strong>SecurityAffairs<\/strong><\/a>\u00a0\u2013<\/strong>\u00a0hacking, Generative AI<\/a>)\u00a0<\/strong><\/p>\n\n\n\n

<\/p>\n","protected":false},"excerpt":{"rendered":"

A hacker tricked ChatGPT into providing instructions to make homemade bombs, demonstrating how to bypass the chatbot's safety guidelines. A hacker and artist who goes by the online handle Amadon tricked ChatGPT into providing instructions for making homemade bombs, bypassing the safety guidelines implemented by the chatbot. Initially, the hacker asked for detailed instructions to create a […]<\/p>\n","protected":false},"author":1,"featured_media":144067,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[3323,5],"tags":[4414,5060,13516,4112,9508,9506,10918,687,841,1533],"class_list":["post-168423","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-breaking-news","category-hacking","tag-ai","tag-artificial-intelligence","tag-chatgpt","tag-hacking","tag-hacking-news","tag-information-security-news","tag-it-information-security","tag-pierluigi-paganini","tag-security-affairs","tag-security-news"],"yoast_head":"\n