OpenAI suffered a security breach in 2023, the New York Times reported. The American newspaper revealed that the threat actor gained access to internal discussions among researchers and other employees, but did not access the source code of the company's systems.
OpenAI had not publicly disclosed the security breach and did not notify law enforcement authorities because no information about customers or partners had been stolen.
"Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company's A.I. technologies," the New York Times reported. "The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence."
OpenAI executives believed the threat actor was a lone hacker with no link to a foreign government.
Employees were informed about the security breach, and they were concerned about cyber espionage activities conducted by nation-state actors such as Chinese APT groups.
The incident raised questions about OpenAI's security posture. Following the breach, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the board of directors arguing that the company needed to strengthen its measures to prevent foreign adversaries from stealing its secrets.
Aschenbrenner claimed that OpenAI fired him this spring for leaking information. On a recent podcast, he alluded to a previously unreported security breach at OpenAI, saying that the security measures implemented by the company were insufficient to protect key secrets against theft by foreign actors.
Chinese companies are investing significant effort in researching AI models and developing highly powerful AI systems. Many experts believe China has already surpassed the United States as the largest producer of AI talent, contributing nearly half of the world’s leading AI researchers.
Intelligence and cybersecurity experts are divided on AI development: some believe AI technologies pose a significant national security risk, while others argue there is no evidence of such risks.
However, the risk that state-sponsored hackers will target organizations and private firms remains high; for this reason, they need to adopt rigorous cybersecurity standards.
Follow me on Twitter and Mastodon
(SecurityAffairs – hacking, AI)