Telegram has updated its privacy policy informing users that it will share users’ phone numbers and IP addresses with law enforcement in response to valid legal requests.
Company CEO Pavel Durov announced the policy update this week. Telegram will comply with requests from law enforcement if the user under investigation is found to be violating the platform’s rules.
“If Telegram receives a valid order from the relevant judicial authorities that confirms you’re a suspect in a case involving criminal activities that violate the Telegram Terms of Service, we will perform a legal analysis of the request and may disclose your IP address and phone number to the relevant authorities. If any data is shared, we will include such occurrences in a quarterly transparency report published at: https://t.me/transparency.” reads the updated privacy policy.
In a message on his Telegram channel, Durov revealed that over the last few weeks, a dedicated team of moderators, leveraging AI, has worked to identify and remove problematic content from the app.
“To further deter criminals from abusing Telegram Search, we have updated our Terms of Service and Privacy Policy, ensuring they are consistent across the world. We’ve made it clear that the IP addresses and phone numbers of those who violate our rules can be disclosed to relevant authorities in response to valid legal requests,” Durov wrote on his Telegram channel. “These measures should discourage criminals. Telegram Search is meant for finding friends and discovering news, not for promoting illegal goods. We won’t let bad actors jeopardize the integrity of our platform for almost a billion users.”
Durov also revealed that Telegram Search has been enhanced to allow users to find public channels and bots.
The independent website 404Media, which first reported the change, noted that Telegram’s previous policy stated that user data would only be shared with authorities in cases of confirmed terror-related suspicions, following a court order.
Data shared with authorities will be disclosed in the company’s quarterly transparency reports, accessible via a dedicated Telegram bot.
At the time of this writing, the bot displays the following message: “We are updating this bot with current data. Please come back within the next few days.”
Durov also revealed that Telegram had improved its search feature, which is known for widespread abuse to sell and promote illegal goods. He said a dedicated team has been working over the last few weeks to remove problematic content from the platform’s search results.
At the end of August, French prosecutors formally charged Telegram CEO Pavel Durov with facilitating various criminal activities on the platform, including the spread of child sexual abuse material (CSAM), enabling organized crime, illicit transactions, drug trafficking, and fraud. The authorities announced a formal investigation of Durov following his arrest.
Durov was indicted and released by French authorities under judicial supervision, with a ban on leaving French territory.
The Telegram CEO spent more than eighty hours in police custody before being charged on August 28 with twelve offences, including “complicity in administering an online platform to enable illicit transactions as part of an organized gang,” refusal to provide necessary information for lawful interceptions, “complicity in the dissemination of child pornography by an organized gang,” drug trafficking, fraud, criminal association, and money laundering by an organized gang. Durov has been placed under judicial supervision and is prohibited from leaving French territory.
Pavel Durov was also “placed under judicial supervision, including the obligation to post a €5 million bail, the obligation to report to the police station twice a week, and the ban on leaving French territory,” said the Paris prosecutor’s office on Wednesday.
Durov was charged with refusing to provide information required by authorities to carry out legal interceptions. To avoid pretrial detention, Durov paid a €5 million bail; he cannot leave France and must report to authorities twice a week. The arrest is linked to a judicial investigation opened in France in July 2024, focused on Telegram’s lack of moderation, which authorities say has allowed extremist and malicious activities to proliferate on the platform.
Follow me on Twitter and Mastodon
(SecurityAffairs – hacking, Telegram)
Apple is seeking the dismissal of its lawsuit against Israeli spyware company NSO Group, citing the risk of “threat intelligence” information exposure.
Apple wants to dismiss its lawsuit against NSO Group due to three key developments. First, continuing the lawsuit could compromise advanced threat intelligence gathered by Apple by exposing sensitive information to third parties. Second, the spyware industry has diversified, making a lawsuit against NSO less impactful, as other spyware companies continue their operations. Third, obstacles in obtaining critical information from NSO undermine the effectiveness of the legal action. Apple pointed out that it prefers to focus its efforts on developing technical measures to protect users from spyware like Pegasus.
The IT giant fears that the disclosures of its threat intelligence related to commercial spyware operations could aid NSO and other surveillance firms.
“Apple’s teams work tirelessly to protect the critical threat-intelligence information that Apple uses to protect its users worldwide. Because of these efforts, along with the efforts of others in the industry and national governments to combat the rise of commercial spyware, Defendants have been substantially weakened,” reads the court filing. “At the same time, unfortunately, other malicious actors have arisen in the commercial spyware industry. It is because of this combination of factors that Apple now seeks voluntary dismissal of this case.”
The court filing referenced an article published by The Guardian reporting that Israeli officials seized files from NSO Group’s headquarters.
“The Israeli government took extraordinary measures to frustrate a US lawsuit that threatened to reveal closely guarded secrets about one of the world’s most notorious hacking tools, leaked files suggest,” reads the article published by the Guardian and mentioned in the court filing. “Israeli officials seized documents about Pegasus spyware from its manufacturer, NSO Group, in an effort to prevent the company from being able to comply with demands made by WhatsApp in a US court to hand over information about the invasive technology.”
The officials requested an Israeli court to keep this action secret, even from parties involved in Meta’s ongoing WhatsApp hacking lawsuit against NSO.
Leaked Israeli Ministry of Justice communications revealed concerns that sensitive information could be accessed by Americans.
“While Apple takes no position on the truth or falsity of the Guardian Story described above, its existence presents cause for concern about the potential for Apple to obtain the discovery it needs,” reads the court filing.
In November 2021, Apple sued NSO Group and its parent company Q Cyber Technologies in a U.S. federal court for illegally targeting its customers with the surveillance spyware Pegasus.
According to the lawsuit, NSO Group is accountable for hacking into Apple’s iOS-based devices using zero-click exploits. The software developed by the surveillance firm was used to spy on activists, journalists, researchers, and government officials.
Apple also announced a $10 million contribution to support academic research into unmasking illegal surveillance activities.
“Apple today filed a lawsuit against NSO Group and its parent company to hold it accountable for the surveillance and targeting of Apple users. The complaint provides new information on how NSO Group infected victims’ devices with its Pegasus spyware. To prevent further abuse and harm to its users, Apple is also seeking a permanent injunction to ban NSO Group from using any Apple software, services, or devices.” reads the announcement published by Apple.
The legal action aims to permanently prevent the infamous company from breaking into any Apple software, services, or devices.
The complaint included details about the NSO Group’s FORCEDENTRY exploit that was used to target multiple users and drop the latest version of NSO Group’s Pegasus.
Threat actors leveraged two zero-click iMessage exploits, known respectively as the 2020 KISMET exploit and FORCEDENTRY, to infect the iPhones with spyware.
The latter exploit was discovered by Citizen Lab researchers; it is able to bypass the “BlastDoor” sandbox that Apple introduced in iOS earlier that year to block iMessage zero-click exploits.
(SecurityAffairs – hacking, Apple)
The Dutch Data Protection Authority (DPA) has fined Uber €290 million ($324 million) for allegedly failing to comply with the EU data protection regulation GDPR when transferring the personal data of European taxi drivers to the U.S.
“The Dutch Data Protection Authority (DPA) imposes a fine of 290 million euros on Uber. The Dutch DPA found that Uber transferred personal data of European taxi drivers to the United States (US) and failed to appropriately safeguard the data with regard to these transfers. According to the Dutch DPA, this constitutes a serious violation of the General Data Protection Regulation (GDPR). In the meantime, Uber has ended the violation.” reads the press release published by the Dutch Data Protection Authority.
Aleid Wolfsen, the chairman of the Dutch DPA, emphasized that the GDPR is designed to protect people’s fundamental rights by ensuring that businesses and governments handle personal data responsibly. Businesses must take extra precautions when storing Europeans’ personal data outside the EU. Wolfsen criticized Uber for failing to meet GDPR requirements in protecting data transferred to the U.S., calling the violation “very serious.”
The Dutch DPA launched an investigation into Uber after over 170 French drivers filed complaints with the Ligue des droits de l’Homme (LDH), which then reported the issue to the French DPA. The Dutch DPA investigated in close cooperation with the French DPA and coordinated the decision with other European DPAs.
The Dutch Data Protection Authority (DPA) determined that Uber collected sensitive information from European drivers and stored it on servers in the U.S. for over two years without using proper data transfer tools. The collected data included account details, location data, payment information, and even criminal and medical records. After the EU-US Privacy Shield was invalidated in 2020, the use of Standard Contractual Clauses was required to ensure equivalent data protection. However, Uber stopped using these clauses in August 2021, leaving the data insufficiently protected until it adopted the Privacy Shield’s successor at the end of last year.
“All DPAs in Europe calculate the amount of fines for businesses in the same manner. Those fines amount to a maximum of 4% of the worldwide annual turnover of a business. Uber had a worldwide turnover of around 34.5 billion euro in 2023. Uber has indicated its intent to object to the fine.” concludes the press release. “This is the third fine that the Dutch DPA imposes on Uber. The Dutch DPA imposed a fine of 600,000 euro on Uber in 2018, and a fine of 10 million euro in 2023. Uber has objected to this last fine.”
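As a back-of-the-envelope check (not part of the press release), the figures quoted above can be related in a few lines of Python. This is only an illustration of the GDPR 4%-of-turnover cap using the article's numbers:

```python
# Illustrative sketch relating the figures quoted by the Dutch DPA.
# GDPR caps fines at a maximum of 4% of worldwide annual turnover.
turnover_eur = 34.5e9   # Uber's 2023 worldwide turnover (~34.5 billion euro)
fine_eur = 290e6        # fine imposed by the Dutch DPA

max_fine_eur = 0.04 * turnover_eur
print(f"Maximum possible fine: {max_fine_eur / 1e9:.2f} billion euro")  # 1.38
print(f"Imposed fine as share of cap: {fine_eur / max_fine_eur:.1%}")   # 21.0%
```

In other words, the €290 million fine sits well below the theoretical maximum of roughly €1.38 billion that the 4% cap would allow.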
The company rejects the accusations and claims that its data transfer process is compliant with European laws. The company will appeal the decision, its spokesman Caspar Nixon told Bloomberg.
The fine is “completely unjustified,” said Caspar Nixon.
(SecurityAffairs – hacking, DPA)
The Justice Department and the Federal Trade Commission (FTC) filed a civil lawsuit in the U.S. District Court for the Central District of California against TikTok Inc., its parent company ByteDance Ltd., and their affiliates (together, TikTok) for extensive violations of the Children’s Online Privacy Protection Act and its implementing regulations (COPPA) in connection with the popular TikTok app.
COPPA forbids website operators from collecting, using, or disclosing personal information from children under 13 without parental consent, and mandates deletion of such data upon parental request. In 2019, the government sued TikTok’s predecessor, Musical.ly, for COPPA violations. Since then, TikTok and ByteDance have been under a court order to implement measures to comply with COPPA.
“According to the complaint, from 2019 to the present, TikTok knowingly permitted children to create regular TikTok accounts and to create, view, and share short-form videos and messages with adults and others on the regular TikTok platform. The defendants collected and retained a wide variety of personal information from these children without notifying or obtaining consent from their parents.” reads the press release published by the DoJ. “Even for accounts that were created in “Kids Mode” (a pared-back version of TikTok intended for children under 13), the defendants unlawfully collected and retained children’s email addresses and other types of personal information.”
DoJ also added that when parents requested the deletion of their children’s accounts and information, TikTok and ByteDance often failed to comply. The companies also had inadequate internal policies and processes for identifying and removing accounts created by children.
The social network giant exposed millions of children under 13 to extensive data collection, interactions with adult users, and adult content by violating COPPA. The complaint seeks civil penalties and injunctive relief.
“The Department is deeply concerned that TikTok has continued to collect and retain children’s personal information despite a court order barring such conduct,” said Acting Associate Attorney General Benjamin C. Mizer. “With this action, the Department seeks to ensure that TikTok honors its obligation to protect children’s privacy rights and parents’ efforts to protect their children.”
TikTok disagrees with the allegations, saying that many of them relate to past events and practices that have already been addressed. The company also says it is proud of its efforts to protect children.
In September 2023, the Irish Data Protection Commission (DPC) fined TikTok €345 million for violating children’s privacy. The Irish data regulators discovered that the popular video-sharing app allowed adults to send direct messages to certain teenagers who have no family connection with them.
The investigation conducted by the DPC revealed a severe flaw in TikTok’s “family pairing” feature that could be abused to link children’s accounts to “unverified” adults.
Children under 13 are exposed to serious risks due to the default account setting that allows anyone to view the content they publish.
“The decision further details that non-child users had the power to enable direct messages for child users above the age of 16, thereby making this feature less strict for the child user,” reads the DPC announcement. “This also meant that, for example, videos that were posted to child users’ accounts were public by default, comments were enabled publicly by default, the Duet and Stitch features were enabled by default.”
TikTok was also accused of lacking adequate transparency when dealing with the way it processes data of its young users.
The DPC also criticized the processes behind the TikTok registration and the publishing of videos, which according to the Irish authority were designed to drive the users toward selecting options that exposed their privacy to risks.
Follow me on Twitter: and and Mastodon
(SecurityAffairs – hacking, ByteDance)
Brazil’s data protection authority, Autoridade Nacional de Proteção de Dados (ANPD), has imposed a temporary ban on Meta from processing users’ personal data for training its artificial intelligence (AI) models.
“The National Data Protection Authority (ANPD) issued today a Preventive Measure determining the immediate suspension, in Brazil, of the validity of the new privacy policy of the company Meta , which authorized the use of personal data published on its platforms for the purpose of training artificial intelligence (AI) systems.” reads the announcement published by ANPD.
ANPD also announced a daily fine of R$50,000 for non-compliance.
The Board of Directors issued a Preventive Measure due to the “use of an inadequate legal basis for data processing, insufficient disclosure of clear and accessible information about privacy policy changes and data processing, excessive limitations on the exercise of data subjects’ rights, and processing of children’s and adolescents’ personal data without proper safeguards.”
Meta’s updated privacy policy allows the social media giant to use public posts for its AI systems.
Meta expressed disappointment with the decision, claiming its practices comply with Brazilian privacy laws.
“This is a step backwards for innovation, competition in AI development and further delays bringing the benefits of AI to people in Brazil,” Meta said.
Human Rights Watch recently published a report revealing that LAION-5B, a major image-text dataset used for training AI models, includes identifiable photos of Brazilian children. These models can be used by tools employed to create malicious deepfakes that put even more children at risk of exploitation.
In June, Meta announced it is delaying the training of its large language models (LLMs) using public content shared by adults on Facebook and Instagram following the Irish Data Protection Commission (DPC) request.
Meta added it is disappointed by the request from the Irish Data Protection Commission (DPC); the social network giant pointed out that this is a step “backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March.” reads the statement from Meta. “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
The company explained that its AI, including LLMs, is already available in other parts of the world. Meta explained that to provide a better service to its European communities, it needs to train the models on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe. For this reason, the company initially planned to train its large language models using the content that its users in the EU have publicly shared on its products and services.
Meta added that the delay will allow it to address requests from the U.K. Information Commissioner’s Office (ICO) before starting the training.
(SecurityAffairs – hacking, Meta)
The Treasury Department’s Office of Foreign Assets Control (OFAC) has sanctioned twelve Kaspersky Lab executives for their roles at the Russian company.
All the sanctioned individuals are in executive and senior leadership roles at AO Kaspersky Lab (Kaspersky Lab).
“Today’s action against the leadership of Kaspersky Lab underscores our commitment to ensure the integrity of our cyber domain and to protect our citizens against malicious cyber threats,” said the Treasury Department. “The United States will take action where necessary to hold accountable those who would seek to facilitate or otherwise enable these activities.”
On June 20, 2024, the Biden administration announced the ban on selling Kaspersky antivirus software due to the risks posed by Russia to U.S. national security. The U.S. government is implementing a new rule leveraging powers established during the Trump administration to ban the sale of Kaspersky software, citing national security risks posed by Russia.
The Commerce Department’s Bureau of Industry and Security banned the Russian cybersecurity firm because it is based in Russia.
Government experts believe that the influence of the Kremlin over the company poses a significant risk. Russia-linked actors can abuse the software’s privileged access to a computer’s systems to steal sensitive information from American computers or spread malware, Commerce Secretary Gina Raimondo said on a briefing call with reporters on Thursday.
“Russia has shown it has the capacity and… the intent to exploit Russian companies like Kaspersky to collect and weaponize the personal information of Americans and that is why we are compelled to take the action that we are taking today,” Raimondo said on the call.
TechCrunch reported that the ban will start on July 20; however, the company’s activities, including providing software updates to its US customers, will be prohibited from September 29.
“That means your software and services will degrade. That’s why I strongly recommend that you immediately find an alternative to Kaspersky,” Raimondo said.
Raimondo invited Kaspersky’s customers to replace their software; she also explained that U.S. clients who already use Kaspersky’s antivirus are not violating the law.
“U.S. individuals and businesses that continue to use or have existing Kaspersky products and services are not in violation of the law, you have done nothing wrong and you are not subject to any criminal or civil penalties,” Raimondo added. “However, I would encourage you in the strongest possible terms, to immediately stop using that software and switch to an alternative in order to protect yourself and your data and your family.”
The Department of Homeland Security and the Justice Department will notify U.S. consumers about the ban. They will also set up a website to provide impacted customers with more information about the ban and instructions on the replacement.
The US cybersecurity agency CISA will notify critical infrastructure operators using Kaspersky software to support them in replacing the security firm’s software.
The U.S. Department of Commerce has also added AO Kaspersky Lab, OOO Kaspersky Group (Russia), and Kaspersky Labs Limited (United Kingdom) to the Entity List. This designation is due to their alleged cooperation with Russian military and intelligence authorities in support of the Russian government’s cyber intelligence activities, which poses risks to U.S. national security and foreign policy interests.
The US government sanctioned the following Kaspersky Lab employees:
The individuals listed were designated under Executive Order 14024 for their involvement in the technology sector of the Russian Federation economy.
The company CEO and founder, Eugene Kaspersky, was not sanctioned.
As a result of the sanctions, the U.S. Department of the Treasury’s Office of Foreign Assets Control has frozen all property and interests in property of the designated individuals and entities under U.S. jurisdiction. These assets must be reported to OFAC. Any entities owned 50% or more by one or more blocked persons are also blocked. Transactions involving these blocked persons are generally prohibited unless authorized by OFAC. Additionally, foreign financial institutions facilitating significant transactions with Russia’s military-industrial base risk sanctions by OFAC. OFAC aims to encourage positive behavioral change rather than punishment. For guidance on sanctions and removal from OFAC lists, refer to the OFAC advisory and FAQs.
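The 50% ownership rule mentioned above is mechanical enough to sketch in code. The following is a simplified, hypothetical illustration (the function and the ownership figures are invented, not OFAC data): an entity is treated as blocked when blocked persons hold, individually or in aggregate, a 50% or greater stake.

```python
# Hypothetical illustration of OFAC's 50 Percent Rule described above.
def is_blocked(ownership, blocked_persons):
    """ownership: dict mapping owner -> fractional stake (summing to <= 1.0).
    An entity is blocked if blocked persons own 50% or more in aggregate."""
    blocked_share = sum(stake for owner, stake in ownership.items()
                        if owner in blocked_persons)
    return blocked_share >= 0.5

# Example: two designated executives jointly hold 55% of a hypothetical entity.
stakes = {"exec_a": 0.30, "exec_b": 0.25, "outside_investor": 0.45}
print(is_blocked(stakes, {"exec_a", "exec_b"}))  # True  (0.30 + 0.25 = 0.55)
print(is_blocked(stakes, {"exec_a"}))            # False (0.30 < 0.50)
```

Note that the rule aggregates across all blocked owners, which is why the first call returns `True` even though neither executive individually crosses the threshold.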
(SecurityAffairs – hacking, US government)
The Biden administration announced it will ban the sale of Kaspersky antivirus software due to the risks posed by Russia to U.S. national security. The U.S. government is implementing a new rule leveraging powers established during the Trump administration to ban the sale of Kaspersky software, citing national security risks posed by Russia.
The Commerce Department’s Bureau of Industry and Security banned the Russian cybersecurity firm because it is based in Russia.
Government experts believe that the influence of the Kremlin over the company poses a significant risk. Russia-linked actors can abuse the software’s privileged access to a computer’s systems to steal sensitive information from American computers or spread malware, Commerce Secretary Gina Raimondo said on a briefing call with reporters on Thursday.
“Russia has shown it has the capacity and… the intent to exploit Russian companies like Kaspersky to collect and weaponize the personal information of Americans and that is why we are compelled to take the action that we are taking today,” Raimondo said on the call.
This isn’t the first time that Western governments have banned Kaspersky, but the Russian firm has always denied any link with the Russian government.
Reuters reported that the U.S. government plans to add three units of the cybersecurity company to a trade restriction list. The move will significantly impact the company’s sales in the U.S. and potentially in other Western countries that may adopt similar restrictions against the security firm.
TechCrunch reported that the ban will start on July 20; however, the company’s activities, including providing software updates to its US customers, will be prohibited from September 29.
“That means your software and services will degrade. That’s why I strongly recommend that you immediately find an alternative to Kaspersky,” Raimondo said.
Raimondo invited Kaspersky’s customers to replace their software; she also explained that U.S. clients who already use Kaspersky’s antivirus are not violating the law.
“U.S. individuals and businesses that continue to use or have existing Kaspersky products and services are not in violation of the law, you have done nothing wrong and you are not subject to any criminal or civil penalties,” Raimondo added. “However, I would encourage you in the strongest possible terms, to immediately stop using that software and switch to an alternative in order to protect yourself and your data and your family.”
The Department of Homeland Security and the Justice Department will notify U.S. consumers about the ban. They will also set up a website to provide impacted customers with more information about the ban and instructions on the replacement.
The US cybersecurity agency CISA will notify critical infrastructure operators using Kaspersky software to support them in replacing the security firm’s software.
In March 2022, the US Federal Communications Commission (FCC) added multiple Kaspersky products and services to its Covered List, saying that they pose unacceptable risks to U.S. national security.
The Covered List, published by the Public Safety and Homeland Security Bureau, includes products and services that could pose an unacceptable risk to the national security of the United States or the security and safety of United States persons.
In March 2022, the German Federal Office for Information Security, aka BSI, also recommended that consumers uninstall Kaspersky anti-virus software. The agency warned that the cybersecurity firm could be implicated in hacking attacks during the ongoing Russian invasion of Ukraine.
According to §7 of the BSI law, the BSI warns against using Kaspersky antivirus and recommends replacing it as soon as possible with defense solutions from other vendors.
The alert pointed out that antivirus software operates with high privileges on machines and, if compromised, could allow an attacker to take them over. BSI remarks that trust in the reliability and self-protection of a manufacturer, as well as its authentic ability to act, is crucial for the safe use of any defense software. Doubts about the manufacturer’s reliability led the agency to consider the vendor’s antivirus protection a risk for the IT infrastructure that uses it.
BSI warned of potential offensive cyber operations that could be conducted with the support of a Russian IT manufacturer; it also explained that the vendor could be forced to conduct attacks or be exploited for espionage purposes without its knowledge.
The United States has banned government agencies from using Kaspersky defense solutions since 2017. The company rejected any allegation and clarified that Russian policies and laws apply to telecoms and ISPs, not to security firms like Kaspersky.
In June 2018, the European Parliament passed a resolution that classifies the security firm’s software as “malicious” due to the alleged link of the company with Russian intelligence.
Some European states, including the UK, the Netherlands, and Lithuania, have also excluded the Russian firm’s software from sensitive systems.
(SecurityAffairs – hacking, cyberespionage)
Meta announced it is delaying the training of its large language models (LLMs) using public content shared by adults on Facebook and Instagram following the Irish Data Protection Commission (DPC) request.
“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. This decision followed intensive engagement between the DPC and Meta.” reads the DPC’s statement. “The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”
Meta added it is disappointed by the request from the Irish Data Protection Commission (DPC); the social network giant pointed out that this is a step “backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram — particularly since we incorporated regulatory feedback and the European DPAs have been informed since March.” reads the statement from Meta. “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe.”
The company explained that its AI, including LLMs, is already available in other parts of the world. Meta explained that to provide a better service to its European communities, it needs to train the models on relevant information that reflects the diverse languages, geography and cultural references of the people in Europe. For this reason, the company initially planned to train its large language models using the content that its users in the EU have publicly shared on its products and services.
Meta intended to implement these changes on June 26, giving users the option to opt out of data usage by submitting a request.
“We remain highly confident that our approach complies with European laws and regulations. AI training is not unique to our services, and we’re more transparent than many of our industry counterparts.” continues the statement. “We are committed to bringing Meta AI, along with the models that power it, to more people around the world, including in Europe. But, put simply, without including local information we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.”
Meta added that the delay will allow it to address requests from the U.K. Information Commissioner’s Office (ICO) before starting the training.
(SecurityAffairs – hacking, Meta)
In January 2025, financial and insurance institutions in Europe, and any organizations that do business with them, must comply with the Digital Operational Resilience Act, also known as DORA. This regulation from the European Union (EU) is intended to both strengthen IT security and enhance the digital resilience of the European financial market. Much like GDPR, this act promises to exert significant influence on the activities of organizations around the world. Its official launch date of January 17, 2025, means there are some pretty stringent deadlines.
Can this be done? Will organizations be ready? These were questions posed in a recent podcast episode with guest Romain Deslorieux, Strategic Partners Director, Global System Integrators at Thales. He suggested that it might be a “tough call for any organization to follow and to reach as a compliance deadline.” But he also pointed out that the European Supervisory Authorities (ESAs) are busy defining the regulatory technical standards that will provide precise technical guidelines for organizations to follow. He added that most financial entities have already started to investigate DORA, including defining a roadmap, although it may be time for them to accelerate these activities.
Companies that operate in the worlds of finance and insurance are no strangers to broad regulations, both internal and international. Still, DORA is a reminder of just how agile they must remain, given the speed at which their landscape changes. The incredible rate at which AI technologies were developed, embraced by end users, and then deployed into workplaces everywhere shows just how difficult it can be for an organization to keep on an even keel. The challenge doubles when we factor in the relentless creativity and determination of a criminal element that is always keen to exploit new technologies before adequate safeguards are implemented.
Perhaps one of the most striking elements of DORA is its focus on third-party risk management, one of its key pillars. Fellow podcast guest Mark Hughes, Global Managing Partner, Cybersecurity Services, IBM Consulting, pointed out how recent supply chain incidents clearly showed that a single piece of a supply chain can have a disproportionate impact on all the other parts. This, he said, is why DORA places such focus on third-party risk management – not just conducting risk assessments but also monitoring third parties on an ongoing basis.
In a single word, the DORA initiative is about resilience. That’s what the “R” stands for, after all. It’s an updated effort to enhance a fortress while still allowing the free movement of the vital data that keeps economies going.
Sticking with the supply chain in the context of resilience, Romain suggested we take a lesson from cloud technology. Cloud systems and services, he said, represent an essential part of operational resilience; as a central point of an organization’s data, they must remain up and available. Yet, at the same time, they are also subject to challenges of territoriality in terms of where data can be stored, where the most influential cloud providers come from, and how sovereignty can be maintained.
The fact is, there’s not much time for companies to get their ducks in a row. Financial organizations based in Europe will be at the forefront of compliance preparation: they must fully assess their current digital systems and processes to find vulnerabilities and resilience gaps, strengthen cybersecurity measures such as encryption, firewalls, and regular security audits, and have incident response plans in place. The same requirements apply to operational risk management and business continuity planning, both of which help ensure they can maintain critical operations in the event of disruptions or cyberattacks.
Strategic activities to be built into this very short timeline include ongoing vigilance of DORA itself within an evolving regulatory landscape, increased or improved collaboration and information sharing, investment in technology and talent, and improved board oversight and governance.
Organizations based outside the areas where DORA directly applies (the EU plus EEA countries such as Iceland and Norway) should also ensure they understand DORA requirements and open communication channels with their European partners. In addition to staying informed, they may also consider adopting other internationally recognized cybersecurity and operational resilience standards and frameworks, such as ISO/IEC 27001 for information security management and ISO 22301 for business continuity management.
It is virtually guaranteed that similar sets of regulations will be imposed by other economic areas of the world, creating challenges for companies either in finance or working with them. This promises to create distinct economic blocs even as it opens new areas of commerce. However, these changes are best seen as opportunities to fine-tune an organization’s information security systems and to reaffirm relationships with vendors and experts to ensure continued security and compliance.
About the author: Steve Prentice
The U.K. National Cyber Security Centre (NCSC) is urging manufacturers of smart devices to comply with new legislation that bans default passwords.
The law, known as the Product Security and Telecommunications Infrastructure act (or PSTI act), will be effective on April 29, 2024.
“From 29 April 2024, manufacturers of consumer ‘smart’ devices must comply with new UK law.” reads the announcement published by NCSC. “The law, known as the Product Security and Telecommunications Infrastructure act (or PSTI act), will help consumers to choose smart devices that have been designed to provide ongoing protection against cyber attacks.”
The U.K. is the first country in the world to ban default credentials on IoT devices.
The law prohibits manufacturers from supplying devices with default passwords, which are easily accessible online and can be shared.
The law applies to a broad range of consumer connectable (“smart”) products sold in the UK. Threat actors could use such default passwords to access a local network or launch cyberattacks.
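The PSTI act does not prescribe how compliant credentials should be generated, but the practical alternative to a shared default is a unique, random password provisioned per unit. A minimal sketch (hypothetical, not from any vendor's actual provisioning code) might look like this:

```python
import secrets
import string

def generate_device_password(length: int = 12) -> str:
    """Generate a unique random password for one manufactured unit,
    instead of shipping every device with the same default credential."""
    alphabet = string.ascii_letters + string.digits
    # secrets (not random) provides cryptographically strong choices.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each unit gets its own credential, e.g. printed on the device label.
device_password = generate_device_password()
```

Because each call draws from a cryptographically secure source, two manufactured units will not share a credential, which removes the class of attack the law targets.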
Manufacturers are obliged to designate a contact point for reporting security issues and must specify the minimum duration for which the device will receive crucial security updates.
The NCSC clarified that the PSTI act also applies to organizations importing or retailing products for the UK market, including most smart devices manufactured outside the UK. Manufacturers that fail to comply with the act face fines of up to £10 million or 4% of qualifying worldwide revenue.
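For a sense of scale, and assuming the cap works like comparable regimes (GDPR-style "whichever is greater" — an interpretation, not quoted from the act), the maximum exposure can be sketched as:

```python
def max_psti_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Upper bound on a PSTI penalty: the greater of a flat 10 million GBP
    or 4% of qualifying worldwide revenue (assumed 'whichever is greater')."""
    return max(10_000_000.0, 0.04 * qualifying_worldwide_revenue_gbp)

# A firm with 500m GBP qualifying revenue: 4% is 20m, above the 10m floor.
print(max_psti_fine(500_000_000))   # 20000000.0
# A firm with 100m GBP qualifying revenue: 4% is only 4m, so the floor applies.
print(max_psti_fine(100_000_000))   # 10000000.0
```

The revenue figures are hypothetical; the point is simply that for large manufacturers the percentage term, not the flat cap, dominates.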