AI and cybersecurity: optimizing the rules of the game

“While AI opens up new opportunities, it also introduces unprecedented vulnerabilities into the systems we use every day (software, connected objects, transportation, etc.). Cybersecurity is therefore one of the major challenges of its regulation,” explains Thomas Le Goff, senior lecturer in the Economics and Social Sciences department at Télécom Paris. The researcher analyzes the European legal texts governing artificial intelligence and cybersecurity. “It is important to understand them well in order to help companies comply as simply as possible, through organizational or technical measures, and make the most of the technology.”
A bridge between business and regulators
Thomas Le Goff contributes to the Intelligent Cybersecurity for Mobility System chair at Télécom Paris (ICMS, in partnership with IRT SystemX, Renault, Solent, Thales, Valeo, ZF Group, and BCG), where he co-directs the “Protection of data from connected vehicles” research area. There, he and his team map the general and sector-specific cybersecurity regulations that apply to connected vehicles. “We want to give the chair's partners the keys to understanding this complex regulatory framework. This work may also lead to proposals, addressed to the French government or the European institutions, for simplifying the texts.”
At the same time, the researcher is working with his cryptographer colleagues to evaluate encryption techniques. The new regulations (such as the Data Act) require a large amount of data to be exchanged between the various players involved in connected vehicles (users, manufacturers, repairers, garages, etc.). However, when this data is sensitive (intellectual property, personal information, etc.), it must be protected without preventing manufacturers from complying with their regulatory obligations, under penalty of sanctions (up to 2% of turnover for non-compliance with the GDPR). “They are in limbo and don't know whether or not to share the data, or how to do so. We are here to analyze these tensions and resolve them with technical solutions. This work is all the more necessary given that this legal framework is very recent and particularly fluid,” emphasizes the researcher.
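To make the tension concrete, here is a minimal sketch, purely illustrative and in no way the chair's actual approach: it assumes Python's cryptography library, and the field names and the split between sensitive and shareable fields are invented for the example. Sensitive fields are encrypted so that only the key holder (for instance the manufacturer) can read them, while the rest of the record remains usable by repairers or garages.

# Illustrative sketch only: not the ICMS chair's method or any specific regulatory solution.
# Assumes the third-party "cryptography" library (pip install cryptography);
# the field names and the sensitive/shareable split are invented for this example.
from cryptography.fernet import Fernet

SENSITIVE_FIELDS = {"vin", "driver_id", "gps_trace"}  # hypothetical classification

def prepare_for_sharing(record: dict, key: bytes) -> dict:
    """Encrypt sensitive fields so the rest of the record can be shared in the clear."""
    f = Fernet(key)
    shared = {}
    for name, value in record.items():
        if name in SENSITIVE_FIELDS:
            # Only the key holder (e.g. the manufacturer) can decrypt these values.
            shared[name] = f.encrypt(str(value).encode()).decode()
        else:
            shared[name] = value  # left readable for repairers, garages, etc.
    return shared

if __name__ == "__main__":
    key = Fernet.generate_key()
    record = {"vin": "VF1ABC123456789", "battery_temp_c": 31.5, "gps_trace": "48.71,2.20;..."}
    print(prepare_for_sharing(record, key))

Symmetric encryption of selected fields is only one of many possible techniques; the point here is simply that sharing and protection need not be mutually exclusive.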
The challenge for Thomas Le Goff's team is to anticipate this regulatory dynamic and the issues that manufacturers will ultimately face. “In this case, academic research builds a bridge between the company and the regulator. It provides fertile ground for neutral exchanges and joint progress toward the right solutions.”
AI, competitiveness, and sovereignty
The principle remains the same in the field of AI regulation. The texts governing artificial intelligence are very complex and also have a significant impact in terms of sovereignty, economic competitiveness, and technological independence. “European regulation can slow down innovation and penalize the Old Continent relative to the United States and China,” says Thomas Le Goff. The European Union is therefore considering simplifying its AI regulations via the Digital Omnibus.
Paradoxically, this text, which is intended to simplify the rules, is proving very complex itself. It calls for standardization processes that are costly for economic players, especially as they are already investing in compliance with the current standards (which may well be relaxed). “This creates great uncertainty for companies, and our role is to anticipate their operational needs but also to develop the least costly technical compliance solutions possible.” This research is essential because the current trend is towards building a cybersecurity culture in AI that cuts across all sectors.
A specific framework for defense
However, defense is exempt from some of these legal provisions. The sector is subject to validation processes, regulations, and international treaties governing the use of various weapons technologies. “The big challenge for defense is therefore to develop its own standards and ethics regarding the use of AI. And given the current geopolitical context, this is a matter of urgency. We can contribute to this by drawing on best practices from projects we have carried out for sectors that are already highly regulated, such as energy and finance, and for others that are in the process of being regulated at the European level.”
Furthermore, by deploying artificial intelligence in its embedded systems, autonomous vehicles, and other equipment, the defense sector, like others, will have to weigh the environmental costs and benefits of its activities. “Once again, our approach in this area is multidisciplinary. On the one hand, we will support AI researchers in setting up projects to develop more efficient algorithms (known as ‘frugal AI’). On the other hand, we will develop standardization methods to quantify the environmental footprint of AI, while considering how to integrate environmental issues into the regulation of this field.”

About
Thomas Le Goff is a lecturer in digital law and regulation at the I³ laboratory* at Télécom Paris, where he teaches and conducts research on regulations relating to digital technologies, data, cybersecurity, and AI. His research focuses more specifically on the links between AI and sustainability and on how regulation can contribute to the development of environmentally friendly technologies. Before entering academia, he worked as a corporate lawyer at Électricité de France (EDF), where he was responsible for expertise on data protection and digital regulation.
*I³: a joint research unit of CNRS, Mines Paris - PSL, Télécom Paris, École Polytechnique and Institut Polytechnique de Paris, 91120 Palaiseau, France