Institut Polytechnique de Paris
Ecole Polytechnique ENSTA Ecole des Ponts ENSAE Télécom Paris Télécom SudParis
Share

AI, a (reliable) decision-making tool

03 Oct. 2025
At the Computer Science Laboratory of the École Polytechnique (LIX*) and as part of the ORAILIX** team, Sonia Vanier combines operational research and artificial intelligence. In her roles as chair holder, she develops reliable, efficient, and secure algorithms that provide decision support to companies and industries facing complex problems.
AI, a (reliable) decision-making tool
See you on October 21, 2025, for the Cybersecurity and Defense Meetings at the Institut Polytechnique de Paris.

Today, artificial intelligence is indispensable, particularly in the fields of cybersecurity and defense. It is used to generate massive attacks, detect system vulnerabilities, anticipate and guard against unknown attacks, and respond to high-stakes industrial problems. The results provided by the models must therefore be explainable, secure, and reliable. 

Sonia Vanier devotes much of her work to issues of trust and responsibility in artificial intelligence. One of the two chairs she holds is dedicated to this issue (trustworthy and responsible AI). "In this context, we are developing hybrid approaches that combine the efficiency and reliability of operational research with AI models, particularly reinforcement learning and generative models. Our work aims to provide companies with decision-making tools that enable them to solve complex problems," explains the LIX professor. In concrete terms, the scientist models these problems using mathematics and uses the properties associated with them to develop effective algorithms capable of providing solutions in dynamic contexts where there are many sources of uncertainty.

Combined with operational research approaches, artificial intelligence makes it possible to model complex industrial constraints and deal with situations requiring large amounts of data, involving dynamic processes and multiple sources of uncertainty. Researchers use Large Language Models (LLMs). “These AI tools are designed to generate content, but with what ethics and reliability?” asks the researcher. 

The decisions made by language models are generally difficult to explain and justify, which hinders their adoption in many fields. Sonia Vanier therefore analyzes LLMs using algorithms that she designs and trains on existing large language models. “Our method explores their representation space in order to extract interpretable concepts and better understand their decisions.”

Data security and confidentiality in LLMs 

LLMs can also be targeted by specialized attacks aimed at extracting their training data. For example, a bank that uses an LLM to target its customers with commercial products, prevent overdrafts, etc. could see its customer data exposed. “In this context, we have developed a proactive method that is computationally light and easy to implement, aimed at anticipating these risks and predicting the degree of vulnerability of each piece of training data in the model.” Thanks to this method, it is possible to choose the most appropriate method to protect the confidentiality of the LLM while preserving its performance. 

Other attacks, known as prompt injection attacks, corrupt the context data of models to disrupt their decisions or even divert them from their initial task. Sonia Vanier's team has demonstrated the feasibility of these attacks and the need to deepen our understanding of the mechanisms that govern them in order to strengthen the defense of LLMs.

Multiple applications 

The researcher is also interested in the constraints and uncertainties faced by transport networks. While her work is primarily of interest to the SNCF, it is also perfectly applicable to the defense sector. As part of the Optimization and AI for Mobility Chair, Sonia Vanier is developing artificial intelligence architectures to support the railway company in its decision-making on issues such as traffic regulation, predictive maintenance, and quality of service on a national scale. "We are developing algorithms capable of taking into account numerous movements on a very large network as well as a large number of uncertainties (breakdowns, safety, passenger flows, staff availability, etc.) in order to calculate the best possible solutions and minimize costs. These operational issues are comparable to those raised when deploying military troops in dangerous areas: detecting rare events, dealing with major uncertainties, limiting risks, etc. Our tools are particularly well suited to this, especially our work on multi-agent systems."

To support the emergence of these topics, the Institut Polytechnique de Paris offers a master's degree in cybersecurity and trains the specialists of tomorrow. “Several industrial partners welcome our students for internships and are very involved in our courses. We are also developing academic partnerships with institutions such as Bocconi University in Milan, Berkeley, and Columbia in the United States.” Finally, a master's degree dedicated to Trustworthy and Responsible AI has also been launched, training students in the technological advances and industrial applications of AI, as well as its practical and social limitations, the implications of these limitations, and ways to address them.

 

About

Sonia Vanier is a professor in the Department of Computer Science at Ecole Polytechnique (DIX), director of the ORAILIX research team, head of the advanced computer science program at École Polytechnique, co-director of the MScT TRAI master's program, and head of the “Trustworthy and Responsible AI” chair. X Crédit Agricole, head of the “AI and Optimization for Mobility” chair X-SNCF, and scientific director of industrial relations for the Department and Computer Science Laboratory at École Polytechnique (DIX and LIX).

Her research topics focus mainly on the development of decision support tools for complex industrial problems, artificial intelligence (AI), operational research (OR), network optimization, and hybrid approaches between AI and OR for future ethical, sustainable, and trustworthy AI systems.

>> Sonia Vanier on Google Scholar 

 

*LIX: a joint research unit CNRS, École Polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France