Thesis topics

PhD position: Security and Safety of Complex Systems with AltaRica (SSA)

Failures of safety-critical embedded systems used in industries such as aeronautics, automotive, railway or nuclear can lead to catastrophic consequences. These increasingly connected complex systems, also known as Cyber-Physical Systems (CPS), must also face cyber-attacks, which most of the time cause serious dysfunctions and undermine the security of such systems. For instance, in 2015, an attack via the Sprint cellular network targeted a Jeep Cherokee. The vehicle was fitted with a multimedia device named Uconnect, connected to a CAN bus through a Renesas V850 processor; at the software level, it also kept the cellular TCP port 6667 open to support a D-Bus service for interprocess communication (IPC) and remote procedure calls (RPC). The security breach consisted of injecting modified firmware into the V850 co-processor by exploiting a buffer overflow. It then became possible, and was actually demonstrated in the field, to remotely inject CAN packets over the air through this open TCP port, and thus take control of the car's various functions, from the radio volume to the brakes: clearly a serious design flaw.

The relationships between security and safety are thus at the heart of the current concerns of specialists in the field of complex embedded systems. Indeed, one can no longer consider designing safe systems without ensuring that they are also secure. For instance, a vulnerability may compromise the functional safety of an autonomous car, while, conversely, a safety measure such as the introduction of redundant components or diagnostic ports can increase the attack surface of the system. The increasing complexity of the software and hardware components used in complex embedded systems has thus motivated the adoption of new approaches to anticipate security and safety problems. In particular, system designers are advised to model and validate the system against potential threats early in the design phase, in order to reduce the cost and correction time of late-detected errors.


Keywords: security, safety, formal language, embedded system, complex system, Cyber-Physical System.


Nga Nguyen: nn@eisti.eu, nga.nguyen@cyu.fr, 01 34 25 10 21

PhD position in Explainable Recommender Systems


A recommender helps the user explore the set of items in a system and find the items most relevant to them. The two basic recommender categories are content-based and score-based. The former exploits the characteristics of users and items, while the latter relies on the scores that users give to items. Traditional implementations of recommenders are based on TF-IDF and nearest-neighbour techniques, while more recent recommenders follow machine-learning approaches such as matrix factorization and neural networks. A natural issue that comes along with recommendations is whether a user, or even the system designer, understands the results of the recommender. This problem has given rise to so-called explainable recommenders.
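To make the traditional approach concrete, the following is a minimal sketch of a content-based recommender combining TF-IDF weighting with nearest-neighbour (cosine) retrieval; the item catalogue and item identifiers are purely hypothetical toy data, not part of the proposal:

```python
import math
from collections import Counter

# Hypothetical toy catalogue: item id -> textual description.
ITEMS = {
    "m1": "space opera adventure galaxy",
    "m2": "romantic comedy paris",
    "m3": "space station thriller galaxy",
    "m4": "courtroom drama justice",
}

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (dict token -> weight) per document."""
    n = len(docs)
    tokenized = {k: v.split() for k, v in docs.items()}
    df = Counter()                      # document frequency per token
    for toks in tokenized.values():
        df.update(set(toks))
    vecs = {}
    for k, toks in tokenized.items():
        tf = Counter(toks)
        vecs[k] = {t: (c / len(toks)) * math.log(n / df[t])
                   for t, c in tf.items()}
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(liked_item, k=2):
    """Return the k items nearest to a liked item in TF-IDF space."""
    vecs = tfidf_vectors(ITEMS)
    target = vecs[liked_item]
    ranked = sorted(((cosine(target, v), other)
                     for other, v in vecs.items() if other != liked_item),
                    reverse=True)
    return [item for _, item in ranked[:k]]
```

Here `recommend("m1")` ranks "m3" first, since the two descriptions share the discriminative tokens "space" and "galaxy"; a score-based recommender would instead build its vectors from user ratings.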


Explainable recommendation helps to improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommendation systems. It also helps system designers debug their systems. So far, research on explainable recommendation has focused on the Why question: “Why is an item recommended?”. Solutions either treat the recommendation system as a black box, and thus try to reveal relationships among users and items or the importance of different features with respect to the predicted value, or they delve into the intrinsic characteristics of the recommendation system in order to truly explain it. What has not yet been studied, though, is the Why-Not aspect of a recommendation: “Why is a specific item not recommended?”. We argue that explaining why certain items or categories of items are not recommended can be as valuable as explaining why items are recommended. Why-Not questions have recently gained the attention of the research community in multiple settings, e.g., for relational databases. In machine learning, Why-Not questions have been shown to improve the intelligibility of predictions but remain largely unexplored.
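As a simple illustration of the kind of question involved (treating the recommender as a black box that outputs scores), a Why-Not answer for a top-k recommender can be sketched as follows; the function name and the score dictionary are hypothetical, chosen only for this example:

```python
def why_not(item, scores, k):
    """Answer 'Why is `item` not in the top-k?' for a score-based recommender.

    `scores` maps each candidate item to the score predicted for the user.
    """
    if item not in scores:
        return f"{item} is unknown to the model"
    ranked = sorted(scores, key=scores.get, reverse=True)
    rank = ranked.index(item) + 1
    if rank <= k:
        return f"{item} is already in the top-{k} (rank {rank})"
    cutoff = scores[ranked[k - 1]]          # score of the last recommended item
    gap = cutoff - scores[item]
    return (f"{item} is ranked {rank} of {len(ranked)}: its score "
            f"{scores[item]:.2f} is {gap:.2f} below the top-{k} cutoff of {cutoff:.2f}")
```

For instance, with predicted scores `{"a": 0.9, "b": 0.7, "c": 0.4, "d": 0.2}` and k = 2, asking why "c" is missing yields its rank and its score gap to the top-2 cutoff. A genuinely model-intrinsic Why-Not explanation, which is the subject of the thesis, would go further and trace the gap back to the model's internals.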

In this thesis proposal, we aim to explore Why-Not explanations for machine-learning-based recommenders. In a second phase, we aim to extend the recommenders so that they can leverage Why-Not explanations for auto-tuning.

More information at: https://perso-etis.ensea.fr/tzompanaki/phd_proposal.html

Candidate's profile

The candidate should hold an MSc degree in a field related to Computer Science, Machine Learning, or Applied Mathematics/Statistics. He/she should have solid knowledge of data management, algorithms and programming. Knowledge of and previous experience in machine learning, recommender systems, or explainability are a plus. He/she should master the English language (oral and written); knowledge of French is not required. He/she must have strong analytical skills, be proactive and self-driven, and be capable of collaborating with a group of international researchers.



CY Cergy Paris Université
Site Saint Martin, 2 av. Adolphe Chauvin, Pontoise 95000 France