CEA Recruitment

Dynamic Distribution Shifts OoD Detection With Dynamic Thresholds (M/F) - CEA

  • Palaiseau - 91
  • Internship
  • CEA
Published on 13 November 2025

The position's missions

The CEA is a major research player, serving citizens, the economy, and the State.

It provides concrete solutions to their needs in four main areas: the energy transition, the digital transition, technologies for the medicine of the future, and defence and security, built on a foundation of fundamental research. For more than 75 years, the CEA has been committed to the scientific, technological, and industrial sovereignty of France and Europe, for a present and a future that are better controlled and more secure.

Located at the heart of regions equipped with very large research infrastructures, the CEA has a wide range of academic and industrial partners in France, Europe, and internationally.

The CEA's 20,000 employees share three core values:

- A sense of responsibility
- Cooperation
- Curiosity

The detection of out-of-distribution (OoD) samples is crucial for deploying deep learning (DL) models in real-world scenarios. OoD samples pose a challenge to DL models: they are not represented in the training data, yet they can naturally arrive during deployment (i.e., a distribution shift), increasing the risk of wrong predictions. Consequently, OoD sample detection is essential in safety-critical tasks, such as healthcare or automated vehicles, where trustworthy models are required.
The existing literature on the OoD detection problem focuses on developing confidence scores to which a threshold is applied, yielding a binary classifier that tells whether a sample is in-distribution (InD) or OoD. In particular, the confidence score threshold is typically set from the values observed on InD samples, such that 95% of the InD confidence scores fall above the selected threshold, i.e., a 95% True Positive Rate. However, a fixed threshold can lead to high False Positive Rate (FPR) values. In addition, even if the InD remains the same after deployment, the OoD distribution may vary, resulting in FPR fluctuations. Both situations matter in safety-critical applications, as misclassifying the confidence score of an OoD sample as InD (a False Positive) can have more catastrophic consequences than misclassifying the confidence score of an InD sample as OoD (a False Negative).
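As a minimal numeric sketch of the fixed-threshold setup described above: the score distributions, sample sizes, and variable names below are illustrative assumptions, not material from the internship; only the 95% TPR quantile rule and the FPR computation follow the text.

    import numpy as np

    # Hypothetical confidence scores: higher means "more in-distribution".
    rng = np.random.default_rng(0)
    ind_scores = rng.normal(loc=2.0, scale=1.0, size=5000)   # stand-in for InD scores
    ood_scores = rng.normal(loc=0.5, scale=1.0, size=5000)   # stand-in for OoD scores

    # Fixed threshold set so that 95% of InD scores fall above it (95% TPR),
    # i.e. the 5th percentile of the InD score distribution.
    tau = np.quantile(ind_scores, 0.05)

    # False Positive Rate: fraction of OoD samples whose score exceeds the
    # threshold and would therefore be (wrongly) accepted as InD.
    fpr = np.mean(ood_scores > tau)
    print(f"threshold tau = {tau:.3f}, FPR = {fpr:.3f}")

    # If the OoD distribution drifts after deployment, the same fixed tau can
    # yield a very different FPR, which motivates adaptive thresholds.
    shifted_ood = rng.normal(loc=1.5, scale=1.0, size=5000)
    print(f"FPR after OoD shift = {np.mean(shifted_ood > tau):.3f}")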
To address the limitations and impact of a single fixed threshold, recent works propose adaptive thresholds or a set of candidate thresholds to tackle dynamic distribution shifts. Specifically, this internship builds on the work of Timans et al., who proposed a framework that leverages game theory and sequential hypothesis testing to assess the validity of a set of candidate thresholds (a generic sketch of this testing-by-betting idea is given after the list below). The internship therefore aims to extend this work by exploring one or more of the following directions of improvement:

- Dynamic threshold selection (vs. fixed thresholds)
- Adaptive betting strategies (vs. a static betting strategy)
- Adaptive windowing/batching (vs. fixed window/batch sizes)
- Game-theoretic methods, e.g., market-making algorithms (for threshold selection and for finding the optimal window/batch size)
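To make the testing-by-betting idea concrete, here is a minimal, generic sketch of a sequential test over a grid of candidate thresholds. It is not the framework of Timans et al.; the candidate grid, target FPR level alpha, fixed bet size lam, rejection rule, and the simulated score stream are all hypothetical stand-ins chosen for illustration.

    import numpy as np

    # Hypothetical setup: a grid of candidate thresholds and a target FPR level.
    candidate_taus = np.linspace(-1.0, 3.0, 9)
    alpha = 0.05          # tolerated rate of OoD samples accepted as InD
    delta = 0.01          # reject a candidate threshold once its wealth reaches 1/delta
    lam = 0.5             # fixed (static) bet size; an adaptive strategy would tune this online

    wealth = np.ones_like(candidate_taus)                  # one wealth process per candidate
    rejected = np.zeros_like(candidate_taus, dtype=bool)
    rng = np.random.default_rng(1)

    def observe_batch(t):
        """Simulate a deployment batch of OoD confidence scores with a drifting mean."""
        return rng.normal(loc=0.5 + 0.02 * t, scale=1.0, size=64)

    for t in range(200):
        scores = observe_batch(t)
        for i, tau in enumerate(candidate_taus):
            if rejected[i]:
                continue
            # Observed fraction of OoD samples that this threshold would accept as InD.
            fp_rate = np.mean(scores > tau)
            # Bet against the null "true FPR <= alpha": wealth grows when the observed
            # rate exceeds alpha and shrinks otherwise (the multiplier stays nonnegative).
            wealth[i] *= 1.0 + lam * (fp_rate - alpha) / alpha
            if wealth[i] >= 1.0 / delta:
                rejected[i] = True   # evidence that this threshold violates the FPR target

    valid = candidate_taus[~rejected]
    print("thresholds still considered valid:", np.round(valid, 2))

In this toy run the simulated OoD stream drifts over time, so thresholds that were acceptable early on accumulate evidence against them and are progressively rejected, which is the kind of dynamic behaviour the internship directions above aim to handle.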

The profile sought

- Master's students (M1/M2 - France)
- Proficiency in Python, NumPy, SciPy, scikit-learn, PyTorch, ...
- Solid background in mathematics, probability & statistics
