Recrutement CEA

Backdoor Attack Scalability And Defense Evaluation In Large Language Models H/F - CEA

  • Gif-sur-Yvette - 91
  • Stage
  • CEA
Publié le 30 octobre 2025
Postuler sur le site du recruteur

Les missions du poste

Le CEA est un acteur majeur de la recherche, au service des citoyens, de l'économie et de l'Etat.

Il apporte des solutions concrètes à leurs besoins dans quatre domaines principaux : transition énergétique, transition numérique, technologies pour la médecine du futur, défense et sécurité sur un socle de recherche fondamentale. Le CEA s'engage depuis plus de 75 ans au service de la souveraineté scientifique, technologique et industrielle de la France et de l'Europe pour un présent et un avenir mieux maîtrisés et plus sûrs.

Implanté au coeur des territoires équipés de très grandes infrastructures de recherche, le CEA dispose d'un large éventail de partenaires académiques et industriels en France, en Europe et à l'international.

Les 20 000 collaboratrices et collaborateurs du CEA partagent trois valeurs fondamentales :

- La conscience des responsabilités
- La coopération
- La curiositéContext: Large Language Models (LLMs) deployed in safety-critical domains face significant threats from backdoor attacks. Recent empirical evidence contradicts previous assumptions about attack scalability: poisoning attacks remain effective regardless of model or dataset size, requiring as few as 250 poisoned documents to compromise models from up to 13B parameters. This suggests data poisoning becomes easier, not harder, as systems scale.
Backdoors persist through post-training alignment techniques like Supervised Fine-Tuning and Reinforcement Learning from Human Feedback, compromising current defenses. However, persistence depends critically on poisoning timing and backdoor characteristics. Current verification methods are computationally prohibitive-Proof-of-Learning requires full model retraining and complete training transcript access. While step-wise verification shows promise for runtime detection, scalability to production models and resilience against adaptive adversaries remain unresolved.
Existing defenses focus on post-training detection rather than preventing attack success during training. Advancing data poisoning scaling dynamics-understanding how attack success correlates with dataset composition, poisoning density, and model capacity-is essential for developing evidence-based threat models and defense strategies.
Objective: This internship aims to empirically test and advance data poisoning attacks and defenses for LLMs through systematic experimentation and adversarial evaluation. Key responsibilities include: implementing state-of-the-art attack methods across multiple vectors (jailbreaking, targeted refusal, denial-of-service, information extraction); testing attacks on diverse model architectures and scales; establishing standardized evaluation protocols with metrics such as Attack Success Rate and Clean Accuracy; evaluating existing defenses, particularly step-wise verification; and developing reproducible test suites for objective defense benchmarking.

Le profil recherché

Requirements:
Background in computer science or a related field, with a focus on machine learning security, or adversarial machine learning.
Strong programming skills in languages commonly used for machine learning tasks (e.g., Python, C++).
Experience with machine learning systems, model training, or adversarial robustness is a plus.
Ability to work independently and collaborate in a research-driven environment.
Comfortable working in English, essential for documentation purposes.

Postuler sur le site du recruteur

Ces offres pourraient aussi vous correspondre.