INRIA Recruitment

PhD Position F/M: Towards Discovering Information From Very Heterogeneous Data Sources in a Data Lake Environment - INRIA

  • Palaiseau - 91
  • Fixed-term contract (CDD)
  • INRIA
Published on 15 September 2025

Duties of the position

About Inria

Inria is the French national research institute for digital science and technology. It employs 2,600 people. Its 215 agile project teams, generally run jointly with academic partners, involve more than 3,900 scientists in meeting the challenges of digital technology, often at the interface with other disciplines. The institute draws on a wide range of talent across more than forty professions. 900 research and innovation support staff help scientific and entrepreneurial projects emerge, grow, and make an impact on the world. Inria works with many companies and has supported the creation of more than 200 start-ups. In this way, the institute strives to meet the challenges of the digital transformation of science, society, and the economy.

PhD Position F/M: Towards discovering information from very heterogeneous data sources in a data lake environment
The description of the offer below is in English.
Contract type: Fixed-term contract (CDD)

Required level of qualification: Master's degree (Bac+5) or equivalent

Position: PhD student

About the research centre or functional department

The Inria Saclay-Île-de-France Research Centre was established in 2008. It has developed as part of the Saclay site, in partnership with Paris-Saclay University and with the Institut Polytechnique de Paris.

The centre has 40 project teams, 27 of which operate jointly with Paris-Saclay University and the Institut Polytechnique de Paris. Its activities involve over 600 people, scientists as well as research and innovation support staff, of 44 different nationalities.

Context and advantages of the position

The PhD will be funded by the ANR TopOL project, starting in October 2025. The project will fund collaborations and visits between the participating labs.

Assignment

Context: heterogeneous data lakes

Exploiting datasets requires identifying what each dataset contains and what it is about. Users with Information Technology skills may do this using dataset schemata, documentation, or by querying the data. In contrast, non-technical users (NTUs, in short) are limited in their capacity to discover interesting datasets. This hinders their ability to develop useful or even critical applications. The problem is compounded by the large number of datasets which NTUs may be facing, in particular when value lies in exploiting many datasets together, as opposed to one or a few at a time, and when datasets are of different data models, e.g., tables or relational databases, CSV files, hierarchical formats such as XML and JSON, PDF or text documents, etc.

Following our team's experience in collaborating with French media journalists [1, 4] and an ongoing collaboration with the International Consortium of Investigative Journalists (ICIJ), we will primarily draw inspiration from journalist NTU applications. These include several high-profile journalistic investigations based on massive, heterogeneous digital data, e.g., the Paradise Papers or Forever Pollutants. The setup we consider is: how to help NTUs identify useful parts of very large sets of heterogeneous datasets, and assemble and discover the information these datasets contain. For example, faced with a corpus of thousands or tens of thousands of files (text, spreadsheets, etc.), a journalist may want to know: What subventions did the Region grant, and where geographically? or What shipping companies have shipped on routes towards Yemen, and who contracted with them? The tools we aim to develop also generalize beyond journalism, for instance, to enterprise data lakes containing documents and various internal datasets, scientific repositories with reports and experimental results, etc.

State of the art

Many techniques and systems target one dataset (or database), of one data model. NTUs are used to working with documents, such as texts, PDFs, or structured documents in Office formats, on which Information Retrieval (IR) enables efficient keyword searches. Large Language Models (LLMs, in short), and tools built on top of them, such as chatbots or Google's NotebookLM, add unprecedented capacities to summarize and answer questions over documents provided as input. However, because of possible hallucinations [9, 15], LLM answers still require manual verification before use in a setting with real-world consequences. In particular, a recent study has shown a high error rate on the task of identifying the source of a news article, across 8 major chatbots [8]. LLMs are also not reliable information sources (i) for real-world facts that happened after their latest training input, and (ii) for little-known entities not in the training set, e.g., a small French company active in a given region. Finally, LLMs hosted outside of the user's premises are not acceptable for users such as the ICIJ, for whom dataset confidentiality during an investigation is crucial; locally deployed models are preferable for confidentiality, and smaller (frugal) ones also reduce the computational footprint. While we consider that language models should not be taken as reliable sources of knowledge, they are crucial ingredients for matching (bridging) user questions with answer components from various datasets, thanks to the semantic embeddings we can compute for the questions and the data (a minimal illustration is sketched after this section).

Database systems allow users to inspect and use the data via queries. NTUs find these unwieldy, especially if multiple systems must be used for multiple data models. Natural language querying leverages trained language models to formulate structured database queries, typically SQL ones [10]. However, errors still persist in the translation, and SQL is not applicable beyond relational data. Keyword search in databases returns sub-tree or sub-graph answers from a large data graph, which may model a relational database, an XML document, an RDF graph, etc., e.g., [2]. However, these techniques have not been scaled up to large sets of datasets.

Challenges

Dataset summarization and schema inference have been used to extract, from a given dataset, e.g., an XML, JSON, or Property Graph (PG) one, a suitable schema [3, 6, 11, 14], i.e., a technical specification that experts or systems can use to learn how the data is structured; each technique is specific to one data model only. Dataset abstraction [5] identifies, in a (semi-)structured dataset, nested entities and binary relationships (only). Generalizing it to large numbers of datasets, and to text-rich documents, is also a challenge. More recent data lakes hold very large sets (e.g., tens or hundreds of thousands) of datasets, each of which may be large [7]. In a data lake, one may search for a dataset using keywords or a question, or for a dataset which can be joined with another [13]. However, modeling, understanding, and exploring large, highly heterogeneous collections of datasets (other than tables) are still limited in data lakes.
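To make the embedding-based matching idea concrete, here is a minimal, hypothetical sketch (not part of the offer): invented one-line dataset summaries are ranked against a journalist's question by cosine similarity of sentence embeddings, assuming the sentence-transformers library and a small, locally runnable model.

```python
# Hypothetical illustration: rank dataset summaries by semantic similarity
# to a natural-language question (all data below is invented).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that can run locally

dataset_summaries = [
    "CSV table of regional subventions: beneficiary, amount, municipality, year",
    "JSON dump of shipping manifests: vessel, operator, route, destination port",
    "Collection of PDF meeting minutes of a regional council",
]
question = "What subventions did the Region grant, and where geographically?"

# Embed the question and the summaries in the same latent space,
# then rank the datasets by cosine similarity to the question.
q_emb = model.encode(question, convert_to_tensor=True)
d_emb = model.encode(dataset_summaries, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)[0]

for summary, score in sorted(zip(dataset_summaries, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {summary}")
```

In such a setting, no factual knowledge is expected from the model itself: the embeddings only serve to bridge the question and the candidate datasets, which is the role foreseen for language models in this project.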

[1] A. Anadiotis, O. Balalau, T. Bouganim, F. Chimienti, S. Horel, I. Manolescu, et al. Empowering investigative journalism with graph-based heterogeneous data management. IEEE Data Eng. Bull., 44(3), 2021.

[2] A. Anadiotis, I. Manolescu, and M. Mohanty. Integrating Connection Search in Graph Queries. In ICDE, 2023.

[3] M. Baazizi, C. Berti, D. Colazzo, G. Ghelli, and C. Sartiani. Human-in-the-loop schema inference for massive JSON datasets. In Extending Database Technology (EDBT), 2020.

[4] O. Balalau, S. Ebel, T. Galizzi, I. Manolescu, A. Deiana, E. Gautreau, A. Krempf, et al. Fact-checking Multidimensional Statistic Claims in French. In Truth and Trust Online, Oct. 2022.

[5] N. Barret, I. Manolescu, and P. Upadhyay. Computing Generic Abstractions from Application Datasets. In EDBT, 2024.

[6] S. Cebiric, F. Goasdoué, H. Kondylakis, D. Kotzinos, I. Manolescu, et al. Summarizing Semantic Graphs: A Survey. The VLDB Journal, 28(3), 2019.

[7] M. P. Christensen, A. Leventidis, M. Lissandrini, L. D. Rocco, R. J. Miller, and K. Hose. Fantastic tables and where to find them: Table search in semantic data lakes. In EDBT, pages 397-410, 2025.

[8] K. Jaźwińska and A. Chandrasekar. AI search has a citation problem. Available at: https://rb.gy/r022za, March 2025.

[9] Z. Ji, N. Lee, R. Frieske, et al. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), Mar. 2023.

[10] G. Koutrika. Natural language data interfaces: A data access odyssey (invited talk). In ICDT, 2024.

[11] H. Lbath, A. Bonifati, and R. Harmer. Schema inference for property graphs. In EDBT, 2021.

[12] P. S. H. Lewis, E. Perez, A. Piktus, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. In NeurIPS, 2020.

[13] N. Paton, J. Chen, and Z. Wu. Dataset discovery and exploration: A survey. ACM Comput. Surv., 56(4), 2024.

[14] K. Rabbani, M. Lissandrini, and K. Hose. Extraction of validating shapes from very large knowledge graphs. PVLDB, 16(5), 2023.

[15] M. Zhang, O. Press, W. Merrill, et al. How language model hallucinations can snowball. ICML, 2024.

Main activities

The PhD will focus on natural language question answering over large corpora of highly heterogeneous data (relational databases, CSV/TSV files, JSON, RDF, XML, or Property Graphs, text or Office documents, etc.).
- Natural language question answering: We will leverage Retrieval-Augmented Generation (RAG) [12] to evaluate natural language queries using a vector store, where unstructured data is represented as vectors embedded in a latent space. The vector store is particularly valuable for handling queries that require flexible or approximate matching. The retrieval-augmented techniques aim to enhance the accuracy and completeness of query answers while covering insights from both structured and unstructured data sources in the data lake. To cope with large corpora, we will leverage dataset summaries to effectively search for and retrieve answer components; open LLMs will then be used to generate the final natural language response (a minimal sketch is given after this list).
- Optimizing connection queries over graphs: As a final step, we will specifically optimize connection queries pertinent to the needs of NTU journalists. These include typical path searches (e.g., connect this politician to carbon-emitting corporations), but should also allow the expression of more complex intentions, especially looking for similar situations (e.g., compare the carbon emissions of corporations with similar groups of shareholders; highlight corporations having unconventional policies; describe corporations showing a decreasing emission trend). Optimizing such intentions would leverage prior indexes and also build new ones at the schema level, to cater to the specific needs of the journalists.
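As a rough illustration of the RAG direction, the following hypothetical sketch indexes invented text chunks in a vector store (here FAISS), retrieves the top-k matches for a question, and hands them to a locally deployed open LLM; the generate() helper is a placeholder, not an actual API.

```python
# Minimal RAG sketch over a toy, invented data lake: FAISS vector store for
# retrieval, plus a placeholder for a locally hosted open LLM for generation.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Invented text chunks standing in for content extracted from heterogeneous datasets.
chunks = [
    "2023 grant: 50,000 EUR to AgroCoop, municipality of Palaiseau.",
    "Vessel Aurora, operated by BlueSea Ltd, route Marseille to Aden (Yemen).",
    "Regional council minutes, March 2023: vote on environmental subsidies.",
]

# Build the vector store: normalized embeddings + inner-product index
# (equivalent to cosine similarity on unit vectors).
emb = encoder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

def generate(prompt: str) -> str:
    """Placeholder for a call to a locally deployed open LLM."""
    return "[answer grounded in the retrieved chunks]"

def answer(question: str, k: int = 2) -> str:
    q = encoder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    context = "\n".join(chunks[i] for i in ids[0])
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("Which shipping companies shipped on routes towards Yemen?"))
```

For connection queries, a toy example of the kind of path search to be optimized (here with networkx on an invented graph; the project would rely on dedicated, schema-level indexes rather than in-memory search):

```python
# Toy connection query on an invented heterogeneous data graph.
import networkx as nx

g = nx.Graph()
g.add_edge("Politician A", "Board of X Corp", label="member_of")
g.add_edge("Board of X Corp", "X Corp", label="governs")
g.add_edge("X Corp", "High CO2 emissions reported in 2023", label="reported")

# "Connect this politician to carbon-emitting corporations."
print(nx.shortest_path(g, "Politician A", "High CO2 emissions reported in 2023"))
```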

Skills

A successful candidate should have demonstrated academic excellence in Computer Science, with a particular interest in Data Management, Algorithms, and/or Natural Language Processing. Good software development skills in large projects (C++ or Java) are also required. Excellent communication skills and prior experience in research are a plus.

Benefits

- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
- Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports events and activities
- Access to vocational training
- Social security coverage

Remuneration

Monthly gross salary: 2,200 euros

Apply on the recruiter's website.
