Data Platform Engineer M/F - Step Up
- Bordeaux (33)
- Permanent contract (CDI)
- Step Up
Role and responsibilities
STEP UP is an engineering company specializing in the management of industrial and IT projects (250+ employees across 11 offices in France), placing human potential as the primary driver of excellence and performance in business.
Forget engineering firms that value only your technical skills: at STEP UP, we also look for the fit between your personality and the company culture of our clients. For you, this translates into a fundamental difference in well-being, fulfillment at work, and success in your assignments.
What we offer:
- A fulfilling, stimulating, and collaborative work environment: we are certified as a company where life is good!
- Innovative and varied projects.
- Continuous upskilling through internal training.
- Career development prospects.
- Individualized support through a human-potential development program.
- An employee referral program.
And of course, we cover 70% of your health insurance and provide financial incentives for green commuting.

We are looking for a highly skilled, hands-on Data Platform Engineer to join our Data Fabric team. In this role, you will design, build, and maintain secure, scalable, and reliable data pipelines that support enterprise analytics and data-driven decision-making.
Collaboration with analysts, application teams, and security functions is essential to deliver robust data solutions aligned with corporate standards and security requirements.
You will apply DataOps best practices, including CI/CD, version control, automated testing, and documentation, while supporting production environments through monitoring, troubleshooting, and continuous improvement.
Key responsibilities:
--> Data pipeline design & engineering:
- Design, build, and maintain secure, scalable, and reliable data pipelines supporting enterprise analytics and data-driven decision-making.
- Collaborate with analysts, application teams, and cloud, infrastructure, and security teams to translate requirements into robust technical solutions.
- Develop and manage data ingestion, transformation, and integration processes using the DDS-selected technology stack (MS Fabric, Azure, Terraform, some open source).
- Implement and maintain data models optimized for analytics, reporting, and downstream consumption.
- Apply engineering best practices including CI/CD, version control, automated testing, and documentation.
- Implement and maintain CI/CD for data workloads using GitHub integration and/or Azure DevOps pipelines.
- Use Infrastructure as Code (Terraform) for repeatable environment provisioning and controlled promotion across environments.
--> Platform reliability, quality & operations :
- Ensure data quality, consistency, availability, and performance across the data platform.
- Monitor, troubleshoot, and resolve issues in production data pipelines and platform components.
- Support operational activities such as incident management, performance tuning, and capacity planning.
- Contribute to continuous improvement of engineering standards and platform evolution.
--> Enable and support internal data consumers :
- Provide onboarding, documentation, and best practices.
- Support analysts and application teams in using platform components.
- Contribute to governance (security, access models, data quality rules).
Candidate profile
Required qualifications and experience:
- Bachelor's degree in IT, software engineering, computer science, business informatics, or equivalent relevant work experience.
- 5+ years of practical experience in a data engineering role focused on production environments, including ETL/ELT, data warehousing, and data modeling.
- Strong experience with data engineering technologies, such as relational and non-relational databases, Microsoft Fabric, Microsoft Azure data services, open-source data technologies, Power BI.
- Strong proficiency in SQL; Python is strongly preferred, including hands-on experience with PySpark.
- Experience working with cloud-based data platforms and distributed data processing.
- Understanding of data security, access control, and governance principles in a corporate IT environment.
- Familiarity with DevOps and DataOps practices, including CI/CD pipelines, GitHub, Azure DevOps, and Terraform-based IaC.
- Communication and collaboration: Ability to clearly document platform functionality and train/support engineering teams on tools, processes, practices.
- Fluency in English: B2-C1 level in written and oral communication.
Preferred skills:
- Prior experience building a data platform or shared platform services for internal consumers.
- Ability to analyze complex data-related problems and deliver robust, maintainable solutions.
- Relevant certifications, e.g. Azure DevOps Engineer Expert, Azure Data Engineer Associate, MS Fabric Data Engineer Associate.
- Autonomy and sense of ownership.
- Proactive problem-solving.
- Ability to work effectively with both technical and non-technical stakeholders.