Data Engineer
The project:
International project in the financial sector
Data platform in Azure, with ADF and Databricks as core
Strong focus on data engineering, governance and quality
Critical platform in production, with high reliability and stability requirements
Responsibilities
Develop and maintain data pipelines in Azure Databricks (PySpark / Spark SQL).
Implement and optimize Delta Lake tables (ETL/ELT, medallion architecture).
Collaborate on the design of analytical datasets.
Integrate Databricks workloads with Azure Data Factory (orchestration).
Apply best practices for data performance, quality, and reliability.
Contribute to the evolution of the platform for analytics and AI scenarios.
Requirements
At least 4 years of experience as a Data Engineer.
Hands-on experience with Azure Databricks and Apache Spark.
Good knowledge of Python (PySpark) and SQL.
Experience with Delta Lake.
Experience with Azure services:
Azure Data Lake Storage (ADLS Gen2)
Azure Data Factory.
Fluent English (spoken and written).
Preferred:
Experience with CI/CD for data platforms (Azure DevOps, Git).
Familiarity with data governance concepts (e.g., Unity Catalog, Purview).
Sensitivity to security, quality, and regulated environments.
Experience in international projects or distributed teams.