📍 100% remote | 🕒 Full-time | 🌍 International environment
We are looking for an experienced Data Engineer with strong expertise in the Azure ecosystem to join a dynamic data team delivering scalable and high-performance data solutions. You’ll play a key role in designing, building, and optimizing modern data pipelines and data lake architectures using cutting-edge cloud technologies.
Your responsibilities:
- Design and develop robust and efficient data pipelines using Azure Databricks, Spark, and PySpark (see the sketch after this list)
- Work with Delta Lake architecture to manage structured and semi-structured data
- Perform data modeling, transformation, and performance tuning for large datasets
- Build and manage Azure Data Factory pipelines and Azure Functions for orchestrating workflows
- Integrate data in a variety of serialization formats, such as Parquet, Avro, and JSON
- Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions
- Use Git for version control and manage code in a collaborative environment
- Write efficient Python and SQL code for data processing and querying
- Ensure data quality, consistency, and reliability across the platform
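To give candidates a concrete flavor of the day-to-day work, here is a minimal PySpark sketch of the kind of pipeline described above: reading raw JSON from the lake, normalizing it, and appending to a partitioned Delta table. The paths, column names, and table name are illustrative assumptions, not taken from our actual platform.

```python
# Illustrative sketch only -- paths, schema, and table names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-ingest-example")
    .getOrCreate()
)

# Read semi-structured JSON landed in the data lake (hypothetical path).
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Typical transformation step: normalize types, derive a partition column,
# and drop obviously bad records.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

# Append to a Delta table, partitioned for downstream query performance.
(
    orders.write
    .format("delta")
    .mode("append")
    .partitionBy("order_date")
    .saveAsTable("bronze.orders")
)
```

(On Databricks, Delta is the default table format, so the explicit `format("delta")` mainly serves as documentation.)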
What we're looking for:
- Solid hands-on experience with Azure Databricks, Spark, and PySpark
- Deep knowledge of Delta Lake and modern data lakehouse architectures
- Proficiency in data modeling and performance optimization techniques
- Experience with Azure Data Factory (ADF) and Azure Functions (see the orchestration sketch after this list)
- Strong skills in Python, SQL, and data serialization formats (Parquet, Avro, JSON)
- Familiarity with version control systems, especially Git
- Ability to work independently in a fully remote, distributed team
- Good communication skills in English
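And on the orchestration side, a hedged sketch of a timer-triggered Azure Function (Python v2 programming model) that starts a Databricks job through the Jobs 2.1 REST API. The schedule, environment variable names, and job ID are placeholders, not details of our setup.

```python
# Illustrative sketch only: host, token source, job ID, and schedule are hypothetical.
import os

import azure.functions as func
import requests

app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 6 * * *", arg_name="timer")  # daily at 06:00 UTC
def trigger_nightly_load(timer: func.TimerRequest) -> None:
    """Kick off a Databricks job run via the Jobs 2.1 REST API."""
    host = os.environ["DATABRICKS_HOST"]    # e.g. the workspace URL
    token = os.environ["DATABRICKS_TOKEN"]  # stored in app settings / Key Vault

    resp = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {token}"},
        json={"job_id": 123},               # hypothetical job ID
        timeout=30,
    )
    resp.raise_for_status()
```

In practice this kind of trigger is often handled by ADF's native Databricks activity instead; the Function variant is shown here simply because both services appear in the requirements.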