We are a leading eCommerce company active across multiple European markets, offering fashion, lifestyle, and family products to millions of customers. With a strong in-house performance marketing team and a data-driven mindset, we are on a mission to become one of Europe’s top online shopping destinations.
With many years on the market and offices across Europe, we strike a balance between the stability of an established company and the agility of a fast-growing business. We are currently seeking a Data Engineer to join our international team and contribute to building a modern, scalable data platform.
Your Role and Main Responsibilities
As a Data Engineer, you will play a key role in building and maintaining our Central Lakehouse Data Platform, built on AWS and Databricks. Your daily tasks will include:
- Designing and developing data pipelines using PySpark, Delta Live Tables, and Spark SQL (a short sketch follows this list)
- Managing data infrastructure via Terraform for AWS and Databricks
- Implementing monitoring systems, data quality tests, unit testing, and automated alerts
- Refactoring legacy data solutions (AWS Glue, Redshift) into a CI/CD-deployed Lakehouse architecture
- Supporting the implementation of medallion architecture and data mesh concepts
- Collaborating closely with business analytics and ML teams, enabling efficient data usage for reporting and modeling
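To give a flavor of this work, here is a minimal sketch of a bronze-to-silver medallion step written as a Delta Live Tables pipeline in Python. The table names, the S3 location, and the quality rules are illustrative assumptions, not a description of our actual pipelines.

```python
# Minimal Delta Live Tables sketch: bronze -> silver medallion step.
# Table names, the S3 path, and the quality rules below are hypothetical.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Bronze: raw order events ingested as-is (placeholder S3 path).")
def orders_bronze():
    # `spark` is injected by the DLT runtime; Auto Loader picks up new files.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://example-bucket/raw/orders/")
    )


@dlt.table(comment="Silver: typed, deduplicated orders with quality checks.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # failing rows are dropped
@dlt.expect("positive_amount", "amount > 0")  # violations are only recorded in metrics
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("amount", F.col("amount").cast("decimal(10,2)"))
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )
```

In a setup like this, `expect_or_drop` quarantines bad rows while `expect` only records violations in the pipeline's quality metrics, which is one way the monitoring and alerting mentioned above can be wired up.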
Candidate Profile
You bring:
- A degree in Information Systems, Computer Science, Mathematics, Engineering, or a related technical field
- Several years of hands-on experience in data engineering (eCommerce experience is a plus)
- Strong Python skills, including testing, packaging, and deployment of applications (see the testing sketch after this list)
- Proven experience with PySpark and solid understanding of Spark’s architecture
- Advanced SQL skills and deep knowledge of data modeling for data lakes
- Experience with infrastructure as code (Terraform) and CI/CD pipelines in Git-based environments
- Practical background in deploying scalable ETL solutions
- Fluency in English (you’ll be working in an international setting)
- A collaborative, team-oriented mindset and eagerness to learn new technologies
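As a small illustration of the testing skills we look for, the sketch below unit-tests a PySpark transform with pytest. The transform, the column names, and the 19% VAT rate are hypothetical examples, not code from our platform.

```python
# Minimal pytest sketch for a PySpark transform (hypothetical example).
import pytest
from pyspark.sql import SparkSession, functions as F


def add_gross_amount(df):
    # Transform under test: net amount plus 19% VAT, rounded to cents.
    return df.withColumn("gross_amount", F.round(F.col("net_amount") * 1.19, 2))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps unit tests fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()


def test_add_gross_amount(spark):
    df = spark.createDataFrame([(1, 100.0)], ["order_id", "net_amount"])
    result = add_gross_amount(df).collect()[0]
    assert result["gross_amount"] == 119.0
```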
Nice-to-have:
- Familiarity with AWS services and the Databricks platform
- Experience with AWS Glue, Redshift, and CloudFormation
- Knowledge of stream processing with AWS Kinesis and Spark Streaming
- Good command of Scala and functional programming concepts
- Exposure to MLOps tools like MLflow