Project information:
Industry: Insurance and IT services
Rate: up to 175 zł/h net + VAT (B2B)
Location: remote work with occasional meetings (Warsaw)
Project language: English
Summary:
Join a newly forming team responsible for building a new Data Hub in a large-scale enterprise environment. You’ll have a direct impact on the architecture and development of a modern data platform built on a proven internal framework (Databricks + Azure). The platform supports a wide range of use cases, from analytics and reporting to operational systems and Generative AI. We’re looking for an experienced Senior Data Engineer to shape the foundations of this initiative and bring deep expertise in scalable data engineering.
Your Responsibilities:
Data Hub Development: Implement a scalable Data Hub platform based on a company-wide framework using Azure and Databricks.
Data Engineering: Build and optimize both batch and streaming data pipelines using Python and PySpark (see the sketch after this list).
Architectural Collaboration: Work closely with the Data & Solution Architect to implement enterprise-grade architecture.
Technical Standards & Mentorship: Help define best practices, participate in code reviews, and support knowledge sharing within the team.
Automation & CI/CD: Automate deployment and data operations using tools like Azure DevOps, Terraform, Docker, and Kubernetes.
Data Quality & Monitoring: Implement data validation, anomaly detection, and pipeline monitoring.
Cross-Team Collaboration: Collaborate with multidisciplinary teams to align technical solutions with business goals.
Documentation: Produce high-quality technical documentation in line with corporate standards.
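To give a flavour of the pipeline and data-quality work described above, here is a minimal PySpark sketch of a batch pipeline with a simple quality gate. It is illustrative only: the table and column names (raw_events, event_ts, amount, clean_events) are hypothetical, not taken from the project, and streaming pipelines would follow a similar structure via spark.readStream.

# Minimal batch pipeline sketch with a simple data-quality gate.
# All table and column names below are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("datahub-batch-sketch").getOrCreate()

# Read a raw table registered in the metastore.
raw = spark.read.table("raw_events")

# Basic validation: drop rows missing a timestamp, flag negative amounts.
valid = (
    raw
    .filter(F.col("event_ts").isNotNull())
    .withColumn("is_anomaly", F.col("amount") < 0)
)

# Simple monitoring signal: count anomalies before publishing.
anomaly_count = valid.filter("is_anomaly").count()
print(f"anomalies detected: {anomaly_count}")

# Publish the cleaned table for downstream consumers.
valid.write.mode("overwrite").saveAsTable("clean_events")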
Must-Have Qualifications:
At least 6 years of experience as a Data Engineer in enterprise-scale environments.
Proficiency in Python and PySpark.
Solid hands-on experience with Microsoft Azure.
Familiarity with Azure DevOps and automation workflows.
Practical knowledge of Databricks.
Comfortable working in cross-functional teams (e.g., architects, DevOps, analysts) within complex organizational structures.
Fluent in English (minimum B2 level).
Nice to Have:
Hands-on experience with dbt (data build tool) and Delta Live Tables (DLT) pipelines in Databricks.
Understanding of the medallion architecture (bronze/silver/gold layers) in a production context; see the sketch after this list.
Background in large enterprises or consulting firms, ideally with exposure to complex data ecosystems.
Experience working in Agile/Scrum development environments.
Experience with infrastructure as code (Terraform), containerization (Docker), orchestration (Kubernetes).
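For context on the medallion pattern mentioned above, here is a minimal PySpark sketch of bronze-to-silver-to-gold layering. Again, all table and column names (bronze_orders, order_id, order_ts, amount) are hypothetical, chosen only to illustrate the idea of progressively refining raw data into business-ready aggregates.

# Minimal medallion-style layering sketch; names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingested data, kept as-is for replayability.
bronze = spark.read.table("bronze_orders")

# Silver: cleaned and conformed records.
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_ts").isNotNull())
)
silver.write.mode("overwrite").saveAsTable("silver_orders")

# Gold: business-level aggregates ready for reporting.
gold = (
    silver
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("daily_revenue"))
)
gold.write.mode("overwrite").saveAsTable("gold_daily_revenue")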