🌍 Work mode – 100% remote / Warsaw
⏰ Start – ASAP / April 1
📙 Language – English min. B2
About the Role
As a Senior Data Engineer, you will lead the development of scalable Data Hubs, enabling analytical, reporting, operational, and Generative AI use cases. This role involves defining standards, designing architecture patterns, and guiding the engineering team in implementing best practices. You will play a key role in shaping the data strategy, mentoring engineers, and ensuring the robustness and efficiency of data solutions.
Key Responsibilities
· Data Hub Architecture – Design scalable, efficient, and reusable Data Hub architectures for integrating structured and unstructured data from multiple sources.
· Technical Leadership – Define standards, best practices, and design patterns, ensuring consistency across implementations.
· Mentoring & Knowledge Sharing – Guide and mentor other data engineers, providing technical direction and code reviews.
· Data Pipeline Optimization – Develop and optimize batch and real-time data processing workflows.
· Quality & Monitoring – Implement data validation, anomaly detection, and automated monitoring, ensuring high data reliability.
· Automation & CI/CD – Drive automation in data workflows, ensuring smooth deployment through DevOps pipelines.
· Collaboration & Strategy – Work closely with architects, AI engineers, and business teams to align data solutions with enterprise goals.
· Security & Compliance – Ensure that data solutions meet governance, security, and regulatory requirements.
· Documentation & Knowledge Management – Maintain and promote high-quality technical documentation for data models, pipelines, and best practices.
Required Skills and Experience
· Programming – Advanced proficiency in Python and SQL for data engineering.
· Cloud & Big Data – Strong experience with Azure Data Factory, ADLS, Azure SQL, Synapse.
· Databricks & Spark – Deep expertise in Databricks (primary tool) and Apache Spark for scalable data processing.
· Data Architecture – Proven ability to design enterprise-scale data platforms, ensuring scalability, security, and efficiency.
· Streaming & Real-Time – Experience with real-time data processing using Azure Stream Analytics, Event Hubs, or similar tools.
· Automation & DevOps – Strong knowledge of CI/CD, Terraform, Kubernetes/AKS, and Docker for infrastructure automation.
· Data Governance – Experience ensuring data security, lineage, and compliance with industry standards.
· Leadership & Mentoring – Ability to guide, mentor, and set technical direction for data engineering teams.
· Documentation & Communication – Strong ability to create clear, structured technical documentation and communicate complex topics effectively.
· Language Skills – Proficient in English (spoken and written), minimum B2 level.
Technology Stack
· Data Platform – Databricks (primary), Apache Spark, Delta Lake.
· Cloud & Data Services – Azure Data Factory, ADLS, Azure SQL, Synapse, Azure DevOps.
· Streaming & Real-Time – Azure Stream Analytics, Azure Event Hubs.
· Development & Automation – Python, SQL, GitHub, Terraform, Docker, Kubernetes/AKS.
This role offers an opportunity to shape data strategy, define engineering standards, and lead complex data initiatives. If you’re excited about building high-impact data solutions and mentoring others, we’d love to hear from you!
Beata Tworko
E: beata.tworko@makeitright.pl