We’re hiring a Data Engineer!
Are you passionate about turning raw data into business-ready insights? We’re looking for a skilled Data Engineer to join our client’s team and build modern, scalable data platforms using cutting-edge tools.
Our client is a dynamic tech company specializing in cloud solutions and digital transformation. They work with international clients across various industries, delivering scalable and innovative solutions.
Key responsibilities
Designing and building robust ETL/ELT data pipelines in Microsoft Fabric using PySpark and SQL
Modeling data using approaches like star schema, data vault, and lakehouse
Creating well-structured datasets, reports, and Power BI dashboards focused on business usability and self-service
Implementing best practices around data governance, security, and documentation
Automating tests, CI/CD workflows, and monitoring using Azure DevOps or GitHub Actions
Collaborating closely with product owners, analysts, and fellow engineers in cross-functional teams
Ideal candidate profile
3+ years of experience as a Data Engineer or in a similar role
Advanced SQL skills – query optimization, indexing, partitioning
Strong Python programming skills
Solid knowledge of Apache Spark for batch and streaming workloads, including Delta Lake
Experience with Power BI – data modeling, DAX, RLS, deployment pipelines
Familiarity with cloud platforms – ideally Azure (Data Lake Gen2, Data Factory, Synapse/Databricks)
Proficiency with version control and DevOps tools (Git, pull requests, CI/CD basics)
Fluency in English (minimum B2+)
Conditions
Work model: 100% remote
Rate: 100–150 PLN/h net (B2B)
Benefits: private medical care, life insurance, Multisport card