PySpark Data Engineer
We are looking for a Data Engineer to design and maintain modern data solutions based on Databricks, PySpark, and Microsoft Azure. The role focuses on building scalable data platforms and delivering reliable data pipelines to support business needs.
✅ Your responsibilities:
Design, build, and maintain scalable data pipelines
Develop data processing and transformation workflows
Support data ingestion from multiple sources into cloud environments
Ensure performance, reliability, and data quality
Collaborate with business and technical stakeholders to translate requirements into data solutions
Contribute to documentation and continuous improvement of data processes
Work in a distributed team with a focus on product and business goals
🧠 Our requirements:
Minimum 3 years of experience in Data Engineering
Strong hands-on experience with PySpark (DataFrames, Spark SQL optimization, partitioning)
Practical experience with Databricks and Azure Data Factory
Knowledge of Azure SQL and core Azure services
Experience with CI/CD processes
Ability to work independently and solve problems with minimal supervision
Strong written and spoken English
Experience with Power BI
🌟 What we offer:
Work on modern data solutions in Azure and Databricks environment
Flexible working model and stable, long-term cooperation
Exposure to international projects and stakeholders
Support for professional growth and continuous learning