Responsibilities:
Design, implement, and maintain scalable data pipelines using Azure Databricks, Spark, and PySpark
Work with Delta Lake to manage large-scale data storage and optimize performance
Develop robust data integration solutions using Azure Data Factory and Azure Functions
Build and maintain structured and semi-structured data models, leveraging formats such as Parquet, Avro, and JSON
Ensure efficient and secure data processing through proper performance tuning and code optimization
Collaborate with development and analytics teams to support business data needs
Apply version control best practices using Git and follow coding standards in Python and SQL
Requirements:
Strong hands-on experience with Azure Databricks, Spark, and PySpark
Proficiency in building and tuning data pipelines with Delta Lake
Solid understanding of data modeling and performance optimization techniques
Practical experience with Azure Data Factory, Azure Functions, and Git
Competence in working with data formats such as Parquet, Avro, and JSON
Strong programming skills in Python and SQL
Ability to work effectively in a fast-paced, enterprise-level environment
Strong communication skills and fluency in spoken and written English (C1)
Nice to have:
Understanding of blockchain-related concepts and data structures
Offer:
Private medical care
Co-financed sports card
Training & learning opportunities
Ongoing support from a dedicated consultant
Employee referral program
Published: 06.09.2025
Data Engineer with Blockchain
DCG, Kraków
150 - 170 PLN net per hour (B2B)