Tasks:
Design and implement data processing solutions using Databricks for large-scale and diverse datasets
Design, build, and enhance data pipelines with Python and cloud-native tools
Work closely with solution architects to define and uphold best practices in data engineering
Ensure data consistency, security, and scalability within cloud-based environments
Requirements:
Experience in data engineering, with hands-on Databricks expertise
Strong experience in Python for automation and data transformation
Experience working with at least one major cloud platform (AWS, Azure, or GCP)
Strong communication and English language skills
Nice to have:
Solid understanding of SQL with experience in query optimization and data modeling
Familiarity with DevOps methodologies, CI/CD pipelines, and Infrastructure as Code (Terraform, Bicep)
Experience with real-time data streaming technologies such as Kafka or Spark Streaming
Knowledge of cloud data storage and warehousing solutions such as Azure Data Lake, Snowflake, or Synapse
Hands-on experience with PySpark for distributed data processing
Relevant certifications such as Databricks Certified Data Engineer Associate or cloud-based data certifications
Our offer:
Employment on a 12-month B2B contract via Experis
Compensation: 150-175 PLN per hour
100% remote work
Multisport card
Private healthcare system
Life insurance
Hourly rate quoted is net (B2B contract)