✅ Your responsibilities:
Design and develop scalable data pipelines to support business and analytical needs.
Implement efficient data processing solutions in distributed systems.
Work closely with cross-functional teams to deliver reliable and performant data solutions.
Automate testing, integration, and deployment workflows to ensure smooth delivery.
Identify, troubleshoot, and resolve code and system issues in collaboration with the engineering team.
Thrive in a dynamic environment by staying proactive and adaptable to changing priorities.
🧠 Our requirements:
Experience with Apache Spark.
Proficiency in SQL and distributed data systems.
Familiarity with Google Cloud Platform and data processing tools.
Knowledge of DevOps practices and CI/CD pipelines.
Familiarity with Hadoop and Scala.
Experience with Git and Agile methodologies.
Excellent communication and collaboration skills.
Ability to work in fast-paced environments with a team-oriented approach.
🌟 What we offer:
Remote work - great if you are open to meeting the team at the office once in a while :)
Long-term cooperation
Engaging projects with real impact
Supportive, experienced team and modern development environment
Opportunities for continuous learning and growth