We are seeking a skilled Data Engineer to join our team. If you have a strong background in data engineering, with expertise in Hadoop, Spark, and cloud technologies, and enjoy working in a collaborative, fast-paced environment, we’d love to hear from you. 🙋‍♀️
✅ Your responsibilities:
Design and build scalable data pipelines.
Develop and implement data processing solutions in a distributed environment.
Collaborate with cross-functional teams to deliver robust data solutions.
Automate testing, integration, and deployment processes.
Troubleshoot and debug code issues in collaboration with the development team.
Adapt to dynamic work environments and maintain a proactive approach.
🧠 Our requirements:
Strong experience with Apache Spark.
Proficiency in SQL and distributed data systems.
Familiarity with Google Cloud Platform and data processing tools.
Knowledge of DevOps practices and CI/CD pipelines.
Familiarity with Hadoop and Scala.
Experience with Git and Agile methodologies.
Excellent communication and collaboration skills.
Ability to work in fast-paced environments with a team-oriented approach.
🌟 What we offer:
Remote work - great if you are open to meeting the team at the office once in a while :)
Long-term cooperation
Engaging projects with real impact
Supportive, experienced team and modern development environment
Opportunities for continuous learning and growth
Salary paid net per month - B2B contract