Hadoop / Big Data Developer (she / he)
Location: hybrid (Gdynia, Gdańsk, Warszawa, or Łódź)
Tasks:
Design, develop, and maintain scalable Big Data solutions using Hadoop ecosystem technologies
Build and optimize data pipelines
Develop high‑quality, efficient, and reusable code using technologies such as Java, SQL, Hive, Spark, and related tools
Work closely with business stakeholders and product owners
Participate actively in SAFe ceremonies, including PI planning, sprint planning, daily stand‑ups, reviews, and retrospectives
Optimize existing Big Data processes and queries for performance and cost efficiency
Collaborate with cross‑functional teams (developers, architects, QA, DevOps) to deliver end‑to‑end solutions
Support deployment, monitoring, and troubleshooting of Big Data applications in production environments
Requirements:
• 3+ years of Java 8+ and/or Scala experience
• Experience working with Spark
• Knowledge of Linux Shell Scripting
• Knowledge of SQL
• Knowledge of the Hadoop stack (YARN, Sqoop, Hive, Impala, MapReduce, Oozie, etc.)
• Familiarity with version control and CI/CD tools (Git, Ansible, Bamboo, Jenkins, etc.)
Nice-to-have skills:
• Experience with functional programming techniques and principles in Scala
• Experience with streaming technologies such as Flink and Kafka
• Familiarity with data analysis
• Familiarity with the SAFe Agile way of working