
Data Engineer with Hadoop

48 - 58 USD/h
Net per hour - B2B
Type of work: Full-time
Experience: Senior
Employment Type: B2B
Operating mode: Hybrid

Tech stack

  • Jenkins: advanced
  • CI/CD: advanced
  • Ansible: advanced
  • Spark: advanced
  • Hadoop: advanced
  • Kafka: advanced
  • Big Data: advanced
  • Linux: regular

Job description

Data Engineer with Hadoop

Location: Cracow (6 days per month on-site)

About the Role:

We are currently looking for a Data Engineer with Hadoop to join a dynamic Data Platform team. The role offers the opportunity to work on large-scale, global data solutions designed to enable innovation and improve data accessibility across business units. You'll contribute to the modernization and automation of a hybrid platform that spans on-premises and multi-cloud environments (GCP and private cloud).

This role focuses on enhancing platform resilience, building automation tools, and improving developer experience for data engineering teams. It involves both back-end and front-end work, including integration with CI/CD tools, service management systems, and internal applications.

 

Key Responsibilities:

  • Develop automation tools and integrate existing solutions within a complex platform ecosystem

  • Provide technical support and design for Hadoop Big Data platforms (Cloudera preferred)

  • Manage user access and security (Kerberos, Ranger, Knox, TLS, etc.)

  • Implement and maintain CI/CD pipelines using Jenkins and Ansible (see the sketch after this list)

  • Perform capacity planning, performance tuning, and system monitoring

  • Collaborate with architects and developers to design scalable and resilient solutions

  • Deliver operational support and improve engineering tooling for platform management

  • Analyze existing processes and design improvements to reduce complexity and manual work
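
For illustration, a minimal Python sketch of the kind of CI/CD automation described above: a small wrapper that a Jenkins pipeline stage could call to apply a Hadoop configuration change through Ansible. The playbook and inventory paths are hypothetical placeholders, not part of the actual platform.

    #!/usr/bin/env python3
    """Wrapper a Jenkins stage might invoke to run an Ansible playbook."""
    import subprocess
    import sys

    def run_playbook(playbook: str, inventory: str, extra_vars: dict) -> int:
        """Invoke ansible-playbook with extra variables and return its exit code."""
        cmd = ["ansible-playbook", "-i", inventory, playbook]
        for key, value in extra_vars.items():
            cmd += ["-e", f"{key}={value}"]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        # Hypothetical example: roll out a Hive configuration change to the
        # production cluster group, as one stage of a Jenkins pipeline.
        sys.exit(run_playbook(
            playbook="playbooks/hadoop_config.yml",   # hypothetical playbook
            inventory="inventories/prod/hosts.ini",   # hypothetical inventory
            extra_vars={"cluster": "prod", "service": "hive"},
        ))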

 

Challenges You’ll Tackle:

  • Building scalable automation in a diverse ecosystem of tools and frameworks

  • Enhancing service resilience and reducing operational toil

  • Supporting the adoption of AI agents and real-time data capabilities

  • Integrating with corporate identity, CI/CD, and service management tools

  • Collaborating with cross-functional teams in a global environment

 

Required Skills & Experience:

  • Minimum 5 years of experience in engineering Big Data environments (on-prem or cloud)

  • Strong understanding of Hadoop ecosystem: Hive, Spark, HDFS, Kafka, YARN, Zookeeper

  • Hands-on experience with Cloudera distribution setup, upgrades, and performance tuning

  • Proven experience with scripting (Shell, Linux utilities) and Hadoop system management

  • Knowledge of security protocols: Apache Ranger, Kerberos, Knox, TLS, encryption

  • Experience in large-scale data processing and optimizing Apache Spark jobs (see the sketch after this list)

  • Familiarity with CI/CD tools like Jenkins and Ansible for infrastructure automation

  • Experience working in Agile or hybrid development environments (Agile, Kanban)

  • Ability to work independently and collaboratively in globally distributed teams
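
For illustration, a minimal PySpark sketch of the kind of Spark job optimization mentioned above: broadcasting a small dimension table to avoid shuffling a large fact table, and setting shuffle parallelism explicitly. The HDFS paths and column names are hypothetical placeholders.

    """Sketch: enrich a large event table with a small dimension table."""
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("events-enrichment-sketch")
        # Tune shuffle parallelism instead of relying on the 200-partition default.
        .config("spark.sql.shuffle.partitions", "400")
        .getOrCreate()
    )

    # Hypothetical inputs: a large fact table and a small dimension table on HDFS.
    events = spark.read.parquet("hdfs:///data/events")
    countries = spark.read.parquet("hdfs:///data/countries")

    # Broadcasting the small table turns a shuffle join into a map-side join.
    enriched = events.join(F.broadcast(countries), on="country_code", how="left")

    # Aggregate and write out with a bounded number of output files.
    daily = (enriched.groupBy("event_date", "country_name")
             .agg(F.count("*").alias("events")))
    daily.coalesce(50).write.mode("overwrite").parquet("hdfs:///data/daily_events")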

 

To learn more about Antal, please visit www.antal.pl
