#1 Job Board for tech industry in Europe


    Senior Hadoop Administrator

    Location: Warszawa
    Salary: 27 000 - 33 600 PLN/month net (B2B)
    Type of work: Full-time
    Experience: Senior
    Employment Type: B2B
    Operating mode: Hybrid

    Tech stack

      • English: C1
      • Python: advanced
      • Bash: advanced
      • Big Data: advanced
      • Apache Hadoop: advanced
      • Spark: advanced
      • Kafka: advanced
      • Docker: regular
      • Kubernetes: regular
      • Terraform: regular

    Job description

    Hybrid: 2 days per week from the office in Warsaw


    Altimetrik Poland is a digital enablement company. We deliver bite-size outcomes to enterprises and startups across all industries in an agile way, helping them scale and accelerate their businesses. We are unique in Poland's IT market: our differentiators are an innovation-first approach, a strong focus on core development, and the ability to tackle the challenging, complex problems of the world's biggest companies.


    As a Senior Hadoop Administrator, you will be:

    • Part of a team that maintains and supports the Data Platform and provides support for key cloud-based Big Data and Kafka platforms.
    • Responsible for driving innovation for our partners and clients, both within the organization and globally.
    • Responsible for open-source Big Data and Kafka clusters with a focus on the cloud, ensuring their availability, performance, and reliability, and improving operational efficiency.


    Responsibilities:

    • Design, build, and manage Big Data and Kafka infrastructure.
    • Manage and optimize Apache Hadoop and Kafka clusters for high performance, reliability, and scalability.
    • Develop tools and processes to monitor and analyze system performance and to identify potential issues.
    • Collaborate with other teams to design and implement solutions that improve the reliability and efficiency of the Big Data cloud platforms.
    • Ensure security and compliance of the platforms within organizational guidelines.
    • Perform effective root cause analysis of major production incidents and develop learning documentation (identify and implement high-availability solutions for services with a single point of failure).
    • Plan and perform capacity expansions and upgrades in a timely manner to avoid scaling issues and bugs, including automating repetitive tasks to reduce manual effort and prevent human error.
    • Tune alerting and set up observability to proactively identify issues and performance problems.
    • Review new use cases and cluster-hardening techniques to build robust and reliable platforms.
    • Create standard operating procedure documents and guidelines on effectively managing and utilizing the platforms.
    • Leverage DevOps tools, disciplines (incident, problem, and change management), and standards in day-to-day operations.
    • Perform security remediation, automation, and self-healing as required.
    • Develop automations and reports to minimize manual effort, using tools such as shell scripting, Ansible, or Python, or any other programming language.
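    To give a flavor of the capacity-monitoring automation described above, here is a purely illustrative Python sketch (not Altimetrik's actual tooling): a small helper that parses an `hdfs dfsadmin -report`-style dump and computes cluster utilization, the kind of check that could feed alerting before a capacity expansion is needed.

    ```python
    import re

    def hdfs_capacity_pct(report: str) -> float:
        """Parse 'Configured Capacity' and 'DFS Used' lines (byte counts)
        from an `hdfs dfsadmin -report`-style dump and return the used
        capacity as a percentage, rounded to one decimal place."""
        def grab(label: str) -> int:
            # Match e.g. "DFS Used: 820000000000 (820 GB)" and keep the byte count.
            m = re.search(rf"{label}:\s*(\d+)", report)
            if not m:
                raise ValueError(f"missing '{label}' in report")
            return int(m.group(1))

        capacity = grab("Configured Capacity")
        used = grab("DFS Used")
        return round(100 * used / capacity, 1)

    # Example snippet mimicking dfsadmin output (numbers in bytes):
    sample = """
    Configured Capacity: 1000000000000 (1 TB)
    DFS Used: 820000000000 (820 GB)
    """
    print(hdfs_capacity_pct(sample))  # 82.0
    ```

    In practice a script like this would wrap the real `hdfs dfsadmin -report` call and push the result to an alerting system; the parsing shown here is the testable core.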


    And if you possess...

    • Experience with managing and optimizing Big Data and Kafka clusters.
    • Proficiency in scripting languages (Python, Bash) and SQL.
    • Familiarity with big data tools (Hadoop, Spark, Kafka, etc.) and frameworks (HDFS, MapReduce, etc.).
    • Strong knowledge of system architecture and design patterns for high-performance computing.
    • Good understanding of data security and privacy concerns.
    • Experience with infrastructure automation technologies such as Docker, Kubernetes, Ansible, and Terraform is a plus.
    • Excellent problem-solving and troubleshooting skills.
    • Strong communication and collaboration skills.
    • Observability: knowledge of observability tools such as Grafana, opera, and Splunk.
    • Linux: understanding of Linux, networking, CPU, memory, and storage.
    • Programming languages: ability to code in Java, Python, or another widely used language.
    • Communication: excellent interpersonal skills, along with superior verbal and written communication abilities.


    We work 100% remotely or from our hub in Kraków.

    🔥We grow fast.

    🤓We learn a lot.

    🤹We prefer to do things instead of just talking about them.

    If you would like to work in an environment that values trust and empowerment... don't hesitate, just apply!


