

    Senior Hadoop Administrator

    Warszawa
    6 955 - 8 989 USD/month (net, B2B)
    Type of work: Full-time
    Experience: Senior
    Employment type: B2B
    Operating mode: Hybrid

    Tech stack

      • English: C1
      • Big Data: advanced
      • Apache Spark: advanced
      • Apache Kafka: advanced
      • Python: advanced
      • Bash: advanced
      • AWS: regular
      • GCP: regular
      • SQL: regular
      • Grafana: regular
      • Docker: regular

    Job description

    2 days per week from the office in Warsaw


    Altimetrik Poland is a digital enablement company. We deliver bite-size outcomes to enterprises and startups from all industries in an agile way to help them scale and accelerate their businesses. We are unique in Poland's IT market. Our differentiators are an innovation-first approach, a strong focus on core development, and an ability to attack the challenging and complex problems of the biggest companies in the world.


    As a Senior Hadoop Administrator, you will be part of the team that maintains the Data Platform and supports key cloud-based Big Data and Kafka platforms. You will be responsible for driving innovation for our partners and clients, both locally and globally. You will work on open-source Big Data and Kafka clusters in the cloud, ensuring their availability, performance, and reliability, and improving operational efficiency.


    Responsibilities:

    • Design, build, and manage Big Data and Kafka infrastructure.
    • Manage and optimize Apache Hadoop and Kafka clusters for high performance, reliability, and scalability.
    • Develop tools and processes to monitor and analyze system performance and to identify potential issues.
    • Collaborate with other teams to design and implement solutions that improve the reliability and efficiency of the Big Data cloud platforms.
    • Ensure security and compliance of the platforms within organizational guidelines.
    • Perform effective root cause analysis of major production incidents and develop learning documentation (identify and implement high-availability solutions for services with a single point of failure).
    • Plan and perform capacity expansions and upgrades in a timely manner to avoid scaling issues and bugs; this includes automating repetitive tasks to reduce manual effort and prevent human error.
    • Tune alerting and set up observability to proactively identify issues and performance problems.
    • Review new use cases and cluster-hardening techniques to build robust and reliable platforms.
    • Create standard operating procedure documents and guidelines on effectively managing and utilizing the platforms.
    • Leverage DevOps tools, disciplines (incident, problem, and change management), and standards in day-to-day operations.
    • Perform security remediation, automation, and self-healing as required.
    • Develop automations and reports to minimize manual effort, using tools such as shell scripting, Ansible, or Python, or any other programming language (see the sketch after this list).
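    To illustrate the kind of repetitive check this role would automate, here is a minimal Python sketch that parses the output of the standard `hdfs dfsadmin -report` command and flags DataNodes above a disk-usage threshold. The 80% threshold and the plain-print reporting are illustrative assumptions, not part of the role description.

    ```python
    #!/usr/bin/env python3
    """Minimal sketch: flag DataNodes whose DFS usage exceeds a threshold.

    Illustrative only; the threshold and reporting style are assumptions.
    """
    import re
    import subprocess

    USAGE_THRESHOLD = 80.0  # percent; an assumed alerting threshold


    def datanode_usage(report: str) -> dict:
        """Parse `hdfs dfsadmin -report` output into {datanode: used_percent}.

        The report lists each DataNode under a "Name:" line followed by
        per-node statistics, including a "DFS Used%:" line.
        """
        usage = {}
        current = None
        for line in report.splitlines():
            line = line.strip()
            if line.startswith("Name:"):
                current = line.split("Name:", 1)[1].strip()
            elif line.startswith("DFS Used%:") and current:
                # e.g. "DFS Used%: 42.17%" -> 42.17
                usage[current] = float(re.sub(r"[^\d.]", "", line))
        return usage


    if __name__ == "__main__":
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        ).stdout
        for node, pct in sorted(datanode_usage(report).items()):
            flag = "  <-- over threshold" if pct > USAGE_THRESHOLD else ""
            print(f"{node}: {pct:.1f}% used{flag}")
    ```

    In practice a script like this would feed an alerting pipeline rather than print to stdout, and could be scheduled via cron or wrapped in an Ansible task.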


    And if you possess...

    • Experience with managing and optimizing Big Data and Kafka clusters (a minimal health-check sketch follows this list).
    • Demonstrated experience with AWS or GCP cloud platforms.
    • Proficiency in scripting languages (Python, Bash) and SQL.
    • Familiarity with big data tools (Hadoop, Spark, Kafka, etc.) and frameworks (HDFS, MapReduce, etc.).
    • Strong knowledge of system architecture and design patterns for high-performance computing.
    • Good understanding of data security and privacy concerns.
    • Experience with infrastructure automation technologies such as Docker, Kubernetes, Ansible, or Terraform is a plus.
    • Excellent problem-solving and troubleshooting skills.
    • Strong communication and collaboration skills.
    • Observability: knowledge of tools such as Grafana and Splunk.
    • Understanding of Linux, networking, CPU, memory, and storage.
    • Ability to code in at least one of Java or Python.
    • Excellent interpersonal skills, along with superior verbal and written communication abilities.
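    As an example of the kind of routine Kafka health check implied by the first point, below is a minimal Python sketch that shells out to the stock `kafka-topics.sh --describe --under-replicated-partitions` tool. The broker address is a placeholder, and wrapping the CLI rather than a client library is just one possible approach.

    ```python
    #!/usr/bin/env python3
    """Minimal sketch: report under-replicated Kafka partitions.

    Assumes the stock Kafka CLI tools are on PATH; the broker address
    below is a placeholder.
    """
    import subprocess

    BOOTSTRAP = "localhost:9092"  # placeholder bootstrap server


    def under_replicated_partitions() -> list:
        """kafka-topics.sh prints one line per partition whose in-sync
        replica set is smaller than its assigned replica set."""
        out = subprocess.run(
            ["kafka-topics.sh", "--describe",
             "--under-replicated-partitions",
             "--bootstrap-server", BOOTSTRAP],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line.strip()]


    if __name__ == "__main__":
        bad = under_replicated_partitions()
        if bad:
            print(f"{len(bad)} under-replicated partition(s):")
            for line in bad:
                print(" ", line)
        else:
            print("All partitions fully replicated.")
    ```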


    🔥We grow fast.

    🤓We learn a lot.

    🤹We prefer to do things instead of just talking about them.


    If you would like to work in an environment that values trust and empowerment... don't hesitate, just apply!

