#1 Job Board for the tech industry in Europe


    PySpark Developer

    4 531 - 5 355 USD net/month (B2B)
    Type of work: Full-time
    Experience: Senior
    Employment Type: B2B
    Operating mode: Remote
    Link Group

    We're a team of tech enthusiasts who love putting together amazing IT teams for Fortune 500 companies and startups across the globe. Our goal is to help you find your ideal job and work with some of the biggest players in the tech industry.


    Tech stack

      • Data: advanced
      • PySpark: regular
      • Apache Spark: regular

    Job description

    Online interview

    We are seeking a highly skilled PySpark Developer with at least 5 years of experience in big data processing and analytics. The ideal candidate will design, implement, and optimize large-scale data processing pipelines, leveraging the capabilities of Apache Spark and Python.


    Key Responsibilities

    • Develop, test, and maintain PySpark-based ETL pipelines to process and analyze large datasets.
    • Collaborate with data engineers, data scientists, and business stakeholders to understand data requirements and design optimal solutions.
    • Optimize PySpark applications for performance and scalability in distributed computing environments.
    • Work with Hadoop-based data platforms and integrate with other tools like Hive, HDFS, or Kafka.
    • Ensure data quality and integrity through robust validation and monitoring practices.
    • Debug and resolve issues in production and pre-production environments.
    • Document technical solutions and best practices.


    Requirements

    Technical Skills:


    • 5+ years of experience in data engineering or big data development, with a strong focus on PySpark.
    • Proficiency in Python programming, with experience in libraries commonly used in data processing (e.g., Pandas, NumPy).
    • Strong understanding of Apache Spark concepts: Spark Core, Spark SQL, and Spark Streaming.
    • Experience with distributed data processing frameworks and working in cloud-based environments (e.g., AWS, Azure, GCP).
    • Solid knowledge of big data technologies like Hadoop, Hive, HDFS, Kafka, or Airflow.
    • Hands-on experience with relational and NoSQL databases (e.g., PostgreSQL, Cassandra).
    • Familiarity with CI/CD pipelines and version control (e.g., Git).


    Soft Skills:


    • Strong analytical and problem-solving skills.
    • Ability to work collaboratively in a team and communicate technical concepts effectively.
    • Detail-oriented, with a commitment to delivering high-quality code.


    Preferred Qualifications:


    • Experience with streaming data using Spark Streaming or Kafka.
    • Knowledge of machine learning workflows and integration with big data pipelines.
    • Understanding of containerization tools like Docker or orchestration with Kubernetes.

    Check similar offers

    Senior Python Developer
    New · BlockWise · 4.41K - 7.85K USD · Poznań, fully remote
    Python

    Staff Python Engineer
    New · Adverity · 6.13K - 7.92K USD · Warszawa, fully remote
    Django, Python, Project Management

    PySpark Developer
    New · Link Group · 4.53K - 5.36K USD · Wrocław, fully remote
    Data, Apache Spark, PySpark

    Staff Software Engineer (Privacy Engineering)
    New · Affirm · 7.32K - 9.36K USD · Wrocław, fully remote
    Databases, Backend, AWS

    Senior Machine Learning Engineer
    New · Netguru · 3.68K - 5.76K USD · Poznań, fully remote
    TensorFlow, Python, NLP