Senior Data Engineer

Python

Daszyńskiego, Warszawa

Link Group

Full-time
B2B
Senior
Remote
55 - 68 USD
Net per hour - B2B

Job description

We are looking for a seasoned Senior Data Engineer to join a dynamic team that’s building a cutting-edge platform designed to enable automated decision-making and intelligent automation across the entire business lifecycle. This role offers a unique opportunity to help shape a transformative solution from the ground up. We're seeking individuals who are mission-driven, proactive, and passionate about data engineering and innovation.

Key Responsibilities

  1. Partner with business stakeholders and product owners to gather data requirements and design scalable technical solutions.
  2. Build and maintain robust data models and schema structures to support analytics, business intelligence, and machine learning initiatives.
  3. Optimize data processing pipelines for speed, scalability, and cost-efficiency across cloud and on-premise environments.
  4. Ensure high data quality and consistency through validation frameworks, automated monitoring, and comprehensive error-handling processes.
  5. Collaborate closely with data analysts and data scientists to deliver reliable, well-structured, and easily accessible datasets.
  6. Stay informed of emerging trends, tools, and best practices in data engineering to help drive innovation and technical excellence.
  7. Maintain operational stability and system performance across data pipelines and platforms.
  8. Provide Level 3 production support when necessary, resolving critical data-related issues swiftly and effectively.

Required Experience and Skills

  1. 8+ years of experience in data engineering, data architecture, or a similar technical role.
  2. Strong programming skills in SQL, Python, Java, or equivalent languages for data processing and pipeline development.
  3. Experience with both relational (e.g., PostgreSQL, SQL Server, Oracle) and NoSQL (e.g., MongoDB) databases, OLAP databases such as ClickHouse, and vector databases (e.g., pgvector, FAISS, Chroma).
  4. Expertise in distributed data processing frameworks such as Apache Spark, Flink, or Storm. Experience with cloud data solutions (e.g., Azure, Amazon Redshift, BigQuery, Snowflake) is highly desirable.
  5. Solid understanding of ETL/ELT pipelines, data transformation, and metadata management using tools such as Airflow, Kafka, NiFi, Airbyte, and Informatica.
  6. Proficiency in query performance tuning, profiling, and data pipeline optimization.
  7. Hands-on experience with data visualization platforms like Power BI, Tableau, Looker, or Apache Superset.
  8. Familiarity with DevOps principles, version control systems (Git), and CI/CD pipelines.
  9. Strong problem-solving skills, attention to detail, and the ability to work under pressure.
  10. Effective communicator with the ability to collaborate across multidisciplinary teams.


Tech stack

English: C1
SQL: advanced
Python: advanced
Java: advanced
ETL: advanced

Office location

Daszyńskiego, Warszawa

Published: 09.06.2025