Data Engineer - Financial Markets / Fixed Income

Data

Gdańsk

Link Group

Full-time
B2B
Mid
Remote
33 - 44 USD
Net per hour - B2B

Job description

We are looking for a Data Engineer with a strong background in ETL pipelines, database management, and data integrations. The ideal candidate will have hands-on experience using Python and SQL to develop and maintain data processing workflows while ensuring data integrity, performance, and security.

This role involves building integrations with both external APIs and internal systems, managing data storage solutions such as data warehouses and relational databases, and conducting data quality checks. The candidate will collaborate closely with data scientists and IT teams to ensure the data infrastructure integrates smoothly and securely with the rest of the organization's systems.
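
For illustration only, here is a minimal sketch of the kind of extract-and-load step described above: pulling records from a hypothetical external REST API and writing them to a PostgreSQL staging table. The endpoint, table, column names, and connection string are placeholder assumptions, not details of the actual project.

    # Illustrative only: a hypothetical extract-and-load step.
    # The API endpoint, table, and connection details are placeholders.
    import requests
    import psycopg2

    API_URL = "https://api.example.com/v1/bond-prices"  # hypothetical endpoint
    DSN = "dbname=markets user=etl host=localhost"      # placeholder connection string

    def extract(session: requests.Session) -> list[dict]:
        """Pull the latest records from the external API."""
        resp = session.get(API_URL, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def load(rows: list[dict]) -> None:
        """Insert rows into a PostgreSQL staging table, updating on conflict."""
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.executemany(
                """
                INSERT INTO staging.bond_prices (isin, price, as_of)
                VALUES (%(isin)s, %(price)s, %(as_of)s)
                ON CONFLICT (isin, as_of) DO UPDATE SET price = EXCLUDED.price
                """,
                rows,
            )

    if __name__ == "__main__":
        with requests.Session() as session:
            load(extract(session))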


Project Description

The project focuses on developing and optimizing data pipelines to support business intelligence, analytics, and machine learning initiatives. The Data Engineer will be responsible for integrating data from various sources, ensuring data consistency, and maintaining high-performance storage solutions.

The role requires collaboration with data scientists and IT teams to align data architecture with business objectives while maintaining security and compliance standards. Additionally, the engineer will monitor system performance, troubleshoot issues, and contribute to the automation and scalability of data workflows.

This is an excellent opportunity to work with cutting-edge data technologies in a dynamic environment, contributing to data-driven decision-making and business transformation.


Key Responsibilities

  • Design, develop, and maintain ETL data pipelines using Python and SQL.
  • Implement and manage data storage solutions, including relational databases and data warehouses.
  • Build integrations with external APIs and internal systems to enable efficient data exchange.
  • Perform data analysis and quality checks, ensuring data accuracy and reliability (see the sketch after this list).
  • Monitor data infrastructure to optimize performance, security, and scalability.
  • Work closely with data scientists and IT teams to align data workflows with business needs.
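
As a purely illustrative companion to the quality-check item above, the sketch below runs a few SQL assertions against a hypothetical PostgreSQL table; the table, columns, and checks are assumptions rather than project specifics.

    # Illustrative data quality checks against a hypothetical table.
    # Table/column names and the connection string are placeholder assumptions.
    import psycopg2

    DSN = "dbname=markets user=etl host=localhost"  # placeholder

    CHECKS = {
        "no_null_isin": "SELECT COUNT(*) FROM staging.bond_prices WHERE isin IS NULL",
        "no_duplicate_keys": """
            SELECT COUNT(*) FROM (
                SELECT isin, as_of FROM staging.bond_prices
                GROUP BY isin, as_of HAVING COUNT(*) > 1
            ) dup
        """,
        "no_negative_prices": "SELECT COUNT(*) FROM staging.bond_prices WHERE price < 0",
    }

    def run_checks() -> dict[str, int]:
        """Return the number of offending rows per check (0 means the check passed)."""
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            results = {}
            for name, query in CHECKS.items():
                cur.execute(query)
                results[name] = cur.fetchone()[0]
            return results

    if __name__ == "__main__":
        failures = {name: count for name, count in run_checks().items() if count}
        if failures:
            raise SystemExit(f"Data quality checks failed: {failures}")
        print("All data quality checks passed.")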


Skills & Qualifications

  • 3+ years of experience developing ETL pipelines using Python and SQL.
  • Strong data modeling and database management skills, particularly with relational databases (preferably PostgreSQL).
  • Proficiency in Linux environments (especially Red Hat distributions) and version control systems (Git).
  • Familiarity with DevOps pipelines, particularly Azure DevOps Services.
  • Understanding of containerization technologies like Docker or Podman.
  • Strong problem-solving skills and willingness to learn new technologies.


Nice to Have

  • Experience with Apache Airflow or other data orchestration tools (a minimal DAG sketch follows this list).
  • Familiarity with Databricks, Dataiku, MLflow, or similar Machine Learning platforms.
  • Knowledge of kdb+/q and distributed processing frameworks like Hadoop and Spark/PySpark.
  • Understanding of financial markets, especially fixed income instruments.
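
For the Apache Airflow item above, here is a minimal DAG sketch (assuming Airflow 2.4+); the task callables, DAG id, and schedule are placeholders intended only to show the orchestration pattern, not the project's actual workflows.

    # Illustrative only: a minimal Airflow 2.x DAG wiring an extract and a load task.
    # Callables and the schedule are placeholders, not a description of the real pipelines.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        """Placeholder: pull data from a source system."""

    def load():
        """Placeholder: write data to the warehouse."""

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task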


Tech stack

  • ETL - regular
  • Python - regular
  • SQL - regular
  • Red Hat - regular
  • DevOps - regular
  • Azure DevOps - regular
  • Docker - regular
  • PostgreSQL - nice to have
  • Podman - nice to have
  • Apache Airflow - nice to have


Published: 28.03.2025