Dear Consultant,
We are seeking an experienced Data Engineer with strong proficiency in Python/PySpark and Azure Databricks, and a solid understanding of investment banking products and processes. This role is part of a dynamic team driving data platform modernization for a leading financial institution. If you are interested, please send your CV to marcillina.tietjen@dcvtechnologies.co.uk.
Location: Hybrid – Krakow (CET time zone preferred)
Contract Type: B2B Contract
Start Date: Immediate
Duration: 12 months (extendable)
Key Responsibilities:
Design and implement scalable data pipelines using Azure Databricks and PySpark
Collaborate with domain experts to translate financial product requirements into robust data solutions
Optimize data processing frameworks for performance and reliability
Contribute to ongoing improvements in data warehouse architecture and development practices
Must-Have Skills:
Strong hands-on experience in Python and PySpark
Azure Databricks implementation and optimization
Solid knowledge of data warehousing principles
Investment banking domain experience – must have worked directly with financial products and systems
Unix shell scripting – mandatory
Nice to Have:
PL/SQL development skills
Broader experience in financial IT or trading systems
What We Offer:
Flexible remote work
Competitive daily rate
Opportunity to work with a global investment banking team on mission-critical data infrastructure