AI Data Engineer (Remote from Poland)
About the Role
Join a pioneering team that’s redefining how AI and cloud-native platforms power front-office trading.
We’re hiring a hands-on Data Engineer to help shape the future of our platform, a critical piece of infrastructure that supports Quantitative and Strat developers in building next-generation trading solutions.
This is not a standard engineering role. You’ll work closely with front‑office traders and quantitative developers, focusing on the data engineering required to design, build, and operate bespoke generative AI and agent‑based systems used directly in trading workflows.
The work you do will have a measurable impact on how strategies are developed, tested, and executed. If you’re motivated by building novel, production‑grade systems at the leading edge of technology, this role gives you the scope to do exactly that.
What You’ll Do
You’ll design, build, and innovate across both cloud and on-prem environments, scaling platform capabilities and driving AI adoption:
• Design, build, and maintain robust data pipelines for batch and streaming workloads, ensuring high data quality, reliability, and observability across cloud and on‑prem platforms.
• Model, store, and serve large‑scale datasets optimised for analytics, machine learning, and low‑latency consumption by AI‑driven trading systems.
• Build and optimise real‑time and near‑real‑time data pipelines using Databricks and streaming technologies to ingest, process, and serve high‑volume market and trading data at scale.
• Design and implement secure, cost-aware, scalable systems using AWS services and Kubernetes.
• Contribute to best practices for agent‑based system infrastructure and mentor junior engineers when needed.
• Work across organisational boundaries and champion modern engineering practices.
• Stay ahead of the curve in agent‑based systems, AI infrastructure, and cloud-native tooling.
• Architect and develop cutting-edge platform services for AI-driven trading.
Tech Stack
• Programming: Python
• AWS: S3, Kinesis, Glue, Lambda, Step Functions, SageMaker, and more.
• On-Prem: Managed Kubernetes Platform and Hadoop ecosystem.
• Databricks (nice to have).
What We’re Looking For
• 5–10 years of experience in data engineering, ideally in platform or infrastructure roles.
• Strong programming skills in Python; passion for code quality and testing.
• Experience with Databricks or similar tools.
• Experience with AWS services (S3, Glue, Kinesis, Lambda, ECS, IAM, KMS, API Gateway, Step Functions, MSK, CloudFormation).
• Experience working in a fast-paced environment in either engineering or analytical roles.
• Passion for being hands-on and contributing to a collaborative engineering culture.
What We Offer
• Direct Impact: Be part of a team building agent‑based systems that traders and quants use daily to optimise strategies.
• Creative Freedom: Open collaboration and the chance to bring your ideas to life.
• Visibility: Be a key player in a small, high-impact team with exposure across the organisation.
Nice to Have
• Experience with on-prem Hadoop and Kubernetes.
• Familiarity with AWS cost management and optimisation tools.
• Knowledge of Databricks.
• Knowledge of front-office developer workflows in financial services.