Senior Data Platform Engineer
ABOUT THE COMPANY
We are a global legal technology company that has been building software for the legal industry for over two decades. Our AI-powered cloud platform is used by leading law firms, Fortune 500 corporations, and government agencies worldwide to organise complex data, surface critical insights, and act on them — across litigation, investigations, regulatory inquiries, and data breach response.
We're valued at $3.6 billion and invest over $170 million annually in R&D. We're making substantial investments in data lake technology and distributed systems to support future growth and advanced analytics. Our scale means the data problems here are genuinely hard, and the platforms you build will have real consequences across the organisation.
ABOUT THE ROLE
We're building a specialised team focused on enabling advanced analytics and reporting capabilities across our internal data ecosystem. As a Senior Data Platform Engineer, you'll combine strong software engineering principles with deep data expertise to build robust, cloud-native platforms that process large-scale datasets efficiently and enable internal teams to build reporting and analytics on top of them.
The role emphasises cloud-native architecture, lakehouse integration, data warehousing, and governance best practices. You'll work on systems using Apache Spark, Delta Lake, and Iceberg, and help deliver curated data models and self-service analytics capabilities to internal stakeholders.
WHAT YOU'LL WORK ON
Data pipeline and distributed systems
Design and implement scalable data pipelines and distributed systems using Spark and Python to process and transform large-scale datasets for analytics and reporting.
Lakehouse platform development
Develop and maintain lakehouse capabilities with Delta Lake and Iceberg, ensuring data reliability, versioning, and performance optimisation at scale.
Analytics workflow enablement
Integrate dbt for SQL transformations running on Spark. Collaborate with internal teams to deliver curated datasets and self-service analytics capabilities for reporting and advanced use cases.
Data warehousing optimisation
Integrate and optimise Databricks and Snowflake for scalable storage and query performance. Drive performance tuning and cost optimisation across Spark jobs and cloud-native environments.
Governance and observability
Implement observability and governance frameworks including data lineage, quality checks, and compliance controls. Build platforms that allow secure and compliant access to diverse data sources.
Engineering best practices
Apply and champion clean code, modular design, CI/CD, automated testing, and code review standards across all data engineering work.
On-call participation
Participate in on-call rotations as part of shared team responsibility for platform reliability.
WHAT WE LOOK FOR
Python and SQL
Strong programming skills in both Python and SQL, applied to production data platform work at scale.
Apache Spark
Solid hands-on experience with Spark for distributed data processing, including performance tuning in production environments.
Lakehouse architecture
Expertise in Delta Lake and/or Apache Iceberg. You've applied these in production and understand the trade-offs in real-world scenarios.
dbt and analytics tooling
Practical experience with dbt for transformation workflows. Familiarity with Databricks and Snowflake for large-scale analytics workloads.
Data governance and compliance
Understanding of data governance, lineage tracking, and compliance requirements in large-scale, multi-tenant data environments.
Infrastructure and containerisation
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools in cloud-native environments.
Software engineering fundamentals
Solid understanding of software engineering principles — CI/CD, automated testing, clean code, and modular design applied to data systems.
Bonus
Exposure to event-driven architectures and advanced analytics platforms; experience enabling self-service analytics for internal stakeholders; working knowledge of Java, Scala, or Rust.
THE TEAM
You'll join a global engineering organisation working on a platform used by some of the world's largest legal teams. The culture is diverse, inclusive, and driven by high standards. Engineers here work on genuinely complex technical problems at scale — and are supported with the coaching, development, and tooling to keep growing.
COMPENSATION & BENEFITS
Salary
208,000 – 312,000 PLN per year, plus an annual performance bonus and long-term incentives.
Health coverage
Comprehensive health, dental, and vision plans.
Parental leave
Parental leave available for both primary and secondary caregivers.
Flexible working
Flexible work arrangements with a remote-first model.
Company breaks
Two week-long company-wide breaks per year, plus additional time off.
Training investment
Dedicated training investment programme to support ongoing professional development.