As a Data Engineer, you will be instrumental in developing and scaling our data integration capabilities. You will work with a modern, cloud-native stack and be a key player in building the future of our platform.
About the project:
Our client is a forward-thinking technology company on a bold mission: to build the AI foundation for SaaS growth. Their platform transforms data chaos into clarity – empowering businesses with an integrated, intelligent core for better decision-making, deeper insights, and scalable automation.
Founded and backed by seasoned entrepreneurs from successful Scandinavian ventures, the company is purpose-built for long-term impact. Their vision combines technical excellence with a strong product mindset – creating tools that are not only powerful but also built to last.
They foster a culture where engineers solve meaningful problems with autonomy, transparency, and mutual trust. It’s a place where curiosity is welcomed, and every team member plays a key role in shaping the future of AI-driven SaaS infrastructure.
What you will do:
Design, build, and manage scalable data solutions within Microsoft Fabric, including Data Factory, lakehouse architecture, and Fabric workspaces.
Develop and maintain data pipelines with a focus on real-time streaming, ETL/ELT processing, and data transformation best practices.
Ensure data quality and governance by implementing robust frameworks for validation, cleansing, and lineage tracking.
Work with REST APIs, webhooks, and custom integrations to connect diverse data sources and enable automated workflows.
Apply knowledge of tenant-level data partitioning, security boundaries, and resource isolation to build secure, compliant data environments.
Collaborate on CI/CD practices in Microsoft Fabric, including Git integration, deployment pipelines, and workspace lifecycle management.
Use Python for data processing, automation scripts, and the development of custom integration components.
Leverage Azure data services for storage, compute, and analytics, ensuring cost-effective and performant solutions.
Apply Infrastructure as Code techniques (e.g., ARM templates, Bicep, Terraform) to provision and manage cloud resources reproducibly.
Support integration with SaaS platforms, ensuring alignment with common middleware patterns and SaaS-specific operational models.
What you need:
3+ years of experience as a Data Engineer.
Proven experience with Microsoft Fabric architecture, including Data Factory.
Hands-on experience with Fabric workspaces, data pipelines, and lakehouse architecture.
Knowledge of tenant-specific data partitioning, security boundaries, and resource allocation.
Proficiency with REST APIs, webhooks, and real-time data streaming.
Experience with data transformation, ETL/ELT processes, and data quality frameworks.
Knowledge of common SaaS integration patterns and middleware solutions.
Strong Python programming skills for data processing, automation, and custom integrations.
A strong background in Azure data services.
Experience with Infrastructure as Code (ARM templates, Bicep, or Terraform).
Expertise in Microsoft Fabric CI/CD using deployment pipelines, Git integration, and workspace lifecycle management.
An understanding of SaaS business models, scaling challenges, and operational requirements.
Upper-intermediate English level.
What's in it for you:
Ownership – we trust that you will do the right things to deliver maximum impact.
Transparency – we say what we think and every voice is heard and respected, even when our opinions differ.
Service – whether it’s for our customers or teammates, we always support each other.
No bureaucracy, no micromanagement.
Flexible working schedule – you plan your working day based on your tasks and meetings.