Senior Data Engineer

Data
Wierzbięcice 1B, Poznań

ROCKWOOL GBS

Full-time
Permanent
Senior
Hybrid

Job description

We are seeking a Senior Data Engineer, based at our Poznań location, to join the Data Science & Engineering team.

  

Ready to help build a better future for generations to come?  

In an ever-changing world, we owe it to ourselves and our future generations to live life responsibly. At ROCKWOOL, we work with dedication to enrich modern living through our innovative stone wool solutions.

Join us and make a meaningful difference!

 

Your future team:  

You will join our Data Science & Engineering Team, a group of 14 skilled professionals including the Team Leader. The team combines strong expertise in data engineering, analytics, and machine learning, and is structured into several project‑focused sub‑teams working across a variety of business areas.

 

What we are building:

Our data platform is already in the cloud, already on Databricks. But we’re not here to maintain the status quo – we’re rebuilding it from the ground up to jump into the exciting world of real-time data and streaming.


We will migrate from a batch-oriented Airflow + Databricks setup to a streaming-first architecture: Kafka, Databricks with cool new features like Declarative Pipelines, Unity Catalog, Apache Iceberg / Polaris Catalog, and a new serving layer, which you will help us select.
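
To make that first hop concrete, here is a minimal sketch of Kafka ingestion into a Databricks Declarative Pipelines (Delta Live Tables) bronze table; the broker address, topic name, and event schema are hypothetical placeholders, not a description of our actual pipeline:

    # A minimal sketch: Kafka -> Declarative Pipelines bronze table.
    # Broker, topic, and schema below are hypothetical placeholders.
    import dlt
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    event_schema = StructType([
        StructField("device_id", StringType()),
        StructField("event_time", TimestampType()),
        StructField("payload", StringType()),
    ])

    @dlt.table(comment="Raw Kafka events landed as a streaming bronze table.")
    def bronze_events():
        # `spark` is provided by the Declarative Pipelines runtime.
        return (
            spark.readStream.format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
            .option("subscribe", "sensor-events")              # placeholder topic
            .load()
            .select(from_json(col("value").cast("string"), event_schema).alias("e"))
            .select("e.*")
        )

Run continuously rather than triggered, the same declarative definition behaves as a streaming job – which is the essence of the batch-to-streaming move described above.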


This is a greenfield build inside a global company – real budget, real data, real stakes. No startup chaos, but real room to make meaningful architectural decisions.

 

What you will be doing: 

You'll be the go-to Databricks expert on the team. You'll play an important role in the migration away from the legacy stack while designing and building the new platform in parallel – and “in parallel” is doing real work in that sentence.

 

The legacy platform runs. Not beautifully, but it runs – and it serves real business needs that can't wait for the migration to finish. You'll split your time between keeping it stable (and gradually less painful) and building its replacement. If the idea of legacy firefighting makes you want to close this tab, this probably isn't the right role. If you see it as part of the job and take quiet satisfaction in fixing things that are broken – read on.

 

You will work on:

  • Design & build the new streaming platform (Kafka → Databricks with Declarative Pipelines)

  • Migrate existing batch workflows from Airflow + Docker + on‑prem Databricks to a cloud‑native architecture

  • Keep the current platform stable while improving its reliability, performance and operability

  • Architect the serving layer

  • Govern data properly – Unity Catalog, lineage, access control, data quality – not as an afterthought (see the sketch after this list)

  • Enable data sharing across the organization with Polaris and Iceberg

  • Collaborate with data scientists, ML engineers, and business teams across regions

  • Use AI tools daily – we use GitHub Copilot and internal assistants/agents we build ourselves within the team; we expect you to help the team get real value from them
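
To make the governance bullet concrete, here is a minimal Unity Catalog access-control sketch; the catalog, schema, table, and group names are hypothetical, and it assumes a Unity Catalog-enabled Databricks workspace:

    # Hypothetical Unity Catalog grants; all names are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # ambient in a Databricks notebook

    spark.sql("GRANT USE CATALOG ON CATALOG analytics TO `data-consumers`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA analytics.sales TO `data-consumers`")
    spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data-consumers`")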

 

You’ll thrive here if you:

  • Know Databricks deeply – Unity Catalog, Delta Live Tables / Declarative Pipelines, Workflows, Bundles – not just “used it on a project”

  • Have streaming experience – Kafka, event-driven architectures, late data handling, exactly-once semantics (late-data handling is sketched after this list)

  • Have worked in consulting or client-facing roles – you can communicate with business stakeholders and stay focused on outcomes

  • Write production code – PySpark, Python, SQL, CI/CD (e.g., GitHub Actions), IaC (e.g., Terraform)

  • Are comfortable with imperfect systems – the legacy stack has rough edges; you’ll sand them down while building something better

  • Don’t need the work to be glamorous – some weeks it’s streaming architecture, some weeks it’s debugging a broken Airflow DAG

  • Are genuinely curious about AI tooling – Copilot/LLMs/agents are part of your workflow
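
“Late data handling” in practice usually means event-time watermarks; here is a minimal Structured Streaming sketch, using a synthetic rate source and made-up thresholds in place of the real Kafka feed:

    # Tolerate events up to 10 minutes late, then count them in 5-minute
    # event-time windows. Source and thresholds are illustrative only.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, window

    spark = SparkSession.builder.getOrCreate()

    events = (
        spark.readStream.format("rate")  # synthetic stream for the example
        .option("rowsPerSecond", 10)
        .load()
        .withColumnRenamed("timestamp", "event_time")
    )

    windowed_counts = (
        events
        .withWatermark("event_time", "10 minutes")  # state for older events is discarded
        .groupBy(window(col("event_time"), "5 minutes"))
        .count()
    )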

 

What you bring:  

  • 5+ years in data engineering, with 2-3+ years of hands-on Databricks experience

  • Streaming experience (Kafka or equivalent)

  • Consultancy or strong cross-functional stakeholder experience is a real differentiator

  • Comfortable owning end-to-end: design → build → monitor → improve

  • Degree in Computer Science/Engineering or equivalent real‑world experience

  • Proficiency in English at a minimum B2 level, both spoken and written – as an international company, we use English in our daily communication

 

Tech you’ll touch:

  • New stack:

    • Kafka

    • Databricks Declarative Pipelines

    • Unity Catalog

    • Apache Iceberg

    • Polaris Catalog

    • incremental materialized views

    • AI developer tooling

  • Legacy stack:

    • Apache Airflow

    • Docker

    • on‑prem compute/servers

    • existing Databricks jobs and batch ETL

 

What we offer:  

By joining our team, you become part of the people-centric work environment of a Danish company. We offer a competitive salary, a permanent contract after the probation period, a development package, team-building events, and an activity-based office in Poznań’s city center, in the prestigious new Nowy Rynek office building. The building is recognized as a building without barriers, which means it is fully adapted to the needs of people with disabilities.

Our compensation package on employment contracts includes:   

  • An office-first approach: home office is available up to 2 days per week

  • Adaptable hours: start your workday anytime between 7:00 AM and 9:00 AM

  • Home office subsidy

  • Private medical care

  • Multikafeteria MyBenefit

  • Wellbeing program

  • Extra day off for voluntary activities

… and while in the office you can also enjoy a modern office space with a beautiful view and high-standard furniture, bicycle parking facilities & showers, chill-out rooms with a PlayStation, football table, pool table and board games, and a subsidized canteen with delicious food & fruit.

 

Interested?  

If you recognize yourself in this profile and challenge, we kindly invite you to apply with a CV written in English. If you would like to include a short note on how you would approach a Databricks-based streaming migration, we would love to read it, too.

 

Who we are  

We are the world leader in stone wool solutions. Founded in 1937 in Denmark, we transform volcanic rock into safe, sustainable products that help people and communities thrive. We are a global company with more than 12,200 employees, located in 40+ countries with 51 manufacturing facilities… all focused on one common purpose – to release the natural power of stone to enrich modern living.    

Sustainability is central to our business strategy. ROCKWOOL was one of the first companies to commit to actively contributing to the United Nations Sustainable Development Goals (SDGs) framework and is actively committed to 11 SDGs, including SDG 14, Life Below Water. Through our partnership with the One Ocean Foundation and in connection with our sponsorship of the ROCKWOOL Denmark SailGP team, we will help raise awareness around ocean health challenges in an effort to accelerate solutions to protect it.

Tech stack

  • English: B2

  • Apache Kafka: advanced

  • Databricks: advanced

Published: 03.03.2026

About the company

ROCKWOOL GBS

In Poland ROCKWOOL made its debut in 1993 and in 2016 launched the ROCKWOOL GBS competence center in Poznań, and later in Warsaw. The office supports the ROCKWOOL Group globally in finance, IT, controlling, R&D, engineer...
