Data Engineer (Senior)
We are #VLteam – tech enthusiasts constantly striving for growth. The team is our foundation, which is why we care most about a friendly atmosphere, plenty of self-development opportunities, and good working conditions. Trust and autonomy are two essential qualities that drive our performance. We simply believe in the idea of “measuring outcomes, not hours”. Join us & see for yourself!
About the role
You will participate in defining the requirements and architecture for the new platform, implement the solution, and remain involved in its operations and maintenance post-launch. Your work will introduce data governance and management, laying the foundation for accurate and comprehensive reporting that was previously impossible. You will adhere to and actively promote engineering best practices, data governance standards, and the use of open standards. You will build data ingestion and processing pipelines and collaborate with stakeholders to define requirements, develop data pipelines, and establish data quality metrics.
Data Foundation & AI Enablement
Project Scope
We are architecting a modern Data Platform for a fast-scaling client in the Insurance sector. Our work consolidates fragmented legacy systems, organises data from a vast number of sources, and establishes a standardised, governed, and future-proof data foundation.
We aim to unlock the full value of the company’s data, enabling faster, informed decision-making and providing the backbone for business growth and AI readiness.
Tech stack
SQL, Python, Snowflake, dbt, Data modelling, Data quality, Power BI, Azure, Terraform
Challenges
The primary objective is to deliver a robust data foundation and enable AI capabilities for a client that has grown organically. The work focuses on several key areas:
Establishing a production-ready, fully operational Snowflake environment and driving operational excellence.
Translating complex business logic into accurate data models to ensure the platform truly reflects business reality.
Integrating diverse data sources to build reliable data products and comprehensive data dictionaries.
Managing the full Data Engineering and Data Science lifecycle to support production ML and AI experimentation.
Taking ownership from concept to deployment.
Cultivating an engineering mindset by promoting automation, CI/CD, and rigorous standards.
Team
We are building a small (4-6 people), agile, cross-functional team capable of delivering the complete data platform, from initial architecture to production operations.
Roles involved: DevOps, Data Engineer, Snowflake Specialist, MLOps/AI Engineer, Business Analyst (BA). The team will collaborate closely with business stakeholders to ensure effective knowledge transfer and strict alignment with strategic goals.
What we expect in general
Hands-on experience with Python
Proven experience with data warehouse solutions (e.g., BigQuery, Redshift, Snowflake)
Experience with Databricks or data lakehouse platforms
Strong background in data modelling, data catalogue concepts, data formats, and data pipelines/ETL design, implementation and maintenance
Ability to thrive in an Agile environment, collaborating with team members to solve complex problems with transparency
Experience with AWS/GCP/Azure cloud services, including: GCS/S3/ABS, EMR/Dataproc, MWAA/Composer or Microsoft Fabric, ADF/AWS Glue
Experience working in ecosystems that require improvement, and the drive to implement best practices as a long-term process
Experience with Infrastructure as Code practices, particularly Terraform, is an advantage
Proactive approach
Familiarity with Spark is a plus
Familiarity with Streaming tools is a plus
Don’t worry if you don’t meet all the requirements. What matters most is your passion and willingness to develop. Apply and find out!
A few perks of being with us
Building tech community
Flexible hybrid work model
Home office reimbursement
Language lessons
MyBenefit points
Private healthcare
Training Package
Virtusity / in-house training
And a lot more!