Data Engineer
Join the client's team in the insurance industry and take ownership of designing scalable data solutions that power analytical, reporting, and Generative AI use cases across the organization.
Key Responsibilities
Data Hub Architecture – Design and implement efficient, scalable Data Hubs integrating multiple data sources.
Technical Leadership – Define best practices, patterns, and standards, ensuring consistency across implementations.
Mentoring & Strategy – Guide engineers, review code, and provide technical direction.
Data Pipelines – Develop batch and real-time data processing workflows.
Quality & Monitoring – Implement data validation, anomaly detection, and automated monitoring, and act on issues as they surface.
Automation & CI/CD – Enable seamless deployment and infrastructure automation.
Security & Compliance – Ensure data governance, security, and regulatory compliance.
Documentation – Maintain clear, structured technical documentation.
Required Skills and Experience
Python, SQL – Strong data engineering and automation expertise.
Databricks & Spark – Deep knowledge of Databricks (primary tool) and Apache Spark.
Azure Data Stack – Experience with Data Factory, ADLS, Synapse, Azure SQL, Event Hubs.
Real-Time Processing – Expertise in streaming data and real-time analytics.
Automation & DevOps – Proficiency in CI/CD, Terraform, Kubernetes/AKS, Docker.
Leadership & Mentoring – Ability to set direction, mentor engineers, and drive best practices.
Documentation & Communication – Clear, structured technical writing and knowledge sharing.
Language Skills – English proficiency, minimum B2 level.
Technology Stack
Primary Tools – Databricks, Apache Spark, Delta Lake.
Azure Services – Data Factory, ADLS, Azure SQL, Synapse, Azure DevOps, Event Hubs.
Development & Automation – Python, SQL, GitHub, Terraform, Docker, Kubernetes/AKS.