As part of a key initiative to expand our Supply Chain Analytics team, we are seeking a Data Engineer to lead and support data ingestion, preparation, processing, and delivery using Microsoft Azure services.
Your responsibilities:
- Design, optimize, and maintain large-scale data pipelines (ETL/ELT) to enable business intelligence and statistical modeling.
- Ensure data quality and integrity through continuous monitoring and validation.
- Set up and manage data storage solutions (e.g., SQL databases, data lakes).
- Oversee and support production workflows that connect critical components of the data infrastructure.
- Write and maintain secure, scalable, efficient, and reliable code to transform business requirements into functional solutions.
- Promote best engineering practices, including automation, CI/CD processes, and code maintainability.
- Drive standardization and automation efforts by adhering to shared development guidelines.
- Collaborate with BI analysts, data scientists, machine learning engineers, and core IT teams in cross-functional projects to deliver data-driven solutions.
What we’re looking for:
- Strong proficiency in Python, SQL, MDX, and Bash scripting.
- 2+ years of hands-on experience with Azure data tools (e.g., Data Factory, Synapse, Data Lake, Blob Storage, Databricks) and related Microsoft services such as SharePoint.
- Practical experience with data workflow orchestration tools (ADF, Databricks Jobs, or Airflow).
- Solid understanding of version control systems and CI/CD pipelines using Azure DevOps or similar tools.
- Experience working with Snowflake in an Azure cloud environment.
- Familiarity with data governance concepts (e.g., data cataloging, metadata management, data lineage, master data, security, and compliance).
- Professional-level English proficiency.