
Senior Azure Data Engineer
Category: Data

4 970 - 6 903 USD/month gross - Permanent

Type of work: Full-time
Experience: Senior
Employment Type: Permanent
Operating mode: Remote

Link Group

Hundreds of IT opportunities are waiting for you—let’s make it happen! Since 2016, our team of tech enthusiasts has been building exceptional IT teams for Fortune 500 companies and startups worldwide. Join impactful projects in BFSI, CPG, Industrial, and Life Sciences & Healthcare industries. Work with cutting-edge technologies like Cloud, Business Intelligence, Data, and SAP. Unlock your potential, grow your skills, and collaborate with top global clients. Ready for your next big career move? Let’s link with us!

Tech stack

    English: B2
    Azure Databricks: regular
    Spark: regular
    PySpark: regular
    Delta Lake: regular
    Data modeling: regular
    Azure Data Factory: regular
    Python: regular
    SQL: regular

Job description

Online interview

Data Engineer – Azure


📍 100% remote | 🕒 Full-time | 🌍 International environment

We are looking for an experienced Data Engineer with strong expertise in the Azure ecosystem to join a dynamic data team delivering scalable and high-performance data solutions. You’ll play a key role in designing, building, and optimizing modern data pipelines and data lake architectures using cutting-edge cloud technologies.


🔧 Key Responsibilities:

  • Design and develop robust and efficient data pipelines using Azure Databricks, Spark, and PySpark (a minimal sketch follows this list)
  • Work with Delta Lake architecture to manage structured and semi-structured data
  • Perform data modeling, transformation, and performance tuning for large datasets
  • Build and manage Azure Data Factory pipelines and Azure Functions for orchestrating workflows
  • Integrate various data formats such as Parquet, Avro, and JSON
  • Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions
  • Use Git for version control and manage code in a collaborative environment
  • Write efficient Python and SQL code for data processing and querying
  • Ensure data quality, consistency, and reliability across the platform
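
To give a concrete flavour of the work described above, here is a minimal PySpark sketch of such a pipeline on Databricks: it reads raw JSON from a landing zone, applies light cleansing, and writes the result as a partitioned Delta table. The storage paths, column names, and `<storage-account>` placeholder are hypothetical and are not part of this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative sketch only: paths, columns, and names below are placeholders.
spark = (
    SparkSession.builder
    .appName("events-bronze-ingest")
    .getOrCreate()
)

# Read semi-structured JSON landed by an upstream process
# (for example, copied in by an Azure Data Factory pipeline).
raw = spark.read.json(
    "abfss://landing@<storage-account>.dfs.core.windows.net/events/"
)

# Light cleansing and typing before persisting.
events = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["event_id"])
)

# Persist as a partitioned Delta table for downstream modeling
# (the delta format is available by default on Databricks clusters).
(
    events.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("abfss://bronze@<storage-account>.dfs.core.windows.net/events/")
)
```

In practice, a pipeline like this would typically run as a scheduled Databricks job and be orchestrated from Azure Data Factory.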


✅ Core Requirements:

  • Solid hands-on experience in Azure Databricks, Spark, and PySpark
  • Deep knowledge of Delta Lake and modern data lakehouse architectures
  • Proficiency in data modeling and performance optimization techniques
  • Experience with ADF (Azure Data Factory) and Azure Functions
  • Strong skills in Python, SQL, and data serialization formats (Parquet, Avro, JSON), as illustrated in the sketch after this list
  • Familiarity with version control systems, especially Git
  • Ability to work independently in a fully remote, distributed team
  • Good communication skills in English
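
As a rough illustration of the Delta Lake, PySpark, and SQL skills listed above, the sketch below upserts a staged batch into an existing Delta dimension table with a MERGE. The `dim_customer` table, the `customer_id` key, and the paths are assumptions made for the example, not details from the role.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Hypothetical example: table locations and the customer_id key are placeholders.
spark = SparkSession.builder.appName("dim-customer-upsert").getOrCreate()

# Staged updates produced by an earlier pipeline step, stored as Parquet.
updates = spark.read.format("parquet").load(
    "abfss://staging@<storage-account>.dfs.core.windows.net/customers/"
)

# Existing Delta table holding the customer dimension.
target = DeltaTable.forPath(
    spark, "abfss://silver@<storage-account>.dfs.core.windows.net/dim_customer/"
)

# SQL-style merge condition: update matched rows, insert new ones.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE keeps the load idempotent: re-running the same batch updates matched rows instead of duplicating them.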

