Azure Backend/Data Engineer (Python, FastAPI, Databricks)
Al. Jerozolimskie 134, Warszawa
Craftware
Role Description
We are looking for an experienced developer with strong skills in Python and FastAPI, solid knowledge of Azure services, and hands-on experience with Databricks (PySpark). This role requires a versatile engineer who can work across backend development, cloud architecture, data integration, and, optionally, AI/ML deployment.
Key Responsibilities
Build and maintain APIs and microservices (FastAPI/Django/Flask) supporting data and AI workflows
Design and implement scalable solutions on Azure (Apps, Containers, Storage, SQL)
Work with Databricks (PySpark, Delta Lake, Delta Live Tables) to process and integrate data
Implement Git-based workflows, testing, and CI/CD automation (GitHub Actions/Azure DevOps)
Apply DevOps-first practices with automation and deployment using Databricks Asset Bundles (DAB)
Write clean, well-tested code and uphold high engineering standards
Set up monitoring, logging, and alerting (Azure Monitor, Log Analytics, Cost Management)
Contribute to solution architecture and propose improvements
Collaborate with Data Scientists to deploy and maintain AI models in production (MLOps experience is a plus, not a must)
Qualifications & Competencies
Solid knowledge of the Databricks ecosystem: architecture, Delta Lake, and Delta Live Tables
Strong Python skills (OOP, testing, clean code) with experience in advanced data processing (preferably PySpark)
Hands-on experience with API development and integration using FastAPI (or Flask/Django)
Practical experience with Azure services: Apps, Containers, Storage, SQL
Familiarity with DevOps practices: automation-first mindset, CI/CD pipelines, and deployment automation (DAB)
Experience with Git, agile teams, and working in Scrum-based environments
Knowledge of Docker and Terraform/ARM templates is a plus
Strong problem-solving skills, ownership mindset, and ability to collaborate across teams
Nice to Have
Basic understanding of Generative AI and Agentic AI use cases
Experience in debugging and optimizing Spark jobs (Photon, Catalyst, query plans)
Experience with ML model deployment and GenAI tooling (LangChain, vector databases, ML pipelines)