We are a team of experts, bringing together the best talents in IT and analytics. Our mission is to deliver solutions through our flagship service, which includes building tech teams from scratch and growing existing units, all tailored to help our partners become truly data-driven organizations.
We are currently looking for a Senior Azure MLOps Engineer to support our partner in developing a Global Analytics unit: a global, centralized team with the ambition to strengthen data-driven decision-making and develop smart data products for day-to-day operations.
The team's essence is an innovative spirit that permeates the company, nurturing a data-first approach in every facet of the business. From sales and logistics to marketing and purchasing, our smart data products have been pivotal to rapid growth and operational excellence. As the team expands its analytics solutions on a global scale, we are on the lookout for an experienced Senior Azure MLOps Engineer.
The Global Analytics team is a diverse collective of Data Scientists, Data Engineers, Business Intelligence Specialists, and Analytics Translators, with footprints across three continents and five countries. Its ethos revolves around fostering collaboration, driving innovation, and ensuring reliability. Together, we're committed to transforming the whole organization into a leader in data-driven decision-making, leveraging global diversity to tackle challenges and create value.
The team has a lot of freedom to shape its work, especially in the choice of tools and technology, but also by introducing new concepts, solutions and ways of working.
If you want to:
- take part in the development and implementation of a complex system of smart data solutions
- have the opportunity to work on bleeding-edge projects
- have a chance to see how your visions come true
- be a member of an international and diverse team of ground-breaking data scientists, data engineers, BI developers, UX designers and analytics translators
- carry out projects which address real business challenges
- have a real impact on the projects you work on and the environment you work in
- have a chance to propose innovative solutions and initiatives
- have the opportunity and tools to grow, develop and drive your career forward,
it’s probably a good match.
Moreover, if you like:
- flexible working hours
- casual working environment and no corporate bureaucracy
- having access to benefits such as Multisport and private medical care
- working in a modern office in the centre of Warsaw with good transport links, or working remotely as much as you want
- a relaxed atmosphere at work where your passions and commitment are appreciated
- vast opportunities for self-development (e.g. online courses and library, experience exchange with colleagues around the world, partial grant of certification),
it’s certainly a good match!
If you join us, your responsibilities will include:
- collaborate with Data Scientists to understand business challenges and design smart data products
- develop and enhance MLOps frameworks for Data Science projects, ensuring adherence to best practices
- design and implement Azure-based cloud infrastructure to support the development and deployment of AI-driven applications
- build and optimize Azure cloud-hosted, automated pipelines for running, monitoring, and retraining data science models in business applications
- support Data Engineers in automating data ingestion and processing workflows
- create and maintain workflows for training, testing, and deploying data science models into production environments in close collaboration with Data Scientists and Data Engineers
- manage the life cycle of deployed ML applications, including new releases, change management, monitoring, and troubleshooting
- implement frameworks for measuring and optimizing the quality of deployed solutions
We expect:
- advanced degree in Computer Science, Statistics, or a related STEM field
- minimum of 5 years of experience in ML engineering or engineering support for data science in an industrial or commercial setting
- proven experience with Azure cloud services and infrastructure
- strong background in operationalizing Data Science projects using Azure
- experience in deploying and managing ML models in production environments
- advanced proficiency in Python, with nice-to-have experience in Django and FastAPI
- expertise in DevOps tools and methodologies, such as Docker, Kubernetes, and CI/CD pipelines
- experience with Azure DevOps, including CI/CD pipelines and releases, with test plans as a nice-to-have
- proficiency in Git, with a focus on Trunk-based development; other Git branching strategies are a nice-to-have
- solid understanding of RDBMS, with PostgreSQL as a nice-to-have
- experience with Infrastructure as Code (IaC), specifically Bicep
- strong skills in Bash scripting
- familiarity with Azure Machine Learning, with Python SDK v2 as a nice-to-have
- experience with Azure App Service, Azure Container Apps, Azure Data Lake Storage, and Azure Functions
- strong software engineering skills, including unit testing and object-oriented programming (OOP)
- ability to develop and enhance CI/CD pipelines for continuous integration and deployment during both development and production phases
- solid understanding of machine learning algorithms and AI concepts, with hands-on experience in ML model development
- experience in building and optimizing data pipelines, architectures, and datasets
- ability to write high-quality, well-documented, and thoroughly tested code
- experience in production deployment and the ability to design and deliver products with a modular approach
- experience working in start-up environments or organizations with an agile culture, particularly in cross-functional teams
- professional attitude and a strong service orientation
- team player with the ability to work autonomously on complex projects
- fluency in English, as it will be the primary language of communication
Nice to have:
- experience with Pandas for data manipulation
- familiarity with Spark and Databricks for big data processing
- experience with DBT (Data Build Tool) for data transformations
If interested, please let us get to know you by sending your CV using the "Apply" button.
Please add to your CV the following clause:
"I hereby agree to the processing of my personal data included in my job offer by hubQuest spółka z ograniczoną odpowiedzialnością located in Warsaw for the purpose of the current recruitment process."
If you want to be considered in the future recruitment processes please add the following statement:
"I also agree to the processing of my personal data for the purpose of future recruitment processes."