Start Date: January/February 2025
Co-op Duration: Long-term, minimum 1 year.
Seniority Level: Strong Mid/Senior
Recruitment Process:
- Round 1: 60 minutes of CV review and technical questions, including theory and live coding (with screen sharing).
- Round 2: If needed, an additional 30-minute round.
Preferred Location: Wrocław
Remote work is possible, provided you are available to travel to Wrocław several times a year.
Responsibilities:
- Designing and implementing data pipelines that ingest data from various data sources
- Delivering analytical solutions together with data scientists and analysts
- Developing scalable data architectures (machine learning) on the MS Azure platform
- Converting data into insights
- Maintaining and documenting delivered solutions
Requirements:
- Deep knowledge of data engineering principles and hands-on experience developing advanced analytics data pipelines
- Knowledge of SQL and excellent coding skills in Python / Scala
- Experience working with big data technologies (Hadoop / Hive / Spark)
- Understanding of data modelling, data security best practices and data provisioning concepts
- BS/MS degree in Computer Science or a related field
Technologies we use - must have:
Azure Databricks & Unity Catalog / Azure Data Factory / IoT Hub / Azure DevOps / Azure Storage / Azure Stream Analytics / GitHub / Azure Key Vault / Azure Functions / SQL Server / Azure cybersecurity and networking configurations
Technologies we use - nice to have:
Azure Machine Learning / Azure AI Foundry / Power BI / Cosmos DB / Purview / Entra ID / MS Fabric
What we offer:
- Deep domain knowledge in the field of molecular biology
- Exposure and hands-on experience in the development of cutting-edge cloud data & AI solutions
- The chance to deliver proof-of-concept solutions and experiment with different technologies
- Implementing solutions that create real value for the company and have an impact on the industry
- Opportunity to work with experienced data engineers, data scientists and domain experts