Your main responsibilities will involve:
- Designing, developing, and maintaining the Data Layer for our new solution.
- Sourcing data from legacy solutions using cloud-based architecture.
- Utilizing SQL Server, Azure Functions, and Azure Web Apps in the solution's architecture.
- Developing robust and efficient code in Python, with extensive use of PySpark and other relevant frameworks.
The ideal candidate:
- Has hands-on experience with Kafka and SQL.
- Is familiar with Azure.
- Is proficient in Python, with extensive experience in PySpark.
- Has solid experience in data modeling and implementing Data Layers for solutions.
- Is an innovative thinker who understands the transformative role of IT in our organization.
- Thrives in a fast-paced environment and enjoys learning new technologies.
- Can communicate complex IT-related issues in an understandable manner.
- Is proficient in written and spoken English.
It is an advantage if you have experience with:
- Databricks and Delta tables
- Building data-intensive applications
- DevOps, Git, and Agile practices (continuous integration / continuous delivery)