About the project:
In your role as a Cloud Engineer you will be working on the Big Data Cluster.
On a daily basis you will develop database-based products that run on-premise or in the cloud. Your responsibilities will span the full software development lifecycle, including analysis, architecture, and testing, as well as supporting production issues during normal working hours.
The Big Data cluster is the enabler for data scientists and provides a huge collection of data and a data science workbench in one place.
- Provide BI technology within the data lake infrastructure
- Establish a stable, state-of-the-art technology base with on-prem and cloud solutions
- Set up the data lake as a single data and analytics hub and efficiently ingest the most important data sources
- Establish data quality and metadata management
- Provide data marts and sandboxes for segments and functions, combining the most important data sources
What will you be doing? 👇
- Developing data models, designing and implementing ETL processes on cloud platforms (in particular Google Cloud), and ensuring data quality and integrity throughout the data lifecycle.
- Implementing new features to enable new business cases
- Focusing on stability, performance tuning and innovation of the applications
- Keeping system documentation accurate and up to date
- Actively contributing to knowledge sharing and to a learning culture
- Working on international projects using agile methodologies
Which technologies & skills are important to us? 👌
- Cloud Data Development:
- Design and develop cloud-based solutions using Google Cloud Platform
- Optimize data infrastructure for performance, scalability and reliability
- Infrastructure as Code (IaC):
- Utilize Terraform to create and manage cloud resources efficiently
- Implement CI/CD pipelines for automated deployment and continuous integration
- Microservices and API:
- Work with microservices architecture and design APIs for seamless data integration
- Scripting and Testing:
- Proficiency in Python and SQL for ETL processes, schema evolution, pipeline development and performance tuning
- Develop, test, and deploy solutions using Dataproc, Dataflow, and Cloud Functions
How?
📌 Hybrid at Wersalska 6 Street, Łódź (two days a week in the office)
Below you can find more information about Commerzbank
Commerzbank is a leading international commercial bank with branches and offices in almost 50 countries. The world is changing and becoming digital, and so are we. We are leaving the traditional bank behind us and choosing to move forward as a digital enterprise. This is exactly why we need talented people who will join us on this journey. We work in cross-location, international teams using agile methodologies.
