We are looking for a skilled Data Engineer (mid or senior level) to help design and build a GCP-native data processing framework. The role involves close collaboration with the data engineering team to create scalable, high-performance solutions.

Responsibilities:
- Design and implement a new data processing framework using GCP-native services
- Ensure data quality, integrity, and availability across systems
- Develop technical documentation and best practices
Requirements:
- 5+ years of experience in data engineering, with solid experience in data pipeline architecture
- Strong background in GCP and big data tools such as PySpark, Databricks, and BigQuery
- Hands-on experience with orchestration tools such as Airflow or GCP-native alternatives (e.g., Cloud Composer)
- Familiarity with data lake table formats such as Delta Lake and Apache Iceberg
- Proficiency in Python
- Solid understanding of data lake design and optimization
- Strong analytical and problem-solving abilities
- Clear and effective communication skills
- Collaborative mindset with the ability to work in distributed teams