A leading platform for transforming capital projects and operations, with a focus on the energy transition, backed by 30 years of software expertise and 180 years of industrial heritage.
Responsibilities:
- Design, build, and maintain data pipelines using Python.
- Collaborate with an international team to develop scalable data solutions.
- Troubleshoot and debug system issues (Tier 2).
- Create and maintain documentation, checklists, and workflows.
- Set up and configure new tenants to ensure smooth onboarding.
- Write integration tests to ensure data quality.
- Manage code in GitLab and process data in Databricks.
Requirements:
- 3-4 years of experience as a data engineer or in a similar role.
- Advanced Python skills for building production-grade data pipelines and fixing bugs.
- Familiarity with cloud platforms (preferably Azure).
- Experience with Databricks, Snowflake, or similar platforms.
- Strong knowledge of SQL and relational databases.
- Experience with Apache Spark and testing frameworks.
- Proficiency with GitLab and experience writing and maintaining documentation.
- B2-level English proficiency.
- Strong collaboration skills and experience working in international teams.
Nice to Have:
- Experience with Docker and Kubernetes.
- Familiarity with document/graph databases.
- Willingness to travel for on-site workshops.