Description:
We are looking for a skilled Big Data Engineer to join an exciting data-driven project. You will build robust, scalable data pipelines and deliver high-performance data solutions for enterprise customers in a Google Cloud environment, using tools such as Apache Spark, Apache Flink, Apache Airflow, and Dremio.
Responsibilities:
Design and implement scalable data pipelines
Analyze and develop complex features with minimal supervision
Model data structures to support business requirements
Write efficient and maintainable code in Java or Python
Implement batch and stream processing using Spark and Flink
Automate data workflows using Airflow and CI/CD tools (e.g., Jenkins)
Collaborate with stakeholders and perform functional analysis
Optimize system performance and data reliability
Contribute to documentation and system design
Work within agile methodologies using tools such as Jira and Bitbucket
Minimum requirements:
Bachelor’s degree in Computer Science, Computer Engineering, or a similar field
2–3 years of experience as a Data Engineer
Strong knowledge of SQL, Apache Spark, Airflow, and PostgreSQL
Hands-on experience with streaming tools (preferably Flink)
Proficiency in Java or Python
Experience with CI/CD (e.g., Jenkins), version control with Git (Bitbucket), and branching strategies
Knowledge of at least one major cloud provider (preferably Google Cloud)
Understanding of functional requirements and ability to work independently
Familiarity with data warehousing concepts
Strong analytical and communication skills
Team player attitude
Would be a plus:
Experience with Data Mesh architecture
Familiarity with Dremio