Senior Data Engineer
We are looking for a seasoned Senior Data Engineer to join a dynamic team building a platform that enables automated decision-making and intelligent automation across the entire business lifecycle. This role offers a unique opportunity to help shape a transformative solution from the ground up. We're seeking individuals who are mission-driven, proactive, and passionate about data engineering and innovation.
Key Responsibilities
- Partner with business stakeholders and product owners to gather data requirements and design scalable technical solutions.
- Build and maintain robust data models and schema structures to support analytics, business intelligence, and machine learning initiatives.
- Optimize data processing pipelines for speed, scalability, and cost-efficiency across cloud and on-premise environments.
- Ensure high data quality and consistency through validation frameworks, automated monitoring, and comprehensive error-handling processes.
- Collaborate closely with data analysts and data scientists to deliver reliable, well-structured, and easily accessible datasets.
- Stay informed of emerging trends, tools, and best practices in data engineering to help drive innovation and technical excellence.
- Maintain operational stability and system performance across data pipelines and platforms.
- Provide Level 3 production support when necessary, resolving critical data-related issues swiftly and effectively.
Required Experience and Skills
- 8+ years of experience in data engineering, data architecture, or a similar technical role.
- Strong programming skills in SQL, Python, Java, or equivalent languages for data processing and pipeline development.
- Experience with both relational (e.g., PostgreSQL, SQL Server, Oracle) and NoSQL (e.g., MongoDB) databases, OLAP databases such as ClickHouse, and vector databases (e.g., pgvector, FAISS, Chroma).
- Expertise in distributed data processing frameworks such as Apache Spark, Flink, or Storm. Experience with cloud data solutions (e.g., Azure, Amazon Redshift, BigQuery, Snowflake) is highly desirable.
- Solid understanding of ETL/ELT pipelines, data transformation, and metadata management using tools such as Airflow, Kafka, NiFi, Airbyte, and Informatica.
- Proficiency in query performance tuning, profiling, and data pipeline optimization.
- Hands-on experience with data visualization platforms like Power BI, Tableau, Looker, or Apache Superset.
- Familiarity with DevOps principles, version control systems (Git), and CI/CD pipelines.
- Strong problem-solving skills, attention to detail, and the ability to work under pressure.
- Effective communicator with the ability to collaborate across multidisciplinary teams.