Data Engineer
Responsibilities:
Develop, maintain, and optimize ETL pipelines using Python and SQL.
Process large datasets with Apache Spark, typically via PySpark (a brief illustrative example follows this list).
Work with Databricks for data engineering tasks.
Use at least one public cloud platform (AWS, Azure, or GCP) to build and run data solutions.
Collaborate with teams to design and implement data models.
Ensure data quality, monitor pipelines, and troubleshoot issues.
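For illustration only, here is a minimal sketch of the kind of PySpark ETL step this role involves; the file paths, table names, and column names are hypothetical placeholders, not part of any actual pipeline:

    # Minimal PySpark ETL sketch: extract raw CSV, clean it, load as Parquet.
    # All paths and column names below are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl_example").getOrCreate()

    # Extract: read raw data (placeholder path).
    raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

    # Transform: drop rows missing the key and parse the timestamp column.
    cleaned = (
        raw.dropna(subset=["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
    )

    # Load: write the curated dataset as Parquet (placeholder path).
    cleaned.write.mode("overwrite").parquet("/data/curated/orders")

    spark.stop()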
Requirements:
2 years of experience as a Data Engineer or in a similar role.
Proficiency in Python and SQL.
Hands-on experience with Apache Spark, ideally via PySpark.
Experience with Databricks.
Experience with at least one public cloud provider (AWS, Azure, or GCP).
Understanding of ETL, data modeling, and data warehousing concepts.
Ability to work independently and in a team environment.