Data Engineering (AWS / Python / PySpark / Databricks)
Role Description:
- Hands-on experience with AWS data engineering, Databricks, Python, and PySpark, building data pipelines and transformations for both batch and streaming data (minimal sketches follow this list)
- Strong understanding of data pipelines in Apache Spark, Apache Airflow, and similar orchestrators
- Mid-level understanding of Databricks or other modern Big Data platforms and of what they are built from (i.e. data catalogs, compute engines, SQL engines, observability layers, etc.)
- Strong focus on delivering high-quality data pipelines from a data-product delivery perspective
- Practical experience with data streams at petabyte scale
- Expertise in technical integrations and different data architectures
- Cloud PaaS environment: AWS, Databricks
- Proficient with the following AWS tools: EMR, EC2, Airflow, AWS Lambda, AWS Step Functions, SQS, CloudWatch, Powertools
- Good to have: Scala and Kafka knowledge
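
As an illustration of the core skill above, here is a minimal PySpark sketch of one transformation run both as a batch job and as a stream. The bucket paths, the orders schema, and the app name are hypothetical placeholders, and the availableNow trigger assumes Spark 3.3+:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

    # Hypothetical schema for raw order events landing as JSON in S3.
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("customer_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("order_ts", StringType()),
    ])

    # Batch: read raw JSON, normalize timestamps, deduplicate, write partitioned Parquet.
    batch_df = (
        spark.read.schema(schema).json("s3://example-raw-bucket/orders/")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )
    (
        batch_df.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/")
    )

    # Streaming: the same transformation over a continuously arriving source.
    stream_df = (
        spark.readStream.schema(schema).json("s3://example-raw-bucket/orders_stream/")
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
    )
    query = (
        stream_df.writeStream.format("parquet")
        .option("path", "s3://example-curated-bucket/orders_stream/")
        .option("checkpointLocation", "s3://example-curated-bucket/_checkpoints/orders/")
        .trigger(availableNow=True)  # drain the current backlog, then stop
        .start()
    )
    query.awaitTermination()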
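
A correspondingly minimal Apache Airflow sketch for orchestrating the batch half of the pipeline (assuming Airflow 2.4+ for the schedule parameter; the DAG id, schedule, and script path are likewise hypothetical):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="orders_batch_pipeline",   # hypothetical DAG id
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                # one batch run per day
        catchup=False,
    ) as dag:
        run_spark_job = BashOperator(
            task_id="run_spark_job",
            # Submit the batch script from the previous sketch; the path is a placeholder.
            bash_command="spark-submit /opt/pipelines/orders_batch.py",
        )

In practice the BashOperator could be swapped for an EMR or Databricks operator from the relevant Airflow provider package, depending on where the job actually runs.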
Competencies:
Digital: Python, Digital: Amazon Web Services (AWS) Cloud Computing, Digital: Databricks, Digital: PySpark
Experience (Years):
6-8
Country:
Poland
Branch | City | Location: Warsaw, Poland
Keywords:
Data Engineering, AWS, Python, PySpark, Databricks