Data Engineer
Revolut
Kraków
Type of work: Undetermined
Experience: Mid
Employment Type: Permanent
Operating mode: Office

Company profile

People deserve more from their money. More visibility, more control, more freedom. And since 2015, Revolut has been on a mission to deliver just that. With an arsenal of awesome products that span spending, saving, travel, transfers, investing, exchanging and more, our super app has helped 35+ million customers get more from their money. And we're not done yet. As we continue our lightning-fast growth, two things are essential to continuing our success: our people and our culture. We've been officially certified as a Great Place to Work™ in recognition of our outstanding employee experience!

Tech stack

    Python: master
    SQL: master
    Docker: advanced
    Kubernetes: advanced
    Django: regular
    Flask: regular
    Google Cloud Platform: nice to have
    Bash: nice to have

Job description

ABOUT THE TEAM
Data sits at the heart of Revolut and plays a uniquely crucial role in what we do. With data, we build intelligent real-time systems to personalise our product, tackle financial crime, automate reporting, track team performance, and enhance customer experiences.
Fundamentally, data underpins all operations at Revolut, and being part of the team gives you the chance to have a major impact across the company. Apply today to join our world-class data department.

ABOUT THE ROLE
We are looking for Data Engineers and Python Engineers (https://www.revolut.com/en-PL/careers/department/data#data-engineer-a1d2f627-b44e-429b-a761-183a083ed011) for our Core Data Infrastructure team: engineers who can push our teams to new heights and who have a combination of laziness, an unwillingness to write overcomplicated code, and a pathological desire to automate everything.

What you'll be doing:
-Enforcing consistent quality by incorporating tests and performing code reviews with data scientists and data engineers
-Exploring and experimenting with new tools, libraries, and technologies to improve our solutions
-Supporting and training new and existing users of the platform
-Taking ownership of parts of the automation and abstraction framework that handles recurring ETL tasks, ensuring monitoring, reliability, and scaling of data in both volume and variety (see the sketch after this list)
-Creating and maintaining a company-wide repository of metadata and related artefacts
-Collaborating with product owners, engineers, and data scientists to implement a seamless data platform
-Coming up with and enforcing best practices for everything: coding, testing, deployment, etc.
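The posting doesn't describe the in-house framework, but as a rough illustration, a recurring-ETL abstraction along these lines is common in Python. Every name below (EtlTask, DailySignupsTask) is hypothetical, not Revolut's actual code:

    import logging
    import time
    from abc import ABC, abstractmethod
    from typing import Any, Iterable

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("etl")

    class EtlTask(ABC):
        """Base class for a recurring ETL task: subclasses fill in the three
        stages; the framework owns logging, timing, and failure reporting."""

        name: str = "unnamed-task"

        @abstractmethod
        def extract(self) -> Iterable[Any]: ...

        @abstractmethod
        def transform(self, rows: Iterable[Any]) -> Iterable[Any]: ...

        @abstractmethod
        def load(self, rows: Iterable[Any]) -> int: ...

        def run(self) -> None:
            start = time.monotonic()
            try:
                loaded = self.load(self.transform(self.extract()))
                log.info("%s: loaded %d rows in %.1fs",
                         self.name, loaded, time.monotonic() - start)
            except Exception:
                # A production framework would emit metrics and page on-call here.
                log.exception("%s: failed", self.name)
                raise

    class DailySignupsTask(EtlTask):
        """Toy task: ship yesterday's signups to a warehouse (stubbed out)."""

        name = "daily-signups"

        def extract(self) -> Iterable[Any]:
            return [{"user_id": 1}, {"user_id": 2}]  # stand-in for a DB query

        def transform(self, rows: Iterable[Any]) -> Iterable[Any]:
            return [r["user_id"] for r in rows]

        def load(self, rows: Iterable[Any]) -> int:
            rows = list(rows)
            log.info("would write %s to the warehouse", rows)
            return len(rows)

    if __name__ == "__main__":
        DailySignupsTask().run()

Structuring tasks this way also serves the testing bullet above: each stage can be unit-tested in isolation, e.g. a pytest case asserting that list(DailySignupsTask().transform([{"user_id": 7}])) == [7].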

WHAT SKILLS YOU’LL NEED
-Fluency in SQL, Python, and Unix/Bash scripting
-Ability to write easily understandable and maintainable code in multiple programming languages

Databases:
- SQL {Redshift, Vertica, Exasol, PostgreSQL, MySQL, BigQuery}
- NoSQL {DataStore, CouchDB, Redis}
- An understanding of their strengths and weaknesses

Big data:
- Experience using, configuring, and tweaking one of Kafka, Spark, Flink, etc.
Productionizing:
- Docker, K8s, Ansible/Puppet, TeamCity/Jenkins, monitoring and alerting
Versioning:
- Git, Jira, or similar

Desired:  
- Interest in data analysis/data visualisation (D3 is a plus)
- Experience with prototyping and sketching
- Side projects or open source contributions
- Cloud: GCP
- Java, JavaScript, Go, etc.

DOMAIN KNOWLEDGE
If you have expertise in one of the following, awesome!
- Fraud Detection
- AML/CTF Risk
- Monetization
- Engagement and User Activation
- Credit and Operational Risk Modelling
- Productionizing models
- Code generation