If you are an experienced DevOps Engineer with an interest in Big Data and a passion for Linux, this opportunity is for you.
You will have a chance to work with a truly Big Data company, processing 1.2 billion data inputs a day. You will work with distributed databases, Hadoop and Spark components, clustered services, and more, and you will have a real say in which technologies you use.
You have:
- Proficiency in at least two scripting languages: Bash, PowerShell, Python, Perl, etc.
- Proficiency in Linux; Windows administration is a plus.
- Knowledge of orchestration software.
- Knowledge of highly available services and monitoring of system components.
- Knowledge of the network stack and storage devices.
- Knowledge of at least two of the following: clustered systems, distributed services, cloud computing, Hadoop.
You will be responsible for:
- Deployment and maintenance using Ansible.
- Management of AWS cloud and on-premise virtual infrastructure.
- Deployment and management of distributed systems such as Hadoop, ELK, MemSQL, and others.
- Extension of the monitoring system to detect anomalies faster and react to failures in backend components.
- Development lifecycle with Jenkins/TeamCity/etc., including creation of test environments for pipelines.
- Microservices infrastructure with Kubernetes/Docker.
On top of that, you will have the opportunity to:
- Learn about new technologies used in the backend systems: distributed databases, Hadoop and Spark components, clustered services, etc.
- Learn how to tune backend systems for maximum efficiency.
- Learn or polish programming languages – if you feel like learning, you will be supported and encouraged to take on developer tasks.
- Contribute to the Open Source community.
We offer:
- Attractive salary (permanent employment or B2B contract).
- Medical care.
- Relocation bonus if required.
- At least one conference of your choice (including international) paid for by your employer.
- An exciting opportunity to join a new office of a company that has been on the market for 15 years.