🤓 Data Engineer
6 000 - 9 500 PLN net
🌍 Scalac | Czesława Miłosza 9/9, Gdańsk
🖥 https://scalac.io/
Scalac Sp. z o.o. - Data Engineer
Scalac is a software house that has grown to more than 80 developers within just four years. Scalac is the Team! This is crucial. We love to work together. We specialize in large-scale systems development based on functional programming languages. Working with Scalac means working with great Scala hAkkers, Frontend and Data engineers. We develop complex projects for various types of customers, mainly in the fintech, e-commerce and health sectors. We believe that truly great products happen when employer and employee go hand in hand. That's why we put a strong emphasis on your well-being and personal development.
Scalac Happiness Recipe:
- Work Hard
- Do the right thing
- Have fun
What will you do:
- You write code :)
- You cooperate closely with the sales team, participating in tech talks with prospects as our Data Engineering consultant.
- You act as a consultant for customers: you help them evaluate their problems, requirements, architecture, and engineering needs. You are a listener and a problem solver for them. You participate in meetings and estimations with clients.
- You contribute to open source projects, write tutorials and blog posts.
- You participate in conferences and meetups.
- You are ready to travel onsite to EU countries and to the US/Canada for at least one week per month.
- Bring your own ideas - we’re waiting for passionate people!
What are we looking for:
- Programming experience in the Data Engineering and distributed computing domain. Functional programming (Scala) is preferred, but Python or Java are also acceptable.
- Strong understanding of data processing and distributed computing concepts and technologies (being able to choose an optimum tech stack).
- Experience in designing and implementing data processing architectures.
- Experience in architecting and developing data pipelines and systems:
- databases and file systems (Cassandra, Hadoop, Redis, Postgres, MongoDB, SQL, S3, Aerospike),
- queues (RabbitMQ, Kafka, Kinesis),
- processing (Kafka, Flink, Spark, Hive),
- distributed / Big Data frameworks (Finagle, Akka, Data Science / ML, Zookeeper),
- data formats (Avro, ProtoBuf, Parquet),
- workflow managers (Luigi / Airflow).
- Experience in technically growing and scaling a project: from a single app with a proto data pipeline to high-load / high-availability systems, data lakes, etc.
- Experience in consulting and applying different architectural patterns: event sourcing, streaming, microservices, etc.
- Understanding of the pros and cons of different cloud providers.
- Knowledge of agile methodologies.
- Experience with the design of applications for scale and load.
- Experience in growing internal expertise in data engineering and data science.
- A high level of self-effectiveness.
- Experience in being a consultant for customers.
Nice to have:
- Experience with Machine Learning.
What we offer:
- Great customers to work with (small, agile, startup-ish clients AND bigger, well-respected ones, including Western European companies).
- Freedom (team decides) to choose conventions and work tools.
- Best work equipment.
- Opportunities for professional development (unlimited book budget, training budget, opportunities to travel to technical conferences).
- Regular company-wide retreats - we meet in person to work and play together.
- 100% remote work.
- Working with an international team.
If you are interested, please send us your CV.
Please make sure your CV includes the following clause: “I hereby give my consent for processing my personal data included in the employment offer for the needs of the recruitment process (in accordance with the Data Protection Act of 29 August 1997, Journal of Laws of 2002, No. 101, item 926, as amended).”