
Senior Spark/Scala Specialist

Type of work: Undetermined
Experience: Mid
Employment Type: Permanent
Operating mode: Office

Tech stack
  • Scala – advanced
  • Hadoop – advanced
  • Elasticsearch – advanced
  • PySpark – advanced
  • Java/Python – advanced

Job description

Online interview

You've heard of Artificial Intelligence (AI), but what about Applied Intelligence?
AI may be the talk of the town, but it's not a silver bullet. That's where Applied Intelligence comes in: It's our unique approach to combining AI with data, analytics and automation under a bold strategic vision to transform business in a very pragmatic way.

Our AI Centre of Excellence in Poland is part of a global team of more than 6,000 deep AI experts and 3,000 data scientists. We work with almost any technology partner, including SAS, AWS, GCP and Microsoft.
We build resilient data approaches, make the AI vision a reality and make it scalable. Our work makes an impact and helps businesses prepare for tomorrow.
  
We are looking for Data Scientists and Data Engineers who will:
  • Manage, transform and cleanse high-volume data.
  • Write defensive, fault-tolerant and efficient code for data processing.
  • Automate data processing to enable ongoing alerts on high-risk activity.
  • Present project results to clients.
  • Work closely with data scientists to ensure efficient and effective delivery of solutions.
  • Use leading open-source big-data tools such as Spark, Hadoop, Scala and Elasticsearch (see the illustrative sketch after this list).
  • Work with our expert software development team to produce reusable applications.
  • Collaborate on scalability issues involving access to massive amounts of data and information.
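
To give a concrete flavour of this work, here is a minimal sketch (in Scala, the primary language of the stack) of a defensive Spark batch job that cleanses raw records and flags high-risk activity. All paths, column names and the risk threshold below are hypothetical and chosen purely for illustration; they are not taken from any real project.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch: read raw CSV transactions, drop malformed and duplicate
// rows, derive a simple high-risk flag and write the result as Parquet.
object TransactionCleansingJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("transaction-cleansing")
      .getOrCreate()

    val raw = spark.read
      .option("header", "true")
      .option("mode", "DROPMALFORMED")        // skip rows that fail to parse
      .csv("hdfs:///data/raw/transactions")   // hypothetical input path

    val cleansed = raw
      .dropDuplicates("transaction_id")       // hypothetical key column
      .filter(col("amount").isNotNull)
      .withColumn("amount", col("amount").cast("double"))
      .withColumn("high_risk", col("amount") > lit(10000.0))  // illustrative threshold

    cleansed.write
      .mode("overwrite")
      .parquet("hdfs:///data/curated/transactions")  // hypothetical output path

    spark.stop()
  }
}
```

In production, a job like this would typically run as a scheduled, hands-off batch step, with the curated output feeding downstream analytics or alerting.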

WE EXPECT THAT YOU:
  • Have at least a Bachelor's degree in a discipline such as Mathematics, Informatics, Physics or Statistics from a reputable university.
  • Have excellent English communication skills (written and spoken); a good level of another foreign language, e.g. German, Italian, Russian, French or Spanish, would be an asset.
  • Have proven big data experience (4+ years), from either an implementation or a data science perspective, within industry or consulting.
  • Have excellent technical skills, including expert knowledge of at least one big data technology such as Spark, Hadoop or Elasticsearch.
  • Have experience building data processing pipelines for production "hands-off" batch systems, including traditional ETL pipelines, analytics pipelines or, preferably, both.
  • Have strong coding experience in Scala, Java or Python.
  • Have strong client-facing, communication and presentation skills.
  • Are enthusiastic about learning and working with emerging technologies and techniques.
  • Demonstrate strong analytical and problem-solving skills, including the ability to debug and solve technical challenges with sometimes unfamiliar technologies.
  • Have experience with data interrogation and modelling.
  • Have strong experience with agile practices and CI/CD.
  • Have experience working with a variety of modern development tooling (e.g. Git, Gradle, Jenkins, Nexus) as well as technologies supporting automation and DevOps (e.g. Ansible, Chef, Puppet, Docker, Bash scripting).
  • Have a Quantexa certification (nice to have).

WE OFFER YOU:
  • Technologically and personally stimulating projects for the largest global enterprises.
  • Scala and Quantexa upskilling options for candidates who already know PySpark.
  • Working with the latest technologies in the field of data analytics.
  • Career and competence support, including ongoing mentoring and financing of certifications in a range of technologies and tools.
  • Upskilling opportunities and a world-class training curriculum, including graph analytics.
  • Clearly defined career paths.
  • A market-competitive salary.