Data Science Analyst

Allegro
Warszawa
Type of work: Full-time
Experience: Mid
Employment Type: Permanent
Operating mode: Hybrid

We’re Poland’s most popular shopping platform and the largest e-commerce player of European origin. With over 250 million offers, we’re now expanding our successful marketplace model across Central and Eastern Europe. Join us!


Tech stack

  • Python: advanced
  • SQL: advanced
  • Analytical Thinking: advanced
  • Machine Learning: regular

Job description

The hybrid work model requires 1–2 days a week in the office.

Please note: although this role sits within the Data Science team, it is primarily an Analyst position. We are looking for candidates with strong analytical skills rather than a typical Data Scientist profile.


About the team

The Data Science Hub is the place where we apply analytical techniques, mathematics, and machine learning to solve a wide range of business problems. We provide valuable insights and make informed decisions by processing terabytes of data on a daily basis. Our team offers excellent growth opportunities and a rare chance to gain interdisciplinary knowledge about the functioning of e-commerce platforms. The breadth of our impact on different business domains is exemplified by our diverse portfolio of projects, which includes logistics, logistic network optimization, marketing, pricing, finance, and more.

The Data Science Hub consists of 5 teams:

  • 3 Data Science teams,
  • Data Analytics team,
  • Data Engineering team.

We are looking for new members for the Data Analytics team. 


We are looking for people who

  • Are very familiar with Python and SQL
  • Have basic knowledge of ML modeling 
  • Have at least 1 year of experience working as an analyst
  • Have worked with Git
  • Know Looker Studio or Tableau (nice to have)
  • Have worked with tabular data and its visual representations
  • Understand mathematical concepts, statistical modeling, and probability theory
  • Are not afraid to challenge the current state of things
  • Understand how the business side of data projects works
  • Are curious and open to learning new things
  • Know English at B2+ level


In your daily work you will handle the following tasks

  • You will become part of a team responsible for assessing the incremental value of ML projects
  • You will analyze the complete ML pipeline, highlighting bottlenecks and potential areas for improvement
  • You will mine data daily to prove or disprove hypotheses
  • You will design experiments for ML projects, such as A/B tests, causality exploration, and model analysis, and offer insights derived from the observations (a minimal sketch follows this list)
  • You will co-develop new methodologies and functionality within an internal Python library
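
For illustration only, here is a minimal sketch of the kind of A/B-test evaluation this work involves. The column names, metric, simulated uplift, and 0.05 significance threshold are assumptions made for the example, not details of the role or of Allegro's internal tooling.

import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical experiment log: one row per user, with the assigned variant
# and an observed metric (e.g. order value). All values are simulated.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "variant": rng.choice(["control", "treatment"], size=10_000),
    "metric": rng.normal(loc=100.0, scale=15.0, size=10_000),
})
# Simulate a small uplift in the treatment group for demonstration purposes.
df.loc[df["variant"] == "treatment", "metric"] += 1.5

control = df.loc[df["variant"] == "control", "metric"]
treatment = df.loc[df["variant"] == "treatment", "metric"]

# Welch's t-test: does the treatment shift the mean of the metric?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
uplift = treatment.mean() - control.mean()

print(f"uplift={uplift:.2f}, t={t_stat:.2f}, p={p_value:.4f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")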


What we offer

  • Well-located offices (with fully equipped kitchens and bicycle parking facilities) and excellent working tools (height-adjustable desks, interactive conference rooms)
  • A wide selection of fringe benefits in a cafeteria plan – you choose what you like (e.g. medical, sports or lunch packages, insurance, purchase vouchers)
  • English classes, paid for by us, related to the specific nature of your job
  • Macbook Pro / Air (depending on the role) or Dell with Windows (if you don't like Macs) and other gadgets that you may need
  • Working in a team you can always count on — we have on board top-class specialists and experts in their areas of expertise
  • A high degree of autonomy in terms of organizing your team’s work; we encourage you to develop continuously and try out new things
  • Hackathons, team tourism, training budget and an internal educational platform, MindUp (including training courses on work organization, means of communications, motivation to work and various technologies and subject-matter issues)
  • If you want to learn more, check it out


Why is it worth working with us

  • We are researching and developing our own state-of-the-art tools
  • Big Data – several petabytes of data and Machine Learning used in production
  • We practice Code Review, Continuous Integration, Scrum/Kanban, Domain Driven Design, Test Driven Development, Pair Programming, depending on the team
  • Our deployment environment combines private Data Centers (tens of thousands of servers) and Public Clouds (Google Cloud and Microsoft Azure)
  • Over 100 original open-source projects and a few thousand stars on GitHub


This may also interest you

Allegro Tech Podcast → https://podcast.allegro.tech/

Send in your CV and see why it is #dobrzetubyć (#goodtobehere)