About project:
We’re building an enhanced AdTech analytics platform that combines proven real-world business logic with cutting-edge technologies. It will help advertisers manage their ad campaigns by providing an extensive set of ad quality measures, including but not limited to brand safety, viewability (by different standards), fraud, and geo.
The platform consists of numerous components dealing with big data: collecting, preparing, and visualizing it.
You will join a team that:
- Includes data and backend engineers producing user attentiveness insights and recommendations that help big brands optimize their digital advertising strategy.
- Processes tens of billions of records a day using Python, utilizing advanced technologies like Spark, Kubernetes, Databricks, DBT, and GCP managed solutions.
- Builds data pipelines that integrate with internal and external data sources from various platforms to enhance business insights, while keeping standards high with testing, automated deployment, and a high degree of observability.
What you will do:
- You will work closely with a senior member of the team, who will provide technical guidance and manage your tasks.
- You will interact with other team members mainly for code review, consultation, and coordination on changes they are planning.
- You will assess the cost of data processes and build tools to visualize, track, and alert on cost anomalies.
- You will pursue improvements to cost and resource utilization within existing data processes while providing guidelines to the team.
- You will build tools for monitoring and visualization of key data aspects of our products.
- You will build infrastructure solutions for existing needs, migrate existing use cases onto that infrastructure, and contribute to improving it.
- You will consistently share your observations, suggest guidelines, and propose adoption of practices relevant to your tasks.
What you need to have:
- At least 8 years of experience in software development
- At least 5 years of experience coding in Python
- At least 5 years of experience building ETLs or ELTs
- At least 5 years of experience working with relational databases, with expert-level SQL
- Experience with big data, cost optimization, and scalability (on the order of 10B rows a day)
- Experience with Spark, batch & streaming processing; familiarity with its under-the-hood operation; ability to identify bottlenecks and optimize
- Experience with Databricks; ability to self-learn and integrate with an existing data platform built on Databricks with minimal onboarding and support
- Experience with CI/CD pipelines, Docker & Kubernetes, and public cloud providers; ability to self-learn and integrate with existing DevOps practices with minimal support
- Great interpersonal and communication skills, including verbal & written English
- Strong sense of responsibility; ability to work independently and report progress clearly
- Experience maintaining business-critical, high-scale production systems
- Nice to have: experience with BI tools, preferably Looker
We offer:
- Work at an international company that serves high-profile clients like Google, Amazon, Microsoft, Vodafone, etc.
- Atmosphere of trust and empowerment across key projects and business processes
- Best practices around process and team management
- Competitive salary and payments in foreign currency — no matter your location
- Flexible schedules and work environments: remote work, office facility, co-working space
- Legal and accounting assistance, including for those who are self-employed
- Paid vacation, sick and family leave, birthday off
- Transparency and trust within the team: we encourage employee feedback and organize regular Q&A meetings with top managers
- Corporate discounts, team-building parties, corporate events, and other perks
- Gamification system with social activity bonuses