Senior Data Engineer (Spark)
Centrum, Kraków +2 Locations
Addepto
Addepto is a leading consulting and technology company specializing in AI and Big Data, helping clients deliver innovative data projects. We partner with top-tier global enterprises and pioneering startups, including Rolls Royce, Continental, Porsche, ABB, and WGU. Our exclusive focus on AI and Big Data has earned us recognition by Forbes as one of the top 10 AI consulting companies.
As a Senior Data Engineer, you will have the exciting opportunity to work with a team of technology experts on challenging projects across various industries, leveraging cutting-edge technologies. Here are some of the projects we are seeking talented individuals to join:
Design and development of a platform for managing vehicle data for a global automotive company. This project develops a shared platform for processing massive car data streams. It ingests terabytes of data daily, using both streaming and batch pipelines for near real-time insights. The platform transforms raw data for analysis and Machine Learning, empowering teams to build real-world applications such as digital support and smart infotainment, and unlocking data-driven solutions for car maintenance and anomaly detection across the organization.
Design and development of a universal data platform for global aerospace companies. This Azure- and Databricks-powered initiative combines diverse enterprise and public data sources. The platform is in the early stages of development, covering the design of its architecture and processes, and offering freedom in technology selection.
This role represents a gradual shift away from hands-on coding towards a more strategic focus on system design, business consultation, and creative problem-solving. It offers an opportunity to engage more deeply with architecture-level decisions, collaborate closely with clients, and contribute to building innovative data-driven solutions from a broader perspective.
🎁 Discover our perks & benefits:
Work in a supportive team of passionate enthusiasts of AI & Big Data.
Engage with top-tier global enterprises and cutting-edge startups on international projects.
Enjoy flexible work arrangements, allowing you to work remotely or from modern offices and coworking spaces.
Accelerate your professional growth through career paths, knowledge-sharing initiatives, language classes, and sponsored training or conferences, including a partnership with Databricks, which offers industry-leading training materials and certifications.
Choose from various employment options: B2B, employment contracts, or contracts of mandate.
Make use of 20 fully paid days off available for B2B contractors and individuals under contracts of mandate.
Participate in team-building events and utilize the integration budget.
Celebrate work anniversaries, birthdays, and milestones.
Access medical and sports packages, eye care, and well-being support services, including psychotherapy and coaching.
Get full work equipment for optimal productivity, including a laptop and other necessary devices.
With our backing, you can boost your personal brand by speaking at conferences, writing for our blog, or participating in meetups.
Experience a smooth onboarding with a dedicated buddy, and start your journey in our friendly, supportive, and autonomous culture.
🚀 Your main responsibilities:
Develop and maintain a high-performance data processing platform for automotive data, ensuring scalability and reliability.
Design and implement data pipelines that process large volumes of data in both streaming and batch modes.
Optimize data workflows to ensure efficient data ingestion, processing, and storage using technologies such as Spark, Cloudera, and Airflow.
Work with data lake technologies (e.g., Iceberg) to manage structured and unstructured data efficiently.
Collaborate with cross-functional teams to understand data requirements and ensure seamless integration of data sources.
Monitor and troubleshoot the platform, ensuring high availability, performance, and accuracy of data processing.
Leverage cloud services (AWS) for infrastructure management and scaling of processing workloads.
Write and maintain high-quality Python (or Java/Scala) code for data processing tasks and automation.
🎯 What you'll need to succeed in this role:
At least 5 years of commercial experience implementing, developing, or maintaining Big Data systems, data governance, and data management processes.
Strong programming skills in Python (or Java/Scala): writing clean code, OOP design.
Hands-on experience with Big Data technologies such as Spark, Cloudera Data Platform, Airflow, Iceberg, and Kafka, as well as CI/CD practices.
Excellent understanding of dimensional data and data modeling techniques.
Experience implementing and deploying solutions in cloud environments.
Consulting experience with excellent communication and client management skills, including prior experience directly interacting with clients as a consultant.
Ability to work independently and take ownership of project deliverables.
Fluent in English (at least C1 level).
Bachelor’s degree in technical or mathematical studies.
Are you interested in Addepto and would like to join us?
Get in touch! We look forward to receiving your application. Would you like to know more about us?
Visit our website (career page) and social media (Facebook, LinkedIn).