We are on a mission to transform the spreadsheet-ridden world of supply chain operations. There are, on average, 13 copies of the same data stored in different systems across an organisation's supply chain. This creates huge inefficiencies for planning and operations teams, resulting in unnecessary safety stocks, lost sales, waste and poor supply chain transparency. In a world starting to prioritise responsible consumption, these are increasingly urgent problems to solve.
Data automation is core to what we do. Our technology stack leverages the latest and greatest cloud data technologies. Many of these technologies are not accessible to the average supply chain operations manager, but we believe that they should be.
Through our consulting work over the past 18 months, we've gained first-hand experience of the problems faced in this area. We're obsessively focused on our users. This will continue to be our core philosophy as we build out our product.
We're looking for a mid-level DataOps Engineer to help manage the data pipelines at the core of our product. You'll work alongside two dynamic, ambitious and mission-driven entrepreneurs, experienced in analytics, machine learning and creating user-focused enterprise software. This is a unique and exciting opportunity not only to work on real customer data problems, but also to help shape product development in this nascent and fast-moving area.
Role
- Ensure that our managed customer data pipelines run reliably and efficiently within our multi-tenant cloud environment
- Work with the product development team to architect the "no-code" and adaptive data pipeline execution engine behind our product
- Advise the product development team on best practices in data governance, schema management, data lineage, observability and data quality
- Ensure that all cloud environments are secure and customer data is managed securely and in accordance with our information security policies
- Continue to innovate on our automated data quality reporting systems
- Maintain metrics on data pipeline performance to ensure SLAs are met
- Bring expertise in DevOps and cloud automation to continuously improve our data operations stack
- Maintain documentation (internal wiki) and run-books for our data infrastructure
Skills and experience
- 2+ years of commercial experience in an individual contributor role
- Knowledge and experience using Python and SQL to create modular, reliable and scalable data pipelines
- Experience using pipeline orchestration and transformation tools, such as Airflow, Dagster and dbt
- Experience using containerisation and automated deployment technologies, such as Docker and Kubernetes
- Experience managing and automating cloud infrastructure, ideally using Infrastructure as Code approaches
- Previous experience in a DataOps role is advantageous, but not strictly necessary
- Bonus: Software engineering experience, to aid collaboration with our product development team
- Bonus: Analytics/Data Science experience, to understand our users' point-of-view and our customers' data models
- Bonus: Experience operating in the dynamic and fast-paced environment of an early-stage startup