Project & Team:
Join our R&D team in developing the Core Intelligence platform, an integral part of the Accuris products: Engineering Workbench, Goldfire, and Parts Intelligence. Our mission is to drive digital transformation in engineering organizations by unlocking valuable insights from the unstructured content of corporate repositories and industry sources and seamlessly integrating those insights into daily engineering workflows. Our solutions serve more than 7,500 companies, empowering engineers and knowledge workers with cutting-edge tools and industry content from 450+ Standards Development Organizations. The project focuses on creating intelligent mechanisms for the extraction, decomposition, analysis, and retrieval of relevant engineering data, utilizing advanced Machine Learning and Deep Learning models within a scalable, optimized cloud infrastructure.
The Impact:
This role is central to developing new features and maintaining existing API services and pipelines for unstructured data processing and search in high-load scenarios. You will focus on optimizing performance and cost efficiency in a cloud-based infrastructure, specifically AWS with Kubernetes orchestration. Your work will involve leveraging GPUs for deep learning models, optimizing both the serving and inference stages, and ensuring the seamless integration of data scientists' models into production. The platform is a core component of the data processing pipelines behind multiple products across our organization.
What We Offer:
- Engaging and innovative tasks with a dedicated team focused on developing proprietary solutions built on state-of-the-art Machine Learning models and data-driven algorithms.
- Close collaboration with experienced software developers, data scientists, data analysts, and researchers.
- Comprehensive corporate-level support for personal growth and career development.
- A fully remote work environment.
- Provision of all necessary equipment.
Role & Responsibilities:
- Develop and maintain core backend components of data processing pipelines, focusing on performance and efficiency, using Python and Go.
- Design, implement, and optimize APIs that establish clear data contracts and enable seamless collaboration with other teams.
- Design and implement new algorithms and data structures that improve the time and memory efficiency of data processing workflows. Continuously analyze performance metrics, identify bottlenecks, and apply innovative solutions to enhance the overall efficiency and scalability of the system.
- Optimize the use of GPUs for model serving, ensuring cost-effective scaling and high performance in production environments.
- Oversee the deployment and scaling of services in AWS, leveraging Kubernetes for autoscaling and scaling to zero.
- Collaborate with data scientists to productionize deep learning models, focusing on inference performance and model integration.
- Maintain comprehensive documentation, actively collaborate with other development teams, and contribute to the continuous improvement of DevOps practices.
Job Requirements:
- Experience: 3+ years as a Backend Engineer working with Python and Go, focusing on developing and maintaining API services and data processing pipelines in cloud environments.
- Python & Go Proficiency: Advanced programming and engineering skills in Python and Go, with a strong emphasis on writing high-performance, scalable, and efficient code. Proficiency with Git for version control, plus experience with unit testing and library packaging to ensure code reliability and maintainability.
- Multithreading and Asynchronous Programming: Deep understanding and practical experience with multithreading and asynchronous programming principles. Proven ability to design and implement concurrent systems that manage resources effectively and improve the responsiveness and scalability of applications in high-performance environments.
- Messaging and Streaming Systems: Experience with messaging and streaming platforms such as Apache Kafka or RabbitMQ. Ability to design and implement distributed messaging systems that ensure reliable data flow, scalability, and fault tolerance in high-throughput environments.
- Cloud & DevOps: Hands-on experience with AWS cloud services, Kubernetes, and container orchestration, with a focus on autoscaling, performance optimization, and cost management. Solid shell scripting and Linux skills.
- API Development: Proven expertise in developing and maintaining robust APIs, with a focus on clear data contracts and cross-team collaboration.
- Communication: Fluent in English, with excellent communication and collaboration abilities.
Nice-to-Have:
- Experience integrating and optimizing machine learning models, particularly serving deep learning models on GPUs in production.