AI Specialist / Machine Learning Engineer
We are currently supporting our partner - a leading company in the EduTech sector - in building a new AI Implementation Team. They are looking for an experienced AI Specialist / Machine Learning Engineer who wants to have a real impact on how modern education and business systems utilize artificial intelligence.
This role is designed for someone with a strong software engineering background, hands-on experience with LLMs/OLMs, and a solid understanding of vector databases.
Location: Remote / Hybrid (Poland-based)
Contract Type: B2B / Permanent
The Role and Environment:
Our partner is moving beyond simple AI integration toward building complex, autonomous systems. As part of a newly formed team, you will be responsible for the end-to-end design and deployment of AI-driven features that serve thousands of users.
Areas of Expertise:
Engineering & Production Code:
Core: Proficiency in Python with a focus on writing clean, modular, and production-ready code (see the short sketch after this list).
Frameworks: Experience with PyTorch, TensorFlow/Keras, and the Hugging Face ecosystem.
Integrations: Practical knowledge of NumPy, Pandas, and connecting models with external APIs.
Ecosystem: Familiarity with TypeScript, Node.js, or Go for backend integration is a welcome addition.
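To give a flavor of the expected coding style, here is a minimal sketch of a typed, modular inference wrapper built on the Hugging Face pipeline API. The model checkpoint and class names are illustrative only, not part of the client's stack.

```python
# Illustrative only: a small, typed wrapper around a Hugging Face pipeline,
# showing the kind of clean, modular inference code the role calls for.
# The model checkpoint below is an example, not a project requirement.
from dataclasses import dataclass
from transformers import pipeline

@dataclass
class Prediction:
    label: str
    score: float

class DocumentClassifier:
    def __init__(self, model_name: str = "distilbert-base-uncased-finetuned-sst-2-english"):
        self._pipe = pipeline("text-classification", model=model_name)

    def predict(self, text: str) -> Prediction:
        """Classify a single document and return a typed result."""
        result = self._pipe(text)[0]
        return Prediction(label=result["label"], score=float(result["score"]))

# Example usage:
# classifier = DocumentClassifier()
# print(classifier.predict("Invoice for office supplies, net 30 days."))
```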
Language Models (LLM/OLM):
Lifecycle: Experience in selecting, benchmarking, and fine-tuning models (OpenAI, Claude, Llama, Mistral).
Evaluation: Understanding of model quality metrics and the ability to choose the right architecture for specific tasks.
Techniques: Mastery of prompt engineering, tokenization, and context management, as well as fine-tuning (LoRA) and RAG (a context-packing sketch follows this list).
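As an illustration of context management in practice, the sketch below packs retrieved passages into a fixed token budget before prompting a model. The tiktoken encoding name and the budget are assumptions made for the example; any tokenizer would serve the same purpose.

```python
# Context-management sketch: greedily pack passages into a token budget.
# The "cl100k_base" encoding and the 1024-token budget are illustrative.
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")

def build_prompt(question: str, passages: list[str], budget: int = 1024) -> str:
    """Add passages in order until the token budget would be exceeded."""
    header = "Answer using only the context below.\n\nContext:\n"
    used = len(ENC.encode(header + question))
    kept: list[str] = []
    for passage in passages:
        cost = len(ENC.encode(passage))
        if used + cost > budget:
            break
        kept.append(passage)
        used += cost
    return header + "\n---\n".join(kept) + f"\n\nQuestion: {question}"
```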
Vector Databases & Real-time Context:
Knowledge Management: Experience with vector stores like FAISS, Pinecone, Milvus, Weaviate, or Chroma.
RAG Systems: Building and optimizing Retrieval-Augmented Generation pipelines (a minimal retrieval sketch follows this list).
Data Flow: Understanding of semantic search, indexing processes, and embedding management.
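For example, the retrieval step of a RAG pipeline might look like the sketch below, using FAISS for the index and sentence-transformers for embeddings. The embedding model and sample documents are placeholders, not the client's actual stack.

```python
# Minimal semantic-search sketch with FAISS and sentence-transformers.
# Model name and corpus are illustrative placeholders.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices must be classified by cost center before posting.",
    "Support tickets are indexed nightly from the helpdesk feed.",
    "Embedding dimensions must match the index configuration.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
embeddings = model.encode(documents, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])   # inner-product index
faiss.normalize_L2(embeddings)                   # normalize for cosine similarity
index.add(embeddings)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most semantically similar documents for a query."""
    q = model.encode([query], convert_to_numpy=True).astype("float32")
    faiss.normalize_L2(q)
    _, idx = index.search(q, k)
    return [documents[i] for i in idx[0]]

print(retrieve("How are support tickets kept up to date?"))
```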
Scalable Architecture:
System Design: Designing architectures where LLMs interact with dynamic, frequently updated knowledge bases.
Data Pipelines: Familiarity with tools like Airflow or Dagster and with event-driven systems such as Kafka or Redis Streams (an event-driven sketch follows this list).
Reliability: Experience in building scalable and fault-tolerant solutions within cloud environments (AWS, GCP, or Azure).
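As one event-driven example, the sketch below consumes documentation updates from a Redis Stream and hands them to a placeholder re-indexing step. The stream name, message fields, and reindex helper are hypothetical.

```python
# Event-driven sketch: read documentation updates from a Redis Stream and
# pass them to a placeholder re-indexing step. Names are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
STREAM = "doc-updates"  # hypothetical stream name

def reindex(doc_id: str, body: str) -> None:
    """Placeholder for the embedding + vector-store upsert step."""
    print(f"re-indexing {doc_id} ({len(body)} chars)")

def consume(last_id: str = "0") -> None:
    while True:
        # Block for up to 5 seconds waiting for new entries on the stream.
        entries = r.xread({STREAM: last_id}, count=10, block=5000)
        for _, messages in entries:
            for message_id, fields in messages:
                reindex(fields["doc_id"], fields["body"])
                last_id = message_id
```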
Agents & Protocols:
Agentic AI: Practical use of frameworks such as LangChain, AutoGen, or LlamaIndex to build autonomous agents (a sketch of the underlying tool-calling loop follows this list).
Connectivity: Knowledge of the Model Context Protocol (MCP) for integrating custom tools and services.
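To illustrate the agentic pattern these frameworks implement, here is a framework-agnostic sketch of the basic tool-calling loop. The call_llm callable and both tools are hypothetical stand-ins, not part of any specific library.

```python
# Framework-agnostic sketch of an agent loop: the model picks a tool,
# the runtime executes it, and the result is fed back as new context.
# call_llm() and both tools are hypothetical placeholders.
import json

def search_docs(query: str) -> str:
    return "stub: top passages for " + query          # illustrative tool

def create_ticket(summary: str) -> str:
    return "stub: ticket created for " + summary      # illustrative tool

TOOLS = {"search_docs": search_docs, "create_ticket": create_ticket}

def run_agent(task: str, call_llm, max_steps: int = 5) -> str:
    """Loop until the model returns a final answer or the step limit is hit."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        # Expect the model to reply with JSON such as
        # {"tool": "search_docs", "input": "..."} or {"answer": "..."}.
        reply = json.loads(call_llm(history, tools=list(TOOLS)))
        if "answer" in reply:
            return reply["answer"]
        history.append({"role": "assistant", "content": json.dumps(reply)})
        result = TOOLS[reply["tool"]](reply["input"])
        history.append({"role": "tool", "content": result})
    return "stopped: step limit reached"
```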
Examples of Current Challenges:
Intelligent ERP Support: Creating self-learning mechanisms for document classification and accounting that reach more than 10,000 instances.
Dynamic Knowledge Ecosystems: Developing SaaS-integrated AI agents that update their knowledge bases in real time from technical-support feeds and documentation.
What the Client Offers:
Technical Influence: A significant role in a newly formed team where your architectural decisions matter.
Scale: The opportunity to work on large-scale production systems in a stable, growing industry.
Modern Stack: Access to high-compute environments and the latest AI research.
Flexibility: A culture that values professional growth and a healthy work-life balance.
If you are interested in discussing the details of this project, we would be happy to share more information.