Senior AI Engineer (Generative AI / LLM Systems)

AI/ML

100% remote, Poznań +4 Locations

cerebre

Full-time
B2B
Senior
Remote

Job description

About Cerebre 

Cerebre builds software that helps industrial companies understand and operate complex facilities. Our platform transforms engineering diagrams, operational data, and documentation into an ontology-driven knowledge graph, PlantGraph, that models equipment, instrumentation, flow, and process relationships across a facility.

This foundation enables engineers and operators to reason about industrial systems with greater clarity and speed. We are now integrating advanced AI capabilities directly into this platform, enabling natural language interaction with facility data, graph-aware reasoning over engineering systems, and AI-driven workflows that operate across diagrams, documentation, and operational processes. 

 

About the role 

Cerebre is seeking a Senior AI Engineer to help design and build the next generation of AI-powered capabilities within the Cerebre platform. 

This role focuses on applying modern generative AI technologies including large language models, retrieval-augmented generation (RAG), and agent-based systems to real industrial problems. You will build production systems that connect AI models with structured engineering data, knowledge graphs, and operational workflows. 

A central aspect of this role is enabling AI systems to reason over Cerebre’s PlantGraph, which represents equipment, flow, and process relationships derived from P&IDs and other engineering sources. You will design systems that allow large language models and agents to interact with structured graph data through well-defined tools and APIs, combining graph queries, document retrieval, and system context into coherent inputs that support reliable reasoning. 

The problems you will work on involve integrating multiple forms of industrial data, including engineering documentation, procedures, maintenance work orders, and safety artifacts such as Lock-Out-Tag-Out (LOTO) plans. This requires building AI workflows that not only generate useful outputs, but do so in a way that is grounded in underlying data, traceable, and suitable for real-world engineering use. 

You will help design the infrastructure that allows AI agents to safely interact with Cerebre’s platform capabilities, enabling both internal and third-party AI-driven workflows. This includes exposing platform functionality as structured tools and ensuring that agent interactions with industrial systems are observable, reliable, and aligned with the constraints of the domain. 

You will work closely with machine learning engineers, software engineers, and domain experts to translate these capabilities into robust product features. This role emphasizes production-grade AI engineering, including building reliable pipelines, evaluating model performance, and integrating AI systems into scalable software platforms. 

 

What You’ll Do

  • Build AI Systems for Ontology-Driven Graph Reasoning 
    Design and implement AI systems that enable large language models to interpret and reason over Cerebre’s ontology-backed PlantGraph, including equipment, instruments, flow, and process relationships derived from P&IDs. This includes constructing retrieval and context pipelines that combine graph queries, ontology structures, and engineering data into coherent inputs that support reliable, explainable outputs.

  • Develop Chat-Based Interfaces Over the Facility Graph 
    Build natural language interfaces that allow users to search, explore, and reason over facility data through conversation. This includes enabling workflows such as querying equipment and instrument relationships, understanding process flow, and navigating the PlantGraph and associated diagrams through chat-driven interactions.

  • Orchestrate AI Workflows Across Graph, Documents, and Engineering Systems 
    Design and implement AI workflows that coordinate model reasoning and tool usage across Cerebre’s PlantGraph and related data sources. This includes orchestrating graph queries, ontology structures, and document retrieval (e.g., P&IDs, procedures, maintenance work orders, LOTO plans, and time series data) to construct coherent context for model reasoning. Ensure that outputs are grounded in underlying data, aligned with real engineering workflows, and meet strong expectations for correctness, traceability, and validation in production environments.

  • Expose PlantGraph and Platform Capabilities to AI Agents 
    Design and implement the interfaces that allow AI agents to interact with Cerebre’s platform. This includes exposing ontology-driven graph queries, entity data, and diagram-level interactions as structured tools and APIs, and implementing Model Context Protocol (MCP) or similar standards to support both internal and third-party agent access in a safe and observable way.

  • Productionize and Scale AI Capabilities in the Product 
    Translate AI prototypes into reliable, user-facing product features by designing scalable services and APIs, optimizing inference performance and cost, and implementing evaluation, monitoring, and testing frameworks that ensure consistent behavior in production.

  • Collaborate to Deliver End-to-End AI Product Experiences 
    Work closely with machine learning engineers, software engineers, product teams, and domain experts to design and deliver AI capabilities that are tightly integrated into the product experience, including graph exploration, diagram navigation, and workflow execution. Take ownership of ambiguous problems and drive solutions across system boundaries, including identifying and improving gaps in data, ontology, or system design.

  • Evaluate and Apply Emerging AI Techniques Pragmatically 
    Continuously assess new models, tools, and architectural patterns, and apply them where they meaningfully improve system capability, reliability, or development velocity.
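To make the tool-exposure pattern above concrete, here is a minimal, hypothetical sketch of how a PlantGraph query might be published as a structured tool that an LLM agent can call. All names, schemas, and data are illustrative assumptions, not Cerebre's actual API; a real implementation would query the graph backend rather than an in-memory dictionary.

```python
import json

# Tool schema in the JSON-schema style used by common LLM tool-calling APIs.
# The tool name, parameters, and example tags are hypothetical.
UPSTREAM_EQUIPMENT_TOOL = {
    "name": "get_upstream_equipment",
    "description": "Return equipment upstream of a tagged item in the facility graph.",
    "parameters": {
        "type": "object",
        "properties": {
            "tag": {"type": "string", "description": "Equipment tag, e.g. 'P-101'"},
            "max_hops": {"type": "integer", "default": 2},
        },
        "required": ["tag"],
    },
}

# Toy in-memory stand-in for a graph backend (a real system would run a
# graph query instead). Edges point from each item to its upstream sources.
_FLOW_EDGES = {"P-101": ["V-100"], "V-100": ["T-100"]}

def get_upstream_equipment(tag: str, max_hops: int = 2) -> dict:
    """Walk upstream flow edges and return a traceable result payload."""
    found, frontier = [], [tag]
    for _ in range(max_hops):
        frontier = [up for t in frontier for up in _FLOW_EDGES.get(t, [])]
        found.extend(frontier)
    # Echo the query in the result so every answer is traceable to its inputs.
    return {"query": {"tag": tag, "max_hops": max_hops}, "upstream": found}

def dispatch_tool_call(name: str, arguments_json: str) -> dict:
    """Route a model-issued tool call to the matching Python function."""
    registry = {"get_upstream_equipment": get_upstream_equipment}
    return registry[name](**json.loads(arguments_json))
```

In an agent loop, the schema would be advertised to the model, and `dispatch_tool_call` would execute whichever call the model emits; returning the query alongside the result is one simple way to keep agent outputs grounded and auditable.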

Required Skills

  • 5+ years of experience in software engineering, machine learning engineering, or applied AI development

  • Experience designing AI systems that combine structured data (e.g., APIs, databases, or similar systems) with LLM-based reasoning

  • Experience building end-to-end AI workflows that involve retrieval, tool usage, and context construction, with an emphasis on correctness, traceability, and evaluation

  • Demonstrated ability to take ownership of ambiguous technical problems and drive solutions across systems, including identifying gaps in data, ontology, or system design and collaborating to improve them

  • Strong programming experience in Python and experience building production-grade backend services

  • Experience building and deploying production systems that integrate large language models into user-facing or operational workflows

  • Experience with retrieval-augmented generation, semantic search, and embedding-based systems

  • Experience designing and building scalable APIs and services that support AI-driven features

  • Strong problem-solving skills and ability to operate effectively in complex, evolving technical environments

  • Proficiency in English

Preferred Skills 

  • Experience building LLM agents or tool-using AI systems that interact with external systems via APIs or structured tools, including familiarity with emerging standards such as Model Context Protocol (MCP) or similar approaches for exposing platform capabilities to AI agents

  • Experience working with knowledge graphs or graph databases (e.g., querying, traversal, or integrating graph data into AI workflows)

  • Familiarity with industrial process diagrams (P&IDs), equipment/instrumentation concepts, or chemical manufacturing processes

  • Experience integrating AI systems with real-world operational data such as documentation, procedures, or maintenance workflows

  • Experience with ML frameworks such as PyTorch or TensorFlow for model evaluation, fine-tuning, or experimentation

  • Experience working with distributed systems and cloud infrastructure in production environments

  • Experience working in industrial, engineering, or other domain-heavy software systems

  • Basic familiarity with C# or willingness to learn how to read and write C# code for integration with existing platform components 

 

Technology Stack 

Our AI platform leverages a modern AI and software engineering stack designed to support ontology-driven reasoning over industrial systems:

  • Model Layer 
    Large language models accessed through LLM APIs/SDKs such as OpenAI, as well as open-source models via Hugging Face and similar ecosystems.

  • AI Orchestration & Agent Infrastructure
    Frameworks and tooling for retrieval-augmented generation, agent orchestration, and tool-based AI workflows (e.g., LangChain, LlamaIndex, or similar technologies). Includes support for Model Context Protocol (MCP) and related standards that enable external agents to interact with Cerebre’s platform and PlantGraph capabilities.

  • Machine Learning Frameworks
    PyTorch, TensorFlow, and scikit-learn, along with modern tooling for model evaluation, experimentation, and performance monitoring in production environments.

  • Data & Retrieval Infrastructure
    Embedding pipelines, vector databases, hybrid search systems combined with graph database infrastructure (FalkorDB) powering the PlantGraph industrial knowledge graph. This layer enables structured reasoning over equipment, instrumentation, flow, and process relationships derived from P&IDs and related engineering data.

  • Application & Service Layer
    Python services and APIs powering AI workflows, agent interactions, and backend platform capabilities.

  • UI Layer
    Integration with product interfaces built using .NET-based application frameworks and modern web technologies, supporting AI-driven experiences such as chat-based graph exploration and diagram navigation.
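As a rough illustration of how the retrieval and graph layers above might feed a model, here is a minimal, hypothetical sketch of grounded context construction: document snippets (standing in for vector-search hits) and graph facts are merged into one evidence block, each tagged with its source so model outputs stay traceable. The function name and data shapes are assumptions for illustration only.

```python
def build_context(question: str, doc_hits: list[dict], graph_facts: list[dict]) -> str:
    """Format retrieved evidence into a single prompt context block.

    doc_hits: snippets from semantic search, each with a 'source' and 'text'.
    graph_facts: triples from the knowledge graph (subject, relation, object).
    """
    lines = [f"Question: {question}", "Evidence:"]
    for hit in doc_hits:
        # Tag each snippet with its source document for traceability.
        lines.append(f"- [{hit['source']}] {hit['text']}")
    for fact in graph_facts:
        # Render graph triples as readable, citable statements.
        lines.append(f"- [PlantGraph] {fact['subject']} --{fact['relation']}--> {fact['object']}")
    return "\n".join(lines)
```

Keeping the source identifier next to every piece of evidence lets downstream validation check each model claim against the data it was grounded in.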

 

Why Join Cerebre 

You will work at the intersection of AI, engineering knowledge, and industrial systems, helping build an AI-native platform that transforms how engineers reason about complex facilities. 

This is an opportunity to apply cutting-edge AI techniques to real-world infrastructure problems, where systems must be grounded in structured data, aligned with engineering workflows, and reliable in production. Your work will have meaningful impact across global industry by directly shaping how engineers interact with facility models and make decisions using AI-driven insights. 

 

More about Cerebre 

We are cross-functional collaborators.  

We blend manufacturing process knowledge with software and big data engineering expertise to create value in physical settings. 

We are experienced.  

Our team includes industry-leading experts in numerical simulation, combustion, power, computational fluid dynamics, and chemical process modeling. 

We are serious builders.  

We develop our platforms using leading practices in IT/OT architecture, OT security, AI architecture, ML Ops, and platform engineering. 

Tech stack

  • English: B2
  • Python: master
  • System Design: advanced
  • API Development: advanced
  • Vector databases: advanced
  • Embeddings: advanced
  • AI Agents: advanced
  • RAG: advanced
  • LLMs: advanced
  • Cloud: regular
  • Industrial / Manufacturing / Operations domain: nice to have

About the company

cerebre

cerebre is an industrial intelligence company. Like the brain, we centralize data, systems, and knowledge so facilities can think faster, act smarter, and operate safer.
