AI Systems QA Engineer

Testing

Domaniewska 47/10, Warszawa

AdaptaIQ

Freelance
B2B, specific-task contract
Senior
Remote

Job description

We are building something that does not exist yet.


AdaptaIQ is creating the world’s first adaptive intelligence engine that understands how a mind processes information and reshapes content accordingly.


Our first product, Kidaro, is a science-backed AI platform for parents of children ages 7 to 10. Through a psychologist-designed onboarding journey, we generate a personalized Learning Profile that maps how a child absorbs knowledge, maintains attention, and stays motivated. This becomes the foundation for adaptive reports and eventually dynamic, AI-generated learning experiences.


This is not traditional QA. You will be testing intelligence, not just code.


What You Will Own


You will own product quality across:

  • Onboarding interview flows (parent and child)

  • AI-generated phenotype reports

  • Admin dashboard and backend logic

  • Adaptive content logic

  • AI output consistency and safety

  • Regression testing as models evolve


You will work closely with our CTO, Head of Applied AI, psychologists, and engineers.


What Makes This Role Different


We need someone who can test:



  • Deterministic code

  • Automation frameworks

  • APIs and integrations

  • AND probabilistic LLM systems


You must be comfortable evaluating AI behavior, including:

  • Prompt testing

  • Hallucination detection

  • Persona drift

  • Safety boundary testing

  • Bias detection

  • AI regression testing after model updates


If you have never tested non-deterministic systems, this role is probably not for you.


Responsibilities


Manual QA

  • End-to-end onboarding testing

  • Edge case validation

  • Report logic and scoring consistency

  • UX friction identification

Automation QA

  • Build regression suites

  • API and integration tests

  • Data integrity validation

  • Versioning validation

AI QA

  • Build evaluation frameworks

  • Define measurable output criteria

  • Create synthetic personas for stress testing

  • Track output stability over time


Requirements

  • 4+ years QA experience

  • Strong automation background

  • Experience testing APIs

  • Experience with tools like Postman, Bruno, or similar

  • Understanding of SQL

  • Basic understanding of auth mechanisms (e.g. OAuth, IdP concepts)

  • Basic experience in software development (e.g. Python, TypeScript, Java)

  • Experience with AI or LLM systems is a strong plus

  • Strong analytical and documentation skills

  • Comfortable in an early-stage startup

  • Nice to have: basic experience in AWS


Bonus if you have experience in EdTech, psychology-driven products, or conversational AI.

We are early, focused, and building with long-term architecture in mind. This is a contractor role with potential for long-term collaboration. If you want to help shape how adaptive intelligence systems are validated, we would love to speak with you.


Tech stack

  • English — C1

  • LLM Testing — master

  • Automated Testing — master

  • Manual Testing — master

  • Automation Tools — advanced

  • Postman, Bruno or similar — advanced

  • Amazon AWS — regular

  • TypeScript, Java, Python — regular

  • SQL — regular


Published: 03.03.2026
