Founding AI Engineer
BEFORE YOU READ THIS
Sequoia published a piece called Services: The New Software. It argues that AI is collapsing the gap between services and software companies — that services firms can now build compounding IP, run at software-like margins, and scale without the traditional headcount pyramid. If that thesis excites you, keep reading. If it doesn't, this role probably isn't for you.
ABOUT THE COMPANY
We're an AI transformation firm. We help companies turn AI ambition into working business systems — combining strategic advisory with hands-on agentic execution, from board-level governance to AI systems deployed in real workflows. We ship working software in 4–6 weeks. Our clients include manufacturing firms, e-commerce platforms, healthcare providers, and PE portfolio companies.
A small team of practitioners augmented by AI agents. Two to three people producing the output of a 14-person consulting team. Every engagement produces working software and leaves behind reusable components that make the next engagement faster. Sprint #1 runs at 50% margin. Sprint #16 runs at 75%. That's the Sequoia thesis in practice.
WHY THIS ROLE
This is a founding engineering role. You'll own the technical foundation of a company built on the premise that services are the new software. Equity and profit-sharing are included because you're building the engine, not renting your time.
HOW AN ENGAGEMENT WORKS
Weeks 1–2 — Discovery (with the FDE and founder)
Assess client data quality, existing systems, and integration points. Score technical feasibility of AI opportunities. Design the architecture for the selected build target. Map agent workflows, integrations, and data flows.
Weeks 3–5 — Build (you lead)
Build the system. AI agents handle 40–50% of code generation — you handle architecture, integration, edge cases, and the parts that require judgement. Design and build AI agents that automate client workflows: agentic systems that replace manual processes, multi-agent pipelines, autonomous decision-making loops. Integrate with client systems via APIs, MCP connectors, and data pipelines. Test and harden — LLM outputs are non-deterministic, so you build robust evaluation patterns. Ship iteratively — working demos to stakeholders, not "it'll be ready next week."
Week 6 — Prove
Measure results against the business case. Support the founder's leadership presentation with hard numbers. Extract reusable components into the internal IP library — agents, connectors, patterns, prompt templates.
Between sprints
Build and maintain the IP library — the compounding asset that makes Sprint #16 run at 75% margin. Build AI agents that do the services work itself — discovery agents, analysis agents, delivery automation. Contribute to internal product development. Experiment with new models and agent architectures.
WHAT THE WORK LOOKS LIKE
One sprint you're building an agentic knowledge assistant for a 600-person engineering firm. The next you're designing an AI-powered customer service platform for a PE portfolio company. Then you're automating proposal generation for a professional services firm using RAG over their project history. New client, new problem, every 4–6 weeks. This is management consulting meets engineering — you need to understand the business before you write the code.
TECH STACK
This changes fast — you'll help decide what comes next. LLMs: Claude, OpenAI, GCP Vertex AI, OpenRouter. Voice and media: ElevenLabs, Whisper, emerging multimodal APIs. Backend: Python, FastAPI. Frontend: React, Next.js. AI patterns: RAG, agentic workflows, MCP, function calling, tool use, evaluation frameworks. Infrastructure: AWS, Azure, GCP — client-dependent. Dev workflow: Claude Code, Cursor, agentic coding. Code-gen agents are the default way we write software, not an add-on.
WHAT WE LOOK FOR
AI-native by default
Claude Code, Cursor, or equivalent — you already write software with AI agents as co-developers. You've built systems that call LLM APIs in production. If you still think of AI-assisted coding as a novelty, this isn't the right place. Non-negotiable.
Full-stack and you ship
Python backend, React frontend, API integrations — you can build an end-to-end system and put it in front of users. You don't need a separate team for each layer.
You think in systems
When you build something, you think about how it integrates, how it fails, and how someone else reuses it on a different client six months from now.
Velocity is everything
We ship in 4–6 weeks what others take quarters to deliver. You'd rather ship an 80% solution on Tuesday than a 95% solution three weeks from now — and you can articulate what's in the missing 20%. If you need long planning cycles or perfect conditions to start, this pace will break you. Non-negotiable.
Client-facing
You'll present technical decisions to CTOs and COOs. You don't need to be a salesperson, but you need to explain what you built and why it matters in language they understand. Non-negotiable.
Low ego
You'll work alongside a founder, an FDE, client stakeholders, and AI agents. The best idea wins regardless of who had it. If you can't take direct feedback and adjust, this won't work. We reference check. High ego gets spotted fast.
WHAT YOU'LL GET
Rate
20,000 – 40,000 PLN per month net on a B2B contract, Warsaw.
Equity and profit share
Equity stake in the company and profit-sharing tied to engagement performance. You're building the engine — you share in the upside.
Architectural ownership
You're building the foundation, not inheriting someone else's.
Variety
New client, new industry, new problem every 4–6 weeks.
AI-native environment
40–50% of delivery is agent-augmented. We run agents around the clock and size token budgets accordingly. You'll have the tools and the budget to push what's possible.