AI Red Teaming Specialist
About the Astek Group
Founded in France in 1988, Astek Group is a global partner in engineering and IT consulting. Leveraging deep expertise across a wide range of industrial and technological sectors, Astek supports international clients in the development and delivery of products and services, while actively contributing to their digital transformation initiatives.
Since its inception, Astek Group has built its growth on a strong culture of entrepreneurship and innovation, as well as on the continuous development of the skills of its more than 10,000 employees, who work every day on diverse and challenging engineering and technology projects.
Join a rapidly growing group in France and worldwide, with 2024 revenues of €705 million.
For more information, please visit: https://astek.net
Role Overview
We are seeking a proactive and technically skilled AI Red Teaming Specialist to join a security-focused AI/ML project. In this role, you will be responsible for evaluating the security, safety, and resilience of AI systems, with a strong focus on generative, reasoning, and agentic Large Language Models (LLMs).
You will adopt an adversarial mindset to identify vulnerabilities and failure modes in AI-driven applications and provide actionable recommendations to harden systems against real-world threats before production deployment.
Project Context
The project is part of an advanced AI security initiative focused on secure, confidential, and resilient AI solutions. The work spans infrastructure, AI workloads, and application-level security, with a strong emphasis on agentic AI systems and next-generation AI threat models.
The environment is highly technical, research-driven, and collaborative, involving close interaction with engineering, data science, and product teams.
Key Responsibilities
Conduct AI Red Teaming Exercises
Plan and execute end-to-end red teaming operations and adversarial simulations targeting LLM-powered systems and applications
Design & Execute Adversarial Attacks
Develop and perform attacks such as:
prompt injection
data poisoning
jailbreaking
model evasion and misuse scenarios
Identify weaknesses, unsafe behaviors, and failure modes in generative and agentic AI systems
Vulnerability Analysis & Reporting
Systematically document findings and analyze red teaming results
Produce clear, high-quality reports describing:
identified risks and vulnerabilities
impact and severity
actionable remediation recommendations
Communicate results to both technical and non-technical stakeholders
Tooling & Automation
Contribute to the development and improvement of internal tools and frameworks for AI security testing
Work with automated prompt generation and scenario testing tools such as:
Garak
PyRIT
custom red teaming solutions
Cross-Team Collaboration
Work closely with data science, engineering, and product teams
Ensure AI security considerations are embedded throughout the AI development lifecycle
Provide expert guidance on AI security best practices
Research & Threat Intelligence
Continuously research emerging AI security threats, adversarial ML techniques, and evolving attack vectors
Stay current with industry trends, academic research, and real-world incidents
Technical Requirements (Must Have / Preferred)
Strong experience in AI security, adversarial ML, or offensive security roles
Hands-on experience red teaming LLMs or generative AI systems
Familiarity with:
OWASP Top 10 for LLMs
NIST AI Risk Management Framework
AI guardrails systems (e.g. Amazon Bedrock Guardrails, NVIDIA NeMo Guardrails)
Experience with cloud platforms:
AWS, GCP, or Azure
Understanding of MLOps pipelines and AI deployment workflows
Preferred certifications:
Offensive Security Certified Professional (OSCP)
Certified AI Red Teaming Professional (CAIRTP)
Background in one or more of the following is a plus:
content moderation
disinformation analysis
cyber-threat intelligence
Soft Skills & Availability
Strong analytical and critical thinking skills
Clear communication and reporting abilities
Ability to work with distributed, international teams
Working hours requirement:
overlap with San Francisco timezone
either:
4 days per week until 7:00 PM CET, or
2 days per week until 9:00 PM CET
What We Offer
Long-term collaboration – stability and ongoing career opportunities
Technical training and certifications – continuous skill development and professional growth
Mentoring through our Competence Center – from day one, become part of a community that allows you to enhance your skills, participate in conferences, and share knowledge with colleagues facing similar challenges
Clear career path – transparent progression opportunities
Employee benefits package, including:
Multisport card
Private medical care
Life insurance
Subsidy for public transport
Friendly work environment – team-building events, social gatherings, and corporate parties
Referral Program
Do you know someone who might be interested in this offer? Take advantage of our referral program and earn a bonus of up to PLN 7,000!
Link: https://astek.pl/system-rekomendacji/
Privacy Notice
The administrator of your personal data is ASTEK Polska sp. z o.o., located in Warsaw (00-133) at Al. Jana Pawła II 22. You have the right to access your data, request its deletion, and other rights regarding personal data. Detailed information on data processing can be found here: https://astek.pl/polityka-prywatnosci
You may withdraw your consent at any time. To withdraw consent, please contact us via email at privacy@astek.pl or by writing to the data administrator at the address above.
AO217841
AI Red Teaming Specialist
AI Red Teaming Specialist