AWS DevOps Engineer
We are looking for experienced team players to fill the position of DevOps Engineer and join an upcoming project for our client in the pharmaceutical industry.
Responsibilities:
Design, deploy, and maintain Kubeflow (or equivalent) for pipeline orchestration, model training, evaluation, and serving on large image datasets; ensure reliability, security, and cost efficiency.
Manage and tune Kubernetes clusters (EKS/GKE/AKS), set up namespaces, RBAC, autoscaling, network policies, and service meshes where appropriate; keep upgrades and operations predictable.
Define infrastructure-as-code with Terraform; implement repeatable environment provisioning, configuration management, and golden paths for teams.
Establish CI/CD workflows (GitHub Actions/Jenkins/GitLab CI), build/test standards, and progressive delivery patterns that keep releases fast and low-risk.
Implement logging, metrics, and tracing (e.g., Prometheus, Grafana, CloudWatch, Splunk/New Relic) with actionable SLOs, alerts, and runbooks; embed security and compliance by design.
Requirements:
At least 5 years of experience as a DevOps Engineer or in a similar role
Production experience with Kubeflow Pipelines for scalable, reproducible ML workflows
Strong Kubernetes expertise (EKS/GKE/AKS): upgrades, autoscaling, RBAC, networking, reliability
Solid Unix/Linux fundamentals
Hands-on AWS experience (EKS, EC2, S3, IAM, CloudWatch; RDS a plus) with secure, cost-efficient architecture design
Proficiency in Terraform and Git-based workflows for repeatable infrastructure
Experience with CI/CD systems (GitHub Actions/Jenkins/GitLab CI), including artifact management and progressive delivery
Strong Python and/or shell scripting for automation
Observability experience (logging, metrics, tracing, SLOs, alerts, runbooks) with a security-first mindset
Ability to lead initiatives, communicate trade-offs, and collaborate cross-functionally
Nice to Have:
Familiarity with ML tooling (MLflow, Feast, Argo, Airflow, Ray) and model lifecycle management
Experience with object storage (S3), artifact registries, large image datasets, and basic SQL/NoSQL
Exposure to digital pathology or large-scale image processing (e.g., OpenSlide, scikit-image, OpenCV)
Experience optimizing high-throughput pipelines and working with GPUs/accelerators
Knowledge of VPC design, networking, service meshes, secrets management, IAM, and policy as code
Experience in regulated environments (e.g., GxP), including data governance and privacy
Familiarity with Jira/Zendesk and JavaScript/TypeScript for internal tools or dashboards
Benefits:
Hybrid work mode (1-2 days per week from the customer's office in Warsaw)
Professional training programs, including Udemy access and other development plans
Work with a team recognized for its excellence: we have been featured in the Deloitte Technology Fast 50 and FT 1000 rankings, and we have received the Great Place To Work® certification for five years in a row