DevOps Engineer
Piastowska 7, Gdańsk
Karbone
Requirements:
2+ years of commercial experience with AWS
Solid hands-on experience with PostgreSQL (TimescaleDB a plus, not mandatory)
Strong troubleshooting skills for cloud infrastructure and data systems
Experience with scripting (e.g., Python) for automation and operational tasks
Self-starter able to manage infrastructure independently
Good English communication skills, as you will coordinate directly with a US-based team
Nice to Have:
Familiarity with geospatial data
Previous experience working solo in DevOps roles
Knowledge of AWS monitoring best practices
Understanding of infrastructure-as-code tools (Terraform, CloudFormation, etc.)
Key Responsibilities:
Ensure infrastructure uptime and respond to alerts
Troubleshoot infrastructure issues
Manage and improve data storage and monitoring systems (PostgreSQL/TimescaleDB on AWS)
Build robust application and infrastructure monitoring (alerts, dashboards, stability tools)
Suggest and implement AWS architecture improvements
Support the operation and scheduling of ETL pipelines (Python scripts + ETL tools)
Collaborate with data analysts and project leads to ensure smooth delivery
What You’ll Work With:
Infrastructure: AWS (manual setup currently, automation planned)
Database: PostgreSQL / TimescaleDB (TigerData)
ETL: Python scripts (tool TBD)
Monitoring: You choose & implement (full ownership)
Orchestration: None yet (Docker/K8s not in use)
CI/CD: Not currently used (no active codebase or compiled code)
Automation: Terraform/CloudFormation possible later
Data: Geodata + time series streams (daily updates)
Why This Role?
Direct ownership of a clean AWS infrastructure that is yours to build out
Influence on tooling and architecture from day one
Work on a focused data pipeline with flexibility in implementation
Fast-track hiring and decision-making process