📍 100% remote | 🕒 Full-time | 🌍 International environment
We are seeking a Backend Engineer with strong Big Data and data engineering skills to support the development of scalable, high-performance data solutions in a modern cloud environment. This role is ideal for someone passionate about clean code, robust architecture, and processing massive data sets in production-grade systems.
Responsibilities:
- Design, build, and maintain backend components for large-scale data pipelines using Python, Spark, Databricks, and other modern data tools
- Implement and manage workflow orchestration using Airflow or similar tools
- Collaborate with cross-functional teams including DevOps, data analysts, and product owners
- Optimize data processing for performance, reliability, and scalability in a cloud-native environment (Azure)
- Ensure services are deployment-ready and integrated with Docker and CI/CD workflows
- Contribute to the continuous improvement of backend and data engineering practices
Requirements:
- Proven experience in Python backend development with a focus on data-heavy applications
- Hands-on expertise with Big Data tools such as Apache Spark and Databricks
- Experience with data orchestration tools like Airflow
- Working knowledge of Azure Cloud services
- Familiarity with Docker and containerized application deployment
- Ability to write clean, efficient, and maintainable code
- Good communication skills and a collaborative mindset
- Proficiency in English for day-to-day work in an international team
Nice to have:
- Understanding of blockchain technologies and related data structures
- Experience with monitoring and observability tools (e.g., Datadog) and CI/CD automation
- Familiarity with Agile methodologies and enterprise-scale delivery environments