Sigma Software
Sigma Software is a global software development company that enables enterprises, startups, and product houses to meet their technology needs through end-to-end delivery. Founded in 2002, we operate from offices and teams all over the world.
We are looking for a Senior Data Engineer to join one of our teams and help us build great products for our clients.
You’ll be part of a high-performance team where innovation, collaboration, and excellence are at the core of everything we do. As a Senior Data Engineer, you’ll design and develop optimized, scalable big data pipelines that power the products and applications we work on. Your expertise will be valued, your voice will be heard, and your career will be supported every step of the way.
Does this sound like an interesting opportunity? Keep reading to learn more about your future role!
Customer
Our client is an international technology company that specializes in developing high-load platforms for data processing and analytics. The company’s core product helps businesses manage large volumes of data, build models, and gain actionable insights. The company operates globally, serving clients primarily in the Marketing and Advertising domain. They focus on modern technologies, microservices architecture, and cloud-based solutions.
Responsibilities:
Design, develop, and maintain end-to-end big data pipelines that are optimized, scalable, and capable of processing large volumes of data in real-time and batch modes
Collaborate closely with cross-functional stakeholders to gather requirements and deliver high-quality data solutions that align with business goals
Implement data transformation and integration processes using modern big data frameworks and cloud platforms
Build and maintain data models, data warehouses, and schema designs to support analytics and reporting needs
Ensure data quality, reliability, and performance by implementing robust testing, monitoring, and alerting practices
Contribute to architecture decisions for distributed data systems and help optimize performance for high-load environments
Ensure compliance with data security and governance standards
Qualifications:
4+ years of experience in data engineering, big data architecture, or related fields
Strong proficiency in Python and PySpark
Advanced SQL skills, including query optimization, complex joins, and window functions. Experience using NoSQL databases
Strong understanding of distributed computing principles and practical experience with tools such as Apache Spark, Kafka, Hadoop, Presto, and Databricks
Experience designing and managing data warehouses and data lake architectures in cloud environments (AWS, GCP, or Azure)
Familiarity with data modeling, schema design, and performance tuning for large datasets
Experience working with business intelligence tools such as Tableau or Power BI for reporting and analytics
Strong understanding of DevOps practices for automating deployment, monitoring, and scaling of big data applications (e.g., CI/CD pipelines)
At least Upper-Intermediate level of English
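To give a concrete flavor of the SQL skills listed above (window functions and top-N-per-group queries), here is a minimal, self-contained sketch. It uses Python's standard-library sqlite3 module as a lightweight stand-in for a warehouse engine (SQLite 3.25+ supports window functions); the table name, columns, and values are purely hypothetical examples in the spirit of the client's marketing-analytics domain.

```python
import sqlite3

# Hypothetical dataset: daily ad spend per campaign.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (campaign TEXT, day TEXT, spend REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("A", "2024-01-01", 120.0),
        ("A", "2024-01-02", 340.0),
        ("B", "2024-01-01", 90.0),
        ("B", "2024-01-02", 60.0),
    ],
)

# ROW_NUMBER() partitions rows by campaign and ranks them by spend,
# a common window-function pattern for top-N-per-group analytics.
rows = conn.execute(
    """
    SELECT campaign, day, spend,
           ROW_NUMBER() OVER (PARTITION BY campaign ORDER BY spend DESC) AS rn
    FROM events
    ORDER BY campaign, rn
    """
).fetchall()

# Keep only the highest-spend day per campaign.
top_per_campaign = [r for r in rows if r[3] == 1]
print(top_per_campaign)
```

In a production pipeline the same pattern would typically run on Spark SQL or a cloud warehouse rather than SQLite, but the window-function semantics are identical.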
Personal profile:
Excellent communication skills
Ability to collaborate effectively within cross-functional and multicultural teams
B2B, Permanent