Spyrosoft
Spyrosoft is an authentic, cutting-edge software engineering company, established in 2016. We have been included in the Financial Times ranking of the 1000 fastest-growing companies for three consecutive years: 2021, 2022, and 2023.
We are seeking a Data & BI Specialist responsible for designing, developing, and optimizing BI solutions. Depending on your experience level (Regular/Senior), your tasks will include advanced data modeling, performance optimization, and data visualization. Additionally, you may work on developing infrastructure and processing large datasets, ensuring their high quality and availability across the organization.
Main responsibilities
Depending on the project:
Data Visualization:
Creating and optimizing interactive dashboards and data visualizations.
Optimizing report performance.
Developing the UX/UI of dashboards and reports.
Data Modeling & SQL:
Designing scalable data models.
Writing advanced SQL queries.
Supporting Master Data Management (MDM).
Infrastructure & Data Processing:
Building and optimizing ETL/ELT processes.
Managing incidents and monitoring cloud infrastructure.
Data Governance & Quality:
Implementing data governance principles.
Monitoring data quality (KPIs).
Managing data catalogs (e.g., Purview).
Business Support:
Supporting stakeholders through data analysis and delivering key insights.
Gathering business requirements.
Requirements:
Must-have:
Proficiency in SQL (or Python, Scala, etc.) and the ability to design data models (LookML, AAS).
Hands-on experience with BI tools (Looker, Power BI) and optimizing interactive dashboards and reports.
Knowledge of ETL/ELT processes.
Strong expertise in Google Cloud Platform, particularly with Looker and LookML.
Understanding of data governance principles and experience in monitoring data quality.
Fluent communication in English, especially in the context of technical documentation and collaboration with international teams.
Nice-to-have:
Ability to create scripts for process automation (Python, Scala).
Experience with Fabric, Databricks, Snowflake, etc.
Basic knowledge of frameworks for processing large datasets, such as Hadoop or Spark.
Experience with data management tools (e.g., Purview, Collibra).