Data Architect
Overview
We build AI-powered solutions for enterprise operations, providing digital products and consulting services that help businesses grow and improve performance. Our goal is to streamline processes, increase efficiency, and help companies unlock new revenue streams, especially in private capital markets.
Our ecosystem has three main parts:
PaaS (Platform as a Service) – our Core Platform, an AI-native system that improves workflows, provides useful insights, and helps create value across company portfolios.
SaaS (Software as a Service) – a cloud platform that offers strong performance, intelligence, and the ability to operate at large scale.
S&C (Solutions and Consulting Suite) – modular technology playbooks that help manage, grow, and improve company performance.
With more than ten years of experience working with fast-growing companies and private-equity platforms, we combine strong industry knowledge with the ability to turn technology into a real business advantage.
The Opportunity
We are looking for a Senior Data Architect with strong experience in cloud data architecture, data modeling, SQL, and data governance for large enterprise systems.
Responsibilities and Duties
Create and maintain enterprise data architecture strategies, standards, and design plans that support operational systems, analytics, and AI/ML workloads.
Design cloud-native data solutions on platforms like AWS (Redshift, RDS, Glue, Lake Formation) or similar systems, making sure they are scalable, secure, and cost-efficient.
Define and apply data modeling standards, including dimensional models, denormalized schemas, OLTP/OLAP patterns, and AI-ready data structures.
Design and manage data transformation layers using DBT with clear, tested, and well-documented models for analytics and reporting.
Lead the design of data integration and orchestration processes using tools such as Prefect and Airflow, covering batch ETL, real-time streaming, event-driven systems, and API-based data exchange.
Build frameworks for data validation, quality checks, and testing to ensure data accuracy and consistency across pipelines and warehouses.
Set data quality SLAs, monitoring, and alerting, and create automated checks that detect problems early.
Create and maintain data governance practices, including data quality, lineage tracking, cataloging, classification, and access control.
Work closely with Data Engineers, Software Engineers, Product teams, and Analytics teams to turn business needs into scalable data solutions.
Review and recommend data tools, technologies, and platforms, and make technical decisions for data infrastructure.
Design partitioning, indexing, and optimization strategies to support fast queries and large datasets.
Define and document data contracts, schemas, and interface specifications between services and teams.
Ensure data systems support AI/ML use cases, including feature stores, embedding pipelines, and training datasets.
Perform architecture and code reviews to ensure good standards, strong performance, and long-term maintainability.
Validate and cleanse data, and handle errors gracefully.
Guide and mentor data engineers on best practices in architecture and data modeling.
Support automated release management and CI/CD processes for data infrastructure and pipelines.
Requirements
7+ years of experience in data architecture, data engineering, or similar roles.
5+ years of experience designing cloud data systems (AWS, GCP, or Azure).
5+ years of experience writing complex SQL queries with relational databases.
5+ years of experience building ETL/ELT pipelines using tools such as Airflow or Prefect.
Strong experience with DBT for data transformation, testing, and documentation.
Experience designing data warehouses, including OLTP, OLAP, star schemas, snowflake schemas, dimensions, and facts.
Knowledge of data modeling methods and tools (conceptual, logical, and physical models).
Experience with cloud data warehouses such as Redshift, Snowflake, or BigQuery.
Experience building data validation frameworks, quality processes, and automated tests for pipelines.
Understanding of how data systems support AI/ML workloads, including feature stores and vector-based retrieval.
Strong knowledge of data governance, data quality systems, and metadata management.
Experience with cloud-based data systems, messaging tools, and analytics platforms.
Bachelor’s degree in Computer Science or a related field (preferred).
Additional Skills (Nice to Have)
Python development (Pandas, PySpark)
Docker
Kubernetes
CI/CD automation
AWS Lambda or Step Functions
Data partitioning techniques
Databricks
Vector databases such as Pinecone, Weaviate, or pgvector
Data mesh or data fabric architecture
Graph databases or knowledge graph design
Cloud certifications
Why Join Us?
We value creative problem-solvers who learn quickly and thrive in an open, diverse environment. We work hard toward ambitious goals, but we also enjoy the work we do.