#1 Job Board for the tech industry in Europe


    Data Solution Architect

    Type of work: Full-time
    Experience: C-level
    Employment Type: B2B
    Operating mode: Hybrid

    Tech stack

      Spanish: C1
      Databricks: advanced
      Azure: advanced
      Azure Data Factory: advanced
      Big Data: advanced
      PySpark: advanced
      SQL: advanced

    Job description



    Responsibilities

    1. Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks/Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL skills.

    2. Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.

    3. Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.

    4. Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools.

    5. Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premises systems.

    6. Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.

    7. Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards.

    8. Ensure best practices are followed in terms of code quality, data security, and scalability.

    9. Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

     

    Essential Skills

    • Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake
    • Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark)
    • Deep understanding of Azure services such as Azure Data Lake, Azure Synapse, and Azure Data Factory
    • Experience with ETL/ELT processes, data warehousing, and building data lakes
    • Strong SQL skills and familiarity with NoSQL databases
    • Experience with CI/CD pipelines and version control systems such as Git
    • Knowledge of cloud security best practices

     

    Soft Skills

    • Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders
    • Strong problem-solving skills and a proactive approach to identifying and resolving issues
    • Leadership skills, with the ability to manage and mentor a team of data engineers

     

    Nice-to-Have Skills

    • Power BI for dashboarding and reporting
    • Microsoft Fabric for analytics and integration tasks
    • Spark Streaming for processing real-time data streams
    • Familiarity with Azure Resource Manager (ARM) templates for infrastructure-as-code (IaC) practices

     

    Experience

    1. Demonstrated expertise of 12 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.

    2. Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.

    3. Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation.

    4. Proficiency in building and optimizing query layers using Databricks SQL.

    5. Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.

    6. Prior experience in developing, optimizing, and deploying Power BI reports.

    7. Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

     

    The candidate is required to be fluent in English and in one of the following languages: French, Portuguese, Spanish, or German.



    Undisclosed Salary

    B2B

    Please note that the data controller is LTIMindtree, with its registered office in Warsaw, ul. Rondo ONZ 1 (hereinafter the "controller…
