
    Lead Big Data Engineer

    SoftServe
    Wrocław
    Type of work: Full-time
    Experience: Senior
    Employment Type: B2B, Permanent
    Operating mode: Remote

    Tech stack

      • Redshift: advanced
      • Azure: advanced
      • Python: advanced
      • AWS: advanced

    Job description

    Online interview
    Friendly offer

    Caution: working hours are 5 PM to 1 AM


    WE ARE

    SoftServe is a global digital solutions company headquartered in Austin, Texas, founded in 1993. Our associates are currently working on 2,000+ projects with clients in the USA, Europe, and the APAC region. We are about people who create bold things, make a difference, have fun, and love their work.

    Our client is a pro-consumer financial technology innovator aiming to reinvent personal banking for the masses. It is one of the largest providers of reloadable prepaid debit cards and cash reload processing services in the United States, as well as the largest U.S. processor of tax refund disbursements. The client's products and services are available to consumers through a large-scale "branchless bank" distribution network of more than 100,000 U.S. retail and neighborhood financial service center locations, online, in the leading app stores, and through 25,000 tax preparation offices and leading online tax preparation providers.

    The corporation is headquartered in Austin, TX, with additional facilities throughout the United States and in Shanghai, China.


    IF YOU ARE

    • Holding a Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field
    • Possessing 8+ years of overall software development experience with data warehouse and business intelligence applications
    • Having 5+ years of hands-on development experience in a Cloud-based data warehouse platform – a combination of Azure/AWS Databricks, AWS Redshift, Snowflake or similar
    • Experienced with relational databases (MS SQL Server, PostgreSQL, Oracle, etc.), with complex stored procedures and functions using SQL, and with optimizing SSIS packages and Redshift database queries
    • Skilled in a combination of Python, Windows batch scripting, or Unix shell scripting
    • Experienced in developing with ETL tools (Informatica, Talend, Azure Data Factory, etc.) and Databricks
    • Knowledgeable and experienced in OLAP data modeling & design
    • An expert in performance tuning in various DB environments with large volumes of data
    • Used to building and scaling distributed, highly available systems in a cloud environment (Azure / AWS)
    • An expert at understanding complex business needs and analyzing, designing, and developing solutions
    • Possessing great communication skills and the ability to navigate relationships across business units


    AND YOU WANT TO

    • Architect and maintain the Operational data stores (ODS), Enterprise Data Warehouse (EDW), and associated data intelligence environments including data lakes, data marts, and metadata repositories
    • Design and implement performant data movement pipelines using multiple ETL tools with batch and real-time event data streams (Databricks, Informatica, Apache Kafka, Azure Event Hubs, AWS Kinesis, SQL packages, etc.)
    • Develop scalable and reusable frameworks for ingestion of structured and semi-structured data, and implement various large-scale settlement and reconciliation frameworks
    • Gather business and information requirements from various product owners across various functional units
    • Design and create data architecture and data flow diagrams and collaborate with peers to translate requirements into logical and physical data models
    • Conduct design review sessions and communicate design artifacts with stakeholders
    • Ensure database features and capabilities are incorporated into data model designs to optimize performance, resiliency, and scalability
    • Deliver and present proofs of concept of key technology components to project stakeholders and internal technology leaders
    • Lead and coordinate the design and operationalization of Master Data Management and Data Governance initiatives including custom-developed solutions
    • Work with other members of the project team to support the delivery of additional project components associated with the cloud data platform
    • Troubleshoot data warehouse and ODS application issues and production errors, including high-priority critical production issues that require immediate attention
    • Perform other duties as assigned including major new project initiatives


    TOGETHER WE WILL

    • Support your technical and personal growth – we have a dedicated career plan for all roles in our company
    • Investigate new technologies, build internal prototypes, and share knowledge with the SoftServe Big Data Community
    • Gain access to unlimited upskill opportunities with full access to Udemy learning courses
    • Pursue professional certifications, encouraged and covered by the company
    • Assimilate best practices from experts, working in a team of top-notch Engineers and Architects
    • Take on diverse business and technology challenges, driving multiple projects and initiatives as part of the Center of Excellence