
Senior/Lead Big Data Engineer (Java)

Location: Wrocław
Type of work: Undetermined
Experience: Senior
Employment type: B2B
Operating mode: Remote

Tech stack

    Java 8+ (advanced)
    SQL (advanced)
    Kafka (regular)
    Elasticsearch (regular)
    Kudu (regular)
    Spark (regular)
    Data Lakes (regular)
    Data Pipelines (regular)
    ZooKeeper (junior)

Job description

Online interview
WE ARE
A growing team at SoftServe working with a global financial services company (an investment bank).

Our client provides investment management and related services worldwide. In Europe, the Middle East, and Africa, the customer has been delivering services for over one hundred years to a broad range of clients seeking access to global capital markets.

Technology is a strategic focus for our customer. They are investing in infrastructure, a data management platform, and applications for business-line needs, as well as building additional capabilities. The customer uses technology to automate tasks, increasing productivity and improving efficiency. They are also exploring artificial intelligence and predictive analytics for fraud detection in both internal and client-facing processes.

The company is headquartered in New York, with regional HQs in London and other countries, and development centres in New York (USA), Wrocław (Poland), and elsewhere.

Our team is growing, and we are looking for an experienced Big Data Software Engineer to join an amazing multinational team.

The objective is to create core components in Java frameworks and to build ETL batch processors, MQ adapters, UI screens (Angular), distribution components (Kafka), and high-performance data processors.
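
As a rough sketch of the Kafka piece only (the broker address, topic name, and payload below are placeholders, not details of this project), a distribution component comes down to a producer publishing records to a topic:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TradeEventPublisher {
        public static void main(String[] args) {
            // Minimal producer configuration; the broker address is a placeholder
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // try-with-resources flushes and closes the producer on exit
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical topic, key, and payload, for illustration
                producer.send(new ProducerRecord<>("trade-events", "trade-42", "{\"qty\": 100}"));
            }
        }
    }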

YOU ARE
A professional willing to write clean, correct, and efficient code, with the skills listed below:

  • More than 5 years of experience with Java 8+
  • Solid understanding of data structures, algorithms, and OO design
  • Deep understanding of big data technologies
  • Strong grasp of Data Lake concepts, data partitioning, and security
  • Knowledge of designing Data Pipelines
  • Understanding of Data Lineage strategies
  • Familiarity with the basics of Master Data Management
  • Experience with Data Quality methods
  • Hands-on SQL skills and experience with different DB engines
  • Strong familiarity with Kafka, Apache Kudu, Elasticsearch, Apache Storm, ZooKeeper, and Spark
  • Willingness and ability to pick up new technologies quickly
  • Ability to break down complex problems into simple solutions
  • Skilled at team cooperation
  • Experience in the finance domain and an understanding of financial terms (ideally a background in finance/regulatory reporting) is an advantage
  • Upper-Intermediate English for everyday communication with the client

YOU WANT TO WORK WITH
  • Java 8 & 11
  • Data Lake, Data partitioning and security
  • Designing Data Pipelines
  • Designing Data Lineage strategies
  • Master Data Management
  • Implementing Data Quality methods
  • Different DB engines such as Elasticsearch and Oracle
  • Denodo, Pentaho
  • Logstash, Apache Spark, IBM MQ
  • Kafka, Apache Storm, Apache Kudu, ZooKeeper
  • DevOps principles and CI/CD pipelines built in GitLab
  • Finance domain

TOGETHER WE WILL
  • Increase the robustness of the existing Enterprise Data Platform (Data Lake) so it serves as a one-stop shop for all data across the firm, together with the applications integrated with it (today, over 160 applications are connected and 22 lines of business use the platform)
  • Use the Lake to replace point-to-point system integration with a more consolidated data source and distribution platform, which the Enterprise Data Platform is designed to enable
  • Work with the multiple Transaction & Position systems feeding data into the Platform; going into next year, the plan is to onboard hundreds of data sources and leverage the pipeline to standardize and enrich data before it is made available for client consumption from the canonical system
  • Ensure the data ingestion solution streamlines the metadata collection process and vastly improves the quality of the metadata collected and ultimately stored in the enterprise data catalog
  • Execute and deliver the solution to meet ingestion demands