Location: Cracow (hybrid, in the office once a month)
Contract: B2B
The Data Engineer role involves the development of critical data pipelines and services integral to our Generative AI models. Successful candidates will be proficient in Python and familiar with both on-premises Unix/Linux environments and Azure Cloud. The position is ideal for detail-oriented individuals who thrive in dynamic settings and possess a strong foundation in data processes.
Main Responsibilities:
Key responsibilities include:
Creating agents that source data from various systems.
Building data transfer pipelines.
Developing microservices that integrate with Generative AI solutions (see the sketch after this list).
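For illustration only, below is a minimal sketch of the kind of FastAPI microservice this role involves: a single endpoint that forwards a prompt to a Generative AI backend. The endpoint path, the request model, and the call_generative_model helper are hypothetical placeholders, not a description of our actual stack.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class PromptRequest(BaseModel):
    prompt: str


def call_generative_model(prompt: str) -> str:
    # Hypothetical placeholder for a real Generative AI API call.
    return f"echo: {prompt}"


@app.post("/generate")
def generate(request: PromptRequest) -> dict:
    # Forward the prompt to the model backend and return its completion.
    return {"completion": call_generative_model(request.prompt)}

Such a service would be run with an ASGI server, for example "uvicorn main:app" if the file is saved as main.py.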
Key Requirements:
Candidates must possess the following skills and qualifications:
Experience with Python and on-premises Unix/Linux environments.
Experience with time-series/analytics databases such as Elasticsearch.
Experience with Azure Cloud.
Experience building RESTful APIs with Python and FastAPI.
Experience with Generative AI APIs.
Experience building data models and pipelines for Retrieval-Augmented Generation (see the sketch after this list).
Experience building microservices.
Familiarity with industry-standard version control tools (Git, GitHub) and deployment tools (Ansible & Jenkins).
Basic shell-scripting proficiency.
Understanding of big data modeling techniques using relational and non-relational approaches.
Self-starter, proactive, and team-oriented.
Willingness to learn and adapt to changing requirements.
Experience with and understanding of the Software Development Lifecycle (SDLC).
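For illustration only, the sketch below shows the retrieval step of a Retrieval-Augmented Generation pipeline backed by Elasticsearch, roughly the kind of data work described above. It assumes the Elasticsearch 8.x Python client, a local cluster, and an index named "documents" with a "body" field; all of these names are assumptions made for the example, not details of our systems.

from elasticsearch import Elasticsearch

# Assumed local cluster; in practice connection details would come from configuration.
es = Elasticsearch("http://localhost:9200")


def retrieve_context(query: str, size: int = 5) -> list[str]:
    # Return the top matching document bodies, which a RAG pipeline would
    # prepend to the prompt sent to a Generative AI model.
    response = es.search(
        index="documents",
        query={"match": {"body": query}},
        size=size,
    )
    return [hit["_source"]["body"] for hit in response["hits"]["hits"]]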
Nice to Have:
Preferred skills and qualifications include:
Understanding of or experience with cloud design patterns.
Great communication skills.