About the job
In this role, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migration and modernization projects, partnering with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform and product challenges. You will apply your understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops that educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering teams to build our products and continuously drive excellence in them.
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities:
- Interact with stakeholders to translate complex customer requirements into recommendations for appropriate solution architectures and advisory services.
- Engage with technical leads and partners to drive high-velocity migration and modernization to Google Cloud Platform (GCP).
- Design, migrate/build, and operationalize data storage and processing infrastructure using cloud-native products.
- Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data (a sketch of such a check follows this list).
- Organize various project requirements into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders.
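The data quality responsibility above lends itself to automated checks built into the pipeline itself. Below is a minimal sketch, assuming PySpark; the `orders` table, its column names, the storage path, and the thresholds are hypothetical stand-ins, not part of this role's actual stack.

```python
# A minimal data-quality gate for a batch pipeline (illustrative only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical input location and schema.
orders = spark.read.parquet("gs://example-bucket/orders/")

total = orders.count()
checks = {
    # No order may be missing its primary key.
    "order_id_not_null": orders.filter(F.col("order_id").isNull()).count() == 0,
    # Amounts must be non-negative.
    "amount_non_negative": orders.filter(F.col("amount") < 0).count() == 0,
    # At most 1% of rows may lack a customer reference.
    "customer_id_mostly_present":
        orders.filter(F.col("customer_id").isNull()).count() <= 0.01 * total,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Fail the run loudly so bad data never reaches downstream consumers.
    raise ValueError(f"Data-quality checks failed: {failed}")
```

Failing the run outright, rather than letting suspect rows flow downstream, is the usual design choice for governance-critical pipelines.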
Minimum qualifications:
- Bachelor's degree in Computer Science, Mathematics, related technical field, or equivalent practical experience.
- 6 years of experience in developing and troubleshooting data processing algorithms and software using Python, Java, Scala, Spark, and Hadoop frameworks.
- Experience writing SQL against standard commercial databases (e.g., Teradata, MySQL), including subqueries, multi-table joins, and multiple join types (see the sketch after this list).
- Experience with distributed data processing frameworks and modern investigative and transactional data stores.
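As an illustration of the SQL depth named above, here is a minimal sketch run through Spark SQL, tying the Spark and SQL qualifications together; the `customers` and `orders` tables and their columns are hypothetical.

```python
# Multi-table SQL with a LEFT JOIN and a pre-aggregating subquery,
# executed on Spark (illustrative tables only).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-joins").getOrCreate()

spark.createDataFrame(
    [(1, "Ada"), (2, "Grace"), (3, "Edsger")], ["customer_id", "name"]
).createOrReplaceTempView("customers")
spark.createDataFrame(
    [(10, 1, 250.0), (11, 1, 40.0), (12, 2, 99.5)],
    ["order_id", "customer_id", "amount"],
).createOrReplaceTempView("orders")

# LEFT JOIN keeps customers with no orders; the subquery aggregates
# per-customer spend before joining.
spark.sql("""
    SELECT c.name, COALESCE(t.total_spend, 0) AS total_spend
    FROM customers c
    LEFT JOIN (
        SELECT customer_id, SUM(amount) AS total_spend
        FROM orders
        GROUP BY customer_id
    ) t ON c.customer_id = t.customer_id
    ORDER BY total_spend DESC
""").show()
```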
Preferred qualifications:
- Experience with data warehouses, including technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures.
- Experience in Big Data, information retrieval, data mining, or Machine Learning, including building applications with NoSQL stores (e.g., MongoDB), Spark ML, and TensorFlow.
- Experience architecting and developing software for internet-scale, production-grade Big Data solutions.
- Experience with encryption techniques (e.g., symmetric, asymmetric, HSMs, envelope encryption) and with implementing secure key storage using a Key Management System (a sketch of envelope encryption follows this list).
- Experience with infrastructure-as-code (IaC) and CI/CD tools such as Terraform, Ansible, and Jenkins.
- Knowledge of distributed data processing frameworks and modern investigative and transactional data stores, with the ability to write complex SQL queries.
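For the envelope-encryption qualification above, here is a minimal sketch in Python using the `cryptography` package; the locally generated key-encryption key (KEK) is a stand-in for a key that would normally live inside a managed Key Management System or HSM and never leave it.

```python
# Envelope encryption: encrypt data with a per-object data-encryption
# key (DEK), then wrap the DEK with a key-encryption key (KEK).
from cryptography.fernet import Fernet

# In production the KEK is held by the KMS/HSM; generating it locally
# here is purely illustrative.
kek = Fernet(Fernet.generate_key())

# 1. Generate a fresh DEK for this object.
dek_bytes = Fernet.generate_key()
dek = Fernet(dek_bytes)

# 2. Encrypt the payload with the DEK (symmetric encryption).
ciphertext = dek.encrypt(b"sensitive customer record")

# 3. Wrap the DEK with the KEK; store only the wrapped copy
#    alongside the ciphertext, never the plaintext DEK.
wrapped_dek = kek.encrypt(dek_bytes)

# To decrypt: unwrap the DEK with the KEK, then decrypt the payload.
recovered_dek = Fernet(kek.decrypt(wrapped_dek))
assert recovered_dek.decrypt(ciphertext) == b"sensitive customer record"
```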
Benefits:
- Health and Wellbeing (Medical, dental, and vision insurance for employees and dependents)
- Financial wellbeing (Competitive compensation, regular bonus and equity refresh opportunities)
- Flexibility and time off (Paid time off, including vacation, bereavement, jury duty, sick leave, parental leave, disability, and holidays)
- Family support and care (Fertility and growing family support, parental leave and baby bonding leave)
- Community and personal development (Educational reimbursement)
- Googley extras (Inspiring spaces to work, recharge, and collaborate with fellow Googlers)