Common responsibilities for this position include designing, building, and maintaining resilient, scalable ETL/ELT pipelines on Google Cloud Platform (GCP) that process data and load it into BigQuery. Key duties include developing and optimizing data architectures for both real-time and batch processing, and creating flexible APIs that expose core data and functionality. The role also involves analyzing and optimizing the performance of large data workloads, implementing and automating data quality checks, and identifying opportunities for workflow automation. Collaborating with cross-functional teams on data projects and integrating generative AI models into internal tools are also essential tasks, as is continuous engagement with stakeholders to transition financial assets and data pipelines and to stay current with emerging trends in data and analytics. Finally, the role leads the exploration and implementation of innovative data solutions and ensures seamless communication across teams.
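To make the batch-load portion of these duties concrete, here is a minimal sketch of a Cloud Storage-to-BigQuery load step, assuming the google-cloud-bigquery client library and application-default credentials; the bucket, dataset, and table names are hypothetical placeholders, not part of any specific posting.

```python
# A minimal sketch of a batch ELT load step: stage a CSV in Cloud Storage,
# then load it into a BigQuery table. Names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical source file and destination table.
source_uri = "gs://example-bucket/sales/2024-01-01.csv"
table_id = "example-project.analytics.sales_daily"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,          # skip the CSV header row
    autodetect=True,              # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Start the load job and block until it completes.
load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```

In production pipelines of the kind described above, a step like this would typically be orchestrated (for example, by a scheduler) and followed by automated data quality checks rather than run by hand.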
The percentage next to each skill reflects the sector's demand for that skill: for example, 30% means the skill is listed in 30% of all job postings in this sector.
The skills distribution shows which specific skill sets are in demand: for example, skills in the "More than 50%" band appear in more than 50% of the job postings.
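A short sketch of how such per-skill percentages are computed, using invented posting data purely for illustration: each skill's demand is the share of postings that list it.

```python
# Compute the share of postings that mention each skill.
# The posting data below is invented for illustration only.
from collections import Counter

postings = [
    {"SQL", "Python", "BigQuery"},
    {"SQL", "Airflow"},
    {"Python", "BigQuery"},
    {"SQL", "Python"},
]

counts = Counter(skill for posting in postings for skill in posting)
total = len(postings)

for skill, count in counts.most_common():
    print(f"{skill}: {count / total:.0%} of postings")
# e.g. SQL appears in 3 of 4 postings, so its demand is 75%.
```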
Job classifications that have advertised a position
Academic degrees required, as indicated across all job postings
Job subclassifications that have advertised a position