Experience architecting, designing, and delivering data pipeline solutions on cloud platforms such as Azure, AWS, or Snowflake. Certifications are a plus!
Experience implementing data lake solutions
Experience with Databricks Delta Lake is an added advantage
Solid foundation in data warehousing (DW) and business intelligence (BI) design, tools, processes, and implementation approaches
Data Warehousing, Business Intelligence or Data Analytics project experience
Hands-on experience with data processing frameworks (e.g. Spark) using a range of programming languages (Python, Java, Scala, SQL).
Experience building schedule-driven workflows, working with serialisation formats, data modelling, and architecting for performance.
The following technologies and skills are also desirable:
Experience with JIRA, Confluence and Bamboo
Experience with Git (or appropriate source control) and scripting (e.g. Python/Unix shell)
Data modelling skills (e.g. Kimball, Data Vault)
Experience in leading projects or units of work
ETL/ELT tools (e.g. Azure Data Factory, AWS Glue, Spark)