Sr. Data Engineering Lead

About Us:

Centelon is a technology products and services company with an unflinching passion to help clients innovate, adapt, and succeed. At Centelon, we harness the power of AI, digital, and other emerging technologies to help our clients adapt to the evolving world. Centelon is a trusted partner of large and mid-size businesses in the Financial Services, Media, Logistics, and Energy & Utilities industries. We have offices in Australia, Singapore, and India.

Who are you? 

  • Ambitious: You are eager to develop your skills and progress your career at every opportunity in the rapidly evolving data ecosystem.
  • Mateship: You thrive as part of high-performing teams and love sharing your knowledge with your peers and clients.
  • Bias to Action: You seize opportunities when you see them and use them for personal, professional, and business growth.
  • Passionate: You are passionate about data and love delivering high-performing data solutions for our clients.

What is the opportunity? 

We are searching for a Data Engineering Consultant to strengthen our capabilities in the following areas:

  • Data movement is about bringing the right data to the right places, whether through data integration tools or data processing frameworks for real-time or batch integrations.
  • Datastores are about finding a home for your data, ranging from databases (relational or NoSQL) and warehouses to cloud-based data platforms.
  • Data modelling is the act of exploring data-oriented structures and architecting data systems for performance.

Expected Technical Skills: 

  • 5+ years of experience in building and optimizing data pipelines and architectures in high availability environments
  • Strength in SQL, data modelling, and ETL development
  • Ability to extract data from multiple sources using tools such as Kafka, Sqoop, and NiFi
  • Demonstrated experience working in the Hadoop ecosystem with formats such as Parquet and Avro
  • Experience working with databases such as Postgres and MongoDB
  • Ability to write complex Hive queries
  • Experience in data transformation using Spark
  • Experience with pipeline orchestration tools such as Oozie and Airflow
  • Experience with AWS services such as EMR, RDS, and Redshift is an added advantage