Job details
Reference: Ad-44260
Posted: about 4 years ago
Job description
Senior Data Engineer
- 6 Months contract
- Melbourne CBD location
- $600+ per day (Negotiable)
Skills & Experience required:
- Hadoop stack: Hive, Spark (Core and Streaming), Kafka, Flink, NiFi, etc.
- Stream transformation using Spark Streaming or Flink
- Change data capture in a big data ecosystem
- Building real-time or batch ingestion and transformation pipelines
- Exposure to container technology (e.g. Docker, Kubernetes)
- Advanced programming knowledge (e.g. Core Java, Python, and Scala)
- Experience wrangling data using libraries such as Pandas, Scikit-learn, or NumPy
- Experience using notebooks such as Jupyter or Polyglot
- Experience with test-driven development, test automation & continuous delivery
- Experience with Java build automation technologies such as Gradle and Maven, and integration with SonarQube
- Experience supporting a production service in a DevOps friendly environment
- Excellent analytical skills and proven track record solving difficult problems
- Experienced in Agile / Scrum projects
- Experience designing and developing data reconciliation and metadata-driven data processing frameworks
- Background in API and cloud-native architecture (good to have)
Necessary Qualifications:
- Education level: degree in Computer Science or a related field
- A minimum of 6 to 8 years of industry experience
- AWS certification would be advantageous
- Candidates must hold a relevant visa, Australian PR, or Australian citizenship
- Understanding of the Financial Services industry
How to Apply
Click APPLY or contact Rishi at rishib@charterhouse.com.au for a confidential discussion
www.charterhouse.com.au