Job details
| Location: | New South Wales |
| Salary: | $150,000 - $160,000 per annum |
| Job Type: | Permanent |
| Discipline: | |
| Reference: | V-139576 |
| Posted: | about 1 hour ago |
Job description
What you’ll be doing
- Build and maintain production-grade data pipelines using SQL, Python and dbt
- Design and optimise analytics-ready data models in Snowflake or BigQuery
- Own dbt projects end-to-end: models, tests, documentation and deployments
- Implement and maintain CI/CD pipelines for data workflows (Git-based version control, automated testing, promotion between environments)
- Work closely with analysts and downstream users to turn raw data into reliable, well-modelled datasets
- Monitor pipeline performance, data freshness and failures; fix issues before users feel them
- Improve data quality through testing, observability and better modelling patterns
- Contribute to platform standards and engineering best practices
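Owning a dbt project end-to-end, as described above, centres on models like the following minimal sketch (the source, table and column names are hypothetical):

```sql
-- models/staging/stg_orders.sql — a hypothetical dbt staging model
-- that cleans a raw source table into an analytics-ready dataset
select
    order_id,
    customer_id,
    cast(order_date as date) as order_date,
    amount
from {{ source('raw', 'orders') }}
```

In a real project this model would be paired with tests (e.g. `unique` and `not_null` on `order_id`) and documentation declared in a `schema.yml` alongside it, covering the "models, tests, documentation" part of the role.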
Tech stack
- SQL (advanced)
- Python
- dbt (essential)
- Snowflake or BigQuery
- Git + CI/CD (e.g. GitHub Actions, GitLab CI, Azure DevOps)
- Cloud data platforms (AWS / GCP / Azure)
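Git-based CI/CD for dbt, as listed above, often looks something like this hypothetical GitHub Actions workflow (the file name, target name and adapter choice are assumptions, not a prescribed setup):

```yaml
# .github/workflows/dbt.yml — sketch of CI for a dbt project:
# on every pull request, build the models and run their tests
name: dbt-ci
on:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake   # or dbt-bigquery
      - run: dbt deps
      - run: dbt build --target ci       # runs models and tests together
```

Promotion between environments is then typically handled by running the same project against different targets.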
What we’re looking for
- Deep SQL capability and real-world Python usage (not just scripts)
- Proven experience delivering production dbt projects
- Experience working with large datasets in Snowflake or BigQuery
- Understanding of CI/CD concepts applied to data (testing, versioning, deployment)
- Comfortable working in a collaborative, engineering-led environment
- Pragmatic mindset: you care about reliability, performance and maintainability
Nice to have
- Orchestration tools (Airflow, Prefect, Dagster)
- Data quality / observability tooling
- Experience in enterprise-scale data platforms
- Exposure to security, access controls and governance in cloud data environments
Why Join?
- Work on a modern, cloud-based data platform
- Influence how data is used across a market-leading business
- Flexible working and strong focus on wellbeing
- Long-term career growth in a stable, purpose-driven business