The Role and Opportunity
We are looking for a Data Engineer to join our leading client, where you will be responsible for building and optimising data pipelines, enabling high-quality data ingestion, and supporting the build of their cloud data platform.
Key Responsibilities
- Design, build, and maintain scalable data pipelines using Python and PySpark
- Develop data models and transformation frameworks that enable reporting and analysis
- Work closely with data scientists, analysts, and engineers to ensure seamless data flow
- Support data governance, lineage, and quality initiatives across the platform
- Optimise data storage and processing performance across large datasets
Skills & Experience
- Proven experience as a Data Engineer
- Strong hands-on experience with Python and PySpark
- Solid understanding of distributed data processing and ETL concepts
- Familiarity with Databricks, Azure Data Factory, Snowflake, or similar tools is a plus
- Strong problem-solving, analytical thinking, and communication skills
Does this sound like the role for you, and would you like to know more? Or would you simply like to hear about more opportunities in the data space? If so, submit your application using the link, or for a confidential discussion, please contact Peter Sing on 021 233 1007.
Under the provisions of the Privacy Act 2020, you have the right to access and request the correction of information held by us concerning you. We will retain all information for future vacancies (permanent or contract). Should you wish Momentum Consulting Group to delete this information from our database, we shall require written notification to do so, subject to any legal obligations that require us to retain such information.

