Data Engineer, Information Technology
We usually respond within two weeks
Mandai Wildlife Group is the steward of Mandai Wildlife Reserve, a unique wildlife and nature destination in Singapore that is home to world-renowned wildlife parks which connect visitors to the fascinating world of wildlife. The Group is driving an exciting rejuvenation plan at Mandai Wildlife Reserve, adjacent to Singapore’s Central Catchment Nature Reserve, that will integrate five wildlife parks with distinctive nature-based experiences, green public spaces and an eco-friendly resort.
Job Duties and Responsibilities:
We are seeking an analytical and process-driven Data Engineer to serve as the backbone of our data ecosystem, with a primary focus on maximizing the value of our Snowflake and dbt stack. In this role, you will be responsible for the reliability, efficiency, and scalability of our data transformations and storage. You will partner closely with data engineers and analysts to ensure our data platform is observable, well-governed, and resilient.
The ideal candidate possesses a deep understanding of cloud data warehousing and treats "Data as Code." You will focus on automating the data lifecycle, from ingestion to production-ready models, while ensuring that our data infrastructure remains high-performing and cost-effective.
- Assist in monitoring Snowflake compute usage and managing basic user access and permission requests.
- Support the data transformation lifecycle by updating models, managing documentation, and ensuring code follows our team’s standards.
- Monitor daily data ingestion and transformation jobs, identifying failures and assisting in the rapid resolution of synchronization issues.
- Help build and maintain automated tests to verify data accuracy and freshness before it reaches our business stakeholders.
- Assist in promoting code changes from development to production environments while ensuring data integrity is maintained.
- Monitor database query logs to identify slow-performing models and work with senior team members to apply SQL optimizations.
- Help maintain the data catalog and lineage records to ensure all team members understand where our data comes from and how it is defined.
- Provide initial root-cause analysis for pipeline errors and assist data analysts with basic technical queries regarding data availability.
- Design and build new data transformation pipelines using dbt and Snowflake.
- Monitor resource utilization on the data platform and identify opportunities for optimization.
Job Requirements:
- Foundational SQL: Ability to write and understand basic to intermediate SQL queries (joins, aggregations, and subqueries).
- Exposure to dbt: A basic understanding of how dbt works or experience building simple data models in a project environment.
- Familiarity with Snowflake: Basic knowledge of cloud data warehouses; prior experience with Snowflake is a plus but not required.
- Version Control Basics: Familiarity with Git (branching, committing, and pulling code).
- Analytical Mindset: A natural curiosity for how data flows through a system and a strong attention to detail.
- Communication Skills: Ability to clearly explain technical issues and work collaboratively within a team.
- Eagerness to Learn: A proactive approach to picking up new tools and a desire to automate repetitive manual tasks.
- Division: Corporate Services
- Department: Information Technology
- Locations: Corporate Office
- Remote status: Hybrid
- Employment type: Full-time
- Function: Technology