
Intermediate Data Engineer
 Location: Menlyn, Pretoria
 Industry: Financial Services
 Level: Intermediate Data Engineer
 Salary: Market related
 Benefits:

Flexi hours (start at 6 am and leave by 15:00, or similar) / Guaranteed 13th cheque / Monthly incentive bonuses / Provident fund

About the Company

Founded in 1996, this Pretoria-based financial services organisation has grown into one of South Africa’s most trusted and widely accessible micro-finance providers. With 216 branches and over 1,500 employees nationwide, the business is built around one mission: empowering modest-income, formally employed South Africans through responsible credit solutions and community-centred service delivery.

As the organisation expanded, it made a strategic shift from outsourced IT to a fully in-house technology capability. Today, the internal IT and Development division comprises 20 professionals and is scaling rapidly as part of a major digital transformation initiative. Their goal is to modernise core systems, broaden digital channels, and seamlessly integrate branch and online experiences into a unified customer ecosystem.

For engineers joining this team, it’s an opportunity to build meaningful products at scale—spanning back-end systems, customer platforms, API-driven integrations, cloud infrastructure, and data-centric solutions that directly impact how financial services are delivered across South Africa.

Job Overview

The Intermediate Data Engineer plays a central role in designing, building, and maintaining the data infrastructure that underpins analytics, reporting, and daily operational processes. Sitting between junior and senior engineering levels, this role blends hands-on development with deeper problem-solving, performance optimisation, and the delivery of reliable, scalable data solutions.

You’ll collaborate closely with Data Analysts, BI Developers, Software Engineers, and business stakeholders to ensure the organisation’s data is accurate, timely, secure, and consistently available for decision-making.

Duties and Responsibilities

Data Pipeline Development (30%)

  • Design, build, and maintain scalable ETL/ELT data pipelines.
  • Automate data ingestion from internal systems, cloud platforms, APIs, and third-party providers.
  • Optimise pipelines for performance, reliability, and cost efficiency.

Data Modelling & Architecture (25%)

  • Develop and maintain conceptual, logical, and physical data models.
  • Implement star and snowflake schemas for reporting and analytics.
  • Contribute to the design and build of data warehouses, data lakes, and lakehouse environments.

Database Development & Management (20%)

  • Work with relational and NoSQL databases including SQL Server, MySQL, PostgreSQL, MongoDB, and ClickHouse.
  • Write efficient SQL queries, stored procedures, views, and scripts.
  • Perform database optimisation, indexing, and performance tuning.

Data Quality & Governance (5%)

  • Implement and maintain data quality checks, validation processes, and monitoring mechanisms.
  • Ensure alignment with governance, data privacy, and security requirements.
  • Participate in metadata management and proper documentation.

Collaboration & Support (5%)

  • Engage with Business Analysts and product teams to understand data needs.
  • Assist business users by troubleshooting data-related issues.
  • Support junior engineers through guidance and code reviews.

Automation & DevOps Practices (5%)

  • Apply CI/CD principles to data workflows.
  • Implement automated testing and deployment practices for data pipelines.

Performance Optimisation (10%)

  • Improve processing speed, scalability, and resource usage.
  • Optimise queries, data storage formats, and partitioning strategies.
  • Identify bottlenecks and recommend architectural improvements.

Minimum Requirements

  • 3–5 years of hands-on experience in Data Engineering with proven pipeline development and optimisation.
  • Advanced Python and scripting skills (non-negotiable).
  • Strong proficiency in SQL for data manipulation, modelling, and backend optimisation.
  • Experience working with SQL Server, MySQL, and ClickHouse.
  • Experience with custom Python-based ETL/ELT and scripting.
  • Experience using Dagster (or similar tools) for pipeline orchestration.
  • Exposure to AWS and Azure cloud environments.
  • Solid understanding of data modelling principles and schema design.
  • Knowledge of Big Data frameworks (Spark, Hadoop, ClickHouse, etc.).
  • Strong analytical and problem-solving ability.
  • Comfortable operating independently and managing multiple priorities.
  • Clear communication skills and the ability to collaborate within cross-functional teams.

Advantageous

  • Bachelor’s degree in Computer Science, IT, Engineering, or related field.
  • Experience in fintech, micro-lending, credit risk, or transactional systems.
  • Exposure to event-driven data architectures (Kafka, Kinesis).
  • Experience preparing data for machine learning workflows.
  • Understanding of data security and PII protection best practices.
  • Familiarity with dashboards (Power BI, Tableau).

How to Apply?

This is not just another Data Engineering role; it’s an opportunity to join a fast-evolving organisation at the forefront of digital change in South Africa’s financial sector.
You’ll be part of a team that’s building new patterns, modern architectures, and direct bank integrations while leveraging automation and AI to improve service delivery.

Be part of a company with deep roots, strong stability, and genuine transformation momentum, where your code directly shapes how thousands of South Africans access financial solutions every day.

Email your CV to charne@skillzpage.com

(If you don’t hear from us within 2 weeks, please consider your application unsuccessful.)