Islamabad, Pakistan | Posted on 05/20/2025
DPL is one of the leading software development and IT companies worldwide. Established in 2003, DPL serves clients across major regions, with a focus on Europe, the Middle East, and the Americas. The company is headquartered in Islamabad, Pakistan, with regional offices in the USA and Sweden.
DPL pioneered Agile practices in Pakistan and fosters an innovation-driven culture. Recognized globally for its workplace environment, the company promotes a flat organizational structure and a holacratic approach to encourage employee engagement and innovation.
Our diverse client portfolio includes industries such as Healthcare, Fintech, Automotive, Mobility, Telco, Education, Media, and E-commerce. Our services encompass Digital Transformation, Product Engineering, IT Strategy & Consulting, and Custom Software Development.
Job Description
We are seeking a Data Engineer who recognizes that each data point represents a human decision. Your role involves building and maintaining the data infrastructure that supports our lending models and operational processes, primarily on AWS. You will also support the machine learning lifecycle, helping data scientists deploy, monitor, and retrain models with the right data at the right time.
This position offers a unique opportunity to contribute to building the financial backbone of one of Africa’s most ambitious digital banks.
Key Responsibilities
- Build & Maintain Pipelines: Develop and operate ETL/ELT pipelines using AWS Glue, Lambda, Athena, and Step Functions. Your work will support reporting and real-time decision-making (see the pipeline sketch after this list).
- Curate a Robust Data Lakehouse: Structure and maintain our data lake with proper partitioning, schema evolution, metadata tagging, and access control across multiple jurisdictions (a partitioning sketch follows below).
- Support the MLOps Lifecycle: Collaborate with data scientists to deploy models using SageMaker Pipelines, update the Feature Store, and set up triggers for model retraining (see the Feature Store sketch below).
- Ensure Precision & Integrity: Monitor pipeline outputs and data quality dashboards to ensure all data is accurate, traceable, and reproducible.
- Automate, Audit & Secure: Use infrastructure-as-code tools such as Pulumi or Terraform to build reproducible infrastructure, implementing logging, versioning, and KMS-encrypted security practices (see the Pulumi sketch below).
- Collaborate with Impact: Work across analytics, engineering, and credit teams to understand operational needs and turn them into data pipelines and products.
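For candidates wondering what this looks like day to day, here is a minimal sketch of an event-driven ingestion step: a Lambda handler that starts a Glue job whenever a raw file lands in S3. The bucket layout, the raw-to-parquet job name, and its arguments are hypothetical illustrations, not our actual stack.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Triggered by S3 object-created events; kicks off a (hypothetical)
    # Glue job that converts raw JSON drops into partitioned Parquet.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="raw-to-parquet",  # hypothetical job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
```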
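Curating the lakehouse largely means writing well-partitioned Parquet so Athena can prune partitions per jurisdiction. A minimal PySpark sketch, assuming hypothetical S3 paths and that the raw records already carry country and ingest_date columns:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("curate-lake").getOrCreate()

# Read semi-structured JSON drops from the (hypothetical) raw zone.
raw = spark.read.json("s3://example-raw-zone/transactions/")

# Write Parquet partitioned by jurisdiction and ingestion date.
(raw.write
    .mode("append")
    .partitionBy("country", "ingest_date")
    .parquet("s3://example-curated-zone/transactions/"))
```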
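On the MLOps side, updating the Feature Store can be as simple as ingesting a DataFrame into an existing feature group. A sketch using the SageMaker Python SDK; the borrower-features group and its columns are hypothetical:

```python
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()

# Assumes a feature group named "borrower-features" already exists
# with matching feature definitions and an event-time column.
fg = FeatureGroup(name="borrower-features", sagemaker_session=session)

df = pd.DataFrame({
    "borrower_id": ["b-001"],
    "repayment_rate": [0.97],
    "event_time": [pd.Timestamp.utcnow().timestamp()],
})

fg.ingest(data_frame=df, max_workers=2, wait=True)
```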
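And for the infrastructure-as-code piece, a minimal Pulumi (Python) sketch of a versioned, KMS-encrypted lake bucket; resource names are illustrative:

```python
import pulumi_aws as aws

# Customer-managed KMS key with rotation enabled for the data lake.
lake_key = aws.kms.Key(
    "lake-key",
    description="Data lake encryption key",
    enable_key_rotation=True,
)

# Versioned S3 bucket that encrypts every object with the key above.
lake_bucket = aws.s3.Bucket(
    "data-lake",
    versioning={"enabled": True},
    server_side_encryption_configuration={
        "rule": {
            "apply_server_side_encryption_by_default": {
                "sse_algorithm": "aws:kms",
                "kms_master_key_id": lake_key.arn,
            },
        },
    },
)
```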
Required Skills & Experience
- Ownership mindset (willing to take responsibility)
- Willingness to learn
- 4+ years in data engineering or cloud data architecture roles
- Solid experience with the AWS data stack: S3, Glue, Athena, Lambda, Step Functions
- Proficiency in SQL, Python, and PySpark
- Experience with data lake architecture and handling semi-structured data (Parquet, JSON)
- Experience with MLOps, model deployment pipelines, and monitoring
- Exposure to infrastructure-as-code (Pulumi preferred, Terraform acceptable)
- Knowledge of secure data handling and anonymization best practices
Nice to Have
- Experience with event-driven data flows (e.g., Kafka, EventBridge)
- Familiarity with SageMaker Feature Store and SageMaker Pipelines
- Background in financial services, credit scoring, or mobile money ecosystems
- Passion for building ethical, inclusive financial systems