Overview
We are a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Role Description
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
Responsibilities
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR (an illustrative Airflow sketch follows this list).
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the Iceberg sketch after this list).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations (a minimal data-quality sketch follows this list).
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
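To make the orchestration responsibility concrete, here is a minimal sketch of a daily Airflow DAG that triggers an existing Glue job. It assumes a recent Airflow 2.x with the Amazon provider package installed; the DAG, task, and Glue job names are hypothetical placeholders, not part of our actual platform:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="nightly_batch_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # batch cadence; streaming paths run elsewhere
    catchup=False,
) as dag:
    run_txn_etl = GlueJobOperator(
        task_id="run_txn_etl",
        job_name="nightly_txn_etl",    # assumes this Glue job already exists
        wait_for_completion=True,      # surface Glue success/failure to Airflow
    )
```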
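For the table-design responsibility, a minimal PySpark sketch of creating a partitioned, zstd-compressed Iceberg table registered in the Glue catalog. The catalog alias, database, table, and bucket names are hypothetical, and the Iceberg Spark runtime and AWS bundle jars are assumed to be on the classpath:

```python
from pyspark.sql import SparkSession

# Session wired to the Glue Data Catalog through Iceberg's SparkCatalog.
# "glue_catalog" and the warehouse bucket are placeholder names.
spark = (
    SparkSession.builder.appName("iceberg-ddl")
    .config("spark.sql.catalog.glue_catalog",
            "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.io-impl",
            "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://example-data-lake/warehouse/")
    .getOrCreate()
)

# Hidden partitioning on event time keeps query SQL free of
# partition-column predicates while still pruning files in S3.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.analytics.payments (
        payment_id  STRING,
        merchant_id STRING,
        amount      DECIMAL(18, 2),
        event_ts    TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
    TBLPROPERTIES (
        'write.parquet.compression-codec' = 'zstd',
        'format-version' = '2'
    )
""")
```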
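And for the observability responsibility, a minimal data-quality check using the legacy pandas API of Great Expectations (pre-1.0 releases; newer versions restructure this interface). The column names and thresholds are illustrative only:

```python
import pandas as pd
import great_expectations as ge

# Toy batch standing in for a real extract; columns are illustrative.
batch = pd.DataFrame(
    {"payment_id": ["p1", "p2", None], "amount": [125.00, 9.99, -3.50]}
)

gdf = ge.from_pandas(batch)
gdf.expect_column_values_to_not_be_null("payment_id")
gdf.expect_column_values_to_be_between("amount", min_value=0)

result = gdf.validate()   # runs both expectations against the batch
print(result.success)     # False here: one null id, one negative amount
```

In practice a failed validation like this would feed the alerting path described above rather than a print statement.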
Requirements
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Employment Details
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology and Engineering
Industries: Financial Services, Software Development, and IT Services and IT Consulting