External Integrations & Async Processing Pipeline

An event-driven pipeline connecting lead data to external partners — insurance providers, financial advisors, and third-party CRMs — with reliable delivery guarantees.

Node.js · TypeScript · GCP Pub/Sub · GCP Cloud Tasks · PostgreSQL · Webhook system · Dead-letter queues

Duration: 12 months
Team: 5 engineers
Role: Senior Frontend / Integration Lead

Overview

This pipeline handles the delivery of processed lead data to a range of external partners. Each partner has different integration requirements: some accept REST webhooks, others require file-based batch delivery or proprietary API formats. The pipeline abstracts this complexity behind a unified internal interface.
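As a rough illustration of what "unified internal interface" can mean here, partner-specific transport details can be modeled as a discriminated union, so the rest of the pipeline handles one representation regardless of transport. All field names below are hypothetical, not the actual config schema:

```typescript
// Hypothetical partner configuration shapes; every field name here is
// illustrative, not the production schema.
type PartnerConfig =
  | { kind: "rest-webhook"; url: string; authHeader: string }
  | { kind: "batch-file"; sftpHost: string; remoteDir: string }
  | { kind: "proprietary-api"; endpoint: string; format: "xml" | "csv" };

// One function works for every transport, which is the point of hiding
// the per-partner differences behind a single internal representation.
function describeDelivery(config: PartnerConfig): string {
  switch (config.kind) {
    case "rest-webhook":
      return `POST to ${config.url}`;
    case "batch-file":
      return `upload to ${config.sftpHost}:${config.remoteDir}`;
    case "proprietary-api":
      return `${config.format} payload to ${config.endpoint}`;
  }
}
```

The discriminated union gives exhaustive compile-time checking: adding a new transport kind forces every switch over `config.kind` to handle it.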

Problem

Integrations with external partners were implemented ad hoc — synchronous HTTP calls made directly from the intake flow. If a partner's API was slow or unavailable, it blocked lead processing entirely. Error handling was inconsistent, there was no retry mechanism, and failed deliveries were silently dropped with no audit trail.

Solution

We extracted all external delivery logic into an async processing pipeline. When a lead is ready to be forwarded, an event is published to a dedicated Pub/Sub topic. A dispatcher service reads the event, resolves the appropriate delivery adapter for the target partner, and executes the delivery attempt. Failed attempts are retried with exponential backoff via Cloud Tasks, with a dead-letter queue capturing undeliverable events for manual review.
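The retry decision described above (exponential backoff up to a maximum attempt count, then dead-letter) can be sketched as a small pure function. The policy parameters and names below are illustrative assumptions, not the production values; in the real pipeline the computed delay would be handed to Cloud Tasks as the task's schedule time:

```typescript
// Hypothetical retry policy; the concrete numbers are illustrative.
interface RetryPolicy {
  baseDelayMs: number; // delay before the first retry
  maxDelayMs: number;  // cap on any single delay
  maxAttempts: number; // beyond this, the event is dead-lettered
}

// Exponential backoff: base * 2^(attempt - 1), capped at maxDelayMs.
function backoffMs(attempt: number, policy: RetryPolicy): number {
  const delay = policy.baseDelayMs * 2 ** (attempt - 1);
  return Math.min(delay, policy.maxDelayMs);
}

// What to do after a failed delivery attempt.
type NextStep =
  | { kind: "retry"; delayMs: number }
  | { kind: "dead-letter" };

function onFailure(attempt: number, policy: RetryPolicy): NextStep {
  if (attempt >= policy.maxAttempts) return { kind: "dead-letter" };
  return { kind: "retry", delayMs: backoffMs(attempt, policy) };
}
```

Keeping this logic pure makes the schedule trivially unit-testable, independent of the queueing infrastructure.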

Your Role

I defined the integration adapter interface, implemented several partner-specific adapters, and built the admin tooling for monitoring delivery status and replaying failed events. I also collaborated with the DevOps engineer to define the infrastructure-as-code setup for the retry queues and alerting thresholds.

Architecture

  • Dispatcher service (Node.js / TypeScript): Routes delivery events to the correct adapter based on partner config
  • Integration adapters: Per-partner modules implementing a shared interface (REST, SFTP, batch file)
  • GCP Cloud Tasks: Manages retry scheduling with configurable backoff and max attempts
  • Dead-letter queue (Pub/Sub): Captures events that exceed retry limits for manual inspection
  • Delivery log (PostgreSQL): Full record of every delivery attempt, status, and response payload
  • Admin dashboard (React): Internal tool to view delivery status, retry failures, and search by lead ID
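A minimal sketch of how the dispatcher and the shared adapter interface can fit together, under assumed names (`DeliveryAdapter`, `Dispatcher`, and the event fields are all hypothetical, not the actual codebase's identifiers):

```typescript
// Hypothetical event and result shapes; field names are illustrative.
interface DeliveryEvent {
  leadId: string;
  partnerId: string;
  payload: Record<string, unknown>;
}

interface DeliveryResult {
  ok: boolean;
  responseBody?: string; // persisted to the delivery log
}

// The shared interface every per-partner module implements,
// whether the transport underneath is REST, SFTP, or batch file.
interface DeliveryAdapter {
  readonly partnerId: string;
  deliver(event: DeliveryEvent): Promise<DeliveryResult>;
}

// The dispatcher only routes; it never knows the transport details.
class Dispatcher {
  private adapters = new Map<string, DeliveryAdapter>();

  register(adapter: DeliveryAdapter): void {
    this.adapters.set(adapter.partnerId, adapter);
  }

  resolve(partnerId: string): DeliveryAdapter {
    const adapter = this.adapters.get(partnerId);
    if (!adapter) throw new Error(`no adapter registered for ${partnerId}`);
    return adapter;
  }

  async dispatch(event: DeliveryEvent): Promise<DeliveryResult> {
    return this.resolve(event.partnerId).deliver(event);
  }
}
```

This shape is what makes partner onboarding additive: a new partner means one new `DeliveryAdapter` implementation plus a `register` call, with no change to the dispatcher or retry machinery.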

Impact

The async model eliminated synchronous blocking during partner outages. Delivery reliability improved substantially — the dead-letter queue replaced silent failures with a recoverable, inspectable state. Onboarding a new external partner now requires adding a single adapter module rather than modifying any core pipeline logic, reducing integration effort per partner significantly.