Go Monorepo · Architecture

Hazelnut

LinkRunner's mobile attribution platform. A single Go binary tracing digital signals from ad click through multi-strategy attribution to conversion postback.

4 Services · 7 Strategies · 7 Analytics Platforms · 6 Ad Networks

System Overview

Single binary, four service modes, clean architecture with interface-driven DI.

Gateway
API Gateway
HTTP :4000 + gRPC :50051. SDK ingestion for installs, triggers, payments, events, clicks. Publishes to Kafka.
Consumers
Kafka Consumers
Attribution — installs
Events — triggers, payments
Click — click processing
Retry — failed attribution
Postback
Dispatch
gRPC :50051 + :50052. Fan-out to ad networks + analytics.
cmd/              CLI commands — gateway, consumer, postback
gateway/          HTTP server, gRPC, handlers
internal/         Private application code
├─ consumer/      attribution (72 files), events, click, analytics
├─ postback/      43 files — dispatch + ad network clients
├─ domain/        Core entities + repository interfaces
├─ service/       Business logic services
└─ store/         In-memory LRU (sub-ms clicks)
broker/           Kafka (13 files), SQS, hybrid, fallback
db/               Postgres + ClickHouse, 13 repositories
pkg/              cache (16 files), grpcserver (10), redis, uaparser
telemetry/        62 files — Zap + OTel (7 subdirs)
proto/            Protobuf definitions
schema/           Generated proto code
CLI Tools
Operations
metrics · campaigns · cache · redis-ops · dlq

Service Topology

Each service is a hazelnut subcommand sharing the same codebase.

Gateway Service
Client SDK /api/client/*
POST /init · POST /trigger · POST /capture-payment · POST /capture-event · POST /set-user-data · POST /integrations · POST /capture-page-view · POST /attribution-data
Server API /api/v1/*
POST /capture-payment
POST /capture-event
GET /attributed-users
GET /get-attribution-result

API key auth · Redis rate limited

Attribution Consumer
install-events-hazelnut
72-file pipeline. Multi-strategy attribution with 18 injected collaborators.
Events Consumer
events
Triggers, payments, custom events. Enriches, writes ClickHouse, forwards via gRPC.
Click Consumer
click-events
Dedup via Valkey, writes ClickHouse, stores Redis sorted set (30d TTL).
Retry Consumer
INSTALL_EVENTS_RETRY
Failed/lagging attribution reprocessing with configurable delay.
Postback Server
PostbackService :50051
SendInstallPostback · SendEventPostback · SendRegistrationPostback
AnalyticsService :50052
SendAnalyticsEvent
Publishes to Kafka, embedded consumer dispatches to 7 analytics platforms.

Hot Paths — Gateway → Kafka → Consumer

Every /api/client/* route is traced below from HTTP handler through Kafka to consumer processing.

Route → Topic → Consumer Map
Route | Handler | Kafka Topic | Consumer | Sync?
POST /init | handler/init.go:74 | install-events-hazelnut | Attribution | Async (202)
POST /trigger | handler/trigger.go:56 | events (ONBOARD) | Events | Semi (200)
POST /capture-payment | handler/data.go:60 | events (PAYMENT) | Events | Semi (201)
POST /capture-event | handler/data.go:204 | events (EVENT) | Events | Semi (200)
POST /capture-page-view | handler/data.go:276 | click-events (CLICK) | Click | Semi (201)
POST /deeplink-triggered | handler/client.go:36 | - | - | Sync (DB)
POST /set-user-data | handler/client.go:135 | - | - | Sync (DB)
POST /integrations | handler/client.go:229 | - | - | Sync (DB)
POST /update-push-token | handler/client.go:296 | - | - | Sync (DB)
POST /attribution-data | handler/attribution_data.go:40 | - | - | Sync (query)
POST /remove-captured-payment | handler/data.go:144 | - | - | Sync (DB)
Topic Routing Logic
File: broker/kafka/queue_publisher.go:233-249 · getTopicName() switch:
Init → topicNames.Init → "install-events-hazelnut"
Trigger → topicNames.Event → "events" (unified)
Payment → topicNames.Event → "events" (unified)
Event → topicNames.Event → "events" (unified)
Click → topicNames.Click → "click-events"
WebToApp → topicNames.WebToApp → "lr-web-to-app-clicks-events"
Config: cmd/gateway.go:179-186 builds kafka.TopicNames struct. KafkaAdapter at broker/events/adapter.go:10-50 bridges the publisher and injects W3C trace headers into Kafka message headers.
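A minimal sketch of the routing switch described above, assuming the type and struct names shown in this section; this is illustrative, not the actual queue_publisher.go code:

```go
package main

import "fmt"

// MessageType mirrors the queue-message types routed by getTopicName().
type MessageType string

const (
	TypeInit     MessageType = "INIT"
	TypeTrigger  MessageType = "ONBOARD"
	TypePayment  MessageType = "PAYMENT"
	TypeEvent    MessageType = "EVENT"
	TypeClick    MessageType = "CLICK"
	TypeWebToApp MessageType = "WEB_TO_APP"
)

// TopicNames mirrors the struct built in cmd/gateway.go.
type TopicNames struct {
	Init, Event, Click, WebToApp string
}

// getTopicName routes a message type to its Kafka topic. Trigger, Payment
// and Event all share the unified "events" topic.
func getTopicName(t MessageType, names TopicNames) string {
	switch t {
	case TypeInit:
		return names.Init
	case TypeTrigger, TypePayment, TypeEvent:
		return names.Event
	case TypeClick:
		return names.Click
	case TypeWebToApp:
		return names.WebToApp
	default:
		return names.Event // illustrative default: unified events topic
	}
}

func main() {
	names := TopicNames{
		Init:     "install-events-hazelnut",
		Event:    "events",
		Click:    "click-events",
		WebToApp: "lr-web-to-app-clicks-events",
	}
	fmt.Println(getTopicName(TypePayment, names)) // events
}
```

Grouping Trigger/Payment/Event into one case is what makes the "unified" events topic a single routing decision rather than three.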
POST /api/client/init — Install Init Async → Attribution Consumer
Handler: gateway/handler/init.go:74-272 · InitHandler.Handle()

Synchronous Steps

  1. DecodeJSON(r, &req) — JSON decode request body (line 89)
  2. Token empty check, test token bypass → returns 201 immediately (lines 95-101)
  3. middleware.GetClientIP(r.Context()) — extract IP (line 119)
  4. h.eventPublisher.IsHealthy() — Kafka health check; returns 503 if down (line 127)
  5. h.tokenService.ValidateToken(ctx, req.Token) — validate against cache/DB (line 132)
  6. Extract signature headers: X-Timestamp, X-Signature, X-Key-Id (lines 165-167)
  7. Compute contentHash from raw body, generate processingID (UUID) (lines 171-179)
  8. Infer platform from device_data.system_name if missing (lines 186-195)
  9. Build events.InstallEventMessage struct (lines 199-227)

Async Kafka Publish

h.publishInstallEventAsync() in a goroutine (line 241). Calls eventPublisher.PublishInstallEvent(ctx, msg). On failure, falls back to h.fallbackStore.Store() (Redis Streams). Side-effect: setter.SetNotPicked() sets install status in Redis (line 248).
Message struct: broker/events/messages.go:7-45 · InstallEventMessage with Request, Verification, SignatureInput, Timing fields.
Topic: "install-events-hazelnut" (configured at config/kafka.go:59)
Response: HTTP 202 Accepted with processingID
Consumer: Attribution consumer (cmd/consumer_attribution.go:66) → AttributionOrchestrator.Process()
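The publish-with-fallback pattern above can be sketched with in-memory stand-ins (the interface and struct names here are illustrative; the real fallback is Redis Streams):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
)

// Publisher and FallbackStore are trimmed stand-ins for the real
// eventPublisher and the Redis Streams fallback store.
type Publisher interface {
	PublishInstallEvent(ctx context.Context, msg []byte) error
}
type FallbackStore interface {
	Store(ctx context.Context, msg []byte) error
}

type flakyPublisher struct{ fail bool }

func (p *flakyPublisher) PublishInstallEvent(ctx context.Context, msg []byte) error {
	if p.fail {
		return errors.New("kafka unavailable")
	}
	return nil
}

type memFallback struct {
	mu    sync.Mutex
	saved [][]byte
}

func (f *memFallback) Store(ctx context.Context, msg []byte) error {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.saved = append(f.saved, msg)
	return nil
}

// publishInstallEventAsync mirrors the handler's pattern: publish in a
// goroutine; on failure, divert the message to the fallback store so the
// install is not lost while the gateway has already returned 202.
func publishInstallEventAsync(pub Publisher, fb FallbackStore, msg []byte, done chan<- error) {
	go func() {
		ctx := context.Background()
		if err := pub.PublishInstallEvent(ctx, msg); err != nil {
			done <- fb.Store(ctx, msg) // fallback path on Kafka failure
			return
		}
		done <- nil
	}()
}

func main() {
	fb := &memFallback{}
	done := make(chan error, 1)
	publishInstallEventAsync(&flakyPublisher{fail: true}, fb, []byte(`{"token":"t"}`), done)
	<-done
	fmt.Println(len(fb.saved)) // 1: message diverted to fallback
}
```

The `done` channel exists only so the sketch can observe completion; the real handler is fire-and-forget.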
POST /api/client/trigger — User Trigger/Onboard Semi-sync → Events Consumer
Handler: gateway/handler/trigger.go:56-181 · TriggerHandler.Handle()
  1. DecodeJSON(r, &req) — decode + validate token, installInstanceID (lines 68-82)
  2. Extract IP, User-Agent, userID from user_data.id (lines 85-103)
  3. Build service.TriggerRequest (lines 106-117)
  4. h.installService.ProcessTrigger(ctx, triggerReq) · Kafka publish inside (line 128)
Inside ProcessTrigger (internal/service/install.go:440-483): Builds domain.QueueMessage{Type: "ONBOARD"} with onboard data. Publishes to queue name Trigger → resolved to "events" topic.
Post-publish (sync): h.cacheUserMapping() — caches user_id → install_instance_id in Redis (line 153)
Response: HTTP 200 with deeplink + SKAN data
Consumer: Events consumer (cmd/consumer_events.go:64) — processes ONBOARD type → enrich → ClickHouse write → forward to PostbackService + AnalyticsService via gRPC
POST /api/client/capture-payment — Payment Capture Semi-sync → Events Consumer
Handler: gateway/handler/data.go:60-133 · DataHandler.CapturePayment()
  1. DecodeJSON + validateCapturePaymentRequest — checks token/API key (lines 67-75)
  2. Amount validation, default type to "DEFAULT" (lines 88-94)
  3. h.dataService.CapturePayment(ctx, req) — Kafka publish inside (line 112)
Inside CapturePayment (internal/service/data.go:264-348): Token validation → dedup check (ErrDuplicatePayment) → builds domain.QueueMessage{Type: "PAYMENT"} → publishes to "events" topic.
Response: HTTP 201 with SKAN data
Consumer: Events consumer — processes PAYMENT type → enrich with install/campaign → ClickHouse user_events_denormalized → forward postback + analytics
POST /api/client/capture-event — Custom Event Semi-sync → Events Consumer
Handler: gateway/handler/data.go:204-273 · DataHandler.CaptureEvent()
  1. DecodeJSON + validate event_name, token/API key (lines 211-225)
  2. Extract IP (line 228)
  3. h.dataService.CaptureEvent(ctx, req) — Kafka publish inside (line 243)
Inside CaptureEvent (internal/service/data.go:580-616): Token validation → builds domain.QueueMessage{Type: "EVENT"} with event_name, event_data (serialized JSON) → publishes to "events" topic.
Response: HTTP 200 with SKAN data
POST /api/client/capture-page-view — Web Page View Semi-sync → Click Consumer
Handler: gateway/handler/data.go:276-358 · DataHandler.CapturePageView()
  1. DecodeJSON + validate token, link, origin (lines 283-299)
  2. Extract IP, User-Agent (lines 302-305)
  3. h.dataService.CapturePageView(ctx, req) — Kafka publish inside (line 319)
Inside CapturePageView (internal/service/data.go:710-759): Token validation → origin check → builds domain.QueueMessage{Type: "CLICK"} → publishes to "click-events" topic.
Response: HTTP 201 with store_link
Consumer: Click consumer (cmd/consumer_click.go:41) — dedup → domain resolution → Google Web-to-App campaign creation → ClickHouse batch write → Redis click store (4 key types, 30d TTL)
Synchronous Routes (no Kafka) 5 routes — direct DB/cache
/deeplink-triggered · handler/client.go:36-103 · installService.SetDeeplinkTriggered() — Redis/DB write, no Kafka
/set-user-data · handler/client.go:135-178 · installService.UpdateUserData() — DB write with retryOnNotFound(2, 50ms)
/integrations · handler/client.go:229-279 · installService.UpsertIntegrationInfo() — CleverTap check + DB upsert
/update-push-token · handler/client.go:296-347 · installService.UpdatePushToken() — DB write with retry
/attribution-data · handler/attribution_data.go:40-99 · svc.GetAttributionData() — synchronous query, returns deeplink + campaign + source
All share: token validation via tokenService.ValidateToken(), retryOnNotFound wrapper for race conditions between init publish and consumer processing.
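The retryOnNotFound wrapper can be sketched as follows; the doc only references retryOnNotFound(2, 50ms), so the signature, the error variable, and the closure shape are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ErrInstallNotFound stands in for the domain not-found error returned while
// the init message is still in flight between gateway and consumer.
var ErrInstallNotFound = errors.New("install not found")

// retryOnNotFound retries fn only when it fails with the not-found error,
// sleeping between attempts. Any other error (or success) returns at once.
func retryOnNotFound(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i <= attempts; i++ {
		if err = fn(); err == nil || !errors.Is(err, ErrInstallNotFound) {
			return err
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryOnNotFound(2, time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return ErrInstallNotFound // consumer has not processed init yet
		}
		return nil // row exists now
	})
	fmt.Println(err, calls)
}
```

Retrying only on the not-found error is the point: it papers over the init-publish/consumer race without masking real failures.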

Attribution Pipeline Internals

72 files, 16 interfaces, 20+ injected collaborators. The most complex subsystem in the monorepo.

Phase 1 — Ingestion
Mobile SDK → /api/client/init → Validate → install-events-hazelnut → HTTP 202
Phase 2 — Attribution
Attribution Consumer → Enrich → Click Match → Strategy Select → Arbiter
Phase 3 — Persist & Dispatch
ClickHouse + Postgres → Postback → postback-logs → Ad Networks
Attribution Strategies — Priority Order
1. Google Ads · gclid / gbraid
2. Meta · install referrer
3. Apple Search · adservices token
4. TikTok · ttclid
5. Snapchat · sccid
6. Click Match · ip / fingerprint
7. Organic · fallback
Orchestrator Pipeline orchestrator.go · 20+ collaborators
File: internal/consumer/attribution/orchestrator.go
Entry point: func (o *AttributionOrchestrator) Process(ctx context.Context, msg *InstallEventMessage) error

Pipeline Phases

  1. Token → Project · o.enricher.FindProjectByToken()
  2. SDK signature verification · o.verifySignature() (consumer-side, deferred from gateway)
  3. Package integration upsert · o.enricher.FindOrCreatePackageIntegration()
  4. Dedup gate · o.checkDuplicates(): SetProcessing lock (SetNX), install status check (processing/retrying/processed), existing install lookup
  5. Data extraction · extractData(msg) → *ExtractedData: GAID, IDFA, GCLID, GBRAID, LrIaID, install referrer params via FlexString unmarshaler
  6. Click matching · o.clickMatcher.FindMatchingClick(): Redis lookups by GAID+IDFA → LrIaID → IP sorted set
  7. Data enrichment from click · enrichDataFromClick(): backfills GCLID/GBRAID from click record if missing
  8. Strategy selection · o.strategySelector.Select(): returns all strategies sorted by priority
  9. Strategy execution · o.runStrategiesParallel(): all eligible run in concurrent goroutines, arbiter resolves winner
  10. Campaign resolution · o.resolveCampaign() → findCampaignByPriority()
  11. Reinstall detection · o.reinstallDetector.Detect()
  12. Finalize · o.writeAndFinalize(): ClickHouse write → mark processed → drain lagging events → publish to downstream event handlers
On error at any phase, o.installCache.ClearProcessing() releases the lock so the retry consumer can reprocess.
Consumer Loop & Error Routing consumer.go
File: internal/consumer/attribution/consumer.go
Struct: AttributionConsumer with source Source, orchestrator, parser Parser, errorRouter ErrorRouter
Main loop (Start(ctx)): for/select polling loop driven by source.PollRecords(ctx, batchSize). 30s heartbeat ticker. 1-minute message counter reset.
Batch processing (processBatch): Bounded concurrency via sem := make(chan struct{}, workers). Each record in a goroutine with panic recovery. Tracks success/failed/skipped/retry/dlq atomically.
Per-record (processRecord): Extracts W3C trace context from Kafka headers → parser.Parse(record.Value) → orchestrator.Process(ctx, msg). On LaggingAppOpenSignal: stores lagging app open, returns "skipped". Otherwise classifies via ClassifyError(err).
Error routing (routeProcessError): ErrorClassRetryable → errorRouter.RouteRetryable() (retry topic). ErrorClassPermanent → errorRouter.RoutePermanent() (DLQ).
Flush: flushAndCommit() — flushes ClickHouse writer buffer, then commits Kafka offsets. Offsets only committed if flush succeeds.
16 Key Interfaces interfaces.go
File: internal/consumer/attribution/interfaces.go
Strategy · Name() string, Priority() int, CanHandle(msg, click) bool, Execute(ctx, msg, click) (*StrategyResult, error)
Enricher · 15 methods — FindProjectByToken, FindInstallByInstanceID, FindCampaignByID, FindNetworkAccountCredentials, etc.
ClickMatcher · FindMatchingClick(ctx, *ClickMatchParams) (*ClickMatch, error)
Writer · WriteInstall(ctx, *InstallRecord), WriteAppOpen(ctx, *AppOpenDenormalizedRecord), Flush(ctx), Close()
EventPublisher · Publish(ctx, *InstallEvent), RegisterHandler(EventHandler)
EventHandler · Name() string, Handle(ctx, *InstallEvent) error
AttributionArbiter · Resolve(results []*StrategyResult, click *ClickMatch) *ArbiterResult
InstallCache · IsProcessed, SetProcessed, IsProcessing, SetProcessing (SetNX), ClearProcessing, SetUserMapping, GetUserMapping, Delete
ErrorRouter · RouteRetryable(ctx, msg, err, retryCount), RoutePermanent(ctx, msg, err), RepublishDelayed(ctx, msg, headers)
ReinstallDetector · Detect(ctx, installInstanceID, projectID) (bool, error)
SignatureVerifier · Verify(ctx, input) error
ConfidenceCalculator · Computes confidence score for attribution result
LaggingHandler · Drains lagging events queued before attribution completed
InstallStatusCache · SetNotPicked, SetPicked, SetAttributed — Redis status tracking
CachedEnricher — Cache-Aside Pattern enricher.go
File: internal/consumer/attribution/enricher.go
Struct: CachedEnricher with repo EnricherRepository (Postgres), cache EnricherCache (Redis)
Every method follows cache-aside: check Redis → on miss, query Postgres → write back to Redis with TTL.
Project lookup · TTL: 1 hour
Install lookup · TTL: 30 minutes
Campaign lookup · TTL: 1 hour
PackageIntegration · TTL: 1 hour
IP resolution · TTL: 1 hour. Returns 0 on miss (non-fatal)
SDK credentials · TTL: 1 hour
Network credentials · No cache (tokens may rotate)
ExistsByProcessingID · Valkey-only, no Postgres fallback
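The cache-aside pattern every method follows can be sketched with an in-memory cache standing in for Redis and a closure standing in for the Postgres repo (names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Project is a trimmed entity; memCache stands in for Redis and the repo
// func for a Postgres query.
type Project struct {
	ID    int
	Token string
}

type entry struct {
	val Project
	exp time.Time
}

type memCache struct {
	mu sync.Mutex
	m  map[string]entry
}

func newMemCache() *memCache { return &memCache{m: map[string]entry{}} }

func (c *memCache) get(k string) (Project, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.m[k]
	if !ok || time.Now().After(e.exp) {
		return Project{}, false
	}
	return e.val, true
}

func (c *memCache) set(k string, p Project, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = entry{p, time.Now().Add(ttl)}
}

// findProjectByToken is cache-aside: check the cache, on miss query the repo
// and write back with a TTL (1 hour for project lookups per the table above).
func findProjectByToken(c *memCache, repo func(string) (Project, error), token string) (Project, error) {
	if p, ok := c.get("project:" + token); ok {
		return p, nil // cache hit: no DB round trip
	}
	p, err := repo(token)
	if err != nil {
		return Project{}, err
	}
	c.set("project:"+token, p, time.Hour)
	return p, nil
}

func main() {
	dbCalls := 0
	repo := func(tok string) (Project, error) { dbCalls++; return Project{ID: 7, Token: tok}, nil }
	c := newMemCache()
	findProjectByToken(c, repo, "abc")
	findProjectByToken(c, repo, "abc") // second call served from cache
	fmt.Println(dbCalls)               // 1
}
```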
Click Matcher — Redis Matching Priority click_matcher.go
File: internal/consumer/attribution/click_matcher.go
Struct: DefaultClickMatcher with clickStore ClickStoreReader, attributionStore AttributionStore
Matching priority in FindMatchingClick():
  1. GAID + IDFA (device identifier) — findOldestDeviceMatch(): queries both Redis keys, sorts by ClickedAt ASC, picks oldest lockable click
  2. LrIaID (LinkRunner attribution ID) — Redis key attribution:click:instance:{lr_ia_id}
  3. IP address — Redis sorted set attribution:click:ip:{ip}:{projectID}, oldest-first (debug mode: newest-first), limited to 10 candidates
Lock mechanism: tryLockClick() acquires Redis lock attribution:click_lock:instance:{id} with 30-day TTL. Also checks attributionStore.IsClickAttributed() for persistent dedup across restarts.
Redis Click Store Reader (redis_click_store.go): Key prefixes: attribution:click:gaid:, attribution:click:idfa:, attribution:click:instance:, attribution:click:ip:. IP lookups use ZRange (oldest) or ZRevRange (newest).
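The three-step matching priority can be sketched with injected lookup funcs standing in for the Redis reads; the lock and attribution-store checks are omitted, and the field and func names are illustrative:

```go
package main

import "fmt"

// Click and the lookups below are stand-ins for the Redis click store
// reader; the real keys are attribution:click:gaid:/idfa:/instance:/ip:.
type Click struct {
	InstanceID string
	ClickedAt  int64 // unix ms
	MatchType  string
}

type lookups struct {
	byDevice func(gaid, idfa string) []Click // gaid + idfa keys
	byLrIaID func(id string) *Click          // instance key
	byIP     func(ip string) []Click         // sorted set, oldest first
}

// findMatchingClick mirrors the documented priority: device identifiers
// first (oldest click wins), then LrIaID, then the per-IP sorted set.
func findMatchingClick(l lookups, gaid, idfa, lrIaID, ip string) *Click {
	if cs := l.byDevice(gaid, idfa); len(cs) > 0 {
		oldest := cs[0]
		for _, c := range cs[1:] {
			if c.ClickedAt < oldest.ClickedAt {
				oldest = c
			}
		}
		oldest.MatchType = "device_identifier"
		return &oldest
	}
	if c := l.byLrIaID(lrIaID); c != nil {
		c.MatchType = "lr_ia_id"
		return c
	}
	if cs := l.byIP(ip); len(cs) > 0 {
		c := cs[0] // ZRange returns oldest first
		c.MatchType = "ip_address"
		return &c
	}
	return nil
}

func main() {
	l := lookups{
		byDevice: func(_, _ string) []Click { return nil },
		byLrIaID: func(_ string) *Click { return nil },
		byIP:     func(_ string) []Click { return []Click{{InstanceID: "c1", ClickedAt: 100}} },
	}
	fmt.Println(findMatchingClick(l, "g", "i", "lr", "1.2.3.4").MatchType) // ip_address
}
```

The returned MatchType is what the ClickMatchStrategy later maps to an attribution source.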
Strategies — 6 Implementations strategies/
Directory: internal/consumer/attribution/strategies/
Selector: selector.go · DefaultStrategySelector sorts by Priority() descending. Orchestrator calls CanHandle on each, then Execute in parallel.
MetaAttribution · meta.go · Priority 100 · Decrypts Meta install referrer via ProcessMetaAdsData, extracts campaign_group_id, ad set, creative. Fallback: utm_content from link URL.
AppleSearchAds · apple.go · Priority 95 · Calls Apple AdServices API with token (apple_ads_client.go). Rejects non-"Download" conversion types. Returns ad group, keyword, creative IDs.
GoogleAds · google.go · Priority 90 · Calls Google Ads API in parallel across all network account credentials (google_ads_client.go). GCLID fallback. Web-to-App fallback via gad_campaignid in click URL.
MetaInstallReferrer · meta_referrer.go · Priority 85 · Parses meta_install_ref object directly (not from install referrer URL), decrypts utm_content.
ClickMatchStrategy · click_match.go · Priority 80 · Maps Redis click match type (device_identifier/lr_ia_id/ip_address) to attribution source. No external API calls.
OrganicStrategy · organic.go · Priority 0 · Always returns Success:true with empty attribution source. Catch-all fallback.
Arbiter — 3-Tier Resolution Logic arbiter.go
File: internal/consumer/attribution/arbiter.go
Method: func (a *DefaultAttributionArbiter) Resolve(results []*StrategyResult, click *ClickMatch) *ArbiterResult
Tier 1 (highest) · Install referrer-based — Meta always, Google with FromInstallReferrer=true. Deterministic app-store signal. Last-touch wins (most recent EngagementTime).
Tier 2 · Deterministic matches — device_identifier, lr_ia_id, ad network API. Last-touch wins.
Tier 3 (lowest) · IP-based probabilistic — ip_address match type. First-touch wins (oldest EngagementTime).
Fallback when all engagement times are zero: sourcePriority() map — meta=100, apple=95, google=90, meta_install_referrer=85, others=80.
Non-winning successful results become ContributingNetwork entries (up to 3 written to ClickHouse for multi-touch attribution reporting).
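The tier logic can be sketched as follows; the StrategyResult fields and tier rules follow the description above, while the helper names are illustrative:

```go
package main

import "fmt"

// StrategyResult is trimmed to the fields the tiers need.
type StrategyResult struct {
	Source              string
	Success             bool
	FromInstallReferrer bool
	MatchType           string // device_identifier / lr_ia_id / ip_address / ""
	EngagementTime      int64  // unix ms
}

func tier(r *StrategyResult) int {
	switch {
	case r.Source == "meta" || (r.Source == "google" && r.FromInstallReferrer):
		return 1 // install-referrer signal (deterministic app-store data)
	case r.MatchType == "ip_address":
		return 3 // probabilistic IP match
	default:
		return 2 // deterministic device / lr_ia_id / ad network API
	}
}

// resolve mirrors the documented tiers: the lowest tier number wins; within
// tiers 1-2 last-touch wins (newest engagement), within tier 3 first-touch
// wins (oldest engagement).
func resolve(results []*StrategyResult) *StrategyResult {
	var win *StrategyResult
	for _, r := range results {
		if !r.Success {
			continue
		}
		if win == nil || tier(r) < tier(win) {
			win = r
			continue
		}
		if tier(r) != tier(win) {
			continue
		}
		if tier(r) == 3 { // first-touch for the IP tier
			if r.EngagementTime < win.EngagementTime {
				win = r
			}
		} else if r.EngagementTime > win.EngagementTime { // last-touch otherwise
			win = r
		}
	}
	return win
}

func main() {
	w := resolve([]*StrategyResult{
		{Source: "google", Success: true, MatchType: "ip_address", EngagementTime: 50},
		{Source: "meta", Success: true, FromInstallReferrer: true, EngagementTime: 10},
	})
	fmt.Println(w.Source) // meta: tier 1 beats tier 3
}
```

The zero-engagement sourcePriority() fallback and ContributingNetwork collection are left out to keep the tier comparison itself visible.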
ClickHouse Writer writer.go
File: internal/consumer/attribution/writer.go
Struct: ClickHouseWriter with two buffers: buffer []*InstallRecord and appOpenBuffer []*AppOpenDenormalizedRecord. Thread-safe via sync.Mutex.
WriteInstall() and WriteAppOpen() append to buffers. Flush(ctx) drains buffer, calls conn.PrepareBatch() with massive INSERT INTO installs_denormalized (70+ columns), appends each record, then batch.Send().
click_id for ClickHouse: CRC32 of click_instance_id for Redis-matched clicks, or real Postgres Click.id for DB-matched clicks.
Downstream Event Handlers — 7 Post-Attribution handlers/
Directory: internal/consumer/attribution/handlers/
All implement EventHandler. Published to concurrently by KafkaEventPublisher (publisher.go). Handler errors logged but don't block others.
PostbackGRPCHandlerpostback_grpc.go · Condition: AdNetworkID != nil && != 0 · gRPC SendInstallPostback to postback service (fire-and-forget)
MetaCAPIHandlermeta_capi.go · Condition: Meta-attributed OR WebToApp · POST to graph.facebook.com/v23.0/{datasetID}/events
GoogleCAPIHandlergoogle_capi.go · Condition: AdNetworkID==Google + GCLID≥21 chars · POST to googleadservices.com/pagead/conversion/app/1.0 with 3 retries
TikTokHandlertiktok.go · Condition: ttclid in click · POST to business-api.tiktok.com/open_api/v1.3/event/track/
SnapchatHandlersnapchat.go · Condition: sccid in click · POST to tr.snapchat.com/v3/{pixelID}/events
AffiliateHandleraffiliate.go · Condition: CampaignID + postback URL · GET to affiliate tracking URL with 3 retries
WebhookHandlerwebhook.go · Condition: Customer webhook configured · POST JSON payload with 3 retries + Slack formatting
Common: handlers/common.go provides CredentialsFetcher, doPostResult(), fetchCreds().

Events Pipeline

Triggers, payments, custom events through enrichment to multi-platform dispatch.

Ingestion
Mobile SDK → Gateway → Kafka "events"
Processing
Events Consumer → Enrich → ClickHouse
Dispatch
Ad Network Postback + Analytics Forward + SKAN CV
Events Consumer Internals internal/consumer/events/
Consumer: internal/consumer/events/consumer.go · Consumer struct with source KafkaSource, dlq DLQPublisher, handler BatchHandler
Message types (models.go): "PAYMENT", "EVENT", "ONBOARD"
Batch flow:
  1. Pre-filter: records with RetryCount ≥ 3 → direct to DLQ
  2. handler.HandleBatch(ctx, records) → BatchResult
  3. DLQ records published. If HasRetryable, do NOT commit (Kafka redelivers).
  4. Otherwise mark all records + commit offsets.
Handler (handler.go): parseAndGroupRecords() → groups into payments, events, onboards → processPaymentGroup(), processEventGroup(), processOnboardGroup() → writeBatch() flushes to ClickHouse user_events_denormalized.

gRPC Forwarding

PostbackForwarder · grpc_postback.go · GRPCPostbackForwarder calls SendInstallPostback, SendEventPostback, SendRegistrationPostback with 30s timeout
AnalyticsForwarder · grpc_analytics.go · KafkaAnalyticsForwarder builds AnalyticsEventRequest with per-project credentials (8 platforms), marshals JSON, publishes to analytics Kafka topic
Forwarder interfaces (forwarder.go): PostbackForwarder (3 methods) + AnalyticsForwarder (ForwardAnalyticsEvent)

Click Tracking

Sub-ms response via in-memory LRU. CAS versioning prevents stale writes.

Capture & Redirect
Browser → /{domain}/{id} → LRU Resolve → Kafka "click-events" & Redirect
Async Processing
Click Consumer → Dedup → ClickHouse → Redis Click Store
Click Consumer Internals internal/consumer/click/
Consumer: internal/consumer/click/consumer.go · ClickConsumer with same poll-process-commit pattern as attribution.
Processor (processor.go): ClickProcessor with dedup Dedup, writer Writer, clickStore ClickStore, domainResolver, campaignCreator
Process(ctx, msg) pipeline:
  1. Dedup · dedup.IsDuplicate(ctx, clickInstanceID) via SETNX with TTL. Fail-open on Redis error. Optional WAL.
  2. Domain enrichment · enrichDomainID() resolves domain_id from click link hostname
  3. Google Web-to-App · enrichGoogleWebToAppCampaign() creates campaign if gad_campaignid present
  4. Build record · buildClickRecord(msg) parses UA string for device/browser fields
  5. ClickHouse buffer · writer.WriteClick(ctx, record)
  6. Redis store · clickStore.StoreClick(ctx, record) (fail-open)

Redis Click Store — 4 Key Types

File: internal/consumer/click/redis_click_store.go · RedisClickStore.StoreClick() writes to:
attribution:click:instance:{id} · Always written. SETNX (first wins). Used by LrIaID matching.
attribution:click:gaid:{gaid}:{projectID} · When GAID present. SETNX. Used by device identifier matching.
attribution:click:idfa:{idfa}:{projectID} · When IDFA present. SETNX. Used by device identifier matching.
attribution:click:ip:{ip}:{projectID} · When IP present. Sorted set (ZADD NX), score = unix ms of click time. Multiple clicks per IP tracked.
All keys have 30-day TTL. Each write has 3 retries with exponential backoff. Optional WAL writer for failed writes.

Key Interfaces

DedupIsDuplicate(ctx, clickInstanceID) (bool, error)
WriterWriteClick(ctx, *ClickRecord), Flush(ctx), Close()
ClickStoreStoreClick(ctx, *ClickRecord) error
CampaignCreatorCreateGoogleWebToAppCampaign(ctx, *WebToAppCampaignParams) (*int32, error)
In-Memory Store — LRU + CAS internal/store/ · 3 files
File: internal/store/store.go
Domains · map[string]*DomainEntry with sync.RWMutex — ~300 entries, keyed by hostname
Projects · map[int]*ProjectEntry with sync.RWMutex — similarly small
Campaigns · lru.Cache[string, *CampaignSlim] from hashicorp/golang-lru — 50K cap (of 7.7M total), fills on demand
CAS versioning — Every entry carries Version int64 (Unix nanoseconds). SetDomain, SetProject, SetCampaign all use compare-and-swap: write only lands if incoming.Version > stored.Version. Kafka consumers apply updates idempotently.
Interfaces (invalidator.go): Getter (read-only: GetDomain, GetProject, GetCampaign, SetCampaign for LRU fill) + Invalidator (write: DeleteDomain, DeleteProject, DeleteCampaign, ReloadFromLoader, Stats)
Loader (loader.go): Loads domains + projects from Postgres on startup. Campaigns NOT preloaded (7.7M too many — LRU fills on demand). Background goroutine refreshes at configurable interval. Uses time.Now().UnixNano() as version on full reload so CAS always succeeds.
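The CAS rule can be sketched for the domain map; the entry shape and store are trimmed, so treat the names as illustrative:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// DomainEntry carries a Version (unix nanoseconds) used for compare-and-swap.
type DomainEntry struct {
	Hostname string
	Version  int64
}

type domainStore struct {
	mu sync.RWMutex
	m  map[string]*DomainEntry
}

func newDomainStore() *domainStore { return &domainStore{m: map[string]*DomainEntry{}} }

// SetDomain applies the CAS rule: a write lands only if the incoming version
// is newer than the stored one, so out-of-order Kafka updates are idempotent.
func (s *domainStore) SetDomain(e *DomainEntry) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if cur, ok := s.m[e.Hostname]; ok && e.Version <= cur.Version {
		return false // stale update: ignore
	}
	s.m[e.Hostname] = e
	return true
}

func (s *domainStore) GetDomain(host string) (*DomainEntry, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	e, ok := s.m[host]
	return e, ok
}

func main() {
	s := newDomainStore()
	now := time.Now().UnixNano()
	s.SetDomain(&DomainEntry{Hostname: "go.example.com", Version: now})
	ok := s.SetDomain(&DomainEntry{Hostname: "go.example.com", Version: now - 1})
	fmt.Println(ok) // false: older version rejected
}
```

Using time.Now().UnixNano() on full reload (as the loader does) guarantees the reload version exceeds every previously applied Kafka update.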

Infrastructure

Postgres for OLTP, ClickHouse for OLAP, Kafka for messaging, 4-instance Redis.

PostgreSQL 16 · OLTP · 13 repos
ClickHouse 24.3 · OLAP Analytics
Apache Kafka · 6 topics + DLQ
Redis / Valkey · 4 instances
PostgreSQL 16
OLTP Store
projects · campaigns · installs · clicks · events · payments · user_identities · skan_config
sqlx + pgx/v5 · Connection pooling · Parameterized queries
ClickHouse 24.3
OLAP Analytics
installs_denormalized · user_events_denormalized · clicks · campaign_metrics · aggregates · retention
clickhouse-go/v2 · Native protocol · Decimal support
Apache Kafka — Topics
install-events-hazelnut (+DLQ, +Retry) · events (+DLQ) · click-events (+DLQ, +Retry) · postback-logs (+DLQ) · analytics-events (+DLQ) · api-metrics
franz-go · SASL · Circuit breaker · Redis Streams fallback
Redis / Valkey — 4 Instances
Single Valkey :6379 in dev. Separate DragonflyDB / Redis in prod.
Instance | DB | Purpose | Engine | Status
Gateway Cache | 0 | Domain/token/project lookups | DragonflyDB | Fatal
Enricher | 1 | Attribution state, dedup, install status | DragonflyDB | Graceful
Lock Store | 2 | Distributed locks (SETNX + Lua) | Redis | Non-fatal
Click Store | 3 | Click sorted sets (30d TTL) | DragonflyDB | Fatal
Kafka Broker Internals 13 files · 5 interfaces
Source · PollRecords, MarkRecords, CommitOffsets — abstracts Kafka consumption
KafkaPublisher · Publish, PublishBatch, HealthCheck, IsHealthy
CircuitBreaker · Per-topic: AllowRequest, RecordSuccess/Failure, IsOpen
Producer — Singleton with atomic state machine (Disconnected→Connecting→Connected). Proxy-aware dialing (HTTP CONNECT + SOCKS5 + NO_PROXY). SASL auth. OTel trace context injection into Kafka headers.
CircuitBreakerPublisher — Decorator. When open, fails fast with ErrCircuitBreakerOpen and triggers background Reconnect().
QueuePublisher — Routes by type: Init→install-events-hazelnut, Click→click-events, Trigger/Payment/Event→unified events topic.
Cache Abstractions 16 files · 4 interfaces
Cache · Core: Get, Set, SetBatch, Delete, SwapData, Metrics
ValkeyCache · Distributed: install dedup, identity processing, throttle (SetNX)
CacheWarmer · WarmCache(ctx), ReloadCache(ctx) — startup pre-loading
InMemoryCache · atomic.Pointer[sync.Map] for lock-free reads. SwapData builds new map, atomic pointer swap — zero cache misses during reload.
RedisCache · SetBatch uses chunked pipelines (50 keys/chunk) with 5 retries + exponential backoff. SwapData overwrites with 180-day TTL instead of FLUSHDB (no thundering herd).
Key namespacing · domain:, project_data:, token:, attr:install:, rate_limit_notif:
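The lock-free swap behind InMemoryCache can be sketched directly with the stdlib types named above (the methods are trimmed to the two that show the technique):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// InMemoryCache uses atomic.Pointer[sync.Map]: readers never take a lock,
// and SwapData replaces the whole map in one pointer swap so there is no
// window of cache misses during a reload.
type InMemoryCache struct {
	data atomic.Pointer[sync.Map]
}

func NewInMemoryCache() *InMemoryCache {
	c := &InMemoryCache{}
	c.data.Store(&sync.Map{})
	return c
}

func (c *InMemoryCache) Get(key string) (any, bool) {
	return c.data.Load().Load(key)
}

// SwapData builds a fresh map off to the side, then publishes it atomically;
// in-flight readers keep using the old map until their Load completes.
func (c *InMemoryCache) SwapData(entries map[string]any) {
	m := &sync.Map{}
	for k, v := range entries {
		m.Store(k, v)
	}
	c.data.Store(m)
}

func main() {
	c := NewInMemoryCache()
	c.SwapData(map[string]any{"domain:a.example": 1})
	v, ok := c.Get("domain:a.example")
	fmt.Println(v, ok)
	c.SwapData(map[string]any{"domain:b.example": 2})
	_, ok = c.Get("domain:a.example")
	fmt.Println(ok) // false: old map fully replaced
}
```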
Database Repositories 13 files
CampaignRepository · GetByID, GetOrganicCampaign (find-or-create pattern)
DomainRepository · In-process cache with sync.RWMutex + 180-day TTL. Caches nil results too.
InstallRepository · GetByInstallInstanceID, Create (RETURNING id)
EnricherRepository · Richest — pre-joined data with ProjectAnalyticsConfig (credentials for 8 platforms)
UserIdentityRepository · GetByCustomerUserID, HasOnboardForInstall
Patterns — All use sqlx with GetContext/SelectContext. sql.ErrNoRows → domain ErrXxxNotFound. Prisma-style quoted table names ("Campaign", "Install").

External Integrations

6 ad networks for postback dispatch. 7 analytics platforms for event forwarding.

Ad Networks
Google Ads · Conversion API
Meta CAPI · Conversions API
Apple Search · Attribution API
TikTok · Events API
Snapchat · Conversions API
Webhooks · Custom + Affiliate
Analytics Platforms
Mixpanel · Track Events
Amplitude · HTTP V2
PostHog · Capture API
CleverTap · Events API
MoEngage · Data API
Braze · Track API
GA4 / Firebase · Measurement
Postback Dispatch Internals internal/postback/ · 43 files

gRPC → Kafka → Consumer Flow

gRPC Server (internal/consumer/postback/server.go): Receives SendInstallPostback, SendEventPostback, SendRegistrationPostback → serializes as PostbackMessage JSON envelope (types.go) → publishes to postback-logs Kafka topic.
Consumer (internal/consumer/postback/consumer.go): Polls postback-logs. Records whose retry count reaches maxRetriesBeforeDLQ=3 go to the DLQ. processRecord(): unmarshal → postbackMessageToUserEvent() → dispatch() fans out to ALL registered handlers concurrently (30s timeout per handler).

Service Layer

PostbackService (internal/postback/service.go + service_impl.go): For event postbacks, looks up PostbackEventMap entries for project+event → creates PostbackMessage per mapped network → Google: synchronous HTTP, others: batch-publish to Kafka via AsyncPostbackPublisher.

Ad Network Clients

All implement AdNetworkClient (Name() string, SendEvent(ctx, *AdNetworkEvent) (*AdNetworkResult, error)). Created in factory.go via createAdNetworkClients().
googleClient · google_client.go · googleadservices.com/pagead/conversion/app/1.0 — synchronous, developer token, DB access for link ID resolution
metaClient · meta_client.go · graph.facebook.com/v23.0 CAPI — POST with data array, dataset_access_token auth
tiktokClient · tiktok_client.go · business-api.tiktok.com/v1.3/event/track/ — event name mapping (Download, CompleteRegistration, Purchase)
snapchatClient · snapchat_client.go · tr.snapchat.com/v3/{pixelId}/events — pixel ID + access token in URL path
affiliateClient · affiliate_client.go · GET-based URL template substitution per network

Cross-Cutting Concerns

Circuit breaker · circuit_breaker_client.go — wraps any AdNetworkClient, keyed by inner.Name()
Rate limiting · rate_limiter.go — per-project/per-network via Redis
Dedup · dedup.go — Redis SET NX for message-level dedup
Audit log · ch_writer.go — per-attempt postback logs to ClickHouse
Cached repos · cached_repos.go — Redis caching over NetworkAccount + PostbackEventMap repos
DLQ replayer · dlq_replayer.go — replays DLQ messages on demand
Analytics Consumer Internals internal/consumer/analytics/ · 15 files
Consumer: internal/consumer/analytics/consumer.go · AnalyticsConsumer reads AnalyticsEventRequest protobuf from Kafka.
Dispatch pattern: dispatch() runs ALL 7 handlers in parallel goroutines. Only fails if ALL handlers fail. Individual failures logged but don't block. Panic recovery per goroutine. Bounded concurrency via semaphore (default 16). RetryCount ≥ 3 → DLQ.
gRPC Server (server.go): AnalyticsGRPCServer implements AnalyticsServiceServer. Incoming RPCs serialize to JSON and publish to analytics Kafka topic — the consumer does the actual fan-out.

Handler Interface

handlers/common.go: AnalyticsHandler { Name() string; Handle(ctx, *AnalyticsEventRequest) error }
Shared doPost() helper: JSON POST with retry (exponential backoff), 5xx retried, 4xx returns nil (non-retryable).
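The retry semantics of that helper can be sketched with an injected send func so no live endpoint is needed; the signature and backoff placement are assumptions, only the 5xx-retry/4xx-drop rule comes from the text:

```go
package main

import (
	"errors"
	"fmt"
)

// doPost mirrors the shared helper's semantics: 5xx responses and network
// errors are retried, 4xx responses return nil (non-retryable: resending a
// bad payload will never succeed), 2xx succeeds.
func doPost(attempts int, send func() (status int, err error)) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		status, err := send()
		switch {
		case err != nil:
			lastErr = err // network error: retry
		case status >= 500:
			lastErr = fmt.Errorf("server error %d", status) // 5xx: retry
		case status >= 400:
			return nil // 4xx: drop silently, do not retry
		default:
			return nil // 2xx success
		}
		// the real helper sleeps with exponential backoff here
	}
	return errors.Join(errors.New("retries exhausted"), lastErr)
}

func main() {
	calls := 0
	err := doPost(3, func() (int, error) {
		calls++
		if calls < 3 {
			return 503, nil
		}
		return 200, nil
	})
	fmt.Println(err, calls)
}
```

Returning nil on 4xx matches the handlers' skip-silently behavior: a misconfigured credential should not keep a message cycling through retries.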

7 Platform Handlers

AmplitudeHandler · amplitude.go — HTTP identify call, sets lr_campaign/lr_ad_network via $setOnce
BrazeHandler · braze.go — Two-step: resolve braze_device_id → braze_id, then Users Track API
CleverTapHandler · clevertap.go — Upload API with attribution properties
MixpanelHandler · mixpanel.go — Track API
PostHogHandler · posthog.go — Capture API
MoEngageHandler · moengage.go — Customer API
GoogleAnalyticsHandler · google_analytics.go — GA4 Measurement Protocol
Each handler skips silently (return nil) if credentials missing. Per-message credentials from AnalyticsEventRequest take precedence over static env vars. All wrapped with per-handler circuit breakers (circuit_breaker.go).

Observability

62 files across 7 subdirectories. Full-stack tracing, logging, metrics.

Telemetry Pipeline
Application → OTel SDK → Collector → ClickStack
Tracing
OTel spans for all I/O. Semantic conventions. W3C context across services + Kafka headers.
Logging
Zap + OTel bridge. Trace IDs in every entry. Dynamic levels. Custom ZapCore injects span context.
Metrics
OTel + Prometheus. Exemplar support for trace→metric links. Per-layer metric definitions.
Telemetry Package Internals 62 files · 7 subdirs
correlation/ · CorrelationContext extracts TraceID/SpanID. CorrelatedLogger auto-injects trace fields. ExemplarRecorder attaches trace context to metric points. ZapCore custom core.
instrumentation/ · BrokerInstrumentation (inject/extract W3C in Kafka headers), CacheInstrumentation, DBInstrumentation, ServiceInstrumentation
metrics/ · 7 metric scopes: Gateway, Consumer, Cache, DB, Handler, HTTP, Business
logger/ · Environment-aware. File logging. Dynamic level management.
otel/ · Resilient exporters: resilient_trace.go, resilient_metric.go, resilient_log.go
Observer facade — Unified structured log methods for dashboards: LogAttribution (P0 alert source), LogDLQ (P2 alert), LogConsumerHeartbeat (30s liveness — P0 on stall), LogCacheStats, LogRedisHealth

Clean Architecture

Interface-driven DI. Accept interfaces, return structs. Dependencies flow inward.

Layer 1
HTTP Handlers / gRPC Servers
Request parsing, validation, response formatting
gateway/server/ · gateway/grpc/ · pkg/grpcserver/
Layer 2
Services (Business Logic)
Orchestration, strategy selection, enrichment
internal/service/ · internal/consumer/attribution/ · internal/postback/
Layer 3
Repositories
Database queries, cache ops, message publishing
db/repository/ · db/clickhouse/ · pkg/cache/
Layer 4
Domain
Pure entities, interfaces, zero dependencies
internal/domain/events/
gRPC Server Internals 10 files
Interceptor chain (both unary + stream): Recovery → OTel spans+metrics → Zap logging → Auth hook. Log level follows gRPC status: OK=Info, client errors=Warn, server errors=Error.
HealthChecker · HealthCheck(ctx) error — registered per service name, empty name checks ALL
ServerOption · WithLogger, WithTracer, WithMeter, WithAuthInterceptor, WithHealthChecker
Config · Max msg size, concurrent streams, keepalive, TLS (opt-in), reflection (disable in prod)
spf13/cobra · CLI
go-chi/chi · HTTP
jmoiron/sqlx · Postgres
jackc/pgx/v5 · Driver
clickhouse-go/v2 · ClickHouse
twmb/franz-go · Kafka
uber-go/zap · Logging
otel · Observability
google/grpc · gRPC

Middleware Stack

Carefully ordered interceptor chains for HTTP and gRPC.

HTTP Global
Recovery → RequestID → RealIP → OTel HTTP → Request Logger → IP Extractor → Body Parser → API Metrics
Rate Limiting
Token-based (in-memory) for /api/client. Redis (IP + key) for /api/v1.
API Key Auth
Server-side /api/v1 routes. Project-scoped validation.
Signature Verify
HMAC-SHA256 SDK request signatures.
Cache API Key
X-Cache-API-Key for admin endpoints.
gRPC Interceptors
Recovery → OTel Tracing → OTel Metrics → Zap Logging → Auth Hook
Auth hook auto-skips /grpc.health.v1.Health/*. Log level follows gRPC status code.