LinkRunner's mobile attribution platform. A single Go binary tracing digital signals from ad click through multi-strategy attribution to conversion postback.
Single binary, four service modes, clean architecture with interface-driven DI.
Service modes:

- **Gateway** — HTTP :4000 + gRPC :50051. SDK ingestion for installs, triggers, payments, events, clicks. Publishes to Kafka.
- **Postback + Analytics** — gRPC :50051 + :50052. Fan-out to ad networks + analytics.

Repository layout:

- `cmd/` — CLI commands: gateway, consumer, postback
- `gateway/` — HTTP server, gRPC, handlers
- `internal/` — private application code
  - `consumer/` — attribution (72 files), events, click, analytics
  - `postback/` — 43 files: dispatch + ad network clients
  - `domain/` — core entities + repository interfaces
  - `service/` — business logic services
  - `store/` — in-memory LRU (sub-ms clicks)
- `broker/` — Kafka (13 files), SQS, hybrid, fallback
- `db/` — Postgres + ClickHouse, 13 repositories
- `pkg/` — cache (16 files), grpcserver (10), redis, uaparser
- `telemetry/` — 62 files: Zap + OTel (7 subdirs)
- `proto/` — Protobuf definitions
- `schema/` — generated proto code
Utility subcommands: metrics, campaigns, cache, redis-ops, dlq. Each service is a `hazelnut` subcommand sharing the same codebase.
### HTTP API

`/api/client/*` — POST /init · POST /trigger · POST /capture-payment · POST /capture-event · POST /set-user-data · POST /integrations · POST /capture-page-view · POST /attribution-data

`/api/v1/*` — POST /capture-payment · POST /capture-event · GET /attributed-users · GET /get-attribution-result

API key auth · Redis rate limited
### gRPC

- :50051 — SendInstallPostback · SendEventPostback · SendRegistrationPostback
- :50052 — SendAnalyticsEvent

### Request Flows

Every /api/client/* route is traced from HTTP handler through Kafka to consumer processing. File paths are clickable references.
| Route | Handler | Kafka Topic | Consumer | Sync? |
|---|---|---|---|---|
| POST /init | handler/init.go:74 | install-events-hazelnut | Attribution | Async (202) |
| POST /trigger | handler/trigger.go:56 | events (ONBOARD) | Events | Semi (200) |
| POST /capture-payment | handler/data.go:60 | events (PAYMENT) | Events | Semi (201) |
| POST /capture-event | handler/data.go:204 | events (EVENT) | Events | Semi (200) |
| POST /capture-page-view | handler/data.go:276 | click-events (CLICK) | Click | Semi (201) |
| POST /deeplink-triggered | handler/client.go:36 | — | — | Sync (DB) |
| POST /set-user-data | handler/client.go:135 | — | — | Sync (DB) |
| POST /integrations | handler/client.go:229 | — | — | Sync (DB) |
| POST /update-push-token | handler/client.go:296 | — | — | Sync (DB) |
| POST /attribution-data | handler/attribution_data.go:40 | — | — | Sync (query) |
| POST /remove-captured-payment | handler/data.go:144 | — | — | Sync (DB) |
`broker/kafka/queue_publisher.go:233-249` — `getTopicName()` switch:

| Message type | Topic |
|---|---|
| Init | topicNames.Init → "install-events-hazelnut" |
| Trigger | topicNames.Event → "events" (unified) |
| Payment | topicNames.Event → "events" (unified) |
| Event | topicNames.Event → "events" (unified) |
| Click | topicNames.Click → "click-events" |
| WebToApp | topicNames.WebToApp → "lr-web-to-app-clicks-events" |
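The routing above is a plain switch over the message type. A minimal Go sketch follows; the `MessageType` constants and `TopicNames` struct here are illustrative stand-ins, not the real declarations from broker/kafka:

```go
package main

import "fmt"

// TopicNames mirrors the configured topic-name struct (illustrative fields).
type TopicNames struct {
	Init, Event, Click, WebToApp string
}

// MessageType is an illustrative enum for publishable message kinds.
type MessageType int

const (
	Init MessageType = iota
	Trigger
	Payment
	Event
	Click
	WebToApp
)

// getTopicName resolves a message type to its Kafka topic. Trigger, Payment,
// and Event all collapse onto the unified events topic, as the table shows.
func getTopicName(t MessageType, n TopicNames) string {
	switch t {
	case Init:
		return n.Init
	case Trigger, Payment, Event:
		return n.Event // unified "events" topic
	case Click:
		return n.Click
	case WebToApp:
		return n.WebToApp
	default:
		return ""
	}
}

func main() {
	names := TopicNames{
		Init:     "install-events-hazelnut",
		Event:    "events",
		Click:    "click-events",
		WebToApp: "lr-web-to-app-clicks-events",
	}
	fmt.Println(getTopicName(Payment, names)) // prints "events"
}
```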
`cmd/gateway.go:179-186` builds the kafka.TopicNames struct. KafkaAdapter at broker/events/adapter.go:10-50 bridges the publisher and injects W3C trace headers into Kafka message headers.

#### POST /api/client/init — Install Init (Async → Attribution Consumer)

Handler: gateway/handler/init.go:74-272 — InitHandler.Handle()

1. DecodeJSON(r, &req) — JSON decode request body (line 89)
2. middleware.GetClientIP(r.Context()) — extract IP (line 119)
3. h.eventPublisher.IsHealthy() — Kafka health check; returns 503 if down (line 127)
4. h.tokenService.ValidateToken(ctx, req.Token) — validate against cache/DB (line 132)
5. Capture signature headers X-Timestamp, X-Signature, X-Key-Id (lines 165-167)
6. Compute contentHash from raw body, generate processingID (UUID) (lines 171-179)
7. Default device_data.system_name if missing (lines 186-195)
8. Build events.InstallEventMessage struct (lines 199-227)
9. h.publishInstallEventAsync() in a goroutine (line 241). Calls eventPublisher.PublishInstallEvent(ctx, msg). On failure, falls back to h.fallbackStore.Store() (Redis Streams). Side-effect: setter.SetNotPicked() sets install status in Redis (line 248).

Message: broker/events/messages.go:7-45 — InstallEventMessage with Request, Verification, SignatureInput, Timing fields. Topic: "install-events-hazelnut" (configured at config/kafka.go:59), keyed by processingID. Consumer: cmd/consumer_attribution.go:66 → AttributionOrchestrator.Process()

#### POST /api/client/trigger — User Trigger/Onboard (Semi-sync → Events Consumer)

Handler: gateway/handler/trigger.go:56-181 — TriggerHandler.Handle()

1. DecodeJSON(r, &req) — decode + validate token, installInstanceID (lines 68-82)
2. Validate user_data.id (lines 85-103)
3. Build service.TriggerRequest (lines 106-117)
4. h.installService.ProcessTrigger(ctx, triggerReq) — Kafka publish inside (line 128)

Service (internal/service/install.go:440-483): builds domain.QueueMessage{Type: "ONBOARD"} with onboard data. Publishes to queue name Trigger → resolved to the "events" topic. h.cacheUserMapping() caches user_id → install_instance_id in Redis (line 153). Consumer (cmd/consumer_events.go:64) processes the ONBOARD type → enrich → ClickHouse write → forward to PostbackService + AnalyticsService via gRPC.

#### POST /api/client/capture-payment — Payment Capture (Semi-sync → Events Consumer)

Handler: gateway/handler/data.go:60-133 — DataHandler.CapturePayment()

1. DecodeJSON + validateCapturePaymentRequest — checks token/API key (lines 67-75)
2. Defaults to "DEFAULT" when unset (lines 88-94)
3. h.dataService.CapturePayment(ctx, req) — Kafka publish inside (line 112)

Service (internal/service/data.go:264-348): token validation → dedup check (ErrDuplicatePayment) → builds domain.QueueMessage{Type: "PAYMENT"} → publishes to the "events" topic. Consumer: writes to user_events_denormalized → forwards to postback + analytics.

#### POST /api/client/capture-event — Custom Event (Semi-sync → Events Consumer)

Handler: gateway/handler/data.go:204-273 — DataHandler.CaptureEvent()

1. DecodeJSON + validate event_name, token/API key (lines 211-225)
2. h.dataService.CaptureEvent(ctx, req) — Kafka publish inside (line 243)

Service (internal/service/data.go:580-616): token validation → builds domain.QueueMessage{Type: "EVENT"} with event_name, event_data (serialized JSON) → publishes to the "events" topic.

#### POST /api/client/capture-page-view — Web Page View (Semi-sync → Click Consumer)

Handler: gateway/handler/data.go:276-358 — DataHandler.CapturePageView()

1. DecodeJSON + validate token, link, origin (lines 283-299)
2. h.dataService.CapturePageView(ctx, req) — Kafka publish inside (line 319)

Service (internal/service/data.go:710-759): token validation → origin check → builds domain.QueueMessage{Type: "CLICK"} → publishes to the "click-events" topic. Consumer (cmd/consumer_click.go:41): dedup → domain resolution → Google Web-to-App campaign creation → ClickHouse batch write → Redis click store (4 key types, 30d TTL).

#### Synchronous routes (no Kafka)

| Route | Handler → service call |
|---|---|
| /deeplink-triggered | handler/client.go:36-103 → installService.SetDeeplinkTriggered() — Redis/DB write, no Kafka |
| /set-user-data | handler/client.go:135-178 → installService.UpdateUserData() — DB write with retryOnNotFound(2, 50ms) |
| /integrations | handler/client.go:229-279 → installService.UpsertIntegrationInfo() — CleverTap check + DB upsert |
| /update-push-token | handler/client.go:296-347 → installService.UpdatePushToken() — DB write with retry |
| /attribution-data | handler/attribution_data.go:40-99 → svc.GetAttributionData() — synchronous query, returns deeplink + campaign + source |
All sync routes share tokenService.ValidateToken() and a retryOnNotFound wrapper that handles race conditions between the init publish and consumer processing.

### Attribution Consumer

72 files, 16 interfaces, 20+ injected collaborators. The most complex subsystem in the monorepo.
#### Orchestrator

internal/consumer/attribution/orchestrator.go — `func (o *AttributionOrchestrator) Process(ctx context.Context, msg *InstallEventMessage) error`:

1. o.enricher.FindProjectByToken()
2. o.verifySignature() (consumer-side, deferred from gateway)
3. o.enricher.FindOrCreatePackageIntegration()
4. o.checkDuplicates(): SetProcessing lock (SetNX), install status check (processing/retrying/processed), existing install lookup
5. extractData(msg) → *ExtractedData: GAID, IDFA, GCLID, GBRAID, LrIaID, install referrer params via FlexString unmarshaler
6. o.clickMatcher.FindMatchingClick(): Redis lookups by GAID+IDFA → LrIaID → IP sorted set
7. enrichDataFromClick(): backfills GCLID/GBRAID from the click record if missing
8. o.strategySelector.Select(): returns all strategies sorted by priority
9. o.runStrategiesParallel(): all eligible strategies run in concurrent goroutines; the arbiter resolves the winner
10. o.resolveCampaign() → findCampaignByPriority()
11. o.reinstallDetector.Detect()
12. o.writeAndFinalize(): ClickHouse write → mark processed → drain lagging events → publish to downstream event handlers

On error, o.installCache.ClearProcessing() releases the lock so the retry consumer can reprocess.

#### Consumer loop

internal/consumer/attribution/consumer.go — AttributionConsumer with source Source, orchestrator, parser Parser, errorRouter ErrorRouter.

- Start(ctx): `for { select { default: } }` polling with source.PollRecords(ctx, batchSize). 30s heartbeat ticker. 1-minute message counter reset.
- processBatch: bounded concurrency via `sem := make(chan struct{}, workers)`. Each record runs in a goroutine with panic recovery. Tracks success/failed/skipped/retry/dlq atomically.
- processRecord: extracts W3C trace context from Kafka headers → parser.Parse(record.Value) → orchestrator.Process(ctx, msg). On LaggingAppOpenSignal: stores the lagging app open and returns "skipped". Otherwise classifies via ClassifyError(err).
- routeProcessError: ErrorClassRetryable → errorRouter.RouteRetryable() (retry topic). ErrorClassPermanent → errorRouter.RoutePermanent() (DLQ).
- flushAndCommit(): flushes the ClickHouse writer buffer, then commits Kafka offsets. Offsets are only committed if the flush succeeds.

#### Interfaces

internal/consumer/attribution/interfaces.go:

| Interface | Methods |
|---|---|
| Strategy | Name() string, Priority() int, CanHandle(msg, click) bool, Execute(ctx, msg, click) (*StrategyResult, error) |
| Enricher | 15 methods — FindProjectByToken, FindInstallByInstanceID, FindCampaignByID, FindNetworkAccountCredentials, etc. |
| ClickMatcher | FindMatchingClick(ctx, *ClickMatchParams) (*ClickMatch, error) |
| Writer | WriteInstall(ctx, *InstallRecord), WriteAppOpen(ctx, *AppOpenDenormalizedRecord), Flush(ctx), Close() |
| EventPublisher | Publish(ctx, *InstallEvent), RegisterHandler(EventHandler) |
| EventHandler | Name() string, Handle(ctx, *InstallEvent) error |
| AttributionArbiter | Resolve(results []*StrategyResult, click *ClickMatch) *ArbiterResult |
| InstallCache | IsProcessed, SetProcessed, IsProcessing, SetProcessing (SetNX), ClearProcessing, SetUserMapping, GetUserMapping, Delete |
| ErrorRouter | RouteRetryable(ctx, msg, err, retryCount), RoutePermanent(ctx, msg, err), RepublishDelayed(ctx, msg, headers) |
| ReinstallDetector | Detect(ctx, installInstanceID, projectID) (bool, error) |
| SignatureVerifier | Verify(ctx, input) error |
| ConfidenceCalculator | Computes confidence score for attribution result |
| LaggingHandler | Drains lagging events queued before attribution completed |
| InstallStatusCache | SetNotPicked, SetPicked, SetAttributed — Redis status tracking |
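The bounded-concurrency batch loop described for consumer.go (`sem := make(chan struct{}, workers)`, panic recovery per record, atomic counters) can be sketched as follows. The record type and counters are simplified for illustration:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// processBatch caps in-flight goroutines at `workers` using a buffered
// channel as a semaphore. Each record is processed with panic recovery,
// and outcomes are tallied atomically, as the consumer description states.
func processBatch(records []string, workers int, process func(string) error) (success, failed int64) {
	sem := make(chan struct{}, workers)
	var wg sync.WaitGroup
	for _, rec := range records {
		sem <- struct{}{} // acquire a worker slot
		wg.Add(1)
		go func(r string) {
			defer func() {
				if p := recover(); p != nil {
					atomic.AddInt64(&failed, 1) // a panic counts as a failure
				}
				<-sem // release the slot
				wg.Done()
			}()
			if err := process(r); err != nil {
				atomic.AddInt64(&failed, 1)
			} else {
				atomic.AddInt64(&success, 1)
			}
		}(rec)
	}
	wg.Wait()
	return
}

func main() {
	ok, bad := processBatch([]string{"a", "b", "c", "boom"}, 2, func(r string) error {
		if r == "boom" {
			panic("parser blew up") // recovered, counted as failed
		}
		return nil
	})
	fmt.Println(ok, bad) // 3 1
}
```

The real consumer also distinguishes skipped/retry/dlq outcomes; this sketch keeps only success vs. failed to show the concurrency shape.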
internal/consumer/attribution/enricher.go — CachedEnricher with repo EnricherRepository (Postgres), cache EnricherCache (Redis). Cached lookups:

| Lookup | Caching |
|---|---|
| Project lookup | TTL: 1 hour |
| Install lookup | TTL: 30 minutes |
| Campaign lookup | TTL: 1 hour |
| PackageIntegration | TTL: 1 hour |
| IP resolution | TTL: 1 hour. Returns 0 on miss (non-fatal) |
| SDK credentials | TTL: 1 hour |
| Network credentials | No cache (tokens may rotate) |
| ExistsByProcessingID | Valkey-only, no Postgres fallback |
internal/consumer/attribution/click_matcher.go — DefaultClickMatcher with clickStore ClickStoreReader, attributionStore AttributionStore.

FindMatchingClick() lookup order:

1. findOldestDeviceMatch(): queries both Redis device keys, sorts by ClickedAt ASC, picks the oldest lockable click
2. attribution:click:instance:{lr_ia_id}
3. attribution:click:ip:{ip}:{projectID}, oldest-first (debug mode: newest-first), limited to 10 candidates

tryLockClick() acquires Redis lock attribution:click_lock:instance:{id} with a 30-day TTL. It also checks attributionStore.IsClickAttributed() for persistent dedup across restarts.

Click store (redis_click_store.go): key prefixes attribution:click:gaid:, attribution:click:idfa:, attribution:click:instance:, attribution:click:ip:. IP lookups use ZRange (oldest) or ZRevRange (newest).

#### Strategies

internal/consumer/attribution/strategies/selector.go — DefaultStrategySelector sorts by Priority() descending. The orchestrator calls CanHandle on each, then Execute in parallel.

| Strategy | Details |
|---|---|
| MetaAttribution | meta.go · Priority 100 · Decrypts Meta install referrer via ProcessMetaAdsData, extracts campaign_group_id, ad set, creative. Fallback: utm_content from link URL. |
| AppleSearchAds | apple.go · Priority 95 · Calls Apple AdServices API with token (apple_ads_client.go). Rejects non-"Download" conversion types. Returns ad group, keyword, creative IDs. |
| GoogleAds | google.go · Priority 90 · Calls Google Ads API in parallel across all network account credentials (google_ads_client.go). GCLID fallback. Web-to-App fallback via gad_campaignid in click URL. |
| MetaInstallReferrer | meta_referrer.go · Priority 85 · Parses meta_install_ref object directly (not from install referrer URL), decrypts utm_content. |
| ClickMatchStrategy | click_match.go · Priority 80 · Maps Redis click match type (device_identifier/lr_ia_id/ip_address) to attribution source. No external API calls. |
| OrganicStrategy | organic.go · Priority 0 · Always returns Success:true with empty attribution source. Catch-all fallback. |
#### Arbiter

internal/consumer/attribution/arbiter.go — `func (a *DefaultAttributionArbiter) Resolve(results []*StrategyResult, click *ClickMatch) *ArbiterResult`:

| Tier | Rule |
|---|---|
| Tier 1 (highest) | Install referrer-based — Meta always, Google with FromInstallReferrer=true. Deterministic app-store signal. Last-touch wins (most recent EngagementTime). |
| Tier 2 | Deterministic matches — device_identifier, lr_ia_id, ad network API. Last-touch wins. |
| Tier 3 (lowest) | IP-based probabilistic — ip_address match type. First-touch wins (oldest EngagementTime). |
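The tier rules can be condensed into a small resolver. This sketch assumes a simplified StrategyResult with an explicit Tier field; the real arbiter derives tiers from match types and referrer flags:

```go
package main

import (
	"fmt"
	"sort"
)

// StrategyResult is a simplified stand-in: 1 = install referrer,
// 2 = deterministic match, 3 = IP-based probabilistic.
type StrategyResult struct {
	Source         string
	Tier           int
	EngagementTime int64 // unix ms
}

// resolve picks the best (lowest-numbered) tier present, then applies
// last-touch within tiers 1-2 and first-touch within the IP tier 3.
func resolve(results []StrategyResult) *StrategyResult {
	if len(results) == 0 {
		return nil
	}
	best := results[0].Tier
	for _, r := range results[1:] {
		if r.Tier < best {
			best = r.Tier
		}
	}
	var tier []StrategyResult
	for _, r := range results {
		if r.Tier == best {
			tier = append(tier, r)
		}
	}
	sort.Slice(tier, func(i, j int) bool {
		if best == 3 {
			return tier[i].EngagementTime < tier[j].EngagementTime // first-touch
		}
		return tier[i].EngagementTime > tier[j].EngagementTime // last-touch
	})
	return &tier[0]
}

func main() {
	winner := resolve([]StrategyResult{
		{Source: "ip_match", Tier: 3, EngagementTime: 100},
		{Source: "google", Tier: 2, EngagementTime: 200},
		{Source: "meta", Tier: 2, EngagementTime: 300},
	})
	fmt.Println(winner.Source) // meta (tier 2 beats tier 3; last-touch within tier)
}
```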
sourcePriority() map — meta=100, apple=95, google=90, meta_install_referrer=85, others=80. Non-winning results become ContributingNetwork entries (up to 3 written to ClickHouse for multi-touch attribution reporting).

#### Writer

internal/consumer/attribution/writer.go — ClickHouseWriter with two buffers: `buffer []*InstallRecord` and `appOpenBuffer []*AppOpenDenormalizedRecord`, thread-safe via sync.Mutex. WriteInstall() and WriteAppOpen() append to the buffers. Flush(ctx) drains the buffer, calls conn.PrepareBatch() with a large INSERT INTO installs_denormalized (70+ columns), appends each record, then batch.Send(). click_id for ClickHouse: CRC32 of click_instance_id for Redis-matched clicks, or the real Postgres Click.id for DB-matched clicks.

#### Event handlers

internal/consumer/attribution/handlers/ — EventHandler implementations, published to concurrently by KafkaEventPublisher (publisher.go). Handler errors are logged but don't block the others.

| Handler | Details |
|---|---|
| PostbackGRPCHandler | postback_grpc.go · Condition: AdNetworkID != nil && != 0 · gRPC SendInstallPostback to postback service (fire-and-forget) |
| MetaCAPIHandler | meta_capi.go · Condition: Meta-attributed OR WebToApp · POST to graph.facebook.com/v23.0/{datasetID}/events |
| GoogleCAPIHandler | google_capi.go · Condition: AdNetworkID==Google + GCLID≥21 chars · POST to googleadservices.com/pagead/conversion/app/1.0 with 3 retries |
| TikTokHandler | tiktok.go · Condition: ttclid in click · POST to business-api.tiktok.com/open_api/v1.3/event/track/ |
| SnapchatHandler | snapchat.go · Condition: sccid in click · POST to tr.snapchat.com/v3/{pixelID}/events |
| AffiliateHandler | affiliate.go · Condition: CampaignID + postback URL · GET to affiliate tracking URL with 3 retries |
| WebhookHandler | webhook.go · Condition: Customer webhook configured · POST JSON payload with 3 retries + Slack formatting |
handlers/common.go provides CredentialsFetcher, doPostResult(), and fetchCreds().

### Events Consumer

Triggers, payments, and custom events flow through enrichment to multi-platform dispatch.
internal/consumer/events/consumer.go — Consumer struct with source KafkaSource, dlq DLQPublisher, handler BatchHandler.

- Message types (models.go): "PAYMENT", "EVENT", "ONBOARD"
- RetryCount ≥ 3 → direct to DLQ
- handler.HandleBatch(ctx, records) → BatchResult. If HasRetryable, do NOT commit (Kafka redelivers).
- Batch handler (handler.go): parseAndGroupRecords() → groups into payments, events, onboards → processPaymentGroup(), processEventGroup(), processOnboardGroup() → writeBatch() flushes to ClickHouse user_events_denormalized.

| Forwarder | Details |
|---|---|
| PostbackForwarder | grpc_postback.go — GRPCPostbackForwarder calls SendInstallPostback, SendEventPostback, SendRegistrationPostback with 30s timeout |
| AnalyticsForwarder | grpc_analytics.go — KafkaAnalyticsForwarder builds AnalyticsEventRequest with per-project credentials (8 platforms), marshals JSON, publishes to analytics Kafka topic |
Interfaces (forwarder.go): PostbackForwarder (3 methods) + AnalyticsForwarder (ForwardAnalyticsEvent).

### Click Consumer & In-Memory Store

Sub-ms response via in-memory LRU. CAS versioning prevents stale writes.
internal/consumer/click/consumer.go — ClickConsumer with the same poll-process-commit pattern as attribution.

Processor (processor.go): ClickProcessor with dedup Dedup, writer Writer, clickStore ClickStore, domainResolver, campaignCreator. Process(ctx, msg) pipeline:

1. dedup.IsDuplicate(ctx, clickInstanceID) via SETNX with TTL. Fail-open on Redis error. Optional WAL.
2. enrichDomainID() resolves domain_id from the click link hostname
3. enrichGoogleWebToAppCampaign() creates a campaign if gad_campaignid is present
4. buildClickRecord(msg) parses the UA string for device/browser fields
5. writer.WriteClick(ctx, record)
6. clickStore.StoreClick(ctx, record) (fail-open)

internal/consumer/click/redis_click_store.go — RedisClickStore.StoreClick() writes to:

| Key | Semantics |
|---|---|
| attribution:click:instance:{id} | Always written. SETNX (first wins). Used by LrIaID matching. |
| attribution:click:gaid:{gaid}:{projectID} | When GAID present. SETNX. Used by device identifier matching. |
| attribution:click:idfa:{idfa}:{projectID} | When IDFA present. SETNX. Used by device identifier matching. |
| attribution:click:ip:{ip}:{projectID} | When IP present. Sorted set (ZADD NX), score = unix ms of click time. Multiple clicks per IP tracked. |
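The fail-open dedup semantics from step 1 of the click pipeline can be sketched with an illustrative Claimer interface standing in for the Redis SETNX call; names and the fake store are assumptions for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// Claimer abstracts a SETNX-style first-sighting claim of an ID.
type Claimer interface {
	// Claim returns true if this call claimed the ID (first sighting).
	Claim(id string) (bool, error)
}

// IsDuplicate treats a store error as "not a duplicate" — fail open — so a
// Redis outage never drops clicks, matching the pipeline description.
func IsDuplicate(c Claimer, clickInstanceID string) bool {
	first, err := c.Claim(clickInstanceID)
	if err != nil {
		return false // fail open: process the click anyway
	}
	return !first
}

// fakeStore is an in-memory stand-in for the Redis-backed claimer.
type fakeStore struct {
	seen map[string]bool
	down bool
}

func (f *fakeStore) Claim(id string) (bool, error) {
	if f.down {
		return false, errors.New("redis unavailable")
	}
	if f.seen[id] {
		return false, nil
	}
	f.seen[id] = true
	return true, nil
}

func main() {
	s := &fakeStore{seen: map[string]bool{}}
	fmt.Println(IsDuplicate(s, "c1")) // false (first sighting)
	fmt.Println(IsDuplicate(s, "c1")) // true (duplicate)
	s.down = true
	fmt.Println(IsDuplicate(s, "c2")) // false (fail open during outage)
}
```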
Click consumer interfaces:

| Interface | Methods |
|---|---|
| Dedup | IsDuplicate(ctx, clickInstanceID) (bool, error) |
| Writer | WriteClick(ctx, *ClickRecord), Flush(ctx), Close() |
| ClickStore | StoreClick(ctx, *ClickRecord) error |
| CampaignCreator | CreateGoogleWebToAppCampaign(ctx, *WebToAppCampaignParams) (*int32, error) |
internal/store/store.go — in-memory store:

| Store | Structure |
|---|---|
| Domains | map[string]*DomainEntry with sync.RWMutex — ~300 entries, keyed by hostname |
| Projects | map[int]*ProjectEntry with sync.RWMutex — similarly small |
| Campaigns | lru.Cache[string, *CampaignSlim] from hashicorp/golang-lru — 50K cap (of 7.7M total), fills on demand |
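These stores gate writes on an entry version with compare-and-swap. A minimal sketch of that pattern (locking elided, types illustrative):

```go
package main

import "fmt"

// DomainEntry is a simplified stand-in for the store's entries; Version is
// unix nanoseconds, as the doc describes.
type DomainEntry struct {
	Hostname string
	Version  int64
}

// Store sketches the in-memory domain map (the real one holds a sync.RWMutex).
type Store struct {
	domains map[string]*DomainEntry
}

// SetDomain applies a compare-and-swap: the write lands only if the incoming
// version is strictly newer, so replayed Kafka updates are idempotent.
func (s *Store) SetDomain(e *DomainEntry) bool {
	cur, ok := s.domains[e.Hostname]
	if ok && e.Version <= cur.Version {
		return false // stale or duplicate update rejected
	}
	s.domains[e.Hostname] = e
	return true
}

func main() {
	s := &Store{domains: map[string]*DomainEntry{}}
	fmt.Println(s.SetDomain(&DomainEntry{"lnk.example", 100})) // true
	fmt.Println(s.SetDomain(&DomainEntry{"lnk.example", 90}))  // false: older version
	fmt.Println(s.SetDomain(&DomainEntry{"lnk.example", 110})) // true
}
```

Using `UnixNano()` as the version on a full reload (as loader.go does) makes every reload strictly newer, so CAS always succeeds there.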
Each entry carries Version int64 (Unix nanoseconds). SetDomain, SetProject, SetCampaign all use compare-and-swap: a write only lands if incoming.Version > stored.Version, so Kafka consumers apply updates idempotently.

- Interfaces (invalidator.go): Getter (read-only: GetDomain, GetProject, GetCampaign, SetCampaign for LRU fill) + Invalidator (write: DeleteDomain, DeleteProject, DeleteCampaign, ReloadFromLoader, Stats)
- Loader (loader.go): loads domains + projects from Postgres on startup. Campaigns are NOT preloaded (7.7M is too many — the LRU fills on demand). A background goroutine refreshes at a configurable interval, using time.Now().UnixNano() as the version on full reload so CAS always succeeds.

### Data Stores

Postgres for OLTP, ClickHouse for OLAP, Kafka for messaging, 4-instance Redis.
- **Postgres** — projects, campaigns, installs, clicks, events, payments, user_identities, skan_config · sqlx + pgx/v5 · connection pooling · parameterized queries
- **ClickHouse** — installs_denormalized, user_events_denormalized, clicks, campaign_metrics, aggregates, retention · clickhouse-go/v2 · native protocol · Decimal support
- **Kafka** — franz-go · SASL · circuit breaker · Redis Streams fallback
- **Redis** — :6379 in dev; separate DragonflyDB / Redis instances in prod

| Instance | DB | Purpose | Engine | Status |
|---|---|---|---|---|
| Gateway Cache | 0 | Domain/token/project lookups | DragonflyDB | Fatal |
| Enricher | 1 | Attribution state, dedup, install status | DragonflyDB | Graceful |
| Lock Store | 2 | Distributed locks (SETNX + Lua) | Redis | Non-fatal |
| Click Store | 3 | Click sorted sets (30d TTL) | DragonflyDB | Fatal |
Kafka broker interfaces (broker/):

| Interface | Methods |
|---|---|
| Source | PollRecords, MarkRecords, CommitOffsets — abstracts Kafka consumption |
| KafkaPublisher | Publish, PublishBatch, HealthCheck, IsHealthy |
| CircuitBreaker | Per-topic: AllowRequest, RecordSuccess/Failure, IsOpen |
An open breaker returns ErrCircuitBreakerOpen and triggers a background Reconnect(). Topic routing: Init→install-events-hazelnut, Click→click-events, Trigger/Payment/Event→unified events topic.

Cache interfaces (pkg/cache):

| Interface | Methods |
|---|---|
| Cache | Core: Get, Set, SetBatch, Delete, SwapData, Metrics |
| ValkeyCache | Distributed: install dedup, identity processing, throttle (SetNX) |
| CacheWarmer | WarmCache(ctx), ReloadCache(ctx) — startup pre-loading |
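A sketch of lock-free reads behind an atomically swapped map, in the spirit of SwapData: readers load the current map pointer with no lock, and a reload builds a fresh map off to the side before publishing it in one atomic store. HotCache and its methods are illustrative names (requires Go 1.19+ for atomic.Pointer):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// HotCache holds the live dataset behind an atomic pointer so Get never locks.
type HotCache struct {
	data atomic.Pointer[sync.Map]
}

func NewHotCache() *HotCache {
	c := &HotCache{}
	c.data.Store(&sync.Map{})
	return c
}

// Get reads through the current map pointer — no mutex on the hot path.
func (c *HotCache) Get(key string) (string, bool) {
	v, ok := c.data.Load().Load(key)
	if !ok {
		return "", false
	}
	return v.(string), true
}

// SwapData replaces the whole dataset in one atomic store, so there is no
// window where readers see a partially-loaded (or empty) cache.
func (c *HotCache) SwapData(fresh map[string]string) {
	m := &sync.Map{}
	for k, v := range fresh {
		m.Store(k, v)
	}
	c.data.Store(m)
}

func main() {
	c := NewHotCache()
	c.SwapData(map[string]string{"domain:lnk.example": "project-1"})
	v, ok := c.Get("domain:lnk.example")
	fmt.Println(v, ok) // project-1 true
}
```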
- In-memory cache: atomic.Pointer[sync.Map] for lock-free reads. SwapData builds a new map and performs an atomic pointer swap — zero cache misses during reload.
- Valkey cache: SetBatch uses chunked pipelines (50 keys/chunk) with 5 retries + exponential backoff. SwapData overwrites with a 180-day TTL instead of FLUSHDB (no thundering herd).
- Key prefixes: domain:, project_data:, token:, attr:install:, rate_limit_notif:

Postgres repositories (db/):

| Repository | Notes |
|---|---|
| CampaignRepository | GetByID, GetOrganicCampaign (find-or-create pattern) |
| DomainRepository | In-process cache with sync.RWMutex + 180-day TTL. Caches nil results too. |
| InstallRepository | GetByInstallInstanceID, Create (RETURNING id) |
| EnricherRepository | Richest — pre-joined data with ProjectAnalyticsConfig (credentials for 8 platforms) |
| UserIdentityRepository | GetByCustomerUserID, HasOnboardForInstall |
All repositories use sqlx with GetContext/SelectContext; sql.ErrNoRows maps to a domain ErrXxxNotFound. Prisma-style quoted table names ("Campaign", "Install").

### Postback & Analytics Services

6 ad networks for postback dispatch. 7 analytics platforms for event forwarding.
- gRPC server (internal/consumer/postback/server.go): receives SendInstallPostback, SendEventPostback, SendRegistrationPostback → serializes each as a PostbackMessage JSON envelope (types.go) → publishes to the postback-logs Kafka topic.
- Consumer (internal/consumer/postback/consumer.go): polls postback-logs. Records with maxRetriesBeforeDLQ=3 → DLQ. processRecord(): unmarshal → postbackMessageToUserEvent() → dispatch() fans out to ALL registered handlers concurrently (30s timeout per handler).
- Service (internal/postback/service.go + service_impl.go): for event postbacks, looks up PostbackEventMap entries for project+event → creates a PostbackMessage per mapped network → Google: synchronous HTTP; others: batch-publish to Kafka via AsyncPostbackPublisher.

Ad network clients implement AdNetworkClient (Name() string, SendEvent(ctx, *AdNetworkEvent) (*AdNetworkResult, error)), created in factory.go via createAdNetworkClients():

| Client | Details |
|---|---|
| googleClient | google_client.go · googleadservices.com/pagead/conversion/app/1.0 — synchronous, developer token, DB access for link ID resolution |
| metaClient | meta_client.go · graph.facebook.com/v23.0 CAPI — POST with data array, dataset_access_token auth |
| tiktokClient | tiktok_client.go · business-api.tiktok.com/v1.3/event/track/ — event name mapping (Download, CompleteRegistration, Purchase) |
| snapchatClient | snapchat_client.go · tr.snapchat.com/v3/{pixelId}/events — pixel ID + access token in URL path |
| affiliateClient | affiliate_client.go · GET-based URL template substitution per network |
| Circuit breaker | circuit_breaker_client.go — wraps any AdNetworkClient, keyed by inner.Name() |
| Rate limiting | rate_limiter.go — per-project/per-network via Redis |
| Dedup | dedup.go — Redis SET NX for message-level dedup |
| Audit log | ch_writer.go — per-attempt postback logs to ClickHouse |
| Cached repos | cached_repos.go — Redis caching over NetworkAccount + PostbackEventMap repos |
| DLQ replayer | dlq_replayer.go — replays DLQ messages on demand |
internal/consumer/analytics/consumer.go — AnalyticsConsumer reads AnalyticsEventRequest protobuf from Kafka.

- dispatch() runs ALL 7 handlers in parallel goroutines. It only fails if ALL handlers fail; individual failures are logged but don't block. Panic recovery per goroutine. Bounded concurrency via semaphore (default 16). RetryCount ≥ 3 → DLQ.
- gRPC server (server.go): AnalyticsGRPCServer implements AnalyticsServiceServer. Incoming RPCs serialize to JSON and publish to the analytics Kafka topic — the consumer does the actual fan-out.
- handlers/common.go: AnalyticsHandler { Name() string; Handle(ctx, *AnalyticsEventRequest) error }. doPost() helper: JSON POST with retry (exponential backoff); 5xx retried, 4xx returns nil (non-retryable).

| Handler | Details |
|---|---|
| AmplitudeHandler | amplitude.go — HTTP identify call, sets lr_campaign/lr_ad_network via $setOnce |
| BrazeHandler | braze.go — Two-step: resolve braze_device_id → braze_id, then Users Track API |
| CleverTapHandler | clevertap.go — Upload API with attribution properties |
| MixpanelHandler | mixpanel.go — Track API |
| PostHogHandler | posthog.go — Capture API |
| MoEngageHandler | moengage.go — Customer API |
| GoogleAnalyticsHandler | google_analytics.go — GA4 Measurement Protocol |
Handlers skip silently (return nil) if credentials are missing. Per-message credentials from AnalyticsEventRequest take precedence over static env vars. All handlers are wrapped with per-handler circuit breakers (circuit_breaker.go).

### Telemetry

62 files across 7 subdirectories. Full-stack tracing, logging, metrics.
| Subdirectory | Contents |
|---|---|
| correlation/ | CorrelationContext extracts TraceID/SpanID. CorrelatedLogger auto-injects trace fields. ExemplarRecorder attaches trace context to metric points. Custom ZapCore injects span context. |
| instrumentation/ | BrokerInstrumentation (inject/extract W3C in Kafka headers), CacheInstrumentation, DBInstrumentation, ServiceInstrumentation |
| metrics/ | 7 metric scopes: Gateway, Consumer, Cache, DB, Handler, HTTP, Business |
| logger/ | Environment-aware. File logging. Dynamic level management. |
| otel/ | Resilient exporters: resilient_trace.go, resilient_metric.go, resilient_log.go |
Structured alert logs: LogAttribution (P0 alert source), LogDLQ (P2 alert), LogConsumerHeartbeat (30s liveness — P0 on stall), LogCacheStats, LogRedisHealth.

### Architecture Patterns

Interface-driven DI. Accept interfaces, return structs. Dependencies flow inward.
gRPC server toolkit (pkg/grpcserver):

| Component | Details |
|---|---|
| HealthChecker | HealthCheck(ctx) error — registered per service name, empty name checks ALL |
| ServerOption | WithLogger, WithTracer, WithMeter, WithAuthInterceptor, WithHealthChecker |
| Config | Max msg size, concurrent streams, keepalive, TLS (opt-in), reflection (disable in prod) |
Key dependencies: spf13/cobra (CLI), go-chi/chi (HTTP), jmoiron/sqlx (Postgres), jackc/pgx/v5 (driver), clickhouse-go/v2 (ClickHouse), twmb/franz-go (Kafka), uber-go/zap (logging), OTel (observability), google/grpc (gRPC).

### Middleware

Carefully ordered interceptor chains for HTTP and gRPC.
The gRPC logging interceptor excludes /grpc.health.v1.Health/*; log level follows the gRPC status code.