

The TMP Router

The TMP Router is infrastructure that sits between publishers and buyer agents. It handles request fan-out, response merging, and privacy enforcement. It does not make decisions — it routes requests and aggregates responses. The publisher configures which providers the router calls.

What the Router Does

  1. Fans out requests: Sends Context Match requests to all configured providers with context_match capability. Sends Identity Match requests to all configured providers with identity_match capability.
  2. Merges responses: Combines offers, enrichment signals, and eligibility results from multiple providers into unified responses.
  3. Enforces separation: Context and identity code paths are structurally separate — the context path never accesses identity data and vice versa.
  4. Manages latency: Applies adaptive timeouts and deprioritizes providers that consistently exceed the latency budget.

Single Binary, Separate Code Paths

The router is a single Go binary with two structurally separate code paths: one for context match, one for identity match.
┌─────────────────────────────────────────────────────────────────┐
│                           TMP Router                            │
│                                                                 │
│  ┌───────────────────────────┐   ┌───────────────────────────┐  │
│  │    Context Match Path     │   │    Identity Match Path    │  │
│  │                           │   │                           │  │
│  │  Inputs:                  │   │  Inputs:                  │  │
│  │  • Artifact IDs / artifact│   │  • Opaque user token      │  │
│  │  • Context signals        │   │  • ALL active package IDs │  │
│  │  • Geo, URL hash          │   │                           │  │
│  │  • Available packages     │   │  Outputs:                 │  │
│  │                           │   │  • Eligible package IDs   │  │
│  │  Outputs:                 │   │  • TTL (seconds)          │  │
│  │  • Offers                 │   │                           │  │
│  │  • Enrichment signals     │   │  Never touches:           │  │
│  │                           │   │  • URLs                   │  │
│  │  Never touches:           │   │  • Content signals        │  │
│  │  • User tokens            │   │                           │  │
│  │  • Any identity data      │   │                           │  │
│  └───────────────────────────┘   └───────────────────────────┘  │
│                                                                 │
│  No shared state between code paths.                            │
│  One binary, one audit surface, one Docker image.               │
└─────────────────────────────────────────────────────────────────┘
The separation is in the code and auditable. The context path cannot read identity data because it is not passed to it, not stored in any reachable location, and not referenced in any data structure the context path processes. The same applies in reverse for the identity path. The router is open-source — anyone can verify this by reading the source. TEE attestation is an upgrade path. Without TEE, you trust that the operator deployed the published binary. With TEE, attestation proves the deployed binary matches the audited source, removing that trust requirement.
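A minimal sketch of what "structurally separate" means in practice (type, field, and function names here are illustrative, not the actual adcp-go API): the context path's request type has no field that could carry identity data, and the identity path's request type has no field that could carry URLs or content signals, so neither handler can even represent the other path's inputs.

```go
package main

import "fmt"

// ContextMatchRequest carries content signals only — no token field exists.
// (Hypothetical types for illustration; see the TMP schemas for the real shapes.)
type ContextMatchRequest struct {
	ArtifactIDs []string
	Geo         string
	URLHash     string
	PackageIDs  []string // available packages
}

// IdentityMatchRequest carries an opaque token only — no URL or content field exists.
type IdentityMatchRequest struct {
	UserToken  string
	PackageIDs []string // ALL active package IDs
}

// Separate handler funcs with disjoint inputs and no shared state.
func handleContext(req ContextMatchRequest) []string   { return req.PackageIDs }
func handleIdentity(req IdentityMatchRequest) []string { return req.PackageIDs }

func main() {
	offers := handleContext(ContextMatchRequest{ArtifactIDs: []string{"a1"}, PackageIDs: []string{"pkg-1"}})
	eligible := handleIdentity(IdentityMatchRequest{UserToken: "opaque", PackageIDs: []string{"pkg-1", "pkg-2"}})
	fmt.Println(len(offers), len(eligible))
}
```

Because the separation lives in the type system rather than in runtime masking, an auditor can verify it by reading the request types alone.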

Provider Registration

Publishers configure which providers the router calls. This is an operational relationship — the publisher trusts the provider to participate in their ad decisioning. Provider registrations follow the provider-registration schema (/schemas/tmp/provider-registration.json).

Discovery models

Provider registration typically comes from the page configuration — the publisher declares providers in their Prebid module config or surface-specific setup. This is the standard path and works well for publishers with a stable set of providers.

Static configuration (Prebid config, YAML file, infrastructure-as-code):
  • Publisher declares providers at deploy time
  • Router reads config at startup and on config reload
  • Changes require a config update and reload/redeploy
  • Appropriate for most deployments — provider lists change infrequently
Dynamic registration (API-driven, database-backed):
  • Publisher manages providers through an admin interface
  • Router polls a discovery endpoint or watches for configuration changes
  • Changes take effect within one refresh cycle (recommended: 30 seconds)
  • Appropriate for publishers managing many providers or needing runtime updates without redeploys
  • Dynamic registration endpoints MUST validate that provider endpoint URLs are external HTTPS addresses. Implementations MUST reject private (RFC 1918), link-local (169.254.x.x), and cloud metadata IP ranges to prevent SSRF through provider registration. See Provider registration security in the specification for the full normative requirements — endpoint URL validation (with DNS re-resolution), dynamic registration endpoint authentication, router-to-provider auth minimum bar, and /health endpoint guidance.
Both models use the same schema. The router does not distinguish between providers loaded from a YAML file and providers loaded from an API — the registration fields are identical.

Registration fields

  • provider_id (string, required): Stable identifier for this provider. Used in logs, metrics, and cache keys.
  • endpoint (URL, required): Provider’s base URL. The router appends /context or /identity when dispatching.
  • context_match (bool, optional): Provider handles Context Match requests. At least one of context_match or identity_match must be true.
  • identity_match (bool, optional): Provider handles Identity Match requests. At least one of context_match or identity_match must be true.
  • countries (list of strings, conditional): ISO 3166-1 alpha-2 country codes. MUST be present when identity_match is true.
  • uid_types (list of strings, conditional): Identity types this provider resolves. MUST be present when identity_match is true.
  • timeout_ms (integer, optional): Per-provider timeout. Must be ≤ the router’s latency_budget_ms. Default: 50.
  • priority (integer, optional): Merge conflict resolution order (lower = higher priority). Default: 0.
  • status (enum, optional): active, inactive, or draining. Default: active.
  • properties (list of UUIDs, optional): Property RIDs this provider serves. When absent, the provider serves all properties.
At least one of context_match or identity_match must be true — a provider that handles neither operation is invalid. When identity_match is true, countries and uid_types are required — the router cannot route Identity Match requests without them. The router MUST reject invalid provider registrations and SHOULD log a warning identifying the misconfigured provider.
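The validation rules above can be sketched in Go (struct and function names are assumed for illustration; the normative shape is the provider-registration schema):

```go
package main

import (
	"errors"
	"fmt"
)

// Provider mirrors the registration fields described above.
type Provider struct {
	ProviderID    string
	Endpoint      string
	ContextMatch  bool
	IdentityMatch bool
	Countries     []string
	UIDTypes      []string
	TimeoutMs     int
}

// validate applies the stated rules: at least one capability must be true,
// identity_match requires countries and uid_types, and timeout_ms must not
// exceed the router's latency budget.
func validate(p Provider, latencyBudgetMs int) error {
	if !p.ContextMatch && !p.IdentityMatch {
		return errors.New(p.ProviderID + ": at least one of context_match or identity_match must be true")
	}
	if p.IdentityMatch && (len(p.Countries) == 0 || len(p.UIDTypes) == 0) {
		return errors.New(p.ProviderID + ": identity_match requires countries and uid_types")
	}
	if p.TimeoutMs > latencyBudgetMs {
		return errors.New(p.ProviderID + ": timeout_ms exceeds latency_budget_ms")
	}
	return nil
}

func main() {
	bad := Provider{ProviderID: "p1", IdentityMatch: true, TimeoutMs: 40}
	fmt.Println(validate(bad, 50)) // rejected: missing countries and uid_types
}
```

A real router would run this at startup and on config reload, rejecting invalid entries and logging a warning per misconfigured provider.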

Provider lifecycle

Providers have three lifecycle states:
  • Active: Provider receives requests normally.
  • Draining: Provider stops receiving new requests. In-flight requests complete normally. Use when taking a provider offline for maintenance.
  • Inactive: Provider is skipped entirely. Use to disable a provider without removing its configuration.

Provider health

Providers SHOULD expose GET /health at their base URL. The router uses this for:
  • Pre-flight checks: On startup or config reload, verify each provider is reachable before including it in fan-out.
  • Periodic monitoring: Check provider health on a configurable interval (recommended: 30 seconds). Providers that fail consecutive health checks MAY be temporarily excluded from fan-out and automatically re-included when health recovers.
Health checks are not in the request hot path — they run on a background interval. The router’s /healthz endpoint reflects overall router health, not individual provider status.

Providers MAY support any combination of context_match and identity_match. A context-only provider handles enrichment or contextual targeting. An identity-only provider handles frequency capping — the publisher evaluates context locally from the media buy’s targeting rules and calls the buyer only for identity checks.

All communication uses JSON over HTTP/2. TMP messages are small (200-600 bytes) — at these sizes, serialization format is less than 1% of total latency.
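The consecutive-failure exclusion described above can be sketched as a small tracker (the threshold of 3 failures is an assumed operational default, not a protocol requirement; in a real router a background goroutine would probe GET /health every health_check_interval_sec and feed results in):

```go
package main

import "fmt"

// healthTracker excludes a provider from fan-out after N consecutive failed
// health checks and re-includes it on the first success.
type healthTracker struct {
	failures  int
	threshold int
}

// record updates the consecutive-failure count from one health check result.
func (h *healthTracker) record(healthy bool) {
	if healthy {
		h.failures = 0 // recovery re-includes the provider automatically
		return
	}
	h.failures++
}

// inFanOut reports whether the provider should currently receive requests.
func (h *healthTracker) inFanOut() bool { return h.failures < h.threshold }

func main() {
	t := &healthTracker{threshold: 3}
	for range [3]int{} {
		t.record(false)
	}
	fmt.Println(t.inFanOut()) // excluded after 3 consecutive failures
	t.record(true)
	fmt.Println(t.inFanOut()) // re-included when health recovers
}
```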

Integration

Prebid integration

Publishers with Prebid Server or Prebid.js add a TMP module that replaces vendor-specific RTD modules. The TMP module sends Context Match and Identity Match requests to the router and returns the merged response as targeting signals and package activation data. The publisher’s ad server (GAM, etc.) receives targeting key-values and activates the corresponding line items.

Non-Prebid surfaces

For AI assistants, mobile apps, CTV, and retail media, the router provides a direct HTTP/2 API. Any platform that can make HTTP/2 POST requests can integrate. The request and response schemas are the same regardless of surface.

SSP and DSP integration

SSPs and DSPs integrate as TMP providers — they expose an endpoint that the router calls during fan-out. This is the same pattern as existing RTD integrations.

Identity tokens

Identity tokens come from existing providers (ID5, LiveRamp, UID2, etc.) that are already present on the page or in the app. TMP does not specify token lifecycle — it consumes tokens that the publisher’s identity stack already produces.

Fan-Out and Response Merging

Context Match fan-out

When the publisher sends a Context Match request:
  1. The router identifies all providers configured for the request’s property_rid with context_match capability.
  2. It sends the request to all matching providers in parallel over HTTP/2.
  3. It waits for responses up to the latency budget (default: 50ms).
  4. It merges responses:
    • Offers are collected from all providers. If two providers return offers for the same package_id (uncommon — packages are typically provider-specific), the router keeps the first response received. Duplicate package_id across providers is a configuration error; the router SHOULD log a warning.
    • Enrichment signals are concatenated. Segments from all providers are combined into a single list. Targeting key-values from different providers are namespaced to prevent collisions.
  5. It returns the merged response to the publisher.

Identity Match fan-out

The router filters Identity Match providers by country and identity type:
  1. The router reads the country field from the request (a routing directive, not an identity signal).
  2. It selects providers whose countries list includes that country code.
  3. It further filters to providers whose uid_types list overlaps with any uid_type in the request’s identities array.
  4. For each selected provider, it filters the identities array to the intersection of the request’s identities and that provider’s declared uid_types. Providers MUST NOT receive identity tokens for types they did not declare — this enforces minimum-necessary-data as a structural privacy property, not an operational one. The router MUST NOT add, substitute, or transform identity tokens; the forwarded set MUST be a subset of the publisher-origin identities array.
  5. If the intersection is empty, the router MUST skip that provider entirely. An empty identities array is not a valid IMR payload (schema enforces minItems: 1), and emitting skip-vs-forward as distinguishable telemetry would leak which identity types each user had available.
  6. It strips the country field before forwarding the request to the buyer agent.
  7. Because the per-provider payload differs from the inbound request, the router re-signs each per-provider forward using the canonical identities_hash of the filtered set. Providers verify signatures against the router’s public key.
  8. It fans out to all matching providers in parallel, merges eligibility results, and returns a unified response.
Duplicate package_id across providers is a configuration error — packages come from media buys and are provider-specific. If it occurs, the router applies conservative merging: the package is eligible only if it appears in eligible_package_ids from every provider that serves it. The router uses the minimum serve_window_sec across those providers and SHOULD log a warning.
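Steps 2-5 above reduce to a per-provider filter (type and field names assumed for illustration):

```go
package main

import "fmt"

type identityProvider struct {
	ID        string
	Countries []string
	UIDTypes  []string
}

func contains(xs []string, x string) bool {
	for _, v := range xs {
		if v == x {
			return true
		}
	}
	return false
}

// filterIdentities applies country routing and the uid_type intersection for
// one provider. The forwarded set is always a subset of the request's
// identity types; an empty intersection means the provider is skipped.
func filterIdentities(p identityProvider, country string, reqUIDTypes []string) ([]string, bool) {
	if !contains(p.Countries, country) {
		return nil, false // provider does not serve this country
	}
	var forward []string
	for _, t := range reqUIDTypes {
		if contains(p.UIDTypes, t) {
			forward = append(forward, t) // never forward undeclared identity types
		}
	}
	return forward, len(forward) > 0
}

func main() {
	p := identityProvider{ID: "acme-outdoor-us", Countries: []string{"US"}, UIDTypes: []string{"uid2", "rampid"}}
	forward, ok := filterIdentities(p, "US", []string{"uid2", "id5"})
	fmt.Println(ok, forward) // true [uid2] — id5 is never forwarded to this provider
}
```

In the real router the filtered set would then be re-signed over its canonical identities_hash before forwarding, as described in step 7.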

Timeout handling

The router manages two distinct timeout values:
  • Overall latency budget (latency_budget_ms): The total time the router has to fan out, collect responses, and merge. Default: 50ms. This is the end-to-end budget the publisher allocates to TMP within their ad serving pipeline.
  • Per-provider timeout (timeout_ms on the provider registration): The maximum time the router waits for a single provider. Must be ≤ the overall latency budget. Default: 50ms (equal to the budget for single-provider setups).
When multiple providers are configured, the per-provider timeout caps each individual provider and the overall budget caps the entire fan-out; the router enforces the tighter of the two for each provider. For example: with a 50ms overall budget and two providers each set to 40ms, both providers are called in parallel and each is cut off at 40ms; any provider that has not responded by then is skipped, and the merge completes well inside the 50ms budget.
  • Single provider timeout: Skip that provider, log its latency percentile, proceed with responses from remaining providers. The skipped provider’s packages are treated as “not activated” for this request.
  • All providers timeout: Return an empty response — no offers for Context Match, no eligibility for Identity Match. The publisher falls back to existing demand sources (Prebid open auction, direct-sold, etc.).
  • Adaptive timeout: The router tracks per-provider latency percentiles (p50, p95, p99) and adjusts allocation over time. Consistently slow providers receive smaller timeout allocations or are preemptively skipped. Higher-priority providers (lower priority value) receive a larger share of the budget when adaptive allocation is active. This is an operational decision, not a protocol requirement.
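The "tighter of the two" rule is a one-liner (function name assumed; 0 is treated as "unset, fall back to the budget"):

```go
package main

import "fmt"

// effectiveTimeoutMs returns the tighter of the per-provider timeout and the
// overall latency budget. An unset (zero) per-provider timeout defaults to
// the budget, matching the single-provider case described above.
func effectiveTimeoutMs(providerTimeoutMs, budgetMs int) int {
	if providerTimeoutMs > 0 && providerTimeoutMs < budgetMs {
		return providerTimeoutMs
	}
	return budgetMs
}

func main() {
	fmt.Println(effectiveTimeoutMs(40, 50)) // 40: per-provider cap is tighter
	fmt.Println(effectiveTimeoutMs(0, 50))  // 50: unset timeout falls back to the budget
}
```

Adaptive allocation would adjust the per-provider input to this function over time based on observed latency percentiles.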

Latency Budget

TMP targets sub-50ms end-to-end latency: publisher sends request, router fans out, providers respond, router merges, publisher receives response. This is achievable because:
  • Small messages: TMP requests are 200-600 bytes of JSON — roughly 10-20x smaller than a typical OpenRTB bid request. Serialization is sub-microsecond.
  • No price computation: Packages are pre-negotiated. The provider evaluates targeting criteria, not auction dynamics.
  • Parallel fan-out: All providers are called simultaneously. The total latency is the slowest provider’s response time, not the sum.
  • Stateless router: No database lookups in the hot path. The router’s only job is forwarding and merging.
  • Connection reuse: HTTP/2 multiplexing allows concurrent requests to each provider over a single connection.

Comparison to Vendor RTD Modules

The TMP Router generalizes what vendor-specific RTD modules do today. A single-vendor RTD module evaluates packages against content in real time, but it is locked to one provider, one surface (Prebid), and sends the full OpenRTB BidRequest. The TMP Router replaces this with a multi-provider, multi-surface, protocol-standard alternative:
Vendor RTD module (today) vs. TMP Router:
  • Providers: single vendor → any provider declaring TMP capabilities
  • Discovery: publisher configuration → publisher configuration (unchanged)
  • Surfaces: web (Prebid Server) → web, AI, mobile, CTV, retail media
  • Request format: full OpenRTB BidRequest (~2-10KB JSON) → TMP ContextMatchRequest (~200-600 bytes JSON)
  • Privacy: data masking before sending → structural separation (TEE-ready)
  • Identity handling: user ID in bid request → separate Identity Match operation
For existing Prebid Server deployments, the TMP module replaces vendor-specific RTD modules with a generic TMP client. For surfaces without Prebid, the router’s HTTP/2 API provides the same functionality.

Relationship to TEE Auction Infrastructure

TEE-based auction infrastructure (encrypted bids, attestation proofs, verifiable winner selection) is complementary to TMP. When a publisher wants competitive selection among activated packages from multiple buyers:
  1. TMP Router collects Context Match responses (which packages each buyer wants to activate).
  2. Publisher submits the activated packages (with their pre-negotiated prices) to a TEE auction.
  3. The TEE enclave selects the winner and produces an attestation proof.
  4. Publisher activates the winning package.
TMP handles matching. TEE auctions handle competition. Publishers choose whether they need competition at all — many surfaces (editorial AI content, CTV pod composition, retail carousels) are better served by publisher-side relevance ranking than by price-based auctions. TEE auction infrastructure (AWS Nitro Enclaves, attestation, key management) is directly applicable when upgrading the TMP Router to TEE-attested operation, making it a natural infrastructure partner for the protocol.

Deployment

The TMP Router is a single Go binary built on adcp-go. It reads a configuration file listing providers and their capabilities. Each provider exposes two path-based endpoints under its base URL — POST /context and POST /identity — and the router dispatches by path.

Configuration

# tmp-router.yaml
listen: ":8443"
tls:
  cert: /etc/tmp/tls.crt
  key: /etc/tmp/tls.key

latency_budget_ms: 50
adaptive_timeout: true
health_check_interval_sec: 30

providers:
  # US cluster — UID2, RampID, ID5
  - provider_id: acme-outdoor-us
    endpoint: https://us.tmp.acmeoutdoor.example/v1
    context_match: true
    identity_match: true
    countries: [US]
    uid_types: [uid2, rampid, id5]
    timeout_ms: 40
    priority: 0
    properties: ["01916f3a-9c4e-7000-8000-000000000010"]

  # EU cluster — EUID, ID5
  - provider_id: acme-outdoor-eu
    endpoint: https://eu.tmp.acmeoutdoor.example/v1
    context_match: true
    identity_match: true
    countries: [DE, FR, IT, ES, NL, BE, AT, PL, SE, DK, FI, IE, PT, GR, CZ, RO, HU, BG, HR, SK, SI, LT, LV, EE, CY, MT, LU, GB]
    uid_types: [euid, id5]
    timeout_ms: 40
    priority: 0
    properties: ["01916f3a-9c4e-7000-8000-000000000010"]

  # Context-only enrichment provider (no identity match, no country scoping needed)
  - provider_id: enrichment-co
    endpoint: https://enrichment.example/v1
    context_match: true
    identity_match: false
    timeout_ms: 30
    priority: 10
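A sketch of the Go structs such a config file might unmarshal into (shapes assumed for illustration — the actual adcp-go types may differ; the yaml tags presume a library such as gopkg.in/yaml.v3):

```go
package main

import "fmt"

// ProviderConfig mirrors one entry in the providers list of tmp-router.yaml.
type ProviderConfig struct {
	ProviderID    string   `yaml:"provider_id"`
	Endpoint      string   `yaml:"endpoint"`
	ContextMatch  bool     `yaml:"context_match"`
	IdentityMatch bool     `yaml:"identity_match"`
	Countries     []string `yaml:"countries"`
	UIDTypes      []string `yaml:"uid_types"`
	TimeoutMs     int      `yaml:"timeout_ms"`
	Priority      int      `yaml:"priority"`
	Properties    []string `yaml:"properties"`
}

// Config mirrors the top-level tmp-router.yaml keys.
type Config struct {
	Listen                 string           `yaml:"listen"`
	LatencyBudgetMs        int              `yaml:"latency_budget_ms"`
	AdaptiveTimeout        bool             `yaml:"adaptive_timeout"`
	HealthCheckIntervalSec int              `yaml:"health_check_interval_sec"`
	Providers              []ProviderConfig `yaml:"providers"`
}

// applyDefaults fills the defaults stated in the registration-field list:
// a 50ms budget and a 50ms per-provider timeout when unset.
func applyDefaults(c *Config) {
	if c.LatencyBudgetMs == 0 {
		c.LatencyBudgetMs = 50
	}
	for i := range c.Providers {
		if c.Providers[i].TimeoutMs == 0 {
			c.Providers[i].TimeoutMs = 50
		}
	}
}

func main() {
	cfg := Config{Providers: []ProviderConfig{{ProviderID: "enrichment-co", ContextMatch: true}}}
	applyDefaults(&cfg)
	fmt.Println(cfg.LatencyBudgetMs, cfg.Providers[0].TimeoutMs) // 50 50
}
```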

Container deployment

FROM ghcr.io/adcontextprotocol/tmp-router:latest
COPY tmp-router.yaml /etc/tmp/config.yaml
EXPOSE 8443
The router is stateless — no database, no persistent storage. It can be horizontally scaled behind any load balancer. Health checks are available at /healthz.

Capacity planning

Each router instance handles approximately 10,000 requests per second on a 2-vCPU container. Memory usage scales linearly with the number of concurrent connections to providers, not with request volume. For web publishers, one router instance per point of presence (PoP) is typical. For AI platforms, a centralized deployment with regional failover is sufficient since the router adds < 5ms to end-to-end latency.

Monitoring

The router exposes Prometheus metrics at /metrics:
  • tmp_context_match_duration_ms: Context Match end-to-end latency histogram
  • tmp_identity_match_duration_ms: Identity Match end-to-end latency histogram
  • tmp_provider_duration_ms: Per-provider response time histogram
  • tmp_provider_timeout_total: Per-provider timeout counter
  • tmp_provider_error_total: Per-provider error counter
  • tmp_offers_total: Total offers returned across all providers
Alert on tmp_provider_timeout_total increasing — a provider consistently exceeding its timeout budget degrades match quality for all requests that include it.
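A Prometheus alerting rule for that counter might look like the following sketch — the threshold, window, and the provider_id label name are assumed operational choices, not part of the protocol:

```yaml
# Illustrative Prometheus alerting rule; tune threshold and window per deployment.
groups:
  - name: tmp-router
    rules:
      - alert: TMPProviderTimeouts
        expr: rate(tmp_provider_timeout_total[5m]) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "TMP provider is exceeding its timeout budget"
```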