Proposal: TMP as Core Prebid Infrastructure
Problem
Prebid has 61 RTD (Real-Time Data) modules. Each one follows the same pattern: fetch external data, enrich bid requests and/or set ad server targeting. Each vendor ships its own module with its own API format, configuration, and maintenance burden.
What RTD modules actually do
RTD modules hook into the auction via `getBidRequestData` (modify bid requests to bidders) and `getTargetingData` (set ad server key-values). They fall into five categories:
| Category | Count | What they do | Examples |
|---|---|---|---|
| Contextual/content | 13 | Classify page content, return topics/keywords/safety scores | IAS, Browsi, DG Keywords, Qortex, Relevad |
| Audience/identity | 18 | Append user segments, cohorts, or identity data to bid requests | Permutive, Sirdata, Experian, LiveIntent, 1plusX, BlueConic |
| Brand safety | 8 | Monitor or verify ad/creative safety | Confiant, GeoEdge, Clean.io, Human Security |
| Bid enrichment | 7 | Modify bid structure, floors, or filtering | PubXai (floors), Greenbids (bid shaping), Hadron |
| Other | 15 | Device detection, video context, timeout control, measurement | 51Degrees, WURFL, JW Player, Chrome AI Topics |
What they send
Most RTD modules send some combination of: full page URL, referrer, viewport dimensions, ad unit structure (bidders, sizes, params), existing user eIDs, and consent strings. Many send the complete OpenRTB BidRequest (2-10KB). The vendor API returns data that gets injected into:
- `ortb2.user.data`: user segments sent to all bidders
- `ortb2.site.ext`: site-level extensions
- `ortb2Imp.ext`: per-impression extensions
- Per-bidder `ortb2Fragments`: bidder-specific targeting
- Ad server targeting: GAM key-values
The problems
- 61 modules, same pattern. Each is a separate integration with its own configuration, testing, and Prebid PR cycle. New vendors require new modules.
- Large payloads. Sending the full BidRequest (2-10KB) when most modules only need page context and ad unit codes.
- No privacy separation. User identity and page context travel in the same request. Privacy depends on field-level masking; one missed field leaks data.
- No standard protocol. Vendors define their own request/response formats. Switching vendors means rewriting the integration.
What TMP can replace
Of the 61 modules, approximately 38 (contextual + audience + brand safety + bid enrichment) follow the "fetch data, enrich request" pattern that TMP standardizes. Specifically:
| TMP operation | Replaces | How |
|---|---|---|
| Context Match | Contextual modules (IAS scores, Browsi predictions, keyword extraction, content classification) | Page context in, targeting signals out. Same data, standard format. |
| Identity Match | Audience modules (Permutive cohorts, Sirdata segments, Experian RTID, LiveIntent segments) | Opaque user token in, per-package eligibility out. Segments flow as signals. |
| Both | Hybrid modules (Sirdata, 1plusX, Optable) that do contextual + audience enrichment | Two operations, structurally separated. |
Proposal
Make TMP (Trusted Match Protocol) a core Prebid capability, configured via `pbjs.setConfig()` in Prebid.js and via YAML in Prebid Server. Publishers register TMP providers the same way they register bidder adapters: declare them, configure endpoints, done.
TMP is an open protocol (part of AdCP) that standardizes what RTD modules do
today. It defines two operations:
- Context Match: page context in, offers + targeting signals out. No user data.
- Identity Match: opaque user token in, per-package eligibility out. No page data.
What changes in Prebid.js
Configuration
No `pbjs.que.push` boilerplate, just config.
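A minimal sketch of what that configuration could look like; the `tmp` key, field names, and values below are illustrative assumptions, not a finalized schema:

```js
// Hypothetical global TMP configuration via pbjs.setConfig().
// Key names ("tmp", "providers", "identityDelayMs") are assumptions for illustration.
pbjs.setConfig({
  tmp: {
    providers: [
      {
        name: 'scope3',                                  // provider label
        endpoint: 'https://tmp.scope3.example/context',  // Context Match endpoint (placeholder URL)
        timeout: 150                                     // per-provider timeout in ms
      }
    ],
    identityDelayMs: [100, 2000]  // randomized Identity Match delay window (spec default range)
  }
});
```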
Per-ad-unit configuration
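One possible per-ad-unit shape, purely illustrative; the `tmp` property and its fields are assumptions the Prebid team would define:

```js
// Hypothetical per-ad-unit TMP overrides.
const adUnits = [{
  code: 'div-leaderboard',
  mediaTypes: { banner: { sizes: [[728, 90]] } },
  bids: [{ bidder: 'exampleBidder' }],
  tmp: {
    placementId: 'homepage-leaderboard',  // placement identifier sent in Context Match
    providers: ['scope3']                 // restrict which configured providers run for this unit
  }
}];
pbjs.addAdUnits(adUnits);
```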
What Prebid.js does internally
1. On auction init: for each ad unit with a `tmp` config, call `buildContextMatchRequest()` from `@adcp/client/tmp` and send the request to the router.
2. On context match response: store offers and signals per ad unit.
3. After a randomized temporal delay: call `buildIdentityMatchRequest()` with the user's token (from an existing identity module) and ALL active package IDs, and send it to the router.
4. On identity match response: call `joinResults()` to intersect offers with eligibility, then `toTargetingKVs()` to flatten the result to key-values.
5. Set targeting: apply the key-values to the ad unit before bid requests go out.
GAM line items match on `adcp_pkg` and `adcp_seg`. Signals also flow to bidders via `ortb2.site.ext` and `ortb2.user.data`, the same injection points RTD modules use today. Bidders see enriched bid requests without needing to know TMP exists.
The `@adcp/client/tmp` package handles steps 1, 3, and 4 as pure functions. Prebid handles the HTTP calls, timing, and ad unit targeting: exactly what Prebid is good at.
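A sketch of how an adapter might wire these steps together. The function names come from `@adcp/client/tmp` as described above, but their signatures, the router transport, and the config shape used here are assumptions:

```js
// Illustrative sketch only; not the published SDK API.
import {
  buildContextMatchRequest,
  buildIdentityMatchRequest,
  joinResults,
  toTargetingKVs
} from '@adcp/client/tmp';

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Hypothetical transport: Prebid owns the HTTP call; the SDK functions stay pure.
async function postToRouter(url, body) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(body)
  });
  return res.json();
}

async function runTmpForAdUnit(adUnit, userToken, config) {
  // Step 1: Context Match fires immediately on the auction critical path (no user data).
  const ctxReq = buildContextMatchRequest({ adUnit, pageUrl: window.location.href });
  const ctxRes = await postToRouter(config.routerUrl, ctxReq);

  // Step 3: Identity Match is deferred by a randomized delay (temporal decorrelation).
  const [min, max] = config.identityDelayMs;
  await sleep(min + Math.random() * (max - min));
  const idReq = buildIdentityMatchRequest({
    userToken,                                          // opaque token, no page data
    packageIds: ctxRes.offers.map((o) => o.package_id)  // ALL active package IDs
  });
  const idRes = await postToRouter(config.routerUrl, idReq);

  // Step 4: intersect offers with eligibility, then flatten to GAM key-values
  // (adcp_pkg, adcp_seg). Prebid applies these to the ad unit in step 5.
  return toTargetingKVs(joinResults(ctxRes, idRes));
}
```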
Dependency: @adcp/client/tmp
- Zero dependencies, under 3KB gzipped
- Tree-shakeable: only the functions Prebid uses get bundled
- Types + pure functions: no network calls, no side effects
- Prebid already supports npm dependencies for core modules
- Ed25519 request signing is handled by `@adcp/client/tmp` when a signing key is configured (see Request signing below for the requirements that apply to both Prebid.js and PBS)
What changes in Prebid Server
YAML configuration
What Prebid Server does internally
Uses `adcp-go/tmp/client` for:
- Fan-out to configured providers in parallel over HTTP/2
- Per-provider timeouts with graceful degradation
- Response merging (offers concatenated, signals merged, eligibility conservative-merged; see the sketch after this list)
- Ed25519 request signing (see Request signing below)
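The server-side implementation lives in `adcp-go/tmp/client`; the JavaScript below is a language-agnostic sketch of the merge semantics, assuming "conservative" means a package stays eligible only if every responding provider marks it eligible, and assuming illustrative response field names:

```js
// Illustrative merge of per-provider TMP responses. Field names (offers, signals,
// eligibility) and the "conservative" rule are assumptions for this sketch.
function mergeProviderResponses(responses) {
  const merged = { offers: [], signals: {}, eligibility: {} };
  for (const res of responses) {
    merged.offers.push(...(res.offers ?? []));          // offers are concatenated
    Object.assign(merged.signals, res.signals ?? {});   // signals are merged by key
    for (const [pkg, ok] of Object.entries(res.eligibility ?? {})) {
      // Conservative merge: a package stays eligible only if no provider says otherwise.
      merged.eligibility[pkg] = (merged.eligibility[pkg] ?? true) && ok;
    }
  }
  return merged;
}
```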
The module runs in the processed-auction-request hook stage, after imps are parsed and before bid requests are dispatched, so it can read per-imp `tmp` ext and set targeting key-values without racing bidder fan-out.
Temporal decorrelation in a server-side embed
The goal of temporal decorrelation is to make the pairing of Context Match and Identity Match ambiguous to a buyer agent, or to a network observer sitting between router and buyer, rather than to add delay for its own sake. The spec's 100-2000ms random delay is one lever; volume, batching, and cross-page caching are others. Publishers pick a profile that suits their traffic and latency budget.
Delay applies to Identity Match, not Context Match. Context Match fires on the auction critical path, and delaying it has no decorrelation value (correlation is about the relative timing of the pair, not their absolute arrival). Delaying both independently widens the observable window but does not improve pairing ambiguity and doubles the latency cost. Delaying both by a correlated random amount gains nothing at all. The deferrable side is Identity; that is where randomization goes.
Levers publishers combine (a sketch of the caching-plus-delay combination follows the profiles below):
- Volume / k-anonymity. A high-traffic publisher firing hundreds or thousands of Identity Match requests per second across users on the same placement naturally produces an ambiguous pairing set. A short or zero explicit delay is defensible when traffic volume alone prevents individual-pair recovery. Low-traffic publishers have no such cover and need explicit delays toward the top of the recommended range.
- Cross-page caching. Cache Identity Match responses per `user_token` for the buyer's `ttl_sec` (typically 60s+). Cache hits produce no wire request, so most auctions have no pair to correlate. Cache misses still emit an Identity Match; publishers pair this with an explicit delay or rely on volume. Each cache refresh MUST generate a new Identity Match `request_id`; reusing a cached `request_id` on a new wire send violates the spec's per-epoch dedup requirement.
- Batching. Identity Match requests for multiple users, bundled on the wire within a short window, destroy individual-pair timing. The spec permits this: publishers MAY batch Identity Match requests across multiple page views.
- Explicit delay on Identity. A uniform random delay drawn from an interval appropriate to traffic volume. 100-2000ms is a default that works for most publishers; smaller windows are defensible with high volume, larger ones with low volume. Context Match fires immediately.
Example profiles:
- A large publisher with heavy traffic and aggressive caching may set the Identity delay to 0 and rely on volume + caching. The trade-off is auditability: "we're not delaying" needs a defense and a measurable k-anonymity target.
- A small or long-tail publisher picks a longer explicit delay (closer to 2000ms) and runs Identity on a background schedule or a client companion so the auction critical path stays under the timeout budget.
- A hybrid deployment fires Context Match server-side on the auction critical path and defers Identity Match to a Prebid.js companion after a randomized post-auction delay. The decorrelation comes from the delay, not the origination source (the router still emits to the buyer from its own egress IP), but client origination defeats publisher-edge client-IP correlation and removes identity from the auction hot path.
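A minimal sketch of the cross-page cache combined with an explicit randomized delay. `user_token`, `ttl_sec`, and `request_id` come from the text above; the cache structure, transport callback, and everything else is a hypothetical illustration:

```js
// Illustrative sketch of the cross-page cache + randomized Identity delay levers.
const identityCache = new Map(); // user_token -> { response, expiresAt }

async function identityMatchWithDecorrelation(userToken, packageIds, sendToRouter) {
  const cached = identityCache.get(userToken);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.response;           // cache hit: no wire request, nothing to correlate
  }

  // Explicit delay on Identity only; Context Match has already fired immediately.
  const delayMs = 100 + Math.random() * 1900;   // spec's default 100-2000ms window
  await new Promise((resolve) => setTimeout(resolve, delayMs));

  const request = {
    request_id: crypto.randomUUID(),  // fresh request_id on every wire send (per-epoch dedup)
    user_token: userToken,
    package_ids: packageIds
  };
  const response = await sendToRouter(request);
  identityCache.set(userToken, {
    response,
    expiresAt: Date.now() + (response.ttl_sec ?? 60) * 1000  // honor the buyer's ttl_sec
  });
  return response;
}
```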
Whatever profile a publisher chooses, it should be declared in `adagents.json` so auditors and users can see what temporal properties the publisher is actually providing. "Pure parallel with no explicit delay and no volume defense" is a valid configuration only if the publisher is willing to surface it; it does not satisfy the spec's temporal decorrelation SHOULD on its own.
Request signing
Per the spec's Request Authentication model, the router signs all TMP requests, both Context Match and Identity Match, with Ed25519. Providers verify signatures using the publisher's public key from the property registry. Providers typically sample-verify (e.g., 5% of requests) rather than verify every request to keep per-request cost under 30µs; sustained failures trigger property suppression. This prevents unauthorized parties from probing provider targeting logic by forging requests.
Implementations using `adcp-go/tmp/client` inherit outbound signing: the client loads the publisher's signing key at startup and signs every request. Verification cost sits on the provider side and isn't affected. Implementations building against the TMP schemas directly, without the SDK, must implement:
- `X-AdCP-Signature` and `X-AdCP-Key-Id` headers on every request.
- Daily-epoch replay window (`floor(unix_timestamp / 86400)`); see the signature envelope for per-message-type signed-field ordering. A sketch of the epoch and signature-cache keying follows after this list.
- Per-provider signatures. Context Match signatures are bound to `provider_endpoint_url`, so the router generates one signature per fan-out target. Signature caches key on `(placement_id, provider_endpoint_url)`, not `placement_id` alone.
- Signature invalidation on active-package-set change. Context Match signatures cover the sorted `package_ids` list; when the buyer's active set changes, cached per-placement-per-provider signatures must be regenerated. Daily-epoch rollover alone isn't sufficient.
- Key rotation and revocation via `agent-signing-key.json`; providers cache keys with a 5-minute TTL and honor the `revoked_at` marker.
- Signing key storage. The publisher's Ed25519 private key is high-value material: if leaked, it authorizes forged Context Match and Identity Match requests to the providers for which the router holds valid registrations, for the remainder of the ~48-hour replay window. Store it in an HSM or KMS, not a mounted file. The spec supports explicit revocation via `revoked_at`, which propagates within the 5-minute cache TTL, but revocation does not retroactively invalidate signatures already captured before the revocation timestamp. Rotate and revoke proactively on any suspicion.
- End-to-end signing verification before go-live. Per spec §Signature verification, providers SHOULD suppress a property for 24 hours on verification failure. Misconfigured signing is silent-then-catastrophic; run a signed probe against at least one provider before flipping traffic.
- 401 handling. Treat signature verification failures as non-retryable; exclude the provider from the current auction and alert operations. Sustained failures indicate key rotation drift, clock skew across the epoch boundary, or a `provider_endpoint_url` mismatch between the router's provider registration and the provider's self-advertised endpoint.
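A sketch of the epoch computation, per-provider cache keying, and Ed25519 signing, shown with Node.js stdlib crypto. The header names come from the list above; the canonicalization of signed fields and the base64 encoding are placeholders (the spec's signature envelope defines the real per-message-type ordering):

```js
// Illustrative signing sketch; not a spec-conformant envelope.
import { createPrivateKey, sign } from 'node:crypto';

// Ed25519 key loaded from PEM for illustration; prefer an HSM/KMS in production.
const signingKey = createPrivateKey(process.env.TMP_SIGNING_KEY_PEM);

// Daily-epoch replay window: floor(unix_timestamp / 86400).
const epoch = () => Math.floor(Date.now() / 1000 / 86400);

// Per-provider signature cache: keyed on (placement_id, provider_endpoint_url),
// plus the epoch and the sorted package_ids list, so a change to any of them invalidates it.
const signatureCache = new Map();

function signContextMatch(placementId, providerEndpointUrl, packageIds, keyId) {
  const sortedPkgs = [...packageIds].sort();
  const cacheKey = [placementId, providerEndpointUrl, epoch(), sortedPkgs.join(',')].join('|');
  if (signatureCache.has(cacheKey)) return signatureCache.get(cacheKey);

  // Placeholder canonical payload; the spec's signature envelope defines the real ordering.
  const payload = Buffer.from(JSON.stringify({
    placement_id: placementId,
    provider_endpoint_url: providerEndpointUrl,
    package_ids: sortedPkgs,
    epoch: epoch()
  }));
  const headers = {
    'X-AdCP-Signature': sign(null, payload, signingKey).toString('base64'), // encoding is an assumption
    'X-AdCP-Key-Id': keyId
  };
  signatureCache.set(cacheKey, headers);
  return headers;
}
```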
Dependency: adcp-go/tmp
- Standard Go module, no CGO
- HTTP/2 client with connection pooling (stdlib `net/http`)
- Ed25519 signing (stdlib `crypto/ed25519`)
- No external dependencies beyond Go stdlib
Migration from Scope3 RTD module
Scope3 is the first TMP provider. Migration for publishers currently using the Scope3 RTD module:
Step 1: Scope3 exposes TMP endpoint
Scope3 adds a TMP-compatible endpoint alongside their existing RTD API. The endpoint accepts `ContextMatchRequest` and returns `ContextMatchResponse`. Scope3's existing contextual targeting, content classification, and enrichment signals map directly to TMP offers and signals.
Step 2: Publisher switches config
Before (Scope3 RTD module):
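The "before" below uses Prebid's standard realTimeData provider shape with illustrative parameters; the "after" mirrors the hypothetical `tmp` config sketched earlier and is equally illustrative:

```js
// Before: Scope3 via Prebid's realTimeData config (provider name and params illustrative).
pbjs.setConfig({
  realTimeData: {
    auctionDelay: 250,
    dataProviders: [{
      name: 'scope3',
      params: { orgId: 'pub-123' }   // hypothetical parameter
    }]
  }
});

// After: the same vendor as a TMP provider endpoint (hypothetical "tmp" config key).
pbjs.setConfig({
  tmp: {
    providers: [{
      name: 'scope3',
      endpoint: 'https://tmp.scope3.example/context'   // placeholder URL
    }]
  }
});
```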
Step 3: Deprecate Scope3 RTD module
Once publishers have migrated, the vendor-specific module can be deprecated. Other vendors (DoubleVerify, IAS, etc.) can expose TMP endpoints and join the same config, with no new Prebid modules needed.
Benefits for Prebid
Fewer modules to maintain
One TMP adapter in Prebid core replaces up to 38 vendor RTD modules (the contextual, audience, brand safety, and bid enrichment categories). Each vendor becomes a provider endpoint in config. New vendors don't require new Prebid modules, PRs, or releases.
Smaller payloads
| | RTD module (today) | TMP |
|---|---|---|
| Request size | 2-10KB (full BidRequest) | 200-600 bytes |
| What's sent | Everything OpenRTB has | Only page context |
| User data in request | Yes (masked) | No (structural separation) |
Privacy by design
RTD modules send user identity and page context in the same request. Privacy depends on field-level masking: one missed field leaks data. TMP separates context and identity into different requests on different code paths. The context path never has access to identity data. This is structural, not policy-based. TEE attestation can make it independently verifiable.
Open provider ecosystem
Any company can become a TMP provider by exposing a standard HTTP/2 endpoint. No Prebid module PR needed. No vendor-specific configuration format. Publishers add providers in config the same way they add bidder adapters.
Aligns with Prebid's direction
Prebid already standardized demand (bidder adapters) and identity (userId modules). TMP standardizes the remaining piece: real-time contextual and identity-based enrichment. The pattern is the same: define a protocol, let vendors implement it, publishers configure endpoints.
Reference adapter: Prebid.js
This is a starting point for the Prebid team to adapt to Prebid.js internals. It uses `@adcp/client/tmp` for data transformation and Prebid's hooks for lifecycle integration.
Reference adapter: Prebid Server (Go)
Timeline
- SDK development: build `@adcp/client/tmp` and `adcp-go/tmp` against the TMP schemas shipping in AdCP 3.0.
- Reference adapters: working Prebid.js and Prebid Server adapters that the Prebid team can review and adapt.
- Prebid proposal submission: submit to Prebid.org with working code, performance benchmarks (payload size, latency), and a Scope3 migration plan.
- Scope3 TMP endpoint: Scope3 ships a TMP-compatible endpoint.
- Publisher pilot: one publisher runs TMP via Prebid alongside the existing Scope3 RTD module, with an A/B comparison.
- Prebid core merge: the Prebid team adapts the reference adapters to their codebase standards and merges.