Google Search can prefetch a page from a third-party cache and still land the user on your own URL. That sounds like a violation of the web’s trust model until you look at Signed HTTP Exchanges closely. The trick is not URL rewriting. It is a signed payload that lets a distributor serve a response while the browser attributes it to the publisher’s origin, within strict certificate and validity constraints. For teams chasing sub-second search-entry latency, Signed HTTP Exchanges are one of the few delivery mechanisms that can reduce navigation cost without handing branding and attribution to an intermediary.

The baseline problem is simple: shared caches are good at moving bytes closer to users, but the web security model normally prevents a third party from serving your document as if it came from your origin. If a cache serves the HTML directly from its own host, the visible URL changes, cookies partition differently, origin-bound state changes, and publisher trust signals degrade. That trade-off is acceptable for some assets. It is usually unacceptable for primary documents.
Signed exchanges change that by packaging an HTTP response with a signature from the publisher. A distributor can cache and deliver the package, and the browser can verify that the publisher authorized the response for a specific URL. The practical use case most engineers care about is search-driven entry traffic. Search can prefetch the signed exchange, then commit navigation to the publisher URL instead of a cache URL.
The naive alternative is to rely on ordinary CDN caching plus aggressive preconnect or prefetch hints. That helps, but only after the browser has decided it is safe to contact your origin or your CDN hostname directly. The whole value of signed exchanges is cross-origin distribution without losing first-party URL presentation.
Operationally, a signed exchange (SXG) is a serialized HTTP response plus metadata and a signature. The browser validates the signature, certificate chain, URL binding, and freshness window before treating the exchange as authoritative for the claimed URL. The mechanism sits inside the broader web packaging work, but for most production teams the mental model is narrower: SXG is not “site mirroring.” It is a constrained, signed distribution format for a specific response.
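As a mental model only, that browser-side acceptance gate can be sketched as a few explicit checks. The `Exchange` structure and helper below are invented for illustration; real verification is cryptographic and lives in the browser, not in publisher code.

```python
from dataclasses import dataclass

@dataclass
class Exchange:
    url: str                # URL the publisher signed the response for
    claimed_url: str        # URL the distributor claims the exchange covers
    date: int               # signature date (unix seconds)
    expires: int            # signature expiry (unix seconds)
    signature_valid: bool   # stand-in for real cryptographic verification
    cert_chain_valid: bool  # stand-in for SXG-specific certificate checks

def exchange_is_acceptable(xg: Exchange, now: int) -> bool:
    """Illustrative acceptance test: every check must pass before the
    browser may attribute the response to the publisher's URL."""
    return (
        xg.cert_chain_valid              # chain must be valid for SXG use
        and xg.signature_valid           # payload signature must verify
        and xg.url == xg.claimed_url     # URL binding: signed-for == claimed
        and xg.date <= now < xg.expires  # freshness window
    )
```

If any check fails, the exchange is simply not authoritative and the browser falls back to ordinary navigation.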
That constraint matters. Signed exchanges are not a general replacement for normal navigation. They fit best when the response is cacheable, deterministic enough to sign ahead of time, and useful to prefetch from an external cache. If your page depends on highly personalized HTML at request time, the fit gets bad quickly.
The most cited performance effect is on search-entry Largest Contentful Paint. Public guidance from the Chrome and web.dev ecosystem reports average LCP reductions in the 300 ms to 400 ms range for SXG-enabled prefetches, and Cloudflare’s published testing reported TTFB improvements for 98% of tested sites with LCP improvement for 85% of eligible page loads, with median LCP improvement above 20% on those loads. Those are not universal wins across all traffic. They are wins on the subset of navigations where the cache can prefetch and deliver a valid signed exchange.
The engineering takeaway is that signed exchanges mainly shave off navigation startup work. You are reducing one or more of the following on eligible entries: origin selection latency, connection establishment to your delivery stack, queueing behind cold edge state, and the delay between result click and first document bytes. The improvement is often visible at p50 and p75, but the strategic value is in p95 search-entry latency where mobile radio transitions and geographic distance amplify startup costs.
As of 2026, a realistic framing for signed exchanges in Google Search is that the win is real but scoped: it shows up on eligible, prefetched search landings, not across your whole session mix.
If you want a rough budget model, treat the upside as a 300 ms class improvement on eligible search landings, not on all sessions. Then compare that with your current p75/p95 LCP split by referrer class. Many teams instrument only global LCP and miss that search-entry pages have a different latency envelope than direct or internal navigations.
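A back-of-envelope version of that budget model, with made-up traffic numbers (the 25% eligible share is hypothetical; the 300 ms figure is the class of improvement cited above):

```python
def fleet_lcp_saving_ms(eligible_share: float, per_landing_saving_ms: float) -> float:
    """Expected average LCP saving across all search landings, given that
    only a fraction of them arrive via a valid prefetched SXG."""
    return eligible_share * per_landing_saving_ms

# Hypothetical: 25% of search landings are SXG-eligible, ~300 ms saved on each.
print(fleet_lcp_saving_ms(0.25, 300))  # → 75.0 (ms, averaged over all search landings)
```

The point of the exercise is the dilution: a 300 ms per-landing win becomes a much smaller fleet-level average, so measure the eligible cohort separately.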
Signed exchanges let a referrer-side cache prefetch without exposing user-specific request context to your origin before navigation. That matters on mobile paths where one extra network round trip can dominate the first few hundred milliseconds. It also matters on congested networks where connection setup to your stack competes with radio wake-up and DNS cache misses.
The subtle point many teams miss is that SXG is not merely “HTML from cache.” It changes who is allowed to fetch before the click, and that is where the latency win originates. A normal shared cache cannot safely do that and still present the first-party URL.
How can a third-party cache serve your page under your URL at all? The answer is signature-bound authority, not delegation of origin. The publisher signs a response for a specific URL and validity interval. A third-party cache distributes that package. The browser verifies that the publisher certified the content for that URL, then commits navigation under the publisher’s address if the package is still valid. The distributor never becomes the origin in browser security terms.
At a high level, the flow looks like this:

1. The publisher renders a deterministic response for a specific URL and signs it with a bounded validity window.
2. A distributor, such as a search-facing cache, fetches and stores the signed exchange.
3. The referrer prefetches the exchange before the click, without exposing user-specific request context to the publisher’s origin.
4. On click, the browser verifies the signature, certificate chain, URL binding, and freshness window.
5. If verification passes, navigation commits under the publisher’s URL; otherwise the browser falls back to a normal fetch.
The hard part is the word deterministic. If the response varies by user, cookie, negotiation quirks, or hidden edge logic, signing becomes operationally fragile. You are effectively taking a document that used to be generated per request and forcing it into a signed, portable representation with a bounded lifetime.
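One cheap guard against that fragility is to render the same URL twice and compare digests before signing. A sketch, with a hypothetical `render` callable standing in for your page generator:

```python
import hashlib
from typing import Callable

def is_deterministic(render: Callable[[str], bytes], url: str) -> bool:
    """Render the same URL twice; any byte-level difference (timestamps,
    request IDs, nonces) makes the page unsafe to sign ahead of time."""
    first = hashlib.sha256(render(url)).hexdigest()
    second = hashlib.sha256(render(url)).hexdigest()
    return first == second
```

Wire this into CI so a template change that introduces hidden variance fails the build instead of the signer.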
Why not just reverse proxy the distributor under your domain, or mask URLs? Because those approaches break the security and attribution model in different ways. Reverse proxying from a distributor under your domain can work if you own the distribution path end to end, but that is not what signed exchanges were built for. SXG exists for situations where an external cache can distribute a document while preserving publisher URL identity. URL masking, if treated as a shortcut, does not provide the browser-verifiable integrity guarantees that SXG provides.
A production design for Signed HTTP Exchanges usually has five moving parts: deterministic page generation, SXG signing, certificate lifecycle management, edge distribution, and observability. The design choice that matters most is where signing happens. If you sign at origin build time, you get stability but less freshness. If you sign at the edge, you get lower content staleness and more operational complexity.
| Approach | Best fit | Latency upside | Operational cost | Common failure mode |
|---|---|---|---|---|
| Build-time signing | Static or mostly static publishing flows | High on evergreen documents | Low to moderate | Expired validity window or stale signed artifact |
| Origin-time signing | Dynamic publishing with controlled cache policy | Moderate to high | Moderate | Signer bottleneck under revalidation bursts |
| Edge-time signing | Large fleets with strict freshness goals | Moderate to high | High | Certificate distribution, clock skew, inconsistent variants |
For most teams, build-time or origin-time signing is the sane starting point. Edge-time signing sounds attractive until you factor in key isolation, clock discipline, invalidation races, and the need to guarantee that all edge paths normalize the response identically before signing.
This is one of the places where the delivery platform matters more than the feature checklist. If you are evaluating BlazingCDN alongside CloudFront or other large providers for document-heavy delivery, the practical requirements are predictable caching behavior, flexible configuration around variant handling, and enough fault tolerance that signed and unsigned paths fail independently instead of taking the whole publishing path down. Cost also matters here: for teams distributing large document volumes or mixed media plus HTML, pricing that starts at $4 per TB and scales down to $2 per TB at 2 PB+ changes the economics of keeping both SXG and non-SXG variants hot, especially when you need enterprise-grade stability comparable to Amazon CloudFront with more flexible commercial terms and fast scaling under demand spikes.
The shortest honest answer is that implementation is less about one magic header and more about making your HTML signing-safe. The workflow has four stages: make pages eligible, generate SXG artifacts, publish them on a stable path, and validate that search-facing systems accept them.
Your canonical document needs stable content and stable headers. Kill hidden variance. That includes non-deterministic timestamps in HTML, request-specific IDs embedded in markup, and cache key drift caused by header noise. If the page varies, sign only the variant you can reason about.
A practical normalization checklist:

- Remove render timestamps, build IDs, and per-request identifiers from the markup.
- Pin content-type and content-encoding so the signed response carries stable headers.
- Keep cache-control explicit and identical across every path that feeds the signer.
- Collapse header noise that causes cache key drift, and declare any remaining variance with Vary.
- If a variant must exist, sign only the one variant you can fully reason about.
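Part of that normalization can be automated before the signer runs. A sketch that strips two common sources of byte-level variance; both patterns are illustrative and must be adapted to your actual markup:

```python
import re

def normalize_html(html: str) -> str:
    """Strip common non-deterministic fragments before signing.
    The patterns below are examples, not a complete audit."""
    # Hypothetical render-timestamp comment, e.g. <!-- rendered: 2026-01-01T00:00:00Z -->
    html = re.sub(r"<!-- rendered: [^>]*-->", "", html)
    # Hypothetical per-request ID attribute, e.g. data-request-id="abc123"
    html = re.sub(r'\sdata-request-id="[^"]*"', "", html)
    return html
```

After normalization, two renders of the same page should hash identically; if they do not, you still have hidden variance to hunt down.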
Teams often use a dedicated signer in CI or in the publish pipeline. Pseudocode for the process looks like this:
```
input_url = "https://example.com/article/123"
input_status = 200
input_headers = [
    [":status", "200"],
    ["content-type", "text/html; charset=utf-8"],
    ["cache-control", "public, max-age=300"],
    ["content-encoding", "mi-sha256-03"]
]
input_body = canonical_html_bytes

cert_chain = load_sxg_certificate_chain()
private_key = load_signing_key()

sxg = sign_exchange(
    url         = input_url,
    status      = input_status,
    headers     = input_headers,
    body        = input_body,
    cert_chain  = cert_chain,
    private_key = private_key,
    date        = now_utc(),
    expires     = now_utc_plus_minutes(5)
)

store("article-123.sxg", sxg)
publish_variant(url = input_url, artifact = "article-123.sxg")
```
The exact tooling differs, but the engineering concerns are consistent: body digest, signature envelope, certificate compatibility, and expiry. Keep expiry tight enough to reduce replay risk and stale content exposure, but not so tight that regeneration jitter creates gaps.
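The expiry-versus-regeneration tension is plain arithmetic: regeneration plus propagation must finish comfortably inside the validity window, or eligibility flaps. A sketch with illustrative numbers:

```python
def expiry_headroom_s(validity_s: int, regen_interval_s: int, propagation_s: int) -> int:
    """Seconds of slack between the moment a fresh SXG is live everywhere
    and the moment the previous one expires. Negative headroom means
    intermittent eligibility loss."""
    return validity_s - (regen_interval_s + propagation_s)

# Hypothetical: 5-minute validity, regeneration every 3 minutes, 30 s to propagate.
print(expiry_headroom_s(300, 180, 30))  # → 90
```

Alert on headroom shrinking toward zero rather than on eligibility loss itself; the former is a leading indicator, the latter is already an incident.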
Your system still needs a normal HTML representation. Signed exchange delivery is additive, not a replacement. You will often maintain two cacheable outputs for the same canonical page: the ordinary HTML response and the signed exchange artifact intended for search-facing distribution.
A simplified NGINX-style routing example:
```
map $http_accept $sxg_variant {
    default                                 "";
    "~*application/signed-exchange;v=b3"    ".sxg";
}

server {
    listen 443 ssl http2;
    server_name example.com;

    location /articles/ {
        try_files $uri$sxg_variant $uri /index.html =404;
        add_header Vary "Accept";
    }

    location ~ \.sxg$ {
        # The value must be quoted: the embedded ";" would otherwise
        # terminate the directive and break the config.
        default_type "application/signed-exchange;v=b3";
        add_header Cache-Control "public, max-age=300";
    }
}
```
The point is not the specific syntax. The point is explicit variant control. If you let SXG and HTML collapse into the same cache key accidentally, debugging gets ugly fast.
For Google Search use cases, success is not “the file exists.” Success is that the crawler fetches it, validation passes, and the search path actually uses it. Treat validation as a pipeline with separate stages: fetchability, certificate acceptance, signature verification, freshness, and content consistency with the canonical document.
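That pipeline view can be made literal: run the stages in order and report the first one that fails, so a generic “invalid” status becomes a named stage. A sketch with hypothetical stage predicates over a context dict:

```python
from typing import Callable, Optional

# Each stage: a name plus a predicate over a context dict of check results.
Stage = tuple[str, Callable[[dict], bool]]

STAGES: list[Stage] = [
    ("fetchability",        lambda c: c["http_status"] == 200),
    ("certificate",         lambda c: c["cert_ok"]),
    ("signature",           lambda c: c["sig_ok"]),
    ("freshness",           lambda c: c["now"] < c["expires"]),
    ("content_consistency", lambda c: c["signed_digest"] == c["canonical_digest"]),
]

def first_failing_stage(ctx: dict) -> Optional[str]:
    """Return the name of the first failing stage, or None if all pass."""
    for name, check in STAGES:
        if not check(ctx):
            return name
    return None
```

Logging the stage name per URL turns a flaky validation dashboard into a ranked bug list.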
AMP signed exchange adoption was one of the earliest visible deployments because it solved a specific product problem: cached AMP pages could be shown on the publisher’s canonical URL rather than a cache URL. That preserved brand and URL identity while still allowing the performance benefits of distributed prefetch and cache delivery.
If you still operate AMP, the engineering pattern is similar to non-AMP SXG, but the implementation path is stricter because the AMP document and its canonical relationships already have tight validation rules. In practice:
The easy mistake is to think AMP signed exchange errors are always certificate problems. In many real systems they come from content drift, canonical mismatches, or a publishing path that updates HTML faster than SXG regeneration.
Most failures cluster into five buckets. You can debug them systematically instead of staring at a generic validation status.
If the certificate chain is not acceptable for SXG use, or if the exchange validity is outside the allowed window, the package will not verify. Watch renewal timing, signer clock skew, and deployment lag between certificate rotation and signer rollout. This is the first place to look after an otherwise healthy system suddenly drops SXG eligibility.
If the signed content does not match the canonical target you think you are serving, search-facing validation gets flaky. Common causes are A/B fragments leaking into the signed path, CMS-side late mutation, or device-class templating that was never folded into the cache key.
Headers that seem harmless in ordinary delivery can become a problem in signed delivery. Inconsistent content-type, surprise redirects, and variant-specific cache-control are common offenders. Make the signer consume the exact post-normalization response, not a pre-edge approximation.
If the signed exchange expires before regeneration completes across all paths, you get intermittent eligibility loss. This often shows up as a sawtooth pattern after deploys or around editorial bursts. The fix is rarely “extend the validity forever.” It is usually better regeneration orchestration, prewarming, and tighter cache invalidation discipline.
Teams often have good logs for origin HTML and terrible logs for SXG generation. Add explicit metrics:

- Certificate days to expiry, per signing key.
- SXG generation success rate, split by template class.
- HTML-to-SXG revision mismatch rate between the canonical document and the signed artifact.
- Lag between HTML publish and signed-exchange regeneration, compared against the validity window.
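The revision-mismatch metric in particular is cheap to compute: hash the canonical HTML and the body you actually signed, and count disagreements across a sample. A sketch (the sampling mechanism is up to you):

```python
import hashlib

def revision_mismatch(canonical_html: bytes, signed_body: bytes) -> bool:
    """True when the signed artifact no longer matches the canonical HTML,
    i.e. the publishing path outran SXG regeneration."""
    return hashlib.sha256(canonical_html).digest() != hashlib.sha256(signed_body).digest()

def mismatch_rate(pairs: list[tuple[bytes, bytes]]) -> float:
    """Fraction of sampled pages whose signed body drifted from canonical."""
    if not pairs:
        return 0.0
    return sum(revision_mismatch(c, s) for c, s in pairs) / len(pairs)
```

A rising mismatch rate after deploys is the sawtooth pattern described above, caught before users see stale signed content.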
Ordinary CDN caching reduces distance to content. Signed exchanges reduce distance plus trust-boundary friction for specific cross-origin prefetch cases. That distinction is why the two are complementary rather than mutually exclusive.
| Capability | Ordinary document caching | Signed exchange (SXG) |
|---|---|---|
| Third-party cache can distribute HTML | Yes | Yes |
| Browser can present publisher URL on navigation | Not by default | Yes, if validation passes |
| Works well with personalized HTML | Sometimes | Usually no |
| Operational complexity | Moderate | High |
| Main value | General acceleration and origin offload | Cross-origin prefetch with first-party URL presentation |
That is the right frame for architecture reviews. If the team is asking whether signed exchanges replace your CDN strategy, the answer is no. If the team is asking whether SXG is worth adding to a mature CDN and edge caching strategy for high-search, cacheable landing pages, that is the right question.
This section is where most glossy explainers stop. Signed exchanges add real complexity, and some of it is ugly.
The more your primary document varies by user, the less SXG helps. You can push personalization behind async APIs, edge includes, or client rendering, but then you need to ask whether moving work out of the signed shell hurts the very user metrics you were trying to improve.
Editorial sites with frequent updates can create signing churn. If an article updates every few minutes, your signing path becomes part of the publishing critical path. If it lags, users get stale signed content or lose SXG eligibility intermittently.
Certificate lifecycle bugs can take down the feature silently. Treat SXG certificate management like a production control-plane service, not a sidecar script someone wrote during a launch sprint.
When plain HTML works but SXG does not, the problem can sit in origin rendering, build pipeline, cert management, cache variant handling, or search-facing validation. That cuts across platform, web performance, SEO, and sometimes editorial systems. Have one owner for the end-to-end pipeline.
Web packaging and signed exchanges have always had uneven ecosystem support and changing product emphasis. That means you should justify the feature based on your actual entry traffic and measurable search-path gains, not on abstract enthusiasm for web packaging.
Signed HTTP Exchanges fit when you have:

- A meaningful share of entry traffic from search results that can prefetch.
- Cacheable, deterministic landing pages that can be signed ahead of time.
- A publishing pipeline that can own signing, certificates, and regeneration as a production service.
They do not fit well when you have:

- Heavily personalized primary documents generated at request time.
- Content that updates faster than you can regenerate and propagate signed artifacts.
- Little search-entry traffic, so the eligible subset is too small to move fleet metrics.
For teams in the middle, start with a narrow slice. Pick one search-heavy template class. Instrument eligible versus non-eligible search landings. Compare p50, p75, p95 LCP and bounce or session depth. If the gain is material, expand. If not, put your effort into dependency graph cleanup, render-path pruning, or better cache hit ratios first.
On the delivery side, this is also where provider economics become very concrete. If you need to keep multiple variants hot, absorb burst traffic after crawler waves, and still preserve budget headroom, a cost-optimized enterprise-grade CDN can be more important than the SXG feature itself. BlazingCDN is worth a look in that context because it combines 100% uptime, flexible configuration, and fast scaling with pricing that starts at $100 per month for 25 TB and drops to $0.002 per GB at 2 PB+ commitment tiers. For engineering teams comparing platform spend against measurable search-entry latency gains, that cost profile can make controlled SXG rollouts easier to justify.
If you want to evaluate the broader delivery trade space around signed exchanges, cache variants, and enterprise rollout controls, start here: BlazingCDN's enterprise edge configuration.
Do one experiment, not ten. Pick your top search-entry template, split traffic into SXG-eligible and non-eligible cohorts, and chart p75 and p95 LCP for mobile users over seven days. At the same time, instrument three control-plane metrics: certificate days to expiry, SXG generation success rate, and HTML-to-SXG revision mismatch rate.
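Computing the cohort percentiles needs nothing exotic; given per-navigation LCP samples for one cohort, Python’s `statistics.quantiles` yields p75 and p95 directly. A sketch:

```python
import statistics

def p75_p95(lcp_ms: list[float]) -> tuple[float, float]:
    """p75 and p95 of LCP samples, via inclusive quantiles at 5% steps."""
    # n=20 produces 19 cut points at 5%, 10%, ..., 95%.
    q = statistics.quantiles(lcp_ms, n=20, method="inclusive")
    return q[14], q[18]  # 15th cut point = p75, 19th = p95
```

Run it separately for the SXG-eligible and non-eligible cohorts and compare the pairs; a difference that shows up at p95 but not p50 is consistent with the startup-cost framing earlier in the article.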
If those control-plane metrics are noisy, your rollout is not ready no matter how good the first benchmark looks. If they are clean and search-entry LCP moves by the expected 300 ms class range, you have a feature worth operationalizing. If not, the result is still useful: you have proven your bottleneck is elsewhere.