HTTP2 [pdf] (haxx.se)
83 points by kator on Nov 27, 2014 | 19 comments


From the article: “The protocol is only useful for browsers and big services” This is sort of true. One of the primary drivers behind http2 development is fixing the problems of HTTP pipelining. If your use case originally didn't have any need for pipelining, then chances are http2 won't do a lot of good for you.

This may be a big issue, and may impact net neutrality. You get better performance if your stuff is inside a pipe from a Big Service. This makes Google look good. It also increases the relative benefit of running everything through a content delivery service.

This, in turn, creates a use case for CDNs which suck up all the components needed to display a page by whatever means necessary and deliver them to the end user over one HTTP2 connection. It then makes sense for the CDN, not the end user page, to be the advertising insertion point. If ads are loaded from a third-party ad server, they might not show up before the user is done viewing the page. This sometimes happens now. With server-side control of ordering within a single HTTP2 pipe, advertisers tied in with the CDN can be sure that ads will appear when the advertiser wants them to appear.

So there's a net neutrality issue. When multiple streams are multiplexed over a single pipe, the server gets to determine who goes first. Control over that order is valuable. That control may end up in the hands of the CDN, not the site operator.
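
To make "the server gets to determine who goes first" concrete: one lever is HTTP/2 server push, where whoever terminates the connection can put a resource on the wire before the client ever asks for it. A minimal sketch in Go (using net/http's push support, which arrived in Go 1.8, well after this thread; the /ads/preferred.js path and the cert/key files are placeholders):

    package main

    import (
        "log"
        "net/http"
    )

    // Whoever terminates the HTTP/2 connection decides what gets written to
    // the multiplexed pipe first. A CDN in the same position could push its
    // preferred resources before the page's own sub-resources are requested.
    func handler(w http.ResponseWriter, r *http.Request) {
        if pusher, ok := w.(http.Pusher); ok {
            // Push a hypothetical ad script ahead of the HTML response.
            if err := pusher.Push("/ads/preferred.js", nil); err != nil {
                log.Printf("push failed: %v", err)
            }
        }
        w.Write([]byte("<html><script src=\"/ads/preferred.js\"></script>...</html>"))
    }

    func main() {
        http.HandleFunc("/", handler)
        // HTTP/2 push requires TLS; cert.pem/key.pem are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }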


Unless the client is able to change or enforce the priority of the streams, or block a stream altogether with a reset. In that case it should still be possible to strip out the ads through some Adblock-like mechanism.
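
For what that client-side control would look like at the frame level, here is a rough sketch using the golang.org/x/net/http2 Framer; the idea that stream 5 happens to carry an ad is an assumption for illustration, and a real client would write the frames onto the live connection rather than a buffer:

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/net/http2"
    )

    // A client can send PRIORITY to deprioritize a stream and RST_STREAM to
    // cancel it outright. The frames are written into a buffer here just to
    // show the API.
    func main() {
        var buf bytes.Buffer
        framer := http2.NewFramer(&buf, nil)

        // Deprioritize stream 5 (say, an ad resource): minimum weight,
        // dependent on stream 0 (the root), not exclusive.
        _ = framer.WritePriority(5, http2.PriorityParam{
            StreamDep: 0,
            Exclusive: false,
            Weight:    0, // the wire value 0-255 encodes weights 1-256
        })

        // Or refuse the stream entirely, Adblock-style.
        _ = framer.WriteRSTStream(5, http2.ErrCodeCancel)

        fmt.Printf("wrote %d bytes of frames\n", buf.Len())
    }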

I see how this could potentially shift the ad-bearing responsibility to CDNs, but I'm not seeing how it's any more of a net neutrality issue than what we've got now. Could you propose a hypothetical scenario?


Whoever owns the big single pipe gets to determine what goes into it first. A CDN can ensure that preferred ads go in early, so they always appear before the content is fully loaded. Ads that don't go through the CDN may show up, eventually.

There's also an encryption issue. The big pipe is one SSL/TLS session under a single key. Whoever owns the big pipe gets to see everything. Things like embedded Facebook content, normally encrypted with Facebook's keys, can't go through the big pipe unless Facebook accepts the CDN looking at its traffic.

It's not clear how all this plays out, but it definitely implies more centralization.


Unless I missed something, this has nothing to do with net neutrality. Web neutrality maybe, if such thing existed or even mattered.


Slightly off-topic, but I know Google is working on efforts to completely replace TCP with something specifically designed for web applications. Does anyone know if they also plan on optimizing HTTP 2.0 to fit the new transport protocol they're trying to design, or vice versa?

It's currently implemented as a layer over UDP: http://en.wikipedia.org/wiki/QUIC


See this presentation: https://www.youtube.com/watch?v=hQZ-0mXFmk8 Slides, with quote about why quic is being worked on: https://docs.google.com/presentation/u/0/d/13LSNCCvBijabnn1S...


Working well with quic is not a design goal of HTTP/2. However, my understanding is that quic was originally designed to work well with SPDY, and so ought to work well with HTTP/2 as well.

There'll definitely be some Googlers paying attention to the WG to ensure that the two protocols play well together.


Perhaps also/more OT, but in the past couple of months I've occasionally observed my Gmail web connection using QUIC instead of TLS. They are already trialing it publicly and in non-beta use (discounting whether Gmail is still "beta") at least in Chrome stable on Linux.


Good document and a good read. I have a question about using separate CDNs for image serving. The initial HTTP connection to a host takes a while, especially for users in countries with no nearby server, so splitting assets across separate image-hosting servers may actually cause longer total load times under http2. I'd love to hear thoughts on this, as that's always been my suspicion.
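
A back-of-the-envelope way to see why sharding can backfire under http2 is to count handshakes. A sketch with assumed numbers (200 ms RTT to a distant server, 1 round trip for DNS, 1 for the TCP handshake, 2 for a TLS 1.2 handshake):

    package main

    import "fmt"

    // All numbers are assumptions for illustration. Each additional image
    // host pays its own DNS + TCP + TLS setup before delivering a single
    // byte, while one http2 connection pays that cost once and multiplexes
    // every request over the already-warm pipe.
    func main() {
        const rtt = 200.0     // ms, assumed round-trip time for a far-away user
        const setupRTTs = 4.0 // DNS (1) + TCP (1) + TLS 1.2 (2)

        perHost := setupRTTs * rtt
        fmt.Printf("each extra image host: ~%.0f ms of setup before its first byte\n", perHost)
        fmt.Println("a single http2 connection pays that once, then multiplexes the images")
    }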


I wonder how many experimental deployments we're missing out on due to the pay-to-play mandatory SSL CA regime.


https://letsencrypt.org/ is a promising new solution to the 'pay-to-play' system.


But what if the organization behind it goes bankrupt or some browser removes the CA because one of the sites has malware?

I think this whole CA system is needlessly complex. Your registrar should be providing you with a free certificate for your domain and that should be the end of the hassle.


I'm working on Let's Encrypt, and I'd be happy to see domain registrars reduce the need for Let's Encrypt by issuing cryptographic credentials to domain registrants.

Every CA that issues DV certs for public DNS names takes registrars' databases as the ultimate ground truth about domain ownership -- at least for the domains that the CA is willing to issue for -- so the DV-cert-issuing world is reliant on them to be correct, secure, up-to-date, and so on. That's true whether the CA is using whois data plus DNS data, or just DNS data, to verify domain control.

(In saying that, I thought about the idea that Let's Encrypt may use safeguards to limit issuance based on historical observations of domain control and prior issuance history by other CAs. For example, our draft ACME spec has a mechanism where we could ask a requestor to prove control of an existing subject key from a cert that we know was issued for the same subject domain by another CA. So if we've already seen a valid cert in the wild, or in Certificate Transparency, for example.com, we could say that you have to show that you have control of the key in that cert before you can get a new cert from Let's Encrypt for example.com. But all of those things ultimately go back to what registrars said in the past, even if some of them are independent of what registrars say today.)
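
Not the actual ACME wire format, but the shape of that proof-of-possession idea is easy to sketch: the CA picks a nonce, the requester signs it with the private key behind the previously issued certificate, and the CA verifies against that cert's public key. A toy version in Go (the nonce text and the freshly generated key are stand-ins for the real cert's key pair):

    package main

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/sha256"
        "fmt"
    )

    // Rough sketch, NOT the draft ACME mechanism itself: prove control of
    // the key from a certificate the CA has already seen issued for the
    // same domain.
    func main() {
        // Stand-in for the key pair behind the previously issued cert.
        existingKey, _ := rsa.GenerateKey(rand.Reader, 2048)

        nonce := []byte("ca-chosen-nonce-for-example.com")
        digest := sha256.Sum256(nonce)

        // Requester: sign the nonce with the existing private key.
        sig, err := rsa.SignPKCS1v15(rand.Reader, existingKey, crypto.SHA256, digest[:])
        if err != nil {
            panic(err)
        }

        // CA: verify against the public key from the known certificate.
        if err := rsa.VerifyPKCS1v15(&existingKey.PublicKey, crypto.SHA256, digest[:], sig); err != nil {
            fmt.Println("proof rejected")
            return
        }
        fmt.Println("requester controls the key from the previously issued cert")
    }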


They're doing that too, with opportunistic encryption where self-signed certs are trusted if DNSSEC says they are:

https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...


That is indeed the only model that would make sense as long as we use domain names for addressing.

You can publish and sign your TLSA records today, without interfering with your current CA signed certificates.
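
For anyone curious what that looks like in practice, here is a small sketch that derives the data for a "3 1 1" TLSA record (DANE-EE, SubjectPublicKeyInfo, SHA-256) from whatever certificate a server currently presents; example.com is a placeholder, and publishing the record plus DNSSEC-signing the zone is a separate step with your DNS operator:

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "fmt"
        "log"
    )

    // Connect to the server, hash the SubjectPublicKeyInfo of the leaf
    // certificate, and print the corresponding TLSA resource record.
    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        spkiHash := sha256.Sum256(cert.RawSubjectPublicKeyInfo)

        fmt.Printf("_443._tcp.example.com. IN TLSA 3 1 1 %x\n", spkiHash)
    }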


I saw that story earlier. Have they announced which CA they will be signing under? Or will this only work with browsers released in 2015 where the LetsEncrypt CA has been accepted? And what about intranet domains or IP addresses?


IdenTrust will cross-sign their root certs.


I'm thinking close to zero: https://letsencrypt.org/


Thanks for a great document.

I think in the future all protocol diagrams should use colored lego blocks. That was fantastic.



