
> But there are rare exceptions where pipelining is disabled. For example, users cannot request pages of results for 100 different queries from Google in a single connection.

> Efficient. But not allowed. Always wondered why.

The likely reason is that your HTTP requests are getting routed to a backend server/instance/whatever that knows about your query and can thus return results for it, whereas a different search would get routed somewhere else, to an instance that holds the state for that other query.

> Advertising-filled web pages opened in a "modern browser" are expected to auto-request files from many third party domains. I will call these "conglomerate" pages for lack of a better term.

> Not sure HTTP/1.1 pipelining works very well for that. Hence HTTP/2.

> HTTP/1.1 benefits users like me. HTTP/2 benefits advertisers, Google, and other companies in the web advertising racket, but not sure how it would benefit users except to serve them advertising and conglomerate pages more efficiently.

I think you misunderstand how HTTP/2 works.

HTTP/1.1 pipelining allows you to send five requests to a server, in order, and receive five responses in order. Very straightforward.
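To make that concrete, here is a minimal sketch of pipelining over a raw socket in Python. The host and paths are placeholders, and it assumes the server actually honors pipelined requests (many servers don't, or simply close the connection):

    import socket

    HOST = "example.com"                      # placeholder host
    PATHS = ["/a.txt", "/b.txt", "/c.txt"]    # placeholder resources

    sock = socket.create_connection((HOST, 80))

    # Send every request back to back, without waiting for any response.
    for i, path in enumerate(PATHS):
        req = f"GET {path} HTTP/1.1\r\nHost: {HOST}\r\n"
        if i == len(PATHS) - 1:
            req += "Connection: close\r\n"    # server closes after the last response
        req += "\r\n"
        sock.sendall(req.encode("ascii"))

    # Responses arrive on the same connection, in the same order as the requests.
    data = b""
    while chunk := sock.recv(4096):
        data += chunk
    sock.close()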

HTTP/2 allows you to send any number of requests to a server, in any order, and for the client or server to provide a priority for each of them, and for them to download out of order or in parallel.

What doesn't change between the two is that you're still connecting to one specific server, just as you were in HTTP/1.x, and not to any number of third-party servers. It has zero impact on third-party services like ad networks, trackers, or the like.
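For comparison, here is a rough sketch of the multiplexed HTTP/2 case using the httpx library (an assumption on my part; it needs to be installed with its HTTP/2 extra). All requests share one connection to the same origin, and responses can complete in any order:

    import asyncio
    import httpx

    async def fetch_all(urls):
        # One client, one origin; http2=True enables multiplexing over a single connection.
        async with httpx.AsyncClient(http2=True) as client:
            return await asyncio.gather(*(client.get(u) for u in urls))

    # Placeholder URLs, all on the same origin; responses may complete out of order.
    responses = asyncio.run(fetch_all([
        "https://example.com/a",
        "https://example.com/b",
        "https://example.com/c",
    ]))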



Let me clarify a few things about my usage. I like the responses to be received in order, and groups of, say, 100 responses to be separated by a header (these can act as delimiters for my text filters as I import into my databases). I have no need for compressed headers, especially huge cookies, because I have no need for cookies. Nor have I any need for preferential serving of my requests, because I only want what I ask for, not what the server thinks I need. Finally, I am not doing interactive "browsing" or "user experience" and do not need a "warm" connection for the server to "push" anything to me. I have no need for Javascript, CSS and other window dressing. I am requesting information, preferably 7-bit ASCII, in bulk. Boring and old, but useful. There is no room for advertising, garnering "impressions", etc. This is www information retrieval. For this usage, ye olde HTTP/1.1 pipelining works well.

The companies/organizations behind HTTP/2 rely on advertising. If HTTP/2 had no benefit to the business of serving ads (i.e., serving very heavy pages which are heavy because they are full of ads), then I am not sure why it was introduced. It stands to reason that pages without ads, devoid of "interactive features" to entice users to click, and served without large cookies for tracking, should not need HTTP/2. But I am not an expert on HTTP/2, and I am not running a business that needs to serve web pages with ads. I am just an ordinary dumb web user who prefers plain text, likes HTTP/1.1 pipelining, and does not like ads and other web cruft.
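To illustrate the delimiter idea mentioned above, here is a hypothetical sketch of splitting a saved blob of pipelined responses on their status lines; the file name and pattern are placeholders, not my actual filters:

    import re

    # Hypothetical post-processing: a saved blob of pipelined HTTP/1.1 responses.
    with open("responses.txt", "rb") as f:
        blob = f.read().decode("latin-1")

    # Each response starts with a status line such as "HTTP/1.1 200 OK",
    # which works as a record delimiter.
    records = re.split(r"(?m)^HTTP/1\.1 \d{3}", blob)[1:]
    print(len(records), "responses")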


So basically you have a very nonstandard workflow, and expect the protocol to be optimised towards your needs?


With all due respect for your comment, that is not how I see it. I adjusted my "workflow" to what has been available for the past decades: HTTP/1.1 pipelining. It has worked for me with no problems, and I am sharing that experience with others.



