Interesting web applications generally end up with a large number of web-exposed endpoints that might reveal sensitive data about a user, or take action on a user’s behalf. Since users' browsers can be easily convinced to make requests to those endpoints, and to include the users' ambient credentials (cookies, privileged position on an intranet, etc), applications need to be very careful about the way those endpoints work in order to avoid abuse.
Being careful turns out to be hard in some cases ("simple" CSRF), and practically impossible in others (cross-site search, timing attacks, etc). The latter category includes timing attacks based on the server-side processing necessary to generate certain responses, and length measurements (both via web-facing timing attacks and passive network attackers).
It would be helpful if servers could make more intelligent decisions about whether or not to respond to a given request based on the way that it’s made in order to mitigate the latter category. For example, it seems pretty unlikely that a "Transfer all my money" endpoint on a bank’s server would expect to be referenced from an img tag, and likewise unlikely that evil.com is going to be making any legitimate requests whatsoever. Ideally, the server could reject these requests a priori rather than delivering them to the application backend.
Here, we describe a mechanism by which user agents can enable this kind of decision-making by adding additional context to outgoing requests. By delivering metadata to a server in a set of fetch metadata headers, we enable applications to quickly reject requests based on testing a set of preconditions. That work can even be lifted up above the application layer (to reverse proxies, CDNs, etc) if desired.
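As a rough illustration of the kind of precondition check the spec has in mind, here is a minimal sketch of a server-side "resource isolation" policy, assuming an Express-style Node server (the endpoint name and port are made up, and the exact allow-list is a judgment call, not something mandated by the spec):

```ts
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Reject obviously cross-site requests before they reach application code.
// Requests without Sec-Fetch-* headers (older browsers, curl, etc.) are let
// through so existing clients keep working.
function fetchMetadataIsolation(req: Request, res: Response, next: NextFunction) {
  const site = req.headers["sec-fetch-site"];

  // Header absent: an older browser or a non-browser client; let it pass.
  if (site === undefined) return next();

  // Same-origin, same-site, and user-initiated ("none") requests are fine.
  if (site === "same-origin" || site === "same-site" || site === "none") {
    return next();
  }

  // Allow plain cross-site top-level navigations (e.g. following a link),
  // but not cross-site subresource loads or scripted requests.
  if (
    req.headers["sec-fetch-mode"] === "navigate" &&
    req.method === "GET" &&
    req.headers["sec-fetch-dest"] !== "object" &&
    req.headers["sec-fetch-dest"] !== "embed"
  ) {
    return next();
  }

  res.status(403).send("Cross-site request rejected");
}

app.use(fetchMetadataIsolation);

app.get("/transfer-money", (_req, res) => {
  res.send("sensitive endpoint reached");
});

app.listen(3000);
```

Because the check only looks at request headers, the same logic could just as well live in a reverse proxy or CDN rule instead of the application itself.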
Even if site producers want to block that kind of thing, it's already pretty much impossible today, at least without some "tricks": nothing prevents your video downloader from simply adding a header that pretends its origin is the website. (Or, more amusingly, you inject the downloading JS code into the website in question, extending it with download functionality ;=) ).
Sec-Fetch-Site: It's more like `Origin`, but without actually containing the origin; instead it just says whether the request is same-origin, same-site, cross-site, or has no initiating origin at all.
This makes it much more privacy-friendly than both the `Origin` and `Referer` headers, and it also makes it easier to use for the intended use case, which in turn makes it IMHO a strict improvement over both `Origin` and `Referer`.
Sec-Fetch-Dest, Sec-Fetch-Mode, Sec-Fetch-User: These provide a bit more context about how the request was made. While this can leak slightly more information than `Origin` (or, worse, `Referer`), it's still much better.
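For a concrete idea of the values involved, here is a hedged sketch of roughly what these headers look like in two scenarios (the scenario names and sites are illustrative; the exact set sent depends on the browser and the request):

```ts
// Roughly what a server might see, per the Fetch Metadata spec.

// A cross-site <img src="https://bank.example/transfer"> embedded on evil.com:
const crossSiteImage = {
  "sec-fetch-site": "cross-site",
  "sec-fetch-mode": "no-cors",
  "sec-fetch-dest": "image",
};

// A user-initiated top-level navigation (typed URL or bookmark):
const topLevelNavigation = {
  "sec-fetch-site": "none",      // "cross-site" instead when following a link from another site
  "sec-fetch-mode": "navigate",
  "sec-fetch-dest": "document",
  "sec-fetch-user": "?1",        // only present when triggered by user activation
};
```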
So from a privacy POV I would say this is a strict improvement.
From a functionality POV it might look like it further limits third-party resource reuse, but CORS already does so. And like CORS, it can be circumvented by using download apps that are not your browser, by servers republishing things, or similar.
I could imagine there being some web extensions which "extend" a website by injecting code or similar, which would become harder to do with this. Though I don't know of any where there isn't a reasonable workaround.
So from what I can tell, the worst thing it might do is that using `curl` for sites where you need to set an `Origin` header will now also require you to set some other headers, which could be annoying.
How can you be sure you actually need to send a Referer header? What's the (user) benefit of sending one? I never send Referer and I have never had any problems as a result.
I remove unwanted headers from requests generated by user agents I cannot adequately control, e.g., graphical web browsers, using a loopback-bound forward proxy. Perhaps this will be another one to remove.
Some websites (IIRC Twitter does) open links differently based on the referrer header, so automatically sending a Twitter referrer for all Twitter links improves your experience.
Sending a Referer header does not improve [your] experience. It is the opposite for me. I cannot tolerate Twitter's Javascript, I do not use a graphical browser. I find it annoying when people submit Twitter status messages to HN.
I have a script that reformats the JSON into a simple HTML page with no Javascript. This improves the experience for me:
You should reach out to dang. You seem to be shadowbanned or something. All your comments show up as grayed out and 'dead'. I 'vouched' for this before I could reply.
How is this different from the Origin header? Does the Origin header not tell the web server if the request originated from the same website? Is the Origin header flawed in some way?
Reading the documentation on MDN[1], it looks like it sends more data than just the origin of the request. The metadata headers include whether the user initiated the request (e.g. navigation or click events?) and how the data is meant to be used (e.g. as audio data for <audio>, or as a top-level document).
This spec seems really powerful, provided all browsers support it :)
Firefox, Chrome, Edge and Opera support it (including mobile).
Internet Explorer is dead (ok, it's a zombie, but it was superseded by Edge for most users).
Safari sadly doesn't support it yet.
The nice thing is that you can employ security enhancements based on this technique even if it's not supported by all your clients.
I.e. you can automatically reject requests if the headers are given and have a bad value, which would add additional protection against certain attacks for all users except the ones stuck on IE or Safari.
This is a story that you can often hear on HN but I don't think it's correct.
There were three correlated reasons for the bad reputation of IE some years ago:
1. it was largely dominant, so people thought they could develop taking only that browser into consideration
2. because of the previous point, MS started to develop proprietary features (like ActiveX)
3. at a certain point its development was stopped for a long time
Safari certainly cannot match the first two reasons. But it cannot match the third either, because the development of standard web features is going on at good pace (see <https://webkit.org/status/>).
Developers hated having to work around missing features in IE even after FF and Chrome took over the market. Safari is the exact same, except you can't even update the rendering engine on iOS: Apple doesn't want webapps to eat away at App Store profit (notice how shitty and slow-moving the WebGL/WebGPU effort has been, mostly due to iOS Safari).
> Safari certainly cannot match the first two reasons.
1. Most users view websites on their phones. Safari is the only browser on iPhone (there are other browser skins, but they're all forced to run on top of Safari). The market share of iOS devices is usually at least around 50% in developed nations.
2. iOS has proprietary features; they are known as the App Store. If you want to develop certain things, you must use the App Store; the browser is locked out of those features (even if all other browser vendors have them).
> But it cannot match the third either, because the development of standard web features is going on at good pace (see <https://webkit.org/status/>).
3. I probably don't need to go into this point since it's common knowledge that Safari has always been the least compliant browser in terms of web standards. Their history of holding back features or implementing features with critical flaws that make them useless has been a recurring trend for the last decade. Just because they have checked a box on a table doesn't mean the feature is anything close to usable.
Right now you can reasonably develop a website which will work in Chrome and Firefox even without testing (not talking about any super modern features), but Safari is riddled with bugs you wouldn't expect. Recently I have encountered multiple bugs regarding SVG clipping in Safari. Safari 14 also broke localStorage and IndexedDB; it's almost funny how bad Safari is at actually just working.
Well, I found some Chrome bugs, but here is the thing: after I reported them, they were immediately responded to, fixed, and released in the next version. With Safari, though, you have to wait a year for bug fixes to be released, if they even acknowledge you at all inside their bug tracker; the only way to get the WebKit people's attention is to tag them on Twitter.
I don’t quite understand this argument. Can you give me a couple examples of Safari holding back major parts of web design? Or is it more obscure stuff like some webGL engine?
Because I use Safari specifically for privacy reasons, and it also never used to trigger my fans to full speed just to play videos, like Chrome does. I have also read that while Safari does tend to take longer, their implementations tend to be more polished. But this was more like a tweet, so take that anecdata with a grain of salt.
I often try to use a feature and it doesn't work properly on one browser. It's nearly always mobile Safari. This week I've dealt with scroll-snap (which makes URL anchors work correctly with a sticky header) being supported, but only for some layouts (every other browser works).
Today I spent hours debugging why pages with a particular iframe embed would log you out of the parent site on Safari / iOS. Possibly because the same first-party resource was requested from both outside and inside the frame? Not sure yet.
If you attempt to use localStorage from a private tab in Safari, it reports that it's present and working, but raises an exception on any access (every other platform either does not expose localStorage in private tabs, or clears it afterwards).
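A common workaround is to feature-detect storage with a write probe rather than trusting that the object merely exists; a minimal sketch (the key names are arbitrary, and the exact failure mode varies by browser version):

```ts
// Older Safari private tabs expose window.localStorage but can throw on use
// (e.g. QuotaExceededError on writes), so probe with a write/remove round
// trip instead of just checking that the object exists.
function storageAvailable(): boolean {
  try {
    const probe = "__storage_probe__"; // arbitrary key
    window.localStorage.setItem(probe, probe);
    window.localStorage.removeItem(probe);
    return true;
  } catch {
    return false;
  }
}

if (storageAvailable()) {
  window.localStorage.setItem("theme", "dark");
} else {
  // Fall back to in-memory state, cookies, etc.
}
```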
The absence of push notifications makes it impossible to develop a lot of web apps.
Also, a more user-friendly way to install a desktop shortcut would help tremendously in making web apps more popular. Of course Apple is not interested in that, but it's still sad.
I have done web development both in the bad IE days and recently, and IMO it wasn't as bad to develop for IE as it is for Safari today. Safari is broken in strange and random ways, is missing odd features, and is a moving target (and seems to break more with time). Developing for IE was extremely well documented (especially in later versions) and avoiding pitfalls was very easy, even for people new to creating webpages, using a few Google searches. Not so for Safari, unless you cut it completely off from all modern advances on the web. It just felt worse back then because IE was much more widespread.
Let's completely sidestep the whole debate that we always have. This is a safety feature, Safari will implement it, you can bet on it. It's merely going to be the last to do it.
Yep, I don't know why my first thought was that malicious actors could just bypass this by using external HTTP clients (like curl) when in fact this spec is meant to augment CORS: browsers _will_ send these headers to the server and the server can choose to honour them or not (well, in the CORS case the browser will block the request if the response headers are incorrect).
“ There are some exceptions to the above rules; for example if a cross-origin GET or HEAD request is made in no-cors mode the Origin header will not be added.”
That's an interesting find, thanks. I was not aware of no-cors mode.
It seems though that a browser would not allow 'non-simple' headers in no-cors mode[0].
Authorization headers, for example, would not be allowed (if I'm reading correctly). So any API using that header would not be affected by this issue, right?
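For illustration, a hedged sketch of that browser-side restriction (the URL and token are made up); per the Fetch spec's header guards, a non-safelisted header such as Authorization is expected to be dropped rather than sent in no-cors mode:

```ts
async function probe() {
  // Authorization is not a CORS-safelisted request header, so in no-cors
  // mode the browser should silently drop it instead of sending it.
  const res = await fetch("https://api.example.com/data", {
    mode: "no-cors",
    headers: { Authorization: "Bearer token123" },
  });

  // Cross-origin no-cors responses are "opaque": status 0, no readable body.
  console.log(res.type, res.status); // "opaque", 0
}
```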
If all the parts of the site are in the same place, then checking an Origin header would probably do the same thing. This seems to be adding semantics for when the frontend is requesting data from a different backend, as well as for specific types of content, and for whether it was based on a user action.
The user-action part is very nice if it can't be overridden with just JavaScript. For the other parts, I'm not sure what the browser is helping with that can't just be done with standard headers.
> Hence the banking server or generally web application servers will most likely simply execute any action received and allow the attack to launch.
While these are useful headers, there are protections today via XSRF tokens to prevent these attacks, which all major sites implement, so it isn't likely your bank is vulnerable.
It's not FUD. There are protections, but CSRF tokens are a workaround while these headers are more akin to a proper solution. Also, it won't magically make CSRF obsolete, in the same way the Origin header and CORS didn't make CSRF obsolete, but it's another tool in the appsec toolbox.
The original CORS protection is enforced by the browser, not the server. That means that it is much harder for it to cause a privacy problem. Given that this only works if you are using a browser anyway (any other user agent can spoof all this) I don't see how there can be any security gain from the server doing the enforcement. Which leaves me wondering whether the increased flexibility is worth the potential privacy issue.
CORS doesn't protect against CSRF. CORS also permits the initial request, whereas this change will permit the server to drop the client's request.
The problem is that the header isn't really usable until uptake is substantial; dropping requests now creates a workflow deviation based on user agent, meaning that while some gain security, the header cannot be relied on entirely.
Pardon my ignorance. I thought the way to deal with CSRF was CSRF tokens. It seems like you would still have to ignore the headers and rely on the token in your logic if they ever disagreed. I'm not sure how to use these new headers.
CSRF tokens have overhead and they have to be implemented for all inputs, which isn't trivial (judging by the number of CSRF-related vulnerabilities disclosed in HackerOne reports). I think the intention here is to make cross-site requests stand out so that they can be dealt with in a more streamlined/uniform fashion.
I’m quite surprised that Sec-Fetch-Dest doesn’t have a “form” type for form submissions, and the spec makes almost no mention of forms. Does this spec finally allow a simple header check to squash CSRF form posts or not?
For me, recent Firefox releases have been MISERABLE. I guess it could just be my computers, but the browser constantly locks up, there's no performance whatsoever, and there's just no help on troubleshooting anywhere.
I went with the long term support releases and have had a better experience. Course, still no sound lol but I use Chrome when I want sound. I still like Firefox, just can't use recent releases.
I think they could replace XSRF tokens, but until all major browsers support the headers (Safari 11 seems to be missing support, see other comments) you can't really block requests that don't have the new Sec-Fetch-* headers.
In the web, requests are made in either `cors` mode or `no-cors` mode. In `cors` mode, the `Origin` header is sent in the request. So yes, in `cors` mode the server could reject the request based on the `Origin` header. But in `no-cors` mode (the default if you do something like `<img src='...'>`) the `Origin` header isn't set, so CORS doesn't help defend against any attacks.
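To make the difference concrete, a small sketch (the URLs are made up) of a cross-origin request in each mode:

```ts
async function demo() {
  // Cross-origin fetch in the default "cors" mode: the browser attaches an
  // Origin header, so the server can filter on it.
  await fetch("https://api.example.com/balance", { credentials: "include" });

  // Roughly what <img src="https://bank.example/transfer?to=evil"> does:
  // a no-cors GET that carries the user's cookies but, per the MDN note
  // quoted above, no Origin header.
  await fetch("https://bank.example/transfer?to=evil", {
    mode: "no-cors",
    credentials: "include",
  });
}
```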
Can you explain the risk with regard to no-cors requests? Like, presumably an attacker requesting an image isn't scary, right? I'd think the real issue would be the attacker making credentialed requests.
The point is that the endpoint can be anything; it doesn't need to have anything to do with images. But because of the context of the request, it's no-cors.
Right but that's why CORS exists, so I'm trying to figure out what this mitigation is for. Like, you can't just fetch with credentials by accident - I guess if you don't use http cookies, which sure that's fine, maybe you can?
This isn't my area of security so I'm trying to figure out what the scenario is supposed to be where this mitigation is important.
That sounds like a reproducible bug report, with an easy test case (“run this script and measure laptop battery life, in Firefox and Safari”). If you haven’t already provided that in a bug report to the software developers, you should do so. Shopping your concern to an unrelated thread about security headers isn’t likely to get far on HN, certainly nowhere near as much as a testable bug would.
Isn't it less work to file the bug report in the proper place than to post about it every time in half-related threads? You could even submit it as its own post to HN to do the raising-awareness part.
"Raising awareness" by spamming your issue into unrelated posts' comments isn't really an acceptable behavior here. You need to find or write a blog post about the issues you're experiencing, and submit that to HN for independent consideration.
You'll likely need a laptop CPU/GPU/APU, OS, browser, and site that natively support WebRTC and the media codec being used in hardware as much as possible. That's probably going to be AV1 support across the board for the foreseeable future. That's probably a couple years out.
Real time AV codecs generally did not choose the "cheap" aspect of the "good / fast / cheap" triangle.
I hear this, but also it works fine in Safari... and even less bad in Chrome on a MBP. Not an expert, but that makes me think there's something the Firefox folks can do to improve this issue. If they wanted to.
It's not like 2x, it's like 40x more power used. This makes the MBP super heat. Thing was never cool to start, but I suspect it's getting hot enough to shorten the battery life. It's really hot. Happens on every MBP 16" I've tested it on.
Grab 2-3 friends, fire up Zoom, Google Meets, or stream something on Twitch (or whatever service you want). And you'll hear the fans kick in, and the machine's temp will rapidly rise. Toggle open the power inspector, or just watch as 1-2% of the battery life drains per minute.
Well, it's much easier for Apple to get the optimizations on their hardware, drivers, OS, and web browser to all line up. There are some benefits of the cathedral.
However, I agree with others that there may be an actual bug going on. There are definitely listings in Bugzilla for WebRTC bugs[0], the oldest of which is 7 years old.
I'm curious if you're just seeing high battery usage, or is there high CPU as well?
CPU goes up some, but not a ton. Like, it's weird. It really just superheats my computer, fans go to max RPM, and it burns battery like there's no tomorrow... but I don't see CPU usage off the charts -- it's using more CPU than Safari or Chrome would, but it doesn't seem like the CPU is maxed. It's like it just gives the instruction, "Heat up!" I don't know, just grab a MBP and try a call. Happens on all of them I've tried it on. The 2019 MBP 16" is my daily computer, and the one I have the most issues with. I had a 2017 (Maybe 2016? First model with butterfly switches) and it happened there too. Happens on the 13" MBP I got from work. Happens on my girlfriend's MBP 13" with the M1 chip, but it's not nearly as much of an issue as on the Intel Macs.
Well, if the fans are spinning, it heats up, power is drained, and the CPU usage is fine... it's gotta be the GPU. There's only so many things in the system that can use power and generate heat, after all.
I don't know enough about MBPs of that age, but I'd guess they have discrete GPUs and it's those that are running. I wonder if you'd be better off disabling GPU optimizations, or if there's a codec issue of some kind.
ESNI wasn't just incomplete/providing partial protection; it had interop/scaling problems that would have made it a bad thing to continue to promulgate. It was probably a smart move to cut the cord for the redo to ECH while ESNI had single-digit-percentage server support, considering most of the waiting for ECH is for server support as well.
In the meantime DoH is still very useful for a typical end user. It is orders of magnitude more work to filter all web traffic for the SNIs than it is to literally get directly notified by the client when a new site is looked up. Not to mention it's nice to have the scale proven out independently instead of trying to throw the kitchen sink at privacy when everything is fully baked and hope nothing falls apart that day.
This article and accompanying discussion are not about ESNI or ECH, nor about concerns thereof; they are about the W3C `fetch-metadata` specification that Firefox is implementing:
From the Mozilla link above: Since publication of the ESNI draft specification at the IETF, analysis has shown that encrypting only the SNI extension provides incomplete protection. As just one example: during session resumption, the Pre-Shared Key extension could, legally, contain a cleartext copy of exactly the same server name that is encrypted by ESNI. The ESNI approach would require an encrypted variant of every extension with potential privacy implications, and even that exposes the set of extensions advertised. Lastly, real-world use of ESNI has exposed interoperability and deployment challenges that prevented it from being enabled at a wider scale.
IMO, SNI should only be added at the firewall (using an HTTPS proxy, for example), so the network operator can monitor/filter which hosts are being accessed.
If you actually have an "HTTPS proxy" then the entire transaction is plaintext at that proxy, and the operator can do whatever they want.
In particular they can choose whether they want to support protocol extensions like eSNI or ECH on either or both sides of the proxy.
If your idea is "But surely it should just pass through extensions it doesn't understand" what you've got there is nonsense, it isn't a "firewall" it's a dumpster fire. The extensions have meaning to the peers, if it tries to pass extensions through without understanding them it's now speaking gibberish to both sides.
For the people wondering what the motivation is, https://www.w3.org/TR/fetch-metadata/#intro has a good summary:
> Interesting web applications generally end up with a large number of web-exposed endpoints that might reveal sensitive data about a user, or take action on a user’s behalf. Since users' browsers can be easily convinced to make requests to those endpoints, and to include the users' ambient credentials (cookies, privileged position on an intranet, etc), applications need to be very careful about the way those endpoints work in order to avoid abuse.
> Being careful turns out to be hard in some cases ("simple" CSRF), and practically impossible in others (cross-site search, timing attacks, etc). The latter category includes timing attacks based on the server-side processing necessary to generate certain responses, and length measurements (both via web-facing timing attacks and passive network attackers).
> It would be helpful if servers could make more intelligent decisions about whether or not to respond to a given request based on the way that it’s made in order to mitigate the latter category. For example, it seems pretty unlikely that a "Transfer all my money" endpoint on a bank’s server would expect to be referenced from an img tag, and likewise unlikely that evil.com is going to be making any legitimate requests whatsoever. Ideally, the server could reject these requests a priori rather than delivering them to the application backend.
> Here, we describe a mechanism by which user agents can enable this kind of decision-making by adding additional context to outgoing requests. By delivering metadata to a server in a set of fetch metadata headers, we enable applications to quickly reject requests based on testing a set of preconditions. That work can even be lifted up above the application layer (to reverse proxies, CDNs, etc) if desired.