That last step is nonsensical: WebGPU is a Vulkan-like shim layer (in the sense that WebGL is GLES-like) that allows you to use the native GPGPU-era APIs of your OS.
On a "proper OS", your WebGPU implementation is translating all calls 1:1 to Vulkan, and doing so pretty cheaply. On Windows, whether your browser does this depends on the GPU vendor: Nvidia continues to have not-amazing Vulkan performance, even in cases where performance should be identical to DX12; AMD does not suffer from this bug.
If you care about performance, you will call Vulkan directly and not pay for the overhead. If you care about portability and/or are compiling to a WASM target, you're pretty much restricted to WebGPU and you have to pay that penalty.
Side note: Nothing stops Windows drivers or Mesa on Linux from providing a WebGPU impl, thus browsers would not need their own shim impl on such drivers and there would be no inherent translation overhead. They just don't.
WebGPU is far from cheap and has to do a substantial amount of extra work to translate to the underlying API in a safe manner. It's not 1:1 with Vulkan and diverges in a few places. WebGPU uses automatic synchronization and must spend a decent amount of CPU time resolving barriers.
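To make the "resolving barriers" cost concrete, here is a toy sketch (Python, made-up names; real implementations like wgpu and Dawn track far more state, per subresource and per usage scope) of the per-resource hazard tracking a WebGPU implementation has to do on the CPU for every command it records:

```python
# Toy model of WebGPU-style automatic barrier insertion.
# A real implementation does this bookkeeping for every resource
# touched by every command; Vulkan instead makes the app declare
# barriers explicitly and skips all of this CPU work.

READ, WRITE = "read", "write"

class ResourceTracker:
    def __init__(self):
        self.state = {}     # resource id -> last usage seen
        self.barriers = []  # barriers we were forced to emit

    def use(self, res, usage):
        prev = self.state.get(res)
        # RAW, WAR and WAW hazards all need a barrier;
        # read-after-read can be merged with no barrier.
        if prev is not None and (prev == WRITE or usage == WRITE):
            self.barriers.append((res, prev, usage))
        self.state[res] = usage

t = ResourceTracker()
t.use("tex0", WRITE)    # render into tex0
t.use("tex0", READ)     # sample it -> needs a write->read barrier
t.use("tex0", READ)     # sample again -> no barrier
print(len(t.barriers))  # 1 barrier resolved on the CPU
```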
You can't just ship a WebGPU implementation in the driver, because the last mile of getting the <canvas> on screen is handled by the browser in entirely browser-specific ways. You'd require very tight coordination between the driver and browsers, and you still wouldn't be saving much, because the overhead you get from WebGPU isn't from API translation; rather, it's the cost of making the API safe to expose in a browser.
We already do this by exposing the canvas surface with a semaphore lock. The browser can flip the surface to the canvas (or your app can flip it onto a window surface).
It’s just a HINSTANCE pointer.
You’re right about the waiting, but that’s entirely app driven. Browsers don’t want to render at 144fps but rather wait until drawing has occurred in order to update the view.
wgpu and Dawn already support drawing to arbitrary surfaces (not just a canvas but any window surface).
You've mentioned Dawn more than once, but isn't Dawn dead, since the team at Google that was working on it isn't part of either the Android or the Chrome team, and Android and Chrome both have their own (mutually incompatible) preferred API manglers?
WebGL and WebGPU must robustly defend against malicious web content making the API calls, just like other browser JavaScript APIs, which makes for some overhead and resulted in leaving out some features of the underlying APIs.
Vulkan has also evolved a lot and WebGPU doesn't want to require new Vulkan features, lacking for example bindless textures, ray tracing etc.
All APIs must robustly defend against malicious content, this is not something unique to WebGL and WebGPU.
Programs can use Vulkan, D3D, OpenGL, OpenCL, etc. to, e.g., read memory that isn't in your program's address space via the GPU/driver/OS not properly handling pointer provenance. Also, IOMMUs are not always set up correctly, and they are not bug-free either, e.g. Intel's 8 series.
Using hardware to attack hardware is not new, and not a uniquely web issue.
> All APIs must robustly defend against malicious content, this is not something unique to WebGL and WebGPU.
This is not the case for C/C++ APIs. A native-code application using your API can already execute arbitrary code on your computer, so the library implementing e.g. OpenGL is not expected to be a security boundary. It does not need to defend against memory-safety bugs that would yield RCE, info leakage, etc., such as a caller sending in booby-trapped pointers or crafted inputs designed to trigger bugs in your API internals.
The kernel-side stuff is of course supposed to be more robust, but it also contains a much smaller amount of code than the user-facing graphics API. And robustness there is not taken as seriously either, because these aren't directly internet-facing interfaces, so browsers can't rely on the correctness of any protections there.
Which brings us to: drivers throughout the stack are generally very buggy, and WebGL/WebGPU implementations also have to take responsibility for preventing exploitation of those bugs by web content, sometimes at rather big performance cost.
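As an illustration of what "taking responsibility" looks like in practice, here is a hedged sketch (Python, hypothetical function names; the real rules live in the WebGPU spec's command-encoder validation sections) of the bounds and alignment checks a browser performs before a buffer-to-buffer copy ever reaches the driver:

```python
# Sketch of the validation a WebGPU implementation performs before
# forwarding a buffer copy to the native API. Names are made up, but
# the two ideas are real: reject misaligned/out-of-bounds requests,
# and write the bounds check so a huge size can't integer-overflow
# (moot in Python, essential in a C++ implementation).

COPY_ALIGN = 4  # WebGPU requires 4-byte-aligned copy offsets and sizes

def validate_copy(src_size, dst_size, src_off, dst_off, n):
    if n % COPY_ALIGN or src_off % COPY_ALIGN or dst_off % COPY_ALIGN:
        raise ValueError("misaligned copy")
    # Subtraction instead of `off + n > size` avoids overflow-wrapping.
    if src_off > src_size or n > src_size - src_off:
        raise ValueError("source out of bounds")
    if dst_off > dst_size or n > dst_size - dst_off:
        raise ValueError("destination out of bounds")

validate_copy(256, 256, 0, 128, 64)      # fine
try:
    validate_copy(256, 256, 0, 240, 64)  # runs past the destination
except ValueError as e:
    print(e)                             # destination out of bounds
```

The driver would happily accept the second copy and corrupt memory; the browser has to catch it first, for every single command.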
To see what it's like you might browse https://chromereleases.googleblog.com/ and search for WebGPU and WebGL mentions and bug bounty payouts in the vulnerabilities such as
[$10000][448294721] High CVE-2025-14765: Use after free in WebGPU.
[TBD][443906252] High CVE-2025-12725: Out of bounds write in WebGPU.
[$25000][442444724] High CVE-2025-11205: Heap buffer overflow in WebGPU.
[$15000][1464038] High CVE-2023-4072: Out of bounds read and write in WebGL.
[$TBD][1506923] High CVE-2024-0225 in WebGPU.
etc.
C/C++ memory safety is hard, even when you're the biggest browser vendor trying your hardest to expose C APIs to JS bindings safely.
There was a constant stream of WebGL vulnerabilities earlier as well, before WebGPU became the more lucrative target for bug bounties.
I wouldn't call it nonsensical to target WebGPU. If you aren't on the bleeding edge for features, its overhead is pretty low and there's value in having one perfectly-consistent API that works pretty well everywhere. (Similar to OpenGL)
I'm not killing it, but there is no C API written verbatim. WebGL was fucky because it was a specific version of GLES that never changed, and you couldn't actually do GL extensions; it was a hybrid of 2.0 and 3.0 plus some extra non-core/ARB extensions.
WebGPU is trying to not repeat this mistake, but it isn't a 100% 1:1 translation for Vulkan, so everyone is going to need to agree to how the C API looks, and you know damned well Google is going to fuck this up for everyone and any attempt is going to die.
The problem is the same as it was 20 years ago. There are two proprietary APIs, and then there's the "open" one.
I’m sick of having to write code that needs to know the difference. There’s only a need for a Render Pass, a Texture handle, a Shader program, and Buffer memory. The rest is implementation spaghetti.
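For what it's worth, the four concepts the comment lists can be sketched as a single backend-agnostic surface (Python, purely illustrative; names and shapes are invented, and any real API needs far more than this):

```python
# The four things the comment says a renderer actually needs:
# a render pass, a texture handle, a shader program, buffer memory.
# Everything else would live behind a per-platform backend.
from dataclasses import dataclass, field

@dataclass
class Buffer:
    size: int

@dataclass
class Texture:
    width: int
    height: int
    format: str = "rgba8"

@dataclass
class Shader:
    source: str

@dataclass
class RenderPass:
    target: Texture
    draws: list = field(default_factory=list)

    def draw(self, shader: Shader, vertices: Buffer):
        self.draws.append((shader, vertices))

rp = RenderPass(Texture(1920, 1080))
rp.draw(Shader("...wgsl..."), Buffer(64 * 1024))
print(len(rp.draws))  # 1
```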
I know the point you’re making but you’re talking to the wrong person about it. I know all the history. I wish for a simpler world where a WebGPU like API exists for all platforms. I’m working on making that happen. Don’t distract.
Opus is the most used codec on the planet, currently.
Can't really get more popular than that.
I think you meant to say, "why didn't it get more popular for _pirates_"? Because pirates are purists and prefer lossless codecs (ie, FLAC), and even when they wish to use lossy, Opus being locked to 48khz (to reduce implementation overhead for low power SoCs) kind of pisses them off, even though Opus's reference impl includes a perceptually lossless resampler (ie, equivalent to SoX VHQ, the gold standard, and better than the one in Speex).
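To show what "locked to 48 kHz" implies in practice, here's a toy linear-interpolation resampler (Python, illustrative only): 44.1 kHz material has to be resampled on the way into Opus. Opus's actual resampler is a high-quality polyphase design; naive linear interpolation like this would audibly degrade real audio.

```python
# Toy resampler: converts a 44.1 kHz signal to the 48 kHz rate Opus
# requires. Real encoders use a polyphase filter bank, not this.

def resample(samples, sr_in, sr_out):
    n_out = int(len(samples) * sr_out / sr_in)
    out = []
    for i in range(n_out):
        pos = i * sr_in / sr_out              # fractional source index
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]  # clamp at the end
        out.append(a + (b - a) * frac)        # linear interpolation
    return out

one_second = [0.0] * 44100                    # 1 s of 44.1 kHz silence
print(len(resample(one_second, 44100, 48000)))  # 48000
```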
Examples of users: Discord, Whatsapp, Jitsi, Mumble, Teamspeak, Soundcloud, Vimeo, Youtube (but not Youtube Music), in-game voice chat on both the PS4/5 era PSN network and the Xbone/XSX era Xbox network, the new Switch 2 in-game voice chat, games that use Steam's in-game voice chat (ie, TF2), all browsers (required to impl webm and webrtc), most apps on Android that have their own sound files (incl. base apps in Android itself). Windows and OSX also have native OOTB support for Opus. Some "actual" VoIP platforms use Opus. Some phone calls routed over the LTE phone network use Opus.
It is also standardized by the IETF as RFC 6716, and most for-audio SoCs support Opus natively as part of their platform SDKs.
You're not going to find anything more popular than this.
> I think you meant to say, "why didn't it get more popular for _pirates_"? Because pirates are purists and prefer lossless codecs (ie, FLAC), and even when they wish to use lossy, Opus being locked to 48khz (to reduce implementation overhead for low power SoCs) kind of pisses them off, even though Opus's reference impl includes a perceptually lossless resampler (ie, equivalent to SoX VHQ, the gold standard, and better than the one in Speex).
MP3s don't (really) support higher than 48 kHz sample rates either, and MP3s are if anything more popular among that community.
> MP3s don't (really) support higher than 48 kHz sample rates
Neither does the human ear.
While there may be benefits to higher rates as intermediate representations during mastering/modification, for playback higher sample rates can only ever make things worse, since they increase the chance of unintentional ultrasonic content causing distortion etc. A 48 kHz rate already captures everything up to 24 kHz (Nyquist), comfortably above the ~20 kHz limit of human hearing.
And for those intermediate steps any lossy compression is probably a bad idea.
I agree with you, I’m just noting that this argument doesn’t hold because pirates (who listen, they don’t do mastering) basically only care about flac or mp3s. And mp3s are limited to 48k.
Arguably most MP3s are limited lower than 48k, depending on the implementation.
LAME, for example, applies a low-pass filter unless you explicitly disable it; even the "insane" preset cuts off at about 20 kHz.
But I can still understand why mp3 is still used, if only because of compatibility and the inertia of keeping a collection in a consistent format. With worries about file size becoming less important over time, like many people I don't really see an advantage to a more modern codec like Opus.
And piracy has always been more about "branding" than people like to admit: many video rips were labelled DivX for years after groups had already moved to other mp4 encoders. And over the years the "brand power" of various pirate groups was surprisingly large.
And I suspect that mp3 and flac were the last "big" changes that made a significant difference to many end users, so newer formats just don't have quite the same improvement to promote their own branding.
He's maybe referring to the fact that platforms such as Spotify, Tidal, etc. don't offer music in Opus format, which is high quality while conserving bandwidth and storage. Instead they try to win the market with "master" or "lossless" quality, which is pretty much b.s.
When (not if) Ford goes under, it's not going to be overnight. I would be surprised if they don't have enough gas in the tank (ha) for another two decades, no matter how bad their decisions are; there's always customers willing to buy their products.
Speaking of the soundtrack: before Virt (Jake Kaufman) made it big (he's the composer behind Shantae, Shovel Knight, DuckTales Remastered, and a few others), he made this: https://www.youtube.com/watch?v=5uEUImofSms
"Bogey at your 6", the combat theme from the game, remixed as if Konami had made it for the VRC6, the NES mapper chip whose 3 additional oscillators made the Japanese release of Castlevania III what it was. He made it using Scream Tracker (or possibly a newer tracker, but it's saved in S3M format), because tracker-like chip emulators (Furnace, et al.) didn't exist yet.