Hacker News | forrestthewoods's comments

> They’re asking for $100+/mo for the plans that are actually usable at scale. If I’m paying that much I have very high expectations.

They’re losing money on you at that price point.

Or more precisely you’re paying for it by giving them training data.


I'm not convinced; Kimi 2.5, GLM 5.1, and Minimax M2.7 are all a fraction of the price and still make money on inference.

Interesting breakdown of levels. I like it.

I’m not sure I believe that Level 7 exists for most projects. It is utterly *impossible* for most non-trivial programs to have a spec that doesn’t have deep, carnal knowledge of the implementation. It cannot be done.

For most interesting problems the spec HAS to include implementation details, architecture, and critical data structures. At some point you’re still writing code, just in a different language, and it might have actually been better to just write the damn struct declarations by hand and then let AI run with it.


I agree, I'm venturing into Level 6 myself and it often feels like being one step too high on a ladder. Level 7 feels like just standing on the very top of the ladder, which is terrifying (to me anyway as an experienced software engineer).

To me it’s not terrifying because it’s just so plainly bad and not good enough. If you try L7 it just doesn’t work. Unless you’re making a dashboard in which case sure yeah it’s fine. But not for complex problems.

Really great post. Thanks for sharing.

Websites that don’t tell me what they’re doing are infuriating. I’m on mobile. This landing page experience is awful.

It's no different on desktop.

it is an absolutely amazing experience on mobile. If you guys do not understand how to use a search bar and a couple of segmented controls, there is nothing much I can do about it

SteamDeck should be excluded from “Linux use” imho. Especially when it comes to clickbait headlines.

Like yes it is Linux. But SteamDeck is a completely different beast from desktop Linux. They might as well be entirely different OS’s. Especially if the SteamDeck is being used to play Win32 binaries!


> Like yes it is Linux. But SteamDeck is a completely different beast from desktop Linux. They might as well be entirely different OS’s.

It's really not; SteamOS is just another GNU/Linux, and pretty close to vanilla Arch Linux for that matter.

> Especially if the SteamDeck is being used to play Win32 binaries!

Proton works fine on other distros.


> SteamOS is just another GNU/Linux

if you are a gamedev considering support for SteamOS and considering support for generic Linux desktop they really really really REALLY are not the same. At all.


That is not true. Proton and the Steam Linux Runtime, which are the components actually responsible for running games, are literally the exact same code provided by the Steam client.

https://github.com/ValveSoftware/steam-runtime

https://github.com/valvesoftware/proton


Why not? Could you elaborate? I'd love to know more. I always had the feeling that supporting SteamOS basically meant that generic Linux Desktop support was almost implied because in the end it's almost always on Proton rather than native.

If you are a gamedev considering native support for Linux then you are using the Steam Runtime (i.e. a Debian container) anyway.

Or you just write a Windows program but explicitly target Proton. (Granted, either of those remain portable to at least any Linux with Steam installed.)

Try it. Switch to desktop mode. Behold! A desktop linux!

SteamOS is so very much Linux that even WebOS and Android pale in comparison.


SteamOS is really a desktop Linux. You can switch to "desktop mode" to see the "normal desktop" and you get a KDE desktop where you can run whatever you want.

It's "just" immutable Arch that defaults to Steam's console mode interface.


SteamOS is just an immutable Arch, and all Steam Linux games use the Steam Linux container runtime or Proton.

Bazzite and a few others provide a similar console-style experience.


>completely different beast from desktop Linux

Absolutely not. If you ever actually used it you would know that the only difference is a custom Big Picture-style interface. Everything else is literally the same code.


Have a SteamDeck. Have shipped games for Linux.

SteamOS is Arch with atomic updates and some custom patches here and there. The system stack is pretty standard: Mesa drivers, Steam Linux Runtime, Proton; it's all the same as what ships on every other distro. The only significant difference is that games run in gamescope-session by default, but that isn't exclusive to SteamOS either and doesn't meaningfully affect the execution of software; it's just a different window manager.

In all your posts I haven't seen you actually explain what it is that's so different about it.


Let me take the game developer lens. You love Linux and want to support Linux. What is the cost to you?

SteamDeck is a very specific set of hardware running a very specific OS with a specific runtime. This is very easy. The fact that it is Linux is almost immaterial. If it were not Linux at all it would require a similar amount of effort. Might as well be a Nintendo Switch.

Now let’s imagine you want to support generic Linux desktop with a native Linux executable. May God have mercy on your soul. Deploying pre-compiled binaries that run on an infinite number of hardware variations with an infinite number of local environment permutations is an unfathomable nightmare.

Once upon a time I shipped a native Linux binary (Planetary Annihilation). Somewhat infamously our Linux users were less than 1% of users but ~50% of bug reports. And no it wasn’t because Linux users simply report more general gameplay bugs.

These days you can support Linux by just giving them a Win32 binary. Which is objectively hilarious.

In any case. It would be profoundly fascinating to know the number of gameplay hours played across OSs. And I would imagine that SteamDeck accounts for over 90% of Linux gaming hours.

The Year of the Linux Desktop is still not here. Not yet. IMHO. YMMV.


Steam Deck is an x64-based PC running Arch Linux with FOSS Mesa drivers, which are shared among all modern AMD GPUs. There are extra wrinkles with Nvidia GPUs, but their proprietary driver is the Windows driver with a bunch of kludges to get it to work on Linux, and if you're using Vulkan then it's mostly the same code paths. It's also improved greatly in the past couple years.

You're right about native Linux binaries, but the rub is that you don't need to create generic binaries; there are a bunch of options that use containers to deal with environment permutations, and given the Linux version of Planetary Annihilation uses the Steam Linux Runtime environment, you know this.

It is funny that supporting Linux is as easy as providing a Win32 binary, but it's not a joke: it works.

I think your experience is a little out of date, or you've somehow been missing what's been happening over the last half decade, because in practice gaming on Linux is now absolutely fantastic. Not just on Steam Deck: since Valve is using the same general software stack that every other distro uses, all the improvements they've made have permeated out to the rest of the ecosystem. On my CachyOS PC with an RTX 3090, the only games that consistently give me problems anymore are titles that ship with kernel-level anti-cheat. Otherwise when I buy something from Steam I simply assume that it'll work.

Steam Deck sales have actually softened quite a bit over the last couple of years, all this recent explosive growth has been driven by desktop users.


Thanks for response. Not sure I have much more to add. But wanted you to know I saw and read it. :)

Thanks for reading!

I've been following this all pretty closely, it's been exciting. The year of the linux desktop is kind of a punchline, but it's sort of a misnomer anyway. It was never going to happen in the span of a year. But it has been happening; when online discussion spaces can never seem to shut the hell up about all these new idiot users asking all these stupid questions, that's when you know you're seeing a lot of growth.


> Once upon a time I shipped a native Linux binary (Planetary Annihilation).

Pretty sure I kickstarter'd that! But also never actually played it.


PA Titans is pretty good! Definitely niche. In hindsight the whole spherical planets thing is definitely bad. Vanilla flat rectangular maps would have been better.

One of the interesting consequences of Kickstarter is you get hard locked into “promises” even if those ideas turn out to be bad. Naval was so bad but it was a stretch goal so had to ship it. Lesson learned!


SteamOS is just Steam Big Picture mode by default on Arch Linux. You can switch to a regular KDE desktop in one click.

What you say is not even remotely true. SteamOS is basically just Arch with Steam preinstalled.

you can just navigate to the full linux desktop on the steamdeck?


No. Modules are a failed idea. Really really hard for me to see them becoming mainstream at this point.

The idea is great, the execution is terrible. In JS, modules were instantly popular because they were easy to use, added a lot of benefit, and support in browsers and the ecosystem was fairly good after a couple of years. In C++, support is still bad, 6 years after they were introduced.

The idea is great in the same way the idea of a perpetual motion machine is great: I'd love to have a perpetual motion machine (or C++ modules), but it's just not realistic.

IMO, the modules standard should have aimed to only support headers with no inline code (including no templates). That would be a severe limitation, but at least maybe it might have solved the problem posed by protobuf soup (AFAIK the original motivation for modules) and had a chance of being a real thing.


Exactly. C++ is still waiting for its "uv" moment, so until then modules aren't even close to solved.

And uv required some groundwork: the PEP process streamlined how you define a Python project, and then uv could be built on top.

No idea if modules themselves are failed or not, but if C++ wants to keep fighting for developer mindshare, it must make something resembling modules work and figure out package management.

yes you have CPM, vcpkg, and Conan, but those are not really standard and there is friction involved in getting them to work.


I emphatically agree. C++ needs a standard build system that doesn’t suck ass. Most people would agree it needs a package manager although I think that is actually debatable.

Neither of those things require modules as currently defined.


That is not even half realistic. Are you going to port all that code out there (autotools, CMake, SCons, Meson, Bazel, waf...) to a "true" build system?

The very idea is crazy. What Conan does is much more sensible: give a layer independent of the build system (plus a way to consume packages and, if you want, some predefined "profiles" such as debug, etc.), leave it half-open for extensions, and let existing tools talk with that communication protocol.

That is much more realistic and you have way more chances of having a full ecosystem to consume.

Also, no one needs to port a full build system or move away from perfectly working build systems.


Are the perfectly working build systems in the room with us now? CMake and Conan ain’t it.

> That is not even half realistic.

uv is an existence proof that when you make something that doesn’t suck ass the entire industry will very very rapidly converge.

Claude makes converting any particular configuration from one system to another very very very tractable.


Much like contracts--yes, C++ needs something modules-like, but the actual design as standardized is not usable.

Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.


Usable enough for Office, and the initial proposal was done by Microsoft.

I know Microsoft invested a lot into modules development and migrated a few small pieces of Office onto modules, but I'm not sure if they are actually using it extensively, and I'm also not sure if they're actually all that beneficial. Every time I hear about modules, it's stories about a year of migration work for a single-digit build-time improvement.

It has the developer mindshare of game engines, games and VFX industry standards, CUDA, SYCL, ROCm, HIP, Khronos APIs, game console SDKs, HFT, HPC, research labs like CERN, Fermilab,...

Ah, and the two major compiler frameworks that all those C++ wannabe replacements use as their backend.


Can you explain why you think modules are a failed idea? Because not that many use them right now?

Personally I use them in new projects using XMake and it just works.


Because as a percentage of global C++ builds they’re used in probably 0.0001% of builds with no line of sight to that improving.

They have effectively zero use outside of hobby projects. I don’t know that any open source C++ library I have ever interacted with even pretends that modules exist.


I'm not the parent commenter, but I think you're missing most of the pain points because you're working on personal projects.

There's no compatible format between different compilers, or even different versions of the same compiler, or even the same version of the same compiler with different flags.

This seems immediately to create too many permutations of builds for them to be distributable artifacts as we'd use them in other languages. More like a glorified object file cache. So what problem does it even solve?


BMIs are not considered distributable artifacts and were never designed to be. Same as PCHs and clang-modules which preceded them. Redistribution of interface artifacts was not a design goal of C++ modules, same as redistribution of CPython byte code is not a design goal for Python's module system.

Modules solve the problems of text substitution (headers) as interface description. It's why we call the importable module units "interface units". The goals were to fix all the problems with headers (macro leakage, uncontrolled export semantics, Static Initialization Order Fiasco, etc) and improve build performance.

They succeeded at this rather wonderfully as a design. Implementation proved more difficult but we're almost there.


My opinion of the median webdev is… impolite at best.

This article does not do much to improve their standing.


I hate Git. I think it is mediocre at absolute best.

But nothing in this article is in my top 10. So this doesn’t really do anything for me.

All I really want is support for terabyte-scale repo history with super fast, efficient, sparse virtual clones. And ideally a global cache for binary blobs with copy-on-write semantics. Which is another way of saying I want support for large binary files, and no, Git LFS is not sufficient.


Cool post!

I don’t quite understand the “funnel” section. Users see some change locally immediately (S1->S2), and all users send all commands to the host. The host then effectively chooses the sort order and broadcasts back.

So does the initial user effectively rewind and replay state? If action applied to S1 was actually from another player they rewind to S1 and apply the sequence?

How many state snapshots do they need to persist? There was no mention of invertible steps.

I feel like I’m missing a step.


Whether users see the action locally or not is decided on a per-action basis. For more visual actions like movement, we use a local imposter that gets overwritten on completion of the action (success or failure). These revert to the most recent “committed” state which already passed through the action funnel.

We have some actions which happen when saving config UI which don’t really need realtime sync when “save” is pressed, and we treat them more like a normal POST request (e.g. we wait for full commit before rerendering). The form fields are kind of a built-in imposter.

For state snapshots we only do big ones. The “full save” that happens every 25 actions or so. These aren’t generally used for reverting people.

So it kinda works like:

1. Do I have an outstanding imposter? If so read from it.

2. Otherwise read from the committed state.
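That two-step read path can be sketched roughly like this (a minimal sketch of the committed-state-plus-imposter idea; all names here are hypothetical, not the actual implementation):

```typescript
// Sketch of the "committed state + imposter" read path described above.
// Hypothetical names; the real system decides per-action whether to imposter.

type State = { x: number; y: number };

class SyncedEntity {
  private committed: State;               // last state confirmed via the action funnel
  private imposter: State | null = null;  // optimistic local preview, if any

  constructor(initial: State) {
    this.committed = initial;
  }

  // Local action: show an imposter immediately, before the host confirms.
  applyLocal(preview: State): void {
    this.imposter = preview;
  }

  // Host broadcast: an action (ours or another player's) is now committed.
  // The imposter is overwritten on completion, success or failure alike.
  commit(confirmed: State): void {
    this.committed = confirmed;
    this.imposter = null;
  }

  // 1. Outstanding imposter? Read from it. 2. Otherwise read committed state.
  read(): State {
    return this.imposter ?? this.committed;
  }
}
```

So a local move renders from the imposter right away, and when the host's ordering comes back the entity snaps to whatever was actually committed, whether or not it matches the preview.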


Ahh that’s interesting.

So instead of “rollback and replay” it’s more like “have a committed state and transient preview on top that may or may not commit”.

Implementation could vary between “commit the preview” vs “delete preview and create commit”. But that’s just an implementation detail.

Anyhow. That’s neat and clever. Thanks!


omg this x1000

I’ve been very happy with Claude Code. I saw enough positive things about Codex being better I bought a sub to give it a whirl.

ChatGPT/Codex’s insistence on ending EVERY message or operation with a “would you like to do X next” is infuriating. I just want Codex to write and implement a damn plan until it is done. Stop quitting in the middle and stop suggesting next steps. Just do the damn thing.

Cancelled and back to Claude Code.

