Hacker News

Plan 9 is definitely not a second system from UNIX; the aspects of it that failed are not due to over-complexity. Far from it.

Plan 9 failed for a combination of reasons:

1) Gratuitous NIH: the whole lack of support for a POSIX runtime, C++ compiler, etc. meant you couldn't run "normal programs" on it. So no web browser, no access to the gigantic pile of programs that run everywhere else.

There was a project called "APE" (the ANSI/POSIX Environment), but it was never really kept up to date.

What this meant was first, you couldn't run stuff like a web browser on Plan 9 (and you still can't; please no-one write to inform me that some dweebish half-assed attempt at a web browser is a solution), so you were always using other machines, and second, the design of the system was never impacted by the idea of running "Lots of Software Written By Other People".

It's pretty easy to decide you don't need shared libraries, for example, when your graphics/UI library is written by one guy.

2) The assumptions made about hardware went rapidly wrong: the gap between "CPU servers" and the thin-client-ish things you logged on to was substantial at the time; this assumption went out of date rather rapidly with the explosion of fast PC hardware. Similarly, "everything is a file" based windowing/graphics systems ("/dev/bitblt") started to look nutty with 3D acceleration. Many of the things that looked great in 1992 started to look kinda weird by 1998.

3) Somewhat related: it never was entirely clear what Plan 9 was for. People keep calling it a research OS, but aside from the early 'research-ish' ideas, I don't think there was a lot of actual research. It was sort of more a "daily driver that won't go" (see point 1). There wasn't a whole lot of reevaluation of design decisions either (point 2). There were some really good ideas in it, right from its inception, but calling it a research OS is weird.



Agree with most of it, however the jab here:

> What this meant was first, you couldn't run stuff like a web browser on Plan 9 (and you still can't; please no-one write to inform me that some dweebish half-assed attempt at a web browser is a solution)

is needlessly provocative, given that these days it is impossible to write a web browser from scratch (https://drewdevault.com/2020/03/18/Reckless-limitless-scope....).

Granted, at the time it was possible; however, that didn't fit into its philosophy.

Notably, there was a way to interface to the internet -- Plan9 has a filesystem for exactly that purpose, all that needed to be written was the frontend.
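For readers unfamiliar with it, the interface in question is Plan 9's /net filesystem: a client allocates a connection by reading a clone file, then drives it by writing plain-text commands to a ctl file and exchanging bytes through a data file. Below is a toy, local-filesystem simulation of that protocol shape in Python; the directory layout and the `connect` command string mirror the real interface, but nothing here touches an actual network stack, and the class name is invented for illustration:

```python
import os
import tempfile

class FakeNetFS:
    """Toy simulation of Plan 9's /net/tcp directory on a local filesystem."""

    def __init__(self, root):
        self.root = root
        self.next_id = 0

    def clone(self):
        """Allocate a new connection directory and return its number,
        like reading /net/tcp/clone on Plan 9."""
        conn = str(self.next_id)
        self.next_id += 1
        d = os.path.join(self.root, conn)
        os.makedirs(d)
        # Each connection gets a ctl file (commands) and a data file (bytes).
        open(os.path.join(d, "ctl"), "w").close()
        open(os.path.join(d, "data"), "w").close()
        return conn

    def ctl(self, conn, command):
        """Write a control message, e.g. 'connect 192.0.2.1!80'."""
        with open(os.path.join(self.root, conn, "ctl"), "w") as f:
            f.write(command)

    def read_ctl(self, conn):
        with open(os.path.join(self.root, conn, "ctl")) as f:
            return f.read()

# Drive it the way a Plan 9 client would drive the real files:
root = tempfile.mkdtemp()
net = FakeNetFS(root)
conn = net.clone()                     # like: conn=`{cat /net/tcp/clone}
net.ctl(conn, "connect 192.0.2.1!80")  # like: echo connect ... > ctl
```

Because the whole interface is files and plain text, any program that can open, read, and write files can speak it; that is why "all that needed to be written was the frontend."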


Like you say, it was possible at the time the decision was "made"; we're talking about duplicating NCSA Mosaic circa 1993, not Chrome.

And in any case, it wasn't necessary to write a web browser from scratch. What was necessary was to support a "normal" side environment to run POSIX, gcc/g++, etc. This would have been a fair bit of effort, but if it had been done (a nice first stop would be to have avoided gratuitous incompatibilities, like "ken c"), the OS could have supported programs not written at the Labs (and a tiny number of other places) and been an actual daily driver.

I write this out of frustration but with a lot of positive feelings about Plan 9; I would have loved to see it thrive. The gratuitous incompatibility made it almost immediately a museum piece, as did the fact that many of the intriguing-for-1990 research decisions made for it went obsolete shortly after with the rise of cheap, powerful PC hardware.


> The gratuitous incompatibility made it almost immediately a museum piece

And yet it made it what it was.

The entire point of plan9 was not to be "Unix System 8" or whatever; it was to go beyond that and shed the decades of accumulated kludge. But if it had done what you are suggesting, that's exactly what it would have been.


I don't think this was a failure of Plan9; they just never intended it to be the case. After all, there is no reason to support C++, etc., when many other systems already do that quite well. The goal of Plan9 was to create new systems that would take computers in a different direction, and to do that it is better to avoid legacy.


I'm assuming you're a Red Hatter?


> it never was entirely clear what Plan 9 was for. People keep calling it a research OS, but aside from the early 'research-ish' ideas, I don't think there was a lot of actual research. It was sort of more a "daily driver that won't go" (see point 1). There wasn't a whole lot of reevaluation of design decisions either (point 2). There were some really good ideas in it, right from its inception, but calling it a research OS is weird.

i always thought it was just a proof of concept for the filesystem they built which was a networkish extension of the original unix ideal (everything on the network is a file)? seems kinda researchy.

it was also good for blowing up some crt displays as the video drivers often tried to drive them with unsupported modes.

was cool that it fit on a few floppies though. i think it was a tech demo.


I wish the plan 9 commentators here would get their facts straight because I'm so bored of dispelling them that I can barely be bothered to do so anymore. tl;dr this post is full of misinformation.

1) Plan 9 has the Mothra web browser which was originally written by Tom Duff but doesn't support any modern features as it was built in the 90's. However, there is a "working" (send patches) port of Netsurf with JS support.

"was a project called APE" ??? You mean there is a library called APE, which has allowed people to port things like video codecs. See the Treason video player.

2) everything as a file is just an abstraction between services and resources. /dev/bitblt is also long long long dead, replaced by /dev/draw sometime in the mid-90s. Where did you get such outdated nonsense from? /dev/draw is a 2D engine that you load text and bitmaps into and issue commands to draw them to the screen. Rumor has it someone is working on a 3D subsystem too. It'd also be nice to have a /dev/gpu where we can load different graphical kernels or GPGPU kernels. Though have fun reading through 1500+ page hardware manuals for each GPU architecture, if you can even get them (it's an extremely hard problem for any OS community to solve outside of big corps).
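For what it's worth, the command-stream model described above can be illustrated with a toy interpreter: the client writes drawing commands to a single file-like channel, and a server-side interpreter rasterizes them into a pixel buffer. This is a sketch of the idea only; the real /dev/draw protocol is binary and far richer, and the textual `rect`/`point` commands here are invented for illustration:

```python
class DrawDevice:
    """Toy stand-in for a /dev/draw-style device: clients write commands,
    the device interprets them into a 2D pixel buffer."""

    def __init__(self, w, h):
        self.w, self.h = w, h
        self.pix = [[0] * w for _ in range(h)]  # 0 = background

    def write(self, cmd: str):
        """Interpret one textual command, e.g. 'rect 1 1 3 3 7'."""
        op, *args = cmd.split()
        if op == "rect":
            # Fill the half-open rectangle [x0,x1) x [y0,y1) with a color.
            x0, y0, x1, y1, color = map(int, args)
            for y in range(y0, y1):
                for x in range(x0, x1):
                    self.pix[y][x] = color
        elif op == "point":
            x, y, color = map(int, args)
            self.pix[y][x] = color
        else:
            raise ValueError("unknown command: " + op)

dev = DrawDevice(4, 4)
dev.write("rect 0 0 2 2 7")  # 2x2 block in the top-left corner
dev.write("point 3 3 1")     # single pixel in the bottom-right corner
```

The appeal of the scheme is that "write bytes to a file" is the entire client API; the criticism elsewhere in this thread is that modern graphics instead pumps rich binary structures at the device.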

3) Your information is woefully incorrect and your 2nd and 3rd points moot. A few people do run it as a daily driver today, if you put the effort in. It hasn't been research since Bell gave up on it. It's a lovely little lightweight OS with built-in network-transparent RPC between all programs. Very clean interface and design. It's a shame you didn't put more effort into learning about it and instead post misinformation.


I'm enjoying your tone of weary condescension here, as you incorrect me or barrage me with irrelevant counterpoints on almost every point.

Mothra - and Abaco, and all the others - are a fiasco. None of them have ever been quality browsers for the time; the fact that no-one has given themselves a hernia trying to carry them forward is just an understandable add-on.

The obvious answer has always been right in front of everyone: support standard languages, compilers and tools, even in an emulation layer. Perhaps it could be called "APE", which for some unknown reason you feel the need to furiously incorrect me about ("You mean there is a library"). There was not just a library but a series of programs designed to support the Rest of the World, by Howard Trickey. At one stage I recall claims that it compiled X11.

http://doc.cat-v.org/plan_9/4th_edition/papers/ape

I'm not entirely clear how a project that came with a brace of different executables and its own environment is a "library", but you do you.

Your point about /dev/draw is not as profound as you think it is. /dev/bitblt got replaced by some other "everything is a file" resource that works in a roughly similar way? You could have knocked me down with a feather. Meanwhile, the rest of the world will keep pumping rich, type-checked (more or less) binary data structures into graphics devices, the way they have for the past two decades.

There are so many things I really liked about Plan 9; I used it almost exclusively from 1992-1994. The networking was astonishing for the time. The ability to go back to a given day on the file server!!?! But there were a brace of bad decisions that no amount of yelling in Irate Fanboy will compensate for.

I wish the things about Plan 9 had been preserved with a view towards building a viable system that, I dunno, was good for more than LARPing for the handful of people that are "willing to put in the effort to run it as a daily driver". Plan 9 as a 'living system' could have re-evaluated some of the architectural decisions that were made in the late 80s and early 90s as practically every detail of hardware, software and networking changed (notably, the idea that a 'thin client' was a good idea pretty much quietly expired). Instead, it's a museum for people to worship every bad decision made by a small group of smart people; every decision glued together into an unwieldy and incompatible ball.


> Plan 9 as a 'living system' could have re-evaluated some of the architectural decisions that were made in the late 80s and early 90s as practically every detail of hardware, software and networking changed

But that's exactly what it did. And that's exactly the reason you're claiming for its failure.

What it seems like to me, is that you're trying to have your cake and eat it here. You're saying, on the one hand, that you like Plan9 because of what sets it apart from contemporary and modern systems. Then you're saying that you dislike the distance that it had from those systems because you wish it could have had better interoperability. Do you see how this is inherently contradictory, though? You quite literally cannot have the changes that Plan9 had, and have better interoperability, because those changes were made on a base that is fundamentally conflicting.

It sounds like what you wish is that they had written UNIX System 8, with Plan 9 Addons™.


I think you've misinterpreted what I said: the obsoleted architectural decisions were made by Plan 9 in the late 80s and early 90s.

Mainline Plan 9 never went through any real iterations - by 1996, attention was elsewhere, and all of the original designers went off to other things.

If the original designers had still been working on it, they may well have reworked a number of those decisions with the rise of fast and cheap local CPU and storage, 3D acceleration, etc.

I dislike the gratuitous incompatibility of Plan 9. The features that made Plan 9 good have nothing to do with having a weirdo variant C and no working C++ compiler. They broke all sorts of compatibility for no particularly valid reason. As such, they built a system that couldn't run 99% of programs written by people sitting outside the Unix Room.

If they hadn't been so stubborn about this Plan 9 might well be where Linux is - or at least, one of the livelier BSDs.

It's one thing to have original ideas in the OS. I suspect if you hew too close to Linux in an attempt to pick up device drivers and user-space programs wholesale (e.g. system call compatibility), you wind up building... Linux. But not being able to run a large pool of programs is laughable. It makes for Bad Research, not to mention a really crappy Daily Driver: something that requires amazing levels of Stockholm Syndrome to claim is workable as a regular-use OS.



