It's interesting to envision the what-if universe where licensing OS X (and other software) became Apple's main business model. I wonder if in this universe, Apple is basically the Google of our time (everything is a cloud service, massive ad tracking, metadata mined for revenue), or if they would be more like the MS of our time (playing a game of catch-up with Google).
While I don't really blame Dell for not taking this deal (it seemed like a pretty raw deal for Dell), I wonder how much more popular Unix would be nowadays had this gone through. OS X was (and is) almost certainly the largest desktop *nix distro out there, and if it were installed on every Dell computer (and presumably later other manufacturers as well), I wonder if Unix might have become the "standard" for desktop OS's.
> I wonder if Unix might have become the "standard" for desktop OS's
Microsoft is pouring buckets of money and human capital into WSL2. Today, it's their way to drag cloud developers away from the Mac and Linux ecosystems into the corporate-blessed Windows ecosystem. But tomorrow, maybe developers and server admins will demand more Linux compatibility and, who knows, maybe they will reach a point where the only way to improve performance is to "port" Windows services and applications to the Linux kernel. The open source community has already proven that much of it can be done via Wine.
One of those things that doesn't look terribly likely now but might prove obvious in retrospect.
Are they really? Is there anything to back up the claim that WSL2 has a huge team internally? Because the way I see it, WSL1 was much more novel and likely required a dedicated team to manage the Linux-kernel-API-to-Windows-API shim layer, but WSL2 is just a gussied-up Hyper-V integration.
My hunch is that the monumental amount of work to make virtualized GPU support and zero-copy hardware accelerated graphics work from WSL2 to the desktop (a technology called "VAIL", which hasn't shipped yet) is in order to support Android apps on Windows, and that's always been the goal of this team and a series of high profile projects.
WSL2 is actually the third iteration of this technology, not the second, and I believe has the same original goal and they've dedicated a large amount of engineering effort to that goal.
Here's my case:
Windows 10 Mobile had "Project Astoria", which involved working with app developers to port apps to Windows Phone. This involved library support and translation layers to enable APKs to run on Windows.
WSL1 abandoned the approach of requiring developers to modify apps, and used "pico processes" with kernel driver and Windows kernel's support for alternative subsystems to Win32 to map Linux syscalls to Windows equivalents. This enabled binary compatibility for most apps, but fell short.
WSL2 switched to the lightweight Hyper-V integration (now using the host compute system APIs to manage lightweight containers/VMs) but added deep driver integration with graphics to enable machine learning and hardware-accelerated graphics workloads to work within Linux using vGPU drivers and RAIL respectively. And soon VAIL, to enable full-fidelity graphical applications (i.e. Android apps) to run on Windows without compromise.
That's a lot of engineering time on 3 high complexity projects over a span of at least 6 years, publicly.
> My hunch is that the monumental amount of work to make virtualized GPU support and zero-copy hardware accelerated graphics work from WSL2 to the desktop (a technology called "VAIL", which hasn't shipped yet) is in order to support Android apps on Windows, and that's always been the goal of this team and a series of high profile projects.
Unfortunately, Google's anticompetitive moves with SafetyNet will ensure that many Android apps will only run on Google-blessed hardware and software platforms.
WSL 1 & 2 are amazingly innovative platforms, as is WINE. Microsoft has undoubtedly created untold value for developers and their shareholders with WSL. The anticompetitive limitations implemented in SafetyNet, however, serve to stifle such innovation, and prevent many Android apps from running on other platforms.
> Microsoft has undoubtedly created untold value for developers and their shareholders with WSL.
If there is a larger purpose around WSL, VS Code, and "MS <3 Linux" messaging, it is definitely this. They have really changed their reputation among developers.
In my opinion, Microsoft is the same Microsoft they were two decades ago. The company is still as efficient as they were back then at destroying what goodwill they have with developers[1].
However, I can't ignore that they've certainly created value for engineers that use Windows to develop software that runs on Linux, similar to the value created by WINE's developers. It's just a shame that similar value and innovation are stifled by the limitations set by Google's SafetyNet when it comes to running Android software on other platforms.
VSCode has been a HUGE deal. Being able to use the same editor for all your languages and all your operating systems is just enormous to most developers.
However, the goal wasn't to make "Linux" a first class citizen so much as to bring "non-Windows" developers underneath the mothership.
It really is a huge amount better than pretty much everything else, yeah.
... now if only the language servers it relies on could be anywhere near as polished :| rust-analyzer is a stunning example of what's possible with language services, gopls is only barely functional enough to be worth running.
VAIL and WSLg are available now in stable builds of Windows 11. I think VAIL was originally designed to be able to connect to containers that ran Win32 apps so that they looked natural, but since containerizing Win32 apps and their 30 years of legacy cruft was considered too good of an idea to finish, it was repurposed for this.
I think I disagree on both points. VAIL is not available yet, that I have seen, and I don't believe it was intended originally for Win32 apps.
I don't think VAIL has shipped yet, and getting that working correctly and securely, as it involves sharing memory between the WSL2 VM with a Linux kernel and the host Windows kernel, seems to me the most likely blocking issue for why they haven't yet released Android app support on Windows. See this comment on the readme of the WSLg repo:
> Please note that for the first release of WSLg, vGPU interops with the Weston compositor through system memory. If running on a discrete GPU, this effectively means that the rendered data is copied from VRAM to system memory before being presented to the compositor within WSLg, and uploaded onto the GPU again on the Windows side.
As for originally being intended for Win32 apps, I do wonder if "Vail" and "VAIL" are different projects within the OS division. The Linux/X11/Wayland and vGPU interop part of VAIL requires different expertise and deep knowledge of the Linux graphical stack.
I mean, I’m using it on stable Windows 11. The README says it shipped in build 21xxx which is 1000 builds behind Windows 11’s number. I can check though.
And for the other parts, the interop bits are new, but the deck from when the WSL2 devs presented this specifically said it was a great solution because it was already being used for Azure Virtual Desktop (hosted Windows VMs).
The difference is not whether GPU applications do or do not work. They work with both. The difference is, do they work with zero copying via shared memory between the Windows host and the WSL virtual machine?
RAIL enabled graphical applications - including those that use the GPU - to be presented as windows on the Windows desktop. But it involves extra latency on the order of several to tens of frames, depending on how expensive the framebuffer copying and synchronization steps are.
VAIL enables performance (and input lag) that is on par with running Linux natively.
I use a different set of platforms, but there are a couple of iOS programs I run under macOS.
In general the experience is worse than using a real macOS app, but it isn't as terrible as running in a web browser/Electron (e.g. Slack, Spotify), or as a macOS app that looks like it was written under duress by someone who didn't give a shit (e.g. WhatsApp).
Probably not a huge team, but I'd guess the number of people involved easily runs to three digits, as MS needs to pay the developers and marketing people, manage the releases through Windows Update, collect feedback, check compatibility, etc.
At the scale of MS a "major" Windows feature can't be cheap, even if it looks simple.
Seriously, look at how difficult it is for them to port the old style control center or improve notepad.exe.
Not only that, but it severely underperforms vs e.g. the VMware Hypervisor, even on the latest Windows. (Good luck disabling Hyper-V so that you can even run a different hypervisor on Windows these days, btw)
> Today, it's their way to drag cloud developers away from the Mac and Linux ecosystems into the corporate-blessed Windows ecosystem
No, it's their way to keep corporate (now also becoming cloud) developers already on Windows from being dragged off the Windows ecosystem. It's defensive, not (primarily) expansive.
Microsoft's next step might be to 'Extend': add features to WSL2 that aren't available outside of Windows. (To lock in WSL2 users, and since they have the manpower they can outpace any other Linux distro on features.)
Every person I know who dual-booted or ran *nix in a vm has now switched over to WSL, including myself. It works ok, I got a korn shell, I can access all the windows data.
This move killed cygwin and a lot of dual-boot linux installs within a year of availability.
here's the thing though - everyone i know just needs the basic korn shell and basic unix like features for their laptop. no one is writing apps targeting wsl, no one's running it on a server. the features we use wsl for have been the same since 1988, and we're not interested in the new ones.
Exactly because of this kind of experience, and my own at the time, I deeply believe that if Microsoft had been half as serious about Windows NT POSIX compatibility as they are now with WSL, Linux would never have taken off.
I got into Slackware Linux as a means to do university work at home, and when I started working it was Windows NT, AIX, HP-UX and Solaris. We had a couple of Linux-based servers, but they were used as toys, like smuggling a couple of networked games into the office.
I don't know who out there might want to deploy their production services using WSL2. The appeal of using WSL is to develop and test apps in Linux distributions that might be in use in production.
I started to use WSL so much that I finally ditched Windows entirely. I doubt they can add anything so special to keep users that the Linux community can't.
> "port" Windows services and applications to the Linux kernel.
In every category imaginable, the NT kernel is better designed. Async IO, better power management, pageable kernel memory, stable driver API. I'd much rather have a KDE shell on the NT kernel.
I get that Linux is popular, but no objective evaluation would claim that Linux kernel is better in any metric other than "how open source is it?" Spend some time to read the NT docs and get familiar with the design and you'll see.
This may be a hard pill to swallow, but in many areas Linux's design is still very much "my first kernel" (I will give credit where it is due, though: it is slowly being patched, piece by piece). Whereas NT was written by a team that had done it a few times before and avoided a lot of pitfalls (e.g. synchronous-by-default IO, really??)
> In every category imaginable, the NT kernel is better designed. Async IO, better power management, pageable kernel memory, stable driver API.
I can't comment on most of these things since I haven't done much coding against them, but I will definitely grant you the stable driver API. Drivers are a huge pain in the ass in Linux, and until DKMS it was pretty common for upgrading my kernel to break my wifi card, which was really annoying.
> I get that Linux is popular, but no objective evaluation would claim that Linux kernel is better in any metric other than "how open source is it?"
I mean, that's a pretty big feature. Didn't Linux get multicore support well before Windows almost entirely because Linux was open source and popular on servers? When something is open source, companies (or people) don't have to wait for a specific monolithic company to decide that a certain feature is worthy.
> synchronous by default IO, really??
I will totally admit a lot of ignorance as I don't work in kernel spaces, but hasn't Linux had Async IO in the form of `epoll` for like 20 years? It's not the default but I feel like it's pretty commonly used.
-----
Of the things I do like about Linux (outside of open source), it follows the "everything's a file" mantra from Unix. It makes it fairly fun and easy to arbitrarily glue applications together (admittedly at the cost of performance sometimes).
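The everything's-a-file point can be sketched in a few lines: a pipe is just a pair of file descriptors, so anything that speaks "file" can be glued to anything else through it (Python here purely for illustration).

```python
import os

# A tiny illustration of "everything's a file": a pipe is just a pair of
# file descriptors, so any code that reads or writes files can be glued
# to any other code through it.
read_fd, write_fd = os.pipe()

with os.fdopen(write_fd, "w") as producer:
    producer.write("hello from the producer\n")  # closing the fd flushes it

with os.fdopen(read_fd) as consumer:
    message = consumer.read()  # the consumer just "reads a file"

print(message, end="")
```

The shell's `|` operator is the same trick: both ends only ever see an ordinary file descriptor.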
Also, I like that you can choose different filesystems more or less arbitrarily. I don't have a Linux laptop anymore (I run macOS), but I when I did I was running ZFS on root with Ubuntu, and ZFS is pretty awesome. As far as I'm aware, with Windows you're basically stuck with NTFS.
>I will totally admit a lot of ignorance as I don't work in kernel spaces, but hasn't Linux had Async IO in the form of `epoll` for like 20 years? It's not the default but I feel like it's pretty commonly used.
As an API, epoll is kind of... suboptimal. Of course, lots of great things have been built with it, but there's a reason it's being replaced with io-uring, which is a lot more similar to Windows iocp.
From a high level, the philosophy of doing I/O with IOCP, POSIX AIO, and io_uring is the same. All of those are true async I/O, rather than a readiness model like epoll/kqueue/etc.
The novel thing of io-uring is using a new data structure to do it with fewer syscalls, thus less syscall overhead.
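For the curious, here's roughly what the readiness model looks like in practice — a minimal, Linux-only sketch using Python's `select.epoll` wrapper. The kernel only tells you the descriptor is ready; you still make a second syscall to fetch the data, which is exactly the extra round trip that completion-model APIs (IOCP, io_uring) avoid.

```python
import os
import select

# Readiness model (Linux-only): the kernel says "this fd is readable",
# but you still issue the read yourself. Completion-model APIs instead
# hand you the finished data.
read_fd, write_fd = os.pipe()

ep = select.epoll()
ep.register(read_fd, select.EPOLLIN)

os.write(write_fd, b"ping")        # make the pipe readable
events = ep.poll(timeout=1.0)      # syscall 1: readiness notification...
data = b""
for fd, _mask in events:
    data = os.read(fd, 4096)       # syscall 2: ...then actually fetch the data

ep.close()
```

With io_uring, the submission and the eventual completion (with the data already read) both go through shared ring buffers, which is where the syscall savings come from.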
NT was multicore much earlier (1994 [1]). Linux had SOMEWHAT working SMP support in 2.0 in 1996 [2]. But really I would hardly count it as real, what with the Big Kernel Lock (BKL).
epoll() was introduced in Linux 2.5.44 [1], which was released in 2002 [2]. Epoll is not a wonderful API. Linux's real reply to NT's async IO is io_uring, which only appeared in Linux 5.1 [4] in 2019(!!) [2], and is still wildly evolving.
I mean, I don’t think you’re wrong; I think the main reason that people haven’t complained that much about being stuck with epoll is that a lot of this is somewhat abstracted nowadays. I’ve only used epoll once to play with it, and most of the time when I need nonblocking IO, I use Java NIO or Netty or something. I generally do not care about how pretty the underlying code powering these things is as long as it works and as long as my code is pretty.
Granted, that’s not really a good excuse for Linux having a crappy system, just why I think there wasn’t ever a ton of urgency for it.
epoll is not an asyncio implementation, it really doesn’t belong in any discussion about aio, it’s a relatively better poll/select. Apparently less well known is that Linux had an async io interface also introduced around the same time in 2.5 (see io_setup, io_submit, and libaio). Initially extremely limited and generally not widely used (limitations include only working on certain disk files opened O_DIRECT), however it was incrementally improved over the years until io_uring came around. glibc has had a POSIX AIO implementation for years, implemented with userspace threads since the kernel implementation was originally not suitable.
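The thread-pool flavor of glibc's POSIX AIO is easy to picture. Here's a rough analogue of that design — a sketch only, not the real `aio_read()` API; the helper name and signature below are made up for illustration:

```python
import tempfile
from concurrent.futures import ThreadPoolExecutor

# glibc's POSIX AIO has long been implemented with a pool of userspace
# threads issuing ordinary blocking reads; this sketches the same idea.
_pool = ThreadPoolExecutor(max_workers=4)

def aio_read(path, offset, length):
    """Submit a read; the returned future 'completes' when the read finishes."""
    def blocking_read():
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(length)
    return _pool.submit(blocking_read)

# Demo against a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello async world")

future = aio_read(tmp.name, 6, 5)  # read 5 bytes starting at offset 6
result = future.result()           # blocks only if the read hasn't completed
```

The caller gets async-looking semantics, but each in-flight operation still burns a thread doing a synchronous read underneath — one reason a real kernel-side completion interface like io_uring was worth building.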
At the beginning of the AMD64 era, a lot of Windows software was still single-threaded 32-bit, while on Debian with APT you had a working 64-bit environment. It was mostly the userland that could not handle the core counts found in even consumer-grade multicore AMD64 chips.
Even recently, many games did not take advantage of multiple cores. AMD has been strong in multi-core performance, while Intel had the best single-core performance until Zen 3 recently, and Windows was unable to perform well (use all the cores) on AMD's AMD64 chips.
You can see it now too: Intel gets a slight performance boost with Windows 11 compared to 10, while AMD loses performance, which AMD said they will address in October.
This is a dishonest take on my post. I was talking about 64 bit in the context of AMD64. Not 64 bit in general. Alpha is irrelevant in this discussion.
When I first used Interix back in the late 90s I had this dream that someday I'd see a version of NT that booted into text mode and ran an all-POSIX userland. That's never going to happen now but, architecturally, NT could totally pull it off.
Edit:
Microsoft wouldn't ever open-source NT but, damn, GNU/Windows NT would be a sweet OS to use. (Make it license-compatible with ZFS and it'd be awesome.)
I wouldn't say never. I think the chance of that happening is ever so slightly increasing every year. The benefits of open source NT kernel will some day outweigh Microsoft keeping it close. Its strategic value is decreasing. ( The whole OS on the other hand is a different question though. )
Not necessarily private; Windows Server (without year-number) is GUI-less. Windows Server (with year-number) you can choose right in the installer, if you want "Desktop experience" or not.
Not particularly. I'd rather have seen Interix hang around.
Interix was implemented in more of a pure "NT" manner, being a subsystem, as compared w/ WSL1. WSL1 just shims Linux syscalls. I'd rather build software from source targeting Interix (as just another "Unix"-flavored build target) than running native Linux binaries.
Microsoft bought Hotmail in 1997. Hotmail was programmed in C++ and Perl--both of these programming languages had already been ported to NT at that time.
It took them 7 years to get it stable enough to run on Windows Server, and they had to rewrite the entire stack to do it.
Until then, they were running the service on Unix.
I think the parent comment is talking about the NT kernel rather than the Win32 subsystem that is Windows. Today NT really only has 1 subsystem (Windows) but it definitely can support multiple ones, it even had one for Unix (really more a POSIX layer) back in the day.
Two subsystems for POSIX, really. There was the original POSIX subsystem[0] that shipped in NT 3.1 thru 4.0, and the third-party OpenNT that became Interix[1] after Microsoft purchased it.
Even Windows doesn't agree with that. They started with a syscall wrapper for running Linux binaries, and ended up running a full Linux kernel next to the NT one because that was faster.
There are absolutely parts of Linux with all the architecture and engineering rigor of a favela. Real big "I want a new room and have two sheets of corrugated steel, a neighbor's wall, and an afternoon" energy.
There are also major parts of it that have benefited from decades of "this tweak that won't be driver-ABI compatible increases efficiency in this use case by a measurable but single-digit percentage". And yes, there's politics to getting that merged, but the barrier is way lower in Linux. You try a lot of that in how Microsoft builds software and at best you put a target on your back because you just made a lot of work for another team. At worst that amount of differently siloed work just kills the idea in the first place.
windows is idling at less than half the power! Idle power matters since most laptops spend most of their time idle waiting for the comparatively-slow humans to press keys
> And did you come back here an hour later to get another quip in?
And "quip" is a questionable (bordering on implied ad-hominem) word choice given that my only fault is disagreeing with you.
> windows is idling at less than half the power! Idle power matters since most laptops spend most of their time idle waiting for the comparatively-slow humans to press keys
The min use under Windows is slightly lower, the average is higher, and most importantly of all, the battery life (the metric we actually care about) appears about the same.
As the article says "Overall, the power use between Windows 10 and the four tested Linux distributions was basically on-par with each other." and "Beyond this data, the battery life of this Dell XPS laptop has been about the same as seen under Windows 10 with the testing thus far. So overall it's a pleasant surprise with not having tested any other Kabylake-R laptops and wasn't quite sure if the Linux power efficiency would be able to run on-par with Windows 10 at this point."
So it would appear that your new claim (that Windows gets twice the battery life) is without merit.
I wish laptops were like phones, where race to idle is key, but they're not. The upper levels of the software stacks on both Windows and Linux aren't there yet.
No, it's not NTFS versus ext in the sense you mean. The Microsoft messaging on this is misleading at best.
It's that the NT filesystem stack puts the FS cache between user space and the FS driver, whereas on Linux it sits between FS driver and the block device driver. What that means is that on Linux, you get caching of blocks containing FS metadata for free, but on NT it has to be manually performed. That kills perf when looking at a lot of metadata.
When they say NTFS is to blame, they mean the NT FileSystem stack.
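For concreteness, the metadata-heavy pattern being described is something like a recursive stat() walk — lots of filesystem metadata touched, almost no file data. A hypothetical sketch of that workload:

```python
import os
import tempfile

# A metadata-heavy workload: walk a tree and stat() every entry.
# Each os.stat() call touches FS metadata blocks - the blocks that Linux's
# page cache picks up for free, but that the NT stack must cache manually.
def count_files_and_bytes(root):
    files, total = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))  # pure metadata access
            files += 1
            total += st.st_size
    return files, total

# Demo on a small throwaway tree.
root = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(root, f"f{i}.txt"), "w") as f:
        f.write("x" * i)

n, size = count_files_and_bytes(root)
```

Tools like `git status` and recursive deletes are real-world versions of this loop, which is why they're the classic "slow on Windows, fast on Linux" benchmarks.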
> I get that Linux is popular, but no objective evaluation would claim that Linux kernel is better in any metric other than "how open source is it?"
There's probably many areas where Linux comes strongly ahead - a much wider array of supported architectures, and even on the windows supported ones Linux scales much better by the virtue of being deployed from the smallest embedded computers to (all of) the largest supercomputers.
Linux supports a larger number of architectures than NT ever did, but NT was originally implemented on a non-x86 processor to ensure portability. The MIPS, PowerPC, Alpha, and IA64 ports of NT are all dead, but they were all supported in the past. The speed with which Microsoft brought up NT on ARM illustrates the portability too.
Yeah one of the features of NT is its hardware abstraction. Even though those other platforms have fallen by the wayside I would presume that abstraction layer is still there.
install windows 10. install WSL(2). install apache in WSL(2). start edge. point the edge browser at WSL(2). Notice the default network settings disallow edge from talking to WSL(2).
or start up something that uses mDNS.
or try to use non-latin characters on the terminal.
I'm sure MSFT is putting a lot of money in WSL2. but it's hard to argue they're serious about it.
Or they would have gone bankrupt? Mac OS lost to Windows, and if Apple had been about software instead of hardware, they would probably have missed the iPhone and iPod, just like Microsoft did.
Entirely possible, or bought out by a company like Sun Microsystems; it might have been a way for Sun to try and penetrate the desktop market instead of mainframes and servers.
Pulling on this thread: if the iPhone didn't exist (because Apple was solely software), I think there's a chance that the big player in the mobile space today would be Blackberry. They were doing extremely well before the iPhone, and if Apple hadn't disrupted the entire smartphone industry, I don't really see why that wouldn't have continued. I wonder if that means that smartphones would still have keyboards?
Another what if scenario would have been if Apple bought Sun instead of Oracle. We'd have ZFS on Apple devices say 5 years before APFS was rolled out.
As for Blackberry: and Nokia. I know Nokia was not very popular in the US (for smartphones), but elsewhere in the world they were. In Europe it was Android that replaced both Nokia and Blackberry.
Android was coming regardless of the iPhone. A major question is whether the on-screen keyboard would be as good as it is on Android if the iPhone had not existed.
But Nokia was ready for the masses with the N9, were it not for the burning-platform memo.
Symbian also had and used capability based security. iOS got it much later, and Android plays catch up on security and privacy with iOS.
Even though what they had paled in comparison to iOS, Android was almost ready and probably would also have overtaken Blackberry. Unfortunately for them the release of the iPhone and the ‘slab’ form factor meant they practically had to start over.
Wouldn't that kind of confirm my point though? In this theoretical universe where Apple is going bankrupt because there's no iPhone, but they do have a decent operating system they're wholly dedicated to (in addition to Sun's previous interest in the system before), might have made it attractive to an executive at Sun.
I'm not saying it would have worked out any better for Sun, just that they were one of the bigger players in the tech world in the late 90's/early 2000's, and I could see trying to break into the desktop space seeming valuable to them at the time.
Remember that this was on the heels of NeXT's failure as a hardware company and then their failure to capture any real market as a software company. At the time Jobs was probably thinking that it would be a quick infusion of income to get a major supplier like Dell to license OPENSTEP/Mac OS X or whatever it was at the time, even if he had longer-term visions of getting back to hardware (in the consumer space this time, not higher ed/corporate as was the NeXT strategy).
At the time (2000), BeOS was the only operating system interesting enough that I spent days or weeks pirating it over a slow phone connection, because it wasn't available to purchase in my country. Even though I used a download manager, the code got corrupted somehow and the install crashed after I tried to boot from the CD. The CRC was wrong. I had to retry a few times. The phone bill was crazy and I had to justify it to my parents. But it was worth it; I enjoyed BeOS more than any other OS at the time. Windows, Linux, SkyOS, AtheOS/Syllable.
Mine is: if Apple had actually bought Bungie (which they reportedly missed out on by three days), Halo would have been killed and Bungie dissolved, and who knows what would have happened to Xbox and the console market.
Halo was originally demoed at Macworld for the Mac! I'm sure it would have launched, but I can't imagine Apple would be very good at running a games company.
I think money wasn't as big a problem as it's been reported.
Take this snippet from Wikipedia:
"Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $300 million;[11] Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs.[12]" [ https://en.wikipedia.org/wiki/BeOS ]
It seems to me that BOTH $300M and $429M are greater than $125M and $429M is greater than $300M.
Having been both a NeXTStep and BeOS developer, I can assure you that NeXTStep was a MUCH more mature product than BeOS. Was it money? maybe. a little. Was NeXTStep better value for money? maybe. probably.
It rankles me every time I hear the story about Gassée being greedy. He might have been asking for more than Apple wanted to pay, but I think it's simplistic to say it was only about money. NeXTStep morphed into Rhapsody and MacOS reasonably quickly. And the MetroWerks compiler for BeOS was DEFINITELY buggy.
I think there was no discount Gassée could offer that would make up for the longer time-to-market for a BeOS based next-gen Mac OS.
What's missing from this is what each OS brings to the table. It's entirely possible that they considered BeOS to not have as many desired features as NeXTStep delivered. Whatever the perceived relative value of each, I don't think it makes sense to consider them equivalent, as Apple apparently didn't. In other words, obviously $300 million wasn't too much for Apple to buy an OS for, but they seemed to consider too much to buy that OS for.
This would certainly have destroyed Apple, right? BeOS despite its cult followers was really almost useless and JLG would not have initiated the projects that made Apple a success after acquiring Next.
I agree in the sense that BeOS wouldn't have saved Apple, though this has less to do with the merits of BeOS and more to do with Apple's circumstances. Even with the purchase of NeXT, there was still a considerable time period between December 1996 (when the purchase of NeXT was announced) and March 2001 (when Mac OS X 10.0 was released) where Apple's customers still had to use the aging classic Mac OS (and even then Mac OS X didn't start getting widespread adoption among Mac users until the Jaguar/Panther eras). More to the point, Apple's operating system strategy wasn't the only issue Apple faced. NeXT's OpenStep API and OPENSTEP operating system weren't enough by themselves to turn around Apple; it was Steve Jobs' leadership and the successful launches of products such as the iMac G3 (1998) and the iBook G3 (1999) that kept Apple afloat until Mac OS X was released. I don't know if Apple would have survived had Gil Amelio remained in power or had Jean-Louis Gassée taken control.
It's not like something came out of Google recently. If anything, Google is desperately trying to catch-up with Amazon and Microsoft in cloud and Apple in consumer etc. They are entering a long path of gradual decline.
Does it count as "popular" if you aren't even aware you're using it? I suspect 95+% of Apple product users do not know the GUI is interacting with 'nix, and even fewer have opened a terminal and interacted with it directly. The GUI is not 'nix. It's an interface to it.
Edit: removed asterisks before *nix as it triggered markdown
it's a bigger issue for developers. The fact that I can do *nix style development, then run some script that wraps it up in a nice .app bundle and ship that off to users means that its popularity very much matters.
Around that time I was forced to use Mac OS 9 at work. It was terrible. The OS was very unstable and I had to reboot several times a day. This was one of the reasons a lot of companies were using Windows: it was way more productive (except for Windows ME).
So I can imagine large companies would not trust OS X at that time and it would have been a big risk for Dell.
apple would never become the google of our time, they know what they are doing. also, microsoft is not playing catch up. how is that office product coming along? does it work yet?
Microsoft actually developed and sold a Unix distribution called Xenix. I'd imagine that if there had been great demand for it, it would have replaced Windows.
If Apple had switched to an OS-licensing model, I think it would fix both of the issues I see with MacOS right now: a lack of communication between the users and designers, and their lack of incentive to work with other manufacturers.
This would have been a re-badged version of OpenStep, then named Rhapsody, well before OS X roadmap was plotted out, let alone released.
NextStep/OpenStep had support for x86 since 1994, so it was all ready to go if Dell wanted to play ball. But it wouldn't have included any support for Classic Mac software.
In other words, it wouldn't have supported any software at all, except for the few OpenStep devs still hanging on. But perhaps it would have been a good way to help kick-start OpenStep development.
Apple's original plan was to ship OpenStep as the next generation Mac OS, while running classic Mac OS software in a VM. This OS was eventually released as Mac OS X Server in 1999.
But this plan was hugely unpopular in the Mac community. Microsoft and Adobe told Apple that there was no way they would port Office and Photoshop to OpenStep, and that killed the original plan.
This forced Apple to go back and tightly merge classic Mac OS and OpenStep, with the Carbon APIs and a ported version of the classic Mac Finder.
And thus, Mac OS X was conceived.
Three years later it was released.
Two years after that, it was good enough to use.
Two years after that, it was running circles around an aged Windows XP.
I'm halfway through the book and found it quite good if you are interested in strategy and private equity, because it talks about relatively recent events. There's also a recent a16z podcast with Dell, Martin Casado, and Andreessen that I enjoyed: https://future.a16z.com/podcasts/cloud-wars-company-wars-pla...
Michael Dell went on Jason Calacanis' podcast last week too, discussing the book. It was a pretty good listen.
Jobs wanted a cut of every PC sold if the deal went through. That was too much for Dell since they were selling so many PCs already, so it fell through.
I believe one of the founders of PowerComputing (one of the original Mac Clone Makers) was an ex-executive from Dell. PowerComputing was really putting the hurt on Apple's sales of computers for a while.
Random factoid since we're talking about PowerComputing: back in the day, you could run MacOS on an IBM RS/6000-43P (I think it was the only RS/6000 with MacOS drivers), and you could run AIX on the Apple Network Server machines (which were the only Macs with AIX drivers).
But since other people in this thread are talking about other Unixes, it's fun to think of what would happen if IBM and Apple continued the AIX / Mac experiments.
Sony VAIOs were the MacBook Pros of their day. I had a couple of them and so did my brother. All were very nice computers, even with the garbage OS that was Windows Vista.
VAIOs were solid computers (at least in my memory of them).
I would have killed to have Vista on my VAIO. I had Windows ME on it! Despite that fact, it still sits among my most beloved computers that I have owned. It was also the first that I had purchased new whereas all the previous ones were hand-me-downs.
In a testament to how powerful product placement can really be, the image of James Bond using a VAIO laptop while sitting on a boat is still burned in my brain.
He was also trying to get big ISVs including Adobe and (believe it or not) Microsoft to port their applications to Rhapsody, basically a cross-platform Windows NT/OS X version of Cocoa. https://arstechnica.com/staff/2008/04/rhapsody-and-blues/
Jobs still reached an audience through iTunes. It paved the way for the iPhone to be untouched by carriers. Verizon rejected the first iPhone, but Cingular agreed to Apple's terms.[0]
It would have been interesting if it ended up like Microsoft today that sells their own hardware (e.g. Surface) and licenses out the OS. I just cannot see Apple giving up hardware of their own at any point. I doubt whatever went on with OS X would have transferred to the iPhone or iPods before it.
Truthfully, it would certainly relieve a lot of pressure on Apple surrounding the Pro models. There are certain models that Apple always look like its reluctant to build and the Mac Pro has been that model. It would also have made the virtualization story a bit better.
I don't think it would have worked. Jobs had fixed views about hardware Dell was never going to align to. And, Jobs would have wanted serious rent (IPR) for designs put into Dell. And, the tension between Apple-hw and Dell-hw would have torn them apart.
iPhone and iPad basically work because they're owned. There could have been a port to Samsung native OLED tablets at any time. The lesson was learned early: don't bother forming alliances which leave "their" name on the box.
Those were tremendously bad terms, especially given what happened with the licensed Mac clone program which was ended, apparently shortly after these discussions with Dell.
I'm not exactly a huge fan of MS, but at least when MS basically got a royalty for every Dell sold, there was a reasonable certainty that a vast majority of them were going to actually use Windows. It was the only OS pre-installed, meaning that unless the person was a geek (like basically anyone who hangs around HN), they were probably going to use Windows.
If there were two operating systems installed, there's a good chance that a majority of people would still only use Windows, and Dell would be paying a license to Apple that only 10% of their customers actually used.
If they could have worked out a deal to license Mac OS X in the late 90s, it would have been a real game changer. It's too bad Jobs would not give Dell reasonable terms.
Mac OS X Server wasn't very popular. I'm guessing that since Dell made very respectable servers in the 90s, this was an effort to put a kind of NeXT server on Dell hardware, or at least an early version of Mac OS X Server.
Mac OS X was a work in progress during the late 90s. The Apple/NeXT merger was in 1996, there were developer releases of Rhapsody (with x86 support) in 1997 and 1998, and Mac OS X Server 1.0 was released in 1999. But the first public beta of Mac OS X with the Aqua UI was in 2000, and OS X version 10.0 was in 2001. So any deal negotiated in the late 90s to bring Mac OS X to Intel PCs would have taken a few years to come to fruition, but Apple did have a usable x86 operating system to base negotiations on in the late 90s.
OPENSTEP was running on x86 even before the merger. The work would have been mostly to make it look and feel more like what Mac OS users were familiar with, and adding the ability to run "Classic" Mac OS applications.
True, but OPENSTEP on its own would not have been of much interest to Dell given the very small software ecosystem. The promise of also inheriting the Mac application ecosystem made Mac OS X far more marketable.
It might have been worth it in 1992. By 1998 there was no point.
That is a really interesting what-if scenario, but I don't think the mid-90s Apple corporate management would have been willing to go through with it. At the very least they would have tried to lock it down to Apple designed Intel hardware, they've never been able to let go of being a hardware company.
I can't see that working very well. Dell computers are profoundly ugly. Mac OS X was a decent operating system but it would have hurt the Mac brand to have it running on the dull gray boxes that are Dell computers. Just seems like a last ditch strategy to get some revenue for Apple.
You're looking back with all the knowledge and information of today; the NeXT computer was, for all its glory, just a black box. At the time Apple did not have anything like what you see now — that didn't come until the 1998 iMac, and only because of Ive.
Steve Jobs would have literally done anything to save Apple, you only need to watch the 1997 Macworld Expo to see how far he was willing to go. Those people absolutely hated Gates and you can guarantee you would have seen Michael Dell on that stage if it had gone the other way.
With that said, Mac clones were a thing; Jobs even tried to keep those deals going, but they did not make sense at the time. Apple was both a hardware and a software company, and the clone makers would have been competition anyway.
Yes the NeXT was a black box but it was visually striking. Nothing else looked like it. It appeared to be from the future. But you're right at the time Macs were just beige boxes, and pretty dull looking.
I think we're in agreement. I'm saying that this was a last ditch effort to save Apple, but it wouldn't have scaled beyond a few years of runway. Jobs likely understood that a long term OEM deal would have hurt Apple's brand, but would also stop the bleeding short term.
At the time, you could buy licensed Mac clones; so this seems like it could have been an attempt to get wider distribution with a bigger computer builder. Macs were not very pretty beige boxes at the time, so other not very pretty beige boxes wouldn't be a big deal.
Based on the terms discussed, and that wikipedia says the licensed clones were ended because of unfavorable terms for Apple, I'm getting the impression that Jobs was trying to find a bigger partner and also make the deal work well for Apple, but was more interested in a good deal for Apple than a sustainable partnership for both companies. This deal didn't happen, and the licensed clone program was ended, and Apple returned to being the only maker of machines running Mac OS, which mostly worked out for them.