A lot of companies were experimenting with them. A lot of these experimental OSes were based on Mach. This makes sense because it began development in 1990, and at the time that might've seemed like both a forward-thinking and sensible thing to do. NeXTStep was based on a Mach+BSD kernel, which at the time would've been an obvious thing to do if your goal was to create a modernized Unix system with all the latest cutting-edge (in the mid-80s!) technologies available. That kernel lives on in OS X and its descendants. It was always built as a monolithic kernel, AFAIK. It just differed from a traditional BSD kernel in that the lowest layers (VM, process, thread, and scheduler stuff) were replaced by the Mach microkernel.
In the late 80s, the Open Software Foundation (OSF) was founded as a consortium of seven major tech companies spearheaded by DEC to standardize Unix in response to AT&T and Sun's adoption of SVR4. They adopted Mach as part of this and ended up extending it to add locksets, semaphores, and resource ledgers. MkLinux was then a joint effort in 1996 by Apple and the OSF to both have Linux on the PPC and exploit Mach features in the single-server emulation. As such, the history is convoluted. It's not that Mach is so great, but rather that circumstances made it creep in various places. There was a period of time in the '80s when it was widely assumed by the software cognoscenti that microkernels were the future.
Mach had an unusually large amount of fanfare starting from its introduction at USENIX circa 1985 up to the mid-90s. It was envisioned as a generic resource multiplexer to build all sorts of platforms on top of, and particularly for emulating Unix. You had single-server Unixes like Lites and OSF/1 (the latter sold commercially by DEC as Tru64 UNIX), plus multi-server systems like Mach-US, MASIX, MK++ (for high assurance), and GNU Hurd.
More important is that you know what states or execution effects each component can have, in a way that can be abstracted into analyzing those above it. So the thing can be 4Kloc or 100Kloc, but it should be easily analyzed and composed modules. One problem I noticed, though, is that systems without needed functionality get continuous, ad-hoc versions of that functionality from their developers. C and UNIX were perfect examples: with all kinds of extensions, there was a mess trying to cover stuff that came by default in prior languages and platforms. So there was less safety & consistency anyway despite the underlying tool being primitive. This led me to shift from "simple & tiny as possible" to the simplest version of tools that make it as easy as possible to do right. Bernstein et al's Ethos & NaCl projects are perfect examples where internally they're a bit complex but the interface is simple to use securely. Just got to find the right balance. A lot of good wisdom on such things is from Karger et al in the link below on high assurance virtualization. See "layered design" especially for verbal and visual explanation:
I remember Dan Hildebrandt telling me that the microkernel easily fit in a 4K instruction cache (it pretty much had to; it didn't do much, so you needed some space for whatever it was dispatching). The point is less primitive and more verifiable. A lot of the security kernels of the past were a decent size. So was Dijkstra's THE OS, which pioneered the robust construction processes. Hamilton's team's flawless code for Apollo certainly wasn't small either: We tend to keep the TCB small as less code = less bugs. We simplify it because simpler code = easier verification.
I was big into OS dev when Mach was coming around, but I was a newbie. I read all the Mach papers with stars in my eyes; it all sounded so good. I wasn't alone in getting drawn in by the research papers. I went to Sun and learned how a kernel could work and could perform, and the allure of Mach, for me, started to wane. Reading the Mach code made it worse, especially compared to the Sun code. All these years later, I remain underwhelmed. Linux has a nicer VM system, performs better, and is more readable. Not Sun-level readable by any stretch, but better than my memory of Mach (which, to be fair, is in the distant past). If someone can point to actual real-world data that shows Mach to be better performing on the same hardware, I'd be interested in seeing that; I tend to think that's not possible. If you want to see what a real microkernel done right looks like, go look at QNX before they added the POSIX conformance. You had 4-5 people logged into an 80286 doing work. They were the only guys that understood what the "micro" in microkernel meant.