Well, I agree that it is not wise, but I think Apple will try to do it anyway.
As for the argument that you can't shoehorn a "one size fits all" OS: in general it misses some concepts, but when applied to certain OS models it is spot on.
Regarding OS X as a whole, it is not designed to be a modular OS, nor is it one in practice, and this is why I agree with your base arguments.
OS X has inherent issues from the way Apple mangled things when it put OS X together around XNU; the result is a massive spaghetti bowl, held together with a lot of duct tape and super glue to keep up with the technology. iOS is a better design, but even it inherits many of OS X's problems and limitations, which are fundamental to the kernel architecture/model.
Linux also fails to meet the fully modular needs of "one size fits all", even though many people try to make it fit due to its OSS nature and a code base written to stay portable. The monolithic kernel is what keeps Linux from being fully modular, along with the inherent dependencies that come as a side effect of the unix OS model.
For example, look at the Linux kernel used on Android: it doesn't fit, just like you state. Android has to bypass key functions of the Linux kernel and handle them itself, using only simplistic calls into the kernel. A good example is Android implementing its own memory management and scheduling policies, which is a crap situation to be in. If Android used the stock Linux memory manager and scheduler as intended, it would also have to pull in a large chunk of other services/functions that would be far too resource intensive/heavy for most phone hardware.
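To make that pattern concrete, here is a minimal sketch (this is not Android's actual code, and the allocator is purely illustrative) of a runtime that asks the kernel for raw pages with one simple mmap() call and then does all of its own memory management in user space:

```c
/* Illustrative only: a trivial user-space arena allocator. One simple
 * kernel call (mmap) gets raw pages; all bookkeeping afterwards is done
 * by the runtime itself, not by the kernel's allocator. */
#define _DEFAULT_SOURCE            /* for MAP_ANONYMOUS on strict modes */
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

#define ARENA_SIZE (4 * 1024 * 1024)   /* 4 MiB arena from the kernel */

static unsigned char *arena;           /* base of the mapped region    */
static size_t         arena_used;      /* bump pointer, managed by us  */

/* One simplistic kernel call up front; everything after is user space. */
static int arena_init(void)
{
    arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return arena == MAP_FAILED ? -1 : 0;
}

/* Bump allocation: the kernel never sees these requests. */
static void *arena_alloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;        /* keep 16-byte alignment */
    if (arena_used + n > ARENA_SIZE)
        return NULL;
    void *p = arena + arena_used;
    arena_used += n;
    return p;
}

int main(void)
{
    if (arena_init() != 0)
        return 1;
    char *msg = arena_alloc(64);
    snprintf(msg, 64, "allocated from a user-managed arena");
    puts(msg);
    munmap(arena, ARENA_SIZE);         /* hand the pages back */
    return 0;
}
```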
However, modular OS models do exist, and they can handle the one size fits all better than expected.
This is where people need to go old school in their thinking and pick back up where the world dropped out in the early 90s, as some of the best OS theories and conceptual designs were abandoned when everyone ran back to Linux and OpenBSD to escape Microsoft and the horrible Win3.x/9x/Me generation of OSes. (Which made a lot of sense at the time, as those OSes were crap, but sadly necessary for the hardware generation they were designed for.)
So if we go back to where the unix model was failing, in the late 80s, and pick up the best OS model concepts of the time, we can pull out some things that are essential to a modular/portable/extensible OS model and set of technologies.
This was around the time I was in university, and we spent a lot of time on the OS theory and engineering concepts of the day. That is why it is freaking amazing to me today that the 'crap' we were trying to get away from is still considered 'awesome' by a large portion of the OSS world, and especially by the younger generation.
So with this in mind, let's pick out a few things that are necessary:
-Object Based Model (Back then this was seen as overhead and as a bad thing; today the overhead is tiny and is offset by the inherent extensibility.)
-Architecture Agnostic (This is beyond portable, as the code doesn't have to change no matter what the underlying hardware is; see the sketch after this list.)
-Side Scaled Layering (This moves beyond just a microkernel with a separate set of kernel API interfaces; the layering should be virtually unlimited, with multiple side layers operating in parallel, transparently accessing lower layers and serving access from higher layers.)
These are just a few concepts that I remember were the philosopher's stone of OS theory back then.
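To give a rough idea of what "architecture agnostic" means in practice, here is a hypothetical sketch (not taken from any real OS; every name in it is made up) where the portable kernel code only ever touches the hardware through a small table of function pointers filled in at boot:

```c
/* Hypothetical sketch: the portable kernel code talks to a small
 * "architecture interface" of function pointers. Each CPU/board ships
 * its own table; nothing above this interface changes when the
 * hardware does. */
#include <stdint.h>

struct arch_ops {
    void     (*enable_interrupts)(void);
    void     (*disable_interrupts)(void);
    uint64_t (*read_timer)(void);               /* monotonic ticks       */
    void     (*switch_context)(void *from, void *to);
};

/* Selected once at boot; x86, ARM, etc. each install their own table. */
static const struct arch_ops *arch;

void arch_install(const struct arch_ops *ops) { arch = ops; }

/* Portable scheduler tick: identical source on every architecture. */
void scheduler_tick(void *current, void *next)
{
    arch->disable_interrupts();
    uint64_t now = arch->read_timer();
    (void)now;                                  /* accounting would go here */
    arch->switch_context(current, next);
    arch->enable_interrupts();
}
```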
Oddly enough, these concepts were implemented in a real OS within a couple of years. And as we expected, that OS was 'heavy' because of the complexity these concepts introduce. However, within just a few years it started to hit some 'surprising' strides in capabilities and performance.
So ask yourself, when you look around at OS technology today, where do you see these conceptual OS theories actually in use?
The best example is one that people around here ignore and would never expect to be this advanced...
Windows NT (aka Windows 2K/XP/Vista/7)
It fits all of the OS 'concepts' that the technology world was talking about back then and seems to have since forgotten.
NT is an OS that was designed to be a "one size fits all".
-It is fully Object Based; even low level functionality uses objects and object based concepts rather than static functions with static parameters. (See the handle example after this list.)
-NT uses portable C, and a HAL. (The HAL is what makes NT more than simply portable: the NT kernel and OS code itself does not have to change, as it is written to target a generic base architecture that the HAL provides. The only thing that has to change is the 64-256KB HAL, which does the high speed translation between the base architecture NT targets and the actual hardware architecture it is running on.)
-NT is extensibly layered, and was even designed around a client/server concept of OS layers. This is why there are multiple kernel APIs that run side by side, and above them NT uses a 'subsystem' concept for the higher level OSes that run on top of it. Win32 is a subsystem with its own kernel running on top of NT, just like the SUA is a full R5/BSD unix OS that runs on top of NT, side-by-side with Win32. (Win32 could be removed, as MinWin demonstrated, and replaced with any subsystem OS, even the BSD subsystem, as the main subsystem.)
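To see the object-based point in practice, here is a tiny example using real, basic Win32 calls: an event is just another kernel object behind an opaque handle, and the same generic wait/close calls work on events, threads, processes, files, and so on.

```c
/* Everything in NT user mode ends up as a handle to a kernel object
 * managed by the object manager; the calls below are standard Win32. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Create an anonymous, manual-reset, initially non-signaled event object. */
    HANDLE ev = CreateEventW(NULL, TRUE, FALSE, NULL);
    if (ev == NULL)
        return 1;

    SetEvent(ev);                               /* signal the object */

    /* The same generic wait works on events, threads, processes, ... */
    DWORD r = WaitForSingleObject(ev, 1000);
    printf("wait returned %lu\n", r);           /* 0 == WAIT_OBJECT_0 */

    CloseHandle(ev);                            /* generic object release */
    return 0;
}
```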
BTW, the MinWin project took the XP code base, after the security revamp at Microsoft, and made sure NT was adhering to its inherent layering model design, as some crap had become cross-layered; the Xbox 360 team brought attention to this when splitting off the layers they didn't need for the 360. This is one additional reason Vista was delayed, as it incorporated the fixes to NT's layering.
So what does NT gain out of all this that is not obvious?
These fundamentals make NT highly extensible. Changes and new concepts don't break the old ones, and very little code has to ever change.
Look at a major yet simple example, the WDDM/WDM changes that happened in Vista: they didn't require removing XPDM, and didn't require much work to implement, even though they add a set of kernel level technologies no other OS currently offers. The most important is the new driver duality and kernel level GPU control, i.e. GPU virtualization, GPU scheduling, etc.
Windows 7 with WDDM 1.1 does pre-emptive multi-tasking of GPU threads, which is unique right now; implementing this at the kernel level on Linux or OS X would require a lot of rewriting and fixing of broken dependencies. We are still twitching from the fair scheduling changes in Linux, and those are tiny in comparison to the WDDM/WDM concepts added to NT rather transparently. Windows 7 also revamped the scheduler and memory prioritization flags in ways more extensive than the Linux fair scheduling changes, yet it was a tiny change due to how NT is designed.
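As a small concrete taste of those prioritization flags, here is a sketch using the documented Win32 background-mode flags that showed up around Vista (the thread function and its workload here are just placeholders; the claim about the internal Win7 scheduler revamp is separate from this example):

```c
/* A thread can drop itself into "background mode" and the kernel lowers
 * its CPU, memory and I/O priority together via a single flag. */
#include <windows.h>

DWORD WINAPI maintenance_thread(LPVOID arg)
{
    (void)arg;

    /* Tell the kernel this thread's work is low priority across the board. */
    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

    /* ... bulk/background work goes here (indexing, cleanup, etc.) ... */

    SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, maintenance_thread, NULL, 0, NULL);
    if (h == NULL)
        return 1;
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
```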
Now, I know this sounds like a 'you should love NT' rant; it is not. It is shoving out some old concepts along with a simple question...
Why are we not fostering these technologies in a new OSS OS model, instead of continuing to put duct tape on the unix model and kernel technologies like Linux? Why do we stay ignorant of what Microsoft and NT are doing, just because we don't like them? Why shouldn't we learn from what they have done right, or even try to do better, rather than limping along on our same old crap?
When the NT team started out, they dumped the OS/2 work and the VMS model that Cutler was familiar with, even though a lot of outsiders didn't realize this at the time and tried to compare NT to VMS. They specifically used the opportunity to pick up the best OS technologies and 'conceptual' technologies of the time.
They could have made NT unix based, and Gates even assumed they might, but the team did not want the limitations of the unix OS model or the traps of any other existing kernel technology. This is why they made NT an Object Based model, and why they designed a new kernel technology that at the time was called a 'microkernel', as there was no other way to describe it, even though it was far beyond a microkernel. Later it was called a client/server kernel, and now it is usually called a hybrid kernel (though it is nothing like the 'hybrid kernel' technology used in OS X).
So yes, an OS can be one size fits all, and NT is an example of one, even though the concept eludes most people. There is no reason that, by breaking from the past crap and using emulation/virtualization/subsystems, an OSS OS couldn't be designed to go even beyond what NT is doing.