There are projects that provide a unified process space and inter-node IPC, like Mosix, bproc, etc., but generally they aren't used much in HPC. Having a "bunch of individual machines networked together" works pretty well when you also consider that the network might be 20Gb/s 4x DDR InfiniBand sending frames from point to point at ~2us. I'm just saying...
Chances are, the GP built an HPC cluster and used a typical SPMD approach with something like MPI or PVM for communications, plus a centralized job manager/scheduler for executing his jobs and those of the people he was working with. I'm also not sure what you mean by "useful software view". There are lots of tools like Ganglia, or even Nagios with PNP, that are good for keeping track of utilization, memory usage, etc. across a large number of machines.

In HPC there is very little need to see a cluster of machines as one coherent machine, except to introduce further overhead in coordinating actual threads across a huge cluster. A simple (yet sophisticated) job scheduler handles this just fine: a lightweight daemon on each compute node spawns tasks when it gets the call from the central scheduler, monitors some performance attributes, and reports them back. This keeps things simple and the overhead low, so the CPUs can be put to work crunching numbers instead of handling mundane OS tasks.
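For what it's worth, a minimal SPMD sketch with MPI might look something like this (a hypothetical illustration, not the GP's actual code): every rank runs the same binary, computes its own partial result, and rank 0 aggregates them.

    /* Minimal SPMD example: same program on every node, each rank does its
     * own piece of work, results reduced to rank 0. Build with mpicc,
     * launch with mpirun across the cluster. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which rank am I */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many ranks total */

        /* Each rank "crunches" its own partial result. */
        double partial = (double)rank * rank;

        /* Aggregate all partial results on rank 0. */
        double total = 0.0;
        MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }

The scheduler's only role is to hand that binary to mpirun (or whatever launcher the site uses) with a list of allocated nodes; after that the ranks talk to each other directly over the interconnect, with no "single system image" layer in between.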