III. No computer is an island, entire in itself
Centralized code is also called "client-server architecture": the central code is called the server, while the programs that use it are called clients. As we saw, a function server is not something any sensible programmer would use directly; human users tend to write a library that encapsulates calls to the server. Yet this is how most operating systems and network-aware programs are implemented, because it is the simplest way to implement them. Many companies boast about providing client-server based programs, but there is nothing to boast about: client-server architecture is the simplest and crudest mechanism ever conceived, and even a newcomer can manage it easily. What they could boast about would be not using client-server architecture, but providing truly distributed yet dependable software.
A server is nothing more than a clumsy implementation of a library; it shares all the disadvantages and limits of a library, with worse extensibility problems and additional overhead. Its only advantage is a uniform calling convention, which can be useful in a system with centralized security, or to pass the stream of arguments through a network so that distant clients and servers can communicate. This last use is particularly important, as it is the simplest trick ever found for accessing an object's multiple services through a single communication line. Translating a software interface from library form to server form is called multiplexing the stream of library/server accesses, while the reverse translation is called demultiplexing it.
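To make the point concrete, here is a minimal sketch (in Python, with hypothetical function names) of the "uniform calling convention" a server provides: several independent services become reachable through one stream of serialized requests, instead of by direct library calls.

```python
import json

# Hypothetical "library": two independent services, normally called directly.
def add(a, b):
    return a + b

def greet(name):
    return "hello, " + name

# "Server" side: the same functions reached through one stream of
# serialized requests -- a uniform calling convention over a single line.
DISPATCH = {"add": add, "greet": greet}

def handle(request_line):
    # Each request names the function and carries its arguments.
    req = json.loads(request_line)
    result = DISPATCH[req["fn"]](*req["args"])
    return json.dumps({"result": result})

# A client encodes its call onto the line instead of calling the library.
print(handle(json.dumps({"fn": "add", "args": [2, 3]})))      # {"result": 5}
print(handle(json.dumps({"fn": "greet", "args": ["tunes"]})))
```

Note the overhead the essay complains about: every call pays for encoding and decoding, and the dispatch table must be maintained by hand, where a plain library call would have cost nothing.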
Multiplexing means splitting a single communication line (or some other resource) into multiple sub-lines (or sub-resources), so that the resource can be shared between multiple uses. Demultiplexing is recreating the single line (or resource) from those multiple ones; but as dataflow is often bidirectional, the reverse step is usually inseparable from the first, and we will speak only of multiplexing to cover both. Thus, multiplexing can be used to share multiple functions over a single stream of calls, or conversely to let a single function server be accessed by multiple clients.
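A minimal sketch of the idea, assuming the simplest possible framing (tagging each message with a channel number): two logical channels interleaved onto one shared stream, then recovered from it.

```python
# Multiplex two logical channels over one shared stream by tagging
# each message with its channel id; demultiplex by reading the tag back.
def multiplex(tagged_messages):
    # Interleave (channel, payload) pairs onto one line of framed text.
    return ["%d:%s" % (chan, payload) for chan, payload in tagged_messages]

def demultiplex(stream):
    # Recover each logical channel from the shared stream.
    channels = {}
    for frame in stream:
        chan, payload = frame.split(":", 1)
        channels.setdefault(int(chan), []).append(payload)
    return channels

line = multiplex([(0, "ls"), (1, "ping"), (0, "cat notes")])
print(demultiplex(line))   # {0: ['ls', 'cat notes'], 1: ['ping']}
```

As the essay observes, the two directions are inseparable in practice: any useful framing convention must be agreed on by both the multiplexing and the demultiplexing end.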
Traditional computing systems often allow multiplexing of some physical resources, splitting each into a first (but potentially very large) level of equivalent logical resources. For example, a disk may be shared through a file system; CPU time can be shared by task switching; a network interface is shared through a packet-transmission protocol. Indeed, everything an operating system does can be considered multiplexing. But those same traditional systems do not provide the same multiplexing capability for arbitrary resources, so the user eventually ends up having to multiplex something himself (see the term user-level program for multiplexing a serial line; or the screen program for sharing a terminal; or window systems; and so on); and since the system offers no support for it, he will not do it in the best way, nor in synergy with other such efforts.
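The converse direction, sharing one resource among several clients, can be sketched just as simply. The following is an illustrative toy, not any real system's API: a single "line" (here a plain list standing in for, say, a serial port) is time-sliced between sessions by round-robin scheduling, in the spirit of task switching or the screen program.

```python
import itertools

def share_line(sessions, line):
    # Take one pending message from each session in turn, so every
    # session gets a fair slice of the single underlying resource.
    queues = [list(msgs) for msgs in sessions]
    for i in itertools.cycle(range(len(queues))):
        if not any(queues):
            break
        if queues[i]:
            line.append((i, queues[i].pop(0)))
    return line

out = share_line([["a1", "a2"], ["b1"]], [])
print(out)   # [(0, 'a1'), (1, 'b1'), (0, 'a2')]
```

Every user-level multiplexer ends up rewriting some variant of this scheduling loop by hand, which is exactly the duplicated effort the essay complains about.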
What is wrong with those traditional systems is precisely that they allow only limited, predefined multiplexing of physical resources into a small, predefined number of logical resources; they thereby create a sharp divide between physical resources (which may be multiplexed) and logical ones (which cannot be multiplexed again by the system). This gap is completely arbitrary (programmed computer abstractions are never purely physical, and never purely logical either), and user-implemented multiplexers must cope with the system's lacks and deficiencies.
So we see that system designers are ill-advised when they provide only such specific multiplexing, which may or may not be useful, whereas other kinds of multiplexing are always needed (a proof of which is that people keep boasting about writing -- with real pain -- "client/server" "applications"). What they really should provide is a generic way to multiplex lines automatically, whenever such a thing is needed.
More generally, a useful operating system should provide a generic way to share resources; for that is what an operating system is all about: sharing disks, screens, keyboards, and various devices between the multiple users and programs that may want to use them across time. But genericity is not only for operating systems and sharing. Genericity is useful in every domain, for genericity is instant reuse: generic code works in all cases, so you can use it in any circumstance where it may be needed, whereas specific code must be rewritten or adapted each time it is to be used anew. Specificity may be expedient; but only genericity is useful in the long run.
Let us recall that genericity is the property of writing things in their most generic form, and having the system specialize them when needed, instead of hard-coding specific values (which amounts to a kind of manual evaluation).
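A small sketch of this definition, using partial application as a stand-in for the system-driven specialization the essay has in mind (the function names are illustrative, not from the source):

```python
from functools import partial

def power(base, exponent):
    # The generic form: written once, works for every exponent.
    return base ** exponent

# Specializations derived from the generic code, instead of each
# specific variant being hard-coded and rewritten by hand.
square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(5), cube(2))   # 25 8
```

Here specialization costs one line per variant; hard-coding each specific value would mean duplicating (and separately maintaining) the whole definition each time.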
Now, how can genericity be achieved?