Utility is a moral concept; it depends directly on the goal you have defined as the general interest. Actually, Utility is defined by the moral concept of Good just as much as Good is defined by Utility: to maximize Good is to maximize Utility. We'll admit that all the objects previously described as valuable (time, effort, etc.) are indeed valuable as far as the general interest is concerned.
Let us take the absence of any computer as the baseline reference for computer utility. Now, can computers be useful at all? Yes indeed: they can quickly and reliably add or multiply huge quantities of numbers (this is called number-crunching) that no human could handle in a reasonable time, and they do so without mistakes. But number-crunching is not the only utility of computers. Computers can also memorize large amounts of data, and recall them far more exactly and unerringly than humans. All these utilities, however, are a matter of quantity: managing large amounts of information quickly. Can computers be more useful than that?
Of course, computer utility has limits, and those limits are those of human understanding, since computers are creatures of humans. The dream of Artificial Intelligence (AI), older than the computer itself, is to push those limits beyond the utility of humans themselves, that is, for human beings to create something better than (or as good as) themselves. But human-like computer utility won't evolve from nothing to something beyond human understanding at the first try; it will have to pass through every intermediate state before it may ever attain AI (if it ever does).
So as for quality versus quantity: yes, computers can be useful. Through their mechanical virtue of predictability, they can be used to verify the consistency of mathematical theorems, to help people schedule their work, and to help them make fewer mistakes. Note, though, that this same predictability makes AI difficult to attain, for the limits of human intelligence lie far beyond the limits of what it can predict. But this is yet another problem, outside the scope of this paper...
Computers are made of hardware, that is, physical, electronic
circuitry of semiconductors or some other substratum. Living organic creatures
themselves, including humans, are complex machines made mostly of water,
carbon and nitrogen.
But while the hardware sets limits on a computer's (or living creature's)
utility, it does not determine the actual utility of a computer: just as
not all creatures are equal, not all computers are equal either.
Note that this does not mean that the less useful should
be prematurely destroyed (actually, that kind of eugenics would be quite
harmful); only harmful ones may have to be. What makes computers different is
the physical, external environment in which they are used (the same holds
for humans), but also software, that is, the way they are organized
internally (again, humans have the same thing, their mental being),
which of course develops and evolves according to that external
environment.
The basic or common state of software, shared by a large number of
different computers, is called their Operating System.
This concept does not mean that some part of the software is exactly the same
across different computers; indeed, what should "the same" mean?
Some will want the software of two computers to be bit-for-bit identical,
which is a very limited definition, and one that is often meaningless:
for example, if two computers have in common a software
region full of useless zeros, are they using the same system?
However, even if two computers use completely different technologies,
you can sometimes say that they share the same behavior, that they
manipulate the same kinds of objects in similar ways.
Let's use our human analogy again: can you find cells that are "the same" in
different humans? No, and even if you found exactly the same fat in two
humans, it wouldn't be meaningful. But you can nonetheless say that those two
humans share the same ideas, the same languages,
idioms and colloquialisms, the same
Cultural Background.
It's the same with computers: even though computers may be of completely
different models (and thus cannot be copies of one another),
they may still share a common background, be able to communicate
and understand each other, and react similarly to similar environments.
That is what we call the Operating System.
We see that the actual boundaries of an OS are very blurry, for "what is shared" depends on the class of computers considered, and on the criteria used. Some people try to define the OS as a piece of software providing some list of services; but they then find their definition forever evolving, and can never agree when it comes to a definition that matches truly different systems. That is because they have a very narrow, self-centered view of what an operating system is. The ancient Chinese and the ancient Greeks did not learn the same customs, language or habits, and did not read the same books; but each had a culture, and no one could ever say that one was better than the other, or that one wasn't really a culture. Even if their culture was less sophisticated than ours, our ancestors, or the native tribes of continents more recently reached by Europeans, all did or do have a culture. Likewise for computers: being an OS cannot be defined in terms of what exactly an OS should provide (eating with a fork, or accepting a mouse as a pointing input device), but in terms of whether it is a software background common to different computers, and accessible to all of them. When you consider all possible humans or computers, the system is what they have in common: the hardware, their common physical characteristics; when you consider a given computer or individual, the system is just the whole computer or individual.
That is why the power and long-term utility of an OS must not be measured by what the OS currently allows one to do, but by how easily it can be extended so that more and more people share more and more complex software. That is, the power of an OS is expressed not in terms of the services it statically provides, but in terms of the services it can dynamically manage; intelligence is expressed not in terms of knowledge, but in terms of the ability to evolve toward more knowledge. A culture with deep knowledge that nevertheless prevented further innovation, like that of the ancient Chinese civilization, would indeed be quite harmful. An OS providing lots of services, but not allowing its users to evolve, would likewise be harmful.
Again, we find the obvious analogy with human culture, for which the same holds; the analogy is not fallacious at all, since the primary goal of an operating system is to allow humans to communicate with computers more easily so as to achieve better software. An operating system is thus a part of human culture, a part which involves the computers that run under that OS.
A blatant example of the lack of evolution in system software quality is the fact that the most popular system software in the world (MS-DOS) is a twenty-year-old thing that does not allow the user to do either simple tasks or complicated ones, thus being a non-operating system, and forces programmers to rewrite low-level tasks every time they develop any non-trivial program, while not even providing trivial programs. This industry standard has always been designed as the least possible subsystem of the Unix system, which is itself a system made of features assembled in undue ways on top of only two basic abstractions, the raw sequence of bytes ("files") and the ASCII character string. Unix has had a huge number of new special files and bizarre mechanisms added to give access to new kinds of hardware or software abstractions, but its principles are still those unusable, unfriendly files and strings, which were originally conceived as minimal abstractions to fit the tiny memory of the computers of the time, not as interesting concepts for today's computers; now known as POSIX, it is the new industry-standard OS to come.
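To make the criticism concrete, here is a minimal Python sketch of what the "raw sequence of bytes" abstraction means in practice (the fixed record layout is a hypothetical example of mine, not anything prescribed by Unix or by the text above): since the system hands out only untyped bytes, every program must re-impose and re-parse its own structure by hand.

    # A hypothetical fixed record layout: a little-endian int plus a 16-byte name.
    # Nothing in the byte-stream abstraction knows or enforces this structure.
    import struct

    RECORD = struct.Struct("<i16s")

    def read_records(path):
        """Recover typed records from an untyped byte stream, by hand."""
        with open(path, "rb") as f:     # the OS offers bytes, nothing more
            data = f.read()
        for offset in range(0, len(data) - RECORD.size + 1, RECORD.size):
            number, raw_name = RECORD.unpack_from(data, offset)
            yield number, raw_name.rstrip(b"\0").decode("ascii", "replace")

Every program that wants typed objects on top of "files" must repeat some variant of this low-level decoding; the system itself offers no help in sharing or preserving the structure.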
It may be said that computing has made quantitative leaps, but no comparable qualitative leap; computing grows in extension, but does not evolve toward intelligence; it sometimes rather becomes stupid on a larger scale. This is the problem with operating systems not having a good kernel: however large and complete their standard library, their utility will be essentially restricted to the direct use of that library.
This is the cult of external appearance, of the container,
instead of the internal being, the contents;
the problem does not concern the computer industry alone,
so the reasons for such a tendency are beyond the scope of this paper;
but one must be conscious of the problem anyway.
The container may seem important when one hasn't used a computer much;
but if you really use computers regularly,
you'll see that an improvement of the container is useless
unless a corresponding improvement of the contents has been made.
This phenomenon can also be explained by the fact that programmers, long accustomed to software habits from the heroic times when computer memories were too small to contain more than the specific software you needed (when they could contain even that), do not seem to know how to fill today's computer memories except with pictures of gorgeous women and digitized music (the so-called multimedia revolution). Computer hardware capabilities have evolved much faster than human software capabilities; thus humans find it simpler to fill computers with raw (or almost raw) data than with intelligence, especially since companies put a "proprietary" label on any non-trivial software they produce.
We already see how such a conception fails: it would perhaps be perfect for a finite, unextensible, static system; but we feel it may not be able to express a dynamically evolving system. However, a solid argument for why it should not be able to do so is not so obvious at first sight. The key is that, like any sufficiently complex system, like human beings, computers have some self-knowledge. The fact becomes obvious when you see a computer being used as a development system for programs that will run on that same computer. And indeed the exceptions to the "kernel" concept are the kinds of dynamic languages and systems that we call "reflective", that is, those that allow dynamic manipulation of the language constructs themselves: FORTH and LISP (or Scheme) development systems, which can be at the same time editors, interpreters, debuggers and compilers; even if those functionalities are available separately, there is no "kernel" design, but rather an integrated OS.
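As an illustration (in Python here, standing in for the FORTH and LISP systems named above, and only as a sketch of the idea), a reflective system can build, compile and install new pieces of itself while running, so that editing, interpreting and compiling need no fixed "kernel" boundary between them:

    # The running program treats new program text as ordinary data...
    source = "def greet(name):\n    return 'hello, ' + name\n"

    code = compile(source, "<generated>", "exec")   # ...compiles it...
    namespace = {}
    exec(code, namespace)                           # ...and installs it in itself.
    greet = namespace["greet"]

    print(greet("world"))               # hello, world
    print(greet.__code__.co_varnames)   # it can also inspect its own constructs

The same running image thus plays the roles of editor, compiler and interpreter at once, which is the point made above about integrated rather than kernel-based designs.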
And then we see that if the system is powerful enough (that is, reflective), any knowledge in the system can be applied to the system itself; any knowledge is also self-knowledge, so it can express system structure. As you discover more knowledge, you also discover more system structure, perhaps better structure than before, and certainly structure that is more efficiently represented directly than through stubborn translation into those static kernel constructs. So you can never statically settle the structure of the system once and for all without hampering the system's ability to evolve toward a better state; any structure that cannot adapt, even the ones you trust the most, may eventually (though slowly) become a burden as new meta-knowledge becomes available. And even if it actually won't, you can never be sure of it; you can expect only refutation, never confirmation, of any such assumption.
The conclusion is that you cannot truly separate a "kernel" from a "standard library" or from "device drivers"; in a system that works properly, all have to be integrated into the single concept of the system. Any clear-cut distinction inside the system is purely arbitrary (though it is sometimes expedient to make arbitrary decisions).
The truth is that any computer user, whether a programming guru or a novice,
is somehow trying to communicate with the machine. The easier
the communication, the more quickly and the better the work gets done.
Of course, there are different kinds of use; actually, there are
infinitely many. You can often say that one kind of computer use is
much more advanced and technical than another; but you can never find
a clear boundary, and that's the important point (in mathematics, we'd say
that the space of kinds of computing is connected).
Also, of course, any given computer object has been created by some user(s),
who programmed it on top of a given system, and is being used by other (or the
same) user(s), who program using it, on top of the thus enriched system. That is,
there are providers and consumers; anyone can provide some objects and consume
others; but this is in no way a user/programmer opposition.
Some say that ordinary users are too stupid to program; that is mere contempt:
most of them simply don't have the time or inclination to learn all the
subtleties of advanced programming; yet they often emulate macros by hand,
and if shown once how to do it, are very eager to use or even write their own
macros or aliases.
Others fear that encouraging people to use a powerful programming
language opens the door to piracy and system crashes,
and argue that programming languages are too complicated anyway.
Well, if the language library has such security holes and cryptic syntax,
then it is clearly misdesigned;
and if the language doesn't allow the design of a secure, understandable
library, then the language itself is misdesigned (e.g. "C").
Whatever was misdesigned should be redesigned, amended or replaced
(as should "C" be).
If you don't want people to cross an invisible line, just don't draw roads
that cross the line, and put up understandable warning signs, rather than hiring
an army of guards to shoot at people who trespass or stray from the road.
If you're really paranoid, then just don't let people near the line:
don't have them use your computer. But if they have to use your computer,
then make the line visible, and abandon these ill-traced roads and this fascist
behavior.
So, to those who despise higher-order programming and user customizability, I shall repeat that there is NO frontier between using and programming. Programming is using the computer, while using a computer is programming it. The only thing you get by having different languages and interfaces for "programmers" and mere "users" is a pile of inefficient languages, computer users stupefied by a host of useless, ill-conceived, similar-but-different tools, and a considerable waste of human and computer resources.
Well, we have seen that an OS's utility is not defined in terms
of static behavior or standard library functionality; that it
should be designed above all for dynamic extensibility; that it
should provide a unified
interface to all users, without enforcing arbitrary layers
(or anything arbitrary at all). That is, an OS should be primarily
open and
rational.
But then, what kind of characteristics are these? They are features
of a computing language. We defined an OS by its observational semantics,
and thus logically ended up with a good OS being defined by a good way to
communicate with it and have it react.
People often boast about their OS being "language independent", but what
does that actually mean?
Any powerful enough (mathematicians say universal, or Turing-equivalent)
computing system is able to emulate any language, so this is no valid argument.
Most of the time, the brag only means that they followed no structured plan
for their OS semantics, which leads to some horribly inconsistent
interface, or that they voluntarily limited their software to interfacing with the
least powerful language.
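As a sketch of why the claim is empty (a toy example of mine, not any real OS interface): any universal system can emulate any other language, for instance a tiny postfix calculator language interpreted in Python:

    def run_postfix(program):
        """Evaluate a whitespace-separated postfix expression, e.g. "2 3 + 4 *"."""
        ops = {"+": lambda a, b: a + b,
               "-": lambda a, b: a - b,
               "*": lambda a, b: a * b}
        stack = []
        for token in program.split():
            if token in ops:
                b, a = stack.pop(), stack.pop()
                stack.append(ops[token](a, b))
            else:
                stack.append(int(token))
        return stack

    print(run_postfix("2 3 + 4 *"))   # [20]

Being able to host other languages in this way is a property of any universal system, so it says nothing about whether the system's own semantics are well designed.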
So before we can say what an OS should be like, we must study computer languages: what they are meant for, how to compare them, and how they should or should not be.
Faré -- rideau@clipper.ens.fr