Utility and Expediency are of course relative, not absolute, concepts: how much you save depends on a reference, so you always compare the utility of two actions. The utility of an isolated, unmodifiable action is therefore meaningless; in particular, from the point of view of the present, the utility of past actions is a meaningless concept (though the study of the utility such actions may have had when they were taken can be quite meaningful). More generally, Utility is meaningful only for projects, never for objects; but of course, projects can be considered in turn as objects of a higher, more abstract, "meta-" system.
Utility is a moral concept, that is, a concept that allows pertinent discourse on its subject. More precisely, it is an ethical concept, that is, a concept colored with the ideas of Good and Duty. It directly depends on the goal you defined for the general interest. Indeed, Utility is defined by the moral concept of Good just as much as Good is defined by Utility: to maximize Good is to maximize Utility.
Like Good, Utility need not be a totally ordered concept, in which you could always compare two actions and say that one is "globally" better than the other; Utility can be made of many distinct, sometimes conflicting criteria. Of course, such concepts of Utility could be refined in many ways to obtain compatible concepts that would solve any particular conflict, gaining separation power but losing precision; however, a general theory of utility is beyond the scope of this article.
We will hereafter assume that all the objects previously described as valuable (time, effort, etc.) are indeed valuable as far as the general interest is concerned.
Let the absence of any computer be the basic reference for computer utility. Now, can computers be useful at all? Well yes, indeed, as they can quickly and safely add or multiply tons of numbers (that's called number-crunching) that no human could handle in decent time or without committing mistakes. But number-crunching is not the only utility of computers. Computers can also memorize large amounts of data, and remember and retrieve it much more exactly and unerringly than humans. But all these utilities are related to quantity: quickly managing large amounts of information. Can computers be more useful than that?
Of course, computer utility has its limits; and those limits are those of human understanding, as computers are creatures of humans. The dream of Artificial Intelligence (AI), older than the computer itself, is to push those limits beyond the utility of humans themselves, that is, human beings creating something better than (or as good as) themselves. But human-like computer utility won't evolve from nil to beyond human understanding at the first try; it will have to pass through every intermediate state before it may possibly attain AI (if it ever does).
So as for quality versus "mere" quantity (can one call "mere" something that is several orders of magnitude larger than any rival?), can computers be useful, too? Again, yes they can: by their mechanical virtues of predictability, computers can be used to verify the consistency of mathematical theorems, to help people schedule their work, to help them reduce the number of mistakes they make, and to control the constant quality of human or mechanical work. But do note that the same virtues of predictability make AI difficult to attain, for the limits of human intelligence are far beyond the limits of what it can predict. However, this is yet another problem, beyond the scope of this paper...
Computers are made of hardware, that is, physical, electronic circuitry of semiconductors or some other substratum. Living organic creatures themselves, including humans, are complex machines made mostly of water, carbon and nitrogen. But if the hardware defines theoretical limits for a computer's (or living creature's) utility, it does not define the actual utility of a computer: just as all creatures are not equal, all computers are not equal either, and both humans and computers are worth much more than the equal mass of the corresponding raw substratum (e.g. mostly sand powder for computers, some dirty mud for humans). What makes computers different is the physical, external environment in which they are used (the same stands for humans), but also software, that is, the way they are organized internally (again, humans have the same, their mental being), which of course develops and evolves according to that external environment.
The basic or common state of software, shared by a large number of different computers, is called their Operating System. This concept does not mean that some part of the software is "exactly the same" across different computers; indeed, what would "exactly the same" even mean for physically different beings? Sameness is an abstract, moral concept, which must be relative to some pertinent frame of comparison, such as Utility. For instance, two systems that use completely different hardware technologies and share no actual software code, but behave similarly to humans and manipulate the same computer abstractions, are much nearer to each other than systems of the same hardware model that share large portions of useless code and data but run conceptually different programs.
Let's use our human analogy again:
can you find cells that are "exactly the same" on distinct humans?
No, and even if you could find exactly the same cell on two humans,
it wouldn't be meaningful.
But you can nonetheless say that those two
humans share the same ideas,
the same languages and idioms or colloquialisms,
the same manners, the same Cultural Background.
It's the same with computers:
even though computers may be of completely different models
(and thus cannot be copies of one another),
they may still share a common background,
that enables them to communicate and understand each other,
and react similarly to similar environments.
That is what we call the Operating System.
We see that the actual limits of what an OS is are very blurry,
for "what is shared" depends on the collection of computers considered,
and on the criteria used.
This is why some people try to define the OS
as a piece of software providing some precise list of services;
but they then find their definition ever evolving,
and never agree when it comes to having a definition
that matches truly different systems.
That's because they have a very narrow-minded, self-centered
view of what an operating system is,
based on current technology and other contingencies,
not on any understanding of the world.
The ancient Chinese and the ancient Greeks
did not learn the same customs, languages, or habits,
did not read or write the same books;
but each had a great culture,
and no one could ever say that one was better than the other,
or that one wasn't really a culture.
Even if their cultures were less sophisticated than ours,
our ancestors,
and the native tribes of continents only recently reached by Europeans alike,
all did or do have a culture.
As for computers,
you cannot define being an OS
in terms of exactly what an OS should provide
(eating with a fork or accepting a mouse as a pointing input device),
but only in terms of whether it is a software background common to different computers,
and accessible to all of them.
When you consider all possible humans or computers,
the system is what they have in common:
the hardware, their common physical characteristics;
when you consider a single given computer or individual,
the system is just the whole computer or individual.
That's why the power and long-term utility of an OS mustn't be measured according to what the OS currently allows one to do, but according to how easily it can be extended so that more and more people share more and more complex software. That is, the power of an OS is expressed not in terms of the services it statically provides, but in terms of the services it can dynamically manage; intelligence is expressed not in terms of knowledge, but in terms of the ability to evolve toward more knowledge. A culture with deep knowledge that nonetheless prevented further innovation, like the ancient Chinese civilization, would indeed be quite harmful. An OS providing lots of services, but not allowing its users to evolve, would likewise be harmful.
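To make this concrete, here is a minimal sketch in Scheme (all names are hypothetical and chosen for illustration only, not taken from any actual system) of the difference between statically provided services and dynamically managed ones: a tiny registry whose services can be installed, replaced and combined while the system runs.

    ;; An OS seen as an extensible registry of services, rather than a fixed menu.
    (define *services* '())                   ; association list of (name . procedure)

    (define (provide-service! name proc)      ; dynamically install or replace a service
      (set! *services* (cons (cons name proc) *services*)))

    (define (call-service name . args)        ; look a service up by name and invoke it
      (let ((entry (assq name *services*)))
        (if entry
            (apply (cdr entry) args)
            (error "unknown service" name))))

    ;; The system starts with a few services...
    (provide-service! 'add (lambda (x y) (+ x y)))
    ;; ...and any user may later extend it, even building on existing services:
    (provide-service! 'double (lambda (x) (call-service 'add x x)))

    (call-service 'double 21)                 ; => 42

What matters in such a sketch is not the services initially present, but the fact that provide-service! remains available to everyone, at any time.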
Again, we find the obvious analogy with human culture, for which the same holds; the analogy is not fallacious at all, as the primary goal of an operating system is to allow humans to communicate with computers more easily, so as to achieve better software. So an operating system is a part of human culture, though a part that involves computers.
It is also remarkable that
as new standard libraries arise,
they do not lead to reduced code size
for programs of the same functionality,
but to increased code size,
so that programs take into account all the newly added capabilities.
A blatant example
of the lack of evolution in system software quality
is the fact that
the most popular system software in the world (MS-DOS)
is a fifteen-year-old thing that does not allow the user
to do either simple tasks or complicated ones,
thus being no operating system at all,
and forces programmers to rewrite low-level tasks
every time they develop any non-trivial program,
while not even providing trivial programs.
This industry standard has always been designed
as the least possible subsystem of the Unix system,
which is itself a least subsystem of Multics,
made of features assembled in undue ways
on top of only two basic abstractions:
the raw sequence of bytes ("files"),
and the ASCII character string.
As these abstractions proved insufficient to express adequately the semantics of new hardware and software that appeared, Unix has had a huge number of ad-hoc "system calls" added to extend the operating system in special ways. Hence, what was an OS meant to fit the tiny memory of then-available computers has grown into a tentacular monster with ever-growing pseudopods, which squanders the resources of the most powerful workstations. And this, renamed POSIX, is the new industry-standard OS to come, which its promoters crown as the traditional, if not natural, way to organize computations.
Following the same tendency, widespread OSes are
founded upon a large number of human-interface services,
for video and sound.
This is known as the "multi-media" revolution,
which basically just means that your computer produces
high-quality graphics and sound.
All that is fine:
it means that your system software
grants you access to your actual hardware,
which is the least it can do!
But software design, a.k.a. programming,
is not made any simpler by all this;
it is even made quite a bit harder:
while a lot of new primitives are made available,
no new means of combining them are provided
that could ease their manipulation;
worse, even the old reliable software is made obsolete
by the new interface conventions.
Thus you have computers with beautiful interfaces
that waste lots of resources,
but that cannot do anything new;
to actually do interesting things,
you must constantly rewrite everything almost from scratch,
which leads to very expensive low-quality slowly-evolving software.
Of course, we can easily diagnose that the "multimedia revolution" stems from the cult of external appearance, of the container, to the detriment of the internal being, the contents; such a cult is inevitable whenever non-technical people have to choose among technical products without any objective guide, so that the most seductive product wins. This general phenomenon, which goes beyond the scope of this paper, does harm to the computing world, yet it is a sign that computing spreads and benefits a large public; by its very nature, it may waste a lot of resources, but it won't compromise the general utility of operating systems. Hence, if there is some flaw to find in current OS design, it must be looked for deeper.
Computing is a recent art, and somehow it has already left its Antiquity for its Ancien Régime: the informational status of the computer world is quite reminiscent of the political status of pre-revolutionary France.
The truth is that any computer user, whether a programming guru or a novice,
is somehow trying to communicate with the machine. The easier
the communication, the quicker, the better, and the larger the work that gets done.
Of course, there are different kinds of use; actually, there are
infinitely many. You can often say that one kind of computer use is
much more advanced and technical than another; but you can never find
a clear limit, and that's the important point (in mathematics, we'd say
that the space of kinds of computing is connected).
Of course also,
any given computer object has been created by some user(s),
who programmed it above a given system,
and is being used by other (or the same) user(s),
who program using it, above the thus enriched system.
That is, there are computer object providers and consumers.
But anyone can provide some objects and consume other objects;
providing objects without using some is unimaginable,
while using objects without providing any is pure useless waste.
The global opposition between users and programmers on which
the computer industry is rooted is thus inadequate;
instead, there is a local complementarity between providers and consumers
of every kind of objects.
Some say that common users are too stupid to program;
that is merely contempt for them.
Most of them don't have the time or inclination
to learn all the subtleties of advanced programming;
most of the time, such subtleties shouldn't really be needed,
and learning them is thus a waste of time;
but they often do manually emulate macros,
and if shown once how to do it,
are very eager to use or even write their own macros or aliases.
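For instance, here is a deliberately tiny Scheme sketch (the names are made up for illustration) of what such a user-written alias amounts to: a repeated manual computation, named once and reused ever after.

    ;; A user tired of converting temperatures by hand names the operation once...
    (define (fahrenheit->celsius f)
      (* (- f 32) 5/9))

    ;; ...and from then on simply reuses it, which is already programming:
    (fahrenheit->celsius 98.6)        ; => 37.0 (approximately)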
Others fear that encouraging people to use a powerful programming
language opens the door to piracy and system crashes,
and argue that programming languages are too complicated anyway.
Well, if the language library has such security holes and cryptic syntax,
then it is clearly misdesigned;
and if the language doesn't allow the design of a secure, understandable
library, then the language itself is misdesigned (e.g. "C").
Whatever was misdesigned, it should be redesigned, amended or replaced
(as should be "C").
If you don't want people to cross an invisible line, then do not draw roads
that cross the line; write understandable warning signs, rather than hiring an army of
guards to shoot at people who trespass or stray off the road.
If you're really paranoid, then just don't let people near the line:
don't have them use your computer. But if they have to use your computer,
then make the line appear, and abandon these ill-traced roads and fascist
behavior.
So as for those who despise higher-order programming and user customizability, I shall repeat that there is NO frontier between using and programming. Programming is using the computer, while using a computer is programming it. This does not mean there is no difference between various users-programmers; but creating an arbitrary division in software between "languages" for "programmers" and "interfaces" for mere "users" is asking reality to comply with one's sentences instead of having one's sentences reflect reality: one ends up with plenty of unadapted, inefficient, powerless tools, stupefies all computer users with a lot of useless, ill-conceived, similar-but-different languages, and wastes a considerable amount of human and computer resources, writing the same elementary software again and again.
Well, firstly, we may like to find some underlying structure of mind in terms of which everything else would be expressed, and which we would call the "kernel". Most existing OSes, at least all those pieces of software that claim to be an OS, are conceived this way. Then, over this "kernel" that statically provides the most basic services, "standard libraries" and "standard programs" are provided that should be able to do all that is needed in the system, and that would contain all the system knowledge, while standard "device drivers" would provide complementary access to the external world.
We already see why such a conception may fail: it could perhaps be perfect for a finite, unextensible, static system, but we feel it may not be able to express a dynamically evolving system. However, a solid argument why it shouldn't be able to do so is not so obvious at first sight. The key is that, like any sufficiently complex system, human beings included, computers have some self-knowledge. The fact becomes obvious when you see a computer being used as a development system for programs that will run on that same computer. And indeed the exceptions to that "kernel" concept are those kinds of dynamic languages and systems that we call "reflective", that is, that allow dynamic manipulation of the language constructs themselves: FORTH and LISP (or Scheme) development systems, which can be at the same time editors, interpreters, debuggers and compilers, even if those functionalities are available separately, are such reflective systems. So there is no "kernel" design, but rather an integrated OS.
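As a minimal sketch of what "reflective" means here (assuming an R5RS-style Scheme that provides eval and interaction-environment; the example is illustrative, not drawn from any particular system): programs are ordinary data, so the running system can build new code and install it into itself, with no separate compile-link-reboot cycle.

    ;; A program represented as plain data (a list) that the system can
    ;; inspect, transform, or generate:
    (define new-definition
      '(define (greet name)
         (string-append "Hello, " name)))

    ;; The running system installs that definition into its own environment...
    (eval new-definition (interaction-environment))

    ;; ...and can immediately use it:
    (eval '(greet "world") (interaction-environment))   ; => "Hello, world"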
And then, we see that if the system is powerful enough (that is, reflective), any knowledge in the system can be applied to the system itself; any knowledge is also self-knowledge, so it can express system structure. As you discover more knowledge, you also discover more system structure, perhaps better structure than before, and certainly structure that is more efficiently represented directly than through stubborn translation into those static kernel constructs. So you can never statically settle once and for all the structure of the system without hampering the system's ability to evolve toward a better state; any structure that cannot adapt, even the structures you trust the most, may eventually (though slowly) become a burden as new meta-knowledge becomes available. Even if it actually doesn't, you can never be sure of it, and can expect only refutation, never confirmation, of any such assumption.
The conclusion is that you cannot truly separate a "kernel" from a "standard library" or from "device drivers"; in a system that works properly, all have to be integrated into a single concept, the system itself as a whole. Any clear-cut distinction inside the system is purely arbitrary, and harmful if not made for strong reasons of necessity.
Well, we have seen that an OS's utility is not defined in terms
of static behavior, or standard library functionality; that it
should be optimally designed for dynamic extensibility; and that it
should provide a unified
interface to all users, without enforcing arbitrary layers
(or anything arbitrary at all). That is, an OS should be primarily
open and
rational.
But then, what kind of characteristics are these? They are features
of a computing language. We defined an OS by its observational semantics,
and thus logically ended up with a good OS being defined by a good way to
communicate with it and have it react.
People often boast about their OS being "language independent", but what
does that actually mean?
Any powerful-enough (mathematicians say universal, or Turing-equivalent)
computing system is able to emulate any language, so this is no valid argument
(as the small sketch below illustrates).
Most of the time, this brag only means that they followed no structured plan
for their OS semantics, which will lead to some horribly inconsistent
interface, or that they voluntarily limited their software to interfacing with the
least powerful language.
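To see why the universality argument is vacuous, consider this deliberately tiny Scheme sketch (names made up for illustration): a few lines suffice to emulate another, toy language inside the host system, which says nothing about how convenient, consistent or powerful the result is.

    ;; A toy prefix-arithmetic language, emulated by a trivial interpreter:
    (define (toy-eval expr)
      (cond ((number? expr) expr)
            ((pair? expr)
             (let ((args (map toy-eval (cdr expr))))
               (case (car expr)
                 ((plus)  (apply + args))
                 ((times) (apply * args))
                 (else (error "unknown operator" (car expr))))))
            (else (error "bad expression" expr))))

    (toy-eval '(plus 1 (times 2 3)))   ; => 7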
So before we can say how an OS should be, we must study computer languages: what they are meant for, how to compare them, and how they should or should not be.
Faré -- rideau@clipper.ens.fr