Utility and Expediency are relative, not absolute concepts: how much you save depends on a reference, so you always compare the utility of two actions, even though one of the actions might be implicit. The utility of an isolated, unmodifiable action is therefore meaningless. In particular, from the point of view of present action, the utility of past actions is a meaningless concept; however, the study of the utility that such actions may have had when they were taken can be quite meaningful. More generally, Utility is meaningful only for projects, never for objects. Projects here must not be understood in the restricted meaning of conscious projects, but in the more general one of a consistent, lasting, dynamic behavior.
Note that projects can be considered in turn as objects of a more abstract "meta-" system; but the utility of the objectized project becomes itself an object of study for (an extension of) the metasystem, and should not be confused with the utility of the studying metasystem. Sciences of man and nature (history, biology, etc.) lead to the careful study of terrifying events and dangerous phenomena, but the utility of such study is proportional to some kind of amplitude of the studied projects rather than to their utility from the point of view of their various actors.
Utility is a moral concept, that is, a concept that allows pertinent discourse on its subject. More precisely, it is an ethical concept, that is, a concept colored with the ideas of Good and Duty. It directly depends on the goal you define as the general interest. Actually, Utility is defined by the moral concept of Good just as much as Good is defined by Utility: to maximize Good is to maximize Utility.
Like Good, Utility need not be a totally ordered concept, where you could always compare two actions and say that one is "globally" better than the other. Utility can be made of many distinct, sometimes conflicting criteria. Partial concepts of Utility can be refined in many ways to obtain compatible concepts that would solve more conflicts, gaining separation power, but losing precision.
However, a general theory of Utility is beyond the scope of this article (for those interested, a first sketch can be found in J.S. Mill's Utilitarianism). Therefore, we'll herein admit that all the objects previously described as valuable (time, effort, etc.) are indeed valuable as far as the general interest is concerned.
Judgements of Utility deeply depend on the knowledge of the project being judged, of its starting point, of its approach. Now, humans are no gods who have universal knowledge to base their opinions upon; they are no angels who, by supernatural ways, receive infused moral knowledge. Surely, many people believed such things, and some still do. But everyone's personal experience, and mankind's collective experience, History, show how highly improbable they are. Without any further discussion, we will admit an even stronger result: that, by the finiteness of the human brain's structure, any human being, at any moment, can only handle a finite amount of information.
This concept of information should be clarified. The judicial term from the Middle Ages slowly took on the informal meaning of the abstract idea of elements of knowledge; it was only with seventeenth-century mathematicians that a formal meaning could timidly appear, and it surprisingly found its greatest confirmation in the nineteenth century with thermodynamics, a branch of physics that particularly studied large dynamical systems. Information could thus be formalized as the measurable concept of anentropy, which irreversibly decreases as we look forward into the future, or backward beyond the origin of our knowledge; that knowledge principally lies in the present, but testimonies and remembrance of the past enrich it. That is, information is a temporal notion, valid only in dynamical systems. And such is Life, a dynamical system. It was not until the late twentieth century, with cybernetics, that the deep relation between information and morals explicitly appeared. Few people remember cybernetics as anything more than a crazy word associated with the appearance of information technology, but we invite the reader to consult the original works of Norbert Wiener on the subject [Wie].
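For readers who want a quantitative handle, one standard formalization can be sketched as follows (the symbols below are ours, offered only as an illustration, not as part of the cited works):

    S = k \ln W                    (Boltzmann: thermodynamic entropy over W equally likely microstates)
    H = - \sum_i p_i \log_2 p_i    (Shannon: informational entropy of a probability distribution p)
    J = H_{max} - H                (anentropy, or negative entropy: the known, as a deficit of uncertainty)

The quantity J, under this sketch, can only decrease as our knowledge decays toward the unknown future, or backward beyond the origins of our records of the past.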
Moral judgements depend on the information we have, so that in order to make a better judgement, we must gather more information. Of course, even though we might have rough ways to quantify information, this doesn't make elements of information of the same quantity interchangeable. What information is interesting depends on the information we already have, and on the information we can expect to have. Now, gathering information itself has a cost, which physicists may associate with free energy; this cost is never zero, and must be taken into account when gathering information.
Because the total information that we can handle is limited, any inadequate actual information that is gathered comes to the prejudice of more adequate potential information. Such inadequate information is then called noise, and it is worse than a lack of information, because it costs as much as adequate information. Thus, in our dynamic world, the quest for knowledge is itself subject to criteria of utility, and the utility of information is its pertinency, its propensity to favorably influence moral judgements. As an example, the exact quantification of information, when it is possible, itself requires so much information that it is seldom worth seeking.
So to gather information in better ways, one must scan the widest possible space of elements of knowledge, which is the essence of Liberty; but the worth of this knowledge must be measured not in terms of its cost or of its interest in case it were true, but in terms of its pertinency and of its solidity, which is the essence of Security. These are dual, indissociable aspects of Information, that get their meaning from each other. Any attempt to privilege one over the other is pointless, and in the past and present, such attempts have led to many a disaster.
Reality and potentiality, finiteness of knowledge, the world as a dynamic system, the relation between information and decision, the duality of liberty and security: all these are parts of a consistent (we hope) approach to the world, which we will admit in the rest of this paper, at least on the subjects considered.
Computers are built and programmed by men and for men, with earthly materials and purposes. Hence the utility of computers, among other characteristics, is to be measured like the utility of any object and project in the human-reachable world, relative to men. And, because what computers deal with is information, their utility lies in what will allow humans to access more information in quicker and safer ways, that is, to communicate better, through computers, with other humans, with nature, with the universe.
Again, Utility criteria should not only compare the visible value of objects, but also their cost, often invisible, in terms of what objects were discarded for them. The cost of computer information includes the time and effort spent manufacturing or earning the money to buy computer hardware and software, but also the time and effort spent in front of the computer explaining to it the work you want done, and the time and effort spent verifying, correcting, and trying again the obtained computer programs, or just worrying about the programmed computer crashing, preparing for possible or eventual crashes, and recovering from them. All this valuable free energy might have been spent much more profitably doing other things, and is part of the actual cost of computerware, even when not taken into account by explicit financial contracts. We will stress this point later.
So to see whether computers in general are a useful tool, we can take the lack of computers as the implicit reference for computer utility, and see how computers benefit mankind or not, comparing the result and the cost. Once properly programmed, computers can do quickly and cheaply large amounts of simple calculations that would otherwise have required a large number of expensive human beings to manage (which is called "number crunching"); and they can repeat their calculations relentlessly without committing any of those mistakes that humans would undoubtedly have made. When connected to "robot" devices, those calculations can replace the automatic parts of work, notably in industry, and relieve humans from the degrading tasks of assembly-line work, but also control machines that work in environments where no human would survive, and do all that much more regularly and reliably than humans would. Only computers made possible the current state of industry and technology, with automated high-precision mass production, and with sciences of the very small, the very large, and the very complex, that no human senses or intelligence could ever have approached otherwise.
Thus, computers save a great amount of human work, and allow things that no amount of human work could ever bring without them; not only are they useful, but they are necessary to the current state of human civilization. We let the reader meditate on the impact of technology on her everyday life, and compare it to what her grandmother's life was. That this technology may sometimes be misused, and that the savings and benefits of this technology may be misdistributed, is a completely independent topic, which holds for just about any technology, and which will not be further commented upon in this article.
Not only can computers perform tasks that would require enormous amounts of human work without them, and do things with more precision than humans, but they do it with a reliability that no human can provide. This may not appear very important, or even obvious, when the tasks undertaken are independent of one another, when erroneous results can be discarded or will somehow be compensated by the mass of good results, or when on the contrary the task is unique and completely controlled by one man. But when the failure of just one operation involves the failure of the whole effort, when a single man cannot guarantee the validity of the task, then computers prove to be inestimable tools by their reliability.
Of course, computers are always subject to failures of some kind, to catastrophes and accidents; but computers are no worse than anything else with respect to such events, and can be arbitrarily enhanced in this respect, because their technology is completely controlled. Moreover, not only is this not a problem particular to computers, but computers are the most suited to fight this problem: unpredictable failures are the doom of the world we live in, where we only ever know a tiny, finite piece of information, so that even if we can sometimes be fairly sure of many things, we can never be completely sure about anything, as we can never totally rule out some unexpected external force significantly perturbing our experiments.
The phenomenon is most pronounced with humans, where every individual is such a complex system in himself that one can never control all the parameters that affect him, nor ever perfectly reproduce them; so there are always difficulties in trusting a man's work, even when his sincerity is not in doubt.
On the contrary, by the very mechanical nature of their implementation, and by the exactitude of their computations, which derives from their very abstract design principle, computing is both a predictable and a reproducible experiment: it can be both mathematically formalized and studied with the tools of physicists and engineers; computer behavior is both producible and reproducible at will; and this is what founds computer reliability: you can always check and counter-check a computer's calculations, and experiment with them under any condition you require, before you trust them.
We see that computers allow reliability to be accumulated like nothing else in the human-reachable world, though this reliability must be earned the hard way, by ever-repeated proofs, checks, and tests. In fact, this reliability is one of the two faces of information, which is what information technology is all about, and of which computers as we know them are the current cutting edge.
The problem with computers, the absolute limit to their utility, is that by the same mechanical virtues that make them so reliable, they can replicate and manipulate actual information, and turn potential information into actual information, but they cannot create information. Any information that lies in a computer derives from the work of the men who built and programmed the computer, and from the external world with which the computer interacts by means of sensors and various devices.
Hence the limits of computers are men. If a man misdesigns or misprograms a computer, if he feeds it with improper data, if he puts it in an environment not suitable for correct computer behavior, then the computer cannot be expected to yield any correct result. One can fully trust everything one sees and tests about a computer; but as computers grow in utility and complexity, there are more and more things one cannot personally see and test about them, so one must rely on one's fellow human beings to have checked them. Again, this is no worse than anything else in the human world; but for computers as well as for anything else, these are hard limits of which we should all be conscious.
Like any product of civilization, computers depend on a very rich technological, economical, and social context before they can even exist, not to speak of being useful. They are not the work of a single man who would be born naked in a desert world and would build every bit of them from scratch. Instead, they are the fruit of the slow accumulation of human work, to which the foundations of civilization contribute at least as much as the discoveries of modern scientists. The context is so necessary that most often it is implicit; but one shouldn't be misled by this implicitness and forget or deny the necessity of the context. Actually, this very context, the result of this accumulation process, is what Civilization is.
But again, the characteristic of information technology is that the information you manage to put into it can be made to decay extremely slowly, as compared to objects of the same energy: we can expect data entered today into a computer, if it is interesting enough to be copied at least once every ten years, to last as long as information technology exists, that is, as long as human civilization persists. Of course, huge monuments like the Egyptian pyramids are works of men that decay more slowly, need less care, and resist harsher environments, so they may last longer; but their informative yield is very weak compared to their enormous cost. If only slowness to decay were meant, and not informational yield, then nuclear waste would be the greatest human achievement!
Now computing has the particularity, among the many human collective projects, and as part of mankind being its own collective project, that it can be contributed to in a cumulative way for years. For this reason, we can have the greatest hope in it, as a work of the human race, as a tool to make masterpieces last longer, or as a masterpiece of its own. Computing has already gained its position among the great inventions of Man, together with electricity, writing, and currency.
Many dream that some day, the cumulated work invested in computing will allow humans to create some computer-based being, which they call "artificial intelligence", or more simply AI, that can rival its human creators in "intelligence", that is, in creativity, in the ability to undertake useful projects independently and voluntarily; they dream (or some of them have the nightmare) that mankind will engender a next step in evolution by non-genetic means. But this dream (the eventuality of whose realization won't be discussed in this article) means that, by Information Theory's version of Cantor's diagonal argument, such an AI would surpass the understanding of any single human: even though its general principles might be understandable, like the elementary laws of physics, the formidable context involved in anything but the simplest applications (and the most useless, as far as "intelligence" is meant) would make it impossible for the most developed human brain to apprehend.
This is possible for a human work, as a few human-understandable rules can generate incomprehensible chaotic behavior -- if anyone fully understands a human work like the stock market, please contact me, so that I may learn how to become a billionaire. But this will not quite be computing as we know it anymore, where we have this human-checkable reliability. However, we'll see that the current design of computing systems greatly limits the potential of computer software, with respect to what a few programmers can understand; hence, until this design is replaced, AI will stay a remote dream.
As an example, a given modern washing machine is often a very useful computer system, where a static program manages the operations; but its utility lies entirely in the washing of clothes, so as a computing system, it is not excessively thrilling; the development of washing machines, however, contains a computing subsystem of its own, which is the development of better washing programs; this computing system might not be the most exciting one, but it is nevertheless an interesting one.
Similarly, a desktop computer alone might be a perfect computer system, but it won't be a very interesting computing system until you consider a man, perhaps one among several, sitting in front of it and using it. And conversely, a man alone without a computer might have lots of ideas, but he won't constitute a very effective computing system until you give him the ability to express them in computer hardware or software. Note that desktop publishing in a business office is considered to involve some kind of software, but as long as the information is not much spread, copied, and manipulated by computers, and the writing remains very redundant yet not automated, it is a computing system of very little interest. Developing tools to automate desktop publishing, on the other hand, is an interesting computing system; even desktop publishing itself, if it allowed one to take any tiny active part in the development of such tools, would be an interesting computing system; unhappily, there is a quasi-monopoly of large corporations on such development, which we'll investigate in following chapters.
A most interesting Computing System, which particularly interests us, is made of all interacting men, computers, and particles in the Universe, where the information being considered is all that is encoded by all known computers; we may call it the Universal Computing System (UCS). Actually, as the only computers we know of in the Universe are on Earth, or not far from it in space, we might as well call that the Global Computing System; however, the two might diverge from each other at some point in the future, so let's keep them separate.
Now, the question that this article tries to answer is "how to maximize the utility of the Universal Computing System?". That is, we take the current utility of computers for granted, and ask not how they can be useful, but how their utility can be improved, maximized. We already saw that this utility depends on the amount of pertinent information such systems yield, as well as on the free energy they cost. But to answer this question more precisely requires at the same time a general study of Computing Systems, of the way in which they are or should be organized, and a particular study of current, past, and expected future computing systems, that is, of where the Universal Computing System is going if we do nothing.
In particular, one could try to formalize the UCS with the set of the physical equations of its constituent particles. While such a thing might be "theoretically possible", the complexity of the obtained equations would be such that any direct treatment of them would be impossible, while the exact knowledge of these equations, and of the parameters that appear in them, is altogether unreachable. Thus, this formalization is not very good according to the above criterion.
A fundamental tool in the study of any system, dynamic or not, called analysis, consists in dividing the system into subsystems, such that those subsystems, and the interactions between them, are as a whole as simple as possible to formalize.
For computing systems, the basic, obvious analysis is to consider individual computers and human computer users as the subsystems. Because information flows quickly and broadly inside each of these subsystems, but comparatively slowly and tightly between them, they can be considered as decision centers, each of which takes actions depending mostly on its internal information, and interacts slowly with the others "on purpose" (that is, according to this internal information).
Humans interact with other humans and computers; computers interact with other computers and humans. But while the stable, exact, objectized information lies in the computers, the dynamic nature of the project can be traced down to the humans; thus, even though only the computerized information might be ultimately valuable to the computing system, the information flow among humans is a non-negligible aspect of the computing system as a project.
Surely, this is not the only possible way to analyze computing systems; but it is a very informative one, and any "better" analysis should take all of this into account.
Anyway, the point is that when analyzing a system, a "canonical representation" in terms of atoms and such need not be the most interesting one. Computers may be made, from the hardware, physical point of view, of electronic semiconductor circuitry and other substrata; from the information point of view, this is just transient technological data; tomorrow's computers may be made of a completely different technology, yet they will still be computers. Similarly, living creatures, among which humans, are, as far as we know, made of organic molecules; but perhaps in other places in the universe, or at other times, things can live that are not made of the same chemical agents.
What makes a creature living is not the matter of which it is made (or else, the soup you obtain after reducing a man to bits would be as living as the man). What makes the living creature is the structure of internal and external interaction that the layout of matter implements. A chair is not a chair because it's made of wood or plastic, but because it has such a shape that a human can sit on it. What makes a thing what you think it is are the abstract patterns that make you recognize it as such, and that constitute the idea of the thing. And as for computing systems, the idea lies in the flow of information, not in the media on which the information flows or is stored.
Given a collection of subsystems of a cybernetical system, we call "Common Background" the information that we can expect each of these subsystems to have. For instance, if we can expect most Europeans to wear socks, then this expectation is part of the Common Background of Europeans. If we can expect all the computers we consider to use binary logic, then this fact is part of the Common Background for those computers. The common background for a collection of human beings is called their collective culture, or even their Civilization, if a large, (mostly) independent collection of human beings is considered. The common background for a collection of computers is called their Operating System.
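As a minimal sketch of this definition (in Python, with invented names and data; nothing here is prescribed by the article), the Common Background of a collection of subsystems can be pictured as the intersection of the elements of knowledge that each subsystem can be expected to have:

    # Toy illustration of "Common Background": the knowledge expected of every
    # subsystem in a collection. All names and data below are hypothetical.
    def common_background(subsystems):
        """Return the elements of knowledge shared by every subsystem."""
        backgrounds = [set(knowledge) for knowledge in subsystems.values()]
        if not backgrounds:
            return set()
        shared = backgrounds[0]
        for b in backgrounds[1:]:
            shared &= b
        return shared

    # Example: what two otherwise different computers can be expected to agree on.
    computers = {
        "desktop":   {"binary logic", "byte", "TCP/IP", "PostScript"},
        "mainframe": {"binary logic", "byte", "TCP/IP", "EBCDIC"},
    }
    print(common_background(computers))
    # -> {'binary logic', 'byte', 'TCP/IP'}: the "Operating System" in the
    #    article's wide sense, independent of either machine's implementation.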
The concept of Common Background appears in any cybernetical system where a large enough number of similar subsystems exist. Common Backgrounds grow in complexity only if those subsystems themselves get more complex too, and the large number of such subsystems means that these should be self-replicating, or more precisely correlated to self quasi-replication. To sum it up, an interesting concept of Common Background is most likely to appear only when some kind of "life" has developed in the cybernetical system, or when we're examining a large number of specifically considered similar systems.
Note that the "similarity" between the subsystems tightly corresponds to the existence of information common to the subsystems, that constitute the Common Background. In no way does this similarity necessarily correspond to any kind of "equality", among the subsystems: how could two subsystems be exactly the same, when they were specifically considered as disjoint subsystems, made of different atoms ? The similarity is an abstract, moral, concept, which must be relative to the frame of comparison that makes the considered information pertinent; a moral frame of Utility can do, but actually, any moral system in the widest acception can, not only those where an order of "Good" was distinguished. Finding a lot of similarities in somehow (or completely) impertinent subjects (such as gory "implementation details") doesn't imply as important a common background as does finding a few similarities on pertinent subjects (sometimes not any common background at all).
If we consider humans in the World, can we find cells that are "exactly the same" on distinct humans? No; and even if we could find exactly the same cell on two humans, it wouldn't be meaningful, just boring. Yet you can nonetheless say that two humans share the same ideas, the same languages and idioms or colloquialisms, the same manners, the same Cultural Background. And this is meaningful, because these are used to communicate, to have the information flow, etc. Genetic strangers who were bred together share more background, as regards society, than genetic clones (twins) who were separated at birth.
It's the same with computers: computers of the same hardware model, having large portions of common implementation code, but running completely different "applications" that have nothing conceptually in common for the human user, might be considered as sharing little common information; on the contrary, even though computers may be of completely different models, of completely different hardware and software technologies, thus sharing no actual hardware or software implementation, they may still share a common background that enables them to communicate and understand each other, and to react similarly to similar environments, so that to the human users, they behave similarly and manipulate the same abstractions. That is what we called the Operating System.
Surely, we could escape argument by affirming that we have been giving a definition independently of the meaning of the expression "Operating System" in other contexts; but the utility of this article lies precisely in publishing information that is pertinent to everyday problems, so such an escape is not a solution.
By "Operating System", people intuitively mean the "basic" software available on a computer, upon which the rest is built.
The first naive definition for an OS would thus be "whatever software is available with the computer when you purchase it". Now, while this surely defines an OS unambiguously, the resulting pertinency is very poor, because, by being purely factual, the definition allows no moral statement about OSes: anything that's delivered is an OS, whatever it is; you could equivalently write some very large and sophisticated software that works, or some tiny bit of crappy software that doesn't, and it would still be an OS, by the mere fact that it is sold with the computer; one could buy a computer, remove whatever was given with it, or add completely different packages to it, then resell it, and whatever he resells it with would be an OS. Surely, this definition is poor.
Then, one could decide that, because this question of knowing what an OS is is so difficult, it should be left to the high priests of OSdom, and that whatever is labelled "OS" by their acknowledged authority should be accepted as such, while what isn't should be regarded with the utmost distrust. While this only pushes the problem back, it is still basically the same attitude of accepting fact for reason, with the small enhancement that the rule of force applies to settle the fact, instead of raw facts being blindly accepted. This is abdicating reason in favor of religion, while the high priests of computing are no better endowed than the common computer user to give a good definition of an OS. So as we're studying computer science, not computer religion, this is a poor definition, too.
Now, those who escaped the above traps, or the high priests of the second trap, will try to define an OS in terms of the services it does or should provide, that is, of its actual or virtual informational contents. Unhappily, those contents are subject to great changes as one moves in space or time; different people and groups of people have different needs and histories, and with them both the static and dynamic state of the computer services they use greatly change and evolve. This conception leads to endless fights about what should or should not be included in the OS; when human civilization rather than just computer background was concerned, many bloody wars were waged to defend one idea or another. Even without fights, we see that completely different sets of services equally qualify as OSes, much as the ancient Greek and ancient Chinese civilizations, while completely different, were both very sophisticated cultures of their time, not to speak of other more or less primitive or sophisticated civilizations. Such a definition of an OS cannot be universal, and only the use of force can make it prevail, so it becomes a new religion. Again, this is a poor definition for an OS.
The final step, as presented in the preceding chapter, is to define an OS as the container, instead of defining it as the contents, of the commonly available computer services. We saw that the OS is to Computing Systems what Civilization is to Mankind; actually, Computing Systems being part of the Human system, their OSes are the mark of Human Civilization upon Computers. The language, habits, customs, and scriptures of a people, eating with one's bare hands, with a fork and knife, or with chopsticks, don't define whether a people has a civilization or not; they define what that civilization is. Similarly, the services uniformly provided by a collection of computers, the fact that a mouse or a microphone is used as an input device, that a video screen or a braille device is used as an output device, that there is a built-in clock or not, those features don't define whether those computers have an OS, but what that OS is.
Our definition settles what we can know, while refusing to enforce any dogma about what we can't know, which is the very principle of information against noise; it separates the contingencies of life from the universal concept of an OS: an OS is the common background between the computers of a considered collection. This moves the question of knowing what should or should not be in an OS from a brute fight between OS religions, the mutual destruction of dogmas, to a moral competition between OSes, the collective construction of information.
However, as soon as we consider further versions of a "same" piece of software, as we consider its future development and maintenance process, and the way several pieces of software interact, not just directly, but remotely, through the influence they have on each other through the medium of the humans who write them while knowing about the others, then we are indeed talking about a flow of information, and the concept of OS becomes meaningful.
Note that communication between machines does not mean that some kind of cable or radio waves must be used to transmit exact messages; rather, the most used medium for machines to communicate has always been humans, those same humans who talk to each other and try to express their resulting ideas on machines.
[Independently of static assumptions one can make about the OS, its role is related to the dynamic nature of the Computing System inside which it is considered, and from which it derives its concept of utility.]
An Operating System represents the context in which information is passed across the Computing System; it is a basis of reliable information upon which new information can be built that will enrich the whole system; when this information eventually settles, it enriches in turn the OS, and can serve as a universal basis for even further enhancements. That is the utility of Operating Systems.
That's why the power and long-term utility of an OS mustn't be measured according to what the OS currently allows one to do, but according to how easily it can be extended so that more and more people share more and more complex software. That is, the power of an OS is expressed not in terms of the services it statically provides, but in terms of the services it can dynamically manage; intelligence is expressed not in terms of knowledge, but in terms of evolutivity toward more knowledge. A culture with deep knowledge that would prevent further innovation, like the ancient Chinese civilization, would indeed be quite harmful. An OS providing lots of services, but not allowing its users to evolve, would likewise be harmful.
Again, we find the obvious analogy with human culture, for which the same holds; the analogy is not fallacious at all, as the primary goal of an operating system is to allow humans to communicate with computers more easily, to achieve better software. So an operating system is a part of human culture, though a part that involves computers.
Multiplying the actual services provided by an operating system may be an expedient way to solve computer problems, in the same way that multiplying welfare institutions may be an expedient way to solve the everyday problems of a human system; the progress of the system ultimately means that those services will actually be multiplied in the long run. However, from the point of view of utility, what counts is not the objective state of the system at any given moment, and its ephemeral advantages, but the dynamic project of the system across time, and its smaller, but growing, long-standing advantages.
.....
The deepest flaw in computer design is the idea that there is a fundamental difference between system programming and usual programming, and between usual programming and "mere" using. The previous point shows how false this conception is. The truth is that any computer user, whether a programming guru or a novice, is somehow trying to communicate with the machine. The easier the communication, the quicker and better the work gets done, and the more of it gets done.
Of course, there are different kinds of use; actually, there are infinitely many. You can often say that one kind of computer use is much more advanced and technical than another; but you can never find a clear limit, and that's the important point (in mathematics, we'd say the space of kinds of computing is connected).
Of course also, any given computer object has been created by some user(s), who programmed it above a given system, and is being used by other (or the same) user(s), who program using it, above the thus enriched system. That is, there are computer object providers and consumers. But anyone can provide some objects and consume other objects; providing objects without using some is unimaginable, while using objects without providing any is pure useless waste. The global opposition between users and programmers on which the computer industry is rooted is thus inadequate; instead, there is a local complementarity between providers and consumers of every kind of object.
Some say that common users are too stupid to program; that is only despising them. Most of them don't have the time or the mind to learn all the subtleties of advanced programming (most of the time, such subtleties shouldn't really be needed, and learning them is thus a waste of time); but they often do manually emulate macros, and if shown once how to do it, are very eager to use or even write their own macros or aliases.
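For instance (a hypothetical sketch in Python, rather than in any particular macro or shell language; the file names and directory are invented), a user who keeps prefixing report files by hand is in effect emulating a macro, and needs only be shown once that the repetition can be captured in a few lines:

    # Hypothetical "macro" a non-programmer might write after being shown once
    # how to capture a repetitive renaming chore.
    import os

    def archive_reports(directory, month):
        """Prefix every .txt report in `directory` with the given month."""
        for name in os.listdir(directory):
            if name.endswith(".txt") and not name.startswith(month):
                os.rename(os.path.join(directory, name),
                          os.path.join(directory, month + "-" + name))

    archive_reports("reports", "1994-06")   # assumes a "reports" directory exists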
Others fear that encouraging people to use a powerful programming language opens the door to piracy and system crashes, and argue that programming languages are too complicated anyway. Well, if the language library has such security holes and such cryptic syntax, then it is clearly misdesigned; and if the language doesn't allow the design of a secure, understandable library, then the language itself is misdesigned (e.g. "C"). Whatever was misdesigned should be redesigned, amended, or replaced (as "C" should be). If you don't want people to cross an invisible line, you can refrain from drawing roads that cross the line, post understandable warning signs, and then hire an army of guards to shoot at people who try to trespass or stray from the road. If you're really paranoid, then just don't let people near the line: don't let them use your computer. But if they have to use your computer, then make the line visible, and abandon these ill-traced roads and this fascist behavior.
So as for those who despise higher-order programming and user customizability, I shall repeat that there is NO frontier between using and programming. Programming is using the computer, and using a computer is programming it. This does not mean there is no difference between various user-programmers; but creating an arbitrary division in software between "languages" for "programmers" and "interfaces" for mere "users" is asking reality to comply with one's sentences instead of having one's sentences reflect reality: one ends up with plenty of unadapted, inefficient, powerless tools, stupefies all computer users with a lot of useless, ill-conceived, similar-but-different languages, and wastes a considerable amount of human and computer resources writing the same elementary software again and again.
But this accounts only for centralized design; it appears that what such systems acknowledge as first-class objects are actually very coarse-grained information concepts, and that a meaningful study of information flow should take into account much finer-grained information, which such systems just do not consider at all, hence being unadapted to the actual use that is made of them.
How does this design generalize to arbitrary OSes? What do OS kernels provide that is essential to all OSes, and what do they do that is costly noise? To answer such questions, we must depart from the traditional OS point of view, which we know is flawed, and see how those OSes are doing that we recognized as such but that traditional design refuses to consider this way, and where the analogy to human systems leads.
Thus, we see that centralization of the information flow through the kernel is of course not needed: information is most often passed much more efficiently directly from object to object, without any intermediary. To conclude, we'll say that the kernel is the central authority used to coordinate software components and solve conflicts in a computer system.
It is also remarkable that while new standard libraries arise, they do not lead to reduced code size for programs of the same functionality, but to increased code size for them, so that they can take into account all the newly added capabilities.
A blatant example of the lack of evolution in system software quality is the fact that the most popular system software in the world (MS-DOS) is a fifteen-year-old thing that does not allow the user to do either simple tasks or complicated ones, thus being a no-operating-system, and forces programmers to rewrite low-level tasks every time they develop any non-trivial program, while not even providing trivial programs.
This industry standard has always been designed as the least possible subsystem of the Unix system, which itself is a least subsystem of Multics, made of features assembled in undue ways on top of only two basic abstractions: the raw sequence of bytes ("files"), and the ASCII character string.
As these abstractions proved insufficient to express adequately the semantics of the new hardware and software that appeared, Unix has had a huge number of ad-hoc "system calls" added, to extend the operating system in special ways. Hence, what was an OS meant to fit the tiny memory of then-available computers has grown into a tentacular monster with ever-growing pseudopods, that recklessly wastes the resources of the most powerful workstations. And this, renamed POSIX, is the new industry-standard OS to come, which its promoters crown as the traditional, if not natural, way to organize computations.
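To make the byte-sequence abstraction concrete, here is a small sketch (using the POSIX-style calls as exposed by Python's os module; the file name is hypothetical): whatever the object behind a "file", the system only ever shows the programmer an unstructured run of bytes, and anything richer must be rebuilt above it, or reached through an ad-hoc escape hatch.

    # Sketch of the Unix "everything is a raw sequence of bytes" abstraction,
    # through the POSIX-style calls exposed by Python's os module.
    import os

    fd = os.open("example.dat", os.O_RDWR | os.O_CREAT)  # any "file": disk, pipe, device...
    os.write(fd, b"just bytes\n")        # the system sees no structure, only bytes
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 1024)             # reading gives the bytes back, nothing more
    os.close(fd)

    # Any richer semantics (records, objects, types) must be re-encoded by each
    # program on top of this byte stream, or obtained through ad-hoc extensions
    # ("system calls" in the ioctl style) that escape the abstraction entirely.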
Following the same tendency, widespread OSes are founded upon a large number of human interface services, video and sound. This is known as the "multimedia" revolution, which basically just means that your computer produces high-quality graphics and sound.
All that is fine: it means that your system software grants you access to your actual hardware, which is the least it can do! But software design, a.k.a. programming, is not made any simpler for that; it is even made quite harder: while a lot of new primitives are made available, no new combinators are provided that could ease their manipulation; worse, even the old reliable software is made obsolete by the new interface conventions. Thus you have computers with beautiful interfaces that waste lots of resources but cannot do anything new; to actually do interesting things, you must constantly rewrite everything almost from scratch, which leads to very expensive, low-quality, slowly-evolving software.
Of course, we can easily diagnose that the "multimedia revolution" stems from the cult of external looks, of the container, to the detriment of the internal being, the contents; such a cult is inevitable whenever non-technical people have to choose among technical products without any objective guide, so that the most seductive wins. So this general phenomenon, which goes beyond the scope of this paper, though it does harm to the computing world, is a sign that computing spreads and benefits a large public; by its very nature, it may waste a lot of resources, but it won't compromise the general utility of operating systems. Hence, if there is some flaw to be found in current OS design, it must be looked for deeper.
Computing is a recent art, and somehow, it has left its Antiquity for its Ancien Régime. Its world is dominated by a few powerful companies that wage a perpetual war with each other. At the same time, there are heavens where computists can grow in their art while freely benefitting ..... isn't the deeply rooted .....
Actually, the informational status of the computer world is quite reminiscent of the political status of
Well, firstly, we may like to find some underlying structure of mind in terms of which everything else would be expressed, and which we would call the "kernel". Most existing OSes, at least all those pieces of software that claim to be an OS, are conceived this way. Then, over this "kernel" that statically provides the most basic services, "standard libraries" and "standard programs" are provided that should be able to do everything that is needed in the system, and that would contain all the system knowledge, while standard "device drivers" would provide complementary access to the external world.
We already see why such a conception may fail: it could perhaps be perfect for a finite, unextensible, static system, but we feel it may not be able to express a dynamically evolving system. However, a solid argument for why it shouldn't be able to do so is not so obvious at first sight. The key is that, like any complex enough system, like human beings, computers have some self-knowledge. The fact becomes obvious when you see a computer being used as a development system for programs that will run on that same computer. And indeed, the exceptions to that "kernel" concept are those kinds of dynamic languages and systems that we call "reflective", that is, that allow dynamic manipulation of the language constructs themselves: FORTH and LISP (or Scheme) development systems, which can be at the same time editors, interpreters, debuggers, and compilers, even if those functionalities are available separately, are such reflective systems; so there is no "kernel" design, but rather an integrated OS.
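As an illustration of what "reflective" means here (a minimal sketch in Python, used only because it happens to expose similar facilities; it is not one of the systems named above, and the names in the snippet are invented): the running system can build, inspect, and redefine its own constructs, so that "compiler", "debugger", and "program" all live in the same integrated world.

    # Minimal sketch of reflection: a system manipulating its own constructs.
    source = "def greet(name):\n    return 'hello, ' + name\n"
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)  # the system compiles new code into itself
    print(namespace["greet"]("world"))                       # ...and immediately uses it

    # The system can also inspect and rewrite what it just built:
    import types
    assert isinstance(namespace["greet"], types.FunctionType)
    namespace["greet"] = lambda name: "bonjour, " + name     # redefinition at run time
    print(namespace["greet"]("monde"))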
And then, we see that if the system is powerful enough (that is, reflective), any knowledge in the system can be applied to the system itself; any knowledge is also self-knowledge; so it can express system structure. As you discover more knowledge, you also discover more system structure, perhaps better structure than before, and certainly structure that is more efficiently represented directly than through stubborn translation into those static kernel constructs. So you can never statically settle once and for all the structure of the system without hampering the system's ability to evolve toward a better state; any structure that cannot adapt, even those you trust the most, may eventually (though slowly) become a burden as new meta-knowledge becomes available. Even if it actually won't, you can never be sure of it, and can expect only refutation, never confirmation, of any such assumption.
The conclusion to this is that you cannot truly separate a "kernel" from a "standard library" or from "device drivers"; in a system that works properly, all have to be integrated into a single concept, the system itself as a whole. Any clear-cut distinction inside the system is purely arbitrary, and harmful if not made for strong reasons of necessity.
Well, we have seen that an OS's utility is not defined in terms of static behavior or standard library functionality; that it should be optimally designed for dynamic extensibility; and that it should provide a unified interface to all users, without enforcing arbitrary layers (or anything arbitrary at all). That is, an OS should be primarily open and rational. But then, what kind of characteristics are these? They are features of a computing language. We defined an OS by its observational semantics, and thus logically ended up with a good OS being defined by a good way to communicate with it and have it react.
People often boast about their OS being "language independent", but what does it actually mean? Any powerful-enough (mathematicians say universal, or Turing-equivalent) computing system is able to emulate any language, so this is no valid argument. Most of the time, this brag only means that they followed no structured plan for their OS semantics, which will lead to some horribly inconsistent interface, or that they voluntarily limited their software to interface with the least powerful language.
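The point about emulation can be made concrete with a toy sketch (hypothetical, in Python; the miniature stack language below is invented for the example): any universal system can host an interpreter for another language, so "supporting" a language in this weak sense proves nothing about the quality of the interface.

    # Toy interpreter for a minimal stack language, showing that a universal
    # system can trivially "emulate" another language. Illustrative only.
    def run(program, stack=None):
        stack = [] if stack is None else stack
        for token in program.split():
            if token == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif token == "*":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif token == "dup":
                stack.append(stack[-1])
            else:
                stack.append(int(token))
        return stack

    print(run("2 3 + dup *"))   # -> [25]: (2 + 3) squared, in the emulated language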
So before we can say how an OS should be, we must study computer languages: what they are meant for, how to compare them, and how they should or should not be.
Faré -- rideau@clipper.ens.fr