Part I: Operating Systems and Utility

  1. Utility vs Expediency
    We herein call useful anything that saves time, effort, money, strength, or anything else valuable, in the long run or for many people. Utility is opposed to Expediency: something is expedient if it saves such valuable things, but only in the short term, for special, personal, temporary purposes, not for general, universal, or permanent ones. Utility and Expediency are of course relative, not absolute, concepts: how much you save depends on a reference, so you always compare the utility of two actions; the utility of an isolated, unmodifiable action is meaningless.

    Utility is a moral concept; it depends directly on the goal you have defined as the general interest. Indeed, Utility is defined by the moral concept of Good just as much as Good is defined by Utility: to maximize Good is to maximize Utility. We shall admit that all the things described above as valuable (time, effort, etc.) are indeed valuable as far as the general interest is concerned.


  2. Computer Utility
    Now, what is useful about computers? Anything that lets you talk to your computer more quickly and safely, and gives you more controlled power. That is, it must save the time and effort you spend telling the computer what work you want it to do for you; it must also save the time and effort you spend verifying, correcting, or redoing the computer's work; and of course, the computer must be capable of as much useful work as possible.

    Let having no computer at all be the basic reference for computer utility. Now, can computers be useful at all? Yes, indeed: they can quickly and safely add or multiply heaps of numbers (this is called number-crunching) that no human could handle in a reasonable time without making mistakes. But number-crunching is not the only utility of computers. Computers can also memorize large amounts of data, and recall them far more exactly and unerringly than humans can. All these utilities, however, are a matter of quantity: quickly managing large amounts of information. Can computers be more useful than that?

    Of course, computer utility has limits, and those limits are the limits of human understanding, since computers are creatures of humans. The dream of Artificial Intelligence (AI), older than the computer itself, is to push those limits beyond the utility of humans themselves, that is, for human beings to create something better than (or as good as) themselves. But human-like computer utility won't evolve from nothing to beyond human understanding on the first try; it will have to pass through every intermediate state before it may ever attain AI (if it ever does).

    So as for quality versus quantity: yes, computers can be useful. By their mechanical virtue of predictability, they can be used to verify the consistency of mathematical theorems, to help people schedule their work, and to help them reduce the number of mistakes they make. Note, however, that this same predictability makes AI difficult to attain, for the limits of human intelligence lie far beyond the limits of what it can predict. But this is yet another problem, beyond the scope of this paper...


  3. Operating Systems
    Now, let's define what an Operating System is (for which we may later use the acronym OS).
    Computers are made of hardware, that is, physical, electronic circuitry of semiconductors or some other substratum. Living organic creatures themselves, including humans, are complex machines made mostly of water, carbon, and nitrogen.
    But while hardware sets limits on a computer's (or a living creature's) utility, it does not determine the computer's actual utility: just as all creatures are not equal, all computers are not equal either. Note that this does not mean the less useful should be prematurely destroyed (such eugenics would in fact be quite harmful); only harmful ones may have to be. What makes computers different is the physical, external environment in which they are used (the same holds for humans), but also their software, that is, the way they are organized internally (again, humans have an analogue: their mental being), which of course develops and evolves according to that external environment.

    The basic or common state of software, shared by a large number of different computers, is called their Operating System. This does not mean that some part of the software is exactly the same across different computers; indeed, what would "the same" even mean? Some will require the software of two computers to be bit-for-bit identical, which is a very limited definition, and often a meaningless one: for example, if two computers have in common a region of software full of useless zeros, are they running the same system? On the other hand, even if two computers use completely different technologies, you can sometimes say that they share the same behavior, that they manipulate the same kinds of objects in similar ways.
    Let's use our human analogy again: can you find cells that are "the same" in different humans? No, and even if you found exactly the same fat in two humans, it wouldn't mean anything. Yet you can nonetheless say that two humans share the same ideas, the same languages and idioms or colloquialisms, the same Cultural Background.
    It's the same with computers: even though two computers may be of completely different models (and thus cannot be copies of one another), they may still share a common background, be able to communicate with and understand each other, and react similarly to similar environments. That is what we call the Operating System.

    We see that the actual boundaries of an OS are very blurry, for "what is shared" depends on the class of computers considered and on the criteria used. Some people try to define the OS as a piece of software providing some list of services; but they then find their definition forever evolving, and never agree when it comes to a definition that matches truly different systems. That's because they have a narrow-minded, self-centered view of what an operating system is. The ancient Chinese and the ancient Greeks did not learn the same customs, language, or habits, and did not read the same books; but each had a culture, and no one could ever say that one was better than the other, or that one of them wasn't really a culture. Even if their cultures were less sophisticated than ours, our ancestors, and the native tribes of continents recently discovered by Europeans, all did or do have a culture. Likewise for computers: you cannot define being an OS in terms of exactly what an OS should provide (eating with a fork, or accepting a mouse as a pointing input device), but in terms of whether it is a software background common to different computers and accessible to all of them. When you consider all possible humans or computers, the system is what they have in common: the hardware, their shared physical characteristics; when you consider a single computer or individual, the system is simply the whole computer or individual.


  4. Operating System Utility
    An operating system is not some complete, perfect piece of software. It is a framework that should allow further development in a constantly evolving world. The actual services the system provides are expedient, just as knowledge and know-how are expedient to humans in everyday life; but they are not what decides the long-term utility of an OS. A machine that does not evolve does not need an operating system but a perfectly optimized program, which is a completely different matter; in the same way, for a specialized task, you may prefer dumb workers who know their job well to intelligent newcomers. But the tasks you need done evolve, and your dumb worker won't adapt to a new job, so you'll have to lay him off or pay him to do nothing once the task he knows so well is obsolete; whereas with the intelligent worker, you may have to invest in his training, but after a short adaptation period you will always have a proficient collaborator. After all, even the dumb worker had to learn his job one day, and an operating system was needed as the design platform on which to build any program in the first place.

    That's why the power and long-term utility of an OS mustn't be measured by what the OS currently allows you to do, but by how easily it can be extended so that more and more people share more and more complex software. That is, the power of an OS is expressed not in terms of the services it statically provides, but in terms of the services it can dynamically manage; just as intelligence is expressed not in terms of knowledge, but in terms of the ability to evolve toward more knowledge. A culture with deep knowledge that nevertheless prevented further innovation, like that of the ancient Chinese civilization, would indeed be quite harmful. An OS providing lots of services but not allowing its users to evolve would likewise be harmful.
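    To illustrate this distinction in code rather than in words, here is a minimal sketch in Python (the language and the names, such as ServiceRegistry, are chosen purely for illustration and come from no actual OS) of a set of services that is dynamically managed rather than statically fixed:

        # A minimal Python sketch (not from any actual OS) contrasting a fixed
        # service table with one that can grow while the system is running.
        class ServiceRegistry:
            """Holds named services; new ones may be registered at any time."""
            def __init__(self):
                self._services = {}

            def register(self, name, implementation):
                # Dynamically managed: the set of services is open-ended.
                self._services[name] = implementation

            def call(self, name, *args):
                return self._services[name](*args)

        registry = ServiceRegistry()
        registry.register("add", lambda a, b: a + b)           # an initial service
        registry.register("shout", lambda s: s.upper() + "!")  # added later, no rebuild needed
        print(registry.call("add", 2, 3))    # -> 5
        print(registry.call("shout", "hi"))  # -> HI!

    The point is not these few lines themselves, but that the set of services is open: registering "shout" after the system has started requires rebuilding nothing that came before.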

    Again, we find the obvious analogy with human culture, for which the same holds; the analogy is not fallacious at all, as the primary goal of an operating system is to allow humans to communicate with computers more easily so as to achieve better software. An operating system is thus a part of human culture, the part that involves the computers running under that OS.


  5. Current state of system software
    It is remarkable that while computers have, since their origins, grown in power and speed at a constant exponential rate, system software has evolved only slowly. It offers no new tools to master the increasing power of hardware, but only enhancements of obsolete tools and new "device drivers" to access new hardware. System software becomes fatware (a.k.a. hugeware) as it tries to cope separately with every user's different but similar problems. It is also remarkable that while new standard libraries arise, they lead not to reduced code size but to increased code size, to account for all the new capabilities added.

    A blatant example of this lack of evolution in system-software quality is the fact that the most popular system software in the world (MS-DOS) is a twenty-year-old thing that lets the user do neither simple tasks nor complicated ones, thus being a non-operating system, and forces programmers to rewrite low-level routines every time they develop any non-trivial program, while not even providing trivial programs itself. This industry standard has always been designed as the smallest possible subset of the Unix system, which is itself a system of features assembled in awkward ways on top of only two basic abstractions: the raw sequence of bytes ("files") and the ASCII character string. Unix has since accreted a huge number of special files and bizarre mechanisms to give access to new kinds of hardware and software abstractions, but its principles are still those unusable, unfriendly files and strings, which were originally conceived as minimal abstractions to fit the tiny memories of the computers of the time, not as interesting concepts for today's computers; now known as POSIX, it is the coming new industry-standard OS.
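    To make the criticism concrete, here is a minimal sketch, assuming a POSIX-like environment and using Python only as a convenient notation (the path /etc/hostname is merely illustrative), of what the "raw sequence of bytes" abstraction looks like from a program: whatever the resource really is, the system hands back untyped bytes, and all structure must be reimposed by the program itself.

        import os

        # A minimal sketch assuming a POSIX-like system: the kernel interface
        # hands back nothing but untyped bytes; any structure (records, text,
        # numbers) must be reconstructed by the program itself.
        fd = os.open("/etc/hostname", os.O_RDONLY)   # illustrative path
        raw = os.read(fd, 4096)                      # just a sequence of bytes
        os.close(fd)

        text = raw.decode("ascii", errors="replace") # the program imposes "ASCII string"
        print(repr(raw))      # e.g. b'myhost\n' -- no type, no meaning attached
        print(text.strip())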

    It may be said that computing has made quantitative leaps, but no comparable qualitative leap; computing grows in extent, but does not evolve toward intelligence; if anything, it becomes stupid on a larger scale. This is the problem of operating systems that lack a good kernel: however large and complete their standard library, their utility is essentially restricted to direct use of that library.


  6. Newest operating systems: the so-called "Multimedia revolution"
    The other tendency in widespread OSes is to found the system upon a large number of human-interface services, video, and sound. This is known as the "multimedia" revolution, which just means that your computer produces high-quality graphics and sound. That is all fine: it means your system software grants you access to your actual hardware, which is the least it can do! But software design, a.k.a. programming, is not made any simpler by it; it is even made considerably harder: while a lot of new primitives are made available, no new means of combination are provided that could ease their manipulation; worse, even old, reliable software is made obsolete by the new interface conventions. Thus you have computers with beautiful interfaces that cannot do anything new; to actually do interesting things, you must constantly rewrite everything almost from scratch, which leads to very expensive, low-quality, slowly evolving software.
    This is the cult of external appearance, of the container, over the internal being, the contents; the problem is not confined to the computer industry, so the reasons for this tendency are beyond the scope of this paper; but one must be aware of the problem all the same. The container may seem important when one hasn't used a computer much; but if you really use computers regularly, you'll see that an improvement of the container is useless unless a corresponding improvement of the contents comes with it.

    This phenomenon can also be explained by the fact that programmers, long accustomed to habits from the heroic times when computer memories were too small to hold more than the specific software you needed (when they could hold even that), do not seem to know how to fill today's computer memories with anything but pictures of gorgeous women and digitized music (which is the so-called multimedia revolution). Computer hardware capabilities have evolved much faster than human software capabilities; thus humans find it simpler to fill computers with raw (or nearly raw) data than with intelligence, especially since companies put a "proprietary" label on any non-trivial software they produce.


  7. Analyzing an operating system
    What are the components of an operating system? First, we might like to find some underlying structure in terms of which everything else would be expressed, and which we would call the "kernel". Most existing OSes, at least all those that claim to be OSes, are conceived this way. Then, on top of this "kernel" that statically provides the most basic services, "standard libraries" and "standard programs" are supplied that should be able to do everything needed in the system and contain all the system's knowledge, while standard "device drivers" provide complementary access to the external world.

    We already sense how such a conception fails: it would perhaps be perfect for a finite, unextensible, static system; but we feel it may not be able to express a dynamically evolving one. A solid argument for why it cannot, however, is not so obvious at first sight. The key is that, like any sufficiently complex system, like human beings, computers have some self-knowledge. The fact becomes obvious when you see a computer being used as a development system for programs that will run on that same computer. And indeed, the exceptions to the "kernel" concept are those dynamic languages and systems that we call "reflective", that is, those that allow dynamic manipulation of the language constructs themselves: FORTH and LISP (or Scheme) development systems, which can be editors, interpreters, debuggers, and compilers all at once, even when those functionalities are also available separately; there is no "kernel" design, but rather an integrated OS.
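    As a minimal illustration of what "reflective" means here, the following sketch uses Python merely as a stand-in for the LISP- and FORTH-style systems named above: a running program takes new code as ordinary data, compiles it, and immediately makes it part of itself, with no boundary between "the system" and "programs written for the system".

        # A minimal sketch of reflectivity, with Python standing in for the
        # LISP/FORTH-style systems mentioned above: the running system takes
        # new code as plain data and incorporates it into itself on the fly.
        source = """
        def greet(name):
            return 'hello, ' + name
        """

        import textwrap
        namespace = {}                      # the "system" state we are extending
        exec(compile(textwrap.dedent(source), "<live>", "exec"), namespace)

        # The freshly defined function is now as much a part of the system as
        # anything defined before it started running.
        print(namespace["greet"]("world"))  # -> hello, world
        print(sorted(namespace))            # the system can inspect its own contents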

    We then see that if the system is powerful enough (that is, reflective), any knowledge in the system can be applied to the system itself; any knowledge is also self-knowledge, so it can express system structure. As you discover more knowledge, you also discover more system structure, perhaps better structure than before, and certainly structure that is more efficiently represented directly than through stubborn translation into those static kernel constructs. So you can never statically settle the structure of the system once and for all without hampering the system's ability to evolve toward a better state; any structure that cannot adapt, even one you trust completely, may eventually (if slowly) become a burden as new meta-knowledge becomes available. And even if it never actually does, you can never be sure of it, and can expect only refutation, never confirmation, of any such assumption.

    The conclusion is that you cannot truly separate a "kernel" from a "standard library" or from "device drivers"; in a system that works properly, all of them have to be integrated into the single concept of the system. Any clear-cut distinction inside the system is purely arbitrary (though arbitrary decisions are sometimes expedient).


  8. Users are Programmers
    Actually, the deepest flaw in computer design is the idea that there is a fundamental difference between system programming and ordinary programming, and between ordinary programming and "mere" use. The previous point shows how false this conception is.
    The truth is that any computer user, whether a programming guru or a novice, is in some way trying to communicate with the machine. The easier the communication, the quicker and better the work gets done, and the more of it.
    Of course, there are different kinds of use; actually, there are infinitely many. You can often say that one kind of computer use is much more advanced and technical than another, but you can never find a clear boundary, and that is the important point (in mathematics, we'd say the space of kinds of computing is connected).
    Of course, too, any given computer object was created by some user(s), who programmed it on top of a given system, and is being used by other (or the same) user(s), who program with it on top of the system thus enriched. That is, there are providers and consumers; anyone can provide some objects and consume others; but this is in no way a user/programmer opposition.

    Some say that ordinary users are too stupid to program; that is merely contempt for them. Most of them lack the time and inclination to learn all the subtleties of advanced programming; but they often emulate macros by hand, and once shown how, are very eager to use or even write their own macros or aliases, as the sketch below illustrates.
    Others fear that encouraging people to use a powerful programming language opens the door to piracy and system crashes, and argue that programming languages are too complicated anyway. Well, if the language's library has such security holes and such cryptic syntax, then the library is clearly misdesigned; and if the language doesn't allow the design of a secure, understandable library, then the language itself is misdesigned (e.g. "C"). Whatever was misdesigned should be redesigned, amended, or replaced (as "C" should be). If you don't want people to cross an invisible line, you can draw only roads that stay clear of it, put up legible warning signs, and then hire an army of guards to shoot at anyone who trespasses or strays off the road. If you're really paranoid, just don't let people anywhere near the line: don't let them use your computer at all. But if they do have to use your computer, then make the line visible, and abandon the ill-traced roads and the fascist behavior.
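    As for the macros just mentioned, here is a purely illustrative Python sketch (the name tidy_report and its three steps are invented for the example) of the kind of "macro" a non-specialist user ends up wanting: a sequence of manual steps captured once under a single name.

        # A purely illustrative Python sketch of a user-level "macro": three
        # steps the user used to perform by hand, captured under a single name.
        def tidy_report(text):
            """Strip blanks, normalize case, number the lines -- one command instead of three."""
            lines = [line.strip() for line in text.splitlines() if line.strip()]
            lines = [line.capitalize() for line in lines]
            return "\n".join(f"{i}. {line}" for i, line in enumerate(lines, start=1))

        print(tidy_report("  first item \n\n second item  "))
        # 1. First item
        # 2. Second item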

    So, to those who despise higher-order programming and user customizability, I shall repeat that there is NO frontier between using and programming. Programming is using the computer, and using a computer is programming it. The only thing you get by having different languages and interfaces for "programmers" and mere "users" is a pile of inefficient languages, a crowd of computer users stupefied by useless, ill-conceived, similar-but-different tools, and a considerable waste of human and computer resources.


  9. Toward a unified system
    From what has been said so far, what can we deduce about how an OS should behave in order to be really useful?
    Well, we have seen that an OS's utility is not defined in terms of static behavior or standard-library functionality; that it should be designed above all for dynamic extensibility; and that it should provide a unified interface to all users, without enforcing arbitrary layers (or anything arbitrary at all). That is, an OS should be primarily open and rational.

    But then, what kind of characteristics are these? They are features of a computing language. We defined an OS by its observational semantics, and thus logically ended up with a good OS being defined by a good way to communicate with it and to make it react.
    People often boast that their OS is "language independent", but what does that actually mean? Any powerful-enough computing system (mathematicians say universal, or Turing-equivalent) can emulate any language, so this is no distinguishing claim. Most of the time, the boast only means that they followed no structured plan for their OS's semantics, which leads to some horribly inconsistent interface, or that they deliberately limited their software to interfacing with the least powerful language.
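    As a minimal illustration of why Turing-equivalence makes that boast empty, here is a sketch in Python of an interpreter for a tiny stack language invented for this example: any universal system can host such an interpreter, and in that trivial sense "supports" every language.

        # A sketch of why "language independence" is trivial: any universal
        # system can interpret any other language. The three-instruction stack
        # language below is invented purely for this example.
        def run(program):
            stack = []
            for op, arg in program:
                if op == "push":
                    stack.append(arg)
                elif op == "add":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "print":
                    print(stack[-1])
                else:
                    raise ValueError(f"unknown instruction: {op}")

        run([("push", 2), ("push", 3), ("add", None), ("print", None)])  # -> 5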

    So before we can say what an OS should be like, we must study computer languages: what they are meant for, how to compare them, and what they should or should not be.


  • Next: II. Language Utility

  • Previous: Introduction: Computers
  • Up: Table of Contents


    To Do on this page

  • Divide into subfiles?
  • Wait for feedback from the members.
  • Put references to the Glossary...


    Faré -- rideau@clipper.ens.fr