[draft]

Nota Bene: This file contains many ideas to insert into the text as it is rewritten. The ideas are in no particular order, having been put at random places in the file as they came, or moved out of the written text, since late January 1995 when the writing of this article began.


Text as source files: derives from the silly notion that because people type programs by hitting a sequence of keys, each marked with a symbol, the program should be stored and manipulated as the sequence of symbols typed; and that because books are read as such a sequence, and because time is linear so that any exploration of the text is linear too, the text itself should be linear. It also derives from the fact that early computers were so slow and primitive, with such tight memory, that programmers had to care a lot about the representation of computer data, which was sequential by nature, being fed to the computer through punched paper or magnetic tape.

Defining an OS as a set of low-level abstractions: If freight technology had been left to similar companies and academies, they would have put the horse as the only basic abstraction on which to build... They could have made a new standard when it had become obvious that steam engines should have been adopted twenty years earlier, and similarly for all newer technologies... In any case, it doesn't directly tackle the real problem, which is reliable transport of goods; it just forces people to use standard technology, however obsolete it may be, and prevents them from developing structures that would survive this technology.

Also see the disaster of the State managing services.

For each independent part to be reliable, it is subject to the limit of one man's understanding. Existing systems are coarse-grained, which means that independent parts are large portions of programs, so that a complete program is made of few independent parts, and total program complexity is limited to the sum of a few direct human understandings. Tunes will be fine-grained and reflective, so that a complete program is made of arbitrarily many limited parts, and can grow arbitrarily in complexity.

See the failure of the French industry, due to its illusory policy of developing its "own standard" (i.e. not a standard, as it is not strong enough to impose it by force) in a way both internally centralized and externally isolated (!!!); such an industry can survive only by constantly stealing from taxpayers, and that's exactly what it has been doing for decades.

If safety criteria are not expressible, then to be safe, programs must be understandable by one man. Because that man won't always be there to maintain the code, because armies of "maintainers" won't replace him, because there is no tool to safely adapt old programs to new points of view, every so often the code must go to the dust bin. No wonder software evolves so slowly: only some small human experience remains, and even then, because there is no way to express what that experience is, it cannot spread at the pace of technology, but only from man to man.

Should the article start from the current state of things and try to see what fails, which is the historical approach, how the ideas slowly came into my mind, how I started the article?
Or should the article rather present the result of this slow meditation process, which is a completely different approach, where the reasons preexist the current state of things, where opinions do not overlap and correct each other but make a consistent whole, which is how I'm rewriting the article?

  • Part I would:
  • Part II would discuss programming language utility, stating the key concepts about it.
  • Any reuse includes some rewrite, which is to be minimized. Similarly, when we "rewrite", we often reuse many of the formal and informal ideas from existing code, and even when we reinvent, we reuse the inspiration, or sometimes feedback from people already inspired.
  • Notably, after discussing how to construct as many new concepts as possible, it should explain that the key to concept expressivity (which reflectivity cannot indefinitely postpone) is their separation power, and thus the capability to affirm one of multiple alternatives, to express different things, to negate and deny things.
  • Part III would apply the concepts to existing technology.
  • It would have to discuss what tradition is, what its role is, how it should or should not be considered, what it currently does wrong, and how the Tunes approach fits into it.
  • It would debunk myths
  • Efficiency, Security, Small grain: take two, you have the third. That is, when you need two of them, you also need the third, but when you have two of them, you automatically have the third.
  • With OO, people discovered that implicit binding is needed. Unhappily, most "OO" languages know only as-late-as-possible binding, and no such thing as reflectivity (= control of implicitness) or migration (= modification of implicitness control).


    Having generic programs instead of just specific ones is exactly the main point we made about having a good grammar to introduce new generic objects, instead of merely extending the vocabulary with an ever-increasing number of terminal, first-order objects that do specific things.


    It may be said that computing has made quantitative leaps, but no comparable qualitative leap; computing grows in extension, but does not evolve toward intelligence; it sometimes rather becomes stupid on a larger scale. This is the problem of operating systems not having a good conceptual kernel: however large and complete their standard library, their utility will be essentially restricted to the direct use of that library.


  • Newest Operating Systems: the so-called "Multimedia revolution"
  • This phenomenon can also be explained by the fact that programmers, long used to software habits from the heroic times when computer memories were too tight to contain more than just the specific software you needed (when they even could), do not seem to know how to fill the memory of today's computers, except with pictures of gorgeous women and digitized music (which is the so-called multimedia revolution). Computer hardware capabilities evolved much more quickly than human software capabilities; thus humans find it simpler to fill computers with raw data (or almost raw data) than with intelligence.

    Those habits, it must be said, were especially encouraged by the way information could not spread and augment the common public background: for lack of theory and practice of what a freely communicating world could or should be, only big companies enforcing a "proprietary" label could until now broadcast their software; people who would develop original software thus had (and sadly still have) to rewrite everything almost from scratch, unless they could afford a very high price for every piece of software they might want to build upon, without having much control over the contents of such software.


  • The role of the OS infrastructure in the computer world is much like that of the State in human societies: it should provide justice by guaranteeing, by force if need be, that contracts will be fulfilled, and nothing more. In the case of computer software, this means that it will guarantee that contracts made between objects are fulfilled, and that objects must fulfill each other's requirements before they can connect. When there is no Justice, there is no society/OS, but only chaos.


    What is really useful is a higher-order grammar, one that allows us to manipulate any kind of abstraction that does any kind of thing at any level. We call level 0 the lowest kind of computer abstraction (e.g. bits, bytes, system words, or to idealize, natural integers). Level 1 is made of abstractions of these objects (i.e. functions manipulating them). More generally, level n+1 is made of abstractions of level n objects. We see that every level is a useful abstraction, as it allows us to manipulate objects that could not be manipulated otherwise.

    But why stop there? Every time we have a set of levels, we can define a new level made of objects that arbitrarily manipulate any lower object (that's ordinals); so we have objects that manipulate arbitrary objects of finite level, etc. There is an unbounded infinity of abstraction levels. To have the full power of abstraction, we must allow the use of any such level; but why not also allow manipulating such full-powered systems? Any logical limit you put on the system may be reached one day, and on that day the system would become completely obsolete; that's why any system meant to last must potentially contain (not in a subsystem) any single feature that may be needed one day.

    The solution is not to offer some bounded level of abstraction, but unlimited abstracting mechanisms; instead of offering only terminal operators (BASIC), or first-level operators (C), or even operators of any finite order, offer combinators of arbitrary order.
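
    As an illustration, here is a minimal sketch in OCaml (an ML dialect, like those discussed later in these notes); the names are invented for this example only, and the point is simply that nothing bounds the order of the combinators you may define.

        (* Level 0: plain data. *)
        let x : int = 42

        (* Level 1: abstractions over level-0 objects. *)
        let double (n : int) : int = 2 * n

        (* Level 2: abstractions over level-1 objects (operators on functions). *)
        let twice (f : int -> int) : int -> int = fun n -> f (f n)

        (* Level 3 and beyond: abstractions over level-2 objects, and so on;
           the language imposes no a priori bound on the order of combinators. *)
        let iterate (k : int) : (int -> int) -> int -> int =
          let rec go k f n = if k <= 0 then n else go (k - 1) f (f n) in
          go k

        let () =
          Printf.printf "%d %d %d\n" (double x) (twice double x) (iterate 3 double x)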

    Offer a grammar with an embedding of itself as an object. Of course, a simple logical theorem says that there is no consistent internal way of saying that the manipulated object is indeed the system itself, and the system state will always be much more complicated than what the system can understand about itself; but the system implementation may be such that the manipulated object indeed is the system. This is having a deep model of the system inside itself; and this is quite useful and powerful. This is what I call a higher-order grammar -- a grammar defining a language able to talk about something it believes to be itself. And only this way can full genericity be achieved: allowing absolutely anything that can be done about the system, from inside, or from outside (after abstracting the system itself).


    ..... First, we see that the same algorithm can apply to arbitrarily complex data structures; but a piece of code can only handle a finitely complex data structure; thus to write code with full genericity, we need to use code as parameters, that is, second order. In a low-level language (like "C"), this is done using function pointers.
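
    For instance, here is a hedged sketch (in OCaml rather than "C", with names invented for the example) of an algorithm taking code as a parameter; in "C", the equal parameter below would be a function pointer.

        (* One generic algorithm, membership in a list, parameterized by the
           equality test it needs: code passed as a parameter (second order). *)
        let rec member (equal : 'a -> 'a -> bool) (x : 'a) (l : 'a list) : bool =
          match l with
          | [] -> false
          | y :: rest -> equal x y || member equal x rest

        let () =
          (* The same algorithm serves integers and case-insensitive strings alike. *)
          assert (member ( = ) 3 [1; 2; 3]);
          assert (member
                    (fun a b -> String.lowercase_ascii a = String.lowercase_ascii b)
                    "Foo" ["bar"; "foo"])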

    We soon see the problems that arise from this method, and solutions to them. The first one is that whenever we use some structure, we have to explicitly pass functions together with it to tell the various generic algorithms how to handle it. Worse even, a function that doesn't itself need some access method for the structure may be asked to call other algorithms which in turn will need that access method; and which exact method is needed may not be known in advance (because which algorithm will eventually be called is not known, for instance, in an interactive program). That's why explicitly passing the methods as parameters is slow, ugly, and inefficient; moreover, it is code propagation (you propagate the list of methods associated with the structure -- if the list changes, all the code using it changes). Thus, you mustn't pass those methods explicitly as parameters; you must pass them implicitly: when using a structure, the actual data and the methods to use it are embedded together. Such a structure including the data and the methods to use it is commonly called an object; the constant data part and the methods constitute the prototype of the object; objects are commonly grouped into classes made of objects with a common prototype and sharing common data. This is the fundamental technique of Object-Oriented programming. Well, some call that Abstract Data Types (ADTs) and say it's only part of the "OO" paradigm, while others don't see anything more in "OO". But that's only a question of dictionary convention. In this paper, I'll call it just ADT, while "OO" will also include more things. But know that words are not settled and that other authors may give the same names to different ideas and vice versa.
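
    As a small illustration, here is an OCaml sketch (types and names invented for this example) of such a structure where the data and the methods that use it are embedded together, so that no separate list of access methods has to be propagated along with it.

        (* The 'object': its private data and the methods that know how to use
           it travel together. *)
        type counter = {
          value     : unit -> int;   (* observe the encapsulated state *)
          increment : unit -> unit;  (* mutate the encapsulated state *)
        }

        (* The constructor plays the role of the class/prototype: it fixes the
           methods once, and each call packages them with fresh private data. *)
        let make_counter () : counter =
          let state = ref 0 in
          { value     = (fun () -> !state);
            increment = (fun () -> state := !state + 1) }

        let () =
          let c = make_counter () in
          c.increment (); c.increment ();
          assert (c.value () = 2)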

    BTW, the same code-propagation argument explains why side effects are an especially useful thing, as opposed to strictly functional programs (see pure ML :); of course, side effects greatly complicate the semantics of programming, to the point that ill use of side effects can make a program impossible to understand or debug -- that's what not to do, and such a possibility is the price to pay to prevent code propagation. Sharing mutable data (data subject to side effects) between different embeddings (different users), for instance, is something whose semantics still has to be clearly settled (see below about object sharing).


    The second problem with second order is that if we are to pass functions to other functions as parameters, we should have tools to produce such functions. Methods can be created dynamically as well as "mere" data, which is all the more frequent as a program needs user interaction. Thus, we need a way to have functions not only as parameters, but also as results of other functions. This is higher order, and a language which can achieve this has a reflective semantics. Lisp and ML are such languages; FORTH also, though standard FORTH memory management isn't conceived for a largely dynamic use of such a feature in a persistent environment. In "C" and such low-level languages, which don't allow a direct portable implementation of the higher-order paradigm through the common function pointers (because low-level code generation is not available as it is in FORTH), the only way to achieve higher order is to build an interpreter of a higher-order language such as LISP or ML (usually much more restricted languages are actually interpreted, because programmers don't have time to elaborate their own user customization language, whereas users don't want to learn a new complicated language for each different application, and there is currently no standard user-friendly small-scale higher-order language that everyone can adopt -- there are just plenty of them, either very imperfect or too heavy to include in every single application).
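
    Here is a minimal OCaml sketch of that higher-order capability: functions produced at run time as the results of other functions, which is precisely what function pointers alone cannot portably provide in "C". The names are invented for the example.

        (* A new function value, created dynamically from a run-time parameter. *)
        let make_scaler (factor : int) : int -> int =
          fun n -> factor * n

        (* Functions returning functions compose like any other data. *)
        let compose f g = fun x -> f (g x)

        let () =
          let times6 = compose (make_scaler 2) (make_scaler 3) in
          assert (times6 7 = 42)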

    With respect to typing, Higher-Order means the target universe of the language is reflective -- it can talk about itself.

    With respect to Objective terminology, Higher-Order consists in having classes as objects, in turn groupable into meta-classes. And we then see that it _does_ prevent code duplication, even in cases where the code concerns just one user, as the user may want to consider concurrently two -- or more -- different instantiations of the same class (i.e. two sub-users may need to have distinct but mostly similar object classes). Higher-Order somehow allows there to be more than one computing environment: each function has its own independent environment, which can in turn contain functions.


    To end with genericity, here is some material to feed your thoughts about the need for system-builtin genericity: let's consider multiplexing. For instance, Unix (or worse, DOS) user/shell-level programs are ADTs, but with only one exported operation, the "C" main() function, per executable file. As such "OS"es are huge-grained, with ultra-heavy communication semantics between executable files (and even between processes of the same executable file), no one can afford one executable per actual exported operation. Thus you'll group operations into single executables whose main() function will multiplex those functionalities.

    Also, communication channels are heavy to open, use, and maintain, so you must explicitly pass all kinds of different data and code through single channels by manually multiplexing them (the same goes for having many heavy files versus one manually multiplexed huge file).

    But the system cannot provide builtin multiplexing code for each single program that will need it. It does provide code for multiplexing the hardware: memory, disks, serial, parallel and network lines, screen, sound. POSIX requirements grow with the things a compliant system ought to multiplex; new multiplexing programs keep appearing. So the system grows, yet it will never be enough for user demands until every possible kind of multiplexing has been programmed, and meanwhile applications will spend most of their time manually multiplexing and demultiplexing objects not yet supported by the system.

    Thus, any software development on common OSes is hugeware: huge in hardware resources needed (memory, RAM or HD, CPU power, time, etc.), huge in resources spent, and, most important of all, huge in programming time.

    The problem is that current OSes provide no genericity of services. Thus they can never do the job for you. That's why we really NEED generic system multiplexing, and more generally genericity as part of the system. If one generic multiplexer object were built, with two generic specializations for serial channels or flat arrays, and some options for real-time behaviour and recovery strategy on failure, that would be enough for all the current multiplexing work done everywhere.
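
    To make the idea concrete, here is a hypothetical OCaml sketch (all names invented here, not an actual Tunes design) of what one generic multiplexer could look like: a single piece of code that routes tagged messages from a shared channel to per-client handlers, regardless of what the channel or the messages actually are.

        module Mux = struct
          (* A table from sub-channel tags to the handler of that sub-channel. *)
          type ('tag, 'msg) t = ('tag, 'msg -> unit) Hashtbl.t

          let create () : ('tag, 'msg) t = Hashtbl.create 16

          (* Register the handler of one logical sub-channel. *)
          let subscribe mux tag handler = Hashtbl.replace mux tag handler

          (* Demultiplex one tagged message arriving on the shared channel. *)
          let dispatch mux (tag, msg) =
            match Hashtbl.find_opt mux tag with
            | Some handler -> handler msg
            | None -> ()  (* unknown sub-channel: drop, or plug in a recovery strategy *)
        end

        let () =
          let mux = Mux.create () in
          Mux.subscribe mux "log" print_endline;
          Mux.subscribe mux "err" prerr_endline;
          Mux.dispatch mux ("log", "hello from one logical line")

    The two specializations mentioned above would then just be instances of this one generic object, one for serial channels and one for flat arrays.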


    So this is it for Full Genericity: Abstract Data Types and Higher Order. Now, while this allows code reuse without code replication -- which is what we wanted -- it also raises new communication problems: if you reuse objects, especially objects designed far away in space or time (i.e. designed by other people, or by another, former self), you must ensure that the reuse is consistent, that an object can rely upon a used object's behaviour. This is most dramatic if the used object (e.g. part of a library) comes to change, and a bug (or a quirk that you were aware of and had already adapted your program to) is removed or added. How do we ensure the consistency of object combinations?

    Current common "OO" languages do not perform many consistency checks. At most, they include some more or less powerful kind of type checking (the most powerful being those of well-typed functional languages like CAML or SML), but you should know that even powerful type checking is not yet secure. For example, you may well expect more precise behaviour from a comparison function on an ordered class 'a than just having the type 'a->'a->{LT,EQ,GT}, i.e. telling that when you compare two elements the result can be "lesser than", "equal", or "greater than": you may want the comparison function to be compatible with the class actually being ordered, that is, x<y & y<z => x<z, and so on. Of course, a typechecking scheme, which is more than useful in any case, is a deterministic decision system, and as such cannot completely check arbitrary logical properties as expressed above (see your nearest lectures in Logic or Computation Theory). That's why, to add such enhanced security, you must add non-deterministic behaviour to your consistency checker or ask for human help. That's the price for 100% secure object combining (but not 100% secure programming, as human error is still possible in misexpressing the requirements for using an object, and the non-deterministic behaviour can require humans to force the admission of consistency checks left unproved by the computer).
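
    The following OCaml sketch (invented for this example) shows the gap: the type below is everything a typechecker verifies about a comparison function, while the law that makes the class genuinely ordered is not a type at all, and can at best be tested on sample values rather than proved automatically.

        type ordering = LT | EQ | GT

        (* All the typechecker verifies is a signature of this shape. *)
        let compare_int (x : int) (y : int) : ordering =
          if x < y then LT else if x > y then GT else EQ

        (* Transitivity (x < y and y < z imply x < z) is a logical property,
           not a type; here it can only be checked on particular values. *)
        let transitive_on cmp x y z =
          not (cmp x y = LT && cmp y z = LT) || cmp x z = LT

        let () = assert (transitive_on compare_int 1 2 3)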

    This kind of consistency security through logical, formal properties of code is called a formal specification method. The future of secure programming lies there (try enquiring in the industry about the cost of testing or debugging software that can endanger the company or even human lives if ill-written, and about the insurance funds spent to cover eventual failures -- you'll understand). Life-critical industries already use such modular formal specification techniques.

    In any case, we see that even when such methods are not applied automatically by the computer system, the programmer has to apply them manually, by including the specification in comments or by understanding the code, so that he is doing the computer's work.

    Now that you've settled the skeleton of your language's requirements, you can think about peripheral deduced problems.

    .....

    ..... A technique should be used when and only when it is the best fit; any other use may be expedient, but not quite useful.

    Moreover, it is very hard to anticipate one's future needs; whatever you do, there will always be new cases you won't have anticipated.

    Lastly, it doesn't replace combinators. And finally, as for the combinators allowed, letting local server objects be saved by the client is hard to implement efficiently without the server becoming useless, or without creating a security hole;

    ..... At best, your centralized code will provide not only the primitives you need, but also the necessary combinators; but then, your centralized code is a computing environment by itself, so why keep the original computing environment? There is obviously a problem somewhere: if one of the two computing environments were good, the other wouldn't be needed! All these are problems with servers as much as with libraries.




    Summary

  • Axioms:

    Current computers are all based on the von Neumann model, in which a centralized unit executes step by step a large program composed of elementary operations. While this model is simple and led to the wonderful computer technology we have, the laws of physics limit the power of future computer technology built on it to at most a factor of 10000 beyond what is possible today on superdupercomputers.
    This may seem a lot, and it is, which leaves room for much improvement in computer technology; however, the problems computers are confronted with are not themselves limited by the laws of physics. To break this barrier, we must use another computer model: we must have many different machines that cooperate, like cells in a body, ants in a colony, neurones in a brain, people in a society.

    Machines can already communicate; but with existing "operating systems", the only working method they know is the "client/server architecture", that is, everybody communicating his job to a single von Neumann machine that does all the computations, which is limited by the same technological barrier as before. The problem is that current programming technology is based on coarse-grained "processes" that are much too heavy to communicate; thus each job must be done on a single computer.
    machine that executes Computing s all the requirement to be used as for Tunes, or design a new one if none is found.


    That is, without ADTs, and combinations of ADTs, you spend most of your time manually multiplexing. Without semantic reflection (higher order), you spend most of your time manually interpreting runtime-generated code or manually compiling higher-order code. Without logical specification, you spend most of your time manually verifying. Without language reflection, you spend most of your time building user interfaces. Without small grain, you spend most of your time manually inlining simple objects into complex ones, or worse, simulating them with complex ones. Without persistence, you spend most of your time writing disk I/O (or worse, net I/O) routines. Without transactions, you spend most of your time locking files. Without code generation from constraints, you spend most of your time writing redundant functions that could have been deduced from the constraints.

    To conclude, there are essentially two things we fight: the lack of features and power in software, and the artificial barriers that misdesign of former software builds between computer objects and other computer objects, computer objects and human beings, and human beings and other human beings.

  • More than 95% of human computing time is lost on interfaces: interfaces with the system, interfaces with the human. Actual algorithms are very few; heuristics are at the same time few and too many, because the environment makes them unreliable. Interfaces can and should be semi-automatically deduced.
  • More generally, the problem with existing systems is lack of reflectivity and lack of consistency: you can't simply, quickly, and reliably automate any kind of programming in a way that enforces system consistency.
  • Persistence is necessary for AI:

    To conclude, I'll say

  • Stress on comput*ing* systems means having a computer *Project* not only a computer *Object*.
  • Why are existing OSes so lame? For the same reason that ancient lore is completely irrelevant in today's world:
    At a time when life was hard, memories very small and expensive, and development costs very high, people had to invent hackers' techniques to survive; they made arbitrary decisions to survive with their few resources; they behaved dirtily, and thought for the short term.
    They had to.

    Now, technology has always evolved at an increasing pace. What was experimental truth is always becoming obsolete, and good old recipes are becoming out of date. Behaving cleanly and thinking for the long term is made possible.
    It is made compulsory.
    The problem is, most people don't think, but blindly follow traditions. They do not try to distinguish what is truth and what is falsehood in traditions, what is still true, and what no longer stands. They take it as a whole, and adore it religiously, sometimes by devotion, most commonly by lack of thinking, often by refusal to think, rarely but already too often by hypocritical calculation. Thus, they abdicate all their critical faculties, or use them against any ethics. As a result, for the large majority of honest people, their morals are an unspeakable burden, mixing common sense, valid or obsolete experimental data, and valid, outdated, or false rules, connected and tangled in such a way that by trying to extract something valid, you come up with a mass of entangled false things associated with it, and when extirpating the false things, you often destroy the few valid ones along with them. The roots of their opinions are not in actual facts, but in lore, hence their being only remotely relevant to anything.

    Tunes intends to rip out all these computer superstitions.


    Things below are a draft.


    III. No computer is an island, entire in itself

    1. Down to actual OSes
    2. .....


    3. Human-like characteristics of computers
    4. persistence, resilience, mobility, etc....

      response to human


      • Centralized code
      • There's been a craze lately about "client/server" architecture for computer hardware and software. What is this "client/server" architecture that many corporations boast about providing?
        .....
        Conceptually, a server is a centralized implementation of a library; centralized => coarse-grained; now, coarse-grained => evil; hence centralized => evil. We also have centralized => network bandwidth waste. The only "advantage": the concept is simple to implement even by the dumbest programmer. Do corporations boast about their programmers being dumb?
        .....
        A very common way to share code is to write a code "server" that will include tests for all the different cases you may need in the future and branch to the right one. Actually, this is only a particular kind of library making, but a much more clumsy one, as a single entry point will comprise all the different behaviours needed. This method proves hard to design well, as you have to take into account all possible cases that may arise, with a predecided encoding, whereas a good encoding would have to take into account actual use (and thus be decided after run-time measurements). The obtained code is slow, as it must test many uncommon cases; it is huge, as it must take into account many cases, most of them seldom or never actually used; it is also hard to use, as you must encode and decode the arguments to fit its single entry point's calling convention. It is very difficult to modify, except by adding new entries and replacing obsolete subfunctions by stubs, because anything else would break existing code; it is very clumsy to grant partial access to the subfunctions, as you must filter all the calls; security semantics become very hard to define.

        Centralized code is also called "client-server architecture"; the central code is called the server, while those who use it are called clients. And we saw that a function server is definitely something that no sensible man would use directly; human users tend to write a library that will encapsulate calls to the server. But this is how most operating systems and net-aware programs are implemented, as it's the simplest way to implement things. Many companies boast about providing client-server based programs, but we see there's nothing to boast about; client-server architecture is the simplest and dumbest mechanism ever conceived; even a newbie can do that easily. What they could boast about would be not using client-server architecture, but truly distributed yet dependable software.

        A server is nothing more than a bogus implementation of a library, and shares all the disadvantages and limits of a library, with worse extensibility problems and additional overhead. Its only advantage is to have a uniform calling convention, which can be useful in a system with centralized security, or to pass the stream of arguments through a network to allow distant clients and servers to communicate. This last use is particularly important, as it's the simplest trick ever found for accessing an object's multiple services through a single communication line. Translating a software interface from library to server is called multiplexing the stream of library/server accesses, while the reverse translation is called demultiplexing it.


      • Multiplexing: the main role of an OS
      • Putting aside our main goal, that is, to see how reuse is possible in general, let us focus on this particular multiplexing technique, and see what lessons we can learn that we may generalize later.

        Multiplexing means splitting a single communication line or some other resource into multiple sub-lines or sub-resources, so that this resource can be shared between multiple uses. Demultiplexing is recreating a single line (or resource) from those multiple ones; but as dataflow is often bi-directional, this reverse step is most often inseparable from the first, and we'll talk only about multiplexing for these two things. Thus, multiplexing can be used to share multiple functions over a single stream of calls, or conversely to have a function server be accessed by multiple clients.

        Traditional computing systems often allow multiplexing of some physical resources, thus splitting them into a first (but potentially very large) level of equivalent logical resources. For example, a disk may be shared through a file system; CPU time can be shared by task switching; a network interface is shared through a packet-transmission protocol. Actually, anything an operating system does can be considered multiplexing. But those same traditional computing systems do not provide the same multiplexing capability for arbitrary resources, and the user will eventually end up having to multiplex something himself (see the "term" user-level program used to multiplex a serial line; or the "screen" program used to share a terminal; or window systems, etc.), and as the system does not support anything about it, he won't do it the best way, nor in synergy with other efforts.

        What is wrong with those traditional systems is precisely that they only allow limited, predefined multiplexing of physical resources into a small, predefined number of logical resources; thereby they create a big difference between physical resources (which may be multiplexed) and logical ones (which cannot be multiplexed again by the system). This gap is completely arbitrary (programmed computer abstractions are never purely physical, nor are they ever purely logical); and user-implemented multiplexers must cope with the system's lacks and deficiencies.


      • Genericity
      • Then what are "intelligent" ways to produce reusable, easy-to-modify code? Such a method should allow reusing code without duplicating it, and without growing it in a way that is both inefficient and incomplete: an algorithm should be written once and for all, for all the possible applications it may have, not for a specific one. We have just found the answer to this problem: the opposite of specificity, genericity.

        So we see that system designers are ill-advised when they provide such specific multiplexing, which may or may not be useful, whereas other kinds of multiplexing are always needed (a proof of which being people always boasting about writing -- with real pain -- "client/server" "applications"). What they really should provide is generic ways to automatically multiplex lines, whenever such a thing is needed.

        More generally, a useful operating system should provide a generic way to share resources; for that's what an operating system is all about: sharing disks, screens, keyboards, and various devices between multiple users and programs that may want to use them across time. But genericity is not only for operating systems/sharing. Genericity is useful in any domain; for genericity is instant reuse: your code is generic -- it works in all cases -- so you can use it in any circumstances where it may be needed, whereas specific code must be rewritten or readapted each new time it must be used. Specificity may be expedient; but only genericity is useful in the long run.

        Let us recall that genericity is the property of writing things in their most generic forms, and having the system specialize them when needed, instead of hard-coding specific values (which is some kind of manual evaluation).

        Now, how can genericity be achieved?


      • features: high-level abstraction, real-time, reflective language frame, code & data persistence, distribution, higher order
      • misfeatures: low-level abstraction, explicit batch processing, ad hoc languages, sessions & files, networking, first order
    5. Somewhere: develop the idea of adequateness of information, and show how it relates to pertinency.
  • More generally, in any system, for a specialized task, you may prefer dumb workers that know their job well to intelligent workers that cost a lot more and are not so specialized. But as the tasks you need completed evolve, and your dumb workers don't, you'll have to throw them away or pay them to do nothing, as the task they knew so well is obsolete; they may look cheap, but they can't adapt, and their overall cost is high for the little time when they are active. In a highly dynamic world, you lose by betting on dumbness, and should invest in intelligence.

  • Whereas with the intelligent worker, you may have to invest in his training, but you will always have a proficient collaborator after a short adaptation period. After all, even the dumb worker had to learn one day, and an operating system was needed as a design platform for any program.

  • People tend to think statically in many ways.
  • When confronted with some proposition in TUNES, people tend to consider it in isolation from the rest of the TUNES ideas, and they then conclude that the idea is silly, because it contradicts something else in traditional system design. These systems indeed have some coherency, which is why they survived and were passed on by tradition. But TUNES tries to be even more coherent.
  • At the time when the only metaprogramming tool was the human mind of specialized engineers (because memories were too small), which is very expensive and cannot deal with too much stuff at once, run-time hardware protection was desirable to prevent bugs in existing programs from destroying data. But now that computers have enough horsepower to be useful metaprogrammers, the constraints change completely.
  • Dispel the myth of "language independence", particularly about OSes, which really means "interfaces to many language implementations"; any expressive-enough language can express anything you expressed in another language, in many ways.
  • And as the RISC/CISC then MISC/RISC concepts showed, the best way to achieve this is to keep the low-level things as small as possible, so as to focus on efficiency, and provide simple (yet powerful enough) semantics. The burden of combining those low-level things into useful high-level objects is then moved to compilers, that can do things much better than humans, and take advantage of the simpler low-level design.
    ------>8------>8------>8------>8------>8------>8------>8------>8------>8------
       Now, the description could be restated as:
    "project to replace existing Operating Systems, Languages,
    and User Interfaces by a completely rethought Computing
    System, based on a correctness-proof-secure
    higher-order reflective self-extensible fine-grained
    distributed persistent fault-tolerant version-aware
    decentralized (no-kernel) object system."
    
    
    > i saw your answer about an article in the news, so i wanna know,
    > what is tunes ?
       Well, that's a tough one.
       Here is what I told Yahoo:
    "TUNES is a project to replace existing Operating Systems, Languages,
    and User Interfaces by a completely rethought Computing
    System, based on a correctness-proof-secure
    higher-order reflective self-extensible fine-grained
    distributed persistent fault-tolerant version-aware
    decentralized (no-kernel) object system."
    
       Now, there are lots of technical terms in that.
    Basically, TUNES is a project that strives to develop a system where
    computists would be much freer than they currently are:
    in existing systems, you must suffer the inefficiencies of
    * centralized execution [=overhead in context switching],
    * centralized management [=overhead and single-mindedness in decisions],
    * manual consistency control [=slow operation, limitation in complexity],
    * manual error-recovery [=low security],
    * manual saving and restoration of data [=overhead, loss of data],
    * explicit network access [slow, bulky, limited, unfriendly, inefficient,
     wasteful distribution of resources],
    * coarse-grained modularity [=lack of features, difficulty to upgrade]
    * unextensibility [=impossibility to do things oneself,
     people being taken hostage by software providers]
    * unreflectivity [=impossibility to write programs clean for both human
     and computer; no way to specify security]
    * low-level programming [=necessity to redo things again every time one
     parameter changes].
    
       If any of these seems unclear to you, I'll try to make it clearer in
    
    
    
    * unindustrialized countries: the low reliability of power feeds makes
    resilient persistency a must.
    
    ------>8------>8------>8------>8------>8------>8------>8------>8------>8------
    
    In this article, we have started from a general point of view of moral Utility, and by applying it to the particular field of computing, we have deduced several key requirements for computing systems to be as useful as they could be. We came to affirm concepts like dynamism, genericity, reflectivity, separation and persistency, which unhappily no available computing system fully implements.

    So to conclude, there is essentially one thing that we have to fight: the artificial informational barriers that lack of expressivity and misdesign of former software, due to misknowledge, misunderstanding, and rejection of the goals of computing, build between computer objects and other computer objects, computer objects and human beings, and human beings and other human beings.

    ....




    To Do on this page

  • All stuff in it should be transferred to the definitive version or deleted.


    Faré -- rideau@clipper.ens.fr