A structure A is interpreted in another structure B if you can map the symbols of A to combinations of symbols of B, with all their properties conserved. The simplest way to be interpreted is to be included.
A structure A is a specialization of a structure B if it has the same symbols, but more properties are known about the objects they represent.
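The two definitions above can be sketched in code. In this Python sketch (the names `phi`, `Circle`, etc. are mine, purely illustrative), the structure (naturals, +, 0) is interpreted in the structure (strings, concatenation, ""): each symbol of A maps to a combination of symbols of B, and the monoid laws are conserved under the map.

```python
def phi(n):
    """Interpretation map: the natural n becomes a string of n 'x' characters."""
    return "x" * n

# The operation of A (addition) maps to the operation of B (concatenation),
# and the identity of A (0) maps to the identity of B (""):
for m in range(5):
    for n in range(5):
        assert phi(m + n) == phi(m) + phi(n)
assert phi(0) == ""
```

A specialization, by contrast, keeps the very same symbols: a commutative monoid is a specialization of a monoid, with one more known property (commutativity) and no new symbol.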
Imagine a real-time process is interrupted: will it continue where it stopped, or will it skip what was missed during the interruption? Imagine the system runs out of memory: whose memory do you reclaim? The biggest process's? The smallest's? The oldest's? The first to ask for more? If objects keep spawning and fill memory (or CPU), how do you detect "the one" responsible and destroy it?
If an object locks a common resource and is then itself blocked by a failure or other unwanted latency, should its transaction be cancelled, so that others can access the resource, or should the whole system wait for that single transaction to end?
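One common policy for the dilemma above is a timeout: a transaction that cannot acquire the resource within a deadline is cancelled rather than stalling everyone. A minimal sketch, assuming that policy (the names `run_transaction` and `resource_lock` are hypothetical):

```python
import threading

resource_lock = threading.Lock()

def run_transaction(work, timeout=1.0):
    """Try to run `work` under the shared lock; cancel on timeout."""
    if not resource_lock.acquire(timeout=timeout):
        return "cancelled"   # someone else held the resource too long
    try:
        return work()
    finally:
        resource_lock.release()

# If the lock is free, the transaction runs:
print(run_transaction(lambda: "committed"))               # committed
# If the lock is already held past the timeout, the caller cancels:
resource_lock.acquire()
print(run_transaction(lambda: "committed", timeout=0.1))  # cancelled
resource_lock.release()
```

The question in the text is precisely which such policy (wait forever, timeout, priority-based preemption, ...) the system should impose, and on whose authority.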
As for implementation methods, you should always be aware that defining
all those abstractions as the abstractions they are, rather than as
hand-coded emulations of them, allows better optimization by the compiler,
a quicker write phase for the programmer, neater semantics for the
reader/reuser, no propagation of implementation code, etc.
Partial evaluation should also allow the specialization of code that doesn't use all of the language's powerful semantics, so that standalone code can be produced without including the full range of heavy reflective tools.
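A minimal sketch of specialization by partial evaluation (the helper names are mine): a generic function re-interprets its "program" argument at every call, while the specializer unfolds that argument once, at specialization time, into standalone residual code that carries none of the generic machinery.

```python
def generic_power(n, x):
    """Generic code: re-interprets the exponent n at every call."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Partial evaluator: unfold the loop on the known input n,
    emitting residual source code that depends on x alone."""
    source = "def residual(x):\n    result = 1\n"
    source += "    result *= x\n" * n      # loop unfolded n times
    source += "    return result\n"
    namespace = {}
    exec(source, namespace)                # 'compile' the residual code
    return namespace["residual"]

cube = specialize_power(3)                 # standalone straight-line code
assert cube(2) == generic_power(3, 2) == 8
```

The residual `cube` contains no loop and no reference to `n`; in the same way, code that never uses the reflective tools should compile to standalone code that does not include them.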
That is, without ADTs and ways of combining ADTs, you spend most of your time
manually multiplexing. Without semantic reflection (higher order), you spend
most of your time manually interpreting runtime generated code or manually
compiling higher order code. Without logical specification, you spend most of
your time manually verifying. Without language reflection, you spend most of
your time building user interfaces. Without small grain, you spend most of
your time emulating simple objects with complex ones. Without persistence,
you spend most of your time writing disk I/O (or worse, net I/O) routines.
Without transactions, you spend most of your time locking files. Without
code generation from constraints, you spend most of your time writing
redundant functions that could have been deduced from the constraints.
To conclude, there are essentially two things we fight: the lack of features and power in software, and the artificial barriers that the misdesign of earlier software builds between computer objects and other computer objects, between computer objects and human beings, and between human beings and other human beings.
To conclude, I'll say: Object vs. Project.
Faré -- rideau@clipper.ens.fr