Multics: Lightweight Processes
A pair of memos I wrote about implementing a tasking mechanism on Multics. Mainly for those who are very curious about arcane details of computing history. (03-Mar-1974)
This is a pair of memos I wrote in 1974 when I was a graduate student
working on the Multics project. I OCRed them from the originals and I
expect them to be of interest mainly to those who worked on the project and have
been around too long.
This first memo is simply titled "Processes" and represents some ideas
for implementing lightweight processes on Multics. I actually did implement
and use a version of this. One reason for Unix's success was that its processes
were very lightweight and could thus be used freely as an architectural tool.
As with many such ideas, they were part of the initial Multics design documents
(the MSPM -- Multics System Programming Manual).
I tried to catch the obvious errors in OCRing, but there may be others,
though many of the errors may be original typos.
Scanned in from a memo printed on March 3, 1974
Acknowledgements: I would like to thank Doug Wells,
Dave Reed and Bob Mabee for
some of the ideas below and for the criticisms of my earlier attempts at description.
Disclaimer: This memo is an early attempt to get some of my ideas down on paper.
It is not meant to be read by anyone who does not already agree with the ideas.
It can, I hope, serve as a basis for discussion amongst the converted.
A central concept in Multics is that of a process. Processes form the basis for
communication within Multics via the interprocess communications mechanisms. Processes
are not, however, the only entities which must communicate. The mail (and send_message)
commands provide for communications with more abstract entities. The mailbox (or
com_msgs segment) serves as an intermediary between the sender and receiver and
thus defines the details of the communications. Another example of extended communication
is that between a user and an absentee job via its input parameters. The description
of an absentee job contains a set of initial parameters. But it can't be said that
these parameters serve as messages to a given process; they serve to communicate
to the set of processes that may be created by a single absentee request.
The process is fundamental to Multics, but our current terminology is not sufficient
for describing the relationships between processes. I take a process to consist
of an address space, an execution point and an associated principal identifier.
A process is uniquely identified by its lock id but is normally identified by its
process id. This is both inconvenient and insecure: it is easy to accidentally
communicate with the wrong process, since the lock id is not normally involved in
interprocess communication. I will therefore assume that a process id incorporates
the lock id and is therefore unique for all time. In practice a process may maintain
(via software) its own set of control points in a given ring. At any time only one
of these control points is active, i.e. is being maintained by the system as the
process control point. I will use the term "task" to describe such a control point.
A group of these tasks share the address space of a process.
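The idea of several control points multiplexed within one process can be sketched in modern terms. The following Python sketch is purely illustrative (Scheduler, writer and the shared list are my names, not Multics interfaces): generators stand in for tasks, only one of which is active at a time, while all of them share the process's storage.

```python
# Illustrative sketch, not a Multics mechanism: tasks as multiple control
# points in one process, modeled with generators sharing module storage.

shared = []  # the shared "address space" of the process

class Scheduler:
    """Keeps at most one task active at a time, like the single
    system-maintained control point described in the memo."""
    def __init__(self):
        self.runnable = []

    def spawn(self, gen):
        self.runnable.append(gen)

    def run(self):
        # Round-robin: resume each task until it yields or finishes.
        while self.runnable:
            task = self.runnable.pop(0)
            try:
                next(task)               # task runs until its next yield
                self.runnable.append(task)
            except StopIteration:
                pass                     # task destroyed itself

def writer(name, count):
    for i in range(count):
        shared.append((name, i))         # all tasks see the same storage
        yield                            # give up the control point
```

Running two writers interleaves their output in the shared storage, since each yield hands the control point to the other task.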
An absentee request is an example of a description of a process. A more common
example of a description of a process is the user entry in a project definition
table. When a process is created its initial state is defined by the intersection
of its descriptions. For example, the parameters in the login line are constrained
by the description in the pdt which is in turn constrained by the project's definition
in the system definition table. Similarly absentee jobs are described by the request
in the absentee queue subject to the constraints of the other tables.
While a process serves as the receiver for ipc messages, the message segment
serves as a receiver for its messages. One difficulty with message segments is that
they are passive receivers and must be polled to discover their contents. We can
conceive of a message segment having associated with it a process that is always
waiting for a message to be received and waking up when the message does come. The
problem here is that the life of a message segment is longer than the life of a process.
This difficulty can be handled if we consider a message segment to contain a description
of a process rather than being associated with a process. Whenever a message is received
a process is created from this description to read the message. This forms a more
general model for interprocess communication since we are not constrained to simply
communicating between waiting processes. Instead we can conceive of the message
being sent to the process generated by the description in the message segment. For
simplicity we can call the description itself a potential process since it has the
same characteristics as a blocked process. (These potential processes have other,
more significant implications in that they permit a user to leave a description
of a process that can be activated by any user who has appropriate access. This
provides an alternative to rings for creating protected subsystems and does not
have the flaws in the ring mechanism that come from an attempt to share a common
process and from the fact that rings impose a strict ordering on which subsystem
is protected from which within the process).
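The potential-process idea can be sketched concretely. In this hypothetical Python model (MessageSegment and handler_description are my names, assumed for illustration), the segment stores a description rather than a live process, and each arriving message generates a fresh process from that description to read it.

```python
# Illustrative sketch of a "potential process": the message segment holds a
# process description; each arriving message creates a process from it.

class MessageSegment:
    def __init__(self, description):
        self.description = description   # the stored potential process
        self.log = []

    def receive(self, message):
        # Instead of waking a long-lived waiting process, generate a new
        # process from the stored description to read this message.
        process = self.description()     # "create" the process
        self.log.append(process(message))

def handler_description():
    # Returns a fresh handler each time, like activating a blocked
    # potential process with the same initial state.
    def handler(msg):
        return "handled:" + msg
    return handler
```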
Potential processes are not complete substitutes for kinetic processes; they are
not as rich. Kinetic processes contain much information that has been developed
in the course of their operation and is represented in their bindings. The creation
of a process (even at 100 milliseconds) is a fairly expensive operation and one
can economize by using an existing process rather than creating a new one. Thus
one must have the option of using an existing process to handle messages received
in addition to always creating a new one. In general, a message segment can be viewed
as representing a queue for requests to a family of processes, some of which may
be represented as potential processes until they are needed. When a potential
process is activated it may then go blocked to wait for a new request after processing
its initial request, or it may destroy itself so that a new process must be created
for the next request.
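The economy argument above can be sketched as a small model. Worker and serve are illustrative names I am assuming, not anything from Multics: a worker activated for a request either stays alive to take the next one, or destroys itself so the next request pays the (expensive) creation cost again.

```python
# Sketch of a message segment as a request queue for a family of processes:
# an activated worker may block for the next request or destroy itself.

class Worker:
    def __init__(self, keep_alive):
        self.keep_alive = keep_alive
        self.alive = True

    def handle(self, request):
        result = request.upper()
        if not self.keep_alive:
            self.alive = False   # destroys itself after its initial request
        return result

def serve(requests, keep_alive):
    worker, created = None, 0
    results = []
    for r in requests:
        if worker is None or not worker.alive:
            worker = Worker(keep_alive)   # expensive creation step
            created += 1
        results.append(worker.handle(r))
    return results, created
```

With keep_alive, one creation serves every request; without it, each request creates a process.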
The use of branches in the hierarchy has a number of advantages over the current
IPC mechanisms. It permits access control to be applied to messages. It generalizes
the types of entities that may communicate, and it is more flexible with respect
to the messages that may be sent. It can be implemented by a combination of the
current IPC mechanisms, message segments and a variation on the absentee manager.
There is a special class of interprocess communications that falls outside normal
interprocess communications. The interprocess signal mechanism differs from IPC
in that the messages serve to interrupt a running process. These interrupts are
(unfortunately) converted into pl1 conditions by the fault handling mechanism. The
message can be easily handled by the above scheme by creating a new process instead
of interrupting an existing one. The new process could then have access to stop
or otherwise control the first process if necessary. There are, especially within
the current Multics constraints, some reasons why it might be desirable to interrupt
an existing process. This can be done, however, within the current interprocess
communication mechanisms without the need to restrict the mechanism to four special
signals. The purpose of the IPS mechanism is to force the process to read some messages
without the necessity of going blocked. This can be accomplished more simply by
having one IPS signal that tells the process to interrogate its IPC channels for
an important message. The user would then have a choice of designating which messages
are to be considered important. The use of this "kick" signal and the subsequent
interrogating of IPC channels would have the benefit of separating the processing of
these asynchronous signals from the pl1 condition mechanism and permitting them
to be handled as per-process faults. The fault handlers would then have the option
of converting them into pl1 conditions or processing them in some other manner.
One immediate effect of the new mechanism would be the ability to have a quit signal
be associated with an I/O device instead of a process.
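The single-"kick" scheme can be sketched as follows. Process, send and on_kick are names I am assuming for illustration: an ordinary send just queues a message on a channel, while a kick interrupts the receiver, which then interrogates its channels and handles only those it has designated important.

```python
# Sketch of the proposed single "kick" IPS signal: instead of four special
# signals, a kick tells the process to scan its IPC channels for messages
# on channels the user marked important.

class Process:
    def __init__(self):
        self.channels = {}        # channel name -> list of pending messages
        self.important = set()    # channels the user designated important
        self.handled = []

    def send(self, channel, message, kick=False):
        self.channels.setdefault(channel, []).append(message)
        if kick:
            self.on_kick()        # asynchronous interrupt, not a pl1 condition

    def on_kick(self):
        # Interrogate IPC channels; handle only the important ones.
        for name in list(self.channels):
            if name in self.important:
                self.handled.extend(self.channels.pop(name))
```

Unimportant traffic simply accumulates until the process blocks and reads it in the ordinary way; only a kick forces immediate attention.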
This "kick" signal and the associated mechanism can be implemented easily within
the current Multics system by using IPC channels for the four current IPS signals
and permitting users to send "kick" signals when they deem a message important.
This would, for example, find immediate application in the send_message program.
A reason for hesitating on the implementation is the problem caused by a user maliciously
flooding another's processes with these interrupt signals. This problem is not real
in that it already exists with IPC channels and would not be significantly aggravated
by the additional capability to send interrupts. Even if the mechanism were restricted
to special processes (as is the current IPS mechanism) it would still permit improvements
in such common signals as "quit". By using message segments as intermediaries for
IPC messages one gains, as a byproduct, the ability to check access in the process.
This not only solves the problem of too many interrupts being sent; it also solves
the problem of too many unauthorized IPC messages if all interprocess communication
were required to go through message segments. (In the case of a null message, this
is very inexpensive in that no message need be written in the message segment.)
Additional access modes pertinent to interprocess communication could be added to
the message segment. These would include access to send a wakeup, access to create
a process from its (protected) description and access to send an interrupt to the
process (or processes) associated with the message segment.
One consequence of the mechanisms described above is the ability of users to
make effective use of multiple processes. The mechanisms described are sufficient
to coordinate cooperating processes. Additional mechanism is necessary to permit
processes to control each other. Access must be associated with a process description
and with a process to describe its relationship to other processes. The increased
use of processes points out a deficiency in the current access control mechanism.
Presently one's ability to specify access is limited to a combination of a person's
Id, a project Id and an instance (misnomer) tag. These provide resolution only to
a family of processes authorized by a given user on a given project. Two additional
capabilities are needed. The first is the ability to specify access for a given
process. This can be done by permitting a process id to be a component of a principal
identifier. Secondly, a new family of processes can be identified and should be easy
to describe. This is the family of processes generated from a given process description.
This component is formed from the unique id of the message segment containing the
process description and becomes part of the process's principal identifier.
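A minimal sketch of such an extended principal identifier follows. The four-component tuples and the "*" wildcard convention are assumptions for illustration, not the Multics ACL syntax: the point is that an ACL entry can name either one specific process or the whole family generated from one description.

```python
# Sketch of an extended principal identifier with a fourth component naming
# a specific process or the description (message segment) that generated it.

def match(acl_entry, principal):
    # "*" in an ACL entry matches any value of that component.
    return all(a in ("*", p) for a, p in zip(acl_entry, principal))

# components: (person, project, tag, process_or_description_id)
acl = [
    ("Smith", "Multics", "a", "*"),    # any of Smith's processes
    ("*", "*", "*", "desc_1234"),      # any process from description 1234
]

def has_access(principal):
    return any(match(e, principal) for e in acl)
```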
The following are some of the types of access that processes may have with respect
to one another:
||The user may generate a process. This access is associated with the description
of a process. The generated process will have a principal identifier created
from the principal identifier of the user who created the description, not
from the name of the user who requested the generation of the process. (The
requestor's name is available to the generated process.)
||The user may interrupt the process. This access is associated with the
description of the process and applies to any active process generated from it.
||The user may control this process by starting and stopping it.
||The user may destroy this process.
||The user may edit this process by modifying SCU data or by modifying
the KST information.
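The five access types listed above could be checked like any other mode bits. This is a sketch under my own encoding assumptions (the bit values and the check function are illustrative): each control operation on a process would first verify that the requester's grant includes the required mode.

```python
# Sketch: the five inter-process access modes above as a bitmask checked
# before each control operation. The encoding is an assumption.

GENERATE, INTERRUPT, CONTROL, DESTROY, EDIT = 1, 2, 4, 8, 16

def check(granted, requested):
    """True iff every requested mode bit is present in the grant."""
    return granted & requested == requested
```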
Other relationships between processes must also be defined. For example, it would
be very useful to have two processes whose KSTs are synchronized so that they can
pass address pointers between each other. Also, a process may be defined as being
dependent upon another such that if the superior process dies the inferior one also
dies. This is important as a check against accidental runaway processes.
How would processes be used? This is a fairly open question. In the initial Multics
design much use was to have been made of processes for such tasks as a user's console.
Perhaps it is again time for us to consider such applications.
Scanned in from a document printed March 10, 1974.
This is the second in a series of unedited ramblings about processes. All disclaimers
that apply to the first rambling apply to this one.
The comments in this memo represent a more modest version of the proposals in
the first memo. This memo concentrates more on what can be done with minimum change
to Multics. I still claim that the more general ideas presented in the earlier memo
should be (and can easily be) implemented in order to make processes truly useful.
Since this memo is more concerned with immediate implementation for experimental,
if not practical, purposes, it will go into more detail about the implementation.
- I will define three main levels of sharing between processes.
i. Two processes are not assumed to share anything, or at least very little.
Communication is via an I/O stream. The processes themselves can be considered
as I/O devices with special order calls suited to the needs of controlling them.
ii. The two processes are assumed to be running on the same machine (same
Multics?) and therefore can share segments. Communication via shared storage
can supplement the communication via the I/O stream.
iii. The two processes are assumed to share a name/address space. I have
used the term tasking to describe this relationship. This can be implemented
by multiple execution points associated with a name space or by sharing
the execution point of a single process among multiple subtasks. In more
restricted versions of tasking, the programmer may be permitted to assume
that tasks of a given priority level will never be running simultaneously
with a task at a higher priority level. Thus a high-priority task can assume
itself effectively masked against the effects of lower-priority tasks. I am
not sure whether this is a very useful capability, considering that it requires
one to make major assumptions about one's environment, assumptions that can
easily be wrong.
There is also a form of cooperation between processes that falls between
types i and ii. Two processes can share their address space so that segments
always have the same segment number in the two processes but the two processes
have different name spaces. In practice, this sharing cannot apply to all segments
since there must be some truly per-process segments. Some implementation of
this latter model would simplify the sharing of data between two processes.
(It can be especially useful for list structures).
Many Multics programs make unnecessary assumptions about the stacked nature
of their environment. For example, qedx cannot be called recursively. It detects
recursive calls by somehow seeing if there is another invocation of itself active
on the stack and types a message to the user informing him and asking him if
he wants to abandon the earlier editing. If the other instance is running in
a different task, qedx does not seem to detect the case and returning to the
first invocation results in an error. The solution to this particular problem
is to remove the restrictions against recursion. The only reason that I have
been given for the lack of recursion is to permit programs other than qedx to
reference string buffers by fixed names. Since the names of the buffers are
considered internal information and any operation (such as sorting) can be performed
by writing out, processing and rereading the data, this is a poor excuse.
Other programs with similar problems include edm and archive. (teco is a
notable exception). Another problem is one that I have come across in programming
my own subsystem, wherein there is a need for a per-invocation data base that
is referenced by a number of programs within its subsystem. If all the procedures
were internal to one large procedure the structure can be referenced as global
data. If, however, one wishes to achieve the effect of having a global data
base for the invocation without explicitly passing a reference on all procedure
calls and without the incorporation of all procedures into one larger one, one
can use a static pointer to reference the data. On recursive calls, one saves
the static pointer to the previous invocation and sets it to point to the new
data structure. Upon returning (or unwinding) the pointer is reset to the previous
value. This solution works as long as one can assume that there is only one
instance of the subsystem running at a given time and that one must return from
a given invocation before resuming an earlier one. When this assumption is not
applicable, it is more difficult to achieve the effect of a global data base.
This is one example where a per-task static storage class would be desirable.
One can provide such a storage class by extending the object segment format
(and possibly the pl1 language) to permit the use of per-task static storage
that would be referenced analogously to the linkage section (or Webber's proposed
static storage section) but which would use a special pointer in the base of
the stack that is different for each stack/task. Thus the use of tasks would
make it possible to make effective use of global data bases that are associated
with an invocation of a complete subsystem rather than all instances. This still
does not solve the problem of a subsystem called recursively on a single stack.
The full solution would involve a combination of the method described above
for stacked invocations and a per-task pointer to the data.
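The save/restore discipline for stacked invocations can be sketched directly. The names current, invoke and depth are assumptions for illustration: on each recursive entry the "static pointer" is saved and re-pointed at fresh per-invocation data, and restored on return or unwind.

```python
# Sketch of the saved-static-pointer method for a per-invocation global.

current = None   # the "static pointer" shared by the subsystem's procedures

def invoke(work):
    global current
    saved = current          # save the pointer to the previous invocation
    current = {"depth": (saved["depth"] + 1) if saved else 0}
    try:
        return work()
    finally:
        current = saved      # restored even on an unwind

def depth():
    return current["depth"]  # any procedure can reach the invocation's data
```

As the memo notes, this works only while invocations strictly nest; concurrent tasks would each need their own copy of the pointer.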
Currently there are only two types of processes on Multics: "interactive"
and "absentee". Multics and user programs "know" too much about the characteristics
of each process type for them to be well adapted for use as slave processes.
An interactive job is assumed to have an intelligent human user at a console
to read messages that are typed out and to answer questions in the start_up.ec.
The initializer assumes that it can write directly on the console of an interactive
job. On the other hand, the initializer requires that an absentee job write
all its own messages on its output stream, including the messages saying that
the process cannot be created. (I would assume that this means that a user is
not charged for an unsuccessful interactive login but is charged for an absentee
one that fails.)
The solution to this problem is to remove the arbitrary distinction between
the two types of processes and permit the creator to specify particular attributes
for a process. Also, the initializer should never write on a process' console
directly, but should establish a communication segment in the process directory;
a wakeup would be sent (possibly accompanied by a "kick") to inform the recipient
of the availability of the message. If a process cannot be created, the initializer
can reply on the stream associated with the process or return a code to the creator.
- If we are to be able to create processes, it is necessary to have the ability
to destroy processes. This is currently a serious deficiency in the absentee
job system. One must phone the operator to have an absentee job killed. More
generally, the standard technique for killing a process is to phone the operator
and ask him to do so. This doesn't seem very secure. (On the other hand, is
there the possibility of a better scheme for those who do not have consoles?)
- As part of the attempt to model the quit handling action of the listener
as the creation of a new process (or task) it is recognized that some aspects
of the environment are associated with a task (it seems more reasonable to speak
of a task in this context as opposed to a process) rather than with the whole
process. The most obvious one is the set of I/O attachments of user_input, user_output
and error_output. One can generalize this to say that there should be a table
of attachments that are to be associated with a process and those that are to
be associated with a task. This might not be feasible within the implementation
restrictions of the current I/O switch (executive?). A compromise is to continue
the explicit switching of streams when tasks are switched.
This is, in fact, how Doug Wells' current tasking system works. For example,
one can write an exec_com that does a pl1 compilation in its own process.
(This seemed to have been scrambled in the original.)
- Must use attach since we are writing to the stream for another process
- get a process
- pl1 -table
- The word quit overprinted in one column returns to a listener
- The word exit overprinted in one column destroys the process
In the above example, when the "quit" line is reached one is returned to
the listener in the controlling process but the pl1 process continues to run.
In the current implementation, its output messages will be typed on the user's
console as they are generated. One may perform other work in one's controlling
process. When the compilation is done, one can continue the task that controls
the pl1 process and the next line in the exec_com is read. In this case it exits
and destroys the slave process. (Note that even if there had been a line in
the exec_com after the pl1 line, the quit would have taken effect as soon
as it was read, without waiting for the compilation to complete.) Multiple ec's
can be managing processes independently. When a process is resumed, the appropriate
exec_com stream is associated with user_input (i.e. the one that was associated
with it when the task was quitted).
- Primitives for task coordination must be implemented. These include
||task_block_: like ipc_$block, it places the wait information on the queue of
pending wakeups. It then scans for interesting wakeups that might have
occurred. A process that has received its wakeup is placed on a list
of runnable tasks. The task with highest priority is then resumed. Resuming
a task consists mainly of switching stacks and the I/O attachments mentioned earlier.
||This is like task_block_ except that the current process is placed
immediately on the list of runnable processes.
||This is like the hcs_$wakeup interface except that a kick can be
optionally sent. The kick can be specified explicitly or can be encoded
in the channel name.
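The primitives above can be sketched over simple lists. The data structures here are my assumptions, not the Multics implementation: blocking first scans for a wakeup that already arrived, wakeups with no waiter are remembered as pending, and resumption always picks the highest-priority runnable task.

```python
# Sketch of the coordination primitives: block, wakeup, and resume over a
# priority list of tasks. Illustrative only.

runnable = []      # (priority, task_name); highest priority resumed first
waiting = {}       # channel -> (priority, task_name) blocked on it
pending = []       # wakeups that arrived before anyone blocked on them

def task_block(task, priority, channel):
    if channel in pending:               # scan for wakeups already received
        pending.remove(channel)
        runnable.append((priority, task))
    else:
        waiting[channel] = (priority, task)
    return resume_next()

def task_wakeup(channel, kick=False):
    if channel in waiting:
        runnable.append(waiting.pop(channel))
    else:
        pending.append(channel)          # remember it for a later block
    # a kick would additionally interrupt the target to scan its channels

def resume_next():
    if not runnable:
        return None
    runnable.sort()                      # highest-priority task runs next
    return runnable.pop()[1]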
- As was mentioned earlier, I/O streams can be an effective means of controlling
and organizing tasks and processes. There are two aspects to this, the functions
of control and the functions of communication. The communication function will
be discussed in the next topic.
The control functions can be performed via order calls. For the controlling
process the functions available include:
||Stop an inferior process. An optional event channel can be specified
for reactivating the process. Otherwise the event channel is supplied
by the TOSIM. A call to task_block_ is effected in the associated task.
||Send a wakeup to the associated process to resume its execution
after a stop.
||Executes the specified procedure in the associated process. The
procedure may be specified by an entry variable or a pathname. The TOSIM
will convert the entry variable to or from a character string as required.
(Query: within a given Multics installation, assuming the proper ACLs,
can an internal procedure be executed in a different process than
its parent's? A different task?)
Note that a process is destroyed when its controlling stream is detached.
A "hold" disposition is available to allow the slave process to continue even
when its parent disappears.
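These control functions can be sketched as order calls on a control stream. TaskStream and its order names are assumptions for illustration, not the actual I/O module: stop effects a block in the slave, start resumes it, execute names a procedure to run over there, and detach destroys the slave unless a "hold" disposition was given.

```python
# Sketch of control-stream order calls: stop, start, execute, and the
# "hold" disposition on detach. Illustrative names throughout.

class TaskStream:
    def __init__(self, hold=False):
        self.state = "running"
        self.hold = hold
        self.executed = []

    def order(self, request, arg=None):
        if request == "stop":
            self.state = "stopped"     # effects a task_block_ in the task
        elif request == "start":
            self.state = "running"     # a wakeup resumes its execution
        elif request == "execute":
            self.executed.append(arg)  # run the named procedure there
        else:
            raise ValueError("unknown order: " + request)

    def detach(self):
        # Detaching the controlling stream destroys the slave process
        # unless a "hold" disposition was specified.
        if not self.hold:
            self.state = "destroyed"
```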
- The interface between two tasks can be solely via an I/O stream. For simplicity
I will assume that there is a stream associated with each task. If the name
of a task is "a", then its stream might be called "task_stream.a". The other
end of the stream must also be attached to a task; we may call this attachment
"task_control.a". When a task is running, its user_i/o is attached to its stream.
For example, if we have five tasks: main, a, b, b1, b2, where main controls a
and b, and b controls b1 and b2, the following attachment table might exist while
b is running.
In this example the console is associated with the main task, though it need
not be. When a task attempts to read from its input stream, it automatically
goes blocked if there is no input waiting. Thus, for example, one can easily
implement a system wherein one process simply manages the console and feeds
data to another process by writing to its stream. If the console process is
higher priority, it can respond immediately to requests for echoing. The other
process need not be aware of whether it is speaking directly to a console or
is going through another process. Other models of console handling can be envisioned,
such as requiring that the user explicitly state which process is to be given
each line of input. (This is what is currently done for the Multics initializer