

High-Level Distribution

In distribution platforms that provide a high level of distribution abstraction, communication mechanisms are an integral part of the overall system, so that invocations on remote components are transparent, i.e. indistinguishable from local communication. Two examples are ERLANG [2] and Distributed Oz [13].


ERLANG

ERLANG is a functional programming language designed, according to its authors, for concurrent, real-time, fault-tolerant distributed systems. It was developed by Ericsson and the Ellemtel Computer Science Laboratories for use in Ericsson's Open Transport Platform, an infrastructure for telephony applications, and has since been made available to the public.

With its focus on concurrent programming, ERLANG provides lightweight processes that execute in parallel, similar in concept to threads in other programming languages. The only means of communication between processes is message passing.
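As a minimal sketch of this process model, the following fragment spawns a process and interacts with it exclusively through messages; the module and function names are illustrative only:

```erlang
%% Sketch: a lightweight process that echoes every message it receives.
-module(echo).
-export([start/0, loop/0]).

start() ->
    Pid = spawn(echo, loop, []),     % create a lightweight process
    Pid ! {self(), hello},           % send it a message
    receive
        {Pid, Reply} -> Reply        % wait for the answer
    end.

loop() ->
    receive
        {From, Msg} ->
            From ! {self(), Msg},    % echo the message back to the sender
            loop()
    end.
```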

The language makes it especially easy to program event-based systems, and a popular design choice is to build a concurrent system as a number of communicating finite state machines, a style that lends itself to particularly reliable systems.
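In this style, each state can be written as a function, and a state transition is a tail call made after receiving a message. A small sketch (the unlock code is purely illustrative):

```erlang
%% Sketch: a door modeled as a finite state machine,
%% with one function per state.
-module(door).
-export([start/0, locked/0]).

start() -> spawn(door, locked, []).

locked() ->
    receive
        {unlock, "1234"} -> open();   % correct code: change state
        _ -> locked()                 % anything else: stay locked
    end.

open() ->
    receive
        lock -> locked();             % lock again
        _ -> open()
    end.
```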

Distribution in ERLANG is realized as a special case of concurrency in that a new process is spawned on a remote node. Communication with a remote process is no different from talking to a local process. Since parallelism and messaging are natural mechanisms in ERLANG, distributed programming looks the same as non-distributed programming.
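Concretely, a remote spawn differs from a local one only in the extra node argument; the node name 'worker@host' and the module mymod below are placeholders:

```erlang
%% Sketch: spawning on a remote node. Apart from the first
%% argument, this is identical to a local spawn, and sending
%% to the resulting pid looks exactly like local messaging.
Pid = spawn('worker@host', mymod, loop, []),
Pid ! {self(), hello}.
```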

If a remote process exits or fails, the ERLANG kernel sends special messages to linked processes, which allows the developer to react to a failure in a fault-tolerant distributed system. Real-time is handled only passively: an application can limit the time spent waiting for a message, but it has no control over the delivery of messages.
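Both mechanisms can be sketched in a few lines; Node and mymod:work/0 are placeholders:

```erlang
%% Sketch: linking turns a remote failure into an 'EXIT' message,
%% and a receive clause may give up after a timeout.
process_flag(trap_exit, true),              % receive exits as messages
Pid = spawn_link(Node, mymod, work, []),
receive
    {Pid, Result}      -> {ok, Result};
    {'EXIT', Pid, Why} -> {error, Why}      % the remote process died
after 5000 ->
    timeout                                 % no reply within 5 seconds
end.
```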

Thanks to automatic code loading, ERLANG programs do not need to be concerned with whether the code they want to execute is actually available on a remote node, as it is downloaded automatically by the kernel.

ERLANG fully abstracts the network. Addressing is done using process ids; encoding and transferring a message to a remote process is transparent. Control over quality-of-service and real-time issues is, in the words of the authors, ``soft,'' as no guarantees are given: by timestamping messages and using timeouts, an application can merely detect that a message was not received in time. Interoperability, however, is limited to ERLANG programs running under the control of the ERLANG kernel, which must be running on all nodes. A means of integrating code from foreign programming languages is provided, but it requires reading and writing messages in ERLANG's native data format.

Distributed programming in ERLANG succeeds in a closed environment where the developer has control over all parts of the system and where the software is written from scratch. But since integration with other common programming languages is not possible, the developer must submit to both the language and its paradigms.

Distributed Oz

Distributed Oz is an extension to the Oz language developed at the German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI). Oz is advertised as a concurrent, ``multi-paradigm'' programming language and allows object-oriented as well as functional programming.

Like ERLANG, Oz provides an asynchronous messaging mechanism, called ports, as a means of communication between threads, and the step from local messaging to the distributed exchange of messages is just as short. Synchronous method invocations on objects are realized in Smalltalk fashion by sending a message to the object, and can therefore be remote as well.

Figure 2.2: Object Mobility in Distributed Oz

Instead of using remote method invocations, however, Distributed Oz employs lightweight object mobility. If a message referencing an object is sent to a remote site, a proxy object is first created on the remote end. When a method is invoked on the proxy, it first loads a duplicate of the object's class record, which holds the methods' code, and then transfers the object's state record, of which only a single copy exists. The state pointer on the originating side is then itself replaced by a proxy.

A manager exists on the site where an object was initially located to keep track of its state record. Figure 2.2 shows a scenario where an object A has been accessed remotely. The class record exists on both sites, but the state record is kept on site 2 only. Both sites hold a reference to the manager M, and should A be referenced again on site 1, only the state needs to be moved back.

Whether object mobility pays off is a controversial question and depends on the situation, specifically on the size of the object's state compared to the amount of data actually required by the remote site. Additional network traffic is needed to locate an object before it can be used. Distributed Oz currently does not offer a distinction between mobile and immobile objects; Oz threads are stationary, but they are limited to message-based communication and do not offer an object interface.


Frank Pilhofer