The most basic distribution platforms are no more than communication packages that allow raw data to be sent and received between components. Components are contacted at a known location, and data is exchanged using a predefined protocol. Examples of this kind of distribution platform are BSD sockets and PVM.
It is something of an exaggeration to describe sockets as a distribution platform, since they were designed merely as a means of exchanging data. They are presented here because of their significance as the transport underlying more sophisticated distribution platforms.
PVM, however, is a genuine candidate for a distribution platform, and is discussed here if only to demonstrate the limitations of a low-level approach.
Sockets [67] were originally introduced in version 4.2 of the BSD Unix operating system and have since been added first to other Unix variants, then to other operating systems as well. Today, support for sockets is ubiquitous; it is part of both the Single Unix Specification [17] and the upcoming IEEE P1003.1g Protocol Independent Interfaces standard of the POSIX family. Sockets provide a common programming interface to TCP/IP (Transmission Control Protocol, Internet Protocol [48,49]) and UDP/IP (User Datagram Protocol [47]), the basis of the Internet and similar networks.
Addressing is done using a host name and a port number, a numerical identifier in the range from 0 to 65535. Components are usually stand-alone processes. Servers (processes that expect incoming connections) bind themselves to a port, where they can be contacted by clients. Communication can be connection-oriented (data streams) or connectionless (message-based). In local networks, multicasting can be used to send messages to a group of recipients.
Host names and port numbers must be agreed on by all components of a distributed system. Port numbers are usually the same on all hosts and can be identified by name, using a database. Well-known port numbers can be registered with the Internet Assigned Numbers Authority [53], which thereby provides a primitive naming service.
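As a concrete illustration, the following is a minimal sketch of a client contacting a server through the BSD socket interface. The host name ``server.example.org'' is hypothetical, and the port number of the well-known ``nntp'' service is looked up by name in the local services database; error handling is abbreviated.

    /* Minimal sketch of a TCP client using BSD sockets. The host name
       is hypothetical; the "nntp" service entry is resolved through the
       local services database. */
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct hostent *host = gethostbyname("server.example.org");
        struct servent *serv = getservbyname("nntp", "tcp");
        struct sockaddr_in addr;
        int fd;

        if (host == NULL || serv == NULL)
            return 1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = serv->s_port;     /* already in network byte order */
        memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);

        fd = socket(AF_INET, SOCK_STREAM, 0);   /* connection-oriented */
        if (fd < 0 || connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
            return 1;

        /* From here on, raw octets are exchanged with read() and write();
           protocol and data format are entirely up to the application. */
        close(fd);
        return 0;
    }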
Data transfer is done on a binary level, as chunks of uninterpreted octets, and recipients must know how the data is to be read and interpreted. Problems may occur in a heterogeneous environment, because the binary representation of integer and floating-point values may differ depending on programming language and hardware. Applications concerned with interoperability must take care to convert their data into a ``common format'' before sending it, so that it can be converted back into the potentially different native format on the receiving end. Many existing applications use a string-based protocol, in which all data is encoded as text.
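For integer values, the socket environment itself provides such a common format in the form of a well-defined network byte order; the following sketch shows the standard conversion routines. No comparable routines exist for floating-point values, which is one reason string-based protocols remain popular.

    #include <netinet/in.h>   /* htonl(), ntohl() */
    #include <stdint.h>

    /* Sender side: convert a 32-bit integer into the "common format"
       (network byte order) before writing its four octets. */
    uint32_t encode(uint32_t value)
    {
        return htonl(value);
    }

    /* Receiver side: convert the four received octets back into the
       potentially different native format. */
    uint32_t decode(uint32_t wire)
    {
        return ntohl(wire);
    }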
Sockets thus give the developer little help with distributed programming. Addressing is intuitive, using the Internet host name and an agreed-upon port number, but there is no support for synchronization or data encoding: the application itself must establish a protocol and package its data.
Still, their wide availability even in heterogeneous environments and their low cost (no cost at all if the systems are connected to the Internet or an intranet anyway) make socket-based programming attractive.
An example of a socket-based distributed system is the world-wide Usenet network. Most hosts on the Internet run the NNTP (Network News Transfer Protocol) service [18], which can be contacted on the well-known port 119. All participating hosts cooperate in distributing messages by forwarding incoming news to other directly connected hosts, so that the messages ultimately arrive everywhere.
Classical IP networks can offer only best-effort data transmission: network links have limited bandwidth and are shared fairly among numerous users, with each datagram forwarded on a first-come, first-served basis.
Asynchronous Transfer Mode (ATM) [3] is a new ``telecommunications concept'' that includes support for reliable quality of service guarantees over a shared network. Some operating systems like Linux implement the socket interface over ATM [1], so that sockets can be guaranteed a fixed bandwidth and latency according to a ``User-Network Traffic Contract.''
Many higher-level distribution platforms build upon sockets, and generically it makes no difference whether the sockets are implemented over a classical IP network or over ATM; quality-of-service features, however, will in most cases require an ATM basis.
Whereas sockets are basically a means of data transport, the Parallel Virtual Machine (PVM) [11] was explicitly designed with parallel distributed systems in mind. Its development started in 1989 at Oak Ridge National Laboratory with the intention of providing an efficient platform for parallel processing, using multiple workstations rather than supercomputers. Despite its roots in number crunching, PVM can be used as a universal distribution platform.
A virtual machine is composed of one or more hosts, each running a ``PVM Daemon.'' Some of these hosts may themselves be multiprocessor machines; each processor is then identified as a node. Components of a distributed system, tasks identified by a unique task identifier (tid), can be started on any node, and running tasks can in turn spawn new child tasks.
Communication among tasks is done by asynchronous messaging and signaling. The sender fills a message buffer with data and then addresses the buffer to one or more recipients, using their tids. A message buffer can hold a mixture of basic data: octets, integer values, floating-point values and strings.
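In C, packing and sending such a buffer might look like the following minimal sketch; the message tag MSG_DATA is a hypothetical application-defined constant, and the recipient's tid is assumed to be known.

    #include <pvm3.h>

    #define MSG_DATA 1   /* hypothetical application-defined tag */

    void send_values(int dest_tid)
    {
        int    count    = 3;
        int    values[] = { 1, 2, 3 };
        double scale    = 0.5;

        pvm_initsend(PvmDataDefault);    /* new buffer, default encoding */
        pvm_pkint(&count, 1, 1);         /* pack an integer ...          */
        pvm_pkint(values, count, 1);     /* ... an integer array ...     */
        pvm_pkdouble(&scale, 1, 1);      /* ... and a double             */
        pvm_send(dest_tid, MSG_DATA);    /* asynchronous send            */
    }

    void recv_values(void)
    {
        int    count, values[3];         /* sized to match the sender */
        double scale;

        pvm_recv(-1, MSG_DATA);          /* blocking receive, any sender */
        pvm_upkint(&count, 1, 1);        /* unpack in the same order     */
        pvm_upkint(values, count, 1);
        pvm_upkdouble(&scale, 1, 1);
    }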
Interoperability is ensured by encoding messages with the External Data Representation (XDR) standard [66] when they are transported between hosts. Implementations of PVM first flourished on Unix, but have since been ported to other operating systems; the two supported programming languages are C and Fortran.
Usually, PVM applications are based on a Master-Slave design, in which one master task starts up a number of slaves and then distributes small pieces of a computation to each of them. The slaves then operate in parallel, and the master waits to collect the results and to put the pieces together. In the world of high-performance computing, PVM has become such a success that it has been ported to and optimized for a wide range of massively parallel computers by their vendors, using their proprietary means of transferring messages, or even shared memory.
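A master task in this design might look roughly like the following sketch, which spawns a hypothetical slave executable named ``slave'' and collects one integer result from each slave; the tag MSG_RESULT is again an assumed application-defined constant, and error handling is omitted.

    #include <pvm3.h>

    #define NSLAVES    4
    #define MSG_RESULT 2   /* hypothetical application-defined tag */

    int main(void)
    {
        int tids[NSLAVES];
        int i, result, sum = 0;

        /* start the slave tasks on any available nodes */
        pvm_spawn("slave", (char **) 0, PvmTaskDefault, "", NSLAVES, tids);

        /* wait for one result message from each slave */
        for (i = 0; i < NSLAVES; i++) {
            pvm_recv(-1, MSG_RESULT);
            pvm_upkint(&result, 1, 1);
            sum += result;
        }

        pvm_exit();   /* leave the virtual machine */
        return 0;
    }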
PVM abstracts from the network by using task identifiers for addressing and a common data encoding, with message passing as the synchronization paradigm. Still, components must agree on a protocol specifying when messages can be sent or received, and must know how to interpret the data contained in a message.
Generic distributed systems could, in principle, be built upon PVM, but its features are too limited and too cumbersome to use. In fact, PVM is a good demonstration of why an abstraction of a component's interface and OSI's presentation layer are needed to provide more transparency in passing data: the composition of messages is complicated enough as it is, and small changes to the message format require changing the code of all other components.