COMMUNICATING PROCESS ARCHITECTURES 2005
Concurrent Systems Engineering Series
Series Editors: M.R. Jane, J. Hulskamp, P.H. Welch, D. Stiles and T.L. Kunii
Volume 63
Previously published in this series:
Volume 62, Communicating Process Architectures 2004 (WoTUG-27), I.R. East, J. Martin,
P.H. Welch, D. Duce and M. Green
Volume 61, Communicating Process Architectures 2003 (WoTUG-26), J.F. Broenink and
G.H. Hilderink
Volume 60, Communicating Process Architectures 2002 (WoTUG-25), J.S. Pascoe, P.H. Welch,
R.J. Loader and V.S. Sunderam
Volume 59, Communicating Process Architectures 2001 (WoTUG-24), A. Chalmers, M. Mirmehdi
and H. Muller
Volume 58, Communicating Process Architectures 2000 (WoTUG-23), P.H. Welch and
A.W.P. Bakkers
Volume 57, Architectures, Languages and Techniques for Concurrent Systems (WoTUG-22),
B.M. Cook
Volumes 54–56, Computational Intelligence for Modelling, Control & Automation,
M. Mohammadian
Volume 53, Advances in Computer and Information Sciences ’98, U. Güdükbay, T. Dayar,
A. Gürsoy and E. Gelenbe
Volume 52, Architectures, Languages and Patterns for Parallel and Distributed Applications
(WoTUG-21), P.H. Welch and A.W.P. Bakkers
Volume 51, The Network Designer’s Handbook, A.M. Jones, N.J. Davies, M.A. Firth and
C.J. Wright
Volume 50, Parallel Programming and JAVA (WoTUG-20), A. Bakkers
Volume 49, Correct Models of Parallel Computing, S. Noguchi and M. Ota
Volume 48, Abstract Machine Models for Parallel and Distributed Computing, M. Kara, J.R. Davy,
D. Goodeve and J. Nash
Volume 47, Parallel Processing Developments (WoTUG-19), B. O’Neill
Volume 46, Transputer Applications and Systems ’95, B.M. Cook, M.R. Jane, P. Nixon and
P.H. Welch
Transputer and OCCAM Engineering Series
Volume 45, Parallel Programming and Applications, P. Fritzson and L. Finmo
Volume 44, Transputer and Occam Developments (WoTUG-18), P. Nixon
Volume 43, Parallel Computing: Technology and Practice (PCAT-94), J.P. Gray and F. Naghdy
Volume 42, Transputer Research and Applications 7 (NATUG-7), H. Arabnia
Volume 41, Transputer Applications and Systems ’94, A. de Gloria, M.R. Jane and D. Marini
Volume 40, Transputers ’94, M. Becker, L. Litzler and M. Tréhel
ISSN 1383-7575
Communicating Process
Architectures 2005
WoTUG-28
Edited by
Jan F. Broenink
University of Twente, The Netherlands
Herman W. Roebbers
Philips TASS, The Netherlands
Johan P.E. Sunter
Philips Semiconductors, The Netherlands
Peter H. Welch
University of Kent, United Kingdom
and
David C. Wood
University of Kent, United Kingdom
Proceedings of the 28th WoTUG Technical Meeting,
18–21 September 2005, Technische Universiteit Eindhoven,
The Netherlands
Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
© 2005 The authors.
All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, without prior written permission from the publisher.
ISBN 1-58603-561-4
Library of Congress Control Number: 2005932067
Publisher
IOS Press
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: order@iospress.nl
Distributor in the UK and Ireland
IOS Press/Lavis Marketing
73 Lime Walk
Headington
Oxford OX3 7AD
England
fax: +44 1865 750079
Distributor in the USA and Canada
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: iosbooks@iospress.com
LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.
PRINTED IN THE NETHERLANDS
Communicating Process Architectures 2005
Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.)
IOS Press, 2005
© 2005 The authors. All rights reserved.
Preface
We are at the start of a new CPA conference. Communicating Process Architectures 2005
marks the first time that this conference has been organized by an industrial company (Phil-
ips) in co-operation with a university (Technische Universiteit Eindhoven). We see that this
also marks the growing awareness of the ideas characterized by ‘Communicating Process
Architecture’ and their growing adoption by industry beyond their traditional base in
safety-critical systems and security.
The complexity of modern computing systems has become so great that no one person –
maybe not even a small team – can understand all aspects and all interactions. The only
hope of making such systems work is to ensure that all components are correct by design
and that the components can be combined to achieve scalability. A crucial property is that
the cost of making a change to a system depends linearly on the size of that change – not on
the size of the system being changed. Of course, this must be true whether that change is a
matter of maintenance (e.g. to take advantage of upcoming multiprocessor hardware) or the
addition of new functionality. One key is that system composition (and disassembly) intro-
duces no surprises. A component must behave consistently, no matter the context in which
it is used – which means that component interfaces must be explicit, published and free
from hidden side-effect. Our view is that concurrency, underpinned by the formal process
algebras of Hoare’s Communicating Sequential Processes and Milner’s π-Calculus, pro-
vides the strongest basis for the development of technology that can make this happen.
Once again we offer strongly refereed high-quality papers covering many differing as-
pects: system design and implementation (for both hardware and software), tools (concur-
rent programming languages, libraries and run-time kernels), formal methods and applica-
tions. These papers are presented in a single stream so you won’t have to miss out on any-
thing. As always we have plenty of space for informal contact and we don’t have to worry
about the bar closing at half ten!
We are pleased to have keynote speakers such as Ad Peeters of Handshake Solutions
and Guy Broadfoot of Verum, proving that you can actually make a profitable business using
CSP as your guiding principle in the design of concurrent systems, be they hardware or
software. The third keynote by IBM Chief Architect Peter Hofstee assures us that CSP was
also used in the design of the communication system of the recent Cell processor, jointly
developed by IBM, Sony and Toshiba. The fourth keynote talk is by Paul Stravers of Phil-
ips Semiconductors on the Wasabi multiprocessor architecture.
We anticipate that you will have a very fruitful get-together and hope that it will pro-
vide you with as much inspiration and motivation as we have always experienced.
We thank the authors for their submissions, the Programme Committee for their hard
work in reviewing the papers, and Harold Weffers and Maggy de Wert (of TUE) for making
the arrangements for this meeting. Finally, we are especially grateful to Fred Barnes (of the
University of Kent) for his essential technical expertise and time in the preparation of these
proceedings.
Herman Roebbers (Philips TASS)
Peter Welch and David Wood (University of Kent)
Johan Sunter (Philips Semiconductors)
Jan Broenink (University of Twente)
Programme Committee
Prof. Peter Welch, University of Kent, UK (Chair)
Dr. Alastair Allen, Aberdeen University, UK
Prof. Hamid Arabnia, University of Georgia, USA
Dr. Fred Barnes, University of Kent, UK
Dr. Richard Beton, Roke Manor Research Ltd, UK
Dr. John Bjorndalen, University of Tromso, Norway
Dr. Marcel Boosten, Philips Medical Systems, The Netherlands
Dr. Jan Broenink, University of Twente, The Netherlands
Dr. Alan Chalmers, University of Bristol, UK
Prof. Peter Clayton, Rhodes University, South Africa
Dr. Barry Cook, 4Links Ltd., UK
Ms. Ruth Ivimey-Cook, Stuga Ltd., UK
Dr. Ian East, Oxford Brookes University, UK
Dr. Mark Green, Oxford Brookes University, UK
Mr. Marcel Groothuis, University of Twente, The Netherlands
Dr. Michael Goldsmith, Formal Systems (Europe) Ltd., Oxford, UK
Dr. Kees Goossens, Philips Research, The Netherlands
Dr. Gerald Hilderink, Enschede, The Netherlands
Mr. Christopher Jones, British Aerospace, UK
Prof. Jon Kerridge, Napier University, UK
Dr. Tom Lake, InterGlossa, UK
Dr. Adrian Lawrence, Loughborough University, UK
Dr. Roger Loader, Reading, UK
Dr. Jeremy Martin, GSK Ltd., UK
Dr. Stephen Maudsley, Bristol, UK
Mr. Alistair McEwan, University of Surrey, UK
Prof. Brian O'Neill, Nottingham Trent University, UK
Prof. Chris Nevison, Colgate University, New York, USA
Dr. Denis Nicole, University of Southampton, UK
Prof. Patrick Nixon, University College Dublin, Ireland
Dr. James Pascoe, Bristol, UK
Dr. Jan Pedersen, University of Nevada, Las Vegas
Dr. Roger Peel, University of Surrey, UK
Ir. Herman Roebbers, Philips TASS, The Netherlands
Prof. Nan Schaller, Rochester Institute of Technology, New York, USA
Dr. Marc Smith, Colby College, Maine, USA
Prof. Dyke Stiles, Utah State University, USA
Dr. Johan Sunter, Philips Semiconductors, The Netherlands
Mr. Oyvind Teig, Autronica Fire and Security, Norway
Prof. Rod Tosten, Gettysburg University, USA
Dr. Stephen Turner, Nanyang Technological University, Singapore
Prof. Paul Tynman, Rochester Institute of Technology, New York, USA
Dr. Brian Vinter, University of Southern Denmark, Denmark
Prof. Alan Wagner, University of British Columbia, Canada
Dr. Paul Walker, 4Links Ltd., UK
Mr. David Wood, University of Kent, UK
Prof. Jim Woodcock, University of York, UK
Ir. Peter Visser, University of Twente, The Netherlands
Contents
Preface v
Herman Roebbers, Peter Welch, David Wood, Johan Sunter and Jan Broenink
Programme Committee vi
Interfacing with Honeysuckle by Formal Contract 1
Ian East
Groovy Parallel! A Return to the Spirit of occam? 13
Jon Kerridge, Ken Barclay and John Savage
On Issues of Constructing an Exception Handling Mechanism for CSP-Based
Process-Oriented Concurrent Software 29
Dusko S. Jovanovic, Bojan E. Orlic and Jan F. Broenink
Automatic Handel-C Generation from MATLAB® and Simulink® for Motion Control with an FPGA 43
Bart Rem, Ajeesh Gopalakrishnan, Tom J.H. Geelen and Herman Roebbers
JCSP-Poison: Safe Termination of CSP Process Networks 71
Bernhard H.C. Sputh and Alastair R. Allen
jcsp.mobile: A Package Enabling Mobile Processes and Channels 109
Kevin Chalmers and Jon Kerridge
CSP++: How Faithful to CSPm? 129
W.B. Gardner
Fast Data Sharing within a Distributed, Multithreaded Control Framework for
Robot Teams 147
Albert Schoute, Remco Seesink, Werner Dierssen and Niek Kooij
Improving TCP/IP Multicasting with Message Segmentation 155
Hans Henrik Happe and Brian Vinter
Lazy Cellular Automata with Communicating Processes 165
Adam Sampson, Peter Welch and Fred Barnes
A Unifying Theory of True Concurrency Based on CSP and Lazy Observation 177
Marc L. Smith
The Architecture of the Minimum intrusion Grid (MiG) 189
Brian Vinter
Verification of JCSP Programs 203
Vladimir Klebanov, Philipp Rümmer, Steffen Schlager and Peter H. Schmitt
Architecture Design Space Exploration for Streaming Applications through
Timing Analysis 219
Maarten H. Wiggers, Nikolay Kavaldjiev, Gerard J.M. Smit
and Pierre G. Jansen
A Foreign-Function Interface Generator for occam-pi 235
Damian J. Dimmich and Christian L. Jacobsen
Interfacing C and occam-pi 249
Fred Barnes
Interactive Computing with the Minimum intrusion Grid (MiG) 261
John Markus Bjørndalen, Otto J. Anshus and Brian Vinter
High Level Modeling of Channel-Based Asynchronous Circuits Using Verilog 275
Arash Saifhashemi and Peter A. Beerel
Mobile Barriers for occam-pi: Semantics, Implementation and Application 289
Peter Welch and Fred Barnes
Exception Handling Mechanism in Communicating Threads for Java 317
Gerald H. Hilderink
R16: A New Transputer Design for FPGAs 335
John Jakson
Towards Strong Mobility in the Shared Source CLI 363
Johnston Stewart, Paddy Nixon, Tim Walsh and Ian Ferguson
gCSP occam Code Generation for RMoX 375
Marcel A. Groothuis, Geert K. Liet and Jan F. Broenink
Assessing Application Performance in Degraded Network Environments:
An FPGA-Based Approach 385
Mihai Ivanovici, Razvan Beuran and Neil Davies
Communication and Synchronization in the Cell Processor (Invited Talk) 397
H. Peter Hofstee
Homogeneous Multiprocessing for Consumer Electronics (Invited Talk) 399
Paul Stravers
Handshake Technology: High Way to Low Power (Invited Talk) 401
Ad Peeters
If Concurrency in Software Is So Simple, Why Is It So Hard? (Invited Talk) 403
Guy Broadfoot
Author Index 405
Communicating Process Architectures 2005
Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.)
IOS Press, 2005
Interfacing with Honeysuckle
by Formal Contract
Ian EAST
Dept. for Computing, Oxford Brookes University, Oxford OX33 1HX, England
ireast@brookes.ac.uk
Abstract. Honeysuckle [1] is a new programming language that allows systems to
be constructed from processes which communicate under service (client-server or
master-servant) protocol [2]. The model for abstraction includes a formal definition of
both service and service-network (system or component) [3]. Any interface between
two components thus forms a binding contract which will be statically verified by the
compiler. An account is given of how such an interface is constructed and expressed in
Honeysuckle, including how it may encapsulate state, and how access may be shared
and distributed. Implementation is also briefly discussed.
Keywords. Client-server protocol, compositionality, interfacing, component-based
software development, deadlock-freedom, programming language.
Introduction
The Honeysuckle project has two motivations. First, is the need for a method by which to
design and construct reactive (event-driven) and concurrent systems free of pathological be-
haviour, such as deadlock. Second, is the desire to design a new programming language that
builds on the success of occam [4] and profits from all that has been learned in two decades
of its use [5].
occam already has one worthy successor in occam-π which extends the original lan-
guage to support the development of distributed applications [6]. Both processes and chan-
nels thus become mobile. Honeysuckle is more conservative and allows mobility only of objects.
Emphasis has instead been placed on securing integrity within the embedded application
domain. Multiple offspring are testimony to the innovative vigour of occam.
Any successor must preserve its salient features. occam facilitates the natural expression
of concurrency without semaphore or monitor. It possesses transparent, and mostly formal,
semantics, based upon the theory of Communicating Sequential Processes (CSP) [7,8]. It is
also compositional, in that it is rendered inherently free of side-effects by the strict separation
of value and action (the changing of value).
occam also had weaknesses that limited its commercial potential. It offered poor
support for the expression of data structure and none for dynamic (abstract) data types. While
processes afford encapsulation and allow effective system modularity, there is also no support
for project (source code) modularity. One cannot collect related definitions in any kind of
reusable package. Also, the ability only to copy a value, and not pass access to an object, to
a parallel process caused inefficiency, and lay in contrast with the passing of parameters to a
sequential procedure.
Perhaps the most significant factor limiting the take-up of occam has been the additional
threats to security against error that come with concurrency; most notably, deadlock. Jeremy
Martin successfully brought together theoretical work on deadlock-avoidance using CSP with
the effective design patterns for process-oriented systems introduced by Peter Welch et al.
[9,10,11,12]. The result was a set of formal design rules, each proven to guarantee deadlock-
freedom within a CSP framework.
By far the most widely applicable design rule relies on a formal service (client-server)
protocol to define a model for system architecture. This idea originated with Per Brinch-
Hansen [2] in the study of operating systems. Service architecture has a wide domain of
application because it can abstract a large variety of systems, including any that can be ex-
pressed using channels, as employed by occam. However, architecture is limited to hierar-
chical structure because of a design rule that requires the absence of any directed circuit in
service provision, in order to guarantee freedom from deadlock.
A formal model for the abstraction of systems with service architecture has been pre-
viously given [3], based upon the rules employed by Martin. This separates the abstraction
of service protocol and service network component, and shows how the definition of system
and component can be unified (a point to be revisited in the next section). Furthermore, the
model incorporates prioritisation, which not only offers support for reactive systems (that
typically prioritise event response), but also liberates system architecture from the constraint
of hierarchical (tree) structure. Finally, a further proof of the absence of deadlock was given,
subject to a new design rule.
Prioritised service architecture (PSA) presents the opportunity to build a wide range of
reactive/concurrent systems, guaranteed free of deadlock. However, it is too much to expect
any designer to take responsibility for the static verification of many formal design rules.
Specialist skills would be required. Even then, mistakes would be made. In order to ease
design and implementation, a new programming language is required. The compiler can then
automate all verification.
Honeysuckle seeks to combine the ambition for such a language with that for a succes-
sor to occam. It renders systems with PSA simple to derive and express, while retaining a
formal guarantee of deadlock-freedom, without resort to any specialist skill or tool beyond
the compiler. Its design is now complete and stable. A compiler is under construction and
will be made available free of charge.
This paper presents a detailed account of the programming of service protocol and the
construction of an interface for system or component in Honeysuckle. In so doing it continues
from the previous language overview [1]. We begin by considering the problem of modular
software composition and the limitations of existing object- and process-oriented languages.
1. The Problem of Composition
While occam is compositional in the construction of a monolithic program, it is not so with
regard to system modularity. In order to recursively compose or decompose a system, we
require:
• some components that are indivisible
• that compositions of components are themselves valid components
• that behaviour of any component is manifest in its interface, without reference to any
internal structure
Components whose definition complies with all the above conditions may be termed
compositional with regard to some operator or set of operators. As alluded to earlier, it has
been shown how service network components (SNCs) may be defined in such a way as to
satisfy the first two requirements when subject to parallel composition [3].
A corollary is that any system forms a valid component, since it is (by definition) a com-
position. Another corollary, vital to all forms of engineering, is that it is then possible to sub-
stitute any component with another, possessing the same interface, without affecting either
design or compliance with specification. Software engineering now aspires to this principle
[13].
Clearly, listing a series of procedures, with given parameters, or a series of channels,
with associated data types, does little to describe object or process. To substitute one process
with another that simply sports the same channels would obviously be asking for trouble. A
much richer language is called for, in which to describe an interface.
One possibility is to resort to Floyd-Hoare logic [14,15,16] and impose formal pre- and
post-conditions on each procedure (‘method’) or channel, and maintain invariants associated
with each component (process or object class). However, this would require effectively the
development of a language to suit each individual application and is somewhat cumbersome
and expensive. It also requires special skill. Perhaps for that reason, such an explicitly for-
mal approach has not found favour in much of industry. Furthermore, no other branch of
engineering resorts to such powerful methods.
Meyer introduced the expression design by contract [17], to which he devotes an entire
chapter of his textbook on object-oriented programming [18]. This would seem to be just
a particular usage of invariants and pre- and post-conditions, but it does render clear the
principle that some protocol must precede composition and be verifiable.
The difficulty that is peculiar to software, and that does not apply (often) to, say, me-
chanical engineering, is, of course, that a component is likely to be capable of complex be-
haviour, responding in a unique and perhaps extended manner to each possible input com-
bination. Not many mechanical systems possess memory and the ability to change their re-
sponse in perhaps a highly non-linear fashion. However, many electronic systems do possess
significantly complex behaviour, yet have interfaces specified without resort to full first-order
predicate calculus. Electronic engineers expect to be able to substitute components according
to somewhat more specific interface description.
One possibility for software component interface description, that is common with hard-
ware, is a formal communication protocol detailing the order in which messages are ex-
changed, together with their type and structure. In this way, a binding and meaningful con-
tract is espoused. Verification can be performed via the execution of an appropriate “state-
machine” (finite-state automaton (FSA)).
Marcel Boosten proposed just such a mechanism to resolve problems encountered upon
integration under component-based software development [19]. These included race condi-
tions, re-entrant call-backs, and inconsistency between component states. He interposed an
object between components that would simulate an appropriate FSA.
Communication protocol can provide an interface that is both verifiable and sufficiently
rich to at least reduce the amount of logic necessary for an adequate definition, if not eliminate
it altogether.
In Honeysuckle, an interface comprises a list of ports, each of which corresponds to one
end (client or provider) of a service and forms an attribute of the component. Each service
defines a communication protocol that is translated by the compiler into an appropriate FSA.
Conformance to that protocol is statically verifiable by the compiler.
Static verification is to be preferred wherever possible for the obvious reason that errors
can be safely corrected. Dynamic verification can be compared to checking your boat after
setting out to sea. Should you discover a hole, there is little you can then do but sink. Dis-
covering an error in software that is deployed and running rarely leaves an opportunity for
effective counter-measures, still less rectification. Furthermore, dynamic verification imposes
a performance overhead that may well prove significant, especially for low-latency reactive
applications.
It is thus claimed here that (prioritised) service architecture is an ideal candidate for
secure component-based software development (CBSD).
Honeysuckle also provides balanced abstraction between object and process. Both static
and dynamic object composition may be transparently expressed, without recourse to any
explicit reference (pointer). Distributed applications are supported with objects mobile be-
tween processes. Together, object and service abstraction affords a rich language in which to
express the interface between processes composed in either sequence or parallel.
2. Parallel Composition and Interfacing in Honeysuckle
2.1. Composition and Definition
Honeysuckle interposes “clear blue water” between system and project modularity. Each
definition of process, object, and service, is termed an item. Items may be gathered into a
collection. Items and collections serve the needs of separated development and reuse.
Processes and objects are the components from which systems are composed, and to-
gether serve the needs of system abstraction, design, and maintenance. Every object is owned
by a single process, though ownership may be transferred between processes at run-time.
Here, we are concerned only with the programming of processes and their service interface.
A program consists of one or more item definitions, including at least one of a process.
For example:
definition of process greet
imports
  service console from Environment
process greet :
{
  interface
    client of console
  defines
    String value greeting : "Hello world!n"
  send greeting to console
}
This defines a unique process greet that has a single port consuming a service named
console as interface. The console service is assumed provided by the system environment,
which is effectively another process composed in parallel (which must include “provider
of console” within its interface description). Figure 1 shows how both project and system
modularity may be visualized or drawn.
Figure 1. Visualizing both project and system modularity.
The left-hand drawing shows the item defining process greet importing the definition of
service console. On the right, the process is shown running as a client of that service.
Braces (curly brackets) denote the boundary of block scope, not sequential construction,
as in C or Java. They may be omitted where no context is given, and thus no indication of
scope required.
A process may be defined inline or offline in Honeysuckle with identical semantics.
When defined inline, any further (offline) definitions must be imported above the description
of the parent process.
...
{
  interface
    client of console
  defines
    String greeting : "Hello world!n"
  send greeting to console
}
...
An inline definition is achieved simultaneously with command issue (greet!).
A process thus defined can still be named, facilitating recursion. For example, a proce-
dure to create a new document in, say, a word processor might include the means by which a
user can create a further document:
...
process new_document :
{
  ... context
  ...
  ...
  ...
  new_document
}
...
2.2. Simple Services
If all the console service does is eat strings it is sent, it could be very simply defined:
definition of service console
imports
  object class String from StandardTypes
service console :
  receive String
This is the sort of thing a channel can do — simply define the type of value that can be
transmitted. Any such simple protocol can be achieved using a single service primitive. This
is termed a simple service. Note that it is expressed from the provider perspective. The client
must send a string.
One further definition is imported, of a string data type from a standard library — part of
the program environment. It was not necessary for the definition of process greet to directly
import that of String. Definitions in Honeysuckle are transparent. Since that of greet can see
that of console, it can also see that of String. For this reason, no standard data type need be
imported to an application program.
If more than one instance of a console service is required then one must define a class of
service, perhaps called Console:
definition of service class Console
...
It is often very useful to communicate a “null datum” — a signal:
definition of service class Sentinel
service class Sentinel :
  send signal
This example makes an important point. A service definition says nothing about when the
signal is sent. That will depend on that of the process that provides it. Any service simply acts
as a template governing the communication undertaken between two (or more) processes.
Signal protocol illustrates a second point, also of some importance. The rules governing
the behaviour of every service network component (SNC) [3] do not require any service to
necessarily become available immediately. This allows signal protocol to be used to synchro-
nize two processes, where either may arrive first.
2.3. Service Construction and Context
Service protocol can provide a much richer interface, and thus tighter component specifica-
tion, by constraining the order in which communications occur. Perhaps the simplest example
is of handshaking, where a response is always made to any request:
definition of service class Console
imports
  object class String from Standard_Types
service class Console :
  sequence
    receive String
    send String
Any process implementing a compound service, like the above, is more tightly con-
strained than with a simple service.
A rather more sophisticated console might be subject to a small command set and would
behave accordingly:
service class Console :
{
  defines
    Byte write : #01
    Byte read : #02
  names
    Byte command
  sequence
    receive command
    if command
      write
        acquire String
      read
        sequence
          receive Cardinal
          transfer String
...
Now something strange has happened. A service has acquired state. While strange it may
seem, there is no cause for alarm. Naming within a service is ignored within any process that
implements it (either as client or provider). It simply allows identification between references
within a service definition, and so allows a decision to be taken according to the intended object
or value. This leaves control over all naming with the definition of process context.
One peculiarity to watch out for is illustrated by the following:
service class Business :
{
  ...
  sequence
    acquire Order
    send Invoice
    if
      acquire Payment
        transfer Item
      otherwise
        skip
}
It might at first appear that payment will never be required and that the service will always
terminate after the dispatch of (a copy of) the invoice. Such is not the case. The above def-
inition allows either payment to be acquired, then an item transferred, or no further transac-
tion between client and provider. It simply endorses either as legitimate. Perhaps the busi-
ness makes use of a timer service and decides according to elapsed time whether to accept or
refuse payment if/when offered.
Although it makes sense, any such protocol is not legitimate because it does not conform
to the formal conditions defining service protocol [3]. The sequence in which communica-
tions take place must be agreed between client and provider. Agreement can be made as late
as desired but it must be made. Here, at the point of selection (if) there is no agreement.
Selection and repetition must be undertaken according to mutually recorded values, which is
why a service may require state.
A compound service may also be constructed via repetition. It might seem unnecessary,
given that a service protocol is inherently repeatable anyway, but account must be taken of
other associated structure. For example, the following might be a useful protocol for copying
each week between two diaries:
service diary :
{
  ...
  sequence
    repeat
      for each WeekDay
        send day
    send week
}
It also serves as a nice illustration of the Honeysuckle use of an enumeration as both data
type and range.
2.4. Implementation and Verification
Any service could be implemented in occam, using at most two channels — one in each
direction of data flow. Like a channel, a service is implemented using rendezvous. Because,
within a service, communications are undertaken strictly in sequence, only a single ren-
dezvous is required. As with occam, the rendezvous must be initially empty and then occu-
pied by the first party to become ready, which must render apparent the location of, or for,
any message and then wait.
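As an illustration only (not the Honeysuckle run-time itself, whose code is not given here), the blocking behaviour of such a single rendezvous can be sketched in Java; the class and method names below are my own, and a SynchronousQueue stands in for the shared location, since it likewise makes whichever party arrives first wait until its partner completes the transfer.

import java.util.concurrent.SynchronousQueue;

// One service, one rendezvous: the slot is initially empty and whichever party
// becomes ready first blocks until the other completes the communication.
final class ServiceRendezvous {
  private final SynchronousQueue<Object> slot = new SynchronousQueue<Object>();

  void send(Object message) throws InterruptedException {
    slot.put(message);    // blocks until the partner takes the message
  }

  Object receive() throws InterruptedException {
    return slot.take();   // blocks until the partner offers a message
  }
}

Because the communications within a service are strictly sequential, one such slot per service is enough, as the text above observes.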
Each service can be verified via a finite-state automaton (FSA) augmented with a loop
iteration counter. At process start, each service begins in an initial state and moves to its
successor every time a communication is encountered matching that expected. Upon process
termination, each automaton must be in a final “accepting” state. A single state marks any
repetition underway. Transition from that state awaits completion of the required number
of iterations, which may depend upon a previous communication (within the same service).
Selection is marked by multiple transitions leaving the state adopted on seeing the preceding
communication. A separate state-chain follows each option.
Static verification can be complete except for repetition terminated according to state
incorporated within the service. The compiler must take account of this and generate an
appropriate warning. Partial verification is still possible at compile-time, though the final
iteration count must be checked at run-time.
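The following is a much-simplified Java sketch of such an automaton, written purely for illustration: the names are my own, not the Honeysuckle compiler's, and only a straight-line body with a counted repetition is handled (selection is omitted).

// Tracks one service: the state advances on each expected communication, a
// completed pass through the body counts one iteration, and acceptance at
// process termination requires the full iteration count to have been reached.
final class ServiceAutomaton {
  private final String[] body;  // communications of one service cycle, in order
  private final int cycles;     // required repetition count (>= 1); possibly known only at run-time
  private int state = 0;        // current position in the body; 0 is the initial state
  private int done = 0;         // completed cycles

  ServiceAutomaton(String[] body, int cycles) {
    this.body = body;
    this.cycles = cycles;
  }

  // Called for every communication the process performs on this service.
  void observe(String communication) {
    if (done == cycles || !body[state].equals(communication)) {
      throw new IllegalStateException("protocol violation: " + communication);
    }
    if (++state == body.length) {  // one cycle complete
      state = 0;
      done++;
    }
  }

  // At process termination every automaton must be in this accepting state.
  boolean accepting() {
    return state == 0 && done == cycles;
  }
}

Checking the handshaking Console service of section 2.3 would then amount to driving new ServiceAutomaton(new String[] {"receive String", "send String"}, 1) as the code of a client or provider is walked.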
3. Shared and Distributed Services
3.1. Sharing
By definition, a service represents a contract between two parties only. However, the question
of which two can be resolved dynamically. In the use of occam, it became apparent that a
significant number of applications required the same superstructure, to allow services to be
shared in this way.
occam 3 [20] sought to address both the need to establish a protocol governing more
than one communication at a time and the need for shared access. Remote call channels
effected a remote procedure call (RPC), and thus afforded a protocol specifying a list of
parameters received by a subroutine, followed by a result returned. Once defined, RPCs could
be shared in a simple and transparent manner. occam 3 also added shared groups of simple
channels via yet another mechanism, somewhat less simple and transparent.
The RPC is less flexible than service protocol, which allows specifying communications
in either direction in any order. Furthermore, multiple services may be interleaved; multiple
calls to a remote procedure cannot, any more than they can to a local one. Lastly, the RPC is
added to the existing channel abstraction of communication, complicating the model signifi-
cantly. In Honeysuckle, services are all that is needed to abstract communication, all the way
from the simplest to the most complex protocol.
Honeysuckle allows services to be shared by multiple clients at the point of declaration.
No service need be explicitly designed for sharing or defined as shared.
{
  ...
  network
    shared console
  parallel
    {
      interface
        provider of console
      ...
    }
    ... console clients
}
Any client of a shared service will be delayed while another is served. Multiple clients
form an implicit queue.
3.2. Synchronized Sharing
Experience with occam and the success of bulk-synchronous parallel processing strongly
suggest the need for barrier synchronisation. Honeysuckle obliges with the notion of syn-
chronized sharing, where every client must consume the service before any can reinitiate
consumption, and the cycle begin again.
...
network
synchronized shared console
...
Like the sharing in occam 3, synchronized sharing in Honeysuckle is superstructure. It
could be implemented directly via the use of an additional co-ordinating process but is be-
lieved useful and intuitive enough to warrant its own syntax. The degree of system abstraction
possible is thus raised.
3.3. Distribution
Sharing provides a many-to-one configuration between clients and a single provider. It is also
possible, in Honeysuckle, to describe both one-to-many and many-to-many configurations.
A service is said to be distributed when it is provided by more than one process.
...
network
distributed validation
...
Note that the service thus described may remain unique and should be defined accord-
ingly. Definition of an entire class of service is not required. (By now, the convention may
be apparent whereby a lower-case initial indicates uniqueness and an upper-case one a class,
with regard to any item — object, process, or service.)
The utility of this is to simplify the design of many systems and reduce the code required
for their implementation. Again, the degree of system abstraction possible is raised.
A many-to-many configuration may be expressed by combining two qualifiers:
...
network
distributed shared validation
...
When distributed, a shared service cannot be synchronized. This would make no sense,
as providers possess no intrinsic way of knowing when a cycle of service, around all clients,
is complete.
3.4. Design and Implementation
Neither sharing nor distribution influences the abstract interface of a component. Considera-
tion is only necessary when combining components. For example, the designer may choose
to replicate a number of components, each of which provides service A and declare provision
distributed between them. Similarly, they may choose a component providing service B and
declare provision shared between a number of clients.
A shared service requires little more in implementation than an unshared one. Two ren-
dezvous (locations) are required. One is used to synchronize access to the service and the
other each communication within it. Any client finding the provider both free and ready (both
rendezvous occupied) may simply proceed and complete the initial communication. After
this, it must clear both rendezvous. It may subsequently ignore the service rendezvous until
completion. Any other client arriving while service is in progress will find the provider un-
ready (service rendezvous empty). It then joins a queue, at the head of which is the service
rendezvous. The maximum length of the queue is just the total number of clients, defined at
compile-time.
Synchronized sharing requires a secondary queue from which elements are prevented
from joining the primary one until a cycle is complete. A shared distributed service requires
multiple primary queues. The physical interface that implements sharing and shared distribu-
tion is thus a small process, encapsulating one or more queues.
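As a rough illustration of this discipline (an assumption of mine, not the Honeysuckle implementation), the sketch below models the claim rendezvous with a fair Java semaphore, whose FIFO wait list plays the role of the implicit client queue, and the per-communication rendezvous with a synchronous slot; the secondary queue needed for synchronized sharing is not shown.

import java.util.concurrent.Semaphore;
import java.util.concurrent.SynchronousQueue;

// One provider of a shared service with a simple request/reply protocol.
final class SharedService {
  // Claim rendezvous: fair, so waiting clients proceed in arrival order; the
  // number of waiters is naturally bounded by the number of client processes.
  private final Semaphore claim = new Semaphore(1, true);
  // Communication rendezvous: carries each message of the claimed service.
  private final SynchronousQueue<Object> comm = new SynchronousQueue<Object>();

  // Client side: claim the provider, run one complete service, then release.
  Object requestReply(Object request) throws InterruptedException {
    claim.acquire();          // joins the implicit queue if the provider is busy
    try {
      comm.put(request);      // first communication of the service
      return comm.take();     // second communication (the reply)
    } finally {
      claim.release();        // the next waiting client may now proceed
    }
  }

  // Provider side: serve one complete service cycle at a time.
  void serveOnce() throws InterruptedException {
    Object request = comm.take();
    comm.put("reply to " + request);  // illustrative response
  }
}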
4. Conclusion
Honeysuckle affords powerful and fully component-wise compositional system design and
programming, yet with a simple and intuitive model for abstraction. It inherits and continues
the simplicity of occam but has added the ability to express the component (or system)
interface in much greater detail, so that integration and substitution should be more easily
achieved. Support is also included for distributed and bulk-synchronous application design,
with mobile objects and synchronized sharing of services.
Service (client-server) architecture is proving extremely popular in the design of dis-
tributed applications but is currently lacking an established formal basis, simple consistent
model for abstraction, and programming language. Honeysuckle and PSA would seem timely
and well-placed. Though no formal semantics for prioritisation yet appears to have gained
both stability and wide acceptance, this looks set to change [21].
A complete programming language manual is in preparation, as is a working compiler.
These will be completed and published as soon as possible.
Acknowledgements
The author is grateful for enlightening conversation with Peter Welch, Jeremy Martin, Sharon
Curtis, and David Lightfoot. He is particularly grateful to Jeremy Martin, whose earlier work
formed the foundation for the Honeysuckle project. That, in turn, was strongly reliant on
deadlock analysis by, and the failure-divergence-refinement (FDR) model of, Bill Roscoe,
Steve Brookes, and Tony Hoare.
References
[1] Ian R. East. The Honeysuckle programming language: An overview. IEE Software, 150(2):95–107, 2003.
[2] Per Brinch Hansen. Operating System Principles. Automatic Computation. Prentice Hall, 1973.
[3] Ian R. East. Prioritised service architecture. In I. R. East and J. M. R. Martin et al., editors,
Communicating Process Architectures 2004, Series in Concurrent Systems Engineering, pages 55–69.
IOS Press, 2004.
[4] Inmos. occam 2 Reference Manual. Series in Computer Science. Prentice Hall International, 1988.
[5] Ian R. East. Towards a successor to occam. In A. Chalmers, M. Mirmehdi, and H. Muller, editors,
Proceedings of Communicating Process Architecture 2001, pages 231–241, University of Bristol, UK,
2001. IOS Press.
[6] Fred R. M. Barnes and Peter H. Welch. Communicating mobile processes. In I. R. East and J. M.
R. Martin et al., editors, Communicating Process Architectures 2004, pages 201–218. IOS Press, 2004.
[7] C. A. R. Hoare. Communicating Sequential Processes. Series in Computer Science. Prentice Hall
International, 1985.
[8] A. W. Roscoe. The Theory and Practice of Concurrency. Series in Computer Science. Prentice-Hall,
1998.
[9] Peter H. Welch. Emulating digital logic using transputer networks. In Parallel Architectures and
Languages – Europe, volume 258 of LNCS, pages 357–373. Springer Verlag, 1987.
[10] Peter H. Welch, G. Justo, and Colin Willcock. High-level paradigms for deadlock-free high performance
systems. In R. Grebe et al., editor, Transputer Applications and Systems ’93, pages 981–1004. IOS Press,
1993.
[11] Jeremy M. R. Martin. The Design and Construction of Deadlock-Free Concurrent Systems. PhD thesis,
University of Buckingham, Hunter Street, Buckingham, MK18 1EG, UK, 1996.
[12] Jeremy M. R. Martin and Peter H. Welch. A design strategy for deadlock-free concurrent systems.
Transputer Communications, 3(3):1–18, 1997.
[13] Clemens Szyperski. Component Software: Beyond Object-Oriented Programming. Component Software
Series. Addison-Wesley, second edition, 2002.
[14] R. W. Floyd. Assigning meanings to programs. In American Mathematical Society Symp. in Applied
Mathematics, volume 19, pages 19–31, 1967.
[15] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM,
12(10):576–580, 1969.
[16] C. A. R. Hoare. Proof of correctness of data representations. Acta Informatica, 1:271–281, 1972.
[17] Bertrand Meyer. Design by contract. Technical Report TR-EI-12/CO, ISE Inc., 270, Storke Road, Suite
7, Santa Barbara, CA 93117 USA, 1987.
[18] Bertrand Meyer. Object-Oriented Software Construction. Prentice Hall, second edition, 1997.
[19] Marcel Boosten. Formal contracts: Enabling component composition. In J. F. Broenink and G. H.
Hilderink, editors, Proceedings of Communicating Process Architecture 2003, pages 185–197, University
of Twente, Netherlands, 2003. IOS Press.
[20] Geoff Barrett. occam 3 Reference Manual. Inmos Ltd., 1992.
[21] Adrian E. Lawrence. Triples. In I. R. East and J. M. R. Martin et al., editors, Proceedings of
Communicating Process Architectures 2004, Series in Concurrent Systems Engineering, pages 157–184.
IOS Press, 2004.
Communicating Process Architectures 2005
Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.)
IOS Press, 2005
Groovy Parallel!
A Return to the Spirit of occam?
Jon KERRIDGE, Ken BARCLAY and John SAVAGE
The School of Computing, Napier University, Edinburgh EH10 5DT
{j.kerridge, k.barclay, j.savage} @ napier.ac.uk
Abstract. For some years there has been much activity in developing CSP-like
extensions to a number of common programming languages. In particular, a number
of groups have looked at extensions to Java. Recent developments in the Java
platform have resulted in groups proposing more expressive problem solving
environments. Groovy is one of these developments. Four constructs are proposed
that support the writing of parallel systems using the JCSP package. The use of
these constructs is then demonstrated in a number of examples, both concurrent and
parallel. A mechanism for writing XML descriptions of concurrent systems is
described and it is shown how this is integrated into the Groovy environment.
Finally conclusions are drawn relating to the use of the constructs, particularly in a
teaching and learning environment.
Keywords. Groovy, JCSP, Parallel and Concurrent Systems, Teaching and Learning
Introduction
The occam programming language [1] provided a concise, simple and elegant means of
describing computing systems comprising multiple processes running on one or more
processors. Its theoretical foundations lay in the Communicating Sequential Process
algebra of Hoare [2]. A practical realization of occam was the Inmos Transputer. With the
demise of that technology the utility of occam as a generally available language was lost.
The Communicating Process Architecture community kept the underlying principles of
occam alive by a number of developments such as Welch’s JCSP package [3] and
Hilderink’s CTJ [4]. Both these developments captured the concept of CSP in a Java
environment. The former is supported by an extensive package that also permits the
creation of systems that operate over a TCP/IP network. The problem with the Java
environment is that it requires a great deal of support code to create what is, in essence, a
simple idea.
Groovy [5] is a new scripting language being developed for the Java platform. Groovy
is compatible with Java at the bytecode level. This means that Groovy is Java. It has a Java
friendly syntax that makes the Java APIs easier to use. As a scripting language it offers an
ideal way in which to glue components. Groovy provides native syntactic support for many
constructs such as lists, maps and regular expressions. It provides for dynamic typing which
can immediately reduce the code bulk. The Groovy framework removes the heavy lifting
otherwise found in Java.
Thus the goal of the activity reported in this paper was to create a number of simple
constructs that permitted the construction of parallel systems more easily without the need
for the somewhat heavyweight requirements imposed by Java. This was seen as
particularly important when the concepts are being taught. By reducing the amount that has
to be written, students may be able to grasp more easily the underlying principles.
1. The Spirit of Groovy
In August 2003 the Groovy project was initiated at codehaus [5], an open-source project
repository focussed on practical Java applications. The main architects of the language are
two consultants, James Strachan and Bob McWhirter. In its short life Groovy has
stimulated a great deal of interest in the Java community. So much so that it is likely to be
accepted as a standard language for the Java platform.
Groovy is a scripting language based on several languages including Java, Ruby,
Python and Smalltalk. Although the Java programming language is a very good systems
programming language, it is rather verbose and clumsy when used for systems integration.
However, Groovy with a friendly Java-based syntax makes it much easier to use the Java
Application Programming Interface. It is ideal for the rapid development of small to
medium sized applications.
Groovy offers native syntax support for various abstractions. These and other language
features make Groovy a viable alternative to Java. For example, the Java programmer
wishing to construct a list of bank accounts would first have to create an object of the class
ArrayList, then send it repeated add messages to populate it with Account objects. In
Groovy, it is much easier:
accounts = [ new Account(number : 123, balance : 1200),
new Account(number : 456, balance : 400)]
Here, the subscript brackets [ and ] denote a Groovy List. Observe also the
construction of the Account objects. This is an example of a named property map. Each
property of the Account object is named along with its initial value.
Maps (dictionaries) are also directly supported in Groovy. A Map is a collection of
key/value pairs. A Map is presented as a comma-separated list of key : value pairs as in:
divisors = [4 : [2], 6 : [2, 3], 12 : [2, 3, 4, 6]]
This Map is keyed by an integer and the value is a List of integers that are divisors of
the key.
Closures, in Groovy, are a powerful way of representing blocks of executable code.
Since closures are objects they can be passed around as, for example, method parameters.
Because closures are code blocks they can also be executed when required. Like methods,
closures can be defined in terms of one or more parameters. One of the most common uses
for closures is to process a collection. We can iterate across the elements of a collection and
apply the closure to them. A simple parameterized closure is:
greeting = { name -> println "Hello ${name}" }
The code block identified by greeting can be executed with the call message as in:
greeting.call ("Jon") // explicit call
greeting ("Ken") // implicit call
Several List and Map methods accept closures as an actual parameter. This
combination of closures and collections provides Groovy with some very neat solutions to
common problems. The each method, for example, can be used to iterate across the
elements of a collection and apply the closure, as in:
[1, 2, 3, 4].each { element -> print "${element}; " }
will print 1; 2; 3; 4;
["Ken" : 21, "John" : 22, "Jon" : 25].each { entry ->
if(entry.value > 21) print "entry.key, "
}
will print
John, Jon,
2. The Groovy Parallel Constructs
Groovy constructs are required that follow explicit requirements of CSP-based systems.
These are direct support for parallel, alternative and the construction of guards, reflecting
that Groovy is a list-based environment whereas JCSP is an array-based system [5].
2.1 The PAR Construct
The PAR construct is simply an extension of the existing JCSP Parallel class that accepts
a list of processes. The class comprises a constructor that takes a list of processes
(processList) and casts them as an array of CSProcess as required by JCSP.
class PAR extends Parallel {
  PAR(processList){
    super( processList.toArray(new CSProcess[0]) )
  }
}
2.2 The ALT construct
The ALT construct extends the existing JCSP Alternative class with a list of guards. The
class comprises a constructor that takes a list of guards (guardList) and casts them as an
array of Guard as required by the JCSP. The main advantage of this constructor in use is
that the channels that form the guards of the ALT are passed to a process as a list of channel
inputs and thus it is not necessary to create the Guard structure in the process definition.
The list of guards can also include CSTimer and Skip.
class ALT extends Alternative {
  ALT (guardList) {
    super( guardList.toArray(new Guard[0]) )
  }
}
2.3 The CHANNEL_INPUT_LIST Construct
The CHANNEL_INPUT_LIST is used to create a list of channel input ends from an array of
channels. This list can then be passed as a guardList to an ALT. This construct only needs
to be used for channel arrays used between processes on a single processor. Channels that
connect processes running on different processors (NetChannels) can be passed as a list
without the need for this construct.
class CHANNEL_INPUT_LIST extends ArrayList{
  CHANNEL_INPUT_LIST(array) {
    super( Arrays.asList(Channel.getInputArray(array)) )
  }
}
2.4 The CHANNEL_OUTPUT_LIST Construct
The CHANNEL_OUTPUT_LIST is used to construct a list of channel output ends from an array
of such channels and provides the converse capability to a CHANNEL_INPUT_LIST. It should
be noted that all the channel output ends have to be accessed by the same process.
class CHANNEL_OUTPUT_LIST extends ArrayList{
  CHANNEL_OUTPUT_LIST(array) {
    super( Arrays.asList(Channel.getOutputArray(array)) )
  }
}
3. Using the Constructs
In this section we demonstrate the use of these constructs, first in a typical student learning
example based upon the use of a number of sender processes having their outputs
multiplexed into a single reading process. The second example is a little more complex and
shows a system that runs over a network of workstations and provides the basic control for
a tournament in which a number of players of different capabilities play the same game
(draughts) against each other; this is then used in an evolutionary system to develop a
better draughts player.
3.1 A Multiplexing System
3.1.1 The Send Process
The specification of the class SendProcess is brief and contains only the information
required. This aids teaching and learning and also understanding the purpose of the
process. The properties of the class are defined as cout and id (lines 2 and 3) without any
type information. The property cout will be passed the channel used to output data from
this process and id is an identifier for this process. The method run is then defined.
01 class SendProcess implements CSProcess {
02 cout // the channel used to output the data stream
03 id // the identifier of this process
04 void run() {
05 i = 0
06 1.upto(10) { // loop 10 times
07 i = i + 1
08 cout.write(i + id) // write the value of id + i to cout
09 }
10 }
11 }
There is no necessity for a constructor for the class or the setter and getter methods as
these are all created automatically by the Groovy system. The run method simply loops 10
times outputting the value of id to which has been added the loop index variable i (lines 4
to 8). Thus the explanation of its operation simply focuses on the communication aspects of
the process.
3.1.2 The Read Process
The ReadProcess is similarly brief and in this version extracts the SendProcess
identification (s) and value (v) from the value that is sent to the ReadProcess. It should
also be noted that types might be explicitly defined, as in the case of s (line 18), in order to
achieve the desired effect. It is assumed that identification values are expressed in
thousands.
12 class ReadProcess implements CSProcess {
13 cin // the input channel
14 void run() {
15 while (true) {
16 d = cin.read() // read from cin
17 v = d % 1000 // v the value read
18 int s = d / 1000 // from sender s
19 println "Read: ${v} from sender ${s}" // print v and s
20 }
21 }
22 }
3.1.3 The Plex Process
The Plex process is a classic example of a multiplex process that alternates over its input
channels (cin) and then reads a selected input, which is immediately written to the output
channel (cout) (line 31). The input channels are passed as a list to the process and these
are then passed to the ALT construct (line 27) to create the JCSP Alternative.
23 class Plex implements CSProcess {
24 cin // channel input list
25 cout // output channel onto which inputs are multiplexed
26 void run () {
27 alt = new ALT(cin)
28 running = true
29 while (running) {
30 index = alt.select ()
31 cout.write (cin[index].read())
32 }
33 }
34 }
3.1.4 Running the System on a Single Processor
Figure 1 shows a system comprising any number of SendProcesses together with a Plex process and a ReadProcess.
Figure 1. The Multiplex Process Structure
In a single processor invocation, five channels (a) connect the SendProcesses to the Plex process and are declared using the normal call to the Channel class of JCSP (line 35). Similarly, the channel b connects the Plex process to the ReadProcess (line 36). A
CHANNEL_INPUT_LIST construct is used to create the list of channel inputs that will be
passed to the Plex process and which will be ALTed over (line 37).
The Groovy map abstraction is used (line 38) to create idMap that relates the instance
number of the SendProcess to the value that will be passed as its id property. A list
(sendList) of SendProcesses is then created (lines 39-41) using the collect method on a
list. The list comprises five instances of the SendProcess with the cout and id properties
set to the values indicated, using a closure applied to each member of the set [0,1,2,3,4]. A
processList is then created (lines 42-45) comprising the sendList plus instances of the
Plex and ReadProcess that have their properties initialized as indicated. The flatten() method has to be applied because sendList is itself a List nested inside processList; this nesting has to be removed for the PAR constructor to work. Finally a PAR construct is created (line 46) and run. In section 4 a formulation that removes the need for flatten() is presented.
35 a = Channel.createOne2One (5)
36 b = Channel.createOne2One ()
37 channelList = new CHANNEL_INPUT_LIST (a)
38 idMap = [0: 1000, 1: 2000, 2:3000, 3:4000, 4:5000]
39 sendList = [0,1,2,3,4].collect
40 {i->return new SendProcess ( cout:a[i].out(),
41 id:idMap[i]) }
42 processList = [ sendList,
43 new Plex (cin : channelList, cout : b.out()),
44 new ReadProcess (cin : b.in() )
45 ].flatten()
46 new PAR (processList).run()
3.1.5 Running the System in Parallel on a Network
To run the same system shown in Figure 1 on a network, with each process being run on a separate processor, a Main program for each process is required.
3.1.5.1 SendMain
SendMain is passed the numeric identifier (sendId) for this process (line 47) as the zeroth command line argument. A network node is then created (line 48) and connected to a
default CNSServer process running on the network. From the sendId, a string is created
that is the name of the channel that this SendProcess will output its data on and a One2Net
channel is accordingly created (line 51). A list containing just one process is created (line
52) that is the invocation of the SendProcess with its properties initialized and this is
passed to a PAR constructor to be run (line 53).
47 sendId = Integer.parseInt( args[0] )
48 Node.getInstance().init(new TCPIPNodeFactory ())
49 int sendInstance = sendId / 1000
50 channelInstance = sendInstance - 1
51 outChan = CNS.createOne2Net ( "A" + channelInstance)
52 pList = [ new SendProcess ( id : sendId, cout : outChan ) ]
53 new PAR(pList).run()
3.1.5.2 PlexMain
PlexMain is passed the number of SendProcesses as a command line argument (line 54),
as there will be this number of input channels to the Plex process. These input channels
are created as a list of Net2One channels (lines 57-59) having the same names as were
created for each of the SendProcesses. As this is already a list there is no need to obtain
the input ends of the channels, as this is implicit in the creation of Net2One channels. The
Plex outChan is created as a One2Net channel with the name B (line 60) and the Plex
process is then run in a similar manner as each of the SendProcesses (lines 61, 62).
54 inputs = Integer.parseInt( args[0] )
55 Node.getInstance().init(new TCPIPNodeFactory ())
56 inChans = [] // an empty list of net channels
57 for (i in 0 ... inputs ) {
58 inChans << CNS.createNet2One ( "A" + i ) // append the channels
59 }
60 outChan = CNS.createOne2Net ( "B" )
61 pList = [ new Plex ( cin : inChans, cout : outChan ) ]
62 new PAR (pList).run()
3.1.5.3 ReadMain
ReadMain requires no command line arguments. It simply creates a network node (line 63),
followed by a Net2One channel with the same name as was created for PlexMain’s output
channel (line 64) and the ReadProcess is then invoked in the usual manner.
63 Node.getInstance().init(new TCPIPNodeFactory ())
64 inChan = CNS.createNet2One ( "B" )
65 pList = [ new ReadProcess ( cin : inChan ) ]
66 new PAR (pList).run()
3.1.6 Summary
In the single processor case, each process is interleaved on a single processor. In the multi-
processor case each process is run on a separate processor and it is assumed that CNSServer
[6] is executing somewhere on the network.
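For example, assuming the three main programs above are saved as SendMain.groovy, PlexMain.groovy and ReadMain.groovy (the file names are ours, purely for illustration), one workstation runs PlexMain with the argument 5, another runs ReadMain, and each of the remaining five runs SendMain with one of the identifiers 1000 to 5000, with the CNSServer started first.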
3.2 A Tournament Manager
The Tournament System, see Figure 2, is organized as a set of Board processes that each
run a game in the tournament on a different processor. The Board processes receive
information about the game they are to play from an Organiser process. The results from
the Board processes are returned via a ResultMux process running on the same processor as
the Organiser process. In order that the system operates in a Client-Server[6] mode each
Board process is considered to be a client process and the combination of the Organiser
and ResultMux processes is considered to be the Server.
Figure 2. The Tournament System
The system requires that data be communicated as a set of GameData and ResultData
objects. The system, as defined, cannot be executed on a single-processor system because the design takes due account of the copying of network-communicated objects, which have to implement Serializable. More importantly, the use of an internal channel between two processes has to be considered, and a reply channel is utilized to overcome the fact that an object reference is passed between the ResultMux and Organiser processes.
3.2.1 The Data Objects
Two data objects are used within the system. GameData holds information concerning the player identities and the playing weights associated with each player. A state (line 72)
property is used to indicate whether the object holds playing data or is being used to
indicate the end of the Tournament.
67 class GameData implements Serializable {
68 p1 // id of player 1
69 p2 // id of player 2
70 w1 // list of weights for player 1
71 w2 // list of weights for player 2
72 state // string containing data or end
73 }
The ResultData object is used to communicate results from the Board processes back
to the Organiser process. The use of each property of the object is identified in the
corresponding comments. The board on which the game is played is required (line 79) so
the Organiser process can send another game to the Board process immediately. The
state property (line 80) is used to indicate one of three states, namely; the board has been
initialized waiting for a game, the object contains the results of a game and the tournament
is finishing.
74 class ResultData implements Serializable {
75 p1 // player 1 identifier
76 p2 // player 2 identifier
77 result1V2 // result of game for p1 V p2
78 result2V1 // result of game for p2 V p1
79 board // board used
80 state // String containing init or result or end
81 }
3.2.2 The Board Process
The Board process is a client process and has been constructed so that an output to the
Organiser in the form of a result.write() (lines 96, 103, 119) communication is
always followed immediately by a work.read() (line 98). The initialization code with its
output is immediately followed, in the main loop, by the required input operation. The
main loop comprises two sections of an if-statement, which finish with either the outputting
of a result or a termination message. The latter does not need to receive an input from the
Organiser process because the Board process will itself have been terminated. In the
normal case, the outputting of a result at the end of the loop is immediately followed by an
input at the start of the loop. These lines (96, 98, 103, 119) have been highlighted in the
code listing. A consequence of using this design approach is that only one ResultData and one GameData object are required, thereby minimizing the use of the very expensive new operator.
The most interesting aspect of the code is that the access to the properties of the data
classes is simply made using the dot notation. This results from Groovy automatically
generating the setters, getters and class constructors required. This has the immediate
benefit of making the code more accessible so that key points, such as the structure of client and server processes, are more obvious.
82 class Board implements CSProcess {
83
84 bId // the id for this Board process
85 result // One2One channel connecting the Board to the ResultMux
86 work // One2One channel used to send work to this Board
87
88 void run() {
89 println "Board ${bId} has started"
90 tim = new CSTimer() // used to simulate game time
91 gameData = new GameData() // the weights and player ids
92 resultData = new ResultData() // the result of this game
93 resultData.state = "init"
94 resultData.board = bId
95 running = true
96 result.write(resultData) // send init to Organiser
97 while (running) {
98 gameData = work.read() // always follows a result.write
99 if ( gameData.state == "end" ) { // end of processing
100 println "Board ${bId} has terminated"
101 running = false
102 resultData.state = "end"
103 result.write(resultData) // send termination to ResultMux
104 }
105 else {
106 // run the game twice with P1 v P2 and then P2 v P1
107 // simulated by a timeout
108 tim.after ( tim.read() + 100 + gameData.p2 )
109 println "Board ${bId} playing games for " +
110 "${gameData.p1} and ${gameData.p2}"
111 outcome1V2 = bId // return the bId of the board playing game
112 outcome2V1 = -bId // instead of the actual outcomes
113 resultData.state = "result"
114 resultData.p1 = gameData.p1
115 resultData.p2 = gameData.p2
116 resultData.board = bId
117 resultData.result1V2 = outcome1V2
118 resultData.result2V1 = outcome2V1
119 result.write(resultData) // send result to ResultMux
120 }
121 } } }
3.2.3 The ResultMux Process
This process forms part of the tournament system and is used to multiplex results from the
Board processes to the Organiser. The ResultMux process runs on the same processor as
the Organiser and thus access to any data objects shared by both processes has to be carefully managed. If this is not done then there is a chance that one process may overwrite data that has already been communicated to the other process, because only an object reference is passed during such communications. In this case, the resultData object is read in the ResultMux process and manipulated within the Organiser. Yet again the desire is to reduce
the number of new operations that are undertaken. new is both expensive and also leads to
the repeated invocation of the Java garbage collector. In the version presented here only
one instance of a ResultData object is created outside the main loop of the process. In
addition, no new operation exists within the loop (lines 129-144).
The only other problem to be overcome is that of terminating the ResultMux process.
One of the properties (boards) of the process is the number of parallel Board processes
invoked by the system. When a Board process receives a GameData object that has its
state set to “end” it communicates this to the ResultMux process as well. Once the
ResultMux process has received the required number of such messages it can then
terminate itself (lines 137-140).
The other aspect of note is that the property resultsIn is a list of network channels
and that these can be used as a parameter to the ALT construct without any modification
because ALT (line 132) is expecting a list of input channel ends, which is precisely the type
of a Net2One channel, see 3.2.6. Any ResultData that is read in on the resultsIn
channels is then immediately written to the resultOut channel (line 143).
The use of the reply property will be explained in the next section.
122 class ResultMux implements CSProcess {
123 boards // number of boards; used for process termination
124 resultOut // output channel from Mux to Organiser
125 reply // channel indicating result processed by Organiser
126 resultsIn // list of result channels from each of the boards
127
128 void run () {
129 resultData = new ResultData() // holds data from boards
130 endCount = 0
131 println "ResultMux has started"
132 alt = new ALT (resultsIn)
133 running = true
134 while (running) {
135 index = alt.select()
136 resultData = resultsIn[index].read()
137 if ( resultData.state == "end" ) {
138 endCount = endCount + 1
139 if ( endCount == boards ) {
140 running = false
141 }
142 } else {
143 resultOut.write(resultData)
144 b = reply.read()
145 }
146 } } }
3.2.4 The Organiser Process
This is the most complex process but it breaks down into a number of distinct sections that
facilitate its explanation. Yet again the use of the new operation has been limited to those
structures that are required and none are contained within the main loop of the process. The
outcomes structure is a list of lists that will contain the result of each game. The access
mechanism is similar to that of array access but Groovy permits other styles of access that
are more list oriented. Initially, each element of the structure is set to a sentinel value of
100 (lines 159-166). The result of each pair of games, pi plays pj and pj plays pi for all i ≠ j, is recorded in the outcomes structure such that pi v pj is stored in the upper triangle of
outcomes and pj v pi in the lower part. Games such as draughts and chess have different
outcomes for the same players depending upon which is white or black and hence is the
starting player.
The main loop has been organized so that the Organiser receives a result from the
ResultMux. Saving the game’s results in the outcomes structure and then sending another
game to the now idle Board process achieves this (lines 171-178). However, before
another game is sent to the Board process a reply (line 178) is sent to the ResultMux
process to indicate the ResultData has been processed. The resultData object passed from the ResultMux to the Organiser is in fact an object reference, not a copy. JCSP
requires that once a process has written an object it should not then access that object until
it is safe to do so. Thus once the outcomes structure has been updated the object is not
required and hence the reply can be sent to the ResultMux process immediately. This
happens on two occasions, first when the resultData contains the state “init” (line 180)
and more commonly when a result is returned and the state is “result” (line 178).
147 class Organiser implements CSProcess {
148 boards // the number of boards that are being used in parallel
149 players // number of players
150 work // channels on which work is sent to boards
151 result // channel on which results received from ResultMux
152 reply // reply to resultMux from Organiser
153
154 void run () {
155 resultData = new ResultData() // create the data structures
156 gameData = new GameData()
157 println "Organiser has started"
158 // set up the outcomes
159 outcomes = [ ]
160 for ( r in 0 ..< players ) { // cycle through the rows
161 row = [ ] // 0 ..< n gives 0 to n - 1
162 for ( c in 0 ..< players ) { // cycle through the columns
163 row << 100 // 100 acts as sentinel
164 }
165 outcomes << row
166 }
167 // the main loop
168 for ( r in 0 ..< players) {
169 c = r + 1
170 for ( c in 0 ..< players) {
171 resultData = result.read() // an object reference not a copy
172 b = resultData.board
173 if ( resultData.state == "result" ) {
174 p1 = resultData.p1
175 p2 = resultData.p2
176 outcomes [ p1 ] [ p2 ] = resultData.result1V2
177 outcomes [ p2 ] [ p1 ] = resultData.result2V1
178 reply.write(true) // outcomes processed
179 } else {
180 reply.write(true) // init received
181 }
182 // send the game [r,c] to Board process b
183 gameData.p1 = r
184 gameData.p2 = c
185 gameData.state = "data"
186 // set w1 to the weights for p1
187 // set w2 to the weights for p2
188 work[b].write(gameData)
189 }
190 }
191 // now terminate the Board processes
192 println "Organiser: Started termination process"
193 gameData.state = "end"
194 for ( i in 0 ... boards) {
195 resultData = result.read()
196 bd = resultData.board
197 p1 = resultData.p1
198 p2 = resultData.p2
199 outcomes [ p1 ] [ p2 ] = resultData.result1V2
200 outcomes [ p2 ] [ p1 ] = resultData.result2V1
201 reply.write(true)
202 work[bd].write(gameData)
203 }
204 println"Organiser: Outcomes are:"
205 for ( r in 0 ... players ) {
206 for ( c in 0 ... players ) {
207 print "[${r},${c}]:${outcomes[r][c]}; "
208 }
209 println " "
210 }
211 println"Organiser: Tournament has finished"
212 }
213 }
Initially, the loop will receive as many “init” messages as there are Board processes.
Thus once all the games have been sent to the Board processes, each of the Board processes
will still be processing a game. Hence, another loop has to be used to input the last game
result from each of these processes (lines 194-203). In this case the gameData that is output
contains the state “end” and this will cause the Board process that receives it to terminate
but not before it has also sent the message on to the ResultMux process. Finally, the
outcomes can be printed (lines 204-211) or in the real tournament system evaluated to
determine the best players so that they can be mutated in an evolutionary development
scheme.
3.2.5 Invoking a Board Process
Each Board process has to be invoked on its own processor. The network channels are
created using CNS static methods (lines 216, 217). It is vital that the channel names used in
one process invocation are the same as the corresponding channel in another processor.
214 Node.getInstance().init(new TCPIPNodeFactory ());
215 boardId = Integer.parseInt(args[0]) //the number of this Board
216 w = CNS.createNet2One("W" + boardId) // the Net2One work channel
217 r = CNS.createOne2Net("R" + boardId) // the One2Net result channel
218 println " Board ${boardId} has created its Net channels "
219 pList = [ new Board ( bId:boardId , result:r , work:w ) ]
220 new PAR (pList).run()
3.2.6 Invoking the Tournament
This code is similar except that lists of network channels are created by appending channels
of the correct type to list structures (lines 224-230). Two internal channels between
ResultMux and Organiser are created, M2O and O2M (lines 231, 232) and these are used to
implement the resultOut and reply connections respectively between these processes.
An advantage of the Groovy approach to constructors is that the constructor identifies each
property by name, rather than the order of arguments to a constructor call specifying the
order of the properties. It also increases the readability of the resulting code.
221 Node.getInstance().init(new TCPIPNodeFactory ());
222 nPlayers = Integer.parseInt(args[0]) // the number of players
223 nBoards = Integer.parseInt(args[1]) // the number of boards
224 w = [] // the list of One2Net work channels
225 r = [] // the list of Net2One result channels
226 for ( i in 0 ..< nBoards) {
227 i = i+1
228 w << CNS.createOne2Net("W" + i)
229 r << CNS.createNet2One("R" + i)
230 }
231 M2O = Channel.createOne2One()
232 O2M = Channel.createOne2One()
233 pList = [ new Organiser ( boards:nBoards , players:nPlayers ,
234 work:w , result: M2O.in(),
235 reply: O2M.out() ),
236 new ResultMux ( boards:nBoards , resultOut:M2O.out(),
237 resultsIn:r, reply: O2M.in() ) ]
238 new PAR ( pList) .run()
4. The XML Specification of Systems
Groovy includes tree-based builders that can be sub-classed to produce a variety of tree-
structured object representations. These specialized builders can then be used to represent,
for example, XML markup or GUI user interfaces. Whichever kind of builder object is
used, the Groovy markup syntax is always the same. This gives Groovy native syntactic
support for such constructs.
The following lines, 239 to 248, demonstrate how we might generate some XML [7] to
represent a book with its author, title, etc. The non-existent method call Author("Ken
Barclay") delivers the <Author>Ken Barclay</Author> element, while the method call
ISBN(number : "1234567890") produces the empty XML element <ISBN number=
"1234567890"/>.
239 // Create a builder
240 mB = new MarkupBuilder()
241
242 // Compose the builder
243 bk = mB.Book() { // <Book>
244 Author("Ken Barclay") // <Author>Ken Barclay</Author>
245 Title("Groovy") // <Title>Groovy</Title>
246 Publisher("Elsevier") // <Publisher>Elsevier</Publisher>
247 ISBN(number : "1234567890") // <ISBN number="1234567890"/>
248 }                                // </Book>
It is also important to recognize that since all this is native Groovy syntax being used
to represent any arbitrarily nested markup, then we can also mix in any other Groovy
constructs such as variables, control flow such as looping and branching, or true method
calls.
In keeping with the spirit of Groovy, manipulating XML structures is made
particularly easy. Associated with XML structures is the need to navigate through the
content and extract various items. Having, say, parsed an XML data file, traversing its structures is directly supported in Groovy with XPath-like [7] expressions. For example, a
data file comprising a set of Book elements might be structured as:
249 <Library>
250 <Book> … </Book>
251 <Book> … </Book>
252 <Book> … </Book>
253 …
254 </Library>
If the variable doc represents the root for this XML document, then the navigation
expression doc.Book[0].Title[0] obtains the first Title for the first Book. Equally,
doc.Book delivers a List that represents all the Book elements in the Library. With a
suitable iterator we immediately have the code to print the title of every book in the library:
255 parser = new XmlParser()
256 doc = parser.parse("library.xml")
257
258 doc.Book.each { bk ->
259 println "${bk.Title[0].text()}"
260 }
The ease with which Groovy can manipulate XML structures encourages us to
consider representing JCSP networks as XML markup. Groovy can then manipulate that
information, configure the processes and channels, and then execute the model. For
example, we might arrive at the following markup (lines 261-274) for the classical
producer–consumer system built from the SendProcess and the ReadProcess described in
3.1.1 and 3.1.2. The libraries to be imported are specified on lines 262 and 263.
261 <csp-network>
262 <include name="com.quickstone.jcsp.lang.*"/>
263 <include name="uk.ac.napier.groovy.parallel.*"/>
264 <channel name="chan" class="Channel" type="createOne2One"/>
265 <processlist>
266 <process class="SendProcess">
267 <arg name="cout" value="chan.out()"/>
268 <arg name="id" value="1000"/>
269 </process>
270 <process class="ReadProcess">
271 <arg name="cin" value="chan.in()"/>
272 </process>
273 </processlist>
274 </csp-network>
To ensure the consistency of the information contained in these network configurations
we could define an XML schema [7] for this purpose. A richer schema defines how nested
structures could be described. From the preceding example we also permit a recursive
definition whereby a simple <process> may itself be another <processlist>. Hence we
can define the XML for the plexing system described in 3.1.4 by the following.
275 <csp-network>
276 <include name="com.quickstone.jcsp.lang.*"/>
277 <include name="uk.ac.napier.groovy.parallel.*"/>
278 <channel name="a" class="Channel" type="createOne2One" size="5"/>
279 <channel name="b" class="Channel" type="createOne2One"/>
280 <channelInputList name="channelList" source="a"/>
281 <processlist>
282 <processlist>
283 <process class="SendProcess">
284 <arg name="cout" value="a[0].out()"/>
285 <arg name="id" value="1000"/>
286 </process>
287 <process class="SendProcess">
288 <arg name="cout" value="a[1].out()"/>
289 <arg name="id" value="2000"/>
290 </process>
291 <process class="SendProcess">
292 <arg name="cout" value="a[2].out()"/>
293 <arg name="id" value="3000"/>
294 </process>
295 <process class="SendProcess">
296 <arg name="cout" value="a[3].out()"/>
297 <arg name="id" value="4000"/>
298 </process>
299 <process class="SendProcess">
300 <arg name="cout" value="a[4].out()"/>
301 <arg name="id" value="5000"/>
302 </process>
303 </processlist>
304 <process class="Plex">
305 <arg name="cout" value="b.out()"/>
306 <arg name="cin" value="channelList"/>
307 </process>
308 <process class="ReadProcess">
309 <arg name="cin" value="b.in()"/>
310 </process>
311 </processlist>
312 </csp-network>
313
By inspection we can see that the XML presented in lines 275 to 312 captures the
Groovy specification of the system given in lines 35 to 46. The main difference is that the
list of SendProcesses generated in lines 39 to 41 has been explicitly defined as a sequence
of SendProcess definitions. A Groovy program can parse this XML and the system will
then be invoked automatically on a single processor.
The automatically generated output from the above XML script is shown in lines 314
to 330. As can be seen it generates two PAR constructs nested one in the other. The internal
one contains the list of SendProcesses that are included within the one running the Plex
and ReadProcess processes. Lines 314 and 315 show the jar files that have to be imported.
The Groovy Parallel constructs described in section 2 have been placed in a jar file,
emphasizing that Groovy is just Java.
314 import com.quickstone.jcsp.lang.*
315 import uk.ac.napier.groovy.parallel.*
316 a = Channel.createOne2One(5)
317 b = Channel.createOne2One()
318 channelList = new CHANNEL_INPUT_LIST(a)
319 new PAR([
320 new PAR([
321 new SendProcess(cout : a[0].out(), id : 1000),
322 new SendProcess(cout : a[1].out(), id : 2000),
323 new SendProcess(cout : a[2].out(), id : 3000),
324 new SendProcess(cout : a[3].out(), id : 4000),
325 new SendProcess(cout : a[4].out(), id : 5000)
326 ]),
327 new Plex(cout : b.out(), cin : channelList),
328 new ReadProcess(cin : b.in())
329 ])
330 .run()
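As an indication of how the parse-and-generate step itself might look, the following is a minimal sketch only (it is not the authors' builder; the file name network.xml, the use of GroovyShell and the omission of the <processlist> and <channelInputList> handling are our assumptions). It reads the markup with XmlParser, as in the earlier library example, and emits equivalent Groovy source for evaluation:

parser = new XmlParser()
net = parser.parse("network.xml")                    // assumed file name
src = new StringBuffer()
// each <include> becomes an import statement
net.include.each { inc -> src << "import ${inc.attribute('name')}\n" }
// each <channel> becomes a call on the named factory class
net.channel.each { ch ->
    size = ch.attribute('size')
    src << "${ch.attribute('name')} = ${ch.attribute('class')}.${ch.attribute('type')}(${size ? size : ''})\n"
}
// building the nested PAR from <processlist> is omitted here; each <process>
// element becomes  new <class>( <arg name> : <arg value>, ... )
new GroovyShell().evaluate(src.toString())

Running such a script over the markup in lines 275 to 312 would produce source of the same shape as that shown in lines 314 to 330.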
5. Conclusions and Future Work
The paper has shown that it is possible to create problem solutions in a clear and accessible
manner such that the essence of the CSP-style primitives and operations is more easily
understood. A special lecture was given to a set of students who were being taught Groovy
as an optional module in their second year. This lecture covered the concepts of CSP and
their implementation in Groovy. There was consensus that the approach had worked and
that students were able to assimilate the ideas. This does however need to be tested further
in a more formal setting.
Currently, Groovy uses dynamic binding and it can be argued that this is not
appropriate for a proper software engineering language. If this checking could instead be done at compile time, say enabled by a switch, we could more robustly design, implement and test systems.
Work is being undertaken to develop a diagramming tool that outputs the XML
required by the system builder. This would mean that the whole system could be
seamlessly incorporated into existing design and development tools such as ROME [8].
This could be extended to develop techniques for distributing a parallel system over a
network of workstations or a Beowulf cluster.
Further consideration could also be given to the XML specifications. An XML
vocabulary might be developed that is richer than that presented. Such a vocabulary might
provide a compact way to express for example, the channels used as inputs to processes
where they become the Guards of an ALT construct.
Can we answer the question posed by the title of this paper in the affirmative? We
suggest that sufficient evidence has been presented and that this provides a real way
forward for promoting the design of systems involving concurrent and parallel components.
Acknowledgements
A colleague, Ken Chisholm, provided the requirement for the draughts tournament. The
helpful comments of the referees were gratefully accepted.
References
[1] Inmos Ltd, occam2 Programming Reference Manual, Prentice-Hall, 1988.
[2] C.A.R. Hoare, Communicating Sequential Processes. New Jersey: Prentice-Hall, 1985; available
electronically from http://www.usingcsp.com/cspbook.pdf.
[3] P.H. Welch, Process Oriented Design for Java – Concurrency for All,
http://www.cs.kent.ac.uk/projects/ofa/jcsp/jcsp.ppt, web site accessed 4/5/2005.
[4] G. Hilderink, A. Bakkers and J. Broenink, A Distributed Real-Time Java System Based on CSP, The Third
IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, ISORC 2000,
Newport Beach, California, pp.400-407, March 15-17, 2000.
[5] Groovy Developer’s Web Site, accessed 4/5/2005, groovy.codehaus.org.
[6] Quickstone Ltd, web site accessed 4/5/2005, www.quickstone.com.
[7] http://www.w3.org/TR/REC-xml/; http://www.w3.org/TR/xpath.
[8] K. Barclay and J. Savage, Object Oriented Design with UML and Java, Elsevier 2004; supporting tool
available from http://www.dcs.napier.ac.uk/~kab/jeRome/jeRome.html.
Communicating Process Architectures 2005
Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.)
IOS Press, 2005
On Issues of Constructing an Exception
Handling Mechanism for CSP-Based
Process-Oriented Concurrent Software†
Dusko S. JOVANOVIC, Bojan E. ORLIC, Jan F. BROENINK
Twente Embedded Systems Initiative,
Drebbel Institute for Mechatronics and Control Engineering,
Faculty of EE-Math-CS, University of Twente,
P.O.Box 217, 7500 AE, Enschede, the Netherlands
d.s.jovanovic@utwente.nl
Abstract. This paper discusses issues, possibilities and existing approaches for
fitting an exception handling mechanism (EHM) in CSP-based process-oriented
software architectures. After giving a survey on properties desired for a concurrent
EHM, specific problems and a few principal ideas for including exception handling
facilities in CSP-designs are discussed. As one of the CSP-based frameworks for
concurrent software, we extend the CT (Communicating Threads) library with exception handling facilities. The extensions result in two different EHM models whose compliance with the most important demands of concurrent EHMs (handling simultaneous exceptions, formalization of the mechanism and efficient implementation) is examined.
Introduction
By process-oriented architectures we mean, in principle, architectures in which a program’s algorithms are confined within processes that exchange data via channels. When based on CSP [1],
channels (communication relationships) are synchronous, following the rendezvous
principle; executional compositions among processes are ruled by the CSP constructs,
possibly represented as compositional relationships [2]. Today’s successors of the
programming language occam, which was the first to implement this programming model, are occam-like libraries for Java, C and C++ (the best known are the University of Twente
variants CTJ [3], CTC and CTC++ [2, 4] and the University of Kent variants JCSP [5],
CCSP [6] and C++CSP [7]). “Twente” variants are together referred to as CT
(Communicating Threads), and for this paper all experiments are worked out within that
framework. The general Twente CSP-based framework for concurrent embedded control
software is referred to as CSP/CT, which implies the use of those concepts of CSP that are
implemented in the CT and accompanying tools [8] in order to provide this particular
process-oriented software environment.
Recent work [9] is concerned with dependability aspects of the CSP/CT, which revives
interest in fault tolerance mechanisms for CSP/CT, and among them the exception handling
mechanism (EHM). Exception handling is considered “as the most powerful software fault-
tolerance mechanism” [10]. An exception is an indication that something out of the
ordinary has occurred which must be brought to the attention of the program which raised it
[11]. Practical results during the research history of thirty years ([12]) appeared as sophisticated EHMs in modern mainstream languages used for programming mission-critical systems, like C++, Java and Ada. This paper considers the exception handling concept on a methodological level of designing concurrent, CSP/CT process-oriented software.
† This research is supported by PROGRESS, the embedded system research program of the Dutch organization for Scientific Research, NWO, the Dutch Ministry of Economic Affairs and the Technology Foundation STW.
An EHM allows system designers to distribute dedicated corrective or alternative code
components at places within software composition that maximize effectiveness of error
recovery. Principles of EHM are based on provision of separate code segments or
components to which the execution flow is transferred upon an error occurrence in the
ordinary execution. Code segments or components that attempt error recovery (exception
handling) are called exception handlers. The main virtue of this way of handling errors in
software execution is a clear separation between normal (ordinary) program flow and parts
of software dedicated to correcting errors.
Because of alterations of a program’s execution flow due to exceptional operations,
EHMs additionally complicate understanding of concurrent software. In [13] issues of
exception handling in sequential systems are contrasted with those in concurrent systems,
especially the problems of concurrently raised exceptions resolution and simultaneous
error recovery.
Despite favourable properties in structuring error handling and the fact that EHM is the
only structured fault tolerance concept directly supported at the level of languages, it is not
so readily used in mission- or life-critical systems. The lack of tractable methods for testing or, even more desirable, formal verification of programs with exception handling is to be blamed
for hesitant use of this powerful concept. As clearly stated in [14], “since exceptions are
expected to occur rarely, the exception handling code of a system is in general the least
documented, tested, and understood part. Most of the design faults existing in a system
seem to be located in the code that handles exceptional situations.”
1 Properties of Exceptions and Exception Handling Mechanisms (EHMs)
1.1 EHM Requirements
1.1.1 General EHM Properties
The following list combines some general properties for evaluating quality and
completeness of an Exception Handling Mechanism (EHM) [13, 15, 16]. It should:
1. be simple to understand and use.
2. provide a clear separation of the ordinary program code flow and the code intended
for handling possible exceptions.
3. prevent an incomplete operation from continuing.
4. allow exceptions to contain all information about error occurrence that may be
useful for a proper handling, i.e. recovery action.
5. allow overhead in execution of exception handling code only in the presence of an exception – the exception handling burden on the error-free execution flow should be negligible.
6. allow a uniform treatment of exceptions raised both by the environment and by the
program.
7. be flexible to allow adding, changing and refining exceptions.
8. impose declaring exceptions that a component may raise.
9. allow nesting exception handling facilities.
1.1.2 Properties of a Concurrent EHM
The main difficulty of extending well-understood sequential EHMs for use in concurrent systems is that the occurrence of an exception in one of the collaborating processes certainly has consequences for the other (parallel composed) processes. For instance, an exceptional interruption in one process before a rendezvous communication certainly causes blocking of the other party in the communication, causing a deadlock-like situation [17]. It is likely that an exceptional occurrence detected in one process is of concern to the other processes.
In large parallel systems it may easily happen that independent exceptions occur
simultaneously: more than one exception may have been raised before the first one has been handled. The EHM, or more precisely the exception handlers, should detect these so-called concurrent
exception occurrences [13]. Also the same error may affect different processes during
different scenarios, so causing different but related exceptions. Such concurrent (and
possibly related) exceptions need to be treated in a holistic way. In these situations handling
exceptions one-by-one may be wrong – therefore in [13] the notion of exception hierarchy
has been introduced. The term “exception hierarchy” should be distinguished from the
hierarchy of exception handlers (which determines exception propagation, as addressed in
the remainder). Neither has it anything to do with a possible inheritance hierarchy of
exception types. The concept of exception hierarchy helps reasoning and acting in the case
of multiple simultaneously occurring exceptions: “if several exceptions are concurrently
raised, the exception used to activate the fault tolerance measures is the exception that is the
root of the smallest subtree containing all of the exceptions” [13].
For coping with the mentioned problems, a concurrent EHM should make sure that:
10. upon an exception occurrence in a process communicating in a parallel execution
with other processes, all processes dependent on that process should get informed
that the exception has occurred.
11. all participating processes simultaneously enter recovery activities specific for the
exception occurred.
12. in case of concurrent exception occurrences in different parallel composed
processes, a handler is chosen that treats the compound exceptional situation rather
than isolated exceptions.
1.1.3 Formal Verifiability and Real-time Requirements
In order to use any variant of the EHM models proposed in section 3 for high-integrity real-time systems (and to benefit from the CSP foundation of such a mechanism), the proposal should allow that:
13. the mechanism is formally described and verified. The system as a whole including
both normal and exception handling operating modes should be liable to formal
checking analysis.
14. the temporal behaviour of the EHM implementation is as much as possible
predictable/controlled. In real-time systems, execution time of the EHM part of an
application should be taken into account when calculating temporal properties of
execution scenarios.
1.2 Sources of Exceptions in CSP-based Architectures
Within the CSP/CT architecture, exceptional events may be expected to occur in the
following different contexts:
1. run-time environment:
a. Run-time libraries and OS – illegal memory address, memory allocation
problems, division by zero, overflow, etc…
b. CT library components can raise exceptions (e.g. network device drivers or
remote link drivers on expired timeout; array index outside the range,
dereferencing a null pointer).
2. invalid(ated) channels (i.e. broken communication link, malfunctioning device or
“poisoned” channels).
3. consistency checks inserted at certain places in a program can fail (e.g. a variable
can go outside a permitted range).
4. exceptions induced by exceptions raised in some of the processes important to the
execution of the process.
1.3 Mechanism of Exception Propagation
After being thrown, an exception propagates to the place it can be eventually caught (and
handled). A crucial mechanism of an exception handling facility is its propagation
mechanism, which determines how to find a proper exception handler for the type of
exception that has been thrown. Exception propagation always follows a hierarchical path,
and in languages different choices are made [15, 16, 18]: dynamically along the function
call chain or object creation chain or statically along the lexical hierarchy [19]. The
exception propagation mechanism is crucial in understanding the execution flow in
presence of exceptions and its complexity directly influences acceptance of the concept in
practice.
1.4 Termination and Resumption EHM Models
Occurrence of an exception causes interruption of the ordinary program flow and transfer of
control to an exception handler. The state of the exceptionally interrupted processes is also
a concern.
Depending on the flow of execution between the ordinary and exceptional operation of
software (in presence of an exception), the so-called handling models [15] can be
predominantly divided in two groups: termination and resumption EHM models.
In the termination model, further execution of an “exception-guarded” process, function
or code block interrupted by an exceptional occurrence is aborted and never resumed.
Instead, it is the responsibility of the exception handler to bring the system into such a state
that it can continue providing the originally specified (or gracefully degraded) service. If
the exception handler is not capable of providing such a service, it will throw the exception
further. Therefore, adopting the termination model has intrinsically an unwelcome feature:
the functionality of the interrupted process after the exceptional occurrence (termination)
point has to be repeated in the handler. It may easily happen that the entire job before the
exception occurrence has to be repeated. Therefore, the idea of allowing (also) the
resumption mechanism within an EHM does not lose any of its attractions.
In the resumption model, an exception handler will also be executed following the
exception occurrence; however, the context of the exceptionally interrupted process will be
preserved and after the exception is handled (i.e. the handler terminated), the process will
continue its execution at the same point where it was interrupted.
Both exception handling models initially gained equal attention, but practice made the termination model prevail for sequential EHMs, as it is much simpler to implement. It is adopted in all mainstream languages, such as C++, Java and Ada.
2 Exception Handling Facilities in CSP-based Architectures
The EHM models discussed in the next section are to address the concurrency-specific issues and are therefore aimed at being used at the level of processes in a process-oriented concurrent environment. They should be implementable in any language suitable for
implementing the CSP principles themselves.
It is another wish that the mechanism does not restrict use of sequential exception
handling facilities (if any) present in a chosen implementation language. If a process
encapsulates a complex algorithm that is originally developed with use of some native
exception handling facilities, there should be no need to modify the original code. As long
as the use of a native EHM is confined to internal use within a process, it does not clash
with the EHM on the process-level. Practically, this means that internally used exceptions
must all be handled within the process. However, as a last resort, a component should submit any unhandled exceptions to the process-level EHM, thereby complying with the process-level exception handling mechanism.
The principal difficulty with concerting error recovery in concurrent systems is posed by
the fact that an exception occurrence in one process is an asynchronous event with respect
to other processes. In a system designed as a parallel composition of many processes,
proper handling of an exception occurrence that takes place in one of the participating
processes might require that other dependent processes are interrupted as well.
Propagation of unhandled exceptions is performed according to the hierarchical structure
of exception handlers. In occam and the CSP/CT framework, the system is structured as a
tree-like hierarchy made of constructs as branches and custom user processes containing
only channel communications and pure computation blocks as leaves. A natural choice is to
reuse an existing hierarchical construct/process structure and to use processes and
constructs as basic exception handling units. This choice can be implemented in a few ways:
• every process/construct can be associated with an exception handler,
• extended, exception-aware versions of processes/constructs can be used instead of ordinary processes and constructs,
• a particular exception handling construct may be introduced.
Regardless of any particular implementation, upon unsuccessful exception handling at the process level, the exception will be thrown further to the scope of a construct.
Due to implementation issues, the termination model is preferred at the leaf-process
level in an application. The termination model applied at the construct level would mean
that prior to the execution of a construct-level exception handler all the subprocesses of the
construct would have to terminate. This can happen in several ways: one can choose to
wait till all subprocesses terminate (regularly or exceptionally) or force aborting further
execution of all subprocesses. In real-time systems, where timely reaction to unexpected events is very important, the latter may be an appropriate choice. Abandoning the
termination model (at the construct level) and implementing the resumption model is a
better option when an exception does not influence some subprocesses at all or influences
them in a way that can be handled without aborting the subprocesses. Using the resumption
model at the construct level would not imply that a whole construct has to be aborted in
order to handle the exception that propagated to the construct level.
2.1 Asynchronous Transfer of Control (ATC)
One way to implement the termination model is by an internal mechanism related to the
constructs that can force the execution environment to abort all subprocesses and release all
the resources they might be holding. This approach resembles Ada’s ATC – Asynchronous
Transfer of Control or asynchronous notification in Real-time Java. However, forcing
exceptional termination of all communicating, parallel composed processes poses a higher
risk of corrupting process states by an asynchronous abortion (therefore in the Ada
Ravenscar Profile [20] for high-integrity systems, the ATC is disabled). It is important to
state that such a mechanism should be made in a way that all aborted subprocesses are given a chance to finish in a proper state. This can be done by executing the associated
exception handlers for each subprocess.
2.2 Channel Poisoning
The other, more graceful, termination model is channel poisoning; sending a poison (or
reset) along channels in a CSP network is proposed in [21] as a mechanism for terminating
(or resetting) an occam network of processes. Processes that receive the poison spread it
further via all the channels they are connected to. Eventually all processes interconnected
via channels will receive the poison token and terminate. The method can be used for
implementing the termination model of constructs. In the CSP/CT framework this approach
is slightly modified as proposed in [2]: instead of passing the poison via the channels, the
idea is to poison (invalidate) the channels. Furthermore, in [9], it is proposed that any
attempt to access a poisoned channel by invoking its read/write operations will result in
throwing an exception in the context of the invoking process. Consequently the exception
handler associated with the process can handle the situation and/or poison other channels.
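To make the idea concrete, the following is a minimal sketch (in the Groovy/JCSP style of the previous paper, not the CT library API; all names are ours) of a channel that, once poisoned, answers every read, write or further poison attempt by throwing the exception that poisoned it:

class PoisonException extends RuntimeException {
    PoisonException(String reason) { super(reason) }
}

class PoisonableChannel {
    private poison = null     // the exception that poisoned this channel, if any
    private value             // simplified single-slot data holder

    synchronized void poison(e) {
        if (poison != null) throw poison   // already poisoned: report the first exception
        poison = e
        notifyAll()                        // release a partner blocked in a rendezvous
    }
    synchronized void write(v) {
        if (poison != null) throw poison
        value = v                          // real rendezvous synchronisation omitted
    }
    synchronized def read() {
        if (poison != null) throw poison
        return value                       // real rendezvous synchronisation omitted
    }
}

A handler that catches an exception in one process would call poison(...) on the channels connected to that process, so that a blocked or subsequently communicating partner terminates with the same exception.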
3 Architectures of EHM Models
Having in mind all the challenges for constructing a usable EHM for concurrent software,
the CSP architecture can be viewed as one offering an interesting environment for doing
that. In this part a few concepts are discussed with one eye on all the listed requirements,
among which a special concern is given to: handling of simultaneous exceptions, the
mechanism formalization and (timely) efficient implementation.
3.1 Formal Backgrounds of EHM
The first CSP construction that captures the behaviour when a process (Q) takes over after another process (P) signals a failure was conceived by Hoare as early as 1973 [22] as P otherwise Q.
Association of a process and its handler can be modelled as in Figure 1.
Figure 1. Exception relation between a process P and its exception handling process Q
In the graphical notation as implemented in the graphical gCSP tool [8], the exception
handling process Q (exception handling processes are represented as ellipses) is associated
with the exception-guarded process P (ordinary processes are rectangles) by a
compositional exception relationship [2], actually following Hoare’s “otherwise” principle. On similar grounds, there have been several attempts to use CSP to formalize exception
handling [2, 17, 23, 24]. However, all these attempts have been limited to formalizing the
basic flow of activity upon exceptional termination of one process for benefit of another
(thus without building a comprehensive mechanism that fulfils the aforesaid requirements for a concurrent EHM). Also, they did not work out an implementation in a practical programming language (with the exception of [2]). Common to all is that both the ordinary
operation and exceptional operation are encapsulated in processes. The compositionality of
the design is preserved by combining these processes by a construct.
Hoare eventually also catered for the basic termination principle with his interrupt operator (△) in [25], in a follow-up work [26] annotated with an exception event i (△i). Despite its name, the semantics of the △ operator is much closer to the termination model of exception handling than to what is today usually referred to as “interrupt handling”, since it implies termination of the left hand-side operand by an unconditional preemption by the right hand-side one. A true “interrupt” operator would be useful for modelling the resumption model of exception handling (as it actually was in the original proposal of the interrupt operator in [23]). In [25], yet another operator (alternation) may be used to describe resuming a process execution after execution of another process (however, this operator is not supported by the FDR model checker, while △ is). In a recent work [27]
a CSP-based algebra (with another variant of the Hoare’s exception-interrupt constructs) is
developed for long transactions threatened by exceptional events. The handling of
interrupts (exceptions) relies on the assumption that compensation for a wrongly taken act
is always possible. This assumption is too strong in the context of controlling mechanical
systems (with ever present real-time demands). Moreover, the concept focuses on undoing
wrong steps and not directly on fault tolerance.
Termination semantics is captured, besides Hoare’s △, also by the (virtually the same) except operator proposed in [23] and by the exception operator that appears in [2]. Whichever version is used for modelling the exceptional termination of a process P that gets preempted by the handler Q, it can be represented by a compositional hierarchy (Figure 2) that corresponds to Figure 1 as:
Figure 2. Compositional hierarchy of an exception construction
By “compositional hierarchy” we mean the way occam networks are built of processes and constructs (which are also processes). We find that the tree structure excellently captures this kind of executional composition [8].
3.2 The Exception Construct
In the semantics of the exception operator [2], the composition in Figure 2 is interpreted as follows: upon an exception occurrence in process P, an exception is thrown and P terminates; the exception is caught by the exception construct (ExC1) and forwarded to Q, which begins its execution (handling the exception).
The concept of using a construct for modelling exception handling has a favourable consequence for the mechanism of propagating (unhandled) exceptions: in a CSP network
with exception constructs, from the moment an exception is created and thrown by a
process, it propagates upwards along the compositional hierarchy until a proper handler is
found. Therefore, the propagation mechanism is clear and simple, since it follows the
compositional structure of the CSP/CT concurrent design.
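As an illustration only, the following is a minimal sketch of this behaviour for a single guarded process, written in the Groovy/JCSP style of the previous paper rather than in the CT library itself (the class name ExceptionConstruct and the use of a closure for the handler Q are our own assumptions):

class ExceptionConstruct implements CSProcess {
    guarded      // the exception-guarded process P
    handler      // a closure standing in for the exception handling process Q
    void run() {
        try {
            guarded.run()          // ordinary execution of P
        } catch (e) {              // P terminated by throwing an exception:
            handler.call(e)        // the construct catches it and forwards it to Q
        }
    }
}

If the handler cannot deal with the exception it simply rethrows it, so that the exception propagates to the next enclosing construct, following the compositional hierarchy described above.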
Instead of the process P in Figure 2, there may be a construct with multiple processes. If
the construct is an Alternative or Sequential one, the situation is the same as with a single
process: upon exceptional termination of one of the alternatively or sequentially composed
processes, the exception is caught by the exception construct and handled by the process Q.
However, in the case of the Parallel construct, there is a possibility that more than one
process ends up in an exceptional situation (and therefore terminates by throwing different
exceptions). Consider the situation in Figure 3.
Figure 3. Parallel construct under exception construct
Handler Q handles exceptions that may arise during execution of the parallel composition
of the processes P1, P2 and the exception construct ExC1 (actually, the exceptions thrown
by P3 and not handled by Q3). Here, the question is at which moment exceptions from P1 should be handled, provided that the exception occurrence happens before P2 finishes.
Moreover, what if P2 exceptionally terminates as well?
In the current implementation of the exception operator [2, 28], the exceptions that occurred in parallel composed processes are handled when the Parallel construct is terminated (i.e. when all parallel composed processes are terminated, successfully or exceptionally); for catching and handling all possible exceptions occurring in a parallel composition, a concept of exception set (a collection of exceptions) is introduced. After termination of Par1, handler Q gets an exception set object with all exceptions thrown by the child processes (P1, P2 and ExC1 – any exceptions not handled by Q3 are rethrown).
The concept of exception set has another useful role. From its contents a handler can
reconstruct the exception hierarchy in case of simultaneous (concurrent) exceptions.
3.2.1 Channel Poisoning and the Exception Construct
Sending a poison along channels as proposed in [21] is a mechanism for terminating the
network or subnetwork. In the discussed EHM model proposal the poisoning mechanism
assumes that channels can be turned into a poisoned state in which they respond to attempts at writing or reading by throwing back exceptions. In this way two problems are
solved.
The first problem is blocking of a rendezvous partner when the other one has
exceptionally terminated. Consider the following situation (Figure 4, Figure 5):
Figure 4. Rendezvous (potential blocking)
Figure 5. Hierarchical representation of Figure 4
Processes P1 and P2 are both “exception-guarded” by exception constructs ExC1 and
ExC2 (i.e. by handlers Q1 and Q2 respectively), which are then parallel composed. Processes
P1 and P2 communicate over channel c. Should it happen that one of the processes
exceptionally terminates (before the rendezvous point), the other process stays blocked on
channel c. For that reason, handlers Q1 and Q2 should in principle turn the channel into the
poisoned state, so that the other party terminates with the same exception which caused the
first process to terminate. To recall, this exception is thrown on an attempt of reading or
writing. Moreover, an already poisoned channel on further poisoning attempts (which are
function calls) returns the poisoning exception, for a reason that will be explained soon. If
however the other rendezvous partner is already blocked on the channel, it should be
released at the act of poisoning (and then end up with the exception).
For this scheme to work, it is clear that all communicating parallel-composed processes
should be “exception-guarded”, i.e. sheltered behind exception constructs. In that case, an
elegant possibility for concerted handling of simultaneous exceptions comes automatically. On
an exceptional occurrence in one of the communicating processes, provided all are
accompanied by handlers that poison all channels connected to “their” processes, the
information about the exception spreads within the parallel composition. In the case of
simultaneous exceptions, different exceptions spread from different places
(processes) under a parallel construct. It will then inevitably happen that a handler
tries to poison a channel that is already poisoned (with another exception). Because
channels respond to a poisoning attempt by returning the exception that poisoned them
first, the handlers obtain information about the occurrence of simultaneous exceptions. The
handler of the parallel construct will ultimately be able to reconstruct the complete
exception hierarchy.
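The following sketch of a poisonable one-place channel illustrates the behaviour described above (a simplified illustration, not the CT library channel): read and write throw the stored exception once the channel is poisoned, a partner blocked on the channel is released, and a second poisoning attempt returns the exception that poisoned the channel first.

#include <condition_variable>
#include <exception>
#include <mutex>
#include <optional>

// Rendezvous-style one-place channel that can be poisoned with an exception.
template <typename T>
class PoisonableChannel {
    std::mutex m;
    std::condition_variable cv;
    std::optional<T> slot;
    std::exception_ptr poison_;          // non-null once poisoned
public:
    // Poison the channel; returns the exception already stored if it was
    // poisoned before, so the caller can detect simultaneous exceptions.
    std::exception_ptr poison(std::exception_ptr e) {
        std::lock_guard<std::mutex> lk(m);
        if (poison_) return poison_;     // already poisoned: report the first exception
        poison_ = e;
        cv.notify_all();                 // release a partner blocked in read()/write()
        return nullptr;
    }
    void write(T value) {
        std::unique_lock<std::mutex> lk(m);
        if (poison_) std::rethrow_exception(poison_);
        slot = std::move(value);
        cv.notify_all();
        cv.wait(lk, [&] { return !slot || poison_; });   // wait for the reader (or poison)
        if (poison_ && slot) { slot.reset(); std::rethrow_exception(poison_); }
    }
    T read() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return slot.has_value() || poison_; });
        if (!slot && poison_) std::rethrow_exception(poison_);
        T v = std::move(*slot);
        slot.reset();
        cv.notify_all();                 // release the writer
        return v;
    }
};

Note that poison() returning the earlier exception, rather than overwriting it, is what allows a handler to distinguish its own poisoning of a channel from an earlier poisoning with a different exception.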
However, this mechanism suffers from two major problems. The first is the possible
(unbounded) delay between the occurrence of a (first) exception and its handling at the level of the
parallel construct. Remember that all parallel-composed processes must terminate before
the handler of the parallel construct gets a chance to analyse and handle the exception (set).
Some processes may spend a long time before reaching the rendezvous on a (poisoned)
channel and consequently being terminated! Asynchronous Transfer of
Control has already been noted as unwelcome in high-integrity systems. An additional
penalty is that, for the mechanism of rethrowing exceptions from poisoned channels to
work, it is necessary to clone exceptions (so that every handler can consider the total
exceptional situation), or at least to keep a rigorous administration of (pointers to) the
exceptions that have occurred.
The other problem inherent in the mechanism of channel poisoning is that the poison
spread is naturally bounded by the interconnection network of channels and not by the
boundaries of constructs. Some channels may run to processes that belong to other
constructs; ultimately this may lead to termination of the whole application, which
contradicts the idea of exception handling as the most powerful fault-tolerance
mechanism. In [21] the possibility of inserting special processes on the boundaries of the
subnetworks subject to poisoning is proposed, but that means introducing completely
non-functional components into the system. The other option is model-based (tool-based)
control of the poison spreading.
3.3 Interrupt Operator △i, Environmental Exception Manager and Exception Channels
In the channel poisoning concept, propagation of an exception event was based on the existing
communication channels. Better suited to formal modelling would be a termination model
based on a concept that treats exceptions as explicit events communicated among
exception handlers via explicit exception channels. This change in paradigm makes formal
modelling and checking more straightforward.
Let us consider a Parallel construct Par containing three subprocesses: P1, P2 and P3 (see
Figure 6). Process-level exception handlers associated with these processes are Q1, Q2 and
Q3 respectively. In the scope of the exception construct ExC the exception handler
associated with the construct Par is process Q.
Figure 6. Design rule for fault-tolerant parallel composition with environmental care
Using the interrupt operator, this can be written in CSP as (each in, n = 1…3, is an explicit exception
event):
Par = (P1 △i1 Q1) || (P2 △i2 Q2) || (P3 △i3 Q3).
In turn, the relation between processes Par and Q is modelled in the same way:
Par △i Q = ((P1 △i1 Q1) || (P2 △i2 Q2) || (P3 △i3 Q3)) △i Q,
where i is the Par-level exception event.
If an exception occurs during execution of process P1, the process will be aborted and
the associated exception handler Q1 will be invoked. This can be seen as an
implicit occurrence of the exception event i1. If the exception cannot be handled by Q1, it
should be communicated to the higher EHM level. Since such higher-level EHM
facilities are represented by some process from the environment playing the role of a higher-
level exception handler, this can be implemented as communication via channels. One can
imagine that, following the premature termination of a process, a higher EHM component can
throw exceptions in the contexts of the other affected processes. In the sense of CSP, this is
equivalent to interrupting those processes by inducing an event i2 that will cause, say, process
P2 to be aborted and wake up its exception handler (Q2). In this way, graceful termination
(giving a chance for a process-state clean-up) can be modelled by the CSP-standard
interrupt operator △i.
Thus, from the point of view of the interrupt operator △i, aborting a process (P2) is nothing
more than communicating the exception event (i2) to the exception-handling process (Q2).
Indeed, the termination mechanism can actually be implemented in this way: special
exception channels can be dedicated to this purpose. Communication via an exception
channel is in fact an encapsulating mechanism used to throw an exception in the context
of the affected processes (P2 and P3), forcing them to abort further execution and forcing the
execution of the associated exception handlers (Q2 and/or Q3) instead.
Although their implementation is more complicated, from a synchronization point of view
these channels are real rendezvous channels. This is because, during ordinary operation,
processes Q1, Q2 and Q3 are always ready to accept the events i1, i2 and i3
produced by the environment.
Writing to an exception channel would pass data about the cause of the exception to the
process-level exception handler. In addition, the process must be unblocked if it is waiting
on a channel or semaphore. Afterwards, when the scheduler grants CPU time to that process,
instead of a regular context switch to the stack of the process, it would switch to that stack
unwound to the proper point for the execution of the exception handler.
When all process-level handlers Q1, Q2 and Q3 terminate, the construct (Par) will
terminate unsuccessfully by throwing an exception to its parent exception construct (ExC).
As a consequence, the exception handler Q will be executed.
But who will produce the events i1, i2 and i3? The exception handler Q cannot do that,
because it is executed only after the construct and all of its subprocesses have already
terminated. One can imagine an additional environment process (let us name it the
environmental exception manager – EEM) that does this. This process would have to run in
parallel with the guarded construct or with the whole application. Furthermore, because the
exception-handling response time is important, this newly introduced process should have a
higher priority than the top application construct. For the running example:
PriPar (EEM, Par △i Q).
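A sketch of the environmental exception manager idea under these assumptions (the names, the plain-thread model and the "interrupt every other process" rule are illustrative; a real EEM would sit in the kernel and run under prioritised scheduling): process-level handlers report unhandled exceptions to the EEM, which then induces interrupt events in the contexts of the other guarded processes via their registered interrupt actions.

#include <condition_variable>
#include <exception>
#include <functional>
#include <map>
#include <mutex>
#include <queue>

// Notification sent by a process-level handler that could not handle an exception.
struct ExceptionReport {
    int process_id;
    std::exception_ptr cause;
};

// Environmental exception manager: runs alongside the guarded construct
// (conceptually at higher priority) and induces interrupt events i_n in the
// contexts of the other affected processes.
class EnvironmentalExceptionManager {
    std::queue<ExceptionReport> inbox;
    std::mutex m;
    std::condition_variable cv;
    // Interrupt actions registered per process: "throw i_n in the context of P_n".
    // Register all processes before the construct starts running.
    std::map<int, std::function<void(std::exception_ptr)>> interrupters;
public:
    void register_process(int id, std::function<void(std::exception_ptr)> interrupt) {
        interrupters[id] = std::move(interrupt);
    }
    void report(ExceptionReport r) {               // called by process-level handlers
        { std::lock_guard<std::mutex> lk(m); inbox.push(std::move(r)); }
        cv.notify_one();
    }
    // One handling step: apply the (application-specific) rule; here the rule is
    // simply "interrupt every other guarded process with the same cause".
    void handle_one() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return !inbox.empty(); });
        ExceptionReport r = std::move(inbox.front());
        inbox.pop();
        lk.unlock();
        for (auto& [id, interrupt] : interrupters)
            if (id != r.process_id)
                interrupt(r.cause);                // induces event i_id for process id
    }
};

Whether the encoded rule aborts every sibling (termination) or only some of them (resumption) is exactly the policy decision discussed below.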
Every process is by default equipped with an exception handler which, if not redefined
by the user, only throws all exceptions further to the environmental exception manager.
While in the previous concept it was not necessary for all processes to have associated
handlers in order for their exceptions to be handled at the construct level, in this proposal it is
the rule that all processes must have attached handlers (as in Figure 6). One side-effect of this
decision is that it becomes possible to define both a process and its exception handler as
two functions of one object. Normally, in the occam-like libraries, processes are
implemented as objects, but this was just a design choice: from the CSP point of view
there is no obstacle to realizing a process as merely a function. When a process and its
exception handler are defined in separate objects, the process has to pack all the data
needed for exception handling into an exception object in order to pass it to its exception
handler. Having them inside the same object is, however, more convenient for real-time
systems. Besides reducing memory usage, dynamic memory allocation can be
avoided, since an exception handler can directly inspect the data members defining the state of
the process.
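A sketch of the "process and handler as two functions of one object" idea (the class is a hypothetical example, not part of the CT library): because the handler is a member of the same object, it can inspect the process's data members directly, so no exception object has to be allocated just to carry that state.

#include <exception>

// Guarded process: the run() body and the exception handler live in one object,
// so the handler can read the process state directly; no dynamically allocated
// exception object is needed to carry it across.
class MotorController {                 // hypothetical example process
    double setpoint = 0.0;
    double lastMeasurement = 0.0;       // state the handler may need
public:
    void run() {
        // ... control loop; may throw on a sensor or channel failure ...
    }
    void handleException(const std::exception& cause) {
        // Directly inspect data members, e.g. drive the output to a safe value
        // based on lastMeasurement, then rethrow or swallow the exception.
        (void)cause;
        setpoint = 0.0;
    }
};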
The concept of the exception manager opens up yet another possibility: thanks to the
careful management of the exception events, the resumption feature becomes viable. The
manager can encode application-specific exception handling rules, and these rules need
not terminate all subprocesses. The termination and the resumption model can thus
be combined in one application.
3.3.1 Treating Complex Exceptional Situations
In order to make an appropriate handling decision about the occurrence of simultaneous
exceptions in multiple processes (sometimes caused by the same physical fault), it is often
necessary to check the state of certain resources internal to concurrently executing
constructs. Obviously, handling such complex exception events requires some kind of
exception hierarchy check and application-specific rules encoded for all possible
combinations. If the number of those rules and combinations is very large, as is the case
in complex systems, the environmental exception manager can be implemented as a
complex process containing several environmental exception handlers covering different
functional views of the system or different classes of exceptional scenarios. It is also
possible to create one environmental exception handler for every construct in the system.
Communicating Process Architectures 2005 Concurrent Systems Engineering Series Jan F Broenink

  • 1. Communicating Process Architectures 2005 Concurrent Systems Engineering Series Jan F Broenink download https://ebookbell.com/product/communicating-process- architectures-2005-concurrent-systems-engineering-series-jan-f- broenink-2135978 Explore and download more ebooks at ebookbell.com
  • 2. Here are some recommended products that we believe you will be interested in. You can click the link to download. Communicating Process Architectures 2006 Volume 64 Concurrent Systems Engineering Series Concurrent Systems Engineering P H Welch https://ebookbell.com/product/communicating-process- architectures-2006-volume-64-concurrent-systems-engineering-series- concurrent-systems-engineering-p-h-welch-2134864 Communicating Process Architectures 2008 Wotug31 Volume 66 Concurrent Systems Engineering Series Ph Welch https://ebookbell.com/product/communicating-process- architectures-2008-wotug31-volume-66-concurrent-systems-engineering- series-ph-welch-2409310 Communicating Process Architectures 2007 Wotug30 A A Mcewan https://ebookbell.com/product/communicating-process- architectures-2007-wotug30-a-a-mcewan-1403788 Communicating Process Architectures 2011 Wotug33 Ph Welch https://ebookbell.com/product/communicating-process- architectures-2011-wotug33-ph-welch-2495232
  • 3. Process Algebra Equational Theories Of Communicating Processes J C M Baeten https://ebookbell.com/product/process-algebra-equational-theories-of- communicating-processes-j-c-m-baeten-1494074 Writing Public Policy A Practical Guide To Communicating In The Policy Making Process 3rd Edition Catherine F Smith https://ebookbell.com/product/writing-public-policy-a-practical-guide- to-communicating-in-the-policy-making-process-3rd-edition-catherine-f- smith-51712020 Business Communication Process And Product Brief Edition 7th Edition 7th Edition Mary Ellen Guffey https://ebookbell.com/product/business-communication-process-and- product-brief-edition-7th-edition-7th-edition-mary-ellen- guffey-58730348 Business Communication Process Product Brief Sixth Brief Canadian Edition Griffin https://ebookbell.com/product/business-communication-process-product- brief-sixth-brief-canadian-edition-griffin-22038440 Business Communication Process And Product 7th Edition Mary Ellen Guffey https://ebookbell.com/product/business-communication-process-and- product-7th-edition-mary-ellen-guffey-2388440
  • 7. Concurrent Systems Engineering Series Series Editors: M.R. Jane, J. Hulskamp, P.H. Welch, D. Stiles and T.L. Kunii Volume 63 Previously published in this series: Volume 62, Communicating Process Architectures 2004 (WoTUG-27), I.R. East, J. Martin, P.H. Welch, D. Duce and M. Green Volume 61, Communicating Process Architectures 2003 (WoTUG-26), J.F. Broenink and G.H. Hilderink Volume 60, Communicating Process Architectures 2002 (WoTUG-25), J.S. Pascoe, P.H. Welch, R.J. Loader and V.S. Sunderam Volume 59, Communicating Process Architectures 2001 (WoTUG-24), A. Chalmers, M. Mirmehdi and H. Muller Volume 58, Communicating Process Architectures 2000 (WoTUG-23), P.H. Welch and A.W.P. Bakkers Volume 57, Architectures, Languages and Techniques for Concurrent Systems (WoTUG-22), B.M. Cook Volumes 54–56, Computational Intelligence for Modelling, Control & Automation, M. Mohammadian Volume 53, Advances in Computer and Information Sciences ’98, U. Güdükbay, T. Dayar, A. Gürsoy and E. Gelenbe Volume 52, Architectures, Languages and Patterns for Parallel and Distributed Applications (WoTUG-21), P.H. Welch and A.W.P. Bakkers Volume 51, The Network Designer’s Handbook, A.M. Jones, N.J. Davies, M.A. Firth and C.J. Wright Volume 50, Parallel Programming and JAVA (WoTUG-20), A. Bakkers Volume 49, Correct Models of Parallel Computing, S. Noguchi and M. Ota Volume 48, Abstract Machine Models for Parallel and Distributed Computing, M. Kara, J.R. Davy, D. Goodeve and J. Nash Volume 47, Parallel Processing Developments (WoTUG-19), B. O’Neill Volume 46, Transputer Applications and Systems ’95, B.M. Cook, M.R. Jane, P. Nixon and P.H. Welch Transputer and OCCAM Engineering Series Volume 45, Parallel Programming and Applications, P. Fritzson and L. Finmo Volume 44, Transputer and Occam Developments (WoTUG-18), P. Nixon Volume 43, Parallel Computing: Technology and Practice (PCAT-94), J.P. Gray and F. Naghdy Volume 42, Transputer Research and Applications 7 (NATUG-7), H. Arabnia Volume 41, Transputer Applications and Systems ’94, A. de Gloria, M.R. Jane and D. Marini Volume 40, Transputers ’94, M. Becker, L. Litzler and M. Tréhel ISSN 1383-7575
  • 8. Communicating Process Architectures 2005 WoTUG-28 Edited by Jan F. Broenink University of Twente, The Netherlands Herman W. Roebbers Philips TASS, The Netherlands Johan P.E. Sunter Philips Semiconductors, The Netherlands Peter H. Welch University of Kent, United Kingdom and David C. Wood University of Kent, United Kingdom Proceedings of the 28th WoTUG Technical Meeting, 18–21 September 2005, Technische Universiteit Eindhoven, The Netherlands Amsterdam • Berlin • Oxford • Tokyo • Washington, DC
  • 9. © 2005 The authors. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without prior written permission from the publisher. ISBN 1-58603-561-4 Library of Congress Control Number: 2005932067 Publisher IOS Press Nieuwe Hemweg 6B 1013 BG Amsterdam Netherlands fax: +31 20 687 0019 e-mail: order@iospress.nl Distributor in the UK and Ireland Distributor in the USA and Canada IOS Press/Lavis Marketing IOS Press, Inc. 73 Lime Walk 4502 Rachael Manor Drive Headington Fairfax, VA 22032 Oxford OX3 7AD USA England fax: +1 703 323 3668 fax: +44 1865 750079 e-mail: iosbooks@iospress.com LEGAL NOTICE The publisher is not responsible for the use which might be made of the following information. PRINTED IN THE NETHERLANDS
  • 10. Communicating Process Architectures 2005 v Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.) IOS Press, 2005 © 2005 The authors. All rights reserved. Preface We are at the start of a new CPA conference. Communicating Process Architectures 2005 marks the first time that this conference has been organized by an industrial company (Phil- ips) in co-operation with a university (Technische Universiteit Eindhoven). We see that this also marks the growing awareness of the ideas characterized by ‘Communicating Processes Architecture’ and their growing adoption by industry beyond their traditional base in safety-critical systems and security. The complexity of modern computing systems has become so great that no one person – maybe not even a small team – can understand all aspects and all interactions. The only hope of making such systems work is to ensure that all components are correct by design and that the components can be combined to achieve scalability. A crucial property is that the cost of making a change to a system depends linearly on the size of that change – not on the size of the system being changed. Of course, this must be true whether that change is a matter of maintenance (e.g. to take advantage of upcoming multiprocessor hardware) or the addition of new functionality. One key is that system composition (and disassembly) intro- duces no surprises. A component must behave consistently, no matter the context in which it is used – which means that component interfaces must be explicit, published and free from hidden side-effect. Our view is that concurrency, underpinned by the formal process algebras of Hoare’s Communicating Sequential Processes and Milner’s π-Calculus, pro- vides the strongest basis for the development of technology that can make this happen. Once again we offer strongly refereed high-quality papers covering many differing as- pects: system design and implementation (for both hardware and software), tools (concur- rent programming languages, libraries and run-time kernels), formal methods and applica- tions. These papers are presented in a single stream so you won’t have to miss out on any- thing. As always we have plenty of space for informal contact and we don’t have to worry about the bar closing at half ten! We are pleased to have keynote speakers such as Ad Peeters of Handshake Solutions and Guy Broadfoot of Verum, proving that you can actually make profitable business using CSP as your guiding principle in the design of concurrent systems, be they hardware or software. The third keynote by IBM Chief Architect Peter Hofstee assures us that CSP was also used in the design of the communication system of the recent Cell processor, jointly developed by IBM, Sony and Toshiba. The fourth keynote talk is by Paul Stravers of Phil- ips Semiconductors on the Wasabi multiprocessor architecture. We anticipate that you will have a very fruitful get-together and hope that it will pro- vide you with as much inspiration and motivation as we have always experienced. We thank the authors for their submissions, the Programme Committee for their hard work in reviewing the papers and Harold Weffers and Maggy de Wert (of TUE) in making the arrangements for this meeting. Finally, we are especially grateful to Fred Barnes (of the University of Kent) for his essential technical expertise and time in the preparation of these proceedings. 
Herman Roebbers (Philips TASS) Peter Welch and David Wood (University of Kent) Johan Sunter (Philips Semiconductors) Jan Broenink (University of Twente)
  • 11. vi Programme Committee Prof. Peter Welch, University of Kent, UK (Chair) Dr. Alastair Allen, Aberdeen University, UK Prof. Hamid Arabnia, University of Georgia, USA Dr. Fred Barnes, University of Kent, UK Dr. Richard Beton, Roke Manor Research Ltd, UK Dr. John Bjorndalen, University of Tromso, Norway Dr. Marcel Boosten, Philips Medical Systems, The Netherlands Dr. Jan Broenink, University of Twente, The Netherlands Dr. Alan Chalmers, University of Bristol, UK Prof. Peter Clayton, Rhodes University, South Africa Dr. Barry Cook, 4Links Ltd., UK Ms. Ruth Ivimey-Cook, Stuga Ltd., UK Dr. Ian East, Oxford Brookes University, UK Dr. Mark Green, Oxford Brookes University, UK Mr. Marcel Groothuis, University of Twente, The Netherlands Dr. Michael Goldsmith, Formal Systems (Europe) Ltd., Oxford, UK Dr. Kees Goossens, Philips Research, The Netherlands Dr. Gerald Hilderink, Enschede, The Netherlands Mr. Christopher Jones, British Aerospace, UK Prof. Jon Kerridge, Napier University, UK Dr. Tom Lake, InterGlossa, UK Dr. Adrian Lawrence, Loughborough University, UK Dr. Roger Loader, Reading, UK Dr. Jeremy Martin, GSK Ltd., UK Dr. Stephen Maudsley, Bristol, UK Mr. Alistair McEwan, University of Surrey, UK Prof. Brian O'Neill, Nottingham Trent University, UK Prof. Chris Nevison, Colgate University, New York, USA Dr. Denis Nicole, University of Southampton, UK Prof. Patrick Nixon, University College Dublin, Ireland Dr. James Pascoe, Bristol, UK Dr. Jan Pedersen, University of Nevada, Las Vegas Dr. Roger Peel, University of Surrey, UK Ir. Herman Roebbers, Philips TASS, The Netherlands Prof. Nan Schaller, Rochester Institute of Technology, New York, USA Dr. Marc Smith, Colby College, Maine, USA Prof. Dyke Stiles, Utah State University, USA Dr. Johan Sunter, Philips Semiconductors, The Netherlands Mr. Oyvind Teig, Autronica Fire and Security, Norway Prof. Rod Tosten, Gettysburg University, USA Dr. Stephen Turner, Nanyang Technological University, Singapore Prof. Paul Tynman, Rochester Institute of Technology, New York, USA Dr. Brian Vinter, University of Southern Denmark, Denmark Prof. Alan Wagner, University of British Columbia, Canada
  • 12. vii Dr. Paul Walker, 4Links Ltd., UK Mr. David Wood, University of Kent, UK Prof. Jim Woodcock, University of York, UK Ir. Peter Visser, University of Twente, The Netherlands
  • 14. ix Contents Preface v Herman Roebbers, Peter Welch, David Wood, Johan Sunter and Jan Broenink Programme Committee vi Interfacing with Honeysuckle by Formal Contract 1 Ian East Groovy Parallel! A Return to the Spirit of occam? 13 Jon Kerridge, Ken Barclay and John Savage On Issues of Constructing an Exception Handling Mechanism for CSP-Based Process-Oriented Concurrent Software 29 Dusko S. Jovanovic, Bojan E. Orlic and Jan F. Broenink Automatic Handel-C Generation from MATLAB® and Simulink® for Motion Control with an FPGA 43 Bart Rem, Ajeesh Gopalakrishnan, Tom J.H. Geelen and Herman Roebbers JCSP-Poison: Safe Termination of CSP Process Networks 71 Bernhard H.C. Sputh and Alastair R. Allen jcsp.mobile: A Package Enabling Mobile Processes and Channels 109 Kevin Chalmers and Jon Kerridge CSP++: How Faithful to CSPm? 129 W.B. Gardner Fast Data Sharing within a Distributed, Multithreaded Control Framework for Robot Teams 147 Albert Schoute, Remco Seesink, Werner Dierssen and Niek Kooij Improving TCP/IP Multicasting with Message Segmentation 155 Hans Henrik Happe and Brian Vinter Lazy Cellular Automata with Communicating Processes 165 Adam Sampson, Peter Welch and Fred Barnes A Unifying Theory of True Concurrency Based on CSP and Lazy Observation 177 Marc L. Smith The Architecture of the Minimum intrusion Grid (MiG) 189 Brian Vinter Verification of JCSP Programs 203 Vladimir Klebanov, Philipp Rümmer, Steffen Schlager and Peter H. Schmitt
  • 15. x Architecture Design Space Exploration for Streaming Applications through Timing Analysis 219 Maarten H. Wiggers, Nikolay Kavaldjiev, Gerard J.M. Smit and Pierre G. Jansen A Foreign-Function Interface Generator for occam-pi 235 Damian J. Dimmich and Christian L. Jacobsen Interfacing C and occam-pi 249 Fred Barnes Interactive Computing with the Minimum intrusion Grid (MiG) 261 John Markus Bjørndalen, Otto J. Anshus and Brian Vinter High Level Modeling of Channel-Based Asynchronous Circuits Using Verilog 275 Arash Saifhashemi and Peter A. Beerel Mobile Barriers for occam-pi: Semantics, Implementation and Application 289 Peter Welch and Fred Barnes Exception Handling Mechanism in Communicating Threads for Java 317 Gerald H. Hilderink R16: A New Transputer Design for FPGAs 335 John Jakson Towards Strong Mobility in the Shared Source CLI 363 Johnston Stewart, Paddy Nixon, Tim Walsh and Ian Ferguson gCSP occam Code Generation for RMoX 375 Marcel A. Groothuis, Geert K. Liet and Jan F. Broenink Assessing Application Performance in Degraded Network Environments: An FPGA-Based Approach 385 Mihai Ivanovici, Razvan Beuran and Neil Davies Communication and Synchronization in the Cell Processor (Invited Talk) 397 H. Peter Hofstee Homogeneous Multiprocessing for Consumer Electronics (Invited Talk) 399 Paul Stravers Handshake Technology: High Way to Low Power (Invited Talk) 401 Ad Peeters If Concurrency in Software Is So Simple, Why Is It So Hard? (Invited Talk) 403 Guy Broadfoot Author Index 405
  • 16. Communicating Process Architectures 2005 Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.) IOS Press, 2005 Interfacing with Honeysuckle by Formal Contract Ian EAST Dept. for Computing, Oxford Brookes University, Oxford OX33 1HX, England ireast@brookes.ac.uk Abstract. Honeysuckle [1] is a new programming language that allows systems to be constructed from processes which communicate under service (client-server or master-servant) protocol [2]. The model for abstraction includes a formal definition of both service and service-network (system or component) [3]. Any interface between two components thus forms a binding contract which will be statically verified by the compiler. An account is given of how such an interface is constructed and expressed in Honeysuckle, including how it may encapsulate state, and how access may be shared and distributed. Implementation is also briefly discussed. Keywords. Client-server protocol, compositionality, interfacing, component-based software development, deadlock-freedom, programming language. Introduction The Honeysuckle project has two motivations. First, is the need for a method by which to design and construct reactive (event-driven) and concurrent systems free of pathological be- haviour, such as deadlock. Second, is the desire to design a new programming language that builds on the success of occam [4] and profits from all that has been learned in two decades of its use [5]. occam already has one worthy successor in occam-π which extends the original lan- guage to support the development of distributed applications [6]. Both processes and chan- nels thus become mobile. Honeysuckle is more conservative and allows only objects mobil- ity. Emphasis has instead been placed on securing integrity within the embedded application domain. Multiple offspring are testimony to the innovative vigour of occam. Any successor must preserve its salient features. occam facilitates the natural expression of concurrency without semaphore or monitor. It possesses transparent, and mostly formal, semantics, based upon the theory of Communicating Sequential Processes (CSP) [7,8]. It is also compositional, in that it is rendered inherently free of side-effects by the strict separation of value and action (the changing of value). occam also had its weaknesses, that limited its commercial potential. It offered poor support for the expression of data structure and none for dynamic (abstract) data types. While processes afford encapsulation and allow effective system modularity, there is also no support for project (source code) modularity. One cannot collect related definitions in any kind of reusable package. Also, the ability only to copy a value, and not pass access to an object, to a parallel process caused inefficiency, and lay in contrast with the passing of parameters to a sequential procedure. Perhaps the most significant factor limiting the take-up of occam has been the additional threats to security against error that come with concurrency; most notably, deadlock. Jeremy Martin successfully brought together theoretical work on deadlock-avoidance using CSP with the effective design patterns for process-oriented systems introduced by Peter Welch et al. © 2005 The authors. All rights reserved. 1
  • 17. I. East / Interfacing with Honeysuckle [9,10,11,12]. The result was a set of formal design rules, each proven to guarantee deadlock- freedom within a CSP framework. By far the most widely applicable design rule relies on a formal service (client-server) protocol to define a model for system architecture. This idea originated with Per Brinch- Hansen [2] in the study of operating systems. Service architecture has a wide domain of application because it can abstract a large variety of systems, including any that can be ex- pressed using channels, as employed by occam. However, architecture is limited to hierar- chical structure because of a design rule that requires the absence of any directed circuit in service provision, in order to guarantee freedom from deadlock. A formal model for the abstraction of systems with service architecture has been pre- viously given [3], based upon the rules employed by Martin. This separates the abstraction of service protocol and service network component, and shows how the definition of system and component can be unified (a point to be revisited in the next section). Furthermore, the model incorporates prioritisation, which not only offers support for reactive systems (that typically prioritise event response), but also liberates system architecture from the constraint of hierarchical (tree) structure. Finally, a further proof of the absence of deadlock was given, subject to a new design rule. Prioritised service architecture (PSA) presents the opportunity to build a wide range of reactive/concurrent systems, guaranteed free of deadlock. However, it is too much to expect any designer to take responsibility for the static verification of many formal design rules. Specialist skills would be required. Even then, mistakes would be made. In order to ease design and implementation, a new programming language is required. The compiler can then automate all verification. Honeysuckle seeks to combine the ambition for such a language with that for a succes- sor to occam. It renders systems with PSA simple to derive and express, while retaining a formal guarantee of deadlock-freedom, without resort to any specialist skill or tool beyond the compiler. Its design is now complete and stable. A compiler is under construction and will be made available free of charge. This paper presents a detailed account of the programming of service protocol and the construction of an interface for system or component in Honeysuckle. In so doing it continues from the previous language overview [1]. We begin by considering the problem of modular software composition and the limitations of existing object- and process-oriented languages. 1. The Problem of Composition While occam is compositional in the construction of a monolithic program, it is not so with regard to system modularity. In order to recursively compose or decompose a system, we require: • some components that are indivisible • that compositions of components are themselves valid components • that behaviour of any component is manifest in its interface, without reference to any internal structure Components whose definition complies with all the above conditions may be termed compositional with regard to some operator or set of operators. As alluded to earlier, it has been shown how service network components (SNCs) may be defined in such a way as to satisfy the first two requirements when subject to parallel composition [3]. 
A corollary is that any system forms a valid component, since it is (by definition) a com- position. Another corollary, vital to all forms of engineering, is that it is then possible to sub- stitute any component with another, possessing the same interface, without affecting either 2
  • 18. I. East / Interfacing with Honeysuckle design or compliance with specification. Software engineering now aspires to this principle [13]. Clearly, listing a series of procedures, with given parameters, or a series of channels, with associated data types, does little to describe object or process. To substitute one process with another that simply sports the same channels would obviously be asking for trouble. A much richer language is called for, in which to describe an interface. One possibility is to resort to Floyd-Hoare logic [14,15,16] and impose formal pre- and post-conditions on each procedure (‘method’) or channel, and maintain invariants associated with each component (process or object class). However, this would require effectively the development of a language to suit each individual application and is somewhat cumbersome and expensive. It also requires special skill. Perhaps for that reason, such an explicitly for- mal approach has not found favour in much of industry. Furthermore, no other branch of engineering resorts to such powerful methods. Meyer introduced the expression design by contract [17], to which he devotes an entire chapter of his textbook on object-oriented programming [18]. This would seem to be just a particular usage of invariants and pre- and post-conditions, but it does render clear the principle that some protocol must precede composition and be verifiable. The difficulty that is peculiar to software, and that does not apply (often) to, say, me- chanical engineering, is, of course, that a component is likely to be capable of complex be- haviour, responding in a unique and perhaps extended manner to each possible input com- bination. Not many mechanical systems possess memory and the ability to change their re- sponse in perhaps a highly non-linear fashion. However, many electronic systems do possess significantly complex behaviour, yet have interfaces specified without resort to full first-order predicate calculus. Electronic engineers expect to be able to substitute components according to somewhat more specific interface description. One possibility for software component interface description, that is common with hard- ware, is a formal communication protocol detailing the order in which messages are ex- changed, together with their type and structure. In this way, a binding and meaningful con- tract is espoused. Verification can be performed via the execution of an appropriate “state- machine” (finite-state automaton (FSA)). Marcel Boosten proposed just such a mechanism to resolve problems encountered upon integration under component-based software development [19]. These included race condi- tions, re-entrant call-backs, and inconsistency between component states. He interposed an object between components that would simulate an appropriate FSA. Communication protocol can provide an interface that is both verifiable and sufficiently rich to at least reduce the amount of logic necessary for an adequate definition, if not eliminate it altogether. In Honeysuckle, an interface comprises a list of ports, each of which corresponds to one end (client or provider) of a service and forms an attribute of the component. Each service defines a communication protocol that is translated by the compiler into an appropriate FSA. Conformance to that protocol is statically verifiable by the compiler. Static verification is to be preferred wherever possible for the obvious reason that errors can be safely corrected. 
Dynamic verification can be compared to checking your boat after setting out to sea. Should you discover a hole, there is little you can then do but sink. Dis- covering an error in software that is deployed and running rarely leaves an opportunity for effective counter-measures, still less rectification. Furthermore, dynamic verification imposes a performance overhead that may well prove significant, especially for low-latency reactive applications. It is thus claimed here that (prioritised) service architecture is an ideal candidate for secure component-based software development (CBSD). 3
  • 19. I. East / Interfacing with Honeysuckle Honeysuckle also provides balanced abstraction between object and process. Both static and dynamic object composition may be transparently expressed, without recourse to any explicit reference (pointer). Distributed applications are supported with objects mobile be- tween processes. Together, object and service abstraction affords a rich language in which to express the interface between processes composed in either sequence or parallel. 2. Parallel Composition and Interfacing in Honeysuckle 2.1. Composition and Definition Honeysuckle interposes “clear blue water” between system and project modularity. Each definition of process, object, and service, is termed an item. Items may be gathered into a collection. Items and collections serve the needs of separated development and reuse. Processes and objects are the components from which systems are composed, and to- gether serve the needs of system abstraction, design, and maintenance. Every object is owned by a single process, though ownership may be transferred between processes at run-time. Here, we are concerned only with the programming of processes and their service interface. A program consists of one or more item definitions, including at least one of a process. For example: definition of process greet imports service console from Environment process greet : { interface client of console defines String value greeting : "Hello world!n" send greeting to console } This defines a unique process greet that has a single port consuming a service named console as interface. The console service is assumed provided by the system environment, which is effectively another process composed in parallel (which must include “provider of console” within its interface description). Figure 1 shows how both project and system modularity may be visualized or drawn. p.greet s.console greet console Figure 1. Visualizing both project and system modularity. The left-hand drawing shows the item defining process greet importing the definition of service console. On the right, the process is shown running as a client of that service. Braces (curly brackets) denote the boundary of block scope, not sequential construction, as in C or Java. They may be omitted where no context is given, and thus no indication of scope required. 4
  • 20. I. East / Interfacing with Honeysuckle A process may be defined inline or offline in Honeysuckle with identical semantics. When defined inline, any further (offline) definitions must be imported above the description of the parent process. ... { interface client of console defines String greeting : "Hello world!n" send greeting to console } ... An inline definition is achieved simultaneously with command issue (greet!). A process thus defined can still be named, facilitating recursion. For example, a proce- dure to create a new document in, say, a word processor might include the means by which a user can create a further document: ... process new_document : { ... context ... ... ... new_document } ... 2.2. Simple Services If all the console service does is eat strings it is sent, it could be very simply defined: definition of service console imports object class String from StandardTypes service console : receive String This is the sort of thing a channel can do — simply define the type of value that can be transmitted. Any such simple protocol can be achieved using a single service primitive. This is termed a simple service. Note that it is expressed from the provider perspective. The client must send a string. One further definition is imported, of a string data type from a standard library — part of the program environment. It was not necessary for the definition of process greet to directly import that of String. Definitions in Honeysuckle are transparent. Since that of greet can see that of console, it can also see that of String. For this reason, no standard data type need be imported to an application program. If more than one instance of a console service is required then one must define a class of service, perhaps called Console: definition of service class Console ... It is often very useful to communicate a “null datum” — a signal: 5
  • 21. I. East / Interfacing with Honeysuckle definition of service class Sentinel service class Sentinel : send signal This example makes an important point. A service definition says nothing about when the signal is sent. That will depend on that of the process that provides it. Any service simply acts as a template governing the communication undertaken between two (or more) processes. Signal protocol illustrates a second point, also of some importance. The rules governing the behaviour of every service network component (SNC) [3] do not require any service to necessarily become available immediately. This allows signal protocol to be used to synchro- nize two processes, where either may arrive first. 2.3. Service Construction and Context Service protocol can provide a much richer interface, and thus tighter component specifica- tion, by constraining the order in which communications occur. Perhaps the simplest example is of handshaking, where a response is always made to any request: definition of service class Console imports object class String from Standard_Types service class Console : sequence receive String send String Any process implementing a compound service, like the above, is more tightly con- strained than with a simple service. A rather more sophisticated console might be subject to a small command set and would behave accordingly: service class Console : { defines Byte write : #01 Byte read : #02 names Byte command sequence receive command if command write acquire String read sequence receive Cardinal transfer String ... Now something strange has happened. A service has acquired state. While strange it may seem, there is no cause for alarm. Naming within a service is ignored within any process that implements it (either as client or provider). It simply allows identification between references within a service definition, and so allows a decision to be taken according the intended object or value. This leaves control over all naming with the definition of process context. 6
  • 22. I. East / Interfacing with Honeysuckle One peculiarity to watch out for is illustrated by the following: service class Business : { ... sequence acquire Order send Invoice if acquire Payment transfer Item otherwise skip } It might at first appear that payment will never be required and that service will always terminate after the dispatch of (a copy of) the invoice. Such is not the case. The above def- inition allows either payment to be acquired, then an item transferred, or no further transac- tion between client and provider. It simply endorses either as legitimate. Perhaps the busi- ness makes use of a timer service and decides according to elapsed time whether to accept or refuse payment if/when offered. Although it makes sense, any such protocol is not legitimate because it does not conform to the formal conditions defining service protocol [3]. The sequence in which communica- tions take place must be agreed between client and provider. Agreement can be made as late as desired but it must be made. Here, at the point of selection (if) there is no agreement. Selection and repetition must be undertaken according to mutually recorded values, which is why a service may require state. A compound service may also be constructed via repetition. It might seem unnecessary, given that a service protocol is inherently repeatable anyway, but account must be taken of other associated structure. For example, the following might be a useful protocol for copying each week between two diaries: service diary : { ... sequence repeat for each WeekDay send day send week } It also serves as a nice illustration of the Honeysuckle use of an enumeration as both data type and range. 2.4. Implementation and Verification Any service could be implemented in occam, using at most two channels — one in each direction of data flow. Like a channel, a service is implemented using rendezvous. Because, within a service, communications are undertaken strictly in sequence, only a single ren- dezvous is required. As with occam, the rendezvous must be initially empty and then occu- pied by the first party to become ready, which must render apparent the location of, or for, any message and then wait. Each service can be verified via a finite-state automaton (FSA) augmented with a loop iteration counter. At process start, each service begins in an initial state and moves to its 7
  • 23. I. East / Interfacing with Honeysuckle successor every time a communication is encountered matching that expected. Upon process termination, each automaton must be in a final “accepting” state. A single state marks any repetition underway. Transition from that state awaits completion of the required number of iterations, which may depend upon a previous communication (within the same service). Selection is marked by multiple transitions leaving the state adopted on seeing the preceding communication. A separate state-chain follows each option. Static verification can be complete except for repetition terminated according to state incorporated within the service. The compiler must take account of this and generate an appropriate warning. Partial verification is still possible at compile-time, though the final iteration count must be checked at run-time. 3. Shared and Distributed Services 3.1. Sharing By definition, a service represents a contract between two parties only. However, the question of which two can be resolved dynamically. In the use of occam, it became apparent that a significant number of applications required the same superstructure, to allow services to be shared in this way. occam 3 [20] sought to address both the need to establish a protocol governing more than one communication at a time and the need for shared access. Remote call channels effected a remote procedure call (RPC), and thus afforded a protocol specifying a list of parameters received by a subroutine, followed by a result returned. Once defined, RPCs could be shared in a simple and transparent manner. occam 3 also added shared groups of simple channels via yet another mechanism, somewhat less simple and transparent. The RPC is less flexible than service protocol, which allows specifying communications in either direction in any order. Furthermore, multiple services may be interleaved; multiple calls to a remote procedure cannot, any more than they can to a local one. Lastly, the RPC is added to the existing channel abstraction of communication, complicating the model signifi- cantly. In Honeysuckle, services are all that is needed to abstract communication, all the way from the simplest to the most complex protocol. Honeysuckle allows services to be shared by multiple clients at the point of declaration. No service need be explicitly designed for sharing or defined as shared. { ... network shared console parallel { interface provider of console ... } ... console clients } Any client of a shared service will be delayed while another is served. Multiple clients form an implicit queue. 8
3.2. Synchronized Sharing

Experience with occam and the success of bulk-synchronous parallel processing strongly suggest the need for barrier synchronisation. Honeysuckle obliges with the notion of synchronized sharing, where every client must consume the service before any can reinitiate consumption, and the cycle begin again.

  ...
  network
    synchronized shared console
  ...

Like the sharing in occam 3, synchronized sharing in Honeysuckle is superstructure. It could be implemented directly via the use of an additional co-ordinating process but is believed useful and intuitive enough to warrant its own syntax. The degree of system abstraction possible is thus raised.

3.3. Distribution

Sharing provides a many-to-one configuration between clients and a single provider. It is also possible, in Honeysuckle, to describe both one-to-many and many-to-many configurations. A service is said to be distributed when it is provided by more than one process.

  ...
  network
    distributed validation
  ...

Note that the service thus described may remain unique and should be defined accordingly. Definition of an entire class of service is not required. (By now, the convention may be apparent whereby a lower-case initial indicates uniqueness and an upper-case one a class, with regard to any item — object, process, or service.) The utility of this is to simplify the design of many systems and reduce the code required for their implementation. Again, the degree of system abstraction possible is raised.
A many-to-many configuration may be expressed by combining two qualifiers:

  ...
  network
    distributed shared validation
  ...

When distributed, a shared service cannot be synchronized. This would make no sense, as providers possess no intrinsic way of knowing when a cycle of service, around all clients, is complete.

3.4. Design and Implementation

Neither sharing nor distribution influences the abstract interface of a component. Consideration is only necessary when combining components. For example, the designer may choose to replicate a number of components, each of which provides service A, and declare provision distributed between them. Similarly, they may choose a component providing service B and declare provision shared between a number of clients.
A shared service requires little more in implementation than an unshared one. Two rendezvous (locations) are required. One is used to synchronize access to the service and the other each communication within it. Any client finding the provider both free and ready (both rendezvous occupied) may simply proceed and complete the initial communication. After this, it must clear both rendezvous. It may subsequently ignore the service rendezvous until
  • 25. I. East / Interfacing with Honeysuckle completion. Any other client arriving while service is in progress will find the provider un- ready (service rendezvous empty). It then joins a queue, at the head of which is the service rendezvous. The maximum length of the queue is just the total number of clients, defined at compile-time. Synchronized sharing requires a secondary queue from which elements are prevented from joining the primary one until a cycle is complete. A shared distributed service requires multiple primary queues. The physical interface that implements sharing and shared distribu- tion is thus a small process, encapsulating one or more queues. 4. Conclusion Honeysuckle affords powerful and fully component-wise compositional system design and programming, yet with a simple and intuitive model for abstraction. It inherits and continues the simplicity of occam but has added the ability to express the component (or system) interface in much greater detail, so that integration and substitution should be more easily achieved. Support is also included for distributed and bulk-synchronous application design, with mobile objects and synchronized sharing of services. Service (client-server) architecture is proving extremely popular in the design of dis- tributed applications but is currently lacking an established formal basis, simple consistent model for abstraction, and programming language. Honeysuckle and PSA would seem timely and well-placed. Though no formal semantics for prioritisation yet appears to have gained both stability and wide acceptance, this looks set to change [21]. A complete programming language manual is in preparation, as is a working compiler. These will be completed and published as soon as possible. Acknowledgements The author is grateful for enlightening conversation with Peter Welch, Jeremy Martin, Sharon Curtis, and David Lightfoot. He is particularly grateful to Jeremy Martin, whose earlier work formed the foundation for the Honeysuckle project. That, in turn, was strongly reliant on deadlock analysis by, and the failure-divergence-refinement (FDR) model of, Bill Roscoe, Steve Brookes, and Tony Hoare. References [1] Ian R. East. The Honeysuckle programming language: An overview. IEE Software, 150(2):95–107, 2003. [2] Per Brinch Hansen. Operating System Principles. Automatic Computation. Prentice Hall, 1973. [3] Ian R. East. Prioritised service architecture. In I. R. East and J. M. R. Martin et al., editors, Communicating Process Architectures 2004, Series in Concurrent Systems Engineering, pages 55–69. IOS Press, 2004. [4] Inmos. occam 2 Reference Manual. Series in Computer Science. Prentice Hall International, 1988. [5] Ian R. East. Towards a successor to occam. In A. Chalmers, M. Mirmehdi, and H. Muller, editors, Proceedings of Communicating Process Architecture 2001, pages 231–241, University of Bristol, UK, 2001. IOS Press. [6] Fred R. M. Barnes and Peter H. Welch. Communicating mobile processes. In I. R. East and J. M. R. Martin et al., editors, Communicating Process Architectures 2004, pages 201–218. IOS Press, 2004. [7] C. A. R. Hoare. Communicating Sequential Processes. Series in Computer Science. Prentice Hall International, 1985. [8] A. W. Roscoe. The Theory and Practice of Concurrency. Series in Computer Science. Prentice-Hall, 1998. [9] Peter H. Welch. Emulating digital logic using transputer networks. In Parallel Architectures and Languages – Europe, volume 258 of LNCS, pages 357–373. Springer Verlag, 1987. 10
  • 26. I. East / Interfacing with Honeysuckle [10] Peter H. Welch, G. Justo, and Colin Willcock. High-level paradigms for deadlock-free high performance systems. In R. Grebe et al., editor, Transputer Applications and Systems ’93, pages 981–1004. IOS Press, 1993. [11] Jeremy M. R. Martin. The Design and Construction of Deadlock-Free Concurrent Systems. PhD thesis, University of Buckingham, Hunter Street, Buckingham, MK18 1EG, UK, 1996. [12] Jeremy M. R. Martin and Peter H. Welch. A design strategy for deadlock-free concurrent systems. Transputer Communications, 3(3):1–18, 1997. [13] Clemens Szyperski. Component Software: Beyond Object-Oriented Programming. Component Software Series. Addison-Wesley, second edition, 2002. [14] R. W. Floyd. Assigning meanings to programs. In American Mathematical Society Symp. in Applied Mathematics, volume 19, pages 19–31, 1967. [15] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM, 12(10):576–580, 1969. [16] C. A. R. Hoare. Proof of correctness of data representations. Acta Informatica, 1:271–281, 1972. [17] Bertrand Meyer. Design by contract. Technical Report TR-EI-12/CO, ISE Inc., 270, Storke Road, Suite 7, Santa Barbara, CA 93117 USA, 1987. [18] Bertrand Meyer. Object-Oriented Software Construction. Prentice Hall, second edition, 1997. [19] Marcel Boosten. Formal contracts: Enabling component composition. In J. F. Broenink and G. H. Hilderink, editors, Proceedings of Communicating Process Architecture 2003, pages 185–197, University of Twente, Netherlands, 2003. IOS Press. [20] Geoff Barrett. occam 3 Reference Manual. Inmos Ltd., 1992. [21] Adrian E. Lawrence. Triples. In I. R. East and J. M. R. Martin et al., editors, Proceedings of Communicating Process Architectures 2004, Series in Concurrent Systems Engineering, pages 157–184. IOS Press, 2004. 11
  • 28. Communicating Process Architectures 2005 Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.) IOS Press, 2005 Groovy Parallel! A Return to the Spirit of occam? Jon KERRIDGE, Ken BARCLAY and John SAVAGE The School of Computing, Napier University, Edinburgh EH10 5DT {j.kerridge, k.barclay, j.savage} @ napier.ac.uk Abstract. For some years there has been much activity in developing CSP-like extensions to a number of common programming languages. In particular, a number of groups have looked at extensions to Java. Recent developments in the Java platform have resulted in groups proposing more expressive problem solving environments. Groovy is one of these developments. Four constructs are proposed that support the writing of parallel systems using the JCSP package. The use of these constructs is then demonstrated in a number of examples, both concurrent and parallel. A mechanism for writing XML descriptions of concurrent systems is described and it is shown how this is integrated into the Groovy environment. Finally conclusions are drawn relating to the use of the constructs, particularly in a teaching and learning environment. Keywords. Groovy, JCSP, Parallel and Concurrent Systems, Teaching and Learning Introduction The occam programming language [1] provided a concise, simple and elegant means of describing computing systems comprising multiple processes running on one or more processors. Its theoretical foundations lay in the Communicating Sequential Process algebra of Hoare [2]. A practical realization of occam was the Inmos Transputer. With the demise of that technology the utility of occam as a generally available language was lost. The Communicating Process Architecture community kept the underlying principles of occam alive by a number of developments such as Welch’s JCSP package [3] and Hilderink’s CTJ[4]. Both these developments captured the concept of CSP in a Java environment. The former is supported by an extensive package that also permits the creation of systems that operate over a TCP/IP network. The problem with the Java environment is that it requires a great deal of support code to create what is, in essence, a simple idea. Groovy [5] is a new scripting language being developed for the Java platform. Groovy is compatible with Java at the bytecode level. This means that Groovy is Java. It has a Java friendly syntax that makes the Java APIs easier to use. As a scripting language it offers an ideal way in which to glue components. Groovy provides native syntactic support for many constructs such as lists, maps and regular expressions. It provides for dynamic typing which can immediately reduce the code bulk. The Groovy framework removes the heavy lifting otherwise found in Java. Thus the goal of the activity reported in this paper was to create a number of simple constructs that permitted the construction of parallel systems more easily without the need for the somewhat heavyweight requirements imposed by Java. This was seen as particularly important when the concepts are being taught. By reducing the amount that has to be written, students may be able to grasp more easily the underlying principles. © 2005 The authors. All rights reserved. 13
  • 29. J. Kerridge et al. / Groovy Parallel 1. The Spirit of Groovy In August 2003 the Groovy project was initiated at codehaus [5], an open-source project repository focussed on practical Java applications. The main architects of the language are two consultants, James Strachan and Bob McWhirter. In its short life Groovy has stimulated a great deal of interest in the Java community. So much so that it is likely to be accepted as a standard language for the Java platform. Groovy is a scripting language based on several languages including Java, Ruby, Python and Smalltalk. Although the Java programming language is a very good systems programming language, it is rather verbose and clumsy when used for systems integration. However, Groovy with a friendly Java-based syntax makes it much easier to use the Java Application Programming Interface. It is ideal for the rapid development of small to medium sized applications. Groovy offers native syntax support for various abstractions. These and other language features make Groovy a viable alternative to Java. For example, the Java programmer wishing to construct a list of bank accounts would first have to create an object of the class ArrayList, then send it repeated add messages to populate it with Account objects. In Groovy, it is much easier: accounts = [ new Account(number : 123, balance : 1200), new Account(number : 456, balance : 400)] Here, the subscript brackets [ and ] denote a Groovy List. Observe also the construction of the Account objects. This is an example of a named property map. Each property of the Account object is named along with its initial value. Maps (dictionaries) are also directly supported in Groovy. A Map is a collection of key/value pairs. A Map is presented as a comma-separated list of key : value pairs as in: divisors = [4 : [2], 6 : [2, 3], 12 : [2, 3, 4, 6]] This Map is keyed by an integer and the value is a List of integers that are divisors of the key. Closures, in Groovy, are a powerful way of representing blocks of executable code. Since closures are objects they can be passed around as, for example, method parameters. Because closures are code blocks they can also be executed when required. Like methods, closures can be defined in terms of one or more parameters. One of the most common uses for closures is to process a collection. We can iterate across the elements of a collection and apply the closure to them. A simple parameterized closure is: greeting = { name -> println "Hello ${name}" } The code block identified by greeting can be executed with the call message as in: greeting.call ("Jon") // explicit call greeting ("Ken") // implicit call Several List and Map methods accept closures as an actual parameter. This combination of closures and collections provides Groovy with some very neat solutions to common problems. The each method, for example, can be used to iterate across the elements of a collection and apply the closure, as in: [1, 2, 3, 4].each { element -> print "${element}; " } 14
  • 30. J. Kerridge et al. / Groovy Parallel will print 1; 2; 3; 4; ["Ken" : 21, "John" : 22, "Jon" : 25].each { entry -> if(entry.value > 21) print "entry.key, " } will print John, Jon, 2. The Groovy Parallel Constructs Groovy constructs are required that follow explicit requirements of CSP-based systems. These are direct support for parallel, alternative and the construction of guards reflecting that Groovy is a list-based environment whereas JCSP is an array-based system [5]. 2.1 The PAR Construct The PAR construct is simply an extension of the existing JCSP Parallel class that accepts a list of processes. The class comprises a constructor that takes a list of processes (processList) and casts them as an array of CSProcess as required by JCSP. class PAR extends Parallel { PAR(processList){ super( processList.toArray(new CSProcess[0]) ) } } 2.2 The ALT construct The ALT construct extends the existing JCSP Alternative class with a list of guards. The class comprises a constructor that takes a list of guards (guardList) and casts them as an array of Guard as required by the JCSP. The main advantage of this constructor in use is that the channels that form the guards of the ALT are passed to a process as a list of channel inputs and thus it is not necessary to create the Guard structure in the process definition. The list of guards can also include CSTimer and Skip. class ALT extends Alternative { ALT (guardList) { super( guardList.toArray(new Guard[0]) ) } } 2.3 The CHANNEL_INPUT_LIST Construct The CHANNEL_INPUT_LIST is used to create a list of channel input ends from an array of channels. This list can then be passed as a guardList to an ALT. This construct only needs to be used for channel arrays used between processes on a single processor. Channels that connect processes running on different processes (NetChannels) can be passed as a list without the need for this construct. class CHANNEL_INPUT_LIST extends ArrayList{ CHANNEL_INPUT_LIST(array) { super( Arrays.asList(Channel.getInputArray(array)) ) } } 15
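To make explicit how little these constructs add on top of JCSP, and how much Java boilerplate they hide, the corresponding plain Java might look roughly like the sketch below. This is our own comparison, not code from the Groovy library: it assumes only the Quickstone JCSP calls already visible in the class definitions above (Channel.createOne2One, Channel.getInputArray, Parallel, Alternative), the exact channel types are assumptions, and the process array is left as an empty placeholder.

  import java.util.Arrays;
  import java.util.List;
  import com.quickstone.jcsp.lang.*;

  public class RawJcspSketch {
      public static void main(String[] args) {
          One2OneChannel[] a = Channel.createOne2One(5);

          // CHANNEL_INPUT_LIST: wrap the channel input ends as a List.
          List channelList = Arrays.asList(Channel.getInputArray(a));

          // ALT: convert a List of guards back into the Guard[] JCSP expects.
          Guard[] guards = (Guard[]) channelList.toArray(new Guard[0]);
          Alternative alt = new Alternative(guards);
          // ... alt.select() would then be used inside a process definition.

          // PAR: convert a List of processes back into a CSProcess[].
          List processList = Arrays.asList(new CSProcess[] { /* processes */ });
          CSProcess[] procs = (CSProcess[]) processList.toArray(new CSProcess[0]);
          new Parallel(procs).run();
      }
  }

The pre-generics casts and array conversions are exactly the "heavy lifting" that the PAR, ALT and CHANNEL_INPUT_LIST wrappers remove from student-facing code.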
  • 31. J. Kerridge et al. / Groovy Parallel 2.4 The CHANNEL_OUTPUT_LIST Construct The CHANNEL_OUTPUT_LIST is used to construct a list of channel output ends form an array of such channels and provides the converse capability to a CHANNEL_INPUT_LIST. It should be noted that all the channel output ends have to be accessed by the same process. class CHANNEL_OUTPUT_LIST extends ArrayList{ CHANNEL_OUTPUT_LIST(array) { super( Arrays.asList(Channel.getOutputArray(array)) ) } } 3. Using the Constructs In this section we demonstrate the use of these constructs, first in a typical student learning example based upon the use of a number of sender processes having their outputs multiplexed into a single reading process. The second example is a little more complex and shows a system that runs over a network of workstations and provides the basic control for a tournament in which a number of players of different capabilities play the same game (draughts) against each other and this is then used in an evolutionary system to develop a better draughts player. 3.1 A Multiplexing System 3.1.1 The Send Process The specification of the class SendProcess is brief and contains only the information required. This aids teaching and learning and also understanding the purpose of the process. The properties of the class are defined as cout and id (lines 2 and 3) without any type information. The property cout will be passed the channel used to output data from this process and id is an identifier for this process. The method run is then defined. 01 class SendProcess implements CSProcess { 02 cout // the channel used to output the data stream 03 id // the identifier of this process 04 void run() { 05 i = 0 06 1.upto(10) { // loop 10 times 07 i = i + 1 08 cout.write(i + id) // write the value of id + i to cout 09 } 10 } 11 } There is no necessity for a constructor for the class or the setter and getter methods as these are all created automatically by the Groovy system. The run method simply loops 10 times outputting the value of id to which has been added the loop index variable i (lines 4 to 8). Thus the explanation of its operation simply focuses on the communication aspects of the process. 3.1.2 The Read Process The ReadProcess is similarly brief and in this version extracts the SendProcess identification (s) and value (v) from the value that is sent to the ReadProcess. It should also be noted that types might be explicitly defined, as in the case of s (line 18), in order to 16
  • 32. J. Kerridge et al. / Groovy Parallel achieve the desired effect. It is assumed that identification values are expressed in thousands. 12 class ReadProcess implements CSProcess { 13 cin // the input channel 14 void run() { 15 while (true) { 16 d = cin.read() // read from cin 17 v = d % 1000 // v the value read 18 int s = d / 1000 // from sender s 19 println "Read: ${v} from sender ${s}" // print v and s 20 } 21 } 22 } 3.1.3 The Plex Process The Plex process is a classic example of a multiplex process that alternates over its input channels (cin) and then reads a selected input, which is immediately written to the output channel (cout) (line 31). The input channels are passed as a list to the process and these are then passed to the ALT construct (line 27) to create the JCSP Alternative. 23 class Plex implements CSProcess { 24 cin // channel input list 25 cout // output channel onto which inputs are multiplexed 26 void run () { 27 alt = new ALT(cin) 28 running = true 29 while (running) { 30 index = alt.select () 31 cout.write (cin[index].read()) 32 } 33 } 34 } 3.1.4 Running the System on a Single Processor Figure 1, shows a system comprising any number of SendProcesses together with a Plex and a ReadProcess. Figure 1. The Multiplex Process Structure In a single processor invocation, five channels a, connect the SendProcesses to the Plex process and are declared using the normal call to the Channel class of JCSP (line 35). Similarly, the channel b, connects the Plex process to the ReadProcess (line 36). A CHANNEL_INPUT_LIST construct is used to create the list of channel inputs that will be passed to the Plex process and which will be ALTed over (line 37). b a SendProcess SendProcess SendProcess Plex ReadProcess 17
  • 33. J. Kerridge et al. / Groovy Parallel The Groovy map abstraction is used (line 38) to create idMap that relates the instance number of the SendProcess to the value that will be passed as its id property. A list (sendList) of SendProcesses is then created (lines 39-41) using the collect method on a list. The list comprises five instances of the SendProcess with the cout and id properties set to the values indicated, using a closure applied to each member of the set [0,1,2,3,4]. A processList is then created (lines 42-45) comprising the sendList plus instances of the Plex and ReadProcess that have their properties initialized as indicated. The flatten() method has to be applied because sendList is already a List that has to be removed for the PAR constructor to work. Finally a PAR construct is created (line 46) and run. In section 4 a formulation that removes the need for flatten() is presented. 35 a = Channel.createOne2One (5) 36 b = Channel.createOne2One () 37 channelList = new CHANNEL_INPUT_LIST (a) 38 idMap = [0: 1000, 1: 2000, 2:3000, 3:4000, 4:5000] 39 sendList = [0,1,2,3,4].collect 40 {i->return new SendProcess ( cout:a[i].out(), 41 id:idMap[i]) } 42 processList = [ sendList, 43 new Plex (cin : channelList, cout : b.out()), 44 new ReadProcess (cin : b.in() ) 45 ].flatten() 46 new PAR (processList).run() 3.1.5 Running the System in Parallel on a Network To run the same system shown in Figure 1, on a network, with each process being run on a separate processor, a Main program for each process is required. 3.1.5.1 SendMain SendMain is passed the numeric identifier (sendId) for this process (line 47) as the zero’th command line argument. A network node is then created (line 48) and connected to a default CNSServer process running on the network. From the sendId, a string is created that is the name of the channel that this SendProcess will output its data on and a One2Net channel is accordingly created (line 51). A list containing just one process is created (line 52) that is the invocation of the SendProcess with its properties initialized and this is passed to a PAR constructor to be run (line 53). 47 sendId = Integer.parseInt( args[0] ) 48 Node.getInstance().init(new TCPIPNodeFactory ()) 49 int sendInstance = sendId / 1000 50 channelInstance = sendInstance - 1 51 outChan = CNS.createOne2Net ( "A" + channelInstance) 52 pList = [ new SendProcess ( id : sendId, cout : outChan ) ] 53 new PAR(pList).run() 3.1.5.2 PlexMain PlexMain is passed the number of SendProcesses as a command line argument (line 54), as there will be this number of input channels to the Plex process. These input channels are created as a list of Net2One channels (lines 57-59) having the same names as were created for each of the SendProcesses. As this is already a list there is no need to obtain the input ends of the channels, as this is implicit in the creation of Net2One channels. The Plex outChan is created as a One2Net channel with the name B (line 60) and the Plex process is then run in a similar manner as each of the SendProcesses (lines 61, 62). 18
  • 34. J. Kerridge et al. / Groovy Parallel 54 inputs = Integer.parseInt( args[0] ) 55 Node.getInstance().init(new TCPIPNodeFactory ()) 56 inChans = [] // an empty list of net channels 57 for (i in 0 ... inputs ) { 58 inChans << CNS.createNet2One ( "A" + i ) // append the channels 59 } 60 outChan = CNS.createOne2Net ( "B" ) 61 pList = [ new Plex ( cin : inChans, cout : outChan ) ] 62 new PAR (pList).run() 3.1.5.3 ReadMain ReadMain requires no command line arguments. It simply creates a network node (line 63), followed by a Net2One channel with the same name as was created for PlexMain’s output channel (line 64) and the ReadProcess is then invoked in the usual manner. 63 Node.getInstance().init(new TCPIPNodeFactory ()) 64 inChan = CNS.createNet2One ( "B" ) 65 pList = [ new ReadProcess ( cin : inChan ) ] 66 new PAR (pList).run() 3.1.6 Summary In the single processor case, each process is interleaved on a single processor. In the multi- processor case each process is run on a separate processor and it is assumed that CNSServer [6] is executing somewhere on the network. 3.2 A Tournament Manager The Tournament System, see Figure 2, is organized as a set of Board processes that each run a game in the tournament on a different processor. The Board processes receive information about the game they are to play from an Organiser process. The results from the Board processes are returned via a ResultMux process running on the same processor as the Organiser process. In order that the system operates in a Client-Server[6] mode each Board process is considered to be a client process and the combination of the Organiser and ResultMux processes is considered to be the Server. Figure 2 The Tournament System The system requires that data be communicated as a set of GameData and ResultData objects. The system, as defined, cannot be executed on a single processor system as due account of the copying of network communicated objects, which have to implement M2O O2M ResultMux Organiser Board Board W Tournament R 19
  • 35. J. Kerridge et al. / Groovy Parallel Serializable, is taken in the design. More importantly, the use of an internal channel between two processes has to be considered and a reply channel is utilized to overcome the fact that an object reference is passed between the ResultMux and Organiser processes. 3.2.1 The Data Objects Two data objects are used within the system, GameData holds information concerning the player identities and the playing weights associated with each player. A state (line 72) property is used to indicate whether the object holds playing data or is being used to indicate the end of the Tournament. 67 class GameData implements Serializable { 68 p1 // id of player 1 69 p2 // id of player 2 70 w1 // list of weights for player 1 71 w2 // list of weights for player 2 72 state // string containing data or end 73 } The ResultData object is used to communicate results from the Board processes back to the Organiser process. The use of each property of the object is identified in the corresponding comments. The board on which the game is played is required (line 79) so the Organiser process can send another game to the Board process immediately. The state property (line 80) is used to indicate one of three states, namely; the board has been initialized waiting for a game, the object contains the results of a game and the tournament is finishing. 74 class ResultData implements Serializable { 75 p1 // player 1 identifier 76 p2 // player 2 identifier 77 result1V2 // result of game for p1 V p2 78 result2V1 // result of game for p2 V p1 79 board // board used 80 state // String containing init or result or end 81 } 3.2.2 The Board Process The Board process is a client process and has been constructed so that an output to the Organiser in the form of a result.write() (lines 96, 103, 119) communication is always followed immediately by a work.read() (line 98). The initialization code with its output is immediately followed, in the main loop, by the required input operation. The main loop comprises two sections of an if-statement, which finish with either the outputting of a result or a termination message. The latter does not need to receive an input from the Organiser process because the Board process will itself have been terminated. In the normal case, the outputting of a result at the end of the loop is immediately followed by an input at the start of the loop. These lines (96, 98, 103, 119) have been highlighted in the code listing. A consequence of using this design approach is that only one ResultData and one GameData object is required thereby minimizing the use of the very expensive new operator. The most interesting aspect of the code is that the access to the properties of the data classes is simply made using the dot notation. This results from Groovy automatically generating the setters, getters and class constructors required. This has the immediate benefit of making the code more accessible so that key points such as the structure of client and server processes is more obvious. 20
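As a measure of what is being generated automatically, a plain Java rendering of GameData would need roughly the boilerplate below. This sketch is ours and purely illustrative: the property types are assumptions (the Groovy original leaves them dynamic), and only the no-argument constructor and accessors that Groovy's named-property construction relies on are shown.

  import java.io.Serializable;
  import java.util.List;

  // Hypothetical plain-Java rendering of the Groovy GameData class: the
  // constructor, getters and setters below are what Groovy generates for us.
  public class GameData implements Serializable {
      private int p1;         // id of player 1
      private int p2;         // id of player 2
      private List w1;        // list of weights for player 1
      private List w2;        // list of weights for player 2
      private String state;   // "data" or "end"

      public GameData() { }

      public int getP1() { return p1; }
      public void setP1(int p1) { this.p1 = p1; }
      public int getP2() { return p2; }
      public void setP2(int p2) { this.p2 = p2; }
      public List getW1() { return w1; }
      public void setW1(List w1) { this.w1 = w1; }
      public List getW2() { return w2; }
      public void setW2(List w2) { this.w2 = w2; }
      public String getState() { return state; }
      public void setState(String state) { this.state = state; }
  }

In the Groovy version, an expression such as gameData.state = "end" simply calls the generated setter, which is why the listings that follow can stay focused on the client-server structure.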
  • 36. J. Kerridge et al. / Groovy Parallel 82 class Board implements CSProcess { 83 84 bId // the id for this Board process 85 result // One2One channel connecting the Board to the ResultMux 86 work // One2One channel used to send work to this Board 87 88 void run() { 89 println "Board ${bId} has started" 90 tim = new CSTimer() // used to simulate game time 91 gameData = new GameData() // the weights and player ids 92 resultData = new ResultData() // the result of this game 93 resultData.state = "init" 94 resultData.board = bId 95 running = true 96 result.write(resultData) // send init to Organiser 97 while (running) { 98 gameData = work.read() // always follows a result.write 99 if ( gameData.state == "end" ) { // end of processing 100 println "Board ${bId} has terminated" 101 running = false 102 resultData.state = "end" 103 result.write(resultData) // send termination to ResultMux 104 } 105 else { 106 // run the game twice with P1 v P2 and then P2 v P1 107 // simulated by a timeout 108 tim.after ( tim.read() + 100 + gameData.p2 ) 109 println "Board ${bId} playing games for 110 ${gameData.p1} and ${gameData.p2}" 111 outcome1V2 = bId // return the bId of the board playing game 112 outcome2V1 = -bId // instead of the actual outcomes 113 resultData.state = "result" 114 resultData.p1 = gameData.p1 115 resultData.p2 = gameData.p2 116 resultData.board = bId 117 resultData.result1V2 = outcome1V2 118 resultData.result2V1 = outcome2V1 119 result.write(resultData) // send result to ResultMux 120 } 121 } } } 3.2.3 The ResultMux Process This process forms part of the tournament system and is used to multiplex results from the Board processes to the Organiser. The ResultMux process runs on the same processor as the Organiser and thus access to any data objects by both processes have to be carefully managed. If this is not done then there is a chance that one process may overwrite data that has already been communicated to the other process because only an object reference is passed during such communications. In this case, the resultData object is read into in the ResultMux process and manipulated within Organiser. Yet again the desire is to reduce the number of new operations that are undertaken. new is both expensive and also leads to the repeated invocation of the Java garbage collector. In the version presented here only one instance of a ResultData object is created outside the main loop of the process. In addition, no new operation exists within the loop (lines 129-144). The only other problem to be overcome is that of terminating the ResultMux process. One of the properties (boards) of the process is the number of parallel Board processes invoked by the system. When a Board process receives a GameData object that has its state set to “end” it communicates this to the ResultMux process as well. Once the ResultMux process has received the required number of such messages it can then terminate itself (lines 137-140). 21
  • 37. J. Kerridge et al. / Groovy Parallel The other aspect of note is that the property resultsIn is a list of network channels and that these can be used as a parameter to the ALT construct without any modification because ALT (line 132) is expecting a list of input channel ends, which is precisely the type of a Net2One channel, see 3.2.6. Any ResultData that is read in on the resultsIn channels is then immediately written to the resultOut channel (line 143). The use of the reply property will be explained in the next section. 122 class ResultMux implements CSProcess { 123 boards // number of boards; used for process termination 124 resultOut // output channel from Mux to Organiser 125 reply // channel indicating result processed by Organiser 126 resultsIn // list of result channels from each of the boards 127 128 void run () { 129 resultData = new ResultData() // holds data from boards 130 endCount = 0 131 println "ResultMux has started" 132 alt = new ALT (resultsIn) 133 running = true 134 while (running) { 135 index = alt.select() 136 resultData = resultsIn[index].read() 137 if ( resultData.state == "end" ) { 138 endCount = endCount + 1 139 if ( endCount == boards ) { 140 running = false 141 } 142 } else { 143 resultOut.write(resultData) 144 b = reply.read() 145 } 146 } } } 3.2.4 The Organiser Process This is the most complex process but it breaks down into a number of distinct sections that facilitate its explanation. Yet again the use of the new operation has been limited to those structures that are required and none are contained within the main loop of the process. The outcomes structure is a list of lists that will contain the result of each game. The access mechanism is similar to that of array access but Groovy permits other styles of access that are more list oriented. Initially, each element of the structure is set to a sentinel value of 100 (lines 159-166). The result of each pair of games, pi plays pj and pj plays pi for all i <>j, is recorded in the outcomes structure such that pi v pj is stored in the upper triangle of outcomes and pj v pi in the lower part. Games such as draughts and chess have different outcomes for the same players depending upon which is white or black and hence is the starting player. The main loop has been organized so that the Organiser receives a result from the ResultMux. Saving the game’s results in the outcomes structure and then sending another game to the now idle Board process achieves this (lines 171-178). However, before another game is sent to the Board process a reply (line 178) is sent to the ResultMux process to indicate the ResultData has been processed. The resultData object is passed as a value from the ResultMux to the Organiser, which is an object reference. JCSP requires that once a process has written an object it should not then access that object until it is safe to do so. Thus once the outcomes structure has been updated the object is not required and hence the reply can be sent to the ResultMux process immediately. This happens on two occasions, first when the resultData contains the state “init” (line 180) and more commonly when a result is returned and the state is “result” (line 178). 22
  • 38. J. Kerridge et al. / Groovy Parallel 147 class Organiser implements CSProcess { 148 boards // the number of boards that are being used in parallel 149 players // number of players 150 work // channels on which work is sent to boards 151 result // channel on which results received from ResultMux 152 reply // reply to resultMux from Organiser 153 154 void run () { 155 resultData = new ResultData() // create the data structures 156 gameData = new GameData() 157 println "Organiser has started" 158 // set up the outcomes 159 outcomes = [ ] 160 for ( r in 0 ..< players ) { // cycle through the rows 161 row = [ ] // 0 ..< n gives 0 to n - 1 162 for ( c in 0 ..< players ) { // cycle through the columns 163 row << 100 // 100 acts as sentinel 164 } 165 outcomes << row 166 } 167 // the main loop 168 for ( r in 0 ..< players) { 169 c = r + 1 170 for ( c in 0 ..< players) { 171 resultData = result.read() // an object reference not a copy 172 b = resultData.board 173 if ( resultData.state == "result" ) { 174 p1 = resultData.p1 175 p2 = resultData.p2 176 outcomes [ p1 ] [ p2 ] = resultData.result1V2 177 outcomes [ p2 ] [ p1 ] = resultData.result2V1 178 reply.write(true) // outcomes processed 179 } else { 180 reply.write(true) // init received 181 } 182 // send the game [r,c] to Board process b 183 gameData.p1 = r 184 gameData.p2 = c 185 gameData.state = "data" 186 // set w1 to the weights for p1 187 // set w2 to the weights for p2 188 work[b].write(gameData) 189 } 190 } 191 // now terminate the Board processes 192 println "Organiser: Started termination process" 193 gameData.state = "end" 194 for ( i in 0 ... boards) { 195 resultData = result.read() 196 bd = resultData.board 197 p1 = resultData.p1 198 p2 = resultData.p2 199 outcomes [ p1 ] [ p2 ] = resultData.result1V2 200 outcomes [ p2 ] [ p1 ] = resultData.result2V1 201 reply.write(true) 202 work[bd].write(gameData) 203 } 204 println"Organiser: Outcomes are:" 205 for ( r in 0 ... players ) { 206 for ( c in 0 ... players ) { 207 print "[${r},${c}]:${outcomes[r][c]}; " 208 } 209 println " " 210 } 211 println"Organiser: Tournament has finished" 212 } 213 } 23
  • 39. J. Kerridge et al. / Groovy Parallel Initially, the loop will receive as many “init” messages as there are Board processes. Thus once all the games have been sent to the Board processes, each of the Board processes will still be processing a game. Hence, another loop has to be used to input the last game result from each of these processes (lines 194-203). In this case the gameData that is output contains the state “end” and this will cause the Board process that receives it to terminate but not before it has also sent the message on to the ResultMux process. Finally, the outcomes can be printed (lines 204-211) or in the real tournament system evaluated to determine the best players so that they can be mutated in an evolutionary development scheme. 3.2.5 Invoking a Board Process Each Board process has to be invoked on its own processor. The network channels are created using CNS static methods (lines 216, 217). It is vital that the channel names used in one process invocation are the same as the corresponding channel in another processor. 214 Node.getInstance().init(new TCPIPNodeFactory ()); 215 boardId = Integer.parseInt(args[0]) //the number of this Board 216 w = CNS.createNet2One("W" + boardId) // the Net2One work channel 217 r = CNS.createOne2Net("R" + boardId) // the One2Net result channel 218 println " Board ${boardId} has created its Net channels " 219 pList = [ new Board ( bId:boardId , result:r , work:w ) ] 220 new PAR (pList).run() 3.2.6 Invoking the Tournament This code is similar expect that list of network channels are created by appending channels of the correct type to list structures (lines 224-230). Two internal channels between ResultMux and Organiser are created, M2O and O2M (lines 231, 232) and these are used to implement the resultOut and reply connections respectively between these processes. An advantage of the Groovy approach to constructors is that the constructor identifies each property by name, rather than the order of arguments to a constructor call specifying the order of the properties. It also increases the readability of the resulting code. 221 Node.getInstance().init(new TCPIPNodeFactory ()); 222 nPlayers = Integer.parseInt(args[0]) // the number of players 223 nBoards = Integer.parseInt(args[1]) // the number of boards 224 w = [] // the list of One2Net work channels 225 r = [] // the list of Net2One result channels 226 for ( i in 0 ..< nBoards) { 227 i = i+1 228 w << CNS.createOne2Net("W" + i) 229 r << CNS.createNet2One("R" + i) 230 } 231 M2O = Channel.createOne2One() 232 O2M = Channel.createOne2One() 233 pList = [ new Organiser ( boards:nBoards , players:nPlayers , 234 work:w , result: M2O.in(), 235 reply: O2M.out() ), 236 new ResultMux ( boards:nBoards , resultOut:M2O.out(), 237 resultsIn:r, reply: O2M.in() ) ] 238 new PAR ( pList) .run() 24
  • 40. J. Kerridge et al. / Groovy Parallel 4. The XML Specification of Systems Groovy includes tree-based builders that can be sub-classed to produce a variety of tree- structured object representations. These specialized builders can then be used to represent, for example, XML markup or GUI user interfaces. Whichever kind of builder object is used, the Groovy markup syntax is always the same. This gives Groovy native syntactic support for such constructs. The following lines, 239 to 248, demonstrate how we might generate some XML [7] to represent a book with its author, title, etc. The non-existent method call Author("Ken Barclay") delivers the <Author>Ken Barclay</Author> element, while the method call ISBN(number : "1234567890") produces the empty XML element <ISBN number= "1234567890"/>. 239 // Create a builder 240 mB = new MarkupBuilder() 241 242 // Compose the builder 243 bk = mB.Book() { // <Book> 244 Author("Ken Barclay") // <Author>Ken Barclay</Author> 245 Title("Groovy") // <Title>Groovy</Title> 246 Publisher("Elsevier") // <Publisher>Elsevier</Publisher> 247 ISBN(number : "1234567890") // <ISBN number="1234567890"/> 248 // </Book> It is also important to recognize that since all this is native Groovy syntax being used to represent any arbitrarily nested markup, then we can also mix in any other Groovy constructs such as variables, control flow such as looping and branching, or true method calls. In keeping with the spirit of Groovy, manipulating XML structures is made particularly easy. Associated with XML structures is the need to navigate through the content and extract various items. Having, say, parsed a data file of XML then traversing its structures is directly supported in Groovy with XPath-like [7] expressions. For example, a data file comprising a set of Book elements might be structured as: 249 <Library> 250 <Book> … </Book> 251 <Book> … </Book> 252 <Book> … </Book> 253 … 254 </Library> If the variable doc represents the root for this XML document, then the navigation expression doc.Book[0].Title[0] obtains the first Title for the first Book. Equally, doc.Book delivers a List that represents all the Book elements in the Library. With a suitable iterator we immediately have the code to print the title of every book in the library: 255 parser = new XmlParser() 256 doc = parser.parse("library.xml") 257 258 doc.Book.each { bk -> 259 println "${bk.Title[0].text()}" 260 } The ease with which Groovy can manipulate XML structures encourages us the consider representing JCSP networks as XML markup. Groovy can then manipulate that information, configure the processes and channels, and then execute the model. For 25
  • 41. J. Kerridge et al. / Groovy Parallel example, we might arrive at the following markup (lines 261-274) for the classical producer–consumer system built from the SendProcess and the ReadProcess described in 3.1.1 and 3.1.2. The libraries to be imported are specified on lines 262 and 263. 261 <csp-network> 262 <include name="com.quickstone.jcsp.lang.*"/> 263 <include name="uk.ac.napier.groovy.parallel.*"/> 264 <channel name="chan" class="Channel" type="createOne2One"/> 265 <processlist> 266 <process class="SendProcess"> 267 <arg name="cout" value="chan.out()"/> 268 <arg name="id" value="1000"/> 269 </process> 270 <process class="ReadProcess"> 271 <arg name="cin" value="chan.in()"/> 272 </process> 273 </processlist> 274 </csp-network> To ensure the consistency of the information contained in these network configurations we could define an XML schema [7] for this purpose. A richer schema defines how nested structures could be described. From the preceding example we also permit a recursive definition whereby a simple <process> may itself be another <processlist>. Hence we can define the XML for the plexing system described in 3.1.4 by the following. 275 <csp-network> 276 <include name="com.quickstone.jcsp.lang.*"/> 277 <include name="uk.ac.napier.groovy.parallel.*"/> 278 <channel name="a" class="Channel" type="createOne2One" size="5"/> 279 <channel name="b" class="Channel" type="createOne2One"/> 280 <channelInputList name="channelList" source="a"/> 281 <processlist> 282 <processlist> 283 <process class="SendProcess"> 284 <arg name="cout" value="a[0].out()"/> 285 <arg name="id" value="1000"/> 286 </process> 287 <process class="SendProcess"> 288 <arg name="cout" value="a[1].out()"/> 289 <arg name="id" value="2000"/> 290 </process> 291 <process class="SendProcess"> 292 <arg name="cout" value="a[2].out()"/> 293 <arg name="id" value="3000"/> 294 </process> 295 <process class="SendProcess"> 296 <arg name="cout" value="a[3].out()"/> 297 <arg name="id" value="4000"/> 298 </process> 299 <process class="SendProcess"> 300 <arg name="cout" value="a[4].out()"/> 301 <arg name="id" value="5000"/> 302 </process> 303 </processlist> 304 <process class="Plex"> 305 <arg name="cout" value="b.out()"/> 306 <arg name="cin" value="channelList"/> 307 </process> 308 <process class="ReadProcess"> 309 <arg name="cin" value="b.in()"/> 310 </process> 311 </processlist> 312 </csp-network> 313 26
  • 42. J. Kerridge et al. / Groovy Parallel By inspection we can see that the XML presented in lines 275 to 312 capture the Groovy specification of the system given in lines 35 to 46. The main difference is that the list of SendProcesses generated in lines 39 to 41 has been explicitly defined as a sequence of SendProcess definitions. A Groovy program can parse this XML and the system will then be invoked automatically on a single processor. The automatically generated output from the above XML script is shown in lines 314 to 330. As can be seen it generates two PAR constructs nested one in the other. The internal one contains the list of SendProcesses that are included within the one running the Plex and ReadProcess processes. Lines 314 and 315 show the jar files that have to be imported. The Groovy Parallel constructs described in section 2 have been placed in a jar file, emphasizing that Groovy is just Java. 314 import com.quickstone.jcsp.lang.* 315 import uk.ac.napier.groovy.parallel.* 316 a = Channel.createOne2One(5) 317 b = Channel.createOne2One() 318 channelList = new CHANNEL_INPUT_LIST(a) 319 new PAR([ 320 new PAR([ 321 new SendProcess(cout : a[0].out(), id : 1000), 322 new SendProcess(cout : a[1].out(), id : 2000), 323 new SendProcess(cout : a[2].out(), id : 3000), 324 new SendProcess(cout : a[3].out(), id : 4000), 325 new SendProcess(cout : a[4].out(), id : 5000) 326 ]), 327 new Plex(cout : b.out(), cin : channelList), 328 new ReadProcess(cin : b.in()) 329 ]) 330 .run() 5. Conclusions and Future Work The paper has shown that it is possible to create problem solutions in a clear and accessible manner such that the essence of the CSP-style primitives and operations is more easily understood. A special lecture was given to a set of students who were being taught Groovy as an optional module in their second year. This lecture covered the concepts of CSP and their implementation in Groovy. There was consensus that the approach had worked and that students were able to assimilate the ideas. This does however need to be tested further in a more formal setting. Currently, Groovy uses dynamic binding and it can be argued that this is not appropriate for a proper software engineering language. It would only need for this checking to be done at compile time, say by a switch, and we could more robustly design, implement and test systems. Work is being undertaken to develop a diagramming tool that outputs the XML required by the system builder. This would mean that the whole system could be seamlessly incorporated into existing design and development tools such as ROME [8]. This could be extended to develop techniques for distributing a parallel system over a network of workstations or a Beowulf cluster. Further consideration could also be given to the XML specifications. An XML vocabulary might be developed that is richer than that presented. Such a vocabulary might provide a compact way to express for example, the channels used as inputs to processes where they become the Guards of an ALT construct. Can we answer the question posed by the title of this paper in the affirmative? We suggest that sufficient evidence has been presented and that this provides a real way forward for promoting the design of systems involving concurrent and parallel components. 27
  • 43. J. Kerridge et al. / Groovy Parallel Acknowledgements A colleague, Ken Chisholm, provided the requirement for the draughts tournament. The helpful comments of the referees were gratefully accepted. References [1] Inmos Ltd, occam2 Programming Reference Manual, Prentice-Hall, 1988. [2] C.A.R. Hoare, Communicating Sequential Processes. New Jersey: Prentice-Hall, 1985; available electronically from http://www.usingcsp.com/cspbook.pdf. [3] P.H. Welch, Process Oriented Design for Java – Concurrency for All, http://www.cs.kent.ac.uk/projects/ofa/jcsp/jcsp.ppt, web site accessed 4/5/2005. [4] G. Hilderink, A. Bakkers and J. Broenink, A Distributed Real-Time java System Based on CSP, The Third IEEE International Symposium on Object-Oriented Real-Time Distributed Computing, ISORC 2000, Newport Beach, California, pp.400-407, March 15-17, 2000. [5] Groovy Developer’s Web Site, accessed 4/5/2005, groovy.codehaus.org. [6] Quickstone Ltd, web site accessed 4/5/2005, www.quickstone.com. [7] http://www.w3.org/TR/REC-xml/; http://www.w3.org/TR/xpath. [8] K. Barclay and J. Savage, Object Oriented Design with UML and Java, Elsevier 2004; supporting tool available from http://www.dcs.napier.ac.uk/~kab/jeRome/jeRome.html. 28
  • 44. Communicating Process Architectures 2005 Jan Broenink, Herman Roebbers, Johan Sunter, Peter Welch, and David Wood (Eds.) IOS Press, 2005 On Issues of Constructing an Exception Handling Mechanism for CSP-Based Process-Oriented Concurrent Software† Dusko S. JOVANOVIC, Bojan E. ORLIC, Jan F. BROENINK Twente Embedded Systems Initiative, Drebbel Institute for Mechatronics and Control Engineering, Faculty of EE-Math-CS, University of Twente, P.O.Box 217, 7500 AE, Enschede, the Netherlands d.s.jovanovic@utwente.nl Abstract. This paper discusses issues, possibilities and existing approaches for fitting an exception handling mechanism (EHM) in CSP-based process-oriented software architectures. After giving a survey on properties desired for a concurrent EHM, specific problems and a few principal ideas for including exception handling facilities in CSP-designs are discussed. As one of the CSP-based frameworks for concurrent software, we extend CT (Communicating Threads) library with the exception handling facilities. The extensions result in two different EHM models whose compliance with the most important demands of concurrent EHMs (handling simultaneous exceptions, the mechanism formalization and efficient implementation) are observed. Introduction Under process-oriented architectures in principle we assume that a program’s algorithms are confined within processes that exchange data via channels. When based on CSP [1], channels (communication relationships) are synchronous, following the rendezvous principle; executional compositions among processes are ruled by the CSP constructs, possibly represented as compositional relationships [2]. Today’s successors of the programming language occam, which was first to implement this programming model, are occam-like libraries for Java, C and C++ (the most known are the University of Twente variants CTJ [3], CTC and CTC++ [2, 4] and the University of Kent variants JCSP [5], CCSP [6] and C++CSP [7]). “Twente” variants are together referred to as CT (Communicating Threads), and for this paper all experiments are worked out within that framework. The general Twente CSP-based framework for concurrent embedded control software is referred to as CSP/CT, which implies the use of those concepts of CSP that are implemented in the CT and accompanying tools [8] in order to provide this particular process-oriented software environment. Recent work [9] is concerned with dependability aspects of the CSP/CT, which revives interest in fault tolerance mechanisms for CSP/CT, and among them the exception handling mechanism (EHM). Exception handling is considered “as the most powerful software fault- tolerance mechanism” [10]. An exception is an indication that something out of the ordinary has occurred which must be brought to the attention of the program which raised it [11]. Practical results during the research history of thirty years ([12]) appeared as † This research is supported by PROGRESS, the embedded system research program of the Dutch organization for Scientific Research, NWO, the Dutch Ministry of Economic Affairs and the Technology Foundation STW. © 2005 The authors. All rights reserved. 29
  • 45. D.S. Jovanovic et al. / Exception Handling Mechanisms for Concurrent Software sophisticated EHMs in modern mainstream languages used for programming mission- critical systems, like C++, Java and Ada. This paper considers the exception handling concept on a methodological level of designing concurrent, CSP/CT process-oriented software. An EHM allows system designers to distribute dedicated corrective or alternative code components at places within software composition that maximize effectiveness of error recovery. Principles of EHM are based on provision of separate code segments or components to which the execution flow is transferred upon an error occurrence in the ordinary execution. Code segments or components that attempt error recovery (exception handling) are called exception handlers. The main virtue of this way of handling errors in software execution is a clear separation between normal (ordinary) program flow and parts of software dedicated to correcting errors. Because of alterations of a program’s execution flow due to exceptional operations, EHMs additionally complicate understanding of concurrent software. In [13] issues of exception handling in sequential systems are contrasted with those in concurrent systems, especially the problems of concurrently raised exceptions resolution and simultaneous error recovery. Despite favourable properties in structuring error handling and the fact that EHM is the only structured fault tolerance concept directly supported at the level of languages, it is not so readily used in mission- or life-critical systems. Lack of tractable methods for testing or, even more desired, formal verification of programs with exception handling is to be blamed for hesitant use of this powerful concept. As clearly stated in [14], “since exceptions are expected to occur rarely, the exception handling code of a system is in general the least documented, tested, and understood part. Most of the design faults existing in a system seem to be located in the code that handles exceptional situations.” 1 Properties of Exceptions and Exception Handling Mechanisms (EHMs) 1.1 EHM Requirements 1.1.1 General EHM Properties The following list combines some general properties for evaluating quality and completeness of an Exception Handling Mechanism (EHM) [13, 15, 16]. It should: 1. be simple to understand and use. 2. provide a clear separation of the ordinary program code flow and the code intended for handling possible exceptions. 3. prevent an incomplete operation from continuing. 4. allow exceptions to contain all information about error occurrence that may be useful for a proper handling, i.e. recovery action. 5. allow overhead in execution of exception handling code only in the presence of an exception – exception handling burdens on the error-free execution flow should be neglectable. 6. allow a uniform treatment of exceptions raised both by the environment and by the program. 7. be flexible to allow adding, changing and refining exceptions. 8. impose declaring exceptions that a component may raise. 9. allow nesting exception handling facilities. 30
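By way of a small, language-level illustration of properties 2, 3, 4, 8 and 9 above (and nothing more), a sequential Java fragment might separate ordinary flow from handling, carry error context inside the exception object, and nest handlers as follows; the exception class, method names and values are invented for the example.

  // Invented example types, for illustration of the listed properties only.
  class TransferException extends Exception {
      final String account;     // property 4: carry information useful for recovery
      final double amount;
      TransferException(String account, double amount, String msg) {
          super(msg);
          this.account = account;
          this.amount = amount;
      }
  }

  public class EhmPropertiesSketch {
      public static void main(String[] args) {
          try {
              transfer("A-17", 100.0);              // ordinary program flow (property 2)
          } catch (TransferException e) {           // handling kept separate (property 2)
              try {
                  logFailure(e.account, e.amount);  // nested handling facility (property 9)
              } catch (RuntimeException nested) {
                  System.err.println("logging failed: " + nested.getMessage());
              }
          }
      }

      // property 8: the exception a component may raise is declared
      static void transfer(String account, double amount) throws TransferException {
          // property 3: throwing prevents the incomplete operation from continuing
          throw new TransferException(account, amount, "insufficient funds");
      }

      static void logFailure(String account, double amount) {
          System.err.println("transfer of " + amount + " from " + account + " failed");
      }
  }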
  • 46. D.S. Jovanovic et al. / Exception Handling Mechanisms for Concurrent Software 1.1.2 Properties of a Concurrent EHM The main difficulty of extending well-understood sequential EHMs for use in concurrent systems is the effect that occurrence of an exception in one of the collaborating processes certainly has consequence to the other (parallel composed) processes. For instance, exceptional interruption in one process before a rendezvous communication certainly causes blocking of the other party in the communication, causing a deadlock-like situation [17]. It is likely that an exceptional occurrence detected in one process is of concern of the other processes. In large parallel systems it may easily happen that independent exceptions occur simultaneously: more than one exception had been raised before the first one has been handled. The EHM, actually exception handlers, should detect these so-called concurrent exception occurrences [13]. Also the same error may affect different processes during different scenarios, so causing different but related exceptions. Such concurrent (and possibly related) exceptions need to be treated in a holistic way. In these situations handling exceptions one-by-one may be wrong – therefore in [13] the notion of exception hierarchy has been introduced. The term “exception hierarchy” should be distinguished from the hierarchy of exception handlers (which determines exception propagation, as addressed in the remainder). Neither has it anything to do with a possible inheritance hierarchy of exception types. The concept of exception hierarchy helps reasoning and acting in the case of multiple simultaneously occurring exceptions: “if several exceptions are concurrently raised, the exception used to activate the fault tolerance measures is the exception that is the root of the smallest subtree containing all of the exceptions” [13]. For coping with the mentioned problems, a concurrent EHM should make sure that: 10. upon an exception occurrence in a process communicating in a parallel execution with other processes, all processes dependent on that process should get informed that the exception has occurred. 11. all participating processes simultaneously enter recovery activities specific for the exception occurred. 12. in case of concurrent exception occurrences in different parallel composed processes, a handler is chosen that treats the compound exceptional situation rather than isolated exceptions. 1.1.3 Formal Verifiability and Real-time Requirements In order to use any variant of the EHM models proposed in section 3, for high integrity real-time systems (and to benefit from the CSP foundation for such one mechanism), the proposal should allow that: 13. the mechanism is formally described and verified. The system as a whole including both normal and exception handling operating modes should be liable to formal checking analysis. 14. the temporal behaviour of the EHM implementation is as much as possible predictable/controlled. In real-time systems, execution time of the EHM part of an application should be taken into account when calculating temporal properties of execution scenarios. 1.2 Sources of Exceptions in CSP-based Architectures Within the CSP/CT architecture, exceptional events may be expected to occur in the following different contexts: 31
1.2 Sources of Exceptions in CSP-based Architectures

Within the CSP/CT architecture, exceptional events may be expected to occur in the following different contexts:
1. the run-time environment:
   a. run-time libraries and the OS – illegal memory address, memory allocation problems, division by zero, overflow, etc.;
   b. CT library components can raise exceptions (e.g. network device drivers or remote link drivers on an expired timeout; an array index outside its range; dereferencing a null pointer).
2. invalid(ated) channels (i.e. a broken communication link, a malfunctioning device or "poisoned" channels).
3. consistency checks inserted at certain places in a program can fail (e.g. a variable can go outside a permitted range).
4. exceptions induced by exceptions raised in some of the processes important to the execution of the process.

1.3 Mechanism of Exception Propagation

After being thrown, an exception propagates to the place where it can eventually be caught (and handled). A crucial part of an exception handling facility is its propagation mechanism, which determines how to find a proper exception handler for the type of exception that has been thrown. Exception propagation always follows a hierarchical path, and different languages make different choices [15, 16, 18]: dynamically along the function call chain or object creation chain, or statically along the lexical hierarchy [19]. The exception propagation mechanism is crucial to understanding the execution flow in the presence of exceptions, and its complexity directly influences acceptance of the concept in practice.

1.4 Termination and Resumption EHM Models

The occurrence of an exception causes interruption of the ordinary program flow and transfer of control to an exception handler. The state of the exceptionally interrupted processes is also a concern. Depending on the flow of execution between the ordinary and exceptional operation of software (in the presence of an exception), the so-called handling models [15] can be divided into two main groups: termination and resumption EHM models.

In the termination model, further execution of an "exception-guarded" process, function or code block interrupted by an exceptional occurrence is aborted and never resumed. Instead, it is the responsibility of the exception handler to bring the system into such a state that it can continue providing the originally specified (or gracefully degraded) service. If the exception handler is not capable of providing such a service, it will throw the exception further. Therefore, adopting the termination model has an intrinsically unwelcome feature: the functionality of the interrupted process after the exceptional occurrence (termination) point has to be repeated in the handler. It may easily happen that the entire job done before the exception occurrence has to be repeated. Therefore, the idea of (also) allowing the resumption mechanism within an EHM does not lose any of its attraction. In the resumption model, an exception handler will also be executed following the exception occurrence; however, the context of the exceptionally interrupted process will be preserved, and after the exception is handled (i.e. the handler has terminated), the process will continue its execution at the point where it was interrupted.

Both exception handling models initially gained equal attention, but practice made the termination model prevail for sequential EHMs, as it is much simpler to implement. It is adopted in all mainstream languages, such as C++, Java and Ada.
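The contrast between the two models can be sketched as follows (hypothetical Java; note that Java itself offers only the termination model, so resumption is emulated here with a callback supplied by the handler):

// Hypothetical contrast of the two handling models.
public class HandlingModels {

    static class RangeException extends Exception {
        final int badIndex;
        RangeException(int badIndex) { this.badIndex = badIndex; }
    }

    // Termination model: the loop is abandoned; the handler delivers a
    // (gracefully degraded) result instead, and nothing is resumed.
    static int sumTermination(int[] data, int upTo) {
        try {
            int sum = 0;
            for (int i = 0; i < upTo; i++) {
                if (i >= data.length) throw new RangeException(i);
                sum += data[i];
            }
            return sum;
        } catch (RangeException e) {
            return -1;                     // degraded service provided by the handler
        }
    }

    // Resumption model (emulated): the handler supplies a substitute value and
    // the interrupted computation carries on at the point of interruption.
    interface Resumer { int substitute(int badIndex); }

    static int sumResumption(int[] data, int upTo, Resumer handler) {
        int sum = 0;
        for (int i = 0; i < upTo; i++) {
            int v = (i < data.length) ? data[i] : handler.substitute(i);
            sum += v;                      // execution resumes after handling
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(sumTermination(data, 5));           // -1
        System.out.println(sumResumption(data, 5, i -> 0));    // 6
    }
}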
2 Exception Handling Facilities in CSP-based Architectures

The EHM models discussed in the next section are meant to address the concurrency-specific issues and are therefore intended to be used at the level of processes in a process-oriented concurrent environment. They should be implementable in any language suitable for implementing the CSP principles themselves. A further wish is that the mechanism does not restrict the use of the sequential exception handling facilities (if any) present in a chosen implementation language. If a process encapsulates a complex algorithm that was originally developed using some native exception handling facilities, there should be no need to modify the original code. As long as the use of a native EHM is confined to internal use within a process, it does not clash with the EHM at the process level. Practically, this means that internally used exceptions must all be handled within the process. However, as a last resort, a component should submit all unhandled exceptions to the process-level EHM, complying with the process-level exception handling mechanism (a sketch of this rule is given at the end of this section).

The principal difficulty in coordinating error recovery in concurrent systems is posed by the fact that an exception occurrence in one process is an asynchronous event with respect to the other processes. In a system designed as a parallel composition of many processes, proper handling of an exception occurrence that takes place in one of the participating processes might require that other dependent processes are interrupted as well.

Propagation of unhandled exceptions is performed according to the hierarchical structure of exception handlers. In occam and the CSP/CT framework, the system is structured as a tree-like hierarchy made of constructs as branches and custom user processes, containing only channel communications and pure computation blocks, as leaves. A natural choice is to reuse the existing hierarchical construct/process structure and to use processes and constructs as the basic exception handling units. This choice can be implemented in a few ways:
• every process/construct can be associated with an exception handler,
• extended, exception-aware versions of processes/constructs can be used instead of ordinary processes and constructs,
• a particular exception handling construct may be introduced.

Regardless of any particular implementation, upon an unsuccessful exception handling at the process level, the exception will be thrown further to the scope of a construct. Due to implementation issues, the termination model is preferred at the leaf-process level in an application. The termination model applied at the construct level would mean that, prior to the execution of a construct-level exception handler, all the subprocesses of the construct would have to terminate. This can happen in several ways: one can choose to wait until all subprocesses terminate (regularly or exceptionally) or to force aborting further execution of all subprocesses. In real-time systems, where timely reaction to unexpected events is very important, the latter may be an appropriate choice. Abandoning the termination model (at the construct level) and implementing the resumption model is a better option when an exception does not influence some subprocesses at all, or influences them in a way that can be handled without aborting the subprocesses. Using the resumption model at the construct level would then not imply that a whole construct has to be aborted in order to handle the exception that propagated to the construct level.
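The last-resort rule above could look roughly as follows (a hypothetical sketch, not the CSP/CT implementation; ProcessLevelEhm and all other names are invented): natively raised exceptions stay inside the process, and whatever the process cannot handle is submitted to the process-level EHM.

// Hypothetical sketch: a leaf process keeps its native (language-level)
// exception handling internal; whatever it cannot handle is submitted to the
// process-level EHM as a last resort.
public class ProcessWrapper {

    interface ProcessLevelEhm { void submit(String processName, Throwable unhandled); }

    interface ProcessBody { void run() throws Exception; }

    static void runGuarded(String name, ProcessBody body, ProcessLevelEhm ehm) {
        try {
            body.run();                    // ordinary operation, may use native try/catch inside
        } catch (Throwable t) {            // last resort: escalate to the process-level EHM
            ehm.submit(name, t);
        }
    }

    public static void main(String[] args) {
        ProcessLevelEhm ehm = (name, t) ->
                System.out.println(name + " escalated: " + t);

        runGuarded("Controller", () -> {
            try {
                int[] a = new int[2];
                int x = a[5];              // native exception, handled internally
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("handled internally: " + e);
            }
            throw new IllegalStateException("consistency check failed"); // unhandled, escalated
        }, ehm);
    }
}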
2.1 Asynchronous Transfer of Control (ATC)

One way to implement the termination model is by an internal mechanism, related to the constructs, that can force the execution environment to abort all subprocesses and release all the resources they might be holding. This approach resembles Ada's ATC, Asynchronous Transfer of Control, or asynchronous notification in Real-time Java. However, forcing exceptional termination of all communicating, parallel composed processes poses a higher risk of corrupting process states by an asynchronous abortion (this is why ATC is disabled in the Ada Ravenscar Profile [20] for high-integrity systems). It is important to state that such a mechanism should be made in such a way that all aborted subprocesses are given a chance to finish in a proper state. This can be done by executing the associated exception handlers for each subprocess.
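An ATC-like forced termination can be approximated as below (a rough, hypothetical Java sketch; real ATC in Ada, or asynchronous notification in Real-time Java, is a language/run-time level mechanism): every subprocess is interrupted, and each one gets the chance to run its associated handler so that it finishes in a proper state.

import java.util.List;

// Rough, hypothetical approximation of ATC-like construct termination:
// interrupt every subprocess and let each one run its associated handler
// so that it can finish in a proper state.
public class AtcLikeAbort {

    static class GuardedProcess extends Thread {
        private final String name;
        GuardedProcess(String name) { this.name = name; }

        @Override public void run() {
            try {
                while (true) {                   // ordinary operation
                    Thread.sleep(100);
                }
            } catch (InterruptedException abort) {
                handler(abort);                  // per-process exception handler
            }
        }

        private void handler(Exception cause) {
            System.out.println(name + ": cleaning up after " + cause);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<GuardedProcess> subprocesses =
                List.of(new GuardedProcess("P1"), new GuardedProcess("P2"));
        subprocesses.forEach(Thread::start);

        Thread.sleep(50);                        // an exception is detected somewhere...
        subprocesses.forEach(Thread::interrupt); // ...so the construct aborts all subprocesses
        for (GuardedProcess p : subprocesses) p.join();
        System.out.println("construct terminated; the construct-level handler may now run");
    }
}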
2.2 Channel Poisoning

The other, more graceful, termination mechanism is channel poisoning: sending a poison (or reset) token along the channels of a CSP network is proposed in [21] as a mechanism for terminating (or resetting) an occam network of processes. Processes that receive the poison spread it further via all the channels they are connected to. Eventually all processes interconnected via channels will receive the poison token and terminate. The method can be used for implementing the termination model of constructs. In the CSP/CT framework this approach is slightly modified, as proposed in [2]: instead of passing the poison via the channels, the idea is to poison (invalidate) the channels themselves. Furthermore, in [9] it is proposed that any attempt to access a poisoned channel by invoking its read/write operations results in throwing an exception in the context of the invoking process. Consequently, the exception handler associated with the process can handle the situation and/or poison other channels.
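The poisoning idea could be sketched as follows (hypothetical Java, not the CT library's channel implementation; releasing a partner that is already blocked inside a read or write is deliberately left out of this sketch): a channel can be put into a poisoned state, after which any read or write attempt throws the poisoning exception in the context of the calling process.

import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch of a poisonable rendezvous channel: once poisoned, any
// read/write attempt throws the poisoning exception in the caller's context,
// and a further poison attempt returns the exception already stored.
// Note: releasing a partner already blocked inside put()/take() is not modelled here.
public class PoisonableChannel<T> {

    public static class PoisonException extends RuntimeException {
        public PoisonException(String cause) { super(cause); }
    }

    private final SynchronousQueue<T> rendezvous = new SynchronousQueue<>();
    private volatile PoisonException poison;     // null while the channel is healthy

    public void write(T value) throws InterruptedException {
        if (poison != null) throw poison;
        rendezvous.put(value);
    }

    public T read() throws InterruptedException {
        if (poison != null) throw poison;
        return rendezvous.take();
    }

    // Returns null on the first poisoning; on later attempts it returns the
    // original poison, so a handler can learn about an earlier exception.
    public PoisonException poison(PoisonException p) {
        if (poison != null) return poison;
        poison = p;
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        PoisonableChannel<String> c = new PoisonableChannel<>();
        c.poison(new PoisonException("P1 terminated exceptionally"));
        try {
            c.read();
        } catch (PoisonException e) {
            System.out.println("reader terminated with: " + e.getMessage());
        }
    }
}

The value returned by poison() lets a handler detect that another handler poisoned the channel first, which becomes relevant for the concerted handling discussed in section 3.2.1.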
3 Architectures of EHM Models

Bearing in mind all the challenges in constructing a usable EHM for concurrent software, the CSP architecture can be viewed as one offering an interesting environment for doing so. In this part a few concepts are discussed with one eye on all the listed requirements, among which special concern is given to: handling of simultaneous exceptions, formalization of the mechanism, and a (timely) efficient implementation.

3.1 Formal Backgrounds of EHM

The first CSP construction that captures the behaviour when a process (Q) takes over after another process (P) signals a failure was conceived by Hoare as early as 1973 [22] as P otherwise Q. The association of a process and its handler can be modelled as in Figure 1.

Figure 1. Exception relation between a process P and its exception handling process Q

In the graphical notation as implemented in the gCSP tool [8], the exception handling process Q (exception handling processes are represented as ellipses) is associated with the exception-guarded process P (ordinary processes are rectangles) by a compositional exception relationship [2], actually following Hoare's "otherwise" principle. On similar grounds, there have been several attempts to use CSP to formalize exception handling [2, 17, 23, 24]. However, all these attempts have been limited to formalizing the basic flow of activity upon exceptional termination of one process for the benefit of another (thus without building a comprehensive mechanism that fulfils the aforesaid requirements for a concurrent EHM). Also, they did not work out an implementation in a practical programming language (with the exception of [2]). Common to all is that both the ordinary operation and the exceptional operation are encapsulated in processes. The compositionality of the design is preserved by combining these processes by a construct.

Hoare eventually also catered for the basic termination principle with his interrupt operator (△) in [25], annotated in a follow-up work [26] with an exception event i (△i). Despite its name, the semantics of the △ operator is much closer to the termination model of exception handling than to what is today usually referred to as "interrupt handling", since it implies termination of the left-hand side operand by an unconditional preemption by the right-hand side one. A true "interrupt" operator would be useful for modelling the resumption model of exception handling (as it actually was in the original proposal of the interrupt operator in [23]). In [25], yet another operator (alternation) may be used to describe resuming a process execution after the execution of another process (however, this operator is not supported by the FDR model checker, while △ is). In a recent work [27] a CSP-based algebra (with another variant of Hoare's exception-interrupt constructs) is developed for long transactions threatened by exceptional events. The handling of interrupts (exceptions) relies on the assumption that compensation for a wrongly taken action is always possible. This assumption is too strong in the context of controlling mechanical systems (with their ever-present real-time demands). Moreover, the concept focuses on undoing wrong steps and not directly on fault tolerance.

Termination semantics is captured, besides Hoare's △, also by the (virtually identical) except operator proposed in [23] and by the exception operator that appears in [2]. Whichever version is used for modelling the exceptional termination of a process P that gets preempted by the handler Q, it can be represented by a compositional hierarchy (Figure 2) that corresponds to Figure 1:

Figure 2. Compositional hierarchy of an exception construction

By "compositional hierarchy" we mean the way occam networks are built of processes and constructs (which are also processes). We find that the tree structure captures this kind of executional composition excellently [8].
3.2 The Exception Construct

In the semantics of the exception operator of [2], the composition in Figure 2 is interpreted as follows: upon an exception occurrence in process P, an exception is thrown and P terminates; the exception is caught by the exception construct (ExC1) and forwarded to Q, which begins its execution (handling the exception). The concept of using a construct for modelling exception handling has a favourable consequence for the mechanism of propagating (unhandled) exceptions: in a CSP network with exception constructs, from the moment an exception is created and thrown by a process, it propagates upwards along the compositional hierarchy until a proper handler is found. Therefore, the propagation mechanism is clear and simple, since it follows the compositional structure of the CSP/CT concurrent design.

Instead of the process P in Figure 2, there may be a construct with multiple processes. If the construct is an Alternative or Sequential one, the situation is the same as with a single process: upon exceptional termination of one of the alternatively or sequentially composed processes, the exception is caught by the exception construct and handled by the process Q. However, in the case of the Parallel construct, there is a possibility that more than one process ends up in an exceptional situation (and therefore terminates by throwing different exceptions). Consider the situation in Figure 3.

Figure 3. Parallel construct under exception construct

Handler Q handles exceptions that may arise during execution of the parallel composition of the processes P1, P2 and the exception construct ExC1 (actually, the exceptions thrown by P3 and not handled by Q3). Here the question is at which moment the exceptions from P1 should be handled (provided that the exception occurs before P2 finishes). Moreover, what if P2 exceptionally terminates as well? In the current implementation of the exception construct [2, 28], the exceptions that occurred in parallel composed processes are handled when the Parallel construct is terminated (i.e. when all parallel composed processes have terminated, successfully or exceptionally); for catching and handling all possible exceptions that occurred in a parallel composition, the concept of an exception set (a collection of exceptions) is introduced. After the termination of Par1, handler Q gets an exception set object with all exceptions thrown by the child processes (P1, P2 and ExC1 – all exceptions possibly left unhandled by Q3 are rethrown). The concept of the exception set has another useful role: from its contents a handler can reconstruct the exception hierarchy in the case of simultaneous (concurrent) exceptions.
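A rough, hypothetical Java sketch of the exception-set idea (not the implementation of [2, 28]): the Parallel construct waits until all children have terminated, collects every exception a child threw, and only then hands the complete set to the construct-level handler.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the exception-set idea: a Parallel construct lets all
// children terminate (successfully or exceptionally), collects whatever they
// threw, and passes the whole set to the construct-level handler at once.
public class ParallelWithExceptionSet {

    interface ChildProcess { void run() throws Exception; }
    interface ConstructHandler { void handle(List<Throwable> exceptionSet); }

    static void runParallel(List<ChildProcess> children, ConstructHandler handler)
            throws InterruptedException {
        List<Throwable> exceptionSet = new ArrayList<>();
        List<Thread> threads = new ArrayList<>();
        for (ChildProcess child : children) {
            Thread t = new Thread(() -> {
                try {
                    child.run();
                } catch (Throwable e) {
                    synchronized (exceptionSet) { exceptionSet.add(e); }
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join();   // the construct terminates only when all children have
        if (!exceptionSet.isEmpty()) handler.handle(exceptionSet);
    }

    public static void main(String[] args) throws InterruptedException {
        List<ChildProcess> children = List.of(
                () -> { throw new IllegalStateException("P1 failed"); },
                () -> System.out.println("P2 finished normally"));
        runParallel(children,
                set -> System.out.println("construct-level handler got: " + set));
    }
}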
3.2.1 Channel Poisoning and the Exception Construct

Sending a poison along channels, as proposed in [21], is a mechanism for terminating a network or subnetwork. In the EHM model proposal discussed here, the poisoning mechanism assumes that channels can be turned into a poisoned state in which they respond to attempts at writing or reading by throwing back exceptions. In this way two problems are solved. The first problem is the blocking of a rendezvous partner when the other one has exceptionally terminated. Consider the following situation (Figure 4, Figure 5):

Figure 4. Rendezvous (potential blocking)

Figure 5. Hierarchical representation of Figure 4

Processes P1 and P2 are both "exception-guarded" by exception constructs ExC1 and ExC2 (i.e. by handlers Q1 and Q2 respectively), which are then parallel composed. Processes P1 and P2 communicate over channel c. Should it happen that one of the processes exceptionally terminates (before the rendezvous point), the other process stays blocked on channel c. For that reason, handlers Q1 and Q2 should in principle turn the channel into the poisoned state, so that the other party terminates with the same exception that caused the first process to terminate. To recall, this exception is thrown on an attempt to read or write. Moreover, on further poisoning attempts (which are function calls), an already poisoned channel returns the poisoning exception, for a reason that will be explained shortly. If, however, the other rendezvous partner is already blocked on the channel, it should be released at the act of poisoning (and then end up with the exception).
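As a rough, hypothetical Java illustration of releasing a blocked rendezvous partner by poisoning (a simplified one-place channel is used instead of a true rendezvous; this is not the CT library implementation): when P1 fails, its handler Q1 poisons channel c, and P2, blocked on c, is released with the poisoning exception and ends up in its own handler Q2.

// Hypothetical sketch of handlers poisoning a shared channel so that a
// rendezvous partner blocked on it is released with the same exception.
public class PoisonReleasesPartner {

    static class Channel {
        private Object slot;                     // simplified one-place channel
        private RuntimeException poison;

        synchronized void write(Object v) {
            if (poison != null) throw poison;
            slot = v;
            notifyAll();
        }

        synchronized Object read() {
            while (slot == null && poison == null) {
                try { wait(); } catch (InterruptedException e) { throw new RuntimeException(e); }
            }
            if (poison != null) throw poison;    // released with the poisoning exception
            Object v = slot;
            slot = null;
            return v;
        }

        synchronized void poison(RuntimeException p) {
            if (poison == null) poison = p;
            notifyAll();                         // wake up a blocked partner
        }
    }

    public static void main(String[] args) {
        Channel c = new Channel();

        Thread p2 = new Thread(() -> {
            try {
                System.out.println("P2 got " + c.read());   // blocks waiting for P1
            } catch (RuntimeException e) {
                System.out.println("Q2 handles: " + e.getMessage());
            }
        });
        p2.start();

        // P1 fails before communicating; its handler Q1 poisons channel c.
        c.poison(new RuntimeException("P1 terminated exceptionally"));
    }
}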
For this scheme to work, it is clear that all communicating parallel composed processes should be "exception-guarded", i.e. sheltered behind exception constructs. In that case, an elegant possibility for concerted simultaneous exception handling comes automatically. On an exceptional occurrence in one of the communicating processes, provided all are accompanied by handlers that poison all channels connected to "their" processes, the information about the exception spreads within the parallel composition. In the case of simultaneous exceptions, the spread of different exceptions progresses from different places (processes) under a parallel construct. In that case, it will inevitably happen that a handler tries to poison a channel that is already poisoned (with another exception). When channels respond to the attempt at poisoning by returning the exception that poisoned them initially, the handlers obtain information about the occurrence of simultaneous exceptions. The handler of the parallel construct will ultimately be able to reconstruct the complete exception hierarchy.

However, this mechanism suffers from two major problems. The first is the possible (unbounded) delay between the occurrence of a (first) exception and its handling at the level of the parallel construct. Remember that all parallel composed processes must terminate before the handler of the parallel construct gets a chance to analyse and handle the exception (set). Some processes may spend a lot of time before coming to the rendezvous on a (poisoned) channel and consequently being terminated! The possibility of Asynchronous Transfer of Control has already been noted as unwelcome in high-integrity systems. An additional penalty is that, for the mechanism of rethrowing exceptions from poisoned channels to work, it is necessary to clone exceptions (so that every handler can consider the total exceptional situation) or at least to keep a rigorous administration of (pointers to) the occurred exceptions. The other problem, inherent to the mechanism of channel poisoning, is that the poison spread is naturally bounded by the interconnection network of channels and not by the boundaries of constructs. Namely, some channels may run to processes that belong to other constructs; ultimately this may lead to termination of the whole application, which contradicts the idea of exception handling as the most powerful fault tolerance mechanism.
In [21] the possibility of inserting special processes on the boundaries of the subnetworks subjected to poisoning is proposed, but that means introducing completely non-functional components into the system. The other option is a model-based (tool-based) control of the poison spreading.

3.3 Interrupt Operator △i, Environmental Exception Manager and Exception Channels

In the channel poisoning concept, propagation of an exception event was based on the existing communication channels. More apt to formal modelling would be a termination model based on a concept that considers exceptions as explicit events communicated among exception handlers via explicit exception channels. This change in paradigm makes formal modelling and checking more straightforward.
Let us consider a Parallel construct Par containing three subprocesses: P1, P2 and P3 (see Figure 6). The process-level exception handlers associated with these processes are Q1, Q2 and Q3 respectively. In the scope of the exception construct ExC, the exception handler associated with the construct Par is process Q.

Figure 6. Design rule for fault-tolerant parallel composition with environmental care

Using the interrupt operator, this could be written in CSP as (where i1, i2 and i3 are explicit exception events):

Par = (P1 △i1 Q1) || (P2 △i2 Q2) || (P3 △i3 Q3).

In turn, the relation between processes Par and Q is modelled in the same way:

Par △i Q = ((P1 △i1 Q1) || (P2 △i2 Q2) || (P3 △i3 Q3)) △i Q,

where i is the Par-level exception event. If an exception occurs during the execution of process P1, the process will be aborted and the associated exception handler Q1 will be invoked. This can actually be seen as an implicit occurrence of the exception event i1. If the exception cannot be handled by Q1, it should be communicated to the higher EHM level. Since such higher-level EHM facilities are represented by some process from the environment playing the role of a higher-level exception handler, this can be implemented as communication via channels. One can imagine that, following the premature termination of a process, a higher EHM component can throw exceptions in the contexts of the other affected processes. In the sense of CSP, this is equivalent to interrupting those processes by inducing an event i2 that will cause, say, process P2 to be aborted and its exception handler (Q2) to be woken up. In this way, graceful termination (giving a chance for a process state clean-up) can be modelled by the CSP-standard interrupt operator △i.

Thus, from the point of view of the interrupt operator △i, aborting a process (P2) is nothing more than communicating the exception event (i2) to the exception handling process (Q2). And indeed, the termination mechanism can really be implemented in this way. Special exception channels can be dedicated to this purpose. Communication via an exception channel is actually an encapsulating mechanism used to throw an exception in the context of the affected processes (P2 and P3), forcing them to abort further execution and forcing the execution of the associated exception handlers (Q2 and/or Q3) instead. Although their implementation is more complicated, from a synchronization point of view these channels are real rendezvous channels. This is the case because, during the ordinary operation mode, processes Q1, Q2 and Q3 are always ready to accept the events i1, i2 and i3 produced by the environment.
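One way to picture this arrangement (hypothetical Java, not the actual implementation): each guarded process Pi is paired with a handler Qi that, during ordinary operation, stands ready on a dedicated exception channel; delivering an exception event to that channel aborts Pi and runs Qi instead.

import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch of exception channels: each guarded process Pi is paired
// with a handler Qi that is always ready on a dedicated exception channel;
// delivering an event ii aborts Pi and runs Qi instead.
public class ExceptionChannels {

    static class GuardedProcess {
        final String name;
        final SynchronousQueue<Exception> exceptionChannel = new SynchronousQueue<>();
        private Thread body;

        GuardedProcess(String name) { this.name = name; }

        void start() {
            body = new Thread(() -> {
                try {
                    while (true) Thread.sleep(100);            // ordinary operation
                } catch (InterruptedException aborted) {
                    System.out.println(name + " aborted");
                }
            });
            Thread handler = new Thread(() -> {
                try {
                    Exception event = exceptionChannel.take(); // Qi: always ready for event ii
                    body.interrupt();                          // abort Pi...
                    body.join();
                    System.out.println("Q of " + name + " handles: " + event.getMessage());
                } catch (InterruptedException ignored) { }
            });
            body.start();
            handler.start();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedProcess p2 = new GuardedProcess("P2");
        p2.start();
        // A higher-level EHM component (e.g. the environmental exception manager)
        // throws an exception into P2's context via its exception channel:
        p2.exceptionChannel.put(new Exception("related exception occurred in P1"));
    }
}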
Writing to an exception channel would pass data about the cause of the exception to the process-level exception handler. In addition, the process must be unblocked if it is waiting on a channel or semaphore. Afterwards, when the scheduler grants CPU time to that process, instead of a regular context switch to the stack of the process, it would switch to the stack unwound to a proper point for the execution of the exception handler. When all process-level handlers Q1, Q2 and Q3 terminate, the construct (Par) will terminate unsuccessfully by throwing an exception to its parent exception construct (ExC). As a consequence, the exception handler Q will be executed.

But who will produce the events i1, i2 and i3? The exception handler Q cannot do that, because it can be executed only after the construct and all of its subprocesses have already terminated. It is possible to imagine an additional environment process (let us name it the environmental exception manager, EEM) that does that. This process would have to run in parallel with the guarded construct or the whole application. Furthermore, because the exception handling response time is important, this newly introduced process should have a higher priority than the top application construct. For the running example: PriPar(EEM, Par △i Q).

Every process is by default equipped with an exception handler which, if not redefined by the user, only throws all exceptions further to the environmental exception manager. While for the previous concept it was not necessary that all processes have associated handlers in order to let their exceptions be handled at the construct level, in this proposal the rule is that all processes must have attached handlers (as in Figure 6). One side-effect of this decision is that it becomes possible to define both a process and its exception handler as two functions of one object. Normally, in the occam-like libraries, processes are implemented as objects, but this was just a design choice, since from the CSP point of view there is no obstacle to realizing a process as merely a function. As long as a process and its exception handler were defined in separate objects, the process had to pack all the data needed for exception handling into an exception object in order to pass it to its exception handler. Having them inside the same object is, however, more convenient in the light of real-time systems. Besides reducing memory usage, dynamic memory allocation can be avoided, since an exception handler can directly inspect the data members defining the state of the process.

The concept of the exception manager opens yet another possibility: thanks to the careful management of the exception events, the resumption feature becomes viable. The manager can encode application-specific exception handling rules. These rules need not necessarily terminate all subprocesses. The termination and resumption models can thus be combined in one application.
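The idea mentioned above of defining a process and its exception handler as two functions of one object, so that the handler can inspect the process state directly instead of receiving it packed into a separately allocated exception object, might look like this (a hypothetical sketch, not the CT library design):

// Hypothetical sketch: the process body and its exception handler are two
// methods of one object, so the handler can inspect the process state directly
// instead of receiving it packed into a separately allocated exception object.
public abstract class CspProcess {

    protected int lastSetpoint;            // example of process state
    protected int samplesProcessed;

    protected abstract void run() throws Exception;   // ordinary operation

    // Default handler: rethrow to the environmental exception manager;
    // subclasses override it to recover using the state fields above.
    protected void handleException(Exception e) throws Exception { throw e; }

    public final void execute() throws Exception {
        try {
            run();
        } catch (Exception e) {
            handleException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        CspProcess controller = new CspProcess() {
            @Override protected void run() throws Exception {
                lastSetpoint = 42;
                samplesProcessed = 10;
                throw new IllegalStateException("actuator fault");
            }
            @Override protected void handleException(Exception e) {
                // Direct access to the process state; no exception object packing needed.
                System.out.println("recovering at setpoint " + lastSetpoint
                        + " after " + samplesProcessed + " samples: " + e.getMessage());
            }
        };
        controller.execute();
    }
}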
3.3.1 Treating Complex Exceptional Situations

In order to make an appropriate handling decision about the occurrence of simultaneous exceptions in multiple processes (sometimes caused by the same physical fault), it is often necessary to check the state of certain resources internal to the concurrently executing constructs. Obviously, handling such complex exception events requires some kind of exception hierarchy checks and application-specific rules that are encoded for all possible combinations. If the number of those rules and combinations is very large, which is the case in complex systems, the environmental exception manager can be implemented as a complex process containing several environmental exception handlers covering different functional views of the system or different classes of exceptional scenarios. It is also possible to create one environmental exception handler for every construct in the system.
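An environmental exception manager with encoded application-specific rules might be organised roughly as below (a hypothetical sketch; the rules, exception names and decisions are invented): each rule inspects the set of reported exceptions and decides whether the affected subprocesses are terminated, resumed, or the situation is escalated.

import java.util.List;
import java.util.Set;

// Hypothetical sketch of an environmental exception manager (EEM): a set of
// application-specific rules maps combinations of reported exceptions to a
// recovery decision (terminate some subprocesses, resume others, or escalate).
public class EnvironmentalExceptionManager {

    enum Decision { TERMINATE_ALL, RESUME_AFFECTED, ESCALATE }

    interface Rule {
        boolean matches(Set<String> raisedExceptionTypes);
        Decision decide();
    }

    private final List<Rule> rules;

    EnvironmentalExceptionManager(List<Rule> rules) { this.rules = rules; }

    Decision handle(Set<String> raisedExceptionTypes) {
        for (Rule r : rules) {
            if (r.matches(raisedExceptionTypes)) return r.decide();
        }
        return Decision.ESCALATE;          // no rule found: escalate to a higher-level handler
    }

    public static void main(String[] args) {
        Rule sensorGlitch = new Rule() {   // a single sensor exception: resume
            public boolean matches(Set<String> e) { return e.equals(Set.of("EncoderLoss")); }
            public Decision decide() { return Decision.RESUME_AFFECTED; }
        };
        Rule compoundFault = new Rule() {  // related concurrent exceptions: terminate
            public boolean matches(Set<String> e) {
                return e.contains("EncoderLoss") && e.contains("MotorStall");
            }
            public Decision decide() { return Decision.TERMINATE_ALL; }
        };
        EnvironmentalExceptionManager eem =
                new EnvironmentalExceptionManager(List.of(sensorGlitch, compoundFault));
        System.out.println(eem.handle(Set.of("EncoderLoss")));               // RESUME_AFFECTED
        System.out.println(eem.handle(Set.of("EncoderLoss", "MotorStall"))); // TERMINATE_ALL
    }
}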