High Performance Networks From Supercomputing to Cloud Computing 1st Edition Dennis Abts
Morgan & Claypool Publishers
www.morganclaypool.com
Series Editor: Mark D. Hill, University of Wisconsin

About SYNTHESIS
This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise, original presentations of important research and development topics, published quickly, in digital and print formats. For more information visit www.morganclaypool.com

SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE
Mark D. Hill, Series Editor
ISBN: 978-1-60845-402-0
Series ISSN: 1935-3235
High Performance Datacenter Networks
Architectures, Algorithms, and Opportunities
Dennis Abts, Google Inc. and John Kim, Korea Advanced Institute of Science and Technology
Datacenter networks provide the communication substrate for large parallel computer systems that
form the ecosystem for high performance computing (HPC) systems and modern Internet appli-
cations. The design of new datacenter networks is motivated by an array of applications ranging
from communication intensive climatology, complex material simulations and molecular dynamics
to such Internet applications as Web search, language translation, collaborative Internet applications,
streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network
enables distributed applications to communicate and interoperate in an orchestrated and efficient
way.
This book describes the design and engineering tradeoffs of datacenter networks. It describes
interconnection networks from topology and network architecture to routing algorithms, and presents
opportunities for taking advantage of the emerging technology trends that are influencing router
microarchitecture. With the emergence of “many-core” processor chips, it is evident that we will also
need “many-port” routing chips to provide a bandwidth-rich network to avoid the performance
limiting effects of Amdahl’s Law. We provide an overview of conventional topologies and their
routing algorithms and show how technology, signaling rates and cost-effective optics are motivating
new network topologies that scale up to millions of hosts. The book also provides detailed case
studies of two high performance parallel computer systems and their networks.
High Performance
Datacenter Networks
Architectures, Algorithms, and Opportunities
Synthesis Lectures on Computer
Architecture
Editor
Mark D. Hill, University of Wisconsin
Synthesis Lectures on Computer Architecture publishes 50- to 100-page publications on topics
pertaining to the science and art of designing, analyzing, selecting and interconnecting hardware
components to create computers that meet functional, performance and cost goals. The scope will
largely follow the purview of premier computer architecture conferences, such as ISCA, HPCA,
MICRO, and ASPLOS.
High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities
Dennis Abts and John Kim
2011
Quantum Computing for Architects, Second Edition
Tzvetan Metodi, Fred Chong, and Arvin Faruque
2011
Processor Microarchitecture: An Implementation Perspective
Antonio González, Fernando Latorre, and Grigorios Magklis
2010
Transactional Memory, 2nd edition
Tim Harris, James Larus, and Ravi Rajwar
2010
Computer Architecture Performance Evaluation Methods
Lieven Eeckhout
2010
Introduction to Reconfigurable Supercomputing
Marco Lanzagorta, Stephen Bique, and Robert Rosenberg
2009
On-Chip Networks
Natalie Enright Jerger and Li-Shiuan Peh
2009
iii
The Memory System: You Can’t Avoid It, You Can’t Ignore It, You Can’t Fake It
Bruce Jacob
2009
Fault Tolerant Computer Architecture
Daniel J. Sorin
2009
The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines
Luiz André Barroso and Urs Hölzle
2009
Computer Architecture Techniques for Power-Efficiency
Stefanos Kaxiras and Margaret Martonosi
2008
Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency
Kunle Olukotun, Lance Hammond, and James Laudon
2007
Transactional Memory
James R. Larus and Ravi Rajwar
2006
Quantum Computing for Computer Architects
Tzvetan S. Metodi and Frederic T. Chong
2006
Copyright © 2011 by Morgan & Claypool
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in
any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in
printed reviews, without the prior permission of the publisher.
High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities
Dennis Abts and John Kim
www.morganclaypool.com
ISBN: 9781608454020 paperback
ISBN: 9781608454037 ebook
DOI 10.2200/S00341ED1V01Y201103CAC014
A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE
Lecture #14
Series Editor: Mark D. Hill, University of Wisconsin
Series ISSN
Synthesis Lectures on Computer Architecture
Print 1935-3235 Electronic 1935-3243
High Performance
Datacenter Networks
Architectures, Algorithms, and Opportunities
Dennis Abts
Google Inc.
John Kim
Korea Advanced Institute of Science and Technology (KAIST)
SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE #14
Morgan & Claypool Publishers
ABSTRACT
Datacenter networks provide the communication substrate for large parallel computer systems that
form the ecosystem for high performance computing (HPC) systems and modern Internet appli-
cations. The design of new datacenter networks is motivated by an array of applications ranging
from communication intensive climatology, complex material simulations and molecular dynamics
to such Internet applications as Web search, language translation, collaborative Internet applications,
streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network
enables distributed applications to communicate and interoperate in an orchestrated and efficient
way.
This book describes the design and engineering tradeoffs of datacenter networks. It de-
scribes interconnection networks from topology and network architecture to routing algorithms,
and presents opportunities for taking advantage of the emerging technology trends that are influ-
encing router microarchitecture. With the emergence of “many-core” processor chips, it is evident
that we will also need “many-port” routing chips to provide a bandwidth-rich network to avoid the
performance limiting effects of Amdahl’s Law. We provide an overview of conventional topologies
and their routing algorithms and show how technology, signaling rates and cost-effective optics are
motivating new network topologies that scale up to millions of hosts. The book also provides detailed
case studies of two high performance parallel computer systems and their networks.
KEYWORDS
network architecture and design, topology, interconnection networks, fiber optics, par-
allel computer architecture, system design
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Note to the Reader. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1
1.1 From Supercomputing to Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Beowulf: The Cluster is Born . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Overview of Parallel Programming Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Putting it all together . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Quality of Service (QoS) requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Flow control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.1 Lossy flow control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6.2 Lossless flow control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.7 The rise of Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.1 Interconnection networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Technology trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Topology, Routing and Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Communication Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3 Topology Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Types of Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Mesh, Torus, and Hypercubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3.1 Node identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.2 k-ary n-cube tradeoffs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4 High-Radix Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.1 Towards High-radix Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2 Technology Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2.1 Pin Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.2.2 Economical Optical Signaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.3 High-Radix Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3.1 High-Dimension Hypercube, Mesh, Torus . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3.2 Butterfly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.3.3 High-Radix Folded-Clos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.3.4 Flattened Butterfly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.5 Dragonfly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.6 HyperX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.1 Routing Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.1.1 Objectives of a Routing Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2 Minimal Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2.1 Deterministic Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2.2 Oblivious Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3 Non-minimal Routing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.3.1 Valiant’s algorithm (VAL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.3.2 Universal Global Adaptive Load-Balancing (UGAL) . . . . . . . . . . . . . . . . 42
5.3.3 Progressive Adaptive Routing (PAR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3.4 Dimensionally-Adaptive, Load-balanced (DAL) Routing . . . . . . . . . . . . . 43
5.4 Indirect Adaptive Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.5 Routing Algorithm Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.5.1 Example 1: Folded-Clos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.5.2 Example 2: Flattened Butterfly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.5.3 Example 3: Dragonfly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6 Scalable Switch Microarchitecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.1 Router Microarchitecture Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 Scaling baseline microarchitecture to high radix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.3 Fully Buffered Crossbar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.4 Hierarchical Crossbar Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.5 Examples of High-Radix Routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.5.1 Cray YARC Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6.5.2 Mellanox InfiniScale IV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
7 System Packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.1 Packaging hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.2 Power delivery and cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.3 Topology and Packaging Locality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
8 Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.1 Cray BlackWidow Multiprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.1.1 BlackWidow Node Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
8.1.2 High-radix Folded-Clos Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.1.3 System Packaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.1.4 High-radix Fat-tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
8.1.5 Packet Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.1.6 Network Layer Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.1.7 Data-link Layer Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.1.8 Serializer/Deserializer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.2 Cray XT Multiprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.2.1 3-D torus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.2.2 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.2.3 Flow Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8.2.4 SeaStar Router Microarchitecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
8.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
9 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
9.1 Programming models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
9.2 Wire protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
9.3 Opportunities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Authors’ Biographies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Preface
This book is aimed at the researcher, graduate student, and practitioner alike. We offer some background and motivation to provide the reader with a substrate upon which we can build
the new concepts that are driving high-performance networking in both supercomputing and cloud
computing. We assume the reader is familiar with computer architecture and basic networking
concepts. We show the evolution of high-performance interconnection networks over the span of
two decades, and the underlying technology trends driving these changes. We describe how to apply
these technology drivers to enable new network topologies and routing algorithms that scale to
millions of processing cores. We hope that practitioners will find the material useful for making
design tradeoffs, and researchers will find the material both timely and relevant to modern parallel
computer systems which make up today’s datacenters.
Dennis Abts and John Kim
March 2011
Acknowledgments
While we draw from our experience at Cray and Google and academic work on the design
and operation of interconnection networks, most of what we learned is the result of hard work,
and years of experience that have led to practical insights. Our experience benefited tremendously
from our colleagues Steve Scott at Cray and Bill Dally at Stanford University. In addition, we
benefited from many hours of whiteboard-huddled conversations with Mike Marty, Philip Wells,
Hong Liu, and Peter Klausler at Google. We would also like to thank Google colleagues James
Laudon, Bob Felderman, Luiz Barroso, and Urs Hölzle for reviewing draft versions of the manuscript.
We want to thank the reviewers, especially Amin Vahdat and Mark Hill, for taking the time to
carefully read and provide feedback on early versions of this manuscript. Thanks to Urs Hölzle for
guidance, and to Kristin Weissman at Google and Michael Morgan at Morgan & Claypool Publishers.
Finally, we are grateful to Mark Hill and Michael Morgan for inviting us to this project and being
patient with deadlines.
Finally, and most importantly, we would like to thank our loving family members who gra-
ciously supported this work and patiently allowed us to spend our free time to work on this project.
Without their enduring patience and with an equal amount of prodding, this work would not have
materialized.
Dennis Abts and John Kim
March 2011
Note to the Reader
We very much appreciate any feedback, suggestions, and corrections you might have on our
manuscript. The Morgan & Claypool publishing process allows a lightweight method to revise the
electronic edition. We plan to revise the manuscript relatively often, and will gratefully acknowledge
any input that will help us to improve the accuracy, readability, or general usefulness of the book.
Please leave your feedback at http://tinyurl.com/HPNFeedback
Dennis Abts and John Kim
March 2011
CHAPTER 1
Introduction
Today’s datacenters have emerged from the collection of loosely connected workstations, which
shaped the humble beginnings of the Internet, and grown into massive “warehouse-scale computers”
(Figure 1.1) capable of running the most demanding workloads. Barroso and Hölzle describe the
architecture of a warehouse-scale computer (WSC) [9] and give an overview of the programming
model and common workloads executed on these machines. The hardware building blocks are
packaged into “racks” of about 40 servers, and many racks are interconnected using a high-performance
network to form a “cluster” with hundreds or thousands of tightly-coupled servers for performance,
[Figure: aerial photograph with the labels “cooling towers,” “power substation,” and “warehouse-scale computer.”]
Figure 1.1: A datacenter with cooling infrastructure and power delivery highlighted.
Figure 1.2: Comparison of web search interest and terminology.
but loosely-coupled for fault tolerance and isolation. This highlights some distinctions between what
have traditionally been called “supercomputers” and what we now consider “cloud computing,” which
appears to have emerged around 2008 (based on the relative Web Search interest shown in Figure
1.2) as a moniker for server-side computing. Increasingly, our computing needs are moving away
from desktop computers toward more mobile clients (e.g., smart phones, tablet computers, and
netbooks) that depend on Internet services, applications, and storage. As an example, it is much more
efficient to maintain a repository of digital photography on a server in the “cloud” than on a PC-like
computer that is perhaps not as well maintained as a server in a large datacenter, which is more
reminiscent of a clean room environment than a living room where your precious digital memories
are subjected to the daily routine of kids, spills, power failures, and varying temperatures. In addition,
most consumers upgrade computers every few years, requiring them to migrate all their precious data
to their newest piece of technology. In contrast, the “cloud” provides a clean, temperature-controlled
environment with ample power distribution and backup. Moreover, data in the “cloud” is typically
replicated for redundancy; in the event of a hardware failure, the user’s data is restored, generally
without the user even being aware that an error occurred.
1.1 FROM SUPERCOMPUTING TO CLOUD COMPUTING
As the ARPANET transformed into the Internet over the past forty years, and the World Wide
Web emerged from adolescence and turned twenty, this metamorphosis has seen changes in both
supercomputing and cloud computing. The supercomputing industry was born in 1976 when Seymour
Cray announced the Cray-1 [54]. Among its many innovations were its processor design,
process technology, system packaging, and instruction set architecture. The foundation of the architecture
was the notion of vector operations, which allowed a single instruction to operate
on an array, or "vector," of elements simultaneously, in contrast to scalar processors of the time,
whose instructions operated on single data items. The vector parallelism approach dominated the
high-performance computing landscape for much of the 1980s and early 1990s until “commodity”
microprocessors began aggressively implementing forms of instruction-level parallelism (ILP) and
better cache memory systems to exploit spatial and temporal locality exhibited by most applications.
Improvements in CMOS process technology and full-custom CMOS design practices allowed
microprocessors to quickly ramp up clock rates to several gigahertz. This, coupled with multi-issue
pipelines and efficient branch prediction and speculation, eventually allowed microprocessors to catch
up with proprietary vector processors from Cray, Convex, and NEC. Over time, conventional
microprocessors incorporated short vector units (e.g., SSE, MMX, AltiVec) into the instruction set.
However, the largest beneficiary of vector processing has been multimedia applications, as evidenced
by the Cell processor (jointly developed by Sony, Toshiba, and IBM), which found widespread success
in Sony's PlayStation 3 game console and even in some special-purpose computer systems from vendors
such as Mercury Systems.
Parallel applications eventually have to synchronize and communicate among parallel threads.
Amdahl's Law is relentless: unless enough parallelism is exposed, the time spent orchestrating the
parallelism and executing the sequential region will ultimately limit application performance [27].
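Amdahl's Law is easy to make concrete. The sketch below (our own illustration; the function name and numbers are not from the text) evaluates the standard speedup bound 1 / ((1 − p) + p/N), where p is the parallelizable fraction of the work and N the processor count:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup of a program whose fraction p is perfectly parallelized
    across n processors; the (1 - p) sequential region is the limiter."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallel, speedup can never exceed 1/0.05 = 20x,
# no matter how many processors are thrown at the problem:
for n in (10, 100, 1000):
    print(n, round(amdahl_speedup(0.95, n), 1))
```

Note how quickly the curve flattens: going from 100 to 1000 processors buys very little once the sequential region dominates.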
1.2 BEOWULF: THE CLUSTER IS BORN
In 1994 Thomas Sterling (then dually affiliated with the California Institute of Technology and
NASA's JPL) and Donald Becker (then a researcher at NASA) assembled a parallel computer that
became known as a Beowulf cluster1. What was unique about Beowulf [61] systems was that they
were built from common "off-the-shelf" computers; as Figure 1.3 shows, system packaging was not
an emphasis. More importantly, as a loosely-coupled distributed memory machine, Beowulf forced
researchers to think about how to efficiently program parallel computers. As a result, we benefited
from portable and free programming interfaces such as the Parallel Virtual Machine (PVM) and the
Message Passing Interface (MPI, with implementations including MPICH, LAM/MPI, and Open MPI),
with MPI being embraced by the HPC community and highly optimized.
The Beowulf cluster was organized so that one machine was designated the “server,” and it
managed job scheduling, pushing binaries to clients, and monitoring. It also acted as the gateway
1The genesis of the name comes from the poem which describes Beowulf as having “thirty men’s heft of grasp in the gripe of his
hand.”
Figure 1.3: A 128-processor Beowulf cluster at NASA.
to the "outside world," so researchers had a login host. The model is still quite common: some
nodes are designated as service and IO nodes where users actually log in to the parallel machine.
From there, they can compile their code and launch jobs on the "compute only" nodes, the worker
bees of the colony, while console information and machine status are communicated back to the service nodes.
1.3 OVERVIEW OF PARALLEL PROGRAMMING MODELS
Early supercomputers were able to work efficiently, in part, because they shared a common physical
memory space. As a result, communication among processors was very efficient as they updated
shared variables and operated on common data. However, as the size of the systems grew, this
shared memory model evolved into a distributed shared memory (DSM) model, where each processing
node owns a portion of the machine's physical memory and the programmer is provided with a
logically shared address space, making it easier to reason about how the application is partitioned and
how its threads communicate. The Stanford DASH [45] was the first to demonstrate the cache-coherent
non-uniform memory access (ccNUMA) model, and the SGI Origin 2000 [43] was the
first machine to successfully commercialize the DSM architecture.
We commonly refer to distributed memory machines as "clusters" since they are loosely coupled
and rely on message passing for communication among processing nodes. With the inception of
Beowulf clusters, the HPC community realized it could build modest-sized parallel computers on
a relatively small budget. To their benefit, the common benchmark for measuring the performance
of a parallel computer is LINPACK, which is not communication intensive, so it was commonplace
to use inexpensive Ethernet networks to string together commodity nodes. As a result, Ethernet got
a foothold on the list of the TOP500 [62] civilian supercomputers with almost 50% of the TOP500
systems using Ethernet.
1.4 PUTTING IT ALL TOGETHER
The Cray-1 [54] supercomputer was expected to ship at a rate of one system per quarter in 1977. Today,
microprocessor companies have refined their CMOS processes and manufacturing making them
very cost-effective building blocks for large-scale parallel systems capable of 10s of petaflops. This
shift away from “proprietary” processors and trend toward “commodity” processors has fueled the
growth of systems. At the time of this writing, the largest computer on the TOP500 list [62] has in
excess of 220,000 cores (see Figure 7.5) and consumes almost seven megawatts!
A datacenter server has much in common with one used in a supercomputer; however, there
are also some very glaring differences. We enumerate several properties of both a warehouse-scale
computer (WSC) and a supercomputer (Cray XE6).
Datacenter server
• Sockets per server 2 sockets x86 platform
• Memory capacity 16 GB DRAM
• Disk capacity 5×1TB disk drive, and 1×160GB SSD (FLASH)
• Compute density 80 sockets per rack
• Network bandwidth per rack 1×48-port GigE switch with 40 down links, and 8 uplinks (5×
oversubscription)
• Network bandwidth per socket 100 Mb/s if 1 GigE rack switch, or 1 Gb/s if 10 GigE rack
switch
Supercomputer server
• Sockets per server 8 sockets x86 platform
• Memory capacity 32 or 64 GB DRAM
• Disk capacity IO capacity varies. Each XIO blade has four PCIe-Gen2 interfaces, for a total
of 96 PCIe-Gen2 ×16 IO devices for a peak IO bandwidth of 768 GB/s per direction.
• Compute density 192 sockets per rack
• Network bandwidth per rack 48×48-port Gemini switch chips, each with 160 GB/s switching
bandwidth
• Network bandwidth per socket 9.6 GB/s injection bandwidth with non-coherent HyperTransport 3.0 (ncHT3)
Several things stand out as differences between a datacenter server and supercomputer node.
First, the compute density of the supercomputer is significantly better than that of a standard 40U rack. On
the other hand, this dense packaging also puts pressure on cooling requirements, not to mention
power delivery. As power and its associated delivery become increasingly expensive, it becomes more
important to optimize the number of operations per watt; often the size of a system is limited by its
power distribution and cooling infrastructure.
Another point is the vast difference in network bandwidth per socket, in large part because ncHT3
is a much higher-bandwidth processor interface than PCIe-Gen2; however, as PCIe-Gen3 ×16
becomes available, we expect that gap to narrow.
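The per-socket figures listed above can be reproduced with a little arithmetic. The sketch below assumes, as the listed numbers imply, that a socket's share is the rack's total uplink bandwidth divided evenly among all sockets in the rack:

```python
# Datacenter rack from the list above: 80 sockets, and a 48-port rack
# switch with 40 down links to servers and 8 uplinks.
sockets_per_rack, down_links, up_links = 80, 40, 8

oversubscription = down_links / up_links          # 5x, as stated above

def per_socket_mbps(link_gbps: float) -> float:
    """Rack uplink bandwidth shared evenly across every socket."""
    return up_links * link_gbps * 1000 / sockets_per_rack

gige = per_socket_mbps(1.0)       # 100 Mb/s with a GigE rack switch
ten_gige = per_socket_mbps(10.0)  # 1 Gb/s with a 10 GigE rack switch
```

This sharing model is a simplification (real traffic is bursty and not all of it leaves the rack), but it reconciles the 100 Mb/s and 1 Gb/s per-socket figures with the 5× oversubscription number.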
1.5 QUALITY OF SERVICE (QOS) REQUIREMENTS
With HPC systems, it is commonplace to dedicate the machine to a single application for the duration
of its execution, allowing all processors to be used as compute resources. As a result, there is no need
for performance isolation from competing applications. Quality of Service (QoS) provides both performance
isolation and differentiated service for applications2. Cloud computing often has varied
workloads requiring multiple applications to share resources. Workload consolidation [33] is becoming
increasingly important as memory and processor costs increase; as a result, so does the value of
increased system utilization.
The QoS class refers to the end-to-end class of service as observed by the application. In
principle, QoS is divided into three categories:
Best effort - traffic is treated as a FIFO with no differentiation provided.
Differentiated service - also referred to as “soft QoS” where traffic is given a statistical preference
over other traffic. This means it is less likely to be dropped relative to best effort traffic, for
example, resulting in lower average latency and increased average bandwidth.
Guaranteed service - also referred to as “hard QoS” where a fraction of the network bandwidth is
reserved to provide no-loss, low jitter bandwidth guarantees.
In practice, there are many intermediate pieces which are, in part, responsible for implementing a QoS
scheme. A routing algorithm determines the set of usable paths through the network between any
source and destination. Generally speaking, routing is a background process that attempts to load-balance
the physical links in the system, taking into account any network faults, and programming
2We use the term “applications” loosely here to represent processes or threads, at whatever granularity a service level agreement is
applied.
the forwarding tables within each router. When a new packet arrives, the header is inspected and
the network address of the destination is used to index into the forwarding table, which emits the
output port where the packet is scheduled for transmission. The "packet forwarding" process is done
on a packet-by-packet basis and is responsible for identifying packets marked for special treatment
according to their QoS class.
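The per-hop forwarding step described above amounts to a table lookup. A minimal sketch (the addresses and port numbers here are hypothetical, purely for illustration):

```python
# Each router holds a forwarding table mapping a destination network
# address to the output port on which the packet should be scheduled.
forwarding_table = {
    0x0A: 1,   # destination node 0x0A reached via output port 1
    0x0B: 1,   # a different destination may share the same port
    0x1F: 3,
}

def forward(dest_addr: int) -> int:
    """Inspect the header's destination address; emit the output port."""
    return forwarding_table[dest_addr]
```

Real routers implement this lookup in hardware (often as a RAM indexed by the destination address), but the logical operation is the same.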
The basic unit over which a QoS class is applied is the flow. A flow is described as a tuple
(SourceIP, SourcePort, DestIP, DestPort). Packets are marked by the host or edge switch using
either 1) port range, or 2) host (sender/client-side) marking. Since we are talking about end-to-end
service levels, ideally the host which initiates the communication would request a specific level of
service. This requires some client-side API for establishing the QoS requirements prior to sending
a message. Alternatively, edge routers can mark packets as they are injected into the core fabric.
Packets are marked with their service class, which is interpreted at each hop and acted upon by
routers along the path. For common Internet protocols, the differentiated services (DS) field of the IP
header provides this function, as defined by the DiffServ [RFC2475] architecture for network-layer
QoS. For compatibility reasons, this is the same field as the type of service (ToS) field [RFC791] of
the IP header. Since the RFC does not clearly describe how “low,” “medium,” or “high” are supposed
to be interpreted, it is common to use five classes: best effort (BE), AF1, AF2, AF3, AF4, and set
the drop priority to 0 (ignored).
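As a concrete illustration of such marking, the sketch below computes the standard Assured Forwarding codepoints (AFxy = 8x + 2y, per RFC 2597) and the resulting ToS byte; the DS field occupies the upper six bits of the old ToS byte. Note one assumption: RFC 2597 defines drop precedences 1 through 3, so we use precedence 1, whereas the scheme in the text simply ignores drop priority.

```python
def af_dscp(cls: int, drop_precedence: int) -> int:
    """Assured Forwarding codepoint AFxy = 8*x + 2*y (RFC 2597)."""
    return 8 * cls + 2 * drop_precedence

# The five-class scheme from the text: best effort plus AF1..AF4.
classes = {"BE": 0}
for c in range(1, 5):
    classes[f"AF{c}"] = af_dscp(c, 1)   # lowest drop precedence

# The six-bit DS field sits in the upper bits of the IPv4 ToS byte.
tos_byte = {name: dscp << 2 for name, dscp in classes.items()}
```

For example, AF1 with drop precedence 1 is DSCP 10, which appears on the wire as ToS byte 0x28.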
1.6 FLOW CONTROL
Surprisingly, a key difference in system interconnects is flow control. How the switch and buffer
resources are managed is very different in Ethernet than what is typical in a supercomputer in-
terconnect. There are several kinds of flow control in a large distributed parallel computer. The
interconnection network is a shared resource among all the compute nodes, and network resources
must be carefully managed to avoid corrupting data, overflowing a buffer, etc. The basic mechanism
by which resources in the network are managed is flow control. Flow control provides a simple ac-
counting method for managing resources that are in demand by multiple uncoordinated sources.
The resource is managed in units of flits (flow control units). When a resource is requested but not
currently available for use, we must decide what to do with the incoming request. In general, we can
1) drop the request and all subsequent requests until the resource is freed, or 2) block and wait for
the resource to be freed.
1.6.1 LOSSY FLOW CONTROL
With lossy flow control [20, 48], the hardware can discard packets until there is room in the desired
resource. This approach is usually applied to input buffers on each switch chip, but also applies to
resources in the network interface controller (NIC) chip as well. When packets are dropped, the
software layers must detect the loss, usually through an unexpected sequence number indicating that
one or more packets are missing or out of order. The receiver software layers will discard packets
that do not match the expected sequence number, and the sender software layers will detect that it
Figure 1.4: Example of credit-based flow control across a network link. (The data link layers at either end of the link exchange data packets in one direction and flow control packets carrying credits in the other.)
has not received an acknowledgment packet; the resulting sender timeout prompts the "send
window" (packets sent since the last acknowledgment was received) to be retransmitted. This
algorithm is referred to as go-back-N, since the sender will "go back" and retransmit the last N (send
window) packets.
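The go-back-N behavior can be sketched in a few lines. This is a toy model of our own (real implementations add sequence-number wraparound, timers, and the receiver side):

```python
# Toy go-back-N sender: on timeout, retransmit every packet sent since
# the last acknowledgment (the "send window").
class GoBackNSender:
    def __init__(self, window):
        self.window = window     # maximum packets in flight
        self.base = 0            # oldest unacknowledged sequence number
        self.next_seq = 0        # next sequence number to send

    def can_send(self):
        return self.next_seq < self.base + self.window

    def send(self):
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq):
        # cumulative acknowledgment: everything up to seq is delivered
        self.base = max(self.base, seq + 1)

    def on_timeout(self):
        # "go back" and retransmit the whole outstanding send window
        return list(range(self.base, self.next_seq))

sender = GoBackNSender(window=4)
for _ in range(4):
    sender.send()        # packets 0..3 in flight
sender.ack(1)            # receiver acknowledged through packet 1
assert sender.on_timeout() == [2, 3]
```

The cost of this simplicity is that a single dropped packet forces retransmission of everything behind it, which is one reason lossy flow control grows expensive under congestion.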
1.6.2 LOSSLESS FLOW CONTROL
Lossless flow control implies that packets are never dropped as a result of a lack of buffer space (i.e.,
in the presence of congestion). Instead, it provides back pressure to indicate the absence of available
buffer space in the resource being managed.
1.6.2.1 Stop/Go (XON/XOFF) flow control
A common approach is XON/XOFF or stop/go flow control. In this approach, the receiver provides
simple handshaking to the sender indicating whether it is safe (XON) to transmit or not (XOFF).
The sender is able to send flits until the receiver asserts stop (XOFF). Then, as the receiver continues
to process packets from the input buffer, freeing space, a threshold is eventually reached at which the
receiver asserts XON again, allowing the sender to resume sending. This stop/go functionality
correctly manages the resource and avoids overflow as long as the buffer space remaining when
XOFF is asserted is sufficient to allow any in-flight flits to land. This slack in the buffer is necessary to act as a flow
control shock absorber for the outstanding flits needed to cover the propagation delay of the flow
control signals.
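A back-of-the-envelope sizing of that slack can be sketched as follows (the link rate, flit size, and delay below are illustrative numbers of our own, not from the text): every flit launched during one round trip of the flow control signal must still fit above the XOFF threshold.

```python
# Buffer "slack" needed above the XOFF threshold: the sender keeps
# transmitting until the XOFF signal propagates back, so the slack must
# cover one round trip of the signal at the link's flit rate.
def required_slack_flits(link_gbps, flit_bits, one_way_delay_ns):
    flits_per_ns = link_gbps / flit_bits      # Gb/s is bits per ns
    return 2 * one_way_delay_ns * flits_per_ns

# e.g., a 10 Gb/s link, 64-bit flits, 100 ns of wire + logic each way
slack = required_slack_flits(10.0, 64, 100.0)   # 31.25 flits of slack
```

Longer links or faster signaling rates increase the slack requirement proportionally, which is one reason stop/go is less buffer-efficient than credit-based flow control on long links.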
1.6.2.2 Credit-based flow control
Credit-based flow control (Figure 1.4) provides more efficient use of the buffer resources. The sender
maintains a count of the number of available credits, which represents the amount of free space in
the receiver's input buffer. A separate count is used for each virtual channel (VC) [21]. When a new
packet arrives at the output port, the sender checks the available credit counter. For wormhole flow
control [20] across the link, the sender's available credit needs only to be one or more. For virtual
cut-through (VCT) [20, 22] flow control across the link, the sender's available credit must be at least
the size of the packet. In practice, the switch hardware doesn't have to track the size of each
packet in order to allow VCT flow control; the sender can simply check that the available credit count
is at least as large as the maximum packet size.
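The credit accounting can be sketched as follows (a hypothetical class of our own, with per-VC counters as described above, and both the wormhole check and the conservative VCT check):

```python
# Credit-based link sketch: one credit counter per virtual channel (VC),
# each credit being one free flit slot in the receiver's input buffer.
class CreditedLink:
    def __init__(self, vcs, buffer_flits):
        self.credits = [buffer_flits] * vcs

    def can_send_wormhole(self, vc):
        return self.credits[vc] >= 1      # one free flit slot suffices

    def can_send_vct(self, vc, max_packet_flits):
        # conservative virtual cut-through check: instead of tracking
        # each packet's size, require room for a maximum-sized packet
        return self.credits[vc] >= max_packet_flits

    def send_flit(self, vc):
        assert self.credits[vc] > 0
        self.credits[vc] -= 1

    def credit_returned(self, vc):
        self.credits[vc] += 1             # receiver freed a flit slot

link = CreditedLink(vcs=2, buffer_flits=8)
link.send_flit(0)
assert link.can_send_wormhole(0)       # 7 credits left on VC 0
assert not link.can_send_vct(0, 8)     # no room for an 8-flit packet
```

Because the sender never transmits without a credit in hand, the receiver's buffer can never overflow, with no reliance on timing slack as in stop/go.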
1.7 THE RISE OF ETHERNET
It may be an extreme example comparing a typical datacenter server to a state-of-the-art super-
computer node, but the fact remains that Ethernet is gaining a significant foothold in the high-
performance computing space with nearly 50% of the systems on the TOP500 list [62] using Gi-
gabit Ethernet, as shown in Figure 1.5(b). Infiniband (including SDR, DDR, and QDR) accounts
for 41% of the interconnects, leaving very little room for proprietary networks. The landscape was
very different in 2002, as shown in Figure 1.5(a), where Myrinet accounted for about one third of
the system interconnects. The IBM SP2 interconnect accounted for about 18%, and the remaining
50% of the system interconnects were split among about nine different manufacturers. In 2002, only
about 8% of the TOP500 systems used gigabit Ethernet, compared to the nearly 50% in June of
2010.
1.8 SUMMARY
No doubt "cloud computing" benefited from this wild growth and acceptance in the HPC community,
driving prices down and making parts more reliable. Moving forward, we may see even further
consolidation as 40 Gig Ethernet converges with some of the Infiniband semantics with RDMA
over Ethernet (ROE). However, a warehouse-scale computer (WSC) [9] and a supercomputer have
different usage models. For example, most supercomputer applications expect to run on the machine
in a dedicated mode, not having to compete for compute, network, or IO resources with any other
applications.
Supercomputing applications will commonly checkpoint their datasets, since the MTBF of a
large system is usually measured in tens of hours. Supercomputing applications also typically run on
a dedicated system, so QoS demands are not a major concern. On the other hand, a datacenter
will run a wide variety of applications, some user-facing like Internet email, and others behind the
scenes. The workloads vary drastically, and programmers must learn that hardware can, and does,
fail, and that applications must be fault-aware and deal with failures gracefully. Furthermore, clusters in the
datacenter are often shared across dozens of applications, so performance isolation and fault isolation
are key to scaling applications to large processor counts.
Choosing the “right” topology is important to the overall system performance. We must take
into account the flow control, QoS requirements, fault tolerance and resilience, as well as workloads
to better understand the latency and bandwidth characteristics of the entire system. For example,
(a) 2002
(b) 2010
Figure 1.5: Breakdown of supercomputer interconnects from the Top500 list.
topologies with abundant path diversity are able to find alternate routes between arbitrary endpoints.
This is only one aspect of topology choice that we will consider in subsequent chapters.
C H A P T E R 2
Background
Over the past three decades, Moore's Law has ushered in an era where transistors within a single
silicon package are abundant, a trend that system architects have taken advantage of to create a class of
many-core chip multiprocessors (CMPs) that interconnect many small processing cores using an
on-chip network. However, pin density, the number of signal pins per unit of silicon area, has not
kept pace. As a result, pin bandwidth, the amount of data we can get on and off the chip
package, has become a first-order design constraint and a precious resource for system designers.
2.1 INTERCONNECTION NETWORKS
The components of a computer system often have to communicate to exchange status information,
or data that is used for computation. The interconnection network is the substrate over which this
communication takes place. Many-core CMPs employ an on-chip network for low-latency, high-bandwidth
load/store operations between processing cores and memory, and among processing cores
within a chip package.
A processor, memory, and associated IO devices are often packaged together and referred
to as a processing node. The system-level interconnection network connects all the processing nodes
according to the network topology. In the past, system components shared a bus over which address
and data were exchanged, however, this communication model did not scale as the number of
components sharing the bus increased. Modern interconnection networks take advantage of high-
speed signaling [28] with point-to-point serial links providing high-bandwidth connections between
processors and memory in multiprocessors [29, 32], connecting input/output (IO) devices [31, 51],
and as switching fabrics for routers.
2.2 TECHNOLOGY TRENDS
There are many considerations that go into building a large-scale cluster computer, many of which
revolve around its cost effectiveness, in both capital (procurement) cost and operating expense. The
components that go into a cluster each have different technology drivers, which
blurs the line that defines the optimal solution for both performance and cost. This chapter takes a
look at a few of the technology drivers and how they pertain to the interconnection network.
The interconnection network is the substrate over which processors, memory, and I/O devices
interoperate. The underlying technology from which the network is built determines the data rate,
resiliency, and cost of the network. Ideally, the processor, network, and I/O devices are all orchestrated
in a way that leads to a cost-effective, high-performance computer system. The system, however, is
no better than the components from which it is built.
The basic building block of the network is the switch (router) chip that interconnects the
processing nodes according to some prescribed topology. The topology and how the system is packaged
are closely related; typical packaging schemes are hierarchical: chips are packaged onto printed
circuit boards, which in turn are packaged into an enclosure (e.g., rack), and enclosures are connected together
to create a single system.
Figure 2.1: Off-chip bandwidth of prior routers, and ITRS predicted growth.
The past 20 years have seen several orders of magnitude increase in off-chip bandwidth, spanning
from several gigabits per second up to several terabits per second today. The bandwidth shown in
Figure 2.1 plots the total pin bandwidth of a router – i.e., equivalent to the total number of signals
times the signaling rate of each signal – and illustrates an exponential increase in pin bandwidth.
Moreover, we expect this trend to continue into the next decade as shown by the International
Roadmap for Semiconductors (ITRS) in Figure 2.1, with 1000s of pins per package and more than
100 Tb/s of off-chip bandwidth. Despite this exponential growth, pin and wire density simply does
not match the growth rates of transistors as predicted by Moore’s Law.
(a) Load versus latency for an ideal M/D/1 queue model. (b) Measured data showing offered load (Mb/s) versus latency (μs), with average accepted throughput (Mb/s) overlaid to demonstrate saturation in a real network.
Figure 2.2: Network latency and bandwidth characteristics.
2.3 TOPOLOGY, ROUTING AND FLOW CONTROL
Before diving into the details of what drives network performance, we pause to lay the groundwork for
some fundamental terminology and concepts. Network performance is characterized by its latency
and bandwidth characteristics, as illustrated in Figure 2.2. The queueing delay, Q(λ), is a function
of the offered load (λ) and is described by the latency-bandwidth characteristics of the network. An
approximation of Q(λ) is given by an M/D/1 queue model, shown in Figure 2.2(a). If we overlay the average
accepted bandwidth observed by each node, assuming benign traffic, we get Figure 2.2(b).
Q(λ) = 1/(1 − λ)    (2.1)
When there is very low offered load on the network, the Q(λ) delay is negligible. However, as traffic
intensity increases, and the network approaches saturation, the queueing delay will dominate the
total packet latency.
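Equation 2.1 is easy to explore numerically. The sketch below (our own illustration, in normalized units) evaluates the delay at a few offered loads to show the hockey-stick behavior near saturation:

```python
# Evaluate the Equation 2.1 approximation at a few offered loads.
def queueing_delay(offered_load):
    """Normalized delay approximation Q(lambda) = 1 / (1 - lambda)."""
    assert 0.0 <= offered_load < 1.0, "undefined at or beyond saturation"
    return 1.0 / (1.0 - offered_load)

# Negligible at low load, dominant near saturation:
for lam in (0.1, 0.5, 0.9, 0.99):
    print(f"load {lam:.2f} -> delay {queueing_delay(lam):.1f}")
```

Doubling the load from 0.1 to 0.2 barely moves the delay, but moving from 0.9 to 0.99 increases it by an order of magnitude, which is exactly the saturation behavior seen in Figure 2.2.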
The performance and cost of the interconnect are driven by a number of design factors,
including topology, routing, flow control, and message efficiency. The topology describes how network
nodes are interconnected and determines the path diversity — the number of distinct paths between
any two nodes. The routing algorithm determines which path a packet will take in such a way as
to load-balance the physical links in the network. Network resources (primarily buffers for packet
storage) are managed using a flow control mechanism. In general, flow control happens at the link
layer and possibly end-to-end. Finally, packets carry a data payload, and the packet efficiency determines
the delivered bandwidth to the application.
While recent many-core processors have spurred a 2× and 4× increase in the number of
processing cores in each cluster, unless network performance keeps pace, the effects of Amdahl’s
Law will become a limitation. The topology, routing, flow control, and message efficiency all have
first-order effects on system performance; thus, we will dive into each of these areas in more
detail in subsequent chapters.
2.4 COMMUNICATION STACK
Layers of abstraction are commonly used in networking to provide fault isolation and device in-
dependence. Figure 2.3 shows the communication stack that is largely representative of the lower
four layers of the OSI networking model. To reduce software overhead and the resulting end-to-
end latency, we want a thin networking stack. Some of the protocol processing that is common
in Internet communication protocols is handled in specialized hardware in the network interface
controller (NIC). For example, the transport layer provides reliable message delivery to applications,
and whether the protocol bookkeeping is done in software (e.g., TCP) or hardware (e.g., Infiniband
reliable connection) directly affects the application performance. The network layer provides a logical
namespace for endpoints (and possibly switches) in the system. The network layer handles packets
and provides the routing information identifying paths through the network among all source-destination
pairs. It is the network layer that asserts routes, either at the source (i.e., source-routed)
Figure 2.3: The communication stack. (Two endpoint stacks, connected by the interconnection network, each comprising: a Transport layer for end-to-end flow control and reliable message delivery; a Network layer for routing, node addressing, and load balancing; a Data Link layer for link-level flow control and data-link layer reliable delivery; and a Physical layer for physical encoding (e.g., 8b10b), byte and lane alignment, and physical media encoding.)
or at each individual hop (i.e., distributed routing) along the path. The data link layer provides
link-level flow control to manage the receiver's input buffer in units of flits (flow control units). The
lowest level of the protocol stack, the physical media layer, is where data is encoded and driven onto
the medium. The physical encoding must maintain a DC-neutral transmission line and commonly
uses 8b10b or 64b66b encoding to balance the transition density. For example, with 8b10b a 10-bit encoded
value is used to represent 8 bits of data, resulting in a 20% physical encoding overhead.
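The encoding overhead translates directly into delivered bandwidth. A small sketch (the 10 Gb/s raw rate is an illustrative number of our own) comparing the two codes:

```python
# Effective payload bandwidth after physical-layer encoding overhead.
def effective_gbps(raw_gbps, data_bits, coded_bits):
    return raw_gbps * data_bits / coded_bits

# 8b10b: 10 coded bits carry 8 data bits, so 20% of the raw rate is lost.
assert effective_gbps(10.0, 8, 10) == 8.0
# 64b66b: 66 coded bits carry 64 data bits, only ~3% overhead.
wide = effective_gbps(10.0, 64, 66)
```

The far lower overhead of 64b66b is one reason faster serial links (e.g., 10 Gb/s Ethernet) moved to it, at the cost of somewhat weaker run-length and DC-balance guarantees than 8b10b.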
SUMMARY
Interconnection networks are a critical component of modern computer systems. Cloud computing
has emerged to provide a homogeneous cluster, using conventional microprocessors and
common Internet communication protocols, aimed at delivering Internet services (e.g., email, Web
search, collaborative Internet applications, streaming video, and so forth) at large scale. While Internet
services themselves may be insensitive to latency, since they operate on human timescales
measured in 100s of milliseconds, the backend applications providing those services may indeed
require large amounts of bandwidth (e.g., indexing the Web) and low latency characteristics. The
programming model for cloud services is built largely around distributed message passing, commonly
implemented over TCP (Transmission Control Protocol) as a conduit for making remote procedure
calls (RPC).
Supercomputing applications, on the other hand, are often communication intensive and can
be sensitive to network latency. The programming model may use a combination of shared memory
and message passing (e.g., MPI), often with very fine-grained communication and synchronization
needs. For example, collective operations, such as global sum, are commonplace in supercomputing
applications and rare in Internet services. This is largely because Internet applications evolved from
simple hardware primitives (e.g., low-cost Ethernet NICs) and common communication models (e.g.,
TCP sockets) that were incapable of such operations.
As processor and memory performance continues to increase, the interconnection network
is becoming increasingly important and largely determines the bandwidth and latency of remote
memory access. Going forward, super datacenters will evolve into exa-scale
parallel computers.
C H A P T E R 3
Topology Basics
The network topology — describing precisely how nodes are connected — plays a central role in
both the performance and cost of the network. In addition, the topology drives aspects of the switch
design (e.g., virtual channel requirements, routing function, etc.), fault tolerance, and sensitivity to
adversarial traffic. There are subtle yet very practical design issues that only arise at scale; we try to
highlight those key points as they appear.
3.1 INTRODUCTION
Many scientific problems can be decomposed into a 3-D structure that represents the basic building
blocks of the underlying phenomenon being studied. Such problems often have nearest-neighbor
communication patterns, for example, and lend themselves nicely to k-ary n-cube networks. A
high-performance application will often use the system in a dedicated mode to obtain the necessary performance
isolation; however, a large production datacenter cluster will often run multiple applications
simultaneously, with varying workloads and often unstructured communication patterns.
The choice of topology is largely driven by two factors: technology and packaging constraints.
Here, technology refers to the underlying silicon from which the routers are fabricated (i.e., node size,
pin density, power, etc) and the signaling technology (e.g., optical versus electrical). The packaging
constraints will determine the compute density, or amount of computation per unit of area on the
datacenter floor. The packaging constraints will also dictate the data rate (signaling speed) and
distance over which we can reliably communicate.
As a result of evolving technology, the topologies used in large-scale systems have also changed.
Many of the earliest interconnection networks were designed using topologies such as butterflies or
hypercubes, based on the simple observation that these topologies minimized hop count. Analysis
by both Dally [18] and Agarwal [5] showed that under fixed packaging constraints, a low-radix
network offered lower packet latency and thus better performance. Since the mid-1990s, k-ary
n-cube networks were used by several high-performance multiprocessors such as the SGI Origin
2000 hypercube [43], the 2-D torus of the Cray X1 [16], the 3-D torus of the Cray T3E [55]
and XT3 [12, 17] and the torus of the Alpha 21364 [49] and IBM BlueGene [35]. However, the
increasing pin bandwidth has recently motivated the migration towards high-radix topologies such
as the radix-64 folded-Clos topology used in the Cray BlackWidow system [56]. In this chapter, we
will discuss mesh/torus topologies while in the next chapter, we will present high-radix topologies.
3.2 TYPES OF NETWORKS
Topologies can be broken down into two different genres: direct and indirect [20]. A direct network
has processing nodes attached directly to the switching fabric; that is, the switching fabric is dis-
tributed among the processing nodes. An indirect network has the network independent of the
endpoints themselves – i.e., dedicated switch nodes exist, and packets are forwarded indirectly
through these switch nodes. The type of network determines some of the packaging and cabling
requirements as well as fault resilience. It also impacts cost, for example, since a direct network can
combine the switching fabric and the network interface controller (NIC) functionality in the same
silicon package. An indirect network typically has two separate chips, with one for the NIC and
another for the switching fabric of the network. Examples of direct networks include the mesh, torus, and
hypercube topologies discussed in this chapter, as well as high-radix topologies such as the flattened butterfly
described in the next chapter. Indirect networks include the conventional butterfly and fat-tree
topologies.
The terms radix and dimension are used to describe both types of networks, but with different
meanings for each. For an indirect network, radix refers to the number of ports on a switch, and the
dimension is related to the number of stages in the network. For a direct network, the two terms are
reversed: radix refers to the number of nodes within a dimension, and the network size can be further
increased by adding dimensions. The two terms are duals of each other across the two network
types; for example, to reduce the network diameter, one can increase the radix of an indirect network
or the dimension of a direct network. To be consistent with existing literature, we will use the term
radix to refer to these different aspects of direct and indirect networks.
3.3 MESH, TORUS, AND HYPERCUBES
The mesh, torus, and hypercube networks all belong to the same family of direct networks, often
referred to as k-ary n-mesh or k-ary n-cube. The scalability of the network is largely determined by
the radix, k, and the number of dimensions, n, with N = k^n total endpoints in the network. In practice,
the radix of the network is not necessarily the same for every dimension (Figure 3.2). Therefore, a
more general way to express the total number of endpoints is given by Equation 3.1.
N = ∏_{i=0}^{n−1} k_i    (3.1)
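Equation 3.1 is easy to check numerically; the product form reduces to k^n when every dimension has the same radix. A minimal sketch (the function name `num_endpoints` is our own, not from the text):

```python
from math import prod

def num_endpoints(radices):
    """Total endpoints N of a mixed-radix mesh/torus: the product
    of the per-dimension radices k_i (Equation 3.1)."""
    return prod(radices)

# The (8,4)-ary 2-mesh of Figure 3.2(a) has 8 * 4 = 32 endpoints;
# the regular 8-ary 2-mesh of Figure 3.2(b) has 8 * 8 = 64.
```

For a uniform radix this is just k^n, e.g. a 10-dimension binary hypercube has 2^10 = 1024 endpoints.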
Figure 3.1: Mesh (a) and torus (b) networks. (a) 8-ary 1-mesh; (b) 8-ary 1-cube.
Mesh and torus networks (Figure 3.1) provide a convenient starting point for discussing topology
tradeoffs. We start with the observation that each router in the k-ary n-mesh shown in Figure
3.1(a) requires only three ports: one port connects to its neighboring node to the left, another to its
right neighbor, and one port (not shown) connects the router to the processor. Nodes that lie along
the edge of a mesh, for example nodes 0 and 7 in Figure 3.1(a), require one less port. The same
applies to k-ary n-cube (torus) networks. In general, the number of input and output ports, or radix,
of each router is given by Equation 3.2. The term "radix" is often used to describe both the number
of input and output ports on the router and the size, or number of nodes, in each dimension of the
network.
r = 2n + 1 (3.2)
The number of dimensions (n) in a mesh or torus network is limited by practical packaging
constraints, with typical values of n=2 or n=3. Since n is fixed, we vary the radix (k) to increase the
size of the network. For example, to scale the network in Figure 3.2a from 32 nodes to 64 nodes, we
increase the radix of the y dimension from 4 to 8, as shown in Figure 3.2b.
Figure 3.2: Irregular (a) and regular (b) mesh networks. (a) (8,4)-ary 2-mesh; (b) 8-ary 2-mesh.
Since a binary hypercube (Figure 3.4) has a fixed radix (k=2), we scale the number of dimensions
(n) to increase its size. The number of dimensions in a system of size N is simply n = log2(N)
from Equation 3.1.

r = n + 1 = log2(N) + 1    (3.3)
As a result, hypercube networks require a router with more ports (Equation 3.3) than a mesh or
torus. For example, a 512-node 3-D torus (n=3) requires seven router ports, but a hypercube requires
n = log2(512) + 1 = 10 ports. It is useful to note that an n-dimensional binary hypercube is isomorphic
to an (n/2)-dimensional torus with radix 4 (k=4). Router pin bandwidth is limited, thus building a
10-ported router for a hypercube instead of a 7-ported torus router may not be feasible without
making each port narrower.
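The port counts in this comparison follow directly from Equations 3.2 and 3.3. A small sketch (the helper names are ours, not from the text):

```python
from math import log2

def torus_radix(n):
    """Router ports for a k-ary n-cube or n-mesh (Equation 3.2):
    two ports per dimension plus one processor port."""
    return 2 * n + 1

def hypercube_radix(N):
    """Router ports for an N-node binary hypercube (Equation 3.3):
    one port per dimension plus one processor port."""
    return int(log2(N)) + 1

# 512-node 3-D torus: 2*3 + 1 = 7 ports.
# 512-node hypercube: log2(512) + 1 = 10 ports.
```

Note that the torus port count is independent of system size N, while the hypercube port count grows logarithmically with N, which is exactly the pin-bandwidth pressure described above.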
3.3.1 NODE IDENTIFIERS
The nodes in a k-ary n-cube are identified with an n-digit, radix k number. It is common to refer to
a node identifier as an endpoint’s “network address.” A packet makes a finite number of hops in each
of the n dimensions. A packet may traverse an intermediate router, ci, en route to its destination.
When it reaches the correct ordinate of the destination, that is ci = di, we have resolved the ith
dimension of the destination address.
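The n-digit radix-k address and the dimension-by-dimension resolution described above can be sketched as follows; the helper names and the least-significant-digit-first ordering are illustrative assumptions, not conventions fixed by the text:

```python
def to_address(node, k, n):
    """n-digit, radix-k network address of a node (least-significant digit first)."""
    digits = []
    for _ in range(n):
        digits.append(node % k)
        node //= k
    return digits

def dimension_order_route(src, dst, k, n):
    """Resolve one dimension at a time: step the current ordinate c_i toward
    the destination digit d_i, yielding the sequence of intermediate addresses
    (minimal routing in a mesh, no wraparound links)."""
    cur, dst_addr = to_address(src, k, n), to_address(dst, k, n)
    path = [list(cur)]
    for i in range(n):
        while cur[i] != dst_addr[i]:
            cur[i] += 1 if dst_addr[i] > cur[i] else -1
            path.append(list(cur))  # dimension i resolved when c_i == d_i
    return path

# In an 8-ary 2-mesh, node 27 has address [3, 3], so a packet from
# node 0 takes 3 hops in the first dimension and 3 in the second.
```

Once `cur[i] == dst_addr[i]`, the ith dimension of the destination address is resolved, matching the description above.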
3.3.2 k-ARY n-CUBE TRADEOFFS
The worst-case distance (measured in hops) that a packet must traverse between any source and any
destination is called the diameter of the network. The network diameter is an important metric as it
bounds the worst-case latency in the network. Since each hop entails an arbitration stage to choose
the appropriate output port, reducing the network diameter will, in general, reduce the variance in
observed packet latency. The network diameter is independent of traffic pattern, and is entirely a
function of the topology, as shown in Table 3.1.
Table 3.1: Network diameter and average latency.

Network               Diameter (hops)    Average (hops)
mesh                  k − 1              (k + 1)/3
torus                 k/2                k/4
hypercube             n                  n/2
flattened butterfly   n + 1              n + 1 − (n − 1)/k
(a) radix-9 mesh

from/to  0  1  2  3  4  5  6  7  8
   0     0  1  2  3  4  5  6  7  8
   1     1  0  1  2  3  4  5  6  7
   2     2  1  0  1  2  3  4  5  6
   3     3  2  1  0  1  2  3  4  5
   4     4  3  2  1  0  1  2  3  4
   5     5  4  3  2  1  0  1  2  3
   6     6  5  4  3  2  1  0  1  2
   7     7  6  5  4  3  2  1  0  1
   8     8  7  6  5  4  3  2  1  0

(b) radix-9 torus

from/to  0  1  2  3  4  5  6  7  8
   0     0  1  2  3  4  4  3  2  1
   1     1  0  1  2  3  4  4  3  2
   2     2  1  0  1  2  3  4  4  3
   3     3  2  1  0  1  2  3  4  4
   4     4  3  2  1  0  1  2  3  4
   5     4  4  3  2  1  0  1  2  3
   6     3  4  4  3  2  1  0  1  2
   7     2  3  4  4  3  2  1  0  1
   8     1  2  3  4  4  3  2  1  0

Figure 3.3: Hops between every source, destination pair in a mesh (a) and torus (b).
In a mesh (Figure 3.3), the destination node is, at most, k−1 hops away. To compute the
average, we compute the distance from all sources to all destinations; thus a packet from node 1 to
node 2 is one hop, node 1 to node 3 is two hops, and so on. Summing the number of hops from
each source to each destination and dividing by the total number of packets sent, k(k−1), yields
the average hop count. A packet traversing a torus network will use the wraparound links to reduce
the average hop count and network diameter. The worst-case distance in a torus with radix k is k/2,
but the average distance is only half of that, k/4. In practice, when the radix k of a torus is even,
there are two equidistant minimal paths between opposite nodes (i.e., whether or not the wraparound
link is used); a routing convention is then used to break ties so that half the traffic goes in each
direction across the two paths.
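The diameter and average-hop entries of Table 3.1 and the matrices of Figure 3.3 can be verified by brute force. A sketch (our own helper names; note that the exact all-pairs torus average for odd k comes out slightly above the k/4 approximation in Table 3.1):

```python
def mesh_hops(a, b, k):
    """Hops between nodes a and b in a radix-k 1-D mesh."""
    return abs(a - b)

def torus_hops(a, b, k):
    """Hops in a radix-k 1-D torus: the wraparound link may shorten the path."""
    d = abs(a - b)
    return min(d, k - d)

def diameter_and_average(k, hops):
    """Diameter and average distance over all k(k-1) source/destination pairs."""
    dists = [hops(a, b, k) for a in range(k) for b in range(k) if a != b]
    return max(dists), sum(dists) / len(dists)

# radix-9 mesh:  diameter 8 = k-1, average 10/3 = (k+1)/3 exactly.
# radix-9 torus: diameter 4 = floor(k/2), average 2.5 (vs. the k/4 = 2.25 approximation).
```

Summing either matrix in Figure 3.3 row by row and dividing by k(k−1) = 72 gives the same averages.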
A binary hypercube (Figure 3.4) has a fixed radix (k=2) and varies the number of dimensions
(n) to scale the network size. Each node in the network can be viewed as a binary number, as shown
in Figure 3.4. Nodes that differ in only one digit are connected together. More specifically, if two
nodes differ in the ith digit, then they are connected in the ith dimension. Minimal routing in a
hypercube will require, at most, n hops if the source and destination differ in every dimension, for
example, traversing from 000 to 111 in Figure 3.4. On average, however, a packet will take n/2 hops.
Figure 3.4: A binary hypercube with three dimensions.
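Since neighboring hypercube nodes differ in exactly one address bit, the minimal hop count between two nodes is simply the Hamming distance of their binary addresses. A sketch:

```python
def hypercube_hops(src, dst):
    """Minimal hops in a binary hypercube: the number of address bits
    in which src and dst differ (one hop resolves one dimension)."""
    return bin(src ^ dst).count("1")

# Traversing from 000 to 111 in Figure 3.4 differs in all three
# dimensions, so minimal routing takes 3 hops.
```

Averaging over all source/destination pairs, each bit differs half the time, which recovers the n/2 average of Table 3.1.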
SUMMARY
This chapter provided an overview of direct and indirect networks, focusing on topologies built from
low-radix routers with a relatively small number of wide ports. We described the key performance
metrics of diameter and average hop count and discussed their tradeoffs. Technology trends motivated
the use of low-radix topologies in the 1980s and early 1990s.
In practice, there are other issues that emerge as the system architecture is considered as
a whole, such as QoS requirements, flow control requirements, and tolerance for latency variance.
However, these are secondary to the guiding technology (signaling speed) and to packaging and cooling
constraints. In the next chapter, we describe how evolving technology motivates the use of high-radix
routers and how different high-radix topologies can efficiently exploit these many-ported switches.
C H A P T E R 4
High-Radix Topologies
Dally [18] and Agarwal [5] showed that under fixed packaging constraints, lower radix networks
offered lower packet latency. As a result, many studies have focused on low-radix topologies such as
the k-ary n-cube topology discussed in Chapter 3.The fundamental result of these authors still holds
– technology and packaging constraints should drive topology design. However, what has changed
in recent years are the topologies that these constraints lead us toward. In this section, we describe
the high-radix topologies that can better exploit today’s technology.
Figure 4.1: Each router node has the same amount of pin bandwidth but differs in the number of
ports. (a) radix-16 one-dimensional torus with each unidirectional link L lanes wide; (b) radix-4
two-dimensional torus with each unidirectional link L/2 lanes wide.
4.1 TOWARDS HIGH-RADIX TOPOLOGIES
Technology trends and packaging constraints can and do have a major impact on the chosen topology.
For example, consider the diagram of two 16-node networks in Figure 4.1. The radix-16 one-
dimensional torus in Figure 4.1a has two ports on each router node; each port consists of an input
and an output, each L lanes wide. The amount of pin bandwidth off each router node is 4 × L. If
we partition the router bandwidth slightly differently, we can make better use of the bandwidth,
as shown in Figure 4.1b. We transform the one-dimensional torus of Figure 4.1a into the radix-4
two-dimensional torus of Figure 4.1b, where we have twice as many ports on each router, but each
port is only half as wide, so the pin bandwidth on the router is held constant. There are several
direct benefits of the high-radix topology in Figure 4.1b compared to the low-radix topology in Figure
4.1a:
(a) by increasing the number of ports on each router, but making each port narrower, we doubled
the amount of bisection bandwidth, and
(b) we decreased the average number of hops by half.
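Both benefits can be checked by brute force on the two 16-node networks of Figure 4.1, assuming uniform traffic and minimal routing (the helper names are ours):

```python
def ring_hops(a, b, k):
    """Minimal hops between two positions on a radix-k ring (torus dimension)."""
    d = abs(a - b)
    return min(d, k - d)

def avg_hops_1d(k):
    """Average hops in a radix-k one-dimensional torus, excluding self-traffic."""
    total = sum(ring_hops(a, b, k) for a in range(k) for b in range(k) if a != b)
    return total / (k * (k - 1))

def avg_hops_2d(k):
    """Average hops in a k-ary 2-cube (torus); per-dimension distances add."""
    nodes = [(x, y) for x in range(k) for y in range(k)]
    total = sum(ring_hops(ax, bx, k) + ring_hops(ay, by, k)
                for (ax, ay) in nodes for (bx, by) in nodes if (ax, ay) != (bx, by))
    n = len(nodes)
    return total / (n * (n - 1))

# Bisection: the radix-16 ring cut crosses 2 links of width L (2L total),
# while the 4-ary 2-D torus cut crosses 4 rows x 2 links of width L/2 (4L total).
```

Here `avg_hops_1d(16)` = 64/15 ≈ 4.27 hops, exactly twice `avg_hops_2d(4)` = 32/15 ≈ 2.13 hops, confirming benefit (b); the comment arithmetic confirms benefit (a).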
The topology in Figure 4.1b requires longer cables, which can adversely impact the signaling rate:
the maximum bandwidth of an electrical cable drops with increasing cable length because signal
attenuation due to skin effect and dielectric absorption increases linearly with distance.
4.2 TECHNOLOGY DRIVERS
The trend toward high-radix networks is being driven by several technologies:
• high-speed signaling, allowing each channel to be narrower while still providing the same
bandwidth,
• affordable optical signaling through CMOS photonics and active optical cables that decouple
data rate from cable reach, and
• new router microarchitectures that scale to high port counts and exploit the abundant wire
and transistor density of modern CMOS devices.
The first two items are described further in this section while the router microarchitecture details
will be discussed in Chapter 6.
4.2.1 PIN BANDWIDTH
As described earlier in Chapter 2, the amount of total pin bandwidth has increased at a rate of 100×
over each decade for the past 20-25 years. To understand how this increased pin bandwidth affects
the optimal network radix, consider the latency (T ) of a packet traveling through a network. Under
low loads, this latency is the sum of header latency and serialization latency. The header latency
(Th) is the time for the beginning of a packet to traverse the network and is equal to the number
of hops (H) a packet takes times a per hop router delay (tr). Since packets are generally wider than
the network channels, the body of the packet must be squeezed across the channel, incurring an
additional serialization delay (Ts). Thus, total delay can be written as
T = Th + Ts = Htr + L/b (4.1)
where L is the length of a packet, and b is the bandwidth of the channels. For an N-node network
with radix-k routers (k input channels and k output channels per router), the number of hops¹ must
be at least 2 log_k N. Also, if the total bandwidth of a router is B, that bandwidth is divided among
the 2k input and output channels, so b = B/2k. Substituting this into the expression for latency
from Equation (4.1),

T = 2 tr log_k N + 2kL/B    (4.2)
Then, setting dT/dk equal to zero and isolating k gives the optimal radix in terms of the network
parameters,
k log² k = (B tr log N) / L    (4.3)
In this differentiation, we assume B and tr are independent of the radix k. Since we are evaluating
the optimal radix for a given bandwidth, we can assume B is independent of k. The tr parameter is
a function of k but has only a small impact on the total latency and has no impact on the optimal
radix. Router delay tr can be expressed as the number of pipeline stages (P) times the cycle time
(tcy). As radix increases, the router microarchitecture can be designed where tcy remains constant
and P increases logarithmically. The number of pipeline stages P can be further broken down into
a component that is independent of the radix X and a component which is dependent on the radix
Y log2 k. 2 Thus, router delay (tr) can be rewritten as
tr = tcyP = tcy(X + Y log2 k) (4.4)
If this relationship is substituted back into Equation (4.2) and differentiated, the dependency on
radix k coming from the router delay disappears and does not change the optimal radix. Intuitively,
although a single router delay increases with a log(k) dependence, the effect is offset in the network
by the fact that the hop count decreases as 1/ log(k) and as a result, the router delay does not
significantly affect the optimal radix.
In Equation (4.2), we also ignore the time of flight for packets to traverse the wires that make
up the network channels. The time of flight does not depend on the radix (k) and thus has minimal
impact on the optimal radix. Time of flight is D/v, where D is the total physical distance traveled
by a packet, and v is the propagation velocity. As radix increases, the distance between two router
nodes increases. However, the total distance traveled by a packet will be approximately equal, since
the lower-radix network requires more hops.³
From Equation (4.3), we refer to the quantity A = (B tr log N) / L as the aspect ratio of the router [42].
This aspect ratio impacts the router radix that minimizes network latency. A high aspect ratio implies
that a "tall, skinny" router (many narrow channels) minimizes latency, while a low ratio implies a
"short, fat" router (few wide channels).
¹Uniform traffic is assumed, and 2 log_k N hops are required for a non-blocking network.
²For example, the routing pipeline stage is often independent of the radix, while switch allocation is dependent on the radix.
³The time of flight is also dependent on the packaging of the system, but we ignore packaging in this analysis.
Figure 4.2: Relationship between the optimal radix for minimum latency and router aspect ratio. The
labeled points show the approximate aspect ratio for a given year's technology with a packet size of
L=128 bits.
Figure 4.3: Latency (a) and cost (b) of the network as the radix is increased for two different technologies.
A plot of the minimum-latency radix versus aspect ratio is shown in Figure 4.2, annotated with
aspect ratios from several years. These particular numbers are representative of large supercomputers
with single-word network accesses,⁴ but the general trend of the radix increasing significantly over
time remains. Figure 4.3(a) shows how latency varies with radix for 2003 and 2010 aspect ratios. As
radix is increased, latency first decreases as hop count, and hence Th, is reduced. However, beyond a
certain radix, serialization latency begins to dominate and overall latency increases. As
bandwidth, and hence aspect ratio, is increased, the radix that gives minimum latency also increases.
For 2003 technology (aspect ratio = 652), the optimal radix is 45, while for 2010 technology (aspect
ratio = 3013) the optimal radix is 128.
⁴The 1996 data is from the Cray T3E [55] (B=48 Gb/s, tr=40 ns, N=2048); the 2003 data is combined from the Alpha 21364 [49] and Velio VC2002 [20] (1 Tb/s, 10 ns, 4096); and the 2010 data was estimated as (20 Tb/s, 2 ns, 8192).
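The optimal radix quoted for 2003 can be reproduced by minimizing Equation (4.2) directly over integer k, using the parameter set from the footnote. This numerical check is ours, not from the text; we use natural logarithms throughout and, per the analysis above, hold B and tr independent of k:

```python
from math import log

def latency(k, B, tr, N, L):
    """Equation (4.2): T = 2 * tr * log_k(N) + 2kL/B, with b = B/2k."""
    return 2 * tr * log(N) / log(k) + 2 * k * L / B

def optimal_radix(B, tr, N, L, kmax=512):
    """Integer radix k >= 2 that minimizes the low-load latency T(k)."""
    return min(range(2, kmax + 1), key=lambda k: latency(k, B, tr, N, L))

# 2003 technology from the footnote: B = 1 Tb/s, tr = 10 ns, N = 4096, L = 128 bits.
k2003 = optimal_radix(B=1e12, tr=10e-9, N=4096, L=128)
```

With these parameters the minimizer lands at k = 45, matching the text. The 2010 estimate (20 Tb/s, 2 ns, 8192) gives an optimum of the same order as the quoted 128, with the exact integer depending on rounding and the log base assumed in the aspect ratio.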
vicinity, now and then studying the view spread out before him. The
air was fragrant with the odor of the forest, and Hippy grew sleepy.
To keep awake he decided to get down and walk. This he did,
reaching the ground on the side of the rock farthest from the camp.
The Overlander, with only a revolver, strolled through the forest
making a circle around the camp, and studying the trees for blazes
and the ground for indications of recent visitors. Now and then he
would sit down, back against a tree, and gaze up into the blue sky
and the waving tops of the big pines.
The afternoon wore away and Hippy was still trail-hunting. It was
near supper time when Nora called him. There was no answer, so
she climbed the rock, expecting to find her husband sleeping, for
Hippy loved sleep fully as much as Stacy Brown did.
Lieutenant Wingate was not on the rock, but Nora found his rifle
lying there. She ran back to her companions in alarm.
“Hippy isn’t there!” she cried. “Oh, girls, can anything have
happened to him?” Nora was on the verge of tears.
“No, of course not,” comforted Grace.
“Then where is he?”
“Probably asleep somewhere about,” suggested Emma. “You know
he and Stacy have the sleep habit.”
“I don’t believe it. I am going out to search for him.”
“Nora, you will not!” differed Grace with emphasis. “We will all
remain where we are. To get separated would be foolish. Hippy is all
right, so sit down and chat with us. Mr. White will be along soon,
and some others besides Emma Dean will be glad to see him,” she
added, with a teasing glance at Emma.
The Overland girls ate a cold supper that night, no one feeling like
cooking or sitting down to a hearty meal. Nora was so worried that
she refused to eat at all, and, while the other girls were equally
disturbed, they masked their real feelings by teasing each other.
Emma and Stacy were ragged unmercifully.
Darkness settled over the forest, but still no Hippy, no guide.
“I think it will be advisable to bring in the horses, don’t you,
Elfreda?” asked Grace.
Miss Briggs and the others thought that would be a wise move, so
the ponies, and such of their equipment as was outside the camp,
were brought in; fuel was gathered and piled up so that they might
keep the fire burning; then the party sat down in their tents, with
blankets thrown over their shoulders, and began their watch.
It was ten o’clock that night when the hail of Ham White was
heard, and after the tension of the last few hours the Overland girls
felt like screaming a welcome. Instead they sprang out and stood
awaiting him.
“Well, did you good people think I had deserted you?” he cried
out. “I am nearly famished. Is there anything left from dinner?”
“Yes, of course there is. I will get you something. First I must tell
you. Mr. Wingate has been missing since some time this afternoon.
We don’t know what to make of it unless he has fallen asleep
somewhere,” said Grace.
“What! Tell me about it.”
Nora told the guide the story, explaining that Hippy had taken up
his station on the rock to guard the camp, and that that was the last
they saw of him.
Ham White was disturbed, but he did not show it. Instead he
laughed.
“No doubt, as Mrs. Gray has suggested, he has gone to sleep.
Where is Mr. Brown?”
“He is asleep in his tent, as usual,” spoke up Emma. “Oh,
Hamilton, won’t you please find Hippy—now?”
“I will do my best. Give me a snack and I’ll go out now. I followed
the other trail for something like five miles. There were four men in
the party, only one of whom came near the camp. The trail finally
bumped into the side of a mountain and I lost it. It was so dark I
could not follow it farther. Thank you!” he added, as Emma handed
him some bacon. “I will go right out.”
They followed him around the rock and watched with keen
interest as Ham White searched for and found the trail of the
missing Hippy, which he followed, with the aid of his pocket lamp,
for some distance.
“He was strolling,” announced the guide. “You can see here where
he sat down to rest, then went on. Please return to camp. Unless he
wandered off and lost his way, I shall probably soon find him.”
The girls promptly turned back towards camp, Nora with
reluctance, which she made no effort to conceal. Then followed two
hours of anxiety. The guide returned shortly after midnight.
“There is no use of searching farther to-night,” he announced.
“Mr. Wingate undoubtedly has strayed away, but I’ll find him in the
morning. Please turn in and get some rest, for we shall undoubtedly
have an active day to-morrow. In any event, don’t lose your nerve,
Mrs. Wingate. The Lieutenant has had enough experience to know
how to take care of himself.”
Nora went to her tent weeping, Emma Dean’s arm around her, but
Grace held back at a gesture from Elfreda, who had observed that
the guide studiously avoided looking directly at Nora Wingate.
“Mr. White, have you anything to say to us?” questioned Elfreda.
“Meaning what?”
“We wish to know what you really did discover. It was well not to
say any more than you did to Mrs. Wingate.”
“You made a discovery of some sort—of that we are convinced,”
spoke up Grace.
“Yes, I did,” admitted White. “I found the lieutenant’s revolver
beside a tree where he had been sitting. His trail ended there!”
“Meaning?” persisted Miss Briggs.
“That he was attacked and carried away, in all probability. I found
evidences of that.”
“What can be done?” demanded Elfreda.
“Nothing until morning. I have means of obtaining assistance,
which I will employ as soon as it is light enough to see.”
The girls turned away and walked slowly to their tent, and the
guide stepped over to the tent occupied by Hippy and Stacy Brown.
He was out in a moment and striding towards Elfreda’s quarters.
“Miss Briggs! Mrs. Gray!” he called.
“Yes!” answered the voices of Elfreda and Grace.
“Stacy Brown is not in his tent. There has been a struggle, and
the boy has been forcibly removed,” was the startling
announcement.
CHAPTER XVII
A TEST OF COURAGE
“Sta—Stacy gone?” exclaimed Elfreda Briggs. “It can’t be possible.
He is playing one of his practical jokes on us.”
“Let us look, but don’t disturb Emma and Nora if it can be
avoided,” urged Grace.
The two girls, with the guide, repaired to Lieutenant Wingate’s
tent, and examined it, using their pocket lamps. It was as Hamilton
White had said—there was every evidence that a struggle had taken
place there. The fat boy’s hat and his revolver lay where they had
been hurled to one side of the tent. His blouse was a yard or so to
the rear, and the imprint of his heels where they had been dragged
over the ground was plainly visible.
“He must have been asleep,” nodded White.
“Yes,” agreed Grace. “If awake Stacy would have set up such a
howl that none could have failed to hear. When do you think this
was done, Mr. White?”
“When we were out looking for the lieutenant. If you will
remember, Mr. Brown remained behind.”
“Do you think it wise to follow his trail?” asked Grace.
“No. Not now. I dare not leave the camp. All this may be part of a
plan. My duty is here, at least until daylight, when I will get into
communication with those who will find both men.”
“You think so, Mr. White?” questioned Elfreda anxiously.
“Yes. It is the work of the same gang, but what their motive is we
can only surmise. You and Mrs. Gray may know.”
Elfreda felt her face growing hot, and a retort was on her lips, but
she suppressed it.
“Mrs. Gray, if you think I should try to run the trail now, I will do
so, but it would be against my judgment. I hope you do not insist,”
said White, turning to Grace.
“I believe you are right,” answered Grace. “Come, Elfreda, we will
go to our tent, for no serious harm can come either to Hippy or
Stacy. They dare not harm them.”
Ham White did not reply. He knew the character of the men who
committed that piece of banditry, and knew that they would hesitate
at no crime to gain their ends, whatever those ends might be.
The guide got no sleep that night. Mindful of the attacks that had
been made on the camp, he took up his position at a distance, and,
with rifle in hand, sat motionless the rest of the night. From his
position in the deep shadows he commanded a view of the entire
camp, which was dimly lighted by the campfire all night long.
There were occasional sounds that Ham White did not believe
were made by marauding animals, but none were definite enough to
warrant exposing his position. During his vigil nothing occurred to
disturb the sleepers.
The graying mists of the early morning were rising from gulch and
forest, enfolding the mountaintops, when Ham White stole around
the camp, scrutinizing every foot of the ground. By the time he had
completed this task the mists were so far cleared away that a good
view of the surrounding country might be had.
From his kit the guide selected a wigwag signalling flag, and
taking one of the tent poles for use as a flagstaff, he went cautiously
to the high rock that stood sentinel over the Overland camp, and
climbed to its top.
“I hope none of the girls wake up,” he muttered, peering down
into the camp, which was as quiet as a deserted forest.
Ham White, after attaching the flag to the pole, began waving it
up and down, which in the wigwag code means, “I wish to speak
with you.”
It was at this juncture that Grace Harlowe slowly opened her
eyes. Where she lay she could look straight up to the top of the rock
without making the slightest movement, and her amazement must
have been reflected in her eyes.
Like several of the Overland girls, Grace’s experience in the war
had included learning to signal and to read signals. She was out of
practice, but was easily able to read any message not sent too fast.
Ham began his message, after getting the attention of the persons
to whom he was signalling, at a speed that Grace could not follow.
She did, however, catch a few words that were enlightening.
“Trouble—Haley—Trail—Send word—Caution—Great secrecy or
expose hands—Fatal to—” were some of the words that she caught
as the guide flashed them off. Then he paused.
“How I wish I could see the answer,” muttered the Overland girl,
as she watched Hamilton White, with glasses at his eyes, receiving
the message that was being sent to him.
Grace Harlowe’s, however, were not the only pair of eyes that
witnessed that exhibition of signalling. Other eyes were observing,
but that other pair could not read a word of what the signallers were
saying.
White dropped his glasses and snatched up his flag, and she read,
this time with greater ease:
“It may be fatal. Great danger to both. My responsibility. Must
have instant action. This an order. Obey without loss time. Report
soon as anything to say.” The guide signed his name, and the words
that followed the signature filled Grace Harlowe with amazement.
She saw the guide remove the flag from its staff and hide it under a
stone, after which he descended to the camp, passing the open
tents without so much as a glance at them.
Ham stirred up the fire and put over the breakfast, and, while it
was cooking, Grace came out, greeting him cheerfully.
“Is there any news, Mr. White?” she asked sweetly.
“No, not yet.”
“What have you done?”
“I signalled to a fire-lookout station that assistance was needed. It
is best to wait until we hear from them.”
“How, signal?” she questioned, appearing not to understand.
“By the air route, Mrs. Gray,” was the smiling reply.
Grace Harlowe shrugged her shoulders.
“You are a very clever man, Mr. White,” she said, and walked to
her tent to awaken Miss Briggs.
When informed that Stacy Brown was missing, a few moments
later, Nora Wingate became hysterical, but Grace and Elfreda calmed
her, and the party were ready to sit down to breakfast when the
guide announced it as ready.
It was a trying, anxious morning for the little band of Overlanders.
White made frequent trips to the rock, observed questioningly by
Elfreda.
“What is he looking for, Grace?” she asked. “Does the man expect
to find the bandits that way?”
“I don’t know. Why not ask him, J. Elfreda?”
“Not I. You know I would not.”
About mid-forenoon Grace suggested to the guide that he go out
into the forest and see if he could glean any information as to the
direction that the kidnappers had taken when they left the camp,
with either Hippy or Stacy Brown.
White pondered the subject a moment, then agreed.
“If you will promise not to leave camp, and to fire a shot at the
least suspicious sound or occurrence, I will go out,” he said. “One of
you had better go to the rock and take station there until my return.”
Grace said she would do that. Matters were working out to her
satisfaction, and, after telling Elfreda to take her rifle and post
herself a short distance to the rear of the camp, and assigning
Emma and Nora to the right and left ends of their camping place,
Grace climbed the rock and sat down. After Ham White, following a
survey of the camp and her arrangements, of which he approved
with a nod and a wave of the hand, had left the camp, Grace got up
and looked for the signal flag, which she found under a flat stone.
“Now! Having disposed of my companions I shall see what I shall
and can see,” she told herself.
Securing the signal flag, the Overland girl took a survey of the
landscape. A vast sea of dense forest lay all about her, broken here
and there by a white-capped mountain. Nothing that looked as if it
might be a fire-lookout station attracted her eyes. She had used her
field glasses, but without result.
A moment of vigorous signalling on her part followed, after which
Grace swept the landscape again. She discovered nothing at all.
Another trial was made, and the word “answer” was spelled out by
her.
Her eye caught a faint something far to the north of her, and
Grace’s glasses were at her eyes in a twinkling. A little white flag
was fluttering up and down against the background of forest green
in the far distance.
“I’ve got him!” cried the girl exultingly. “I’ve got him!” Then,
wigwagging, Grace Harlowe signalled the one word, “Report!”
“Who?” came the answer, almost before she could get the glasses
to her eyes to read the message.
“For White,” she wigwagged. “Report!”
Holding the flag, now lowered to the rock, with one hand, the
other holding the glasses to her eyes, Grace bent every faculty to
watching that little fluttering, bobbing square of white, that, at her
distance from it, looked little larger than a postage stamp.
“Repeat!” she interrupted frequently, whenever part of a word
was missed. It was a laborious effort for her, out of practice as she
was, and the exchange of messages lasted for a full half hour before
the Overland girl gave her unseen, unknown signaller the “O. K.”
signal.
Grace folded the flag and placed it under the stone, then
straightened up.
“Mr. Hamilton White, I have you now!” she exclaimed, a
triumphant note in her voice.
CHAPTER XVIII
THE FLAMING ARROW
“Where am I at?”
It was Hippy Wingate’s first conscious moment since he was
struck down while sleeping with his back against a tree not far from
the Overland camp. All was darkness about him as he awakened in
unfamiliar surroundings. Essaying to rise, the Overlander discovered
that he was bound. Still worse, there was a gag in his mouth.
A gentle breeze was blowing over him, and at first he thought he
was still under the trees. Hippy then realized that there was a hard
floor beneath him. His head ached, and when he tried to sit up he
found that it swam dizzily.
“I wonder what happened to me?” he muttered. “Hello!”
There was no response to his call; in fact, his voice, still weak, did
not carry far and it was thick because of the gag. Then began a
struggle with himself, that, while it exhausted him for the time
being, aided in overcoming his dizziness.
Hippy heard men conversing, heard them approaching,
whereupon he pretended still to be unconscious. A door was flung
wide open, and a lantern, held high, lighted up the interior of the
building with a faint radiance.
“Hain’t woke up,” announced one of the two men who stood in
the doorway.
“Mebby he never will,” answered the other.
“I don’t reckon it makes much difference, so long as we got two
of ’em,” returned the first speaker. “What shall we do—let ’im sleep?”
“Yes.”
The man with the lantern strode over and peered down at the
prostrate Overlander, while the prisoner, from beneath what seemed
to be closed eyelids, got a good look into the swarthy, hard-lined
face. Lieutenant Wingate would remember that face—he would
remember the voices of both men—would know them wherever he
heard them.
“Let ’im sleep. When he wakes up we’ll have something to say to
’im.” With that the two men went out, slamming the door behind
them.
The lantern light had shown Hippy that he was in a log cabin. At
his back was a window, or a window-opening, for which he was
thankful, as it offered a possible way of escape. But how, in his
present condition, could he hope to gain his liberty?
There was no answer to the Overlander’s mental question. First,
he must regain his strength. The leather thongs with which he was
bound interfered with his circulation, and his legs were numb. So
were his arms, and his jaws ached from the gag that was between
his teeth. In fact, Lieutenant Hippy Wingate did not remember ever
to have suffered so many aches and pains at one time as he had at
that moment.
He began his struggles again, but more with the idea of starting
his circulation and gaining strength than with any immediate hope of
escape. By rolling over several times he was able to reach the door,
but having reached it he had no hands with which to open it. Hippy
wanted to look out. Failing there, he bethought himself of the
window, and rolled back across the floor to it. Exerting a great
effort, he managed to work his head up to the window so he could
see out.
The night was dark, but the Overlander was able to make out
trees and rugged rocky walls, together with what appeared to be a
dense mass of bushes. The scene was unlike anything he had seen
in the State of Washington since his party had started on their
outing.
“I may be up in the Canadian Rockies, for all I know,” he
muttered.
Hippy sank down, weak and trembling.
For a change, he rolled back and forth, pulling himself up to the
window again and again, and each time found himself stronger than
before.
“If I were free and had a gun I’d show those cowards something!”
raged the Overlander, his anger rising. “Why did they have to pick on
me? I wonder what the folks at the camp are think—”
“Sh-h-h-h!”
It was a low, sibilant hiss from the window, and Hippy fell
suddenly silent.
“Keep quiet and listen to me,” warned a hoarse voice. “The gang
is out of range, but we don’t know when one or more of ’em will be
back. I’m coming in.”
Not being able to answer, except with a grunt, the Overlander
merely grunted his understanding.
The stranger leaped into the room and felt for the prisoner.
“I am going to cut you loose. Are you wounded?”
“No, I think not,” mumbled Hippy, but his words were
unintelligible.
The first thing the stranger did was to remove the gag, which he
did with so much care that the operation gave no pain. Then came
the leather thongs. These he ripped off with a few deft sweeps of a
knife, and Lieutenant Wingate was a free man so far as his bonds
were concerned.
“Can you walk?” in the same hoarse voice.
“I could fly if I had to,” was the brief reply. “Who are you?”
“You wouldn’t know if I told you. Here!” The man thrust a revolver
into his hand. “Don’t use it unless you have to. We aren’t out of the
woods by a long shot. Come!”
The stranger assisted Hippy through the window, which was
accomplished with some difficulty, for Lieutenant Wingate was stiff
and sore. A firm hand was fixed on his arm, and his companion
began leading him rapidly away. Not a word was spoken for several
minutes—not until they had plunged into the dark depths of a
canyon, through which the man picked the way unerringly.
“How are you standing it?” was the question abruptly put to
Lieutenant Wingate.
“Rotten! But I’ll pick up speed as I go along and get my motors
warmed up.”
The stranger chuckled.
“Where are we going?”
“We are headed for your camp, but it’s quite a hike and a hard
one. If you get leg-weary, stop and rest a bit. How’d they get you?”
“I went to sleep just outside the camp, and I think I must have
got a clump on the head. Ouch!” Hippy had lifted a hand to his
head, and felt there a bump as big as an egg. “I guess I did get a
clump. It’s a wonder I’m not dead. When is it, to-day or to-morrow?”
“It’s the day after,” was the half humorous reply.
“Please tell me how you found me?” asked the Overlander.
“Ham White got in touch with some people I know. They got word
to me, and gave me the tip. The same people saw the gang that got
you heading for the pass where you were taken, so I made for that
place as soon as I got the word from White. I was lucky; I might
have had to hunt the whole state over for you. The gang made a
bad play when they picked you up. We’ve got a line on them now.”
“Who is we?” interjected Hippy.
“All of us,” was the noncommittal reply. “Don’t speak so loudly. It
isn’t safe yet.”
That walk Hippy Wingate never forgot. Every step sent shooting
pains through his head and legs. He stumbled frequently, but every
time the grip of the stranger tightened on his arm, and he was kept
on his feet.
“When you get to camp, tell your people to watch out. Some of
the gang are still out on trail. I reckon they aren’t out for any good,
and they may be planning to rush your camp and get the rest of
your party.”
“Why do they want us?” wondered Lieutenant Wingate. “Is it
robbery?”
“Yes, but not the sort of robbery you think. Tell your friend Miss
Briggs that it’s time she told her party her story. She knows why.”
“I begin to see a light,” muttered the Overlander. “Say! There’s
something familiar about your voice, but I can’t place it. Got a cold?”
“Yes.”
Little conversation was indulged in after that, and at last Hippy’s
rescuer halted and pointed.
“See that light?” he asked in a whisper.
“Yes.”
“That’s your camp. I leave you here. Take my advice, and don’t
make much noise to-night. Keep your fire low, and post guards. Tell
White there is a man out here wants to see him. You need not let
the others know about my being here. I’m in a hurry. Good-night.”
“But—won’t you come—”
“Go on!”
Hippy wavered a little as he started towards the camp, into which
he staggered a few minutes later.
A cry greeted his appearance, and Nora’s arms were flung about
his neck ere he had fairly reached the light of the campfire. He held
up his hand for silence.
“Give me something to eat, if you love me. I’m famished.”
Nora ran for the coffee pot, which Ham White took from her.
Hippy stepped over to him and whispered something to the guide,
as he relieved White of the coffee pot.
White immediately left the camp.
By now the other members of the party were about Hippy, showing
their joy at his return.
“Have you seen Stacy?” demanded Grace eagerly, as soon as she
could get his attention.
“No. Why?”
“He, too, has been missing, and—”
“The curs!” raged Lieutenant Wingate. “So they got him, too, did
they?”
“Never mind now. You must drink and eat. Where is Mr. White?”
wondered Grace, glancing quickly about the camp.
“I sent him out on an errand,” answered Hippy. “Ah! The coffee is
not so hot that it burns, but it’s nectar.”
“Oh, my darlin’! Your head!” cried Nora, just discovering the
swelling there.
Elfreda was at his side in an instant, examining the lump that, to
Hippy, seemed fully as big as his head itself. Miss Briggs ran to her
tent for liniment, and in a moment was applying it to the sore spot.
Hippy’s story was brief, because there was little that he could tell
them. He was amazed when he learned that he had been away so
long.
Grace explained to him how White had reached some lookouts on
the range and got them to go in search of him. “How they found you
so soon, I don’t understand. Do you?”
Hippy shook his head.
“There are some things in this neck of the woods that are beyond
explaining. I hope they didn’t give Stacy such a wallop as I got. But
don’t worry about him. They can’t keep him long. Stacy will eat
them out of his way. I was easy. He isn’t.”
Ham White returned at this juncture.
“We shall probably have another guest to-night, if all goes well,”
he announced.
“A guest?” wondered the Overlanders.
“So I am informed; perhaps more than one. Do not ask any
questions, for I can’t answer them. Well, Lieutenant, you had a
rough time of it, didn’t you?”
“The Germans could not have done anything much worse.”
“Would you recognize any of the fellows who captured you?”
questioned White.
“I saw only two, but I shall know them when I see them, and they
will have reason to know me, for—”
“Hamilton, who are the guests you are expecting?” urged Emma
in her sweetest tone of voice.
“Sorry, Miss Dean, but I can’t tell you.”
“Isn’t that just like a man—making a mystery of everything? I
think—”
“Hello, folks!” cried a voice from the bush.
The Overlanders fairly jumped at the sound of the familiar voice.
“Tom! Tom Gray!” cried Grace, running and throwing herself into
her husband’s arms. “How happy I am to see you, you will never
know. I needed you, Tom—we all have needed you, and I think we
shall need you still more. Where did you come from?”
“Hello, old chap!” cried Hippy jovially.
The Overlanders crowded around Captain Tom Gray joyously.
“How are you, White!” greeted Grace’s husband, as soon as he
could free himself from the welcome of Grace, Nora and Emma. “I
have been looking forward to meeting you, and I knew, from what I
had heard, just the sort of man you would be—I mean as to looks,”
added Tom, grinning. “The men on the range are looking forward to
seeing their—”
A warning look from the guide checked Tom.
“I will explain later,” whispered the guide.
“I thank you for sending for me,” bowed Tom, with ready
resourcefulness. “I knew that the need must be urgent or you would
not have done so.”
“Yes. I have a double responsibility—a moral and a physical one,
and I felt that I had no right to go farther until I had consulted with
Mrs. Gray’s husband. We are heading for trouble, in fact we have
already been having it.”
“Tell me about it. I know some of the facts, but I want them at
first hand.”
“Miss Briggs knows the story. I suggest that she relate the story
of her experiences, which will give you the slant I want you to get. I
suppose you know of the kidnapping of Lieutenant Wingate and
Stacy Brown?” asked the guide.
“The bare facts only. J. Elfreda, you seem to be the pivotal point
on this journey. Grace is holding my hand so tightly that I shall have
to ask her to give me a chance to listen to you,” answered Tom
laughingly.
Emma offered to demonstrate to give Tom a “chance” to hear the
story. Grace laughed happily. A great load of responsibility and worry
had been lifted from her shoulders.
“I will be good, J. Elfreda. Please tell Tom everything—everything,
remember. Mr. White, we wish you to sit in,” added Grace, as the
guide discreetly moved away.
There followed a moment of silence, then Elfreda Briggs began
the story of the fire, of her arrival at the forest cabin, and of the
dramatic occurrences there. She told of the diary, of the loss of the
gold dust, and of the general directions that Sam Petersen had left
for locating the claim, though Elfreda did not say what those
directions were. She thought it advisable not to do so.
Hippy got up and walked to his tent, returning shortly and
standing with his back to a tree and his hands in his pockets as Miss
Briggs finished her story.
Grace took up the story from that point, relating all that had
occurred since Elfreda’s experience in the forest shack, but avoiding
what she had learned through her wigwagging about Hamilton
White.
Tom Gray pondered over the story, stroking his cheek, which Tom
always did when thinking deeply.
“The Murrays, eh, White?” he questioned, glancing up at the
guide.
Ham White nodded.
“It looks that way,” replied White.
“They know about this Lost River story, do you think?”
“Most everyone does up here. It is an old Indian legend, and
probably has no more foundation in fact than most Indian legends,”
answered the guide. “Mind you, I am not saying that such a place
doesn’t exist. No doubt there are many rich veins in the Cascade
Range yet to be discovered. Petersen evidently believed he had
found it, but he undoubtedly was delirious when he described the
spot. He had been shot, you know.”
Welcome to Our Bookstore - The Ultimate Destination for Book Lovers
Are you passionate about books and eager to explore new worlds of
knowledge? At our website, we offer a vast collection of books that
cater to every interest and age group. From classic literature to
specialized publications, self-help books, and children’s stories, we
have it all! Each book is a gateway to new adventures, helping you
expand your knowledge and nourish your soul
Experience Convenient and Enjoyable Book Shopping Our website is more
than just an online bookstore—it’s a bridge connecting readers to the
timeless values of culture and wisdom. With a sleek and user-friendly
interface and a smart search system, you can find your favorite books
quickly and easily. Enjoy special promotions, fast home delivery, and
a seamless shopping experience that saves you time and enhances your
love for reading.
Let us accompany you on the journey of exploring knowledge and
personal growth!
ebookgate.com

High Performance Networks From Supercomputing to Cloud Computing 1st Edition Dennis Abts

  • 5. Morgan & Claypool Publishers, www.morganclaypool.com
Series Editor: Mark D. Hill, University of Wisconsin
SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE
Series ISSN: 1935-3235. ISBN: 978-1-60845-402-0
High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities
Dennis Abts, Google Inc., and John Kim, Korea Advanced Institute of Science and Technology
Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication-intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing, the network enables distributed applications to communicate and interoperate in an orchestrated and efficient way.
This book describes the design and engineering tradeoffs of datacenter networks. It describes interconnection networks from topology and network architecture to routing algorithms, and presents opportunities for taking advantage of the emerging technology trends that are influencing router microarchitecture. With the emergence of “many-core” processor chips, it is evident that we will also need “many-port” routing chips to provide a bandwidth-rich network to avoid the performance-limiting effects of Amdahl’s Law. We provide an overview of conventional topologies and their routing algorithms and show how technology, signaling rates and cost-effective optics are motivating new network topologies that scale up to millions of hosts. The book also provides detailed case studies of two high performance parallel computer systems and their networks.
About SYNTHESIS: This volume is a printed version of a work that appears in the Synthesis Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise, original presentations of important research and development topics, published quickly, in digital and print formats. For more information visit www.morganclaypool.com
  • 7. Synthesis Lectures on Computer Architecture
Editor: Mark D. Hill, University of Wisconsin
Synthesis Lectures on Computer Architecture publishes 50- to 100-page publications on topics pertaining to the science and art of designing, analyzing, selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals. The scope will largely follow the purview of premier computer architecture conferences, such as ISCA, HPCA, MICRO, and ASPLOS.
High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities (Dennis Abts and John Kim, 2011)
Quantum Computing for Architects, Second Edition (Tzvetan Metodi, Fred Chong, and Arvin Faruque, 2011)
Processor Microarchitecture: An Implementation Perspective (Antonio González, Fernando Latorre, and Grigorios Magklis, 2010)
Transactional Memory, 2nd edition (Tim Harris, James Larus, and Ravi Rajwar, 2010)
Computer Architecture Performance Evaluation Methods (Lieven Eeckhout, 2010)
Introduction to Reconfigurable Supercomputing (Marco Lanzagorta, Stephen Bique, and Robert Rosenberg, 2009)
On-Chip Networks (Natalie Enright Jerger and Li-Shiuan Peh, 2009)
  • 8. The Memory System: You Can’t Avoid It, You Can’t Ignore It, You Can’t Fake It (Bruce Jacob, 2009)
Fault Tolerant Computer Architecture (Daniel J. Sorin, 2009)
The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines (Luiz André Barroso and Urs Hölzle, 2009)
Computer Architecture Techniques for Power-Efficiency (Stefanos Kaxiras and Margaret Martonosi, 2008)
Chip Multiprocessor Architecture: Techniques to Improve Throughput and Latency (Kunle Olukotun, Lance Hammond, and James Laudon, 2007)
Transactional Memory (James R. Larus and Ravi Rajwar, 2006)
Quantum Computing for Computer Architects (Tzvetan S. Metodi and Frederic T. Chong, 2006)
  • 9. Copyright © 2011 by Morgan & Claypool All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopy, recording, or any other except for brief quotations in printed reviews, without the prior permission of the publisher. High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities Dennis Abts and John Kim www.morganclaypool.com ISBN: 9781608454020 paperback ISBN: 9781608454037 ebook DOI 10.2200/S00341ED1V01Y201103CAC014 A Publication in the Morgan & Claypool Publishers series SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE Lecture #14 Series Editor: Mark D. Hill, University of Wisconsin Series ISSN Synthesis Lectures on Computer Architecture Print 1935-3235 Electronic 1935-3243
  • 10. High Performance Datacenter Networks: Architectures, Algorithms, and Opportunities
Dennis Abts, Google Inc.
John Kim, Korea Advanced Institute of Science and Technology (KAIST)
SYNTHESIS LECTURES ON COMPUTER ARCHITECTURE #14
Morgan & Claypool Publishers
  • 11. ABSTRACT
Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication-intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing, the network enables distributed applications to communicate and interoperate in an orchestrated and efficient way.
This book describes the design and engineering tradeoffs of datacenter networks. It describes interconnection networks from topology and network architecture to routing algorithms, and presents opportunities for taking advantage of the emerging technology trends that are influencing router microarchitecture. With the emergence of “many-core” processor chips, it is evident that we will also need “many-port” routing chips to provide a bandwidth-rich network to avoid the performance-limiting effects of Amdahl’s Law. We provide an overview of conventional topologies and their routing algorithms and show how technology, signaling rates and cost-effective optics are motivating new network topologies that scale up to millions of hosts. The book also provides detailed case studies of two high performance parallel computer systems and their networks.
KEYWORDS: network architecture and design, topology, interconnection networks, fiber optics, parallel computer architecture, system design
  • 12. Contents
Preface  xi
Acknowledgments  xiii
Note to the Reader  xv
1 Introduction  1
  1.1 From Supercomputing to Cloud Computing  3
  1.2 Beowulf: The Cluster is Born  3
  1.3 Overview of Parallel Programming Models  4
  1.4 Putting it all together  5
  1.5 Quality of Service (QoS) requirements  6
  1.6 Flow control  7
    1.6.1 Lossy flow control  7
    1.6.2 Lossless flow control  8
  1.7 The rise of ethernet  9
  1.8 Summary  9
2 Background  13
  2.1 Interconnection networks  13
  2.2 Technology trends  13
  2.3 Topology, Routing and Flow Control  16
  2.4 Communication Stack  16
3 Topology Basics  19
  3.1 Introduction  19
  3.2 Types of Networks  20
  3.3 Mesh, Torus, and Hypercubes  20
    3.3.1 Node identifiers  22
    3.3.2 k-ary n-cube tradeoffs  22
  • 13. 4 High-Radix Topologies  25
  4.1 Towards High-radix Topologies  25
  4.2 Technology Drivers  26
    4.2.1 Pin Bandwidth  26
    4.2.2 Economical Optical Signaling  29
  4.3 High-Radix Topology  30
    4.3.1 High-Dimension Hypercube, Mesh, Torus  30
    4.3.2 Butterfly  30
    4.3.3 High-Radix Folded-Clos  31
    4.3.4 Flattened Butterfly  34
    4.3.5 Dragonfly  34
    4.3.6 HyperX  37
5 Routing  39
  5.1 Routing Basics  39
    5.1.1 Objectives of a Routing Algorithm  40
  5.2 Minimal Routing  40
    5.2.1 Deterministic Routing  40
    5.2.2 Oblivious Routing  41
  5.3 Non-minimal Routing  41
    5.3.1 Valiant’s algorithm (VAL)  42
    5.3.2 Universal Global Adaptive Load-Balancing (UGAL)  42
    5.3.3 Progressive Adaptive Routing (PAR)  43
    5.3.4 Dimensionally-Adaptive, Load-balanced (DAL) Routing  43
  5.4 Indirect Adaptive Routing  43
  5.5 Routing Algorithm Examples  44
    5.5.1 Example 1: Folded-Clos  45
    5.5.2 Example 2: Flattened Butterfly  45
    5.5.3 Example 3: Dragonfly  49
6 Scalable Switch Microarchitecture  51
  6.1 Router Microarchitecture Basics  51
  6.2 Scaling baseline microarchitecture to high radix  52
  6.3 Fully Buffered Crossbar  54
  6.4 Hierarchical Crossbar Architecture  55
  6.5 Examples of High-Radix Routers  57
  • 14. 6.5.1 Cray YARC Router  57
    6.5.2 Mellanox InfiniScale IV  59
7 System Packaging  63
  7.1 Packaging hierarchy  63
  7.2 Power delivery and cooling  63
  7.3 Topology and Packaging Locality  68
8 Case Studies  73
  8.1 Cray BlackWidow Multiprocessor  73
    8.1.1 BlackWidow Node Organization  73
    8.1.2 High-radix Folded-Clos Network  74
    8.1.3 System Packaging  75
    8.1.4 High-radix Fat-tree  76
    8.1.5 Packet Format  77
    8.1.6 Network Layer Flow Control  78
    8.1.7 Data-link Layer Protocol  78
    8.1.8 Serializer/Deserializer  80
  8.2 Cray XT Multiprocessor  80
    8.2.1 3-D torus  81
    8.2.2 Routing  82
    8.2.3 Flow Control  84
    8.2.4 SeaStar Router Microarchitecture  84
  8.3 Summary  88
9 Closing Remarks  91
  9.1 Programming models  91
  9.2 Wire protocols  91
  9.3 Opportunities  92
Bibliography  93
Authors’ Biographies  99
  • 16. Preface
This book is aimed at the researcher, graduate student, and practitioner alike. We begin with background and motivation to give the reader a substrate upon which to build the new concepts that are driving high-performance networking in both supercomputing and cloud computing. We assume the reader is familiar with computer architecture and basic networking concepts. We show the evolution of high-performance interconnection networks over the span of two decades, and the underlying technology trends driving these changes. We then describe how to apply these technology drivers to enable new network topologies and routing algorithms that scale to millions of processing cores. We hope that practitioners will find the material useful for making design tradeoffs, and that researchers will find it both timely and relevant to the modern parallel computer systems that make up today’s datacenters.
Dennis Abts and John Kim
March 2011
  • 18. Acknowledgments
While we draw from our experience at Cray and Google and from academic work on the design and operation of interconnection networks, most of what we have learned is the result of hard work and years of experience that have led to practical insights. Our experience benefited tremendously from our colleagues Steve Scott at Cray and Bill Dally at Stanford University, and from many hours of whiteboard-huddled conversations with Mike Marty, Philip Wells, Hong Liu, and Peter Klausler at Google. We would also like to thank our Google colleagues James Laudon, Bob Felderman, Luiz Barroso, and Urs Hölzle for reviewing draft versions of the manuscript. We want to thank the reviewers, especially Amin Vahdat and Mark Hill, for taking the time to carefully read and provide feedback on early versions of this manuscript. Thanks to Urs Hölzle for guidance, and to Kristin Weissman at Google and Michael Morgan at Morgan & Claypool Publishers. We are also grateful to Mark Hill and Michael Morgan for inviting us to this project and for being patient with deadlines. Finally, and most importantly, we would like to thank our loving family members who graciously supported this work and patiently allowed us to spend our free time on this project. Without their enduring patience, and with an equal amount of prodding, this work would not have materialized.
Dennis Abts and John Kim
March 2011
Note to the Reader

We very much appreciate any feedback, suggestions, and corrections you might have on our manuscript. The Morgan & Claypool publishing process provides a lightweight method for revising the electronic edition. We plan to revise the manuscript relatively often, and will gratefully acknowledge any input that helps us improve the accuracy, readability, or general usefulness of the book. Please leave your feedback at http://tinyurl.com/HPNFeedback Dennis Abts and John Kim, March 2011
CHAPTER 1

Introduction

Today's datacenters have emerged from the collection of loosely connected workstations which shaped the humble beginnings of the Internet, and have grown into massive "warehouse-scale computers" (Figure 1.1) capable of running the most demanding workloads. Barroso and Hölzle describe the architecture of a warehouse-scale computer (WSC) [9] and give an overview of the programming model and common workloads executed on these machines. The hardware building blocks are packaged into "racks" of about 40 servers, and many racks are interconnected using a high-performance network to form a "cluster" with hundreds or thousands of servers that are tightly coupled for performance but loosely coupled for fault tolerance and isolation.

Figure 1.1: A datacenter with cooling infrastructure (cooling towers) and power delivery (power substation) highlighted.

Figure 1.2: Comparison of web search interest and terminology.

This highlights some distinctions between what have traditionally been called "supercomputers" and what we now consider "cloud computing," which appears to have emerged around 2008 (based on the relative web search interest shown in Figure 1.2) as a moniker for server-side computing. Increasingly, our computing needs are moving away from desktop computers toward more mobile clients (e.g., smart phones, tablet computers, and netbooks) that depend on Internet services, applications, and storage. As an example, it is much more efficient to maintain a repository of digital photography on a server in the "cloud" than on a PC that is perhaps not as well maintained as a server in a large datacenter, which is more reminiscent of a clean room than a living room where your precious digital memories are subjected to the daily routine of kids, spills, power failures, and varying temperatures. In addition, most consumers upgrade computers every few years, requiring them to migrate all their precious data to their newest piece of technology. In contrast, the "cloud" provides a clean, temperature-controlled environment with ample power distribution and backup. Moreover, data in the "cloud" is typically replicated for redundancy, so in the event of a hardware failure the user data is restored, generally without the user even being aware that an error occurred.
1.1 FROM SUPERCOMPUTING TO CLOUD COMPUTING

As the ARPANET transformed into the Internet over the past forty years, and as the World Wide Web emerged from adolescence and turned twenty, this metamorphosis has seen changes in both supercomputing and cloud computing. The supercomputing industry was born in 1976 when Seymour Cray announced the Cray-1 [54]. Among its many innovations were its processor design, process technology, system packaging, and instruction set architecture. The foundation of the architecture was the notion of vector operations, which allowed a single instruction to operate on an array, or "vector," of elements simultaneously, in contrast to the scalar processors of the time, whose instructions operated on single data items. The vector parallelism approach dominated the high-performance computing landscape for much of the 1980s and early 1990s, until "commodity" microprocessors began aggressively implementing forms of instruction-level parallelism (ILP) and better cache memory systems to exploit the spatial and temporal locality exhibited by most applications. Improvements in CMOS process technology and full-custom CMOS design practices allowed microprocessors to quickly ramp up clock rates to several gigahertz. This, coupled with multi-issue pipelines and efficient branch prediction and speculation, eventually allowed microprocessors to catch up with the proprietary vector processors from Cray, Convex, and NEC. Over time, conventional microprocessors incorporated short vector units (e.g., SSE, MMX, AltiVec) into the instruction set. However, the largest beneficiary of vector processing has been multimedia applications, as evidenced by the Cell processor (jointly developed by Sony, Toshiba, and IBM), which found widespread success in Sony's PlayStation 3 game console and even in some special-purpose computer systems from Mercury Systems.
Parallel applications eventually have to synchronize and communicate among parallel threads. Amdahl's Law is relentless: unless enough parallelism is exposed, the time spent orchestrating the parallelism and executing the sequential region will ultimately limit the application performance [27].

1.2 BEOWULF: THE CLUSTER IS BORN

In 1994, Thomas Sterling (then dually affiliated with the California Institute of Technology and NASA's JPL) and Donald Becker (then a researcher at NASA) assembled a parallel computer that became known as a Beowulf cluster.1 What was unique about Beowulf [61] systems was that they were built from common "off-the-shelf" computers; as Figure 1.3 shows, system packaging was not an emphasis. More importantly, as a loosely coupled distributed-memory machine, Beowulf forced researchers to think about how to efficiently program parallel computers. As a result, we benefited from portable and free programming interfaces such as the Parallel Virtual Machine (PVM), message passing interfaces (MPICH and OpenMPI), and the Local Area Multicomputer (LAM), with MPI being embraced by the HPC community and highly optimized. The Beowulf cluster was organized so that one machine was designated the "server": it managed job scheduling, pushed binaries to clients, and performed monitoring. It also acted as the gateway
1The genesis of the name comes from the poem, which describes Beowulf as having "thirty men's heft of grasp in the gripe of his hand."
Figure 1.3: A 128-processor Beowulf cluster at NASA.

to the "outside world," so researchers had a login host. The model is still quite common, with some nodes designated as service and IO nodes where users actually log in to the parallel machine. From there, they can compile their code and launch the job on "compute only" nodes (the worker bees of the colony), while console information and machine status are communicated back to the service nodes.

1.3 OVERVIEW OF PARALLEL PROGRAMMING MODELS

Early supercomputers were able to work efficiently, in part, because they shared a common physical memory space. As a result, communication among processors was very efficient as they updated shared variables and operated on common data. However, as the size of systems grew, this shared memory model evolved into a distributed shared memory (DSM) model in which each processing node owns a portion of the machine's physical memory and the programmer is provided with a logically shared address space, making it easy to reason about how the application is partitioned and how its threads communicate. The Stanford DASH [45] was the first machine to demonstrate this cache-coherent non-uniform memory access (ccNUMA) model, and the SGI Origin 2000 [43] was the first machine to successfully commercialize the DSM architecture.

We commonly refer to distributed-memory machines as "clusters" since they are loosely coupled and rely on message passing for communication among processing nodes. With the inception of Beowulf clusters, the HPC community realized it could build modest-sized parallel computers on
a relatively small budget. To their benefit, the common benchmark for measuring the performance of a parallel computer is LINPACK, which is not communication intensive, so it was commonplace to use inexpensive Ethernet networks to string together commodity nodes. As a result, Ethernet got a foothold on the list of the TOP500 [62] civilian supercomputers, with almost 50% of the TOP500 systems using Ethernet.

1.4 PUTTING IT ALL TOGETHER

The first Cray-1 [54] supercomputer was expected to ship one system per quarter in 1977. Today, microprocessor companies have refined their CMOS processes and manufacturing, making commodity processors very cost-effective building blocks for large-scale parallel systems capable of tens of petaflops. This shift away from "proprietary" processors and toward "commodity" processors has fueled the growth of systems. At the time of this writing, the largest computer on the TOP500 list [62] has in excess of 220,000 cores (see Figure 7.5) and consumes almost seven megawatts! A datacenter server has much in common with one used in a supercomputer; however, there are also some very glaring differences. We enumerate several properties of both a warehouse-scale computer (WSC) and a supercomputer (Cray XE6).

Datacenter server
• Sockets per server: 2-socket x86 platform
• Memory capacity: 16 GB DRAM
• Disk capacity: 5×1 TB disk drives, and 1×160 GB SSD (FLASH)
• Compute density: 80 sockets per rack
• Network bandwidth per rack: 1×48-port GigE switch with 40 down links and 8 uplinks (5× oversubscription)
• Network bandwidth per socket: 100 Mb/s if 1 GigE rack switch, or 1 Gb/s if 10 GigE rack switch

Supercomputer server
• Sockets per server: 8-socket x86 platform
• Memory capacity: 32 or 64 GB DRAM
• Disk capacity: IO capacity varies. Each XIO blade has four PCIe-Gen2 interfaces, for a total of 96 PCIe-Gen2 ×16 IO devices for a peak IO bandwidth of 768 GB/s per direction.
• Compute density: 192 sockets per rack
• Network bandwidth per rack: 48×48-port Gemini switch chips, each with 160 GB/s switching bandwidth
• Network bandwidth per socket: 9.6 GB/s injection bandwidth with non-coherent HyperTransport 3.0 (ncHT3)

Several things stand out as differences between a datacenter server and a supercomputer node. First, the compute density of the supercomputer is significantly better than that of a standard 40U rack. On the other hand, this dense packaging also puts pressure on cooling requirements, not to mention power delivery. As power and its associated delivery become increasingly expensive, it becomes more important to optimize the number of operations per watt; often the size of a system is limited by power distribution and cooling infrastructure. Another point is the vast difference in network bandwidth per socket, in large part because ncHT3 is a much higher bandwidth processor interface than PCIe-Gen2; however, as PCIe-Gen3 ×16 becomes available, we expect that gap to narrow.

1.5 QUALITY OF SERVICE (QOS) REQUIREMENTS

With HPC systems it is commonplace to dedicate the system for the duration of application execution, allowing all processors to be used as compute resources. As a result, there is no need for performance isolation from competing applications. Quality of service (QoS) provides both performance isolation and differentiated service for applications.2 Cloud computing often has varied workloads that require multiple applications to share resources. Workload consolidation [33] is becoming increasingly important as memory and processor costs increase, and with them the value of increased system utilization. The QoS class refers to the end-to-end class of service as observed by the application. In principle, QoS is divided into three categories:

Best effort - traffic is treated as a FIFO with no differentiation provided.

Differentiated service - also referred to as "soft QoS," where traffic is given a statistical preference over other traffic.
This means it is less likely to be dropped relative to best-effort traffic, for example, resulting in lower average latency and increased average bandwidth.

Guaranteed service - also referred to as "hard QoS," where a fraction of the network bandwidth is reserved to provide no-loss, low-jitter bandwidth guarantees.

In practice, there are many intermediate pieces which are, in part, responsible for implementing a QoS scheme. A routing algorithm determines the set of usable paths through the network between any source and destination. Generally speaking, routing is a background process that attempts to load-balance the physical links in the system, taking into account any network faults, and programming
2We use the term "applications" loosely here to represent processes or threads, at whatever granularity a service level agreement is applied.
the forwarding tables within each router. When a new packet arrives, the header is inspected, and the network address of the destination is used to index into the forwarding table, which emits the output port where the packet is scheduled for transmission. This "packet forwarding" process is done on a packet-by-packet basis and is responsible for identifying packets marked for special treatment according to their QoS class. The basic unit over which a QoS class is applied is the flow. A flow is described as a tuple (SourceIP, SourcePort, DestIP, DestPort). Packets are marked by the host or edge switch using either 1) port range, or 2) host (sender/client-side) marking. Since we are talking about end-to-end service levels, ideally the host which initiates the communication would request a specific level of service. This requires some client-side API for establishing the QoS requirements prior to sending a message. Alternatively, edge routers can mark packets as they are injected into the core fabric. Packets are marked with their service class, which is interpreted at each hop and acted upon by routers along the path. For common Internet protocols, the differentiated services (DS) field of the IP header provides this function, as defined by the DiffServ architecture [RFC2475] for network-layer QoS. For compatibility reasons, this is the same field as the type of service (ToS) field [RFC791] of the IP header. Since the RFC does not clearly describe how "low," "medium," or "high" are supposed to be interpreted, it is common to use five classes: best effort (BE), AF1, AF2, AF3, and AF4, and to set the drop priority to 0 (ignored).

1.6 FLOW CONTROL

Surprisingly, a key difference among system interconnects is flow control. How the switch and buffer resources are managed is very different in Ethernet than what is typical in a supercomputer interconnect. There are several kinds of flow control in a large distributed parallel computer.
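Before turning to flow control, the host-side (sender) marking described above can be made concrete. The sketch below (Python, not from the text) sets the DS field on a socket via the legacy IP_TOS option; the DSCP code points for the assured-forwarding classes follow RFC 2597, while the dictionary and function names are only illustrative.

```python
import socket

# DSCP code points (RFC 2597/2474): best effort plus the four
# assured-forwarding classes mentioned above (lowest drop precedence).
DSCP = {"BE": 0, "AF11": 10, "AF21": 18, "AF31": 26, "AF41": 34}

def tos_byte(dscp: int) -> int:
    # The 6-bit DSCP occupies the upper bits of the 8-bit ToS/DS field;
    # the low 2 bits are left for ECN.
    return dscp << 2

def mark_socket(sock: socket.socket, service_class: str) -> None:
    """Sender-side marking: tag all packets of this flow with a QoS class."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS,
                    tos_byte(DSCP[service_class]))

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    mark_socket(s, "AF41")  # request preferential ("soft QoS") treatment
    s.close()
```

Routers along the path would then interpret this field according to their configured per-hop behavior; unmarked traffic defaults to best effort.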
The interconnection network is a shared resource among all the compute nodes, and network resources must be carefully managed to avoid corrupting data, overflowing a buffer, and so on. The basic mechanism by which resources in the network are managed is flow control. Flow control provides a simple accounting method for managing resources that are in demand by multiple uncoordinated sources. The resource is managed in units of flits (flow control units). When a resource is requested but not currently available for use, we must decide what to do with the incoming request. In general, we can 1) drop the request and all subsequent requests until the resource is freed, or 2) block and wait for the resource to be freed.

1.6.1 LOSSY FLOW CONTROL

With lossy flow control [20, 48], the hardware can discard packets until there is room in the desired resource. This approach is usually applied to input buffers on each switch chip, but it applies to resources in the network interface controller (NIC) chip as well. When packets are dropped, the software layers must detect the loss, usually through an unexpected sequence number indicating that one or more packets are missing or out of order. The receiver software layers will discard packets that do not match the expected sequence number, and the sender software layers will detect that it
Figure 1.4: Example of credit-based flow control across a network link (the data link layers exchange data packets in one direction and credit-carrying flow control packets in the other).

has not received an acknowledgment packet, causing a sender timeout which prompts the "send window" (the packets sent since the last acknowledgment was received) to be retransmitted. This algorithm is referred to as go-back-N, since the sender will "go back" and retransmit the last N (send window) packets.

1.6.2 LOSSLESS FLOW CONTROL

Lossless flow control implies that packets are never dropped as a result of a lack of buffer space (i.e., in the presence of congestion). Instead, it provides back pressure to indicate the absence of available buffer space in the resource being managed.

1.6.2.1 Stop/Go (XON/XOFF) flow control

A common approach is XON/XOFF, or stop/go, flow control. In this approach, the receiver provides simple handshaking to the sender indicating whether it is safe (XON) to transmit or not (XOFF). The sender is able to send flits until the receiver asserts stop (XOFF). Then, as the receiver continues to process packets from the input buffer, freeing space, a threshold is reached at which the receiver asserts XON again, allowing the sender to resume sending. This stop/go mechanism correctly manages the resource and avoids overflow as long as the buffer space above the XOFF threshold is sufficient to allow any in-flight flits to land. This slack in the buffer is necessary to act as a flow control shock absorber for the outstanding flits needed to cover the propagation delay of the flow control signals.

1.6.2.2 Credit-based flow control

Credit-based flow control (Figure 1.4) provides more efficient use of the buffer resources. The sender maintains a count of the number of available credits, which represents the amount of free space in the receiver's input buffer.
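The sender-side credit accounting just described can be sketched as a toy model (Python; the flit-based units, assumed maximum packet size, and class names are our illustrative assumptions, not details from the text):

```python
class CreditedLink:
    """Toy model of sender-side credit accounting for one link."""

    MAX_PACKET_FLITS = 8  # assumed maximum packet size, in flits

    def __init__(self, receiver_buffer_flits: int):
        # Initially, credits equal the receiver's input buffer capacity.
        self.credits = receiver_buffer_flits

    def can_send_wormhole(self) -> bool:
        # Wormhole flow control: one free flit slot suffices to advance.
        return self.credits >= 1

    def can_send_vct(self) -> bool:
        # Virtual cut-through: conservatively require room for a whole
        # maximum-size packet so hardware need not track actual sizes.
        return self.credits >= self.MAX_PACKET_FLITS

    def send_flit(self) -> None:
        # Decrement on transmit; never send without a credit, so the
        # receiver's input buffer can never overflow.
        assert self.credits > 0, "would overflow receiver input buffer"
        self.credits -= 1

    def credit_returned(self) -> None:
        # The receiver drained a flit and returned a credit on the
        # reverse flow control channel shown in Figure 1.4.
        self.credits += 1
```

Because the counter is decremented on transmit and incremented only when the receiver returns a credit, the sender's view is always a safe (possibly stale) lower bound on the free space downstream.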
A separate count is used for each virtual channel (VC) [21]. When a new
packet arrives at the output port, the sender checks the available credit counter. For wormhole flow control [20] across the link, the sender's available credit need only be one or more. For virtual cut-through (VCT) [20, 22] flow control across the link, the sender's available credit must be more than the size of the packet. In practice, the switch hardware does not have to track the size of each packet in order to allow VCT flow control: the sender can simply check that the available credit count is larger than the maximum packet size.

1.7 THE RISE OF ETHERNET

Comparing a typical datacenter server to a state-of-the-art supercomputer node may be an extreme example, but the fact remains that Ethernet is gaining a significant foothold in the high-performance computing space, with nearly 50% of the systems on the TOP500 list [62] using Gigabit Ethernet, as shown in Figure 1.5(b). Infiniband (including SDR, DDR, and QDR) accounts for 41% of the interconnects, leaving very little room for proprietary networks. The landscape was very different in 2002, as shown in Figure 1.5(a), where Myrinet accounted for about one third of the system interconnects. The IBM SP2 interconnect accounted for about 18%, and the remaining 50% of the system interconnects were split among about nine different manufacturers. In 2002, only about 8% of the TOP500 systems used Gigabit Ethernet, compared to nearly 50% in June of 2010.

1.8 SUMMARY

No doubt "cloud computing" benefited from this wild growth and acceptance in the HPC community, driving prices down and making parts more reliable. Moving forward, we may see even further consolidation as 40 Gigabit Ethernet converges with some of the Infiniband semantics via RDMA over Ethernet (ROE). However, a warehouse-scale computer (WSC) [9] and a supercomputer have different usage models.
For example, most supercomputer applications expect to run on the machine in a dedicated mode, not having to compete for compute, network, or IO resources with any other applications. Supercomputing applications will commonly checkpoint their dataset, since the MTBF of a large system is usually measured in tens of hours. Supercomputing applications also typically run on a dedicated system, so QoS demands are not typically a concern. On the other hand, a datacenter will run a wide variety of applications, some user-facing like Internet email, and others behind the scenes. The workloads vary drastically, and programmers must learn that hardware can, and does, fail and that the application must be fault-aware and deal with failures gracefully. Furthermore, clusters in the datacenter are often shared across dozens of applications, so performance isolation and fault isolation are key to scaling applications to large processor counts. Choosing the "right" topology is important to the overall system performance. We must take into account the flow control, QoS requirements, fault tolerance and resilience, as well as workloads, to better understand the latency and bandwidth characteristics of the entire system. For example,
Figure 1.5: Breakdown of supercomputer interconnects from the TOP500 list: (a) 2002; (b) 2010.
topologies with abundant path diversity are able to find alternate routes between arbitrary endpoints. This is only one aspect of topology choice that we will consider in subsequent chapters.
CHAPTER 2

Background

Over the past three decades, Moore's Law has ushered in an era where transistors within a single silicon package are abundant, a trend that system architects have taken advantage of to create a class of many-core chip multiprocessors (CMPs) which interconnect many small processing cores using an on-chip network. However, the pin density, or number of signal pins per unit of silicon area, has not kept up with this pace. As a result, pin bandwidth, the amount of data we can get on and off the chip package, has become a first-order design constraint and a precious resource for system designers.

2.1 INTERCONNECTION NETWORKS

The components of a computer system often have to communicate to exchange status information, or data that is used for computation. The interconnection network is the substrate over which this communication takes place. Many-core CMPs employ an on-chip network for low-latency, high-bandwidth load/store operations between processing cores and memory, and among processing cores within a chip package. A processor, memory, and the associated IO devices are often packaged together and referred to as a processing node. The system-level interconnection network connects all the processing nodes according to the network topology. In the past, system components shared a bus over which address and data were exchanged; however, this communication model did not scale as the number of components sharing the bus increased. Modern interconnection networks take advantage of high-speed signaling [28] with point-to-point serial links, providing high-bandwidth connections between processors and memory in multiprocessors [29, 32], connecting input/output (IO) devices [31, 51], and serving as switching fabrics for routers.

2.2 TECHNOLOGY TRENDS

There are many considerations that go into building a large-scale cluster computer, many of which revolve around its cost effectiveness, in both capital (procurement) cost and operating expense.
The many components that go into a cluster each have different technology drivers, which blurs the line that defines the optimal solution for both performance and cost. This chapter takes a look at a few of the technology drivers and how they pertain to the interconnection network. The interconnection network is the substrate over which processors, memory, and I/O devices interoperate. The underlying technology from which the network is built determines the data rate, resiliency, and cost of the network. Ideally, the processor, network, and I/O devices are all orchestrated
in a way that leads to a cost-effective, high-performance computer system. The system, however, is no better than the components from which it is built. The basic building block of the network is the switch (router) chip that interconnects the processing nodes according to some prescribed topology. The topology and how the system is packaged are closely related; typical packaging schemes are hierarchical: chips are packaged onto printed circuit boards, which in turn are packaged into an enclosure (e.g., a rack), and the enclosures are connected together to create a single system.

Figure 2.1: Off-chip bandwidth of prior routers, and ITRS predicted growth.

The past 20 years have seen several orders of magnitude increase in off-chip bandwidth, spanning from several gigabits per second to several terabits per second today. The bandwidth shown in Figure 2.1 plots the total pin bandwidth of a router (equivalent to the total number of signals times the signaling rate of each signal) and illustrates an exponential increase in pin bandwidth. Moreover, we expect this trend to continue into the next decade, as shown by the International Technology Roadmap for Semiconductors (ITRS) in Figure 2.1, with thousands of pins per package and more than 100 Tb/s of off-chip bandwidth. Despite this exponential growth, pin and wire density simply does not match the growth rates of transistors as predicted by Moore's Law.
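The pin-bandwidth figure of merit plotted in Figure 2.1 is simply signal count times per-signal rate; a one-line illustration (the pin counts and rates below are made-up examples, not data from the figure):

```python
def pin_bandwidth_tbps(signal_pins: int, gbps_per_signal: float) -> float:
    """Total off-chip bandwidth in Tb/s: number of signals x signaling rate."""
    return signal_pins * gbps_per_signal / 1000.0

# Hypothetical: 1,000 signal pins at 10 Gb/s each gives 10 Tb/s, so
# reaching 100 Tb/s at that signaling rate would take 10,000 signal pins.
total = pin_bandwidth_tbps(1000, 10.0)
```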
Figure 2.2: Network latency and bandwidth characteristics. (a) Load versus latency for an ideal M/D/1 queue model. (b) Measured data showing offered load (Mb/s) versus latency (μs), with average accepted throughput (Mb/s) overlaid to demonstrate saturation in a real network.
2.3 TOPOLOGY, ROUTING AND FLOW CONTROL

Before diving into the details of what drives network performance, we pause to lay the groundwork for some fundamental terminology and concepts. Network performance is characterized by its latency and bandwidth characteristics, as illustrated in Figure 2.2. The queueing delay, Q(λ), is a function of the offered load (λ) and is described by the latency-bandwidth characteristics of the network. An approximation of Q(λ) is given by an M/D/1 queue model (Figure 2.2(a)):

    Q(λ) = 1 / (1 − λ)    (2.1)

If we overlay the average accepted bandwidth observed by each node, assuming benign traffic, we obtain Figure 2.2(b). When there is very low offered load on the network, the Q(λ) delay is negligible. However, as traffic intensity increases and the network approaches saturation, the queueing delay will dominate the total packet latency.

The performance and cost of the interconnect are driven by a number of design factors, including topology, routing, flow control, and message efficiency. The topology describes how network nodes are interconnected and determines the path diversity, the number of distinct paths between any two nodes. The routing algorithm determines which path a packet will take in such a way as to load-balance the physical links in the network. Network resources (primarily buffers for packet storage) are managed using a flow control mechanism. In general, flow control happens at the link layer and possibly end-to-end. Finally, packets carry a data payload, and the packet efficiency determines the delivered bandwidth to the application. While recent many-core processors have spurred a 2× and 4× increase in the number of processing cores in each cluster, unless network performance keeps pace, the effects of Amdahl's Law will become a limitation.
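Equation (2.1) can be evaluated numerically to see both regimes; a small sketch (Python, with λ as the normalized offered load):

```python
def queueing_delay(offered_load: float) -> float:
    """Normalized queueing delay Q(lambda) from Equation (2.1); valid for 0 <= lambda < 1."""
    if not 0.0 <= offered_load < 1.0:
        raise ValueError("offered load must be in [0, 1)")
    return 1.0 / (1.0 - offered_load)

# Delay is negligible at low load but grows without bound near saturation.
for lam in (0.1, 0.5, 0.9, 0.99):
    print(f"load {lam:4}: Q = {queueing_delay(lam):7.2f}")
```

At λ = 0.1 the delay is barely above its unloaded value, while at λ = 0.99 it is two orders of magnitude larger, matching the saturation behavior shown in Figure 2.2.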
The topology, routing, flow control, and message efficiency all have first-order effects on system performance; thus, we will dive into each of these areas in more detail in subsequent chapters.

2.4 COMMUNICATION STACK

Layers of abstraction are commonly used in networking to provide fault isolation and device independence. Figure 2.3 shows the communication stack, which is largely representative of the lower four layers of the OSI networking model. To reduce software overhead and the resulting end-to-end latency, we want a thin networking stack. Some of the protocol processing that is common in Internet communication protocols is handled in specialized hardware in the network interface controller (NIC). For example, the transport layer provides reliable message delivery to applications, and whether the protocol bookkeeping is done in software (e.g., TCP) or hardware (e.g., Infiniband reliable connection) directly affects application performance. The network layer provides a logical namespace for endpoints (and possibly switches) in the system. The network layer handles packets and provides the routing information identifying paths through the network for all source and destination pairs. It is the network layer that asserts routes, either at the source (i.e., source-routed)
Figure 2.3: The communication stack: transport (end-to-end flow control, reliable message delivery), network (routing, node addressing, load balancing), data link (link-level flow control, data-link-layer reliable delivery), and physical (physical encoding, e.g., 8b10b; byte and lane alignment; physical media encoding).

or along each individual hop (i.e., distributed routing) along the path. The data link layer provides link-level flow control to manage the receiver's input buffer in units of flits (flow control units). The lowest level of the protocol stack, the physical media layer, is where data is encoded and driven onto the medium. The physical encoding must maintain a DC-neutral transmission line and commonly uses 8b10b or 64b66b encoding to balance the transition density. For example, a 10-bit encoded value is used to represent 8 bits of data, resulting in a 20% physical encoding overhead.

SUMMARY

Interconnection networks are a critical component of modern computer systems. Cloud computing has emerged to provide homogeneous clusters of conventional microprocessors, using common Internet communication protocols and aimed at providing Internet services (e.g., email, web search, collaborative Internet applications, streaming video, and so forth) at large scale. While Internet services themselves may be insensitive to latency, since they operate on human timescales measured in hundreds of milliseconds, the backend applications providing those services may indeed require large amounts of bandwidth (e.g., for indexing the Web) and low-latency characteristics. The programming model for cloud services is built largely around distributed message passing, commonly implemented on top of TCP (transmission control protocol) as a conduit for making a remote procedure call (RPC).
Supercomputing applications, on the other hand, are often communication intensive and can be sensitive to network latency. The programming model may use a combination of shared memory and message passing (e.g., MPI), with often very fine-grained communication and synchronization
needs. For example, collective operations, such as a global sum, are commonplace in supercomputing applications and rare in Internet services. This is largely because Internet applications evolved from simple hardware primitives (e.g., low-cost Ethernet NICs) and common communication models (e.g., TCP sockets) that were incapable of such operations. As processor and memory performance continues to increase, the interconnection network is becoming increasingly important, and it largely determines the bandwidth and latency of remote memory access. Going forward, the emergence of super datacenters will evolve into exa-scale parallel computers.
CHAPTER 3

Topology Basics

The network topology, describing precisely how nodes are connected, plays a central role in both the performance and cost of the network. In addition, the topology drives aspects of the switch design (e.g., virtual channel requirements, routing function, etc.), fault tolerance, and sensitivity to adversarial traffic. There are subtle yet very practical design issues that only arise at scale; we try to highlight those key points as they appear.

3.1 INTRODUCTION

Many scientific problems can be decomposed into a 3-D structure that represents the basic building blocks of the underlying phenomenon being studied. Such problems often have nearest-neighbor communication patterns, for example, and lend themselves nicely to k-ary n-cube networks. A high-performance application will often use the system in a dedicated fashion to provide the necessary performance isolation; however, a large production datacenter cluster will often run multiple applications simultaneously, with varying workloads and often unstructured communication patterns.

The choice of topology is largely driven by two factors: technology and packaging constraints. Here, technology refers to the underlying silicon from which the routers are fabricated (i.e., node size, pin density, power, etc.) and the signaling technology (e.g., optical versus electrical). The packaging constraints will determine the compute density, or amount of computation per unit of area on the datacenter floor. The packaging constraints will also dictate the data rate (signaling speed) and distance over which we can reliably communicate. As a result of evolving technology, the topologies used in large-scale systems have also changed. Many of the earliest interconnection networks were designed using topologies such as butterflies or hypercubes, based on the simple observation that these topologies minimized hop count.
Analysis by both Dally [18] and Agarwal [5] showed that under fixed packaging constraints, a low-radix network offered lower packet latency and thus better performance. Since the mid-1990s, k-ary n-cube networks have been used by several high-performance multiprocessors, such as the SGI Origin 2000 hypercube [43], the 2-D torus of the Cray X1 [16], the 3-D torus of the Cray T3E [55] and XT3 [12, 17], and the torus of the Alpha 21364 [49] and IBM BlueGene [35]. However, increasing pin bandwidth has recently motivated the migration towards high-radix topologies such as the radix-64 folded-Clos topology used in the Cray BlackWidow system [56]. In this chapter, we will discuss mesh/torus topologies, while in the next chapter, we will present high-radix topologies.
3.2 TYPES OF NETWORKS

Topologies can be broken down into two different genres: direct and indirect [20]. A direct network has processing nodes attached directly to the switching fabric; that is, the switching fabric is distributed among the processing nodes. An indirect network keeps the network independent of the endpoints themselves – i.e., dedicated switch nodes exist, and packets are forwarded indirectly through these switch nodes. The type of network determines some of the packaging and cabling requirements as well as fault resilience. It also impacts cost: for example, a direct network can combine the switching fabric and the network interface controller (NIC) functionality in the same silicon package, whereas an indirect network typically has two separate chips, one for the NIC and another for the switching fabric of the network. Examples of direct networks include the mesh, torus, and hypercubes discussed in this chapter, as well as high-radix topologies such as the flattened butterfly described in the next chapter. Indirect networks include the conventional butterfly and fat-tree topologies.

The terms radix and dimension are often used to describe both types of networks but are used differently for each. For an indirect network, radix often refers to the number of ports of a switch, and the dimension is related to the number of stages in the network. However, for a direct network, the two terms are reversed – radix refers to the number of nodes within a dimension, and the network size can be further increased by adding multiple dimensions. The two terms are actually a duality of each other for the different networks – for example, in order to reduce the network diameter, the radix of an indirect network or the dimension of a direct network can be increased. To be consistent with existing literature, we will use the term radix to refer to these different aspects of direct and indirect networks.
3.3 MESH, TORUS, AND HYPERCUBES

The mesh, torus, and hypercube networks all belong to the same family of direct networks, often referred to as k-ary n-mesh or k-ary n-cube. The scalability of the network is largely determined by the radix, k, and the number of dimensions, n, with N = k^n total endpoints in the network. In practice, the radix of the network is not necessarily the same for every dimension (Figure 3.2). Therefore, a more general way to express the total number of endpoints is given by Equation 3.1:

N = ∏_{i=0}^{n−1} k_i    (3.1)

Figure 3.1: Mesh (a) and torus (b) networks. [(a) an 8-ary 1-mesh; (b) an 8-ary 1-cube; node diagrams omitted.]
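As a concrete illustration of Equation 3.1, the endpoint count for both uniform- and mixed-radix networks can be computed directly. This is a minimal sketch (the helper name is ours, not the book's):

```python
from math import prod

def num_endpoints(radices):
    """Total endpoints N of a k-ary n-cube with (possibly mixed) per-dimension
    radices, per Equation 3.1: N is the product of k_i over the n dimensions."""
    return prod(radices)

# Uniform radix: an 8-ary 2-mesh has 8**2 = 64 endpoints (Figure 3.2b).
print(num_endpoints([8, 8]))   # 64
# Mixed radix: the (8,4)-ary 2-mesh of Figure 3.2a has 8 * 4 = 32 endpoints.
print(num_endpoints([8, 4]))   # 32
```

When every dimension has the same radix, this reduces to the familiar N = k^n.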
Mesh and torus networks (Figure 3.1) provide a convenient starting point to discuss topology tradeoffs. Start with the observation that each router in a k-ary n-mesh, as shown in Figure 3.1(a), requires only three ports: one port connects to its neighboring node to the left, another to its right neighbor, and one port (not shown) connects the router to the processor. Nodes that lie along the edge of a mesh, for example nodes 0 and 7 in Figure 3.1(a), require one less port. The same applies to k-ary n-cube (torus) networks. In general, the number of input and output ports, or radix, of each router is given by Equation 3.2. The term "radix" is often used to describe both the number of input and output ports on the router and the size, or number of nodes, in each dimension of the network.

r = 2n + 1    (3.2)

The number of dimensions (n) in a mesh or torus network is limited by practical packaging constraints, with typical values of n = 2 or n = 3. Since n is fixed, we vary the radix (k) to increase the size of the network. For example, to scale the network in Figure 3.2a from 32 nodes to 64 nodes, we increase the radix of the y dimension from 4 to 8, as shown in Figure 3.2b.

Figure 3.2: Irregular (a) and regular (b) mesh networks. [(a) an (8,4)-ary 2-mesh; (b) an 8-ary 2-mesh; node diagrams omitted.]

Since a binary hypercube (Figure 3.4) has a fixed radix (k = 2), we scale the number of dimensions (n) to increase its size. The number of dimensions in a system of size N is simply n = log₂(N) from Equation 3.1.

r = n + 1 = log₂(N) + 1    (3.3)

As a result, hypercube networks require a router with more ports (Equation 3.3) than a mesh or torus.
For example, a 512-node 3-D torus (n = 3) requires seven router ports, but a hypercube requires n = log₂(512) + 1 = 10 ports. It is useful to note that an n-dimension binary hypercube is isomorphic to an (n/2)-dimension torus with radix 4 (k = 4). Router pin bandwidth is limited; thus, building a 10-ported router for a hypercube instead of a 7-ported torus router may not be feasible without making each port narrower.

3.3.1 NODE IDENTIFIERS

The nodes in a k-ary n-cube are identified with an n-digit, radix-k number. It is common to refer to a node identifier as an endpoint's "network address." A packet makes a finite number of hops in each of the n dimensions. A packet may traverse an intermediate router, c_i, en route to its destination. When it reaches the correct ordinate of the destination, that is, c_i = d_i, we have resolved the ith dimension of the destination address.

3.3.2 k-ARY n-CUBE TRADEOFFS

The worst-case distance (measured in hops) that a packet must traverse between any source and any destination is called the diameter of the network. The network diameter is an important metric as it bounds the worst-case latency in the network. Since each hop entails an arbitration stage to choose the appropriate output port, reducing the network diameter will, in general, reduce the variance in observed packet latency. The network diameter is independent of the traffic pattern and is entirely a function of the topology, as shown in Table 3.1.

Table 3.1: Network diameter and average latency.

  Network              Diameter (hops)   Average (hops)
  mesh                 k − 1             (k + 1)/3
  torus                k/2               k/4
  hypercube            n                 n/2
  flattened butterfly  n + 1             n + 1 − (n − 1)/k

Figure 3.3: Hops between every source, destination pair in a mesh (a) and torus (b). [Distance matrices for a radix-9 mesh (a) and a radix-9 torus (b) omitted.]
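The diameter and average entries in Table 3.1 can be verified by brute force. The following sketch (the function names are ours) enumerates every source-destination pair of a one-dimensional mesh and torus, as in Figure 3.3:

```python
def mesh_hops(k, a, b):
    # Hops between nodes a and b on a k-node 1-D mesh (no wraparound link).
    return abs(a - b)

def torus_hops(k, a, b):
    # Hops on a k-node 1-D torus: the wraparound link may shorten the path.
    d = abs(a - b)
    return min(d, k - d)

def diameter_and_average(k, hops):
    # Diameter and average hop count over all ordered source != destination pairs.
    dists = [hops(k, a, b) for a in range(k) for b in range(k) if a != b]
    return max(dists), sum(dists) / len(dists)

print(diameter_and_average(9, mesh_hops))   # (8, 3.33...): k - 1 and (k + 1)/3
print(diameter_and_average(9, torus_hops))  # (4, 2.5)
```

For the radix-9 torus, the average over distinct pairs (2.5) comes out slightly above the k/4 entry of Table 3.1, which is the large-k approximation.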
In a mesh (Figure 3.3), the destination node is, at most, k − 1 hops away. To compute the average, we compute the distance from all sources to all destinations; thus, a packet from node 1 to node 2 is one hop, node 1 to node 3 is two hops, and so on. Summing the number of hops from each source to each destination and dividing by the total number of packets sent, k(k − 1), we arrive at the average hops taken.

A packet traversing a torus network will use the wraparound links to reduce the average hop count and the network diameter. The worst-case distance in a torus with radix k is k/2, but the average distance is only half of that, k/4. In practice, when the radix k of a torus is even, there are two equidistant paths to the farthest node regardless of the direction taken (i.e., whether or not the wraparound link is used); a routing convention is then used to break the tie so that half the traffic goes in each direction across the two paths.

A binary hypercube (Figure 3.4) has a fixed radix (k = 2) and varies the number of dimensions (n) to scale the network size. Each node in the network can be viewed as a binary number, as shown in Figure 3.4. Nodes that differ in only one digit are connected together. More specifically, if two nodes differ in the ith digit, then they are connected in the ith dimension. Minimal routing in a hypercube will require, at most, n hops if the source and destination differ in every dimension, for example, traversing from 000 to 111 in Figure 3.4. On average, however, a packet will take n/2 hops.

Figure 3.4: A binary hypercube with three dimensions. [Nodes labeled 000–111 on axes x, y, z; diagram omitted.]

SUMMARY

This chapter provided an overview of direct and indirect networks, focusing on topologies built from low-radix routers with a relatively small number of wide ports. We described the key performance metrics of diameter and average hops and discussed tradeoffs. Technology trends motivated the use of low-radix topologies in the 1980s and the early 1990s.
In practice, there are other issues that emerge as the system architecture is considered as a whole, such as QoS requirements, flow control requirements, and tolerance for latency variance. However, these are secondary to the guiding technology (signaling speed) and the packaging and cooling constraints. In the next chapter, we describe how evolving technology motivates the use of high-radix routers and how different high-radix topologies can efficiently exploit these many-ported switches.
CHAPTER 4

High-Radix Topologies

Dally [18] and Agarwal [5] showed that under fixed packaging constraints, lower-radix networks offered lower packet latency. As a result, many studies have focused on low-radix topologies such as the k-ary n-cube topology discussed in Chapter 3. The fundamental result of these authors still holds – technology and packaging constraints should drive topology design. However, what has changed in recent years are the topologies that these constraints lead us toward. In this chapter, we describe the high-radix topologies that can better exploit today's technology.

Figure 4.1: Each router node has the same amount of pin bandwidth but differs in the number of ports. [(a) a radix-16 one-dimensional torus with each unidirectional link L lanes wide; (b) a radix-4 two-dimensional torus with each unidirectional link L/2 lanes wide.]

4.1 TOWARDS HIGH-RADIX TOPOLOGIES

Technology trends and packaging constraints can and do have a major impact on the chosen topology. For example, consider the diagram of two 16-node networks in Figure 4.1. The radix-16 one-dimensional torus in Figure 4.1a has two ports on each router node; each port consists of an input
and output, each L lanes wide. The amount of pin bandwidth off each router node is 4 × L. If we partition the router bandwidth slightly differently, we can make better use of the bandwidth, as shown in Figure 4.1b. We have transformed the one-dimensional torus of Figure 4.1a into a radix-4 two-dimensional torus in Figure 4.1b, where we have twice as many ports on each router, but each port is only half as wide — so the pin bandwidth on the router is held constant. There are several direct benefits of the high-radix topology in Figure 4.1b compared to the low-radix topology in Figure 4.1a: (a) by increasing the number of ports on each router but making each port narrower, we have doubled the amount of bisection bandwidth, and (b) we have decreased the average number of hops by half. The topology in Figure 4.1b does, however, require longer cables, which can adversely impact the signaling rate: the maximum bandwidth of an electrical cable drops with increasing cable length, since signal attenuation due to skin effect and dielectric absorption increases linearly with distance.

4.2 TECHNOLOGY DRIVERS

The trend toward high-radix networks is being driven by several technologies:

• high-speed signaling, allowing each channel to be narrower while still providing the same bandwidth,
• affordable optical signaling through CMOS photonics and active optical cables that decouple data rate from cable reach, and
• new router microarchitectures that scale to high port counts and exploit the abundant wire and transistor density of modern CMOS devices.

The first two items are described further in this section, while the router microarchitecture details will be discussed in Chapter 6.

4.2.1 PIN BANDWIDTH

As described earlier in Chapter 2, the amount of total pin bandwidth has increased at a rate of 100× per decade for the past 20-25 years.
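The hop-count halving claimed for the Figure 4.1 example in Section 4.1 can be checked by brute force. The sketch below (function name and approach are ours) averages the shortest-path hop count over all ordered node pairs of a k-ary n-cube:

```python
from itertools import product

def torus_avg_hops(k, n):
    # Brute-force average shortest-path hops over all ordered node pairs
    # (including self pairs) in a k-ary n-cube; per dimension, the ring
    # distance between coordinates a and b is min(|a - b|, k - |a - b|).
    nodes = list(product(range(k), repeat=n))
    total = sum(
        sum(min(abs(a - b), k - abs(a - b)) for a, b in zip(u, v))
        for u in nodes for v in nodes
    )
    return total / len(nodes) ** 2

print(torus_avg_hops(16, 1))  # 4.0 hops on the radix-16 1-D torus (Figure 4.1a)
print(torus_avg_hops(4, 2))   # 2.0 hops on the radix-4 2-D torus (Figure 4.1b)
```

Moving from the radix-16 ring to the radix-4 two-dimensional torus halves the average hop count from 4 to 2 while holding router pin bandwidth constant, consistent with benefit (b) above.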
To understand how this increased pin bandwidth affects the optimal network radix, consider the latency (T) of a packet traveling through a network. Under low loads, this latency is the sum of header latency and serialization latency. The header latency (T_h) is the time for the beginning of a packet to traverse the network and is equal to the number of hops (H) a packet takes times a per-hop router delay (t_r). Since packets are generally wider than the network channels, the body of the packet must be squeezed across the channel, incurring an additional serialization delay (T_s). Thus, total delay can be written as

T = T_h + T_s = H t_r + L/b    (4.1)
where L is the length of a packet and b is the bandwidth of the channels. For an N-node network with radix-k routers (k input channels and k output channels per router), the number of hops[1] must be at least 2 log_k N. Also, if the total bandwidth of a router is B, that bandwidth is divided among the 2k input and output channels, and b = B/2k. Substituting this into the expression for latency from Equation (4.1),

T = 2 t_r log_k N + 2kL/B    (4.2)

Then, setting dT/dk equal to zero and isolating k gives the optimal radix in terms of the network parameters,

k log² k = (B t_r log N) / L    (4.3)

In this differentiation, we assume B and t_r are independent of the radix k. Since we are evaluating the optimal radix for a given bandwidth, we can assume B is independent of k. The t_r parameter is a function of k but has only a small impact on the total latency and has no impact on the optimal radix. Router delay t_r can be expressed as the number of pipeline stages (P) times the cycle time (t_cy). As radix increases, the router microarchitecture can be designed so that t_cy remains constant and P increases logarithmically. The number of pipeline stages P can be further broken down into a component that is independent of the radix, X, and a component that is dependent on the radix, Y log₂ k.[2] Thus, router delay (t_r) can be rewritten as

t_r = t_cy P = t_cy (X + Y log₂ k)    (4.4)

If this relationship is substituted back into Equation (4.2) and differentiated, the dependency on radix k coming from the router delay disappears and does not change the optimal radix. Intuitively, although a single router's delay increases with a log(k) dependence, the effect is offset in the network by the fact that the hop count decreases as 1/log(k); as a result, the router delay does not significantly affect the optimal radix. In Equation (4.2), we also ignore the time of flight for packets to traverse the wires that make up the network channels.
The time of flight does not depend on the radix (k) and thus has minimal impact on the optimal radix. Time of flight is D/v, where D is the total physical distance traveled by a packet and v is the propagation velocity. As radix increases, the distance between two router nodes increases. However, the total distance traveled by a packet will be approximately equal, since the lower-radix network requires more hops.[3]

From Equation (4.3), we refer to the quantity A = (B t_r log N) / L as the aspect ratio of the router [42]. This aspect ratio impacts the router radix that minimizes network latency. A high aspect ratio implies that a "tall, skinny" router (many narrow channels) minimizes latency, while a low ratio implies a "short, fat" router (few wide channels).

[1] Uniform traffic is assumed, and 2 log_k N hops are required for a non-blocking network.
[2] For example, the routing pipeline stage is often independent of the radix, while switch allocation is dependent on the radix.
[3] The time of flight is also dependent on the packaging of the system, but we ignore packaging in this analysis.
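Equation (4.3) has no closed-form solution for k, but it is easy to solve numerically. The sketch below (function name and bisection bounds are our own choices) reads the logarithms in Equation (4.3) as natural logarithms, i.e., it solves k·(ln k)² = A for a given aspect ratio A:

```python
import math

def optimal_radix(aspect_ratio):
    # Solve k * (ln k)**2 = A for k by bisection; the left-hand side is
    # monotonically increasing for k > 1, so bisection converges.
    lo, hi = 2.0, 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.log(mid) ** 2 < aspect_ratio:
            lo = mid
        else:
            hi = mid
    return lo

print(round(optimal_radix(652)))   # 45
print(round(optimal_radix(3013)))  # 128
```

For the aspect ratios quoted in the text, A = 652 gives k ≈ 45 and A = 3013 gives k ≈ 128, matching the optimal radices reported for the two technology points in Figure 4.3.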
Figure 4.2: Relationship between the optimal radix for minimum latency and router aspect ratio. The labeled points show the approximate aspect ratio for a given year's technology (1991, 1996, 2003, 2010) with a packet size of L = 128 bits. [Log-log plot, aspect ratio 10–10000 vs. optimal radix k, 1–1000; plot omitted.]

Figure 4.3: Latency (a) and cost (b) of the network as the radix is increased for two different technologies. [(a) latency (nsec) vs. radix and (b) cost (# of 1000 channels) vs. radix, each for 2003 and 2010 technology; plots omitted.]

A plot of the minimum-latency radix versus aspect ratio is shown in Figure 4.2, annotated with aspect ratios from several years. These particular numbers are representative of large supercomputers with single-word network accesses[4], but the general trend of the radix increasing significantly over time remains. Figure 4.3(a) shows how latency varies with radix for the 2003 and 2010 aspect ratios. As radix is increased, latency first decreases as hop count, and hence T_h, is reduced. However, beyond a certain radix, serialization latency begins to dominate the overall latency, and latency increases. As bandwidth, and hence aspect ratio, is increased, the radix that gives minimum latency also increases. For 2003 technology (aspect ratio = 652), the optimum radix is 45, while for 2010 technology (aspect ratio = 3013) the optimum radix is 128.

[4] The 1996 data is from the Cray T3E [55] (B = 48 Gb/s, t_r = 40 ns, N = 2048); the 2003 data is combined from the Alpha 21364 [49] and Velio VC2002 [20] (1 Tb/s, 10 ns, 4096); and the 2010 data was estimated as (20 Tb/s, 2 ns, 8192).
  • 50. Another Random Document on Scribd Without Any Related Topics
  • 51. CHAPTER XV A RUDE AWAKENING “Look out!” was White’s warning to Lieutenant Wingate, as the guide sprang forward to the man on the ground. “Is he dead?” called Elfreda, getting up to go forward to the visitor’s assistance. “No. Stay where you are for the present, please.” The camp was silent for a moment, then White stood up. “It’s Jim Haley!” he announced. “And he has been pretty roughly used.” “The Man from Seattle!” cried the girls. Elfreda was at his side instantly. “Is he wounded?” she asked. “I think not,” replied the guide. “See if he has any peanuts with him,” advised Stacy Brown. “Stacy!” Hippy’s voice was stern, and the fat boy subsided. A quick examination by White and Miss Briggs failed to reveal any wounds. They brought water, and Elfreda bathed Haley’s face, which, though bloody, was only scratched, probably by contact with bushes. It took but a short time to revive him, his trouble being almost wholly exhaustion. Grace hastened to make a pot of tea, which Haley gulped down and instantly recovered himself. “Sorry I lost my samples, or I’d not have been in this shape,” he said, grinning. “What happened to you?” Hippy asked. “Same old story. The mountain ruffians wanted peanuts, so they tackled me. One taste of the International’s product and men will commit murder to get more of it. I threw away all I had, and they’re picking them up along the trail. It was the only way I could get rid of the scoundrels. Then I got into more trouble. A pack of wolves got
  • 52. the scent of the peanuts and they tackled me, too, but I hadn’t any of the International’s product to throw to them, so I had to run for it. They chased me nearly all the way in. ‘Good for man and beast’ is the slogan that I shall send on to the International for use in their publicity matter.” The girls were now laughing heartily, but, as they recalled the manner of Haley’s leaving them, they subsided abruptly. Haley’s now merry eyes caught the significance of the change.
  • 53. “I’m Done For!” “What have I said or done now? Is it because I have no peanuts for you good people?” “I think the young ladies would like an explanation of your sudden departure the other night,” spoke up Hippy Wingate.
  • 54. “Were I to tell you that I ran away because I was afraid, you probably would not believe me, so I’ll not tell you that. There are some things one can speak of freely, and others that he cannot. This latter happens to be my difficulty now. If you feel that you do not want me, of course I shall not impose upon you. I thank you, but I warn you that you are not to enjoy any of the International’s product until you reach home. They eat ’em alive up here.” “You are quite welcome to remain as long as you wish. Please stay over Sunday with us, Mr. Haley,” requested Grace. “We hope to have a spread for our Sunday dinner,” she added laughingly. “You win, Mrs. Gray. Unfortunately, my International raiment is in a sad condition, but if you will lend me a pair of shears I’ll cut off the ragged ends and try to make myself presentable.” The girls, at this juncture, bade the men good-night and turned in, for there were not many hours left for sleep, and they were now very tired after the exciting night through which they had passed. A few words passed between the guide and the peanut man, and Ham White listened with a heavy frown on his face. “I won’t do it!” he exclaimed. “Do you think you would were you in my position?” “If the International’s product didn’t pay me I should,” answered the peanut man, with a twinkle in his eyes. “Oh, hang the International!” retorted White. “I give you fair warning that I’ll not double-cross these young women for you or for any of your confounded outfit. I’ve done enough already, and I am thinking of going to them and making a clean breast of what I have done and then get out.” “Don’t be a fool, White. Here! Read this.” Haley extended a folded slip of paper to the guide, who opened and read it, the frown deepening on his forehead. White handed back the slip of paper, and resting his chin in the palm of his hand sat regarding the distant campfire thoughtfully, for they had withdrawn out of earshot of the camp for their conversation.
  • 55. “Very well!” agreed Hamilton White after a few moments’ reflection. “I might as well be hanged for a sheep as a wolf, but if anything happens here as a result I shall tell why. Remember that, Haley.” “Oh, well, what’s a bag of peanuts more or less?” was the enigmatic reply of the Man from Seattle. “I’ll take a nip of sleep, if you don’t mind, and be on my way, but not far away.” The queer visitor took the blanket that had been given to him, and, walking back into the forest a short distance from the camp, lay down and went to sleep. The guide did not turn in at all, but sat silently in the shadows, rifle at his side, thinking and listening. Thus the rest of the night passed, and day began to dawn. With the breaking of the day Hamilton White climbed the miniature mountain, and drawing a single-barreled glass from his pocket began studying the landscape. A tiny spiral of smoke about two miles to the north claimed his instant attention. He studied it for a few moments. At first the smoke was quite dark, then the spiral grew thin and gray as it waved lazily on the still morning air. “Someone is building a breakfast fire,” he muttered. “And they know how to build a fire, too. That may be Haley’s crowd. Ah!” As White slowly swept his glass around he discovered something else that aroused his keen interest. On a distant mountain a flag was being wigwagged. He could not see the operator of it, but he was able to follow the message that was being spelled out. Another shift of his glass and a careful study of known localities enabled the guide to find the person who was receiving the message, and soon the receiver began answering with his signal flag. Ham White grinned as he read both messages. “The forest eyes of Uncle Sam!” he murmured. 
The signalers were forest lookouts whose eyes were constantly on the alert watching over the vast forest within their range for suspicious smokes, and they were having a friendly Sunday morning conversation over a distance of nearly four miles. Ham read and smiled.
  • 56. “If they knew they would be more careful of what they said,” he chuckled, then a few moments later he climbed down, returned to camp and started the breakfast fire. He fried some strips of bacon, put on the coffee, and then he sounded the breakfast call. “Come and get it!” was the call that rang out on the mountain air. The Overlanders thought they wanted to sleep, in fact, they were hardly awake when they got lip grumbling, in most instances, and began hurriedly dressing. All were shivering, for the air was very chill. The odor of the breakfast, when they smelled it, added to the haste of their dressing. “Stick your heads in the cold water and you will be all right,” advised the guide. The girls returned from the spring, their faces rich with color, eyes sparkling, and ready for breakfast. “How are the appetites? I don’t ask you, Mr. Brown. You have proved to my satisfaction that you can eat whether you are hungry or not,” laughed White. “We are ready for breakfast, sir,” answered Elfreda Briggs. “My, but it does smell good.” “Where is Mr. Haley?” questioned Grace, regarding the guide with a look of inquiry in her eyes. “He thought best to sleep outside of the camp, and no doubt has gone on before this.” “Why, Mr. White?” persisted Grace. “That is a question that I can’t answer just now, Mrs. Gray,” returned the guide, meeting her eyes in a level gaze. “Oh, very well. We will have breakfast.” “We will,” agreed Stacy, and began to help himself from the frying pan, when the guide smilingly placed a hand on the fat boy’s arm. “You forget the ladies, Mr. Brown,” he reminded. “Forget them? How could I?” “It is you who forget, Hamilton,” interposed Emma. “You forget that Stacy Brown never was brought up.” “Give me the chuck!” whispered Stacy. “Heap the plate.”
  • 57. White, catching the significance of the request, heaped the plate, and Stacy bore it to Emma with great dignity. He bowed low and offered the plate. “Your highness is served,” he said. “If you will be so kind as to call your sweet soul to earth from the ethereal realms above long enough to feed that sweet soul on a few fat slices of common pig, you will be a real human being. I thank you,” added the boy, as Emma, her face flushing, took the plate, her lips framing a reply which was never uttered. The shout of laughter that greeted Stacy’s act and words left Emma without speech. Nor did she speak more than once during the meal, then only to ask for another cup of coffee. Breakfast finished and the morning work done in camp, the three men went out to groom the horses, while Grace and Elfreda strayed away. Their objective was the rock from which Ham White had made his early observation. “Have you the diary?” asked Grace as they seated themselves. “Oh, what a wonderful view. Isn’t it superb?” “Yes, I have the diary, and I see the view, and agree with you that it is superb, but suppose we get down to business before we are interrupted. I do not believe we shall be spied on here, at least,” said Elfreda, glancing about her. The thumb-worn book was produced, and the girls bent over it, beginning with the first page. There were daily weather comments, movements of the prospector from place to place, little incidents in his daily life, none of which seemed to shed any light on the subject in which the two girls were interested. “Here is something!” breathed Grace finally, and read, under date of April 30, the following paragraph: “‘Plenty here. Dare not dig, for am watched. Picked up in channel enough pay-dirt to keep over next winter. Channel itself ought to pan out fortune, but shall have to have help. Isn’t safe to try it alone. The gang of cutthroats would murder me. Some day mebby they’ll get me as it is.’”
  • 58. “Hm-m-m-m,” murmured Miss Briggs. “I wondered why, if he had made such a find, Mr. Petersen shouldn’t get out the gold and put it in a safe place before someone got ahead of him. The diary seems to furnish a reason for his delay. He must refer to the Murray gang.” “Listen to this entry, Elfreda,” begged Grace, reading: “‘Queer thing this morning. The sun was shining on the children, and on grandma’s bonnet, but her face was as black as a nigger’s. I wonder if that was a warning to me to keep away. Gold, gold! How terrible is the lure for the yellow stuff. It gets into the blood, it eats into the heart. It’s a frightful disease.’” “That checks up with what Mr. Petersen had me to write down, doesn’t it, Grace?” breathed Elfreda. “Undoubtedly. He must refer to the same thing, but it doesn’t give us the least idea where the place is.” “The man would be a fool to write a thing like that in a diary—to tell where and how. Anything else? There is something on the next page.” “Yes,” answered Grace, turning the page and reading: “‘Though I haven’t found it, I know pretty well where the mother lode is, but I’m afraid of it—afraid to look for it. I’m afraid the wealth I should find there would kill me just because of the responsibility of possessing it. Then again, what is there left in life after a man has got all he has dreamed of, and yearned for, and fought for, and worked for, up to that time? Nothing!’” “What a philosopher!” marvelled Grace Harlowe. “He is right, too,” agreed Miss Briggs. “Suppose we forget about it, also,” urged Elfreda. “I am tired of it.” “J. Elfreda, if I didn’t know you so well, I should believe you are in love, you are so gloomy. Listen! Mr. Petersen probably has no one surviving him. He wished you to have what he had found. It was the request of a man about to pass out; it was a trust, Elfreda. One day someone, perhaps the very ones who tried to kill him, will stumble on the Lost Mine. 
I should say that the prospector’s request imposed a duty on you, my dear—a duty to go to the place he names, take possession of what you may find there and keep it for your own. You
  • 59. can’t expect to make a fortune practicing law, especially if you don’t do more practicing than you have done in the last few years. I fear these summer outings of ours have cost each of us something.” Elfreda said she didn’t regret the loss of time. Her time was her own, and she had sufficient funds to enable her to take care of herself and the little daughter that she had adopted a few years before. “The question is, though, how am I going to find this place—how are we going to find it, I mean, for what I find is for the outfit, not for my own selfish self. I—” Elfreda’s eyes had been wandering over the scene that lay before them as Grace slowly turned the leaves of the diary. Miss Briggs thought she had seen a movement off to the right at the edge of the rock farthest from the camp. “What is it?” demanded Grace, glancing up quickly. “Nothing. Go on. Find anything else?” “Only this: ‘When the sun is at the meridian the sands turn to golden yellow,’” read Grace. “What does he mean, do you think?” “I suppose he means to convey that the bed of the dry stream, if it is dry, shows a sort of golden strip. That is all I can make of it. There seems to be nothing else in the book in reference to the subject in which we are particularly interested. I am certain that the poor man knew what he was saying; I believe that he believed he had found what he says he found. Whether he did find it or not is quite another matter. In any event Lost River and the lost mine are well worth looking for as we go along. If there be such a place, Overland luck will lead us to it,” finished Grace. “I doubt it—I was going to say I hope Overland luck doesn’t lead us to it, to our River of Doubt. Oh, Grace!” “Wha—at is it?” “Oh, look!” A black head of hair, lifted just above the level of the rock on the far side, revealed a low forehead and a pair of burning black eyes— evil eyes they seemed to the two startled girls. They could not see
the hands that were gripping the edge of the rock, but what they could see was sufficient to fill them with alarm.

Without an instant’s hesitation, Elfreda Briggs snatched up a chunk of flinty rock and hurled it with all her might. The chunk of rock fell a couple of yards short of the mark, bounced up into the air, and landed fairly on the man’s head.

“Who says a woman can’t throw a stone!” cried J. Elfreda Briggs almost hysterically.
CHAPTER XVI

BANDITS TAKE THEIR TOLL

“Run!” cried Grace.

“The diary!” exclaimed Elfreda, as Grace dropped the book, snatched it up, and ran clambering down the rocks.

The guide saw them coming, saw that something was wrong, and strode forward to meet the two girls.

“What is it?” he asked sharply.

“A prowler,” answered Grace, out of breath.

“Where?”

“There! On the other side of the rock. He was spying on us, and I think Miss Briggs hit him with a piece of rock,” exclaimed Grace.

“Lieutenant!” called Hamilton White, and sprinted around the base of the big rock.

Hippy Wingate was not far behind him, though Hippy did not know what had occurred, nor did he wait for an explanation. He knew that there was trouble, and that was sufficient for him. The two men reached their objective at about the same time. White was peering at the rocks and bushes at the base of the big rock.

“Miss Briggs did hit him. See the blood there, and the bushes crushed where he fell. She must have given him a good wallop,” he chuckled.

White began to run the trail, a trail that was plain and easily followed. Hippy was right behind him, using his eyes to good advantage.

“Lieutenant, I think you had best go back and watch the camp. This may be a trick to coax us men away. Keep a sharp lookout. Have Brown stand guard with you. There is little need to worry, for we can see and hear. Skip!” urged the guide.
Hippy lost no time in getting back to camp, and when he reached there he found Grace and Elfreda laughing, and explaining to their companions what had happened. They repeated the story to him.

“Oh, well, let them fuss. They can’t do anything to us,” averred Lieutenant Wingate after he had heard all of the story. “I’ll sit on top of the rock and watch over you children.”

“That’s what I say,” agreed Stacy. “We men can beat them at their own game, and have a lap or so to spare. Ham will chase them so far away that they never will find their way back. If he doesn’t I will.”

“Don’t be too positive,” admonished Grace. “I think it wise for us to be on the alert. For some reason those ruffians are determined to be rid of us, at least.”

“Oh, I hope Hamilton will take care of himself,” murmured Emma, whereat her companions laughed heartily.

None of the girls left the immediate camp all that morning; they even sent Stacy to the spring for water, much to that young man’s disgust, for Stacy had planned on having a fine day’s sleep in his tent. Noon came, and the guide had not returned, so Grace decided that they would have something to eat. The girls got the meal. After they sat down to eat, the girls tried to be merry, but they admitted that they missed Hamilton White, though none felt alarm at his absence.

The meal finished, dishes were washed and put away, and packs laid out for a quick move, in the event of that becoming necessary, for by this time the Overland Riders had learned to be ready at a moment’s notice.

Hippy from his point of vantage kept guard over the camp and its vicinity, now and then studying the view spread out before him. The air was fragrant with the odor of the forest, and Hippy grew sleepy. To keep awake he decided to get down and walk. This he did, reaching the ground on the side of the rock farthest from the camp.
The Overlander, with only a revolver, strolled through the forest making a circle around the camp, and studying the trees for blazes and the ground for indications of recent visitors. Now and then he
would sit down, back against a tree, and gaze up into the blue sky and the waving tops of the big pines.

The afternoon wore away and Hippy was still trail-hunting. It was near supper time when Nora called him. There was no answer, so she climbed the rock, expecting to find her husband sleeping, for Hippy loved sleep fully as much as Stacy Brown did. Lieutenant Wingate was not on the rock, but Nora found his rifle lying there. She ran back to her companions in alarm.

“Hippy isn’t there!” she cried. “Oh, girls, can anything have happened to him?” Nora was on the verge of tears.

“No, of course not,” comforted Grace.

“Then where is he?”

“Probably asleep somewhere about,” suggested Emma. “You know he and Stacy have the sleep habit.”

“I don’t believe it. I am going out to search for him.”

“Nora, you will not!” differed Grace with emphasis. “We will all remain where we are. To get separated would be foolish. Hippy is all right, so sit down and chat with us. Mr. White will be along soon, and some others besides Emma Dean will be glad to see him,” she added, with a teasing glance at Emma.

The Overland girls ate a cold supper that night, no one feeling like cooking or sitting down to a hearty meal. Nora was so worried that she refused to eat at all, and, while the other girls were equally disturbed, they masked their real feelings by teasing each other. Emma and Stacy were ragged unmercifully.

Darkness settled over the forest, but still no Hippy, no guide.

“I think it will be advisable to bring in the horses, don’t you, Elfreda?” asked Grace.

Miss Briggs and the others thought that would be a wise move, so the ponies, and such of their equipment as was outside the camp, were brought in; fuel was gathered and piled up so that they might keep the fire burning; then the party sat down in their tents, with blankets thrown over their shoulders, and began their watch.
It was ten o’clock that night when the hail of Ham White was heard, and after the tension of the last few hours the Overland girls felt like screaming a welcome. Instead they sprang out and stood awaiting him.

“Well, did you good people think I had deserted you?” he cried out. “I am nearly famished. Is there anything left from dinner?”

“Yes, of course there is. I will get you something. First I must tell you. Mr. Wingate has been missing since some time this afternoon. We don’t know what to make of it unless he has fallen asleep somewhere,” said Grace.

“What! Tell me about it.”

Nora told the guide the story, explaining that Hippy had taken up his station on the rock to guard the camp, and that that was the last they saw of him. Ham White was disturbed, but he did not show it. Instead he laughed.

“No doubt, as Mrs. Gray has suggested, he has gone to sleep. Where is Mr. Brown?”

“He is asleep in his tent, as usual,” spoke up Emma. “Oh, Hamilton, won’t you please find Hippy—now?”

“I will do my best. Give me a snack and I’ll go out now. I followed the other trail for something like five miles. There were four men in the party, only one of whom came near the camp. The trail finally bumped into the side of a mountain and I lost it. It was so dark I could not follow it farther. Thank you!” he added, as Emma handed him some bacon. “I will go right out.”

They followed him around the rock and watched with keen interest as Ham White searched for and found the trail of the missing Hippy, which he followed, with the aid of his pocket lamp, for some distance.

“He was strolling,” announced the guide. “You can see here where he sat down to rest, then went on. Please return to camp. Unless he wandered off and lost his way, I shall probably soon find him.”

The girls promptly turned back towards camp, Nora with reluctance, which she made no effort to conceal. Then followed two
hours of anxiety. The guide returned shortly after midnight.

“There is no use of searching farther to-night,” he announced. “Mr. Wingate undoubtedly has strayed away, but I’ll find him in the morning. Please turn in and get some rest, for we shall undoubtedly have an active day to-morrow. In any event, don’t lose your nerve, Mrs. Wingate. The Lieutenant has had enough experience to know how to take care of himself.”

Nora went to her tent weeping, Emma Dean’s arm around her, but Grace held back at a gesture from Elfreda, who had observed that the guide studiously avoided looking directly at Nora Wingate.

“Mr. White, have you anything to say to us?” questioned Elfreda.

“Meaning what?”

“We wish to know what you really did discover. It was well not to say any more than you did to Mrs. Wingate.”

“You made a discovery of some sort—of that we are convinced,” spoke up Grace.

“Yes, I did,” admitted White. “I found the lieutenant’s revolver beside a tree where he had been sitting. His trail ended there!”

“Meaning?” persisted Miss Briggs.

“That he was attacked and carried away, in all probability. I found evidences of that.”

“What can be done?” demanded Elfreda.

“Nothing until morning. I have means of obtaining assistance, which I will employ as soon as it is light enough to see.”

The girls turned away and walked slowly to their tent, and the guide stepped over to the tent occupied by Hippy and Stacy Brown. He was out in a moment and striding towards Elfreda’s quarters.

“Miss Briggs! Mrs. Gray!” he called.

“Yes!” answered the voices of Elfreda and Grace.

“Stacy Brown is not in his tent. There has been a struggle, and the boy has been forcibly removed,” was the startling announcement.
CHAPTER XVII

A TEST OF COURAGE

“Sta—Stacy gone?” exclaimed Elfreda Briggs. “It can’t be possible. He is playing one of his practical jokes on us.”

“Let us look, but don’t disturb Emma and Nora if it can be avoided,” urged Grace.

The two girls, with the guide, repaired to Lieutenant Wingate’s tent, and examined it, using their pocket lamps. It was as Hamilton White had said—there was every evidence that a struggle had taken place there. The fat boy’s hat and his revolver lay where they had been hurled to one side of the tent. His blouse was a yard or so to the rear, and the imprint of his heels where they had been dragged over the ground was plainly visible.

“He must have been asleep,” nodded White.

“Yes,” agreed Grace. “If awake Stacy would have set up such a howl that none could have failed to hear. When do you think this was done, Mr. White?”

“When we were out looking for the lieutenant. If you will remember, Mr. Brown remained behind.”

“Do you think it wise to follow his trail?” asked Grace.

“No. Not now. I dare not leave the camp. All this may be part of a plan. My duty is here, at least until daylight, when I will get into communication with those who will find both men.”

“You think so, Mr. White?” questioned Elfreda anxiously.

“Yes. It is the work of the same gang, but what their motive is we can only surmise. You and Mrs. Gray may know.”

Elfreda felt her face growing hot, and a retort was on her lips, but she suppressed it.
“Mrs. Gray, if you think I should try to run the trail now, I will do so, but it would be against my judgment. I hope you do not insist,” said White, turning to Grace.

“I believe you are right,” answered Grace. “Come, Elfreda, we will go to our tent, for no serious harm can come either to Hippy or Stacy. They dare not harm them.”

Ham White did not reply. He knew the character of the men who committed that piece of banditry, and knew that they would hesitate at no crime to gain their ends, whatever those ends might be.

The guide got no sleep that night. Mindful of the attacks that had been made on the camp, he took up his position at a distance, and, with rifle in hand, sat motionless the rest of the night. From his position in the deep shadows he commanded a view of the entire camp, which was dimly lighted by the campfire all night long. There were occasional sounds that Ham White did not believe were made by marauding animals, but none were definite enough to warrant exposing his position. During his vigil nothing occurred to disturb the sleepers.

The graying mists of the early morning were rising from gulch and forest, enfolding the mountaintops, when Ham White stole around the camp, scrutinizing every foot of the ground. By the time he had completed this task the mists were so far cleared away that a good view of the surrounding country might be had. From his kit the guide selected a wigwag signalling flag, and taking one of the tent poles for use as a flagstaff, he went cautiously to the high rock that stood sentinel over the Overland camp, and climbed to its top.

“I hope none of the girls wake up,” he muttered, peering down into the camp, which was as quiet as a deserted forest.

Ham White, after attaching the flag to the pole, began waving it up and down, which in the wigwag code means, “I wish to speak with you.”

It was at this juncture that Grace Harlowe slowly opened her eyes. Where she lay she could look straight up to the top of the rock
without making the slightest movement, and her amazement must have been reflected in her eyes. Like several of the Overland girls, Grace’s experience in the war had included learning to signal and to read signals. She was out of practice, but was easily able to read any message not sent too fast. Ham began his message, after getting the attention of the persons to whom he was signalling, at a speed that Grace could not follow. She did, however, catch a few words that were enlightening.

“Trouble—Haley—Trail—Send word—Caution—Great secrecy or expose hands—Fatal to—” were some of the words that she caught as the guide flashed them off. Then he paused.

“How I wish I could see the answer,” muttered the Overland girl, as she watched Hamilton White, with glasses at his eyes, receiving the message that was being sent to him.

Grace Harlowe’s, however, were not the only pair of eyes that witnessed that exhibition of signalling. Other eyes were observing, but that other pair could not read a word of what the signallers were saying.

White dropped his glasses and snatched up his flag, and she read, this time with greater ease:

“It may be fatal. Great danger to both. My responsibility. Must have instant action. This an order. Obey without loss time. Report soon as anything to say.”

The guide signed his name, and the words that followed the signature filled Grace Harlowe with amazement. She saw the guide remove the flag from its staff and hide it under a stone, after which he descended to the camp, passing the open tents without so much as a glance at them. Ham stirred up the fire and put over the breakfast, and, while it was cooking, Grace came out, greeting him cheerfully.

“Is there any news, Mr. White?” she asked sweetly.

“No, not yet.”

“What have you done?”

“I signalled to a fire-lookout station that assistance was needed. It is best to wait until we hear from them.”

“How, signal?” she questioned, appearing not to understand.
“By the air route, Mrs. Gray,” was the smiling reply.

Grace Harlowe shrugged her shoulders.

“You are a very clever man, Mr. White,” she said, and walked to her tent to awaken Miss Briggs.

When informed that Stacy Brown was missing, a few moments later, Nora Wingate became hysterical, but Grace and Elfreda calmed her, and the party were ready to sit down to breakfast when the guide announced it as ready. It was a trying, anxious morning for the little band of Overlanders. White made frequent trips to the rock, observed questioningly by Elfreda.

“What is he looking for, Grace?” she asked. “Does the man expect to find the bandits that way?”

“I don’t know. Why not ask him, J. Elfreda?”

“Not I. You know I would not.”

About mid-forenoon Grace suggested to the guide that he go out into the forest and see if he could glean any information as to the direction that the kidnappers had taken when they left the camp, with either Hippy or Stacy Brown. White pondered the subject a moment, then agreed.

“If you will promise not to leave camp, and to fire a shot at the least suspicious sound or occurrence, I will go out,” he said. “One of you had better go to the rock and take station there until my return.”

Grace said she would do that. Matters were working out to her satisfaction, and, after telling Elfreda to take her rifle and post herself a short distance to the rear of the camp, and assigning Emma and Nora to the right and left ends of their camping place, Grace climbed the rock and sat down.

After Ham White, following a survey of the camp and her arrangements, of which he approved with a nod and a wave of the hand, had left the camp, Grace got up and looked for the signal flag, which she found under a flat stone.

“Now! Having disposed of my companions I shall see what I shall and can see,” she told herself.
Securing the signal flag, the Overland girl took a survey of the landscape. A vast sea of dense forest lay all about her, broken here and there by a white-capped mountain. Nothing that looked as if it might be a fire-lookout station attracted her eyes. She had used her field glasses, but without result.

A moment of vigorous signalling on her part followed, after which Grace swept the landscape again. She discovered nothing at all. Another trial was made, and the word “answer” was spelled out by her. Her eye caught a faint something far to the north of her, and Grace’s glasses were at her eyes in a twinkling. A little white flag was fluttering up and down against the background of forest green in the far distance.

“I’ve got him!” cried the girl exultingly. “I’ve got him!”

Then, wigwagging, Grace Harlowe signalled the one word, “Report!”

“Who?” came the answer, almost before she could get the glasses to her eyes to read the message.

“For White,” she wigwagged. “Report!”

Holding the flag, now lowered to the rock, with one hand, the other holding the glasses to her eyes, Grace bent every faculty to watching that little fluttering, bobbing square of white, that, at her distance from it, looked little larger than a postage stamp.

“Repeat!” she interrupted frequently, whenever part of a word was missed.

It was a laborious effort for her, out of practice as she was, and the exchange of messages lasted for a full half hour before the Overland girl gave her unseen, unknown signaller the “O. K.” signal. Grace folded the flag and placed it under the stone, then straightened up.

“Mr. Hamilton White, I have you now!” she exclaimed, a triumphant note in her voice.
CHAPTER XVIII

THE FLAMING ARROW

“Where am I at?”

It was Hippy Wingate’s first conscious moment since he was struck down while sleeping with his back against a tree not far from the Overland camp. All was darkness about him as he awakened in unfamiliar surroundings. Essaying to rise, the Overlander discovered that he was bound. Still worse, there was a gag in his mouth. A gentle breeze was blowing over him, and at first he thought he was still under the trees. Hippy then realized that there was a hard floor beneath him. His head ached, and when he tried to sit up he found that it swam dizzily.

“I wonder what happened to me?” he muttered. “Hello!”

There was no response to his call; in fact, his voice, still weak, did not carry far and it was thick because of the gag. Then began a struggle with himself, that, while it exhausted him for the time being, aided in overcoming his dizziness.

Hippy heard men conversing, heard them approaching, whereupon he pretended still to be unconscious. A door was flung wide open, and a lantern, held high, lighted up the interior of the building with a faint radiance.

“Hain’t woke up,” announced one of the two men who stood in the doorway.

“Mebby he never will,” answered the other.

“I don’t reckon it makes much difference, so long as we got two of ’em,” returned the first speaker. “What shall we do—let ’im sleep?”

“Yes.”

The man with the lantern strode over and peered down at the prostrate Overlander, while the prisoner, from beneath what seemed
to be closed eyelids, got a good look into the swarthy, hard-lined face. Lieutenant Wingate would remember that face—he would remember the voices of both men—would know them wherever he heard them.

“Let ’im sleep. When he wakes up we’ll have something to say to ’im.”

With that the two men went out, slamming the door behind them.

The lantern light had shown Hippy that he was in a log cabin. At his back was a window, or a window-opening, for which he was thankful, as it offered a possible way of escape. But how, in his present condition, could he hope to gain his liberty? There was no answer to the Overlander’s mental question. First, he must regain his strength. The leather thongs with which he was bound interfered with his circulation, and his legs were numb. So were his arms, and his jaws ached from the gag that was between his teeth. In fact, Lieutenant Hippy Wingate did not remember ever to have suffered so many aches and pains at one time as he had at that moment.

He began his struggles again, but more with the idea of starting his circulation and gaining strength than with any immediate hope of escape. By rolling over several times he was able to reach the door, but having reached it he had no hands with which to open it. Hippy wanted to look out. Failing there, he bethought himself of the window, and rolled back across the floor to it. Exerting a great effort, he managed to work his head up to the window so he could see out.

The night was dark, but the Overlander was able to make out trees and rugged rocky walls, together with what appeared to be a dense mass of bushes. The scene was unlike anything he had seen in the State of Washington since his party had started on their outing.

“I may be up in the Canadian Rockies, for all I know,” he muttered.

Hippy sank down, weak and trembling.
For a change, he rolled back and forth, pulling himself up to the window again and again, and each time found himself stronger than before.

“If I were free and had a gun I’d show those cowards something!” raged the Overlander, his anger rising. “Why did they have to pick on me? I wonder what the folks at the camp are think—”

“Sh-h-h-h!”

It was a low, sibilant hiss from the window, and Hippy fell suddenly silent.

“Keep quiet and listen to me,” warned a hoarse voice. “The gang is out of range, but we don’t know when one or more of ’em will be back. I’m coming in.”

Not being able to answer, except with a grunt, the Overlander merely grunted his understanding. The stranger leaped into the room and felt for the prisoner.

“I am going to cut you loose. Are you wounded?”

“No, I think not,” mumbled Hippy, but his words were unintelligible.

The first thing the stranger did was to remove the gag, which he did with so much care that the operation gave no pain. Then came the leather thongs. These he ripped off with a few deft sweeps of a knife, and Lieutenant Wingate was a free man so far as his bonds were concerned.

“Can you walk?” in the same hoarse voice.

“I could fly if I had to,” was the brief reply. “Who are you?”

“You wouldn’t know if I told you. Here!” The man thrust a revolver into his hand. “Don’t use it unless you have to. We aren’t out of the woods by a long shot. Come!”

The stranger assisted Hippy through the window, which was accomplished with some difficulty, for Lieutenant Wingate was stiff and sore. A firm hand was fixed on his arm, and his companion began leading him rapidly away. Not a word was spoken for several minutes—not until they had plunged into the dark depths of a canyon, through which the man picked the way unerringly.
“How are you standing it?” was the question abruptly put to Lieutenant Wingate.

“Rotten! But I’ll pick up speed as I go along and get my motors warmed up.”

The stranger chuckled.

“Where are we going?”

“We are headed for your camp, but it’s quite a hike and a hard one. If you get leg-weary, stop and rest a bit. How’d they get you?”

“I went to sleep just outside the camp, and I think I must have got a clump on the head. Ouch!” Hippy had lifted a hand to his head, and felt there a bump as big as an egg. “I guess I did get a clump. It’s a wonder I’m not dead. When is it, to-day or to-morrow?”

“It’s the day after,” was the half humorous reply.

“Please tell me how you found me?” asked the Overlander.

“Ham White got in touch with some people I know. They got word to me, and gave me the tip. The same people saw the gang that got you heading for the pass where you were taken, so I made for that place as soon as I got the word from White. I was lucky; I might have had to hunt the whole state over for you. The gang made a bad play when they picked you up. We’ve got a line on them now.”

“Who is we?” interjected Hippy.

“All of us,” was the noncommittal reply. “Don’t speak so loudly. It isn’t safe yet.”

That walk Hippy Wingate never forgot. Every step sent shooting pains through his head and legs. He stumbled frequently, but every time the grip of the stranger tightened on his arm, and he was kept on his feet.

“When you get to camp, tell your people to watch out. Some of the gang are still out on trail. I reckon they aren’t out for any good, and they may be planning to rush your camp and get the rest of your party.”

“Why do they want us?” wondered Lieutenant Wingate. “Is it robbery?”
“Yes, but not the sort of robbery you think. Tell your friend Miss Briggs that it’s time she told her party her story. She knows why.”

“I begin to see a light,” muttered the Overlander. “Say! There’s something familiar about your voice, but I can’t place it. Got a cold?”

“Yes.”

Little conversation was indulged in after that, and at last Hippy’s rescuer halted and pointed.

“See that light?” he asked in a whisper.

“Yes.”

“That’s your camp. I leave you here. Take my advice, and don’t make much noise to-night. Keep your fire low, and post guards. Tell White there is a man out here wants to see him. You need not let the others know about my being here. I’m in a hurry. Good-night.”

“But—won’t you come—”

“Go on!”

Hippy wavered a little as he started towards the camp, into which he staggered a few minutes later. A cry greeted his appearance, and Nora’s arms were flung about his neck ere he had fairly reached the light of the campfire. He held up his hand for silence.

“Give me something to eat, if you love me. I’m famished.”

Nora ran for the coffee pot, which Ham White took from her. Hippy stepped over to him and whispered something to the guide, as he relieved White of the coffee pot. White immediately left the camp. By now the other members of the party were about Hippy, showing their joy at his return.

“Have you seen Stacy?” demanded Grace eagerly, as soon as she could get his attention.

“No. Why?”

“He, too, has been missing, and—”

“The curs!” raged Lieutenant Wingate. “So they got him, too, did they?”
“Never mind now. You must drink and eat. Where is Mr. White?” wondered Grace, glancing quickly about the camp.

“I sent him out on an errand,” answered Hippy. “Ah! The coffee is not so hot that it burns, but it’s nectar.”

“Oh, my darlin’! Your head!” cried Nora, just discovering the swelling there.

Elfreda was at his side in an instant, examining the lump that, to Hippy, seemed fully as big as his head itself. Miss Briggs ran to her tent for liniment, and in a moment was applying it to the sore spot.

Hippy’s story was brief, because there was little that he could tell them. He was amazed when he learned that he had been away so long. Grace explained to him how White had reached some lookouts on the range and got them to go in search of him.

“How they found you so soon, I don’t understand. Do you?”

Hippy shook his head.

“There are some things in this neck of the woods that are beyond explaining. I hope they didn’t give Stacy such a wallop as I got. But don’t worry about him. They can’t keep him long. Stacy will eat them out of his way. I was easy. He isn’t.”

Ham White returned at this juncture.

“We shall probably have another guest to-night, if all goes well,” he announced.

“A guest?” wondered the Overlanders.

“So I am informed; perhaps more than one. Do not ask any questions, for I can’t answer them. Well, Lieutenant, you had a rough time of it, didn’t you?”

“The Germans could not have done anything much worse.”

“Would you recognize any of the fellows who captured you?” questioned White.

“I saw only two, but I shall know them when I see them, and they will have reason to know me, for—”

“Hamilton, who are the guests you are expecting?” urged Emma in her sweetest tone of voice.
“Sorry, Miss Dean, but I can’t tell you.”

“Isn’t that just like a man—making a mystery of everything? I think—”

“Hello, folks!” cried a voice from the bush.

The Overlanders fairly jumped at the sound of the familiar voice.

“Tom! Tom Gray!” cried Grace, running and throwing herself into her husband’s arms. “How happy I am to see you, you will never know. I needed you, Tom—we all have needed you, and I think we shall need you still more. Where did you come from?”

“Hello, old chap!” cried Hippy jovially.

The Overlanders crowded around Captain Tom Gray joyously.

“How are you, White!” greeted Grace’s husband, as soon as he could free himself from the welcome of Grace, Nora and Emma. “I have been looking forward to meeting you, and I knew, from what I had heard, just the sort of man you would be—I mean as to looks,” added Tom, grinning. “The men on the range are looking forward to seeing their—”

A warning look from the guide checked Tom.

“I will explain later,” whispered the guide.

“I thank you for sending for me,” bowed Tom, with ready resourcefulness. “I knew that the need must be urgent or you would not have done so.”

“Yes. I have a double responsibility—a moral and a physical one, and I felt that I had no right to go farther until I had consulted with Mrs. Gray’s husband. We are heading for trouble, in fact we have already been having it.”

“Tell me about it. I know some of the facts, but I want them at first hand.”

“Miss Briggs knows the story. I suggest that she relate the story of her experiences, which will give you the slant I want you to get. I suppose you know of the kidnapping of Lieutenant Wingate and Stacy Brown?” asked the guide.

“The bare facts only. J. Elfreda, you seem to be the pivotal point on this journey. Grace is holding my hand so tightly that I shall have
to ask her to give me a chance to listen to you,” answered Tom laughingly.

Emma offered to demonstrate to give Tom a “chance” to hear the story. Grace laughed happily. A great load of responsibility and worry had been lifted from her shoulders.

“I will be good, J. Elfreda. Please tell Tom everything—everything, remember. Mr. White, we wish you to sit in,” added Grace, as the guide discreetly moved away.

There followed a moment of silence, then Elfreda Briggs began the story of the fire, of her arrival at the forest cabin, and of the dramatic occurrences there. She told of the diary, of the loss of the gold dust, and of the general directions that Sam Petersen had left for locating the claim, though Elfreda did not say what those directions were. She thought it advisable not to do so.

Hippy got up and walked to his tent, returning shortly and standing with his back to a tree and his hands in his pockets as Miss Briggs finished her story. Grace took up the story from that point, relating all that had occurred since Elfreda’s experience in the forest shack, but avoiding what she had learned through her wigwagging about Hamilton White.

Tom Gray pondered over the story, stroking his cheek, which Tom always did when thinking deeply.

“The Murrays, eh, White?” he questioned, glancing up at the guide.

Ham White nodded.

“It looks that way,” replied White.

“They know about this Lost River story, do you think?”

“Most everyone does up here. It is an old Indian legend, and probably has no more foundation in fact than most Indian legends,” answered the guide. “Mind you, I am not saying that such a place doesn’t exist. No doubt there are many rich veins in the Cascade Range yet to be discovered. Petersen evidently believed he had found it, but he undoubtedly was delirious when he described the spot. He had been shot, you know.”