Rick Rackow
& Manuel Dewald
Operating
OpenShift
An SRE Approach to Managing Infrastructure
OPENSHIFT AND KUBERNETES
“An essential companion
for anyone deploying and
maintaining an OpenShift
environment.”
—Andrew Block
Distinguished Architect, Red Hat
“Should be a mandatory
read for every team
running OpenShift
workloads in production.”
—Bilgin Ibryam
Coauthor of Kubernetes Patterns,
Product Manager at Diagrid
Operating OpenShift
US $59.99 CAN $74.99
ISBN: 978-1-098-10639-3
Twitter: @oreillymedia
linkedin.com/company/oreilly-media
youtube.com/oreillymedia
Kubernetes has gained significant popularity over the past
few years, with OpenShift as one of its most mature and
prominent distributions. But while OpenShift provides
several layers of abstraction over vanilla Kubernetes, this
software can quickly become overwhelming because of its
rich feature set and functionality. This practical book helps
you understand and manage OpenShift clusters from minimal
deployment to large multicluster installations.
Principal site reliability engineers Rick Rackow and Manuel
Dewald, who worked together on Red Hat’s managed
OpenShift offering for years, provide valuable advice to help
your teams operate OpenShift clusters efficiently. Designed
for SREs, system administrators, DevOps engineers, and
cloud architects, Operating OpenShift encourages consistent
and easy container orchestration and helps reduce the
effort of deploying a Kubernetes platform. You’ll learn why
OpenShift has become highly attractive to enterprises large
and small.
• Learn OpenShift core concepts and deployment strategies
• Explore multicluster OpenShift Container Platform
deployments
• Administer OpenShift clusters following best practices
• Learn best practices for deploying workloads to OpenShift
• Monitor OpenShift clusters through state-of-the-art
concepts
• Build and deploy Kubernetes operators to automate
administrative tasks
• Configure OpenShift clusters using a GitOps approach
Rick Rackow is a seasoned professional
who’s worked on cloud and container
adoption throughout his career. As
site reliability engineer on Red Hat’s
OpenShift Dedicated SRE team, Rick
managed and maintained countless
OpenShift clusters at scale and ensured
their reliability by developing and
following the best practices in this book.
Manuel Dewald has been a software
engineer on many software projects,
from big enterprise software to
distributed open source software
composed of independent components.
He is lead SRE on the OpenShift
Dedicated team at Red Hat, operating
OpenShift clusters and automating the
cluster lifecycle.
Rick Rackow and Manuel Dewald
Operating OpenShift
An SRE Approach to Managing Infrastructure
Beijing Boston Farnham Sebastopol Tokyo
Operating OpenShift
by Rick Rackow and Manuel Dewald
Copyright © 2023 Rick Rackow and Manuel Dewald. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are
also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional
sales department: 800-998-9938 or corporate@oreilly.com.
Acquisitions Editor: John Devins
Development Editor: Corbin Collins
Production Editor: Ashley Stussy
Copyeditor: Piper Editorial Consulting, LLC
Proofreader: Judith McConville
Indexer: Amnet Systems LLC
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea
November 2022: First Edition
Revision History for the First Edition
2022-11-07: First Release
See http://oreilly.com/catalog/errata.csp?isbn=9781098106393 for release details.
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Operating OpenShift, the cover image,
and related trade dress are trademarks of O’Reilly Media, Inc.
The views expressed in this work are those of the authors, and do not represent the publisher’s views.
While the publisher and the authors have used good faith efforts to ensure that the information and
instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility
for errors or omissions, including without limitation responsibility for damages resulting from the use
of or reliance on this work. Use of the information and instructions contained in this work is at your
own risk. If any code samples or other technology this work contains or describes is subject to open
source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use
thereof complies with such licenses and/or rights.
To Linus
— R.R.
To Marie
— M.D.
Table of Contents
Preface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Traditional Operations Teams 2
How Site Reliability Engineering Helps 3
OpenShift as a Tool for Site Reliability Engineers 4
Individual Challenges for SRE Teams 5
2. Installing OpenShift. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
OKD, OCP, and Other Considerations 7
OKD 7
OCP 8
OSD, ROSA, and ARO 8
Local Clusters with OpenShift Local 8
Planning Cluster Size 12
Instance Sizing Recommendations 12
Node Sizing Recommendations 12
Master Sizing Recommendations 13
Infra Nodes 15
Basic OpenShift Installations 17
Installer-Provisioned Infrastructure 17
Self-Provisioned Infrastructure 24
Summary 24
3. Running Workloads on OpenShift. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Deploying Code 26
Deploying Existing Container Images 27
Deploying Applications from Git Repositories 29
Accessing Deployed Services 31
Accessing Services from Other Pods 31
Distribution of Requests 32
Exposing Services 33
Route by Auto-generated DNS Names 34
Route by Path 35
External Load Balancers 37
Securing Services with TLS 40
Specifying TLS Certificates 40
Redirecting Traffic to TLS Route 42
Let’s Encrypt Trusted Certificates 44
Encrypted Communication to the Service 51
Summary 57
4. Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Cluster Access 59
Role-Based Access Control 61
Roles and ClusterRoles 62
RoleBindings and ClusterRoleBindings 63
CLI 65
ServiceAccounts 66
Threat Modelling 67
Workloads 68
Summary 72
5. Automating Builds. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
OpenShift Image Builds 73
Docker Build 74
Source to Image (S2I) Build 81
Custom S2I Images 84
Red Hat OpenShift Pipelines 87
Overview 88
Install Red Hat OpenShift Pipelines 90
Setting Up the Pipeline 92
Turning the Pipeline into Continuous Integration 104
Summary 110
6. In-Cluster Monitoring Stack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Cluster Monitoring Operator 111
Prometheus Operator 114
User Workload Monitoring 130
Visualizing Metrics 136
Console Dashboards 136
Using Grafana 137
Summary 141
7. Advanced Monitoring and Observability Strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Service Oriented Monitoring 143
Service Level Indicators 144
Service Level Objectives 145
Tools 150
Logging 154
ClusterLogging 154
Log Forwarding 158
Loki 158
Visualization 159
Installation 159
Creating a Grafana Instance 161
Data Source 161
Dashboards 164
Summary 166
8. Automating OpenShift Cluster Operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Recurring Operations Tasks 168
Application Updates 169
Certificate Renewals 169
OpenShift Updates 169
Backups 170
Automating Recurring Operations Tasks 170
Persistence 170
Creating Snapshots 173
Using CronJobs for Task Automation 176
Cluster Configuration 182
Manage Cluster Configuration with OpenShift GitOps 184
Installing OpenShift GitOps 185
Managing Configuration with OpenShift GitOps 189
Managing Configuration of Multiple Clusters with OpenShift GitOps 193
Summary 197
9. Developing Custom Operators to Automate Cluster Operations. . . . . . . . . . . . . . . . . . . 199
Operator SDK 201
Operator Design 202
Bootstrapping the Operator 203
Setting Up a CA Directory for Development 207
Designing the Custom Resource Definition 209
Installing the CustomResourceDefinition 212
Local Operator Development 213
The Reconcile Function 215
Deploying the Operator 216
Creating and Updating OpenShift Resources 220
Specifying RBAC Permissions 223
Routing Traffic to the Operator 224
Adding Additional Controllers 227
Updating Resource Status 229
Summary 231
10. Practical Patterns for Operating OpenShift Clusters at Scale. . . . . . . . . . . . . . . . . . . . . 233
Cluster Lifecycle 233
Cluster Configuration 235
Logging 235
Monitoring 236
Alerting 237
Automation 238
On Call 238
Primary On Call 239
Backup On Call 239
Shift Rotation 239
Ticket Queue 239
Incident Management 240
When to Declare an Incident 241
Inform the Customer 241
Define Roles 241
Incident Timeline 242
Document the Process 242
Postmortem 243
Accessing OpenShift Clusters 243
The Stage Is Yours 243
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Preface
In late December 2020, a Slack notification from Rick popped up on Manuel’s laptop.
“You know what?” it said, “You and I, we’re going to write a book!”
“What are we going to write about?”
“Operating OpenShift!”
Fast-forward almost two years, and that very book is now before your eyes.
The backstory is that over the past several years, more and more people reached out
to us to ask if we would be able to share some of our OpenShift insights with them—
to help them operate their OpenShift clusters more efficiently.
At that time the two of us worked as site reliability engineers for OpenShift clusters
at Red Hat. Efficiently operating OpenShift clusters was indeed our day-to-day challenge, and we had accumulated a lot of knowledge and expertise. We used that
experience to create this book.
We divided the 10 chapters of this book according to our personal interests and depth
of experience. Chapters 1, 3, 5, 8, 9, and 10 are written by Manuel. Chapters 2, 4, 6,
and 7 are by Rick.
We learned a lot more about OpenShift in the past two years working on the book.
Even with our experience operating OpenShift at Red Hat, many of the tools for oper‐
ating and automating operations still required further research and experimentation.
We’ve done our best to compile the results of our experiments into simple steps that
you can follow to get started. Of course, you’ll need to adjust the examples to apply
them to your specific needs as soon as you start using the tools.
All the examples use the simplified scenario of an arcade gaming platform that
you’ll deploy to your cluster as you follow the book. You’ll find the resources of this
example workload in the corresponding GitHub repository.
Conventions Used in This Book
The following typographical conventions are used in this book:
Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.
Constant width
Used for program listings, as well as within paragraphs to refer to program
elements such as variable or function names, databases, data types, environment
variables, statements, and keywords.
Constant width bold
Shows commands or other text that should be typed literally by the user.
Constant width italic
Shows text that should be replaced with user-supplied values or by values determined by context.
This element signifies a tip or suggestion.
This element signifies a general note.
This element indicates a warning or caution.
Using Code Examples
Supplemental material (code examples, exercises, etc.) is available for download at
https://github.com/OperatingOpenshift.
If you have a technical question or a problem using the code examples, please send
emails to bookquestions@oreilly.com.
This book is here to help you get your job done. In general, if example code is offered
with this book, you may use it in your programs and documentation. You do not
need to contact us for permission unless you’re reproducing a significant portion
of the code. For example, writing a program that uses several chunks of code from
this book does not require permission. Selling or distributing examples from O’Reilly
books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount
of example code from this book into your product’s documentation does require
permission.
We appreciate, but generally do not require, attribution. An attribution usually
includes the title, author, publisher, and ISBN. For example: “Book Title by Some
Author (O’Reilly). Copyright 2012 Some Copyright Holder, 978-0-596-xxxx-x.”
If you feel your use of code examples falls outside fair use or the permission given
above, feel free to contact us at permissions@oreilly.com.
O’Reilly Online Learning
For more than 40 years, O’Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed.
Our unique network of experts and innovators share their knowledge and expertise
through books, articles, and our online learning platform. O’Reilly’s online learning
platform gives you on-demand access to live training courses, in-depth learning
paths, interactive coding environments, and a vast collection of text and video from
O’Reilly and 200+ other publishers. For more information, visit http://oreilly.com.
How to Contact Us
Please address comments and questions concerning this book to the publisher:
O’Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-998-9938 (in the United States or Canada)
707-829-0515 (international or local)
707-829-0104 (fax)
We have a web page for this book, where we list errata, examples, and any additional
information. You can access this page at https://oreil.ly/operating-openshift-1e.
Email bookquestions@oreilly.com to comment or ask technical questions about this
book.
For news and information about our books and courses, visit https://oreilly.com.
Find us on LinkedIn: https://linkedin.com/company/oreilly-media
Follow us on Twitter: https://twitter.com/oreillymedia
Watch us on YouTube: https://www.youtube.com/oreillymedia
Acknowledgments
Over the past two years, a lot of people have been supportive of our idea for this
book, and we would like to thank everyone who helped us stay motivated and finish
this work.
We’d like to thank the following people who worked with us from the O’Reilly team:
John Devins helped us finalize the book proposal and convinced the right people
that the topic was worth investing in. Corbin Collins, our development editor, was
always the first to review our raw material and patiently corrected our formatting and
grammar mistakes. He also always had an eye on our roadmap and reached out in
time if adjustments needed to be made. Along with him, we also want to thank Sara
Hunter and Ashley Stussy for their thorough reviews and incredibly helpful feedback.
Our technical editors Andrew Block and Bilgin Ibryam were incredibly helpful
and contributed lots of good ideas to improve the content. They even mentioned
alternatives that we’d overlooked in our research.
A lot of the research done for this book involved chatting with the right people, both
inside Red Hat and in the open source communities, who have been hard at work
on the respective components covered in this book. We’d like to thank everyone who
helped us get things up and running.
Finally, we want to thank our families, Stephanie, Linus, Julia, and Marie, who have
been supportive of the idea from the beginning and helped us free up time to focus
on writing this book and put up with our moods when things didn’t go too well.
This book would not exist without you.
CHAPTER 1
Introduction
Manuel Dewald
Operating distributed software is a difficult task. It requires humans with a deep
understanding of the system they maintain. No matter how much automation you
create, it will never replace highly skilled operations personnel.
OpenShift is a platform built to help software teams develop and deploy their
distributed software. It comes with a large set of tools that are built in or can be
deployed easily. While it can be of great help to its users and can eliminate a lot of
traditionally manual operations burdens, OpenShift itself is a distributed system that
needs to be deployed, operated, and maintained.
Many companies have platform teams that provide development platforms based
on OpenShift to software teams so the maintenance effort is centralized and the
deployment patterns are standardized across the organization. These platform teams
are shifting more and more into the direction of Site Reliability Engineering (SRE)
teams, where software development practices are applied to operations tasks. Scripts
are replaced by proper software solutions that can be tested more easily and deployed
automatically using continuous integration/continuous delivery (CI/CD) systems.
Alerts are transformed from simple cause-based alerts like “a high amount of memory is used on Virtual Machine 23” into symptom-based alerts based on Service Level Objectives (SLOs) that reflect customer experience, like “processing of requests takes longer than we expect it to.”
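A symptom-based alert of this kind can be written as a Prometheus alerting rule. The sketch below assumes a hypothetical http_request_duration_seconds histogram exported by the service; the metric name, threshold, and namespace are illustrative, not prescriptive:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: request-latency-slo
  namespace: my-app            # hypothetical namespace
spec:
  groups:
  - name: slo.rules
    rules:
    - alert: HighRequestLatency
      # Fires on the symptom (slow requests), not on a cause such as
      # memory usage on a particular virtual machine.
      expr: |
        histogram_quantile(0.99,
          sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 0.5
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: 99th percentile request latency is above 500 ms
```

The for clause keeps short latency spikes from paging anyone; only a sustained breach of the objective reaches the on-call engineer.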
OpenShift provides all the tools you need to run software on top of it with SRE
paradigms, from a monitoring platform to an integrated CI/CD system that you can
use to observe and run both the software deployed to the OpenShift cluster, as well as
the cluster itself. But building the automation, implementing a good alerting strategy,
and finally, debugging issues that occur when operating an OpenShift cluster, are still
difficult tasks that require skilled operations or SRE staffing.
Even in SRE teams, traditionally a good portion of the engineers’ time is dedicated
to manual operations tasks, often called toil. The operations time should be capped,
though, as the main goal of SRE is to tackle the toil with software engineering.
O’Reilly published a series of books written by site reliability engineers (SREs) at
Google, related to the core SRE concepts. We encourage you to take a look at these
books if you’re interested in details about these principles. In the first book, Site
Reliability Engineering, the authors mostly speak from their experience as SREs at Google and suggest limiting the time spent working on toil to 50% of an engineering team’s time.
Traditional Operations Teams
The goal of having an upper limit for toil is to avoid shifting back into an operations
team where people spend most of the time working down toil that accumulates with
both the scale of service adoption and software advancement.
Part of the toil that accumulates as service adoption grows is the number of alerts an operations team receives if the alerting strategy isn’t ready to scale. If you maintain software that creates one alert per day per tenant, and ten tenants keep one engineer busy, you will need to scale the number of on-call engineers linearly with the number of tenants the team operates: to double the number of tenants, you have to double the number of engineers dedicated to reacting to alerts. While they work down that toil and investigate the issues, these engineers will effectively not be able to work on reducing the toil the alerts create in the first place.
In a traditional operations team that runs OpenShift as a development platform for
other departments of the company, onboarding new tenants is often a manual task. It may start with the requesting team opening a ticket that asks for a new OpenShift cluster. Someone from the operations team will pick up the ticket and start creating
the required resources, kick off the installer, configure the cluster so the requesting
team gets access, and so forth. A similar process may be set up for turning down
clusters when they are not needed anymore. Managing the lifecycle of OpenShift
clusters can be a huge source of toil, and as long as the process is mainly manual, the
amount of toil will scale with the adoption of the service.
In addition to being toil-packed processes, manual lifecycle and configuration management are error prone. When an engineer runs the same procedure several times during a week, as documented in a team-managed wiki, chances are they will miss an important step or pass a wrong parameter to one of the scripts, resulting in a broken state that may not be discovered immediately.
When managing multiple OpenShift clusters, having one that is slightly different
from the others due to a mistake in the provisioning or configuration process,
or even due to a customer request, is dangerous and usually generates more toil.
Automation that the team generated over time may not be tailored to the specifics of
a single snowflake cluster. Running that automation may just not be possible, causing
more toil for the operations team. In the worst case, it may even render the cluster
unusable.
Automation in a traditional ops team can often be found in a central repository that engineers check out on their own devices so they can run the scripts they need as part of working on a documented process. This is problematic not only because it still needs manual interaction, and hence doesn’t scale well, but also because engineers’ devices are often configured differently. They can differ in the OS they run, which means the tooling has to support several vendors, for example by providing a standardized environment, such as a container environment, in which to run the automation.
But even then, the version of a script may differ from engineer to engineer, or a script may not have been updated when a new version of OpenShift was released. Automated testing is seldom implemented for operations scripts written to quickly get rid of a piece of toil. All of this makes automation that runs as scripts on developer machines brittle.
How Site Reliability Engineering Helps
In an SRE team, the goal is to replace such scripts with actual software that is
versioned properly, has a mature release strategy, has a continuous integration and
delivery process, and runs from the latest released version on dedicated machines, for
example, an OpenShift cluster.
OpenShift SRE teams treat the operations of OpenShift clusters, from setting them
up to tearing them down, as a software problem. By applying evolved best practices
from the software engineering world to cluster operations, many of the problems
mentioned earlier can be solved. The software can be unit-tested to ensure that new
changes won’t break existing behavior. Additionally, a set of integration tests can
ensure it works as expected even when the environment changes, such as when a new
version of OpenShift is released.
Instead of reacting to more and more requests from customers as service adoption grows, the SRE team can provide a self-service process that customers can use to provision and configure their clusters. This also reduces
the risk of snowflakes, as less manual interaction is needed by the SRE team. What
can and cannot be configured should be part of the UI provided to the customer,
so requests to treat a single cluster differently from all the others should turn into a
feature request for the automation or UI. That way, it will end up as a supported state
rather than a manual configuration update.
To ensure that the alerting strategy can scale, SRE teams usually move from a
cause-based alerting strategy to a symptom-based alerting strategy, ensuring that only
problems that risk impacting the user experience reach their pager. Smaller problems
that do not need to be resolved immediately can move to a ticket queue to work on as
time allows.
Shifting to an SRE culture means allowing people to watch their own software, taking
away the operations burden from the team one step at a time. It’s a shift that will
take time, but it’s a rewarding process. It will turn a team that runs software someone else wrote into a team that writes and runs its own software, with the goal of automating the lifecycle and operations of the software under its control. An SRE culture enables service growth through true automation and observation of the customer experience rather than the internal state.
OpenShift as a Tool for Site Reliability Engineers
This book will help you to utilize the tools that are already included with OpenShift
or that can be installed with minimal effort to operate software and OpenShift itself
the SRE way.
We expect you to have a basic understanding of how containers, Kubernetes, and
OpenShift work to be able to understand and follow all the examples. Fundamental
concepts like pods will not be explained in full detail, but you may find a quick
refresher where we found it helpful to understand a specific aspect of OpenShift.
We show you the different options for installing OpenShift, helping you to automate the lifecycle of OpenShift clusters as needed. Lifecycle management includes not only installing and tearing down clusters but also managing the configuration of your OpenShift cluster in a GitOps fashion. If you need to manage the configuration of multiple clusters, you can use Argo CD on OpenShift to manage all of them from one place.
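With Argo CD, the engine behind OpenShift GitOps, each cluster's configuration is described declaratively and synced from Git. A minimal sketch of an Argo CD Application follows; the repository URL and paths are placeholders, not a recommendation:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git  # placeholder repo
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc  # the cluster Argo CD runs on
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from Git
      selfHeal: true  # revert manual drift back to the Git state
```

Chapter 8 walks through this setup in detail.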
This book shows you how to run workloads on OpenShift using a simple example
application. You can use this example to walk through the chapters and try out the
code samples. However, you should be able to use the same patterns to deploy more
serious software, like automation that you built to manage OpenShift resources—for
example, an OpenShift operator.
OpenShift also provides the tools you need to automate the building and deployment of your software, from simple automated container builds whenever you check a new change into version control, to full-fledged custom pipelines using OpenShift Pipelines.
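As a rough illustration, a build like this can be started with a couple of oc commands; the repository URL and application name here are made up:

```shell
# Create an application from a Git repository. OpenShift detects the
# language and runs a Source-to-Image (S2I) build automatically.
oc new-app https://github.com/example/arcade-game.git --name=arcade

# Add a GitHub webhook trigger so that a push to the repository
# starts a new build of the resulting BuildConfig.
oc set triggers bc/arcade --from-github
```

Chapter 5 covers image builds and OpenShift Pipelines in depth.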
In addition to automation, the SRE way of managing OpenShift clusters includes
proper alerting that allows you to scale. OpenShift comes with a lot of built-in alerts
that you can use to get informed when something goes wrong with a cluster. This
book will help you understand the severity levels of those alerts and show you how
to build your own alerts, based on metrics that are available in the OpenShift built-in
monitoring system.
Working as OpenShift SREs at Red Hat together for more than two years, we both
learned a lot about all the different kinds of alerts that OpenShift emits and how to investigate and solve problems. The benefit of working close to OpenShift Engineering is that we can even contribute to alerts in OpenShift if we find problems with
them during our work.
Over time, a number of people have reached out, being interested in how we work
as a team of SREs. We realize there is a growing interest in all different topics related
to our work: From how we operate OpenShift to building custom operators, people
show interest in the topic at conferences or reach out to us directly.
This book aims to help you take some of our learnings and use them to run OpenShift in your specific environment. We believe that OpenShift is a great distribution
of Kubernetes that brings a lot of additional comfort with it, comfort that will allow
you to get started quickly and thrive at operating OpenShift.
Individual Challenges for SRE Teams
OpenShift comes with a lot of tools that can help you in many situations as a
developer or operator. This book can cover only a few of those tools and does not
aim to provide a full overview of all OpenShift features. Instead of trying to replicate
the OpenShift documentation, this book focuses on highlighting the things we think
will help you get started operating OpenShift. With more features being developed
and added to OpenShift over time, it is a good idea to follow the OpenShift blog and
the OpenShift documentation for a more holistic view of what’s included in a given
release.
Many of the tools this book covers are under active development, so you may find
them behaving slightly differently from how they worked when this book was published. Each section references the documentation for a more detailed explanation of
how to use a specific component. This documentation is usually updated frequently,
so you can find up-to-date information there.
When you use Kubernetes as a platform, you probably know that many things
are automated for you already: you only need to tell the control plane how many
resources you need in your deployment, and Kubernetes will find a node to place it.
You don’t need to do a rolling upgrade of a new version of your software manually,
because Kubernetes can handle that for you. All you need to do is configure the
Kubernetes resources according to your needs.
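As a minimal sketch of this declarative model (the names and image here are placeholders, not part of OpenShift itself), a Deployment that asks Kubernetes to keep three replicas running with given resource requests might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-service
spec:
  replicas: 3                  # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: my-web-service
  template:
    metadata:
      labels:
        app: my-web-service
    spec:
      containers:
      - name: web
        image: quay.io/example/my-web-service:1.0   # placeholder image
        resources:
          requests:
            cpu: 100m          # scheduler places the pod on a node with this much free capacity
            memory: 128Mi
```

Changing the image tag and reapplying the manifest triggers the rolling upgrade mentioned earlier; the default RollingUpdate strategy replaces pods gradually.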
OpenShift, being based on Kubernetes, adds more convenience, like routing traffic
to your web service from the outside world: exposing your service at a specific DNS
name and routing traffic to the right place is done via the OpenShift router.
These are only a few of the tasks that used to be done by operations personnel but can
be automated in OpenShift by default.
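As a sketch of that convenience, exposing an existing Service through the OpenShift router only takes a Route resource (the service name and host below are assumptions):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-web-service
spec:
  host: game.example.com       # DNS name under which the router exposes the service
  to:
    kind: Service
    name: my-web-service       # existing Service that receives the traffic
  port:
    targetPort: 8080
```

Running `oc expose service my-web-service` generates a similar Route with a default host.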
However, depending on your specific needs and the environment you're running
OpenShift in, there are probably some very specific tasks that you need to solve on
your own. This book cannot tell you step-by-step what you need to do to fully
automate operations; if a single recipe fit every environment, it would most likely
be part of OpenShift already. So, please treat this book as a set of informed
guidelines, but know that you will still need to solve some problems yourself to make
OpenShift fit your operations strategy.
Part of your strategy will be to decide how and where you want to install OpenShift.
Do you want to use one of the public cloud providers? That may be the easiest to
achieve, but you may also be required to run OpenShift in your own data center for
some workloads.
The first step for operating OpenShift is setting it up, and when you find yourself in
a place where you’ll need to run multiple OpenShift clusters, you probably want to
automate this part of the cluster lifecycle. Chapter 2 discusses different ways to install
an OpenShift cluster, from running it on a developer machine, which can be helpful
for developing software that needs a running OpenShift cluster, to a publicly
reachable OpenShift deployment using a public cloud provider.
CHAPTER 2
Installing OpenShift
Rick Rackow
As with any piece of software, the story of OpenShift starts with installing it. This
chapter walks you through several scenarios, from small setups to large ones. It
focuses on a single cluster installation and explores the limits of different
cluster sizes. At some point, however, scaling a single cluster may either not be enough
or may not serve the use case very well. In those cases you will want to look into
multicluster deployments. Those are covered as part of Chapter 10.
OKD, OCP, and Other Considerations
OpenShift can be considered a distribution of Kubernetes, and it is available in
different forms. We will go over each of them in this section, draw a small comparison,
and point out how they relate to one another.
OKD
OKD is not an acronym. Before its rebranding, OKD was called OpenShift
Origin. Now it's OKD, and that is how it should be referred to, for trademark
reasons: the Linux Foundation does not allow Red Hat to use "Kubernetes"
in product or project names beyond referencing it.
OKD is a distribution of Kubernetes optimized for continuous application develop‐
ment and multi-tenant deployment. OKD also serves as the upstream code base upon
which Red Hat OpenShift Online and Red Hat OpenShift Container Platform are built.
—docs.okd.io
In other words, OKD is where upstream Kubernetes is vendored and the core of
OpenShift starts to exist. It serves as the base for everything else that is OpenShift.
OCP
OCP stands for OpenShift Container Platform. This is what people (especially inside
Red Hat) most commonly mean when they mention OpenShift. OCP is positioned
downstream of OKD. Different support levels are available. You can try it out for free
during an evaluation period; all you need is a Red Hat account. You are not required
to purchase any Red Hat product or support to follow this book.
OCP is what is covered in this book. If there is a difference between how OCP and
OKD work, we default to OCP.
OSD, ROSA, and ARO
In addition to a self-hosted and self-installed OpenShift, Red Hat also offers
OpenShift-as-a-Service as a fully managed offering on Amazon Web Services, Micro‐
soft Azure, and Google Cloud Platform. We don’t go into much detail with those, as
you wouldn’t really need to read this book if you were to buy a subscription for any of
those, but for future reference, the terminology is:
Acronym   Name                               Available On
OSD       OpenShift Dedicated                AWS, GCP
ROSA      Red Hat OpenShift Service on AWS   AWS
ARO       Azure Red Hat OpenShift            Azure
All of those are viable options for anyone who wants to run production workloads
on OpenShift, as they are all closely connected to one another with direct
dependencies. The dependency tree is OKD ⇒ OCP ⇒ OSD, ROSA, ARO.
Which one you decide on depends on your needs in terms of support, environment,
ease of use, ease of operation, and cost per cluster. We decided to default to OCP for
this book because it strikes a balance between the upstream and downstream positions:
it is more feature complete than OKD and comes with support, though not at the level
of a fully managed solution like OSD, ROSA, or ARO.
Local Clusters with OpenShift Local
OpenShift Local is the easiest way to launch a full OpenShift cluster locally. If you
have touched Kubernetes before, you have probably heard of Minikube; OpenShift
Local is the OpenShift equivalent.
Its developers describe it as "OpenShift 4 on your laptop". In fact, you can install it
not only on laptops but also on workstations and cloud VMs. At its core, OpenShift
Local is a virtual machine that serves as both OpenShift worker and master.
OpenShift Local is ephemeral by nature and should not be used for
production use cases.
The documentation is your best friend. Make sure to consult it whenever you get
stuck. It is the condensed start-to-finish guide for OpenShift Local, and it’s open
source. That means it’s frequently updated, and you can contribute to it, in case you
find something along the way that you think isn’t covered enough yet.
Head on over to OpenShift Cluster Manager (OCM). We reference this page
frequently throughout this chapter, specifically when we talk about the installers. It
serves as your overview and starting point for all clusters that you registered,
regardless of whether they are OpenShift Local, OCP, or managed clusters.
Sign in with your Red Hat account. If you don’t have one, create one. You should be
presented with a view similar to the one in Figure 2-1.
Figure 2-1. OCM start view
Click the Create cluster button and then choose “Local” in the next view.
Choose the platform that you want to install OpenShift Local on. Note that your
current platform is auto-selected, based on your browser's user agent. The example
shown in Figure 2-2 was created on macOS, so macOS is preselected.
Figure 2-2. OCM OpenShift local view
Next, download the archive. Also download and save your “Pull secret” by clicking
the Download Pull Secret button, shown in Figure 2-2. After the download has
finished, extract the archive into any location that is in your $PATH.
$ tar -xJvf crc-macos-amd64.tar.xz
Since you have extracted into your $PATH, you will now be able to use the included
binaries right away. Two important files are packaged in the archive. The first is crc,
which is the binary to interact with your OpenShift Local cluster, and its name is
an acronym for CodeReady Containers, the former name of OpenShift Local. The
second is oc, which is the OpenShift command line utility to interact with generally
all OpenShift clusters. It is the equivalent of kubectl for Kubernetes. Those two files
together allow you to effectively set up and manage your OpenShift Local cluster, as
well as interact with it afterward as you would with any other OpenShift cluster.
The basic interaction with your cluster will be to set it up. This can be done as
follows:
$ crc setup
INFO Checking if podman remote executable is cached
INFO Checking if admin-helper executable is cached
INFO Caching admin-helper executable
INFO Uncompressing crc_hyperkit_4.7.5.crcbundle
crc.qcow2: 10.13 GiB / 10.13 GiB [-------------------] 100.00%
Your system is correctly setup for using CodeReady Containers.
You can now run 'crc start' to start the OpenShift cluster
During your first setup, you will be prompted to opt into sending telemetry data. This
is a very limited set of on-cluster data that gets forwarded to Red Hat. You can see the
full list of what gets sent online.
Opting out of sending telemetry data can impact certain features in
OpenShift Cluster Manager that rely on telemetry data.
Now that the setup is done, go ahead and launch the cluster with the following
command:
$ crc start
INFO Checking if running as non-root
INFO Checking if podman remote executable is cached
INFO Checking if admin-helper executable is cached
INFO Checking minimum RAM requirements
INFO Checking if HyperKit is installed
INFO Checking if crc-driver-hyperkit is installed
INFO Checking file permissions for /etc/hosts
INFO Checking file permissions for /etc/resolver/testing
CodeReady Containers requires a pull secret to download content from Red Hat.
? Please enter the pull secret
At this point, paste the content of the pull secret you downloaded earlier. The pull
secret will allow you to pull the required images from Red Hat’s container registry
as well as associate the cluster to your Red Hat user, which ultimately also will
make it show up in OpenShift Cluster Manager. Your OpenShift Local installation
is completed after this step. You can use this cluster to familiarize yourself with the
oc command line tool as well as the web console. Remember that this cluster is
ephemeral. In case you need to restore the state of installation, you can start over with
the following command:
$ crc delete && crc start
Planning Cluster Size
In this section you will deploy a multinode OpenShift Cluster. There are some
considerations to go over, and one of the most important is planning the cluster’s size
and capacity.
Instance Sizing Recommendations
OpenShift documentation has some pointers for how to scale your clusters’ instances.
Let’s examine what potential issues you can run into if you scale too small. You can
safely assume that scaling too big is not an issue, other than cost. You will also find
remarks about that throughout the following sections.
The instance size is directly related to your workloads, and masters and nodes behave
similarly to some extent: the more workloads you plan to run, the bigger your
instances have to be. However, the way they scale is fundamentally different.
Whereas node capacity relates to workload almost linearly, master capacity doesn't.
That means a cluster's capacity can be scaled out to a certain extent without any
adjustments to the control plane.
Node Sizing Recommendations
To better illustrate the scaling behavior of nodes, let’s look at an example.
Think of a cluster of three nodes; ignore the masters for now. Each of them is an AWS
m5.xlarge, so 4 vCPU and 16 GB of RAM. That gives you a total cluster capacity of 12
vCPU and 48 GB of RAM. In this hypothetical scenario you can try to run workloads in
perfect distribution and use up all the resources; then you will need to either scale
nodes to bigger instances (vertically) or add more of them (horizontally). Add another
instance and the cluster capacity grows linearly: now you have 16 vCPU and 64 GB for
your workloads.
The above scenario disregards a small but important detail: system reserved and
kube reserved capacity. Since OpenShift release 4.8, OpenShift can take care of that
automatically. To enable this functionality, add the following to the KubeletConfig:
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node
spec:
  autoSizingReserved: true
It is possible to adjust the KubeletConfig post-install as well as before creating a
cluster. Letting OpenShift reserve the system-relevant resources is a recommended
setting to ensure the cluster's functionality and should not be omitted unless
explicit reasons exist.
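In practice, a KubeletConfig is usually targeted at a machine config pool via a selector. A sketch for a default worker pool follows (the pool label matches the OpenShift documentation but may differ in your cluster):

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node
spec:
  autoSizingReserved: true        # let OpenShift compute system/kube reserved values
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
```

Apply it with `oc apply -f dynamic-node.yaml`; the machine config operator then rolls the setting out to the selected nodes.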
Think of it like this: 10 pods run on an m5.xlarge node, and each of those pods
requests 0.4 CPU and actually uses it. That alone consumes all 4 vCPUs, so naturally
your system processes get into trouble and the node becomes unstable. In the worst
case, the node becomes unresponsive and crashes, the workloads on it get reallocated to
other nodes, overloading those, and you end up with a chain reaction: your whole
cluster becomes unresponsive. From that perspective, sacrificing some of that precious
capacity to ensure cluster stability is a small price to pay.
So we know that nodes scale linearly with their workloads and that we need to add a
bit of reserved capacity on top of that. So how big should your node be? We have to
consider three questions:
• How big is your single biggest workload?
• How much can you utilize a big node?
• How fast can you deploy more nodes?
The single biggest workload determines the minimum size of a node: if you can't fit
that workload on a node, you have a problem, because you want to be able to deploy
all your workloads to the cluster.
The flip side of that is the efficiency you want to achieve. Having a node idle at only
50% usage all the time is really just burning money. You want to find the sweet spot
between being able to fit all your workloads and making the most of your nodes.
Together, these two points suggest using nodes that are as small as possible and
deploying another one when you need more capacity, so that per-node utilization stays
high even with an extra node added to the cluster.
The factor that can send you down a different path is time: the time it takes to
deploy another node when you hit capacity. Certain ways to deploy are faster than
others. For example, automation that lets you add another node to the cluster within
5 minutes makes a great difference compared with manually provisioning a new blade
in a datacenter and waiting a day until the datacenter team has mounted and
connected it.
The rule here is: the slower you can provision new nodes, the bigger a single node
needs to be, and the earlier you have to provision new nodes. The time to a new node
works directly against the maximum utilization you want to aim for per node.
Master Sizing Recommendations
Nodes are important for giving a home to your workloads, but masters are the heart
of OpenShift.
The masters, or control plane nodes, are what keep the cluster running, since they
host:
• etcd
• API server (kube and OpenShift)
• Controller manager (kube and OpenShift)
• OpenShift OAuth API server
• OpenShift OAuth server
• HAProxy
The masters don’t directly run workloads; therefore, they behave differently when
it comes to scalability. As opposed to the linear scalability needs of nodes, which
depend on the workloads, the master capacity has to be scaled alongside the number
of nodes.
Another difference from node scalability is that you need to favor vertical scaling
over horizontal scaling. You cannot simply scale out master nodes horizontally
because some components that run on masters require replication and a quorum. The
most prominent case is etcd, the central store for cluster state, secrets, and more.
Theoretically, almost any number of masters is possible in an OpenShift cluster as
long as they can form a quorum. This means a leader election needs to happen, with a
majority of votes. This can become tricky with an even number of masters such as "4"
or "2": a split vote can stall the leader election, and an even count tolerates no more
failures than the next smaller odd count, so it might destabilize the cluster. The
question is, "Why not just 1?" and the answer to that is
the cluster's resilience. You cannot risk your whole cluster, which is basically unusable
without masters, on a single point of failure. Imagine a scenario where you have one
master instance, and it crashes because of a failure in the underlying infrastructure:
the whole cluster is completely useless at this point, and recovery from that kind of
failure is hard. The next smallest odd option is three, and that is also our
recommendation. In fact, the official documentation states that exactly three master
nodes must be used for all production deployments.
With the count set, we have the option left of vertical scaling. However, with masters
being the heart of the cluster, you have to account for the fragile state you take a
cluster into when you resize an already running master node, since it will need to be
shut down to be resized.
Make sure to plan for growth. If you plan to have 20 nodes from
the very beginning in order to have room for your workloads,
choose the next bigger master instance size. This comes at a small
additional cost but will save you massive amounts of work and risk by
avoiding a master scaling operation.
Infra Nodes
Infra nodes are worker nodes with an extra label. Other than that, they are just
regular OpenShift nodes. So if they’re “just” nodes, why do they get the extra label?
Two reasons: cost and cluster resilience.
The easy one is cost: certain infrastructure workloads don't trigger subscription costs
with Red Hat. That means if a node exclusively runs infrastructure workloads, you
don't have to pay a subscription fee for that node, which is an easy way to save money.
For the sake of completeness, the full list of components that don't require node
subscriptions can be found in the latest documentation. Some components must run on
masters, like the OCP control plane. Others can be moved around, so you create a new
set of nodes with the infra label.
Reason number two is the cluster's resiliency. OpenShift makes no distinction between
regular and infra workloads when they run on the same node. Imagine a regular
cluster with just masters and nodes: you deploy all your applications as well as the
out-of-the-box infra workloads to the same nodes. Now when the unfortunate
situation happens that you run out of resources, an infra workload is just as likely to
get killed as a regular application workload. This is, of course, not the best situation.
On the other hand, when all infrastructure-related workloads are safely placed on
their own set of nodes, the regular applications don't impact them at all, which
improves resilience and performance. Good candidates to be moved are:
• In-cluster monitoring (ConfigMap)
• Routers (IngressController)
• Default registry (Config)
You move them by adding a node selector to the corresponding resource noted in
parentheses. The following example shows how it is done for the in-cluster
monitoring solution.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
Add that to your already existing ConfigMap or create a new one with just this
content. For the latter option, create the preceding file and apply it as follows:
$ oc create -f cluster-monitoring-configmap.yaml
Then watch the monitoring pods move to the infra nodes:
$ watch 'oc get pod -n openshift-monitoring -o wide'
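The routers can be moved in a similar fashion by setting a node placement on the default IngressController resource. This sketch follows the OpenShift documentation; verify the exact fields for your release:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""   # schedule router pods on infra nodes only
```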
A last note on the scaling of infra nodes: they scale almost the same way as master
nodes. The reason they need to be scaled vertically in the first place is that
Prometheus, part of the in-cluster monitoring solution, requires more memory the
more metrics it stores.
Basic OpenShift Installations
This section discusses the first way to install an actual production OpenShift cluster.
There are two different ways that come in different shapes but do the same thing, just
for your respective infrastructure.
Installer-Provisioned Infrastructure
Think of this as an all-in-one solution. The installer creates the underlying infrastruc‐
ture, networking infrastructure, and OpenShift cluster on the cloud provider of your
choice (or compatible bare metal options). Run a single command, pass in your
credentials, and what you get back is an up-and-running OpenShift cluster.
The starting point is again the OpenShift Cluster Manager landing page, which you
can see in Figure 2-3.
Figure 2-3. OCM landing page
Click the Create cluster button again, but this time choose your cloud provider, in
our case Google Cloud Platform (GCP). This takes you to the next page, shown in
Figure 2-4, where we choose “Installer-provisioned infrastructure.”
Figure 2-4. OCM installer choice
Figure 2-5 shows the main installer page. In the first part, you can see all required
artifacts. Part two gives you the absolute basic installation command, and part three
contains some minor information about subscriptions.
Figure 2-5. OCM installer-provisioned infrastructure landing page
Let's download the installer by clicking Download Installer. While we're there, also
download the pull secret and the oc binary.
Unpack the archive with the binaries to somewhere in your $PATH to have easy access
to them on the command line. Use the following command:
$ tar -xzvf openshift-client-mac.tar.gz
x README.md
x oc
x kubectl
Now unpack the installer in the same way:
$ tar -xzvf openshift-install-mac.tar.gz
x README.md
x openshift-install
You can also move openshift-install into a directory in your $PATH, for example if
you plan to access it frequently. Otherwise, just keep it in a location that suits
you and reference it by absolute or relative path.
In our example, we unpacked it in the ~/Downloads directory, so we access the
installer as follows:
$ ./Downloads/openshift-install
Prerequisites
Make sure that your cloud provider is set up and ready. The installer will also let you
know if any configuration is missing. A whole section in the documentation discusses
just the setup of the prerequisites, but we want to go over it anyway, just to be sure
you have a good overview of what you need.
To begin, we need a project. You can create that from the console or from the
command line interface (CLI) by running the following command:
$ gcloud projects create openshift-guinea-pig
Your GCP project must use the Premium Network Service
Tier if you are using installer-provisioned infrastructure. The
Standard Network Service Tier is not supported for clusters
installed using the installation program. The installation pro‐
gram configures internal load balancing for the api-int.<clus‐
ter_name>.<base_domain> URL; the Premium Tier is required for
internal load balancing.
In the project you just created, you also need a certain set of application program‐
ming interfaces (APIs) to be enabled. Table 2-1 shows you which ones are needed.
Table 2-1. GCP required API overview
API service                                Console service name
Compute Engine API                         compute.googleapis.com
Google Cloud APIs                          cloudapis.googleapis.com
Cloud Resource Manager API                 cloudresourcemanager.googleapis.com
Google DNS API                             dns.googleapis.com
IAM Service Account Credentials API        iamcredentials.googleapis.com
Identity and Access Management (IAM) API   iam.googleapis.com
Service Management API                     servicemanagement.googleapis.com
Service Usage API                          serviceusage.googleapis.com
Google Cloud Storage JSON API              storage-api.googleapis.com
Cloud Storage                              storage-component.googleapis.com
You can leverage the gcloud CLI tool again to enable all of those or any other method
that you prefer.
$ gcloud services enable compute.googleapis.com cloudapis.googleapis.com \
    cloudresourcemanager.googleapis.com \
    dns.googleapis.com \
    iamcredentials.googleapis.com \
    iam.googleapis.com \
    servicemanagement.googleapis.com \
    serviceusage.googleapis.com \
    storage-api.googleapis.com \
    storage-component.googleapis.com
Operation "operations/acf.p2-10448422-91a9fd12a64b" finished successfully.
Make sure that you have enough quota in your project. Please see the OpenShift
documentation for the latest requirements.
You also need a dedicated public domain name system (DNS) zone in the project,
and it needs to be authoritative for the domain. If you don’t have a domain, you can
purchase one from your preferred registrar.
Now create the managed zone like this but with your domain:
$ gcloud dns managed-zones create ocp-cluster \
    --description=openshift-cluster \
    --dns-name=operatingopenshift.com \
    --visibility=public
Get the authoritative name servers from the hosted zone records:
$ gcloud dns managed-zones describe ocp-cluster
creationTime: '2021-04-22T11:13:17.236Z'
description: openshift-cluster
dnsName: operatingopenshift.com.
id: '9171610950957705760'
kind: dns#managedZone
name: ocp-cluster
nameServers:
- ns-cloud-d1.googledomains.com.
- ns-cloud-d2.googledomains.com.
- ns-cloud-d3.googledomains.com.
- ns-cloud-d4.googledomains.com.
visibility: public
The last step here is to point your registrar to the name servers that you just extracted
as authoritative.
Now create the service account:
$ gcloud iam service-accounts create ocp-cluster \
    --description="Service account for OCP cluster creation" \
    --display-name="OCP_CREATOR"
Created service account [ocp-cluster].
Afterward, assign it the required roles to grant the needed permissions. The list
of required permissions is in the documentation.
$ gcloud projects add-iam-policy-binding innate-attic-182119 \
    --member="serviceAccount:ocp-cluster@innate-attic-182119.iam.gserviceaccount.com" \
    --role="roles/owner"
Updated IAM policy for project [innate-attic-182119].
bindings:
- members:
- serviceAccount:ocp-cluster@innate-attic-182119.iam.gserviceaccount.com
role: roles/owner
etag: BwXAjkFSyZw=
version: 1
The last step before you can actually install your cluster is to get your local
environment ready.
Create a secure shell protocol (SSH) key-pair and add it to your ssh-agent (after you
enabled the agent) with the following command:
$ ssh-keygen -t ed25519 -N ''
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/rrackow/.ssh/id_ed25519):
Your identification has been saved in /Users/rrackow/.ssh/id_ed25519.
Your public key has been saved in /Users/rrackow/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:c0y9aLQMnv6lBd51Hdrw4q4muNwAeExxdWvauvhwTtk rrackow@MacBook-Pro
The key's randomart image is:
+--[ED25519 256]--+
| . ... . |
| o ... |
| . . oo.. . |
| + . B+o .= o|
| . + S.O..o.oo|
| . .. =+o.... |
| oo=.E+. |
| ..Oo.=. |
| +o==... |
+----[SHA256]-----+
$ eval "$(ssh-agent -s)"
Agent pid 49003
$ ssh-add /Users/rrackow/.ssh/id_ed25519
Identity added: /Users/rrackow/.ssh/id_ed25519 (rrackow@MacBook-Pro)
Now create a service account key file and download it. Once that is done, export its path.
$ gcloud iam service-accounts keys create service-account-keys \
    --iam-account=ocp-cluster@innate-attic-182119.iam.gserviceaccount.com
created key [b8879741ba8850edcadd9840996e882adc05e228]
$ export GOOGLE_APPLICATION_CREDENTIALS="$HOME/service-account-keys"
Installation
If you don't pass in any arguments, the installer works in an interactive mode: it
prompts you for choices, and you can move around with the arrow keys and make a
selection with the return key.
$ ./Downloads/openshift-install create cluster --dir='ocp-cluster-install'
? SSH Public Key [Use arrows to move, enter to select, type to filter]
> /Users/rrackow/.ssh/id_ed25519.pub
/Users/rrackow/.ssh/libra.pub
/Users/rrackow/.ssh/openshift-gcp.pub
/Users/rrackow/.ssh/rpi-ocp-discovery.pub
/Users/rrackow/.ssh/rrackow_private.pub
/Users/rrackow/.ssh/rrackow_redhat_rsa.pub
<none>
? Platform [Use arrows to move, enter to select, type to filter]
aws
azure
> gcp
openstack
ovirt
vsphere
INFO Credentials loaded from file "/Users/rrackow/.gcp/osServiceAccount.json"
? Project ID [Use arrows to move, enter to select, type to filter]
> openshift-guinea-pig (innate-attic-182119)
? Region [Use arrows to move, enter to select, type to filter]
europe-west6 (Zürich, Switzerland)
northamerica-northeast1 (Montréal, Québec, Canada)
southamerica-east1 (São Paulo, Brazil)
> us-central1 (Council Bluffs, Iowa, USA)
us-east1 (Moncks Corner, South Carolina, USA)
us-east4 (Ashburn, Northern Virginia, USA)
us-west1 (The Dalles, Oregon, USA)
? Base Domain [Use arrows to move, enter to select, type to filter]
> operatingopenshift.com
rackow.io
? Cluster Name ocp-cluster
? Pull Secret [? for help] *****************
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc': run
'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here:
https://console-openshift-console.apps.ocp-cluster.operatingopenshift.com
INFO Login to the console with user:
"kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
You don't have to write down the credentials, as you can find them in your
install dir, for example in ocp-cluster-install/.openshift_install.log.
Each option collapses once you make a selection, so don't be confused if it looks
slightly different for you. The last two require manual input.
After you make your last selection, the installer works its magic. This commonly
takes around 45 minutes.
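If you prefer a noninteractive run, you can pre-generate the configuration with `openshift-install create install-config --dir=ocp-cluster-install`, edit the resulting install-config.yaml, and then run create cluster against the same directory. A sketch of such a file follows; the values are examples matching the interactive session above, and the exact fields may vary by release:

```yaml
apiVersion: v1
baseDomain: operatingopenshift.com
metadata:
  name: ocp-cluster
platform:
  gcp:
    projectID: openshift-guinea-pig
    region: us-central1
pullSecret: '<contents of your downloaded pull secret>'
sshKey: |
  ssh-ed25519 AAAA... rrackow@MacBook-Pro
```

Note that the installer consumes and deletes the install-config.yaml during create cluster, so keep a copy if you want to reuse it.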
Self-Provisioned Infrastructure
You can also install OpenShift on preexisting infrastructure. That puts you in full
control of absolutely everything and also allows for better incorporation into any sort
of pipeline. Imagine you ran a pipeline with just a create cluster command and
it fails at some point: sorting out what went wrong is not pretty, and automating the
error handling is even worse.
Summary
In this chapter, we discussed how to install a local cluster all the way through with
considerations on how to plan your production cluster size. Each type of instance
was highlighted, and lastly, you learned how to install production clusters with the
OpenShift installer, using Installer Provisioned Infrastructure.
CHAPTER 3
Running Workloads on OpenShift
Manuel Dewald
At this point you should already have an OpenShift cluster that you can use to deploy
applications. It may be a cluster running on VMs provisioned by a cloud provider
or even a small cluster on your notebook using OpenShift Local. You can access the
console and log in to the cluster with the oc command-line utility. But how do you
deploy an application that your team built to the cluster?
Most applications running on OpenShift clusters are web-based. Such applications
are usually accessed by users via a web browser, or as backends by apps installed to
user-owned devices. For the sake of this chapter, you can use a prepared deployment
consisting of three different services to practice deploying application code to your
OpenShift cluster. A small OpenShift Local cluster should provide enough capacity to
deploy this application. However, to follow some parts of the chapter you will need a
cluster that is accessible externally.
The application used in this chapter is the arcade gaming platform of a fictitious
game publisher. It consists of the following components:
• Games, each running in its own service (for now there is only one game).
• A highscore service where the scores of every game and player can be shown.
• The platform service, used as entry point where customers can browse, start, and purchase games.
Figure 3-1 gives you an overview of the involved components and how they interact.
Figure 3-1. Components of the arcade platform example application
The code is organized in a Git repository on GitHub, where each developer of the
company can contribute to every service when necessary. All three services of this
small sample application are located in the same Git repository. This is so you need to
look at only one repository and do not need to clone several different ones. The code
from this example is used in all of the following sections. If you want to follow along
with this example code, use this command to check out the latest version:
$ git clone https://github.com/OperatingOpenShift/s3e
Deploying Code
To have all services you want to run on your OpenShift cluster contained in the same
namespace, first create a new project:
$ oc new-project arcade
This command will automatically switch your context to the newly created arcade
project. All further commands automatically target this project without the need to
mention it in every command.
A project in OpenShift is a namespace with additional annotations. In most cases the
differentiation between project and namespace is not relevant for the examples in this
book, so the two terms are mostly interchangeable.
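To make the difference concrete, inspecting the namespace behind the project shows the extra annotations OpenShift adds. The output of oc get namespace arcade -o yaml looks roughly like the following sketch (the annotation values are illustrative assumptions, not output from a real cluster):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: arcade
  annotations:
    # OpenShift-specific annotations a plain Kubernetes namespace would not have:
    openshift.io/requester: developer
    openshift.io/sa.scc.uid-range: 1000650000/10000
    openshift.io/sa.scc.supplemental-groups: 1000650000/10000
spec:
  finalizers:
  - kubernetes
```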
To switch to a different project, you can use the following command:
$ oc project default
To switch back to the arcade project, run the following command accordingly:
$ oc project arcade
Instead of running the oc project command before subsequent commands, you
can also execute all the commands against a certain namespace by selecting the
namespace in each command. All oc commands support the -n flag (shorthand for
--namespace), which can be used to specify a namespace to run the command in.
In practice, when you know you’ll execute a number of commands against the
same namespace, switching to it using oc project saves some typing time and also
saves you from executing commands against the “default” namespace and wondering
where all your resources went.
Deploying Existing Container Images
The quickest way to start a container in the new project is using oc run. Since the
game service of the application you want to deploy is already built into a container
image, you can start it on the cluster using the following command:
$ oc run game --image=quay.io/mdewald/s3e
pod/game created
This will spin up a new pod on the cluster. Use the following command to observe it
while it’s starting up. As soon as it’s ready, you should see the status “Running”:
$ oc get pods
NAME READY STATUS RESTARTS AGE
game 1/1 Running 0 24s
At this point, you’re probably curious to take a look at the game you just deployed.
However, the oc run command just spins up a pod without an exposed endpoint,
so you need to find a way to access the game UI (which is exposed at port 8080 in
this container image). A quick and simple approach to confirm the UI is working
is to forward the port from the container to your local machine. To do so, run the
following command:
$ oc port-forward game 8080
Forwarding from 127.0.0.1:8080 -> 8080
While oc run is a quick and easy way to verify that the cluster can access your built
container image and that it runs as expected, it is not the method of choice to
continuously run an application on your cluster, as it doesn’t provide the advanced
concepts that some of the abstractions around pods provide. The standard way to deploy an
application is a deployment resource. Deployments provide additional features to
plain pods. For example, they can be used for rolling upgrades or to run multiple
instances distributed across nodes. To create a deployment game with the same
container image, run oc create deployment and oc get pods to observe the pod
coming up:
$ oc create deployment game --image=quay.io/mdewald/s3e
deployment.apps/game created
$ oc get pods
NAME READY STATUS RESTARTS AGE
game 1/1 Running 0 13m
game-c6fb95cc6-bk6zp 1/1 Running 0 78s
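For reference, the deployment created by oc create deployment game --image=quay.io/mdewald/s3e is roughly equivalent to applying a manifest like the following sketch (the app: game labels match what the command generates; the container name is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game
  labels:
    app: game
spec:
  replicas: 1
  selector:
    matchLabels:
      app: game
  template:
    metadata:
      labels:
        app: game
    spec:
      containers:
      - name: s3e
        image: quay.io/mdewald/s3e
```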
Security context constraints
When you deploy a container using oc create deployment, the pod will run with
different parameters than the pod created by oc run. One difference is the annotation
openshift.io/scc. Compare the output of the following two commands, adjusting the
pod name to the one generated for your deployment:
$ oc get pod game \
  -o "jsonpath={.metadata.annotations['openshift.io/scc']}"
anyuid
$ oc get pod game-c6fb95cc6-bk6zp \
  -o "jsonpath={.metadata.annotations['openshift.io/scc']}"
restricted
The restricted security context constraint (SCC) means the pods of this deployment
will not be able to run privileged containers or mount host directories, and
containers must use a user identifier (UID) from the allowed range. That means
applications running a web server (in this example, NGINX) need to be configured
accordingly: they cannot bind to port 80, and any UID they specify is mapped
automatically to a high UID within the range configured by the project.
See the NGINX documentation for an explanation of how to configure NGINX to
serve on a specific port.
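As an illustration of such a configuration, the following minimal nginx.conf listens on the unprivileged port 8080 and keeps its PID and temporary files under /tmp, which remains writable for an arbitrarily assigned UID (the paths and port here are assumptions for this sketch, not taken from the example image):

```nginx
# Sketch of an nginx.conf suitable for the restricted SCC:
# listen on an unprivileged port and write PID/temp files to /tmp,
# which is writable regardless of the assigned UID.
pid /tmp/nginx.pid;
events {}
http {
  client_body_temp_path /tmp/client_temp;
  proxy_temp_path /tmp/proxy_temp;
  server {
    listen 8080;
    root /usr/share/nginx/html;
  }
}
```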
Scaling and exposing deployments
You can now scale the game Deployment using oc scale deployment. You will see
additional pods coming up immediately.
$ oc scale deployment game --replicas=3
deployment.apps/game scaled
$ oc get pods
NAME READY STATUS RESTARTS AGE
game 1/1 Running 0 16m
game-c6fb95cc6-bk6zp 1/1 Running 0 3m24s
game-c6fb95cc6-bmxzd 0/1 ContainerCreating 0 3s
game-c6fb95cc6-q8bp8 0/1 ContainerCreating 0 3s
To access those different instances, you need to create a service resource and tell it to
expose port 8080 from your pods. To create the service, run the following command:
$ oc expose deployment game --port=8080
service/game exposed
$ oc get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
game ClusterIP 172.25.113.82 <none> 8080/TCP 6s
$ oc get endpoints
NAME ENDPOINTS AGE
game 10.116.0.57:8080,10.116.0.59:8080,10.116.0.60:8080 22s
As you can see from the output of oc get endpoints, OpenShift has registered
three different endpoints for the service, one for each instance running. To test the
connection, you can again forward port 8080 to localhost, this time using the service
instead of the pod:
$ oc port-forward service/game 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
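Behind the scenes, oc expose deployment game --port=8080 creates a Service object that selects the deployment’s pods by label. A roughly equivalent manifest looks like this sketch (the app: game selector is an assumption matching the labels oc create deployment sets):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game
spec:
  type: ClusterIP
  selector:
    app: game
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
```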
To get the second service of the arcade platform application deployed, repeat the
preceding steps for the platform service:
$ oc create deployment platform --image=quay.io/mdewald/s3e-platform
$ oc expose deployment platform --port=8080
Use port-forwarding again to check if the service is accepting requests:
$ oc port-forward service/platform 8080
As you have probably already realized, port-forwarding is not how your users would
want to access your service. Before we dive into exposing the services to the outside
of the cluster in “Accessing Deployed Services” on page 31, the following section
takes a look at a third way to deploy your application.
Deploying Applications from Git Repositories
The arcade platform contains a service that collects the scores per user of all games.
The service is written in Go and can be found in the highscore subfolder of the Git
repository. To deploy this service, this example does not use an already existing image
from a container registry but instead uses OpenShift’s built-in build infrastructure.
To deploy the application right from the Git repository, run the following command:
$ oc new-app https://github.com/OperatingOpenShift/s3e \
  --context-dir=highscore \
  --name=highscore
--> Found container image 28f6e27 (13 days old) from Docker Hub for
"alpine:latest"
* An image stream tag will be created as "alpine:latest" that will track
the source image
* A Docker build using source code from
https://github.com/OperatingOpenShift/s3e will be created
* The resulting image will be pushed to image stream tag
"highscore:latest"
* Every time "alpine:latest" changes a new build will be triggered
--> Creating resources ...
imagestream.image.openshift.io "alpine" created
imagestream.image.openshift.io "highscore" created
buildconfig.build.openshift.io "highscore" created
deployment.apps "highscore" created
service "highscore" created
[...]
Git repository containing the application
Subfolder in the repository to deploy
Name of the application used in resources
Resources created for the application
When reading the output of this command, you can see OpenShift does a lot of work
for you in maintaining this application. Chapter 5 takes a closer look at OpenShift’s
built-in build system.
What’s important for now is that OpenShift created a build pod that checked out
the Git repository and built a container image using the Dockerfile in the highscore
subfolder. It automatically created a service for the application in the same step.
It will take some time to finish the build. When running oc get pods you will
see a build pod running, and after the state of this pod turns to “Completed” the
application pod will come up:
$ oc get pods
NAME READY STATUS RESTARTS AGE
game 1/1 Running 0 33h
game-c6fb95cc6-vj2qh 1/1 Running 0 20h
highscore-1-build 0/1 Completed 0 4m12s
highscore-56656f848c-k542p 1/1 Running 0 2m57s
There is no owning resource for all the resources created by oc new-app. You can
follow the logs to get an understanding of which resources the command created for
you on the OpenShift cluster.
Cleaning up an application
The following sections still use the resources created by the oc
new-app command to expose them to the outside of the cluster.
However, you may wonder how to uninstall an application, since
there is no resource owning everything that OpenShift created
automatically. You can run the following command to clean up
everything that relates to the highscore application, as OpenShift
adds the app=highscore label to everything it creates:
$ oc delete all --selector app=highscore
service "highscore" deleted
deployment.apps "highscore" deleted
buildconfig.build.openshift.io "highscore" deleted
build.build.openshift.io "highscore-1" deleted
imagestream.image.openshift.io "alpine" deleted
imagestream.image.openshift.io "highscore" deleted
Alternatively, if you want to get rid of the whole platform, you can
also delete the project:
$ oc delete project arcade
project "arcade" deleted
Accessing Deployed Services
After deploying all three services of the arcade platform application as described
in the previous section, you should now have three services running in the arcade
namespace:
$ oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
game ClusterIP 172.25.113.82 <none> 8080/TCP 35h
highscore ClusterIP 172.25.32.245 <none> 8080/TCP 45s
platform ClusterIP 172.25.170.245 <none> 8080/TCP 6s
All three services expose port 8080 of the pods. For game and platform you used your
knowledge of the services to expose the right port. In case of the highscore service,
OpenShift detected the exposed port from the container it built.
Accessing Services from Other Pods
All three services are of type ClusterIP, which allows other components of the cluster
to access it. This is helpful for services that are used only by components
communicating with each other within the cluster. To test this, you can deploy a pod to interact
with the services:
$ oc run curl --image=curlimages/curl --command -- sleep 30h
This command will create a pod in the cluster that you can use to query one of the
services using the curl command. The hostname of the service is the name you gave
the service, so in this case you can query http://platform:8080 to reach the platform
web service:
$ oc exec curl -- curl -s http://platform:8080
<html>
<head>
[...]
The preceding oc run command created a pod in the namespace arcade, where all
the services of the arcade platform are deployed as well. That’s why you can access the
service just by specifying the service name as hostname. If you create the curl pod in
another namespace, for example the default namespace, this would not be possible,
as the following snippet shows:
$ oc -n default run curl --image=curlimages/curl --command -- sleep 30h
$ oc -n default exec curl -- curl -s platform:8080
command terminated with exit code 6
As you can see, the curl pod in the default namespace cannot resolve the hostname
platform. However, you can still query a service in a different namespace by
specifying the full internal domain name of the service:
$ oc -n default exec curl -- curl -s platform.arcade.svc.cluster.local:8080
<html>
<head>
[...]
The internal DNS name of OpenShift services is set to
<service-name>.<namespace>.svc.cluster.local.
Depending on the network configuration of the cluster you’re
using, communication across specific namespaces may be blocked.
NetworkPolicies can be used to allow or to block communication
between services of specific namespaces.
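As an illustration, a NetworkPolicy like the following sketch would restrict ingress to pods in the arcade namespace to traffic originating from the same namespace, which would block the cross-namespace curl shown above (the policy name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: arcade
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # allows traffic only from pods in the same namespace
```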
Distribution of Requests
In the previous section, you scaled the game deployment up to three running pods. If
you have not done this until now or scaled it back down, use the following command
to scale it up:
$ oc scale deployment game --replicas=3
deployment.apps/game scaled
OpenShift will distribute the requests across all the endpoints of the service. To make
this visible, the game deployment writes a header instance-ip to responses, which
you can query from your curl pod. Use the following command to list all endpoints of
the game service:
$ oc get endpoints game
NAME ENDPOINTS AGE
game 10.116.0.62:8080,10.116.0.63:8080,10.116.0.64:8080 35h
The following command runs an endless loop with curl commands to send HTTP
requests to the game service:
$ oc exec curl -- sh -c \
  'while true; do curl -si game:8080 | grep instance-ip; sleep 1s; done'
instance-ip: 10.116.0.62
instance-ip: 10.116.0.63
instance-ip: 10.116.0.62
instance-ip: 10.116.0.64
instance-ip: 10.116.0.63
instance-ip: 10.116.0.64
instance-ip: 10.116.0.63
[...]
The -i flag tells curl to print response headers. Each output of the curl command
is filtered with grep to only print the response header instance-ip. This results in a
list, showing the distribution of requests.
As you can see in the output of the command, the requests are distributed randomly
to all three deployed pods.
To exit from the endless loop, press Ctrl+C.
The “instance-ip” header is a custom header added for the purpose
of this chapter. If you want to replicate this with your own applica‐
tion you can add the following line to your NGINX configuration:
add_header instance-ip $server_addr always;
However, this is not something we recommend for production
deployments; it is only meant to visualize which endpoint receives the
request.
Exposing Services
So far, you’ve seen how to access services from within the cluster using the hostname
or the cluster-internal DNS name of a given service. To access a service from your
local machine for debugging you can use port-forwarding. In most cases, however,
you want your users to reach the web services, or at least parts of them, via the
network, for example using their web browser. For that, you need to expose your
services. OpenShift provides easy-to-use tooling to create a public DNS name as
subdomain of the cluster domain that can be reached from outside of the cluster.
To use it, you can create route resources for the services you want to expose to the
network or internet.
Route by Auto-generated DNS Names
The first service to expose is the main entrance point of the arcade gaming platform,
the platform service. To do so, just run oc expose again, this time specifying the
service you want to expose to the outside world:
$ oc expose service platform
route.route.openshift.io/platform exposed
After running this command, a route resource has been created in the “arcade”
namespace. Use the following command to see the route that has been generated:
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT
platform platform-arcade.apps-crc.testing platform 8080
Next, expose the game service. Run oc expose again and inspect the routes that
OpenShift created in the namespace:
$ oc expose service game
route.route.openshift.io/game exposed
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT
game game-arcade.apps-crc.testing game 8080
platform platform-arcade.apps-crc.testing platform 8080
You can now see the different routes for the services, each assigned a
unique DNS name. Open a browser to verify that the two web pages can be reached.
Figure 3-2 shows how the arcade gaming platform page should look. If you’re
running OpenShift Local, those will be http://platform-arcade.apps-crc.testing and
http://game-arcade.apps-crc.testing/s3e. Remember that the game service only serves
the /s3e path.
Figure 3-2. Example application: Arcade gaming platform front-end
Route by Path
From the platform page, you will notice that neither the link to the highscore page
nor the button to the game is currently working. This is because the highscore
service is not yet exposed, and because the game service is currently exposed with a
different domain name. By default, OpenShift creates unique subdomains for each
exposed service, composed from namespace and service name. You can see them in
the output of the preceding oc get routes command. However, you can tell
OpenShift to route the requests based on the path in a URL instead of generating unique
names per service. If you look back at the architecture of the example application in
Figure 3-1, routing by path using the same domain name is what you need to get the
application running.
You can reuse the domain name generated for the platform service,
platform-arcade.apps-crc.testing, for the complete application, specifying paths that should
be routed to the different services. Since the platform service is meant as the main
entrypoint to the application and expects requests at /, you don’t need to alter this
route. Expose the highscore service at /highscore with the following command:
$ oc expose service highscore \
  --hostname=platform-arcade.apps-crc.testing --path=/highscore
route.route.openshift.io/highscore exposed
To change the hostname of the game service, you can edit the generated route with
the following command. It opens an editor where you can adjust the generated
hostname to platform-arcade.apps-crc.testing and set the path to /s3e:
$ oc edit route game
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  [...]
  name: game
  namespace: arcade
spec:
  host: platform-arcade.apps-crc.testing
  path: /s3e
  port:
    targetPort: 8080
  to:
    kind: Service
    name: game
    weight: 100
  wildcardPolicy: None
status:
  [...]
Sets the path of this route to /s3e so all requests to this path will be forwarded to
the game service.
After saving your changes and exiting the editor, you can get a list of the routes again.
All three routes should now be assigned to the same hostname:
$ oc get routes
NAME HOST/PORT PATH SERVICES PORT
game platform-arcade.apps-crc.testing /s3e game 8080
highscore platform-arcade.apps-crc.testing /highscore highscore 8080-tcp
platform platform-arcade.apps-crc.testing platform 8080
When you revisit the main page http://platform-arcade.apps-crc.testing in your
browser, the game button should work. The link to the highscore page should work as
well, which will look similar to Figure 3-3 after finishing some games.
Figure 3-3. Example application: Arcade gaming platform highscore
Random documents with unrelated
content Scribd suggests to you:
+
—
NY Times 22:310 Ag 26 ‘17 1100w
“We have often commented on the imaginative quality of Mr
Blackwood’s work. These mystical tales have that quality in a pre-
eminent degree. Like his former stories, they possess distinct
literary value.”
+ Outlook 117:100 S 19 ‘17 30w
“The book is seasoned with one humorous tale.”
+
—
The Times [London] Lit Sup p92 F 22 ‘17
650w
BLACKWOOD, ALGERNON. The wave; an Egyptian aftermath.
*$1.50 (1c) Dutton 16-24201
From childhood he had been haunted by a wave. It rose behind
him, advanced, curled over from the crest, but did not fall.
Sometimes it came as a waking obsession, sometimes as a dream.
His father, a learned psychologist with inclinations toward Freud,
tries to explain it, but the Freudian hypothesis is inadequate.
Associated with the wave, is a strange perfume, identified
afterwards as Egyptian. The recurring experience follows him into
manhood, affecting his life and his relations to men and women.
Certain persons are borne to him on the crest of the wave, as it
were. These always become of significance in his life. Of them are
Lettice Aylmer and his cousin Tony. Later in Egypt, these three act
out a drama which seems to be a repetition of something they
have experienced before. It is here that Tom Kelverdon’s wave
rises to its full height and breaks, but it does not overwhelm him.
“On the whole, Mr Blackwood maintains, though he does not
strengthen, our good opinion of his imaginativeness and power of
evoking the beautiful.”
+ Ath p544 N ‘16 150w
“Mr Blackwood knows how to give these stories of reincarnation an
effect beyond mere creepiness. But his method is so leisurely that
he is often ‘slow,’ in the sense of dull and long-drawn-out; and his
manner is formal and ponderous and unleavened by humour:
common frailties of philosophical romance.” H. W. Boynton
+
—
Bookm 45:207 Ap ‘17 480w
“Never before has Mr Blackwood written a novel that comes so
close to the real things of life as ‘The wave,’ It touches persistently
upon the supernatural, but its visions are wholly subjective.” E. F.
E.
+
+
Boston Transcript p8 F 21 ‘17 1400w
+ Ind 89:556 Mr 26 ‘17 200w
+
—
Nation 104:368 Mr 29 ‘17 430w
“One’s strongest impression on closing this book is that of beauty
—beauty alike of style and of spirit. The glory of words, the
grandeur that was Egypt, the splendor of a brave and loving
human soul—these are the very substance of this fascinating
volume.”
+
+
N Y Times 22:47 F 11 ‘17 950w
“A strange and unusual book, full of insight and imagination. It is
the work of a very delicate literary craftsman, who is a past master
in the art of elusive suggestion.”
+ Sat R 123:40 Ja 13 ‘17 500w
“With the characteristic Blackwood mystery to help, the book is
rich in excitement and experience.”
+ The Times [London] Lit Sup p488 O 12 ‘16
450w
BLAISDELL, ALBERT FRANKLIN, and BALL, FRANCIS
KINGSLEY. American history for little folks. il *75c (2c) Little
973 17-25786
This book, adapted for use in the third school grade, is intended as
an introduction to “The American history story-book” and other
more advanced works by the authors. The aim has been to choose
some of the more dramatic and picturesque events and to relate
them in a simple and easy style. A partial list of contents follows:
Columbus, the sailor; The sea of darkness; The hero of Virginia;
Seeking a new home; Captain Miles Standish; Dark days in New
England; The Dutch in New York; William Penn, the Quaker; A
famous tea party; Polly Daggett saves the flagpole; Peggy White
calls on Lord Cornwallis.
Reviewed by J: Walcott
Bookm 46:496 D ‘17 50w
BLANCHARD, RALPH HARRUB. Liability and compensation
insurance. il *$2 Appleton 331.82 17-24252
A textbook which presents the results of the workmen’s
compensation movement in the United States in terms of
legislative and insurance practice, and explains the industrial
accident problem and the development of liability and
compensation principles as a background for the comprehension of
present problems. The book is divided into three parts: Industrial
accidents and their prevention; Employers’ liability and workmen’s
compensation; Employers’ liability and workmen’s compensation
insurance.
“Mr Blanchard covers the entire field in a very fair way, though it is
evident that he does so in the professor’s study rather than from
the ground of practical experience. The insurance feature is
especially well covered.”
+
—
Dial 63:534 N 22 ‘17 170w
“The author deals with the state compensation acts, and the stock
company, mutual and state fund methods of insuring the payment
of such compensation. He concludes that, because of insufficient
data, a choice among these three methods cannot be made at
present. The author misses the determining factor in such a
choice. This is, that the most desirable method of taking care of
industrial accident losses is that which does most to prevent such
losses.”
— Engin News-Rec 79:1170 D 20 ‘17 240w
“In the presentation of the insurance problem an important and
timely contribution has been made.” E. S. Gray
+ J Pol Econ 25:1050 D ‘17 250w
“It should appeal primarily to teachers and students of insurance,
but it contains much information of interest to the business man
and the intelligent general reader as well.”
+ Nation 106:122 Ja 31 ‘18 360w
“The subject is presented both broadly and well. The point is not
shirked that the subject in some aspects is controversial. In such
cases both sides are presented, as the author’s intention is to give
information rather than judgment.”
+ N Y Times 22:497 N 25 ‘17 230w
“The author has to be commended for the clearness and
conciseness of statement and helpful bibliographic notes. On the
other hand it must, like most text-books, be dogmatic, and one
fails to get the impression from reading the book how much is still
controversial in the field of compensation. ... One is somewhat
inclined to question the wisdom of the printing of the New York
compensation law as an appendix to the book. The New York act is
not as typical as a good many other acts.” I. M. Rubinow
+
—
Survey 39:149 N 10 ‘17 350w
BLAND, JOHN OTWAY PERCY. Li Hung-chang. (Makers of the
nineteenth century) il *$2 (2c) Holt (Eng ed 17-26886)
Mr Bland is joint author of Backhouse and Bland’s “China under the
Empress Dowager.” The introductory chapter of the present
volume reviews the conditions existing in China at the outset of Li
Hung-chang’s career. The author then gives a detailed account of
Li’s life from childhood to his death in 1901, just after the Boxer
rebellion, at the age of seventy-eight. He considers him as a
Chinese official, as a diplomat, a naval and military administrator,
and a statesman and politician, and concludes that Li’s chief claim
to greatness lies in the fact that, at the time of the Taiping
rebellion, he “grasped the vital significance of the impact of the
West, and the necessity for reorganizing China’s system of
government and national defences to meet it.” The biographer’s
task, he tells us, has been complicated by the lack of any accurate
Chinese account of Li’s career, and the untrustworthiness of
Chinese official records. Moreover, the “Memoirs of the Viceroy Li
Hung-chang,” published in 1913, were a “literary fraud.” The
present work, therefore, is based largely upon the recorded
opinions of independent and competent European observers.
There is a bibliographical note of two pages, followed by a
chronological table of events in Chinese history. The book is
indexed.
“Mr Bland makes very clear to us the mingling elements in Li’s
nature, showing how sometimes patriotism and sometimes self-
interest stirred him most. ... By the time we reach Mr Bland’s final
summing up of the character we realize how skilful has been his
handling of the material and how vividly he has made us realize
his impression of the great premier.” D. L. M.
+ Boston Transcript p8 O 17 ‘17 900w
+ Lit D 55:36 N 3 ‘17 950w
“His treatment of his subject recalls a time when familiarity with
life at the treaty ports was enough literary capital for the ordinary
authority on Chinese affairs and real acquaintance with their
history and ideas was left to the missionaries. ... No new material
about Li has been unearthed, no advance has been made towards
obtaining Chinese estimates of the man, no approach towards any
but an Englishman’s point of view is attempted. ... On the other
hand, it is fair to add that the book is easily read and that it
portrays a rather splendid type of the oriental viceroy.”
–
+
Nation 105:488 N 1 ‘17 1500w
“Excellent biography.”
+ N Y Times 22:501 N 25 ‘17 1000w
“The really significant services that Li Hung Chang rendered to his
race are clearly set forth in this volume by a writer who has had
good opportunities to study China and the Chinese at first hand.”
+ R of Rs 56:551 N ‘17 120w
“If the provision of an adequate ‘setting’ is one of the difficulties to
be encountered in limning Li Hung-chang’s career, another is the
paucity of record. ... Mr Bland is to be congratulated upon the
comprehensive narrative which he has succeeded in compiling.”
* +
–
The Times [London] Lit Sup p535 N 8 ‘17
1850w
BLATHWAYT, RAYMOND. Through life and round the world;
being the story of my life. il *$3.50 Dutton 17-23043
Mr Blathwayt is a British journalist who has traveled widely and
has made a specialty of the art of interviewing. Before taking up
journalism, he served as a curate in Trinidad, in the East End of
London, and in an English village. He believes himself to be the
first to adapt the American “interview” to English manners. Among
those interviewed by him are William Black, Thomas Hardy, Hall
Caine, Grant Allen, William Dean Howells, Thomas Bailey Aldrich,
and Oliver Wendell Holmes.
“Illustrated from photographs and from drawings by Mortimer
Menpes.” E. F. E.
Boston Transcript p7 Ag 8 ‘17 800w
“So many aspects of English life and examples of English character
are included in Mr Blathwayt’s book that it forms a reminiscential
commentary upon the journalistic and literary world of London
during the past thirty years.” E. F. E.
Boston Transcript p6 Ag 11 ‘17 900w
“The book is a veritable gold mine for the after-dinner speaker, for
it is besprinkled with quotable anecdotes.”
+ Dial 64:30 Ja 3 ‘18 250w
“His book abounds in what Mr Leacock calls ‘aristocratic
anecdotes,’ platitudinous reflections, and ‘fine writing.’ His naïve
confessions as a curate help to explain the spiritual deadness and
professionalism of the Church of England; they might well be used
as illustrative footnotes to ‘The soul of a bishop.’”
— Nation 105:610 N 29 ‘17 190w
“It is very entertaining, as engaging a book of reminiscence as has
been put before the public in many a day.”
+ N Y Times 22:293 Ag 12 ‘17 1200w
“Mr Blathwayt is a born raconteur. Particularly good are his
descriptions of his life as a young curate and as an almost
penniless wanderer in Connecticut.”
+ Outlook 117:26 S 5 ‘17 70w
Sat R 123:436 My 12 ‘17 820w
“All his admiration of Captain Marryat and of Mrs Radcliffe has not
taught him to spell their names right. He misquotes with the
utmost facility. ... Here is a writer who has made livelihood and
reputation by writing, yet has never mastered the elementary rules
of the art. ... His book is frequently, though not constantly
entertaining; but it would be much less entertaining than it is
without the innocence of its author’s self-revelation.”
–
+
The Times [London] Lit Sup p198 Ap 26
‘17 950w
BLEACKLEY, HORACE WILLIAM. Life of John Wilkes. il *$5
(3½c) Lane 17-24876
This is a scholarly account, based to a great extent on original
documents, of the English politician, publicist and political agitator,
who, “from 1764 to 1780 was the central figure not only of London
but of England.” (Sat R)
“Mr Bleackley has executed his task in a scholarly and interesting
manner, and his book forms an acceptable supplement to Lecky. ...
The numerous illustrations are a valuable feature of the book.”
+ Ath p419 Ag ‘17 160w
“Remarkable as the career of John Wilkes confessedly was, and
undeniably interesting as this biography is, in spite of Mr
Bleackley’s literary skill its final impression is not good. If, as we
are told, none ‘of his contemporaries influenced more powerfully
the spirit of the age,’ that spirit must have been grossly immoral to
condone his immoral grossness.”
– + Lit D 55:44 N 17 ‘17 240w
“Mr Bleackley has found a subject well suited to his talent in this
profoundly interesting historical study.”
+ N Y Times 22:417 O 21 ‘17 550w
+ Outlook 117:184 O 3 ‘17 50w
“This is one of the best biographies that have appeared for a long
time. Mr Bleackley has read and rifled nearly all the memoirs,
manuscripts, diaries, letters, newspapers of the period, and we
have not read a more erudite and conscientious treatment of a
controversial subject. ... He treats his hero with the benevolent
impartiality of the scientific historian.”
* + + Sat R 124:sup4 Jl 7 ‘17 1200w
“Mr Bleackley has given us a most interesting book. ... He has put
before himself the task of proving that a man who wrought so
much for liberty was himself a great man and a lover of the cause
for which he fought. We allow that Wilkes had genius of a sort, but
doubt whether he really cared two pins about the rights of
constituencies, or the illegality of general warrants, or the liberty
of the press. He fought for John Wilkes, and in fighting for him
achieved results of wide constitutional importance.”
* Spec 119:167 Ag 18 ‘17 1500w
“The language is journalistic. ... As a picture of 18th-century
England in its most corrupt and licentious phases the book has
some historical value, though it is too often written in the language
of gossip rather than history. ... The book has its faults—
particularly its emphasis upon Wilkes’s mistresses—but the
evidence is well documented. ... It is to be regretted that a career
so closely connected with American independence should be
treated to so great an extent as the subject of a record of private
vices. ... There is much biographical and historical matter in it of
genuine interest.”
– + Springf’d Republican p15 S 23 ‘17 1050w
“Mr Bleackley enumerates a good many of those who have
included Wilkes in their historical canvases. ... An essay by Fraser
Rae preceded Trevelyan’s description in his rainbow-tinted history
of Charles James Fox, and later came a biography in two volumes
by Percy Fitzgerald. Praise is reiterated of the excellent monograph
by J. M. Rigg in the ‘Dictionary of national biography’; but so far as
we see, no mention is made of by far the most judicial and
philosophic account of the transactions in which Wilkes was
conspicuous in Lecky’s ‘History of England in the eighteenth
century.’ ... His style is a little arid, but his ripened power of
research, his patience and diligence in sifting material, combine to
furnish a truly notable portrait. ... The historical background shows
a great advance upon any of his preceding work. ... The volume is
very well finished, the references (largely to Mss.) overwhelming,
the illustrations well-chosen, the errata scrupulous, the index
complete.”
* + The Times [London] Lit Sup p318 Jl 5 ‘17 2050w
BLUMENTHAL, DANIEL.[2]
Alsace-Lorraine. map *75c (7c) Putnam
943.4
“A study of the relations of the two provinces to France and to
Germany and a presentation of the just claims of their people.”
The author, an Alsatian by birth, has been deputy from Strasbourg
in the Reichstag, senator from Alsace-Lorraine, and mayor of the
city of Colmar. The book has an introduction by Douglas Wilson
Johnson of Columbia university, who says, “The problem of Alsace-
Lorraine is in a very real sense an American problem.”
“There is no more moving recent plea for the restoration of Alsace-
Lorraine than this little volume.”
+ Boston Transcript p6 Ja 9 ‘18 200w
BLUNDELL, MARY E. (SWEETMAN) (MRS FRANCIS
BLUNDELL) (M. E. FRANCIS, pseud.). Dark Rosaleen. *$1.35
(1c) Kenedy A17-1416
A story of modern Ireland. In a study of the relationship between
two families, the author gives an epitome of the situation that
exists in Ireland between Catholics and Protestants. Hector
McTavish’s father is a fanatical Scotch Presbyterian, but since he
grows up in a Catholic community, Hector makes friends with the
children of that church. Patsy Burke is his dearest playmate and
Honor Burke is to him a foster mother. Fearing these influences,
the father takes the boy away and, when he returns thirteen years
later, it is to find Patsy an ordained priest and Patsy’s little sister,
Norah, grown into sweet womanhood. The love between Hector
and Norah, their marriage and the birth of their child lead to
tragedy. But, in the child, the author sees a symbol of hope for the
new Ireland.
“The author has not written a thesis novel, but a touching tale of
what she feels and loves.”
+ Cath World 105:259 My ‘17 130w
“There is nothing intolerant in the spirit of this very thrilling book.”
+ N Y Times 22:166 Ap 29 ‘17 550w
BODART, GASTON, and KELLOGG, VERNON LYMAN. Losses of
life in modern wars; ed. by Harald Westergaard. *$2 Oxford
172.4 16-20885
“It is the function of the Division of economics and history of the
Carnegie endowment for international peace, under the direction
of Professor J. B. Clark, to promote a thorough and scientific
investigation of the causes and results of war. ... The first volume
resulting from these studies contains two reports upon
investigations carried on in furtherance of this plan. The first, by
Mr Gaston Bodart, deals with the ‘Losses of life in modern wars:
Austria-Hungary, France.’ The second, by Professor Vernon L.
Kellogg, is a preliminary report and discussion of ‘Military selection
and race deterioration.’ ... Professor Kellogg marshals his facts to
expose the dysgenic effects of war in military selection, which
exposes the strongest and sturdiest young men to destruction and
for the most part leaves the weaklings to perpetuate the race. He
cites statistics to prove an actual measurable, physical
deterioration in stature in France due apparently to military
selection. ... To these dysgenic aspects of militarism the author
adds the appalling racial deterioration resulting from venereal
diseases.”—Dial
Am Hist R 22:702 Ap ‘17 450w
+ A L A Bkl 13:196 F ‘17
“The work is a candid and sane discussion of both sides of this
very important aspect of militarism.”
+ Dial 61:401 N 16 ‘16 390w
“It would be difficult to exaggerate the importance of this original
and authoritative study into the actual facts of war.”
+ Educ R 52:528 D ‘16 70w
BOGARDUS, EMORY STEPHEN. Introduction to sociology. $1.50
University of Southern California press, 3474 University av., Los
Angeles, Cal. 302 17-21833
The author, who is professor of sociology in the University of
Southern California, offers this textbook as an introduction not only
to sociology in its restricted sense but to the entire field of the
social sciences. He presents the political and economic factors in
social progress not only from a sociological point of view but in
such a way that the student will want to continue along political
science or economic lines. It is the aim to stimulate and to direct
social interest to law, politics and business. He discusses the
population basis of social progress, the geographic, biologic and
psychologic bases as well; social progress as affected by genetic,
hygienic, recreative, economic, political, ethical, esthetic,
intellectual, religious, and associative factors. A closing chapter
surveys the scientific outlook for social progress.
“The advantage of Professor Bogardus’s method is that it brings to
bear in a simple, elementary way a great mass of pertinent facts.”
+ Dial 63:596 D 6 ‘17 150w
“The author does not, perhaps, distinguish clearly enough between
the sociological and the social points of view.” B. L.
+ — Survey 39:202 N 24 ‘17 240w
BOGEN, BORIS D. Jewish philanthropy; an exposition of principles
and methods of Jewish social service in the United States. *$2
Macmillan 360 17-15182
“The entire field of Jewish social service, both theoretic and
practical, is here discussed by a man who has been engaged in it
for about twenty-five years as educator, settlement head, relief
agent, and now field secretary of the National conference of
Jewish charities. ... The author points out that the pre-eminent
Jewish contribution to social service in this country is the
‘federation idea.’ By federating their charities, the Jews succeeded
in uniting communities, in raising more funds to carry on work
more adequately; they have prevented duplication of effort,
conserved energies and eliminated waste.” (Survey) The book has
an eight-page bibliography.
A L A Bkl 14:40 N ‘17
“No one perhaps is better qualified to discuss with authority the
subject of Jewish philanthropy than Dr Boris D. Bogen, of
Cincinnati. Himself a Russian by birth and early training, he speaks
concerning the immigrant with a thoroughness born of intimate
and empiric knowledge, supplemented by years of accurate and
exhaustive study.” A. A. Benesch
+ Am Pol Sci R 11:785 N ‘17 580w
“Once in a while the author makes a sweeping statement without
citing authorities. There are two serious drawbacks to the
usefulness of the work. One is the constant use of Hebrew words,
which are usually not translated or are mistranslated. Any future
work of this character should have a glossary of such Hebrew
words as part of its appendix. The other is that the chapter on
Standards of relief, which ought to have been the most important,
received the most scant attention. But all in all, the book is a
splendid piece of work.” Eli Mayer
+ — Ann Am Acad 74:303 N ‘17 400w
Cleveland p107 S ‘17 10w
+ Ind 92:109 O 13 ‘17 110w
“The book contains a great mass of information regarding various
Jewish philanthropies, although no attempt is made to present
statistical matter in a formal way.”
R of Rs 56:441 O ‘17 50w
“Dr Bogen’s book is wide in scope and will be found useful as a
handbook for non-Jewish as well as for Jewish social workers.”
Oscar Leonard
+ Survey 38:532 S 15 ‘17 500w
BOIRAC, ÉMILE. Our hidden forces (“La psychologie inconnue”);
an experimental study of the psychic sciences; tr. and ed., with
an introd., by W. de Kerlor. il *$2 (3c) Stokes 130 17-13485
This work, translated from the French, is based on investigations in
a field to which scientists of note in the United States, with the
exception of William James, have given little attention, that of
psychic phenomena. In France, on the other hand, the translator
assures us, such investigations have made such progress as to
gain national recognition. The book is based on experimental
studies and consists of collected papers that were written during
the period from 1893 to 1903. Animal magnetism in the light of
new investigations, Mesmerism and suggestion, The provocation of
sleep at a distance, The colors of human magnetism, The scientific
study of spiritism, etc., are among the subjects.
“Professor Émile Boirac, rector of the Academy of Dijon, France,
and author of this book, is an acknowledged leader of thought in
matters both psychological and psychic. He has devoted many
years to studying the problems pertaining to life and death, and
this present book was awarded the prize in a contest to which
many of the leading psychologists contributed. ... Though a
scientific book, it is not without attraction for the lay reader.”
+ Boston Transcript p7 Je 13 ‘17 320w
Cleveland p91 Jl ‘17 30w
N Y Br Lib News 4:93 Je ‘17
+ R of Rs 56:106 Jl ‘17 80w
BOLIN, JAKOB. Gymnastic problems; with an introd. by Earl
Barnes. il *$1.50 (4c) Stokes 613.7 17-12150
This book by the late Professor Bolin of the University of Utah has
been prepared for publication by a group of his associates, who
feel that the work is “one of the most important contributions to
the subject of gymnastics which has been written in English.” In
the first chapter the author discusses the relation of gymnastic
exercise to physical training in general. His own position is that the
aim of gymnastics is hygienic in a special sense, its object being to
counteract the evils of one sided activity. The remaining chapters
are devoted to: The principle of gymnastic selection; The principle
of gymnastic totality; The principle of gymnastic unity; The
composition of the lesson; Progression; General considerations of
method.
“Of value to all teachers of physical education and to those
interested in healthful efficiency.”
+ A L A Bkl 14:10 O ‘17
BONNER, GERALDINE (HARD PAN, pseud.). Treasure and
trouble therewith. il *$1.50 (1½c) Appleton 17-21974
“After the opening scene, which pictures a hold-up and robbery of
a Wells-Fargo stage coach in the California mountains, the story
drops into more conventional lines of romance. The robbery, which
is the act of two rough prospectors, is the prelude to the social
experiences in San Francisco of a familiar type of cosmopolitan
adventurer. He is little better than a tramp when he discovers the
robbers’ cache. He makes off with the gold and conceals it near
San Francisco. Being well-born and educated, though thoroughly
unscrupulous, he finds an easy entrance to San Francisco society.”
(Springf’d Republican) The rest of the book gives the story of his
life in the city. The California earthquake of 1906 plays an
important part in the story.
+ A L A Bkl 14:59 N ‘17
“Geraldine Bonner has a good plot in ‘Treasure and trouble
therewith,’ although not an especially attractive one. ... All her
pictures of California are vivid and sympathetic, but the character
drawing is unskilful.”
+ — N Y Evening Post p3 O 13 ‘17 80w
“Miss Bonner has endeavored, with commendable success, to
combine realism with the stirring incidents and dramatic situations
of the story of plot and action. Especially good are the chapters
which deal with the earthquake.”
+ N Y Times 22:311 Ag 26 ‘17 770w
“In spite of the complete lack of plausibility, the book affords a
certain measure of diversion.”
– + Springf’d Republican p15 S 16 ‘17 300w
BOSANKO, W. Collecting old lustre ware. (Collectors’ pocket ser.) il
*75c (3½c) Doran 738 A17-1002
The editor in his preface says that he believes this to be the first
book on old English lustre ware ever published. He adds: “Yet
there are many collectors of old lustre ware; it still abounds, there
is plenty of it to hunt for, and prices are not yet excessive. By the
aid of this informative book and the study of museum examples a
beginner may equip himself well, and may take up this hobby
hopefully, certain of finding treasures.” There are over forty-five
illustrations.
A L A Bkl 13:436 Jl ‘17
“Simple, practical handbook.”
+ Cleveland p97 Jl ‘17 20w
N Y Br Lib News 5:75 My ‘17 20w
+ R of Rs 56:220 Ag ‘17 50w
BOSANQUET, BERNARD. Social and international ideals. *$2.25
Macmillan 304 (Eng ed 17-28213)
“This volume is a collection of essays, reviews, and lectures, all of
which, with one exception, were published before the war, and
most of which on the face of them reveal that fact. ... Though the
contents of the volume seem at first sight to be fortuitously put
together, there runs through them unity of spirit, thought,
purpose, and manner.” (The Times [London] Lit Sup Jl 12 ‘17)
“Most of the pages (14 out of 17 are reprinted from the Charity
Organization Review) discuss the principles which should govern
our handling of social problems with the view of displaying ‘the
organizing power which belongs to a belief in the supreme values
—beauty, truth, kindness, for example—and how a conception of
life which has them for its good is not unpractical.’” (The Times
[London] Lit Sup Je 21 ‘17)
“We may single out, as of special importance in this new volume,
Mr Bosanquet’s idea of the growth of individuality and his idea of
the structure of political society. In the chapter on ‘Optimism’ he
points out that the mistake of its opponents is the acceptance of
their momentary experience as final. ... Criticism, confined to a few
sentences, must obviously be inadequate. ... If there are omissions
in Mr Bosanquet’s analysis of fact, his ideal also appears to be too
simple.”
+ Ath p398 Ag ‘17 950w
“It is a great privilege to listen to a wise man and a real logician,
who is at once a wit and a humanitarian. Dr Bosanquet was not for
nothing a fellow in moderations. The whole book is full of sound
common sense.”
+ Boston Transcript p8 Ja 19 ‘18 600w
Cleveland p135 D ‘17 60w
“Written in a strain of reasoned optimism.” M. J.
+ Int J Ethics 28:291 Ja ‘18 200w
“Here we have the precious kernel of wisdom in the hard nut of
paradox. No doubt, justice and kindness, beauty and truth are the
things that matter most, and it is no small service to direct our
thoughts once again to them. But how to embody and realize them
in the maze and tangle of our actual world, that is a problem
apparently too great for any single thinker.” R. F. A. H.
+ — New Repub 13:353 Ja 19 ‘18 1850w
+ The Times [London] Lit Sup p299 Je 21 ‘17 130w
“If we are tempted to say that these pages show his aptitude for
making simple things look difficult, they reveal also the meaning of
life. They disclose to those living the humblest of lives that they
may enter if they will—the door is ever open—to regions the
highest and purest. ... If the book contained nothing else than
some of the observations in the last chapters as to true pacifism
and patriotism, it would make every reader its debtor.”
+ The Times [London] Lit Sup p326 Jl 12 ‘17 1800w
BOSSCHÈRE, JEAN DE, il. Christmas tales of Flanders. il *$3 Dodd
398
Popular Christmas tales current in Flanders and Brabant, translated
by M. C. O. Morris, and spiritedly illustrated partly in color and
partly in black and white by Jean de Bosschère.
“The engaging color-work of Mr de Bosschère is full of brilliancy,
and makes of this Christmas book a rich gift from a country now
sorely stricken.”
+ Lit D 55:53 D 8 ‘17 50w
“A very charming book for young people, and so interestingly
illustrated that their elders will find it almost equally attractive. All
the pictures have humor, dexterity, force, and appreciation of
character.”
+ N Y Times 22:514 D 2 ‘17 70w
“This handsome and well-illustrated book is one of the most
attractive we have seen this season. ... Some of the drawings
seem to us a little scratchy, but they will all be clear to a child.
They lack the tortured straining after originality and the purposeful
ugliness which modern art has occasionally thrust upon the
nursery.”
+ — Sat R 124:sup10 D 8 ‘17 280w
Spec 119:sup628 D 1 ‘17 330w
“The stories are sometimes abrupt in their inconclusiveness;
homely and almost entirely unromantic. Sometimes a disagreeable
hint of cynicism obtrudes itself; but this may have been left on our
minds by the association with M. de Bosschère’s illustrations. They
are completely unsuited to their purpose.”
– + The Times [London] Lit Sup p621 D 13 ‘17 200w
BOSTWICK, ARTHUR ELMORE. American public library. il *$1.75
(2c) Appleton 020 17-17641
This is a new edition, revised and brought up to date, of a book
written by the librarian of the St Louis public library and first
published seven years ago. “As a matter of mechanical necessity,
no doubt, the revisions and additions have limited themselves to
such changes as could be made, here and there, without requiring
any considerable resetting or recasting of the pages, so that the ...
Operating OpenShift: An SRE Approach to Managing Infrastructure, 1st Edition, Rick Rackow

  • 5. Rick Rackow & Manuel Dewald. Operating OpenShift: An SRE Approach to Managing Infrastructure
  • 6. OPENSHIFT AND KUBERNETES
“An essential companion for anyone deploying and maintaining an OpenShift environment.” —Andrew Block, Distinguished Architect, Red Hat
“Should be a mandatory read for every team running OpenShift workloads in production.” —Bilgin Ibryam, Coauthor of Kubernetes Patterns, Product Manager at Diagrid
Operating OpenShift. US $59.99 / CAN $74.99. ISBN: 978-1-098-10639-3. Twitter: @oreillymedia; linkedin.com/company/oreilly-media; youtube.com/oreillymedia
Kubernetes has gained significant popularity over the past few years, with OpenShift as one of its most mature and prominent distributions. But while OpenShift provides several layers of abstraction over vanilla Kubernetes, this software can quickly become overwhelming because of its rich feature set and functionality. This practical book helps you understand and manage OpenShift clusters, from minimal deployment to large multicluster installations.
Principal site reliability engineers Rick Rackow and Manuel Dewald, who worked together on Red Hat’s managed OpenShift offering for years, provide valuable advice to help your teams operate OpenShift clusters efficiently. Designed for SREs, system administrators, DevOps engineers, and cloud architects, Operating OpenShift encourages consistent and easy container orchestration and helps reduce the effort of deploying a Kubernetes platform. You’ll learn why OpenShift has become highly attractive to enterprises large and small.
  • Learn OpenShift core concepts and deployment strategies
  • Explore multicluster OpenShift Container Platform deployments
  • Administer OpenShift clusters following best practices
  • Learn best practices for deploying workloads to OpenShift
  • Monitor OpenShift clusters through state-of-the-art concepts
  • Build and deploy Kubernetes operators to automate administrative tasks
  • Configure OpenShift clusters using a GitOps approach
Rick Rackow is a seasoned professional who’s worked on cloud and container adoption throughout his career. As site reliability engineer on Red Hat’s OpenShift Dedicated SRE team, Rick managed and maintained countless OpenShift clusters at scale and ensured their reliability by developing and following the best practices in this book.
Manuel Dewald has been a software engineer on many software projects, from big enterprise software to distributed open source software composed of independent components. He is lead SRE on the OpenShift Dedicated team at Red Hat, operating OpenShift clusters and automating the cluster lifecycle.
Rick Rackow and Manuel Dewald

Operating OpenShift
An SRE Approach to Managing Infrastructure

Boston • Farnham • Sebastopol • Tokyo • Beijing
Operating OpenShift
by Rick Rackow and Manuel Dewald
Copyright © 2023 Rick Rackow and Manuel Dewald. All rights reserved.
Printed in the United States of America.
Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.

Acquisitions Editor: John Devins
Development Editor: Corbin Collins
Production Editor: Ashley Stussy
Copyeditor: Piper Editorial Consulting, LLC
Proofreader: Judith McConville
Indexer: Amnet Systems LLC
Interior Designer: David Futato
Cover Designer: Karen Montgomery
Illustrator: Kate Dullea

November 2022: First Edition
Revision History for the First Edition: 2022-11-07: First Release
See http://oreilly.com/catalog/errata.csp?isbn=9781098106393 for release details.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Operating OpenShift, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.

The views expressed in this work are those of the authors and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk.
If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
  • 9. To Linus — R.R. To Marie — M.D.
Table of Contents

Preface

1. Introduction
   Traditional Operations Teams
   How Site Reliability Engineering Helps
   OpenShift as a Tool for Site Reliability Engineers
   Individual Challenges for SRE Teams

2. Installing OpenShift
   OKD, OCP, and Other Considerations
      OKD
      OCP
      OSD, ROSA, and ARO
   Local Clusters with OpenShift Local
   Planning Cluster Size
      Instance Sizing Recommendations
      Node Sizing Recommendations
      Master Sizing Recommendations
      Infra Nodes
   Basic OpenShift Installations
      Installer-Provisioned Infrastructure
      Self-Provisioned Infrastructure
   Summary

3. Running Workloads on OpenShift
   Deploying Code
      Deploying Existing Container Images
      Deploying Applications from Git Repositories
   Accessing Deployed Services
      Accessing Services from Other Pods
      Distribution of Requests
   Exposing Services
      Route by Auto-generated DNS Names
      Route by Path
      External Load Balancers
   Securing Services with TLS
      Specifying TLS Certificates
      Redirecting Traffic to TLS Route
      Let's Encrypt Trusted Certificates
      Encrypted Communication to the Service
   Summary

4. Security
   Cluster Access
   Role-Based Access Control
      Roles and ClusterRoles
      RoleBindings and ClusterRoleBindings
      CLI
      ServiceAccounts
   Threat Modelling
   Workloads
   Summary

5. Automating Builds
   OpenShift Image Builds
      Docker Build
      Source to Image (S2I) Build
      Custom S2I Images
   Red Hat OpenShift Pipelines
      Overview
      Install Red Hat OpenShift Pipelines
      Setting Up the Pipeline
      Turning the Pipeline into Continuous Integration
   Summary

6. In-Cluster Monitoring Stack
   Cluster Monitoring Operator
   Prometheus Operator
   User Workload Monitoring
   Visualizing Metrics
      Console Dashboards
      Using Grafana
   Summary

7. Advanced Monitoring and Observability Strategies
   Service Oriented Monitoring
      Service Level Indicators
      Service Level Objectives
      Tools
   Logging
      ClusterLogging
      Log Forwarding
      Loki
   Visualization
      Installation
      Creating a Grafana Instance
      Data Source
      Dashboards
   Summary

8. Automating OpenShift Cluster Operations
   Recurring Operations Tasks
      Application Updates
      Certificate Renewals
      OpenShift Updates
      Backups
   Automating Recurring Operations Tasks
      Persistence
      Creating Snapshots
      Using CronJobs for Task Automation
   Cluster Configuration
   Manage Cluster Configuration with OpenShift GitOps
      Installing OpenShift GitOps
      Managing Configuration with OpenShift GitOps
      Managing Configuration of Multiple Clusters with OpenShift GitOps
   Summary

9. Developing Custom Operators to Automate Cluster Operations
   Operator SDK
   Operator Design
   Bootstrapping the Operator
   Setting Up a CA Directory for Development
   Designing the Custom Resource Definition
   Installing the CustomResourceDefinition
   Local Operator Development
   The Reconcile Function
   Deploying the Operator
   Creating and Updating OpenShift Resources
   Specifying RBAC Permissions
   Routing Traffic to the Operator
   Adding Additional Controllers
   Updating Resource Status
   Summary

10. Practical Patterns for Operating OpenShift Clusters at Scale
   Cluster Lifecycle
   Cluster Configuration
   Logging
   Monitoring
   Alerting
   Automation
   On Call
      Primary On Call
      Backup On Call
      Shift Rotation
      Ticket Queue
   Incident Management
      When to Declare an Incident
      Inform the Customer
      Define Roles
      Incident Timeline
      Document the Process
      Postmortem
   Accessing OpenShift Clusters
   The Stage Is Yours

Index
Preface

In late December 2020, a Slack notification from Rick popped up on Manuel's laptop. "You know what?" it said, "You and I, we're going to write a book!" "What are we going to write about?" "Operating OpenShift!" Fast-forward almost two years, and that very book is now before your eyes.

The backstory is that over the past several years, more and more people reached out to us to ask if we would be able to share some of our OpenShift insights with them—to help them operate their OpenShift clusters more efficiently. At that time the two of us worked as site reliability engineers for OpenShift clusters at Red Hat. Efficiently operating OpenShift clusters was indeed our day-to-day challenge, and we had accumulated a lot of knowledge and expertise. We used that experience to create this book.

We divided the 10 chapters of this book according to our personal interests and depth of experience. Chapters 1, 3, 5, 8, 9, and 10 are written by Manuel. Chapters 2, 4, 6, and 7 are by Rick.

We learned a lot more about OpenShift in the past two years working on the book. Even with our experience operating OpenShift at Red Hat, many of the tools for operating and automating operations still required further research and experimentation. We've done our best to compile the results of our experiments into simple steps that you can follow to get started. Of course, you'll need to adjust the examples to apply them to your specific needs as soon as you start using the tools.

All the examples use the simplified scenario of an arcade gaming platform that you'll deploy to your cluster as you follow the book. You'll find the resources of this example workload in the corresponding GitHub repository.
Conventions Used in This Book

The following typographical conventions are used in this book:

Italic
   Indicates new terms, URLs, email addresses, filenames, and file extensions.
Constant width
   Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.
Constant width bold
   Shows commands or other text that should be typed literally by the user.
Constant width italic
   Shows text that should be replaced with user-supplied values or by values determined by context.

This element signifies a tip or suggestion.

This element signifies a general note.

This element indicates a warning or caution.

Using Code Examples

Supplemental material (code examples, exercises, etc.) is available for download at https://github.com/OperatingOpenshift.

If you have a technical question or a problem using the code examples, please send emails to bookquestions@oreilly.com.

This book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not
need to contact us for permission unless you're reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.

We appreciate, but generally do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: "Book Title by Some Author (O'Reilly). Copyright 2012 Some Copyright Holder, 978-0-596-xxxx-x."

If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions@oreilly.com.

O'Reilly Online Learning

For more than 40 years, O'Reilly Media has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise through books, articles, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, visit http://oreilly.com.

How to Contact Us

Please address comments and questions concerning this book to the publisher:

O'Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-998-9938 (in the United States or Canada)
707-829-0515 (international or local)
707-829-0104 (fax)

We have a web page for this book, where we list errata, examples, and any additional information. You can access this page at https://oreil.ly/operating-openshift-1e.
Email bookquestions@oreilly.com to comment or ask technical questions about this book.
For news and information about our books and courses, visit https://oreilly.com.
Find us on LinkedIn: https://linkedin.com/company/oreilly-media
Follow us on Twitter: https://twitter.com/oreillymedia
Watch us on YouTube: https://www.youtube.com/oreillymedia

Acknowledgments

Over the past two years, a lot of people have been supportive of our idea for this book, and we would like to thank everyone who helped us stay motivated and finish this work.

We'd like to thank the following people who worked with us from the O'Reilly team: John Devins helped us finalize the book proposal and convinced the right people that it's worth investing in the topic. Corbin Collins, our development editor, was always the first to review our raw material and patiently corrected our formatting and grammar mistakes. He also always had an eye on our roadmap and reached out in time if adjustments needed to be made. Along with him, we also want to thank Sara Hunter and Ashley Stussy for their thorough reviews and incredibly helpful feedback. Our technical editors Andrew Block and Bilgin Ibryam were incredibly helpful and contributed lots of good ideas to improve the content. They even mentioned alternatives that we'd overlooked in our research.

A lot of the research done for this book involved chatting with the right people, both inside Red Hat and in the open source communities, who have been hard at work on the respective components covered in this book. We'd like to thank everyone who helped us get things up and running.

Finally, we want to thank our families, Stephanie, Linus, Julia, and Marie, who have been supportive of the idea from the beginning and helped us free up time to focus on writing this book and put up with our moods when things didn't go too well. This book would not exist without you.
CHAPTER 1
Introduction

Manuel Dewald

Operating distributed software is a difficult task. It requires humans with a deep understanding of the system they maintain. No matter how much automation you create, it will never replace highly skilled operations personnel.

OpenShift is a platform, built to help software teams develop and deploy their distributed software. It comes with a large set of tools that are built in or can be deployed easily. While it can be of great help to its users and can eliminate a lot of traditionally manual operations burdens, OpenShift itself is a distributed system that needs to be deployed, operated, and maintained.

Many companies have platform teams that provide development platforms based on OpenShift to software teams so the maintenance effort is centralized and the deployment patterns are standardized across the organization. These platform teams are shifting more and more into the direction of Site Reliability Engineering (SRE) teams, where software development practices are applied to operations tasks. Scripts are replaced by proper software solutions that can be tested more easily and deployed automatically using continuous integration/continuous delivery (CI/CD) systems. Alerts are transformed from simple cause-based alerts like "a high amount of memory is used on Virtual Machine 23" into symptom-based alerts based on Service Level Objectives (SLO) that reflect customer experience, like "processing of requests takes longer than we expect it to."

OpenShift provides all the tools you need to run software on top of it with SRE paradigms, from a monitoring platform to an integrated CI/CD system that you can use to observe and run both the software deployed to the OpenShift cluster, as well as the cluster itself.
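As an illustration of the symptom-based approach, the following PrometheusRule sketch alerts on slow request processing (what users experience) rather than on memory usage of a particular machine (a possible cause). The metric name, service label, and thresholds are illustrative assumptions, not taken from this book:

```yaml
# Hypothetical symptom-based alert: fires on slow request processing
# (customer experience), not on node memory usage (a cause).
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: arcade-latency-slo
  namespace: arcade
spec:
  groups:
    - name: slo.rules
      rules:
        - alert: RequestProcessingTooSlow
          # 95th percentile latency over 5 minutes above 500 ms
          expr: |
            histogram_quantile(0.95,
              sum(rate(http_request_duration_seconds_bucket{service="arcade"}[5m])) by (le)
            ) > 0.5
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: Processing of requests takes longer than we expect it to.
```

Chapter 6 covers the in-cluster monitoring stack that evaluates rules like this one, and Chapter 7 discusses choosing service level indicators and objectives.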
But building the automation, implementing a good alerting strategy, and finally, debugging issues that occur when operating an OpenShift cluster are still difficult tasks that require skilled operations or SRE staffing.
Even in SRE teams, traditionally a good portion of the engineers' time is dedicated to manual operations tasks, often called toil. The operations time should be capped, though, as the main goal of SRE is to tackle the toil with software engineering.

O'Reilly published a series of books written by site reliability engineers (SREs) at Google, related to the core SRE concepts. We encourage you to take a look at these books if you're interested in details about these principles. In the first book, Site Reliability Engineering, the authors mostly speak from their experience as SREs at Google, suggesting limiting the time working on toil to 50% of an engineering team's time.

Traditional Operations Teams

The goal of having an upper limit for toil is to avoid shifting back into an operations team where people spend most of the time working down toil that accumulates with both the scale of service adoption and software advancement.

Part of the accumulating toil while the service adoption grows is the number of alerts an operations team gets if the alerting strategy isn't ready for scaling. If you're maintaining software that creates one alert per day per tenant, keeping one engineer busy running 10 tenants, you will need to scale the number of on-call engineers linearly with the number of tenants the team operates. That means in order to double the number of tenants, you need to double the number of engineers dedicated to reacting to alerts. While working down the toil and investigating the issues, these engineers will effectively not be able to work on reducing the toil created by the alerts.

In a traditional operations team that runs OpenShift as a development platform for other departments of the company, onboarding new tenants is often a manual task. It may be initiated by the requesting team opening a ticket that asks for a new OpenShift cluster.
Someone from the operations team will pick up the ticket and start creating the required resources, kick off the installer, configure the cluster so the requesting team gets access, and so forth. A similar process may be set up for turning down clusters when they are not needed anymore. Managing the lifecycle of OpenShift clusters can be a huge source of toil, and as long as the process is mainly manual, the amount of toil will scale with the adoption of the service.

In addition to being toil-packed processes, manual lifecycle and configuration management are error prone. When an engineer runs the same procedure several times during a week, as documented in a team-managed wiki, chances are they will miss an important step or pass a wrong parameter to any of the scripts, resulting in a broken state that may not be discovered immediately.

When managing multiple OpenShift clusters, having one that is slightly different from the others due to a mistake in the provisioning or configuration process, or even due to a customer request, is dangerous and usually generates more toil.
Automation that the team generated over time may not be tailored to the specifics of a single snowflake cluster. Running that automation may just not be possible, causing more toil for the operations team. In the worst case, it may even render the cluster unusable.

Automation in a traditional ops team can often be found in a central repository that engineers check out on their devices so they can run the scripts they need as part of working on a documented process. This is problematic not only because it still needs manual interaction and hence doesn't scale well, but also because engineers' devices are often configured differently. They can differ in the OS they use, adding the need to support different vendors in the tooling, for example by providing a standardized environment like a container environment to run the automation. But even then, the version of the scripts may differ from engineer to engineer, or a script may not have been updated when it should've been, for instance after a new version of OpenShift was released. Automated testing is something that is seldom implemented for operations scripts made to quickly get rid of a piece of toil. All this makes automation in scripts that run on developer machines brittle.

How Site Reliability Engineering Helps
Additionally, a set of integration tests can ensure it works as expected even when the environment changes, such as when a new version of OpenShift is released. Instead of proactively reacting to more and more requests from customers as the service adoption grows, the SRE team can provide a self-service process that can be used by their customers to provision and configure their clusters. This also reduces the risk of snowflakes, as less manual interaction is needed by the SRE team. What can and cannot be configured should be part of the UI provided to the customer, so requests to treat a single cluster differently from all the others should turn into a feature request for the automation or UI. That way, it will end up as a supported state rather than a manual configuration update. To ensure that the alerting strategy can scale, SRE teams usually move from a cause-based alerting strategy to a symptom-based alerting strategy, ensuring that only How Site Reliability Engineering Helps | 3
  • 22. problems that risk impacting the user experience reach their pager. Smaller problems that do not need to be resolved immediately can move to a ticket queue to work on as time allows. Shifting to an SRE culture means allowing people to watch their own software, taking away the operations burden from the team one step at a time. It’s a shift that will take time, but it’s a rewarding process. It will turn a team that runs software someone else wrote into a team that writes and runs software they’re writing themselves, with the goal of automating the lifecycle and operations of the software under their control. An SRE culture enables service growth by true automation and observation of customer experience rather than the internal state. OpenShift as a Tool for Site Reliability Engineers This book will help you to utilize the tools that are already included with OpenShift or that can be installed with minimal effort to operate software and OpenShift itself the SRE way. We expect you to have a basic understanding of how containers, Kubernetes, and OpenShift work to be able to understand and follow all the examples. Fundamental concepts like pods will not be explained in full detail, but you may find a quick refresher where we found it helpful to understand a specific aspect of OpenShift. We show you the different options for installing OpenShift, helping you to auto‐ mate the lifecycle of OpenShift clusters as needed. Lifecycle management includes not only installing and tearing down clusters but also managing the configuration of your OpenShift cluster in a GitOps fashion. Even if you need to manage the configuration of multiple clusters, you can use Argo CD on OpenShift to manage the configuration of a multitude of OpenShift clusters. This book shows you how to run workloads on OpenShift using a simple example application. You can use this example to walk through the chapters and try out the code samples. 
However, you should be able to use the same patterns to deploy more serious software, like automation that you built to manage OpenShift resources—for example, an OpenShift operator. OpenShift also provides the tools you need to automate the building and deployment of your software, from simple automated container builds, whenever you check in a new change, to version control, to full-fledged custom pipelines using OpenShift Pipelines. In addition to automation, the SRE way of managing OpenShift clusters includes proper alerting that allows you to scale. OpenShift comes with a lot of built-in alerts that you can use to get informed when something goes wrong with a cluster. This book will help you understand the severity levels of those alerts and show you how 4 | Chapter 1: Introduction
  • 23. to build your own alerts, based on metrics that are available in the OpenShift built-in monitoring system. Working as OpenShift SREs at Red Hat together for more than two years, we both learned a lot about all the different kinds of alerts that OpenShift emits and how to investigate and solve problems. The benefit of working close to OpenShift Engineer‐ ing is that we can even contribute to alerts in OpenShift if we find problems with them during our work. Over time, a number of people have reached out, being interested in how we work as a team of SREs. We realize there is a growing interest in all different topics related to our work: From how we operate OpenShift to building custom operators, people show interest in the topic at conferences or reach out to us directly. This book aims to help you take some of our learnings and use them to run Open‐ Shift in your specific environment. We believe that OpenShift is a great distribution of Kubernetes that brings a lot of additional comfort with it, comfort that will allow you to get started quickly and thrive at operating OpenShift. Individual Challenges for SRE Teams OpenShift comes with a lot of tools that can help you in many situations as a developer or operator. This book can cover only a few of those tools and does not aim to provide a full overview of all OpenShift features. Instead of trying to replicate the OpenShift documentation, this book focuses on highlighting the things we think will help you get started operating OpenShift. With more features being developed and added to OpenShift over time, it is a good idea to follow the OpenShift blog and the OpenShift documentation for a more holistic view of what’s included in a given release. Many of the tools this book covers are under active development, so you may find them behaving slightly differently from how they worked when this book was pub‐ lished. 
Each section references the documentation for a more detailed explanation of how to use a specific component. This documentation is usually updated frequently, so you can find up-to-date information there. When you use Kubernetes as a platform, you probably know that many things are automated for you already: you only need to tell the control plane how many resources you need in your deployment, and Kubernetes will find a node to place it. You don’t need to do a rolling upgrade of a new version of your software manually, because Kubernetes can handle that for you. All you need to do is configure the Kubernetes resources according to your needs. Individual Challenges for SRE Teams | 5
  • 24. OpenShift, being based on Kubernetes, adds more convenience, like routing traffic to your web service from the outside world: exposing your service at a specific DNS name and routing traffic to the right place is done via the OpenShift router. These are only a few of the tasks that used to be done by operations personnel but can be automated in OpenShift by default. However, depending on your specific needs and the environment you’re running OpenShift in, there are probably some very specific tasks that you need to solve on your own. This book cannot tell you step-by-step what you need to do in order to fully automate operations. If it were that easy to fit every environment, it would most probably be part of OpenShift already. So, please treat this book as an informing set of guidelines, but know that you will still need to solve some of the problems to make OpenShift fit your operations strategy. Part of your strategy will be to decide how and where you want to install OpenShift. Do you want to use one of the public cloud providers? That may be the easiest to achieve, but you may also be required to run OpenShift in your own data center for some workloads. The first step for operating OpenShift is setting it up, and when you find yourself in a place where you’ll need to run multiple OpenShift clusters, you probably want to automate this part of the cluster lifecycle. Chapter 2 discusses different ways to install an OpenShift cluster, from running it on a developer machine, which can be helpful to develop software that needs a running OpenShift cluster during development, to a public reachable OpenShift deployment using a public cloud provider. 6 | Chapter 1: Introduction
  • 25. CHAPTER 2 Installing OpenShift Rick Rackow As with any piece of software, the story of OpenShift starts by installing it. This chapter walks you through some scenarios that reach from small to scale. This chapter focuses on a single cluster installation and explores the limits of different sizes of clusters. However, at some point, scaling a cluster may either not be enough or may not serve the use case very well. In those cases you will want to look into multicluster deployments. Those are covered as part of Chapter 10. OKD, OCP, and Other Considerations OpenShift can be considered as a distribution of Kubernetes, and it is available in different ways. We will go over each of them in this section, draw a small comparison, and point out how they relate to one another. OKD OKD is not an acronym. Before its rebranding, OKD used to be called OpenShift Origin. Now it’s OKD, and that is how it should be referred to, for trademark reasons. Namely, the Linux Foundation does not allow Red Hat to use “Kubernetes” in products or projects further than referencing it. OKD is a distribution of Kubernetes optimized for continuous application develop‐ ment and multi-tenant deployment. OKD also serves as the upstream code base upon which Red Hat OpenShift Online and Red Hat OpenShift Container Platform are built. —docs.okd.io In other words, OKD is where upstream Kubernetes is vendored and the core of OpenShift starts to exist. It serves as the base for everything else that is OpenShift. 7
  • 26. OCP OCP stands for OpenShift Container Platform. This is what people (especially inside Red Hat) most commonly mean when they mention OpenShift. OCP is positioned downstream of OKD. Different support levels are available. You can try it out for free during an evaluation period. All you need is a Red Hat account. It is not required for you to make any purchase of a Red Hat product or support to follow this book. OCP is what is covered in this book. If there is a difference between how OCP and OKD work, we default to OCP. OSD, ROSA, and ARO In addition to a self-hosted and self-installed OpenShift, Red Hat also offers OpenShift-as-a-Service as a fully managed offering on Amazon Web Services, Micro‐ soft Azure, and Google Cloud Platform. We don’t go into much detail with those, as you wouldn’t really need to read this book if you were to buy a subscription for any of those, but for future reference, the terminology is: Acronym Name Available On OSD OpenShift Dedicated AWS, GCP ROSA Red Hat OpenShift Service on AWS AWS ARO Azure Red Hat OpenShift Azure All of those are viable options for anyone who wants to run production workloads on OpenShift as they are all very closely connected to one another with direct dependencies. The dependency tree is OKD ⇒ OCP ⇒ OSD, ROSA, ARO. Which one you decide on depends on your needs in terms of support, environment, ease of use, ease of operation, and cost per cluster. We decided to default to OCP for this book because it strikes a balance between upstream and downstream position. It is more feature complete than OKD and offers support but not to a level of a fully managed solution like OSD or the other managed solutions. Local Clusters with OpenShift Local OpenShift Local is the easiest way to launch a full OpenShift cluster locally. If you have touched Kubernetes before, you have probably heard of Minikube and Open‐ Shift Local, the OpenShift equivalent. Its developers describe it as “OpenShift 4 on your laptop”. 
In fact, you can install it not just on laptops but almost anywhere: workstations, cloud VMs, and, of course, laptops. At its core, OpenShift Local is a virtual machine that serves as both OpenShift worker and master.
OpenShift Local is ephemeral by nature and should not be used for production use cases.

The documentation is your best friend. Make sure to consult it whenever you get stuck. It is the condensed start-to-finish guide for OpenShift Local, and it's open source. That means it's frequently updated, and you can contribute to it in case you find something along the way that you think isn't covered well enough yet.

Head on over to OpenShift Cluster Manager (OCM). We reference this page frequently throughout this chapter, specifically when we talk about the installers. It serves as your overview and starting point for all clusters that you have registered, regardless of whether they are OpenShift Local, OCP, or managed clusters. Sign in with your Red Hat account; if you don't have one, create one. You should be presented with a view similar to the one in Figure 2-1.

Figure 2-1. OCM start view

Click the Create cluster button and then choose "Local" in the next view.
Choose the platform that you want to install OpenShift Local on. Note that your current platform is auto-selected, based on your browser's user agent. The example shown in Figure 2-2 was created on macOS, so macOS is auto-selected.

Figure 2-2. OCM OpenShift Local view

Next, download the archive. Also download and save your pull secret by clicking the Download Pull Secret button, shown in Figure 2-2. After the download has finished, extract the archive into any location that is in your $PATH:

$ tar -xJvf crc-macos-amd64.tar.xz

Since you extracted into your $PATH, you can now use the included binaries right away. Two important files are packaged in the archive. The first is crc, the binary for interacting with your OpenShift Local cluster; its name is an acronym for CodeReady Containers, the former name of OpenShift Local. The second is oc, the OpenShift command-line utility for interacting with generally all OpenShift clusters; it is the equivalent of kubectl for Kubernetes. Those two files
together allow you to effectively set up and manage your OpenShift Local cluster, as well as interact with it afterward as you would with any other OpenShift cluster.

The basic interaction with your cluster will be to set it up. This can be done as follows:

$ crc setup
INFO Checking if podman remote executable is cached
INFO Checking if admin-helper executable is cached
INFO Caching admin-helper executable
INFO Uncompressing crc_hyperkit_4.7.5.crcbundle
crc.qcow2: 10.13 GiB / 10.13 GiB [-------------------] 100.00%
Your system is correctly setup for using CodeReady Containers.
You can now run 'crc start' to start the OpenShift cluster

During your first setup, you will be prompted to opt into sending telemetry data. This is a very limited set of on-cluster data that gets forwarded to Red Hat. You can see the full list of what gets sent online. Opting out of sending telemetry data can impact certain features in OpenShift Cluster Manager that rely on telemetry data.

Now that the setup is done, go ahead and launch the cluster with the following command:

$ crc start
INFO Checking if running as non-root
INFO Checking if podman remote executable is cached
INFO Checking if admin-helper executable is cached
INFO Checking minimum RAM requirements
INFO Checking if HyperKit is installed
INFO Checking if crc-driver-hyperkit is installed
INFO Checking file permissions for /etc/hosts
INFO Checking file permissions for /etc/resolver/testing
CodeReady Containers requires a pull secret to download content from Red Hat.
? Please enter the pull secret

At this point, paste the content of the pull secret you downloaded earlier. The pull secret allows you to pull the required images from Red Hat's container registry and associates the cluster with your Red Hat user, which ultimately also makes it show up in OpenShift Cluster Manager. Your OpenShift Local installation is complete after this step.
You can use this cluster to familiarize yourself with the oc command-line tool as well as the web console. Remember that this cluster is ephemeral. In case you need to restore a fresh installation state, you can start over with the following command:

$ crc delete && crc start
Planning Cluster Size

In this section you will deploy a multinode OpenShift cluster. There are some considerations to go over first, and one of the most important is planning the cluster's size and capacity.

Instance Sizing Recommendations

The OpenShift documentation has some pointers for how to size your clusters' instances. Let's examine what potential issues you can run into if you size them too small. You can safely assume that sizing too big is not an issue, other than cost. You will also find remarks about that throughout the following sections.

The instance size is directly related to your workloads, and masters and nodes behave similarly to some extent: the more workloads you plan to run, the bigger your instances have to become. However, the way they scale is fundamentally different. Whereas node capacity relates to workload almost linearly, master capacity doesn't. That means a cluster's capacity can be scaled out to a certain extent without any adjustments to the control plane.

Node Sizing Recommendations

To better illustrate the scaling behavior of nodes, let's look at an example. Think of a cluster of three nodes; ignore the masters for now. Each of them is an AWS m5.xlarge, so 4 vCPU and 8 GB of RAM. That gives you a total cluster capacity of 12 vCPU and 24 GB of RAM. If, in this hypothetical scenario, your workloads are perfectly distributed and use up all the resources, you will need either to scale nodes to bigger instances (vertically) or to add more of them (horizontally). Add another instance and the cluster capacity grows linearly: now you have 16 vCPU and 32 GB for your workloads.

The above scenario disregards a small but important detail: system-reserved and kube-reserved capacity. Since release 4.8, OpenShift can take care of that automatically.
To enable this functionality, add the following to the KubeletConfig:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: dynamic-node
spec:
  autoSizingReserved: true

It is possible to adjust the KubeletConfig post-install as well as before creating a cluster. Having OpenShift reserve the system-relevant resources automatically is recommended to ensure the cluster's functionality and should not be omitted unless there are explicit reasons to do so.
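The linear node-scaling arithmetic from the example above, minus a slice held back for system-reserved and kube-reserved capacity, can be sketched in a few lines. The instance figures mirror the m5.xlarge example; the 10% reserved fraction is purely an illustrative placeholder, not a value OpenShift guarantees.

```python
# Illustrative sketch: cluster capacity grows linearly with node count,
# minus a slice for system-reserved and kube-reserved capacity.
# The reserved fraction is a made-up placeholder, not an OpenShift default.

def cluster_capacity(nodes, vcpu_per_node=4, ram_gb_per_node=8,
                     reserved_fraction=0.1):
    """Return (usable vCPU, usable RAM in GB) for the whole cluster."""
    factor = 1 - reserved_fraction
    return (nodes * vcpu_per_node * factor,
            nodes * ram_gb_per_node * factor)

# Three m5.xlarge-sized nodes, ignoring reservations: 12 vCPU, 24 GB.
print(cluster_capacity(3, reserved_fraction=0.0))
# A fourth node grows capacity linearly: 16 vCPU, 32 GB.
print(cluster_capacity(4, reserved_fraction=0.0))
# With a 10% reserved slice, the allocatable share shrinks accordingly.
print(cluster_capacity(3))
```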
Think of it like this: 10 pods run on an m5.xlarge node, and each of those pods requests 0.4 CPU and actually uses it. That consumes the node's entire 4 vCPU, so your system processes get into trouble and the node becomes unstable. In the worst case, the node becomes unresponsive and crashes, the workloads on it get reallocated to other nodes, overloading those, and you end up with a chain reaction: your whole cluster becomes unresponsive. From that perspective it's a small price to pay to sacrifice some of that precious capacity to ensure cluster stability.

So we know that nodes scale linearly with their workloads and that we need to add a bit of reserved capacity on top of that. How big, then, should your nodes be? We have to consider three questions:

• How big is your single biggest workload?
• How much can you utilize a big node?
• How fast can you deploy more nodes?

The single biggest workload determines the minimum size of a node: if you can't fit that workload on a node, you have a problem, because you want to be able to deploy all your workloads to the cluster. The flip side of that is the efficiency you want to achieve. Having a node idle at only 50% usage all the time is really just burning money. You want to find the sweet spot between being able to fit all your workloads and making the most of your nodes. Together, these two points suggest using nodes that are as small as possible and, if you need more capacity, deploying another one, so that utilization per node stays high even with an extra node added to the cluster.

The factor that can make you go down a different path is time: the time it takes to deploy another node in case you hit capacity. Certain ways to deploy are faster than others.
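These three questions can be folded into a rough back-of-the-envelope check. This is only a sketch with made-up inputs: the helper name, the workload figures, and the 70% utilization target are all illustrative, and a real plan also has to account for memory, reserved capacity, and scheduling overhead.

```python
import math

def plan_nodes(workload_vcpus, node_vcpu, target_utilization=0.7):
    """Rough node-count estimate (illustrative helper, not an OpenShift API).

    The biggest single workload sets the minimum node size; the
    utilization target leaves headroom for spikes and for the time it
    takes to provision additional nodes."""
    if max(workload_vcpus) > node_vcpu:
        raise ValueError("node too small for the biggest single workload")
    usable_per_node = node_vcpu * target_utilization
    return math.ceil(sum(workload_vcpus) / usable_per_node)

# Made-up workloads on 4-vCPU nodes, aiming for 70% utilization:
print(plan_nodes([0.5, 1.0, 2.0, 0.5, 1.5, 1.0], node_vcpu=4))  # 3 nodes
```

Lowering target_utilization is one way to model slow provisioning: the longer a new node takes to arrive, the more headroom each existing node needs to carry.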
For example, having automation in place that lets you deploy another node to the cluster within 5 minutes makes a great difference compared to manually provisioning a new blade in a datacenter and waiting a day until the datacenter team has mounted and connected it. The rule here is: the slower you can provision new nodes, the bigger a single node needs to be, and the earlier you have to provision new nodes. The time to a new node works directly against the maximum utilization you want to aim for per node.

Master Sizing Recommendations

Nodes are important for giving a home to your workloads, but masters are the heart of OpenShift.
The masters, or control plane nodes, are what keep the cluster running, since they host:

• etcd
• API server (kube and OpenShift)
• Controller manager (kube and OpenShift)
• OpenShift OAuth API server
• OpenShift OAuth server
• HAProxy

The masters don't directly run workloads; therefore, they behave differently when it comes to scalability. As opposed to the linear scaling of nodes, which depends on the workloads, master capacity has to be scaled alongside the number of nodes. Another difference compared to node scalability is that you need to look at vertical scaling rather than horizontal scaling. You cannot simply scale out master nodes horizontally, because some components that run on masters require a quorum as well as replication. The most prominent case is etcd, the central store for the cluster's state, secrets, and more.

Theoretically, almost any number of masters is possible in an OpenShift cluster, as long as they can form a quorum: a leader election needs to happen with a majority of votes. This can become tricky with an even number of nodes like 4 or 2. An even count buys you no additional failure tolerance, and in a split there is no guarantee that either side holds a majority, so leader election can get stuck and leave the cluster degraded or broken.

The question is, "Why not just 1?" and the answer is the cluster's resilience. You cannot risk your whole cluster, which is basically unusable without masters, on a single point of failure. Imagine a scenario where you have one master instance and it crashes because of a failure in the underlying infrastructure: the whole cluster is completely useless at this point, and recovery from that kind of failure is hard. The next smallest option is 3, and that is also our recommendation.
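The quorum arithmetic behind this recommendation is easy to sketch: a majority of n members is ⌊n/2⌋ + 1, so an even member count tolerates no more failures than the next-smaller odd count.

```python
def quorum(members):
    """Smallest majority in a quorum-based cluster such as etcd."""
    return members // 2 + 1

def tolerated_failures(members):
    """How many members may fail while a majority can still be formed."""
    return members - quorum(members)

for n in range(1, 6):
    print(f"{n} masters: quorum {quorum(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
# A single master tolerates no failure at all; 3 tolerate 1; 4 still only 1.
```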
In fact, the official documentation states that exactly three master nodes must be used for all production deployments. With the count fixed, vertical scaling is the option left. However, with masters being the heart of the cluster, you have to account for the fragile state a cluster is in while you resize an already running master node, since the node needs to be shut down to be resized.
Make sure to plan for growth. If you plan to have 20 nodes at the very beginning in order to have room for your workloads, choose the next bigger size of master instances. This comes at a small price but will save you massive amounts of work and risk by avoiding a master scaling operation later.

Infra Nodes

Infra nodes are worker nodes with an extra label. Other than that, they are just regular OpenShift nodes. So if they're "just" nodes, why do they get the extra label? Two reasons: cost and cluster resilience.

The easy one is cost: certain infrastructure workloads don't trigger subscription costs with Red Hat. That means if you have a node that exclusively runs infrastructure workloads, you don't have to pay your subscription fee for that node—an easy way to save money. For the sake of completeness, the full list of components that don't require node subscriptions can be found in the latest documentation. Some components run on masters and also need to stay there, like the OCP control plane. Others can be moved around. So you create a new set of nodes with the infra label.

Reason number two is the cluster's resilience. OpenShift makes no difference between regular workloads and infra workloads when they run on the same node. Imagine a regular cluster with just masters and nodes: you deploy all your applications, as well as the infra workloads that come out of the box, to the nodes. In the unfortunate situation where you run out of resources, an "infra" workload may get killed just as well as a "regular" application workload. This is, of course, not the best situation. When all infrastructure-related workloads are instead safely placed on their own set of nodes, the "regular" applications don't impact them at all, which creates better resilience and better performance.
Good candidates to be moved around are:

• In-cluster monitoring (ConfigMap)
• Routers (IngressController)
• Default registry (Config)

You move them by adding a node selector to the resource noted in parentheses. The following example shows how it is done for the in-cluster monitoring solution.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |+
    alertmanagerMain:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusK8s:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    prometheusOperator:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    grafana:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    k8sPrometheusAdapter:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    kubeStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    telemeterClient:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    openshiftStateMetrics:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
    thanosQuerier:
      nodeSelector:
        node-role.kubernetes.io/infra: ""

Add this to your already existing ConfigMap, or create a new one containing just this. For the latter option, save the preceding content to a file and apply it as follows:

$ oc create -f cluster-monitoring-configmap.yaml

Then watch the monitoring pods move to the infra nodes:

$ watch 'oc get pod -n openshift-monitoring -o wide'

A last note on the scaling of infra nodes: they scale almost the same way as master nodes. The reason they need to be scaled vertically in the first place is that Prometheus, as part of the in-cluster monitoring solution, requires more memory the more metrics it stores.
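The routers from the candidate list can be moved in a similar fashion by giving the default IngressController a node selector. The following is a sketch based on the operator's documented nodePlacement field; double-check the field names against the documentation of the release you are running before applying it:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""
```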
Basic OpenShift Installations

This section discusses the first way to install an actual production OpenShift cluster. There are two approaches—installer-provisioned and self-provisioned infrastructure—that take different shapes but achieve the same thing for your respective infrastructure.

Installer-Provisioned Infrastructure

Think of this as an all-in-one solution. The installer creates the underlying infrastructure, the networking infrastructure, and the OpenShift cluster on the cloud provider of your choice (or compatible bare-metal options). Run a single command, pass in your credentials, and what you get back is an up-and-running OpenShift cluster.

The starting point is again the OpenShift Cluster Manager landing page, which you can see in Figure 2-3.

Figure 2-3. OCM landing page

Click the Create cluster button again, but this time choose your cloud provider, in our case Google Cloud Platform (GCP). This takes you to the next page, shown in Figure 2-4, where we choose "Installer-provisioned infrastructure."
Figure 2-4. OCM installer choice

Figure 2-5 shows the main installer page. The first part lists all required artifacts, the second part gives you the most basic installation command, and the third part contains some brief information about subscriptions.
Figure 2-5. OCM installer-provisioned infrastructure landing page
Let's download the installer by clicking Download Installer. While we're there, also download the pull secret and the oc binary. Unpack the archive with the binaries to somewhere in your $PATH to have easy access to them on the command line:

$ tar -xzvf openshift-client-mac.tar.gz
x README.md
x oc
x kubectl

Now unpack the installer in the same way:

$ tar -xzvf openshift-install-mac.tar.gz
x README.md
x openshift-install

You can move openshift-install into a directory in your $PATH too, in case you plan to use it frequently. Otherwise, just keep it in a location that suits you and reference it by absolute or relative path. In our example, we unpacked it in the ~/Downloads directory, so we access the installer as follows:

$ ./Downloads/openshift-install

Prerequisites

Make sure that your cloud provider is set up and ready. The installer will also let you know if any configuration is missing. A whole section of the documentation discusses just the setup of the prerequisites, but we want to go over it anyway, to be sure you have a good overview of what you need.

To begin, we need a project. You can create it from the console or from the command line interface (CLI) by running the following command:

$ gcloud projects create openshift-guinea-pig

Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing.

In the project you just created, you also need a certain set of application programming interfaces (APIs) to be enabled. Table 2-1 shows you which ones are needed.
Table 2-1. GCP required API overview

API service                                  Console service name
Compute Engine API                           compute.googleapis.com
Google Cloud APIs                            cloudapis.googleapis.com
Cloud Resource Manager API                   cloudresourcemanager.googleapis.com
Google DNS API                               dns.googleapis.com
IAM Service Account Credentials API          iamcredentials.googleapis.com
Identity and Access Management (IAM) API     iam.googleapis.com
Service Management API                       servicemanagement.googleapis.com
Service Usage API                            serviceusage.googleapis.com
Google Cloud Storage JSON API                storage-api.googleapis.com
Cloud Storage                                storage-component.googleapis.com

You can leverage the gcloud CLI tool again to enable all of those, or use any other method that you prefer:

$ gcloud services enable compute.googleapis.com cloudapis.googleapis.com \
    cloudresourcemanager.googleapis.com dns.googleapis.com \
    iamcredentials.googleapis.com iam.googleapis.com \
    servicemanagement.googleapis.com serviceusage.googleapis.com \
    storage-api.googleapis.com storage-component.googleapis.com
Operation "operations/acf.p2-10448422-91a9fd12a64b" finished successfully.

Make sure that you have enough quota in your project. Please see the OpenShift documentation for the latest requirements.

You also need a dedicated public domain name system (DNS) zone in the project, and it needs to be authoritative for the domain. If you don't have a domain, you can purchase one from your preferred registrar. Now create the managed zone like this, but with your domain:

$ gcloud dns managed-zones create ocp-cluster \
    --description=openshift-cluster \
    --dns-name=operatingopenshift.com \
    --visibility=public
Get the authoritative name servers from the hosted zone records:

$ gcloud dns managed-zones describe ocp-cluster
creationTime: '2021-04-22T11:13:17.236Z'
description: openshift-cluster
dnsName: operatingopenshift.com.
id: '9171610950957705760'
kind: dns#managedZone
name: ocp-cluster
nameServers:
- ns-cloud-d1.googledomains.com.
- ns-cloud-d2.googledomains.com.
- ns-cloud-d3.googledomains.com.
- ns-cloud-d4.googledomains.com.
visibility: public

The last step here is to point your registrar to the name servers that you just extracted as authoritative. Now create the service account:

$ gcloud iam service-accounts create ocp-cluster \
    --description="Service account for OCP cluster creation" \
    --display-name="OCP_CREATOR"
Created service account [ocp-cluster].

Afterward, assign it the required roles in order to grant the needed permissions. The list of required permissions is in the documentation.

$ gcloud projects add-iam-policy-binding innate-attic-182119 \
    --member="serviceAccount:ocp-cluster@innate-attic-182119.iam.gserviceaccount.com" \
    --role="roles/owner"
Updated IAM policy for project [innate-attic-182119].
bindings:
- members:
  - serviceAccount:ocp-cluster@innate-attic-182119.iam.gserviceaccount.com
  role: roles/owner
etag: BwXAjkFSyZw=
version: 1

The last step before you can actually install the cluster is to get your local environment ready. Create a secure shell protocol (SSH) key pair and add it to your ssh-agent (after you have enabled the agent) with the following commands:

$ ssh-keygen -t ed25519 -N ''
Generating public/private ed25519 key pair.
Enter file in which to save the key (/Users/rrackow/.ssh/id_ed25519):
Your identification has been saved in /Users/rrackow/.ssh/id_ed25519.
Your public key has been saved in /Users/rrackow/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:c0y9aLQMnv6lBd51Hdrw4q4muNwAeExxdWvauvhwTtk rrackow@MacBook-Pro
The key's randomart image is:
+--[ED25519 256]--+
| . ... . |
| o ... |
| . . oo.. . |
| + . B+o .= o|
| . + S.O..o.oo|
| . .. =+o.... |
| oo=.E+. |
| ..Oo.=. |
| +o==... |
+----[SHA256]-----+
$ eval "$(ssh-agent -s)"
Agent pid 49003
$ ssh-add /Users/rrackow/.ssh/id_ed25519
Identity added: /Users/rrackow/.ssh/id_ed25519 (rrackow@MacBook-Pro)

Now create a key file for the service account and download it. Once that is done, export its path:

$ gcloud iam service-accounts keys create service-account-keys \
    --iam-account=ocp-cluster@innate-attic-182119.iam.gserviceaccount.com
created key [b8879741ba8850edcadd9840996e882adc05e228]
$ export GOOGLE_APPLICATION_CREDENTIALS='~/service-account-keys'

Installation

If you don't pass in any arguments, the installer works in an interactive mode: it prompts you for choices, and you can move around with the arrow keys and make a selection with the return key.

$ ./Downloads/openshift-install create cluster --dir='ocp-cluster-install'
? SSH Public Key [Use arrows to move, enter to select, type to filter]
> /Users/rrackow/.ssh/id_ed25519.pub
  /Users/rrackow/.ssh/libra.pub
  /Users/rrackow/.ssh/openshift-gcp.pub
  /Users/rrackow/.ssh/rpi-ocp-discovery.pub
  /Users/rrackow/.ssh/rrackow_private.pub
  /Users/rrackow/.ssh/rrackow_redhat_rsa.pub
  <none>
? Platform [Use arrows to move, enter to select, type to filter]
  aws
  azure
> gcp
  openstack
  ovirt
  vsphere
INFO Credentials loaded from file "/Users/rrackow/.gcp/osServiceAccount.json"
? Project ID [Use arrows to move, enter to select, type to filter]
> openshift-guinea-pig (innate-attic-182119)
? Region [Use arrows to move, enter to select, type to filter]
  europe-west6 (Zürich, Switzerland)
  northamerica-northeast1 (Montréal, Québec, Canada)
  southamerica-east1 (São Paulo, Brazil)
> us-central1 (Council Bluffs, Iowa, USA)
  us-east1 (Moncks Corner, South Carolina, USA)
  us-east4 (Ashburn, Northern Virginia, USA)
  us-west1 (The Dalles, Oregon, USA)
? Base Domain [Use arrows to move, enter to select, type to filter]
> operatingopenshift.com
  rackow.io
? Cluster Name ocp-cluster
? Pull Secret [? for help] *****************
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc':
run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here:
https://console-openshift-console.apps.ocp-cluster.operatingopenshift.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

You don't have to write down the credentials, as you can find them in your install directory, for example ocp-cluster-install/.openshift_install.lo. Each option collapses once you make a selection, so don't be confused if the output looks slightly different for you. The last two options require manual input. After you make your last selection, the installer will work its magic. This commonly takes around 45 minutes.

Self-Provisioned Infrastructure

You can also install OpenShift on preexisting infrastructure. That puts you in full control of absolutely everything and also allows for better incorporation into any sort of pipeline. Imagine you ran a pipeline with just a create cluster command and it fails at some point: it's probably not very pretty to sort out what went wrong, and even worse to automate the error handling.

Summary

In this chapter, we discussed how to install a local cluster all the way through, along with considerations on how to plan your production cluster size. Each type of instance was highlighted, and lastly, you learned how to install production clusters with the OpenShift installer, using installer-provisioned infrastructure.
CHAPTER 3
Running Workloads on OpenShift

Manuel Dewald

At this point you should already have an OpenShift cluster that you can use to deploy applications. It may be a cluster running on VMs provisioned by a cloud provider, or even a small cluster on your notebook using OpenShift Local. You can access the console and log in to the cluster with the oc command-line utility. But how do you deploy an application that your team built to the cluster?

Most applications running on OpenShift clusters are web-based. Such applications are usually accessed by users via a web browser, or as backends by apps installed on user-owned devices.

For the purposes of this chapter you can use a prepared deployment consisting of three different services to practice deploying application code to your OpenShift cluster. A small OpenShift Local cluster should provide enough capacity to deploy this application. However, to follow some parts of the chapter you will need a cluster that is accessible externally.

The application used in this chapter is the arcade gaming platform of a fictitious game publisher. It consists of the following components:

• Games, each running in its own service (for now there is only one game).
• A highscore service where the scores of every game and player can be shown.
• The platform service, used as an entry point where customers can browse, start, and purchase games.

Figure 3-1 gives you an overview of the involved components and how they interact.
Figure 3-1. Components of the arcade platform example application

The code is organized in a Git repository on GitHub, where each developer of the company can contribute to every service when necessary. All three services of this small sample application are located in the same Git repository, so you need to look at only one repository and do not need to clone several different ones. The code from this example is used in all of the following sections. If you want to follow along, use this command to check out the latest version:

$ git clone https://github.com/OperatingOpenShift/s3e

Deploying Code

To have all services you want to run on your OpenShift cluster contained in the same namespace, first create a new project:

$ oc new-project arcade

This command will automatically switch your context to the newly created arcade project. All further commands automatically target this project without the need to mention it in every command.

A project in OpenShift is a namespace with additional annotations. In most cases the differentiation between project and namespace is not relevant for the examples in this book, so the two terms are mostly interchangeable.

To switch to a different project, you can use the following command:

$ oc project default

To switch back to the arcade project, run the following command accordingly:

$ oc project arcade

Instead of running the oc project command before subsequent commands, you can also execute all the commands against a certain namespace by selecting the
namespace in each command. All oc commands support the -n flag (shorthand for --namespace), which can be used to specify the namespace to run the command in.

In practice, when you know you'll execute a number of commands against the same namespace, switching to it using oc project saves some typing time, and it also saves you from accidentally executing commands against the "default" namespace and wondering where all your resources went.

Deploying Existing Container Images

The quickest way to start a container in the new project is using oc run. Since the game service of the application you want to deploy is already built into a container image, you can start it on the cluster using the following command:

$ oc run game --image=quay.io/mdewald/s3e
pod/game created

This will spin up a new pod on the cluster. Use the following command to observe it while it's starting up. As soon as it's ready, you should see the status "Running":

$ oc get pods
NAME   READY   STATUS    RESTARTS   AGE
game   1/1     Running   0          24s

At this point, you're probably curious to take a look at the game you just deployed. However, the oc run command just spins up a pod without an exposed endpoint, so you need to find a way to access the game UI (which is exposed at port 8080 in this container image). A quick and simple approach to confirm the UI is working is to forward the port from the container to your local machine. To do so, run the following command:

$ oc port-forward game 8080
Forwarding from 127.0.0.1:8080 -> 8080

While oc run is a quick and easy way to verify that the cluster can access your built container image and that it runs as expected, it is not the method of choice for continuously running an application on your cluster, as it doesn't provide the advanced concepts that some of the abstractions around deploying pods provide. The standard way to deploy an application is a Deployment resource. Deployments provide additional features to plain pods.
For example, they can be used for rolling upgrades or to run multiple instances distributed across nodes. To create a deployment named game with the same container image, run oc create deployment, then oc get pods to observe the pod coming up:
$ oc create deployment game --image=quay.io/mdewald/s3e
deployment.apps/game created
$ oc get pods
NAME                   READY   STATUS    RESTARTS   AGE
game                   1/1     Running   0          13m
game-c6fb95cc6-bk6zp   1/1     Running   0          78s

Security context constraints

When you deploy a container using oc create deployment, the pod runs with different parameters than the pod created by oc run. One difference is the annotation openshift.io/scc. Compare the output of the following two commands, adjusted to the pod generated for your deployment:

$ oc get pod game -o "jsonpath={.metadata.annotations['openshift.io/scc']}"
anyuid
$ oc get pod game-c6fb95cc6-bk6zp -o "jsonpath={.metadata.annotations['openshift.io/scc']}"
restricted

The restricted security context constraint (SCC) means the pods of this deployment will not be able to run privileged containers or mount host directories, and containers must use a unique identifier (UID) from the allowed range. That means applications running a web server (in this example, NGINX) need to be configured accordingly: they cannot run on port 80 or specify a fixed UID, because the container is automatically mapped to a high UID within the range configured by the project. See the NGINX documentation for an explanation of how to configure NGINX to serve on a specific port.

Scaling and exposing deployments

You can now scale the game deployment using oc scale deployment. You will see additional pods coming up immediately:

$ oc scale deployment game --replicas=3
deployment.apps/game scaled
$ oc get pods
NAME                   READY   STATUS              RESTARTS   AGE
game                   1/1     Running             0          16m
game-c6fb95cc6-bk6zp   1/1     Running             0          3m24s
game-c6fb95cc6-bmxzd   0/1     ContainerCreating   0          3s
game-c6fb95cc6-q8bp8   0/1     ContainerCreating   0          3s

To access those different instances, you need to create a service resource and tell it to expose port 8080 of your pods. To create the service, run the following command:
$ oc expose deployment game --port=8080
service/game exposed
$ oc get service
NAME   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
game   ClusterIP   172.25.113.82   <none>        8080/TCP   6s
$ oc get endpoints
NAME   ENDPOINTS                                            AGE
game   10.116.0.57:8080,10.116.0.59:8080,10.116.0.60:8080   22s

As you can see from the output of oc get endpoints, OpenShift has registered three different endpoints for the service, one for each running instance. To test the connection, you can again forward port 8080 to localhost, this time using the service instead of the pod:

$ oc port-forward service/game 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

To get the second service of the arcade platform application deployed, repeat the preceding steps for the platform service:

$ oc create deployment platform --image=quay.io/mdewald/s3e-platform
$ oc expose deployment platform --port=8080

Use port-forwarding again to check if the service is accepting requests:

$ oc port-forward service/platform 8080

As you have probably realized already, port-forwarding is not how your users would want to access your service. Before we dive into exposing the services to the outside of the cluster in "Accessing Deployed Services" on page 31, the following section takes a look at a third way to deploy your application.

Deploying Applications from Git Repositories

The arcade platform contains a service that collects the per-user scores of all games. The service is written in Go and can be found in the highscore subfolder of the Git repository. To deploy this service, this example does not use an existing image from a container registry but instead uses OpenShift's built-in build infrastructure.
To deploy the application right from the Git repository, run the following command:

$ oc new-app https://github.com/OperatingOpenShift/s3e --context-dir=highscore --name=highscore
--> Found container image 28f6e27 (13 days old) from Docker Hub for "alpine:latest"
    * An image stream tag will be created as "alpine:latest" that will track the source image
    * A Docker build using source code from https://github.com/OperatingOpenShift/s3e will be created
    * The resulting image will be pushed to image stream tag "highscore:latest"
    * Every time "alpine:latest" changes a new build will be triggered
--> Creating resources ...
    imagestream.image.openshift.io "alpine" created
    imagestream.image.openshift.io "highscore" created
    buildconfig.build.openshift.io "highscore" created
    deployment.apps "highscore" created
    service "highscore" created
[...]

In this command and its output, the URL is the Git repository containing the application, --context-dir is the subfolder in the repository to deploy, and --name is the name used for the application's resources; the lines after "Creating resources" list the resources created for the application.

When reading the output of this command, you can see that OpenShift does a lot of work for you in maintaining this application. Chapter 5 takes a closer look at OpenShift's built-in build system. What's important for now is that OpenShift created a build pod that checked out the Git repository and built a container image using the Dockerfile in the highscore subfolder. It automatically created a service for the application in the same step. The build will take some time to finish. When running oc get pods you will see a build pod running, and after the state of this pod turns to "Completed" the application pod will come up:

$ oc get pods
NAME                         READY   STATUS      RESTARTS   AGE
game                         1/1     Running     0          33h
game-c6fb95cc6-vj2qh         1/1     Running     0          20h
highscore-1-build            0/1     Completed   0          4m12s
highscore-56656f848c-k542p   1/1     Running     0          2m57s

There is no owning resource for all the resources created by oc new-app. You can follow the logs to get an understanding of which resources the command created for you on the OpenShift cluster.
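Chapter 5 covers builds in detail, but it can help to see roughly what oc new-app just generated. The following is a hand-written sketch of the highscore BuildConfig, with field values inferred from the command output above rather than copied from a live cluster:

```yaml
# Sketch of the generated BuildConfig; values inferred from the
# oc new-app output above, not dumped from a live cluster.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: highscore
  labels:
    app: highscore           # label oc new-app attaches to everything it creates
spec:
  source:
    type: Git
    git:
      uri: https://github.com/OperatingOpenShift/s3e
    contextDir: highscore    # subfolder containing the Dockerfile
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: alpine:latest  # a new build triggers when this base image changes
  output:
    to:
      kind: ImageStreamTag
      name: highscore:latest
```

You can compare this sketch against the real object with oc get buildconfig highscore -o yaml.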
Cleaning up an application

The following sections still use the resources created by the oc new-app command to expose them to the outside of the cluster. However, you may wonder how to uninstall an application, since there is no resource owning everything that OpenShift created automatically. Because OpenShift adds the app=highscore label to everything it creates, you can run the following command to clean up everything that relates to the highscore application:

$ oc delete all --selector app=highscore
service "highscore" deleted
deployment.apps "highscore" deleted
buildconfig.build.openshift.io "highscore" deleted
build.build.openshift.io "highscore-1" deleted
imagestream.image.openshift.io "alpine" deleted
imagestream.image.openshift.io "highscore" deleted

Alternatively, if you want to get rid of the whole platform, you can also delete the project:

$ oc delete project arcade
project "arcade" deleted

Accessing Deployed Services

After deploying all three services of the arcade platform application as described in the previous section, you should now have three services running in the arcade namespace:

$ oc get services
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
game        ClusterIP   172.25.113.82    <none>        8080/TCP   35h
highscore   ClusterIP   172.25.32.245    <none>        8080/TCP   45s
platform    ClusterIP   172.25.170.245   <none>        8080/TCP   6s

All three services expose port 8080 of the pods. For game and platform you used your knowledge of the services to expose the right port. In the case of the highscore service, OpenShift detected the exposed port from the container it built.

Accessing Services from Other Pods

All three services are of type ClusterIP, which allows other components of the cluster to access them. This is helpful for services that are used only by components communicating with each other within the cluster.
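A ClusterIP service is a small object. The game service created earlier with oc expose deployment game --port=8080 corresponds roughly to the following manifest; the selector and labels are assumed from the default app=<name> labeling that oc create deployment applies, so treat this as a sketch rather than a verbatim dump:

```yaml
# Rough equivalent of `oc expose deployment game --port=8080`;
# selector/labels assumed from oc's default app=<name> labeling.
apiVersion: v1
kind: Service
metadata:
  name: game
  labels:
    app: game
spec:
  type: ClusterIP        # reachable only from inside the cluster
  selector:
    app: game            # matches the pods of the game deployment
  ports:
  - protocol: TCP
    port: 8080           # port the service listens on
    targetPort: 8080     # container port on the pods
```

Running oc get service game -o yaml shows the object OpenShift actually generated.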
To test this, you can deploy a pod to interact with the services:

$ oc run curl --image=curlimages/curl --command -- sleep 30h

This command creates a pod in the cluster that you can use to query one of the services using the curl command. The hostname of the service is the name you gave
the service, so in this case you can query http://platform:8080 to reach the platform web service:

$ oc exec curl -- curl -s http://platform:8080
<html>
<head>
[...]

The preceding oc run command created the pod in the namespace arcade, where all the services of the arcade platform are deployed as well. That's why you can access the service just by specifying the service name as hostname. If you create the curl pod in another namespace, for example the default namespace, this would not be possible, as the following snippet shows:

$ oc -n default run curl --image=curlimages/curl --command -- sleep 30h
$ oc -n default exec curl -- curl -s platform:8080
command terminated with exit code 6

As you can see, the curl pod in the default namespace cannot resolve the hostname platform. However, you can still query a service in a different namespace by specifying the full internal domain name of the service:

$ oc -n default exec curl -- curl -s platform.arcade.svc.cluster.local:8080
<html>
<head>
[...]

The internal DNS name of OpenShift services follows the pattern <service-name>.<namespace>.svc.cluster.local. Depending on the network configuration of the cluster you're using, communication across specific namespaces may be blocked. NetworkPolicies can be used to allow or to block communication between services of specific namespaces.

Distribution of Requests

In the previous section, you scaled the game deployment up to three running pods. If you have not done so yet, or have scaled it back down, use the following command to scale it up:

$ oc scale deployment game --replicas=3
deployment.apps/game scaled

OpenShift will distribute requests across all the endpoints of the service. To make this visible, the game deployment writes an instance-ip header to its responses, which you can query from your curl pod. Use the following command to list all endpoints of the game service:
$ oc get endpoints game
NAME   ENDPOINTS                                            AGE
game   10.116.0.62:8080,10.116.0.63:8080,10.116.0.64:8080   35h

The following command runs an endless loop of curl commands that send HTTP requests to the game service:

$ oc exec curl -- sh -c 'while true; do curl -si game:8080 | grep instance-ip; sleep 1s; done'
instance-ip: 10.116.0.62
instance-ip: 10.116.0.63
instance-ip: 10.116.0.62
instance-ip: 10.116.0.64
instance-ip: 10.116.0.63
instance-ip: 10.116.0.64
instance-ip: 10.116.0.63
[...]

The -i flag tells curl to print response headers. The output of each curl command is filtered with grep to print only the instance-ip response header. This results in a list showing the distribution of requests. As you can see in the output of the command, the requests are distributed randomly across all three deployed pods. To exit the endless loop, press Ctrl+C.

The instance-ip header is a custom header added for the purposes of this chapter. If you want to replicate this with your own application, you can add the following line to your NGINX configuration:

add_header instance-ip $server_addr always;

This is not something we recommend for production deployments; it is meant only to visualize which endpoint receives a request.

Exposing Services

So far, you've seen how to access services from within the cluster using the hostname or the cluster-internal DNS name of a given service. To access a service from your local machine for debugging, you can use port-forwarding. In most cases, however, you want your users to reach the web services, or at least parts of them, via the network, for example using their web browser. For that, you need to expose your services. OpenShift provides easy-to-use tooling to create a public DNS name as a subdomain of the cluster domain that can be reached from outside the cluster. To use it, you create route resources for the services you want to expose to the network or internet.
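A route is a regular API object, just like deployments and services. As a preview, a generated route for the platform service would look roughly like the following sketch; when you omit the host, OpenShift fills it in for you, and the value shown here assumes OpenShift Local:

```yaml
# Sketch of a Route such as `oc expose service platform` creates;
# the host is normally auto-generated (value below assumes OpenShift Local).
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: platform
  namespace: arcade
spec:
  host: platform-arcade.apps-crc.testing
  to:
    kind: Service          # routes forward traffic to a service
    name: platform
    weight: 100
  port:
    targetPort: 8080       # service port to send traffic to
  wildcardPolicy: None
```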
Route by Auto-generated DNS Names

The first service to expose is the main entry point of the arcade gaming platform, the platform service. To do so, run oc expose again, this time specifying the service you want to expose to the outside world:

$ oc expose service platform
route.route.openshift.io/platform exposed

After running this command, a route resource has been created in the arcade namespace. Use the following command to see the route that has been generated:

$ oc get routes
NAME       HOST/PORT                          PATH   SERVICES   PORT
platform   platform-arcade.apps-crc.testing          platform   8080

Next, expose the game service. Run oc expose again and inspect the routes that OpenShift created in the namespace:

$ oc expose service game
route.route.openshift.io/game exposed
$ oc get routes
NAME       HOST/PORT                          PATH   SERVICES   PORT
game       game-arcade.apps-crc.testing              game       8080
platform   platform-arcade.apps-crc.testing          platform   8080

You can now see that each service's route was assigned a unique DNS name. Open a browser to verify that the two web pages can be reached. Figure 3-2 shows how the arcade gaming platform page should look. If you're running OpenShift Local, the URLs will be http://platform-arcade.apps-crc.testing and http://game-arcade.apps-crc.testing/s3e. Remember that the game service only serves the /s3e path.
Figure 3-2. Example application: Arcade gaming platform front-end

Route by Path

From the platform page, you will notice that neither the link to the highscore page nor the button to the game is currently working. This is because the highscore service is not yet exposed, and because the game service is currently exposed with a different domain name. By default, OpenShift creates unique subdomains for each exposed service, composed of namespace and service name. You can see them in the output of the preceding oc get routes command. However, you can tell OpenShift to route requests based on the path in a URL instead of generating unique names per service. If you look back at the architecture of the example application in Figure 3-1, routing by path using the same domain name is what you need to get the application running. You can reuse the domain name generated for the platform service, platform-arcade.apps-crc.testing, for the complete application, specifying paths that should be routed to the different services. Since the platform service is meant as the main entry point to the application and expects requests at /, you don't need to alter its route. Expose the highscore service at /highscore with the following command:

$ oc expose service highscore --hostname=platform-arcade.apps-crc.testing --path=/highscore
route.route.openshift.io/highscore exposed

To change the hostname of the game service, you can edit the generated route with the following command. It opens an editor where you can adjust the generated hostname to platform-arcade.apps-crc.testing and set the path to /s3e:
$ oc edit route game
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  [...]
  name: game
  namespace: arcade
spec:
  host: platform-arcade.apps-crc.testing
  path: /s3e
  port:
    targetPort: 8080
  to:
    kind: Service
    name: game
    weight: 100
  wildcardPolicy: None
status:
  [...]

The path field sets the path of this route to /s3e, so all requests to this path will be forwarded to the game service.

After saving your changes and exiting the editor, you can get a list of the routes again. All three routes should now be assigned to the same hostname:

$ oc get routes
NAME        HOST/PORT                          PATH         SERVICES    PORT
game        platform-arcade.apps-crc.testing   /s3e         game        8080
highscore   platform-arcade.apps-crc.testing   /highscore   highscore   8080-tcp
platform    platform-arcade.apps-crc.testing                platform    8080

When you revisit the main page http://platform-arcade.apps-crc.testing in your browser, the game button should work. The link to the highscore page should work as well; after finishing some games, it will look similar to Figure 3-3.

Figure 3-3. Example application: Arcade gaming platform highscore
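The combined effect of the three routes can be summed up as a prefix match on the request path. The following toy shell function is an illustration only, not how the OpenShift router is actually implemented; it merely mimics the routing table you just built:

```shell
# Toy illustration of the path-based routing table built in this section;
# the real OpenShift router is far more sophisticated.
route_for() {
  case "$1" in
    /s3e*)       echo "game" ;;
    /highscore*) echo "highscore" ;;
    *)           echo "platform" ;;
  esac
}

route_for /s3e        # game
route_for /highscore  # highscore
route_for /           # platform
```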
  • 55. Random documents with unrelated content Scribd suggests to you:
  • 56. + — NY Times 22:310 Ag 26 ‘17 1100w “We have often commented on the imaginative quality of Mr Blackwood’s work. These mystical tales have that quality in a pre- eminent degree. Like his former stories, they possess distinct literary value.” + Outlook 117:100 S 19 ‘17 30w “The book is seasoned with one humorous tale.” + — The Times [London] Lit Sup p92 F 22 ‘17 650w BLACKWOOD, ALGERNON. The wave; an Egyptian aftermath. *$1.50 (1c) Dutton 16-24201 From childhood he had been haunted by a wave. It rose behind him, advanced, curled over from the crest, but did not fall. Sometimes it came as a waking obsession, sometimes as a dream. His father, a learned psychologist with inclinations toward Freud, tries to explain it, but the Freudian hypothesis is inadequate. Associated with the wave, is a strange perfume, identified afterwards as Egyptian. The recurring experience follows him into manhood, affecting his life and his relations to men and women. Certain persons are borne to him on the crest of the wave, as it were. These always become of significance in his life. Of them are Lettice Aylmer and his cousin Tony. Later in Egypt, these three act out a drama which seems to be a repetition of something they have experienced before. It is here that Tom Kelverdon’s wave rises to its full height and breaks, but it does not overwhelm him. “On the whole, Mr Blackwood maintains, though he does not strengthen, our good opinion of his imaginativeness and power of evoking the beautiful.” + Ath p544 N ‘16 150w
  • 57. “Mr Blackwood knows how to give these stories of reincarnation an effect beyond mere creepiness. But his method is so leisurely that he is often ‘slow,’ in the sense of dull and long-drawn-out; and his manner is formal and ponderous and unleavened by humour: common frailties of philosophical romance.” H. W. Boynton + — Bookm 45:207 Ap ‘17 480w “Never before has Mr Blackwood written a novel that comes so close to the real things of life as ‘The wave,’ It touches persistently upon the supernatural, but its visions are wholly subjective.” E. F. E. + + Boston Transcript p8 F 21 ‘17 1400w + Ind 89:556 Mr 26 ‘17 200w + — Nation 104:368 Mr 29 ‘17 430w “One’s strongest impression on closing this book is that of beauty —beauty alike of style and of spirit. The glory of words, the grandeur that was Egypt, the splendor of a brave and loving human soul—these are the very substance of this fascinating volume.” + + N Y Times 22:47 F 11 ‘17 950w “A strange and unusual book, full of insight and imagination. It is the work of a very delicate literary craftsman, who is a past master in the art of elusive suggestion.” + Sat R 123:40 Ja 13 ‘17 500w “With the characteristic Blackwood mystery to help, the book is rich in excitement and experience.”
  • 58. + The Times [London] Lit Sup p488 O 12 ‘16 450w BLAISDELL, ALBERT FRANKLIN, and BALL, FRANCIS KINGSLEY. American history for little folks. il *75c (2c) Little 973 17-25786 This book, adapted for use in the third school grade, is intended as an introduction to “The American history story-book” and other more advanced works by the authors. The aim has been to choose some of the more dramatic and picturesque events and to relate them in a simple and easy style. A partial list of contents follows: Columbus, the sailor; The sea of darkness; The hero of Virginia; Seeking a new home; Captain Miles Standish; Dark days in New England; The Dutch in New York; William Penn, the Quaker; A famous tea party; Polly Daggett saves the flagpole; Peggy White calls on Lord Cornwallis. Reviewed by J: Walcott Bookm 46:496 D ‘17 50w BLANCHARD, RALPH HARRUB. Liability and compensation insurance. il *$2 Appleton 331.82 17-24252 A textbook which presents the results of the workmen’s compensation movement in the United States in terms of legislative and insurance practice, and explains the industrial accident problem and the development of liability and compensation principles as a background for the comprehension of present problems. The book is divided into three parts: Industrial accidents and their prevention; Employers’ liability and workmen’s compensation; Employers’ liability and workmen’s compensation insurance. “Mr Blanchard covers the entire field in a very fair way, though it is evident that he does so in the professor’s study rather than from
  • 59. the ground of practical experience. The insurance feature is especially well covered.” + — Dial 63:534 N 22 ‘17 170w “The author deals with the state compensation acts, and the stock company, mutual and state fund methods of insuring the payment of such compensation. He concludes that, because of insufficient data, a choice among these three methods cannot be made at present. The author misses the determining factor in such a choice. This is, that the most desirable method of taking care of industrial accident losses is that which does most to prevent such losses.” — Engin News-Rec 79:1170 D 20 ‘17 240w “In the presentation of the insurance problem an important and timely contribution has been made.” E. S. Gray + J Pol Econ 25:1050 D ‘17 250w “It should appeal primarily to teachers and students of insurance, but it contains much information of interest to the business man and the intelligent general reader as well.” + Nation 106:122 Ja 31 ‘18 360w “The subject is presented both broadly and well. The point is not shirked that the subject in some aspects is controversial. In such cases both sides are presented, as the author’s intention is to give information rather than judgment.” + N Y Times 22:497 N 25 ‘17 230w “The author has to be commended for the clearness and conciseness of statement and helpful bibliographic notes. On the other hand it must, like most text-books, be dogmatic, and one fails to get the impression from reading the book how much is still controversial in the field of compensation. ... One is somewhat
  • 60. inclined to question the wisdom of the printing of the New York compensation law as an appendix to the book. The New York act is not as typical as a good many other acts.” I. M. Rubinow + — Survey 39:149 N 10 ‘17 350w BLAND, JOHN OTWAY PERCY. Li Hung-chang. (Makers of the nineteenth century) il *$2 (2c) Holt (Eng ed 17-26886) Mr Bland is joint author of Backhouse and Bland’s “China under the Empress Dowager.” The introductory chapter of the present volume reviews the conditions existing in China at the outset of Li Hung-chang’s career. The author then gives a detailed account of Li’s life from childhood to his death in 1901, just after the Boxer rebellion, at the age of seventy-eight. He considers him as a Chinese official, as a diplomat, a naval and military administrator, and a statesman and politician, and concludes that Li’s chief claim to greatness lies in the fact that, at the time of the Taiping rebellion, he “grasped the vital significance of the impact of the West, and the necessity for reorganizing China’s system of government and national defences to meet it.” The biographer’s task, he tells us, has been complicated by the lack of any accurate Chinese account of Li’s career, and the untrustworthiness of Chinese official records. Moreover, the “Memoirs of the Viceroy Li Hung-chang,” published in 1913, were a “literary fraud.” The present work, therefore, is based largely upon the recorded opinions of independent and competent European observers. There is a bibliographical note of two pages, followed by a chronological table of events in Chinese history. The book is indexed. “Mr Bland makes very clear to us the mingling elements in Li’s nature, showing how sometimes patriotism and sometimes self- interest stirred him most. ... By the time we reach Mr Bland’s final summing up of the character we realize how skilful has been his
  • 61. handling of the material and how vividly he has made us realize his impression of the great premier.” D. L. M. + Boston Transcript p8 O 17 ‘17 900w + Lit D 55:36 N 3 ‘17 950w “His treatment of his subject recalls a time when familiarity with life at the treaty ports was enough literary capital for the ordinary authority on Chinese affairs and real acquaintance with their history and ideas was left to the missionaries. ... No new material about Li has been unearthed, no advance has been made towards obtaining Chinese estimates of the man, no approach towards any but an Englishman’s point of view is attempted. ... On the other hand, it is fair to add that the book is easily read and that it portrays a rather splendid type of the oriental viceroy.” – + Nation 105:488 N 1 ‘17 1500w “Excellent biography.” + N Y Times 22:501 N 25 ‘17 1000w “The really significant services that Li Hung Chang rendered to his race are clearly set forth in this volume by a writer who has had good opportunities to study China and the Chinese at first hand.” + R of Rs 56:551 N ‘17 120w “If the provision of an adequate ‘setting’ is one of the difficulties to be encountered in limning Li Hung-chang’s career, another is the paucity of record. ... Mr Bland is to be congratulated upon the comprehensive narrative which he has succeeded in compiling.” * + – The Times [London] Lit Sup p535 N 8 ‘17 1850w
  • 62. BLATHWAYT, RAYMOND. Through life and round the world; being the story of my life. il *$3.50 Dutton 17-23043 Mr Blathwayt is a British journalist who has traveled widely and has made a specialty of the art of interviewing. Before taking up journalism, he served as a curate in Trinidad, in the East End of London, and in an English village. He believes himself to be the first to adapt the American “interview” to English manners. Among those interviewed by him are William Black, Thomas Hardy, Hall Caine, Grant Allen, William Dean Howells, Thomas Bailey Aldrich, and Oliver Wendell Holmes. “Illustrated from photographs and from drawings by Mortimer Menpes.” E. F. E. Boston Transcript p7 Ag 8 ‘17 800w “So many aspects of English life and examples of English character are included in Mr Blathwayt’s book that it forms a reminiscential commentary upon the journalistic and literary world of London during the past thirty years.” E. F. E. Boston Transcript p6 Ag 11 ‘17 900w “The book is a veritable gold mine for the after-dinner speaker, for it is besprinkled with quotable anecdotes.” + Dial 64:30 Ja 3 ‘18 250w “His book abounds in what Mr Leacock calls ‘aristocratic anecdotes,’ platitudinous reflections, and ‘fine writing.’ His naïve confessions as a curate help to explain the spiritual deadness and professionalism of the Church of England; they might well be used as illustrative footnotes to ‘The soul of a bishop.’” — Nation 105:610 N 29 ‘17 190w “It is very entertaining, as engaging a book of reminiscence as has been put before the public in many a day.” + N Y Times 22:293 Ag 12 ‘17 1200w
  • 63. “Mr Blathwayt is a born raconteur. Particularly good are his descriptions of his life as a young curate and as an almost penniless wanderer in Connecticut.” + Outlook 117:26 S 5 ‘17 70w Sat R 123:436 My 12 ‘17 820w
  • 64. “All his admiration of Captain Marryat and of Mrs Radcliffe has not taught him to spell their names right. He misquotes with the utmost facility. ... Here is a writer who has made livelihood and reputation by writing, yet has never mastered the elementary rules of the art. ... His book is frequently, though not constantly entertaining; but it would be much less entertaining than it is without the innocence of its author’s self-revelation.” – + The Times [London] Lit Sup p198 Ap 26 ‘17 950w BLEACKLEY, HORACE WILLIAM. Life of John Wilkes. il *$5 (3½c) Lane 17-24876 This is a scholarly account, based to a great extent on original documents of the English politician, publicist and political agitator, who, “from 1764 to 1780 was the central figure not only of London but of England.” (Sat R) “Mr Bleackley has executed his task in a scholarly and interesting manner, and his book forms an acceptable supplement to Lecky. ... The numerous illustrations are a valuable feature of the book.” + Ath p419 Ag ‘17 160w “Remarkable as the career of John Wilkes confessedly was, and undeniably interesting as this biography is, in spite of Mr Bleackley’s literary skill its final impression is not good. If, as we are told, none ‘of his contemporaries influenced more powerfully the spirit of the age,’ that spirit must have been grossly immoral to condone his immoral grossness.” – + Lit D 55:44 N 17 ‘17 240w “Mr Bleackley has found a subject well suited to his talent in this profoundly interesting historical study.”
  • 65. + N Y Times 22:417 O 21 ‘17 550w + Outlook 117:184 O 3 ‘17 50w “This is one of the best biographies that have appeared for a long time. Mr Bleackley has read and rifled nearly all the memoirs, manuscripts, diaries, letters, newspapers of the period, and we have not read a more erudite and conscientious treatment of a controversial subject. ... He treats his hero with the benevolent impartiality of the scientific historian.” * + + Sat R 124:sup4 Jl 7 ‘17 1200w “Mr Bleackley has given us a most interesting book. ... He has put before himself the task of proving that a man who wrought so much for liberty was himself a great man and a lover of the cause for which he fought. We allow that Wilkes had genius of a sort, but doubt whether he really cared two pins about the rights of constituencies, or the illegality of general warrants, or the liberty of the press. He fought for John Wilkes, and in fighting for him achieved results of wide constitutional importance.” * Spec 119:167 Ag 18 ‘17 1500w “The language is journalistic. ... As a picture of 17th-century England in its most corrupt and licentious phases the book has some historical value, though it is too often written in the language of gossip rather than history. ... The book has its faults— particularly its emphasis upon Wilkes’s mistresses—but the evidence is well documented. ... It is to be regretted that a career so closely connected with American independence should be treated to so great an extent as the subject of a record of private vices. ... There is much biographical and historical matter in it of genuine interest.” – + Springf’d Republican p15 S 23 ‘17 1050w
  • 66. “Mr Bleackley enumerates a good many of those who have included Wilkes in their historical canvases. ... An essay by Fraser Rae preceded Trevelyan’s description in his rainbow-tinted history of Charles James Fox, and later came a biography in two volumes by Percy Fitzgerald. Praise is reiterated of the excellent monograph by J. M. Rigg in the ‘Dictionary of national biography’; but so far as we see, no mention is made of by far the most judicial and philosophic account of the transactions in which Wilkes was conspicuous in Lecky’s ‘History of England in the eighteenth century.’ ... His style is a little arid, but his ripened power of research, his patience and diligence in sifting material, combine to furnish a truly notable portrait. ... The historical background shows a great advance upon any of his preceding work. ... The volume is very well finished, the references (largely to Mss.) overwhelming, the illustrations well-chosen, the errata scrupulous, the index complete.” * + The Times [London] Lit Sup p318 Jl 5 ‘17 2050w BLUMENTHAL, DANIEL.[2] Alsace-Lorraine. map *75c (7c) Putnam 943.4 “A study of the relations of the two provinces to France and to Germany and a presentation of the just claims of their people.” The author, an Alsatian by birth, has been deputy from Strasbourg in the Reichstag, senator from Alsace-Lorraine, and mayor of the city of Colmar. The book has an introduction by Douglas Wilson Johnson of Columbia university, who says, “The problem of Alsace- Lorraine is in a very real sense an American problem.” “There is no more moving recent plea for the restoration of Alsace- Lorraine than this little volume.” + Boston Transcript p6 Ja 9 ‘18 200w
  • 67. BLUNDELL, MARY E. (SWEETMAN) (MRS FRANCIS BLUNDELL) (M. E. FRANCIS, pseud.). Dark Rosaleen. *$1.35 (1c) Kenedy A17-1416 A story of modern Ireland. In a study of the relationship between two families, the author gives an epitome of the situation that exists in Ireland between Catholics and Protestants. Hector McTavish’s father is a fanatical Scotch Presbyterian, but since he grows up in a Catholic community, Hector makes friends with the children of that church. Patsy Burke is his dearest playmate and Honor Burke is to him a foster mother. Fearing these influences, the father takes the boy away and, when he returns thirteen years later, it is to find Patsy an ordained priest and Patsy’s little sister, Norah, grown into sweet womanhood. The love between Hector and Norah, their marriage and the birth of their child leads to tragedy. But, in the child, the author sees a symbol of hope for the new Ireland. “The author has not written a thesis novel, but a touching tale of what she feels and loves.” + Cath World 105:259 My ‘17 130w “There is nothing intolerant in the spirit of this very thrilling book.” + N Y Times 22:166 Ap 29 ‘17 550w BODART, GASTON, and KELLOGG, VERNON LYMAN. Losses of life in modern wars; ed. by Harald Westergaard. *$2 Oxford 172.4 16-20885 “It is the function of the Division of economics and history of the Carnegie endowment for international peace, under the direction of Professor J. B. Clark, to promote a thorough and scientific investigation of the causes and results of war. ... The first volume resulting from these studies contains two reports upon investigations carried on in furtherance of this plan. The first, by Mr Gaston Bodart, deals with the ‘Losses of life in modern wars:
  • 68. Austria-Hungary, France.’ The second, by Professor Vernon L. Kellogg, is a preliminary report and discussion of ‘Military selection and race deterioration.’ ... Professor Kellogg marshals his facts to expose the dysgenic effects of war in military selection, which exposes the strongest and sturdiest young men to destruction and for the most part leaves the weaklings to perpetuate the race. He cites statistics to prove an actual measurable, physical deterioration in stature in France due apparently to military selection. ... To these dysgenic aspects of militarism the author adds the appalling racial deterioration resulting from venereal diseases.”—Dial Am Hist R 22:702 Ap ‘17 450w + A L A Bkl 13:196 F ‘17 “The work is a candid and sane discussion of both sides of this very important aspect of militarism.” + Dial 61:401 N 16 ‘16 390w “It would be difficult to exaggerate the importance of this original and authoritative study into the actual facts of war.” + Educ R 52:528 D ‘16 70w BOGARDUS, EMORY STEPHEN. Introduction to sociology. $1.50 University of Southern California press, 3474 University av., Los Angeles, Cal. 302 17-21833 The author who is professor of sociology in the University of Southern California offers this textbook as an introduction not only to sociology in its restricted sense but to the entire field of the social sciences. He presents the political and economic factors in social progress not only from a sociological point of view but in such a way that the student will want to continue along political science or economic lines. It is the aim to stimulate and to direct social interest to law, politics and business. He discusses the
population basis of social progress, and the geographic, biologic and psychologic bases as well; social progress as affected by genetic, hygienic, recreative, economic, political, ethical, esthetic, intellectual, religious, and associative factors. A closing chapter surveys the scientific outlook for social progress.

“The advantage of Professor Bogardus’s method is that it brings to bear in a simple, elementary way a great mass of pertinent facts.” + Dial 63:596 D 6 ‘17 150w

“The author does not, perhaps, distinguish clearly enough between the sociological and the social points of view.” B. L. + — Survey 39:202 N 24 ‘17 240w

BOGEN, BORIS D. Jewish philanthropy; an exposition of principles and methods of Jewish social service in the United States. *$2 Macmillan 360 17-15182

“The entire field of Jewish social service, both theoretic and practical, is here discussed by a man who has been engaged in it for about twenty-five years as educator, settlement head, relief agent, and now field secretary of the National conference of Jewish charities. ... The author points out that the pre-eminent Jewish contribution to social service in this country is the ‘federation idea.’ By federating their charities, the Jews succeeded in uniting communities, in raising more funds to carry on work more adequately; they have prevented duplication of effort, conserved energies and eliminated waste.” (Survey) The book has an eight-page bibliography.

A L A Bkl 14:40 N ‘17

“No one perhaps is better qualified to discuss with authority the subject of Jewish philanthropy than Dr Boris D. Bogen, of Cincinnati. Himself a Russian by birth and early training, he speaks
concerning the immigrant with a thoroughness born of intimate and empiric knowledge, supplemented by years of accurate and exhaustive study.” A. A. Benesch + Am Pol Sci R 11:785 N ‘17 580w

“Once in a while the author makes a sweeping statement without citing authorities. There are two serious drawbacks to the usefulness of the work. One is the constant use of Hebrew words, which are usually not translated or are mistranslated. Any future work of this character should have a glossary of such Hebrew words as part of its appendix. The other is that the chapter on Standards of relief, which ought to have been the most important, received the most scant attention. But all in all, the book is a splendid piece of work.” Eli Mayer + — Ann Am Acad 74:303 N ‘17 400w

Cleveland p107 S ‘17 10w

+ Ind 92:109 O 13 ‘17 110w

“The book contains a great mass of information regarding various Jewish philanthropies, although no attempt is made to present statistical matter in a formal way.” R of Rs 56:441 O ‘17 50w

“Dr Bogen’s book is wide in scope and will be found useful as a handbook for non-Jewish as well as for Jewish social workers.” Oscar Leonard + Survey 38:532 S 15 ‘17 500w

BOIRAC, ÉMILE. Our hidden forces (“La psychologie inconnue”); an experimental study of the psychic sciences; tr. and ed., with
an introd., by W. de Kerlor. il *$2 (3c) Stokes 130 17-13485

This work, translated from the French, is based on investigations in a field to which scientists of note in the United States, with the exception of William James, have given little attention, that of psychic phenomena. In France, on the other hand, the translator assures us, such investigations have made such progress as to gain national recognition. The book is based on experimental studies and consists of collected papers that were written during the period from 1893 to 1903. Animal magnetism in the light of new investigations, Mesmerism and suggestion, The provocation of sleep at a distance, The colors of human magnetism, The scientific study of spiritism, etc., are among the subjects.

“Professor Émile Boirac, rector of the Academy of Dijon, France, and author of this book, is an acknowledged leader of thought in matters both psychological and psychic. He has devoted many years to studying the problems pertaining to life and death, and this present book was awarded the prize in a contest to which many of the leading psychologists contributed. ... Though a scientific book, it is not without attraction for the lay reader.” + Boston Transcript p7 Je 13 ‘17 320w

Cleveland p91 Jl ‘17 30w

N Y Br Lib News 4:93 Je ‘17

+ R of Rs 56:106 Jl ‘17 80w

BOLIN, JAKOB. Gymnastic problems; with an introd. by Earl Barnes. il *$1.50 (4c) Stokes 613.7 17-12150

This book by the late Professor Bolin of the University of Utah has been prepared for publication by a group of his associates, who feel that the work is “one of the most important contributions to
the subject of gymnastics which has been written in English.” In the first chapter the author discusses the relation of gymnastic exercise to physical training in general. His own position is that the aim of gymnastics is hygienic in a special sense, its object being to counteract the evils of one-sided activity. The remaining chapters are devoted to: The principle of gymnastic selection; The principle of gymnastic totality; The principle of gymnastic unity; The composition of the lesson; Progression; General considerations of method.

“Of value to all teachers of physical education and to those interested in healthful efficiency.” + A L A Bkl 14:10 O ‘17

BONNER, GERALDINE (HARD PAN, pseud.). Treasure and trouble therewith. il *$1.50 (1½c) Appleton 17-21974

“After the opening scene, which pictures a hold-up and robbery of a Wells-Fargo stage coach in the California mountains, the story drops into more conventional lines of romance. The robbery, which is the act of two rough prospectors, is the prelude to the social experiences in San Francisco of a familiar type of cosmopolitan adventurer. He is little better than a tramp when he discovers the robbers’ cache. He makes off with the gold and conceals it near San Francisco. Being well-born and educated, though thoroughly unscrupulous, he finds an easy entrance to San Francisco society.” (Springf’d Republican) The rest of the book gives the story of his life in the city. The California earthquake of 1906 plays an important part in the story.

+ A L A Bkl 14:59 N ‘17

“Geraldine Bonner has a good plot in ‘Treasure and trouble therewith,’ although not an especially attractive one. ... All her pictures of California are vivid and sympathetic, but the character drawing is unskilful.”
+ — N Y Evening Post p3 O 13 ‘17 80w

“Miss Bonner has endeavored, with commendable success, to combine realism with the stirring incidents and dramatic situations of the story of plot and action. Especially good are the chapters which deal with the earthquake.” + N Y Times 22:311 Ag 26 ‘17 770w

“In spite of the complete lack of plausibility, the book affords a certain measure of diversion.” – + Springf’d Republican p15 S 16 ‘17 300w

BOSANKO, W. Collecting old lustre ware. (Collectors’ pocket ser.) il *75c (3½c) Doran 738 A17-1002

The editor in his preface says that he believes this to be the first book on old English lustre ware ever published. He adds: “Yet there are many collectors of old lustre ware; it still abounds, there is plenty of it to hunt for, and prices are not yet excessive. By the aid of this informative book and the study of museum examples a beginner may equip himself well, and may take up this hobby hopefully, certain of finding treasures.” There are over forty-five illustrations.

A L A Bkl 13:436 Jl ‘17

“Simple, practical handbook.” + Cleveland p97 Jl ‘17 20w

N Y Br Lib News 5:75 My ‘17 20w

+ R of Rs 56:220 Ag ‘17 50w
BOSANQUET, BERNARD. Social and international ideals. *$2.25 Macmillan 304 (Eng ed 17-28213)

“This volume is a collection of essays, reviews, and lectures, all of which, with one exception, were published before the war, and most of which on the face of them reveal that fact. ... Though the contents of the volume seem at first sight to be fortuitously put together, there runs through them unity of spirit, thought, purpose, and manner.” (The Times [London] Lit Sup Jl 12 ‘17)

“Most of the pages (14 out of 17 are reprinted from the Charity Organization Review) discuss the principles which should govern our handling of social problems with the view of displaying ‘the organizing power which belongs to a belief in the supreme values—beauty, truth, kindness, for example—and how a conception of life which has them for its good is not unpractical.’” (The Times [London] Lit Sup Je 21 ‘17)

“We may single out, as of special importance in this new volume, Mr Bosanquet’s idea of the growth of individuality and his idea of the structure of political society. In the chapter on ‘Optimism’ he points out that the mistake of its opponents is the acceptance of their momentary experience as final. ... Criticism, confined to a few sentences, must obviously be inadequate. ... If there are omissions in Mr Bosanquet’s analysis of fact, his ideal also appears to be too simple.” + Ath p398 Ag ‘17 950w

“It is a great privilege to listen to a wise man and a real logician, who is at once a wit and a humanitarian. Dr Bosanquet was not for nothing a fellow in moderations. The whole book is full of sound common sense.” + Boston Transcript p8 Ja 19 ‘18 600w

Cleveland p135 D ‘17 60w

“Written in a strain of reasoned optimism.” M. J.
+ Int J Ethics 28:291 Ja ‘18 200w

“Here we have the precious kernel of wisdom in the hard nut of paradox. No doubt, justice and kindness, beauty and truth are the things that matter most, and it is no small service to direct our thoughts once again to them. But how to embody and realize them in the maze and tangle of our actual world, that is a problem apparently too great for any single thinker.” R. F. A. H. + — New Repub 13:353 Ja 19 ‘18 1850w

+ The Times [London] Lit Sup p299 Je 21 ‘17 130w

“If we are tempted to say that these pages show his aptitude for making simple things look difficult, they reveal also the meaning of life. They disclose to those living the humblest of lives that they may enter if they will—the door is ever open—to regions the highest and purest. ... If the book contained nothing else than some of the observations in the last chapters as to true pacifism and patriotism, it would make every reader its debtor.” + The Times [London] Lit Sup p326 Jl 12 ‘17 1800w

BOSSCHÈRE, JEAN DE, il. Christmas tales of Flanders. il *$3 Dodd 398

Popular Christmas tales current in Flanders and Brabant, translated by M. C. O. Morris, and spiritedly illustrated partly in color and partly in black and white by Jean de Bosschère.

“The engaging color-work of Mr de Bosschère is full of brilliancy, and makes of this Christmas book a rich gift from a country now sorely stricken.” + Lit D 55:53 D 8 ‘17 50w
“A very charming book for young people, and so interestingly illustrated that their elders will find it almost equally attractive. All the pictures have humor, dexterity, force, and appreciation of character.” + N Y Times 22:514 D 2 ‘17 70w

“This handsome and well-illustrated book is one of the most attractive we have seen this season. ... Some of the drawings seem to us a little scratchy, but they will all be clear to a child. They lack the tortured straining after originality and the purposeful ugliness which modern art has occasionally thrust upon the nursery.” + — Sat R 124:sup10 D 8 ‘17 280w

Spec 119:sup628 D 1 ‘17 330w

“The stories are sometimes abrupt in their inconclusiveness; homely and almost entirely unromantic. Sometimes a disagreeable hint of cynicism obtrudes itself; but this may have been left on our minds by the association with M. de Bosschère’s illustrations. They are completely unsuited to their purpose.” – + The Times [London] Lit Sup p621 D 13 ‘17 200w

BOSTWICK, ARTHUR ELMORE. American public library. il *$1.75 (2c) Appleton 020 17-17641

This is a new edition, revised and brought up to date, of a book written by the librarian of the St Louis public library and first published seven years ago. “As a matter of mechanical necessity, no doubt, the revisions and additions have limited themselves to such changes as could be made, here and there, without requiring any considerable resetting or recasting of the pages, so that the