SlideShare a Scribd company logo
Visit ebookfinal.com to download the full version and
explore more ebooks or textbooks
An Introduction to Machine Learning
Interpretability 1st Edition Patrick Hall And
Navdeep Gill
_____ Click the link below to download _____
https://ebookfinal.com/download/an-introduction-to-machine-
learning-interpretability-1st-edition-patrick-hall-and-
navdeep-gill/
Explore and download more ebooks or textbook at ebookfinal.com
Here are some recommended products that we believe you will be
interested in. You can click the link to download.
Artificial Intelligence With an Introduction to Machine
Learning 1st Edition Richard E Neapolitan
https://ebookfinal.com/download/artificial-intelligence-with-an-
introduction-to-machine-learning-1st-edition-richard-e-neapolitan/
A Hands On Introduction to Machine Learning 1st Edition
Shah
https://ebookfinal.com/download/a-hands-on-introduction-to-machine-
learning-1st-edition-shah/
A Concise Introduction to Machine Learning 1st Edition
A.C. Faul (Author)
https://ebookfinal.com/download/a-concise-introduction-to-machine-
learning-1st-edition-a-c-faul-author/
Inside the Machine An Illustrated Introduction to
Microprocessors and Computer Architecture 1st Edition Jon
Stokes
https://ebookfinal.com/download/inside-the-machine-an-illustrated-
introduction-to-microprocessors-and-computer-architecture-1st-edition-
jon-stokes/
Social Learning An Introduction to Mechanisms Methods and
Models William Hoppitt
https://ebookfinal.com/download/social-learning-an-introduction-to-
mechanisms-methods-and-models-william-hoppitt/
Handbook of Natural Language Processing Second Edition
Chapman Hall CRC Machine Learning Pattern Recognition
Series Nitin Indurkhya
https://ebookfinal.com/download/handbook-of-natural-language-
processing-second-edition-chapman-hall-crc-machine-learning-pattern-
recognition-series-nitin-indurkhya/
Awakening An Introduction to the History of Eastern
Thought 6th Edition Patrick S. Bresnan
https://ebookfinal.com/download/awakening-an-introduction-to-the-
history-of-eastern-thought-6th-edition-patrick-s-bresnan/
Designing Human machine Cooperation Systems 1st Edition
Patrick Millot
https://ebookfinal.com/download/designing-human-machine-cooperation-
systems-1st-edition-patrick-millot/
A Machine Learning Approach to Phishing Detection and
Defense 1st Edition I.S. Amiri
https://ebookfinal.com/download/a-machine-learning-approach-to-
phishing-detection-and-defense-1st-edition-i-s-amiri/
An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill
An Introduction to Machine Learning Interpretability 1st
Edition Patrick Hall And Navdeep Gill Digital Instant
Download
Author(s): Patrick Hall and Navdeep Gill
ISBN(s): 9781492033141, 1492033146
Edition: 1
File Details: PDF, 3.87 MB
Year: 2018
Language: english
Patrick Hall & Navdeep Gill
An Applied Perspective on Fairness,
Accountability, Transparency, and
Explainable AI
An Introduction to
Machine Learning
Interpretability
An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill
Patrick Hall and Navdeep Gill
An Introduction to Machine
Learning Interpretability
An Applied Perspective on Fairness,
Accountability, Transparency,
and Explainable AI
Boston Farnham Sebastopol Tokyo
Beijing Boston Farnham Sebastopol Tokyo
Beijing
978-1-492-03314-1
[LSI]
An Introduction to Machine Learning Interpretability
by Patrick Hall and Navdeep Gill
Copyright © 2018 O’Reilly Media, Inc. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online edi‐
tions are also available for most titles (http://oreilly.com/safari). For more information, contact our
corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com.
Editor: Nicole Tache
Production Editor: Nicholas Adams
Copyeditor: Octal Publishing, Inc.
Interior Designer: David Futato
Cover Designer: Randy Comer
Illustrator: Rebecca Demarest
April 2018: First Edition
Revision History for the First Edition
2017-03-28: First Release
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. An Introduction to Machine
Learning Interpretability, the cover image, and related trade dress are trademarks of O’Reilly Media,
Inc.
While the publisher and the authors have used good faith efforts to ensure that the information and
instructions contained in this work are accurate, the publisher and the authors disclaim all responsi‐
bility for errors or omissions, including without limitation responsibility for damages resulting from
the use of or reliance on this work. Use of the information and instructions contained in this work is
at your own risk. If any code samples or other technology this work contains or describes is subject
to open source licenses or the intellectual property rights of others, it is your responsibility to ensure
that your use thereof complies with such licenses and/or rights.
Table of Contents
An Introduction to Machine Learning Interpretability. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Machine Learning and Predictive Modeling in Practice 2
Social and Commercial Motivations for Machine Learning Interpretability 3
The Multiplicity of Good Models and Model Locality 5
Accurate Models with Approximate Explanations 7
Defining Interpretability 9
A Machine Learning Interpretability Taxonomy for Applied Practitioners 10
Common Interpretability Techniques 13
Testing Interpretability 29
Machine Learning Interpretability in Action 30
Conclusion 31
iii
An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill
An Introduction to Machine Learning
Interpretability
Understanding and trusting models and their results is a hallmark of good sci‐
ence. Scientists, engineers, physicians, researchers, and humans in general have
the need to understand and trust models and modeling results that affect their
work and their lives. However, the forces of innovation and competition are now
driving analysts and data scientists to try ever-more complex predictive modeling
and machine learning algorithms. Such algorithms for machine learning include
gradient-boosted ensembles (GBM), artificial neural networks (ANN), and ran‐
dom forests, among many others. Many machine learning algorithms have been
labeled “black box” models because of their inscrutable inner-workings. What
makes these models accurate is what makes their predictions difficult to under‐
stand: they are very complex. This is a fundamental trade-off. These algorithms
are typically more accurate for predicting nonlinear, faint, or rare phenomena.
Unfortunately, more accuracy almost always comes at the expense of interpreta‐
bility, and interpretability is crucial for business adoption, model documentation,
regulatory oversight, and human acceptance and trust.
The inherent trade-off between accuracy and interpretability in predictive mod‐
eling can be a particularly vexing catch-22 for analysts and data scientists work‐
ing in regulated industries. Due to strenuous regulatory and documentation
requirements, data science professionals in the regulated verticals of banking,
insurance, healthcare, and other industries often feel locked into using tradi‐
tional, linear modeling techniques to create their predictive models. So, how can
you use machine learning to improve the accuracy of your predictive models and
increase the value they provide to your organization while still retaining some
degree of interpretability?
This report provides some answers to this question by introducing interpretable
machine learning techniques, algorithms, and models. It discusses predictive
modeling and machine learning from an applied perspective and puts forward
social and commercial motivations for interpretability, fairness, accountability,
1
and transparency in machine learning. It defines interpretability, examines some
of the major theoretical difficulties in the burgeoning field, and provides a taxon‐
omy for classifying and describing interpretable machine learning techniques.
We then discuss many credible and practical machine learning interpretability
techniques, consider testing of these interpretability techniques themselves, and,
finally, we present a set of open source code examples for interpretability techni‐
ques.
Machine Learning and Predictive Modeling in Practice
Companies and organizations use machine learning and predictive models for a
very wide variety of revenue- or value-generating applications. A tiny sample of
such applications includes deciding whether to award someone a credit card or
loan, deciding whether to release someone from a hospital, or generating custom
recommendations for new products or services. Although many principles of
applied machine learning are shared across industries, the practice of machine
learning at banks, insurance companies, healthcare providers and in other regu‐
lated industries is often quite different from machine learning as conceptualized
in popular blogs, the news and technology media, and academia. It’s also some‐
what different from the practice of machine learning in the technologically
advanced and generally unregulated digital, ecommerce, FinTech, and internet
verticals. Teaching and research in machine learning tend to put a central focus
on algorithms, and the computer science, mathematics, and statistics of learning
from data. Personal blogs and media outlets also tend to focus on algorithms and
often with more hype and less rigor than in academia. In commercial practice,
talent acquisition, data engineering, data security, hardened deployment of
machine learning apps and systems, managing and monitoring an ever-
increasing number of predictive models, modeling process documentation, and
regulatory compliance often take precedence over more academic concerns
regarding machine learning algorithms[1].
Successful entities in both traditional enterprise and in digital, ecommerce, Fin‐
Tech, and internet verticals have developed processes for recruiting and retaining
analytical talent, amassed vast amounts of data, and engineered massive flows of
data through corporate IT systems. Both types of entities have faced data security
challenges; both have learned to deploy the complex logic that defines machine
learning models into operational, public-facing IT systems; and both are learning
to manage the large number of predictive and machine learning models required
to stay competitive in today’s data-driven commercial landscape. However, larger,
more established companies tend to practice statistics, analytics, and data mining
at the margins of their business to optimize revenue or allocation of other valua‐
ble assets. Digital, ecommerce, FinTech, and internet companies, operating out‐
side of most regulatory oversight, and often with direct access to huge data stores
2 | An Introduction to Machine Learning Interpretability
and world-class talent pools, have often made web-based data and machine
learning products central to their business.
In the context of applied machine learning, more regulated, and often more tra‐
ditional, companies tend to face a unique challenge. They must use techniques,
algorithms, and models that are simple and transparent enough to allow for
detailed documentation of internal system mechanisms and in-depth analysis by
government regulators. Interpretable, fair, and transparent models are a serious
legal mandate in banking, insurance, healthcare, and other industries. Some of
the major regulatory statutes currently governing these industries include the
Civil Rights Acts of 1964 and 1991, the Americans with Disabilities Act, the
Genetic Information Nondiscrimination Act, the Health Insurance Portability
and Accountability Act, the Equal Credit Opportunity Act, the Fair Credit
Reporting Act, the Fair Housing Act, Federal Reserve SR 11-7, and European
Union (EU) Greater Data Privacy Regulation (GDPR) Article 22[2]. Moreover,
regulatory regimes are continuously changing, and these regulatory regimes are
key drivers of what constitutes interpretability in applied machine learning.
Social and Commercial Motivations for Machine
Learning Interpretability
The now-contemplated field of data science amounts to a superset of the fields of statis‐
tics and machine learning, which adds some technology for “scaling up” to “big data.”
This chosen superset is motivated by commercial rather than intellectual developments.
Choosing in this way is likely to miss out on the really important intellectual event of the
next 50 years.
—David Donoho[3]
Usage of AI and machine learning models is likely to become more common‐
place as larger swaths of the economy embrace automation and data-driven deci‐
sion making. Even though these predictive systems can be quite accurate, they
have been treated as inscrutable black boxes in the past, that produce only
numeric or categorical predictions with no accompanying explanations. Unfortu‐
nately, recent studies and recent events have drawn attention to mathematical
and sociological flaws in prominent machine learning systems, but practitioners
usually don’t have the appropriate tools to pry open machine learning black
boxes to debug and troubleshoot them[4][5].
Although this report focuses mainly on the commercial aspects of interpretable
machine learning, it is always crucially important to consider social motivations
and impacts of data science, including interpretability, fairness, accountability,
and transparency in machine learning. One of the greatest hopes for data science
and machine learning is simply increased convenience, automation, and organi‐
zation in our day-to-day lives. Even today, I am beginning to see fully automated
baggage scanners at airports and my phone is constantly recommending new
Social and Commercial Motivations for Machine Learning Interpretability | 3
music that I actually like. As these types of automation and conveniences grow
more common, machine learning engineers will need more and better tools to
debug these ever-more present, decision-making systems. As machine learning
begins to make a larger impact on everyday human life, whether it’s just addi‐
tional convenience or assisting in serious, impactful, or historically fraught and
life-altering decisions, people will likely want to know how these automated deci‐
sions are being made. This might be the most fundamental application of
machine learning interpretability, and some argue the EU GDPR is already
legislating a “right to explanation” for EU citizens impacted by algorithmic deci‐
sions[6].
Machine learning also promises quick, accurate, and unbiased decision making
in life-changing scenarios. Computers can theoretically use machine learning to
make objective, data-driven decisions in critical situations like criminal convic‐
tions, medical diagnoses, and college admissions, but interpretability, among
other technological advances, is needed to guarantee the promises of correctness
and objectivity. Without interpretability, accountability, and transparency in
machine learning decisions, there is no certainty that a machine learning system
is not simply relearning and reapplying long-held, regrettable, and erroneous
human biases. Nor are there any assurances that human operators have not
designed a machine learning system to make intentionally prejudicial decisions.
Hacking and adversarial attacks on machine learning systems are also a serious
concern. Without real insight into a complex machine learning system’s opera‐
tional mechanisms, it can be very difficult to determine whether its outputs have
been altered by malicious hacking or whether its inputs can be changed to create
unwanted or unpredictable decisions. Researchers recently discovered that slight
changes, such as applying stickers, can prevent machine learning systems from
recognizing street signs[7]. Such adversarial attacks, which require almost no
software engineering expertise, can obviously have severe consequences.
For traditional and often more-regulated commercial applications, machine
learning can enhance established analytical practices (typically by increasing pre‐
diction accuracy over conventional but highly interpretable linear models) or it
can enable the incorporation of unstructured data into analytical pursuits. In
many industries, linear models have long been the preferred tools for predictive
modeling, and many practitioners and decision-makers are simply suspicious of
machine learning. If nonlinear models—generated by training machine learning
algorithms—make more accurate predictions on previously unseen data, this
typically translates into improved financial margins but only if the model is
accepted by internal validation teams and business partners and approved by
external regulators. Interpretability can increase transparency and trust in com‐
plex machine learning models, and it can allow more sophisticated and poten‐
tially more accurate nonlinear models to be used in place of traditional linear
models, even in some regulated dealings. Equifax’s NeuroDecision is a great
4 | An Introduction to Machine Learning Interpretability
example of modifying a machine learning technique (an ANN) to be interpreta‐
ble and using it to make measurably more accurate predictions than a linear
model in a regulated application. To make automated credit-lending decisions,
NeuroDecision uses ANNs with simple constraints, which are somewhat more
accurate than conventional regression models and also produce the regulator-
mandated reason codes that explain the logic behind a credit-lending decision.
NeuroDecision’s increased accuracy could lead to credit lending in a broader
portion of the market, such as new-to-credit consumers, than previously possi‐
ble[1][8].
Less-traditional and typically less-regulated companies currently face a greatly
reduced burden when it comes to creating fair, accountable, and transparent
machine learning systems. For these companies, interpretability is often an
important but secondary concern. Even though transparency into complex data
and machine learning products might be necessary for internal debugging, vali‐
dation, or business adoption purposes, the world has been using Google’s search
engine and Netflix’s movie recommendations for years without widespread
demands to know why or how these machine learning systems generate their
results. However, as the apps and systems that digital, ecommerce, FinTech, and
internet companies create (often based on machine learning) continue to change
from occasional conveniences or novelties into day-to-day necessities, consumer
and public demand for interpretability, fairness, accountability, and transparency
in these products will likely increase.
The Multiplicity of Good Models and Model Locality
If machine learning can lead to more accurate models and eventually financial
gains, why isn’t everyone using interpretable machine learning? Simple answer:
it’s fundamentally difficult and it’s a very new field of research. One of the most
difficult mathematical problems in interpretable machine learning goes by sev‐
eral names. In his seminal 2001 paper, Professor Leo Breiman of UC, Berkeley,
coined the phrase: the multiplicity of good models[9]. Some in credit scoring refer
to this phenomenon as model locality. It is well understood that for the same set
of input variables and prediction targets, complex machine learning algorithms
can produce multiple accurate models with very similar, but not the same, inter‐
nal architectures. This alone is an obstacle to interpretation, but when using
these types of algorithms as interpretation tools or with interpretation tools, it is
important to remember that details of explanations can change across multiple
accurate models. Because of this systematic instability, multiple interpretability
techniques should be used to derive explanations for a single model, and practi‐
tioners are urged to seek consistent results across multiple modeling and inter‐
pretation techniques.
The Multiplicity of Good Models and Model Locality | 5
Figures 1-1 and 1-2 are cartoon illustrations of the surfaces defined by error
functions for two fictitious predictive models. In Figure 1-1 the error function is
representative of a traditional linear model’s error function. The surface created
by the error function in Figure 1-1 is convex. It has a clear global minimum in
three dimensions, meaning that given two input variables, such as a customer’s
income and a customer’s interest rate, the most accurate model trained to predict
loan defaults (or any other outcome) would almost always give the same weight
to each input in the prediction, and the location of the minimum of the error
function and the weights for the inputs would be unlikely to change very much if
the model was retrained, even if the input data about customer’s income and
interest rate changed a little bit. (The actual numeric values for the weights could
be ascertained by tracing a straight line from minimum of the error function pic‐
tured in Figure 1-1 to the interest rate axis [the X axis] and income axis [the Y
axis].)
Figure 1-1. An illustration of the error surface of a traditional linear model. (Figure
courtesy of H2O.ai.)
Because of the convex nature of the error surface for linear models, there is basi‐
cally only one best model, given some relatively stable set of inputs and a predic‐
tion target. The model associated with the error surface displayed in Figure 1-1
would be said to have strong model locality. Moreover, because the weighting of
income versus interest rate is highly stable in the pictured error function and its
associated linear model, explanations about how the function made decisions
about loan defaults based on those two inputs would also be stable. More stable
explanations are often considered more trustworthy explanations.
Figure 1-2 depicts a nonconvex error surface that is representative of the error
function for a machine learning function with two inputs—for example, a cus‐
tomer’s income and a customer’s interest rate—and an output, such as the same
customer’s probability of defaulting on a loan. This nonconvex error surface with
6 | An Introduction to Machine Learning Interpretability
no obvious global minimum implies there are many different ways a complex
machine learning algorithm could learn to weigh a customer’s income and a cus‐
tomer’s interest rate to make a good decision about when they might default.
Each of these different weightings would create a different function for making
loan default decisions, and each of these different functions would have different
explanations. Less-stable explanations feel less trustworthy, but are less-stable
explanations actually valuable and useful? The answer to this question is central
to the value proposition of interpretable machine learning and is examined in the
next section.
Figure 1-2. An illustration of the error surface of a machine learning model. (Figure
courtesy of H2O.ai.)
Accurate Models with Approximate Explanations
Due to many valid concerns, including the multiplicity of good models, many
researchers and practitioners deemed the complex, intricate formulas created by
training machine learning algorithms to be uninterpretable for many years.
Although great advances have been made in recent years to make these often
nonlinear, nonmonotonic, and noncontinuous machine-learned response func‐
tions more understandable[10][11], it is likely that such functions will never be
as directly or universally interpretable as more traditional linear models.
Why consider machine learning approaches for inferential or explanatory pur‐
poses? In general, linear models focus on understanding and predicting average
behavior, whereas machine-learned response functions can often make accurate
but more difficult to explain predictions for subtler aspects of modeled phenom‐
Accurate Models with Approximate Explanations | 7
enon. In a sense, linear models create very exact interpretations for approximate
models (see Figure 1-3).
Figure 1-3. A linear model, g(x), predicts the average number of purchases, given a
customer’s age. The predictions can be inaccurate but the explanations are straight‐
forward and stable. (Figure courtesy of H2O.ai.)
Whereas linear models account for global, average phenomena in a dataset,
machine learning models attempt to learn about the local and nonlinear charac‐
teristics of a dataset and also tend to be evaluated in terms of predictive accuracy.
The machine learning interpretability approach seeks to make approximate inter‐
pretations for these types of more exact models. After an accurate predictive
model has been trained, it should then be examined from many different view‐
points, including its ability to generate approximate explanations. As illustrated
in Figure 1-4, it is possible that an approximate interpretation of a more exact
model can have as much, or more, value and meaning than the exact interpreta‐
tions provided by an approximate model.
Additionally, the use of machine learning techniques for inferential or predictive
purposes shouldn’t prevent us from using linear models for interpretation. In
fact, using local linear approximations of more complex machine-learned func‐
tions to derive explanations, as depicted in Figure 1-4, is one of the most popular
current approaches. This technique has become known as local interpretable
model-agnostic explanations (LIME), and several free and open source imple‐
mentations of LIME are available for practitioners to evaluate[12].
8 | An Introduction to Machine Learning Interpretability
Figure 1-4. A machine learning model, g(x), predicts the number of purchases, given
a customer’s age, very accurately, nearly replicating the true, unknown signal-
generating function, f(x). Although the explanations for this function are approxi‐
mate, they are at least as useful, if not more so, than the linear model explanations
in Figure 1-3. (Figure courtesy of H2O.ai.)
Defining Interpretability
Let’s take a step back now and offer a definition of interpretability, and also
briefly introduce those groups at the forefront of machine learning interpretabil‐
ity research today. In the context of machine learning models and results, inter‐
pretability has been defined as “the ability to explain or to present in
understandable terms to a human.”[13]. The latter might be the simplest defini‐
tion of machine learning interpretability, but there are several communities with
different and sophisticated notions of what interpretability is today and should be
in the future. Two of the most prominent groups pursuing interpretability
research are a group of academics operating under the acronym FAT* and civil‐
ian and military researchers funded by the Defense Advanced Research Projects
Agency (DARPA). FAT* academics (meaning fairness, accountability, and trans‐
parency in multiple artificial intelligence, machine learning, computer science,
legal, social science, and policy applications) are primarily focused on promoting
and enabling interpretability and fairness in algorithmic decision-making sys‐
tems with social and commercial impact. DARPA-funded researchers seem pri‐
marily interested in increasing interpretability in sophisticated pattern
recognition models needed for security applications. They tend to label their
work explainable AI, or XAI.
Defining Interpretability | 9
A Machine Learning Interpretability Taxonomy for
Applied Practitioners
Technical challenges as well as the needs and perspectives of different user com‐
munities make machine learning interpretability a subjective and complicated
subject. Luckily, a previously defined taxonomy has proven useful for character‐
izing the interpretability of various popular explanatory techniques used in
commercial data mining, analytics, data science, and machine learning applica‐
tions[10]. The taxonomy describes models in terms of their complexity, and cate‐
gorizes interpretability techniques by the global or local scope of explanations
they generate, the family of algorithms to which they can be applied, and their
ability to promote trust and understanding.
A Scale for Interpretability
The complexity of a machine learning model is directly related to its interpreta‐
bility. Generally, the more complex the model, the more difficult it is to interpret
and explain. The number of weights or rules in a model—or its Vapnik–Chervo‐
nenkis dimension, a more formal measure—are good ways to quantify a model’s
complexity. However, analyzing the functional form of a model is particularly
useful for commercial applications such as credit scoring. The following list
describes the functional forms of models and discusses their degree of interpreta‐
bility in various use cases.
High interpretability—linear, monotonic functions
Functions created by traditional regression algorithms are probably the most
interpretable class of models. We refer to these models here as “linear and
monotonic,” meaning that for a change in any given input variable (or some‐
times combination or function of an input variable), the output of the
response function changes at a defined rate, in only one direction, and at a
magnitude represented by a readily available coefficient. Monotonicity also
enables intuitive and even automatic reasoning about predictions. For
instance, if a credit lender rejects your credit card application, it can easily
tell you why because its probability-of-default model often assumes your
credit score, your account balances, and the length of your credit history are
monotonically related to your ability to pay your credit card bill. When these
explanations are created automatically, they are typically called reason codes.
Linear, monotonic functions play another important role in machine learn‐
ing interpretability. Besides being highly interpretable themselves, linear and
monotonic functions are also used in explanatory techniques, including the
popular LIME approach.
10 | An Introduction to Machine Learning Interpretability
Medium interpretability—nonlinear, monotonic functions
Although most machine-learned response functions are nonlinear, some can
be constrained to be monotonic with respect to any given independent vari‐
able. Although there is no single coefficient that represents the change in the
response function output induced by a change in a single input variable,
nonlinear and monotonic functions do always change in one direction as a
single input variable changes. Nonlinear, monotonic response functions usu‐
ally allow for the generation of both reason codes and relative variable
importance measures. Nonlinear, monotonic response functions are there‐
fore interpretable and potentially suitable for use in regulated applications.
Of course, there are linear, nonmonotonic machine-learned response func‐
tions that can, for instance, be created by the multivariate adaptive regression
splines (MARS) approach. We do not highlight these functions here. They
tend to be less accurate predictors than purely nonlinear, nonmonotonic
functions and less directly interpretable than their completely monotonic
counterparts.
Low interpretability—nonlinear, nonmonotonic functions
Most machine learning algorithms create nonlinear, nonmonotonic response
functions. This class of functions is the most difficult to interpret, as they can
change in a positive and negative direction and at a varying rate for any
change in an input variable. Typically, the only standard interpretability
measures these functions provide are relative variable importance measures.
You should use a combination of several techniques, which we present in the
sections that follow, to interpret these extremely complex models.
Global and Local Interpretability
It’s often important to understand the entire model that you’ve trained on a
global scale, and also to zoom into local regions of your data or your predictions
and derive local explanations. Global interpretations help us understand the
inputs and their entire modeled relationship with the prediction target, but
global interpretations can be highly approximate in some cases. Local interpreta‐
tions help us understand model predictions for a single row of data or a group of
similar rows. Because small sections of a machine-learned response function are
more likely to be linear, monotonic, or otherwise well-behaved, local explana‐
tions can be more accurate than global explanations. It’s also very likely that the
best explanations of a machine learning model will come from combining the
results of global and local interpretation techniques. In subsequent sections we
will use the following descriptors to classify the scope of an interpretable
machine learning approach:
A Machine Learning Interpretability Taxonomy for Applied Practitioners | 11
Global interpretability
Some machine learning interpretability techniques facilitate global explana‐
tions of machine learning algorithms, their results, or the machine-learned
relationship between the prediction target and the input variables.
Local interpretability
Local interpretations promote understanding of small regions of the
machine-learned relationship between the prediction target and the input
variables, such as clusters of input records and their corresponding predic‐
tions, or deciles of predictions and their corresponding input rows, or even
single rows of data.
Model-Agnostic and Model-Specific Interpretability
Another important way to classify model interpretability techniques is whether
they are model agnostic, meaning they can be applied to different types of
machine learning algorithms, or model specific, meaning techniques that are
applicable only for a single type or class of algorithm. For instance, the LIME
technique is model agnostic and can be used to interpret nearly any set of
machine learning inputs and machine learning predictions. On the other hand,
the technique known as treeinterpreter is model specific and can be applied only
to decision tree models. Although model-agnostic interpretability techniques are
convenient, and in some ways ideal, they often rely on surrogate models or other
approximations that can degrade the accuracy of the explanations they provide.
Model-specific interpretation techniques tend to use the model to be interpreted
directly, leading to potentially more accurate explanations.
Understanding and Trust
Machine learning algorithms and the functions they create during training are
sophisticated, intricate, and opaque. Humans who would like to use these models
have basic, emotional needs to understand and trust them because we rely on
them for our livelihoods or because we need them to make important decisions.
For some users, technical descriptions of algorithms in textbooks and journals
provide enough insight to fully understand machine learning models. For these
users, cross-validation, error measures, and assessment plots probably also pro‐
vide enough information to trust a model. Unfortunately, for many applied prac‐
titioners, the usual definitions and assessments don’t often inspire full trust and
understanding in machine learning models and their results.
Trust and understanding are different phenomena, and both are important. The
techniques presented in the next section go beyond standard assessment and
diagnostic practices to engender greater understanding and trust in complex
models. These techniques enhance understanding by either providing transpar‐
ency and specific insights into the mechanisms of the algorithms and the
12 | An Introduction to Machine Learning Interpretability
functions they create or by providing detailed information and accountability for
the answers they provide. The techniques that follow enhance trust by enabling
users to observe or ensure the fairness, stability, and dependability of machine
learning algorithms, the functions they create, and the answers they generate.
Common Interpretability Techniques
Many credible techniques for training interpretable models and gaining insights
into model behavior and mechanisms have existed for years. Many others have
been put forward in a recent flurry of research. This section of the report dis‐
cusses many such interpretability techniques in terms of the proposed machine
learning interpretability taxonomy. The section begins by discussing data visuali‐
zation approaches because having a strong understanding of a dataset is a first
step toward validating, explaining, and trusting models. We then present white-
box modeling techniques, or models with directly transparent inner workings,
followed by techniques that can generate explanations for the most complex
types of predictive models such as model visualizations, reason codes, and vari‐
able importance measures. We conclude the section by discussing approaches for
testing machine learning models for stability and trustworthiness.
Seeing and Understanding Your Data
Seeing and understanding data is important for interpretable machine learning
because models represent data, and understanding the contents of that data helps
set reasonable expectations for model behavior and output. Unfortunately, most
real datasets are difficult to see and understand because they have many variables
and many rows. Even though plotting many dimensions is technically possible,
doing so often detracts from, instead of enhances, human understanding of com‐
plex datasets. Of course, there are many, many ways to visualize datasets. We
chose the techniques highlighted in Tables 1-1 and 1-2 and in Figure 1-5 because
they help illustrate many important aspects of a dataset in just two dimensions.
Table 1-1. A description of 2-D projection data visualization approaches
Technique: 2-D projections
Description: Projecting rows of a dataset from a usually high-dimensional original space into a more visually
understandable lower-dimensional space, ideally two or three dimensions. Techniques to achieve this include Principal
Components Analysis (PCA), Multidimensional Scaling (MDS), t-Distributed Stochastic Neighbor Embedding (t-SNE),
and Autoencoder Networks.
Common Interpretability Techniques | 13
Suggested usage: The key idea is to represent the rows of a dataset in a meaningful low-dimensional space. Datasets
containing images, text, or even business data with many variables can be difficult to visualize as a whole. These
projection techniques enable high-dimensional datasets to be projected into representative low-dimensional spaces
and visualized using the trusty old scatter plot technique. A high-quality projection visualized in a scatter plot should
exhibit key structural elements of a dataset, such as clusters, hierarchy, sparsity, and outliers. 2-D projections are often
used in fraud or anomaly detection to find outlying entities, like people, transactions, or computers, or unusual clusters
of entities.
References:
Visualizing Data using t-SNE
MDS, Cox, T.F., Cox, M.A.A. Multidimensional Scaling. Chapman and Hall. 2001.
The Elements of Statistical Learning
Reducing the Dimensionality of Data with Neural Networks
OSS:
h2o.ai
R (various packages)
scikit-learn (various functions)
Global or local scope: Global and local. You can use most forms of visualizations to see a courser view of the entire
dataset, or they can provide granular views of local portions of the dataset. Ideally, advanced visualization tool kits
enable users to pan, zoom, and drill-down easily. Otherwise, users can plot different parts of the dataset at different
scales themselves.
Best-suited complexity: 2-D projections can help us to
understand very complex relationships in datasets.
Model specific or model agnostic: Model agnostic;
visualizing complex datasets with many variables.
Trust and understanding: Projections add a degree of trust if they are used to confirm machine learning modeling
results. For instance, if known hierarchies, classes, or clusters exist in training or test datasets and these structures are
visible in 2-D projections, it is possible to confirm that a machine learning model is labeling these structures correctly.
A secondary check is to confirm that similar attributes of structures are projected relatively near one another and
different attributes of structures are projected relatively far from one another. Consider a model used to classify or
cluster marketing segments. It is reasonable to expect a machine learning model to label older, richer customers
differently than younger, less affluent customers, and moreover to expect that these different groups should be
relatively disjointed and compact in a projection, and relatively far from one another.
Table 1-2. A description of the correlation graph data visualization
approach
Technique: Correlation graphs
Description: A correlation graph is a two-dimensional representation of the relationships (correlation) in a dataset.
The authors create correlation graphs in which the nodes of the graph are the variables in a dataset and the edge
weights (thickness) between the nodes are defined by the absolute values of their pairwise Pearson correlation. For
visual simplicity, absolute weights below a certain threshold are not displayed, the node size is determined by a node’s
number of connections (node degree), node color is determined by a graph community calculation, and node position
is defined by a graph force field algorithm. The correlation graph allows us to see groups of correlated variables,
identify irrelevant variables, and discover or verify important relationships that machine learning models should
incorporate, all in two dimensions.
Suggested usage: Correlation graphs are a very powerful tool for seeing and understanding relationships
(correlation) between variables in a dataset. They are especially powerful in text mining or topic modeling to see the
relationships between entities and ideas. Traditional network graphs—a similar approach—are also popular for
finding relationships between customers or products in transactional data and for use in fraud detection to find
unusual interactions between entities like people or computers.
14 | An Introduction to Machine Learning Interpretability
Discovering Diverse Content Through
Random Scribd Documents
An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill
An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill
The Project Gutenberg eBook of Battleground
This ebook is for the use of anyone anywhere in the United States
and most other parts of the world at no cost and with almost no
restrictions whatsoever. You may copy it, give it away or re-use it
under the terms of the Project Gutenberg License included with this
ebook or online at www.gutenberg.org. If you are not located in the
United States, you will have to check the laws of the country where
you are located before using this eBook.
Title: Battleground
Author: Lester Del Rey
Release date: April 1, 2024 [eBook #73313]
Language: English
Original publication: New York, NY: King-Size Publications, Inc, 1954
Credits: Greg Weeks, Mary Meehan and the Online Distributed
Proofreading Team at http://www.pgdp.net
*** START OF THE PROJECT GUTENBERG EBOOK BATTLEGROUND
***
An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill
Battleground
By Lester del Rey
We know that the human race must
struggle to survive—and that
on the outcome may hang disaster.
But just how wide is Armageddon?
Lester del Rey would certainly
be acclaimed by any unbiased
critic as one of America's ten
most gifted science fiction
writers. His work has
appeared in many magazines,
and Hollywood, radio, and TV
have all enhanced his ever-
growing popularity. In
BATTLEGROUND he has found
a theme worthy of his rare
talents—the doom potential in
an alien culture.
[Transcriber's Note: This etext was produced from
Fantastic Universe July 1954.
Extensive research did not uncover any evidence that
the U.S. copyright on this publication was renewed.]
Beyond the observation port of the hypercruiser Clarion lay the utter
blackness of nothing. The ship was effectively cutting across space
without going through it, spanning parsecs for every subjective day
of travel.
There were neither stars, space nor time around them, and only the
great detectors built into the ship could keep them from being
hopelessly lost. These followed a trail of energy laid down on the
way out from Earth years before, leading them homeward, solar
system by solar system.
Acting Captain Lenk stood with his back to the other three, studying
their sullen reflections in the port. It was better than facing them
directly, somehow, even though it showed his own bald scalp, tautly
hollow face and slump-shouldered body.
"All right," he said at last. "So we vote again. I'll have to remind you
we're under orders to investigate all habitable planets on a line back
to Earth. I vote we follow orders. Jeremy?"
The xenologist shrugged faintly. His ash-blond coloring, general
slimness and refinement of features gave him a look of weakness,
but his voice was a heavy, determined bass. "I stand pat. We didn't
explore the last planet enough. I vote we go back and make a
thorough job of it."
"Home—at once!" The roar came from the squat, black-bearded
minerologist, Graves. "God never meant man to leave the world on
which He put him! Take us back, I say, where...."
"Aimes?" Lenk cut in quickly.
They'd heard Graves' violently fundamentalist arguments endlessly,
until the sound of his voice was enough to revive every antagonism
and hatred they had ever felt. Graves had been converted to the
newest and most rapidly expanding of the extreme evangelical faiths
just before they had left. And unfortunately for the others, he had
maintained that his covenant to go on the exploration could not be
broken, even though venturing into space was a cardinal sin.
Aimes glowered at the others from under grizzled eyebrows. Of
them all, the linguodynamicist took part in the fewest arguments
and apparently detested the others most. He turned his heavy body
now as he studied them, seemingly trying to make up his mind
which he detested most at the moment. Then he grunted.
"With you, Captain," Aimes said curtly.
He swung on his heel and stalked out of the control cabin, to go
back to studying the undeciphered writing of the planets they had
visited.
Graves let out a single hiss and followed, probably heading for the
galley, since it was his period to cook.
Jeremy waited deliberately until the minerologist's footsteps could no
longer be heard, and then turned to leave.
Lenk hesitated for a second, then decided that monotony was worse
than anything else. "How about some chess, Jeremy?" he asked.
The other stopped, and some of the sullenness left his face.
Apparently the protracted arguments had wearied him until he was
also feeling the relief of decisive action. "Why not?" Jeremy said. "I'll
set up the board while you fiddle with your dials."
No fiddling was necessary, since Lenk had never cut them off their
automatic detecting circuit, but he went through the motions for the
other's benefit. Gravitic strain came faintly through hyperspace, and
the ship could locate suns by it. If approach revealed planets of
habitable size, it was set to snap out of hyperspace automatically
near the most likely world.
Lenk had been afraid such a solar system might be found before
they could resolve the argument, and his own relief from the full
measure of cabin fever came from the end of that possibility.
They settled down to the game with a minimum of conversation.
Since the other four members of the crew had been killed by some
unknown virus, conversation had proven less than cheerful. It was
better when they were on a planet and busy, but four people were
too few for the monotony of hypertravel.
Then Jeremy snapped out of it. He cleared his throat tentatively
while castling, grimaced, and then nodded positively. "I was right,
Lenk. We never did explore those other planets properly."
"Maybe not," Lenk agreed. "But with the possibility of alien raiders
headed toward Earth...."
"Bunk! No sign of raiders. Every indication was that the races on
those worlds killed themselves off—no technology alien to their own
culture. And there would have been with aliens invading."
"Time that way? Coincidence can account for just so much."
"It has to account for the lowering cultural levels in the colonizing
direction," Jeremy said curtly. "Better leave that sort of argument to
Aimes. He's conditioned to it."
Lenk shrugged and turned back to the chess. It was over his head,
anyhow.
Men had built only three other cruisers capable of exceeding the
speed of light, so far. The first had gone out in a direction opposite
to that of the Clarion and had returned to report a regular decline in
culture as the distance of habitable worlds from Earth increased. The
nearest was in a medieval state, the next an early bronze culture,
then a stone-age one, and so on, down to the furthest explored,
where the native race had barely discovered fire.
It had been either impossible coincidence or the evidence of some
law nobody has been quite ready to accept, save for the newly
spreading fundamentalists, who maintained it proved that Earth was
the center of the universe.
The other two cruisers had not reported back when the Clarion took
off.
And their own trip had only added to the mystery, and they had
touched on four habitable systems. And on each, there had been
evidence of a highly developed race and some vast struggle that had
killed off that race completely.
The furthest had lain fallow for an unguessable period of time, and
in each succeeding one, evidence indicated the time interval since
the destruction of the culture had been less. On the world they had
left, the end must have come not more than a few thousand years
before.
"Suppose one race had gone along in a straight line, seeding the
systems with life," Lenk guessed. "Remember, every race we found
had similarities. And suppose another race of conquerors stumbled
on that line and is mopping up? Maybe with some weapon that
leaves no trace."
Jeremy looked at him. "Suppose Graves is right, and his God wipes
out all wicked races. He keeps planting races, hoping they'll turn out
right, and wiping out the old ones?" he snorted. "Only, of course he
thinks Earth is the only world that counts. We're dealing with facts,
Lenk, not wild theories. And why should an alien race simply wipe
out another race, wait a thousand years or so, and move on—
without using the plant afterwards, even for a base for the next
operation? Also, why should we find plenty of weapons, but no
skeletons?"
"Skeletons are pretty fragile. And if somebody had the mythical heat
ray...."
"Bunk! If it would vaporize calcium in the bones, it would vaporize
some of the parts of the weapons we found." Jeremy moved a rook,
considered it, and pointed. "Check. And there are always some parts
of skeletons that will last more than a thousand years. I've got a
theory, but it's...."
Pale light cut through the viewing port, and a gong sounded in the
room. Lenk jerked to his feet and moved to his screens.
"Maybe we'll know now," he said. "We'll be landing on a planet in
about an hour. And it looks pretty much like Earth, from here."
He cranked up the gain on the magnifiers, and studied it again,
scanning the surface of the planet below them. There were clouds in
the sky, but through a clear patch he made out enough evidence.
"Want me to set us down near a city?" he asked, pointing.
Jeremy nodded. Like all the other planets on this trip, the one below
was either inhabited or had been inhabited until recently.
They knew before the ship landed that the habitation was strictly
past tense, at least as far as any high level of culture was
concerned. The cities were in ruins.
At one time, they must have reared upwards to heights as imposing
as those of the free state of New York City or the commonwealth of
Chicago. But now the buildings had lost their top-most towers, and
the bases showed yawning holes in many places.
They landed in the center of the largest city, after a quick skim over
the surface to be sure that no smaller city had escaped. A quick
sampling of the air indicated it was breathable, with no poisons and
only a touch of radioactivity, too low to be dangerous.
Aimes and Jeremy went out, each in a little tractor. While making
explorations, they were capable of forgetting their antagonisms in
their common curiosity.
Graves remained on the ship. He had decided somewhere along the
line that setting foot on an alien planet was more sinful than travel
through space, and refused to be shaken.
Lenk finished what observations were necessary. He fiddled around,
bothered by the quiet city outside. It had been better on the other
worlds, where the ruins had been softened by time and weather.
Here, it was too easy to imagine things. Finally, he climbed into
rough clothes, and went out on foot.
Everything was silent. Grass almost identical with that of Earth was
growing through much of the torn pavement, and there were trees
and bushes here and there. Vines had climbed some of the ruined
walls. But there were no flowers. Much of the planet had apparently
been overgrown with forest and weeds, but this city was in a
temperate zone, and clear enough for easy travel.
Lenk listened to the wind, and the faint sighing of a few trees
nearby. He kicked over stones and rubble where they lay on patches
of damp earth. And he kept looking at the sky.
But it was no different from other worlds as far as the desolation
went. There were no insects, and no animals stared warily up from
the basements, and the grass showed no signs of having been
grazed. It was as if the animal kingdom had never existed here.
He made his way back from the section of largest buildings, toward
what might have been a park at one time. Here there was less
danger of being trapped in any collapsing ruin, and he moved more
confidently. The low buildings might have been public sites, but they
somehow seemed more like homes.
He stumbled on something, and leaned down to pick it up. At first,
the oddness of its design confused his vision. Then he made out a
barrel with rifling inside, and a chamber that still contained pellets,
now covered with corrosion. It would have fitted his hand oddly, but
he could have used the pistol.
Beyond it lay a line of rust that might have been a sword at one
time. Coiled over it was a heavy loop of thick plastic that ended in a
group of wires, apparently of stainless steel. Each wire ended in a
row of cutting points. It might have been a cross between a knout
and a bolas. He had a vision of something alien and sinister coming
at him with one of those, and shuddered.
There was a ruin of rust and corroded parts further on that might
have been a variation of a machine gun. Lenk started for it, to be
stopped by a shout.
"Hold it!" It was Jeremy's voice, and now the tank came around a
corner, and headed toward him. "Stay put, Lenk. That thing may be
booby-trapped. And we can't be sure here that there has been time
enough to make it safe."
Lenk shuddered again, and climbed in hastily as Jeremy held open
the door. It was tight inside, but reasonably safe, since the tank had
been designed for almost anything. Jeremy must have seen him
leaving the ship and followed.
But by noon they had abandoned the fear of booby-traps. Either
there had never been any or time had drawn their stings.
Lenk wandered through the section already roughly surveyed, and
declared safe. He felt convinced the inhabitants of this world once
had been more like men than most other races. They had been two-
legged, with arms and heads in a human position on their upright
bodies.
Judging from the size of the furniture, they had been slightly larger
than men but not enough to matter. The pictures on the walls were
odd mostly for the greenish tints of the skin and the absence of
outward noses or ears. With a little fixing and recoloring, they might
have been people.
He came to a room that had been sealed off, pried open the door,
and went in. It smelled stale enough to indicate that it had been
reasonably air-tight. Benches and chairs ran along one wall, and a
heavy wooden table occupied the middle. On that were piled bits
and pieces in a curious scramble. He studied them carefully—belts,
obviously, buttons, the inevitable weapons, scraps of plastic
material.
A minute later, he was shouting for Jeremy over the little walkie-
talkie. The xenologist appeared in less than five minutes. He stared
about for a second, then grinned wryly.
"Your first, eh? I've found a lot of them. Sure, those were corpses
there once." He saw Lenk's expression, and shrugged. "Oh, you
were right to call me. It proves we weren't crazy. Wood and some
cloth still preserved, but no bones. I've got a collection of pictures
like that."
"A corrosive gas—" Lenk suggested.
Jeremy shook his head vigorously. "No dice, Captain. See that belt?
It's plant fiber—something like linen. No gas strong enough to eat
up a body would leave that unharmed. And they had skeletons, too
—we've found models in what must have been a museum. But we
can't even find the fossil skeletons that should be there. Odd,
though."
He prodded about among the weapons, shaking his head. "All the
weapons in places like this show evidence of one homogeneous
design. And all the ornaments are in a T shape, like this one."
He lifted a stainless metal object from the floor and dropped it. "But
outside in the square, there are at least two designs. For once, it
almost looks as if your idea of an alien invader might be worth
considering."
The radio at his side let out a squawk, and he cut it on listening to
the thin whisper that came from it. Abruptly, he swung about and
headed toward his tractor outside, with Lenk following.
"Aimes has found something," Jeremy said.
They found the linguodynamicist in the gutted ruins of a building
into which great concrete troughs led. A rusty ruin in one of the
troughs indicated something like a locomotive had once run in it,
apparently on great ball bearings. The fat man was pointing
excitedly toward something on one of the walls.
At first glance, it seemed to be a picture of more of the green
people, apparently undergoing some violent torture. Then their eyes
swept on—and they gasped.
Over the green people, three vaguely reptilian monstrosities were
hovering, at least twice the size of the others, all equipped with the
fanged whips Lenk had seen. One of the green men was apparently
trying to defend himself with a huge T-shaped weapon, but the
others were helpless. The reptilian monsters sprouted great ugly
wings of glaring red from their shoulders.
"The invaders," Lenk said. They were horrible things to see. "But
their weapons weren't that big...."
"A war poster!" Aimes said bitterly. "It doesn't tell a thing except that
there were two groups."
Jeremy studied it, more closely. "Not necessarily even that. It's
designed for some emotional effect. But at least, it's a hint that
there may have been enemies unlike the ones who lived here. Lenk,
can I take the scout ship out?"
"Go ahead," Lenk told him. He frowned at the poster. "Jeremy, if that
means the human race is going to have to face an alien invasion
from monsters like that...."
"It means nothing!"
Jeremy went off, with Aimes apparently in agreement for a change.
Lenk stood studying the poster. Finally he ripped it down, surprised
to find how strong it still was, and rolled it up to carry back to the
ship.
Each world had been razed more recently, and each with the same
curious curse. The race had risen to a high culture, and then had
seemingly been wiped out in a few brief years. The destruction had
accounted for all life on the planet, other than vegetable—and had
wiped out even the bones. All that had been left was a collection of
weapons and relics of more doubtful use.
The pattern was the same. The direction was steadily toward Earth,
leaping from planet to planet at jumps of thousands of years apart,
or perhaps mere hundreds. This planet must have been attacked
less than five hundred years before, though it was hard to tell
without controlled study of decay here.
Even now Earth might be suffering the invasion! They had been
gone nearly three years. And during that time, the monsters might
have swooped down hideously out of space.
They might return to find the Earth a wasteland!
His thoughts were a turmoil that grew worse as he stared at the
poster. The unknown artist had done his job well. A feeling of horror
poured out of it, filling him with an insensate desire to find such
monstrosities and rend and maim them, as they had tormented the
unfortunate green people.
Graves came stomping up to the control room, carrying lunch, and
took one look at the picture. "Serves the heathens right," he
grumbled. "Look at them. In hell, suffering from the lashes of the
devils of the pit. And still holding up that heathen charm."
Lenk blinked. But Graves' idea wasn't too fantastic, at that. The
creatures did look like devils, and the T-shaped object might be a
religious symbol. Hadn't some faith or other used the tau cross in its
worship? And those objects on the third world back had resembled
swastikas, which were another religious symbol on Earth.
That part fitted. During periods of extreme stress or danger, man
sought some home in his faith. Was it so unnatural that alien races
might do the same?
"Isn't there anything hopeful in your religion, Graves?" he asked
bitterly, wondering what the man had been like before his
conversion to the rigidity he now possessed. He'd probably been as
violent an atheist. Usually, a fanatic who switched sides became
doubly fanatical.
The revival of religious devotion had begun some fifteen years
before, and from what Lenk had seen, the world had been a better
and more kindly place for it. But there would always be those who
thought the only true devotion lay in the burning of witches. Or
maybe Graves needed psychiatric treatment for his morose moods
were becoming suspiciously psychotic, and his fanaticism might be
only a sign of deeper trouble.
The man went off muttering something about the prophecy and the
time being at hand for all to be tried in fire. Lenk went back to
staring at the poster until he heard the scout come back. He found
Aimes and Jeremy busy unloading what seemed to be loot enough
to fill two of the scouts.
"A whole library, almost intact," Aimes spoke with elation. "And
plenty of it is on film, where we can correlate words and images! In
two weeks, I'll speak the language like a native."
"Good!" Lenk told him. "Because in about that time, we'll be home
on Earth. As long as there's any chance that our people should be
warned about invaders, I'm not delaying any longer!"
"You can forget the alien invaders," Jeremy objected.
Then he exploded his thunderbolt. The horrible aliens had proved to
be no more than a group of purple-skinned people on the other side
of the planet with a quite divergent culture, but of the same basic
stock as the green-skinned men. They also exaggerated in their
drawings, and to about the same degree.
Fortunately the treasure-trove from the library would give the two
men enough for years of work, and required the attention of a full
group. They were eager now to take off for Earth and to begin
recruiting a new expedition, taking only enough with them for the
first basic steps.
Lenk headed directly for the control room. He began setting up the
proper directions on the board while Jeremy finished the account.
"But something's hitting the planets," he objected. His hand found
the main button and the Clarion began heading up through the
atmosphere on normal gravity warp, until she could reach open
space, and go into hyperdrive. "Your monsters prove to be only
people—but it still doesn't explain the way disaster follows a line
straight toward Earth! And until we know...."
"Maybe we'd be better off not knowing," Jeremy said. But he refused
to clarify his statement.
Then the hyperdrive went on.
The homeward trip was somewhat different from the others. There
were none of the petty fights this time.
Aimes and Jeremy were busy in their own way, decoding the
language and collating the material they had.
Graves was with them, grumbling at being around the heathen
things, but apparently morbidly fascinated by them.
Lenk could offer no help, and his duty lay with the ship. He
pondered over the waves of destruction that seemed to wash toward
Earth, and the diminishing cultural levels on the planets beyond. It
couldn't be pure coincidence. Nor could he accept the idea that
Earth was the center of the universe, and that everything else was
necessarily imperfect.
Surprisingly, it was Graves who gave him his first hopeful suggestion.
A week had passed, and they were well into the second when the
men really caught his attention. Graves was bringing his lunch,
actually smiling. He frowned.
"What gives?" he asked.
"It's all true!" Graves answered, and there was an inner glow to him.
"Just as it's prophesied in Revelations. There were times when I had
doubts, but now I know. God has set the heathens before me as
proof that Armageddon will come, and I have been singled out to
bring the glad tidings to His faithful!"
"I thought you didn't believe God would have anything to do with
heathens!" Lenk objected. He was trying to recall whether a sudden
phase of manic joy was a warning symptom or not.
"I misunderstood. I thought God had forbade space flight. But now it
is proved how He loves us. He singled us out to teach us to fly
through space that we could learn." Graves gathered up the dishes
without noticing that Lenk hadn't touched them and went off in a
cloud of ecstasy.
But his point had been made, and Lenk turned it over. Then, with a
shout, he headed toward the headquarters of the two remaining
scientists. He found them sitting quietly, watching a reel of some
kind being projected through an alien device.
"I hear it's Armageddon we're facing," he said.
He expected grins of amusement from them—or at least from
Jeremy. But none came. Aimes nodded.
"First progress in all directions. Then a period when religion seems
to be in the decline. Then a revival, and a return to faith in the
prophecies. All religions agree on those prophecies, Lenk.
Revelations refer to the end of Armageddon, when the whole world
will wipe itself out before the creation of a better world, in one
planet-wide war. The old Norse legends spoke of a Fimbulvetr, when
the giants and their gods would destroy the earth in war. And these
green-skinned peoples had the same religious prophecies. They
came true, too. Armageddon. Contagious Armageddon."
Lenk stared from one to the other, suspecting a joke. "But that still
leaves coincidence—the way things move from planet to planet...."
"Not at all," Jeremy said. "These people didn't have space travel, but
they had some pretty highly developed science. They found what we
thought we'd disproved—an ether drift. It would carry spores from
planet to planet—and in the exact direction needed to account for
what we've seen. Races were more advanced back that way, less so
the way we first went, simply because of the time it took the spores
to drift."
"And what about the destruction?" Lenk asked woodenly. Their faces
were getting him—they looked as if they believed it. "Is there
another disease spore to drive races mad?"
"Nothing like that. Just the natural course of cultures when they
pass a certain level," Jeremy answered. "I should have seen that
myself. Every race follows the same basic pattern. The only question
is how much time we've got left—a week or a thousand years?"
They turned back to their projection device, but Lenk caught the
xenologist by the shoulder and swung him back. "But they didn't
have space travel! That doesn't fit their pattern. Even if you're
right...."
Jeremy nodded. "We don't have the secret of immortality, either. And
this race did. But, damn it, I'd still like to know what happened to all
those skeletons?"
Lenk went back to his control room. And perversely, his thoughts
insisted on accepting their explanation. It would be like man to think
that important things could only happen on his own home planet,
and prophecy an end for his own race, never dreaming it could
happen to others.
It would be normal for him to sense somehow out of his own nature
what his inevitable end must be—and then to be completely amazed
when he found the same end for other races.
But....
Space travel—travel at faster than light speeds—had to make a
difference. There were the other worlds on the other side of the sun,
where men were already planning to colonize. Even if a world might
normally blow up in a final wild holocaust, it would have its whole
racial pattern changed when it began to spread out among the stars.
It would have to have a revival of the old pioneering spirit. There
had been the beginnings of that when they left. And with that, such
a war could be prevented forever.
He heard Graves moving about in the galley, singing something
about graves opening, and grimaced.
Besides, Jeremy had admitted that they didn't have all the answers.
The mystery of the vanished skeletons remained—and until that was
accounted for, nothing could be considered explained.
He forgot about the skeletons as he began planning how he'd
wangle his way into one of the colonies. Then, even if catastrophe
did strike Earth in another thousand years or so, the race could go
on. Ten more years, and man would be safe....
He was feeling almost cheerful as they finally came out of
hyperspace near Earth ... and landed....
The skeletons—lay scattered everywhere.
*** END OF THE PROJECT GUTENBERG EBOOK BATTLEGROUND
***
Updated editions will replace the previous one—the old editions will
be renamed.
Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying
copyright royalties. Special rules, set forth in the General Terms of
Use part of this license, apply to copying and distributing Project
Gutenberg™ electronic works to protect the PROJECT GUTENBERG™
concept and trademark. Project Gutenberg is a registered trademark,
and may not be used if you charge for an eBook, except by following
the terms of the trademark license, including paying royalties for use
of the Project Gutenberg trademark. If you do not charge anything
for copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.
START: FULL LICENSE
THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.
Section 1. General Terms of Use and
Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.
1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.
1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.
1.E. Unless you have removed all references to Project Gutenberg:
1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning
of this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or
expense to the user, provide a copy, a means of exporting a copy, or
a means of obtaining a copy upon request, of the work in its original
“Plain Vanilla ASCII” or other form. Any alternate format must
include the full Project Gutenberg™ License as specified in
paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:
• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.
• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.
• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.
1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.
1.F.
1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.
1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
Welcome to our website – the ideal destination for book lovers and
knowledge seekers. With a mission to inspire endlessly, we offer a
vast collection of books, ranging from classic literary works to
specialized publications, self-development books, and children's
literature. Each book is a new journey of discovery, expanding
knowledge and enriching the soul of the reade
Our website is not just a platform for buying books, but a bridge
connecting readers to the timeless values of culture and wisdom. With
an elegant, user-friendly interface and an intelligent search system,
we are committed to providing a quick and convenient shopping
experience. Additionally, our special promotions and home delivery
services ensure that you save time and fully enjoy the joy of reading.
Let us accompany you on the journey of exploring knowledge and
personal growth!
ebookfinal.com

More Related Content

PDF
O'Reilly ebook: Machine Learning at Enterprise Scale | Qubole
PDF
Reliable Machine Learning Applying Sre Principles To Ml In Production 1st Edi...
PDF
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
Machine learning for the web explore the web and make smarter predictions usi...
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
O'Reilly ebook: Machine Learning at Enterprise Scale | Qubole
Reliable Machine Learning Applying Sre Principles To Ml In Production 1st Edi...
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
Machine learning for the web explore the web and make smarter predictions usi...
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley

Similar to An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill (20)

PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
A Guide to Machine Learning Developer in 2024.pdf
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
PDF
Solution Manual for Management Information Systems: Managing the Digital Firm...
PDF
Crack the AI-900 Exam in 2025 with Confidence – Real Dumps & Practice Questio...
PDF
Microsoft Azure AI Fundamentals: Introduction to AI Concepts and Azure AI Ser...
PDF
The Future of Artificial Intelligence Depends on Trust
PDF
Explainable Ai For Practitioners Designing And Implementing Explainable Ml So...
PDF
Solution Manual for Management Information Systems: Managing the Digital Firm...
PDF
Practical Machine Learning
PDF
Machine learning for Marketers
PDF
Using MIS 9th Edition Kroenke Solutions Manual
PDF
Solution Manual for Management Information Systems: Managing the Digital Firm...
PDF
Technovision
PDF
Using MIS 9th Edition Kroenke Solutions Manual
PDF
Using MIS 9th Edition Kroenke Solutions Manual
PDF
Solution Manual for Management Information Systems: Managing the Digital Firm...
PDF
Solution Manual for Management Information Systems: Managing the Digital Firm...
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
A Guide to Machine Learning Developer in 2024.pdf
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
Solution Manual for Systems Analysis and Design, 12th Edition, Scott Tilley
Solution Manual for Management Information Systems: Managing the Digital Firm...
Crack the AI-900 Exam in 2025 with Confidence – Real Dumps & Practice Questio...
Microsoft Azure AI Fundamentals: Introduction to AI Concepts and Azure AI Ser...
The Future of Artificial Intelligence Depends on Trust
Explainable Ai For Practitioners Designing And Implementing Explainable Ml So...
Solution Manual for Management Information Systems: Managing the Digital Firm...
Practical Machine Learning
Machine learning for Marketers
Using MIS 9th Edition Kroenke Solutions Manual
Solution Manual for Management Information Systems: Managing the Digital Firm...
Technovision
Using MIS 9th Edition Kroenke Solutions Manual
Using MIS 9th Edition Kroenke Solutions Manual
Solution Manual for Management Information Systems: Managing the Digital Firm...
Solution Manual for Management Information Systems: Managing the Digital Firm...
Ad

Recently uploaded (20)

PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PDF
Complications of Minimal Access Surgery at WLH
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PPTX
Presentation on HIE in infants and its manifestations
PPTX
GDM (1) (1).pptx small presentation for students
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PPTX
Cell Types and Its function , kingdom of life
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
master seminar digital applications in india
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PPTX
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
PDF
Anesthesia in Laparoscopic Surgery in India
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PPTX
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
O5-L3 Freight Transport Ops (International) V1.pdf
Complications of Minimal Access Surgery at WLH
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
Abdominal Access Techniques with Prof. Dr. R K Mishra
Presentation on HIE in infants and its manifestations
GDM (1) (1).pptx small presentation for students
human mycosis Human fungal infections are called human mycosis..pptx
Cell Types and Its function , kingdom of life
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
master seminar digital applications in india
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
Anesthesia in Laparoscopic Surgery in India
102 student loan defaulters named and shamed – Is someone you know on the list?
Microbial diseases, their pathogenesis and prophylaxis
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Final Presentation General Medicine 03-08-2024.pptx
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
Pharmacology of Heart Failure /Pharmacotherapy of CHF
STATICS OF THE RIGID BODIES Hibbelers.pdf
Ad

An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill

  • 1. Visit ebookfinal.com to download the full version and explore more ebooks or textbooks An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill _____ Click the link below to download _____ https://ebookfinal.com/download/an-introduction-to-machine- learning-interpretability-1st-edition-patrick-hall-and- navdeep-gill/ Explore and download more ebooks or textbook at ebookfinal.com
  • 2. Here are some recommended products that we believe you will be interested in. You can click the link to download. Artificial Intelligence With an Introduction to Machine Learning 1st Edition Richard E Neapolitan https://ebookfinal.com/download/artificial-intelligence-with-an- introduction-to-machine-learning-1st-edition-richard-e-neapolitan/ A Hands On Introduction to Machine Learning 1st Edition Shah https://ebookfinal.com/download/a-hands-on-introduction-to-machine- learning-1st-edition-shah/ A Concise Introduction to Machine Learning 1st Edition A.C. Faul (Author) https://ebookfinal.com/download/a-concise-introduction-to-machine- learning-1st-edition-a-c-faul-author/ Inside the Machine An Illustrated Introduction to Microprocessors and Computer Architecture 1st Edition Jon Stokes https://ebookfinal.com/download/inside-the-machine-an-illustrated- introduction-to-microprocessors-and-computer-architecture-1st-edition- jon-stokes/
  • 3. Social Learning An Introduction to Mechanisms Methods and Models William Hoppitt https://ebookfinal.com/download/social-learning-an-introduction-to- mechanisms-methods-and-models-william-hoppitt/ Handbook of Natural Language Processing Second Edition Chapman Hall CRC Machine Learning Pattern Recognition Series Nitin Indurkhya https://ebookfinal.com/download/handbook-of-natural-language- processing-second-edition-chapman-hall-crc-machine-learning-pattern- recognition-series-nitin-indurkhya/ Awakening An Introduction to the History of Eastern Thought 6th Edition Patrick S. Bresnan https://ebookfinal.com/download/awakening-an-introduction-to-the- history-of-eastern-thought-6th-edition-patrick-s-bresnan/ Designing Human machine Cooperation Systems 1st Edition Patrick Millot https://ebookfinal.com/download/designing-human-machine-cooperation- systems-1st-edition-patrick-millot/ A Machine Learning Approach to Phishing Detection and Defense 1st Edition I.S. Amiri https://ebookfinal.com/download/a-machine-learning-approach-to- phishing-detection-and-defense-1st-edition-i-s-amiri/
  • 5. An Introduction to Machine Learning Interpretability 1st Edition Patrick Hall And Navdeep Gill Digital Instant Download Author(s): Patrick Hall and Navdeep Gill ISBN(s): 9781492033141, 1492033146 Edition: 1 File Details: PDF, 3.87 MB Year: 2018 Language: english
  • 6. Patrick Hall & Navdeep Gill An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI An Introduction to Machine Learning Interpretability
  • 8. Patrick Hall and Navdeep Gill An Introduction to Machine Learning Interpretability An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI Boston Farnham Sebastopol Tokyo Beijing Boston Farnham Sebastopol Tokyo Beijing
  • 9. 978-1-492-03314-1 [LSI] An Introduction to Machine Learning Interpretability by Patrick Hall and Navdeep Gill Copyright © 2018 O’Reilly Media, Inc. All rights reserved. Printed in the United States of America. Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. O’Reilly books may be purchased for educational, business, or sales promotional use. Online edi‐ tions are also available for most titles (http://oreilly.com/safari). For more information, contact our corporate/institutional sales department: 800-998-9938 or corporate@oreilly.com. Editor: Nicole Tache Production Editor: Nicholas Adams Copyeditor: Octal Publishing, Inc. Interior Designer: David Futato Cover Designer: Randy Comer Illustrator: Rebecca Demarest April 2018: First Edition Revision History for the First Edition 2017-03-28: First Release The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. An Introduction to Machine Learning Interpretability, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsi‐ bility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
  • 10. Table of Contents An Introduction to Machine Learning Interpretability. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Machine Learning and Predictive Modeling in Practice 2 Social and Commercial Motivations for Machine Learning Interpretability 3 The Multiplicity of Good Models and Model Locality 5 Accurate Models with Approximate Explanations 7 Defining Interpretability 9 A Machine Learning Interpretability Taxonomy for Applied Practitioners 10 Common Interpretability Techniques 13 Testing Interpretability 29 Machine Learning Interpretability in Action 30 Conclusion 31 iii
  • 12. An Introduction to Machine Learning Interpretability Understanding and trusting models and their results is a hallmark of good sci‐ ence. Scientists, engineers, physicians, researchers, and humans in general have the need to understand and trust models and modeling results that affect their work and their lives. However, the forces of innovation and competition are now driving analysts and data scientists to try ever-more complex predictive modeling and machine learning algorithms. Such algorithms for machine learning include gradient-boosted ensembles (GBM), artificial neural networks (ANN), and ran‐ dom forests, among many others. Many machine learning algorithms have been labeled “black box” models because of their inscrutable inner-workings. What makes these models accurate is what makes their predictions difficult to under‐ stand: they are very complex. This is a fundamental trade-off. These algorithms are typically more accurate for predicting nonlinear, faint, or rare phenomena. Unfortunately, more accuracy almost always comes at the expense of interpreta‐ bility, and interpretability is crucial for business adoption, model documentation, regulatory oversight, and human acceptance and trust. The inherent trade-off between accuracy and interpretability in predictive mod‐ eling can be a particularly vexing catch-22 for analysts and data scientists work‐ ing in regulated industries. Due to strenuous regulatory and documentation requirements, data science professionals in the regulated verticals of banking, insurance, healthcare, and other industries often feel locked into using tradi‐ tional, linear modeling techniques to create their predictive models. So, how can you use machine learning to improve the accuracy of your predictive models and increase the value they provide to your organization while still retaining some degree of interpretability? This report provides some answers to this question by introducing interpretable machine learning techniques, algorithms, and models. It discusses predictive modeling and machine learning from an applied perspective and puts forward social and commercial motivations for interpretability, fairness, accountability, 1
  • 13. and transparency in machine learning. It defines interpretability, examines some of the major theoretical difficulties in the burgeoning field, and provides a taxon‐ omy for classifying and describing interpretable machine learning techniques. We then discuss many credible and practical machine learning interpretability techniques, consider testing of these interpretability techniques themselves, and, finally, we present a set of open source code examples for interpretability techni‐ ques. Machine Learning and Predictive Modeling in Practice Companies and organizations use machine learning and predictive models for a very wide variety of revenue- or value-generating applications. A tiny sample of such applications includes deciding whether to award someone a credit card or loan, deciding whether to release someone from a hospital, or generating custom recommendations for new products or services. Although many principles of applied machine learning are shared across industries, the practice of machine learning at banks, insurance companies, healthcare providers and in other regu‐ lated industries is often quite different from machine learning as conceptualized in popular blogs, the news and technology media, and academia. It’s also some‐ what different from the practice of machine learning in the technologically advanced and generally unregulated digital, ecommerce, FinTech, and internet verticals. Teaching and research in machine learning tend to put a central focus on algorithms, and the computer science, mathematics, and statistics of learning from data. Personal blogs and media outlets also tend to focus on algorithms and often with more hype and less rigor than in academia. In commercial practice, talent acquisition, data engineering, data security, hardened deployment of machine learning apps and systems, managing and monitoring an ever- increasing number of predictive models, modeling process documentation, and regulatory compliance often take precedence over more academic concerns regarding machine learning algorithms[1]. Successful entities in both traditional enterprise and in digital, ecommerce, Fin‐ Tech, and internet verticals have developed processes for recruiting and retaining analytical talent, amassed vast amounts of data, and engineered massive flows of data through corporate IT systems. Both types of entities have faced data security challenges; both have learned to deploy the complex logic that defines machine learning models into operational, public-facing IT systems; and both are learning to manage the large number of predictive and machine learning models required to stay competitive in today’s data-driven commercial landscape. However, larger, more established companies tend to practice statistics, analytics, and data mining at the margins of their business to optimize revenue or allocation of other valua‐ ble assets. Digital, ecommerce, FinTech, and internet companies, operating out‐ side of most regulatory oversight, and often with direct access to huge data stores 2 | An Introduction to Machine Learning Interpretability
  • 14. and world-class talent pools, have often made web-based data and machine learning products central to their business. In the context of applied machine learning, more regulated, and often more tra‐ ditional, companies tend to face a unique challenge. They must use techniques, algorithms, and models that are simple and transparent enough to allow for detailed documentation of internal system mechanisms and in-depth analysis by government regulators. Interpretable, fair, and transparent models are a serious legal mandate in banking, insurance, healthcare, and other industries. Some of the major regulatory statutes currently governing these industries include the Civil Rights Acts of 1964 and 1991, the Americans with Disabilities Act, the Genetic Information Nondiscrimination Act, the Health Insurance Portability and Accountability Act, the Equal Credit Opportunity Act, the Fair Credit Reporting Act, the Fair Housing Act, Federal Reserve SR 11-7, and European Union (EU) Greater Data Privacy Regulation (GDPR) Article 22[2]. Moreover, regulatory regimes are continuously changing, and these regulatory regimes are key drivers of what constitutes interpretability in applied machine learning. Social and Commercial Motivations for Machine Learning Interpretability The now-contemplated field of data science amounts to a superset of the fields of statis‐ tics and machine learning, which adds some technology for “scaling up” to “big data.” This chosen superset is motivated by commercial rather than intellectual developments. Choosing in this way is likely to miss out on the really important intellectual event of the next 50 years. —David Donoho[3] Usage of AI and machine learning models is likely to become more common‐ place as larger swaths of the economy embrace automation and data-driven deci‐ sion making. Even though these predictive systems can be quite accurate, they have been treated as inscrutable black boxes in the past, that produce only numeric or categorical predictions with no accompanying explanations. Unfortu‐ nately, recent studies and recent events have drawn attention to mathematical and sociological flaws in prominent machine learning systems, but practitioners usually don’t have the appropriate tools to pry open machine learning black boxes to debug and troubleshoot them[4][5]. Although this report focuses mainly on the commercial aspects of interpretable machine learning, it is always crucially important to consider social motivations and impacts of data science, including interpretability, fairness, accountability, and transparency in machine learning. One of the greatest hopes for data science and machine learning is simply increased convenience, automation, and organi‐ zation in our day-to-day lives. Even today, I am beginning to see fully automated baggage scanners at airports and my phone is constantly recommending new Social and Commercial Motivations for Machine Learning Interpretability | 3
  • 15. music that I actually like. As these types of automation and conveniences grow more common, machine learning engineers will need more and better tools to debug these ever-more present, decision-making systems. As machine learning begins to make a larger impact on everyday human life, whether it’s just addi‐ tional convenience or assisting in serious, impactful, or historically fraught and life-altering decisions, people will likely want to know how these automated deci‐ sions are being made. This might be the most fundamental application of machine learning interpretability, and some argue the EU GDPR is already legislating a “right to explanation” for EU citizens impacted by algorithmic deci‐ sions[6]. Machine learning also promises quick, accurate, and unbiased decision making in life-changing scenarios. Computers can theoretically use machine learning to make objective, data-driven decisions in critical situations like criminal convic‐ tions, medical diagnoses, and college admissions, but interpretability, among other technological advances, is needed to guarantee the promises of correctness and objectivity. Without interpretability, accountability, and transparency in machine learning decisions, there is no certainty that a machine learning system is not simply relearning and reapplying long-held, regrettable, and erroneous human biases. Nor are there any assurances that human operators have not designed a machine learning system to make intentionally prejudicial decisions. Hacking and adversarial attacks on machine learning systems are also a serious concern. Without real insight into a complex machine learning system’s opera‐ tional mechanisms, it can be very difficult to determine whether its outputs have been altered by malicious hacking or whether its inputs can be changed to create unwanted or unpredictable decisions. Researchers recently discovered that slight changes, such as applying stickers, can prevent machine learning systems from recognizing street signs[7]. Such adversarial attacks, which require almost no software engineering expertise, can obviously have severe consequences. For traditional and often more-regulated commercial applications, machine learning can enhance established analytical practices (typically by increasing pre‐ diction accuracy over conventional but highly interpretable linear models) or it can enable the incorporation of unstructured data into analytical pursuits. In many industries, linear models have long been the preferred tools for predictive modeling, and many practitioners and decision-makers are simply suspicious of machine learning. If nonlinear models—generated by training machine learning algorithms—make more accurate predictions on previously unseen data, this typically translates into improved financial margins but only if the model is accepted by internal validation teams and business partners and approved by external regulators. Interpretability can increase transparency and trust in com‐ plex machine learning models, and it can allow more sophisticated and poten‐ tially more accurate nonlinear models to be used in place of traditional linear models, even in some regulated dealings. Equifax’s NeuroDecision is a great 4 | An Introduction to Machine Learning Interpretability
  • 16. example of modifying a machine learning technique (an ANN) to be interpreta‐ ble and using it to make measurably more accurate predictions than a linear model in a regulated application. To make automated credit-lending decisions, NeuroDecision uses ANNs with simple constraints, which are somewhat more accurate than conventional regression models and also produce the regulator- mandated reason codes that explain the logic behind a credit-lending decision. NeuroDecision’s increased accuracy could lead to credit lending in a broader portion of the market, such as new-to-credit consumers, than previously possi‐ ble[1][8]. Less-traditional and typically less-regulated companies currently face a greatly reduced burden when it comes to creating fair, accountable, and transparent machine learning systems. For these companies, interpretability is often an important but secondary concern. Even though transparency into complex data and machine learning products might be necessary for internal debugging, vali‐ dation, or business adoption purposes, the world has been using Google’s search engine and Netflix’s movie recommendations for years without widespread demands to know why or how these machine learning systems generate their results. However, as the apps and systems that digital, ecommerce, FinTech, and internet companies create (often based on machine learning) continue to change from occasional conveniences or novelties into day-to-day necessities, consumer and public demand for interpretability, fairness, accountability, and transparency in these products will likely increase. The Multiplicity of Good Models and Model Locality If machine learning can lead to more accurate models and eventually financial gains, why isn’t everyone using interpretable machine learning? Simple answer: it’s fundamentally difficult and it’s a very new field of research. One of the most difficult mathematical problems in interpretable machine learning goes by sev‐ eral names. In his seminal 2001 paper, Professor Leo Breiman of UC, Berkeley, coined the phrase: the multiplicity of good models[9]. Some in credit scoring refer to this phenomenon as model locality. It is well understood that for the same set of input variables and prediction targets, complex machine learning algorithms can produce multiple accurate models with very similar, but not the same, inter‐ nal architectures. This alone is an obstacle to interpretation, but when using these types of algorithms as interpretation tools or with interpretation tools, it is important to remember that details of explanations can change across multiple accurate models. Because of this systematic instability, multiple interpretability techniques should be used to derive explanations for a single model, and practi‐ tioners are urged to seek consistent results across multiple modeling and inter‐ pretation techniques. The Multiplicity of Good Models and Model Locality | 5
  • 17. Figures 1-1 and 1-2 are cartoon illustrations of the surfaces defined by error functions for two fictitious predictive models. In Figure 1-1 the error function is representative of a traditional linear model’s error function. The surface created by the error function in Figure 1-1 is convex. It has a clear global minimum in three dimensions, meaning that given two input variables, such as a customer’s income and a customer’s interest rate, the most accurate model trained to predict loan defaults (or any other outcome) would almost always give the same weight to each input in the prediction, and the location of the minimum of the error function and the weights for the inputs would be unlikely to change very much if the model was retrained, even if the input data about customer’s income and interest rate changed a little bit. (The actual numeric values for the weights could be ascertained by tracing a straight line from minimum of the error function pic‐ tured in Figure 1-1 to the interest rate axis [the X axis] and income axis [the Y axis].) Figure 1-1. An illustration of the error surface of a traditional linear model. (Figure courtesy of H2O.ai.) Because of the convex nature of the error surface for linear models, there is basi‐ cally only one best model, given some relatively stable set of inputs and a predic‐ tion target. The model associated with the error surface displayed in Figure 1-1 would be said to have strong model locality. Moreover, because the weighting of income versus interest rate is highly stable in the pictured error function and its associated linear model, explanations about how the function made decisions about loan defaults based on those two inputs would also be stable. More stable explanations are often considered more trustworthy explanations. Figure 1-2 depicts a nonconvex error surface that is representative of the error function for a machine learning function with two inputs—for example, a cus‐ tomer’s income and a customer’s interest rate—and an output, such as the same customer’s probability of defaulting on a loan. This nonconvex error surface with 6 | An Introduction to Machine Learning Interpretability
  • 18. no obvious global minimum implies there are many different ways a complex machine learning algorithm could learn to weigh a customer’s income and a cus‐ tomer’s interest rate to make a good decision about when they might default. Each of these different weightings would create a different function for making loan default decisions, and each of these different functions would have different explanations. Less-stable explanations feel less trustworthy, but are less-stable explanations actually valuable and useful? The answer to this question is central to the value proposition of interpretable machine learning and is examined in the next section. Figure 1-2. An illustration of the error surface of a machine learning model. (Figure courtesy of H2O.ai.) Accurate Models with Approximate Explanations Due to many valid concerns, including the multiplicity of good models, many researchers and practitioners deemed the complex, intricate formulas created by training machine learning algorithms to be uninterpretable for many years. Although great advances have been made in recent years to make these often nonlinear, nonmonotonic, and noncontinuous machine-learned response func‐ tions more understandable[10][11], it is likely that such functions will never be as directly or universally interpretable as more traditional linear models. Why consider machine learning approaches for inferential or explanatory pur‐ poses? In general, linear models focus on understanding and predicting average behavior, whereas machine-learned response functions can often make accurate but more difficult to explain predictions for subtler aspects of modeled phenom‐ Accurate Models with Approximate Explanations | 7
  • 19. enon. In a sense, linear models create very exact interpretations for approximate models (see Figure 1-3). Figure 1-3. A linear model, g(x), predicts the average number of purchases, given a customer’s age. The predictions can be inaccurate but the explanations are straight‐ forward and stable. (Figure courtesy of H2O.ai.) Whereas linear models account for global, average phenomena in a dataset, machine learning models attempt to learn about the local and nonlinear charac‐ teristics of a dataset and also tend to be evaluated in terms of predictive accuracy. The machine learning interpretability approach seeks to make approximate inter‐ pretations for these types of more exact models. After an accurate predictive model has been trained, it should then be examined from many different view‐ points, including its ability to generate approximate explanations. As illustrated in Figure 1-4, it is possible that an approximate interpretation of a more exact model can have as much, or more, value and meaning than the exact interpreta‐ tions provided by an approximate model. Additionally, the use of machine learning techniques for inferential or predictive purposes shouldn’t prevent us from using linear models for interpretation. In fact, using local linear approximations of more complex machine-learned func‐ tions to derive explanations, as depicted in Figure 1-4, is one of the most popular current approaches. This technique has become known as local interpretable model-agnostic explanations (LIME), and several free and open source imple‐ mentations of LIME are available for practitioners to evaluate[12]. 8 | An Introduction to Machine Learning Interpretability
  • 20. Figure 1-4. A machine learning model, g(x), predicts the number of purchases, given a customer’s age, very accurately, nearly replicating the true, unknown signal- generating function, f(x). Although the explanations for this function are approxi‐ mate, they are at least as useful, if not more so, than the linear model explanations in Figure 1-3. (Figure courtesy of H2O.ai.) Defining Interpretability Let’s take a step back now and offer a definition of interpretability, and also briefly introduce those groups at the forefront of machine learning interpretabil‐ ity research today. In the context of machine learning models and results, inter‐ pretability has been defined as “the ability to explain or to present in understandable terms to a human.”[13]. The latter might be the simplest defini‐ tion of machine learning interpretability, but there are several communities with different and sophisticated notions of what interpretability is today and should be in the future. Two of the most prominent groups pursuing interpretability research are a group of academics operating under the acronym FAT* and civil‐ ian and military researchers funded by the Defense Advanced Research Projects Agency (DARPA). FAT* academics (meaning fairness, accountability, and trans‐ parency in multiple artificial intelligence, machine learning, computer science, legal, social science, and policy applications) are primarily focused on promoting and enabling interpretability and fairness in algorithmic decision-making sys‐ tems with social and commercial impact. DARPA-funded researchers seem pri‐ marily interested in increasing interpretability in sophisticated pattern recognition models needed for security applications. They tend to label their work explainable AI, or XAI. Defining Interpretability | 9
  • 21. A Machine Learning Interpretability Taxonomy for Applied Practitioners Technical challenges as well as the needs and perspectives of different user com‐ munities make machine learning interpretability a subjective and complicated subject. Luckily, a previously defined taxonomy has proven useful for character‐ izing the interpretability of various popular explanatory techniques used in commercial data mining, analytics, data science, and machine learning applica‐ tions[10]. The taxonomy describes models in terms of their complexity, and cate‐ gorizes interpretability techniques by the global or local scope of explanations they generate, the family of algorithms to which they can be applied, and their ability to promote trust and understanding. A Scale for Interpretability The complexity of a machine learning model is directly related to its interpreta‐ bility. Generally, the more complex the model, the more difficult it is to interpret and explain. The number of weights or rules in a model—or its Vapnik–Chervo‐ nenkis dimension, a more formal measure—are good ways to quantify a model’s complexity. However, analyzing the functional form of a model is particularly useful for commercial applications such as credit scoring. The following list describes the functional forms of models and discusses their degree of interpreta‐ bility in various use cases. High interpretability—linear, monotonic functions Functions created by traditional regression algorithms are probably the most interpretable class of models. We refer to these models here as “linear and monotonic,” meaning that for a change in any given input variable (or some‐ times combination or function of an input variable), the output of the response function changes at a defined rate, in only one direction, and at a magnitude represented by a readily available coefficient. Monotonicity also enables intuitive and even automatic reasoning about predictions. For instance, if a credit lender rejects your credit card application, it can easily tell you why because its probability-of-default model often assumes your credit score, your account balances, and the length of your credit history are monotonically related to your ability to pay your credit card bill. When these explanations are created automatically, they are typically called reason codes. Linear, monotonic functions play another important role in machine learn‐ ing interpretability. Besides being highly interpretable themselves, linear and monotonic functions are also used in explanatory techniques, including the popular LIME approach. 10 | An Introduction to Machine Learning Interpretability
  • 22. Medium interpretability—nonlinear, monotonic functions Although most machine-learned response functions are nonlinear, some can be constrained to be monotonic with respect to any given independent vari‐ able. Although there is no single coefficient that represents the change in the response function output induced by a change in a single input variable, nonlinear and monotonic functions do always change in one direction as a single input variable changes. Nonlinear, monotonic response functions usu‐ ally allow for the generation of both reason codes and relative variable importance measures. Nonlinear, monotonic response functions are there‐ fore interpretable and potentially suitable for use in regulated applications. Of course, there are linear, nonmonotonic machine-learned response func‐ tions that can, for instance, be created by the multivariate adaptive regression splines (MARS) approach. We do not highlight these functions here. They tend to be less accurate predictors than purely nonlinear, nonmonotonic functions and less directly interpretable than their completely monotonic counterparts. Low interpretability—nonlinear, nonmonotonic functions Most machine learning algorithms create nonlinear, nonmonotonic response functions. This class of functions is the most difficult to interpret, as they can change in a positive and negative direction and at a varying rate for any change in an input variable. Typically, the only standard interpretability measures these functions provide are relative variable importance measures. You should use a combination of several techniques, which we present in the sections that follow, to interpret these extremely complex models. Global and Local Interpretability It’s often important to understand the entire model that you’ve trained on a global scale, and also to zoom into local regions of your data or your predictions and derive local explanations. Global interpretations help us understand the inputs and their entire modeled relationship with the prediction target, but global interpretations can be highly approximate in some cases. Local interpreta‐ tions help us understand model predictions for a single row of data or a group of similar rows. Because small sections of a machine-learned response function are more likely to be linear, monotonic, or otherwise well-behaved, local explana‐ tions can be more accurate than global explanations. It’s also very likely that the best explanations of a machine learning model will come from combining the results of global and local interpretation techniques. In subsequent sections we will use the following descriptors to classify the scope of an interpretable machine learning approach: A Machine Learning Interpretability Taxonomy for Applied Practitioners | 11
  • 23. Global interpretability Some machine learning interpretability techniques facilitate global explana‐ tions of machine learning algorithms, their results, or the machine-learned relationship between the prediction target and the input variables. Local interpretability Local interpretations promote understanding of small regions of the machine-learned relationship between the prediction target and the input variables, such as clusters of input records and their corresponding predic‐ tions, or deciles of predictions and their corresponding input rows, or even single rows of data. Model-Agnostic and Model-Specific Interpretability Another important way to classify model interpretability techniques is whether they are model agnostic, meaning they can be applied to different types of machine learning algorithms, or model specific, meaning techniques that are applicable only for a single type or class of algorithm. For instance, the LIME technique is model agnostic and can be used to interpret nearly any set of machine learning inputs and machine learning predictions. On the other hand, the technique known as treeinterpreter is model specific and can be applied only to decision tree models. Although model-agnostic interpretability techniques are convenient, and in some ways ideal, they often rely on surrogate models or other approximations that can degrade the accuracy of the explanations they provide. Model-specific interpretation techniques tend to use the model to be interpreted directly, leading to potentially more accurate explanations. Understanding and Trust Machine learning algorithms and the functions they create during training are sophisticated, intricate, and opaque. Humans who would like to use these models have basic, emotional needs to understand and trust them because we rely on them for our livelihoods or because we need them to make important decisions. For some users, technical descriptions of algorithms in textbooks and journals provide enough insight to fully understand machine learning models. For these users, cross-validation, error measures, and assessment plots probably also pro‐ vide enough information to trust a model. Unfortunately, for many applied prac‐ titioners, the usual definitions and assessments don’t often inspire full trust and understanding in machine learning models and their results. Trust and understanding are different phenomena, and both are important. The techniques presented in the next section go beyond standard assessment and diagnostic practices to engender greater understanding and trust in complex models. These techniques enhance understanding by either providing transpar‐ ency and specific insights into the mechanisms of the algorithms and the 12 | An Introduction to Machine Learning Interpretability
  • 24. functions they create or by providing detailed information and accountability for the answers they provide. The techniques that follow enhance trust by enabling users to observe or ensure the fairness, stability, and dependability of machine learning algorithms, the functions they create, and the answers they generate. Common Interpretability Techniques Many credible techniques for training interpretable models and gaining insights into model behavior and mechanisms have existed for years. Many others have been put forward in a recent flurry of research. This section of the report dis‐ cusses many such interpretability techniques in terms of the proposed machine learning interpretability taxonomy. The section begins by discussing data visuali‐ zation approaches because having a strong understanding of a dataset is a first step toward validating, explaining, and trusting models. We then present white- box modeling techniques, or models with directly transparent inner workings, followed by techniques that can generate explanations for the most complex types of predictive models such as model visualizations, reason codes, and vari‐ able importance measures. We conclude the section by discussing approaches for testing machine learning models for stability and trustworthiness. Seeing and Understanding Your Data Seeing and understanding data is important for interpretable machine learning because models represent data, and understanding the contents of that data helps set reasonable expectations for model behavior and output. Unfortunately, most real datasets are difficult to see and understand because they have many variables and many rows. Even though plotting many dimensions is technically possible, doing so often detracts from, instead of enhances, human understanding of com‐ plex datasets. Of course, there are many, many ways to visualize datasets. We chose the techniques highlighted in Tables 1-1 and 1-2 and in Figure 1-5 because they help illustrate many important aspects of a dataset in just two dimensions. Table 1-1. A description of 2-D projection data visualization approaches Technique: 2-D projections Description: Projecting rows of a dataset from a usually high-dimensional original space into a more visually understandable lower-dimensional space, ideally two or three dimensions. Techniques to achieve this include Principal Components Analysis (PCA), Multidimensional Scaling (MDS), t-Distributed Stochastic Neighbor Embedding (t-SNE), and Autoencoder Networks. Common Interpretability Techniques | 13
  • 25. Suggested usage: The key idea is to represent the rows of a dataset in a meaningful low-dimensional space. Datasets containing images, text, or even business data with many variables can be difficult to visualize as a whole. These projection techniques enable high-dimensional datasets to be projected into representative low-dimensional spaces and visualized using the trusty old scatter plot technique. A high-quality projection visualized in a scatter plot should exhibit key structural elements of a dataset, such as clusters, hierarchy, sparsity, and outliers. 2-D projections are often used in fraud or anomaly detection to find outlying entities, like people, transactions, or computers, or unusual clusters of entities. References: Visualizing Data using t-SNE MDS, Cox, T.F., Cox, M.A.A. Multidimensional Scaling. Chapman and Hall. 2001. The Elements of Statistical Learning Reducing the Dimensionality of Data with Neural Networks OSS: h2o.ai R (various packages) scikit-learn (various functions) Global or local scope: Global and local. You can use most forms of visualizations to see a courser view of the entire dataset, or they can provide granular views of local portions of the dataset. Ideally, advanced visualization tool kits enable users to pan, zoom, and drill-down easily. Otherwise, users can plot different parts of the dataset at different scales themselves. Best-suited complexity: 2-D projections can help us to understand very complex relationships in datasets. Model specific or model agnostic: Model agnostic; visualizing complex datasets with many variables. Trust and understanding: Projections add a degree of trust if they are used to confirm machine learning modeling results. For instance, if known hierarchies, classes, or clusters exist in training or test datasets and these structures are visible in 2-D projections, it is possible to confirm that a machine learning model is labeling these structures correctly. A secondary check is to confirm that similar attributes of structures are projected relatively near one another and different attributes of structures are projected relatively far from one another. Consider a model used to classify or cluster marketing segments. It is reasonable to expect a machine learning model to label older, richer customers differently than younger, less affluent customers, and moreover to expect that these different groups should be relatively disjointed and compact in a projection, and relatively far from one another. Table 1-2. A description of the correlation graph data visualization approach Technique: Correlation graphs Description: A correlation graph is a two-dimensional representation of the relationships (correlation) in a dataset. The authors create correlation graphs in which the nodes of the graph are the variables in a dataset and the edge weights (thickness) between the nodes are defined by the absolute values of their pairwise Pearson correlation. For visual simplicity, absolute weights below a certain threshold are not displayed, the node size is determined by a node’s number of connections (node degree), node color is determined by a graph community calculation, and node position is defined by a graph force field algorithm. The correlation graph allows us to see groups of correlated variables, identify irrelevant variables, and discover or verify important relationships that machine learning models should incorporate, all in two dimensions. Suggested usage: Correlation graphs are a very powerful tool for seeing and understanding relationships (correlation) between variables in a dataset. They are especially powerful in text mining or topic modeling to see the relationships between entities and ideas. Traditional network graphs—a similar approach—are also popular for finding relationships between customers or products in transactional data and for use in fraud detection to find unusual interactions between entities like people or computers. 14 | An Introduction to Machine Learning Interpretability
  • 26. Discovering Diverse Content Through Random Scribd Documents
  • 29. The Project Gutenberg eBook of Battleground
  • 30. This ebook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this ebook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook. Title: Battleground Author: Lester Del Rey Release date: April 1, 2024 [eBook #73313] Language: English Original publication: New York, NY: King-Size Publications, Inc, 1954 Credits: Greg Weeks, Mary Meehan and the Online Distributed Proofreading Team at http://www.pgdp.net *** START OF THE PROJECT GUTENBERG EBOOK BATTLEGROUND ***
  • 32. Battleground By Lester del Rey We know that the human race must struggle to survive—and that on the outcome may hang disaster. But just how wide is Armageddon? Lester del Rey would certainly be acclaimed by any unbiased critic as one of America's ten most gifted science fiction writers. His work has appeared in many magazines, and Hollywood, radio, and TV have all enhanced his ever- growing popularity. In BATTLEGROUND he has found a theme worthy of his rare talents—the doom potential in an alien culture.
  • 33. [Transcriber's Note: This etext was produced from Fantastic Universe July 1954. Extensive research did not uncover any evidence that the U.S. copyright on this publication was renewed.]
  • 34. Beyond the observation port of the hypercruiser Clarion lay the utter blackness of nothing. The ship was effectively cutting across space without going through it, spanning parsecs for every subjective day of travel. There were neither stars, space nor time around them, and only the great detectors built into the ship could keep them from being hopelessly lost. These followed a trail of energy laid down on the way out from Earth years before, leading them homeward, solar system by solar system. Acting Captain Lenk stood with his back to the other three, studying their sullen reflections in the port. It was better than facing them directly, somehow, even though it showed his own bald scalp, tautly hollow face and slump-shouldered body. "All right," he said at last. "So we vote again. I'll have to remind you we're under orders to investigate all habitable planets on a line back to Earth. I vote we follow orders. Jeremy?" The xenologist shrugged faintly. His ash-blond coloring, general slimness and refinement of features gave him a look of weakness, but his voice was a heavy, determined bass. "I stand pat. We didn't explore the last planet enough. I vote we go back and make a thorough job of it." "Home—at once!" The roar came from the squat, black-bearded minerologist, Graves. "God never meant man to leave the world on which He put him! Take us back, I say, where...." "Aimes?" Lenk cut in quickly. They'd heard Graves' violently fundamentalist arguments endlessly, until the sound of his voice was enough to revive every antagonism and hatred they had ever felt. Graves had been converted to the newest and most rapidly expanding of the extreme evangelical faiths just before they had left. And unfortunately for the others, he had
  • 35. maintained that his covenant to go on the exploration could not be broken, even though venturing into space was a cardinal sin. Aimes glowered at the others from under grizzled eyebrows. Of them all, the linguodynamicist took part in the fewest arguments and apparently detested the others most. He turned his heavy body now as he studied them, seemingly trying to make up his mind which he detested most at the moment. Then he grunted. "With you, Captain," Aimes said curtly. He swung on his heel and stalked out of the control cabin, to go back to studying the undeciphered writing of the planets they had visited. Graves let out a single hiss and followed, probably heading for the galley, since it was his period to cook. Jeremy waited deliberately until the minerologist's footsteps could no longer be heard, and then turned to leave. Lenk hesitated for a second, then decided that monotony was worse than anything else. "How about some chess, Jeremy?" he asked. The other stopped, and some of the sullenness left his face. Apparently the protracted arguments had wearied him until he was also feeling the relief of decisive action. "Why not?" Jeremy said. "I'll set up the board while you fiddle with your dials." No fiddling was necessary, since Lenk had never cut them off their automatic detecting circuit, but he went through the motions for the other's benefit. Gravitic strain came faintly through hyperspace, and the ship could locate suns by it. If approach revealed planets of habitable size, it was set to snap out of hyperspace automatically near the most likely world. Lenk had been afraid such a solar system might be found before they could resolve the argument, and his own relief from the full measure of cabin fever came from the end of that possibility.
  • 36. They settled down to the game with a minimum of conversation. Since the other four members of the crew had been killed by some unknown virus, conversation had proven less than cheerful. It was better when they were on a planet and busy, but four people were too few for the monotony of hypertravel. Then Jeremy snapped out of it. He cleared his throat tentatively while castling, grimaced, and then nodded positively. "I was right, Lenk. We never did explore those other planets properly." "Maybe not," Lenk agreed. "But with the possibility of alien raiders headed toward Earth...." "Bunk! No sign of raiders. Every indication was that the races on those worlds killed themselves off—no technology alien to their own culture. And there would have been with aliens invading." "Time that way? Coincidence can account for just so much." "It has to account for the lowering cultural levels in the colonizing direction," Jeremy said curtly. "Better leave that sort of argument to Aimes. He's conditioned to it." Lenk shrugged and turned back to the chess. It was over his head, anyhow. Men had built only three other cruisers capable of exceeding the speed of light, so far. The first had gone out in a direction opposite to that of the Clarion and had returned to report a regular decline in culture as the distance of habitable worlds from Earth increased. The nearest was in a medieval state, the next an early bronze culture, then a stone-age one, and so on, down to the furthest explored, where the native race had barely discovered fire. It had been either impossible coincidence or the evidence of some law nobody has been quite ready to accept, save for the newly spreading fundamentalists, who maintained it proved that Earth was the center of the universe. The other two cruisers had not reported back when the Clarion took off.
  • 37. And their own trip had only added to the mystery, and they had touched on four habitable systems. And on each, there had been evidence of a highly developed race and some vast struggle that had killed off that race completely. The furthest had lain fallow for an unguessable period of time, and in each succeeding one, evidence indicated the time interval since the destruction of the culture had been less. On the world they had left, the end must have come not more than a few thousand years before. "Suppose one race had gone along in a straight line, seeding the systems with life," Lenk guessed. "Remember, every race we found had similarities. And suppose another race of conquerors stumbled on that line and is mopping up? Maybe with some weapon that leaves no trace." Jeremy looked at him. "Suppose Graves is right, and his God wipes out all wicked races. He keeps planting races, hoping they'll turn out right, and wiping out the old ones?" he snorted. "Only, of course he thinks Earth is the only world that counts. We're dealing with facts, Lenk, not wild theories. And why should an alien race simply wipe out another race, wait a thousand years or so, and move on— without using the plant afterwards, even for a base for the next operation? Also, why should we find plenty of weapons, but no skeletons?" "Skeletons are pretty fragile. And if somebody had the mythical heat ray...." "Bunk! If it would vaporize calcium in the bones, it would vaporize some of the parts of the weapons we found." Jeremy moved a rook, considered it, and pointed. "Check. And there are always some parts of skeletons that will last more than a thousand years. I've got a theory, but it's...." Pale light cut through the viewing port, and a gong sounded in the room. Lenk jerked to his feet and moved to his screens.
  • 38. "Maybe we'll know now," he said. "We'll be landing on a planet in about an hour. And it looks pretty much like Earth, from here." He cranked up the gain on the magnifiers, and studied it again, scanning the surface of the planet below them. There were clouds in the sky, but through a clear patch he made out enough evidence. "Want me to set us down near a city?" he asked, pointing. Jeremy nodded. Like all the other planets on this trip, the one below was either inhabited or had been inhabited until recently. They knew before the ship landed that the habitation was strictly past tense, at least as far as any high level of culture was concerned. The cities were in ruins. At one time, they must have reared upwards to heights as imposing as those of the free state of New York City or the commonwealth of Chicago. But now the buildings had lost their top-most towers, and the bases showed yawning holes in many places. They landed in the center of the largest city, after a quick skim over the surface to be sure that no smaller city had escaped. A quick sampling of the air indicated it was breathable, with no poisons and only a touch of radioactivity, too low to be dangerous. Aimes and Jeremy went out, each in a little tractor. While making explorations, they were capable of forgetting their antagonisms in their common curiosity. Graves remained on the ship. He had decided somewhere along the line that setting foot on an alien planet was more sinful than travel through space, and refused to be shaken. Lenk finished what observations were necessary. He fiddled around, bothered by the quiet city outside. It had been better on the other worlds, where the ruins had been softened by time and weather. Here, it was too easy to imagine things. Finally, he climbed into rough clothes, and went out on foot.
  • 39. Everything was silent. Grass almost identical with that of Earth was growing through much of the torn pavement, and there were trees and bushes here and there. Vines had climbed some of the ruined walls. But there were no flowers. Much of the planet had apparently been overgrown with forest and weeds, but this city was in a temperate zone, and clear enough for easy travel. Lenk listened to the wind, and the faint sighing of a few trees nearby. He kicked over stones and rubble where they lay on patches of damp earth. And he kept looking at the sky. But it was no different from other worlds as far as the desolation went. There were no insects, and no animals stared warily up from the basements, and the grass showed no signs of having been grazed. It was as if the animal kingdom had never existed here. He made his way back from the section of largest buildings, toward what might have been a park at one time. Here there was less danger of being trapped in any collapsing ruin, and he moved more confidently. The low buildings might have been public sites, but they somehow seemed more like homes. He stumbled on something, and leaned down to pick it up. At first, the oddness of its design confused his vision. Then he made out a barrel with rifling inside, and a chamber that still contained pellets, now covered with corrosion. It would have fitted his hand oddly, but he could have used the pistol. Beyond it lay a line of rust that might have been a sword at one time. Coiled over it was a heavy loop of thick plastic that ended in a group of wires, apparently of stainless steel. Each wire ended in a row of cutting points. It might have been a cross between a knout and a bolas. He had a vision of something alien and sinister coming at him with one of those, and shuddered. There was a ruin of rust and corroded parts further on that might have been a variation of a machine gun. Lenk started for it, to be stopped by a shout.
  • 40. "Hold it!" It was Jeremy's voice, and now the tank came around a corner, and headed toward him. "Stay put, Lenk. That thing may be booby-trapped. And we can't be sure here that there has been time enough to make it safe." Lenk shuddered again, and climbed in hastily as Jeremy held open the door. It was tight inside, but reasonably safe, since the tank had been designed for almost anything. Jeremy must have seen him leaving the ship and followed. But by noon they had abandoned the fear of booby-traps. Either there had never been any or time had drawn their stings. Lenk wandered through the section already roughly surveyed, and declared safe. He felt convinced the inhabitants of this world once had been more like men than most other races. They had been two- legged, with arms and heads in a human position on their upright bodies. Judging from the size of the furniture, they had been slightly larger than men but not enough to matter. The pictures on the walls were odd mostly for the greenish tints of the skin and the absence of outward noses or ears. With a little fixing and recoloring, they might have been people. He came to a room that had been sealed off, pried open the door, and went in. It smelled stale enough to indicate that it had been reasonably air-tight. Benches and chairs ran along one wall, and a heavy wooden table occupied the middle. On that were piled bits and pieces in a curious scramble. He studied them carefully—belts, obviously, buttons, the inevitable weapons, scraps of plastic material. A minute later, he was shouting for Jeremy over the little walkie- talkie. The xenologist appeared in less than five minutes. He stared about for a second, then grinned wryly. "Your first, eh? I've found a lot of them. Sure, those were corpses there once." He saw Lenk's expression, and shrugged. "Oh, you were right to call me. It proves we weren't crazy. Wood and some
  • 41. cloth still preserved, but no bones. I've got a collection of pictures like that." "A corrosive gas—" Lenk suggested. Jeremy shook his head vigorously. "No dice, Captain. See that belt? It's plant fiber—something like linen. No gas strong enough to eat up a body would leave that unharmed. And they had skeletons, too —we've found models in what must have been a museum. But we can't even find the fossil skeletons that should be there. Odd, though." He prodded about among the weapons, shaking his head. "All the weapons in places like this show evidence of one homogeneous design. And all the ornaments are in a T shape, like this one." He lifted a stainless metal object from the floor and dropped it. "But outside in the square, there are at least two designs. For once, it almost looks as if your idea of an alien invader might be worth considering." The radio at his side let out a squawk, and he cut it on listening to the thin whisper that came from it. Abruptly, he swung about and headed toward his tractor outside, with Lenk following. "Aimes has found something," Jeremy said. They found the linguodynamicist in the gutted ruins of a building into which great concrete troughs led. A rusty ruin in one of the troughs indicated something like a locomotive had once run in it, apparently on great ball bearings. The fat man was pointing excitedly toward something on one of the walls. At first glance, it seemed to be a picture of more of the green people, apparently undergoing some violent torture. Then their eyes swept on—and they gasped. Over the green people, three vaguely reptilian monstrosities were hovering, at least twice the size of the others, all equipped with the fanged whips Lenk had seen. One of the green men was apparently trying to defend himself with a huge T-shaped weapon, but the
  • 42. others were helpless. The reptilian monsters sprouted great ugly wings of glaring red from their shoulders. "The invaders," Lenk said. They were horrible things to see. "But their weapons weren't that big...." "A war poster!" Aimes said bitterly. "It doesn't tell a thing except that there were two groups." Jeremy studied it, more closely. "Not necessarily even that. It's designed for some emotional effect. But at least, it's a hint that there may have been enemies unlike the ones who lived here. Lenk, can I take the scout ship out?" "Go ahead," Lenk told him. He frowned at the poster. "Jeremy, if that means the human race is going to have to face an alien invasion from monsters like that...." "It means nothing!" Jeremy went off, with Aimes apparently in agreement for a change. Lenk stood studying the poster. Finally he ripped it down, surprised to find how strong it still was, and rolled it up to carry back to the ship. Each world had been razed more recently, and each with the same curious curse. The race had risen to a high culture, and then had seemingly been wiped out in a few brief years. The destruction had accounted for all life on the planet, other than vegetable—and had wiped out even the bones. All that had been left was a collection of weapons and relics of more doubtful use. The pattern was the same. The direction was steadily toward Earth, leaping from planet to planet at jumps of thousands of years apart, or perhaps mere hundreds. This planet must have been attacked less than five hundred years before, though it was hard to tell without controlled study of decay here. Even now Earth might be suffering the invasion! They had been gone nearly three years. And during that time, the monsters might have swooped down hideously out of space.
  • 43. They might return to find the Earth a wasteland! His thoughts were a turmoil that grew worse as he stared at the poster. The unknown artist had done his job well. A feeling of horror poured out of it, filling him with an insensate desire to find such monstrosities and rend and maim them, as they had tormented the unfortunate green people. Graves came stomping up to the control room, carrying lunch, and took one look at the picture. "Serves the heathens right," he grumbled. "Look at them. In hell, suffering from the lashes of the devils of the pit. And still holding up that heathen charm." Lenk blinked. But Graves' idea wasn't too fantastic, at that. The creatures did look like devils, and the T-shaped object might be a religious symbol. Hadn't some faith or other used the tau cross in its worship? And those objects on the third world back had resembled swastikas, which were another religious symbol on Earth. That part fitted. During periods of extreme stress or danger, man sought some home in his faith. Was it so unnatural that alien races might do the same? "Isn't there anything hopeful in your religion, Graves?" he asked bitterly, wondering what the man had been like before his conversion to the rigidity he now possessed. He'd probably been as violent an atheist. Usually, a fanatic who switched sides became doubly fanatical. The revival of religious devotion had begun some fifteen years before, and from what Lenk had seen, the world had been a better and more kindly place for it. But there would always be those who thought the only true devotion lay in the burning of witches. Or maybe Graves needed psychiatric treatment for his morose moods were becoming suspiciously psychotic, and his fanaticism might be only a sign of deeper trouble. The man went off muttering something about the prophecy and the time being at hand for all to be tried in fire. Lenk went back to staring at the poster until he heard the scout come back. He found
  • 44. Aimes and Jeremy busy unloading what seemed to be loot enough to fill two of the scouts. "A whole library, almost intact," Aimes spoke with elation. "And plenty of it is on film, where we can correlate words and images! In two weeks, I'll speak the language like a native." "Good!" Lenk told him. "Because in about that time, we'll be home on Earth. As long as there's any chance that our people should be warned about invaders, I'm not delaying any longer!" "You can forget the alien invaders," Jeremy objected. Then he exploded his thunderbolt. The horrible aliens had proved to be no more than a group of purple-skinned people on the other side of the planet with a quite divergent culture, but of the same basic stock as the green-skinned men. They also exaggerated in their drawings, and to about the same degree. Fortunately the treasure-trove from the library would give the two men enough for years of work, and required the attention of a full group. They were eager now to take off for Earth and to begin recruiting a new expedition, taking only enough with them for the first basic steps. Lenk headed directly for the control room. He began setting up the proper directions on the board while Jeremy finished the account. "But something's hitting the planets," he objected. His hand found the main button and the Clarion began heading up through the atmosphere on normal gravity warp, until she could reach open space, and go into hyperdrive. "Your monsters prove to be only people—but it still doesn't explain the way disaster follows a line straight toward Earth! And until we know...." "Maybe we'd be better off not knowing," Jeremy said. But he refused to clarify his statement. Then the hyperdrive went on.
  • 45. The homeward trip was somewhat different from the others. There were none of the petty fights this time. Aimes and Jeremy were busy in their own way, decoding the language and collating the material they had. Graves was with them, grumbling at being around the heathen things, but apparently morbidly fascinated by them. Lenk could offer no help, and his duty lay with the ship. He pondered over the waves of destruction that seemed to wash toward Earth, and the diminishing cultural levels on the planets beyond. It couldn't be pure coincidence. Nor could he accept the idea that Earth was the center of the universe, and that everything else was necessarily imperfect. Surprisingly, it was Graves who gave him his first hopeful suggestion. A week had passed, and they were well into the second when the men really caught his attention. Graves was bringing his lunch, actually smiling. He frowned. "What gives?" he asked. "It's all true!" Graves answered, and there was an inner glow to him. "Just as it's prophesied in Revelations. There were times when I had doubts, but now I know. God has set the heathens before me as proof that Armageddon will come, and I have been singled out to bring the glad tidings to His faithful!" "I thought you didn't believe God would have anything to do with heathens!" Lenk objected. He was trying to recall whether a sudden phase of manic joy was a warning symptom or not. "I misunderstood. I thought God had forbade space flight. But now it is proved how He loves us. He singled us out to teach us to fly through space that we could learn." Graves gathered up the dishes
  • 46. without noticing that Lenk hadn't touched them and went off in a cloud of ecstasy. But his point had been made, and Lenk turned it over. Then, with a shout, he headed toward the headquarters of the two remaining scientists. He found them sitting quietly, watching a reel of some kind being projected through an alien device. "I hear it's Armageddon we're facing," he said. He expected grins of amusement from them—or at least from Jeremy. But none came. Aimes nodded. "First progress in all directions. Then a period when religion seems to be in the decline. Then a revival, and a return to faith in the prophecies. All religions agree on those prophecies, Lenk. Revelations refer to the end of Armageddon, when the whole world will wipe itself out before the creation of a better world, in one planet-wide war. The old Norse legends spoke of a Fimbulvetr, when the giants and their gods would destroy the earth in war. And these green-skinned peoples had the same religious prophecies. They came true, too. Armageddon. Contagious Armageddon." Lenk stared from one to the other, suspecting a joke. "But that still leaves coincidence—the way things move from planet to planet...." "Not at all," Jeremy said. "These people didn't have space travel, but they had some pretty highly developed science. They found what we thought we'd disproved—an ether drift. It would carry spores from planet to planet—and in the exact direction needed to account for what we've seen. Races were more advanced back that way, less so the way we first went, simply because of the time it took the spores to drift." "And what about the destruction?" Lenk asked woodenly. Their faces were getting him—they looked as if they believed it. "Is there another disease spore to drive races mad?" "Nothing like that. Just the natural course of cultures when they pass a certain level," Jeremy answered. "I should have seen that
  • 47. myself. Every race follows the same basic pattern. The only question is how much time we've got left—a week or a thousand years?" They turned back to their projection device, but Lenk caught the xenologist by the shoulder and swung him back. "But they didn't have space travel! That doesn't fit their pattern. Even if you're right...." Jeremy nodded. "We don't have the secret of immortality, either. And this race did. But, damn it, I'd still like to know what happened to all those skeletons?" Lenk went back to his control room. And perversely, his thoughts insisted on accepting their explanation. It would be like man to think that important things could only happen on his own home planet, and prophecy an end for his own race, never dreaming it could happen to others. It would be normal for him to sense somehow out of his own nature what his inevitable end must be—and then to be completely amazed when he found the same end for other races. But.... Space travel—travel at faster than light speeds—had to make a difference. There were the other worlds on the other side of the sun, where men were already planning to colonize. Even if a world might normally blow up in a final wild holocaust, it would have its whole racial pattern changed when it began to spread out among the stars. It would have to have a revival of the old pioneering spirit. There had been the beginnings of that when they left. And with that, such a war could be prevented forever. He heard Graves moving about in the galley, singing something about graves opening, and grimaced. Besides, Jeremy had admitted that they didn't have all the answers. The mystery of the vanished skeletons remained—and until that was accounted for, nothing could be considered explained.
  • 48. He forgot about the skeletons as he began planning how he'd wangle his way into one of the colonies. Then, even if catastrophe did strike Earth in another thousand years or so, the race could go on. Ten more years, and man would be safe.... He was feeling almost cheerful as they finally came out of hyperspace near Earth ... and landed.... The skeletons—lay scattered everywhere.
  • 49. *** END OF THE PROJECT GUTENBERG EBOOK BATTLEGROUND *** Updated editions will replace the previous one—the old editions will be renamed. Creating the works from print editions not protected by U.S. copyright law means that no one owns a United States copyright in these works, so the Foundation (and you!) can copy and distribute it in the United States without permission and without paying copyright royalties. Special rules, set forth in the General Terms of Use part of this license, apply to copying and distributing Project Gutenberg™ electronic works to protect the PROJECT GUTENBERG™ concept and trademark. Project Gutenberg is a registered trademark, and may not be used if you charge for an eBook, except by following the terms of the trademark license, including paying royalties for use of the Project Gutenberg trademark. If you do not charge anything for copies of this eBook, complying with the trademark license is very easy. You may use this eBook for nearly any purpose such as creation of derivative works, reports, performances and research. Project Gutenberg eBooks may be modified and printed and given away—you may do practically ANYTHING in the United States with eBooks not protected by U.S. copyright law. Redistribution is subject to the trademark license, especially commercial redistribution. START: FULL LICENSE
  • 50. THE FULL PROJECT GUTENBERG LICENSE
  • 51. PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK To protect the Project Gutenberg™ mission of promoting the free distribution of electronic works, by using or distributing this work (or any other work associated in any way with the phrase “Project Gutenberg”), you agree to comply with all the terms of the Full Project Gutenberg™ License available with this file or online at www.gutenberg.org/license. Section 1. General Terms of Use and Redistributing Project Gutenberg™ electronic works 1.A. By reading or using any part of this Project Gutenberg™ electronic work, you indicate that you have read, understand, agree to and accept all the terms of this license and intellectual property (trademark/copyright) agreement. If you do not agree to abide by all the terms of this agreement, you must cease using and return or destroy all copies of Project Gutenberg™ electronic works in your possession. If you paid a fee for obtaining a copy of or access to a Project Gutenberg™ electronic work and you do not agree to be bound by the terms of this agreement, you may obtain a refund from the person or entity to whom you paid the fee as set forth in paragraph 1.E.8. 1.B. “Project Gutenberg” is a registered trademark. It may only be used on or associated in any way with an electronic work by people who agree to be bound by the terms of this agreement. There are a few things that you can do with most Project Gutenberg™ electronic works even without complying with the full terms of this agreement. See paragraph 1.C below. There are a lot of things you can do with Project Gutenberg™ electronic works if you follow the terms of this agreement and help preserve free future access to Project Gutenberg™ electronic works. See paragraph 1.E below.
  • 52. 1.C. The Project Gutenberg Literary Archive Foundation (“the Foundation” or PGLAF), owns a compilation copyright in the collection of Project Gutenberg™ electronic works. Nearly all the individual works in the collection are in the public domain in the United States. If an individual work is unprotected by copyright law in the United States and you are located in the United States, we do not claim a right to prevent you from copying, distributing, performing, displaying or creating derivative works based on the work as long as all references to Project Gutenberg are removed. Of course, we hope that you will support the Project Gutenberg™ mission of promoting free access to electronic works by freely sharing Project Gutenberg™ works in compliance with the terms of this agreement for keeping the Project Gutenberg™ name associated with the work. You can easily comply with the terms of this agreement by keeping this work in the same format with its attached full Project Gutenberg™ License when you share it without charge with others. 1.D. The copyright laws of the place where you are located also govern what you can do with this work. Copyright laws in most countries are in a constant state of change. If you are outside the United States, check the laws of your country in addition to the terms of this agreement before downloading, copying, displaying, performing, distributing or creating derivative works based on this work or any other Project Gutenberg™ work. The Foundation makes no representations concerning the copyright status of any work in any country other than the United States. 1.E. Unless you have removed all references to Project Gutenberg: 1.E.1. The following sentence, with active links to, or other immediate access to, the full Project Gutenberg™ License must appear prominently whenever any copy of a Project Gutenberg™ work (any work on which the phrase “Project Gutenberg” appears, or with which the phrase “Project Gutenberg” is associated) is accessed, displayed, performed, viewed, copied or distributed:
  • 53. This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook. 1.E.2. If an individual Project Gutenberg™ electronic work is derived from texts not protected by U.S. copyright law (does not contain a notice indicating that it is posted with permission of the copyright holder), the work can be copied and distributed to anyone in the United States without paying any fees or charges. If you are redistributing or providing access to a work with the phrase “Project Gutenberg” associated with or appearing on the work, you must comply either with the requirements of paragraphs 1.E.1 through 1.E.7 or obtain permission for the use of the work and the Project Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9. 1.E.3. If an individual Project Gutenberg™ electronic work is posted with the permission of the copyright holder, your use and distribution must comply with both paragraphs 1.E.1 through 1.E.7 and any additional terms imposed by the copyright holder. Additional terms will be linked to the Project Gutenberg™ License for all works posted with the permission of the copyright holder found at the beginning of this work. 1.E.4. Do not unlink or detach or remove the full Project Gutenberg™ License terms from this work, or any files containing a part of this work or any other work associated with Project Gutenberg™. 1.E.5. Do not copy, display, perform, distribute or redistribute this electronic work, or any part of this electronic work, without prominently displaying the sentence set forth in paragraph 1.E.1
  • 54. with active links or immediate access to the full terms of the Project Gutenberg™ License. 1.E.6. You may convert to and distribute this work in any binary, compressed, marked up, nonproprietary or proprietary form, including any word processing or hypertext form. However, if you provide access to or distribute copies of a Project Gutenberg™ work in a format other than “Plain Vanilla ASCII” or other format used in the official version posted on the official Project Gutenberg™ website (www.gutenberg.org), you must, at no additional cost, fee or expense to the user, provide a copy, a means of exporting a copy, or a means of obtaining a copy upon request, of the work in its original “Plain Vanilla ASCII” or other form. Any alternate format must include the full Project Gutenberg™ License as specified in paragraph 1.E.1. 1.E.7. Do not charge a fee for access to, viewing, displaying, performing, copying or distributing any Project Gutenberg™ works unless you comply with paragraph 1.E.8 or 1.E.9. 1.E.8. You may charge a reasonable fee for copies of or providing access to or distributing Project Gutenberg™ electronic works provided that: • You pay a royalty fee of 20% of the gross profits you derive from the use of Project Gutenberg™ works calculated using the method you already use to calculate your applicable taxes. The fee is owed to the owner of the Project Gutenberg™ trademark, but he has agreed to donate royalties under this paragraph to the Project Gutenberg Literary Archive Foundation. Royalty payments must be paid within 60 days following each date on which you prepare (or are legally required to prepare) your periodic tax returns. Royalty payments should be clearly marked as such and sent to the Project Gutenberg Literary Archive Foundation at the address specified in Section 4, “Information
  • 55. about donations to the Project Gutenberg Literary Archive Foundation.” • You provide a full refund of any money paid by a user who notifies you in writing (or by e-mail) within 30 days of receipt that s/he does not agree to the terms of the full Project Gutenberg™ License. You must require such a user to return or destroy all copies of the works possessed in a physical medium and discontinue all use of and all access to other copies of Project Gutenberg™ works. • You provide, in accordance with paragraph 1.F.3, a full refund of any money paid for a work or a replacement copy, if a defect in the electronic work is discovered and reported to you within 90 days of receipt of the work. • You comply with all other terms of this agreement for free distribution of Project Gutenberg™ works. 1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™ electronic work or group of works on different terms than are set forth in this agreement, you must obtain permission in writing from the Project Gutenberg Literary Archive Foundation, the manager of the Project Gutenberg™ trademark. Contact the Foundation as set forth in Section 3 below. 1.F. 1.F.1. Project Gutenberg volunteers and employees expend considerable effort to identify, do copyright research on, transcribe and proofread works not protected by U.S. copyright law in creating the Project Gutenberg™ collection. Despite these efforts, Project Gutenberg™ electronic works, and the medium on which they may be stored, may contain “Defects,” such as, but not limited to, incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or
  • 56. damaged disk or other medium, a computer virus, or computer codes that damage or cannot be read by your equipment. 1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the “Right of Replacement or Refund” described in paragraph 1.F.3, the Project Gutenberg Literary Archive Foundation, the owner of the Project Gutenberg™ trademark, and any other party distributing a Project Gutenberg™ electronic work under this agreement, disclaim all liability to you for damages, costs and expenses, including legal fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE. 1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a defect in this electronic work within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending a written explanation to the person you received the work from. If you received the work on a physical medium, you must return the medium with your written explanation. The person or entity that provided you with the defective work may elect to provide a replacement copy in lieu of a refund. If you received the work electronically, the person or entity providing it to you may choose to give you a second opportunity to receive the work electronically in lieu of a refund. If the second copy is also defective, you may demand a refund in writing without further opportunities to fix the problem. 1.F.4. Except for the limited right of replacement or refund set forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
  • 57. Welcome to our website – the ideal destination for book lovers and knowledge seekers. With a mission to inspire endlessly, we offer a vast collection of books, ranging from classic literary works to specialized publications, self-development books, and children's literature. Each book is a new journey of discovery, expanding knowledge and enriching the soul of the reade Our website is not just a platform for buying books, but a bridge connecting readers to the timeless values of culture and wisdom. With an elegant, user-friendly interface and an intelligent search system, we are committed to providing a quick and convenient shopping experience. Additionally, our special promotions and home delivery services ensure that you save time and fully enjoy the joy of reading. Let us accompany you on the journey of exploring knowledge and personal growth! ebookfinal.com