Transparent User Authentication
Biometrics, RFID and Behavioural Profiling

Nathan Clarke
Centre for Security, Communications &
Network Research (CSCAN)
Plymouth University
Drake Circus
PL4 8AA Plymouth
United Kingdom
N.Clarke@plymouth.ac.uk
ISBN 978-0-85729-804-1 e-ISBN 978-0-85729-805-8
DOI 10.1007/978-0-85729-805-8
Springer London Dordrecht Heidelberg New York
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Control Number: 2011935034
© Springer-Verlag London Limited 2011
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms of licenses issued by the
Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to
the publishers.
The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of
a specific statement, that such names are exempt from the relevant laws and regulations and therefore free
for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information
contained in this book and cannot accept any legal responsibility or liability for any errors or omissions
that may be made.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The world of user authentication is focussed upon developing technologies to solve
the problem of point-of-entry identity verification required by many information
systems. Unfortunately, the established authentication approaches – secret knowledge,
token and biometric – all fail to provide universally strong user authentication, with
various well-documented failings. Moreover, existing approaches fail to identify
the real information security risk. Authenticating users at point-of-entry, and failing
to require re-authentication of the user during the session, provides a vast oppor-
tunity for attackers to compromise a system. However, forcing users to continuously
re-authenticate to systems is cumbersome and fails to take into account the human
factors of good security design needed to ensure good levels of acceptability.
Unfortunately, within this context, the need to authenticate is increasing rather than
decreasing, with users interacting and engaging with a prolific variety of technologies
from PCs to PDAs, social networking to share dealing, and Instant Messenger to
Texting. A re-evaluation is therefore necessary to ensure user authentication is relevant,
usable, secure and ubiquitous.
The book presents the problem of user authentication from a completely different
standpoint to current literature. Rather than describing the requirements, technologies
and implementation issues of designing point-of-entry authentication, the text
introduces and investigates the technological requirements of implementing trans-
parent user authentication – where authentication credentials are captured during a
user’s normal interaction with a system. Achieving transparent authentication of a
user ensures the user is no longer required to provide explicit credentials to a system.
Moreover, once authentication can be achieved transparently, it is far simpler to
perform continuous authentication of the user, minimising user inconvenience and
improving the overall level of security. This would transform current user authenti-
cation from a binary point-of-entry decision to a continuous identity confidence
measure. By understanding the current confidence in the identity of the user, the
system is able to ensure that appropriate access control decisions are made – providing
immediate access to resources when confidence is high and requiring further
validation of a user’s identity when confidence is low.
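As an illustration of this idea (and only an illustration: the decay rate, weighting, thresholds and names below are invented for the purpose of the sketch rather than drawn from later chapters), a continuously updated identity confidence might be maintained and consulted along the following lines:

# Illustrative sketch only: a continuous identity confidence measure driving
# access decisions. The decay rate, weighting and thresholds are hypothetical.
import time


class IdentityConfidence:
    def __init__(self, decay_per_second=0.001):
        self.confidence = 0.0              # 0.0 = no confidence, 1.0 = full confidence
        self.decay_per_second = decay_per_second
        self.last_update = time.time()

    def _decay(self):
        # Confidence erodes if no fresh authentication evidence arrives.
        elapsed = time.time() - self.last_update
        self.confidence = max(0.0, self.confidence - elapsed * self.decay_per_second)
        self.last_update = time.time()

    def add_sample(self, match_score, weight=0.5):
        # Blend in a transparently captured sample (e.g. a face or keystroke
        # match score in the range 0 to 1).
        self._decay()
        self.confidence = (1 - weight) * self.confidence + weight * match_score

    def decide(self, resource_sensitivity):
        # High confidence grants access outright; low confidence triggers an
        # explicit re-authentication request rather than a flat denial.
        self._decay()
        if self.confidence >= resource_sensitivity:
            return "grant"
        return "request explicit authentication"


ic = IdentityConfidence()
ic.add_sample(0.9)                                 # a strong transparent match has just occurred
print(ic.decide(resource_sensitivity=0.6))         # low-sensitivity resource: grant
print(ic.decide(resource_sensitivity=0.95))        # sensitive resource: step-up request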
Part I begins by reviewing the current need for user authentication – identifying
the current thinking on point-of-entry authentication and why it falls short of
providing real and effective levels of information security. Chapter 1 focuses upon
the current implementation of user authentication and places the role of authentication
within the wider context of information security. The fundamental approaches to
user authentication and their evolutions are introduced. Chapter 2 takes an opportu-
nity to review the need for user authentication through an examination of the history
of modern computing. Whilst authentication is key to maintaining systems, it is
frequently overlooked and approaches are adopted that are simply not fit for purpose.
In particular, the human aspects of information security are introduced, looking at
the role the user plays in providing effective security. The final chapter in Part I
investigates the role of user authentication in modern systems, what it is trying to
achieve and, more importantly, what it could achieve if designed correctly. The
applicability of risk assessment and the way in which continuous authentication
would function are also discussed.
Part II focuses upon the authentication approaches, providing an in-depth
analysis of how each operates. Chapter 4 takes each of the three fundamental
approaches in turn and discusses the various implementations and techniques avail-
able. The chapter presents how each of the systems works and identifies key attacks
against them. Having thoroughly examined traditional authentication, Chap. 5
investigates transparent authentication approaches. Supported by current literature
and research, the chapter details how transparent authentication can be accomplished
and the various technological barriers that currently exist. Taking the concept of
transparent and continuous authentication further, Chap. 6 discusses multimodal
authentication. The chapter details what multimodal authentication is, what methods
of fusion exist and its applicability in this context. The final chapter in Part II
describes the standardisation efforts currently underway in the field of biometrics.
Only through standardisation will widespread vendor-independent multimodal
systems be able to exist.
Part III examines the wider system-specific issues with designing large-scale
multimodal authentication systems. Chapters 8 and 9 look at the theoretical and
practical requirements of a system and discuss the limitations and advantages such
a system would present. Obviously, with increasing user authentication and use of
biometrics, the issue of privacy arises, and Chap. 9 focuses upon the need to ensure
privacy and the human factors of acceptability and perception. The book concludes
with a look into the future of user authentication, what the technological landscape
might look like and the effects upon the people using these systems.
Acknowledgements
For the author, the research presented in this book started at the turn of the century
and represents a decade of research. During this time, a number of
M.Sc. and Ph.D. students have contributed towards furthering aspects of the research
problem, and thanks are due in no small part to all of them. Many of the examples
used in this book and their experimental findings are due to them. Specific thanks
are due to Fudong Li for his work on behavioural biometrics, Christopher Hocking
for conceptualising the Authentication Aura and Sevasti Karatzouni for her work on
the implementation and evaluation of early prototypes, and in particular her invaluable
contribution in Chap. 9.
The initial concept of performing authentication transparently needs to be credited
to my colleague and mentor Prof. Steven Furnell. It was due to his creativity that the
concept was initially created. He also needs to be credited with the guiding hand
behind much of the work presented in this book. It is only through his encouragement
and support that this book was made possible.
The reviewing and critiquing of book chapters is a time-consuming and arduous
task and thanks are due to Christopher Bolan in particular who gave a considerable
amount of personal time examining the manuscript. Along with others, I appreciate
all the time, patience and advice they have given.
Thanks are also due to all the companies and organisations that have funded
aspects of this research over the past 10 years. Specifically, thanks are due to the
Engineering and Physical Sciences Research Council (EPSRC), Orange Personal
Communications Ltd, France-Telecom, the EduServ Foundation and the University
of Plymouth.
I would also like to thank Simon Rees from Springer for his initial and continued
support for the book, even when timelines slipped and additions were made that led
to the text being delayed. I would also like to thank all the staff at Springer who have
helped in editing, proofing and publishing the text.
Final thanks are due to my wife, Amy, who has had to put up with countless
evenings and weekends alone whilst I prepared the manuscript. She was the inspiration
to write the book in the first place and I am appreciative of all the support, motivation
and enthusiasm she provided. Thankfully, this did not put her off marrying me!
Contents

Part I  Enabling Security Through User Authentication

1  Current Use of User Authentication ................................. 3
   1.1  Introduction ................................................... 3
   1.2  Basics of Computer Security .................................... 4
   1.3  Fundamental Approaches to Authentication ....................... 10
   1.4  Point-of-Entry Authentication .................................. 17
   1.5  Single Sign On and Federated Authentication .................... 21
   1.6  Summary ........................................................ 22
   References .......................................................... 23

2  The Evolving Technological Landscape ............................... 25
   2.1  Introduction ................................................... 25
   2.2  Evolution of User Authentication ............................... 26
   2.3  Cyber Security ................................................. 32
   2.4  Human Aspects of Information Security .......................... 38
   2.5  Summary ........................................................ 41
   References .......................................................... 42

3  What Is Really Being Achieved with User Authentication? ............ 45
   3.1  Introduction ................................................... 45
   3.2  The Authentication Process ..................................... 46
   3.3  Risk Assessment and Commensurate Security ...................... 49
   3.4  Transparent and Continuous Authentication ...................... 53
   3.5  Summary ........................................................ 57
   Reference ........................................................... 58

Part II  Authentication Approaches

4  Intrusive Authentication Approaches ................................ 61
   4.1  Introduction ................................................... 61
   4.2  Secret-Knowledge Authentication ................................ 61
        4.2.1  Passwords, PINs and Cognitive Knowledge ................. 62
        4.2.2  Graphical Passwords ..................................... 67
        4.2.3  Attacks Against Passwords ............................... 70
   4.3  Token Authentication ........................................... 74
        4.3.1  Passive Tokens .......................................... 75
        4.3.2  Active Tokens ........................................... 76
        4.3.3  Attacks Against Tokens .................................. 80
   4.4  Biometric Authentication ....................................... 82
        4.4.1  Biometric System ........................................ 83
        4.4.2  Biometric Performance Metrics ........................... 87
        4.4.3  Physiological Biometric Approaches ...................... 93
        4.4.4  Behavioural Biometric Approaches ........................ 98
        4.4.5  Attacks Against Biometrics .............................. 102
   4.5  Summary ........................................................ 107
   References .......................................................... 107

5  Transparent Techniques ............................................. 111
   5.1  Introduction ................................................... 111
   5.2  Facial Recognition ............................................. 112
   5.3  Keystroke Analysis ............................................. 119
   5.4  Handwriting Recognition ........................................ 126
   5.5  Speaker Recognition ............................................ 128
   5.6  Behavioural Profiling .......................................... 130
   5.7  Acoustic Ear Recognition ....................................... 139
   5.8  RFID: Contactless Tokens ....................................... 141
   5.9  Other Approaches ............................................... 144
   5.10 Summary ........................................................ 146
   References .......................................................... 147

6  Multibiometrics .................................................... 151
   6.1  Introduction ................................................... 151
   6.2  Multibiometric Approaches ...................................... 153
   6.3  Fusion ......................................................... 157
   6.4  Performance of Multi-modal Systems ............................. 160
   6.5  Summary ........................................................ 162
   References .......................................................... 163

7  Biometric Standards ................................................ 165
   7.1  Introduction ................................................... 165
   7.2  Overview of Standardisation .................................... 165
   7.3  Data Interchange Formats ....................................... 168
   7.4  Data Structure Standards ....................................... 171
   7.5  Technical Interface Standards .................................. 172
   7.6  Summary ........................................................ 174
   References .......................................................... 174

Part III  System Design, Development and Implementation Considerations

8  Theoretical Requirements of a Transparent Authentication System .... 179
   8.1  Introduction ................................................... 179
   8.2  Transparent Authentication System .............................. 179
   8.3  Architectural Paradigms ........................................ 184
   8.4  An Example of TAS – NICA (Non-Intrusive
        and Continuous Authentication) ................................. 186
        8.4.1  Process Engines ......................................... 189
        8.4.2  System Components ....................................... 193
        8.4.3  Authentication Manager .................................. 196
        8.4.4  Performance Characteristics ............................. 201
   8.5  Summary ........................................................ 202
   References .......................................................... 203

9  Implementation Considerations in Ubiquitous Networks ............... 205
   9.1  Introduction ................................................... 205
   9.2  Privacy ........................................................ 205
   9.3  Storage and Processing Requirements ............................ 208
   9.4  Bandwidth Requirements ......................................... 210
   9.5  Mobility and Network Availability .............................. 212
   9.6  Summary ........................................................ 213
   References .......................................................... 214

10 Evolving Technology and the Future for Authentication .............. 215
   10.1 Introduction ................................................... 215
   10.2 Intelligent and Adaptive Systems ............................... 216
   10.3 Next-Generation Technology ..................................... 218
   10.4 Authentication Aura ............................................ 221
   10.5 Summary ........................................................ 224
   References .......................................................... 224

Index .................................................................. 225
About the Author ....................................................... 229
List of Figures

Fig. 1.1   Facets of information security ................................. 6
Fig. 1.2   Information security risk assessment ........................... 8
Fig. 1.3   Managing information security .................................. 9
Fig. 1.4   Typical system security controls ............................... 10
Fig. 1.5   Lophcrack software ............................................. 12
Fig. 1.6   Biometric performance characteristics .......................... 16
Fig. 2.1   O2 web authentication using SMS ................................ 27
Fig. 2.2   O2 SMS one-time password ....................................... 28
Fig. 2.3   Google Authenticator ........................................... 29
Fig. 2.4   Terminal-network security protocol ............................. 29
Fig. 2.5   HP iPaq H5550 with fingerprint recognition ..................... 32
Fig. 2.6   Examples of phishing messages .................................. 36
Fig. 2.7   Fingerprint recognition on HP PDA .............................. 39
Fig. 2.8   UPEK Eikon fingerprint sensor .................................. 40
Fig. 3.1   Risk assessment process ........................................ 49
Fig. 3.2   Authentication security: traditional static model .............. 51
Fig. 3.3   Authentication security: risk-based model ...................... 51
Fig. 3.4   Variation of the security requirements during utilisation
           of a service. (a) Sending a text message, (b) Reading and
           deleting text messages ......................................... 52
Fig. 3.5   Transparent authentication on a mobile device .................. 54
Fig. 3.6   Normal authentication confidence ............................... 55
Fig. 3.7   Continuous authentication confidence ........................... 56
Fig. 3.8   Normal authentication with intermittent application-level
           authentication ................................................. 56
Fig. 4.1   Googlemail password indicator .................................. 66
Fig. 4.2   Choice-based graphical authentication .......................... 68
Fig. 4.3   Click-based graphical authentication ........................... 69
Fig. 4.4   Passfaces authentication ....................................... 69
Fig. 4.5   Network monitoring using Wireshark ............................. 71
Fig. 4.6   Senna Spy Trojan generator ..................................... 71
Fig. 4.7   AccessData password recovery toolkit ........................... 72
Fig. 4.8   Ophcrack password recovery ..................................... 73
Fig. 4.9   Cain and Abel password recovery ................................ 74
Fig. 4.10  Financial cards: Track 2 information ........................... 76
Fig. 4.11  An authentication without releasing the base-secret ............ 76
Fig. 4.12  RSA SecurID token .............................................. 78
Fig. 4.13  NatWest debit card and card reader ............................. 78
Fig. 4.14  Smartcard cross-section ........................................ 79
Fig. 4.15  Cain and Abel’s RSA SecurID token calculator ................... 81
Fig. 4.16  The biometric process .......................................... 84
Fig. 4.17  FAR/FRR performance curves ..................................... 88
Fig. 4.18  ROC curve (TAR against FMR) .................................... 90
Fig. 4.19  ROC curve (FNMR against FMR) ................................... 90
Fig. 4.20  Characteristic FAR/FRR performance plot versus threshold ....... 91
Fig. 4.21  User A performance characteristics ............................. 92
Fig. 4.22  User B performance characteristics ............................. 92
Fig. 4.23  Anatomy of the ear ............................................. 94
Fig. 4.24  Fingerprint sensor devices ..................................... 96
Fig. 4.25  Anatomy of an iris ............................................. 97
Fig. 4.26  Attributes of behavioural profiling ............................ 99
Fig. 4.27  Attacks on a biometric system .................................. 102
Fig. 4.28  USB memory with fingerprint authentication ..................... 103
Fig. 4.29  Distributed biometric system ................................... 103
Fig. 4.30  Examples of fake fingerprint ................................... 105
Fig. 4.31  Spoofing facial recognition using a photograph ................. 105
Fig. 4.32  Diagrammatic demonstration of feature space .................... 106
Fig. 5.1   Environmental and external factors affecting facial
           recognition .................................................... 113
Fig. 5.2   Normal facial recognition process .............................. 115
Fig. 5.3   Proposed facial recognition process ............................ 115
Fig. 5.4   Effect upon the FRR with varying facial orientations ........... 118
Fig. 5.5   Effect upon the FRR using a composite facial template .......... 119
Fig. 5.6   Continuous monitor for keystroke analysis ...................... 122
Fig. 5.7   Varying tactile environments of mobile devices ................. 123
Fig. 5.8   Variance of keystroke latencies ................................ 124
Fig. 5.9   Results of keystroke analysis on a mobile phone ................ 125
Fig. 5.10  Handwriting recognition: user performance ...................... 128
Fig. 5.11  Data extraction software ....................................... 134
Fig. 5.12  Variation in behavioural profiling performance over time ....... 136
Fig. 5.13  Acoustic ear recognition ....................................... 139
Fig. 5.14  Operation of an RFID token ..................................... 143
Fig. 5.15  Samples created for ear geometry ............................... 145
Fig. 6.1   Transparent authentication on a mobile device .................. 152
Fig. 6.2   Cascade mode of processing of biometric samples ................ 157
Fig. 6.3   Matching score-level fusion .................................... 158
Fig. 6.4   Feature-level fusion ........................................... 158
Fig. 6.5   A hybrid model involving various fusion approaches ............. 161
Fig. 7.1   ISO/IEC onion-model of data interchange formats ................ 167
Fig. 7.2   Face image record format: overview ............................. 170
Fig. 7.3   Face image record format: facial record data ................... 170
Fig. 7.4   A simple BIR ................................................... 171
Fig. 7.5   BioAPI patron format ........................................... 172
Fig. 7.6   BioAPI architecture ............................................ 173
Fig. 8.1   Identity confidence ............................................ 180
Fig. 8.2   A generic TAS framework ........................................ 181
Fig. 8.3   TAS integration with system security ........................... 183
Fig. 8.4   Two-tier authentication approach ............................... 183
Fig. 8.5   Network-centric TAS model ...................................... 185
Fig. 8.6   A device-centric TAS model ..................................... 185
Fig. 8.7   NICA – server architecture ..................................... 187
Fig. 8.8   NICA – client architecture ..................................... 188
Fig. 8.9   NICA – data collection engine .................................. 190
Fig. 8.10  NICA – biometric profile engine ................................ 191
Fig. 8.11  NICA – authentication engine ................................... 192
Fig. 8.12  NICA – communication engine .................................... 192
Fig. 8.13  NICA – authentication manager process .......................... 199
Fig. 9.1   Level of concern over theft of biometric information ........... 208
Fig. 9.2   User preferences on location of biometric storage .............. 208
Fig. 9.3   Size of biometric templates .................................... 209
Fig. 9.4   Average biometric data transfer requirements
           (based upon 1.5 million users) ................................. 211
Fig. 10.1  Conceptual model of the authentication aura .................... 223
List of Tables

Table 1.1   Computer attacks affecting CIA ................................ 5
Table 1.2   Biometric techniques .......................................... 14
Table 1.3   Level of adoption of authentication approaches ................ 17
Table 1.4   Top 20 most common passwords .................................. 18
Table 4.1   Password space based upon length .............................. 63
Table 4.2   Password space defined in bits ................................ 64
Table 4.3   Examples of cognitive questions ............................... 64
Table 4.4   Typical password policies ..................................... 65
Table 4.5   Components of a biometric system .............................. 84
Table 4.6   Attributes of a biometric approach ............................ 86
Table 5.1   Subset of the FERET dataset utilised .......................... 116
Table 5.2   Datasets utilised in each experiment .......................... 117
Table 5.3   Facial recognition performance under normal conditions ....... 117
Table 5.4   Facial recognition performance with facial orientations ...... 117
Table 5.5   Facial recognition using the composite template ............... 118
Table 5.6   Summary of keystroke analysis studies ......................... 120
Table 5.7   Performance of keystroke analysis on desktop PCs ............. 121
Table 5.8   Keystroke analysis variance between best- and
            worst-case users .............................................. 125
Table 5.9   Handwriting recognition: individual word performance ......... 128
Table 5.10  ASPeCT performance comparison of classification
            approaches .................................................... 131
Table 5.11  Cost-based performance ........................................ 132
Table 5.12  Behavioural profiling features ................................ 134
Table 5.13  Behavioural profiling performance on a desktop PC ............ 135
Table 5.14  MIT dataset ................................................... 137
Table 5.15  Application-level performance ................................. 137
Table 5.16  Application-specific performance: telephone app .............. 138
Table 5.17  Application-specific performance: text app .................... 138
Table 5.18  Performance of acoustic ear recognition with varying
            frequency ..................................................... 140
Table 5.19  Transparency of authentication approaches ..................... 146
Table 6.1   Multi-modal performance: finger and face ...................... 161
Table 6.2   Multi-modal performance: finger, face and hand modalities .... 162
Table 6.3   Multi-modal performance: face and ear modalities .............. 162
Table 7.1   ISO/IEC JTC1 SC37 working groups .............................. 166
Table 7.2   ISO/IEC Biometric data interchange standards .................. 168
Table 7.3   ISO/IEC common biometric exchange formats framework .......... 171
Table 7.4   ISO/IEC Biometric programming interface (BioAPI) ............. 173
Table 8.1   Confidence level definitions .................................. 194
Table 8.2   NICA – Authentication assets .................................. 194
Table 8.3   NICA – Authentication response definitions .................... 196
Table 8.4   NICA – System integrity settings .............................. 197
Table 8.5   NICA – Authentication manager security levels ................. 198
Table 8.6   NICA – Authentication performance ............................. 201
Part I
Enabling Security Through User Authentication
Chapter 1
Current Use of User Authentication

1.1 Introduction
Information security has become increasingly important as technology integrates
into our everyday lives. In the past 10 years, computing-based technology has
permeated every aspect of our lives from desktop computers, laptops and mobile
phones to satellite navigation, MP3 players and game consoles. Whilst the motivation
for keeping systems secure has changed from the early days of mainframe systems
and the need to ensure reliable audits for accounting purposes, the underlying
requirement for a high level of security has always been present.
Computing is now ubiquitous in everything people do – directly or indirectly.
Even individuals who do not engage with personal computers (PCs) or mobile
phones still rely upon computing systems to provide their banking services, to
ensure sufficient stock levels in supermarkets, to purchase goods in stores and to
provide basic services such as water and electricity. In modern society there is a
significant reliance upon computing systems – without which civilisation, as we
know it, would arguably cease to exist.
As this reliance upon computers has grown, so have the threats against them.
Whilst initial endeavours of computer misuse, in the late 1970s and 1980s, were
largely focused upon demonstrating technical prowess, the twenty-first century has
seen a significant focus upon attacks that are financially motivated – from botnets
that attack individuals to industrial espionage. With this increasing focus towards
attacking systems, the domain of information systems security has also experienced
increasing attention.
Historically, whilst increasing attention has been paid to securing systems, such
a focus has not been universal. Developers have traditionally viewed information
security as an additional burden that takes significant time and resources, detracting
from developing additional functionality, and with little to no financial return. For
organisations, information security is seen as rarely driving the bottom line, and as
such they are unmotivated to adopt good security practice. What results is a variety
of applications, systems and organisations with a diverse set of security policies and
levels of adoption – some very secure, a great many more less so. More recently, this
situation has improved as the consequences of being successfully attacked are
becoming increasingly severe and public. Within organisations, the desire to protect
intellectual property, together with regulation and legislation, is a driving factor in
improving information security. For services and applications, people are beginning to
make purchasing decisions based upon whether a system or application is secure,
driving developers to ensure security is a design factor.
Authentication is key to providing effective information security. But in order to
understand the need for authentication it is important to establish the wider context
in which it resides. Through an appreciation of the domain, the issues that exist and
the technology available, it is clear why authentication approaches play such a pivotal
role in securing systems. It is also useful to understand the basic operation of authen-
tication technologies, their strengths and weaknesses and the current state of
implementation.
1.2 Basics of Computer Security
The field of computer security has grown and evolved in line with the changing
threat landscape and the changing nature of technology. Whilst new research is
continually developing novel mechanisms to protect systems, the fundamental
principles that underpin the domain remain unchanged. Literature might differ a
little on the hierarchy of all the components that make up information security;
however, there is an agreement upon what the key objectives or goals are. The three
aims of information security are Confidentiality, Integrity and Availability and are
commonly referred to as the CIA triad. In terms of information, they can be defined
as follows:
• Confidentiality refers to the prevention of unauthorised information disclosure.
  Only those with permission to read a resource are able to do so. It is the element
  most commonly associated with security in terms of ensuring the information
  remains secret.
• Integrity refers to ensuring that data are not modified by unauthorised users/
  processes. Integrity of the information is therefore maintained as it can be
  changed only by authorised users/processes of a system.
• Availability refers to ensuring that information is available to authorised users
  when they request it. This property is possibly the least intuitive of the three aims
  but is fundamental. A good example that demonstrates the importance of
  availability is a denial of service (DoS) attack. This attack consumes bandwidth,
  processing power and/or memory to prevent legitimate users from being able to
  access a system.
It is from these three core goals that all information security is derived. Whilst
perhaps difficult to comprehend in the first instance, some further analysis of the
root cause of individual attacks does demonstrate that one or more of the three
security goals are being affected. Consider, for instance, the role of the computer
virus. Fundamentally designed to self-replicate on a computer system, the virus will
have an immediate effect upon the availability of system resources, consuming all
the memory and processing capacity. However, depending upon the severity of the
self-replicating process, it can also have an effect upon the integrity of the data
stored. Viruses also have some form of payload, a purpose or reason for existing, as
few are benign. This payload can vary considerably in purpose, but more
recently Trojans have become increasingly common. Trojans will search and capture
sensitive information and relay it back to the attacker, thereby affecting the
confidentiality of the information. To illustrate this further, Table 1.1 presents a
number of general attacks and their effect upon the security objectives.
In addition to the goals of information security, three core services support them.
Collectively referred to as AAA, these services are Authentication, Authorisation
and Accountability. In order to maintain confidentiality and integrity, it is imperative
for a system to establish the identity of the user so that the appropriate permissions
for access can be granted, without which anybody would be in a position to read and
modify information on the system. Authentication enables an individual to be
uniquely identified (albeit how uniquely is often in question!) and authorisation
provides the access control mechanism to ensure that users are granted their particular
set of permissions. Whilst both authentication and authorisation are used for proactive
defence of the system (i.e. if you don’t have a legitimate set of authentication
credentials you will not get access to the system), accountability is a reactive service
that enables a system administrator to track and monitor system interactions. In
cooperation with authentication, a system is able to log all system actions with a
corresponding identity. Should something have gone amiss, these logs will identify
the source and effect of these actions. The fact that this can only be done after an
incident makes it a reactive process. Together, the three services help maintain the
confidentiality, integrity and availability of information and systems.
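A minimal sketch of how the three services cooperate is given below. The users, permissions and function names are purely illustrative, and a real system would store hashed credentials rather than plaintext secrets:

# Hypothetical sketch of the three AAA services working together.
import datetime

USERS = {"alice": "correct horse battery staple"}          # authentication data: illustrative only, real systems store hashes
PERMISSIONS = {"alice": {"read:report", "write:report"}}   # authorisation policy
AUDIT_LOG = []                                             # accountability record


def authenticate(username, password):
    return USERS.get(username) == password


def authorise(username, permission):
    return permission in PERMISSIONS.get(username, set())


def audit(username, action, allowed):
    # Reactive service: every action is logged against the authenticated
    # identity so that misuse can be traced after the event.
    AUDIT_LOG.append((datetime.datetime.now(), username, action, allowed))


def access_resource(username, password, permission):
    if not authenticate(username, password):
        audit(username, permission, allowed=False)
        return "authentication failed"
    allowed = authorise(username, permission)
    audit(username, permission, allowed)
    return "granted" if allowed else "authorisation denied"


print(access_resource("alice", "correct horse battery staple", "read:report"))   # granted
print(access_resource("alice", "wrong password", "read:report"))                 # authentication failed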
Table 1.1 Computer attacks affecting CIA

Attack                                              Confidentiality  Integrity  Availability
(Distributed) Denial of service                                                 ✓
Hacker                                              ✓                ✓
Malicious software (e.g. Worms, Viruses, Trojans)   ✓                ✓          ✓
Phishing                                            ✓
Rootkit                                             ✓                ✓
Social engineering                                  ✓
Spam                                                                            ✓

Looking at security in terms of CIA and AAA, whilst accurate, paints a very
narrow picture of the information security domain. Information security is not
merely about systems and technical controls utilised in their protection. For instance,
whilst authentication does indeed ensure that access is only granted to a legitimate
identity, it does not consider that the authentication credential itself might be
compromised through human neglect. Therefore, any subsequent action using that
compromised credential will have an impact upon the confidentiality and integrity
of the information. Furnell (2005) presents an interesting perspective on information
security, in the form of a jigsaw puzzle comprising the five facets of information security:
technical, procedural, personnel, legal and physical (as illustrated in Fig. 1.1). Only
when the jigsaw is complete and all are considered together can an organisation
begin to establish a good information security environment. Failing to consider
any one element would have serious consequences for the ability to remain secure.

Fig. 1.1 Facets of information security
Whilst the role of the technical facet is often well documented, the roles of the
remaining facets are less so. The procedural element refers to the need for relevant
security processes to be undertaken. Key to these is the development of a security
policy, contingency planning and risk assessment. Without an understanding of
what is trying to be achieved, in terms of security, and an appreciation that not all
information has the same value, it is difficult to establish what security measures
need to be adopted. The personnel element refers to the human aspects of a system.
A popular security phrase, ‘security is only as strong as its weakest link’, demon-
strates that a break in only one element of the chain would result in compromise.
Unfortunately, literature has demonstrated that the weakest link is frequently the
user. The personnel element is therefore imperative to ensure security. It includes all
aspects that are people-related, including education and awareness training, ensuring
that appropriate measures are taken at recruitment and termination of employment
and maintaining a secure behaviour within the organisation. The legal element refers
to the need to ensure compliance with relevant legislation. An increased focus upon
legislation from many countries has resulted in significant controls on how
organisations use and store information. It is also important for an organisation to
comply with legislation in all countries in which it operates. The volume of legislation
is also growing, in part to better protect systems. For example, the following are a
sample of the laws that would need to be considered within the UK:
• Computer Misuse Act 1990 (Crown 1990)
• Police and Justice Act 2006 (included amendments to the Computer Misuse Act
  1990) (Crown 2006)
• Regulation of Investigatory Powers Act 2000 (Crown 2000a)
• Data Protection Act 1998 (Crown 1998)
• Electronic Communication Act 2000 (Crown 2000b)
In addition to legislation, the legal element also includes regulation. Regulations
provide specific details on how the legislation is to be enforced. Many regulations,
some industry-specific and others with a wider remit, exist that organisations must
legally ensure they comply with. Examples include:
• The US Health Insurance Portability and Accountability Act (HIPAA) requires
  all organisations involved in the provision of US medical services to conform to
  its rules over the handling of medical information.
• The US Sarbanes-Oxley Act requires all organisations doing business in the US
  (whether they are a US company or not) to abide by the act. Given many non-US
  companies have business interests in the US, they must ensure they conform to
  the regulation.
Finally, the physical element refers to the physical controls that are put into place
to protect systems. Buildings, locked doors and security guards at ingress/egress
points are all examples of controls. In the discussion thus far, it has almost been
assumed that these facets related to deliberate misuse of systems. However, it is in
the remit of information security to also consider accidental threats. With respect to
the physical aspect, accidental threats would include the possibility of fire, floods,
power outages or natural disasters. Whilst this is conceivably not an issue for many
companies, for large-scale organisations that operate globally, such considerations
are key to maintaining availability of systems. Consider, for example, what would
happen to financial institutions if they did not give these aspects appropriate
consideration: not only would banking transaction data be lost, but access to money
would be denied and societies would grind to a halt. The banking crisis of 2009/2010,
where large volumes of money were lost on the markets and a global recession
followed, is a good example of the essential role these organisations play in daily
life and the impact they have upon individuals.
When considering how best to implement information security within an organi-
sation, it is imperative to ensure an organisation knows what it is protecting and why.
Securing assets merely for the sake of securing them is simply not cost-effective and
paying £1,000 to protect an asset worth only £10 does not make sense. To achieve this,
organisations can undertake an information security risk assessment. Risk
assessment rests upon an understanding of the value of the asset needing protection, the
threats against the asset and the likelihood or probability that the threat would
become a reality. As illustrated in Fig. 1.2, the compromise of the asset will also have
an impact upon the organisation and a subsequent consequence. Once a risk can be
quantified, it is possible to consider the controls and countermeasures that can be
put into place to mitigate the risk to an acceptable level.

Fig. 1.2 Information security risk assessment
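One common way of making this trade-off concrete, which is not prescribed by the book and uses invented figures, is to estimate an expected annual loss for each asset and compare it with the cost of the proposed control:

# Illustrative risk quantification: all monetary values and probabilities are invented.

def expected_annual_loss(asset_value, impact_fraction, annual_likelihood):
    single_loss = asset_value * impact_fraction     # loss if the threat materialises once
    return single_loss * annual_likelihood          # expected loss per year


def control_worthwhile(asset_value, impact_fraction, annual_likelihood,
                       control_cost, risk_reduction):
    before = expected_annual_loss(asset_value, impact_fraction, annual_likelihood)
    after = before * (1 - risk_reduction)
    return (before - after) > control_cost          # spend only if the saving exceeds the cost


# Paying 1,000 to protect an asset worth 10 clearly fails the test:
print(control_worthwhile(asset_value=10, impact_fraction=1.0, annual_likelihood=0.5,
                         control_cost=1000, risk_reduction=0.9))          # False
# A customer database valued at 500,000 with a 10% annual likelihood may justify it:
print(control_worthwhile(asset_value=500_000, impact_fraction=0.4, annual_likelihood=0.1,
                         control_cost=1000, risk_reduction=0.9))          # True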
For organisations, particularly smaller entities, a risk assessment approach can
be prohibitively expensive. Baseline standards, such as the ISO27002 Information
Security Code of Practice (ISO 2005a), provide a comprehensive framework for
organisations to implement. Whilst this does not replace the need for a solid risk
assessment, it is a useful mechanism for organisations to begin the process of
being secure without the financial commitment of a risk assessment. The process
of assessing an organisation’s security posture is not a one-off process but, as
Fig. 1.3 illustrates, a constantly recurring process, as changes in policy, infra-
structure and threats all impact upon the level of protection being provided.

Fig. 1.3 Managing information security
The controls and countermeasures that can be utilised vary from policy-related
statements of what will be secured and who is held responsible to technical controls
placed on individual assets, such as asset tagging to prevent theft. From an individual
system perspective, the controls you would expect to be included are an antivirus, a
firewall, a password, access control, backup, intrusion detection or prevention system,
anti-phishing and anti-spam filters, spyware detection, application and operating
system (OS) update utility, a logging facility and data encryption. The common
relationship between each countermeasure is that each and every control has an
effect upon one or more of the three aims of information security: confidentiality,
integrity or availability.
The effect of the controls is to eliminate or, more precisely, mitigate particular
attacks (or sets of attacks). The antivirus provides a mechanism for monitoring all
data on the system for malicious software, and the firewall blocks all ports (except
for those required by the system), minimising the opportunity for hackers to enter
into the system. For those ports still open, an Intrusion Detection System is present,
monitoring for any manipulation of the underlying network protocols. Finally at the
application layer, there are application-specific countermeasures, such as anti-spam
and anti-phishing, that assist in preventing compromise of those services. As
illustrated in Fig. 1.4, these countermeasures are effectively layered, providing a
‘defence in depth’ strategy, where any single attack needs to compromise more than
one security control in order to succeed.
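A simple way to see why layering matters, again as a sketch with invented probabilities rather than measured ones, is to note that an attack must bypass every control in its path; assuming the layers fail independently, the chance of compromise falls multiplicatively with each additional layer:

# Illustrative 'defence in depth' calculation with invented bypass probabilities.

def compromise_probability(bypass_probabilities):
    p = 1.0
    for layer_probability in bypass_probabilities:
        p *= layer_probability      # the attack must get past each control in turn
    return p


single_control = [0.10]                            # e.g. a password guessed 10% of the time
layered_controls = [0.10, 0.05, 0.02]              # e.g. firewall, then IDS, then authentication
print(compromise_probability(single_control))      # 0.1
print(compromise_probability(layered_controls))    # 0.0001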
An analysis of Fig. 1.4 also reveals an overwhelming reliance upon a single
control. From a remote, Internet-based attack perspective, the hacker has a number
of controls to bypass, such as the firewall and intrusion detection system. A target
for the attacker would therefore be to disable the security controls. In order to
function, these controls are configurable so that individual users can set them up
to meet their specific requirements. These configuration settings are secured from
misuse by an authentication mechanism. If the firewall software has any software
vulnerability, a hacker can take advantage of the weakness to obtain access to
the firewall. Once the compromise is successful, the hacker is able to modify the
firewall access control policy to allow for further attacks. Similar methods can be
applied to the other countermeasures. For instance, switching off or modifying
the antivirus is a common strategy deployed by malware. If the system is set up
for remote access, the hacker only needs to compromise the
authentication cre-
dentials to obtain access to the system. From a physical attack perspective, the
only control preventing access to the system is authentication – assuming the attacker
has successfully bypassed the physical protection (if present). Authentication
therefore appears across the spectrum of technical controls. It is the vanguard in
ensuring the effective and secure operation of the system, applications and security
controls.
1.3 Fundamental Approaches to Authentication
Authentication is key to maintaining the security chain. In order for authorisation
and accountability to function, which in turn maintain confidentiality, integrity and
availability, correct authentication of the user must be achieved. Whilst many forms
of authentication exist such as passwords, personal identification numbers (PINs),
fingerprint recognition, one-time passwords, graphical passwords, smartcards and
Subscriber Identity Modules (SIMs), they all fundamentally reside within one of
three categories (Wood 1977):
• Something you know
• Something you have
• Something you are
Fig. 1.4 Typical system security controls
Something you know refers to a secret knowledge–based approach, where the
user has to remember a particular pattern, typically made up of characters and
numbers. Something you have refers to a physical item the legitimate user has to
unlock the system and is typically referred to as a token. In non-technological
applications, tokens include physical keys used to unlock the house or car doors. In
a technological application, such as remote central locking, the token is an electronic
store for a password. Finally, something you are refers to a unique attribute of the
user. This unique attribute is transformed into a unique electronic pattern. Techniques
based upon something you are, are commonly referred to as biometrics.
The password and PIN are both common examples of the secret-knowledge
approach. Many systems are multi-user environments and therefore the password
is accompanied with a username or claimed identity. Whilst the claimed identity
holds no real secrecy, in that a username is relatively simple to establish, both are
used in conjunction to verify a user’s credentials. For single-user systems, such as
mobile phones and personal digital assistants (PDAs), only the password or PIN
is required.
The strength of the approach resides in the inability for an attacker to successfully
select the correct password. It is imperative therefore that the legitimate user selects
a password that is not easily guessable by an attacker. Unfortunately, selecting an
appropriate password is where the difficulty lies. Several attacks from social
engineering to brute-forcing can be used to recover passwords and therefore
subsequently circumvent the control. Particular password characteristics make this
process even simpler to achieve. For instance, a brute-force attack simply tries
every permutation of a password until the correct sequence is found. Short passwords
are therefore easier to crack than long passwords. Indeed, the strength of the
password is very much dependent upon ensuring that the number of possible
passwords or the password space is so large that it would be computationally
difficult to brute-force a password in a timely fashion. What defines timely is open
to question depending upon the application. If it is a password to a computer
system, it would be dependent on how frequently the password is changed – for
instance, a password policy stating that passwords should change monthly would
provide a month to a would-be attacker. After that time, the attacker would have to
start again. Longer passwords therefore take an exponentially longer time to crack.
Guidelines on password length do vary with password policies in the late 1990s,
suggesting that eight characters was the minimum. Current attacks such as
Ophcrack (described in more detail in Sect. 4.2.3) are able to crack 14-character
random passwords in minutes (Ophcrack 2011).
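To see why length matters so much, the rough sketch below (Python) computes the size of the password space and the worst-case time to exhaust it; the guess rate is an assumed figure for illustration only and is not taken from the text.

```python
import string

def worst_case(length, alphabet_size, guesses_per_second):
    """Size of the password space and worst-case time to try every permutation."""
    space = alphabet_size ** length
    return space, space / guesses_per_second

GUESS_RATE = 10_000_000_000  # assumed attacker speed: 10 billion guesses/second

for length in (6, 8, 10, 14):
    for label, alphabet in (("a-z only ", string.ascii_lowercase),
                            ("mixed set", string.ascii_letters + string.digits + string.punctuation)):
        space, seconds = worst_case(length, len(alphabet), GUESS_RATE)
        print(f"{length:2d} chars, {label}: {space:.2e} possibilities, "
              f"{seconds / 86400:.1e} days to exhaust")
```

Each extra character multiplies the search space by the alphabet size, which is the exponential growth in cracking time referred to above.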
Brute-forcing a password (if available) represents the most challenging attack for
hackers – and that is not particularly challenging if the password length is not suf-
ficient. However, there are even simpler attacks than trying every permutation.
This attack exploits the user’s inability to select a completely random password.
Instead they rely upon words or character sequences that have some meaning. After
all, they do have to remember the sequence and truly random passwords are simply
not easy to remember. A typical example of this is a word with some meaning, take
‘luke’ (my middle name) appended to a number ‘23’ (my age at the time) – luke23.
Many people perceive this to be a strong password, as it does not rely upon a single
dictionary word. However, software such as AccessData’s Password Recovery
Toolkit (AccessData 2011) and L0phtCrack (Security Focus 2010) have a process for
checking these types of sequence prior to the full brute force. Figure 1.5 illustrates
L0phtCrack breaking this password, which was achieved in under 2 min.
It is worth noting that these types of attack are not always possible and do assume
that certain information is available and accessible to an attacker. In many situations
where passwords are applied this is not the case. In those situations, as long as the
three-attempt rule is in place (i.e. the user gets three attempts to login, after which
the account is locked) these types of brute-forcing attacks are not possible. However,
because of other weaknesses in using passwords, an attacker gaining access to one
system can also frequently provide access to other systems (where brute-forcing
was not possible) as passwords are commonly shared between systems. If you
consider the number of systems that you need to access, it soon becomes apparent
that this is not an insignificant number and is typically increasing over time. For
instance, a user might be expected to password protect:
– Work/home computer
– Work network access
– Work email/home email
– Bank accounts (of which he/she could have many with different providers – mortgage, current, savings, joint account)
– Paypal account
– Amazon account
– Home utilities (gas, electricity, water services all maintain online accounts for payment and monitoring of the account)
– Countless other online services that require one to register
Fig. 1.5 L0phtCrack software
It is simply not possible for the average user to remember unique passwords
(of sufficient length) for all these services without breaking a security policy.
Therefore users will tend to have a small bank of passwords that are reused or
simply use the same password on all systems.
Due to these weaknesses further attention was placed upon other forms of
authentication. From one perspective, tokens seem to solve the underlying problem
with passwords – the inability of people to remember a sufficiently long random
password. By using technology, the secret knowledge could be placed in a memory
chip rather than the human brain. In this fashion, the problems of needing to
remember 14-character random passwords, unique to each system and regularly
updated to avoid compromise, were all solved. It did, however, introduce one other
significant challenge. The physical protection afforded to secret-knowledge
approaches by the human brain does not exist within tokens. Theft or abuse of the
physical token removes any protection it would provide. It is less likely that your
brain can be abused in the same fashion (although techniques such as blackmail,
torture and coercion are certainly approaches of forcefully retrieving information
from people).
The key assumption with token-based authentication is that the token is in the
possession of the legitimate user. Similarly, with passwords, this reliance upon the
human participant in the authentication process is where the approach begins to fail.
With regard to tokens, people have mixed views on their importance and protection.
With house and car keys, people tend to be highly protective, with lost or stolen keys
resulting in a fairly immediate replacement of the locks and appropriate notification
to family members and law enforcement. This level of protection can also be seen
with regard to wallets and purses – which are a store for many tokens such as credit
cards. When using tokens for logical and physical access control, such as a work
identity card, the level of protection diminishes. Without strong policies on the
reporting of lost or stolen cards, the assumption that only the authorised user is in
possession of the token is weak at best. In both of the first two examples, the conse-
quence of misuse would have a direct financial impact on the individual, whereas
the final example has (on the face of it) no direct consequence. So the individual is
clearly motivated by financial considerations (not unexpectedly!). When it comes to
protecting information or data, even if it belongs to them, the motivation to protect
themselves is lessened, with many people unappreciative of the value of their infor-
mation. An easy example here is the wealth of private information people are happy
to share on social networking sites (BBC 2008a). The resultant effect of this insecu-
rity is that tokens are rarely utilised in isolation but rather combined with a second
form of authentication to provide a two-factor authentication. Tokens and PINs are
common combinations for example credit cards.
The feasibility of tokens is also brought into question when considering their
practical use. People already carry a multitude of token-based authentication
credentials and the utilisation of tokens for logical authentication would only serve
to increase this number. Would a different token be required to login in to the
computer, on to the online bank accounts, onto Amazon and so on? Some banks in
the UK have already issued a card reader that is used in conjunction with your current
cash card to provide a unique one-time password. This password is then entered
onto the system to access particular services (NatWest 2010). Therefore, in order to
use the system, the user must remember to take not only the card but also the card
reader with them wherever they go (or constrain their use to a single location). With
multiple bank accounts from different providers this quickly becomes infeasible.
The third category of authentication, biometrics, serves to overcome the aforemen-
tioned weaknesses by removing the reliance upon the individual to either remember
a password or remember to take and secure a token. Instead, the approach relies
upon unique characteristics already present in the individual. Although the modern
interpretation of biometrics certainly places its origins in the twentieth century,
biometric techniques have been widely utilised for hundreds of years. There are
paintings from prehistoric times signed by handprints and the Babylonians used
fingerprints on legal documents.
The modern definition of biometrics goes further than simply referring to a
unique characteristic. A widely utilised reference by the International Biometrics
Group (IBG) defines biometrics as ‘the automated use of physiological or behavioural
characteristics to determine or verify identity’ (IBG 2010a). The principal difference
is in the term automated. Whilst many biometric characteristics may exist, they only
become a biometric once the process of authentication (or strictly identification) can
be achieved in an automated fashion. For example, whilst DNA is possibly one of
the more unique biometric characteristics known, it currently fails to qualify as a
biometric as it is not a completely automated process. However, significant research
is currently being conducted to make it so. The techniques themselves can be
broken down into two categories based upon whether the characteristic is a physical
attribute of the person or a learnt behaviour. Table 1.2 presents a list of biometric
techniques categorised by their physiological or behavioural attribute.
Fingerprint recognition is the most popular biometric technique in the market.
Linked inherently to its use initially within law enforcement, Automated Fingerprint
Identification Systems (AFIS) were amongst the first large-scale biometric systems.
Still extensively utilised by law enforcement, fingerprint systems have also found
their way into a variety of products such as laptops, mobile phones, mice and physical
access controls. Hand geometry was previously a significant market player,
principally in time-and-attendance systems; however, these have been surpassed by
facial and vascular pattern recognition systems in terms of sales.

Table 1.2 Biometric techniques

Physiological                    Behavioural
Ear geometry                     Gait recognition
Facial recognition               Handwriting recognition
Facial thermogram                Keystroke analysis
Fingerprint recognition          Mouse dynamics
Hand geometry                    Signature recognition
Iris recognition                 Speaker recognition
Retina recognition
Vascular pattern recognition

Both of the latter
techniques have increased in popularity since September 2001 for use in border
control and anti-terrorism efforts. Both iris and retina recognition systems are
amongst the most effective techniques in uniquely identifying a subject. Retina in
particular is quite intrusive to the user as the sample capture requires close interac-
tion between the user and capture device. Iris recognition is becoming more popular
as the technology for performing authentication at a distance advances.
The behavioural approaches are generally less unique in their characteristics than
their physiological counterparts; however, some have become popular due to the
application within which they are used. For instance, speaker recognition (also known
as voice verification) is widely utilised in telephony-based applications to verify the
identity of the user. Gait recognition, the ability to identify a person by the way in
which they walk, has received significant focus for use within airports, as identifica-
tion is possible at a distance. Some of the less well-established techniques include
keystroke analysis, which refers to the ability to verify identity based upon the typing
characteristics of individuals, and mouse dynamics, verifying identity based upon
mouse movements. The latter has yet to make it out of research laboratories.
The biometric definition ends with the ability to determine or verify identity. This
refers to the two modes in which the biometric system can operate. To verify, or
verification (also referred to as authentication), is the process of confirming that a
claimed identity is the authorised user. This approach directly compares against the
password model utilised on computer systems, where the user enters a username –
thus claiming an identity, and then a password. The system verifies the password
against the claimed identity. However, biometrics can also be used to identify, or for
identification. In this mode, the user does not claim to be anybody and merely
presents their biometric sample to the system. It is up to the system to determine
whether the sample is an authorised sample and against which user. From a problem
complexity perspective, these are two very different problems.
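The difference between the two modes can be sketched as follows; match_score stands in for whatever comparison algorithm the biometric system uses and is an assumed placeholder, not a real library call.

```python
def verify(sample, claimed_user, templates, match_score, threshold):
    """Verification (1:1): does the sample match the single claimed identity?"""
    return match_score(sample, templates[claimed_user]) >= threshold

def identify(sample, templates, match_score, threshold):
    """Identification (1:N): search every enrolled template for the best match."""
    best_user, best_score = None, float("-inf")
    for user, template in templates.items():
        score = match_score(sample, template)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None
```

Verification performs one comparison regardless of how many users are enrolled, whereas identification’s workload and false-match exposure grow with the size of the enrolled population.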
From a system performance perspective (ignoring the compromise due to poor
selection etc.), biometric systems do differ from the other forms of authentication.
With both secret-knowledge and token-based approaches, the system is able to verify
the provided credential with 100% accuracy. The result of the comparison is a
Boolean (true or false) decision. In biometric-based systems, whilst the end result is
still normally a Boolean decision, that decision is based upon whether the sample has
met (or exceeded) a particular similarity score. In a password-based approach, the
system would not permit access unless all characters were identical. In a biomet-
ric-based system that comparison of similarity is not 100% – or indeed typically
anywhere near 100%. This similarity score gives rise to error rates that secret-knowl-
edge and token-based approaches do not have. The two principal error rates are:
– False acceptance rate (FAR) – the rate at which an impostor is wrongly accepted into the system
– False rejection rate (FRR) – the rate at which an authorised user is wrongly rejected from a system
Figure 1.6 illustrates the relationship between these two error rates. The two rates
trade off against each other – as one falls the other rises – so it is necessary to determine a threshold
value that is a suitable compromise between the level of security required (FAR)
and the level of user convenience (FRR). A third error rate, the equal error rate
(EER), is a measure of where the FAR and FRR cross and is frequently used as a
standard reference point to compare different biometric systems’ performance
(Ashbourn 2000). The performance of biometric systems has traditionally been
the prohibitive factor in widespread adoption (alongside cost), with error rates too
high to provide reliable and convenient authentication of the user. This has consider-
ably changed in recent years with significant enhancements being made in pattern
classification to improve performance.
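A minimal sketch of how the error rates and the threshold interact is given below; the genuine and impostor similarity scores are invented for illustration, and the equal error rate is approximated by sweeping the threshold.

```python
def far_frr(genuine, impostor, threshold):
    """FAR: impostor scores wrongly accepted; FRR: genuine scores wrongly rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def approximate_eer(genuine, impostor, steps=1000):
    """Sweep the threshold and report the point where FAR and FRR are closest."""
    best = None
    for i in range(steps + 1):
        t = i / steps
        far, frr = far_frr(genuine, impostor, t)
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), t, far, frr)
    return best[1:]  # (threshold, FAR, FRR)

genuine = [0.91, 0.84, 0.77, 0.95, 0.68, 0.88]   # illustrative scores only
impostor = [0.32, 0.41, 0.55, 0.28, 0.61, 0.47]
print(approximate_eer(genuine, impostor))
```

Raising the threshold pushes the FAR down and the FRR up; the chosen operating point is the compromise between security and user convenience described above.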
Biometrics is considered to be the strongest form of authentication; however, a
variety of problems exist that can reduce their effectiveness, such as defining an
appropriate threshold level. They also introduce a significant level of additional
work, both in the design of the system and the deployment. The evidence for this
manifests itself in the fact that few off-the-shelf products exist for large-scale
biometric deployment. Instead, vendors offer a bespoke design solution involving
expensive consultants – highlighting the immaturity of the marketplace. Both tokens
and particularly passwords are simple for software designers to implement and
organisations to deploy. Further evidence of this can be found by looking at the
levels of adoption over the last 10 years.

Fig. 1.6 Biometric performance characteristics

Table 1.3 illustrates the level of adoption of biometrics from 9% in 2001 rising to 21% in 2010,1
still only a minor player in
authentication versus the remaining approaches.
Whilst relatively low, it is interesting to note that adoption of biometrics has
increased over the 10-year period, whilst the other approaches have stayed fairly
static, if not slightly decreasing in more recent years. This growth in biometrics
reflects the growing capability, increasing standardisation, increasing performance
and decreasing cost of the systems.
Fundamentally, all the approaches come back to the same basic element: a unique
piece of information. With something you know, the responsibility for storing that
information is placed upon the user; with something you have, it is stored within the
token and with something you are, it is stored within the biometric characteristic.
Whilst each has its own weaknesses, it is imperative that verifying the identity of the
user is completed successfully if systems and information are to remain secure.
1.4 Point-of-Entry Authentication
User authentication to systems, services or devices is performed using a single
approach – point-of-entry authentication. When authenticated successfully, the user
has access to the system for a period of time without having to re-authenticate, with
the period of time being defined on a case-by-case basis. For some systems, a
screensaver will lock the system after a few minutes of inactivity; for many Web-
based systems, the default time-out on the server (which would store the authenticated
credential) is 20 min and for other systems they will simply remain open for use
until the user manually locks the system or logs out of the service. The point-of-entry
mechanism is an intrusive interface that forces a user to authenticate. To better
understand and appreciate the current use of authentication, it is relevant to examine
the literature on the current use of each of the authentication categories.
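A minimal sketch of the point-of-entry model follows: a session is created once at login and then simply trusted until it goes idle or is ended. The 20-minute timeout mirrors the web-server default mentioned above; everything else is an illustrative assumption.

```python
import time

IDLE_TIMEOUT = 20 * 60  # seconds; mirrors a common server-side session default

class Session:
    """Created once at login; never asks the user to prove their identity again."""

    def __init__(self, user):
        self.user = user
        self.last_activity = time.time()

    def touch(self):
        # Any activity refreshes the session without re-authentication.
        self.last_activity = time.time()

    def is_valid(self):
        # The only ongoing check is whether the session has gone idle.
        return (time.time() - self.last_activity) < IDLE_TIMEOUT
```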
Table 1.3 Level of adoption of authentication approaches

                                 2001  2002  2003  2004  2005  2006  2007  2008  2009  2010
Static account/login password     48%   44%   47%   56%   52%   46%   51%   46%   42%   43%
Smartcard and other one-time
  passwords                        a     a     a    35%   42%   38%   35%   36%   33%   35%
Biometrics                         9%   10%   11%   11%   15%   20%   18%   23%   26%   21%

a Data not available
1 These figures were compiled from the Computer Security Institute’s (CSI) annual Computer Crime and Abuse Survey (which until 2008 was jointly produced by the Federal Bureau of Investigation (FBI)) between the period 2001 and 2010 (CSI, 2001-2010).
An analysis of password use by Schneier (2006) highlighted the weakness of
allowing users to select the password. The study was based upon the analysis of
34,000 accounts from a MySpace phishing attack. Sixty-five percent of passwords
contain eight letters or less and the most common passwords were password1,
abc123 and myspace1. As illustrated in Table 1.4, none of the top 20 most frequently
used passwords contain any level of sophistication that a password cracker would
find remotely challenging. Another report by Imperva (2010), some 4 years later,
studied the passwords of over 32 million users of Rockyou.com after a hacker
obtained access to the database and posted them online. The analysis highlighted
again many of the traditional weaknesses of password-based approaches. The report
found that 30% of users’ passwords were six letters or fewer. Furthermore, 60% of
users used a limited set of alphanumeric characters, with 50% using slang/dictionary
or trivial passwords. Over 290,000 users selected 123456 as a password.
Further examination of password use reveals users are not simply content on
using simple passwords but continue their bad practice. A study in 2004 found that
70% of people would reveal their computer password in exchange for a chocolate
bar (BBC 2004). Thirty-four percent of respondents didn’t even need to be bribed
and volunteered their password. People are not even being socially engineered to
reveal their passwords but are simply giving them up in return for a relatively
inexpensive item. If other more sophisticated approaches like social engineering
were included, a worryingly significant number of accounts could be compromised,
without the need for any form of technological hacking or brute-forcing. Interestingly,
80% of those questioned were also fed up with passwords and would like a better
way to login to work computer systems.

Table 1.4 Top 20 most common passwords

Analysis of 34,000 passwords (Schneier 2006)    Analysis of 32 million passwords (Imperva 2010)
Rank  Password            Rank  Password      Number of users
1     password1           1     123456        290731
2     abc123              2     12345         79078
3     myspace1            3     123456789     76790
4     password            4     Password      61958
5     blink182            5     iloveyou      51622
6     qwerty1             6     princess      35231
7     fuckyou             7     rockyou       22588
8     123abc              8     1234567       21726
9     baseball1           9     12345678      20553
10    football1           10    abc123        17542
11    123456              11    Nicole        17168
12    soccer              12    Daniel        16409
13    monkey1             13    babygirl      16094
14    liverpool1          14    monkey        15294
15    princess1           15    Jessica       15162
16    jordan23            16    Lovely        14950
17    slipknot1           17    michael       14898
18    superman1           18    Ashley        14329
19    iloveyou1           19    654321        13984
20    monkey              20    Qwerty        13856
Research carried out by the author into the use of PIN on mobile phones in 2005
found that 66% of the 297 respondents utilised the PIN on their device (Clarke and
Furnell 2005). In the first instance, this is rather promising – although it is worth
considering that the third not using a PIN represents well over a billion people. More
concerning, however, was their use of the security:
• 45% of respondents never changed their PIN code from the factory default setting
• A further 42% had only changed their PIN once and
• 36% use the same PIN number for multiple services – which in all likelihood would mean they also used the number for credit and cash cards.
Further results from the survey highlight the usability issues associated with
PINs that would lead to these types of result: 42% of respondents had experienced
some form of problem with their PIN which required a network operator to unlock
the device, and only 25% were confident in the protection the PIN would provide.
From a point-of-entry authentication perspective mobile phones pose a significantly
different threat to computer systems. Mobile phones are portable in nature and lack
the physical protection afforded to desktop computers. PCs reside in the home or in
work, within buildings that can have locks and alarms. Mobile phones are carried
around and only have the individual to rely upon to secure the device. The PIN is
entered upon switch-on of the device, perhaps in the morning (although normal practice
is now to leave the device on permanently), and the device remains on and accessible
without re-authentication of the user for the remainder of the day. The device can be
misused indefinitely to access the information stored on the device (and until reported
to the network operator misused to access the Internet and make international
telephone calls). A proportion of users are able to lock their device and re-enter the
PIN. From the survey, however, only 18% of respondents used this functionality.
When it comes to point-of-entry authentication, misuse of secret-knowledge
approaches is not unique and both tokens and biometrics also suffer from various
issues. Tokens have a chequered past. If we consider their use as cash/credit cards,
the level of fraud being conducted is enormous. The Association for Payment
Clearing Services (APACS), now known as the UK Payments Administration
(UKPA), reported the level of fraud at £535 million in 2007, a 25% rise on the
previous year (APACS 2008). Whilst not all the fraud can be directly attributed to
the misuse of the card, counterfeit cards and lost/stolen cards certainly can be, and
they account for over £200 million of the loss. Interestingly, counterfeit fraud within
the UK has dropped dramatically (71%) between 2004 and 2007, with the introduction
of chip and PIN. Chip and PIN moved cards and merchants away from using the
magnetic strip and a physical signature to a smartcard technology that made dupli-
cating cards far more difficult. Unfortunately, not everywhere in the world is this
new technology utilised and this gives rise to the significant level of fraud still
existing for counterfeit cards.
The assumption that the authorised user is the person using the card obviously
does not hold true for a large number of credit card transactions. Moreover, even
with a token that you would expect users to be financially motivated to take care of,
significant levels of misuse still occur. One of the fundamental issues that gave rise
to counterfeit fraud of credit cards is the ease with which the magnetic-based cards
could be cloned. It is the magnetic strip of the card that stores the secret information
necessary to perform the transaction. A BBC report in 2003 stated that ‘a fraudulent
transaction takes place every 8s and cloning is the biggest type of credit card fraud’
(BBC 2003). Whilst smartcard technologies have improved the situation, there is
evidence that these are not impervious to attack. Researchers at Cambridge
University have found a way to trick the card reader into authenticating a transac-
tion without a valid PIN being entered (Espiner 2010).
Proximity or radio frequency identification (RFID)-based tokens have also
experienced problems with regard to cloning. RFID cards are contactless cards that
utilise a wireless signal to transmit the necessary authentication information. One
particular type of card, NXP Semiconductor’s Mifare Classic RFID card, was
hacked by Dutch researchers (de Winter 2008). The hack fundamentally involves
breaking the cryptographic protection, which only takes seconds to complete. The
significance of this hack is in tokens that contain the Mifare chip. The chip is used
not only in the Dutch transportation system but also in the US (Boston Charlie Card)
and the UK (London Oyster Card) (Deyal 2008). Subsequent reports regarding the
Oyster card reveal that duplicated Mifare chips can be used for free to travel on the
underground (although only for a day due to the asynchronous nature of the system)
(BBC 2008b). With over 34 million Oyster cards in circulation, a significant
opportunity exists for misuse. Since February 2010, new cards are being distributed
that no longer contain the Mifare Classic chip, but that in itself highlights another
weakness of token-based approaches, the cost of reissue and replacement.
With biometric systems, duplication of the biometric sample is possible. Facial
recognition systems could be fooled by a simple photocopied image of the legitimate
face (Michael 2009). Fingerprint systems can also be fooled in authorising the user,
using rubber or silicon impressions of the legitimate user’s finger (Matsumoto et al.
2002). Unfortunately, whilst the biometric characteristics are carried around with
us, they are also easily left behind. Cameras and microphones can capture our face
and voice characteristics. Fingerprints are left behind on glass cups we drink from
and DNA is shed from our bodies in the form of hair everywhere. There are, however,
more severe consequences that can happen. In 2005, the owner of a Mercedes
S-Class in Malaysia had his finger chopped off during an attack to steal his car
(Kent 2005). This particular model of car required fingerprint authentication to start
the car. The thieves were able to bypass the immobiliser, using the severed fingertip,
to gain access. With both tokens and secret knowledge, the information could have
been handed over without loss of limb. This has led more recent research to focus
upon the addition of liveness detectors that are able to sense whether a real person
(who is alive) is providing the biometric sample or if it is artificial.
The problem with point-of-entry authentication is that a Boolean decision is
made at the point of access as to whether to permit or deny access. This decision is
frequently based upon only a single decision (i.e. a password) or perhaps two with
token-based approaches – but this is largely due to tokens providing no real authen-
tication security. The point-of-entry approach provides an attacker with an opportunity
to study the vulnerability of the system and to devise an appropriate mechanism to
circumvent it. As it is a one-off process, no subsequent effort is required on behalf
of the attacker and frequently they are able to re-access the system providing the
same credential they previously compromised.
However, when looking at the available options, the approach taken with
point-of-entry seems intuitively logical given the other seemingly limited choices:
• To authenticate the user every couple of minutes in order to continuously ensure the user still is the authorised user.
• To authenticate the user before accessing each individual resource (whether that be an application or file). The access control decision can therefore be more confident in the authenticity of the user at that specific point in time.
Both of the examples above would in practice be far too inconvenient to users
and thus increase the likelihood that they would simply switch it off, circumvent it
or maintain such a short password sequence that it was simple to enter quickly. Even
if we ignore the inconvenience for a moment, these approaches still do not bypass
the point-of-entry authentication approach. It still is point-of-entry, but the user has
to perform the action more frequently. If an attacker has found a way to compromise
the authentication credential, compromising once is no different to compromising it
two, three, four or more times. So requesting additional verification of the user does
not provide additional security in this case. Authenticating the user periodically
with a variety of authentication techniques randomly selected would bypass the
compromised credential; however, at what cost in terms of inconvenience to the
user? Having to remember multiple passwords or carry several tokens for a single
system would simply not be viable. Multiple biometrics would also have cost
implications.
1.5 Single Sign On and Federated Authentication
As the authentication demands increase upon the user, so technologies have been
developed to reduce them, and single sign on and federated authentication are two
prime examples. Single sign on allows a user to utilise a single username and pass-
word to access all the resources and applications within an organisation.
Operationally, this allows users to enter their credentials once and be subsequently
permitted to access resources for the remainder of the session. Federated authentication
extends this concept outside of the organisation to include other organisations.
Obviously, for federated identity to function, organisations need to ensure relevant
trust/use policies are in place beforehand. Both approaches reduce the need for
the users to repeatedly enter their credentials every time they want to access a
network resource.
In enterprise organisations, single sign on is also replaced with reduced sign on.
Recognising that organisations place differing levels of risk on information, reduced
sign on permits a company to have additional levels of authentication for informa-
tion assets that need better protection. For instance, it might implement single sign
on functionality with a username and password combination for all low-level infor-
mation but require a biometric-based credential for access to more important data.
Both single sign on and, more recently, federated authentication have become
hugely popular. It is standard practice for large organisations to implement single
sign on, and OpenID, a federated identity scheme, has over a billion enabled
accounts and over nine million web sites that accept it (Kissel 2009). Whilst these
mechanisms do allow access to multiple systems through the use of a single creden-
tial, traditionally viewed as a bad practice, the improvement in usability for end-
users has overridden this issue.
In addition to single sign on described above, there are also examples of what
appear to be single sign on used frequently by users on desktop systems and browsers
utilising password stores. A password store will simply store all the individual
username and password combinations for all your systems/web sites. A single username
and password provides access to them. This system is different from normal single
sign on in that each of the resources that need access still has its own authentication
credentials and the password store acts as a middle layer in providing them, assuming
that the key to unlock the store is provided. In single sign on, there is only a single
authentication credential and a central service is responsible for managing them.
Password stores are therefore open to abuse by attacks that provide access to the
local host and to the password store. A potentially more significant issue with
password stores is the usability of such approaches. Whilst designed to improve
usability they could in many cases inhibit use. Password stores stop users from
having to enter their individual authentication credential to each service, which over
time is likely to lead to users simply forgetting what they are. When users need to
access the service from another computer, or from their own after a system reset, it
is likely that they will encounter issues over remembering their credentials.
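The ‘middle layer’ idea can be sketched as individual credentials encrypted under a key derived from the single master password. The sketch below assumes the third-party cryptography package and is a structural illustration, not a recommendation of any particular product or scheme.

```python
import base64
import hashlib
import json

from cryptography.fernet import Fernet  # assumes the third-party 'cryptography' package

def master_key(master_password: str, salt: bytes) -> bytes:
    """Derive the single key that unlocks the whole store from the master password."""
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)
    return base64.urlsafe_b64encode(raw)

def lock_store(credentials: dict, master_password: str, salt: bytes) -> bytes:
    """Encrypt every per-site username/password pair under the master key."""
    return Fernet(master_key(master_password, salt)).encrypt(json.dumps(credentials).encode())

def unlock_store(blob: bytes, master_password: str, salt: bytes) -> dict:
    """One secret opens every stored credential - the single point of failure noted above."""
    return json.loads(Fernet(master_key(master_password, salt)).decrypt(blob))
```

The salt and the encrypted blob live on the local host, which is why an attacker with access to that machine and the master password gains every stored credential at once.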
Single sign on and federated identity, whilst helping to remove the burden placed
upon users for accessing services and applications, still only provide point-of-entry
verification of a user and thus only offer a partial solution to the authentication
problem.
1.6 Summary
User authentication is an essential component in any secure system. Without it, it is
impossible to maintain the confidentiality, integrity and availability of systems.
Unlike firewalls, antivirus and encryption, it is also one of the few security controls
that all users have to interface and engage with. Both secret-knowledge and
token-based approaches rely upon the user to maintain security of the system. A lost
or stolen token or shared password will compromise the system. Biometrics do
provide an additional level of security, but are not necessarily impervious to compro-
mise. Current approaches to authentication are arguably therefore failing to meet
the needs or expectations of users or organisations.
In order to determine what form of authentication would be appropriate, it would
be prudent to investigate the nature of the problem that is trying to be solved. With
what is the user trying to authenticate? How do different technologies differ in their
security expectations? What threats exist and how do they impact the user? What
usability considerations need to be taken? The following chapters in Part I of this
book will address these issues.
References
AccessData: AccessData password recovery toolkit. AccessData. Available at: http://accessdata.
com/products/forensic-investigation/decryption (2011). Accessed 10 Apr 2011
APACS: Fraud: The facts 2008. Association for payment clearing services. Available at:
http://www.cardwatch.org.uk/images/uploads/publications/Fruad%20Facts%20202008_links.
pdf (2008). Accessed 10 Apr 2011
Ashbourn, J.: Biometrics: Advanced Identity Verification: The Complete Guide. Springer, London
(2000). ISBN 978-1852332433
BBC: Credit card cloning. BBC inside out. Available at: http://www.bbc.co.uk/insideout/east/
series3/credit_card_cloning.shtml (2003). Accessed 10 Apr 2011
BBC: Passwords revealed by sweet deal. BBC News. Available at: http://news.bbc.co.uk/1/hi/
technology/3639679.stm (2004). Accessed 10 Apr 2011
BBC: Personal data privacy at risk. BBC News. Available at: http://news.bbc.co.uk/1/hi/
business/7256440.stm (2008a). Accessed 10 Apr 2011
BBC: Oyster card hack to be published. BBC News. Available at: http://news.bbc.co.uk/1/hi/
technology/7516869.stm (2008b). Accessed 10 Apr 2011
Clarke, N.L., Furnell, S.M.: Authentication of users on mobile telephones – A survey of attitudes
and opinions. Comput. Secur. 24(7), 519–527 (2005)
Crown Copyright: Computer misuse act. Crown copyright. Available at: http://www.legislation.
gov.uk/ukpga/1990/18/contents (1990). Accessed 10 Apr 2011
Crown Copyright: Data protection act 1988. Crown copyright. Available at: http://www.legislation.
gov.uk/ukpga/1998/29/contents (1998). Accessed 10 Apr 2011
Crown Copyright: Regulation of investigatory powers act. Crown copyright. Available at:
http://www.legislation.gov.uk/ukpga/2000/23/contents (2000a). Accessed 10 Apr 2011
Crown Copyright: Electronic communication act. Crown copyright. Available at: http://www.
legislation.gov.uk/ukpga/2000/7/contents (2000b). Accessed 10 Apr 2011
Crown Copyright: Police and justice act. Crown copyright. Available at: http://www.legislation.
gov.uk/ukpga/2006/48/contents (2006). Accessed 10 Apr 2011
de Winter, B.: New hack trashes London’s Oyster card. Tech world. Available at: http://news.
techworld.com/security/105337/new-hack-trashes-londons-oyster-card/ (2008). Accessed 10
Apr 2011
Deyal, G.: MiFare RFID crack more extensive than previously thought. Computer world. Available
at: http://www.computerworld.com/s/article/9078038/MiFare_RFID_crack_more_extensive_
than_previously_thought (2008). Accessed 10 Apr 2011
Espiner, T.: Chip and PIN is broken, says researchers. ZDNet UK. Available at: http://www.zdnet.
co.uk/news/security-threats/2010/02/11/chip-and-pin-is-broken-say-researchers-40022674/
(2010). Accessed 3 Aug 2010
Furnell, S.M.: Computer Insecurity: Risking the System. Springer, London (2005).
ISBN 978-1-85233-943-2
IBG: How is biometrics defined? International Biometrics Group. Available at: http://www.
biometricgroup.com/reports/public/reports/biometric_definition.html (2010a). Accessed 10
Apr 2011
Imperva: Consumer password worst practices. Imperva Application Defense Centre. Available at:
http://www.imperva.com/docs/WP_Consumer_Password_Worst_Practices.pdf (2010). Accessed
10 Apr 2011
ISO: ISO/IEC 27002:2005 information technology – Security techniques – Code of practice for
information security management. International Standards Organisation. Available at: http://
www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=50297 (2005a).
Accessed 10 Apr 2011
Kent, J.: Malaysia car thieves steal finger. BBC News. Available at: http://news.bbc.co.uk/1/hi/
world/asia-pacific/4396831.stm (2005). Accessed 10 Apr 2011
Kissel, B.: OpenID 2009 year in review. OpenID Foundation. Available at: http://openid.
net/2009/12/16/openid-2009-year-in-review/ (2009). Accessed 10 Apr 2011
Matsumoto, T., Matsumoto, H., Yamada, K., Hoshino, S.: Impact of artificial ‘gummy’ fingers on
fingerprint systems. Proc. SPiE 4677, 275–289 (2002)
Michael, S.: Facial recognition fails at Black Hat. eSecurity planet. Available at: http://www.
esecurityplanet.com/trends/article.php/3805011/Facial-Recognition-Fails-at-Black-Hat.htm
(2009). Accessed 10 Apr 2011
NatWest.: The secure way to get more from online banking. NatWest Bank. Available at: http://
www.natwest.com/personal/online-banking/g1/banking-safely-online/card-reader.ashx(2010).
Accessed 10 Apr 2011
Ophcrack.: What is ophcrack?. Sourceforge. Available at: http://ophcrack.sourceforge.net/ (2011).
Accessed 10 Apr 2011
Schneier, B.: Real-world passwords. Bruce Schneier Blog. Available at: http://www.schneier.com/
blog/archives/2006/12/realworld_passw.html (2006). Accessed 10 Apr 2011
Security Focus.: @Stake LC5. Security focus. Available at: http://www.securityfocus.com/
tools/1005 (2010). Accessed 10 April 2011
Wood, H.: The use of passwords for controlling the access to remote computer systems and services.
In: Dinardo, C.T. (ed.) Computers and Security, vol. 3, p. 137. AFIPS Press, Montvale (1977)
Chapter 2
The Evolving Technological Landscape

2.1 Introduction
Technology is closely intertwined with modern society, and few activities in our
daily life do not rely upon technology in some shape or form – from boiling a kettle
and making toast to washing clothes and keeping warm. The complexity of this
technology is however increasing, with more intelligence and connectivity being
added to a whole host of previously simple devices. For instance, home automation
enables every electrical device to be independently accessed remotely, from lights
and hot water to audio and visual systems. Cars now contain more computing power
than the computer that guided Apollo astronauts to the moon (Physics.org 2010).
With this increasing interoperability and flexibility comes a risk. What happens
when hackers obtain access to your home automation system? Switch devices on,
turn up the heating, or switch the fridge off? If hackers gain access to your car,
would they be able to perform a denial of service attack? Could they have more
underhand motives – perhaps cause an accident, stop the braking or speed the car
up? Smart electricity meters are being deployed in the US, UK and elsewhere that
permit close monitoring of electricity and gas usage as part of the effort towards
reducing the carbon footprint (Anderson and Fuloria 2010). The devices also allow
an electricity/gas supplier to manage supplies at times of high usage, by switching
electricity off to certain homes whilst maintaining supply to critical services such as
hospitals. With smart meters being deployed in every home, an attack on these
devices could leave millions of homes without electricity. The impact upon society
and the resulting confusion and chaos that would derive is unimaginable.
With this closer integration of technology, ensuring systems remain secure has
never been more imperative. However, as society and technology evolve, the prob-
lem of what and how to secure systems also changes. Through an appreciation of
where technology has come from, where it is heading, the threats against it and the
users who use it, it is possible to develop strategies to secure systems that are proac-
tive in their protection rather than reactive to every small change – developing a
holistic approach is key to deriving long-term security practice.
2.2 Evolution of User Authentication
The need to authenticate users was identified early on in the history of computing.
Whilst initially for financial reasons – early computers were prohibitively expensive
and IT departments needed to ensure they charged the right departments for use –
the motivations soon developed into those we recognise today. Of the authentication
approaches, passwords were the only choice available in the first instance.
Initial implementations simply stored the username and password combinations
in clear-text form, allowing anyone with sufficient permission (or moreover anyone
who was able to obtain sufficient permission) the access to the file. Recognised as a
serious weakness to security, passwords were subsequently stored in a hashed form
(Morris and Thompson 1978). This provided significant strength to password secu-
rity as accessing the password file no longer revealed the list of passwords.
Cryptanalysis of the file is possible; however, success is largely dependent upon
whether users are following good password practice or not and the strength of the
hashing algorithm used.
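The hashed-storage idea can be sketched with modern standard-library primitives; the salt length and iteration count below are illustrative choices rather than values taken from the original scheme.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def enrol(password: str):
    """Store only a random salt and the derived hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the supplied password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Reading the credential file now yields only salts and hashes, so an attacker is pushed back to guessing, with success depending on password quality and the cost of the hashing algorithm, as the text notes.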
Advice on what type of password to use has remained fairly constant, with a
trend towards more complex and longer passwords as computing power and its abil-
ity to brute-force the password space improved. As dictionary-based attacks became
more prevalent, the advice changed to ensure that passwords were more random in
nature, utilising a mixture of characters, numerals and symbols. The advice on the
length of the password has also varied depending upon its use. For instance, on
Windows NT machines that utilised the LAN Manager (LM) password, the system
would store the password into two separate 7-character hashes. Passwords of 9 char-
acters would therefore have the first 7 characters in the first hash and the remaining
2 in the second hash. Cracking a 2-character hash is a trivial task and could subse-
quently assist in cracking the first portion of the hash. As such, advice by many IT
departments of the day was to have 7- or 14-character passwords only. The policy
for password length now varies considerably between professionals and the litera-
ture. General guidance suggests passwords of 9 characters or more; however, pass-
word crackers such as Ophcrack are able to crack 14-character LAN Manager (LM)
hashes (which were still utilised in Windows XP). Indeed, at the time of writing,
Ophcrack had tables that can crack NTLM hashes (used in Windows Vista) of 6
characters (utilising any combination of upper, lower, special characters and num-
bers) – and this will only improve in time. Unfortunately, the fundamental boundary
to password length is the capacity for the user to remember it. In 2004, Bill Gates was
quoted as saying ‘passwords are dead’ (Kotadia 2004), citing numerous weaknesses
and deficiencies that password-based techniques experience.
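The LM weakness described above is essentially arithmetic: two independent 7-character halves are vastly cheaper to attack than one 14-character secret. The 69-character alphabet below is an approximation of the LM character set after upper-casing and is used purely for illustration.

```python
ALPHABET = 69  # approximate LM character set once passwords are upper-cased

one_secret = ALPHABET ** 14          # a 14-character password attacked as a whole
two_halves = 2 * (ALPHABET ** 7)     # the same password attacked as two 7-character halves

print(f"Attacked as one secret : {one_secret:.2e} candidates")
print(f"Attacked as two halves : {two_halves:.2e} candidates")
print(f"Reduction factor       : {one_secret / two_halves:.2e}")
```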
To fill the gap created by the weaknesses in password-based approaches, several
token-based technologies were developed. Most notably, the one-time password
mechanism was created to combat the issue of having to remember long complex
passwords. It also provided protection against replay attacks, as each password
could only be utilised once. However, given the threat of lost or stolen tokens, most
implementations utilise one-time passwords as a two-factor approach, combining it
with the traditional username and password for instance, thereby not necessarily
removing the issues associated with remembering and maintaining an appropriate
password.
Until more recently, token-based approaches have been largely utilised by corpo-
rate organisations for logical access control of their computer systems, particularly
for remote access where increased verification of authenticity is required. The major
barrier to widespread adoption is the cost associated with the physical token itself.
However, the ubiquitous nature of mobile phones has provided the platform for a
new surge in token-based approaches. Approaches utilise the Short-Message-
Service (SMS) (also known as the text message) to send the user a one-time pass-
word to enter onto the system. Mobile operators, such as O2 (amongst many others)
in the UK, utilise this mechanism for initial registration and password renewal pro-
cesses for access to your online account, as illustrated in Figs. 2.1 and 2.2.
Google has also developed its Google Authenticator, a two-step verification
approach that allows a user to enter a one-time code in addition to their username
and password (Google 2010). The code is delivered via a specialised application
installed on the user’s mobile handset, thus taking advantage of Internet-enabled
devices (as illustrated in Fig. 2.3). The assumption placed on these approaches is that
mobile phones are a highly personal device and as such will benefit from additional
physical protection from the user than traditional tokens. The costs of deploying to
these devices over providing the physical device are also significantly reduced.
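Authenticator-style apps generally follow the published time-based one-time password (TOTP) construction: a shared secret and the current 30-second time step are fed through HMAC and truncated to a short code. The sketch below follows that construction (RFC 6238) and is illustrative rather than Google's actual implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password: HMAC-SHA1 over the current time step."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // step                      # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

# Both the handset app and the server hold the same (made-up) base32 secret.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because both ends derive the code independently from the shared secret and the clock, nothing needs to be sent to the handset in advance and each code expires after one time step.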
The growth of the Internet has also resulted in an increased demand upon users
to authenticate – everything from the obvious such as financial web sites and corpo-
rate access, to less obvious news web sites and online gaming. Indeed, the need to
authenticate in many instances has less to do with security and more with possible
marketing information that can be gleaned from understanding users’ browsing
habits. Arguably this is placing an overwhelming pressure on users to remember a
large number of authentication credentials. In addition, the increase in Internet-
enabled devices from mobile phones to iPads ensures users are continuously con-
nected, consuming media from online news web sites and communicating using
social networks, instant messenger and Skype. Despite the massive change in tech-
nology, both in terms of the physical form factor of the device and the increasing
mobile nature of the device, and the services the technology enables, the nature of
authentication utilised is overwhelmingly still a username and password.
Fig. 2.1 O2 web authentication using SMS
Further examination of the mobile phone reveals an interesting evolution of
technology, services and authentication. The mobile phone represents a ubiquitous
technology (in the developed world) with over 4.3 billion subscribers; almost
two-thirds of the world population1
(GSM Association 2010). The mobile phone,
known technically as the Mobile Station (MS), consists of two components: the
Mobile Equipment (ME) and a Subscriber Identification Module (SIM). The SIM is
a smart card with, amongst other information, the subscriber and network authenti-
cation keys. Subscriber authentication on a mobile phone is achieved through the
entry of a 4–8-digit number known as a Personal Identification Number (PIN). This
point-of-entry system then gives access to the user’s SIM, which will subsequently
give the user network access via the International Mobile Subscriber Identifier
(IMSI) and the Temporary Mobile Subscriber Identifier (TMSI), as illustrated in
Fig. 2.4. Thus the user’s authentication credential is used by the SIM to unlock the
necessary credentials for device authentication to the network.
Fig. 2.2 O2 SMS one-time password

1 This number would include users with more than one subscription, such as a personal and business contract. So this figure would represent a slightly smaller proportion of the total population than stated.

The SIM card is a removable token allowing in principle for a degree of personal
mobility. For example, a subscriber could place their SIM card into another handset
and use it in the same manner as they would use their own phone with calls being
charged to their account. However, the majority of mobile handsets are typically
locked to individual networks, and although the SIM card is in essence an authenti-
cation token, in practice the card remains within the mobile handset throughout the
life of the handset contract – removing any additional security that might be provided
provided
by a token-based authentication technique. Indeed, the lack of using the SIM as an
authentication token has resulted in many manufacturers placing the SIM cardholder
in inaccessible areas on the device, for infrequent removal.
Fig. 2.3 Google Authenticator
Fig. 2.4 Terminal-network security protocol
Interestingly, the purpose of the IMSI and TMSI is to authenticate the SIM card
itself on the network, and they do not ensure that the person using the phone is
actually the registered subscriber. This is typically achieved at switch on using the
PIN, although some manufacturers also have the PIN mechanism when you take the
mobile out of a stand-by mode. As such, a weakness of the point-of-entry system is
that, after the handset is switched on, the device is vulnerable to misuse should it be
left unattended or stolen.
In addition to the PIN associated with the SIM card, mobile phones also have
authentication mechanisms for the device itself. Whether the user is asked for the
SIM-based password, the handset-based password or even both depends upon indi-
vidual handsets and their configuration. The nature of the handset authentication
can vary but is typically either a PIN or alphanumeric password on devices that
support keyboards. Whilst the mobile phone has the opportunity to take advantage
of the stronger two-factor authentication (token and password), practical use of the
device on a day-to-day basis has removed the token aspect and minimised the effec-
tiveness of the secret-knowledge approach. A survey involving 297 participants found
that 85% of them left their phone on for more than 10 h a day – either switching it
on at the start of the day or leaving the device switched on continuously (Clarke and
Furnell 2005).
More recently, a few handset operators and manufacturers have identified the
need to provide more secure authentication mechanisms. For instance, NTT DoCoMo
F505i handset and Toshiba’s G910 come equipped with a built-in fingerprint sensor,
providing biometric authentication of the user (NTT DoCoMo 2003; Toshiba 2010).
Although fingerprint technology increases the level of security available to the hand-
set, the implementation of this mechanism has increased handset cost, and even then
the technique remains point-of-entry only and intrusive to the subscriber.
More notably, however, whilst the original concept of the PIN for first-generation
mobile phones may have been appropriate – given the risk associated with lost/sto-
len devices and the information they stored – from the third generation (3G) and
beyond, mobile phones offer a completely different value proposition. It can be
argued that handsets represent an even greater enticement for criminals because:
1. More technologically advanced mobile handsets – handsets are far more advanced
than previous mobile phones and are more expensive and subsequently attractive
to theft, resulting in a financial loss to the subscriber.
2. Availability of data services – networks provide the user with the ability to down-
load and purchase a whole range of data services and products that can be charged
to the subscriber’s account. Additionally, networks can provide access to bank
accounts, share trading and making micro-payments. Theft and misuse of the
handset would result in financial loss for the subscriber.
3. Personal Information – handsets are able to store much more information than
previous handsets. Contact lists not only include name and number but addresses,
dates of birth and other personal information. Handsets may also be able to
access personal medical records and home intranets, and their misuse would
result in a personal and financial loss for the subscriber.
These additional threats were recognised by the architects of 3G networks. The
3GPP devised a set of standards concerning security on 3G handsets. In a document
called ‘3G – Security Threats and Requirements’ (3GPP 1999) the requirements for
authentication state:
It shall be possible for service providers to authenticate users at the start of, and during,
service delivery to prevent intruders from obtaining unauthorised access to 3G services by
masquerade or misuse of priorities.
The important consequence of this standard is to authenticate subscribers during
service delivery, an extension of the 2G point-of-entry authentication approach,
which requires continuous monitoring and authentication. However, network
operators, on the whole, have done little to improve authentication security, let alone
provide a mechanism for making it continuous. Even with the advent and deploy-
ment of 4G networks in several countries, the process of user authentication has
remained the same.
In comparison to passwords and tokens, biometrics has quite a different history
of use with its initial primary area of application within law enforcement. Sir Francis
Galton undertook some of the first research into using fingerprints to uniquely
identify people, but Sir Edward Henry is credited with developing that research for
use within law enforcement in the 1890s – known as the Henry Classification
System (IBG 2003). This initial work provided the foundation for understanding
the discriminative nature of human characteristics. However, it is not until the
1960s that biometric systems, as defined by the modern definition, began to be
developed. Some of the initial work was focused upon developing automated
approaches to replace the paper-based fingerprint searching law enforcement agencies
had to undertake.
As computing power improved throughout the 1970s, significant advances in
biometrics were made, with a variety of research being published throughout
this period on new biometric approaches. Early on, approaches such as speaker,
face, iris and signature were all identified as techniques that would yield positive
results. Whilst early systems were developed and implemented through the 1980s
and 1990s, it was not until 1999 that the FBI’s Integrated Automated Fingerprint
Identification System (IAFIS) became operational (FBI 2011), thus illustrating
that large-scale biometric systems are not simple to design and implement in
practice.
With respect to its use within or by organisations, biometrics was initially more commonly applied to physical access control than to logical access control. Hand
geometry found early applications in time and attendance systems. The marketplace
was also dominated by vendors providing bespoke solutions to clients. It simply
wasn’t possible to purchase off-the-shelf enterprise solutions for biometrics; they
had to be individually designed. Only 9% of respondents from the 2001 Computer
Crime and Abuse Survey had implemented biometric systems (Power 2001).
Significant advances have been made in the last 10 years with the development of interoperability standards, enabling a move away from dedicated bespoke systems towards solutions that offer customers choice and a flexible upgrade path; these efforts demonstrate the increasing maturity of the domain.
Cavalieri was able to effect numerous integrations relating to the areas of portions
of conic sections and the volumes generated by the revolution of
these portions about various axes. At a later date, and partly in
answer to an attack made upon him by Paul Guldin, Cavalieri
published a treatise entitled Exercitationes geometricae sex (1647),
in which he adapted his method to the determination of centres of
gravity, in particular for solids of variable density.
Among the results which he obtained is that which we should
now write
$$\int_0^x x^m\,dx = \frac{x^{m+1}}{m+1}, \qquad (m\ \text{integral}).$$
He regarded the problem thus solved as that of determining the
sum of the mth powers of all the lines drawn across a
parallelogram parallel to one of its sides.
At this period scientific investigators communicated their results to
one another through one or more intermediate persons. Such
intermediaries were Pierre de Carcavy and Pater Marin Mersenne;
and among the writers thus in communication
were Bonaventura Cavalieri, Christiaan Huygens,
Galileo Galilei, Giles Personnier de Roberval,
Pierre de Fermat, Evangelista Torricelli, and a little
later Blaise Pascal; but the letters of Carcavy or Mersenne would
probably come into the hands of any man who was likely to be
interested in the matters discussed. It often happened that, when
some new method was invented, or some new result obtained, the
method or result was quickly known to a wide circle, although it
might not be printed until after the lapse of a long time. When
Cavalieri was printing his two treatises there was much discussion of
the problem of quadratures. Roberval (1634) regarded an area as
made up of “infinitely” many “infinitely” narrow strips, each of which
may be considered to be a rectangle, and he had similar ideas in
regard to lengths and volumes. He knew how to approximate to the
quantity which we express by $\int_0^1 x^m\,dx$ by the process of forming the sum
$$\frac{0^m + 1^m + 2^m + \cdots + (n-1)^m}{n^{m+1}},$$
and he claimed to be able to prove that this sum tends to 1/(m + 1),
as n increases for all positive integral values of m. The method of
integrating xm by forming this sum was found also by Fermat (1636),
who stated expressly that he arrived at it by
generalizing a method employed by Archimedes
(for the cases m = 1 and m = 2) in his books on
Conoids and Spheroids and on Spirals (see T. L.
Heath, The Works of Archimedes, Cambridge,
1897). Fermat extended the result to the case where m is fractional
(1644), and to the case where m is negative. This latter extension
and the proofs were given in his memoir, Proportionis geometricae in
quadrandis parabolis et hyperbolis usus, which appears to have
received a final form before 1659, although not published until 1679.
Fermat did not use fractional or negative indices, but he regarded
his problems as the quadratures of parabolas and hyperbolas of
various orders. His method was to divide the interval of integration
into parts by means of intermediate points the abscissae of which
are in geometric progression. In the process of § 5 above, the points
M must be chosen according to this rule. This restrictive condition
being understood, we may say that Fermat’s formulation of the
problem of quadratures is the same as our definition of a definite
integral.
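Fermat's procedure can be imitated numerically. In the following sketch (an editorial illustration, not part of the original text) the interval from 0 to 1 is divided at abscissae in geometric progression, the strips are replaced by rectangles, and the sum approaches 1/(m + 1) as the common ratio tends to 1:

```python
# Sketch of Fermat's quadrature of y = x**m on [0, 1]: the interval is divided at
# abscissae 1, r, r**2, ... (0 < r < 1), which form a geometric progression, and each
# strip is replaced by a rectangle whose height is the ordinate at its right-hand end.
# As r tends to 1 the sum tends to 1/(m + 1).
def fermat_quadrature(m, r, terms=10000):
    total = 0.0
    for k in range(terms):
        x_right = r ** k        # right-hand abscissa of the k-th strip
        x_left = r ** (k + 1)   # left-hand abscissa
        total += (x_right ** m) * (x_right - x_left)
    return total

for r in (0.9, 0.99, 0.999):
    print(r, fermat_quadrature(3, r))   # approaches 1/4 as r -> 1
```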
The result that the problem of quadratures could be solved for any
curve whose equation could be expressed in the form
$$y = x^m \qquad (m \neq -1),$$
or in the form
$$y = a_1 x^{m_1} + a_2 x^{m_2} + \cdots + a_n x^{m_n},$$
where none of the indices is equal to −1, was used by John Wallis in
his Arithmetica infinitorum (1655) as well as by
Fermat (1659). The case in which m = −1 was
that of the ordinary rectangular hyperbola; and
Gregory of St Vincent in his Opus geometricum
quadraturae circuli et sectionum coni (1647) had proved by the
method of exhaustions that the area contained between the curve,
one asymptote, and two ordinates parallel to the other asymptote,
increases in arithmetic progression as the distance between the
ordinates (the one nearer to the centre being kept fixed) increases in
geometric progression. Fermat described his method of integration
as a logarithmic method, and thus it is clear that the relation
between the quadrature of the hyperbola and logarithms was
understood although it was not expressed analytically. It was not
very long before the relation was used for the calculation of
logarithms by Nicolaus Mercator in his Logarithmotechnia (1668). He
began by writing the equation of the curve in the form y = 1/(1 +
x), expanded this expression in powers of x by the method of
division, and integrated it term by term in accordance with the well-
understood rule for finding the quadrature of a curve given by such
an equation as that written at the foot of p. 325.
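In modern notation (a restatement added for clarity, not part of the original text) the step described here is
$$\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots, \qquad \log(1+x) = \int_0^x \frac{dt}{1+t} = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots.$$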
By the middle of the 17th century many mathematicians could
perform integrations. Very many particular results had been
obtained, and applications of them had been
made to the quadrature of the circle and other
conic sections, and to various problems
concerning the lengths of curves, the areas they
enclose, the volumes and superficial areas of
solids, and centres of gravity. A systematic
account of the methods then in use was given, along with much that
was original on his part, by Blaise Pascal in his Lettres de Amos
Dettonville sur quelques-unes de ses inventions en géométrie
(1659).
16. The problem of maxima and minima and the problem of
tangents had also by the same time been effectively solved. Oresme
in the 14th century knew that at a point where the ordinate of a
curve is a maximum or a minimum its variation
from point to point of the curve is slowest; and
Kepler in the Stereometria doliorum remarked
that at the places where the ordinate passes from
a smaller value to the greatest value and then
again to a smaller value, its variation becomes insensible. Fermat in
1629 was in possession of a method which he then communicated to
one Despagnet of Bordeaux, and which he referred to in a letter to
Roberval of 1636. He communicated it to René Descartes early in
1638 on receiving a copy of Descartes’s Géométrie (1637), and with
it he sent to Descartes an account of his methods for solving the
problem of tangents and for determining centres of gravity.
Fig. 6.
Fermat’s method for maxima and
minima is essentially our method.
Expressed in a more modern notation,
what he did was to begin by connecting
the ordinate y and the abscissa x of a
point of a curve by an equation which
holds at all points of the curve, then to
subtract the value of y in terms of x from the value obtained by
substituting x + E for x, then to divide the difference by E, to
put E = 0 in the quotient, and to equate the quotient to zero.
Thus he differentiated with respect to x and equated the
differential coefficient to zero.
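As a modern illustration of the procedure (an editorial sketch, not one of Fermat's own examples as reported here), consider dividing a length a into two parts x and a − x whose product is greatest. With $f(x) = x(a - x)$,
$$\frac{f(x+E) - f(x)}{E} = \frac{(x+E)(a-x-E) - x(a-x)}{E} = a - 2x - E,$$
and putting E = 0 and equating the quotient to zero gives $x = \tfrac{1}{2}a$.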
Fermat’s method for solving the problem of tangents may be
explained as follows:—Let (x, y) be the coordinates of a point P
of a curve, (x′, y′), those of a neighbouring point P′ on the
tangent at P, and let MM′ = E (fig. 6).
From the similarity of the triangles P′TM′, PTM we have
y′ : A − E = y : A,
where A denotes the subtangent TM. The point P′ being near
the curve, we may substitute in the equation of the curve x − E
for x and (yA − yE)/A for y. The equation of the curve is
approximately satisfied. If it is taken to be satisfied exactly, the
result is an equation of the form φ(x, y, A, E) = 0, the left-hand
member of which is divisible by E. Omitting the factor E, and
putting E = 0 in the remaining factor, we have an equation
which gives A. In this problem of tangents also Fermat found the
required result by a process equivalent to differentiation.
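The tangent procedure can be checked symbolically. The following sketch (an editorial illustration; the parabola y = x² is an assumed example, not one taken from the text) carries out the steps just described with sympy:

```python
import sympy as sp

x, y, A, E = sp.symbols('x y A E')

# The curve written as F(x, y) = 0; here the parabola y = x**2 (an assumed example).
F = y - x**2

# Replace x by x - E and y by y*(A - E)/A for the neighbouring point of the tangent,
# A denoting the subtangent TM.
G = F.subs([(x, x - E), (y, y * (A - E) / A)])

# The point (x, y) lies on the curve, so y may be replaced by x**2.
G = G.subs(y, x**2)

# Clear the denominator A, divide by E, then put E = 0 and solve for A.
G = sp.cancel(sp.expand(G * A) / E)
subtangent = sp.solve(sp.Eq(G.subs(E, 0), 0), A)
print(subtangent)   # [x/2], agreeing with the modern value y/(dy/dx) = x**2/(2x)
```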
Fig. 7.
Fermat gave several examples of the application of his method;
among them was one in which he showed that he could differentiate
very complicated irrational functions. For such functions his method
was to begin by obtaining a rational equation. In rationalizing
equations Fermat, in other writings, used the device of introducing
new variables, but he did not use this device to simplify the process
of differentiation. Some of his results were published by Pierre
Hérigone in his Supplementum cursus mathematici (1642). His
communication to Descartes was not published in full until after his
death (Fermat, Opera varia, 1679). Methods similar to Fermat’s were
devised by René de Sluse (1652) for tangents, and by Johannes
Hudde (1658) for maxima and minima. Other methods for the
solution of the problem of tangents were devised by Roberval and
Torricelli, and published almost simultaneously in 1644. These
methods were founded upon the composition of motions, the theory
of which had been taught by Galileo (1638), and, less completely, by
Roberval (1636). Roberval and Torricelli could construct the tangents
of many curves, but they did not arrive at Fermat’s artifice. This
artifice is that which we have noted in § 10 as the fundamental
artifice of the infinitesimal calculus.
17. Among the comparatively few
mathematicians who before 1665 could
perform differentiations was Isaac
Barrow. In his book entitled Lectiones
opticae et geometricae, written
apparently in 1663, 1664, and published
in 1669, 1670, he gave a method of
tangents like that of Roberval and Torricelli, compounding two
velocities in the directions of the axes of x and y to obtain a
resultant along the tangent to a curve. In an appendix to this book
he gave another method which differs from
Fermat’s in the introduction of a differential
equivalent to our dy as well as dx. Two
neighbouring ordinates PM and QN of a curve
(fig. 7) are regarded as containing an indefinitely
small (indefinite parvum) arc, and PR is drawn parallel to the axis of
x. The tangent PT at P is regarded as identical with the secant PQ,
and the position of the tangent is determined by the similarity of the
triangles PTM, PQR. The increments QR, PR of the ordinate and
abscissa are denoted by a and e; and the ratio of a to e is
determined by substituting x + e for x and y + a for y in the
equation of the curve, rejecting all terms which are of order higher
than the first in a and e, and omitting the terms which do not
contain a or e. This process is equivalent to differentiation. Barrow
appears to have invented it himself, but to have put it into his book
at Newton’s request. The triangle PQR is sometimes called “Barrow’s
differential triangle.”
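For example (an editorial restatement using the parabola y = x², not one of Barrow's own examples), substituting x + e for x and y + a for y in the equation of that curve gives
$$y + a = (x + e)^2 = x^2 + 2xe + e^2,$$
so that, omitting the terms which do not contain a or e and rejecting the term of order higher than the first, $a = 2xe$ and $a/e = 2x$; the similar triangles then give the subtangent $TM = y\,e/a = \tfrac{1}{2}x$.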
The reciprocal relation between differentiation and integration
(§ 6) was first observed explicitly by Barrow in the book cited
above. If the quadrature of a curve y = ƒ(x) is known, so that
the area up to the ordinate x is given by F(x), the curve y = F(x)
can be drawn, and Barrow showed that the
subtangent of this curve is measured by the
ratio of its ordinate to the ordinate of the
original curve. The curve y = F(x) is often
called the “quadratrix” of the original curve;
and the result has been called “Barrow’s inversion-theorem.” He
did not use it as we do for the determination of quadratures, or
indefinite integrals, but for the solution of problems of the kind
which were then called “inverse problems of tangents.” In these
problems it was sought to determine a curve from some
property of its tangent, e.g. the property that the subtangent is
proportional to the square of the abscissa. Such problems are
now classed under “differential equations.” When Barrow wrote,
quadratures were familiar and differentiation unfamiliar, just as
hyperbolas were trusted while logarithms were strange. The
functional notation was not invented till long afterwards (see
Function), and the want of it is felt in reading all the
mathematics of the 17th century.
18. The great secret which afterwards came to be called the
“infinitesimal calculus” was almost discovered by Fermat, and still
more nearly by Barrow. Barrow went farther than Fermat in the
theory of differentiation, though not in the
practice, for he compared two increments; he
went farther in the theory of integration, for he
obtained the inversion-theorem. The great
discovery seems to consist partly in the
recognition of the fact that differentiation, known
to be a useful process, could always be
performed, at least for the functions then known, and partly in the
recognition of the fact that the inversion-theorem could be applied
to problems of quadrature. By these steps the problem of tangents
could be solved once for all, and the operation of integration, as we
call it, could be rendered systematic. A further step was necessary in
order that the discovery, once made, should become accessible to
mathematicians in general; and this step was the introduction of a
suitable notation. The definite abandonment of the old tentative
methods of integration in favour of the method in which this
operation is regarded as the inverse of differentiation was especially
the work of Isaac Newton; the precise formulation of simple rules for
the process of differentiation in each special case, and the
introduction of the notation which has proved to be the best, were
especially the work of Gottfried Wilhelm Leibnitz. This statement
remains true although Newton invented a systematic notation, and
practised differentiation by rules equivalent to those of Leibnitz,
before Leibnitz had begun to work upon the subject, and Leibnitz
effected integrations by the method of recognizing differential
coefficients before he had had any opportunity of becoming
acquainted with Newton’s methods.
19. Newton was Barrow’s pupil, and he knew to start with in 1664
all that Barrow knew, and that was practically all that was known
about the subject at that time. His original thinking on the subject
dates from the year of the great plague (1665-
1666), and it issued in the invention of the
“Calculus of Fluxions,” the principles and methods
of which were developed by him in three tracts
entitled De analysi per aequationes numero terminorum infinitas,
Methodus fluxionum et serierum infinitarum, and De quadratura
curvarum. None of these was published until long after they were
written. The Analysis per aequationes was composed in 1666, but
not printed until 1711, when it was published by William Jones. The
Methodus fluxionum was composed in 1671 but not printed till 1736,
nine years after Newton’s death, when an English translation was
published by John Colson. In Horsley’s edition of Newton’s works it
bears the title Geometria analytica. The Quadratura appears to have
been composed in 1676, but was first printed in 1704 as an
appendix to Newton’s Opticks.
20. The tract De Analysi per aequationes ... was sent by
Newton to Barrow, who sent it to John Collins with a request
that it might be made known. One way of making it known
would have been to print it in the Philosophical Transactions of
the Royal Society, but this course was not
adopted. Collins made a copy of the tract and
sent it to Lord Brouncker, but neither of them
brought it before the Royal Society. The tract
contains a general proof of Barrow’s
inversion-theorem which is the same in principle as that in § 6
above. In this proof and elsewhere in the tract a notation is
introduced for the momentary increment (momentum) of the
abscissa or area of a curve; this “moment” is evidently meant to
represent a moment of time, the abscissa representing time, and
it is effectively the same as our differential element—the thing
that Fermat had denoted by E, and Barrow by e, in the case of
the abscissa. Newton denoted the moment of the abscissa by o,
that of the area z by ov. He used the letter v for the ordinate y,
thus suggesting that his curve is a velocity-time graph such as
Galileo had used. Newton gave the formula for the area of a
curve $v = x^m$ ($m \neq -1$) in the form $z = x^{m+1}/(m+1)$. In the proof he transformed this formula to the form $z^n = c^n x^p$, where n and p are positive integers, substituted $x + o$ for x and $z + ov$ for z, and expanded by the binomial theorem for a positive integral exponent, thus obtaining the relation
$$z^n + nz^{n-1}ov + \ldots = c^n(x^p + px^{p-1}o + \ldots),$$
from which he deduced the relation
$$nz^{n-1}v = c^n p x^{p-1}$$
by omitting the equal terms $z^n$ and $c^n x^p$ and dividing the remaining terms by o, tacitly putting o = 0 after division. This relation is the same as $v = x^m$. Newton pointed out that, conversely, from the relation $v = x^m$ the relation $z = x^{m+1}/(m+1)$ follows. He applied his formula to the quadrature of curves whose ordinates can be expressed as the sum of a finite number of terms of the form $ax^m$; and gave examples of its application to curves in which the ordinate is expressed by an infinite series, using for this purpose the binomial theorem for negative and fractional exponents, that is to say, the expansion of $(1 + x)^n$ in an infinite series of powers of x. This theorem he had
discovered; but he did not in this tract state it in a general form
or give any proof of it. He pointed out, however, how it may be
used for the solution of equations by means of infinite series. He
observed also that all questions concerning lengths of curves,
volumes enclosed by surfaces, and centres of gravity, can be
formulated as problems of quadratures, and can thus be solved
either in finite terms or by means of infinite series. In the
Quadratura (1676) the method of integration which is founded
upon the inversion-theorem was carried out systematically.
Among other results there given is the quadrature of curves
expressed by equations of the form y = xn (a + bxm)p; this has
passed into text-books under the title “integration of binomial
differentials” (see § 49). Newton announced the result in letters
to Collins and Oldenburg of 1676.
21. In the Methodus fluxionum (1671) Newton introduced his
characteristic notation. He regarded variable quantities as
generated by the motion of a point, or line, or plane, and called
the generated quantity a “fluent” and its rate of generation a
“fluxion.” The fluxion of a fluent x is represented by ẋ, and its moment, or “infinitely” small increment accruing in an “infinitely” short time, is represented by ẋo. The
problems of the calculus are stated to be (i.)
to find the velocity at any time when the
distance traversed is given; (ii.) to find the
distance traversed when the velocity is given.
The first of these leads to differentiation. In any rational
equation containing x and y the expressions x + ẋo and y +ẏo
are to be substituted for x and y, the resulting equation is to be
divided by o, and afterwards o is to be omitted. In the case of
irrational functions, or rational functions which are not integral,
new variables are introduced in such a way as to make the
equations contain rational integral terms only. Thus Newton’s
rules of differentiation would be in our notation the rules (i.),
(ii.), (v.) of § 11, together with the particular result which we
write
$$\frac{d\,x^m}{dx} = mx^{m-1}, \qquad (m\ \text{integral}),$$
a result which Newton obtained by expanding $(x + \dot{x}o)^m$ by the
binomial theorem. The second problem is the problem of
integration, and Newton’s method for solving it was the method
of series founded upon the particular result which we write
$$\int x^m\,dx = \frac{x^{m+1}}{m+1}.$$
Newton added applications of his methods to maxima and
minima, tangents and curvature. In a letter to Collins of date
1672 Newton stated that he had certain methods, and he
described certain results which he had found by using them.
These methods and results are those which are to be found in
the Methodus fluxionum; but the letter makes no mention of
fluxions and fluents or of the characteristic notation. The rule for
tangents is said in the letter to be analogous to de Sluse’s, but
to be applicable to equations that contain irrational terms.
22. Newton gave the fluxional notation also in the tract De
Quadratura curvarum (1676), and he there added to it notation
for the higher differential coefficients and for indefinite integrals,
as we call them. Just as $x, y, z, \ldots$ are fluents of which $\dot{x}, \dot{y}, \dot{z}, \ldots$ are the fluxions, so $\dot{x}, \dot{y}, \dot{z}, \ldots$ can be treated as fluents of which the fluxions may be denoted by $\ddot{x}, \ddot{y}, \ddot{z}, \ldots$. In like manner the fluxions of these may be denoted by $\dddot{x}, \dddot{y}, \dddot{z}, \ldots$, and so on. Again $x, y, z, \ldots$ may be regarded as fluxions of which the fluents may be denoted by $\acute{x}, \acute{y}, \acute{z}, \ldots$, and these again as fluxions of other quantities denoted by $\acute{\acute{x}}, \acute{\acute{y}}, \acute{\acute{z}}, \ldots$, and so on. No use was made of the notation $\acute{x}, \acute{\acute{x}}, \ldots$ in the
course of the tract. The first publication of the fluxional notation
was made by Wallis in the second edition of his Algebra (1693)
in the form of extracts from communications made to him by
Newton in 1692. In this account of the method the symbols o, $\dot{x}$, $\ddot{x}$, ... occur, but not the symbols $\acute{x}, \acute{\acute{x}}, \ldots$. Wallis’s treatise also
contains Newton’s formulation of the problems of the calculus in
the words Data aequatione fluentes quotcumque quantitates
involvente fluxiones invenire et vice versa (“an equation
containing any number of fluent quantities being given, to find
their fluxions and vice versa”). In the Philosophiae naturalis
principia mathematica (1687), commonly called the “Principia,”
the words “fluxion” and “moment” occur in a lemma in the
second book; but the notation which is characteristic of the
calculus of fluxions is nowhere used.
23. It is difficult to account for the fragmentary manner of
publication of the Fluxional Calculus and for the long delays which
took place. At the time (1671) when Newton composed the
Methodus fluxionum he contemplated bringing
out an edition of Gerhard Kinckhuysen’s treatise
on algebra and prefixing his tract to this treatise.
In the same year his “Theory of Light and
Colours” was published in the Philosophical
Transactions, and the opposition which it excited
led to the abandonment of the project with regard to fluxions. In
1680 Collins sought the assistance of the Royal Society for the
publication of the tract, and this was granted in 1682. Yet it
remained unpublished. The reason is unknown; but it is known that
about 1679, 1680, Newton took up again the studies in natural
philosophy which he had intermitted for several years, and that in
1684 he wrote the tract De motu which was in some sense a first
draft of the Principia, and it may be conjectured that the fluxions
were held over until the Principia should be finished. There is also
reason to think that Newton had become dissatisfied with the
arguments about infinitesimals on which his calculus was based. In
the preface to the De quadratura curvarum (1704), in which he
describes this tract as something which he once wrote (“olim
scripsi”) he says that there is no necessity to introduce into the
method of fluxions any argument about infinitely small quantities;
and in the Principia (1687) he adopted instead of the method of
fluxions a new method, that of “Prime and Ultimate Ratios.” By the
aid of this method it is possible, as Newton knew, and as was
afterwards seen by others, to found the calculus of fluxions on an
irreproachable method of limits. For the purpose of explaining his
discoveries in dynamics and astronomy Newton used the method of
limits only, without the notation of fluxions, and he presented all his
results and demonstrations in a geometrical form. There is no doubt
that he arrived at most of his theorems in the first instance by using
the method of fluxions. Further evidence of Newton’s dissatisfaction
with arguments about infinitely small quantities is furnished by his
tract Methodus differentialis, published in 1711 by William Jones, in
which he laid the foundations of the “Calculus of Finite Differences.”
24. Leibnitz, unlike Newton, was practically a self-taught
mathematician. He seems to have been first attracted to
mathematics as a means of symbolical expression, and on the
occasion of his first visit to London, early in 1673,
he learnt about the doctrine of infinite series
which James Gregory, Nicolaus Mercator, Lord
Brouncker and others, besides Newton, had used
in their investigations. It appears that he did not
on this occasion become acquainted with Collins, or see Newton’s
Analysis per aequationes, but he purchased Barrow’s Lectiones. On
returning to Paris he made the acquaintance of Huygens, who
recommended him to read Descartes’ Géométrie. He also read
Pascal’s Lettres de Dettonville, Gregory of St Vincent’s Opus
geometricum, Cavalieri’s Indivisibles and the Synopsis geometrica of
Honoré Fabri, a book which is practically a commentary on Cavalieri;
it would never have had any importance but for the influence which
it had on Leibnitz’s thinking at this critical period. In August of this
year (1673) he was at work upon the problem of tangents, and he
appears to have made out the nature of the solution—the method
involved in Barrow’s differential triangle—for himself by the aid of a
diagram drawn by Pascal in a demonstration of the formula for the
area of a spherical surface. He saw that the problem of the relation
between the differences of neighbouring ordinates and the ordinates
themselves was the important problem, and then that the solution of
this problem was to be effected by quadratures. Unlike Newton, who
arrived at differentiation and tangents through integration and areas,
Leibnitz proceeded from tangents to quadratures. When he turned
his attention to quadratures and indivisibles, and realized the nature
of the process of finding areas by summing “infinitesimal”
rectangles, he proposed to replace the rectangles by triangles having
a common vertex, and obtained by this method the result which we
write
1⁄4π = 1 − 1⁄3 + 1⁄5 − 1⁄7 + ...
In 1674 he sent an account of his method, called “transmutation,”
along with this result to Huygens, and early in 1675 he sent it to
Henry Oldenburg, secretary of the Royal Society, with inquiries as to
Newton’s discoveries in regard to quadratures. In October of 1675
he had begun to devise a symbolical notation for quadratures,
starting from Cavalieri’s indivisibles. At first he proposed to use the
word omnia as an abbreviation for Cavalieri’s “sum of all the lines,”
thus writing omnia y for that which we write “∫ ydx,” but within a
day or two he wrote “∫ y”. He regarded the symbol “∫” as
representing an operation which raises the dimensions of the subject
of operation—a line becoming an area by the operation—and he
devised his symbol “d” to represent the inverse operation, by which
the dimensions are diminished. He observed that, whereas “∫”
represents “sum,” “d” represents “difference.” His notation appears
to have been practically settled before the end of 1675, for in
November he wrote $\int y\,dy = \tfrac{1}{2}y^2$, just as we do now.
25. In July of 1676 Leibnitz received an answer to his inquiry in
regard to Newton’s methods in a letter written by Newton to
Oldenburg. In this letter Newton gave a general statement of the
binomial theorem and many results relating to
series. He stated that by means of such series he
could find areas and lengths of curves, centres of
gravity and volumes and surfaces of solids, but,
as this would take too long to describe, he would
illustrate it by examples. He gave no proofs. Leibnitz replied in
August, stating some results which he had obtained, and which, as it
seemed, could not be obtained easily by the method of series, and
he asked for further information. Newton replied in a long letter to
Oldenburg of the 24th of October 1676. In this letter he gave a
much fuller account of his binomial theorem and indicated a method
of proof. Further he gave a number of results relating to
quadratures; they were afterwards printed in the tract De quadratura
curvarum. He gave many other results relating to the computation of
natural logarithms and other calculations in which series could be
used. He gave a general statement, similar to that in the letter to
Collins, as to the kind of problems relating to tangents, maxima and
minima, &c., which he could solve by his method, but he concealed
his formulation of the calculus in an anagram of transposed letters.
The solution of the anagram was given eleven years later in the
Principia in the words we have quoted from Wallis’s Algebra. In
neither of the letters to Oldenburg does the characteristic notation of
the fluxional calculus occur, and the words “fluxion” and “fluent”
occur only in anagrams of transposed letters. The letter of October
1676 was not despatched until May 1677, and Leibnitz answered it in
June of that year. In October 1676 Leibnitz was in London, where he
made the acquaintance of Collins and read the Analysis per
aequationes, and it seems to have been supposed afterwards that
he then read Newton’s letter of October 1676, but he left London
before Oldenburg received this letter. In his answer of June 1677
Leibnitz gave Newton a candid account of his differential calculus,
nearly in the form in which he afterwards published it, and explained
how he used it for quadratures and inverse problems of tangents.
Newton never replied.
26. In the Acta eruditorum of 1684 Leibnitz published a short
memoir entitled Nova methodus pro maximis et minimis, itemque
tangentibus, quae nec fractas nec irrationales quantitates moratur, et
singulare pro illis calculi genus. In this memoir the
differential dx of a variable x, considered as the
abscissa of a point of a curve, is said to be an
arbitrary quantity, and the differential dy of a
related variable y, considered as the ordinate of
the point, is defined as a quantity which has to dx the ratio of the
ordinate to the subtangent, and rules are given for operating with
differentials. These are the rules for forming the differential of a
constant, a sum (or difference), a product, a quotient, a power (or
root). They are equivalent to our rules (i.)-(iv.) of § 11 and the
particular result
$$d(x^m) = mx^{m-1}\,dx.$$
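In modern notation (an editorial restatement, not a quotation from the memoir) these rules read
$$d(c) = 0,\qquad d(u \pm v) = du \pm dv,\qquad d(uv) = u\,dv + v\,du,\qquad d\!\left(\frac{u}{v}\right) = \frac{v\,du - u\,dv}{v^2},$$
together with the particular result for a power quoted above.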
The rule for a function of a function is not stated explicitly but is
illustrated by examples in which new variables are introduced, in
much the same way as in Newton’s Methodus fluxionum. In
connexion with the problem of maxima and minima, it is noted that
the differential of y is positive or negative according as y increases
or decreases when x increases, and the discrimination of maxima
from minima depends upon the sign of ddy, the differential of dy. In
connexion with the problem of tangents the differentials are said to
be proportional to the momentary increments of the abscissa and
ordinate. A tangent is defined as a line joining two “infinitely” near
points of a curve, and the “infinitely” small distances (e.g., the
distance between the feet of the ordinates of such points) are said
to be expressible by means of the differentials (e.g., dx). The
method is illustrated by a few examples, and one example is given of
its application to “inverse problems of tangents.” Barrow’s inversion-
theorem and its application to quadratures are not mentioned. No
proofs are given, but it is stated that they can be obtained easily by
any one versed in such matters. The new methods in regard to
differentiation which were contained in this memoir were the use of
the second differential for the discrimination of maxima and minima,
and the introduction of new variables for the purpose of
differentiating complicated expressions. A greater novelty was the
use of a letter (d), not as a symbol for a number or magnitude, but
as a symbol of operation. None of these novelties account for the
far-reaching effect which this memoir has had upon the development
of mathematical analysis. This effect was a consequence of the
simplicity and directness with which the rules of differentiation were
stated. Whatever indistinctness might be felt to attach to the
symbols, the processes for solving problems of tangents and of
maxima and minima were reduced once for all to a definite routine.
27. This memoir was followed in 1686 by a second, entitled De
Geometria recondita et analysi indivisibilium atque infinitorum, in
which Leibnitz described the method of using his new differential
calculus for the problem of quadratures. This was
the first publication of the notation ∫ ydx. The
new method was called calculus summatorius.
The brothers Jacob (James) and Johann (John)
Bernoulli were able by 1690 to begin to make substantial
contributions to the development of the new calculus, and Leibnitz
adopted their word “integral” in 1695, they at the same time
adopting his symbol “∫.” In 1696 the marquis de l’Hospital published
the first treatise on the differential calculus with the title Analyse des
infiniment petits pour l’intelligence des lignes courbes. The few
references to fluxions in Newton’s Principia (1687) must have been
quite unintelligible to the mathematicians of the time, and the
publication of the fluxional notation and calculus by Wallis in 1693
was too late to be effective. Fluxions had been supplanted before
they were introduced.
The differential calculus and the integral calculus were rapidly
developed in the writings of Leibnitz and the Bernoullis. Leibnitz
(1695) was the first to differentiate a logarithm and an exponential,
and John Bernoulli was the first to recognize the property possessed
by an exponential ($a^x$) of becoming infinitely great in comparison with any power ($x^n$) when x is increased indefinitely. Roger Cotes
(1722) was the first to differentiate a trigonometrical function. A
great development of infinitesimal methods took place through the
founding in 1696-1697 of the “Calculus of Variations” by the brothers
Bernoulli.
28. The famous dispute as to the priority of Newton and Leibnitz
in the invention of the calculus began in 1699 through the
publication by Nicolas Fatio de Duillier of a tract in which he stated
that Newton was not only the first, but by many
years the first inventor, and insinuated that
Leibnitz had stolen it. Leibnitz in his reply (Acta
Eruditorum, 1700) cited Newton’s letters and the
testimony which Newton had rendered to him in
the Principia as proofs of his independent authorship of the method.
Leibnitz was especially hurt at what he understood to be an
endorsement of Duillier’s attack by the Royal Society, but it was
explained to him that the apparent approval was an accident. The
dispute was ended for a time. On the publication of Newton’s tract
De quadratura curvarum, an anonymous review of it, written, as has
since been proved, by Leibnitz, appeared in the Acta Eruditorum,
1705. The anonymous reviewer said: “Instead of the Leibnitzian
differences Newton uses and always has used fluxions ... just as
Honoré Fabri in his Synopsis Geometrica substituted steps of
movements for the method of Cavalieri.” This passage, when it
became known in England, was understood not merely as belittling
Newton by comparing him with the obscure Fabri, but also as
implying that he had stolen his calculus of fluxions from Leibnitz.
Great indignation was aroused; and John Keill took occasion, in a
memoir on central forces which was printed in the Philosophical
Transactions for 1708, to affirm that Newton was without doubt the
first inventor of the calculus, and that Leibnitz had merely changed
the name and mode of notation. The memoir was published in 1710.
Leibnitz wrote in 1711 to the secretary of the Royal Society (Hans
Sloane) requiring Keill to retract his accusation. Leibnitz’s letter was
read at a meeting of the Royal Society, of which Newton was then
president, and Newton made to the society a statement of the
course of his invention of the fluxional calculus with the dates of
particular discoveries. Keill was requested by the society “to draw up
an account of the matter under dispute and set it in a just light.” In
his report Keill referred to Newton’s letters of 1676, and said that
Newton had there given so many indications of his method that it
could have been understood by a person of ordinary intelligence.
Leibnitz wrote to Sloane asking the society to stop these unjust
attacks of Keill, asserting that in the review in the Acta Eruditorum
no one had been injured but each had received his due, submitting
the matter to the equity of the Royal Society, and stating that he
was persuaded that Newton himself would do him justice. A
committee was appointed by the society to examine the documents
and furnish a report. Their report, presented in April 1712,
concluded as follows:
“The differential method is one and the same with the method
of fluxions, excepting the name and mode of notation; Mr
Leibnitz calling those quantities differences which Mr Newton
calls moments or fluxions, and marking them with the letter d, a
mark not used by Mr Newton. And therefore we take the proper
question to be, not who invented this or that method, but who
was the first inventor of the method; and we believe that those
who have reputed Mr Leibnitz the first inventor, knew little or
nothing of his correspondence with Mr Collins and Mr Oldenburg
long before; nor of Mr Newton’s having that method above
fifteen years before Mr. Leibnitz began to publish it in the Acta
Eruditorum of Leipzig. For which reasons we reckon Mr Newton
the first inventor, and are of opinion that Mr Keill, in asserting
the same, has been no ways injurious to Mr Leibnitz.”
The report with the letters and other documents was printed
(1712) under the title Commercium Epistolicum D. Johannis Collins
et aliorum de analysi promota, jussu Societatis Regiae in lucem
editum, not at first for publication. An account of the contents of the
Commercium Epistolicum was printed in the Philosophical
Transactions for 1715. A second edition of the Commercium
Epistolicum was published in 1722. The dispute was continued for
many years after the death of Leibnitz in 1716. To translate the
words of Moritz Cantor, it “redounded to the discredit of all
concerned.”
29. One lamentable consequence of the dispute was a severance
of British methods from continental ones. In Great Britain it became
a point of honour to use fluxions and other Newtonian methods,
while on the continent the notation of Leibnitz
was universally adopted. This severance did not
at first prevent a great advance in mathematics in
Great Britain. So long as attention was directed to
problems in which there is but one independent
variable (the time, or the abscissa of a point of a
curve), and all the other variables depend upon this one, the
fluxional notation could be used as well as the differential and
integral notation, though perhaps not quite so easily. Up to about
the middle of the 18th century important discoveries continued to be
made by the use of the method of fluxions. It was the introduction
of partial differentiation by Leonhard Euler (1734) and Alexis Claude
Clairaut (1739), and the developments which followed upon the
systematic use of partial differential coefficients, which led to Great
Britain being left behind; and it was not until after the reintroduction
of continental methods into England by Sir John Herschel, George
Peacock and Charles Babbage in 1815 that British mathematics
began to flourish again. The exclusion of continental mathematics
from Great Britain was not accompanied by any exclusion of British
mathematics from the continent. The discoveries of Brook Taylor and
Colin Maclaurin were absorbed into the rapidly growing continental
analysis, and the more precise conceptions reached through a critical
scrutiny of the true nature of Newton’s fluxions and moments
stimulated a like scrutiny of the basis of the method of differentials.
30. This method had met with opposition from the first. Christiaan
Huygens, whose opinion carried more weight than that of any other
scientific man of the day, declared that the employment of
differentials was unnecessary, and that Leibnitz’s
second differential was meaningless (1691). A
Dutch physician named Bernhard Nieuwentijt
attacked the method on account of the use of
quantities which are at one stage of the process treated as
somethings and at a later stage as nothings, and he was especially
severe in commenting upon the second and higher differentials
(1694, 1695). Other attacks were made by Michel Rolle (1701), but
they were directed rather against matters of detail than against the
general principles. The fact is that, although Leibnitz in his answers
to Nieuwentijt (1695), and to Rolle (1702), indicated that the
processes of the calculus could be justified by the methods of the
ancient geometry, he never expressed himself very clearly on the
subject of differentials, and he conveyed, probably without intending
it, the impression that the calculus leads to correct results by
compensation of errors. In England the method of fluxions had to
face similar attacks. George Berkeley, bishop and philosopher, wrote
in 1734 a tract entitled The Analyst; or a Discourse addressed to an
Infidel Mathematician, in which he proposed to destroy the
presumption that the opinions of mathematicians
in matters of faith are likely to be more
trustworthy than those of divines, by contending
that in the much vaunted fluxional calculus there
are mysteries which are accepted unquestioningly by the
mathematicians, but are incapable of logical demonstration.
Berkeley’s criticism was levelled against all infinitesimals, that is to
say, all quantities vaguely conceived as in some intermediate state
between nullity and finiteness, as he took Newton’s moments to be
conceived. The tract occasioned a controversy which had the
important consequence of making it plain that all arguments about
infinitesimals must be given up, and the calculus must be founded
on the method of limits. During the controversy Benjamin Robins
gave an exceedingly clear explanation of Newton’s theories of
fluxions and of prime and ultimate ratios regarded as theories of
limits. In this explanation he pointed out that Newton’s moment
(Leibnitz’s “differential”) is to be regarded as so much of the actual
difference between two neighbouring values of a variable as is
needful for the formation of the fluxion (or differential coefficient)
(see G. A. Gibson, “The Analyst Controversy,” Proc. Math. Soc.,
Edinburgh, xvii., 1899). Colin Maclaurin published in 1742 a Treatise
of Fluxions, in which he reduced the whole theory to a theory of
limits, and demonstrated it by the method of Archimedes. This
notion was gradually transferred to the continental mathematicians.
Leonhard Euler in his Institutiones Calculi differentialis (1755) was
reduced to the position of one who asserts that all differentials are
zero, but, as the product of zero and any finite quantity is zero, the
ratio of two zeros can be a finite quantity which it is the business of
the calculus to determine. Jean le Rond d’Alembert in the
Encyclopédie méthodique (1755, 2nd ed. 1784) declared that
differentials were unnecessary, and that Leibnitz’s calculus was a
calculus of mutually compensating errors, while Newton’s method
was entirely rigorous. D’Alembert’s opinion of Leibnitz’s calculus was
expressed also by Lazare N. M. Carnot in his Réflexions sur la
métaphysique du calcul infinitésimal (1799) and by Joseph Louis de
la Grange (generally called Lagrange) in writings from 1760
onwards. Lagrange proposed in his Théorie des fonctions analytiques
(1797) to found the whole of the calculus on the theory of series. It
was not until 1823 that a treatise on the differential calculus founded
upon the method of limits was published. The treatise was the
Résumé des leçons ... sur le calcul infinitésimal of Augustin Louis
Cauchy. Since that time it has been understood
that the use of the phrase “infinitely small” in any
mathematical argument is a figurative mode of
expression pointing to a limiting process. In the
opinion of many eminent mathematicians such
modes of expression are confusing to students, but in treatises on
the calculus the traditional modes of expression are still largely
adopted.
31. Defective modes of expression did not hinder constructive
work. It was the great merit of Leibnitz’s symbolism that a
mathematician who used it knew what was to be done in order to
formulate any problem analytically, even though
he might not be absolutely clear as to the proper
interpretation of the symbols, or able to render a
satisfactory account of them. While new and
varied results were promptly obtained by using
them, a long time elapsed before the theory of
them was placed on a sound basis. Even after Cauchy had
formulated his theory much remained to be done, both in the rapidly
growing department of complex variables, and in the regions opened
up by the theory of expansions in trigonometric series. In both
directions it was seen that rigorous demonstration demanded
greater precision in regard to fundamental notions, and the
requirement of precision led to a gradual shifting of the basis of
analysis from geometrical intuition to arithmetical law. A sketch of
the outcome of this movement—the “arithmetization of analysis,” as
it has been called—will be found in Function. Its general tendency
has been to show that many theories and processes, at first
accepted as of general validity, are liable to exceptions, and much of
the work of the analysts of the latter half of the 19th century was
directed to discovering the most general conditions in which
particular processes, frequently but not universally applicable, can be
used without scruple.
III. Outlines of the Infinitesimal Calculus.
32. The general notions of functionality, limits and continuity are
explained in the article Function. Illustrations of the more immediate
ways in which these notions present themselves in the development
of the differential and integral calculus will be useful in what follows.
33. Let y be given as a function of x, or, more generally, let x and y be given as functions
of a variable t. The first of these cases is included in the second by putting x = t. If certain
conditions are satisfied the aggregate of the points determined by the
functional relations form a curve. The first condition is that the
aggregate of the values of t to which values of x and y correspond must
be continuous, or, in other words, that these values must consist of all
real numbers, or of all those real numbers which lie between assigned extreme numbers.
When this condition is satisfied the points are “ordered,” and their order is determined by
the order of the numbers t, supposed to be arranged in order of increasing or decreasing
magnitude; also there are two senses of description of the curve, according as t is taken to
increase or to diminish. The second condition is that the aggregate of the points which are
determined by the functional relations must be “continuous.” This condition means that, if
any point P determined by a value of t is taken, and any distance δ, however small, is
chosen, it is possible to find two points Q, Q′ of the aggregate which are such that (i.) P is
between Q and Q′, (ii.) if R, R′ are any points between Q and Q′ the distance RR′ is less
than δ. The meaning of the word “between” in this statement is fixed by the ordering of
the points. Sometimes additional conditions are imposed upon the functional relations
before they are regarded as defining a curve. An aggregate of points which satisfies the
two conditions stated above is sometimes called a “Jordan curve.” It by no means follows
that every curve of this kind has a tangent. In order that the curve may
have a tangent at P it is necessary that, if any angle α, however small, is
specified, a distance δ can be found such that when P is between Q and
Q′, and PQ and PQ′ are less than δ, the angle RPR′ is less than α for all pairs of points R, R′
which are between P and Q, or between P and Q′ (fig. 8). When this condition is satisfied y
is a function of x which has a differential coefficient. The only way of finding out whether
this condition is satisfied or not is to attempt to form the differential coefficient. If the
quotient of differences Δy/Δx has a limit when Δx tends to zero, y is a differentiable
function of x, and the limit in question is the differential coefficient. The derived function, or
differential coefficient, of a function ƒ(x) is always defined by the formula
$$f'(x) = \frac{d f(x)}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}.$$
Rules for the formation of differential coefficients in particular cases have been given in §
11 above. The definition of a differential coefficient, and the rules of differentiation are
quite independent of any geometrical interpretation, such as that concerning tangents to a
curve, and the tangent to a curve is properly defined by means of the differential coefficient
of a function, not the differential coefficient by means of the tangent.
It may happen that the limit employed in defining the differential coefficient has one
value when h approaches zero through positive values, and a different value when h
approaches zero through negative values. The two limits are then called
the “progressive” and “regressive” differential coefficients. In
applications to dynamics, when x denotes a coordinate and t the time,
dx/dt denotes a velocity. If the velocity is changed suddenly the
progressive differential coefficient measures the velocity just after the
change, and the regressive differential coefficient measures the velocity
just before the change. Variable velocities are properly defined by means of differential
coefficients.
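A small numerical illustration (an editorial sketch, not part of the original text) of the distinction, using f(x) = |x| at x = 0, where the progressive and regressive limits differ:

```python
# Progressive (h -> 0 through positive values) and regressive (h -> 0 through
# negative values) difference quotients of f(x) = |x| at x = 0.
def f(x):
    return abs(x)

x0 = 0.0
for h in (0.1, 0.01, 0.001):
    progressive = (f(x0 + h) - f(x0)) / h       # tends to +1
    regressive = (f(x0 - h) - f(x0)) / (-h)     # tends to -1
    print(h, progressive, regressive)
```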
All geometrical limits may be specified in terms similar to those employed in specifying
the tangent to a curve; in difficult cases they must be so specified. Geometrical intuition
may fail to answer the question of the existence or non-existence of the
appropriate limits. In the last resort the definitions of many quantities of
geometrical import must be analytical, not geometrical. As illustrations of
this statement we may take the definitions of the areas and lengths of curves. We may not
assume that every curve has an area or a length. To find out whether a curve has an area
or not, we must ascertain whether the limit expressed by $\int y\,dx$ exists. When the limit exists
the curve has an area. The definition of the integral is quite independent of any geometrical
interpretation. The length of a curve again is defined by means of a limiting process. Let P,
Q be two points of a curve, and $R_1, R_2, \ldots, R_{n-1}$ a set of intermediate points of the curve, supposed to be described in the sense in which Q comes after P. The points R are supposed to be reached successively in the order of the suffixes when the curve is described in this sense. We form a sum of lengths of chords
$$PR_1 + R_1R_2 + \cdots + R_{n-1}Q.$$
If this sum has a limit when the number of the points R is increased indefinitely and the
lengths of all the chords are diminished indefinitely, this limit is the length of the arc PQ.
The limit is the same whatever law may be adopted for inserting the
intermediate points R and diminishing the lengths of the chords. It
appears from this statement that the differential element of the arc of a
curve is the length of the chord joining two neighbouring points. In
accordance with the fundamental artifice for forming differentials (§§ 9, 10), the differential
element of arc ds may be expressed by the formula
$$ds = \sqrt{(dx)^2 + (dy)^2},$$
of which the right-hand member is really the measure of the distance between two
neighbouring points on the tangent. The square root must be taken to be positive. We may
describe this differential element as being so much of the actual arc between two
neighbouring points as need be retained for the purpose of forming the integral expression
for an arc. This is a description, not a definition, because the length of the short arc itself is
only definable by means of the integral expression. Similar considerations to those used in
defining the areas of plane figures and the lengths of plane curves are applicable to the
formation of expressions for differential elements of volume or of the areas of curved
surfaces.
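The limiting process for the length of an arc can be illustrated numerically (an editorial sketch; the parabola y = x² between x = 0 and x = 1, and the comparison integral, are assumptions of the example):

```python
import math

# Length of the arc of y = x**2 from x = 0 to x = 1, approximated by the sum of
# the lengths of chords PR1 + R1R2 + ... + R(n-1)Q for increasing n.
def chord_sum(n):
    xs = [i / n for i in range(n + 1)]
    ys = [t * t for t in xs]
    return sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) for i in range(n))

# Closed form of the integral of sqrt(1 + 4x**2) dx from 0 to 1.
exact = (2 * math.sqrt(5) + math.asinh(2)) / 4

for n in (10, 100, 1000):
    print(n, chord_sum(n), exact)
```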
34. In regard to differential coefficients it is an important theorem
that, if the derived function ƒ′(x) vanishes at all points of an interval, the
function ƒ(x) is constant in the interval. It follows that, if two functions
have the same derived function they can only differ by a constant.
Conversely, indefinite integrals are indeterminate to the extent of an additive constant.
35. The differential coefficient dy/dx, or the derived function ƒ′(x), is itself a function of
x, and its differential coefficient is denoted by $f''(x)$ or $d^2y/dx^2$. In the second of these notations d/dx is regarded as the symbol of an operation, that of differentiation with respect to x, and the index 2 means that the operation is repeated. In like manner we may express the results of n successive differentiations by $f^{(n)}(x)$ or by $d^ny/dx^n$. When the second differential coefficient exists, or the first is differentiable, we have the relation
$$f''(x) = \lim_{h\to 0} \frac{f(x+h) - 2f(x) + f(x-h)}{h^2}. \qquad \text{(i.)}$$
The limit expressed by the right-hand member of this equation may exist in cases in which
ƒ′(x) does not exist or is not differentiable. The result that, when the limit here expressed
can be shown to vanish at all points of an interval, then ƒ(x) must be a linear function of x
in the interval, is important.
The relation (i.) is a particular case of the more general relation
$$f^{(n)}(x) = \lim_{h\to 0} h^{-n}\left[ f(x + nh) - n f\{x + (n-1)h\} + \frac{n(n-1)}{2!} f\{x + (n-2)h\} - \cdots + (-1)^n f(x) \right]. \qquad \text{(ii.)}$$
As in the case of relation (i.) the limit expressed by the right-hand member may exist although some or all of the derived functions $f'(x), f''(x), \ldots, f^{(n-1)}(x)$ do not exist.
Corresponding to the rule iii. of § 11 we have the rule for forming the nth differential coefficient of a product in the form
$$\frac{d^n(uv)}{dx^n} = u\frac{d^nv}{dx^n} + n\frac{du}{dx}\frac{d^{n-1}v}{dx^{n-1}} + \frac{n(n-1)}{1\cdot 2}\frac{d^2u}{dx^2}\frac{d^{n-2}v}{dx^{n-2}} + \cdots + \frac{d^nu}{dx^n}v,$$
where the coefficients are those of the expansion of $(1 + x)^n$ in powers of x (n being a positive integer). The rule is due to Leibnitz (1695).
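The rule is easily verified symbolically. The following sketch (an editorial illustration; the functions sin x and e^(2x) are chosen arbitrarily) compares the direct nth differential coefficient of a product with the binomially weighted sum:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.sin(x)        # any sufficiently differentiable functions will do
v = sp.exp(2 * x)
n = 4

direct = sp.diff(u * v, x, n)
leibnitz = sum(sp.binomial(n, k) * sp.diff(u, x, n - k) * sp.diff(v, x, k)
               for k in range(n + 1))

print(sp.simplify(direct - leibnitz))   # 0
```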
Differentials of higher orders may be introduced in the same way as the differential of
the first order. In general when y = ƒ(x), the nth differential $d^ny$ is defined by the equation
$$d^ny = f^{(n)}(x)\,(dx)^n,$$
in which dx is the (arbitrary) differential of x.
Fig. 9.
When d/dx is regarded as a single symbol of operation the symbol $\int \ldots dx$ represents the inverse operation. If the former is denoted by D, the latter may be denoted by $D^{-1}$. $D^n$ means that the operation D is to be performed n times in succession; $D^{-n}$ that the operation of forming the indefinite integral is to be performed n times in succession. Leibnitz’s course of thought (§ 24) naturally led him to inquire after an interpretation of $D^n$, where n is not
an integer. For an account of the researches to which this inquiry gave rise, reference may
be made to the article by A. Voss in Ency. d. math. Wiss. Bd. ii. A, 2 (Leipzig, 1889). The
matter is referred to as “fractional” or “generalized” differentiation.
36. After the formation of differential coefficients the most
important theorem of the differential calculus is the theorem of
intermediate value (“theorem of mean value,”
“theorem of finite increments,” “Rolle’s
theorem,” are other names for it). This
theorem may be explained as follows: Let A,
B be two points of a curve y = ƒ(x) (fig. 9).
Then there is a point P between A and B at which the tangent is parallel to the secant AB.
This theorem is expressed analytically in the statement that if ƒ′(x) is continuous between
a and b, there is a value x1 of x between a and b which has the property expressed by the
equation
{ƒ(b) − ƒ(a)} / (b − a) = ƒ′(x₁).   (i.)
The value x₁ can be expressed in the form a + θ(b − a), where θ is a number between 0
and 1.
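A simple worked case (added for illustration, not in the original): take ƒ(x) = x². Then {ƒ(b) − ƒ(a)}/(b − a) = a + b, while ƒ′(x₁) = 2x₁, so that x₁ = ½(a + b); here θ = ½ whatever the interval.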
A slightly more general theorem was given by Cauchy (1823) to the effect that, if ƒ′(x)
and F′(x) are continuous between x = a and x = b, then there is a number θ between 0
and 1 which has the property expressed by the equation
{F(b) − F(a)} / {ƒ(b) − ƒ(a)} = F′{a + θ(b − a)} / ƒ′{a + θ(b − a)}.
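For example (an added illustration): with F(x) = x³, ƒ(x) = x², a = 0 and b = 1, the left-hand member is 1 and the right-hand member is 3θ²/2θ = 3θ/2, giving θ = 2/3.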
The theorem expressed by the relation (i.) was first noted by Rolle (1690) for the case
where ƒ(x) is a rational integral function which vanishes when x = a and also when x = b.
The general theorem was given by Lagrange (1797). Its fundamental importance was first
recognized by Cauchy (1823). It may be observed here that the theorem of integral
calculus expressed by the equation
F(b) − F(a) = ∫ₐᵇ F′(x) dx
follows at once from the definition of an integral and the theorem of intermediate value.
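For instance (an added illustration): taking F(x) = x², so that F′(x) = 2x, the equation asserts that b² − a² = ∫ₐᵇ 2x dx, which is the familiar value of that integral.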
Taylor’s Theorem.
The theorem of intermediate value may be generalized in the statement that, if ƒ(x) and
all its differential coefficients up to the nth inclusive are continuous in the interval between
x = a and x = b, then there is a number θ between 0 and 1 which has the property
expressed by the equation
ƒ(b) = ƒ(a) + (b − a) ƒ′(a) + {(b − a)²/2!} ƒ″(a) + ... + {(b − a)ⁿ⁻¹/(n − 1)!} ƒ⁽ⁿ⁻¹⁾(a)
    + {(b − a)ⁿ/n!} ƒ⁽ⁿ⁾{a + θ(b − a)}.   (ii.)
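A standard illustration (added here, not part of the original text): for ƒ(x) = eˣ, with a = 0 and b = x, the relation (ii.) becomes eˣ = 1 + x + x²/2! + ... + xⁿ⁻¹/(n − 1)! + (xⁿ/n!)e^(θx); since the remainder term tends to zero as n increases for every x, the exponential series is valid for all values of x.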
37. This theorem provides a means for computing the values of a function at points near
to an assigned point when the value of the function and its differential coefficients at the
assigned point are known. The function is expressed by a terminated
series, and, when the remainder tends to zero as n increases, it may be
transformed into an infinite series. The theorem was first given by Brook
Taylor in his Methodus Incrementorum (1717) as a corollary to a
theorem concerning finite differences. Taylor gave the expression for ƒ(x + z) in terms of
ƒ(x), ƒ′(x), ... as an infinite series proceeding by powers of z. His notation was that
appropriate to the method of fluxions which he used. This rule for expressing a function as
an infinite series is known as Taylor’s theorem. The relation (ii.), in which the remainder
after n terms is put in evidence, was first obtained by Lagrange (1797). Another form of
the remainder was given by Cauchy (1823) viz.,
{(b − a)ⁿ/(n − 1)!} (1 − θ)ⁿ⁻¹ ƒ⁽ⁿ⁾{a + θ(b − a)}.
The conditions of validity of Taylor’s expansion in an infinite series have been investigated
very completely by A. Pringsheim (Math. Ann. Bd. xliv., 1894). It is not sufficient that the
function and all its differential coefficients should be finite at x = a; there must be a
neighbourhood of a within which Cauchy’s form of the remainder tends to zero as n
increases (cf. Function).
An example of the necessity of this condition is afforded by the function ƒ(x) which is
given by the equation
ƒ(x) = 1/(1 + x²) + Σ_{n=1}^{∞} {(−1)ⁿ/n!} · 1/(1 + 3²ⁿx²).   (i.)
The sum of the series
ƒ(0) + x ƒ′(0) + (x²/2!) ƒ″(0) + ...   (ii.)
is the same as that of the series
e⁻¹ − x² e^(−3²) + x⁴ e^(−3⁴) − ...
Expansions in power series.
It is easy to prove that this is less than e⁻¹ when x lies between 0 and 1, and also that ƒ(x)
is greater than e⁻¹ when x = 1/√3. Hence the sum of the series (i.) is not equal to the sum
of the series (ii.).
The particular case of Taylor’s theorem in which a = 0 is often called Maclaurin’s
theorem, because it was first explicitly stated by Colin Maclaurin in his Treatise of Fluxions
(1742). Maclaurin like Taylor worked exclusively with the fluxional calculus.
Examples of expansions in series had been known for some time. The series for log (1 +
x) was obtained by Nicolaus Mercator (1668) by expanding (1 + x)⁻¹ by the method of
algebraic division, and integrating the series term by term. He regarded
his result as a “quadrature of the hyperbola.” Newton (1669) obtained
the expansion of sin⁻¹x by expanding (1 − x²)^(−1/2) by the binomial
theorem and integrating the series term by term. James Gregory (1671)
gave the series for tan⁻¹x. Newton also obtained the series for sin x, cos x, and eˣ by
reversion of series (1669). The symbol e for the base of the Napierian logarithms was
introduced by Euler (1739). All these series can be obtained at once by Taylor’s theorem.
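For reference, the explicit forms of these series are (added for illustration): log (1 + x) = x − x²/2 + x³/3 − ..., sin⁻¹x = x + x³/6 + 3x⁵/40 + ..., and tan⁻¹x = x − x³/3 + x⁵/5 − ..., each valid for suitably restricted values of x.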
James Gregory found also the first few terms of the series for tan x and sec x; the terms of
these series may be found successively by Taylor’s theorem, but the numerical coefficient of
the general term cannot be obtained in this way.
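The first few terms in question are (added for illustration) tan x = x + x³/3 + 2x⁵/15 + 17x⁷/315 + ... and sec x = 1 + x²/2 + 5x⁴/24 + 61x⁶/720 + ...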
Taylor’s theorem for the expansion of a function in a power series was the basis of
Lagrange’s theory of functions, and it is fundamental also in the theory of analytic functions
of a complex variable as developed later by Karl Weierstrass. It has also numerous
applications to problems of maxima and minima and to analytical geometry. These matters
are treated in the appropriate articles.
The forms of the coefficients in the series for tan x and sec x can be expressed most
simply in terms of a set of numbers introduced by James Bernoulli in his treatise on
probability entitled Ars Conjectandi (1713). These numbers B₁, B₂, ..., called Bernoulli’s
numbers, are the coefficients so denoted in the formula
x/(eˣ − 1) = 1 − x/2 + (B₁/2!) x² − (B₂/4!) x⁴ + (B₃/6!) x⁶ − ...,
and they are connected with the sums of powers of the reciprocals of the natural numbers
by equations of the type
Bₙ = {(2n)! / (2²ⁿ⁻¹ π²ⁿ)} (1/1²ⁿ + 1/2²ⁿ + 1/3²ⁿ + ...).
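Thus, for example (an added check, not in the original), n = 1 gives B₁ = {2!/(2π²)}(1 + 1/4 + 1/9 + ...) = (1/π²)(π²/6) = 1/6, and in the same way B₂ = 1/30 and B₃ = 1/42, in agreement with the coefficients in the series for x/(eˣ − 1) above.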
The function
xᵐ − (m/2) xᵐ⁻¹ + {m(m − 1)/2!} B₁ xᵐ⁻² − ...
Transparent User Authentication Biometrics Rfid And Behavioural Profiling 1st Edition Nathan Clarke Auth
  • 9. Nathan Clarke Centre for Security, Communications & Network Research (CSCAN) Plymouth University Drake Circus PL4 8AA Plymouth United Kingdom N.Clarke@plymouth.ac.uk ISBN 978-0-85729-804-1 e-ISBN 978-0-85729-805-8 DOI 10.1007/978-0-85729-805-8 Springer London Dordrecht Heidelberg New York British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library Library of Congress Control Number: 2011935034 © Springer-Verlag London Limited 2011 Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licenses issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers. The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface

The world of user authentication is focussed upon developing technologies to solve the problem of point-of-entry identity verification required by many information systems. Unfortunately, the authentication approaches – secret knowledge, token and biometric – all fail to provide universally strong user authentication, with various well-documented failings existing. Moreover, existing approaches fail to identify the real information security risk. Authenticating users at point-of-entry, and failing to require re-authentication of the user during the session, provides a vast opportunity for attackers to compromise a system. However, forcing users to continuously re-authenticate to systems is cumbersome and fails to take into account the human factors of good security design, in order to ensure good levels of acceptability. Unfortunately, within this context, the need to authenticate is increasing rather than decreasing, with users interacting and engaging with a prolific variety of technologies from PCs to PDAs, social networking to share dealing, and Instant Messenger to Texting. A re-evaluation is therefore necessary to ensure user authentication is relevant, usable, secure and ubiquitous.

The book presents the problem of user authentication from a completely different standpoint to current literature. Rather than describing the requirements, technologies and implementation issues of designing point-of-entry authentication, the text introduces and investigates the technological requirements of implementing transparent user authentication – where authentication credentials are captured during a user's normal interaction with a system. Achieving transparent authentication of a user ensures the user is no longer required to provide explicit credentials to a system. Moreover, once authentication can be achieved transparently, it is far simpler to perform continuous authentication of the user, minimising user inconvenience and improving the overall level of security. This would transform current user authentication from a binary point-of-entry decision to a continuous identity confidence measure. By understanding the current confidence in the identity of the user, the system is able to ensure that appropriate access control decisions are made – providing immediate access to resources with high confidences and requiring further validation of a user's identity with low confidences.

Part I begins by reviewing the current need for user authentication – identifying the current thinking on point-of-entry authentication and why it falls short of providing real and effective levels of information security. Chapter 1 focuses upon the current implementation of user authentication and places the role of authentication within the wider context of information security. The fundamental approaches to user authentication and their evolutions are introduced. Chapter 2 takes an opportunity to review the need for user authentication through an examination of the history of modern computing. Whilst authentication is key to maintaining systems, it is frequently overlooked and approaches are adopted that are simply not fit for purpose. In particular, the human aspects of information security are introduced, looking at the role the user plays in providing effective security. The final chapter in Part I investigates the role of user authentication in modern systems, what it is trying to achieve and, more importantly, if designed correctly, what it could achieve. A discussion on the applicability of utilising risk assessment and how continuous authentication would function is provided.

Part II is focussed upon the authentication approaches, providing an in-depth analysis of how each operates. Chapter 4 takes each of the three fundamental approaches in turn and discusses the various implementations and techniques available. The chapter presents how each of the systems works and identifies key attacks against them. Having thoroughly examined traditional authentication, Chap. 5 investigates transparent authentication approaches. Supported by current literature and research, the chapter details how transparent authentication can be accomplished and the various technological barriers that currently exist. Taking the concept of transparent and continuous authentication further, Chap. 6 discusses multimodal authentication. The chapter details what multimodal authentication is, what methods of fusion exist and its applicability in this context. The final chapter in Part II describes the standardisation efforts currently underway in the field of biometrics. Only through standardisation will widespread vendor-independent multimodal systems be able to exist.

Part III examines the wider system-specific issues with designing large-scale multimodal authentication systems. Chapters 8 and 9 look at the theoretical and practical requirements of a system and discuss the limitations and advantages such a system would pose. Obviously, with increasing user authentication and use of biometrics, the issue of privacy arises, and Chap. 9 focuses upon the need to ensure privacy and the human factors of acceptability and perception. The book concludes with a look into the future of user authentication, what the technological landscape might look like and the effects upon the people using these systems.
  • 12. vii Acknowledgements For the author, the research presented in this book started at the turn of the century and represents a decade of research undertaken. During this time, a number of M.Sc. and Ph.D. students have contributed towards furthering aspects of the research problem, and thanks are due in no small part to all of them. Many of the examples used in this book and their experimental findings are due to them. Specific thanks are due to Fudong Li for his work on behavioural biometrics, Christopher Hocking for conceptualising the Authentication Aura and Sevasti Karatzouni for her work on the implementation and evaluation of early prototypes, and in particular her invaluable contribution in Chap. 9. The initial concept of performing authentication transparently needs to be credited to my colleague and mentor Prof. Steven Furnell. It was due to his creativity that the concept was initially created. He also needs to be credited with the guiding hand behind much of the work presented in this book. It is only through his encouragement and support that this book was made possible. The reviewing and critiquing of book chapters is a time-consuming and arduous task and thanks are due to Christopher Bolan in particular who gave a considerable amount of personal time examining the manuscript. Along with others, I appreciate all the time, patience and advice they have given. Thanks are also due to all the companies and organisations that have funded aspects of this research over the past 10 years. Specifically, thanks are due to the Engineering and Physical Sciences Research Council (EPSRC), Orange Personal Communications Ltd, France-Telecom, the EduServ Foundation and the University of Plymouth. I would also like to thank Simon Rees from Springer for his initial and continued support for the book, even when timelines slipped and additions were made that led to the text being delayed. I would also like to thank all the staff at Springer that have helped in editing, proofing and publishing the text. Final thanks are due to my wife, Amy, who has had to put up with countless evenings and weekends alone whilst I prepared the manuscript. She was the inspiration to write the book in the first place and I am appreciative of all the support, motivation and enthusiasm she provided. Thankfully, this did not put her off marrying me!
  • 14. ix Contents Part I Enabling Security Through User Authentication 1 Current Use of User Authentication. ...................................................... 3 1.1 Introduction. ...................................................................................... 3 1.2 Basics of Computer Security............................................................ 4 1.3 Fundamental Approaches to Authentication.................................... 10 1.4 Point-of-Entry Authentication.......................................................... 17 1.5 Single Sign On and Federated Authentication. ................................. 21 1.6 Summary........................................................................................... 22 References.................................................................................................. 23 2 The Evolving Technological Landscape................................................. 25 2.1 Introduction. ...................................................................................... 25 2.2 Evolution of User Authentication..................................................... 26 2.3 Cyber Security.................................................................................. 32 2.4 Human Aspects of Information Security.......................................... 38 2.5 Summary........................................................................................... 41 References.................................................................................................. 42 3 What Is Really Being Achieved with User Authentication?. ................ 45 3.1 Introduction. ...................................................................................... 45 3.2 The Authentication Process.............................................................. 46 3.3 Risk Assessment and Commensurate Security................................. 49 3.4 Transparent and Continuous Authentication.................................... 53 3.5 Summary........................................................................................... 57 Reference................................................................................................... 58
  • 15. x Contents Part II Authentication Approaches 4 Intrusive Authentication Approaches.................................................... 61 4.1 Introduction..................................................................................... 61 4.2 Secret-Knowledge Authentication.................................................. 61 4.2.1 Passwords, PINs and Cognitive Knowledge....................... 62 4.2.2 Graphical Passwords........................................................... 67 4.2.3 Attacks Against Passwords................................................. 70 4.3 Token Authentication...................................................................... 74 4.3.1 Passive Tokens.................................................................... 75 4.3.2 Active Tokens..................................................................... 76 4.3.3 Attacks Against Tokens...................................................... 80 4.4 Biometric Authentication................................................................ 82 4.4.1 Biometric System. ............................................................... 83 4.4.2 Biometric Performance Metrics. ......................................... 87 4.4.3 Physiological Biometric Approaches................................. 93 4.4.4 Behavioural Biometric Approaches.................................... 98 4.4.5 Attacks Against Biometrics................................................ 102 4.5 Summary......................................................................................... 107 References................................................................................................ 107 5 Transparent Techniques.......................................................................... 111 5.1 Introduction..................................................................................... 111 5.2 Facial Recognition.......................................................................... 112 5.3 Keystroke Analysis......................................................................... 119 5.4 Handwriting Recognition................................................................ 126 5.5 Speaker Recognition....................................................................... 128 5.6 Behavioural Profiling...................................................................... 130 5.7 Acoustic Ear Recognition............................................................... 139 5.8 RFID: Contactless Tokens.............................................................. 141 5.9 Other Approaches........................................................................... 144 5.10 Summary......................................................................................... 146 References.................................................................................................. 147 6 Multibiometrics........................................................................................ 151 6.1 Introduction..................................................................................... 151 6.2 Multibiometric Approaches............................................................ 153 6.3 Fusion. ............................................................................................. 157 6.4 Performance of Multi-modal Systems............................................ 160 6.5 Summary......................................................................................... 
162 References................................................................................................ 163 7 Biometric Standards................................................................................ 165 7.1 Introduction..................................................................................... 165 7.2 Overview of Standardisation. .......................................................... 165 7.3 Data Interchange Formats............................................................... 168
  • 16. xi Contents 7.4 Data Structure Standards. ................................................................ 171 7.5 Technical Interface Standards......................................................... 172 7.6 Summary......................................................................................... 174 References................................................................................................ 174 Part III  System Design, Development and Implementation Considerations 8 Theoretical Requirements of a Transparent Authentication System............................................................................. 179 8.1 Introduction..................................................................................... 179 8.2 Transparent Authentication System................................................ 179 8.3 Architectural Paradigms. ................................................................. 184 8.4 An Example of TAS – NICA (Non-Intrusive and Continuous Authentication)..................................................... 186 8.4.1 Process Engines.................................................................. 189 8.4.2 System Components........................................................... 193 8.4.3 Authentication Manager..................................................... 196 8.4.4 Performance Characteristics............................................... 201 8.5 Summary......................................................................................... 202 References................................................................................................ 203 9 Implementation Considerations in Ubiquitous Networks.................... 205 9.1 Introduction..................................................................................... 205 9.2 Privacy. ............................................................................................ 205 9.3 Storage and Processing Requirements............................................ 208 9.4 Bandwidth Requirements................................................................ 210 9.5 Mobility and Network Availability................................................. 212 9.6 Summary......................................................................................... 213 References................................................................................................ 214 10 Evolving Technology and the Future for Authentication..................... 215 10.1 Introduction..................................................................................... 215 10.2 Intelligent and Adaptive Systems................................................... 216 10.3 Next-Generation Technology.......................................................... 218 10.4 Authentication Aura........................................................................ 221 10.5 Summary......................................................................................... 224 References.................................................................................................. 224 Index.................................................................................................................. 225 About the Author............................................................................................. 229
  • 18. xiii List of Figures Fig. 1.1 Facets of information security. .......................................................... 6 Fig. 1.2 Information security risk assessment................................................ 8 Fig. 1.3 Managing information security......................................................... 9 Fig. 1.4 Typical system security controls....................................................... 10 Fig. 1.5 Lophcrack software........................................................................... 12 Fig. 1.6 Biometric performance characteristics. ............................................. 16 Fig. 2.1 O2 web authentication using SMS. .................................................... 27 Fig. 2.2 O2 SMS one-time password. ............................................................. 28 Fig. 2.3 Google Authenticator........................................................................ 29 Fig. 2.4 Terminal-network security protocol.................................................. 29 Fig. 2.5 HP iPaq H5550 with fingerprint recognition.................................... 32 Fig. 2.6 Examples of phishing messages........................................................ 36 Fig. 2.7 Fingerprint recognition on HP PDA. ................................................. 39 Fig. 2.8 UPEK Eikon fingerprint sensor. ........................................................ 40 Fig. 3.1 Risk assessment process. ................................................................... 49 Fig. 3.2 Authentication security: traditional static model.............................. 51 Fig. 3.3 Authentication security: risk-based model........................................ 51 Fig. 3.4 Variation of the security requirements during utilisation of a service. (a) Sending a text message, (b) Reading and deleting text messages............................................ 52 Fig. 3.5 Transparent authentication on a mobile device................................. 54 Fig. 3.6 Normal authentication confidence.................................................... 55 Fig. 3.7 Continuous authentication confidence.............................................. 56 Fig. 3.8 Normal authentication with intermitted application-level authentication.................................................................................... 56 Fig. 4.1 Googlemail password indicator. ........................................................ 66 Fig. 4.2 Choice-based graphical authentication............................................. 68 Fig. 4.3 Click-based graphical authentication................................................ 69 Fig. 4.4 Passfaces authentication.................................................................... 69 Fig. 4.5 Network monitoring using Wireshark............................................... 71
  • 19. xiv List of Figures Fig. 4.6 Senna Spy Trojan generator............................................................ 71 Fig. 4.7 AccessData password recovery toolkit........................................... 72 Fig. 4.8 Ophcrack password recovery.......................................................... 73 Fig. 4.9 Cain and Abel password recovery. .................................................. 74 Fig. 4.10 Financial cards: Track 2 information.............................................. 76 Fig. 4.11 An authentication without releasing the base-secret....................... 76 Fig. 4.12 RSA securID token......................................................................... 78 Fig. 4.13 NatWest debit card and card reader................................................ 78 Fig. 4.14 Smartcard cross-section.................................................................. 79 Fig. 4.15 Cain and Abel’s RSA SecurID token calculator............................. 81 Fig. 4.16 The biometric process..................................................................... 84 Fig. 4.17 FAR/FRR performance curves........................................................ 88 Fig. 4.18 ROC curve (TAR against FMR)...................................................... 90 Fig. 4.19 ROC curve (FNMR against FMR).................................................. 90 Fig. 4.20 Characteristic FAR/FRR performance plot versus threshold.......... 91 Fig. 4.21 User A performance characteristics................................................ 92 Fig. 4.22 User B performance characteristics................................................ 92 Fig. 4.23 Anatomy of the ear.......................................................................... 94 Fig. 4.24 Fingerprint sensor devices. .............................................................. 96 Fig. 4.25 Anatomy of an iris. .......................................................................... 97 Fig. 4.26 Attributes of behavioural profiling.................................................. 99 Fig. 4.27 Attacks on a biometric system........................................................ 102 Fig. 4.28 USB memory with fingerprint authentication................................. 103 Fig. 4.29 Distributed biometric system.......................................................... 103 Fig. 4.30 Examples of fake fingerprint........................................................... 105 Fig. 4.31 Spoofing facial recognition using a photograph. ............................. 105 Fig. 4.32 Diagrammatic demonstration of feature space. ............................... 106 Fig. 5.1 Environmental and external factors affecting facial recognition. ............................................................................ 113 Fig. 5.2 Normal facial recognition process.................................................. 115 Fig. 5.3 Proposed facial recognition process................................................ 115 Fig. 5.4 Effect upon the FRR with varying facial orientations. .................... 118 Fig. 5.5 Effect upon the FRR using a composite facial template................. 119 Fig. 5.6 Continuous monitor for keystroke analysis. .................................... 122 Fig. 5.7 Varying tactile environments of mobile devices............................. 123 Fig. 5.8 Variance of keystroke latencies....................................................... 124 Fig. 5.9 Results of keystroke analysis on a mobile phone. ........................... 125 Fig. 
5.10 Handwriting recognition: user performance................................... 128 Fig. 5.11 Data extraction software................................................................. 134 Fig. 5.12 Variation in behavioural profiling performance over time.............. 136 Fig. 5.13 Acoustic ear recognition................................................................. 139 Fig. 5.14 Operation of an RFID token. ........................................................... 143 Fig. 5.15 Samples created for ear geometry................................................... 145
  • 20. xv List of Figures Fig. 6.1 Transparent authentication on a mobile device............................... 152 Fig. 6.2 Cascade mode of processing of biometric samples........................ 157 Fig. 6.3 Matching score-level fusion............................................................ 158 Fig. 6.4 Feature-level fusion......................................................................... 158 Fig. 6.5 A hybrid model involving various fusion approaches. .................... 161 Fig. 7.1 ISO/IEC onion-model of data interchange formats........................ 167 Fig. 7.2 Face image record format: overview............................................... 170 Fig. 7.3 Face image record format: facial record data.................................. 170 Fig. 7.4 A simple BIR.................................................................................. 171 Fig. 7.5 BioAPI patron format. ..................................................................... 172 Fig. 7.6 BioAPI architecture. ........................................................................ 173 Fig. 8.1 Identity confidence.......................................................................... 180 Fig. 8.2 A generic TAS framework.............................................................. 181 Fig. 8.3 TAS integration with system security............................................. 183 Fig. 8.4 Two-tier authentication approach.................................................... 183 Fig. 8.5 Network-centric TAS model........................................................... 185 Fig. 8.6 A device-centric TAS model........................................................... 185 Fig. 8.7 NICA – server architecture............................................................. 187 Fig. 8.8 NICA – client architecture.............................................................. 188 Fig. 8.9 NICA – data collection engine........................................................ 190 Fig. 8.10 NICA – biometric profile engine.................................................... 191 Fig. 8.11 NICA – authentication engine. ........................................................ 192 Fig. 8.12 NICA – communication engine...................................................... 192 Fig. 8.13 NICA – authentication manager process........................................ 199 Fig. 9.1 Level of concern over theft of biometric information..................... 208 Fig. 9.2 User preferences on location of biometric storage. ......................... 208 Fig. 9.3 Size of biometric templates............................................................. 209 Fig. 9.4 Average biometric data transfer requirements (based upon 1.5 million users)........................................................ 211 Fig. 10.1 Conceptual model of the authentication aura.................................. 223
  • 22. xvii List of Tables Table 1.1 Computer attacks affecting CIA................................................... 5 Table 1.2 Biometric techniques.................................................................... 14 Table 1.3 Level of adoption of authentication approaches........................... 17 Table 1.4 Top 20 most common passwords. ................................................. 18 Table 4.1 Password space based upon length............................................... 63 Table 4.2 Password space defined in bits..................................................... 64 Table 4.3 Examples of cognitive questions.................................................. 64 Table 4.4 Typical password policies. ............................................................ 65 Table 4.5 Components of a biometric system.............................................. 84 Table 4.6 Attributes of a biometric approach............................................... 86 Table 5.1 Subset of the FERET dataset utilised........................................... 116 Table 5.2 Datasets utilised in each experiment............................................ 117 Table 5.3 Facial recognition performance under normal conditions............ 117 Table 5.4 Facial recognition performance with facial orientations.............. 117 Table 5.5 Facial recognition using the composite template......................... 118 Table 5.6 Summary of keystroke analysis studies........................................ 120 Table 5.7 Performance of keystroke analysis on desktop PCs..................... 121 Table 5.8 Keystroke analysis variance between best- and worst-case users............................................................ 125 Table 5.9 Handwriting recognition: individual word performance.............. 128 Table 5.10 ASPeCT performance comparison of classification approaches.................................................................................... 131 Table 5.11 Cost-based performance............................................................... 132 Table 5.12 Behavioural profiling features...................................................... 134 Table 5.13 Behavioural profiling performance on a desktop PC. ................... 135 Table 5.14 MIT dataset. .................................................................................. 137 Table 5.15 Application-level performance..................................................... 137 Table 5.16 Application-specific performance: telephone app........................ 138 Table 5.17 Application-specific performance: text app. ................................. 138
  • 23. xviii List of Tables Table 5.18 Performance of acoustic ear recognition with varying frequency................................................................. 140 Table 5.19 Transparency of authentication approaches. ................................. 146 Table 6.1 Multi-modal performance: finger and face................................... 161 Table 6.2 Multi-modal performance: finger, face and hand modalities. ....... 162 Table 6.3 Multi-modal performance: face and ear modalities. ..................... 162 Table 7.1 ISO/IEC JTC1 SC37 working groups.......................................... 166 Table 7.2 ISO/IEC Biometric data interchange standards. ........................... 168 Table 7.3 ISO/IEC common biometric exchange formats framework......... 171 Table 7.4 ISO/IEC Biometric programming interface (BioAPI)................. 173 Table 8.1 Confidence level definitions......................................................... 194 Table 8.2 NICA – Authentication assets...................................................... 194 Table 8.3 NICA – Authentication response definitions. ............................... 196 Table 8.4 NICA – System integrity settings................................................. 197 Table 8.5 NICA – Authentication manager security levels.......................... 198 Table 8.6 NICA – Authentication performance........................................... 201
  • 24. Part I Enabling Security Through User Authentication
Chapter 1
Current Use of User Authentication

N. Clarke, Transparent User Authentication: Biometrics, RFID and Behavioural Profiling, DOI 10.1007/978-0-85729-805-8_1, © Springer-Verlag London Limited 2011

1.1 Introduction

Information security has become increasingly important as technology integrates into our everyday lives. In the past 10 years, computing-based technology has permeated every aspect of our lives from desktop computers, laptops and mobile phones to satellite navigation, MP3 players and game consoles. Whilst the motivation for keeping systems secure has changed from the early days of mainframe systems and the need to ensure reliable audits for accounting purposes, the underlying requirement for a high level of security has always been present. Computing is now ubiquitous in everything people do – directly or indirectly. Even individuals who do not engage with personal computers (PCs) or mobile phones still rely upon computing systems to provide their banking services, to ensure sufficient stock levels in supermarkets, to purchase goods in stores and to provide basic services such as water and electricity. In modern society there is a significant reliance upon computing systems – without which civilisation, as we know it, would arguably cease to exist.

As this reliance upon computers has grown, so have the threats against them. Whilst initial endeavours of computer misuse, in the late 1970s and 1980s, were largely focused upon demonstrating technical prowess, the twenty-first century has seen a significant focus upon attacks that are financially motivated – from botnets that attack individuals to industrial espionage. With this increasing focus towards attacking systems, the domain of information systems security has also experienced increasing attention.

Historically, whilst increasing attention has been paid to securing systems, such a focus has not been universal. Developers have traditionally viewed information security as an additional burden that takes significant time and resources, detracting from developing additional functionality, and with little to no financial return. For organisations, information security is seen as rarely driving the bottom line, and as such they are unmotivated to adopt good security practice. What results is a variety of applications, systems and organisations with a diverse set of security policies and levels of adoption – some very secure, a great many more less so. More recently, this situation has improved as the consequences of being successfully attacked are becoming increasingly severe and public. Within organisations, the desire to keep intellectual property, regulation and legislation is a driving factor in improving information security. Within the services and applications, people are beginning to make purchasing decisions based upon whether a system or application is secure, driving developers to ensure security is a design factor.

Authentication is key to providing effective information security. But in order to understand the need for authentication it is important to establish the wider context in which it resides. Through an appreciation of the domain, the issues that exist and the technology available, it is clear why authentication approaches play such a pivotal role in securing systems. It is also useful to understand the basic operation of authentication technologies, their strengths and weaknesses and the current state of implementation.

1.2 Basics of Computer Security

The field of computer security has grown and evolved in line with the changing threat landscape and the changing nature of technology. Whilst new research is continually developing novel mechanisms to protect systems, the fundamental principles that underpin the domain remain unchanged. Literature might differ a little on the hierarchy of all the components that make up information security; however, there is an agreement upon what the key objectives or goals are. The three aims of information security are Confidentiality, Integrity and Availability, commonly referred to as the CIA triad. In terms of information, they can be defined as follows:

• Confidentiality refers to the prevention of unauthorised information disclosure. Only those with permission to read a resource are able to do so. It is the element most commonly associated with security in terms of ensuring the information remains secret.
• Integrity refers to ensuring that data are not modified by unauthorised users/processes. Integrity of the information is therefore maintained as it can be changed only by authorised users/processes of a system.
• Availability refers to ensuring that information is available to authorised users when they request it. This property is possibly the least intuitive of the three aims but is fundamental. A good example that demonstrates the importance of availability is a denial of service (DoS) attack. This attack consumes bandwidth, processing power and/or memory to prevent legitimate users from being able to access a system.

It is from these three core goals that all information security is derived. Whilst perhaps difficult to comprehend in the first instance, some further analysis of the root cause of individual attacks does demonstrate that one or more of the three
security goals are being affected. Consider, for instance, the role of the computer virus. Fundamentally designed to self-replicate on a computer system, the virus will have an immediate effect upon the availability of system resources, consuming all the memory and processing capacity. However, depending upon the severity of the self-replicating process, they can also have an effect upon the integrity of the data stored. Viruses also have some form of payload, a purpose or reason for existing, as few are non-malignant. This payload can vary considerably in purpose, but more recently Trojans have become increasingly common. Trojans will search and capture sensitive information and relay it back to the attacker, thereby affecting the confidentiality of the information. To illustrate this further, Table 1.1 presents a number of general attacks and their effect upon the security objectives.

Table 1.1 Computer attacks affecting CIA (security goals affected by each attack)
• (Distributed) Denial of service – Availability
• Hacker – Confidentiality, Integrity
• Malicious software (e.g. Worms, Viruses, Trojans) – Confidentiality, Integrity, Availability
• Phishing – Confidentiality
• Rootkit – Confidentiality, Integrity
• Social engineering – Confidentiality
• Spam – Availability

In addition to the goals of information security, three core services support them. Collectively referred to as AAA, these services are Authentication, Authorisation and Accountability. In order to maintain confidentiality and integrity, it is imperative for a system to establish the identity of the user so that the appropriate permissions for access can be granted, without which anybody would be in a position to read and modify information on the system. Authentication enables an individual to be uniquely identified (albeit how uniquely is often in question!) and authorisation provides the access control mechanism to ensure that users are granted their particular set of permissions. Whilst both authentication and authorisation are used for proactive defence of the system (i.e. if you don’t have a legitimate set of authentication credentials you will not get access to the system), accountability is a reactive service that enables a system administrator to track and monitor system interactions. In cooperation with authentication, a system is able to log all system actions with a corresponding identity. Should something have gone amiss, these logs will identify the source and effect of these actions. The fact that this can only be done after an incident makes it a reactive process. Together, the three services help maintain the confidentiality, integrity and availability of information and systems.

Looking at security in terms of CIA and AAA, whilst accurate, paints a very narrow picture of the information security domain. Information security is not merely about systems and technical controls utilised in their protection. For instance, whilst authentication does indeed ensure that access is only granted to a legitimate identity, it does not consider that the authentication credential itself might be
compromised through human neglect. Therefore, any subsequent action using that compromised credential will have an impact upon the confidentiality and integrity of the information. Furnell (2005) presents an interesting perspective on information security, in the form of a jigsaw puzzle comprising the five facets of information security: technical, procedural, personnel, legal and physical (as illustrated in Fig. 1.1). Only when the jigsaw is complete and all are considered together can an organisation begin to establish a good information security environment. A failure to consider any one element would have serious consequences for the ability to remain secure.

Fig. 1.1 Facets of information security (technical, procedural, personnel, legal and physical)

Whilst the role of the technical facet is often well documented, the roles of the remaining facets are less so. The procedural element refers to the need for relevant security processes to be undertaken. Key to these is the development of a security policy, contingency planning and risk assessment. Without an understanding of what is trying to be achieved, in terms of security, and an appreciation that not all information has the same value, it is difficult to establish what security measures need to be adopted. The personnel element refers to the human aspects of a system. A popular security phrase, 'security is only as strong as its weakest link', demonstrates that a break in only one element of the chain would result in compromise. Unfortunately, literature has demonstrated that the weakest link is frequently the user. The personnel element is therefore imperative to ensure security. It includes all aspects that are people-related, including education and awareness training, ensuring that appropriate measures are taken at recruitment and termination of employment and maintaining secure behaviour within the organisation. The legal element refers to the need to ensure compliance with relevant legislation. An increased focus upon legislation from many countries has resulted in significant controls on how organisations use and store information. It is also important for an organisation to comply with legislation in all countries in which it operates. The volume of legislation
is also growing, in part to better protect systems. For example, the following are a sample of the laws that would need to be considered within the UK:

• Computer Misuse Act 1990 (Crown 1990)
• Police and Justice Act 2006 (included amendments to the Computer Misuse Act 1990) (Crown 2006)
• Regulation of Investigatory Powers Act 2000 (Crown 2000a)
• Data Protection Act 1998 (Crown 1998)
• Electronic Communications Act 2000 (Crown 2000b)

In addition to legislation, the legal element also includes regulation. Regulations provide specific details on how the legislation is to be enforced. Many regulations, some industry-specific and others with a wider remit, exist that organisations must legally ensure they comply with. Examples include:

• The US Health Insurance Portability and Accountability Act (HIPAA) requires all organisations involved in the provision of US medical services to conform to its rules over the handling of medical information.
• The US Sarbanes-Oxley Act requires all organisations doing business in the US (whether they are a US company or not) to abide by the act. Given many non-US companies have business interests in the US, they must ensure they conform to the regulation.

Finally, the physical element refers to the physical controls that are put into place to protect systems. Buildings, locked doors and security guards at ingress/egress points are all examples of controls. In the discussion thus far, it has almost been assumed that these facets relate to deliberate misuse of systems. However, it is in the remit of information security to also consider accidental threats. With respect to the physical aspect, accidental threats would include the possibility of fire, floods, power outages or natural disasters. Whilst this is conceivably not an issue for many companies, for large-scale organisations that operate globally, such considerations are key to maintaining availability of systems. Consider, for example, what would happen to financial institutions if they did not give these aspects appropriate consideration. Not only would banking transaction data be lost, access to money would be denied and societies would grind to a halt. The banking crisis of 2009/2010, where large volumes of money were lost on the markets and which consequently caused a global recession, is a good example of the essential role these organisations play in daily life and the impact they have upon individuals.

When considering how best to implement information security within an organisation, it is imperative to ensure an organisation knows what it is protecting and why. Securing assets merely for the sake of securing them is simply not cost-effective, and paying £1,000 to protect an asset worth only £10 does not make sense. To achieve this, organisations can undertake an information security risk assessment. The concept of risk assessment is an understanding of the value of the asset needing protection, the threats against the asset and the likelihood or probability that the threat would become a reality. As illustrated in Fig. 1.2, the compromise of the asset will also have an impact upon the organisation and a subsequent consequence. Once a risk can be
quantified, it is possible to consider the controls and countermeasures that can be put into place to mitigate the risk to an acceptable level.

Fig. 1.2 Information security risk assessment (asset, vulnerability, threat, risk, impact and consequence)

For organisations, particularly smaller entities, a risk assessment approach can be prohibitively expensive. Baseline standards, such as the ISO27002 Information Security Code of Practice (ISO 2005a), provide a comprehensive framework for organisations to implement. Whilst this does not replace the need for a solid risk assessment, it is a useful mechanism for organisations to begin the process of becoming secure without the financial commitment of a risk assessment. The process of assessing an organisation's security posture is not a one-off exercise but, as Fig. 1.3 illustrates, a constantly recurring process, as changes in policy, infrastructure and threats all impact upon the level of protection being provided.

Fig. 1.3 Managing information security (a recurring cycle of risk analysis, recommendations, security policies, implementation, monitoring, maintenance, education and reassessment)

The controls and countermeasures that can be utilised vary from policy-related statements of what will be secured and who is held responsible to technical controls placed on individual assets, such as asset tagging to prevent theft. From an individual system perspective, the controls you would expect to be included are antivirus, a firewall, a password, access control, backup, an intrusion detection or prevention system, anti-phishing and anti-spam filters, spyware detection, an application and operating system (OS) update utility, a logging facility and data encryption. The common relationship between the countermeasures is that each and every control has an effect upon one or more of the three aims of information security: confidentiality, integrity or availability. The effect of the controls is to eliminate or, more precisely, mitigate particular attacks (or sets of attacks). The antivirus provides a mechanism for monitoring all data on the system for malicious software, and the firewall blocks all ports (except for those required by the system), minimising the opportunity for hackers to enter the system. For those ports still open, an intrusion detection system is present, monitoring for any manipulation of the underlying network protocols. Finally, at the application layer, there are application-specific countermeasures, such as anti-spam and anti-phishing, that assist in preventing compromise of those services. As illustrated in Fig. 1.4, these countermeasures are effectively layered, providing a 'defence in depth' strategy, where any single attack needs to compromise more than one security control in order to succeed.
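Returning briefly to the risk-quantification idea introduced at the start of this section, the toy calculation below shows how the value of an asset, the likelihood of compromise and the impact combine into an expected annual loss against which the cost of a control can be judged. All asset names and figures are invented purely for illustration.

# Toy risk register: risk is scored as value x impact x likelihood, and a control is
# worthwhile only when its cost is lower than the reduction in expected loss it buys.
assets = [
    # (asset, value in GBP, likelihood of compromise per year, impact as fraction of value)
    ("Customer database", 250_000, 0.30, 0.8),
    ("Public web server",  40_000, 0.60, 0.5),
    ("Office printer",        500, 0.90, 1.0),
]

for name, value, likelihood, impact in assets:
    annual_loss_expectancy = value * impact * likelihood
    print(f"{name:18s} expected annual loss ~ £{annual_loss_expectancy:,.0f}")

# Spending £1,000 a year to protect the printer fails this test immediately,
# which is the earlier point about not paying £1,000 to protect a £10 asset.

A real assessment would of course draw its likelihood and impact figures from threat intelligence and business analysis rather than guesswork, but the arithmetic that justifies (or rules out) a countermeasure is of this simple form.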
An analysis of Fig. 1.4 also reveals an overwhelming reliance upon a single control. From a remote, Internet-based attack perspective, the hacker has a number of controls to bypass, such as the firewall and intrusion detection system. A target for the attacker would therefore be to disable the security controls. In order to function, these controls are configurable so that individual users can set them up to meet their specific requirements. These configuration settings are secured from misuse by an authentication mechanism. If the firewall software has any software vulnerability, a hacker can take advantage of the weakness to obtain access to the firewall. Once the compromise is successful, the hacker is able to modify the firewall access control policy to allow for further attacks. Similar methods can be applied to the other countermeasures. For instance, switching off or modifying the antivirus is a common strategy deployed by malware. If the system is set up
for remote access, the hacker need only compromise the authentication credentials to obtain access to the system. From a physical attack perspective, the only control preventing access to the system is authentication – assuming they have successfully bypassed the physical protection (if present). Authentication therefore appears across the spectrum of technical controls. It is the vanguard in ensuring the effective and secure operation of the system, applications and security controls.

Fig. 1.4 Typical system security controls (layered from the Internet inwards: network firewall, personal firewall, intrusion prevention system, anti-virus/anti-spyware, anti-spam, anti-phishing and login authentication protecting the computer system, browser and email)

1.3  Fundamental Approaches to Authentication

Authentication is key to maintaining the security chain. In order for authorisation and accountability to function, which in turn maintain confidentiality, integrity and availability, correct authentication of the user must be achieved. Whilst many forms of authentication exist, such as passwords, personal identification numbers (PINs), fingerprint recognition, one-time passwords, graphical passwords, smartcards and Subscriber Identity Modules (SIMs), they all fundamentally reside within one of three categories (Wood 1977):

• Something you know
• Something you have
• Something you are
Something you know refers to a secret knowledge–based approach, where the user has to remember a particular pattern, typically made up of characters and numbers. Something you have refers to a physical item the legitimate user has to unlock the system and is typically referred to as a token. In non-technological applications, tokens include physical keys used to unlock house or car doors. In a technological application, such as remote central locking, the token is an electronic store for a password. Finally, something you are refers to a unique attribute of the user. This unique attribute is transformed into a unique electronic pattern. Techniques based upon something you are are commonly referred to as biometrics.

The password and PIN are both common examples of the secret-knowledge approach. Many systems are multi-user environments and therefore the password is accompanied by a username or claimed identity. Whilst the claimed identity holds no real secrecy, in that a username is relatively simple to establish, both are used in conjunction to verify a user's credentials. For single-user systems, such as mobile phones and personal digital assistants (PDAs), only the password or PIN is required. The strength of the approach resides in the inability of an attacker to successfully select the correct password. It is imperative therefore that the legitimate user selects a password that is not easily guessable by an attacker. Unfortunately, selecting an appropriate password is where the difficulty lies. Several attacks, from social engineering to brute-forcing, can be used to recover passwords and therefore subsequently circumvent the control. Particular password characteristics make this process even simpler to achieve. For instance, a brute-force attack simply tries every permutation of a password until the correct sequence is found. Short passwords are therefore easier to crack than long passwords. Indeed, the strength of the password is very much dependent upon ensuring that the number of possible passwords, or the password space, is so large that it would be computationally difficult to brute-force a password in a timely fashion. What defines timely is open to question depending upon the application. If it is a password to a computer system, it would be dependent on how frequently the password is changed – for instance, a password policy stating that passwords should change monthly would provide a month to a would-be attacker. After that time, the attacker would have to start again. Longer passwords therefore take an exponentially longer time to crack. Guidelines on password length do vary, with password policies in the late 1990s suggesting that eight characters was the minimum. Current attacks such as Ophtcrack (described in more detail in Sect. 4.2.3) are able to crack 14-character random passwords in minutes (Ophcrack 2011).

Brute-forcing a password (if available) represents the most challenging attack for hackers – and that is not particularly challenging if the password length is not sufficient. However, there are even simpler attacks than trying every permutation. This attack exploits the user's inability to select a completely random password. Instead they rely upon words or character sequences that have some meaning. After all, they do have to remember the sequence, and truly random passwords are simply not easy to remember.
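To put the brute-force arithmetic above on a concrete footing, the short sketch below computes how long an exhaustive search of the password space would take for different alphabets and lengths. The guessing rate is an assumption chosen purely for illustration; real figures depend entirely on how the password is stored and on the attacker's hardware.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(alphabet_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust the keyspace at a given guessing rate."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second / SECONDS_PER_YEAR

RATE = 1e9   # an illustrative one billion guesses per second

for alphabet, label in [(26, "lower-case only"), (62, "mixed case and digits"), (95, "full printable ASCII")]:
    for length in (6, 8, 10, 14):
        years = brute_force_years(alphabet, length, RATE)
        print(f"{label:22s} length {length:2d}: {years:.2e} years")

The keyspace grows exponentially with length, which is why each additional character buys far more than the one before it. Exhaustive search is, however, rarely necessary in practice, because users gravitate towards memorable choices rather than truly random sequences.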
A typical example of this is a word with some meaning, take ‘luke’ (my middle name) appended to a number ‘23’ (my age at the time) – luke23. Many people perceive this to be a strong password, as it does not rely upon a single
dictionary word. However, software such as AccessData's Password Recovery Toolkit (AccessData 2011) and Lophcrack (Security Focus 2010) have a process for checking these types of sequence prior to the full brute force. Figure 1.5 illustrates Lophcrack breaking this password, which was achieved in under 2 min.

Fig. 1.5 Lophcrack software

It is worth noting that these types of attack are not always possible and do assume that certain information is available and accessible to an attacker. In many situations where passwords are applied this is not the case. In those situations, as long as the three-attempt rule is in place (i.e. the user gets three attempts to log in, after which the account is locked), these types of brute-forcing attack are not possible. However, because of other weaknesses in using passwords, an attacker gaining access to one system can also frequently obtain access to other systems (where brute-forcing was not possible), as passwords are commonly shared between systems. If you consider the number of systems that you need to access, it soon becomes apparent that this is not an insignificant number and is typically increasing over time. For instance, a user might be expected to password protect:

– Work/home computer
– Work network access
– Work email/home email
– Bank accounts (of which he/she could have many with different providers – mortgage, current, savings, joint account)
– PayPal account
– Amazon account
– Home utilities (gas, electricity, water services all maintain online accounts for payment and monitoring of the account)
– Countless other online services that require one to register
  • 36. 13 1.3 Fundamental Approaches to Authentication It is simply not possible for the average user to remember unique passwords (of sufficient length) for all these services without breaking a security policy. Therefore users will tend to have a small bank of passwords that are reused or simply use the same password on all systems. Due to these weaknesses further attention was placed upon other forms of authentication. From one perspective, tokens seem to solve the underlying problem with passwords – the inability of people to remember a sufficiently long random password. By using technology, the secret knowledge could be placed in a memory chip rather than the human brain. In this fashion, the problems of needing to remember 14-character random passwords, unique to each system and regularly updated to avoid compromise, were all solved. It did, however, introduce one other significant challenge. The physical protection afforded to secret-knowledge approaches by the human brain does not exist within tokens. Theft or abuse of the physical token removes any protection it would provide. It is less likely that your brain can be abused in the same fashion (although techniques such as blackmail, torture and coercion are certainly approaches of forcefully retrieving information from people). The key assumption with token-based authentication is that the token is in the possession of the legitimate user. Similarly, with passwords, this reliance upon the human participant in the authentication process is where the approach begins to fail. With regard to tokens, people have mixed views on their importance and protection. With house and car keys, people tend to be highly protective, with lost or stolen keys resulting in a fairly immediate replacement of the locks and appropriate notification to family members and law enforcement. This level of protection can also be seen with regard to wallets and purses – which are a store for many tokens such as credit cards. When using tokens for logical and physical access control, such as a work identity card, the level of protection diminishes. Without strong policies on the reporting of lost or stolen cards, the assumption that only the authorised user is in possession of the token is weak at best. In both of the first two examples, the conse- quence of misuse would have a direct financial impact on the individual, whereas the final example has (on the face of it) no direct consequence. So the individual is clearly motivated by financial considerations (not unexpectedly!). When it comes to protecting information or data, even if it belongs to them, the motivation to protect themselves is lessened, with many people unappreciative of the value of their infor- mation. An easy example here is the wealth of private information people are happy to share on social networking sites (BBC 2008a). The resultant effect of this insecu- rity is that tokens are rarely utilised in isolation but rather combined with a second form of authentication to provide a two-factor authentication. Tokens and PINs are common combinations for example credit cards. The feasibility of tokens is also brought into question when considering their practical use. People already carry a multitude of token-based authentication credentials and the utilisation of tokens for logical authentication would only serve to increase this number. Would a different token be required to login in to the computer, on to the online bank accounts, onto Amazon and so on? 
Some banks in the UK have already issued a card reader that is used in conjunction with your current
cash card to provide a unique one-time password. This password is then entered onto the system to access particular services (NatWest 2010). Therefore, in order to use the system, the user must remember to take not only the card but also the card reader with them wherever they go (or constrain their use to a single location). With multiple bank accounts from different providers this quickly becomes infeasible.

The third category of authentication, biometrics, serves to overcome the aforementioned weaknesses by removing the reliance upon the individual to either remember a password or remember to take and secure a token. Instead, the approach relies upon unique characteristics already present in the individual. Although the modern interpretation of biometrics certainly places its origins in the twentieth century, biometric techniques have been widely utilised for hundreds of years. There are paintings from prehistoric times signed by handprints and the Babylonians used fingerprints on legal documents. The modern definition of biometrics goes further than simply referring to a unique characteristic. A widely utilised reference by the International Biometrics Group (IBG) defines biometrics as 'the automated use of physiological or behavioural characteristics to determine or verify identity' (IBG 2010a). The principal difference is in the term automated. Whilst many biometric characteristics may exist, they only become a biometric once the process of authentication (or strictly identification) can be achieved in an automated fashion. For example, whilst DNA is possibly one of the more unique biometric characteristics known, it currently fails to qualify as a biometric as it is not a completely automated process. However, significant research is currently being conducted to make it so. The techniques themselves can be broken down into two categories based upon whether the characteristic is a physical attribute of the person or a learnt behaviour. Table 1.2 presents a list of biometric techniques categorised by their physiological or behavioural attribute.

Table 1.2 Biometric techniques
Physiological                    Behavioural
Ear geometry                     Gait recognition
Facial recognition               Handwriting recognition
Facial thermogram                Keystroke analysis
Fingerprint recognition          Mouse dynamics
Hand geometry                    Signature recognition
Iris recognition                 Speaker recognition
Retina recognition
Vascular pattern recognition

Fingerprint recognition is the most popular biometric technique in the market. Linked inherently to its use initially within law enforcement, Automated Fingerprint Identification Systems (AFIS) were amongst the first large-scale biometric systems. Still extensively utilised by law enforcement, fingerprint systems have also found their way into a variety of products such as laptops, mobile phones, mice and physical access controls. Hand geometry was previously a significant market player, principally in time-and-attendance systems; however, these have been surpassed by facial and vascular pattern recognition systems in terms of sales. Both of the latter
techniques have increased in popularity since September 2001 for use in border control and anti-terrorism efforts. Both iris and retina recognition systems are amongst the most effective techniques in uniquely identifying a subject. Retina recognition in particular is quite intrusive to the user, as the sample capture requires close interaction between the user and the capture device. Iris recognition is becoming more popular as the technology for performing authentication at a distance advances.

The behavioural approaches are generally less unique in their characteristics than their physiological counterparts; however, some have become popular due to the application within which they are used. For instance, speaker recognition (also known as voice verification) is widely utilised in telephony-based applications to verify the identity of the user. Gait recognition, the ability to identify a person by the way in which they walk, has received significant focus for use within airports, as identification is possible at a distance. Some of the less well-established techniques include keystroke analysis, which refers to the ability to verify identity based upon the typing characteristics of individuals, and mouse dynamics, verifying identity based upon mouse movements. The latter has yet to make it out of research laboratories.

The biometric definition ends with the ability to determine or verify identity. This refers to the two modes in which a biometric system can operate. To verify, or verification (also referred to as authentication), is the process of confirming that a claimed identity is the authorised user. This approach directly compares against the password model utilised on computer systems, where the user enters a username – thus claiming an identity – and then a password. The system verifies the password against the claimed identity. However, biometrics can also be used to identify, or for identification. In this mode, the user does not claim to be anybody and merely presents their biometric sample to the system. It is up to the system to determine whether the sample is an authorised sample and against which user. From a problem complexity perspective, these are two very different problems.

From a system performance perspective (ignoring compromise due to poor selection etc.), biometric systems do differ from the other forms of authentication. With both secret-knowledge and token-based approaches, the system is able to verify the provided credential with 100% accuracy. The result of the comparison is a Boolean (true or false) decision. In biometric-based systems, whilst the end result is still normally a Boolean decision, that decision is based upon whether the sample has met (or exceeded) a particular similarity score. In a password-based approach, the system would not permit access unless all characters were identical. In a biometric-based system that comparison of similarity is not 100% – or indeed typically anywhere near 100%. This similarity score gives rise to error rates that secret-knowledge and token-based approaches do not have. The two principal error rates are:

– False acceptance rate (FAR) – the rate at which an impostor is wrongly accepted into the system
– False rejection rate (FRR) – the rate at which an authorised user is wrongly rejected from a system

Figure 1.6 illustrates the relationship between these two error rates.
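As a rough illustration of how these two error rates trade off, the sketch below sweeps a decision threshold across synthetic matcher scores. The score distributions are invented for illustration only and do not correspond to any particular biometric technique.

import numpy as np

# Hypothetical matcher scores (higher = more similar); a real evaluation would use
# scores from genuine and impostor trials rather than synthetic data.
rng = np.random.default_rng(0)
genuine  = rng.normal(0.75, 0.10, 1000)   # authorised users matched against their own template
impostor = rng.normal(0.45, 0.12, 1000)   # other people matched against that template

def rates(threshold):
    far = np.mean(impostor >= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine  <  threshold)   # genuine users wrongly rejected
    return far, frr

# Sweep the threshold and find where FAR and FRR are (approximately) equal.
thresholds = np.linspace(0, 1, 1001)
far, frr = np.array([rates(t) for t in thresholds]).T
crossover = np.argmin(np.abs(far - frr))
print(f"FAR = FRR ~ {far[crossover]:.3f} at threshold {thresholds[crossover]:.2f}")

Raising the threshold tightens security (lower FAR) at the expense of convenience (higher FRR), and vice versa; the crossover point located at the end of the sweep corresponds to the equal error rate introduced below.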
Since the two error rates are interdependent – reducing one inevitably increases the other – it is necessary to determine a threshold
value that is a suitable compromise between the level of security required (FAR) and the level of user convenience (FRR). A third error rate, the equal error rate (EER), is a measure of where the FAR and FRR cross and is frequently used as a standard reference point to compare different biometric systems' performance (Ashbourn 2000).

Fig. 1.6 Biometric performance characteristics

The performance of biometric systems has traditionally been the prohibitive factor in widespread adoption (alongside cost), with error rates too high to provide reliable and convenient authentication of the user. This has considerably changed in recent years, with significant enhancements being made in pattern classification to improve performance. Biometrics is considered to be the strongest form of authentication; however, a variety of problems exist that can reduce their effectiveness, such as defining an appropriate threshold level. They also introduce a significant level of additional work, both in the design of the system and the deployment. The evidence for this manifests itself in the fact that few off-the-shelf products exist for large-scale biometric deployment. Instead, vendors offer a bespoke design solution involving expensive consultants – highlighting the immaturity of the marketplace. Both tokens and particularly passwords are simple for software designers to implement and organisations to deploy. Further evidence of this can be found by looking at the levels of adoption over the last 10 years. Table 1.3 illustrates the level of adoption
of biometrics, from 9% in 2001 rising to 21% in 2010,¹ still only a minor player in authentication versus the remaining approaches. Whilst relatively low, it is interesting to note that adoption of biometrics has increased over the 10-year period, whilst the other approaches have stayed fairly static, if not slightly decreasing in more recent years. This growth in biometrics reflects the growing capability, increasing standardisation, increasing performance and decreasing cost of the systems.

Table 1.3 Level of adoption of authentication approaches
                                         2001  2002  2003  2004  2005  2006  2007  2008  2009  2010
Static account/login password             48%   44%   47%   56%   52%   46%   51%   46%   42%   43%
Smartcard and other one-time passwords     a     a     a    35%   42%   38%   35%   36%   33%   35%
Biometrics                                 9%   10%   11%   11%   15%   20%   18%   23%   26%   21%
a Data not available

¹ These figures were compiled from the Computer Security Institute's (CSI) annual Computer Crime and Abuse Survey (which until 2008 was jointly produced with the Federal Bureau of Investigation (FBI)) between the period 2001 and 2010 (CSI, 2001–2010).

Fundamentally, all the approaches come back to the same basic element: a unique piece of information. With something you know, the responsibility for storing that information is placed upon the user; with something you have, it is stored within the token and with something you are, it is stored within the biometric characteristic. Whilst each has its own weaknesses, it is imperative that verifying the identity of the user is completed successfully if systems and information are to remain secure.

1.4  Point-of-Entry Authentication

User authentication to systems, services or devices is performed using a single approach – point-of-entry authentication. When authenticated successfully, the user has access to the system for a period of time without having to re-authenticate, with the period of time being defined on a case-by-case basis. For some systems, a screensaver will lock the system after a few minutes of inactivity; for many Web-based systems, the default time-out on the server (which would store the authenticated credential) is 20 min; and other systems will simply remain open for use until the user manually locks the system or logs out of the service. The point-of-entry mechanism is an intrusive interface that forces a user to authenticate. To better understand and appreciate the current use of authentication, it is relevant to examine the literature on the current use of each of the authentication categories.
An analysis of password use by Schneier (2006) highlighted the weakness of allowing users to select the password. The study was based upon the analysis of 34,000 accounts from a MySpace phishing attack. Sixty-five percent of passwords contained eight characters or fewer and the most common passwords were password1, abc123 and myspace1. As illustrated in Table 1.4, none of the top 20 most frequently used passwords contain any level of sophistication that a password cracker would find remotely challenging.

Another report, by Imperva (2010) some 4 years later, studied the passwords of over 32 million users of Rockyou.com after a hacker obtained access to the database and posted them online. The analysis again highlighted many of the traditional weaknesses of password-based approaches. The report found that 30% of users' passwords were six characters or fewer. Furthermore, 60% of users used a limited set of alphanumeric characters, with 50% using slang/dictionary or trivial passwords. Over 290,000 users selected 123456 as a password.

Table 1.4 Top 20 most common passwords
      Analysis of 34,000 passwords      Analysis of 32 million passwords
      (Schneier 2006)                   (Imperva 2010)
Rank  Password                          Password      Number of users
1     password1                         123456        290,731
2     abc123                            12345         79,078
3     myspace1                          123456789     76,790
4     password                          Password      61,958
5     blink182                          iloveyou      51,622
6     qwerty1                           princess      35,231
7     fuckyou                           rockyou       22,588
8     123abc                            1234567       21,726
9     baseball1                         12345678      20,553
10    football1                         abc123        17,542
11    123456                            Nicole        17,168
12    soccer                            Daniel        16,409
13    monkey1                           babygirl      16,094
14    liverpool1                        monkey        15,294
15    princess1                         Jessica       15,162
16    jordan23                          Lovely        14,950
17    slipknot1                         michael       14,898
18    superman1                         Ashley        14,329
19    iloveyou1                         654321        13,984
20    monkey                            Qwerty        13,856

Further examination of password use reveals that users are not simply content with using simple passwords but continue their bad practice. A study in 2004 found that 70% of people would reveal their computer password in exchange for a chocolate bar (BBC 2004). Thirty-four percent of respondents didn't even need to be bribed and volunteered their password. People are not even being socially engineered to reveal their passwords but are simply giving them up in return for a relatively inexpensive item. If other more sophisticated approaches like social engineering were included, a worryingly significant number of accounts could be compromised,
without the need for any form of technological hacking or brute-forcing. Interestingly, 80% of those questioned were also fed up with passwords and would like a better way to log in to work computer systems.

Research carried out by the author into the use of the PIN on mobile phones in 2005 found that 66% of the 297 respondents utilised the PIN on their device (Clarke and Furnell 2005). In the first instance, this is rather promising – although it is worth considering that the third not using a PIN represents well over a billion people. More concerning, however, was their use of the security:

• 45% of respondents never changed their PIN code from the factory default setting
• A further 42% had only changed their PIN once
• 36% used the same PIN number for multiple services – which in all likelihood would mean they also used the number for credit and cash cards.

Further results from the survey highlight the usability issues associated with PINs that would lead to these types of result: 42% of respondents had experienced some form of problem with their PIN which required a network operator to unlock the device, and only 25% were confident in the protection the PIN would provide.

From a point-of-entry authentication perspective, mobile phones pose a significantly different threat to computer systems. Mobile phones are portable in nature and lack the physical protection afforded to desktop computers. PCs reside in the home or at work, within buildings that can have locks and alarms. Mobile phones are carried around and only have the individual to rely upon to secure the device. The PIN is entered upon switch-on of the device, perhaps in the morning (although normal practice is now to leave the device on permanently), and the device remains on and accessible without re-authentication of the user for the remainder of the day. The device can be misused indefinitely to access the information stored on the device (and, until reported to the network operator, misused to access the Internet and make international telephone calls). A proportion of users are able to lock their device and re-enter the PIN. From the survey, however, only 18% of respondents used this functionality.
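A back-of-the-envelope calculation helps to show why the attempt limit, rather than the size of the PIN space, carries most of the protective burden here. The figures assume a PIN chosen uniformly at random, which the survey results above suggest is optimistic.

# Chance that random guessing succeeds within the attempts allowed before the SIM locks.
def guess_probability(digits: int, attempts: int = 3) -> float:
    keyspace = 10 ** digits
    return attempts / keyspace

for digits in (4, 6, 8):
    print(f"{digits}-digit PIN: {guess_probability(digits):.5%} chance within 3 attempts")

# A 4-digit PIN offers only 10,000 combinations; without the lockout, exhausting
# them by hand or by automation would be trivial, and a factory-default or reused
# PIN removes even that modest barrier.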
  • 43. 20 1 Current Use of User Authentication The assumption that the authorised user is the person using the card obviously does not hold true for a large number of credit card transactions. Moreover, even with a token that you would expect users to be financially motivated to take care of, significant levels of misuse still occur. One of the fundamental issues that gave rise to counterfeit fraud of credit cards is the ease with which the magnetic-based cards could be cloned. It is the magnetic strip of the card that stores the secret information necessary to perform the transaction. A BBC report in 2003 stated that ‘a fraudulent transaction takes place every 8s and cloning is the biggest type of credit card fraud’ (BBC 2003). Whilst smartcard technologies have improved the situation, there is evidence that these are not impervious to attack. Researchers at Cambridge University have found a way to trick the card reader into authenticating a transac- tion without a valid PIN being entered (Espiner 2010). Proximity or radio frequency identification (RFID)-based tokens have also experienced problems with regard to cloning. RFID cards are contactless cards that utilise a wireless signal to transmit the necessary authentication information. One particular type of card, NXP Semiconductor’s Mifare Classic RFID card, was hacked by Dutch researchers (de Winter 2008). The hack fundamentally involves breaking the cryptographic protection, which only takes seconds to complete. The significance of this hack is in tokens that contain the Mifare chip. The chip is used not only in the Dutch transportation system but also in the US (Boston Charlie Card) and the UK (London Oyster Card) (Dayal 2008). Subsequent reports regarding the Oyster card reveal that duplicated Mifare chips can be used for free to travel on the underground (although only for a day due to the asynchronous nature of the system) (BBC 2008b). With over 34 million Oyster cards in circulation, a significant opportunity exists for misuse. Since February 2010, new cards are being distributed that no longer contain the Mifare Classic chip, but that in itself highlights another weakness of token-based approaches, the cost of reissue and replacement. With biometric systems, duplication of the biometric sample is possible. Facial recognition systems could be fooled by a simple photocopied image of the legitimate face (Michael 2009). Fingerprint systems can also be fooled in authorising the user, using rubber or silicon impressions of the legitimate user’s finger (Matsumoto et al. 2002). Unfortunately, whilst the biometric characteristics are carried around with us, they are also easily left behind. Cameras and microphones can capture our face and voice characteristics. Fingerprints are left behind on glass cups we drink from and DNA is shed from our bodies in the form of hair everywhere. There are, however, more severe consequences that can happen. In 2005, the owner of a Mercedes S-Class in Malaysia had his finger chopped off during an attack to steal his car (Kent 2005). This particular model of car required fingerprint authentication to start the car. The thieves were able to bypass the immobiliser, using the severed fingertip, to gain access. With both tokens and secret knowledge, the information could have been handed over without loss of limb. 
This has led more recent research to focus upon the addition of liveness detectors that are able to sense whether a real person (who is alive) is providing the biometric sample or if it is artificial. The problem with point-of-entry authentication is that a Boolean decision is made at the point of access as to whether to permit or deny access. This decision is
  • 44. 21 1.5 Single Sign On and Federated Authentication frequently based upon only a single decision (i.e. a password) or perhaps two with token-based approaches – but this is largely due to tokens providing no real authen- tication security. The point-of-entry approach provides an attacker with an opportunity to study the vulnerability of the system and to devise an appropriate mechanism to circumvent it. As it is a one-off process, no subsequent effort is required on behalf of the attacker and frequently they are able to re-access the system providing the same credential they previously compromised. However, when looking at the available options, the approach taken with point-of-entry seems intuitively logical given the other seemingly limited choices: To authenticate the user every couple of minutes in order to continuously ensure • the user still is the authorised user. To authenticate the user before accessing each individual resource (whether that • be an application or file). The access control decision can therefore be more confident in the authenticity of the user at that specific point in time. Both of the examples above would in practice be far too inconvenient to users and thus increase the likelihood that they would simply switch it off, circumvent it or maintain such a short password sequence that it was simple to enter quickly. Even if we ignore the inconvenience for a moment, these approaches still do not bypass the point-of-entry authentication approach. It still is point-of-entry, but the user has to perform the action more frequently. If an attacker has found a way to compromise the authentication credential, compromising once is no different to compromising it two, three, four or more times. So requesting additional verification of the user does not provide additional security in this case. Authenticating the user periodically with a variety of authentication techniques randomly selected would bypass the compromised credential; however, at what cost in terms of inconvenience to the user? Having to remember multiple passwords or carry several tokens for a single system would simply not be viable. Multiple biometrics would also have cost implications. 1.5  Single Sign On and Federated Authentication As the authentication demands increase upon the user, so technologies have been developed to reduce them, and single sign on and federated authentication are two prime examples. Single sign on allows a user to utilise a single username and pass- word to access all the resources and applications within an organisation. Operationally, this allows users to enter their credentials once and be subsequently permitted to access resources for the remainder of the session. Federated authentication extends this concept outside of the organisation to include other organisations. Obviously, for federated identity to function, organisations need to ensure relevant trust/use policies are in place beforehand. Both approaches reduce the need for the users to repeatedly enter their credentials every time they want to access a network resource.
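Operationally, most single sign on and federated schemes reduce to the pattern sketched below: the user authenticates once to an identity provider, which issues a signed, time-limited assertion that participating services verify instead of prompting for credentials again. The sketch is a simplification; the names, the shared-key trust model and the token format are illustrative stand-ins for what SAML- or OpenID-style deployments actually negotiate.

import hmac, hashlib, json, time, base64

# Shared key between the identity provider and the relying services
# (a stand-in for the certificate/trust relationship a real federation would use).
IDP_KEY = b"demo-only-secret"

def issue_assertion(username: str, lifetime: int = 1200) -> str:
    """Identity provider: authenticate the user once, then return a signed, time-limited assertion."""
    claims = {"sub": username, "exp": int(time.time()) + lifetime}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_assertion(token: str):
    """Relying service: accept the assertion instead of asking for credentials again."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None

token = issue_assertion("nclarke")   # username and password entered once, at the identity provider
print(verify_assertion(token))       # any federated service can now verify the session

The essential property is that the relying services never see the user's password; they only need to trust the identity provider's signature on the assertion.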
  • 45. 22 1 Current Use of User Authentication In enterprise organisations, single sign on is also replaced with reduced sign on. Recognising that organisations place differing levels of risk on information, reduced sign on permits a company to have additional levels of authentication for informa- tion assets that need better protection. For instance, it might implement single sign on functionality with a username and password combination for all low-level infor- mation but require a biometric-based credential for access to more important data. Both single sign on and, more recently, federated authentication have become hugely popular. It is standard practice for large organisations to implement single sign on, and OpenID, a federated identity scheme, has over a billion enabled accounts and over nine million web sites that accept it (Kissel 2009). Whilst these mechanisms do allow access to multiple systems through the use of a single creden- tial, traditionally viewed as a bad practice, the improvement in usability for end- users has overridden this issue. In addition to single sign on described above, there are also examples of what appear to be single sign on used frequently by users on desktop systems and browsers utilising password stores. A password store will simply store all the individual username and password combinations for all your systems/web sites. A single username and password provides access to them. This system is different from normal single sign on in that each of the resources that need access still has its own authentication credentials and the password store acts as a middle layer in providing them, assuming that the key to unlock the store is provided. In single sign on, there is only a single authentication credential and a central service is responsible for managing them. Password stores are therefore open to abuse by attacks that provide access to the local host and to the password store. A potentially more significant issue with password stores is the usability of such approaches. Whilst designed to improve usability they could in many cases inhibit use. Password stores stop users from having to enter their individual authentication credential to each service, which over time is likely to lead to users simply forgetting what they are. When users need to access the service from another computer, or from their own after a system reset, it is likely that they will encounter issues over remembering their credentials. Single sign on and federated identity, whilst helping to remove the burden placed upon users for accessing services and applications, still only provide point-of-entry verification of a user and thus only offer a partial solution to the authentication problem. 1.6  Summary User authentication is an essential component in any secure system. Without it, it is impossible to maintain the confidentiality, integrity and availability of systems. Unlike firewalls, antivirus and encryption, it is also one of the few security controls that all users have to interface and engage with. Both secret-knowledge and token-based approaches rely upon the user to maintain security of the system. A lost or stolen token or shared password will compromise the system. Biometrics do
provide an additional level of security, but are not necessarily impervious to compromise. Current approaches to authentication are arguably therefore failing to meet the needs or expectations of users or organisations. In order to determine what form of authentication would be appropriate, it would be prudent to investigate the nature of the problem that is trying to be solved. With what is the user trying to authenticate? How do different technologies differ in their security expectations? What threats exist and how do they impact the user? What usability considerations need to be taken? The following chapters in Part I of this book will address these issues.

References

AccessData: AccessData password recovery toolkit. AccessData. Available at: http://accessdata.com/products/forensic-investigation/decryption (2011). Accessed 10 Apr 2011
APACS: Fraud: The facts 2008. Association for Payment Clearing Services. Available at: http://www.cardwatch.org.uk/images/uploads/publications/Fruad%20Facts%20202008_links.pdf (2008). Accessed 10 Apr 2011
Ashbourn, J.: Biometrics: Advanced Identity Verification: The Complete Guide. Springer, London (2000). ISBN 978-1852332433
BBC: Credit card cloning. BBC Inside Out. Available at: http://www.bbc.co.uk/insideout/east/series3/credit_card_cloning.shtml (2003). Accessed 10 Apr 2011
BBC: Passwords revealed by sweet deal. BBC News. Available at: http://news.bbc.co.uk/1/hi/technology/3639679.stm (2004). Accessed 10 Apr 2011
BBC: Personal data privacy at risk. BBC News. Available at: http://news.bbc.co.uk/1/hi/business/7256440.stm (2008a). Accessed 10 Apr 2011
BBC: Oyster card hack to be published. BBC News. Available at: http://news.bbc.co.uk/1/hi/technology/7516869.stm (2008b). Accessed 10 Apr 2011
Clarke, N.L., Furnell, S.M.: Authentication of users on mobile telephones – A survey of attitudes and opinions. Comput. Secur. 24(7), 519–527 (2005)
Crown Copyright: Computer Misuse Act. Crown copyright. Available at: http://www.legislation.gov.uk/ukpga/1990/18/contents (1990). Accessed 10 Apr 2011
Crown Copyright: Data Protection Act 1998. Crown copyright. Available at: http://www.legislation.gov.uk/ukpga/1998/29/contents (1998). Accessed 10 Apr 2011
Crown Copyright: Regulation of Investigatory Powers Act. Crown copyright. Available at: http://www.legislation.gov.uk/ukpga/2000/23/contents (2000a). Accessed 10 Apr 2011
Crown Copyright: Electronic Communications Act. Crown copyright. Available at: http://www.legislation.gov.uk/ukpga/2000/7/contents (2000b). Accessed 10 Apr 2011
Crown Copyright: Police and Justice Act. Crown copyright. Available at: http://www.legislation.gov.uk/ukpga/2006/48/contents (2006). Accessed 10 Apr 2011
Dayal, G.: MiFare RFID crack more extensive than previously thought. Computerworld. Available at: http://www.computerworld.com/s/article/9078038/MiFare_RFID_crack_more_extensive_than_previously_thought (2008). Accessed 10 Apr 2011
de Winter, B.: New hack trashes London's Oyster card. Techworld. Available at: http://news.techworld.com/security/105337/new-hack-trashes-londons-oyster-card/ (2008). Accessed 10 Apr 2011
Espiner, T.: Chip and PIN is broken, say researchers. ZDNet UK. Available at: http://www.zdnet.co.uk/news/security-threats/2010/02/11/chip-and-pin-is-broken-say-researchers-40022674/ (2010). Accessed 3 Aug 2010
Furnell, S.M.: Computer Insecurity: Risking the System. Springer, London (2005). ISBN 978-1-85233-943-2
IBG: How is biometrics defined? International Biometrics Group. Available at: http://www.biometricgroup.com/reports/public/reports/biometric_definition.html (2010a). Accessed 10 Apr 2011
Imperva: Consumer password worst practices. Imperva Application Defense Center. Available at: http://www.imperva.com/docs/WP_Consumer_Password_Worst_Practices.pdf (2010). Accessed 10 Apr 2011
ISO: ISO/IEC 27002:2005 Information technology – Security techniques – Code of practice for information security management. International Standards Organisation. Available at: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=50297 (2005a). Accessed 10 Apr 2011
Kent, J.: Malaysia car thieves steal finger. BBC News. Available at: http://news.bbc.co.uk/1/hi/world/asia-pacific/4396831.stm (2005). Accessed 10 Apr 2011
Kissel, B.: OpenID 2009 year in review. OpenID Foundation. Available at: http://openid.net/2009/12/16/openid-2009-year-in-review/ (2009). Accessed 10 Apr 2011
Matsumoto, T., Matsumoto, H., Yamada, K., Hoshino, S.: Impact of artificial 'gummy' fingers on fingerprint systems. Proc. SPIE 4677, 275–289 (2002)
Michael, S.: Facial recognition fails at Black Hat. eSecurity Planet. Available at: http://www.esecurityplanet.com/trends/article.php/3805011/Facial-Recognition-Fails-at-Black-Hat.htm (2009). Accessed 10 Apr 2011
NatWest: The secure way to get more from online banking. NatWest Bank. Available at: http://www.natwest.com/personal/online-banking/g1/banking-safely-online/card-reader.ashx (2010). Accessed 10 Apr 2011
Ophcrack: What is ophcrack? Sourceforge. Available at: http://ophcrack.sourceforge.net/ (2011). Accessed 10 Apr 2011
Schneier, B.: Real-world passwords. Bruce Schneier blog. Available at: http://www.schneier.com/blog/archives/2006/12/realworld_passw.html (2006). Accessed 10 Apr 2011
Security Focus: @Stake LC5. Security Focus. Available at: http://www.securityfocus.com/tools/1005 (2010). Accessed 10 Apr 2011
Wood, H.: The use of passwords for controlling the access to remote computer systems and services. In: Dinardo, C.T. (ed.) Computers and Security, vol. 3, p. 137. AFIPS Press, Montvale (1977)
Chapter 2
The Evolving Technological Landscape

2.1  Introduction

Technology is closely intertwined with modern society, and few activities in our daily life do not rely upon technology in some shape or form – from boiling a kettle and making toast to washing clothes and keeping warm. The complexity of this technology is, however, increasing, with more intelligence and connectivity being added to a whole host of previously simple devices. For instance, home automation enables every electrical device to be independently accessed remotely, from lights and hot water to audio and visual systems. Cars now contain more computing power than the computer that guided Apollo astronauts to the moon (Physics.org 2010). With this increasing interoperability and flexibility comes a risk. What happens when hackers obtain access to your home automation system? Switch devices on, turn up the heating, or switch the fridge off? If hackers gain access to your car, would they be able to perform a denial of service attack? Could they have more underhand motives – perhaps cause an accident, stop the braking or speed the car up? Smart electricity meters are being deployed in the US, UK and elsewhere that permit close monitoring of electricity and gas usage as part of the effort towards reducing the carbon footprint (Anderson and Fluoria 2010). The devices also allow an electricity/gas supplier to manage supplies at times of high usage, by switching electricity off to certain homes whilst maintaining supply to critical services such as hospitals. With smart meters being deployed in every home, an attack on these devices could leave millions of homes without electricity. The impact upon society and the resulting confusion and chaos that would derive is unimaginable.

With this closer integration of technology, ensuring systems remain secure has never been more imperative. However, as society and technology evolve, the problem of what and how to secure systems also changes. Through an appreciation of where technology has come from, where it is heading, the threats against it and the users who use it, it is possible to develop strategies to secure systems that are proactive in their protection rather than reactive to every small change – developing a holistic approach is key to deriving long-term security practice.
  • 49. 26 2 The Evolving Technological Landscape 2.2  Evolution of User Authentication The need to authenticate users was identified early on in the history of computing. Whilst initially for financial reasons – early computers were prohibitively expensive and IT departments needed to ensure they charged the right departments for use – the motivations soon developed into those we recognise today. Of the authentication approaches, passwords were the only choice available in the first instance. Initial implementations simply stored the username and password combinations in clear-text form, allowing anyone with sufficient permission (or moreover anyone who was able to obtain sufficient permission) the access to the file. Recognised as a serious weakness to security, passwords were subsequently stored in a hashed form (Morris and Thomson 1978). This provided significant strength to password secu- rity as accessing the password file no longer revealed the list of passwords. Cryptanalysis of the file is possible; however, success is largely dependent upon whether users are following good password practice or not and the strength of the hashing algorithm used. Advice on what type of password to use has remained fairly constant, with a trend towards more complex and longer passwords as computing power and its abil- ity to brute-force the password space improved. As dictionary-based attacks became more prevalent, the advice changed to ensure that passwords were more random in nature, utilising a mixture of characters, numerals and symbols. The advice on the length of the password has also varied depending upon its use. For instance, on Windows NT machines that utilised the LAN Manager (LM) password, the system would store the password into two separate 7-character hashes. Passwords of 9 char- acters would therefore have the first 7 characters in the first hash and the remaining 2 in the second hash. Cracking a 2-character hash is a trivial task and could subse- quently assist in cracking the first portion of the hash. As such, advice by many IT departments of the day was to have 7- or 14-character passwords only. The policy for password length now varies considerably between professionals and the litera- ture. General guidance suggests passwords of 9 characters or more; however, pass- word crackers such as Ophtcrack are able to crack 14-character LAN Manager (LM) hashes (which were still utilised in Windows XP). Indeed, at the time of writing, Ophtcrack had tables that can crack NTLM hashes (used in Windows Vista) of 6 characters (utilising any combination of upper, lower, special characters and num- bers) – and this will only improve in time. Unfortunately, the fundamental boundary to password length is the capacity for the user to remember it. In 2004, Bill Gates was quoted as saying ‘passwords are dead’ (Kotadia 2004), citing numerous weaknesses and deficiencies that password-based techniques experience. To fill the gap created by the weaknesses in password-based approaches, several token-based technologies were developed. Most notably, the one-time password mechanism was created to combat the issue of having to remember long complex passwords. It also provided protection against replay attacks, as each password could only be utilised once. However, given the threat of lost or stolen tokens, most implementations utilise one-time passwords as a two-factor approach, combining it
  • 50. 27 2.2 Evolution of User Authentication with the traditional username and password for instance, thereby not necessarily removing the issues associated with remembering and maintaining an appropriate password. Until more recently, token-based approaches have been largely utilised by corporate organisations for logical access control of their computer systems, particularly for remote access where increased verification of authenticity is required. The major barrier to widespread adoption is the cost associated with the physical token itself. However, the ubiquitous nature of mobile phones has provided the platform for a new surge in token-based approaches. Approaches utilise the Short-Message-Service (SMS) (also known as the text message) to send the user a one-time password to enter onto the system. Mobile operators, such as O2 (amongst many others) in the UK, utilise this mechanism for initial registration and password renewal processes for access to your online account, as illustrated in Figs. 2.1 and 2.2. Google has also developed its Google Authenticator, a two-step verification approach that allows a user to enter a one-time code in addition to their username and password (Google 2010). The code is delivered via a specialised application installed on the user’s mobile handset, thus taking advantage of Internet-enabled devices (as illustrated in Fig. 2.3). The assumption placed on these approaches is that mobile phones are a highly personal device and as such will benefit from greater physical protection from the user than traditional tokens. The cost of deploying to these devices is also significantly lower than that of providing a dedicated physical token. The growth of the Internet has also resulted in an increased demand upon users to authenticate – everything from the obvious such as financial web sites and corporate access, to less obvious news web sites and online gaming. Indeed, the need to authenticate in many instances has less to do with security and more with possible marketing information that can be gleaned from understanding users’ browsing habits. Arguably this is placing an overwhelming pressure on users to remember a large number of authentication credentials. In addition, the increase in Internet-enabled devices from mobile phones to iPads ensures users are continuously connected, consuming media from online news web sites and communicating using social networks, instant messenger and Skype. Despite the massive change in technology, both in terms of the physical form factor of the device and the increasing mobile nature of the device, and the services the technology enables, the nature of authentication utilised is overwhelmingly still a username and password. Fig. 2.1 O2 web authentication using SMS
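Schemes of this kind generally derive the one-time code from a shared secret rather than storing a list of codes. The short Python sketch below is a minimal illustration in the spirit of the open HOTP/TOTP standards (RFC 4226 and RFC 6238) that many authenticator applications implement; it is not the specific mechanism used by O2 or Google, and the secret, digit length and time step shown are arbitrary illustrative choices.

import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, as in RFC 4226
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    # Time-based variant: the counter is the number of elapsed 30-second periods
    return hotp(secret, int(time.time()) // period, digits)

shared_secret = b"example-shared-secret"   # illustrative value only
print(totp(shared_secret))                 # e.g. '492039', valid for roughly 30 seconds

Because both sides can recompute the code from the shared secret and a counter (or the clock), the server can verify it without any reusable password crossing the network, which is what gives one-time passwords their resistance to replay.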
  • 51. 28 2 The Evolving Technological Landscape Further examination of the mobile phone reveals an interesting evolution of technology, services and authentication. The mobile phone represents a ubiquitous technology (in the developed world) with over 4.3 billion subscribers; almost two-thirds of the world population¹ (GSM Association 2010). The mobile phone, known technically as the Mobile Station (MS), consists of two components: the Mobile Equipment (ME) and a Subscriber Identification Module (SIM). The SIM is a smart card with, amongst other information, the subscriber and network authentication keys. Subscriber authentication on a mobile phone is achieved through the entry of a 4–8-digit number known as a Personal Identification Number (PIN). This point-of-entry system then gives access to the user’s SIM, which will subsequently give the user network access via the International Mobile Subscriber Identifier (IMSI) and the Temporary Mobile Subscriber Identifier (TMSI), as illustrated in Fig. 2.4. Thus the user’s authentication credential is used by the SIM to unlock the necessary credentials for device authentication to the network. The SIM card is a removable token allowing in principle for a degree of personal mobility. For example, a subscriber could place their SIM card into another handset Fig. 2.2 O2 SMS one-time password ¹ This number would include users with more than one subscription, such as a personal and business contract. So this figure would represent a slightly smaller proportion of the total population than stated.
  • 52. 29 2.2 Evolution of User Authentication and use it in the same manner as they would use their own phone with calls being charged to their account. However, the majority of mobile handsets are locked to individual networks, and although the SIM card is in essence an authentication token, in practice the card remains within the mobile handset throughout the life of the handset contract – removing any additional security that might be provided by a token-based authentication technique. Indeed, the lack of use of the SIM as an authentication token has resulted in many manufacturers placing the SIM card holder in inaccessible areas on the device, for infrequent removal. Fig. 2.3 Google Authenticator Fig. 2.4 Terminal-network security protocol
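The point-of-entry character of this arrangement can be made concrete with a small sketch. The three-attempt lock with a PUK fallback mirrors common SIM behaviour, but the class below is a hypothetical illustration rather than an implementation of any handset or SIM specification; the PIN and IMSI values are invented.

class SimCard:
    """Minimal model of point-of-entry PIN verification on a SIM."""
    MAX_ATTEMPTS = 3

    def __init__(self, pin: str, imsi: str):
        self._pin = pin
        self._imsi = imsi          # released for network authentication only after a correct PIN
        self._failures = 0
        self.blocked = False

    def verify_pin(self, candidate: str):
        if self.blocked:
            raise RuntimeError("SIM blocked - PUK required")
        if candidate == self._pin:
            self._failures = 0
            return self._imsi      # network attachment can now proceed
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self.blocked = True
        return None

sim = SimCard(pin="4921", imsi="234150999999999")
print(sim.verify_pin("0000"))      # None: wrong PIN, one attempt used
print(sim.verify_pin("4921"))      # IMSI released; nothing re-checks the user afterwards

Once verify_pin succeeds, nothing in the model ever re-checks the user; the handset stays usable until it is switched off, which is exactly the point-of-entry weakness discussed below.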
  • 53. 30 2 The Evolving Technological Landscape Interestingly, the purpose of the IMSI and TMSI is to authenticate the SIM card itself on the network, and they do not ensure that the person using the phone is actually the registered subscriber. This is typically achieved at switch on using the PIN, although some manufacturers also invoke the PIN mechanism when the mobile is taken out of a stand-by mode. As such, a weakness of the point-of-entry system is that, after the handset is switched on, the device is vulnerable to misuse should it be left unattended or stolen. In addition to the PIN associated with the SIM card, mobile phones also have authentication mechanisms for the device itself. Whether the user is asked for the SIM-based password, the handset-based password or even both depends upon individual handsets and their configuration. The nature of the handset authentication can vary but is typically either a PIN or alphanumeric password on devices that support keyboards. Whilst the mobile phone has the opportunity to take advantage of the stronger two-factor authentication (token and password), practical use of the device on a day-to-day basis has removed the token aspect and minimised the effectiveness of the secret-knowledge approach. A survey involving 297 participants found that 85% of them left their phone on for more than 10 h a day – either switching it on at the start of the day or leaving the device switched on continuously (Clarke and Furnell 2005). More recently, a few handset operators and manufacturers have identified the need to provide more secure authentication mechanisms. For instance, NTT DoCoMo’s F505i handset and Toshiba’s G910 come equipped with a built-in fingerprint sensor, providing biometric authentication of the user (NTT DoCoMo 2003; Toshiba 2010). Although fingerprint technology increases the level of security available to the handset, the implementation of this mechanism has increased handset cost, and even then the technique remains point-of-entry only and intrusive to the subscriber. More notably, however, whilst the original concept of the PIN for first-generation mobile phones may have been appropriate – given the risk associated with lost/stolen devices and the information they stored – from the third generation (3G) and beyond, mobile phones offer a completely different value proposition. It can be argued that handsets represent an even greater enticement for criminals because: 1. More technologically advanced mobile handsets – handsets are far more advanced than previous mobile phones and are more expensive and subsequently attractive to theft, resulting in a financial loss to the subscriber. 2. Availability of data services – networks provide the user with the ability to download and purchase a whole range of data services and products that can be charged to the subscriber’s account. Additionally, networks can provide access to bank accounts, share trading and making micro-payments. Theft and misuse of the handset would result in financial loss for the subscriber. 3. Personal Information – handsets are able to store much more information than previous handsets. Contact lists not only include name and number but addresses, dates of birth and other personal information. Handsets may also be able to access personal medical records and home intranets, and their misuse would result in a personal and financial loss for the subscriber.
  • 54. 31 2.2 Evolution of User Authentication These additional threats were recognised by the architects of 3G networks. The 3GPP devised a set of standards concerning security on 3G handsets. In a document called ‘3G – Security Threats and Requirements’ (3GPP 1999) the requirements for authentication state: It shall be possible for service providers to authenticate users at the start of, and during, service delivery to prevent intruders from obtaining unauthorised access to 3G services by masquerade or misuse of priorities. The important consequence of this standard is to authenticate subscribers during service delivery, an extension of the 2G point-of-entry authentication approach, which requires continuous monitoring and authentication. However, network operators, on the whole, have done little to improve authentication security, let alone provide a mechanism for making it continuous. Even with the advent and deploy- ment of 4G networks in several countries, the process of user authentication has remained the same. In comparison to passwords and tokens, biometrics has quite a different history of use with its initial primary area of application within law enforcement. Sir Francis Galton undertook some of the first research into using fingerprints to uniquely identify people, but Sir Edward Henry is credited with developing that research for use within law enforcement in the 1890s – known as the Henry Classification System (IBG 2003). This initial work provided the foundation for understanding the discriminative nature of human characteristics. However, it is not until the 1960s that biometric systems, as defined by the modern definition, began to be developed. Some of the initial work was focused upon developing automated approaches to replace the paper-based fingerprint searching law enforcement agencies had to undertake. As computing power improved throughout the 1970s significant advances in biometrics have been made, with a variety of research being published throughout this period on new biometric approaches. Early on, approaches such as speaker, face, iris and signature were all identified as techniques that would yield positive results. Whilst early systems were developed and implemented through the 1980s and 1990s, it was not until 1999 that the FBI’s Integrated Automated Fingerprint Identification System (IAFIS) became operational (FBI 2011), thus illustrating that large-scale biometric systems are not simple to design and implement in practice. With respect to its use within or by organisations, biometrics was more com- monly used for physical access control rather than logical in the first instance. Hand geometry found early applications in time and attendance systems. The marketplace was also dominated with vendors providing bespoke solutions to clients. It simply wasn’t possible to purchase off-the-shelf enterprise solutions for biometrics; they had to be individually designed. Only 9% of respondents from the 2001 Computer Crime and Abuse Survey had implemented biometric systems (Power 2001). Significant advances have been made in the last 10 years with the development of interoperability standards to enable a move away from dedicated bespoke systems to providing choice and a flexible upgrade path for customers, these efforts demon- strating the increasing maturity of the domain.
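The 3GPP requirement quoted above asks for authentication during service delivery, not merely at its start. The toy sketch below contrasts that idea with a one-off point-of-entry check; the decaying confidence score, decay rate and threshold are entirely hypothetical values chosen for illustration and do not reproduce any particular scheme.

CONFIDENCE_THRESHOLD = 0.6    # hypothetical acceptance level
DECAY_PER_MINUTE = 0.1        # trust erodes while no fresh evidence of identity arrives

def monitor(confidence: float, minutes_since_last_evidence: float) -> str:
    """Toy continuous-authentication decision: degrade trust over time and
    demand re-verification once it falls below the threshold."""
    effective = confidence - DECAY_PER_MINUTE * minutes_since_last_evidence
    return "allow" if effective >= CONFIDENCE_THRESHOLD else "re-authenticate"

print(monitor(confidence=0.9, minutes_since_last_evidence=1))   # allow
print(monitor(confidence=0.9, minutes_since_last_evidence=5))   # re-authenticate

A point-of-entry check corresponds to evaluating this decision once at switch on; the requirement above implies re-evaluating it throughout the session as new evidence arrives.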
  • 56. Successors of Cavalieri. able to effect numerous integrations relating to the areas of portions of conic sections and the volumes generated by the revolution of these portions about various axes. At a later date, and partly in answer to an attack made upon him by Paul Guldin, Cavalieri published a treatise entitled Exercitationes geometricae sex (1647), in which he adapted his method to the determination of centres of gravity, in particular for solids of variable density. Among the results which he obtained is that which we should now write ∫_0^x x^m dx = x^(m+1)/(m + 1), (m integral). He regarded the problem thus solved as that of determining the sum of the mth powers of all the lines drawn across a parallelogram parallel to one of its sides. At this period scientific investigators communicated their results to one another through one or more intermediate persons. Such intermediaries were Pierre de Carcavy and Pater Marin Mersenne; and among the writers thus in communication were Bonaventura Cavalieri, Christiaan Huygens, Galileo Galilei, Giles Personnier de Roberval, Pierre de Fermat, Evangelista Torricelli, and a little later Blaise Pascal; but the letters of Carcavy or Mersenne would probably come into the hands of any man who was likely to be interested in the matters discussed. It often happened that, when some new method was invented, or some new result obtained, the method or result was quickly known to a wide circle, although it might not be printed until after the lapse of a long time. When Cavalieri was printing his two treatises there was much discussion of
  • 57. Fermat’s method of Integration. the problem of quadratures. Roberval (1634) regarded an area as made up of “infinitely” many “infinitely” narrow strips, each of which may be considered to be a rectangle, and he had similar ideas in regard to lengths and volumes. He knew how to approximate to the quantity which we express by ∫_0^1 x^m dx by the process of forming the sum (0^m + 1^m + 2^m + ... + (n − 1)^m) / n^(m+1), and he claimed to be able to prove that this sum tends to 1/(m + 1), as n increases, for all positive integral values of m. The method of integrating x^m by forming this sum was found also by Fermat (1636), who stated expressly that he arrived at it by generalizing a method employed by Archimedes (for the cases m = 1 and m = 2) in his books on Conoids and Spheroids and on Spirals (see T. L. Heath, The Works of Archimedes, Cambridge, 1897). Fermat extended the result to the case where m is fractional (1644), and to the case where m is negative. This latter extension and the proofs were given in his memoir, Proportionis geometricae in quadrandis parabolis et hyperbolis usus, which appears to have received a final form before 1659, although not published until 1679. Fermat did not use fractional or negative indices, but he regarded his problems as the quadratures of parabolas and hyperbolas of various orders. His method was to divide the interval of integration into parts by means of intermediate points the abscissae of which are in geometric progression. In the process of § 5 above, the points M must be chosen according to this rule. This restrictive condition being understood, we may say that Fermat’s formulation of the
  • 58. Various Integrations. problem of quadratures is the same as our definition of a definite integral. The result that the problem of quadratures could be solved for any curve whose equation could be expressed in the form y = x^m (m ≠ −1), or in the form y = a_1 x^(m_1) + a_2 x^(m_2) + ... + a_n x^(m_n), where none of the indices is equal to −1, was used by John Wallis in his Arithmetica infinitorum (1655) as well as by Fermat (1659). The case in which m = −1 was that of the ordinary rectangular hyperbola; and Gregory of St Vincent in his Opus geometricum quadraturae circuli et sectionum coni (1647) had proved by the method of exhaustions that the area contained between the curve, one asymptote, and two ordinates parallel to the other asymptote, increases in arithmetic progression as the distance between the ordinates (the one nearer to the centre being kept fixed) increases in geometric progression. Fermat described his method of integration as a logarithmic method, and thus it is clear that the relation between the quadrature of the hyperbola and logarithms was understood although it was not expressed analytically. It was not very long before the relation was used for the calculation of logarithms by Nicolaus Mercator in his Logarithmotechnia (1668). He began by writing the equation of the curve in the form y = 1/(1 + x), expanded this expression in powers of x by the method of division, and integrated it term by term in accordance with the well-
  • 59. Integration before the Integral Calculus. Fermat’s methods of Differentiation. understood rule for finding the quadrature of a curve given by such an equation as that written at the foot of p. 325. By the middle of the 17th century many mathematicians could perform integrations. Very many particular results had been obtained, and applications of them had been made to the quadrature of the circle and other conic sections, and to various problems concerning the lengths of curves, the areas they enclose, the volumes and superficial areas of solids, and centres of gravity. A systematic account of the methods then in use was given, along with much that was original on his part, by Blaise Pascal in his Lettres de Amos Dettonville sur quelques-unes de ses inventions en géométrie (1659). 16. The problem of maxima and minima and the problem of tangents had also by the same time been effectively solved. Oresme in the 14th century knew that at a point where the ordinate of a curve is a maximum or a minimum its variation from point to point of the curve is slowest; and Kepler in the Stereometria doliorum remarked that at the places where the ordinate passes from a smaller value to the greatest value and then again to a smaller value, its variation becomes insensible. Fermat in 1629 was in possession of a method which he then communicated to one Despagnet of Bordeaux, and which he referred to in a letter to Roberval of 1636. He communicated it to René Descartes early in 1638 on receiving a copy of Descartes’s Géométrie (1637), and with it he sent to Descartes an account of his methods for solving the problem of tangents and for determining centres of gravity.
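Both of the integration devices just described can be checked numerically. The short Python sketch below (illustrative only) forms the sum (0^m + 1^m + ... + (n − 1)^m)/n^(m+1) used by Roberval and Fermat, which settles towards 1/(m + 1) as n grows, and then repeats Mercator’s term-by-term quadrature of y = 1/(1 + x) to approximate a natural logarithm.

from math import log

def power_sum_quadrature(m: int, n: int) -> float:
    """(0^m + 1^m + ... + (n-1)^m) / n^(m+1): the Roberval-Fermat
    approximation to the area under y = x^m between 0 and 1."""
    return sum(k ** m for k in range(n)) / n ** (m + 1)

for n in (10, 100, 1000):
    print(n, power_sum_quadrature(3, n))       # tends to 1/(3 + 1) = 0.25

def mercator_log(x: float, terms: int = 50) -> float:
    """Integrate 1/(1+x) = 1 - x + x^2 - ... term by term:
    log(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

print(mercator_log(0.5), log(1.5))             # the series converges to log(1.5)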
  • 60. Fig. 6. Fermat’s method for maxima and minima is essentially our method. Expressed in a more modern notation, what he did was to begin by connecting the ordinate y and the abscissa x of a point of a curve by an equation which holds at all points of the curve, then to subtract the value of y in terms of x from the value obtained by substituting x + E for x, then to divide the difference by E, to put E = 0 in the quotient, and to equate the quotient to zero. Thus he differentiated with respect to x and equated the differential coefficient to zero. Fermat’s method for solving the problem of tangents may be explained as follows:—Let (x, y) be the coordinates of a point P of a curve, (x′, y′), those of a neighbouring point P′ on the tangent at P, and let MM′ = E (fig. 6). From the similarity of the triangles P′TM′, PTM we have y′ : A − E = y : A, where A denotes the subtangent TM. The point P′ being near the curve, we may substitute in the equation of the curve x − E for x and (yA − yE)/A for y. The equation of the curve is approximately satisfied. If it is taken to be satisfied exactly, the result is an equation of the form φ(x, y, A, E) = 0, the left-hand member of which is divisible by E. Omitting the factor E, and putting E = 0 in the remaining factor, we have an equation which gives A. In this problem of tangents also Fermat found the required result by a process equivalent to differentiation.
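Fermat’s procedure can be traced mechanically with a modern symbolic package. The Python sketch below restates it in present-day notation for the classical problem of dividing a length b so that the product x(b − x) of the parts is greatest: form f(x + E) − f(x), divide by E, then put E = 0 and equate the result to zero. The use of sympy is an anachronistic convenience for illustration, not Fermat’s own algebra.

import sympy as sp

x, E, b = sp.symbols('x E b')
f = x * (b - x)                                 # quantity to be maximised

difference = sp.expand(f.subs(x, x + E) - f)    # f(x + E) - f(x)
quotient = sp.simplify(difference / E)          # divide by E
equation = quotient.subs(E, 0)                  # then put E = 0

print(sp.solve(sp.Eq(equation, 0), x))          # [b/2]: the segment is bisected

The same substitution of x + E (and of the corresponding ordinate) into the equation of a curve, followed by division by E and the rejection of the remaining terms in E, yields the subtangent A in the tangent construction described above.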
  • 61. Fig. 7. Fermat gave several examples of the application of his method; among them was one in which he showed that he could differentiate very complicated irrational functions. For such functions his method was to begin by obtaining a rational equation. In rationalizing equations Fermat, in other writings, used the device of introducing new variables, but he did not use this device to simplify the process of differentiation. Some of his results were published by Pierre Hérigone in his Supplementum cursus mathematici (1642). His communication to Descartes was not published in full until after his death (Fermat, Opera varia, 1679). Methods similar to Fermat’s were devised by René de Sluse (1652) for tangents, and by Johannes Hudde (1658) for maxima and minima. Other methods for the solution of the problem of tangents were devised by Roberval and Torricelli, and published almost simultaneously in 1644. These methods were founded upon the composition of motions, the theory of which had been taught by Galileo (1638), and, less completely, by Roberval (1636). Roberval and Torricelli could construct the tangents of many curves, but they did not arrive at Fermat’s artifice. This artifice is that which we have noted in § 10 as the fundamental artifice of the infinitesimal calculus. 17. Among the comparatively few mathematicians who before 1665 could perform differentiations was Isaac Barrow. In his book entitled Lectiones opticae et geometricae, written apparently in 1663, 1664, and published in 1669, 1670, he gave a method of tangents like that of Roberval and Torricelli, compounding two velocities in the directions of the axes of x and y to obtain a resultant along the tangent to a curve. In an appendix to this book
  • 62. Barrow’s Differential Triangle. Barrow’s Inversion- theorem. he gave another method which differs from Fermat’s in the introduction of a differential equivalent to our dy as well as dx. Two neighbouring ordinates PM and QN of a curve (fig. 7) are regarded as containing an indefinitely small (indefinite parvum) arc, and PR is drawn parallel to the axis of x. The tangent PT at P is regarded as identical with the secant PQ, and the position of the tangent is determined by the similarity of the triangles PTM, PQR. The increments QR, PR of the ordinate and abscissa are denoted by a and e; and the ratio of a to e is determined by substituting x + e for x and y + a for y in the equation of the curve, rejecting all terms which are of order higher than the first in a and e, and omitting the terms which do not contain a or e. This process is equivalent to differentiation. Barrow appears to have invented it himself, but to have put it into his book at Newton’s request. The triangle PQR is sometimes called “Barrow’s differential triangle.” The reciprocal relation between differentiation and integration (§ 6) was first observed explicitly by Barrow in the book cited above. If the quadrature of a curve y = ƒ(x) is known, so that the area up to the ordinate x is given by F(x), the curve y = F(x) can be drawn, and Barrow showed that the subtangent of this curve is measured by the ratio of its ordinate to the ordinate of the original curve. The curve y = F(x) is often called the “quadratrix” of the original curve; and the result has been called “Barrow’s inversion-theorem.” He did not use it as we do for the determination of quadratures, or indefinite integrals, but for the solution of problems of the kind which were then called “inverse problems of tangents.” In these
  • 63. Nature of the discovery called the Infinitesimal Calculus. problems it was sought to determine a curve from some property of its tangent, e.g. the property that the subtangent is proportional to the square of the abscissa. Such problems are now classed under “differential equations.” When Barrow wrote, quadratures were familiar and differentiation unfamiliar, just as hyperbolas were trusted while logarithms were strange. The functional notation was not invented till long afterwards (see Function), and the want of it is felt in reading all the mathematics of the 17th century. 18. The great secret which afterwards came to be called the “infinitesimal calculus” was almost discovered by Fermat, and still more nearly by Barrow. Barrow went farther than Fermat in the theory of differentiation, though not in the practice, for he compared two increments; he went farther in the theory of integration, for he obtained the inversion-theorem. The great discovery seems to consist partly in the recognition of the fact that differentiation, known to be a useful process, could always be performed, at least for the functions then known, and partly in the recognition of the fact that the inversion-theorem could be applied to problems of quadrature. By these steps the problem of tangents could be solved once for all, and the operation of integration, as we call it, could be rendered systematic. A further step was necessary in order that the discovery, once made, should become accessible to mathematicians in general; and this step was the introduction of a suitable notation. The definite abandonment of the old tentative methods of integration in favour of the method in which this operation is regarded as the inverse of differentiation was especially the work of Isaac Newton; the precise formulation of simple rules for
  • 64. Newton’s investigations. the process of differentiation in each special case, and the introduction of the notation which has proved to be the best, were especially the work of Gottfried Wilhelm Leibnitz. This statement remains true although Newton invented a systematic notation, and practised differentiation by rules equivalent to those of Leibnitz, before Leibnitz had begun to work upon the subject, and Leibnitz effected integrations by the method of recognizing differential coefficients before he had had any opportunity of becoming acquainted with Newton’s methods. 19. Newton was Barrow’s pupil, and he knew to start with in 1664 all that Barrow knew, and that was practically all that was known about the subject at that time. His original thinking on the subject dates from the year of the great plague (1665- 1666), and it issued in the invention of the “Calculus of Fluxions,” the principles and methods of which were developed by him in three tracts entitled De analysi per aequationes numero terminorum infinitas, Methodus fluxionum et serierum infinitarum, and De quadratura curvarum. None of these was published until long after they were written. The Analysis per aequationes was composed in 1666, but not printed until 1711, when it was published by William Jones. The Methodus fluxionum was composed in 1671 but not printed till 1736, nine years after Newton’s death, when an English translation was published by John Colson. In Horsley’s edition of Newton’s works it bears the title Geometria analytica. The Quadratura appears to have been composed in 1676, but was first printed in 1704 as an appendix to Newton’s Opticks. 20. The tract De Analysi per aequationes ... was sent by Newton to Barrow, who sent it to John Collins with a request
  • 65. Newton’s method of Series. that it might be made known. One way of making it known would have been to print it in the Philosophical Transactions of the Royal Society, but this course was not adopted. Collins made a copy of the tract and sent it to Lord Brouncker, but neither of them brought it before the Royal Society. The tract contains a general proof of Barrow’s inversion-theorem which is the same in principle as that in § 6 above. In this proof and elsewhere in the tract a notation is introduced for the momentary increment (momentum) of the abscissa or area of a curve; this “moment” is evidently meant to represent a moment of time, the abscissa representing time, and it is effectively the same as our differential element—the thing that Fermat had denoted by E, and Barrow by e, in the case of the abscissa. Newton denoted the moment of the abscissa by o, that of the area z by ov. He used the letter v for the ordinate y, thus suggesting that his curve is a velocity-time graph such as Galileo had used. Newton gave the formula for the area of a curve v = x^m (m ≠ −1) in the form z = x^(m+1)/(m + 1). In the proof he transformed this formula to the form z^n = c^n x^p, where n and p are positive integers, substituted x + o for x and z + ov for z, and expanded by the binomial theorem for a positive integral exponent, thus obtaining the relation z^n + n z^(n−1) ov + ... = c^n (x^p + p x^(p−1) o + ...), from which he deduced the relation n z^(n−1) v = c^n p x^(p−1) by omitting the equal terms z^n and c^n x^p and dividing the remaining terms by o, tacitly putting o = 0 after division. This
  • 66. relation is the same as v = x^m. Newton pointed out that, conversely, from the relation v = x^m the relation z = x^(m+1) / (m + 1) follows. He applied his formula to the quadrature of curves whose ordinates can be expressed as the sum of a finite number of terms of the form ax^m; and gave examples of its application to curves in which the ordinate is expressed by an infinite series, using for this purpose the binomial theorem for negative and fractional exponents, that is to say, the expansion of (1 + x)^n in an infinite series of powers of x. This theorem he had discovered; but he did not in this tract state it in a general form or give any proof of it. He pointed out, however, how it may be used for the solution of equations by means of infinite series. He observed also that all questions concerning lengths of curves, volumes enclosed by surfaces, and centres of gravity, can be formulated as problems of quadratures, and can thus be solved either in finite terms or by means of infinite series. In the Quadratura (1676) the method of integration which is founded upon the inversion-theorem was carried out systematically. Among other results there given is the quadrature of curves expressed by equations of the form y = x^n (a + bx^m)^p; this has passed into text-books under the title “integration of binomial differentials” (see § 49). Newton announced the result in letters to Collins and Oldenburg of 1676.
  • 67. Newton’s method of Fluxions. short time, is represented by ẋo. The problems of the calculus are stated to be (i.) to find the velocity at any time when the distance traversed is given; (ii.) to find the distance traversed when the velocity is given. The first of these leads to differentiation. In any rational equation containing x and y the expressions x + ẋo and y + ẏo are to be substituted for x and y, the resulting equation is to be divided by o, and afterwards o is to be omitted. In the case of irrational functions, or rational functions which are not integral, new variables are introduced in such a way as to make the equations contain rational integral terms only. Thus Newton’s rules of differentiation would be in our notation the rules (i.), (ii.), (v.) of § 11, together with the particular result which we write d(x^m)/dx = m x^(m−1), (m integral), a result which Newton obtained by expanding (x + ẋo)^m by the binomial theorem. The second problem is the problem of integration, and Newton’s method for solving it was the method of series founded upon the particular result which we write ∫ x^m dx = x^(m+1)/(m + 1). Newton added applications of his methods to maxima and minima, tangents and curvature. In a letter to Collins of date 1672 Newton stated that he had certain methods, and he described certain results which he had found by using them. These methods and results are those which are to be found in
  • 68. Publication of the Fluxional Notation. 22. Newton gave the fluxional notation also in the tract De Quadratura curvarum (1676), and he there added to it notation for the higher differential coefficients and for indefinite integrals, as we call them. Just as x, y, z, ... are fluents of which ẋ, ẏ, ż, ... are the fluxions, so ẋ, ẏ, ż, ... can be treated as fluents of which the fluxions may be denoted by ẍ, ÿ, z̈, ... In like manner the fluxions of these may be written with a third dot over the letters, and so on. Again x, y, z, ... may be regarded as fluxions of which the fluents may be denoted by x́, ý, ź, ... and these again as fluxions of other quantities denoted by x̋, y̋, z̋, ... and so on. No use was made of the notation x́, x̋, ... in the course of the tract. The first publication of the fluxional notation was made by Wallis in the second edition of his Algebra (1693) in the form of extracts from communications made to him by Newton in 1692. In this account of the method the symbols 0, ẋ, ẍ, ... occur, but not the symbols x́, x̋, .... Wallis’s treatise also contains Newton’s formulation of the problems of the calculus in the words Data aequatione fluentes quotcumque quantitates involvente fluxiones invenire et vice versa (“an equation containing any number of fluent quantities being given, to find their fluxions and vice versa”). In the Philosophiae naturalis principia mathematica (1687), commonly called the “Principia,” the words “fluxion” and “moment” occur in a lemma in the second book; but the notation which is characteristic of the calculus of fluxions is nowhere used.
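The substitution rule described in § 21 (replace x and y by x + ẋo and y + ẏo, divide by o, and then omit the terms that still contain o) can likewise be followed symbolically. In the Python sketch below the cubic chosen is merely an illustrative rational equation of the kind Newton treats, and xdot, ydot stand for the fluxions ẋ and ẏ.

import sympy as sp

x, y, o, a = sp.symbols('x y o a')
xd, yd = sp.symbols('xdot ydot')           # the fluxions of x and y

F = x**3 - a*x**2 + a*x*y - y**3           # an illustrative rational equation F = 0

expanded = sp.expand(F.subs({x: x + xd*o, y: y + yd*o}))
quotient = sp.expand((expanded - F) / o)   # remove F (which is 0) and divide by o
fluxional = quotient.subs(o, 0)            # omit the terms that still contain o

print(fluxional)
# 3*x**2*xdot - 2*a*x*xdot + a*x*ydot + a*xdot*y - 3*y**2*ydot (terms may be ordered differently)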
  • 69. Retarded Publication of the method of Fluxions. 23. It is difficult to account for the fragmentary manner of publication of the Fluxional Calculus and for the long delays which took place. At the time (1671) when Newton composed the Methodus fluxionum he contemplated bringing out an edition of Gerhard Kinckhuysen’s treatise on algebra and prefixing his tract to this treatise. In the same year his “Theory of Light and Colours” was published in the Philosophical Transactions, and the opposition which it excited led to the abandonment of the project with regard to fluxions. In 1680 Collins sought the assistance of the Royal Society for the publication of the tract, and this was granted in 1682. Yet it remained unpublished. The reason is unknown; but it is known that about 1679, 1680, Newton took up again the studies in natural philosophy which he had intermitted for several years, and that in 1684 he wrote the tract De motu which was in some sense a first draft of the Principia, and it may be conjectured that the fluxions were held over until the Principia should be finished. There is also reason to think that Newton had become dissatisfied with the arguments about infinitesimals on which his calculus was based. In the preface to the De quadratura curvarum (1704), in which he describes this tract as something which he once wrote (“olim scripsi”) he says that there is no necessity to introduce into the method of fluxions any argument about infinitely small quantities; and in the Principia (1687) he adopted instead of the method of fluxions a new method, that of “Prime and Ultimate Ratios.” By the aid of this method it is possible, as Newton knew, and as was afterwards seen by others, to found the calculus of fluxions on an irreproachable method of limits. For the purpose of explaining his discoveries in dynamics and astronomy Newton used the method of
  • 70. Leibnitz’s course of discovery. limits only, without the notation of fluxions, and he presented all his results and demonstrations in a geometrical form. There is no doubt that he arrived at most of his theorems in the first instance by using the method of fluxions. Further evidence of Newton’s dissatisfaction with arguments about infinitely small quantities is furnished by his tract Methodus diferentialis, published in 1711 by William Jones, in which he laid the foundations of the “Calculus of Finite Differences.” 24. Leibnitz, unlike Newton, was practically a self-taught mathematician. He seems to have been first attracted to mathematics as a means of symbolical expression, and on the occasion of his first visit to London, early in 1673, he learnt about the doctrine of infinite series which James Gregory, Nicolaus Mercator, Lord Brouncker and others, besides Newton, had used in their investigations. It appears that he did not on this occasion become acquainted with Collins, or see Newton’s Analysis per aequationes, but he purchased Barrow’s Lectiones. On returning to Paris he made the acquaintance of Huygens, who recommended him to read Descartes’ Géométrie. He also read Pascal’s Lettres de Dettonville, Gregory of St Vincent’s Opus geometricum, Cavalieri’s Indivisibles and the Synopsis geometrica of Honoré Fabri, a book which is practically a commentary on Cavalieri; it would never have had any importance but for the influence which it had on Leibnitz’s thinking at this critical period. In August of this year (1673) he was at work upon the problem of tangents, and he appears to have made out the nature of the solution—the method involved in Barrow’s differential triangle—for himself by the aid of a diagram drawn by Pascal in a demonstration of the formula for the area of a spherical surface. He saw that the problem of the relation between the differences of neighbouring ordinates and the ordinates
  • 71. themselves was the important problem, and then that the solution of this problem was to be effected by quadratures. Unlike Newton, who arrived at differentiation and tangents through integration and areas, Leibnitz proceeded from tangents to quadratures. When he turned his attention to quadratures and indivisibles, and realized the nature of the process of finding areas by summing “infinitesimal” rectangles, he proposed to replace the rectangles by triangles having a common vertex, and obtained by this method the result which we write 1⁄4π = 1 − 1⁄3 + 1⁄5 − 1⁄7 + ... In 1674 he sent an account of his method, called “transmutation,” along with this result to Huygens, and early in 1675 he sent it to Henry Oldenburg, secretary of the Royal Society, with inquiries as to Newton’s discoveries in regard to quadratures. In October of 1675 he had begun to devise a symbolical notation for quadratures, starting from Cavalieri’s indivisibles. At first he proposed to use the word omnia as an abbreviation for Cavalieri’s “sum of all the lines,” thus writing omnia y for that which we write “∫ ydx,” but within a day or two he wrote “∫ y”. He regarded the symbol “∫” as representing an operation which raises the dimensions of the subject of operation—a line becoming an area by the operation—and he devised his symbol “d” to represent the inverse operation, by which the dimensions are diminished. He observed that, whereas “∫” represents “sum,” “d” represents “difference.” His notation appears to have been practically settled before the end of 1675, for in November he wrote ∫ ydy = ½ y2, just as we do now. 25. In July of 1676 Leibnitz received an answer to his inquiry in regard to Newton’s methods in a letter written by Newton to Oldenburg. In this letter Newton gave a general statement of the
  • 72. Correspondenc e of Newton and Leibnitz. binomial theorem and many results relating to series. He stated that by means of such series he could find areas and lengths of curves, centres of gravity and volumes and surfaces of solids, but, as this would take too long to describe, he would illustrate it by examples. He gave no proofs. Leibnitz replied in August, stating some results which he had obtained, and which, as it seemed, could not be obtained easily by the method of series, and he asked for further information. Newton replied in a long letter to Oldenburg of the 24th of October 1676. In this letter he gave a much fuller account of his binomial theorem and indicated a method of proof. Further he gave a number of results relating to quadratures; they were afterwards printed in the tract De quadratura curvarum. He gave many other results relating to the computation of natural logarithms and other calculations in which series could be used. He gave a general statement, similar to that in the letter to Collins, as to the kind of problems relating to tangents, maxima and minima, c., which he could solve by his method, but he concealed his formulation of the calculus in an anagram of transposed letters. The solution of the anagram was given eleven years later in the Principia in the words we have quoted from Wallis’s Algebra. In neither of the letters to Oldenburg does the characteristic notation of the fluxional calculus occur, and the words “fluxion” and “fluent” occur only in anagrams of transposed letters. The letter of October 1676 was not despatched until May 1677, and Leibnitz answered it in June of that year. In October 1676 Leibnitz was in London, where he made the acquaintance of Collins and read the Analysis per aequationes, and it seems to have been supposed afterwards that he then read Newton’s letter of October 1676, but he left London before Oldenburg received this letter. In his answer of June 1677
  • 73. Leibnitz’s Differential Calculus. Leibnitz gave Newton a candid account of his differential calculus, nearly in the form in which he afterwards published it, and explained how he used it for quadratures and inverse problems of tangents. Newton never replied. 26. In the Acta eruditorum of 1684 Leibnitz published a short memoir entitled Nova methodus pro maximis et minimis, itemque tangentibus, quae nec fractas nec irrationales quantitates moratur, et singulare pro illis calculi genus. In this memoir the differential dx of a variable x, considered as the abscissa of a point of a curve, is said to be an arbitrary quantity, and the differential dy of a related variable y, considered as the ordinate of the point, is defined as a quantity which has to dx the ratio of the ordinate to the subtangent, and rules are given for operating with differentials. These are the rules for forming the differential of a constant, a sum (or difference), a product, a quotient, a power (or root). They are equivalent to our rules (i.)-(iv.) of § 11 and the particular result d(x^m) = m x^(m−1) dx. The rule for a function of a function is not stated explicitly but is illustrated by examples in which new variables are introduced, in much the same way as in Newton’s Methodus fluxionum. In connexion with the problem of maxima and minima, it is noted that the differential of y is positive or negative according as y increases or decreases when x increases, and the discrimination of maxima from minima depends upon the sign of ddy, the differential of dy. In connexion with the problem of tangents the differentials are said to be proportional to the momentary increments of the abscissa and ordinate. A tangent is defined as a line joining two “infinitely” near
  • 74. Development of the Calculus. points of a curve, and the “infinitely” small distances (e.g., the distance between the feet of the ordinates of such points) are said to be expressible by means of the differentials (e.g., dx). The method is illustrated by a few examples, and one example is given of its application to “inverse problems of tangents.” Barrow’s inversion- theorem and its application to quadratures are not mentioned. No proofs are given, but it is stated that they can be obtained easily by any one versed in such matters. The new methods in regard to differentiation which were contained in this memoir were the use of the second differential for the discrimination of maxima and minima, and the introduction of new variables for the purpose of differentiating complicated expressions. A greater novelty was the use of a letter (d), not as a symbol for a number or magnitude, but as a symbol of operation. None of these novelties account for the far-reaching effect which this memoir has had upon the development of mathematical analysis. This effect was a consequence of the simplicity and directness with which the rules of differentiation were stated. Whatever indistinctness might be felt to attach to the symbols, the processes for solving problems of tangents and of maxima and minima were reduced once for all to a definite routine. 27. This memoir was followed in 1686 by a second, entitled De Geometria recondita et analysi indivisibilium atque infinitorum, in which Leibnitz described the method of using his new differential calculus for the problem of quadratures. This was the first publication of the notation ∫ ydx. The new method was called calculus summatorius. The brothers Jacob (James) and Johann (John) Bernoulli were able by 1690 to begin to make substantial contributions to the development of the new calculus, and Leibnitz adopted their word “integral” in 1695, they at the same time
  • 75. Dispute concerning Priority. adopting his symbol “∫.” In 1696 the marquis de l’Hospital published the first treatise on the differential calculus with the title Analyse des infiniment petits pour l’intelligence des lignes courbes. The few references to fluxions in Newton’s Principia (1687) must have been quite unintelligible to the mathematicians of the time, and the publication of the fluxional notation and calculus by Wallis in 1693 was too late to be effective. Fluxions had been supplanted before they were introduced. The differential calculus and the integral calculus were rapidly developed in the writings of Leibnitz and the Bernoullis. Leibnitz (1695) was the first to differentiate a logarithm and an exponential, and John Bernoulli was the first to recognize the property possessed by an exponential (ax) of becoming infinitely great in comparison with any power (xn) when x is increased indefinitely. Roger Cotes (1722) was the first to differentiate a trigonometrical function. A great development of infinitesimal methods took place through the founding in 1696-1697 of the “Calculus of Variations” by the brothers Bernoulli. 28. The famous dispute as to the priority of Newton and Leibnitz in the invention of the calculus began in 1699 through the publication by Nicolas Fatio de Duillier of a tract in which he stated that Newton was not only the first, but by many years the first inventor, and insinuated that Leibnitz had stolen it. Leibnitz in his reply (Acta Eruditorum, 1700) cited Newton’s letters and the testimony which Newton had rendered to him in the Principia as proofs of his independent authorship of the method. Leibnitz was especially hurt at what he understood to be an endorsement of Duillier’s attack by the Royal Society, but it was
  • 76. explained to him that the apparent approval was an accident. The dispute was ended for a time. On the publication of Newton’s tract De quadratura curvarum, an anonymous review of it, written, as has since been proved, by Leibnitz, appeared in the Acta Eruditorum, 1705. The anonymous reviewer said: “Instead of the Leibnitzian differences Newton uses and always has used fluxions ... just as Honoré Fabri in his Synopsis Geometrica substituted steps of movements for the method of Cavalieri.” This passage, when it became known in England, was understood not merely as belittling Newton by comparing him with the obscure Fabri, but also as implying that he had stolen his calculus of fluxions from Leibnitz. Great indignation was aroused; and John Keill took occasion, in a memoir on central forces which was printed in the Philosophical Transactions for 1708, to affirm that Newton was without doubt the first inventor of the calculus, and that Leibnitz had merely changed the name and mode of notation. The memoir was published in 1710. Leibnitz wrote in 1711 to the secretary of the Royal Society (Hans Sloane) requiring Keill to retract his accusation. Leibnitz’s letter was read at a meeting of the Royal Society, of which Newton was then president, and Newton made to the society a statement of the course of his invention of the fluxional calculus with the dates of particular discoveries. Keill was requested by the society “to draw up an account of the matter under dispute and set it in a just light.” In his report Keill referred to Newton’s letters of 1676, and said that Newton had there given so many indications of his method that it could have been understood by a person of ordinary intelligence. Leibnitz wrote to Sloane asking the society to stop these unjust attacks of Keill, asserting that in the review in the Acta Eruditorum no one had been injured but each had received his due, submitting the matter to the equity of the Royal Society, and stating that he
  • 77. was persuaded that Newton himself would do him justice. A committee was appointed by the society to examine the documents and furnish a report. Their report, presented in April 1712, concluded as follows: “The differential method is one and the same with the method of fluxions, excepting the name and mode of notation; Mr Leibnitz calling those quantities differences which Mr Newton calls moments or fluxions, and marking them with the letter d, a mark not used by Mr Newton. And therefore we take the proper question to be, not who invented this or that method, but who was the first inventor of the method; and we believe that those who have reputed Mr Leibnitz the first inventor, knew little or nothing of his correspondence with Mr Collins and Mr Oldenburg long before; nor of Mr Newton’s having that method above fifteen years before Mr. Leibnitz began to publish it in the Acta Eruditorum of Leipzig. For which reasons we reckon Mr Newton the first inventor, and are of opinion that Mr Keill, in asserting the same, has been no ways injurious to Mr Leibnitz.” The report with the letters and other documents was printed (1712) under the title Commercium Epistolicum D. Johannis Collins et aliorum de analysi promota, jussu Societatis Regiae in lucem editum, not at first for publication. An account of the contents of the Commercium Epistolicum was printed in the Philosophical Transactions for 1715. A second edition of the Commercium Epistolicum was published in 1722. The dispute was continued for many years after the death of Leibnitz in 1716. To translate the words of Moritz Cantor, it “redounded to the discredit of all concerned.”
  • 78. British and Continental Schools of Mathematics. 29. One lamentable consequence of the dispute was a severance of British methods from continental ones. In Great Britain it became a point of honour to use fluxions and other Newtonian methods, while on the continent the notation of Leibnitz was universally adopted. This severance did not at first prevent a great advance in mathematics in Great Britain. So long as attention was directed to problems in which there is but one independent variable (the time, or the abscissa of a point of a curve), and all the other variables depend upon this one, the fluxional notation could be used as well as the differential and integral notation, though perhaps not quite so easily. Up to about the middle of the 18th century important discoveries continued to be made by the use of the method of fluxions. It was the introduction of partial differentiation by Leonhard Euler (1734) and Alexis Claude Clairaut (1739), and the developments which followed upon the systematic use of partial differential coefficients, which led to Great Britain being left behind; and it was not until after the reintroduction of continental methods into England by Sir John Herschel, George Peacock and Charles Babbage in 1815 that British mathematics began to flourish again. The exclusion of continental mathematics from Great Britain was not accompanied by any exclusion of British mathematics from the continent. The discoveries of Brook Taylor and Colin Maclaurin were absorbed into the rapidly growing continental analysis, and the more precise conceptions reached through a critical scrutiny of the true nature of Newton’s fluxions and moments stimulated a like scrutiny of the basis of the method of differentials. 30. This method had met with opposition from the first. Christiaan Huygens, whose opinion carried more weight than that of any other scientific man of the day, declared that the employment of
  • 79. Oppositions to the calculus. The “Analyst” controversy. differentials was unnecessary, and that Leibnitz’s second differential was meaningless (1691). A Dutch physician named Bernhard Nieuwentijt attacked the method on account of the use of quantities which are at one stage of the process treated as somethings and at a later stage as nothings, and he was especially severe in commenting upon the second and higher differentials (1694, 1695). Other attacks were made by Michel Rolle (1701), but they were directed rather against matters of detail than against the general principles. The fact is that, although Leibnitz in his answers to Nieuwentijt (1695), and to Rolle (1702), indicated that the processes of the calculus could be justified by the methods of the ancient geometry, he never expressed himself very clearly on the subject of differentials, and he conveyed, probably without intending it, the impression that the calculus leads to correct results by compensation of errors. In England the method of fluxions had to face similar attacks. George Berkeley, bishop and philosopher, wrote in 1734 a tract entitled The Analyst; or a Discourse addressed to an Infidel Mathematician, in which he proposed to destroy the presumption that the opinions of mathematicians in matters of faith are likely to be more trustworthy than those of divines, by contending that in the much vaunted fluxional calculus there are mysteries which are accepted unquestioningly by the mathematicians, but are incapable of logical demonstration. Berkeley’s criticism was levelled against all infinitesimals, that is to say, all quantities vaguely conceived as in some intermediate state between nullity and finiteness, as he took Newton’s moments to be conceived. The tract occasioned a controversy which had the important consequence of making it plain that all arguments about
  • 80. infinitesimals must be given up, and the calculus must be founded on the method of limits. During the controversy Benjamin Robins gave an exceedingly clear explanation of Newton’s theories of fluxions and of prime and ultimate ratios regarded as theories of limits. In this explanation he pointed out that Newton’s moment (Leibnitz’s “differential”) is to be regarded as so much of the actual difference between two neighbouring values of a variable as is needful for the formation of the fluxion (or differential coefficient) (see G. A. Gibson, “The Analyst Controversy,” Proc. Math. Soc., Edinburgh, xvii., 1899). Colin Maclaurin published in 1742 a Treatise of Fluxions, in which he reduced the whole theory to a theory of limits, and demonstrated it by the method of Archimedes. This notion was gradually transferred to the continental mathematicians. Leonhard Euler in his Institutiones Calculi differentialis (1755) was reduced to the position of one who asserts that all differentials are zero, but, as the product of zero and any finite quantity is zero, the ratio of two zeros can be a finite quantity which it is the business of the calculus to determine. Jean le Rond d’Alembert in the Encyclopédie méthodique (1755, 2nd ed. 1784) declared that differentials were unnecessary, and that Leibnitz’s calculus was a calculus of mutually compensating errors, while Newton’s method was entirely rigorous. D’Alembert’s opinion of Leibnitz’s calculus was expressed also by Lazare N. M. Carnot in his Réflexions sur la métaphysique du calcul infinitésimal (1799) and by Joseph Louis de la Grange (generally called Lagrange) in writings from 1760 onwards. Lagrange proposed in his Théorie des fonctions analytiques (1797) to found the whole of the calculus on the theory of series. It was not until 1823 that a treatise on the differential calculus founded upon the method of limits was published. The treatise was the Résumé des leçons ... sur le calcul infinitésimal of Augustin Louis
  • 81. Cauchy’s method of limits. Arithmetical basis of modern analysis. Cauchy. Since that time it has been understood that the use of the phrase “infinitely small” in any mathematical argument is a figurative mode of expression pointing to a limiting process. In the opinion of many eminent mathematicians such modes of expression are confusing to students, but in treatises on the calculus the traditional modes of expression are still largely adopted. 31. Defective modes of expression did not hinder constructive work. It was the great merit of Leibnitz’s symbolism that a mathematician who used it knew what was to be done in order to formulate any problem analytically, even though he might not be absolutely clear as to the proper interpretation of the symbols, or able to render a satisfactory account of them. While new and varied results were promptly obtained by using them, a long time elapsed before the theory of them was placed on a sound basis. Even after Cauchy had formulated his theory much remained to be done, both in the rapidly growing department of complex variables, and in the regions opened up by the theory of expansions in trigonometric series. In both directions it was seen that rigorous demonstration demanded greater precision in regard to fundamental notions, and the requirement of precision led to a gradual shifting of the basis of analysis from geometrical intuition to arithmetical law. A sketch of the outcome of this movement—the “arithmetization of analysis,” as it has been called—will be found in Function. Its general tendency has been to show that many theories and processes, at first accepted as of general validity, are liable to exceptions, and much of the work of the analysts of the latter half of the 19th century was
• 82. directed to discovering the most general conditions in which particular processes, frequently but not universally applicable, can be used without scruple.

III. Outlines of the Infinitesimal Calculus.

32. The general notions of functionality, limits and continuity are explained in the article Function. Illustrations of the more immediate ways in which these notions present themselves in the development of the differential and integral calculus will be useful in what follows.
• 83. Geometrical limits. Tangents. Progressive and Regressive Differential Coefficients.

33. Let y be given as a function of x, or, more generally, let x and y be given as functions of a variable t. The first of these cases is included in the second by putting x = t. If certain conditions are satisfied the aggregate of the points determined by the functional relations forms a curve. The first condition is that the aggregate of the values of t to which values of x and y correspond must be continuous, or, in other words, that these values must consist of all real numbers, or of all those real numbers which lie between assigned extreme numbers. When this condition is satisfied the points are “ordered,” and their order is determined by the order of the numbers t, supposed to be arranged in order of increasing or decreasing magnitude; also there are two senses of description of the curve, according as t is taken to increase or to diminish. The second condition is that the aggregate of the points which are determined by the functional relations must be “continuous.” This condition means that, if any point P determined by a value of t is taken, and any distance δ, however small, is chosen, it is possible to find two points Q, Q′ of the aggregate which are such that (i.) P is between Q and Q′, (ii.) if R, R′ are any points between Q and Q′ the distance RR′ is less than δ. The meaning of the word “between” in this statement is fixed by the ordering of the points. Sometimes additional conditions are imposed upon the functional relations before they are regarded as defining a curve. An aggregate of points which satisfies the two conditions stated above is sometimes called a “Jordan curve.” It by no means follows that every curve of this kind has a tangent. In order that the curve may have a tangent at P it is necessary that, if any angle α, however small, is specified, a distance δ can be found such that when P is between Q and Q′, and PQ and PQ′ are less than δ, the angle RPR′ is less than α for all pairs of points R, R′ which are between P and Q, or between P and Q′ (fig. 8). When this condition is satisfied y is a function of x which has a differential coefficient. The only way of finding out whether this condition is satisfied or not is to attempt to form the differential coefficient. If the quotient of differences Δy/Δx has a limit when Δx tends to zero, y is a differentiable function of x, and the limit in question is the differential coefficient. The derived function, or differential coefficient, of a function ƒ(x) is always defined by the formula

ƒ′(x) = dƒ(x)/dx = lim_{h→0} {ƒ(x + h) − ƒ(x)}/h.

Rules for the formation of differential coefficients in particular cases have been given in § 11 above. The definition of a differential coefficient, and the rules of differentiation, are quite independent of any geometrical interpretation, such as that concerning tangents to a curve, and the tangent to a curve is properly defined by means of the differential coefficient of a function, not the differential coefficient by means of the tangent. It may happen that the limit employed in defining the differential coefficient has one value when h approaches zero through positive values, and a different value when h approaches zero through negative values. The two limits are then called the “progressive” and “regressive” differential coefficients.
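The limiting process in this definition is easily imitated numerically. The following minimal Python sketch, in which the test function ƒ(x) = |x| and the particular step sizes are merely illustrative assumptions, forms the quotient of differences at x = 0 as h approaches zero through positive and through negative values; the progressive and regressive differential coefficients are seen to take the distinct values +1 and −1, so that |x| has no differential coefficient at 0.

def difference_quotient(f, x, h):
    # Quotient of differences {f(x + h) - f(x)} / h for a finite step h.
    return (f(x + h) - f(x)) / h

f = abs  # illustrative choice: f(x) = |x|

for h in [0.1, 0.01, 0.001, 0.0001]:
    progressive = difference_quotient(f, 0.0, +h)  # h tending to 0 through positive values
    regressive = difference_quotient(f, 0.0, -h)   # h tending to 0 through negative values
    print(h, progressive, regressive)              # approaches +1 and -1 respectively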
• 84. Areas. Lengths of Curves. In applications to dynamics, when x denotes a coordinate and t the time, dx/dt denotes a velocity. If the velocity is changed suddenly the progressive differential coefficient measures the velocity just after the change, and the regressive differential coefficient measures the velocity just before the change. Variable velocities are properly defined by means of differential coefficients. All geometrical limits may be specified in terms similar to those employed in specifying the tangent to a curve; in difficult cases they must be so specified. Geometrical intuition may fail to answer the question of the existence or non-existence of the appropriate limits. In the last resort the definitions of many quantities of geometrical import must be analytical, not geometrical. As illustrations of this statement we may take the definitions of the areas and lengths of curves. We may not assume that every curve has an area or a length. To find out whether a curve has an area or not, we must ascertain whether the limit expressed by ∫y dx exists. When the limit exists the curve has an area. The definition of the integral is quite independent of any geometrical interpretation. The length of a curve again is defined by means of a limiting process. Let P, Q be two points of a curve, and R1, R2, ... Rn−1 a set of intermediate points of the curve, supposed to be described in the sense in which Q comes after P. The points R are supposed to be reached successively in the order of the suffixes when the curve is described in this sense. We form a sum of lengths of chords

PR1 + R1R2 + ... + Rn−1Q.

If this sum has a limit when the number of the points R is increased indefinitely and the lengths of all the chords are diminished indefinitely, this limit is the length of the arc PQ. The limit is the same whatever law may be adopted for inserting the intermediate points R and diminishing the lengths of the chords. It appears from this statement that the differential element of the arc of a curve is the length of the chord joining two neighbouring points. In accordance with the fundamental artifice for forming differentials (§§ 9, 10), the differential element of arc ds may be expressed by the formula

ds = √{(dx)^2 + (dy)^2},

of which the right-hand member is really the measure of the distance between two neighbouring points on the tangent. The square root must be taken to be positive. We may describe this differential element as being so much of the actual arc between two neighbouring points as need be retained for the purpose of forming the integral expression for an arc. This is a description, not a definition, because the length of the short arc itself is only definable by means of the integral expression. Similar considerations to those used in defining the areas of plane figures and the lengths of plane curves are applicable to the formation of expressions for differential elements of volume or of the areas of curved surfaces.
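The chord-sum process just described can be carried out numerically. In the minimal Python sketch below, the particular curve (a quarter of the unit circle, x = cos t, y = sin t for 0 ≤ t ≤ π/2), the equal spacing of the intermediate points and the function names are assumptions chosen for illustration; the chord sums are seen to approach the true arc length π/2 as the number of chords is increased and their lengths diminished.

import math

def chord_sum(x, y, t0, t1, n):
    # Sum of the lengths of the n chords joining successive points of the curve
    # (x(t), y(t)) taken at equally spaced values of t between t0 and t1.
    ts = [t0 + (t1 - t0) * k / n for k in range(n + 1)]
    total = 0.0
    for a, b in zip(ts[:-1], ts[1:]):
        total += math.hypot(x(b) - x(a), y(b) - y(a))
    return total

for n in [4, 16, 64, 256]:
    print(n, chord_sum(math.cos, math.sin, 0.0, math.pi / 2, n))  # tends to pi/2 = 1.5707...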
• 85. Constants of Integration. Higher Differential Coefficients.

34. In regard to differential coefficients it is an important theorem that, if the derived function ƒ′(x) vanishes at all points of an interval, the function ƒ(x) is constant in the interval. It follows that, if two functions have the same derived function they can only differ by a constant. Conversely, indefinite integrals are indeterminate to the extent of an additive constant.

35. The differential coefficient dy/dx, or the derived function ƒ′(x), is itself a function of x, and its differential coefficient is denoted by ƒ″(x) or d^2y/dx^2. In the second of these notations d/dx is regarded as the symbol of an operation, that of differentiation with respect to x, and the index 2 means that the operation is repeated. In like manner we may express the results of n successive differentiations by ƒ^(n)(x) or by d^n y/dx^n. When the second differential coefficient exists, or the first is differentiable, we have the relation

ƒ″(x) = lim_{h→0} {ƒ(x + h) − 2ƒ(x) + ƒ(x − h)}/h^2.   (i.)

The limit expressed by the right-hand member of this equation may exist in cases in which ƒ′(x) does not exist or is not differentiable. The result that, when the limit here expressed can be shown to vanish at all points of an interval, then ƒ(x) must be a linear function of x in the interval, is important. The relation (i.) is a particular case of the more general relation

ƒ^(n)(x) = lim_{h→0} h^(−n) [ƒ(x + nh) − n ƒ{x + (n − 1)h} + {n(n − 1)/2!} ƒ{x + (n − 2)h} − ... + (−1)^n ƒ(x)].   (ii.)

As in the case of relation (i.) the limit expressed by the right-hand member may exist although some or all of the derived functions ƒ′(x), ƒ″(x), ... ƒ^(n−1)(x) do not exist. Corresponding to the rule iii. of § 11 we have the rule for forming the nth differential coefficient of a product in the form

d^n(uv)/dx^n = u d^n v/dx^n + n (du/dx)(d^(n−1)v/dx^(n−1)) + {n(n − 1)/1·2} (d^2u/dx^2)(d^(n−2)v/dx^(n−2)) + ... + (d^n u/dx^n) v,

where the coefficients are those of the expansion of (1 + x)^n in powers of x (n being a positive integer). The rule is due to Leibnitz (1695). Differentials of higher orders may be introduced in the same way as the differential of the first order. In general when y = ƒ(x), the nth differential d^n y is defined by the equation

d^n y = ƒ^(n)(x) (dx)^n,

in which dx is the (arbitrary) differential of x.
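Relation (i.) lends itself to the same kind of numerical verification as the first differential coefficient. In the minimal Python sketch below, the sample function ƒ(x) = sin x, the point x = 1 and the step sizes are assumptions made for illustration only; the quotients formed from the right-hand member of (i.) are seen to approach ƒ″(1) = −sin 1 as h diminishes.

import math

def second_difference(f, x, h):
    # The quotient {f(x + h) - 2 f(x) + f(x - h)} / h**2 appearing in relation (i.).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

for h in [0.1, 0.01, 0.001]:
    print(h, second_difference(math.sin, 1.0, h), -math.sin(1.0))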
• 86. Symbols of operation. Theorem of Intermediate Value. When d/dx is regarded as a single symbol of operation the symbol ∫ ... dx represents the inverse operation. If the former is denoted by D, the latter may be denoted by D^(−1). D^n means that the operation D is to be performed n times in succession; D^(−n) that the operation of forming the indefinite integral is to be performed n times in succession. Leibnitz’s course of thought (§ 24) naturally led him to inquire after an interpretation of D^n, where n is not an integer. For an account of the researches to which this inquiry gave rise, reference may be made to the article by A. Voss in Ency. d. math. Wiss. Bd. ii. A, 2 (Leipzig, 1889). The matter is referred to as “fractional” or “generalized” differentiation.

36. After the formation of differential coefficients the most important theorem of the differential calculus is the theorem of intermediate value (“theorem of mean value,” “theorem of finite increments,” “Rolle’s theorem,” are other names for it). This theorem may be explained as follows: Let A, B be two points of a curve y = ƒ(x) (fig. 9). Then there is a point P between A and B at which the tangent is parallel to the secant AB. This theorem is expressed analytically in the statement that if ƒ′(x) is continuous between a and b, there is a value x1 of x between a and b which has the property expressed by the equation

{ƒ(b) − ƒ(a)}/(b − a) = ƒ′(x1).   (i.)

The value x1 can be expressed in the form a + θ(b − a), where θ is a number between 0 and 1. A slightly more general theorem was given by Cauchy (1823) to the effect that, if ƒ′(x) and F′(x) are continuous between x = a and x = b, then there is a number θ between 0 and 1 which has the property expressed by the equation

{F(b) − F(a)}/{ƒ(b) − ƒ(a)} = F′{a + θ(b − a)}/ƒ′{a + θ(b − a)}.

The theorem expressed by the relation (i.) was first noted by Rolle (1690) for the case where ƒ(x) is a rational integral function which vanishes when x = a and also when x = b. The general theorem was given by Lagrange (1797). Its fundamental importance was first recognized by Cauchy (1823). It may be observed here that the theorem of integral calculus expressed by the equation

F(b) − F(a) = ∫_a^b F′(x) dx

follows at once from the definition of an integral and the theorem of intermediate value.
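The content of relation (i.) can be exhibited by actually finding such a value x1 for a particular function. In the minimal Python sketch below, the function ƒ(x) = x^3, the interval (1, 2), the use of bisection and the tolerance are all assumptions chosen for illustration; the sketch locates the x1 at which ƒ′(x1) equals {ƒ(b) − ƒ(a)}/(b − a), together with the corresponding θ.

def mean_value_point(f, fprime, a, b, tol=1e-10):
    # Find x1 in (a, b) with fprime(x1) = {f(b) - f(a)} / (b - a) by bisection,
    # assuming fprime(x) - slope changes sign exactly once on the interval.
    slope = (f(b) - f(a)) / (b - a)
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (fprime(lo) - slope) * (fprime(mid) - slope) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x1 = mean_value_point(lambda x: x**3, lambda x: 3 * x**2, 1.0, 2.0)
print(x1, (x1 - 1.0) / (2.0 - 1.0))  # x1 is about 1.5275; theta = (x1 - a)/(b - a)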
• 87. Taylor’s Theorem. The theorem of intermediate value may be generalized in the statement that, if ƒ(x) and all its differential coefficients up to the nth inclusive are continuous in the interval between x = a and x = b, then there is a number θ between 0 and 1 which has the property expressed by the equation

ƒ(b) = ƒ(a) + (b − a) ƒ′(a) + {(b − a)^2/2!} ƒ″(a) + ... + {(b − a)^(n−1)/(n − 1)!} ƒ^(n−1)(a) + {(b − a)^n/n!} ƒ^(n){a + θ(b − a)}.   (ii.)

37. This theorem provides a means for computing the values of a function at points near to an assigned point when the value of the function and its differential coefficients at the assigned point are known. The function is expressed by a terminated series, and, when the remainder tends to zero as n increases, it may be transformed into an infinite series. The theorem was first given by Brook Taylor in his Methodus Incrementorum (1717) as a corollary to a theorem concerning finite differences. Taylor gave the expression for ƒ(x + z) in terms of ƒ(x), ƒ′(x), ... as an infinite series proceeding by powers of z. His notation was that appropriate to the method of fluxions which he used. This rule for expressing a function as an infinite series is known as Taylor’s theorem. The relation (ii.), in which the remainder after n terms is put in evidence, was first obtained by Lagrange (1797). Another form of the remainder was given by Cauchy (1823), viz.,

{(b − a)^n (1 − θ)^(n−1)/(n − 1)!} ƒ^(n){a + θ(b − a)}.
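The use of the terminated series for computing values of a function can be imitated directly. In the minimal Python sketch below, the choice of ƒ(x) = e^x and of the point a = 0 is an assumption made for illustration (every differential coefficient of e^x at 0 is equal to 1); the sum of the first n terms is formed, and the remainder after n terms is seen to diminish as n increases.

import math

def taylor_exp(x, n):
    # Sum of the first n terms 1 + x + x**2/2! + ... + x**(n-1)/(n-1)! of the
    # expansion of exp about a = 0, whose differential coefficients at 0 are all 1.
    return sum(x**k / math.factorial(k) for k in range(n))

for n in [2, 4, 8, 16]:
    approx = taylor_exp(1.0, n)
    print(n, approx, math.exp(1.0) - approx)  # the remainder tends to zero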
• 88. Expansions in power series. The conditions of validity of Taylor’s expansion in an infinite series have been investigated very completely by A. Pringsheim (Math. Ann. Bd. xliv., 1894). It is not sufficient that the function and all its differential coefficients should be finite at x = a; there must be a neighbourhood of a within which Cauchy’s form of the remainder tends to zero as n increases (cf. Function). An example of the necessity of this condition is afforded by the function ƒ(x) which is given by the equation

ƒ(x) = 1/(1 + x^2) + Σ_{n=1}^{∞} {(−1)^n/n!} · 1/(1 + 3^(2n) x^2).   (i.)

The sum of the series

ƒ(0) + x ƒ′(0) + {x^2/2!} ƒ″(0) + ...   (ii.)

is the same as that of the series

e^(−1) − x^2 e^(−3^2) + x^4 e^(−3^4) − ...

It is easy to prove that this is less than e^(−1) when x lies between 0 and 1, and also that ƒ(x) is greater than e^(−1) when x = 1/√3. Hence the sum of the series (i.) is not equal to the sum of the series (ii.).

The particular case of Taylor’s theorem in which a = 0 is often called Maclaurin’s theorem, because it was first explicitly stated by Colin Maclaurin in his Treatise of Fluxions (1742). Maclaurin, like Taylor, worked exclusively with the fluxional calculus. Examples of expansions in series had been known for some time. The series for log (1 + x) was obtained by Nicolaus Mercator (1668) by expanding (1 + x)^(−1) by the method of algebraic division, and integrating the series term by term. He regarded his result as a “quadrature of the hyperbola.” Newton (1669) obtained the expansion of sin^(−1) x by expanding (1 − x^2)^(−1/2) by the binomial theorem and integrating the series term by term. James Gregory (1671) gave the series for tan^(−1) x. Newton also obtained the series for sin x, cos x, and e^x by reversion of series (1669). The symbol e for the base of the Napierian logarithms was introduced by Euler (1739). All these series can be obtained at once by Taylor’s theorem. James Gregory found also the first few terms of the series for tan x and sec x; the terms of these series may be found successively by Taylor’s theorem, but the numerical coefficient of the general term cannot be obtained in this way.

Taylor’s theorem for the expansion of a function in a power series was the basis of Lagrange’s theory of functions, and it is fundamental also in the theory of analytic functions of a complex variable as developed later by Karl Weierstrass. It has also numerous applications to problems of maxima and minima and to analytical geometry. These matters are treated in the appropriate articles. The forms of the coefficients in the series for tan x and sec x can be expressed most simply in terms of a set of numbers introduced by James Bernoulli in his treatise on probability entitled Ars Conjectandi (1713). These numbers B1, B2, ..., called Bernoulli’s numbers, are the coefficients so denoted in the formula

x/(e^x − 1) = 1 − x/2 + B1 x^2/2! − B2 x^4/4! + B3 x^6/6! − ...,

and they are connected with the sums of powers of the reciprocals of the natural numbers by equations of the type

Bn = {(2n)!/(2^(2n−1) π^(2n))} (1/1^(2n) + 1/2^(2n) + 1/3^(2n) + ...).

The function

x^m − (m/2) x^(m−1) + {m(m − 1)/2!} B1 x^(m−2) − ...
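Bernoulli’s numbers can be computed directly from the formula for x/(e^x − 1) quoted above. In the minimal Python sketch below, the recurrence employed (obtained by multiplying the expansion by e^x − 1 and equating coefficients of powers of x) and the numerical check of B1 against the sum of reciprocal squares are devices of the illustration rather than anything stated in the text; the sketch recovers B1 = 1/6, B2 = 1/30, B3 = 1/42, ... as the absolute values of the even-indexed coefficients.

from fractions import Fraction
from math import comb, factorial, pi

def bernoulli_coefficients(n_max):
    # Coefficients b[k] of x**k / k! in x / (exp(x) - 1), from the recurrence
    # sum over k < m+1 of C(m + 1, k) * b[k] = 0 for m >= 1, with b[0] = 1.
    b = [Fraction(1)]
    for m in range(1, n_max + 1):
        s = sum(Fraction(comb(m + 1, k)) * b[k] for k in range(m))
        b.append(-s / (m + 1))
    return b

b = bernoulli_coefficients(8)
B = [abs(b[2 * n]) for n in range(1, 5)]  # B1, B2, B3, B4 in the notation of the text
print(B)                                  # Fractions 1/6, 1/30, 1/42, 1/30
# Check B1 against (2n)!/(2**(2n - 1) * pi**(2n)) times the sum of 1/k**(2n), for n = 1.
zeta2 = sum(1.0 / k**2 for k in range(1, 100000))
print(float(B[0]), factorial(2) / (2**1 * pi**2) * zeta2)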