Real-Time Digital
Signal Processing
Implementations and Applications
Second Edition
Sen M Kuo
Northern Illinois University, USA
Bob H Lee
Ingenient Technologies Inc., USA
Wenshun Tian
UTStarcom Inc., USA
Copyright © 2006 John Wiley & Sons Ltd,
The Atrium, Southern Gate, Chichester,
West Sussex PO19 8SQ, England
Telephone (+44) 1243 779777
Email (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on www.wileyeurope.com
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in
any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under
the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright
Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the
Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd,
The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or
faxed to (+44) 1243 770620.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and
product names used in this book are trade names, service marks, trademarks or registered trademarks of their
respective owners. The Publisher is not associated with any product or vendor mentioned in this book.
This publication is designed to provide accurate and authoritative information in regard to the subject matter
covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If
professional advice or other expert assistance is required, the services of a competent professional should be sought.
Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1
Wiley also publishes its books in a variety of electronic formats. Some content that appears
in print may not be available in electronic books.
Library of Congress Cataloging-in-Publication Data
Kuo, Sen M. (Sen-Maw)
Real-time digital signal processing : implementations, applications and experiments with the
TMS320C55X / Sen M Kuo, Bob H Lee, Wenshun Tian. – 2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-470-01495-4 (cloth)
1. Signal processing–Digital techniques. 2. Texas Instruments TMS320 series microprocessors.
I. Lee, Bob H. II. Tian, Wenshun. III. Title.
TK5102.9.K86 2006
621.3822-dc22 2005036660
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN-13 978-0-470-01495-0
ISBN-10 0-470-01495-4
Typeset in 9/11pt Times by TechBooks, New Delhi, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry
in which at least two trees are planted for each one used for paper production.
Contents
Preface xv
1 Introduction to Real-Time Digital Signal Processing 1
1.1 Basic Elements of Real-Time DSP Systems 2
1.2 Analog Interface 3
1.2.1 Sampling 3
1.2.2 Quantization and Encoding 7
1.2.3 Smoothing Filters 8
1.2.4 Data Converters 9
1.3 DSP Hardware 10
1.3.1 DSP Hardware Options 10
1.3.2 DSP Processors 13
1.3.3 Fixed- and Floating-Point Processors 15
1.3.4 Real-Time Constraints 16
1.4 DSP System Design 17
1.4.1 Algorithm Development 18
1.4.2 Selection of DSP Processors 19
1.4.3 Software Development 20
1.4.4 High-Level Software Development Tools 21
1.5 Introduction to DSP Development Tools 22
1.5.1 C Compiler 22
1.5.2 Assembler 23
1.5.3 Linker 24
1.5.4 Other Development Tools 25
1.6 Experiments and Program Examples 25
1.6.1 Experiments of Using CCS and DSK 26
1.6.2 Debugging Program Using CCS and DSK 29
1.6.3 File I/O Using Probe Point 32
1.6.4 File I/O Using C File System Functions 35
1.6.5 Code Efficiency Analysis Using Profiler 37
1.6.6 Real-Time Experiments Using DSK 39
1.6.7 Sampling Theory 42
1.6.8 Quantization in ADCs 44
References 45
Exercises 45
2 Introduction to TMS320C55x Digital Signal Processor 49
2.1 Introduction 49
2.2 TMS320C55x Architecture 50
2.2.1 Architecture Overview 50
2.2.2 Buses 53
2.2.3 On-Chip Memories 53
2.2.4 Memory-Mapped Registers 55
2.2.5 Interrupts and Interrupt Vector 55
2.3 TMS320C55x Peripherals 58
2.3.1 External Memory Interface 60
2.3.2 Direct Memory Access 60
2.3.3 Enhanced Host-Port Interface 61
2.3.4 Multi-Channel Buffered Serial Ports 62
2.3.5 Clock Generator and Timers 65
2.3.6 General Purpose Input/Output Port 65
2.4 TMS320C55x Addressing Modes 65
2.4.1 Direct Addressing Modes 66
2.4.2 Indirect Addressing Modes 68
2.4.3 Absolute Addressing Modes 70
2.4.4 Memory-Mapped Register Addressing Mode 70
2.4.5 Register Bits Addressing Mode 71
2.4.6 Circular Addressing Mode 72
2.5 Pipeline and Parallelism 73
2.5.1 TMS320C55x Pipeline 73
2.5.2 Parallel Execution 74
2.6 TMS320C55x Instruction Set 76
2.6.1 Arithmetic Instructions 76
2.6.2 Logic and Bit Manipulation Instructions 77
2.6.3 Move Instruction 78
2.6.4 Program Flow Control Instructions 78
2.7 TMS320C55x Assembly Language Programming 82
2.7.1 Assembly Directives 82
2.7.2 Assembly Statement Syntax 84
2.8 C Language Programming for TMS320C55x 86
2.8.1 Data Types 86
2.8.2 Assembly Code Generation by C Compiler 87
2.8.3 Compiler Keywords and Pragma Directives 89
2.9 Mixed C-and-Assembly Language Programming 90
2.10 Experiments and Program Examples 93
2.10.1 Interfacing C with Assembly Code 93
2.10.2 Addressing Modes Using Assembly Programming 94
2.10.3 Phase-Locked Loop and Timers 97
2.10.4 EMIF Configuration for Using SDRAM 103
2.10.5 Programming Flash Memory Devices 105
2.10.6 Using McBSP 106
2.10.7 AIC23 Configurations 109
2.10.8 Direct Memory Access 111
References 115
Exercises 115
3 DSP Fundamentals and Implementation
Considerations 121
3.1 Digital Signals and Systems 121
3.1.1 Elementary Digital Signals 121
3.1.2 Block Diagram Representation of Digital Systems 123
3.2 System Concepts 126
3.2.1 Linear Time-Invariant Systems 126
3.2.2 The z-Transform 130
3.2.3 Transfer Functions 132
3.2.4 Poles and Zeros 135
3.2.5 Frequency Responses 138
3.2.6 Discrete Fourier Transform 141
3.3 Introduction to Random Variables 142
3.3.1 Review of Random Variables 142
3.3.2 Operations of Random Variables 144
3.4 Fixed-Point Representations and Quantization Effects 147
3.4.1 Fixed-Point Formats 147
3.4.2 Quantization Errors 151
3.4.3 Signal Quantization 151
3.4.4 Coefficient Quantization 153
3.4.5 Roundoff Noise 153
3.4.6 Fixed-Point Toolbox 154
3.5 Overflow and Solutions 157
3.5.1 Saturation Arithmetic 157
3.5.2 Overflow Handling 158
3.5.3 Scaling of Signals 158
3.5.4 Guard Bits 159
3.6 Experiments and Program Examples 159
3.6.1 Quantization of Sinusoidal Signals 160
3.6.2 Quantization of Audio Signals 161
3.6.3 Quantization of Coefficients 162
3.6.4 Overflow and Saturation Arithmetic 164
3.6.5 Function Approximations 167
3.6.6 Real-Time Digital Signal Generation Using DSK 175
References 180
Exercises 180
4 Design and Implementation of FIR Filters 185
4.1 Introduction to FIR Filters 185
4.1.1 Filter Characteristics 185
4.1.2 Filter Types 187
4.1.3 Filter Specifications 189
4.1.4 Linear-Phase FIR Filters 191
4.1.5 Realization of FIR Filters 194
4.2 Design of FIR Filters 196
4.2.1 Fourier Series Method 197
4.2.2 Gibbs Phenomenon 198
4.2.3 Window Functions 201
4.2.4 Design of FIR Filters Using MATLAB 206
4.2.5 Design of FIR Filters Using FDATool 207
4.3 Implementation Considerations 213
4.3.1 Quantization Effects in FIR Filters 213
4.3.2 MATLAB Implementations 216
4.3.3 Floating-Point C Implementations 218
4.3.4 Fixed-Point C Implementations 219
4.4 Applications: Interpolation and Decimation Filters 220
4.4.1 Interpolation 220
4.4.2 Decimation 221
4.4.3 Sampling-Rate Conversion 221
4.4.4 MATLAB Implementations 224
4.5 Experiments and Program Examples 225
4.5.1 Implementation of FIR Filters Using Fixed-Point C 226
4.5.2 Implementation of FIR Filter Using C55x Assembly
Language 226
4.5.3 Optimization for Symmetric FIR Filters 228
4.5.4 Optimization Using Dual MAC Architecture 230
4.5.5 Implementation of Decimation 232
4.5.6 Implementation of Interpolation 233
4.5.7 Sample Rate Conversion 234
4.5.8 Real-Time Sample Rate Conversion Using
DSP/BIOS and DSK 235
References 245
Exercises 245
5 Design and Implementation of IIR Filters 249
5.1 Introduction 249
5.1.1 Analog Systems 249
5.1.2 Mapping Properties 251
5.1.3 Characteristics of Analog Filters 252
5.1.4 Frequency Transforms 254
5.2 Design of IIR Filters 255
5.2.1 Bilinear Transform 256
5.2.2 Filter Design Using Bilinear Transform 257
5.3 Realization of IIR Filters 258
5.3.1 Direct Forms 258
5.3.2 Cascade Forms 260
5.3.3 Parallel Forms 262
5.3.4 Realization of IIR Filters Using MATLAB 263
5.4 Design of IIR Filters Using MATLAB 264
5.4.1 Filter Design Using MATLAB 264
5.4.2 Frequency Transforms Using MATLAB 267
5.4.3 Design and Realization Using FDATool 268
5.5 Implementation Considerations 271
5.5.1 Stability 271
5.5.2 Finite-Precision Effects and Solutions 273
5.5.3 MATLAB Implementations 275
5.6 Practical Applications 279
5.6.1 Recursive Resonators 279
5.6.2 Recursive Quadrature Oscillators 282
5.6.3 Parametric Equalizers 284
5.7 Experiments and Program Examples 285
5.7.1 Floating-Point Direct-Form I IIR Filter 285
5.7.2 Fixed-Point Direct-Form I IIR Filter 286
5.7.3 Fixed-Point Direct-Form II Cascade IIR Filter 287
5.7.4 Implementation Using DSP Intrinsics 289
5.7.5 Implementation Using Assembly Language 290
5.7.6 Real-Time Experiments Using DSP/BIOS 293
5.7.7 Implementation of Parametric Equalizer 296
5.7.8 Real-Time Two-Band Equalizer Using DSP/BIOS 297
References 299
Exercises 299
6 Frequency Analysis and Fast Fourier Transform 303
6.1 Fourier Series and Transform 303
6.1.1 Fourier Series 303
6.1.2 Fourier Transform 304
6.2 Discrete Fourier Transform 305
6.2.1 Discrete-Time Fourier Transform 305
6.2.2 Discrete Fourier Transform 307
6.2.3 Important Properties 310
6.3 Fast Fourier Transforms 313
6.3.1 Decimation-in-Time 314
6.3.2 Decimation-in-Frequency 316
6.3.3 Inverse Fast Fourier Transform 317
6.4 Implementation Considerations 317
6.4.1 Computational Issues 317
6.4.2 Finite-Precision Effects 318
6.4.3 MATLAB Implementations 318
6.4.4 Fixed-Point Implementation Using MATLAB 320
6.5 Practical Applications 322
6.5.1 Spectral Analysis 322
6.5.2 Spectral Leakage and Resolution 323
6.5.3 Power Spectrum Density 325
6.5.4 Fast Convolution 328
6.6 Experiments and Program Examples 332
6.6.1 Floating-Point C Implementation of DFT 332
6.6.2 C55x Assembly Implementation of DFT 332
6.6.3 Floating-Point C Implementation of FFT 336
6.6.4 C55x Intrinsics Implementation of FFT 338
6.6.5 Assembly Implementation of FFT and Inverse FFT 339
6.6.6 Implementation of Fast Convolution 343
6.6.7 Real-Time FFT Using DSP/BIOS 345
6.6.8 Real-Time Fast Convolution 347
References 347
Exercises 348
7 Adaptive Filtering 351
7.1 Introduction to Random Processes 351
7.2 Adaptive Filters 354
7.2.1 Introduction to Adaptive Filtering 354
7.2.2 Performance Function 355
7.2.3 Method of Steepest Descent 358
7.2.4 The LMS Algorithm 360
7.2.5 Modified LMS Algorithms 361
7.3 Performance Analysis 362
7.3.1 Stability Constraint 362
7.3.2 Convergence Speed 363
7.3.3 Excess Mean-Square Error 363
7.3.4 Normalized LMS Algorithm 364
7.4 Implementation Considerations 364
7.4.1 Computational Issues 365
7.4.2 Finite-Precision Effects 365
7.4.3 MATLAB Implementations 366
7.5 Practical Applications 368
7.5.1 Adaptive System Identification 368
7.5.2 Adaptive Linear Prediction 369
7.5.3 Adaptive Noise Cancelation 372
7.5.4 Adaptive Notch Filters 374
7.5.5 Adaptive Channel Equalization 375
7.6 Experiments and Program Examples 377
7.6.1 Floating-Point C Implementation 377
7.6.2 Fixed-Point C Implementation of Leaky LMS Algorithm 379
7.6.3 ETSI Implementation of NLMS Algorithm 380
7.6.4 Assembly Language Implementation of Delayed LMS Algorithm 383
7.6.5 Adaptive System Identification 387
7.6.6 Adaptive Prediction and Noise Cancelation 388
7.6.7 Adaptive Channel Equalizer 392
7.6.8 Real-Time Adaptive Line Enhancer Using DSK 394
References 396
Exercises 397
8 Digital Signal Generators 401
8.1 Sinewave Generators 401
8.1.1 Lookup-Table Method 401
8.1.2 Linear Chirp Signal 404
8.2 Noise Generators 405
8.2.1 Linear Congruential Sequence Generator 405
8.2.2 Pseudo-Random Binary Sequence Generator 407
8.3 Practical Applications 409
8.3.1 Siren Generators 409
8.3.2 White Gaussian Noise 409
8.3.3 Dual-Tone Multifrequency Tone Generator 410
8.3.4 Comfort Noise in Voice Communication Systems 411
8.4 Experiments and Program Examples 412
8.4.1 Sinewave Generator Using C5510 DSK 412
8.4.2 White Noise Generator Using C5510 DSK 413
8.4.3 Wail Siren Generator Using C5510 DSK 414
8.4.4 DTMF Generator Using C5510 DSK 415
8.4.5 DTMF Generator Using MATLAB Graphical User Interface 416
References 418
Exercises 418
9 Dual-Tone Multifrequency Detection 421
9.1 Introduction 421
9.2 DTMF Tone Detection 422
9.2.1 DTMF Decode Specifications 422
9.2.2 Goertzel Algorithm 423
9.2.3 Other DTMF Detection Methods 426
9.2.4 Implementation Considerations 428
9.3 Internet Application Issues and Solutions 431
9.4 Experiments and Program Examples 432
9.4.1 Implementation of Goertzel Algorithm Using Fixed-Point C 432
9.4.2 Implementation of Goertzel Algorithm Using C55x
Assembly Language 434
9.4.3 DTMF Detection Using C5510 DSK 435
9.4.4 DTMF Detection Using All-Pole Modeling 439
References 441
Exercises 442
10 Adaptive Echo Cancelation 443
10.1 Introduction to Line Echoes 443
10.2 Adaptive Echo Canceler 444
10.2.1 Principles of Adaptive Echo Cancelation 445
10.2.2 Performance Evaluation 446
10.3 Practical Considerations 447
10.3.1 Prewhitening of Signals 447
10.3.2 Delay Detection 448
10.4 Double-Talk Effects and Solutions 450
10.5 Nonlinear Processor 453
10.5.1 Center Clipper 453
10.5.2 Comfort Noise 453
10.6 Acoustic Echo Cancelation 454
10.6.1 Acoustic Echoes 454
10.6.2 Acoustic Echo Canceler 456
10.6.3 Subband Implementations 457
10.6.4 Delay-Free Structures 459
10.6.5 Implementation Considerations 459
10.6.6 Testing Standards 460
10.7 Experiments and Program Examples 461
10.7.1 MATLAB Implementation of AEC 461
10.7.2 Acoustic Echo Cancelation Using Floating-Point C 464
10.7.3 Acoustic Echo Canceler Using C55x Intrinsics 468
10.7.4 Experiment of Delay Estimation 469
References 472
Exercises 472
11 Speech-Coding Techniques 475
11.1 Introduction to Speech-Coding 475
11.2 Overview of CELP Vocoders 476
11.2.1 Synthesis Filter 477
11.2.2 Long-Term Prediction Filter 481
11.2.3 Perceptual Based Minimization Procedure 481
11.2.4 Excitation Signal 482
11.2.5 Algebraic CELP 483
11.3 Overview of Some Popular CODECs 484
11.3.1 Overview of G.723.1 484
11.3.2 Overview of G.729 488
11.3.3 Overview of GSM AMR 490
11.4 Voice over Internet Protocol Applications 492
11.4.1 Overview of VoIP 492
11.4.2 Real-Time Transport Protocol and Payload Type 493
11.4.3 Example of Packing G.729 496
11.4.4 RTP Data Analysis Using Ethereal Trace 496
11.4.5 Factors Affecting the Overall Voice Quality 497
11.5 Experiments and Program Examples 497
11.5.1 Calculating LPC Coefficients Using Floating-Point C 497
11.5.2 Calculating LPC Coefficients Using C55x Intrinsics 499
11.5.3 MATLAB Implementation of Formant Perceptual Weighting Filter 504
11.5.4 Implementation of Perceptual Weighting Filter Using C55x Intrinsics 506
References 507
Exercises 508
12 Speech Enhancement Techniques 509
12.1 Introduction to Noise Reduction Techniques 509
12.2 Spectral Subtraction Techniques 510
12.2.1 Short-Time Spectrum Estimation 511
12.2.2 Magnitude Subtraction 511
12.3 Voice Activity Detection 513
12.4 Implementation Considerations 515
12.4.1 Spectral Averaging 515
12.4.2 Half-Wave Rectification 515
12.4.3 Residual Noise Reduction 516
12.5 Combination of Acoustic Echo Cancelation with NR 516
12.6 Voice Enhancement and Automatic Level Control 518
12.6.1 Voice Enhancement Devices 518
12.6.2 Automatic Level Control 519
12.7 Experiments and Program Examples 519
12.7.1 Voice Activity Detection 519
12.7.2 MATLAB Implementation of NR Algorithm 522
12.7.3 Floating-Point C Implementation of NR 522
12.7.4 Mixed C55x Assembly and Intrinsics Implementations of VAD 522
12.7.5 Combining AEC with NR 526
References 529
Exercises 529
13 Audio Signal Processing 531
13.1 Introduction 531
13.2 Basic Principles of Audio Coding 531
13.2.1 Auditory-Masking Effects for Perceptual Coding 533
13.2.2 Frequency-Domain Coding 536
13.2.3 Lossless Audio Coding 538
13.3 Multichannel Audio Coding 539
13.3.1 MP3 540
13.3.2 Dolby AC-3 541
13.3.3 MPEG-2 AAC 542
13.4 Connectivity Processing 544
13.5 Experiments and Program Examples 544
13.5.1 Floating-Point Implementation of MDCT 544
13.5.2 Implementation of MDCT Using C55x Intrinsics 547
13.5.3 Experiments of Preecho Effects 549
13.5.4 Floating-Point C Implementation of MP3 Decoding 549
References 553
Exercises 553
14 Channel Coding Techniques 555
14.1 Introduction 555
14.2 Block Codes 556
14.2.1 Reed–Solomon Codes 558
14.2.2 Applications of Reed–Solomon Codes 562
14.2.3 Cyclic Redundant Codes 563
14.3 Convolutional Codes 564
14.3.1 Convolutional Encoding 564
14.3.2 Viterbi Decoding 564
14.3.3 Applications of Viterbi Decoding 566
14.4 Experiments and Program Examples 569
14.4.1 Reed–Solomon Coding Using MATLAB 569
14.4.2 Reed–Solomon Coding Using Simulink 570
14.4.3 Verification of RS(255, 239) Generation Polynomial 571
14.4.4 Convolutional Codes 572
14.4.5 Implementation of Convolutional Codes Using C 573
14.4.6 Implementation of CRC-32 575
References 576
Exercises 577
15 Introduction to Digital Image Processing 579
15.1 Digital Images and Systems 579
15.1.1 Digital Images 579
15.1.2 Digital Image Systems 580
15.2 RGB Color Spaces and Color Filter Array Interpolation 581
15.3 Color Spaces 584
15.3.1 YCbCr and YUV Color Spaces 584
15.3.2 CMYK Color Space 585
15.3.3 YIQ Color Space 585
15.3.4 HSV Color Space 585
15.4 YCbCr Subsampled Color Spaces 586
15.5 Color Balance and Correction 586
15.5.1 Color Balance 587
15.5.2 Color Adjustment 588
15.5.3 Gamma Correction 589
15.6 Image Histogram 590
15.7 Image Filtering 591
15.8 Image Filtering Using Fast Convolution 596
15.9 Practical Applications 597
15.9.1 JPEG Standard 597
15.9.2 2-D Discrete Cosine Transform 599
15.10 Experiments and Program Examples 601
15.10.1 YCbCr to RGB Conversion 601
15.10.2 Using CCS Link with DSK and Simulator 604
15.10.3 White Balance 607
15.10.4 Gamma Correction and Contrast Adjustment 610
15.10.5 Histogram and Histogram Equalization 611
15.10.6 2-D Image Filtering 613
15.10.7 Implementation of DCT and IDCT 617
15.10.8 TMS320C55x Image Accelerator for DCT and IDCT 621
15.10.9 TMS320C55x Hardware Accelerator Image/Video Processing Library 623
References 625
Exercises 625
Appendix A Some Useful Formulas and Definitions 627
A.1 Trigonometric Identities 627
A.2 Geometric Series 628
A.3 Complex Variables 628
A.4 Units of Power 630
References 631
Appendix B Software Organization and List of Experiments 633
Index 639
Preface
In recent years, digital signal processing (DSP) has expanded beyond filtering, frequency analysis, and
signal generation. More and more markets are opening up to DSP applications, where in the past,
real-time signal processing was not feasible or was too expensive. Real-time signal processing using
general-purpose DSP processors provides an effective way to design and implement DSP algorithms for
real-world applications. However, this is very challenging work in today’s engineering fields. With DSP
penetrating into many practical applications, the demand for high-performance digital signal processors
has expanded rapidly in recent years. Many industrial companies are currently engaged in real-time DSP
research and development. Therefore, it becomes increasingly important for today’s students, practicing
engineers, and development researchers to master not only the theory of DSP, but also the skill of real-time
DSP system design and implementation techniques.
This book provides fundamental real-time DSP principles and uses a hands-on approach to introduce
DSP algorithms, system design, real-time implementation considerations, and many practical applications.
It contains many useful examples, such as hands-on experiment software and DSP programs
using MATLAB, Simulink, C, and DSP assembly languages. Also included are various exercises for
further exploring extensions of the examples and experiments. The book uses Texas Instruments'
Code Composer Studio (CCS) with the Spectrum Digital TMS320VC5510 DSP starter kit (DSK)
development tool for real-time experiments and applications.
This book emphasizes real-time DSP applications and is intended as a text for senior/graduate-level
college students. The prerequisites of this book are signals and systems concepts, microprocessor
architecture and programming, and basic C programming knowledge. These topics are covered at the
sophomore and junior levels of electrical and computer engineering, computer science, and other related
engineering curricula. This book can also serve as a desktop reference for DSP engineers, algorithm
developers, and embedded system programmers to learn DSP concepts and to develop real-time DSP
applications on the job. We use a practical approach that avoids numerous theoretical derivations. A list of
DSP textbooks with mathematical proofs is given at the end of each chapter. Also helpful are the manuals
and application notes for the TMS320C55x DSP processors from Texas Instruments at www.ti.com,
and for MATLAB and Simulink from The MathWorks at www.mathworks.com.
This is the second edition of the book titled ‘Real-Time Digital Signal Processing: Implementations,
Applications and Experiments with the TMS320C55x' by Kuo and Lee, John Wiley & Sons, Ltd, in
2001. The major changes included in the revision are:
1. To utilize the effective software development process that begins from algorithm design and
verification using MATLAB and floating-point C, to finite-wordlength analysis, fixed-point C
implementation and code optimization using intrinsics, assembly routines, and mixed C-and-assembly
programming on fixed-point DSP processors. This step-by-step software development and optimization
process is applied to the finite-impulse response (FIR) filtering, infinite-impulse response (IIR)
filtering, adaptive filtering, fast Fourier transform, and many real-life applications in Chapters 8–15.
2. To add several widely used DSP applications such as speech coding, channel coding, audio coding,
image processing, signal generation and detection, echo cancelation, and noise reduction by expanding
Chapter 9 of the first edition to eight new chapters with the necessary background to perform the
experiments using the optimized software development process.
3. To design and analyze DSP algorithms using effective MATLAB graphical user interface (GUI)
tools such as the Signal Processing Tool (SPTool) and the Filter Design and Analysis Tool (FDATool).
These tools are powerful for filter design, analysis, quantization, testing, and implementation.
4. To add step-by-step experiments to create CCS DSP/BIOS applications, configure the
TMS320VC5510 DSK for real-time audio applications, and utilize MATLAB's Link for CCS feature
to improve the efficiency of DSP development, debugging, analysis, and testing.
5. To update experiments to include new sets of hands-on exercises and applications. Also, to update all
programs using the most recent version of software and the TMS320C5510 DSK board for real-time
experiments.
There are many existing DSP algorithms and applications available in MATLAB and floating-point
C programs. This book provides a systematic software development process for converting these
programs to fixed-point C and optimizing them for implementation on commercially available fixed-point
DSP processors. To effectively illustrate real-time DSP concepts and applications, MATLAB is used
for analysis and filter design, C programs are used for implementing DSP algorithms, and CCS is
integrated into TMS320C55x experiments and applications. To efficiently utilize the advanced DSP
architecture for fast software development and maintenance, the mixing of C and assembly programs is
emphasized.
This book is organized into two parts: DSP implementation and DSP application. Part I, DSP
implementation (Chapters 1–7), discusses real-time DSP principles, architectures, algorithms, and
implementation considerations. Chapter 1 reviews the fundamentals of real-time DSP functional blocks,
DSP hardware options, fixed- and floating-point DSP devices, real-time constraints, algorithm
development, selection of DSP chips, and software development. Chapter 2 introduces the architecture
and assembly programming of the TMS320C55x DSP processor. Chapter 3 presents fundamental DSP
concepts and practical considerations for the implementation of digital filters and algorithms on DSP
hardware. Chapter 4 focuses on the design, implementation, and application of FIR filters. Digital IIR
filters are covered in Chapter 5, and adaptive filters are presented in Chapter 7. The development,
implementation, and application of FFT algorithms are introduced in Chapter 6.
Part II, DSP application (Chapters 8–15), introduces several popular real-world applications in signal
processing that have played important roles in the realization of the systems. These selected DSP
applications include signal (sinewave, noise, and multitone) generation in Chapter 8, dual-tone
multifrequency detection in Chapter 9, adaptive echo cancelation in Chapter 10, speech-coding
algorithms in Chapter 11, speech enhancement techniques in Chapter 12, audio coding methods in
Chapter 13, error correction coding techniques in Chapter 14, and image processing fundamentals
in Chapter 15.
As with any book attempting to capture the state of the art at a given time, there will certainly be
updates that are necessitated by the rapidly evolving developments in this dynamic field. We are certain
that this book will serve as a guide for what has already come and as an inspiration for what will
follow.
Software Availability
This text utilizes various MATLAB, floating-point and fixed-point C, DSP assembly and mixed C and
assembly programs for the examples, experiments, and applications. These programs along with many
other programs and real-world data files are available in the companion CD. The directory structure and
the subdirectory names are explained in Appendix B. The software will assist in gaining insight into the
understanding and implementation of DSP algorithms, and it is required for doing experiments in the last
section of each chapter. Some of these experiments involve minor modifications of the example code.
By examining, studying, and modifying the example code, the software can also be used as a prototype
for other practical applications. Every attempt has been made to ensure the correctness of the code. We
would appreciate readers bringing to our attention (kuo@ceet.niu.edu) any coding errors so that we
can correct, update, and post them on the website http://www.ceet.niu.edu/faculty/kuo.
Acknowledgments
We are grateful to Cathy Wicks and Gene Frantz of Texas Instruments, and to Naomi Fernandes and
Courtney Esposito of The MathWorks for providing us with the support needed to write this book. We
would like to thank several individuals at Wiley for their support on this project: Simone Taylor, Executive
Commissioning Editor; Emily Bone, Assistant Editor; and Lucy Bryan, Executive Project Editor. We also
thank the staff at Wiley for the final preparation of this book. Finally, we thank our families for the endless
love, encouragement, patience, and understanding they have shown throughout this period.
Sen M. Kuo, Bob H. Lee and Wenshun Tian
1 Introduction to Real-Time Digital Signal Processing
Signals can be divided into three categories: continuous-time (analog) signals, discrete-time signals, and
digital signals. The signals that we encounter daily are mostly analog signals. These signals are defined
continuously in time, have an infinite range of amplitude values, and can be processed using analog
electronics containing both active and passive circuit elements. Discrete-time signals are defined only at
a particular set of time instances. Therefore, they can be represented as a sequence of numbers that have a
continuous range of values. Digital signals have discrete values in both time and amplitude; thus, they can
be processed by computers or microprocessors. In this book, we will present the design, implementation,
and applications of digital systems for processing digital signals using digital hardware. However, the
analysis usually uses discrete-time signals and systems for mathematical convenience. Therefore, we use
the terms ‘discrete-time’ and ‘digital’ interchangeably.
Digital signal processing (DSP) is concerned with the digital representation of signals and the use of
digital systems to analyze, modify, store, or extract information from these signals. Much research
has been conducted to develop DSP algorithms and systems for real-world applications. In recent
years, the rapid advancement in digital technologies has supported the implementation of sophisti-
cated DSP algorithms for real-time applications. DSP is now used not only in areas where analog
methods were used previously, but also in areas where applying analog techniques is very difficult or
impossible.
There are many advantages in using digital techniques for signal processing rather than traditional
analog devices, such as amplifiers, modulators, and filters. Some of the advantages of a DSP system over
analog circuitry are summarized as follows:
1. Flexibility: Functions of a DSP system can be easily modified and upgraded with software that
implements the specific applications. One can design a DSP system that can be programmed to
perform a wide variety of tasks by executing different software modules. A digital electronic device
can be easily upgraded in the field through the onboard memory devices (e.g., flash memory) to meet
new requirements or improve its features.
2. Reproducibility: The performance of a DSP system can be repeated precisely from one unit to another.
In addition, by using DSP techniques, digital signals such as audio and video streams can be stored,
transferred, or reproduced many times without degrading the quality. By contrast, analog circuits
will not have the same characteristics even if they are built following identical specifications, due to
analog component tolerances.

Real-Time Digital Signal Processing: Implementations and Applications  S.M. Kuo, B.H. Lee, and W. Tian
© 2006 John Wiley & Sons, Ltd
3. Reliability: The memory and logic of DSP hardware do not deteriorate with age. Therefore, the
field performance of DSP systems will not drift with changing environmental conditions or aging
electronic components as their analog counterparts do.
4. Complexity: DSP allows sophisticated applications such as speech recognition and image compres-
sion to be implemented with lightweight and low-power portable devices. Furthermore, there are
some important signal processing algorithms such as error correcting codes, data transmission and
storage, and data compression, which can only be performed using DSP systems.
With the rapid evolution in semiconductor technologies, DSP systems have a lower overall cost com-
pared to analog systems for most applications. DSP algorithms can be developed, analyzed, and simulated
using high-level language and software tools such as C/C++ and MATLAB (matrix laboratory). The
performance of the algorithms can be verified using a low-cost, general-purpose computer. Therefore, a
DSP system is relatively easy to design, develop, analyze, simulate, test, and maintain.
There are some limitations associated with DSP. For instance, the bandwidth of a DSP system is
limited by the sampling rate and hardware peripherals. Also, DSP algorithms are implemented using
a fixed number of bits with a limited precision and dynamic range (the ratio between the largest and
smallest numbers that can be represented), which results in quantization and arithmetic errors. Thus, the
system performance might be different from the theoretical expectation.
1.1 Basic Elements of Real-Time DSP Systems
There are two types of DSP applications: non-real-time and real-time. Non-real-time signal processing
involves manipulating signals that have already been collected in digital forms. This may or may not
represent a current action, and the requirement for the processing result is not a function of real time.
Real-time signal processing places stringent demands on DSP hardware and software designs to complete
predefined tasks within a certain time frame. This chapter reviews the fundamental functional blocks of
real-time DSP systems.
The basic functional blocks of DSP systems are illustrated in Figure 1.1, where a real-world analog
signal is converted to a digital signal, processed by DSP hardware, and converted back into an analog
signal.

Figure 1.1 Basic functional block diagram of a real-time DSP system: the sensor signal x′(t) is amplified to x(t), bandlimited by an antialiasing filter, digitized by the ADC to x(n), processed by the DSP hardware to y(n), converted by the DAC to y′(t), and smoothed by the reconstruction filter and amplifier to produce y(t); input and output channels also connect to other digital systems

Each of the functional blocks in Figure 1.1 will be introduced in the subsequent sections. For
some applications, the input signal may already be in digital form and/or the output data may not need
to be converted to an analog signal. For example, the processed digital information may be stored in
computer memory for later use, or it may be displayed graphically. In other applications, the DSP system
may be required to generate signals digitally, such as speech synthesis used for computerized services or
pseudo-random number generators for CDMA (code division multiple access) wireless communication
systems.
1.2 Analog Interface
In this book, a time-domain signal is denoted with a lowercase letter. For example, x(t) in Figure 1.1 is
used to name an analog signal of x which is a function of time t. The time variable t and the amplitude of
x(t) take on a continuum of values between −∞ and ∞. For this reason we say x(t) is a continuous-time
signal. The signals x(n) and y(n) in Figure 1.1 depict digital signals which are only meaningful at time
instant n. In this section, we first discuss how to convert analog signals into digital signals so that they
can be processed using DSP hardware. The process of converting an analog signal to a digital signal is
called the analog-to-digital conversion, usually performed by an analog-to-digital converter (ADC).
The purpose of signal conversion is to prepare real-world analog signals for processing by digital
hardware. As shown in Figure 1.1, the analog signal x′(t) is picked up by an appropriate electronic sensor
that converts pressure, temperature, or sound into electrical signals. For example, a microphone can be
used to collect sound signals. The sensor signal x′(t) is amplified by an amplifier with gain value g. The
amplified signal is

x(t) = g x′(t).    (1.1)

The gain value g is determined such that x(t) has a dynamic range that matches the ADC used by the
system. If the input voltage range of the ADC is ±5 V, then g may be set so that the amplitude of the
signal x(t) presented to the ADC stays within ±5 V. In practice, it is very difficult to set an appropriate
fixed gain because the level of x′(t) may be unknown and change with time, especially for signals with a
large dynamic range such as human speech.
Once the input digital signal has been processed by the DSP hardware, the result y(n) is still in digital
form. In many DSP applications, we need to reconstruct the analog signal after the completion of digital
processing. We must convert the digital signal y(n) back to the analog signal y(t) before it is applied to an
appropriate analog device. This process is called digital-to-analog conversion, typically performed by
a digital-to-analog converter (DAC). One example would be audio CD (compact disc) players, for which
the audio music signals are stored in digital form on CDs. A CD player reads the encoded digital audio
signals from the disk and reconstructs the corresponding analog waveform for playback via loudspeakers.
The system shown in Figure 1.1 is a real-time system if the signal to the ADC is continuously sampled
and the ADC presents a new sample to the DSP hardware at the same rate. In order to maintain real-time
operation, the DSP hardware must perform all required operations within the fixed time period, and
present an output sample to the DAC before the arrival of the next sample from the ADC.
1.2.1 Sampling
As shown in Figure 1.1, the ADC converts the analog signal x(t) into the digital signal x(n). Analog-
to-digital conversion, commonly referred as digitization, consists of the sampling (digitization in time)
and quantization (digitization in amplitude) processes as illustrated in Figure 1.2. The sampling process
depicts an analog signal as a sequence of values. The basic sampling function can be carried out with an
ideal ‘sample-and-hold’ circuit, which maintains the sampled signal level until the next sample is taken.
Figure 1.2 Block diagram of an ADC: x(t) → ideal sampler → x(nT) → quantizer → x(n)
Quantization process approximates a waveform by assigning a number for each sample. Therefore, the
analog-to-digital conversion will perform the following steps:
1. The bandlimited signal x(t) is sampled at uniformly spaced instants of time nT, where n is a nonnegative
integer and T is the sampling period in seconds. This sampling process converts an analog signal
into a discrete-time signal x(nT) with continuous amplitude values.

2. The amplitude of each discrete-time sample is quantized into one of the 2^B levels, where B is the
number of bits the ADC uses to represent each sample. The discrete amplitude levels are
represented (or encoded) as distinct binary words x(n) with a fixed wordlength B.
The reason for making this distinction is that these processes introduce different distortions. The sampling
process brings in aliasing or folding distortion, while the encoding process results in quantization noise.
As shown in Figure 1.2, the sampler and quantizer are integrated on the same chip. However, high-speed
ADCs typically require an external sample-and-hold device.
An ideal sampler can be considered as a switch that periodically opens and closes every T s (seconds).
The sampling period is defined as

T = 1/fs,    (1.2)
where fs is the sampling frequency (or sampling rate) in hertz (or cycles per second). The intermediate
signal x(nT ) is a discrete-time signal with a continuous value (a number with infinite precision) at discrete
time nT, n = 0, 1, . . . , ∞, as illustrated in Figure 1.3. The analog signal x(t) is continuous in both time
and amplitude. The sampled discrete-time signal x(nT ) is continuous in amplitude, but is defined only
at discrete sampling instants t = nT.
Figure 1.3 Example of analog signal x(t) and discrete-time signal x(nT), with samples taken at 0, T, 2T, 3T, 4T
In order to represent an analog signal x(t) by a discrete-time signal x(nT) accurately, the sampling
frequency fs must be at least twice the maximum frequency component (fM) in the analog signal x(t).
That is,

fs ≥ 2fM,    (1.3)

where fM is also called the bandwidth of the signal x(t). This is Shannon's sampling theorem, which states
that when the sampling frequency is greater than twice the highest frequency component contained
in the analog signal, the original signal x(t) can be perfectly reconstructed from the corresponding
discrete-time signal x(nT).
The minimum sampling rate fs = 2 fM is called the Nyquist rate. The frequency fN = fs/2 is called
the Nyquist frequency or folding frequency. The frequency interval [− fs/2, fs/2] is called the Nyquist
interval. When an analog signal is sampled at fs, frequency components higher than fs/2 fold back
into the frequency range [0, fs/2]. The folded back frequency components overlap with the original
frequency components in the same range. Therefore, the original analog signal cannot be recovered from
the sampled data. This undesired effect is known as aliasing.
Example 1.1: Consider two sinewaves of frequencies f1 = 1 Hz and f2 = 5 Hz that are sampled
at fs = 4 Hz, rather than at 10 Hz according to the sampling theorem. The analog waveforms are
illustrated in Figure 1.4(a), while their digital samples and reconstructed waveforms are illustrated
in Figure 1.4(b).

Figure 1.4 Example of the aliasing phenomenon: (a) original analog waveforms and digital samples for f1 = 1 Hz and f2 = 5 Hz; (b) digital samples for f1 = 1 Hz and f2 = 5 Hz and reconstructed waveforms

As shown in the figures, we can reconstruct the original waveform from the digital
samples for the sinewave of frequency f1 = 1 Hz. However, for the original sinewave of frequency
f2 = 5 Hz, the reconstructed signal is identical to the sinewave of frequency 1 Hz. Therefore, f1
and f2 are said to be aliased to one another, i.e., they cannot be distinguished by their discrete-time
samples.
Note that the sampling theorem assumes that the signal is bandlimited. For most practical applications,
the analog signal x(t) may have significant energies outside the highest frequency of interest, or may
contain noise with a wider bandwidth. In some cases, the sampling rate is predetermined by a given
application. For example, most voice communication systems use an 8 kHz sampling rate. Unfortunately,
the frequency components in a speech signal can be much higher than 4 kHz. To guarantee that the
sampling theorem defined in Equation (1.3) can be fulfilled, we must block the frequency components
that are above the Nyquist frequency. This can be done by using an antialiasing filter, which is an analog
lowpass filter with the cutoff frequency

fc ≤ fs/2.    (1.4)
Ideally, an antialiasing filter should remove all frequency components above the Nyquist frequency.
In many practical systems, a bandpass filter is preferred to remove all frequency components above the
Nyquist frequency, as well as to prevent undesired DC offset, 60 Hz hum, or other low-frequency noises.
A bandpass filter with passband from 300 to 3200 Hz can often be found in telecommunication systems.
Since antialiasing filters used in real-world applications are not ideal filters, they cannot completely
remove all frequency components outside the Nyquist interval. In addition, since the phase response of
the analog filter may not be linear, the phase of the signal will not be shifted by amounts proportional to
their frequencies. In general, a lowpass (or bandpass) filter with a steeper roll-off will introduce more phase
distortion. Higher sampling rates allow simple low-cost antialiasing filters with minimal phase distortion
to be used. This technique is known as oversampling, which is widely used in audio applications.
Example 1.2: The range of sampling rate required by DSP systems is large, from approximately
1 GHz in radar to 1 Hz in instrumentation. Given a sampling rate for a specific application, the
sampling period can be determined by (1.2). Some real-world applications use the following
sampling frequencies and periods:
1. In International Telecommunication Union (ITU) speech compression standards, the sampling
rate of ITU-T G.729 and G.723.1 is fs = 8 kHz, thus the sampling period T = 1/8000 s =
125 μs. Note that 1 μs = 10^-6 s.
2. Wideband telecommunication systems, such as ITU-T G.722, use a sampling rate of fs =
16 kHz, thus T = 1/16 000 s = 62.5 μs.
3. In audio CDs, the sampling rate is fs = 44.1 kHz, thus T = 1/44 100 s = 22.676 μs.
4. High-fidelity audio systems, such as MPEG-2 (moving picture experts group) AAC (advanced
audio coding) standard, MP3 (MPEG layer 3) audio compression standard, and Dolby AC-3,
have a sampling rate of fs = 48 kHz, and thus T = 1/48 000 s = 20.833 μs. The sampling
rate for MPEG-2 AAC can be as high as 96 kHz.
The speech compression algorithms will be discussed in Chapter 11 and the audio coding techniques
will be introduced in Chapter 13.
1.2.2 Quantization and Encoding
In previous sections, we assumed that the sample values x(nT) are represented exactly with an infinite
number of bits (i.e., B → ∞). We now discuss a method of representing the sampled discrete-time signal
x(nT) as a binary number with a finite number of bits. This is the quantization and encoding process. If
the wordlength of an ADC is B bits, there are 2^B different values (levels) that can be used to represent
a sample. If x(n) lies between two quantization levels, it will be either rounded or truncated. Rounding
replaces x(n) by the value of the nearest quantization level, while truncation replaces x(n) by the value
of the level below it. Since rounding produces a less biased representation of the analog values, it is widely
used by ADCs. Therefore, quantization is a process that represents an analog-valued sample x(nT) with
its nearest level that corresponds to the digital signal x(n).
We can use 2 bits to define four equally spaced levels (00, 01, 10, and 11) to classify the signal into
four subranges as illustrated in Figure 1.5. In this figure, the symbol 'o' represents the discrete-time
signal x(nT), and the symbol '•' represents the digital signal x(n). The spacing between two consecutive
quantization levels is called the quantization width, step, or resolution. If the spacing between these levels
is the same, then we have a uniform quantizer. For uniform quantization, the resolution is given by
dividing the full-scale range by the number of quantization levels, 2^B.
In Figure 1.5, the difference between the quantized number and the original value is defined as the
quantization error, which appears as noise in the converter output. It is also called the quantization noise,
which is assumed to be a random variable that is uniformly distributed. If a B-bit quantizer is used, the
signal-to-quantization-noise ratio (SQNR) is approximated by (this will be derived in Chapter 3)

SQNR ≈ 6B dB.    (1.5)

This is a theoretical maximum. In practice, the achievable SQNR will be less than this value due to
imperfections in the fabrication of converters. However, Equation (1.5) still provides a simple guideline
for determining the required number of bits for a given application. For each additional bit, a digital signal
will have about a 6-dB gain in SQNR. The problems of quantization and their solutions will be further
discussed in Chapter 3.
Example 1.3: If the input signal varies between 0 and 5 V, we have the resolutions and SQNRs
for the following commonly used data converters:

1. An 8-bit ADC with 256 (2^8) levels can only provide 19.5 mV resolution and 48 dB SQNR.

2. A 12-bit ADC has 4096 (2^12) levels of 1.22 mV resolution, and provides 72 dB SQNR.
Figure 1.5 Digital samples using a 2-bit quantizer: the signal x(t) is sampled at 0, T, 2T, 3T and assigned to quantization levels 00, 01, 10, 11, producing quantization errors
3. A 16-bit ADC has 65 536 (2^16) levels, and thus provides 76.294 μV resolution with 96 dB
SQNR.
Obviously, with more quantization levels, one can represent analog signals more accurately.
The dynamic range of speech signals is very large. If the uniform quantization scheme shown in
Figure 1.5 is scaled to represent loud sounds adequately, most of the softer sounds may be pushed into the
same small values. This means that soft sounds may not be distinguishable. To solve this problem, a
quantizer whose quantization level varies according to the signal amplitude can be used. In practice,
the nonuniform quantizer uses uniform levels, but the input signal is compressed first using a logarithm
function. That is, the logarithm-scaled signal, rather than the original input signal itself, will be quantized.
The compressed signal can be reconstructed by expanding it. The process of compression and expansion
is called companding (compressing and expanding). For example, the ITU-T G.711 μ-law (used in
North America and parts of Northeast Asia) and A-law (used in Europe and most of the rest of the world)
companding schemes are used in most digital telecommunications. The A-law companding scheme gives
slightly better performance at high signal levels, while the μ-law is better at low levels.
As shown in Figure 1.1, the input signal to DSP hardware may be a digital signal from other DSP
systems. In this case, the sampling rate of digital signals from other digital systems must be known. The
signal processing techniques called interpolation and decimation can be used to increase or decrease the
existing digital signals’ sampling rates. Sampling rate changes may be required in many multirate DSP
systems such as interconnecting DSP systems that are operated at different rates.
1.2.3 Smoothing Filters
Most commercial DACs are zero-order-hold devices, meaning they convert the input binary number to the
corresponding voltage level and then hold that value for T seconds until the next sampling instant. Therefore,
the DAC produces a staircase-shaped analog waveform y′(t), as shown by the solid line in Figure 1.6,
which is a rectangular waveform with amplitude equal to the input value and duration of T seconds. Obviously,
this staircase output contains some high-frequency components due to an abrupt change in signal levels.
The reconstruction or smoothing filter shown in Figure 1.1 smoothes the staircase-like analog signal
generated by the DAC. This lowpass filtering has the effect of rounding off the corners (high-frequency
components) of the staircase signal and making it smoother, which is shown as a dotted line in Figure
1.6. This analog lowpass filter may have the same specifications as the antialiasing filter with cutoff
frequency fc ≤ fs/2. High-quality DSP applications, such as professional digital audio, require the use
Figure 1.6 Staircase waveform y′(t) generated by a DAC (solid line) and the smoothed output signal (dotted line)
of reconstruction filters with very stringent specifications. To reduce the cost of using high-quality analog
filters, the oversampling technique can be adopted to allow the use of low-cost filters with slower roll-off.
1.2.4 Data Converters
There are two schemes of connecting ADC and DAC to DSP processors: serial and parallel. A parallel
converter receives or transmits all the B bits in one pass, while the serial converters receive or transmit
B bits in a serial bit stream. Parallel converters must be attached to the DSP processor’s external address
and data buses, which are also attached to many different types of devices. Serial converters can be
connected directly to the built-in serial ports of DSP processors. This is why many practical DSP systems
use serial ADCs and DACs.
Many applications use a single-chip device called an analog interface chip (AIC) or a coder/decoder
(CODEC), which integrates an antialiasing filter, an ADC, a DAC, and a reconstruction filter all on
a single piece of silicon. In this book, we will use Texas Instruments’ TLV320AIC23 (AIC23) chip
on the DSP starter kit (DSK) for real-time experiments. Typical applications using CODEC include
modems, speech systems, audio systems, and industrial controllers. Many standards that specify the
nature of the CODEC have evolved for the purposes of switching and transmission. Some CODECs use a
logarithmic quantizer, i.e., A-law or μ-law, which must be converted into a linear format for processing.
DSP processors implement the required format conversion (compression or expansion) in hardware, or
in software by using a lookup table or calculation.
The most popular commercially available ADCs are successive approximation, dual slope, flash, and
sigma–delta. The successive-approximation ADC produces a B-bit output in B clock cycles by comparing
the input waveform with the output of a DAC. This device uses a successive-approximation register to
split the voltage range in half to determine where the input signal lies. According to the comparator
result, 1 bit will be set or reset each time. This process proceeds from the most significant bit to the least
significant bit. The successive-approximation type of ADC is generally accurate and fast at a relatively
low cost. However, its ability to follow changes in the input signal is limited by its internal clock rate,
and so it may be slow to respond to sudden changes in the input signal.
The dual-slope ADC uses an integrator connected to the input voltage and a reference voltage. The
integrator starts at zero condition, and it is charged for a limited time. The integrator is then switched
to a known negative reference voltage and charged in the opposite direction until it reaches zero volts
again. Simultaneously, a digital counter starts to record the clock cycles. The number of counts required
for the integrator output voltage to return to zero is directly proportional to the input voltage. This
technique is very precise and can produce ADCs with high resolution. Since the integrator is used for
input and reference voltages, any small variations in temperature and aging of components have little
or no effect on these types of converters. However, they are very slow and generally cost more than
successive-approximation ADCs.
A voltage divider made of resistors is used to set the reference voltages at the flash ADC inputs. The
major advantage of a flash ADC is its speed of conversion, which is simply the propagation delay of the
comparators. Unfortunately, a B-bit flash ADC requires (2^B − 1) expensive comparators and laser-trimmed
resistors. Therefore, commercially available flash ADCs usually have lower resolutions.
Sigma–delta ADCs use oversampling and quantization noise shaping to trade quantizer resolution
for sampling rate. The block diagram of a sigma–delta ADC is illustrated in Figure 1.7, which
uses a 1-bit quantizer with a very high sampling rate. Thus, the requirements for the antialiasing
filter are significantly relaxed (i.e., a lower roll-off rate). A low-order antialiasing filter requires
simple low-cost analog circuitry and is much easier to build and maintain. In the process of quanti-
zation, the resulting noise power is spread evenly over the entire spectrum. The quantization noise be-
yond the required spectrum range can be filtered out using an appropriate digital lowpass filter. As a
result, the noise power within the frequency band of interest is lower. In order to match the sampling
frequency with the system and increase its resolution, a decimator is used.

Figure 1.7 A conceptual sigma–delta ADC block diagram: the analog input is summed (Σ, sigma) with the feedback from a 1-bit DAC to form the difference (delta), integrated (∫), quantized by a 1-bit ADC, and the resulting 1-bit stream is converted to B-bit samples by a digital decimator

The advantages of sigma–delta ADCs are high resolution and good noise characteristics at a competitive
price because they use digital filters.
Example 1.4: In this book, we use the TMS320VC5510 DSK for real-time experiments. The
C5510 DSK uses an AIC23 stereo CODEC for input and output of audio signals. The ADCs and
DACs within the AIC23 use the multi-bit sigma–delta technology with integrated oversampling
digital interpolation filters. It supports data wordlengths of 16, 20, 24, and 32 bits, with sampling
rates from 8 to 96 kHz including the CD standard 44.1 kHz. Integrated analog features consist
of stereo-line inputs and a stereo headphone amplifier with analog volume control. Its power
management allows selective shutdown of CODEC functions, thus extending battery life in portable
applications such as portable audio and video players and digital recorders.
1.3 DSP Hardware
DSP systems are required to perform intensive arithmetic operations such as multiplication and addition.
These tasks may be implemented on microprocessors, microcontrollers, digital signal processors, or
custom integrated circuits. The selection of appropriate hardware is determined by the applications, cost,
or a combination of both. This section introduces different digital hardware implementations for DSP
applications.
1.3.1 DSP Hardware Options
As shown in Figure 1.1, the processing of the digital signal x(n) is performed using the DSP hardware.
Although it is possible to implement DSP algorithms on any digital computer, the real applications
determine the optimum hardware platform. Five hardware platforms are widely used for DSP systems:
1. special-purpose (custom) chips such as application-specific integrated circuits (ASIC);
2. field-programmable gate arrays (FPGA);
3. general-purpose microprocessors or microcontrollers (μP/μC);
4. general-purpose digital signal processors (DSP processors); and
5. DSP processors with application-specific hardware (HW) accelerators.
The hardware characteristics of these options are summarized in Table 1.1.
Table 1.1 Summary of DSP hardware implementations

                      ASIC    FPGA          μP/μC         DSP processor   DSP processor with
                                                                          HW accelerators
Flexibility           None    Limited       High          High            Medium
Design time           Long    Medium        Short         Short           Short
Power consumption     Low     Low–medium    Medium–high   Low–medium      Low–medium
Performance           High    High          Low–medium    Medium–high     High
Development cost      High    Medium        Low           Low             Low
Production cost       Low     Low–medium    Medium–high   Low–medium      Medium
ASIC devices are usually designed for specific tasks that require a lot of computations such as digital
subscriber loop (DSL) modems, or high-volume products that use mature algorithms such as fast Fourier
transform and Reed–Solomon codes. These devices are able to perform their limited functions much
faster than general-purpose processors because of their dedicated architecture. These application-specific
products enable the use of high-speed functions optimized in hardware, but they lack the
programmability needed to modify the algorithms and functions. They are suitable for implementing well-
defined and well-tested DSP algorithms for high-volume products, or applications demanding extremely
high speeds that can be achieved only by ASICs. Recently, the availability of core modules for some
common DSP functions has simplified the ASIC design tasks, but the cost of prototyping an ASIC device,
a longer design cycle, and the lack of standard development tools support and reprogramming flexibility
sometimes outweigh their benefits.
FPGAs have been used in DSP applications for years as glue logic, bus bridges, and peripherals for
reducing system costs and affording a higher level of system integration. Recently, FPGAs have been
gaining considerable attention in high-performance DSP applications, and are emerging as coprocessors
for standard DSP processors that need specific accelerators. In these cases, FPGAs work in conjunction with DSP
processors for integrating pre- and postprocessing functions. FPGAs provide tremendous computational
power by using highly parallel architectures for very high performance. These devices are hardware re-
configurable, thus allowing the system designer to optimize the hardware architectures for implementing
algorithms that require higher performance and lower production cost. In addition, the designer can imple-
ment high-performance complex DSP functions in a small fraction of the total device, and use the rest to
implement system logic or interface functions, resulting in both lower costs and higher system integration.
Example 1.5: There are four major FPGA families that are targeted for DSP systems: Cyclone
and Stratix from Altera, and Virtex and Spartan from Xilinx. The Xilinx Spartan-3 FPGA family
(introduced in 2003) uses a 90-nm manufacturing process to achieve low silicon die costs. To
support DSP functions in an area-efficient manner, Spartan-3 includes the following features:
• embedded 18 × 18 multipliers;
• distributed RAM for local storage of DSP coefficients;
• 16-bit shift registers for capturing high-speed data; and
• large block RAM for buffers.
The current Spartan-3 family includes the XC3S50, S200, S400, S1000, and S1500 devices. With
the aid of Xilinx System Generator for DSP, a tool used to port MATLAB Simulink models to
Xilinx hardware models, a system designer can model, simulate, and verify DSP algorithms on
the target hardware under the Simulink environment.
12 INTRODUCTION TO REAL-TIME DIGITAL SIGNAL PROCESSING
[Figure 1.8 depicts the two memory architectures. In (a), the Harvard architecture, the processor connects to a program memory through a program address bus and a program data bus, and to a separate data memory through a data address bus and a data data bus. In (b), the von Neumann architecture, the processor connects to a single memory through one address bus and one data bus.]
Figure 1.8 Different memory architectures: (a) Harvard architecture; (b) von Neumann architecture
General-purpose μPs/μCs are becoming faster and increasingly able to handle some DSP applications. Many
electronic products are currently designed using these processors. For example, automotive controllers
use microcontrollers for engine, brake, and suspension control. If a DSP application is added to an
existing product that already contains a μP/μC, it is desirable to add the new functions in software without
requiring an additional DSP processor. For example, Intel has adopted a native signal processing initiative
that uses the host processor in computers to perform audio coding and decoding, sound synthesis, and
so on. Software development tools for μP/μC devices are generally more sophisticated and powerful
than those available for DSP processors, thus easing development for some applications that are less
demanding on the performance and power consumption of processors.
General architectures of μP/μC fall into two categories: Harvard architecture and von Neumann archi-
tecture. As illustrated in Figure 1.8(a), Harvard architecture has a separate memory space for the program
and the data, so that both memories can be accessed simultaneously. The von Neumann architecture as-
sumes that there is no intrinsic difference between the instructions and the data, as illustrated in Figure
1.8(b). Operations such as add, move, and subtract are easy to perform on μPs/μCs. However, complex
instructions such as multiplication and division are slow since they need a series of shift, addition, or
subtraction operations. These devices do not have the architecture or the on-chip facilities required for
efficient DSP operations. Their real-time DSP performance does not compare well with even the cheaper
general-purpose DSP processors, and they would not be a cost-effective or power-efficient solution for
many DSP applications.
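The shift-and-add process mentioned above can be sketched in C; this is an illustrative model of how a processor without a hardware multiplier forms a product over many clock cycles, not the microcode of any particular μP/μC:

```c
#include <stdint.h>

/* Multiply two 16-bit values using only shifts and adds,
 * mimicking how a processor without a hardware multiplier
 * builds a product bit by bit. */
uint32_t shift_add_multiply(uint16_t a, uint16_t b)
{
    uint32_t product = 0;
    uint32_t addend = a;
    while (b != 0) {
        if (b & 1)             /* current multiplier bit set? */
            product += addend; /* conditional add */
        addend <<= 1;          /* shift the partial product */
        b >>= 1;               /* next multiplier bit */
    }
    return product;
}
```

Each loop iteration costs at least one instruction, so a 16 × 16 multiplication takes on the order of 16 cycles here, versus a single cycle on a DSP processor's hardware multiplier.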
Example 1.6: Microcontrollers such as the Intel 8051 and Freescale 68HC11 are typically used in
industrial process control applications, in which I/O capability (serial/parallel interfaces, timers, and
interrupts) and control are more important than speed of performing functions such as multiplica-
tion and addition. Microprocessors such as Pentium, PowerPC, and ARM are basically single-chip
processors that require additional circuitry to improve the computation capability. Microprocessor
instruction sets can be either complex instruction set computer (CISC) such as Pentium or reduced
instruction set computer (RISC) such as ARM. The CISC processor includes instructions for basic
processor operations, plus some highly sophisticated instructions for specific functions. The RISC
processor uses simpler hardwired instructions, such as LOAD and STORE, that execute in a single
clock cycle.
DSP HARDWARE 13
It is important to note that some microprocessors such as Pentium add multimedia exten-
sion (MMX) and streaming single-instruction, multiple-data (SIMD) extension to support DSP
operations. They can run at high speed (3 GHz), provide single-cycle multiplication and arithmetic
operations, have good memory bandwidth, and have many supporting tools and software
available for easing development.
A DSP processor is basically a microprocessor optimized for processing repetitive numerically inten-
sive operations at high rates. DSP processors with architectures and instruction sets specifically designed
for DSP applications are manufactured by Texas Instruments, Freescale, Agere, Analog Devices, and
many others. The rapid growth and exploitation of DSP technology are not surprising, considering the
commercial advantages of the fast, flexible, low-power-consumption, and potentially low-cost
design capabilities offered by these devices. In comparison to ASIC and FPGA solutions, DSP processors
have advantages in easing development and being reprogrammable in the field to allow a product feature
upgrade or bug fix. They are often more cost-effective than custom hardware such as ASIC and FPGA,
especially for low-volume applications. In comparison to the general-purpose μP/μC, DSP processors
have better speed, better energy efficiency, and lower cost.
In many practical applications, designers are facing challenges of implementing complex algorithms
that require more processing power than the DSP processors in use are capable of providing. For exam-
ple, multimedia on wireless and portable devices requires efficient multimedia compression algorithms.
The study of the most prevalent image coding/decoding algorithms shows that a few common DSP functions used in
multimedia compression account for approximately 80% of the processing load. These
common functions include the discrete cosine transform (DCT), inverse DCT, pixel interpolation, motion
estimation, and quantization. A hardware extension or accelerator lets the DSP processor achieve
high-bandwidth performance for applications such as streaming video and interactive gaming on a single
device. The TMS320C5510 DSP used in this book includes hardware extensions that are
specifically designed to support multimedia applications. In addition, Altera has also added hardware
accelerators into its FPGAs as coprocessors to enhance their DSP processing abilities.
Today, DSP processors have become the foundation of many new markets beyond the traditional signal
processing areas for technologies and innovations in motor and motion control, automotive systems, home
appliances, consumer electronics, and a vast range of communication systems and devices. These
general-purpose programmable DSP processors are supported by integrated software development tools that
include C compilers, assemblers, optimizers, linkers, debuggers, simulators, and emulators. In this book,
we use Texas Instruments’ TMS320C55x for hands-on experiments. This high-performance and ultralow
power consumption DSP processor will be introduced in Chapter 2. In the following section, we will
briefly introduce some widely used DSP processors.
1.3.2 DSP Processors
In 1979, Intel introduced the 2920, a 25-bit integer processor with a 400 ns instruction cycle and a 25-bit
arithmetic-logic unit (ALU) for DSP applications. In 1982, Texas Instruments introduced the TMS32010,
a 16-bit fixed-point processor with a 16 × 16 hardware multiplier and a 32-bit ALU and accumulator.
This first commercially successful DSP processor was followed by the development of faster products
and floating-point processors. The performance and price range among DSP processors vary widely.
Today, dozens of DSP processor families are commercially available. Table 1.2 summarizes some of the
most popular DSP processors.
In the low-end and low-cost group are Texas Instruments’ TMS320C2000 (C24x and C28x) family,
Analog Devices’ ADSP-218x family, and Freescale’s DSP568xx family. These conventional DSP
processors include hardware multipliers and shifters, execute one instruction per clock cycle, and use
complex instructions that perform multiple operations such as multiply, accumulate, and update address
Table 1.2 Current commercially available DSP processors

Vendor             Family        Arithmetic type   Clock speed
Texas Instruments  TMS320C24x    Fixed-point       40 MHz
                   TMS320C28x    Fixed-point       150 MHz
                   TMS320C54x    Fixed-point       160 MHz
                   TMS320C55x    Fixed-point       300 MHz
                   TMS320C62x    Fixed-point       300 MHz
                   TMS320C64x    Fixed-point       1 GHz
                   TMS320C67x    Floating-point    300 MHz
Analog Devices     ADSP-218x     Fixed-point       80 MHz
                   ADSP-219x     Fixed-point       160 MHz
                   ADSP-2126x    Floating-point    200 MHz
                   ADSP-2136x    Floating-point    333 MHz
                   ADSP-BF5xx    Fixed-point       750 MHz
                   ADSP-TS20x    Fixed/Floating    600 MHz
Freescale          DSP56300      Fixed, 24-bit     275 MHz
                   DSP568xx      Fixed-point       40 MHz
                   DSP5685x      Fixed-point       120 MHz
                   MSC71xx       Fixed-point       200 MHz
                   MSC81xx       Fixed-point       400 MHz
Agere              DSP1641x      Fixed-point       285 MHz
Source: Adapted from [11]
pointers. They provide good performance with modest power consumption and memory usage, thus are
widely used in automobiles, appliances, hard disk drives, modems, and consumer electronics. For
example, the TMS320C2000 and DSP568xx families are optimized for control applications, such as motor
and automobile control, by integrating many microcontroller features and peripherals on the chip.
The midrange processor group includes Texas Instruments’ TMS320C5000 (C54x and C55x), Analog
Devices’ ADSP219x and ADSP-BF5xx, and Freescale’s DSP563xx. These enhanced processors achieve
higher performance through a combination of increased clock rates and more advanced architectures.
These families often include deeper pipelines, instruction cache, complex instruction words, multiple
data buses (to access several data words per clock cycle), additional hardware accelerators, and parallel
execution units to allow more operations to be executed in parallel. For example, the TMS320C55x
has two multiply–accumulate (MAC) units. These midrange processors provide better performance with
lower power consumption, thus are typically used in portable applications such as cellular phones and
wireless devices, digital cameras, audio and video players, and digital hearing aids.
These conventional and enhanced DSP processors have the following features for common DSP
algorithms such as filtering:
• Fast MAC units – The multiply–add or multiply–accumulate operation is required in most DSP
functions including filtering, fast Fourier transform, and correlation. To perform the MAC operation
efficiently, DSP processors integrate the multiplier and accumulator into the same data path to complete
the MAC operation in a single instruction cycle.
• Multiple memory accesses – Most DSP processors adopt modified Harvard architectures that keep
the program memory and data memory separate to allow simultaneous fetching of instruction and
data. In order to support simultaneous access of multiple data words, the DSP processors provide
multiple on-chip buses, independent memory banks, and on-chip dual-access data memory.
• Special addressing modes – DSP processors often incorporate dedicated data-address generation units
for generating data addresses in parallel with the execution of instructions. These units usually support
circular addressing and bit-reversed addressing for some specific algorithms.
• Special program control – Most DSP processors provide zero-overhead looping, which allows the
programmer to implement a loop without extra clock cycles for updating and testing loop counters,
or branching back to the top of the loop.
• Optimized instruction sets – DSP processors provide special instructions that support computationally
intensive DSP algorithms. For example, the TMS320C5000 processors support compare-select
instructions for fast Viterbi decoding, which will be discussed in Chapter 14.
• Effective peripheral interfaces – DSP processors usually incorporate high-performance serial and
parallel input/output (I/O) interfaces to other devices such as ADCs and DACs. They provide streamlined
I/O handling mechanisms such as buffered serial ports, direct memory access (DMA) controllers, and
low-overhead interrupts to transfer data with little or no intervention from the processor’s computational
units.
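Several of these features meet in the inner loop of an FIR filter. The C sketch below is a behavioral model (the function and buffer names are illustrative, not library code): the accumulation statement corresponds to the single-cycle MAC, and the modulo index update models what a dedicated address-generation unit performs in parallel with the arithmetic.

```c
#define N_TAPS 4

/* Compute one output sample of an FIR filter: a multiply-accumulate
 * loop over a circular delay line. On a DSP processor the MAC runs
 * in one cycle and the circular index update is handled by the
 * address-generation unit with zero overhead. */
float fir_sample(const float h[N_TAPS], float delay[N_TAPS],
                 int *index, float x)
{
    float acc = 0.0f;
    int i, j = *index;

    delay[j] = x;                            /* store the newest sample */
    for (i = 0; i < N_TAPS; i++) {
        acc += h[i] * delay[j];              /* MAC: multiply and accumulate */
        j = (j == 0) ? N_TAPS - 1 : j - 1;   /* circular addressing */
    }
    *index = (*index + 1) % N_TAPS;          /* advance the write pointer */
    return acc;
}
```

Feeding a constant input through a 4-tap moving average (all coefficients 0.25) ramps the output up to the input value over four samples, which is an easy way to sanity-check the delay-line handling.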
These DSP processors use specialized hardware and complex instructions to allow more operations
to be executed in every instruction cycle. However, they are difficult to program in assembly language,
and it is also difficult to design C compilers that are efficient in speed and memory usage for
these complex-instruction architectures.
With the goals of achieving high performance and creating architecture that supports efficient C
compilers, some DSP processors, such as the TMS320C6000 (C62x, C64x, and C67x), use very simple
instructions. These processors achieve a high level of parallelism by issuing and executing multiple simple
instructions in parallel at higher clock rates. For example, the TMS320C6000 uses a very long instruction
word (VLIW) architecture that provides eight execution units to execute four to eight instructions per clock
cycle. These instructions have few restrictions on register usage and addressing modes, thus improving
the efficiency of C compilers. However, the disadvantage of using simple instructions is that the VLIW
processors need more instructions to perform a given task, thus require relatively high program memory
usage and power consumption. These high-performance DSP processors are typically used in high-end
video and radar systems, communication infrastructures, wireless base stations, and high-quality real-time
video encoding systems.
1.3.3 Fixed- and Floating-Point Processors
A basic distinction between DSP processors is the arithmetic format: fixed-point or floating-point. This
is the most important factor for the system designers to determine the suitability of a DSP processor for a
chosen application. The fixed-point representation of signals and arithmetic will be discussed in Chapter 3.
Fixed-point DSP processors are either 16-bit or 24-bit devices, while floating-point processors are usually
32-bit devices. A typical 16-bit fixed-point processor, such as the TMS320C55x, stores numbers in a
16-bit integer or fraction format in a fixed range. Although coefficients and signals are only stored
with 16-bit precision, intermediate values (products) may be kept at 32-bit precision within the internal
40-bit accumulators in order to reduce cumulative rounding errors. Fixed-point DSP devices are usually
cheaper and faster than their floating-point counterparts because they use less silicon, have lower power
consumption, and require fewer external pins. Most high-volume, low-cost embedded applications, such
as appliance control, cellular phones, hard disk drives, modems, audio players, and digital cameras, use
fixed-point processors.
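The benefit of the wide accumulator can be modeled in portable C. The sketch below is a behavioral model of Q15 arithmetic (not TMS320C55x code; a 64-bit accumulator stands in for the 40-bit hardware accumulator): exact 32-bit products are summed without intermediate rounding, and the result is converted back to 16 bits only once.

```c
#include <stdint.h>

/* Q15 dot product kept in a wide accumulator, modeling the wide
 * accumulator of a 16-bit fixed-point DSP: each 16x16 product is
 * exact in 32 bits, the sum never rounds until the end, and the
 * final 16-bit result saturates instead of wrapping on overflow. */
int16_t q15_dot(const int16_t *x, const int16_t *h, int n)
{
    int64_t acc = 0;                     /* wide accumulator */
    int i;
    for (i = 0; i < n; i++)
        acc += (int32_t)x[i] * h[i];     /* exact 32-bit product */
    acc >>= 15;                          /* convert back to Q15 */
    if (acc > 32767) acc = 32767;        /* saturate on overflow */
    if (acc < -32768) acc = -32768;
    return (int16_t)acc;
}
```

For example, 0.5 × 0.5 + 0.5 × 0.5 in Q15 (16384 in each slot) yields exactly 0.5 (16384), while summing products of values near +1.0 saturates at 32767 rather than wrapping to a negative number.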
Floating-point arithmetic greatly expands the dynamic range of numbers. A typical 32-bit floating-
point DSP processor, such as the TMS320C67x, represents numbers with a 24-bit mantissa and an 8-bit
exponent. The mantissa represents a fraction in the range −1.0 to +1.0, while the exponent is an integer
that represents the number of places that the binary point must be shifted left or right in order to obtain
the true value. A 32-bit floating-point format covers a large dynamic range, thus the data dynamic range
restrictions may be virtually ignored in a design using floating-point DSP processors. This is in contrast
to fixed-point designs, where the designer has to apply scaling factors and other techniques to prevent
arithmetic overflow, which are very difficult and time-consuming processes. As a result, floating-point
DSP processors are generally easy to program and use, but are usually more expensive and have higher
power consumption.
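The mantissa–exponent split can be observed with standard C: the library function `frexpf` decomposes an IEEE single-precision value into a fraction in [0.5, 1) and a binary exponent, illustrating how the stored number is interpreted.

```c
#include <math.h>

/* Decompose a 32-bit float as x = fraction * 2^exponent, with
 * 0.5 <= |fraction| < 1. The fraction carries the significant
 * mantissa bits (precision about 2^-23 for 24 mantissa bits),
 * while the 8-bit exponent provides the wide dynamic range. */
float split_float(float x, int *exponent)
{
    return frexpf(x, exponent);
}
```

For instance, 6.0 decomposes as 0.75 × 2^3: the fraction 0.75 is the normalized mantissa and 3 is the binary exponent.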
Example 1.7: The precision and dynamic range of commonly used 16-bit fixed-point processors
are summarized in the following table:
Format               Precision   Dynamic range
Unsigned integer     1           0 ≤ x ≤ 65 535
Signed integer       1           −32 768 ≤ x ≤ 32 767
Unsigned fraction    2^−16       0 ≤ x ≤ (1 − 2^−16)
Signed fraction      2^−15       −1 ≤ x ≤ (1 − 2^−15)
The precision of 32-bit floating-point DSP processors is 2^−23 since there are 24 mantissa bits. The
dynamic range is 1.18 × 10^−38 ≤ x ≤ 3.4 × 10^38.
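The fixed-point entries follow directly from the word length; the small C check below derives them (a sketch assuming two's-complement 16-bit formats):

```c
/* Limits of 16-bit fixed-point formats, derived from the word length. */
int      s16_min(void) { return -(1 << 15); }      /* signed integer: -32768 */
int      s16_max(void) { return (1 << 15) - 1; }   /* signed integer:  32767 */
unsigned u16_max(void) { return (1u << 16) - 1; }  /* unsigned integer: 65535 */

/* Q15 signed fraction: step size 2^-15, maximum value 1 - 2^-15. */
double q15_step(void) { return 1.0 / (1 << 15); }
double q15_max(void)  { return 1.0 - 1.0 / (1 << 15); }
```

These functions simply restate the table: an n-bit two's-complement word spans −2^(n−1) to 2^(n−1) − 1, and interpreting it as a fraction scales that range by 2^−(n−1).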
System designers have to determine the dynamic range and precision needed for the applications.
Floating-point processors may be needed in applications where coefficients vary in time, signals and
coefficients require a large dynamic range and high precisions, or where large memory structures are
required, such as in image processing. Floating-point DSP processors also allow for the efficient use of
high-level C compilers, thus reducing the cost of development and maintenance. The faster development
cycle for a floating-point processor may easily outweigh the extra cost of the DSP processor itself.
Therefore, floating-point processors can also be justified for applications where development costs are
high and production volumes are low.
1.3.4 Real-Time Constraints
A limitation of DSP systems for real-time applications is that the bandwidth of the system is limited by
the sampling rate. The processing speed determines the maximum rate at which the analog signal can
be sampled. For example, with the sample-by-sample processing, one output sample is generated when
one input sample is presented to the system. Therefore, the delay between the input and the output for
sample-by-sample processing is at most one sampling interval (T). A real-time DSP system demands
that the signal processing time, tp, must be less than the sampling period, T, in order to complete the
processing task before the new sample comes in. That is,

tp + to < T, (1.6)
where to is the overhead of I/O operations.
This hard real-time constraint limits the highest frequency signal that can be processed by DSP systems
using the sample-by-sample processing approach. This limit on the real-time bandwidth fM is given as

fM ≤ fs/2 = 1/[2(tp + to)]. (1.7)
DSP SYSTEM DESIGN 17
It is clear that the longer the processing time tp, the lower the signal bandwidth that can be handled by a
given processor.
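Equation (1.7) can be evaluated directly. The sketch below uses assumed values (tp = 20 μs of processing and to = 5 μs of I/O overhead per sample) to find the highest signal frequency the system can handle in real time:

```c
/* Maximum real-time signal bandwidth for sample-by-sample processing,
 * from Eq. (1.7): fM <= fs/2 = 1 / (2 * (tp + to)), where tp is the
 * per-sample processing time and to is the per-sample I/O overhead. */
double max_bandwidth_hz(double tp_seconds, double to_seconds)
{
    return 1.0 / (2.0 * (tp_seconds + to_seconds));
}
```

With tp = 20 μs and to = 5 μs, the total per-sample time is 25 μs, so the sampling rate is at most 40 kHz and the real-time bandwidth is at most 20 kHz.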
Although new and faster DSP processors have continuously been introduced, there is still a limit to
the processing that can be done in real time. This limit becomes even more apparent when system cost
is taken into consideration. Generally, the real-time bandwidth can be increased by using faster DSP
processors, simplified DSP algorithms, optimized DSP programs, and parallel processing using multiple
DSP processors, etc. However, there is still a trade-off between the system cost and performance.
Equation (1.7) also shows that the real-time bandwidth can be increased by reducing the overhead of I/O
operations. This can be achieved by using a block-by-block processing approach. With block processing
methods, the I/O operations are usually handled by a DMA controller, which places data samples in a
memory buffer. The DMA controller interrupts the processor when the input buffer is full, and a block of
signal samples will be processed at a time. For example, for a real-time N-point fast Fourier transform
(will be discussed in Chapter 6), the N input samples have to be buffered by the DMA controller. The
block of N samples is processed after the buffer is full. The block computation must be completed before
the next block of N samples arrives. Therefore, the delay between input and output in block processing
is dependent on the block size N, and this may cause a problem for some applications.
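The buffering scheme behind DMA-driven block processing is commonly a double ("ping-pong") buffer. The C sketch below is a hypothetical structure (real DMA configuration is device specific): while the DMA fills one buffer, the processor works on the other, and the roles swap at each buffer-full interrupt.

```c
#define BLOCK_SIZE 256

/* Double ("ping-pong") buffering: the DMA fills one buffer while
 * the processor works on the other, so the input-to-output delay
 * is one block of N samples. */
typedef struct {
    float buf[2][BLOCK_SIZE];
    int   dma_buf;              /* index of the buffer the DMA is filling */
} PingPong;

/* Model of the DMA "buffer full" interrupt handler: swap buffers and
 * return a pointer to the block that is now ready for processing. */
float *block_ready(PingPong *pp)
{
    float *ready = pp->buf[pp->dma_buf];
    pp->dma_buf ^= 1;           /* DMA continues in the other buffer */
    return ready;
}
```

Each call hands the processor a full block while the DMA keeps streaming into the alternate buffer, which is why the latency grows with the block size N.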
1.4 DSP System Design
A generalized DSP system design process is illustrated in Figure 1.9. For a given application, the theoret-
ical aspects of DSP system specifications such as system requirements, signal analysis, resource analysis,
and configuration analysis are first performed to define system requirements.
[Figure 1.9 depicts the design flow. It starts with the application and system requirements specifications, followed by algorithm development and simulation and the selection of a DSP processor. The flow then splits into a software branch (software architecture, then coding and debugging) and a hardware branch (hardware schematic, then hardware prototype). The branches merge at system integration and debug, followed by system testing and release.]
Figure 1.9 Simplified DSP system design flow
1.4.1 Algorithm Development
DSP systems are often characterized by the embedded algorithm, which specifies the arithmetic operations
to be performed. The algorithm for a given application is initially described using difference equations
or signal-flow block diagrams with symbolic names for the inputs and outputs. In documenting an
algorithm, it is sometimes helpful to further clarify which inputs and outputs are involved by means
of a data-flow diagram. The next stage of the development process is to provide more details on the
sequence of operations that must be performed in order to derive the output. There are two methods of
characterizing the sequence of operations in a program: flowcharts or structured descriptions.
At the algorithm development stage, we most likely work with high-level language DSP tools (such
as MATLAB, Simulink, or C/C++) that are capable of algorithmic-level system simulations. We then
implement the algorithm using software, hardware, or both, depending on specific needs. A DSP algorithm
can be simulated using a general-purpose computer so that its performance can be tested and analyzed. A
block diagram of general-purpose computer implementation is illustrated in Figure 1.10. The test signals
may be internally generated by signal generators, digitized from a real environment for the given
application, or received from other computers via networks. The simulation program uses the signal
samples stored in data file(s) as input(s) to produce output signals that will be saved in data file(s) for
further analysis.
Advantages of developing DSP algorithms using a general-purpose computer are:
1. Using high-level languages such as MATLAB, Simulink, C/C++, or other DSP software packages on
computers can significantly save algorithm development time. In addition, the prototype C programs
used for algorithm evaluation can be ported to different DSP hardware platforms.
2. It is easy to debug and modify high-level language programs on computers using integrated software
development tools.
3. Input/output operations based on disk files are simple to implement and the behaviors of the system
are easy to analyze.
4. Floating-point data format and arithmetic can be used for computer simulations, thus easing devel-
opment.
5. We can easily obtain bit-true simulations of the developed algorithms using MATLAB or Simulink
for fixed-point DSP implementation.
[Figure 1.10 depicts DSP software (MATLAB or C/C++) running on a general-purpose computer. Input data files are produced by signal generators, an ADC, or other computers; the DSP algorithms process these samples and write output data files that go to analysis tools, a DAC, or other computers.]
Figure 1.10 DSP software development using a general-purpose computer
1.4.2 Selection of DSP Processors
As discussed earlier, DSP processors are used in a wide range of applications from high-performance
radar systems to low-cost consumer electronics. As shown in Table 1.2, semiconductor vendors have
responded to this demand by producing a variety of DSP processors. DSP system designers require
a full understanding of the application requirements in order to select the right DSP processor for
a given application. The objective is to choose the processor that meets the project’s requirements
with the most cost-effective solution. Some decisions can be made at an early stage based on arith-
metic format, performance, price, power consumption, ease of development, and integration, etc. In
real-time DSP applications, the efficiency of data flow into and out of the processor is also criti-
cal. However, these criteria will probably still leave a number of candidate processors for further
analysis.
Example 1.8: There are a number of ways to measure a processor’s execution speed. They include:
• MIPS – millions of instructions per second;
• MOPS – millions of operations per second;
• MFLOPS – millions of floating-point operations per second;
• MHz – clock rate; and
• MMACS – millions of multiply–accumulate operations per second.
In addition, there are other metrics such as milliwatts for measuring power consumption, MIPS
per mW, and MIPS per dollar. These numbers provide only the sketchiest indication of perfor-
mance, power, and price for a given application. They cannot predict exactly how the processor
will measure up in the target system.
For high-volume applications, processor cost and product manufacture integration are important fac-
tors. For portable, battery-powered products such as cellular phones, digital cameras, and personal mul-
timedia players, power consumption is more critical. For low- to medium-volume applications, there will
be trade-offs among development time, cost of development tools, and the cost of the DSP processor
itself. The likelihood of having higher performance processors with upward-compatible software in the
future is also an important factor. For high-performance, low-volume applications such as communica-
tion infrastructures and wireless base stations, the performance, ease of development, and multiprocessor
configurations are paramount.
Example 1.9: A number of DSP applications along with the relative importance for performance,
price, and power consumption are listed in Table 1.3. This table shows that the designer of a
handheld device has extreme concerns about power efficiency, but the main criterion of DSP
selection for the communications infrastructures is its performance.
When processing speed is at a premium, the only valid comparison between processors is on an
algorithm-implementation basis. Optimum code must be written for all candidates and then the execution
time must be compared. Other important factors are memory usage and on-chip peripheral devices, such
as on-chip converters and I/O interfaces.
Table 1.3 Some DSP applications with the relative importance rating
Application Performance Price Power consumption
Audio receiver 1 2 3
DSP hearing aid 2 3 1
MP3 player 3 1 2
Portable video recorder 2 1 3
Desktop computer 1 2 3
Notebook computer 3 2 1
Cell phone handset 3 1 2
Cellular base station 1 2 3
Source: Adapted from [12]
Note: Rating – 1–3, with 1 being the most important
In addition, a full set of development tools and supports are important for DSP processor selection,
including:
• Software development tools such as C compilers, assemblers, linkers, debuggers, and simulators.
• Commercially available DSP boards for software development and testing before the target DSP
hardware is available.
• Hardware testing tools such as in-circuit emulators and logic analyzers.
• Development assistance such as application notes, DSP function libraries, application libraries,
data books, and low-cost prototyping.
1.4.3 Software Development
The four common measures of good DSP software are reliability, maintainability, extensibility, and
efficiency. A reliable program is one that seldom (or never) fails. Since most programs will occasionally
fail, a maintainable program is one that is easily correctable. A truly maintainable program is one that can
be fixed by someone other than the original programmers. In order for a program to be truly maintainable,
it must be portable to more than one type of hardware. An extensible program is one that can be easily
modified when the requirements change.
A program is usually tested with a finite set of inputs that is much smaller than the number of possible
input data conditions. This means that a program can be considered reliable only after years of bug-free use in
many different environments. A good DSP program often contains many small functions with only
one purpose, which can be easily reused by other programs for different purposes. Programming tricks
should be avoided at all costs, as they will often not be reliable and will almost always be difficult for
someone else to understand even with lots of comments. In addition, the use of variable names should
be meaningful in the context of the program.
As shown in Figure 1.9, the hardware and software design can be conducted at the same time for a
given DSP application. Since there are a lot of interdependent factors between hardware and software, an
ideal DSP designer will be a true ‘system’ engineer, capable of understanding issues with both hardware
and software. The cost of hardware has gone down dramatically in recent years, thus the majority of the
cost of a DSP solution now resides in software.
The software life cycle involves the completion of a software project: the project definition, the
detailed specification, coding and modular testing, integration, system testing, and maintenance. Software
maintenance is a significant part of the cost for a DSP system. Maintenance includes enhancing the
software functions, fixing errors identified as the software is used, and modifying the software to work
with new hardware and software. It is essential to document programs thoroughly with titles and comment
statements because this greatly simplifies the task of software maintenance.
As discussed earlier, good programming techniques play an essential role in successful DSP ap-
plications. A structured and well-documented approach to programming should be initiated from the
beginning. It is important to develop an overall specification for signal processing tasks prior to writing
any program. The specification includes the basic algorithm and task description, memory requirements,
constraints on the program size, execution time, and so on. A thoroughly reviewed specification can catch
mistakes even before code has been written and prevent potential code changes at the system integration
stage. A flow diagram would be a very helpful design tool to adopt at this stage.
Writing and testing DSP code is a highly interactive process. With the use of integrated software de-
velopment tools that include simulators or evaluation boards, code may be tested regularly as it is written.
Writing code in modules or sections can help this process, as each module can be tested individually,
thus increasing the chance of the entire system working at the system integration stage.
There are two commonly used methods in developing software for DSP devices: using assembly
program or C/C++ program. Assembly language is similar to the machine code actually used by the
processor. Programming in assembly language gives the engineers full control of processor functions and
resources, thus resulting in the most efficient program for mapping the algorithm by hand. However, this
is a very time-consuming and laborious task, especially for today’s highly parallel DSP architectures.
A C program, on the other hand, is easier for software development, upgrade, and maintenance. However,
the machine code generated by a C compiler is often less efficient than hand-coded assembly in both
processing speed and memory usage. Recently, DSP manufacturers have improved C compiler efficiency dramatically, especially for the DSP
processors that use simple instructions and general register files.
Often the ideal solution is to work with a mixture of C and assembly code. The overall program
is controlled and written by C code, but the run-time critical inner loops and modules are written in
assembly language. In a mixed programming environment, an assembly routine may be called as a
function or via intrinsics, or coded in-line within the C program. A library of hand-optimized functions may
be built up and brought into the code when required. The assembly programming for the TMS320C55x
will be discussed in Chapter 2.
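A common pattern for this mixed approach is a C driver that calls a hand-optimized routine on the target, with a portable C fallback for host testing. The sketch below is generic; the function names and the `USE_ASM` switch are illustrative, not TMS320C55x library conventions.

```c
/* Portable C reference for the time-critical inner loop. */
static long dot_c(const short *x, const short *h, int n)
{
    long acc = 0;
    int i;
    for (i = 0; i < n; i++)
        acc += (long)x[i] * h[i];   /* multiply-accumulate */
    return acc;
}

/* On the target, USE_ASM selects a hand-coded assembly version
 * (declared here, linked from a separate .asm file); off target,
 * the C fallback keeps the program testable on a host computer. */
#ifdef USE_ASM
extern long dot_asm(const short *x, const short *h, int n);
#define dot(x, h, n) dot_asm((x), (h), (n))
#else
#define dot(x, h, n) dot_c((x), (h), (n))
#endif
```

The C code controls the overall program flow either way, so only the inner-loop routine needs to be rewritten and validated when moving between the host and the DSP target.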
1.4.4 High-Level Software Development Tools
Software tools are computer programs that have been written to perform specific operations. Most DSP
operations can be categorized as being either analysis tasks or filtering tasks. Signal analysis deals
with the measurement of signal properties. MATLAB is a powerful environment for signal analysis
and visualization, which are critical components in understanding and developing a DSP system. C
programming is an efficient tool for performing signal processing and is portable over different DSP
platforms.
MATLAB is an interactive, technical computing environment for scientific and engineering numerical
analysis, computation, and visualization. Its strength lies in the fact that complex numerical problems
can be solved easily in a fraction of the time required with a programming language such as C. By using
its relatively simple programming capability, MATLAB can be easily extended to create new functions,
and is further enhanced by numerous toolboxes such as the Signal Processing Toolbox and Filter Design
Toolbox. In addition, MATLAB provides many graphical user interface (GUI) tools such as Filter Design
and Analysis Tool (FDATool).
The purpose of a programming language is to solve a problem involving the manipulation of informa-
tion. The purpose of a DSP program is to manipulate signals to solve a specific signal processing problem.
High-level languages such as C and C++ are computer languages that have English-like commands and
22 INTRODUCTION TO REAL-TIME DIGITAL SIGNAL PROCESSING
Figure 1.11 Program compilation, linking, and execution flow
instructions. High-level language programs are usually portable, so they can be recompiled and run on
many different computers. Although C/C++ is categorized as a high-level language, it can also be written
for low-level device drivers. In addition, a C compiler is available for most modern DSP processors such
as the TMS320C55x. Thus C programming is the most commonly used high-level language for DSP
applications.
C has become the language of choice for many DSP software development engineers not only because
it has powerful commands and data structures but also because it can easily be ported on different DSP
processors and platforms. The processes of compilation, linking/loading, and execution are outlined in
Figure 1.11. C compilers are available for a wide range of computers and DSP processors, thus making
the C program the most portable software for DSP applications. Many C programming environments
include GUI debugger programs, which are useful in identifying errors in a source program. Debugger
programs allow us to see values stored in variables at different points in a program, and to step through
the program line by line.
1.5 Introduction to DSP Development Tools
The manufacturers of DSP processors typically provide a set of software tools for the user to develop
efficient DSP software. The basic software development tools include C compiler, assembler, linker, and
simulator. In order to execute the designed DSP tasks on the target system, the C or assembly programs
must be translated into machine code and then linked together to form an executable code. This code
conversion process is carried out using software development tools illustrated in Figure 1.12.
The TMS320C55x software development tools include a C compiler, an assembler, a linker, an archiver,
a hex conversion utility, a cross-reference utility, and an absolute lister. The C55x C compiler generates
assembly source code from the C source files. The assembler translates assembly source files, either
hand-coded by DSP programmers or generated by the C compiler, into machine language object files.
The assembly tools use the common object file format (COFF) to facilitate modular programming.
Using COFF allows the programmer to define the system’s memory map at link time. This maximizes
performance by enabling the programmer to link the code and data objects into specific memory locations.
The archiver allows users to collect a group of files into a single archived file. The linker combines object
files and libraries into a single executable COFF object module. The hex conversion utility converts a
COFF object file into a format that can be downloaded to an EPROM programmer or a flash memory
program utility.
In this section, we will briefly describe the C compiler, assembler, and linker. A full description of
these tools can be found in the user’s guides [13, 14].
1.5.1 C Compiler
C language is the most popular high-level tool for evaluating algorithms and developing real-time soft-
ware for DSP applications. The C compiler can generate either a mnemonic assembly code or an algebraic
assembly code. In this book, we use the mnemonic assembly (ASM) language. The C compiler pack-
age includes a shell program, code optimizer, and C-to-ASM interlister. The shell program supports
INTRODUCTION TO DSP DEVELOPMENT TOOLS 23
Figure 1.12 TMS320C55x software development flow and tools
automatic compiling, assembling, and linking of modules. The optimizer improves run-time and code
density efficiency of the C source file. The C-to-ASM interlister inserts the original comments in C
source code into the compiler’s output assembly code so users can view the corresponding assembly
instructions for each C statement generated by the compiler.
The C55x C compiler supports American National Standards Institute (ANSI) C and its run-time
support library. The run-time support library rts55.lib (or rts55x.lib for large memory model)
includes functions to support string operation, memory allocation, data conversion, trigonometry, and
exponential manipulations.
C language lacks specific features of DSP, especially those fixed-point data operations that are necessary
for many DSP algorithms. To improve compiler efficiency for DSP applications, the C55x C compiler
supports in-line assembly language for C programs. This allows adding highly efficient assembly code
directly into the C program. Intrinsics are another improvement for substituting DSP arithmetic operation
with DSP assembly intrinsic operators. We will introduce more compiler features in Chapter 2 and
subsequent chapters.
1.5.2 Assembler
The assembler translates processor-specific assembly language source files (in ASCII format) into binary
COFF object files. Source files can contain assembler directives, macro directives, and instructions.
Assembler directives are used to control various aspects of the assembly process, such as the source file
listing format, data alignment, section content, etc. Binary object files contain separate blocks (called
sections) of code or data that can be loaded into memory space.
Once the DSP algorithm has been written in assembly, it is necessary to add important assembly
directives to the source code. Assembler directives are used to control the assembly process and enter
data into the program. Assembly directives can be used to initialize memory, define global variables, set
conditional assembly blocks, and reserve memory space for code and data.
1.5.3 Linker
The linker combines multiple binary object files and libraries into a single executable program for the
target DSP hardware. It resolves external references and performs code relocation to create the executable
module. The C55x linker handles the various requirements of different object files and libraries, as well
as target system memory configurations. For a specific hardware configuration, the system designer needs
to provide a memory mapping specification to the linker. This is usually done with a linker command file.
The visual linker is also a useful tool that directly provides a visualized memory usage map.
The linker command language supports expression assignment and evaluation, and provides the MEMORY and
SECTIONS directives. Using these directives, we can define the memory model for a given target system.
We can also combine object file sections, allocate sections into specific memory areas, and define or
redefine global symbols at link time.
An example linker command file is listed in Table 1.4. The first portion uses the MEMORY directive to
identify the range of memory blocks that physically exist in the target hardware. These memory blocks
Table 1.4 Example of linker command file used by TMS320C55x
/* Specify the system memory map */
MEMORY
{
RAM (RWIX) : o = 0x000100, l = 0x00feff /* Data memory */
RAM0 (RWIX) : o = 0x010000, l = 0x008000 /* Data memory */
RAM1 (RWIX) : o = 0x018000, l = 0x008000 /* Data memory */
RAM2 (RWIX) : o = 0x040100, l = 0x040000 /* Program memory */
ROM (RIX) : o = 0x020100, l = 0x020000 /* Program memory */
VECS (RIX) : o = 0xffff00, l = 0x000100 /* Reset vector */
}
/* Specify the sections allocation into memory */
SECTIONS
{
vectors   > VECS  /* Interrupt vector table */
.text     > ROM   /* Code */
.switch   > RAM   /* Switch table info */
.const    > RAM   /* Constant data */
.cinit    > RAM2  /* Initialization tables */
.data     > RAM   /* Initialized data */
.bss      > RAM   /* Global & static vars */
.stack    > RAM   /* Primary system stack */
.sysstack > RAM   /* Secondary system stack */
expdata0  > RAM0  /* Global & static vars */
expdata1  > RAM1  /* Global & static vars */
}
EXPERIMENTS AND PROGRAM EXAMPLES 25
are available for the software to use. Each memory block has its name, starting address, and the length
of the block. The address and length are given in bytes for C55x processors and in words for C54x
processors. For example, the data memory block called RAM starts at the byte address 0x100, and it
has a size of 0xFEFF bytes. Note that the prefix 0x indicates the following number is represented in
hexadecimal (hex) form.
The SECTIONS directive provides different code section names for the linker to allocate the program
and data within each memory block. For example, the program can be loaded into the .text section,
and the uninitialized global variables are in the .bss section. The attributes inside the parentheses are
optional to set memory access restrictions. These attributes are:
R – Memory space can be read.
W – Memory space can be written.
X – Memory space contains executable code.
I – Memory space can be initialized.
Several additional options used to initialize the memory can be found in [13].
1.5.4 Other Development Tools
The archiver is used to group files into a single archived file, that is, to build a library. It can
also be used to modify a library by deleting, replacing, extracting, or adding members. The hex converter
converts a COFF object file into an ASCII hex format file. The converted hex files are often used
to program EPROM and flash memory devices. The absolute lister takes linked object files and creates .abs
files. These .abs files can be assembled to produce a listing file that contains the absolute addresses
of the entire system program. The cross-reference lister takes all the object files and produces a
cross-reference listing file, which includes the symbols, definitions, and references in the linked
source files.
The DSP development tools also include simulator, EVM, XDS, and DSK. A simulator is the soft-
ware simulation tool that does not require any hardware support. The simulator can be used for code
development and testing. The EVM is a hardware evaluation module with I/O capabilities that allows
developers to evaluate DSP algorithms on a specific DSP processor in real time. The EVM is usually
a computer board connected to a host computer for evaluating DSP tasks. The XDS usually
includes in-circuit emulation and boundary scan support for system development and debugging; it is an
external stand-alone hardware device connected to a host computer and a DSP board. The DSK is a
low-cost development board for developing and evaluating DSP algorithms under a Windows
operating system environment. In this book, we will use Spectrum Digital's TMS320VC5510 DSK
for real-time experiments.
The DSK works under the Code Composer Studio (CCS) development environment. The DSK package
includes a special version of the CCS [15]. The DSK communicates with CCS via its onboard universal
serial bus (USB) JTAG emulator. The C5510 DSK uses a 200 MHz TMS320VC5510 DSP processor, an
AIC23 stereo CODEC, 8 Mbytes synchronous DRAM, and 512 Kbytes flash memory.
1.6 Experiments and Program Examples
Texas Instruments’ CCS Integrated Development Environment (IDE) is a DSP development tool that
allows users to create, edit, build, debug, and analyze DSP programs. For building applications, the CCS
provides a project manager to handle the programming project. For debugging purposes, it provides
breakpoints, variable watch windows, memory/register/stack viewing windows, probe points to stream
data to and from the target, graphical analysis, execution profile, and the capability to display mixed
disassembled and C instructions. Another important feature of the CCS is its ability to create and manage
large projects from a GUI environment. In this section, we will use a simple sinewave example to introduce
the basic editing features, key IDE components, and the use of the C55x DSP development tools. We also
demonstrate simple approaches to software development and the debug process using the TMS320C55x
simulator. Finally, we will use the C5510 DSK to demonstrate an audio loop-back example in real time.
1.6.1 Experiments of Using CCS and DSK
After installing the DSK or CCS simulator, we can start the CCS IDE. Figure 1.13 shows the CCS running
on the DSK. The IDE consists of the standard toolbar, project toolbar, edit toolbar, and debug toolbar.
Some basic functions are summarized and listed in Figure 1.13. Table 1.5 briefly describes the files used
in this experiment.
Procedures of the experiment are listed as follows:
1. Create a project for the CCS: Choose Project→New to create a new project file and save it as
useCCS.pjt to the directory ..\experiments\exp1.6.1_CCSandDSK. The CCS uses the project
to operate its built-in utilities to create a full-build application.
Figure 1.13 CCS IDE
Table 1.5 File listing for experiment exp1.6.1_CCSandDSK
Files Description
useCCS.c C file for testing experiment
useCCS.h C header file
useCCS.pjt DSP project file
useCCS.cmd DSP linker command file
2. Create C program files using the CCS editor: Choose File→New to create a new file, type in
the example C code listed in Tables 1.6 and 1.7. Save C code listed in Table 1.6 as useCCS.c to
..\experiments\exp1.6.1_CCSandDSK\src, and save the C code listed in Table 1.7 as useCCS.h
to the directory ..\experiments\exp1.6.1_CCSandDSK\inc. This example reads precalculated
sine values from a data table, negates them, and stores the results in a reversed order to an output
buffer. The programs useCCS.c and useCCS.h are included in the companion CD. However, it is
recommended that we create them using the editor to become familiar with the CCS editing functions.
3. Create a linker command file for the simulator: Choose File→New to create another new file, and
type in the linker command file as listed in Table 1.4. Save this file as useCCS.cmd to the directory
..\experiments\exp1.6.1_CCSandDSK. The command file is used by the linker to map different
program segments into a prepartitioned system memory space.
4. Setting up the project: Add useCCS.c and useCCS.cmd to the project by choosing
Project→Add Files to Project, then select the files useCCS.c and useCCS.cmd. Before building
a project, the search paths for included files and libraries should be set up for the C compiler,
assembler, and linker. To set up these options, choose
Project→Build Options. We need to add search paths to include files and libraries that are
not included in the C55x DSP tools directories, such as the libraries and included files we created
Table 1.6 Program example, useCCS.c
#include "useCCS.h"

short outBuffer[BUF_SIZE];

void main()
{
    short i, j;

    j = 0;
    while (1)
    {
        for (i = BUF_SIZE - 1; i >= 0; i--)
        {
            outBuffer[j++] = 0 - sineTable[i];  // <- Set breakpoint
            if (j >= BUF_SIZE)
                j = 0;
        }
        j++;
    }
}
Table 1.7 Program example header file, useCCS.h
#define BUF_SIZE 40
const short sineTable[BUF_SIZE]=
{0x0000, 0x000f, 0x001e, 0x002d, 0x003a, 0x0046, 0x0050, 0x0059,
0x005f, 0x0062, 0x0063, 0x0062, 0x005f, 0x0059, 0x0050, 0x0046,
0x003a, 0x002d, 0x001e, 0x000f, 0x0000, 0xfff1, 0xffe2, 0xffd3,
0xffc6, 0xffba, 0xffb0, 0xffa7, 0xffa1, 0xff9e, 0xff9d, 0xff9e,
0xffa1, 0xffa7, 0xffb0, 0xffba, 0xffc6, 0xffd3, 0xffe2, 0xfff1};
in the working directory. Programs written in C language require the use of the run-time support
library, either rts55.lib or rts55x.lib, for system initialization. This can be done by selecting
the compiler and linker dialog box and entering the C55x run-time support library, rts55.lib, and
adding the header file path related to the source file directory. We can also specify different directories
to store the output executable file and map file. Figure 1.14 shows an example of how to set the search
paths for compiler, assembler, and linker.
5. Build and run the program: Use Project→Rebuild All command to build the project.
If there are no errors, the CCS will generate the executable output file, useCCS.out. Be-
fore we can run the program, we need to load the executable output file to the C55x DSK
or the simulator. To do so, use the File→Load Program menu and select useCCS.out in the
..\experiments\exp1.6.1_CCSandDSK\Debug directory and load it. Execute this program by
choosing Debug→Run. The processor status at the bottom-left-hand corner of the CCS will change
from CPU HALTED to CPU RUNNING. The running process can be stopped by the Debug→Halt
command. We can continue the program by reissuing the Run command or exiting the DSK or the
simulator by choosing File→Exit menu.
(a) Setting the include file searching path. (b) Setting the run-time support library.
Figure 1.14 Setup search paths for C compiler, assembler, and linker: (a) setting the include file searching path;
(b) setting the run-time support library
1.6.2 Debugging Program Using CCS and DSK
The CCS IDE has extended traditional DSP code generation tools by integrating a set of editing, emulating,
debugging, and analyzing capabilities in one entity. In this section, we will introduce some program
building steps and software debugging capabilities of the CCS.
The standard toolbar in Figure 1.13 allows users to create and open files, cut, copy, and paste text
within and between files. It also has undo and redo capabilities to aid file editing. Finding text can be
done within the same file or in different files. The CCS built-in context-sensitive help menu is also located
in the standard toolbar menu. More advanced editing features are in the edit toolbar menu, including
mark to, mark next, find match, and find next open parenthesis for C programs. The features of out-indent
and in-indent can be used to move a selected block of text horizontally. There are four bookmarks that
allow users to create, remove, edit, and search bookmarks.
The project environment contains C compiler, assembler, and linker. The project toolbar menu (see
Figure 1.13) gives users different choices while working on projects. The compile only, incremental
build, and build all features allow users to build the DSP projects efficiently. Breakpoints permit users to
set software breakpoints in the program and halt the processor whenever the program executes at those
breakpoint locations. Probe points are used to transfer data files in and out of the programs. The profiler
can be used to measure the execution time of given functions or code segments, which can be used to
analyze and identify critical run-time blocks of the programs.
The debug toolbar menu illustrated in Figure 1.13 contains several stepping operations:
step-into-a-function, step-over-a-function, and step-out-of-a-function. It can also perform the
run-to-cursor-position operation, a very convenient feature that allows users to step through the
code. The next three hot
buttons in the debug toolbar are run, halt, and animate. They allow users to execute, stop, and animate
the DSP programs. The watch windows are used to monitor variable contents. CPU registers and data
memory viewing windows provide additional information for ease of debugging programs. More custom
options are available from the pull-down menus, such as graphing data directly from the processor
memory.
We often need to check the changing values of variables during program execution for developing and
testing programs. This can be accomplished with debugging settings such as breakpoints, step commands,
and watch windows, which are illustrated in the following experiment.
Procedures of the experiment are listed as follows:
1. Add and remove breakpoints: Start with Project→Open, select useCCS.pjt from the directory
..\experiments\exp1.6.2_CCSandDSK. Build and load the example project useCCS.out. Double
click the C file, useCCS.c, in the project viewing window to open it in the editing window. To
add a breakpoint, move the cursor to the line where we want to set a breakpoint. The command
to enable a breakpoint can be given either from the Toggle Breakpoint hot button on the project
toolbar or by clicking the mouse button on the line of interest. The function key F9 is a shortcut
that can be used to toggle a breakpoint. Once a breakpoint is enabled, a red dot will appear on the
left to indicate where the breakpoint is set. The program will run up to that line without passing
it. To remove breakpoints, we can either toggle breakpoints one by one or select the Remove All
Breakpoints hot button from the debug toolbar to clear all the breakpoints at once. Now load the
useCCS.out and open the source code window with source code useCCS.c, and put the cursor on
the line:
outBuffer[j++] = 0 - sineTable[i]; // <- set breakpoint
Click the Toggle Breakpoint button (or press F9) to set the breakpoint. The breakpoint will be
set as shown in Figure 1.15.
Figure 1.15 CCS screen snapshot of the example using CCS
2. Set up viewing windows: CCS IDE provides many useful windows to ease code development and the
debugging process. The following are some of the most often used windows:
CPU register viewing window: On the standard tool menu bar, click View→Registers→
CPU Registers to open the CPU registers window. We can edit the contents of any CPU register
by double clicking it. If we right click the CPU Register Window and select Allow Docking, we
can move the window around and resize it. As an example, try to change the temporary register T0
and accumulator AC0 to new values of T0 = 0x1234 and AC0 = 0x56789ABC.
Command window: From the CCS menu bar, click Tools→Command Window to add the command
window. We can resize and dock it as well. The command window will appear each time we
rebuild the project.
Disassembly window: Click View→Disassembly on the menu bar to see the disassembly window.
Every time we reload an executable out file, the disassembly window will appear automatically.
3. Workspace feature: We can customize the CCS display and settings using the workspace feature.
To save a workspace, click File→Workspace→Save Workspace and give the workspace a name
and path where the workspace will be stored. When we restart CCS, we can reload the workspace
by clicking File→Workspace→Load Workspace and use a workspace from previous work. Now
save the workspace for your current CCS settings then exit the CCS. Restart CCS and reload the
workspace. After the workspace is reloaded, you should have the identical settings restored.
4. Using the single-step features: When using C programs, the C55x system uses a function called boot
from the run-time support library rts55.lib to initialize the system. After we load the useCCS.out,
Discovering Diverse Content Through
Random Scribd Documents
Lotze.
Ueberweg. The group is loosely constituted however. There was
scope for diversity of view and there was diversity of view, according
as the vital issue of the formula was held to lie in the relation of
intellectual function to organic function or in the not quite equivalent
relation of thinking to being. Moreover, few of the writers who,
whatsoever it was that they baptized with the name of logic, were at
least earnestly engaged in an endeavour to solve the problem of
knowledge within a circle of ideas which was on the whole Kantian,
were under the dominance of a single inspiration. Beneke’s
philosophy is a striking instance of this, with application to Fries and
affinity to Herbart conjoined with obligations to Schelling both
directly and through Schleiermacher. Lotze again wove together
many threads of earlier thought, though the web was assuredly his
own. Finally it must not be forgotten that the host of writers who
were in reaction against Hegelianism tended to take refuge in some
formula of correlation, as a half-way house between that and
formalism or psychologism or both, without reference to, and often
perhaps without consciousness of, the way in which historically it
had taken shape to meet the problem held to have been left
unresolved by Kant.
Lotze on the one hand held the Hegelian “deduction” to be
untenable, and classed himself with those who in his own phrase
“passed to the order of the day,” while on the other hand he
definitely raised the question, how an “object”
could be brought into forms to which it was not in
some sense adapted. Accordingly, though he
regards logic as formal, its forms come into relation to objectivity in
some sort even within the logical field itself, while when taken in the
setting of his system as a whole, its formal character is not of a kind
that ultimately excludes psychological and metaphysical reference, at
least speculatively. As a logician Lotze stands among the masters.
His flair for the essentials in his problem, his subtlety of analysis, his
patient willingness to return upon a difficulty from a fresh and still a
fresh point of view, and finally his fineness of judgment, make his
logic137 so essentially logic of the present, and of its kind not soon to
be superseded, that nothing more than an indication of the historical
significance of some of its characteristic features need be attempted
here.
In Lotze’s pure logic it is the Herbartian element that tends to be
disconcerting. Logic is formal. Its unit, the logical concept, is a
manipulated product and the process of manipulation may be called
abstraction. Processes of the psychological mechanism lie below it.
The paradox of the theory of judgment is due to the ideal of identity,
and the way in which this is evaded by supplementation to produce
a non-judgmental identity, followed by translation of the introduced
accessories with conditions in the hypothetical judgment, is
thoroughly in Herbart’s manner. The reduction of judgments is on
lines already familiar. Syllogism is no instrumental method by which
we compose our knowledge, but an ideal to the form of which it
should be brought. It is, as it were, a schedule to be filled in, and is
connected with the disjunctive judgment as a schematic setting forth
of alternatives, not with the hypothetical, and ultimately the
apodictic judgment with their suggestion that it is the real movement
of thought that is subjected to analysis. Yet the resultant impression
left by the whole treatment is not Herbartian. The concept is
accounted for in Kantian terms. There is no discontinuity between
the pre-logical or sub-logical conversion of impressions into “first
universals” and the formation of the logical concept. Abstraction
proves to be synthesis with compensatory universal marks in the
place of the particular marks abstracted from. Synthesis as the work
of thought always supplies, beside the mere conjunction or
disjunction of ideas, a ground of their coherence or non-coherence.
It is evident that thought, even as dealt with in pure logic, has an
objectifying function. Its universals have objective validity, though
this does not involve direct real reference. The formal conception of
pure logic, then, is modified by Lotze in such a way as not only to be
compatible with a view of the structural and functional adequacy of
thought to that which at every point at which we take thinking is still
distinguishable from thought, but even inevitably to suggest it. That
the unit for logic is the concept and not the judgment has proved a
stumbling-block to those of Lotze’s critics who are accustomed to
think in terms of the act of thought as unit. Lotze’s procedure is,
indeed, analogous to the way in which, in his philosophy of nature,
he starts from a plurality of real beings, but by means of a reductive
movement, an application of Kant’s transcendental method, arrives
at the postulate or fact of a law of their reciprocal action which calls
for a monistic and idealist interpretation. He starts, that is in logic,
with conceptual units apparently self-contained and admitting of
nothing but external relation, but proceeds to justify the intrinsic
relation between the matter of his units by an appeal to the fact of
the coherence of all contents of thought. Indeed, if thought admits
irreducible units, what can unite? Yet he is left committed to his
puzzle as to a reduction of judgment to identity, which partially
vitiates his treatment of the theory of judgment. The outstanding
feature of this is, nevertheless, not affected, viz. the attempt that he
makes, inspired clearly by Hegel, “to develop the various forms of
judgment systematically as members of a series of operations, each
of which leaves a part of its problem unmastered and thereby gives
rise to the next.”138 As to inference, finally, the ideal of the
articulation of the universe of discourse, as it is for complete
knowledge, when its disjunctions have been thoroughly followed out
and it is exhaustively determined, carried the day with him against
the view that the organon for gaining knowledge is syllogism. The
Aristotelian formula is “merely the expression, formally expanded
and complete, of the truth already embodied in disjunctive
judgment, namely, that every S which is a specific form of M
possesses as its predicate a particular modification of each of the
universal predicates of M to the exclusion of the rest.”
Schleiermacher’s separation of inference from judgment and his
attribution of the power to knowledge in process cannot find
acceptance with Lotze. The psychologist and the formal logician do
indeed join hands in the denial of a real movement of thought in
syllogism. Lotze’s logic then, is formal in a sense in which a logic
which does not find the conception of synthetic truth embarrassing
is not so. It is canon and not organon. In the one case, however,
where it recognizes what is truly synthesis, i.e. in its account of the
concept, it brings the statics of knowledge, so to speak, into integral
relation with the dynamics. And throughout, wherever the survival
from 1843, the identity bug-bear, is for the moment got rid of in
what is really a more liberal conception, the statical doctrine is
developed in a brilliant and informing manner. Yet it is in the detail
of his logical investigations, something too volatile to fix in summary,
that Lotze’s greatness as a logician more especially lies.
With Lotze the ideal that at last the forms of thought shall be
realized to be adequate to that which at any stage of actual
knowledge always proves relatively intractable is an illuminating
projection of faith. He takes courage from the reflection that to
accept scepticism is to presume the competence of the thought that
accepts. He will, however, take no easy way of parallelism. Our
human thought pursues devious and circuitous methods. Its forms
Logic as
Metaphysic.
are not unseldom scaffolding for the house of knowledge rather than
the framework of the house itself. Our task is not to realise
correspondence with something other than thought, but to make
explicit those justificatory notions which condition the form of our
apprehension. “However much we may presuppose an original
reference of the forms of thought to that nature of things which is
the goal of knowledge, we must be prepared to find in them many
elements which do not directly reproduce the actual reality to the
knowledge of which they are to lead us.”139 The impulse of thought
to reduce coincidence to coherence reaches immediately only to
objectivity or validity. The sense in which the presupposition of a
further reference is to be interpreted and in which justificatory
notions for it can be adduced is only determinable in a philosophic
system as a whole, where feeling has a place as well as thought,
value equally with validity.
Lotze’s logic then represents the statical aspect of the function of
thought in knowledge, while, so far as we go in knowledge, thought
is always engaged in the unification of a manifold, which remains
contradistinguished from it, though not, of course, completely alien
to and unadapted to it. The further step to the determination of the
ground of harmony is not to be taken in logic, where limits are
present and untranscended.
The position of the search for truth, for which knowledge is a
growing organism in which thought needs, so to speak, to feed on
something other than itself, is conditioned in the post-Kantian period
by antagonism to the speculative movement
which culminated in the dialectic of Hegel. The
radical thought of this movement was voiced in
the demand of Reinhold140 that philosophy should
“deduce” it all from a single principle and by a single method. Kant’s
limits that must needs be thought and yet cannot be thought must
be thought away. An earnest attempt to satisfy this demand was
made by Fichte whose single principle was the activity of the pure
Ego, while his single method was the assertion of a truth revealed by
reflection on the content of conscious experience, the
characterization of this as a half truth and the supplementation of it
by its other, and finally the harmonization of both. The pure ego is
inferred from the fact that the non-ego is realized only in the act of
the ego in positing it. The ego posits itself, but reflection on the
given shows that we must add that it posits also the non-ego. The
two positions are to be conciliated in the thought of reciprocal
limitation of the posited ego and non-ego. And so forth. Fichte
cannot be said to have developed a logic, but this rhythm of thesis,
antithesis and synthesis, foreshadowed in part for Fichte in Spinoza’s
formula, “omnis determinatio est negatio” and significantly in Kant’s
triadic grouping of his categories, gave a cue to the thought of
Hegel. Schelling, too, called for a single principle and claimed to
have found it in his Absolute, “the night” said Hegel, “in which all
cows are black,” but his historical influence lay, as we have seen, in
the direction of a parallelism within the unity, and he also developed
no logic. It is altogether otherwise with Hegel.
Hegel’s logic,141 though it involves inquiries which custom regards
as metaphysical, is not to be characterized as a metaphysic with a
method. It is logic or a rationale of thought by thought, with a full
development among other matters of all that the
most separatist of logicians regards as thought
forms. It offers a solution of what has throughout
appeared as the logical problem. That solution lies doubtless in the
evolution of the Idea, i.e. an all-inclusive in which mere or pure
thought is cancelled in its separateness by a transfiguration, while
logic is nothing but the science of the Idea viewed in the medium of
pure thought. But, whatever else it be, this Panlogismus, to use the
word of J. E. Erdmann, is at least a logic. Thought in its progressive
unfolding, of which the history of philosophy taken in its broad
outline offers a pageant, necessarily cannot find anything external to
or alien from itself, though that there is something external for it is
another matter. As Fichte’s Ego finds that its non-ego springs from
and has its home within its very self, so with Hegel thought finds
itself in its “other,” both subsisting in the Idea which is both and
neither. Either of the two is the all, as, for example, the law of the
convexity of the curve is the law of the curve and the law of its
concavity. The process of the development of the Idea or Absolute is
in one regard the immanent process of the all. Logically regarded,
i.e. “in the medium of mere thought,” it is dialectical method. Any
abstract and limited point of view carries necessarily to its
contradictory. This can only be atoned with the original
determination by fresh negation in which a new thought-
determination is born, which is yet in a sense the old, though
enriched, and valid on a higher plane. The limitations of this in turn
cause a contradiction to emerge, and the process needs repetition.
At last, however, no swing into the opposite, with its primarily
conflicting, if ultimately complementary function, is any longer
possible. That in which no further contradiction is possible is the
absolute Idea. Bare or indeterminate being, for instance, the first of
the determinations of Hegel’s logic, as the being of that which is not
anything determinate, of Kant’s thing-in-itself, for example, positively
understood, implicates at once the notion of not-being, which
negates it, and is one with it, yet with a difference, so that we have
the transition to determinate being, the transition being baptized as
becoming. And so forth. It is easy to raise difficulties not only in
regard to the detail in Hegel’s development of his categories,
especially the higher ones, but also in regard to the essential rhythm
of his method. The consideration that mere double negation leaves
us precisely where we were and not upon a higher plane where the
dominant concept is richer, is, of course, fatal only to certain verbal
expressions of Hegel’s intent. There is a differentiation in type
between the two negations. But if we grant this it is no longer
obviously the simple logical operation indicated. It is inferred then
that Hegel complements from the stuff of experience, and fails to
make good the pretension of his method to be by itself and of itself
the means of advance to higher and still higher concepts till it can
rest in the Absolute. He discards, as it were, and takes in from the
stock while professing to play from what he has originally in his
hand. He postulates his unity in senses and at stages in which it is
inadmissible, and so supplies only a schema of relations otherwise
won, a view supported by the way in which he injects certain
determinations in the process, e.g. the category of chemism. Has he
not cooked the process in the light of the result? In truth the
Hegelian logic suffers from the fact that the good to be reached is
presupposed in the beginning. Nature, e.g., is not deduced as real
because rational, but being real its rationality is presumed and, very
imperfectly, exhibited in a way to make it possible to conceive it as
in its essence the reflex of Reason. It is a vision rather than a
construction. It is a “theosophical logic.” Consider the rational-real in
the unity that must be, and this is the way of it, or an approximation
to the way of it! It was inevitable that the epistemologists of the
search for truth would have none of it. The ideal in whatsoever
sense real still needs to be realized. It is from the human standpoint
regulative and only hypothetically or formally constitutive. We must
not confuse οὐσία with εἶναι, nor εἶναι with γίγνεσθαι.
Yet in a less ambitious form the fundamental contentions of
Hegel’s method tend to find a qualified acceptance. In any piece of
presumed knowledge its partial or abstract character involves the
presence of loose edges which force the conviction of inadequacy
and the development of contradictions. Contradictions must be
annulled by complementation, with resultant increasing coherence in
ascending stages. At each successive stage in our progress fresh
contradictions break out, but the ideal of a station at which the
thought-process and its other, if not one, are at one, is permissible
as a limiting conception. Yet if Hegel meant only this he has indeed
succeeded in concealing his meaning.
Hegel’s treatment of the categories or thought determinations
which arise in the development of the immanent dialectic is rich in
flashes of insight, but most of them are in the ordinary view of logic
wholly metaphysical. In the stage, however, of his process in which
he is concerned with the notion are to be found concept, judgment,
syllogism. Of the last he declares that it “is the reasonable and
everything reasonable” (Encyk. § 181), and has the phantasy to
speak of the definition of the Absolute as being “at this stage” simply
the syllogism. It is, of course, the rhythm of the syllogism that
attracts him. The concept goes out from or utters itself in judgment
to return to an enhanced unity in syllogism. Ueberweg (System §
101) is, on the whole, justified in exclaiming that Hegel’s
rehabilitation of syllogism “did but slight service to the Aristotelian
theory of syllogism,” yet his treatment of syllogism must be regarded
as an acute contribution to logical criticism in the technical sense. He
insists on its objectivity. The transition from judgment is not brought
about by our subjective action. The syllogism of “all-ness” is
convicted of a petitio principii (Encyk. § 190), with consequent lapse
into the inductive syllogism, and, finally, since inductive syllogism is
involved in the infinite process, into analogy. “The syllogism of
necessity,” on the contrary, does not presuppose its conclusion in its
premises. The detail, too, of the whole discussion is rich in
suggestion, and subsequent logicians—Ueberweg himself perhaps,
Lotze certainly in his genetic scale of types of judgment and
inference, Professor Bosanquet notably in his systematic
development of “the morphology of knowledge,” and others—have
with reason exploited it.
Hegel’s logic as a whole, however, stands and falls not with his
thoughts on syllogism, but with the claim made for the dialectical
method that it exhibits logic in its integral unity with metaphysic, the
thought-process as the self-revelation of the Idea. The claim was
disallowed. To the formalist proper it was self-condemned in its
pretension to develop the content of thought and its rejection of the
formula of bare-identity. To the epistemologist it seemed to confuse
foundation and keystone, and to suppose itself to build upon the
latter in a construction illegitimately appropriative of materials
otherwise accumulated. At most it was thought to establish a
schema of formal unity which might serve as a regulative ideal. To
the methodologist of science in genesis it appeared altogether to fail
to satisfy any practical interest. Finally, to the psychologist it spelt
the failure of intellectualism, and encouraged, therefore, some form
of rehabilitated experientialism.
In the Hegelian school in the narrower sense the logic of the
master receives some exegesis and defence upon single points of
doctrine rather than as a whole. Its effect upon logic is rather to be
seen in the rethinking of the traditional body of logical doctrine in
the light of an absolute presupposed as ideal, with the postulate that
a regulative ideal must ultimately exhibit itself as constitutive, the
justification of the postulate being held to lie in the coherence and
all-inclusiveness of the result. In such a logic, if and so far as
coherence should be attained, would be found something akin to the
spirit of what Hegel achieves, though doubtless alien to the letter of
what it is his pretension to have achieved. There is perhaps no
serious misrepresentation involved in regarding a key-thought of this
type, though not necessarily expressed in those verbal forms, as
pervading such logic of the present as coheres with a philosophy of
the absolute conceived from a point of view that is intellectualist
throughout. All other contemporary movements may be said to be in
revolt from Hegel.
v. Logic from 1880-1910
Logic in the present exhibits, though in characteristically modified
shapes, all the main types that have been found in its past history.
There is an intellectualist logic coalescent with an absolutist
metaphysic as aforesaid. There is an epistemological logic with
sometimes formalist, sometimes methodological leanings. There is a
formal-symbolic logic engaged with the elaboration of a relational
calculus. Finally, there is what may be termed psychological-
voluntaryist logic. It is in the rapidity of development of logical
investigations of the third and fourth types and the growing number
of their exponents that the present shows most clearly the history of
logic in the making. All these movements are logic of the present,
and a very brief indication may be added of points of historical
significance.
Of intellectualist logic Francis Herbert Bradley142 (b. 1846) and
Bernard Bosanquet143 (b. 1848) may be taken as typical exponents.
The philosophy of the former concludes to an Absolute by the
annulment of contradictions, though the ladder of Hegel is
conspicuous by its absence. His metaphysical method, however, is
like Herbart’s, not identifiable with his logic, and the latter has for its
central characteristic its thorough restatement of the logical forms
traditional in language and the text-books, in such a way as to
harmonize with the doctrine of a reality whose organic unity is all-
inclusive. The thorough recasting that this involves, even of the
thought of the masters when it occasionally echoes them, has
resulted in a phrasing uncouth to the ear of the plain man with his
world of persons and things in which the former simply think about
the latter, but it is fundamentally necessary for Bradley’s purpose.
The negative judgment, for example, cannot be held in one and the
same undivided act to presuppose the unity of the real, project an
adjective as conceivably applicable to it and assert its rejection. We
need, therefore, a restatement of it. With Bradley reality is the one
subject of all judgment immediate or mediate. The act of judgment
“which refers an ideal content (recognized as such) to a reality
beyond the act” is the unit for logic. Grammatical subject and
predicate necessarily both fall under the rubric of the adjectival, that
is, within the logical idea or ideal content asserted. This is a meaning
or universal, which can have no detached or abstract self-
subsistence. As found in judgment it may exhibit differences within
itself, but it is not two, but one, an articulation of unity, not a fusion,
which could only be a confusion, of differences. With a brilliant
subtlety Bradley analyses the various types of judgment in his own
way, with results that must be taken into account by all subsequent
logicians of this type. The view of inference with which he
complements it is only less satisfactory because of a failure to
distinguish the principle of nexus in syllogism from its traditional
formulation and rules, and because he is hampered by the
intractability which he finds in certain forms of relational
construction.
Bosanquet had the advantage that his logic was a work of a
slightly later date. He is, perhaps, more able than Bradley has shown
himself, to use material from alien sources and to penetrate to what
is of value in the thought of writers from whom, whether on the
whole or on particular issues, he disagrees. He treats the book-
tradition, however, a debt to which, nowadays inevitable, he is
generous in acknowledging,144 with a judicious exercise of freedom
in adaptation, i.e. constructively as datum, never eclectically. In his
fundamental theory of judgment his obligation is to Bradley. It is to
Lotze, however, that he owes most in the characteristic feature of his
logic, viz., the systematic development of the types of judgment and
inference from less adequate to more adequate forms. His
fundamental continuity with Bradley may be illustrated by his
definition of inference. “Inference is the indirect reference to reality
of differences within a universal, by means of the exhibition of this
universal in differences directly referred to reality.”145 Bosanquet’s
Logic will long retain its place as an authoritative exposition of logic
of this type.
Of epistemological logic in one sense of the phrase Lotze is still to
be regarded as a typical exponent. Of another type Chr. Sigwart
(q.v.) may be named as representative. Sigwart’s aim was “to
reconstruct logic from the point of view of methodology.” His
problem was the claim to arrive at propositions universally valid, and
so true of the object, whosoever the individual thinker. His solution,
within the Kantian circle of ideas, was that such principles as the
Kantian principle of causality were justified as “postulates of the
endeavour after complete knowledge.” “What Kant has shown is not
that irregular fleeting changes can never be the object of
consciousness, but only that the ideal consciousness of complete
science would be impossible without the knowledge of the necessity
of all events.”146 “The universal presuppositions which form the
outline of our ideal of knowledge are not so much laws which the
understanding prescribes to nature ... as laws which the
understanding lays down for its own regulation in its investigation
and consideration of nature. They are a priori because no experience
is sufficient to reveal or confirm them in unconditional universality;
but they are a priori ... only in the sense of presuppositions without
which we should work with no hope of success and merely at
random and which therefore we must believe.” Finally they are akin
to our ethical principles. With this coheres his dictum, with its far-
reaching consequences for the philosophy of induction, that “the
logical justification of the inductive process rests upon the fact that it
is an inevitable postulate of our effort after knowledge, that the
given is necessary, and can be known as proceeding from its
grounds according to universal laws.”147 It is characteristic of
Sigwart’s point of view that he acknowledges obligation to Mill as
well as to Ueberweg. The transmutation of Mill’s induction of
inductions into a postulate is an advance of which the psychological
school of logicians have not been slow to make use. The comparison
of Sigwart with Lotze is instructive, in regard both to their
agreement and their divergence as showing the range of the
epistemological formula.
Of the formal-symbolic logic all that falls to be said here is, that
from the point of view of logic as a whole, it is to be regarded as a
legitimate praxis as long as it shows itself aware of the sense in
which alone form is susceptible of abstraction, and is aware that in
itself it offers no solution of the logical problem. “It is not an
algebra,” said Kant148 of his technical logic, and the kind of support
lent recently to symbolic logic by the Gegenstandstheorie identified
with the name of Alexius Meinong (b. 1853)149 is qualified by the
warning that the real activity of thought tends to fall outside the
calculus of relations and to attach rather to the subsidiary function of
denoting. The future of symbolic logic as coherent with the rest of
logic, in the sense which the word has borne throughout its history
seems to be bound up with the question of the nature of the
analysis that lies behind the symbolism, and of the way in which this
is justified in the setting of a doctrine of validity. The “theory of the
object,” itself, while affecting logic alike in the formal and in the
psychological conception of it very deeply, does not claim to be
regarded as logic or a logic, apart from a setting supplied from
elsewhere.
Finally we have a logic of a type fundamentally psychological, if it
be not more properly characterized as a psychology which claims to
cover the whole field of philosophy, including the logical field. The
central and organizing principle of this is that knowledge is in
genesis, that the genesis takes place in the medium of individual
minds, and that this fact implies that there is a necessary reference
throughout to interests or purposes of the subject which thinks
because it wills and acts. Historically this doctrine was formulated as
the declaration of independence of the insurgents in revolt against
the pretensions of absolutist logic. It drew for support upon the
psychological movement that begins with Fries and Herbart. It has
been chiefly indebted to writers, who were not, or were not
primarily, logicians, to Avenarius, for example, for the law of the
economy of thought, to Wundt, whose system, and therewith his
logic,150 is a pendant to his psychology, for the volitional character of
judgment, to Herbert Spencer and others. A judgment is practical,
and not to be divorced without improper abstraction from the
purpose and will that informs it. A concept is instrumental to an end
beyond itself, without any validity other than its value for action. A
situation involving a need of adaptation to environment arises and
the problem it sets must be solved that the will may control
environment and be justified by success. Truth is the improvised
machinery that is interjected, so far as this works. It is clear that we
are in the presence of what is at least an important half-truth, which
intellectualism with its statics of the rational order viewed as a
completely articulate system has tended to ignore. It throws light on
many phases of the search for truth, upon the plain man’s claim to
start with a subject which he knows whose predicate which he does
not know is still to be developed, or again upon his use of the
negative form of judgment, when the further determination of his
purposive system is served by a positive judgment from without, the
positive content of which is yet to be dropped as irrelevant to the
matter in hand. The movement has, however, scarcely developed its
logic151 except as polemic. What seems clear is that it cannot be the
whole solution. While man must confront nature from the human
and largely the practical standpoint, yet his control is achieved only
by the increasing recognition of objective controls. He conquers by
obedience. So truth works and is economical because it is truth.
Working is proportioned to inner coherence. It is well that the view
should be developed into all its consequences. The result will be to
limit it, though perhaps also to justify it, save in its claim to reign
alone.
There is, perhaps, an increasing tendency to recognize that the
organism of knowledge is a thing which from any single viewpoint
must be seen in perspective. It is of course a postulate that all truths
harmonize, but to give the harmonious whole in a projection in one
plane is an undertaking whose adequacy in one sense involves an
inadequacy in another. No human architect can hope to take up in
succession all essential points of view in regard to the form of
knowledge or to logic. “The great campanile is still to finish.”
Bibliography.—Historical: No complete history of logic in the
sense in which it is to be distinguished from theoretical
philosophy in general has as yet been written. The history of
logic is indeed so little intelligible apart from constant reference
to tendencies in philosophical development as a whole, that the
historian, when he has made the requisite preparatory studies,
inclines to essay the more ambitious task. Yet there are, of
course, works devoted to the history of logic proper.
Of these Prantl’s Geschichte der Logik im Abendlande (4 vols.,
1855-1870), which traces the rise, development and fortunes of
the Aristotelian logic to the close of the middle ages, is
monumental. Next in importance are the works of L. Rabus,
Logik und Metaphysik, i. (1868) (pp. 123-242 historical, pp. 453-
518 bibliographical, pp. 514 sqq. a section on apparatus for the
study of the history of logic), Die neuesten Bestrebungen auf
dem Gebiete der Logik bei den Deutschen (1880), Logik (1895),
especially for later writers § 17. Ueberweg’s System der Logik
und Geschichte der logischen Lehren (4th ed. and last revised by
the author, 1874, though it has been reissued later, Eng. trans.,
1871) is alone to be named with these. Harms’ posthumously
published Geschichte der Logik (1881) (Die Philosophie in ihrer
Geschichte, ii.) was completed by the author only so far as
Leibnitz. Blakey’s Historical Sketch of Logic (1851), though, like
all this writer’s works, closing with a bibliography of some
pretensions, is now negligible. Franck, Esquisse d’une histoire de
la logique (1838) is the chief French contribution to the subject
as a whole.
Of contributions towards the history of special periods or
schools of logical thought the list, from the opening chapters of
Ramus’s Scholae Dialecticae (1569) downwards (v. Rabus loc.
cit.) would be endless. What is of value in the earlier works has
now been absorbed. The System der Logik (1828) of Bachmann
(a Kantian logician of distinction) contains a historical survey
(pp. 569-644), as does the Denklehre (1822) of van Calker
(allied in thought to Fries) pp. 12 sqq.; Eberstein’s Geschichte
der Logik und Metaphysik bei den Deutschen von Leibniz bis auf
gegenwärtige Zeit (latest edition, 1799) is still of importance in
regard to logicians of the school of Wolff and the origines of
Kant’s logical thought. Hoffmann, the editor and disciple of von
Baader, published Grundzüge einer Geschichte der Begriffe der
Logik in Deutschland von Kant bis Baader (1851). Wallace’s
prolegomena and notes to his Logic of Hegel (1874, revised and
augmented 1892-1894) are of use for the history and
terminology, as well as the theory. Riehl’s article entitled Logik in
Die Kultur der Gegenwart, vi. 1. Systematische Philosophie
(1907), is excellent, and touches on quite modern
developments. Liard, Les Logiciens Anglais Contemporains (5th
ed., 1907), deals only with the 19th-century inductive and
formal-symbolic logicians down to Jevons, to whom the book
was originally dedicated. Venn’s Symbolic Logic (1881) gave a
careful history and bibliography of that development. The history
of the more recent changes is as yet to be found only in the
form of unshaped material in the pages of review and
Jahresbericht. (H. W. B.*)
1 Cf. Heidel, “The Logic of the Pre-Socratic Philosophy,” in Dewey’s
Studies in Logical Theory (Chicago, 1903).
2 Heraclitus, Fragmm. 107 (Diels, Fragmente der Vorsokratiker) and 2,
on which see Burnet, Early Greek Philosophy, p. 153 note (ed. 2).
3 e.g. Diog. Laërt. ix. 25, from the lost Sophistes of Aristotle.
4 Plato and Platonism, p. 24.
5 Nothing is. If anything is, it cannot be known. If anything is known
it cannot be communicated.
6 Metaphys. μ. 1078b 28 sqq.
7 Cf. Arist. Top. θ. i. 1 ad fin.
8 For whom see Dümmler, Antisthenica (1882, reprinted in his Kleine
Schriften, 1901).
9 Aristotle, Metaphys. 1024b 32 sqq.
10 Plato, Theaetetus, 201 E. sqq., where, however, Antisthenes is not
named, and the reference to him is sometimes doubted. But cf. Aristotle,
Met. H 3. 1043b 24-28.
11 Diog. Laërt. ii. 107.
12 Aristotle, An. Pr. i. 31, 46a 32 sqq.; cf. 91b 12 sqq.
13 Athenaeus ii. 59c. See Usener, Organisation der wissenschaftl.
Arbeit (1884; reprinted in his Vorträge und Aufsätze, 1907).
14 Socrates’ reference of a discussion to its presuppositions
(Xenophon, Mem. iv. 6, 13) is not relevant for the history of the
terminology of induction.
15 Theaetetus, 186c.
16 Timaeus, 37a, b (quoted in H. F. Carlill’s translation of the
Theaetetus, p. 60).
17 Theaetetus, 186d.
18 Sophistes, 253d.
19 Ib. id.; cf. Theaetetus, 197d.
20 Aristotle, de An. 430b 5, and generally iii. 2, iii. 5.
21 For Plato’s Logic, the controversies as to the genuineness of the
dialogues may be treated summarily. The Theaetetus labours under no
suspicion. The Sophistes is apparently matter for animadversion by
Aristotle in the Metaphysics and elsewhere, but derives stronger support
from the testimonies to the Politicus which presumes it. The Politicus and
Philebus are guaranteed by the use made of them in Aristotle’s Ethics.
The rejection of the Parmenides would involve the paradox of a nameless
contemporary of Plato and Aristotle who was inferior as a metaphysician
to neither. No other dialogue adds anything to the logical content of
these.
Granted their genuineness, the relative dating of three of them is
given, viz. Theaetetus, Sophistes and Politicus in the order named. The
Philebus seems to presuppose Politicus, 283-284, but if this be an error, it
will affect the logical theory not at all. There remains the Parmenides. It
can scarcely be later than the Sophistes. The antinomies with which it
concludes are more naturally taken as a prelude to the discussion of the
Sophistes than as an unnecessary retreatment of the doctrine of the one
and the many in a more negative form. It may well be earlier than the
Theaetetus in its present form. The stylistic argument shows the
Theaetetus relatively early. The maturity of its philosophic outlook tends
to give it a place relatively advanced in the Platonic canon. To meet the
problem here raised, the theory has been devised of an earlier and a later
version. The first may have linked on to the series of Plato’s dialogues of
search, and to put the Parmenides before it is impossible. The second,
though it might still have preceded the Parmenides might equally well
have followed the negative criticism of that dialogue, as the beginning of
reconstruction. For Plato’s logic this question only has interest on account
of the introduction of an Ἀριστοτέλης in a non-speaking part in the
Parmenides. If this be pressed as suggesting that the philosopher
Aristotle was already in full activity at the date of writing, it is of
importance to know what Platonic dialogues were later than the début of
his critical pupil.
On the stylistic argument as applied to Platonic controversies Janell’s
Quaestiones Platonicae (1901) is important. On the whole question of
genuineness and dates of the dialogues, H. Raeder, Platons
philosophische Entwickelung (1905), gives an excellent conspectus of the
views held and the grounds alleged. See also Plato.
22 E.g. that of essence and accident. Republic, 454.
23 E.g. the discussion of correlation, ib. 437 sqq.
24 Politicus, 285d.
25 Sophistes, 261c sqq.
26 E.g. in Nic. Eth. i. 6.
27 Philebus, 16d.
28 Principal edition still that of Waitz, with Latin commentary, (2 vols.,
1844-1846). Among the innumerable writers who have thrown light upon
Aristotle’s logical doctrine, St Hilaire, Trendelenburg, Ueberweg, Hamilton,
Mansel, G. Grote may be named. There are, however, others of equal
distinction. Reference to Prantl, op. cit., is indispensable. Zeller, Die
Philosophie der Griechen, ii. 2, “Aristoteles” (3rd ed., 1879), pp. 185-257
(there is an Eng. trans.), and Maier, Die Syllogistik des Aristoteles (2 vols.,
1896, 1900) (some 900 pp.), are also of first-rate importance.
29 Sophist. Elench. 184, espec. b 1-3, but see Maier, loc. cit. i. 1.
30 References such as 18b 12 are the result of subsequent editing and
prove nothing. See, however, Aristotle.
31 Adrastus is said to have called them πρὸ τῶν τοπικῶν.
32 Metaphys. E. 1.
33 De Part. Animal. A. 1, 639a 1 sqq.; cf. Metaphys. 1005b 2 sqq.
34 De Interpretatione 16a sqq.
35 De Interpretatione 16a 24-25.
36 Ib. 18a 28 sqq.
37 Ib. 19a 28-29.
38 As shown e.g. by the way in which the relativity of sense and the
object of sense is conceived, 7b 35-37.
39 Topics 101a 27 and 36-b 4.
40 Topics 100.
41 Politics 1282a 1 sqq.
42 103b 21.
43 Topics 160a 37-b 5.
44 This is the explanation of the formal definition of induction, Prior
Analytics, ii. 23, 68b 15 sqq.
45 25b 36.
46 Prior Analytics, i. 1. 24a 18-20, Συλλογισμὸς δέ ἐστι λόγος ἐν ᾧ
τεθέντων τινῶν ἕτερόν τι τῶν κειμένων ἐξ ἀνάγκης συμβαίνει τῷ
ταῦτα εἶναι. The equivalent previously in Topics 100a 25 sqq.
47 Prior Analytics, ii. 21; Posterior Analytics, i. 1.
48 67a 33-37, μὴ συνθεωρῶν τὸ καθ᾽ ἑκάτερον.
49 67a 39-63.
50 79a 4-5.
51 24b 10-11.
52 Posterior Analytics, i. 4 καθ᾽ αὐτὸ means (1) contained in the
definition of the subject; (2) having the subject contained in its definition,
as being an alternative determination of the subject, crooked, e.g. is per
se of line; (3) self-subsistent; (4) connected with the subject as
consequent to ground. It needs stricter determination, therefore.
53 73b 26 sqq., 74a 37 sqq.
54 90b 16.
55 Metaphys. Z. 12, H. 6 ground this formula metaphysically.
56 94a 12, 75b 32.
57 90a 6. Cf. Ueberweg, System der Logik, § 101.
58 78a 30 sqq.
59 Topics, 101b 18, 19.
60 Posterior Analytics, ii. 13.
61 Posterior Analytics, ii. 16.
62 Posterior Analytics, i. 13 ad fin., and i. 27. The form which a
mathematical science treats as relatively self-subsistent is certainly not
the constitutive idea.
63 Posterior Analytics, i. 3.
64 Posterior Analytics, ii. 19.
65 De Anima, 428b 18, 19.
66 Prior Analytics, i. 30, 46a 18.
67 Topics, 100b 20, 21.
68 Topics, 101a 25, 36-37, b 1-4, &c.
69 Zeller (loc. cit. p. 194), who puts this formula in order to reject it.
70 Metaphys. Δ 1, 1013a 14.
71 Posterior Analytics, 72a 16 seq.
72 Posterior Analytics, 77a 26, 76a 37 sqq.
73 Metaphys. Γ.
74 Posterior Analytics, ii. 19.
75 de Anima, iii. 4-6.
76 Metaphys. M. 1087a 10-12; Zeller loc. cit. 304 sqq.; McLeod Innes,
The Universal and Particular in Aristotle’s Theory of Knowledge (1886).
77 Topics, 105a 13.
78 Metaphys. 995a 8.
79 E.g., Topics, 108b 10, “to induce” the universal.
80 Posterior Analytics, ii. 19, 100b 3, 4.
81 Topics, i. 18, 108b 10.
82 Prior Analytics, ii. 23.
83 Παράδειγμα, Prior Analytics, ii. 24.
84 Sigwart, Logik, Eng. trans. vol. ii. p. 292 and elsewhere.
85 Ueberweg, System, § 127, with a ref. to de Partibus Animalium,
667a.
86 See 67a 17 ἐξ ἁπάντων τῶν ἀτόμων.
87 Ἐπιφορά. Ἐπι = “in” as in ἐπαγωγὴ, inductio, and -φορὰ = -
ferentia, as in διαφορὰ, differentia.
88 Diog. Laërt. x. 33 seq.; Sext. Emp. Adv. Math. vii. 211.
89 Diog. Laërt. x. 87; cf. Lucretius, vi. 703 sq., v. 526 sqq. (ed.
Munro).
90 Sextus Empiricus, Pyrrhon. Hypotyp. ii. 195, 196.
91 Sextus, op. cit. ii. 204.
92 Op. cit. iii. 17 sqq., and especially 28.
93 The point is raised by Aristotle, 95a.
94 See Jourdain, Recherches critiques sur l’âge et l’origine des
traductions latines d’Aristote (1843).
95 See E. Cassirer, Das Erkenntnisproblem, i. 134 seq., and the
justificatory excerpts, pp. 539 sqq.
96 See Riehl in Vierteljahrschr. f. wiss. Philos. (1893).
97 Bacon, Novum Organum, ii. 22, 23; cf. also Aristotle, Topics i. 12.
13, ii. 10. 11 (Stewart, ad Nic. Eth. 1139b 27) and Sextus Empiricus, Pyrr.
Hypot. iii. 15.
98 Bacon’s Works, ed. Ellis and Spedding, iii. 164-165.
99 A notable formula of Bacon’s Novum Organum ii. 4 § 3 turns out,
Valerius Terminus, cap. 11, to come from Aristotle, Post. An. i. 4 via
Ramus. See Ellis in Bacon’s Works, iii. 203 sqq.
100 De Civitate Dei, xi. 26. “Certum est me esse, si fallor.”
101 Cf. Plato, Republic, 381E seq.
102 Elementa Philosophiæ, i. 3. 20, i. 6. 17 seq.
103 Hobbes, Elementa Philosophiæ, i. 1. 5.
104 Id. ib. i. 6. 16.
105 Id. ib. i. 4. 8; cf. Locke’s Essay of Human Understanding, iv. 17.
106 Id. Leviathan, i. 3.
107 Id. Elem. Philos. i. 6. 10.
108 Condillac, Langue des Calculs, p. 7.
109 Locke, Essay, iii. 3.
110 Id. ib. iv. 17.
111 Loc. cit. § 8.
112 Id. ib. iv. 4, §§ 6 sqq.
113 Berkeley, Of the Principles of Human Knowledge, § 142.
114 Hume, Treatise of Human Nature, i. 1. 7 (from Berkeley, op. cit.,
introd., §§ 15-16).
115 Essay, iv. 17, § 3.
116 Hume, Treatise of Human Nature, i. 3. 15.
117 Mill, Examination of Sir William Hamilton’s Philosophy, cap. 17.
118 Cf. Mill, Autobiography, p. 159. “I grappled at once with the
problem of Induction, postponing that of Reasoning.” Ib. p. 182 (when he
is preoccupied with syllogism), “I could make nothing satisfactory of
Induction at this time.”
119 Autobiography, p. 181.
120 The insight, for instance, of F. H. Bradley’s criticism, Principles of
Logic, II. ii. 3, is somewhat dimmed by a lack of sympathy due to
extreme difference in the point of view adopted.
121 Bacon, Novum Organum, i. 100.
122 Russell’s Philosophy of Leibnitz, capp. 1-5.
123 See especially remarks on the letter of M. Arnauld (Gerhardt’s
edition of the philosophical works, ii. 37 sqq.).
124 Gerhardt, vi. 612, quoted by Russell, loc. cit., p. 19.
125 Ibid., ii. 62, Russell, p. 33.
126 Spinoza, ed. van Vloten and Land, i. 46 (Ethica, i. 11).
127 Nouveaux essais, iv. 2 § 9, 17 § 4 (Gerhardt v. 351, 460).
128 Critique of Judgment, Introd. § 2, ad fin. (Werke, Berlin Academy
edition, vol. v. p. 176, l. 10).
129 Kant’s Introduction to Logic and his Essay on the Mistaken
Subtlety of the Four Figures, trans. T. K. Abbott (1885).
130 Loc. cit., p. 11.
131 Or antitheses. Kant follows, for example, a different line of
cleavage between form and content from that developed between
thought and the “given.” And these are not his only unresolved dualities,
even in the Critique of Pure Reason. For the logical inquiry, however, it is
permissible to ignore or reduce these differences.
The determination too of the sense in which Kant’s theory of
knowledge involves an unresolved antithesis is for the logical purpose
necessary so far only as it throws light upon his logic and his influence
upon logical developments. Historically the question of the extent to
which writers adopted the dualistic interpretation or one that had the like
consequences is of greater importance.
It may be said summarily that Kant holds the antithesis between
thought and “the given” to be unresolved and within the limits of theory
of knowledge irreducible. The dove of thought falls lifeless if the resistant
atmosphere of “the given” be withdrawn (Critique of Pure Reason, ed. 2
Introd. Kant’s Werke, ed. of the Prussian Academy, vol. iii. p. 32, ll. 10
sqq.). Nevertheless the thing-in-itself is a problematic conception and of a
limiting or negative use merely. He “had woven,” according to an often
quoted phrase of Goethe, “a certain sly element of irony into his method;
... he pointed as it were with a side gesture beyond the limits which he
himself had drawn.” Thus (loc. cit. p. 46, ll. 8, 9) he declares that “there
are two lineages united in human knowledge, which perhaps spring from
a common stock, though to us unknown—namely sense and
understanding.” Some indication of the way in which he would
hypothetically and speculatively mitigate the antithesis is perhaps
afforded by the reflection that the distinction of the mental and what
appears as material is an external distinction in which the one appears
outside to the other. “Yet what as thing-in-itself lies back of the
phenomenon may perhaps not be so wholly disparate after all” (ib. p.
278, ll. 26 sqq.).
132 Critique of Judgment, Introd. § 2 (Werke, v. 276, ll. 9 sqq.); cf.
Bernard’s “Prolegomena” to his translation of this (pp. xxxviii. sqq.).
133 Die Logik, insbesondere die Analytik (Schleswig, 1825). August
Detlev Christian Twesten (1789-1876), a Protestant theologian,
succeeded Schleiermacher as professor in Berlin in 1835.
134 See Sir William Hamilton: The Philosophy of Perception, by J.
Hutchison Stirling.
135 Hauptpunkte der Logik, 1808 (Werke, ed. Hartenstein, i. 465
sqq.), and specially Lehrbuch der Einleitung in die Philosophie (1813),
and subsequently §§ 34 sqq. (Werke, i. 77 sqq.).
136 See Ueberweg, System of Logic and History of Logical Doctrines,
§ 34.
137 Drei Bücher der Logik, 1874 (E.T., 1884). The Book on Pure Logic
follows in essentials the line of thought of an earlier work (1843).
138 Logic, Eng. trans. 35 ad fin.
139 Logic, Introd. § ix.
140 For whom see Höffding, History of Modern Philosophy, Eng.
trans., vol. ii. pp. 122 sqq.; invaluable for the logical methods of modern
philosophers.
141 Wissenschaft der Logik (1812-1816), in course of revision at
Hegel’s death in 1831 (Werke, vols. iii.-v.), and Encyklopädie der
philosophischen Wissenschaften, i.; Die Logik (1817; 3rd ed., 1830);
Werke, vol. vi., Eng. trans., Wallace (2nd ed., 1892).
142 The Principles of Logic (1883).
143 Logic, or The Morphology of Thought (2 vols., 1888).
144 Logic, Pref. pp. 6 seq.
145 Id. vol. ii. p. 4.
146 Logik (1873, 1889), Eng. trans. ii. 17.
147 Op. cit. ii. 289.
148 Introd. to Logic, trans. Abbott, p. 10.
149 Ueber Annahmen (1902, &c.).
150 Logik (1880, and in later editions).
151 Yet see Studies in Logic, by John Dewey and others (1903).
LOGOCYCLIC CURVE,
STROPHOID or FOLIATE, a cubic curve
generated by increasing or diminishing the
radius vector of a variable point Q on a straight
line AB by the distance QC of the point from the
foot of the perpendicular drawn from the origin
to the fixed line. The polar equation is r cos θ =
a(1 ± sin θ), the upper sign referring to the case
when the vector is increased, the lower when it
is diminished. Both branches are included in the
Cartesian equation (x² + y²)(2a − x) = a²x,
where a is the distance of the line from the
origin. If we take for axes the fixed line and the perpendicular
through the initial point, the equation takes the form y √(a − x) = x
√(a + x). The curve resembles the folium of Descartes, and has a
node between x = 0, x = a, and two branches asymptotic to the line
x = 2a.
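As a quick numerical sanity check (an illustration added here, not part of the original article), points generated from the polar equation r cos θ = a(1 ± sin θ) should, for either choice of sign, satisfy the Cartesian equation (x² + y²)(2a − x) = a²x. A minimal Python sketch:

```python
import math

# Sanity check (illustration only): substitute x = r*cos(theta), y = r*sin(theta)
# from the polar form r*cos(theta) = a*(1 +/- sin(theta)) into the
# Cartesian form (x^2 + y^2)*(2a - x) = a^2 * x and confirm the residual is ~0.

def polar_points(a, sign, n=200):
    """Yield (x, y) points on one branch of the logocyclic curve."""
    for k in range(1, n):
        theta = -math.pi / 2 + math.pi * k / n   # stay clear of cos(theta) = 0
        r = a * (1 + sign * math.sin(theta)) / math.cos(theta)
        yield r * math.cos(theta), r * math.sin(theta)

def cartesian_residual(a, x, y):
    """Left side minus right side of (x^2 + y^2)(2a - x) = a^2 * x."""
    return (x * x + y * y) * (2 * a - x) - a * a * x

a = 1.0
worst = max(abs(cartesian_residual(a, x, y))
            for sign in (+1, -1)
            for x, y in polar_points(a, sign))
print(f"largest residual over both branches: {worst:.2e}")
```

The residual vanishes identically because x = a(1 ± sin θ) gives 2a − x = a(1 ∓ sin θ), while x² + y² = r² contributes a factor (1 ± sin θ)²/cos²θ, and cos²θ = (1 − sin θ)(1 + sin θ) cancels throughout.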
LOGOGRAPHI (λόγος, γράφω, writers of prose histories or
tales), the name given by modern scholars to the Greek
historiographers before Herodotus.1 Thucydides, however, applies
the term to all his own predecessors, and it is therefore usual to
make a distinction between the older and the younger logographers.
Their representatives, with one exception, came from Ionia and its
islands, which from their position were most favourably situated for
the acquisition of knowledge concerning the distant countries of East
and West. They wrote in the Ionic dialect, in what was called the
unperiodic style, and preserved the poetic character of their epic
model. Their criticism amounts to nothing more than a crude
attempt to rationalize the current legends and traditions connected
with the founding of cities, the genealogies of ruling families, and
the manners and customs of individual peoples. Of scientific criticism
there is no trace whatever. The first of these historians was probably
Cadmus of Miletus (who lived, if at all, in the early part of the 6th
century), the earliest writer of prose, author of a work on the
founding of his native city and the colonization of Ionia (so Suïdas);
Pherecydes of Leros, who died about 400, is generally considered
the last. Mention may also be made of the following: Hecataeus of
Miletus (550-476); Acusilaus of Argos,2 who paraphrased in prose
(correcting the tradition where it seemed necessary) the
genealogical works of Hesiod in the Ionic dialect; he confined his
attention to the prehistoric period, and made no attempt at a real
history; Charon of Lampsacus (c. 450), author of histories of Persia,
Libya, and Ethiopia, of annals (ὧροι) of his native town with lists of
the prytaneis and archons, and of the chronicles of Lacedaemonian
kings; Xanthus of Sardis in Lydia (c. 450), author of a history of
Lydia, one of the chief authorities used by Nicolaus of Damascus (fl.
during the time of Augustus); Hellanicus of Mytilene; Stesimbrotus of
Thasos, opponent of Pericles and reputed author of a political
pamphlet on Themistocles, Thucydides and Pericles; Hippys and
Glaucus, both of Rhegium, the first the author of histories of Italy
and Sicily, the second of a treatise on ancient poets and musicians,
used by Harpocration and Plutarch; Damastes of Sigeum, pupil of
Hellanicus, author of genealogies of the combatants before Troy (an
ethnographic and statistical list), of short treatises on poets,
sophists, and geographical subjects.
On the early Greek historians, see G. Busolt, Griechische
Geschichte (1893), i. 147-153; C. Wachsmuth, Einleitung in das
Studium der alten Geschichte (1895); A. Schäfer, Abriss der
Quellenkunde der griechischen und römischen Geschichte (ed.
H. Nissen, 1889); J. B. Bury, Ancient Greek Historians (1909),
lecture i.; histories of Greek literature by Müller-Donaldson (ch.
18) and W. Mure (bk. iv. ch. 3), where the little that is known
concerning the life and writings of the logographers is
exhaustively discussed. The fragments will be found, with Latin
notes, translation, prolegomena, and copious indexes, in C. W.
Müller’s Fragmenta historicorum Graecorum (1841-1870).
See also Greece: History, Ancient (section, “Authorities”).
1 The word is also used of the writers of speeches for the use of the
contending parties in the law courts, who were forbidden to employ
advocates.
2 There is some doubt as to whether this Acusilaus was of
Peloponnesian or Boeotian Argos. Possibly there were two of the name.
For an example of the method of Acusilaus see Bury, op. cit. p. 19.
LOGOS (λόγος), a common term in ancient philosophy and
theology. It expresses the idea of an immanent reason in the world,
and, under various modifications, is met with in Indian, Egyptian and
Persian systems of thought. But the idea was developed mainly in
Hellenic and Hebrew philosophy, and we may distinguish the
following stages:
1. The Hellenic Logos.—To the Greek mind, which saw in the world
a κόσμος (ordered whole), it was natural to regard the world as the
product of reason, and reason as the ruling principle in the world. So
we find a Logos doctrine more or less prominent from the dawn of
Hellenic thought to its eclipse. It rises in the realm of physical
speculation, passes over into the territory of ethics and theology,
and makes its way through at least three well-defined stages. These
are marked off by the names of Heraclitus of Ephesus, the Stoics
and Philo.
It acquires its first importance in the theories of Heraclitus (6th
century b.c.), who, trying to account for the aesthetic order of the
visible universe, broke away to some extent from the purely physical
conceptions of his predecessors and discerned at work in the cosmic
process a λόγος analogous to the reasoning power in man. On the
one hand the Logos is identified with γνώμη and connected with
δίκη, which latter seems to have the function of correcting
deviations from the eternal law that rules in things. On the other
hand it is not positively distinguished either from the ethereal fire, or
from the εἱμαρμένη and the ἀνάγκη according to which all things
occur. Heraclitus holds that nothing material can be thought of
without this Logos, but he does not conceive the Logos itself to be
immaterial. Whether it is regarded as in any sense possessed of
intelligence and consciousness is a question variously answered. But
there is most to say for the negative. This Logos is not one above
the world or prior to it, but in the world and inseparable from it.
Man’s soul is a part of it. It is relation, therefore, as Schleiermacher
expresses it, or reason, not speech or word. And it is objective, not
subjective, reason. Like a law of nature, objective in the world, it
gives order and regularity to the movement of things, and makes the
system rational.1
The failure of Heraclitus to free himself entirely from the physical
hypotheses of earlier times prevented his speculation from
influencing his successors. With Anaxagoras a conception entered
which gradually triumphed over that of Heraclitus, namely, the
conception of a supreme, intellectual principle, not identified with
the world but independent of it. This, however, was νοῦς, not Logos.
In the Platonic and Aristotelian systems, too, the theory of ideas
involved an absolute separation between the material world and the
world of higher reality, and though the term Logos is found the
conception is vague and undeveloped. With Plato the term selected
for the expression of the principle to which the order visible in the
universe is due is νοῦς or σοφία, not λόγος. It is in the pseudo-
Platonic Epinomis that λόγος appears as a synonym for νοῦς. In
Aristotle, again, the principle which sets all nature under the rule of
thought, and directs it towards a rational end, is νοῦς, or the divine
spirit itself; while λόγος is a term with many senses, used as more
or less identical with a number of phrases, οὗ ἕνεκα, ἐνέργεια,
ἐντελέχεια, οὐσία, εἶδος, μορφή, &c.
In the reaction from Platonic dualism, however, the Logos doctrine
reappears in great breadth. It is a capital element in the system of
the Stoics. With their teleological views of the world they naturally
predicated an active principle pervading it and determining it. This
operative principle is called both Logos and God. It is conceived of
as material, and is described in terms used equally of nature and of
God. There is at the same time the special doctrine of the λόγος
σπερματικός, the seminal Logos, or the law of generation in the
world, the principle of the active reason working in dead matter. This
parts into λόγοι σπερματικοί, which are akin, not to the Platonic
ideas, but rather to the λόγοι ἔνυλοι of Aristotle. In man, too, there
is a Logos which is his characteristic possession, and which is
ἐνδιάθετος, as long as it is a thought resident within his breast, but
προφορικός when it is expressed as a word. This distinction
between Logos as ratio and Logos as oratio, so much used
subsequently by Philo and the Christian fathers, had been so far
anticipated by Aristotle’s distinction between the ἔξω λόγος and the
λόγος ἐν τῇ ψυχῇ. It forms the point of attachment by which the
Logos doctrine connected itself with Christianity. The Logos of the
Stoics (q.v.) is a reason in the world gifted with intelligence, and
analogous to the reason in man.
2. The Hebrew Logos.—In the later Judaism the earlier
anthropomorphic conception of God and with it the sense of the
divine nearness had been succeeded by a belief which placed God at
a remote distance, severed from man and the world by a deep
chasm. The old familiar name Yahweh became a secret; its place
was taken by such general expressions as the Holy, the Almighty, the
Majesty on High, the King of Kings, and also by the simple word
“Heaven.” Instead of the once powerful confidence in the immediate
presence of God there grew up a mass of speculation regarding on
the one hand the distant future, on the other the distant past.
Various attempts were made to bridge the gulf between God and
man, including the angels, and a number of other hybrid forms of
which it is hard to say whether they are personal beings or
Welcome to our website – the perfect destination for book lovers and
knowledge seekers. We believe that every book holds a new world,
offering opportunities for learning, discovery, and personal growth.
That’s why we are dedicated to bringing you a diverse collection of
books, ranging from classic literature and specialized publications to
self-development guides and children's books.
More than just a book-buying platform, we strive to be a bridge
connecting you with timeless cultural and intellectual values. With an
elegant, user-friendly interface and a smart search system, you can
quickly find the books that best suit your interests. Additionally,
our special promotions and home delivery services help you save time
and fully enjoy the joy of reading.
Join us on a journey of knowledge exploration, passion nurturing, and
personal growth every day!
ebookbell.com

More Related Content

PDF
Full download Real Time Digital Signal Processing Implementation and Applicat...
PDF
Real Time Digital Signal Processing Implementations Applications And Experime...
PDF
DSP Applications Using C and the TMS320C6x DSK 1st Edition Chassaing
PDF
Data Acquisition and Signal Processing for Smart Sensors Nikolay V. Kirianaki
PDF
Realtime Digital Signal Processing Fundamentals Implementations And Applicati...
PDF
Data Acquisition and Signal Processing for Smart Sensors Nikolay V. Kirianaki
PDF
Enabling Technologies for Mobile Services The MobiLife Book 1st Edition Mika ...
PDF
Domain Architecture Models And Architecture For Umi Applications Daniel J Duffy
Full download Real Time Digital Signal Processing Implementation and Applicat...
Real Time Digital Signal Processing Implementations Applications And Experime...
DSP Applications Using C and the TMS320C6x DSK 1st Edition Chassaing
Data Acquisition and Signal Processing for Smart Sensors Nikolay V. Kirianaki
Realtime Digital Signal Processing Fundamentals Implementations And Applicati...
Data Acquisition and Signal Processing for Smart Sensors Nikolay V. Kirianaki
Enabling Technologies for Mobile Services The MobiLife Book 1st Edition Mika ...
Domain Architecture Models And Architecture For Umi Applications Daniel J Duffy

Similar to Realtime Digital Signal Processing Implementations And Applications 2nd Edition Sen M Kuo (20)

PDF
Emerging Wireless Multimedia Services And Technologies Apostolis Salkintzis
PDF
Error Control Coding From Theory to Practice 1st Edition Peter Sweeney
PDF
Programming Mobile Devices An Introduction for Practitioners 1st Edition Tomm...
PDF
Digital Signal Processing and Applications Second Edition Dag Stranneby
PDF
Symbian OS Communications Programming 2nd ed Edition Iain Campbell
PDF
The IMS IP Multimedia Concepts and Services Second Edition Miikka Poikselka
PDF
Digital Audio Broadcasting Principles and Applications of DAB DAB and DMB 3rd...
PDF
Symbian Os Communications Programming 2nd Ed Iain Campbell Dale Self
PDF
Symbian OS Communications Programming 2nd ed Edition Iain Campbell
PDF
Debugging At The Electronic System Level 1st Edition Frank Rogin
PDF
The Ims Ip Multimedia Concepts And Services 2nd Edition Miikka Poikselka Aki ...
PDF
Converged Multimedia Networks 1st Edition Juliet Bates Chris Gallon
PDF
Error Control Coding From Theory to Practice 1st Edition Peter Sweeney
PDF
Largescale Software Architecture A Practical Guide Using Uml 1st Edition Garland
PDF
Mobile Java Development On Symbian Os Java Me And Doja For Smartphones Roy Be...
PDF
Protocols And Architectures For Wireless Sensor Networks 1st Edition Holger Karl
PDF
Synchronization And Arbitration In Digital Systems David J Kinnimentauth
PDF
Creating Value Added Services and Applications for Converged Communications N...
PDF
Embedded Systems Design 2nd Edition Steve Heath
PDF
Methodology For The Digital Calibration Of Analog Circuits And Systems Marc P...
Emerging Wireless Multimedia Services And Technologies Apostolis Salkintzis
Error Control Coding From Theory to Practice 1st Edition Peter Sweeney
Programming Mobile Devices An Introduction for Practitioners 1st Edition Tomm...
Digital Signal Processing and Applications Second Edition Dag Stranneby
Symbian OS Communications Programming 2nd ed Edition Iain Campbell
The IMS IP Multimedia Concepts and Services Second Edition Miikka Poikselka
Digital Audio Broadcasting Principles and Applications of DAB DAB and DMB 3rd...
Symbian Os Communications Programming 2nd Ed Iain Campbell Dale Self
Symbian OS Communications Programming 2nd ed Edition Iain Campbell
Debugging At The Electronic System Level 1st Edition Frank Rogin
The Ims Ip Multimedia Concepts And Services 2nd Edition Miikka Poikselka Aki ...
Converged Multimedia Networks 1st Edition Juliet Bates Chris Gallon
Error Control Coding From Theory to Practice 1st Edition Peter Sweeney
Largescale Software Architecture A Practical Guide Using Uml 1st Edition Garland
Mobile Java Development On Symbian Os Java Me And Doja For Smartphones Roy Be...
Protocols And Architectures For Wireless Sensor Networks 1st Edition Holger Karl
Synchronization And Arbitration In Digital Systems David J Kinnimentauth
Creating Value Added Services and Applications for Converged Communications N...
Embedded Systems Design 2nd Edition Steve Heath
Methodology For The Digital Calibration Of Analog Circuits And Systems Marc P...
Ad

Recently uploaded (20)

PPTX
PPH.pptx obstetrics and gynecology in nursing
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PPTX
Cell Structure & Organelles in detailed.
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PDF
Complications of Minimal Access Surgery at WLH
PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PDF
Basic Mud Logging Guide for educational purpose
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PPTX
Pharma ospi slides which help in ospi learning
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PDF
01-Introduction-to-Information-Management.pdf
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PPTX
master seminar digital applications in india
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PDF
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PPH.pptx obstetrics and gynecology in nursing
Renaissance Architecture: A Journey from Faith to Humanism
Cell Structure & Organelles in detailed.
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
Abdominal Access Techniques with Prof. Dr. R K Mishra
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
Complications of Minimal Access Surgery at WLH
2.FourierTransform-ShortQuestionswithAnswers.pdf
Basic Mud Logging Guide for educational purpose
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Pharma ospi slides which help in ospi learning
Microbial diseases, their pathogenesis and prophylaxis
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
01-Introduction-to-Information-Management.pdf
human mycosis Human fungal infections are called human mycosis..pptx
master seminar digital applications in india
STATICS OF THE RIGID BODIES Hibbelers.pdf
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
Supply Chain Operations Speaking Notes -ICLT Program
102 student loan defaulters named and shamed – Is someone you know on the list?
Ad

Realtime Digital Signal Processing Implementations And Applications 2nd Edition Sen M Kuo

  • 1. Realtime Digital Signal Processing Implementations And Applications 2nd Edition Sen M Kuo download https://ebookbell.com/product/realtime-digital-signal-processing- implementations-and-applications-2nd-edition-sen-m-kuo-33786762 Explore and download more ebooks at ebookbell.com
  • 2. Here are some recommended products that we believe you will be interested in. You can click the link to download. Realtime Digital Signal Processing Fundamentals Implementations And Applications 3rd Sen M Kuo https://ebookbell.com/product/realtime-digital-signal-processing- fundamentals-implementations-and-applications-3rd-sen-m-kuo-5331202 Real Time Digital Signal Processing Implementations Applications And Experiments With The Tms320c55x Sen M Kuo https://ebookbell.com/product/real-time-digital-signal-processing- implementations-applications-and-experiments-with-the-tms320c55x-sen- m-kuo-4311340 Realtime Digital Signal Processing From Matlab To C With The Tms320c6x Dsps 2nd Ed Wright https://ebookbell.com/product/realtime-digital-signal-processing-from- matlab-to-c-with-the-tms320c6x-dsps-2nd-ed-wright-5145054 Realtime Digital Signal Processing Based On The Tms320c6000 Nasser Kehtarnavaz https://ebookbell.com/product/realtime-digital-signal-processing- based-on-the-tms320c6000-nasser-kehtarnavaz-992478
  • 3. Realtime Digital Signal Processing From Matlab To C With The Tms320c6x Dsps Third Edition 3rd Ed Morrow https://ebookbell.com/product/realtime-digital-signal-processing-from- matlab-to-c-with-the-tms320c6x-dsps-third-edition-3rd-ed- morrow-9954322 Realtime Digital Signal Processing From Matlab To C With The Tms320c6x Dsk Welch https://ebookbell.com/product/realtime-digital-signal-processing-from- matlab-to-c-with-the-tms320c6x-dsk-welch-9955500 Realtime Digital Signal Processing From Matlab To C With The Tms320c6x Dsps Third Edition 3rd Edition Michael G Morrow Cameron Hg Wright Thad B Welch Michael G Morrow https://ebookbell.com/product/realtime-digital-signal-processing-from- matlab-to-c-with-the-tms320c6x-dsps-third-edition-3rd-edition-michael- g-morrow-cameron-hg-wright-thad-b-welch-michael-g-morrow-7384218 Architecting Highperformance Embedded Systems Design And Build Highperformance Realtime Digital Systems Based On Fpgas And Custom Circuits Ledin https://ebookbell.com/product/architecting-highperformance-embedded- systems-design-and-build-highperformance-realtime-digital-systems- based-on-fpgas-and-custom-circuits-ledin-34810328 Indigeneity In Real Time The Digital Making Of Oaxacalifornia Ingrid Kummels https://ebookbell.com/product/indigeneity-in-real-time-the-digital- making-of-oaxacalifornia-ingrid-kummels-51199030
  • 6. Real-Time Digital Signal Processing Implementations and Applications Second Edition Sen M Kuo Northern Illinois University, USA Bob H Lee Ingenient Technologies Inc., USA Wenshun Tian UTStarcom Inc., USA
  • 9. Real-Time Digital Signal Processing Implementations and Applications Second Edition Sen M Kuo Northern Illinois University, USA Bob H Lee Ingenient Technologies Inc., USA Wenshun Tian UTStarcom Inc., USA
  • 10. Copyright C 2006 John Wiley Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England Telephone (+44) 1243 779777 Email (for orders and customer service enquiries): cs-books@wiley.co.uk Visit our Home Page on www.wileyeurope.com All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to permreq@wiley.co.uk, or faxed to (+44) 1243 770620. Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought. Other Wiley Editorial Offices John Wiley Sons Inc., 111 River Street, Hoboken, NJ 07030, USA Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA Wiley-VCH Verlag GmbH, Boschstr. 
12, D-69469 Weinheim, Germany John Wiley Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia John Wiley Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809 John Wiley Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1 Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. Library of Congress Cataloging-in-Publication Data Kuo, Sen M. (Sen-Maw) Real-time digital signal processing : implementations, applications and experiments with the TMS320C55X / Sen M Kuo, Bob H Lee, Wenshun Tian. – 2nd ed. p. cm. Includes bibliographical references and index. ISBN 0-470-01495-4 (cloth) 1. Signal processing–Digital techniques. 2. Texas Instruments TMS320 series microprocessors. I. Lee, Bob H. II. Tian, Wenshun. III. Title. TK5102 .9 .K86 2006 621.3822-dc22 2005036660 British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library ISBN-13 978-0-470-01495-0 ISBN-10 0-470-01495-4 Typeset in 9/11pt Times by TechBooks, New Delhi, India Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
Contents

Preface xv

1 Introduction to Real-Time Digital Signal Processing 1
  1.1 Basic Elements of Real-Time DSP Systems 2
  1.2 Analog Interface 3
    1.2.1 Sampling 3
    1.2.2 Quantization and Encoding 7
    1.2.3 Smoothing Filters 8
    1.2.4 Data Converters 9
  1.3 DSP Hardware 10
    1.3.1 DSP Hardware Options 10
    1.3.2 DSP Processors 13
    1.3.3 Fixed- and Floating-Point Processors 15
    1.3.4 Real-Time Constraints 16
  1.4 DSP System Design 17
    1.4.1 Algorithm Development 18
    1.4.2 Selection of DSP Processors 19
    1.4.3 Software Development 20
    1.4.4 High-Level Software Development Tools 21
  1.5 Introduction to DSP Development Tools 22
    1.5.1 C Compiler 22
    1.5.2 Assembler 23
    1.5.3 Linker 24
    1.5.4 Other Development Tools 25
  1.6 Experiments and Program Examples 25
    1.6.1 Experiments of Using CCS and DSK 26
    1.6.2 Debugging Program Using CCS and DSK 29
    1.6.3 File I/O Using Probe Point 32
    1.6.4 File I/O Using C File System Functions 35
    1.6.5 Code Efficiency Analysis Using Profiler 37
    1.6.6 Real-Time Experiments Using DSK 39
    1.6.7 Sampling Theory 42
    1.6.8 Quantization in ADCs 44
  References 45
  Exercises 45

2 Introduction to TMS320C55x Digital Signal Processor 49
  2.1 Introduction 49
  2.2 TMS320C55x Architecture 50
    2.2.1 Architecture Overview 50
    2.2.2 Buses 53
    2.2.3 On-Chip Memories 53
    2.2.4 Memory-Mapped Registers 55
    2.2.5 Interrupts and Interrupt Vector 55
  2.3 TMS320C55x Peripherals 58
    2.3.1 External Memory Interface 60
    2.3.2 Direct Memory Access 60
    2.3.3 Enhanced Host-Port Interface 61
    2.3.4 Multi-Channel Buffered Serial Ports 62
    2.3.5 Clock Generator and Timers 65
    2.3.6 General Purpose Input/Output Port 65
  2.4 TMS320C55x Addressing Modes 65
    2.4.1 Direct Addressing Modes 66
    2.4.2 Indirect Addressing Modes 68
    2.4.3 Absolute Addressing Modes 70
    2.4.4 Memory-Mapped Register Addressing Mode 70
    2.4.5 Register Bits Addressing Mode 71
    2.4.6 Circular Addressing Mode 72
  2.5 Pipeline and Parallelism 73
    2.5.1 TMS320C55x Pipeline 73
    2.5.2 Parallel Execution 74
  2.6 TMS320C55x Instruction Set 76
    2.6.1 Arithmetic Instructions 76
    2.6.2 Logic and Bit Manipulation Instructions 77
    2.6.3 Move Instruction 78
    2.6.4 Program Flow Control Instructions 78
  2.7 TMS320C55x Assembly Language Programming 82
    2.7.1 Assembly Directives 82
    2.7.2 Assembly Statement Syntax 84
  2.8 C Language Programming for TMS320C55x 86
    2.8.1 Data Types 86
    2.8.2 Assembly Code Generation by C Compiler 87
    2.8.3 Compiler Keywords and Pragma Directives 89
  2.9 Mixed C-and-Assembly Language Programming 90
  2.10 Experiments and Program Examples 93
    2.10.1 Interfacing C with Assembly Code 93
    2.10.2 Addressing Modes Using Assembly Programming 94
    2.10.3 Phase-Locked Loop and Timers 97
    2.10.4 EMIF Configuration for Using SDRAM 103
    2.10.5 Programming Flash Memory Devices 105
    2.10.6 Using McBSP 106
    2.10.7 AIC23 Configurations 109
    2.10.8 Direct Memory Access 111
  References 115
  Exercises 115

3 DSP Fundamentals and Implementation Considerations 121
  3.1 Digital Signals and Systems 121
    3.1.1 Elementary Digital Signals 121
    3.1.2 Block Diagram Representation of Digital Systems 123
  3.2 System Concepts 126
    3.2.1 Linear Time-Invariant Systems 126
    3.2.2 The z-Transform 130
    3.2.3 Transfer Functions 132
    3.2.4 Poles and Zeros 135
    3.2.5 Frequency Responses 138
    3.2.6 Discrete Fourier Transform 141
  3.3 Introduction to Random Variables 142
    3.3.1 Review of Random Variables 142
    3.3.2 Operations of Random Variables 144
  3.4 Fixed-Point Representations and Quantization Effects 147
    3.4.1 Fixed-Point Formats 147
    3.4.2 Quantization Errors 151
    3.4.3 Signal Quantization 151
    3.4.4 Coefficient Quantization 153
    3.4.5 Roundoff Noise 153
    3.4.6 Fixed-Point Toolbox 154
  3.5 Overflow and Solutions 157
    3.5.1 Saturation Arithmetic 157
    3.5.2 Overflow Handling 158
    3.5.3 Scaling of Signals 158
    3.5.4 Guard Bits 159
  3.6 Experiments and Program Examples 159
    3.6.1 Quantization of Sinusoidal Signals 160
    3.6.2 Quantization of Audio Signals 161
    3.6.3 Quantization of Coefficients 162
    3.6.4 Overflow and Saturation Arithmetic 164
    3.6.5 Function Approximations 167
    3.6.6 Real-Time Digital Signal Generation Using DSK 175
  References 180
  Exercises 180

4 Design and Implementation of FIR Filters 185
  4.1 Introduction to FIR Filters 185
    4.1.1 Filter Characteristics 185
    4.1.2 Filter Types 187
    4.1.3 Filter Specifications 189
    4.1.4 Linear-Phase FIR Filters 191
    4.1.5 Realization of FIR Filters 194
  4.2 Design of FIR Filters 196
    4.2.1 Fourier Series Method 197
    4.2.2 Gibbs Phenomenon 198
    4.2.3 Window Functions 201
    4.2.4 Design of FIR Filters Using MATLAB 206
    4.2.5 Design of FIR Filters Using FDATool 207
  4.3 Implementation Considerations 213
    4.3.1 Quantization Effects in FIR Filters 213
    4.3.2 MATLAB Implementations 216
    4.3.3 Floating-Point C Implementations 218
    4.3.4 Fixed-Point C Implementations 219
  4.4 Applications: Interpolation and Decimation Filters 220
    4.4.1 Interpolation 220
    4.4.2 Decimation 221
    4.4.3 Sampling-Rate Conversion 221
    4.4.4 MATLAB Implementations 224
  4.5 Experiments and Program Examples 225
    4.5.1 Implementation of FIR Filters Using Fixed-Point C 226
    4.5.2 Implementation of FIR Filter Using C55x Assembly Language 226
    4.5.3 Optimization for Symmetric FIR Filters 228
    4.5.4 Optimization Using Dual MAC Architecture 230
    4.5.5 Implementation of Decimation 232
    4.5.6 Implementation of Interpolation 233
    4.5.7 Sample Rate Conversion 234
    4.5.8 Real-Time Sample Rate Conversion Using DSP/BIOS and DSK 235
  References 245
  Exercises 245

5 Design and Implementation of IIR Filters 249
  5.1 Introduction 249
    5.1.1 Analog Systems 249
    5.1.2 Mapping Properties 251
    5.1.3 Characteristics of Analog Filters 252
    5.1.4 Frequency Transforms 254
  5.2 Design of IIR Filters 255
    5.2.1 Bilinear Transform 256
    5.2.2 Filter Design Using Bilinear Transform 257
  5.3 Realization of IIR Filters 258
    5.3.1 Direct Forms 258
    5.3.2 Cascade Forms 260
    5.3.3 Parallel Forms 262
    5.3.4 Realization of IIR Filters Using MATLAB 263
  5.4 Design of IIR Filters Using MATLAB 264
    5.4.1 Filter Design Using MATLAB 264
    5.4.2 Frequency Transforms Using MATLAB 267
    5.4.3 Design and Realization Using FDATool 268
  5.5 Implementation Considerations 271
    5.5.1 Stability 271
    5.5.2 Finite-Precision Effects and Solutions 273
    5.5.3 MATLAB Implementations 275
  5.6 Practical Applications 279
    5.6.1 Recursive Resonators 279
    5.6.2 Recursive Quadrature Oscillators 282
    5.6.3 Parametric Equalizers 284
  5.7 Experiments and Program Examples 285
    5.7.1 Floating-Point Direct-Form I IIR Filter 285
    5.7.2 Fixed-Point Direct-Form I IIR Filter 286
    5.7.3 Fixed-Point Direct-Form II Cascade IIR Filter 287
    5.7.4 Implementation Using DSP Intrinsics 289
    5.7.5 Implementation Using Assembly Language 290
    5.7.6 Real-Time Experiments Using DSP/BIOS 293
    5.7.7 Implementation of Parametric Equalizer 296
    5.7.8 Real-Time Two-Band Equalizer Using DSP/BIOS 297
  References 299
  Exercises 299

6 Frequency Analysis and Fast Fourier Transform 303
  6.1 Fourier Series and Transform 303
    6.1.1 Fourier Series 303
    6.1.2 Fourier Transform 304
  6.2 Discrete Fourier Transform 305
    6.2.1 Discrete-Time Fourier Transform 305
    6.2.2 Discrete Fourier Transform 307
    6.2.3 Important Properties 310
  6.3 Fast Fourier Transforms 313
    6.3.1 Decimation-in-Time 314
    6.3.2 Decimation-in-Frequency 316
    6.3.3 Inverse Fast Fourier Transform 317
  6.4 Implementation Considerations 317
    6.4.1 Computational Issues 317
    6.4.2 Finite-Precision Effects 318
    6.4.3 MATLAB Implementations 318
    6.4.4 Fixed-Point Implementation Using MATLAB 320
  6.5 Practical Applications 322
    6.5.1 Spectral Analysis 322
    6.5.2 Spectral Leakage and Resolution 323
    6.5.3 Power Spectrum Density 325
    6.5.4 Fast Convolution 328
  6.6 Experiments and Program Examples 332
    6.6.1 Floating-Point C Implementation of DFT 332
    6.6.2 C55x Assembly Implementation of DFT 332
    6.6.3 Floating-Point C Implementation of FFT 336
    6.6.4 C55x Intrinsics Implementation of FFT 338
    6.6.5 Assembly Implementation of FFT and Inverse FFT 339
    6.6.6 Implementation of Fast Convolution 343
    6.6.7 Real-Time FFT Using DSP/BIOS 345
    6.6.8 Real-Time Fast Convolution 347
  References 347
  Exercises 348

7 Adaptive Filtering 351
  7.1 Introduction to Random Processes 351
  7.2 Adaptive Filters 354
    7.2.1 Introduction to Adaptive Filtering 354
    7.2.2 Performance Function 355
    7.2.3 Method of Steepest Descent 358
    7.2.4 The LMS Algorithm 360
    7.2.5 Modified LMS Algorithms 361
  7.3 Performance Analysis 362
    7.3.1 Stability Constraint 362
    7.3.2 Convergence Speed 363
    7.3.3 Excess Mean-Square Error 363
    7.3.4 Normalized LMS Algorithm 364
  7.4 Implementation Considerations 364
    7.4.1 Computational Issues 365
    7.4.2 Finite-Precision Effects 365
    7.4.3 MATLAB Implementations 366
  7.5 Practical Applications 368
    7.5.1 Adaptive System Identification 368
    7.5.2 Adaptive Linear Prediction 369
    7.5.3 Adaptive Noise Cancelation 372
    7.5.4 Adaptive Notch Filters 374
    7.5.5 Adaptive Channel Equalization 375
  7.6 Experiments and Program Examples 377
    7.6.1 Floating-Point C Implementation 377
    7.6.2 Fixed-Point C Implementation of Leaky LMS Algorithm 379
    7.6.3 ETSI Implementation of NLMS Algorithm 380
    7.6.4 Assembly Language Implementation of Delayed LMS Algorithm 383
    7.6.5 Adaptive System Identification 387
    7.6.6 Adaptive Prediction and Noise Cancelation 388
    7.6.7 Adaptive Channel Equalizer 392
    7.6.8 Real-Time Adaptive Line Enhancer Using DSK 394
  References 396
  Exercises 397

8 Digital Signal Generators 401
  8.1 Sinewave Generators 401
    8.1.1 Lookup-Table Method 401
    8.1.2 Linear Chirp Signal 404
  8.2 Noise Generators 405
    8.2.1 Linear Congruential Sequence Generator 405
    8.2.2 Pseudo-Random Binary Sequence Generator 407
  8.3 Practical Applications 409
    8.3.1 Siren Generators 409
    8.3.2 White Gaussian Noise 409
    8.3.3 Dual-Tone Multifrequency Tone Generator 410
    8.3.4 Comfort Noise in Voice Communication Systems 411
  8.4 Experiments and Program Examples 412
    8.4.1 Sinewave Generator Using C5510 DSK 412
    8.4.2 White Noise Generator Using C5510 DSK 413
    8.4.3 Wail Siren Generator Using C5510 DSK 414
    8.4.4 DTMF Generator Using C5510 DSK 415
    8.4.5 DTMF Generator Using MATLAB Graphical User Interface 416
  References 418
  Exercises 418

9 Dual-Tone Multifrequency Detection 421
  9.1 Introduction 421
  9.2 DTMF Tone Detection 422
    9.2.1 DTMF Decode Specifications 422
    9.2.2 Goertzel Algorithm 423
    9.2.3 Other DTMF Detection Methods 426
    9.2.4 Implementation Considerations 428
  9.3 Internet Application Issues and Solutions 431
  9.4 Experiments and Program Examples 432
    9.4.1 Implementation of Goertzel Algorithm Using Fixed-Point C 432
    9.4.2 Implementation of Goertzel Algorithm Using C55x Assembly Language 434
    9.4.3 DTMF Detection Using C5510 DSK 435
    9.4.4 DTMF Detection Using All-Pole Modeling 439
  References 441
  Exercises 442

10 Adaptive Echo Cancelation 443
  10.1 Introduction to Line Echoes 443
  10.2 Adaptive Echo Canceler 444
    10.2.1 Principles of Adaptive Echo Cancelation 445
    10.2.2 Performance Evaluation 446
  10.3 Practical Considerations 447
    10.3.1 Prewhitening of Signals 447
    10.3.2 Delay Detection 448
  10.4 Double-Talk Effects and Solutions 450
  10.5 Nonlinear Processor 453
    10.5.1 Center Clipper 453
    10.5.2 Comfort Noise 453
  10.6 Acoustic Echo Cancelation 454
    10.6.1 Acoustic Echoes 454
    10.6.2 Acoustic Echo Canceler 456
    10.6.3 Subband Implementations 457
    10.6.4 Delay-Free Structures 459
    10.6.5 Implementation Considerations 459
    10.6.6 Testing Standards 460
  10.7 Experiments and Program Examples 461
    10.7.1 MATLAB Implementation of AEC 461
    10.7.2 Acoustic Echo Cancelation Using Floating-Point C 464
    10.7.3 Acoustic Echo Canceler Using C55x Intrinsics 468
    10.7.4 Experiment of Delay Estimation 469
  References 472
  Exercises 472

11 Speech-Coding Techniques 475
  11.1 Introduction to Speech-Coding 475
  11.2 Overview of CELP Vocoders 476
    11.2.1 Synthesis Filter 477
    11.2.2 Long-Term Prediction Filter 481
    11.2.3 Perceptual Based Minimization Procedure 481
    11.2.4 Excitation Signal 482
    11.2.5 Algebraic CELP 483
  11.3 Overview of Some Popular CODECs 484
    11.3.1 Overview of G.723.1 484
    11.3.2 Overview of G.729 488
    11.3.3 Overview of GSM AMR 490
  11.4 Voice over Internet Protocol Applications 492
    11.4.1 Overview of VoIP 492
    11.4.2 Real-Time Transport Protocol and Payload Type 493
    11.4.3 Example of Packing G.729 496
    11.4.4 RTP Data Analysis Using Ethereal Trace 496
    11.4.5 Factors Affecting the Overall Voice Quality 497
  11.5 Experiments and Program Examples 497
    11.5.1 Calculating LPC Coefficients Using Floating-Point C 497
    11.5.2 Calculating LPC Coefficients Using C55x Intrinsics 499
    11.5.3 MATLAB Implementation of Formant Perceptual Weighting Filter 504
    11.5.4 Implementation of Perceptual Weighting Filter Using C55x Intrinsics 506
  References 507
  Exercises 508

12 Speech Enhancement Techniques 509
  12.1 Introduction to Noise Reduction Techniques 509
  12.2 Spectral Subtraction Techniques 510
    12.2.1 Short-Time Spectrum Estimation 511
    12.2.2 Magnitude Subtraction 511
  12.3 Voice Activity Detection 513
  12.4 Implementation Considerations 515
    12.4.1 Spectral Averaging 515
    12.4.2 Half-Wave Rectification 515
    12.4.3 Residual Noise Reduction 516
  12.5 Combination of Acoustic Echo Cancelation with NR 516
  12.6 Voice Enhancement and Automatic Level Control 518
    12.6.1 Voice Enhancement Devices 518
    12.6.2 Automatic Level Control 519
  12.7 Experiments and Program Examples 519
    12.7.1 Voice Activity Detection 519
    12.7.2 MATLAB Implementation of NR Algorithm 522
    12.7.3 Floating-Point C Implementation of NR 522
    12.7.4 Mixed C55x Assembly and Intrinsics Implementations of VAD 522
    12.7.5 Combining AEC with NR 526
  References 529
  Exercises 529

13 Audio Signal Processing 531
  13.1 Introduction 531
  13.2 Basic Principles of Audio Coding 531
    13.2.1 Auditory-Masking Effects for Perceptual Coding 533
    13.2.2 Frequency-Domain Coding 536
    13.2.3 Lossless Audio Coding 538
  13.3 Multichannel Audio Coding 539
    13.3.1 MP3 540
    13.3.2 Dolby AC-3 541
    13.3.3 MPEG-2 AAC 542
  13.4 Connectivity Processing 544
  13.5 Experiments and Program Examples 544
    13.5.1 Floating-Point Implementation of MDCT 544
    13.5.2 Implementation of MDCT Using C55x Intrinsics 547
    13.5.3 Experiments of Preecho Effects 549
    13.5.4 Floating-Point C Implementation of MP3 Decoding 549
  References 553
  Exercises 553

14 Channel Coding Techniques 555
  14.1 Introduction 555
  14.2 Block Codes 556
    14.2.1 Reed–Solomon Codes 558
    14.2.2 Applications of Reed–Solomon Codes 562
    14.2.3 Cyclic Redundant Codes 563
  14.3 Convolutional Codes 564
    14.3.1 Convolutional Encoding 564
    14.3.2 Viterbi Decoding 564
    14.3.3 Applications of Viterbi Decoding 566
  14.4 Experiments and Program Examples 569
    14.4.1 Reed–Solomon Coding Using MATLAB 569
    14.4.2 Reed–Solomon Coding Using Simulink 570
    14.4.3 Verification of RS(255, 239) Generation Polynomial 571
    14.4.4 Convolutional Codes 572
    14.4.5 Implementation of Convolutional Codes Using C 573
    14.4.6 Implementation of CRC-32 575
  References 576
  Exercises 577

15 Introduction to Digital Image Processing 579
  15.1 Digital Images and Systems 579
    15.1.1 Digital Images 579
    15.1.2 Digital Image Systems 580
  15.2 RGB Color Spaces and Color Filter Array Interpolation 581
  15.3 Color Spaces 584
    15.3.1 YCbCr and YUV Color Spaces 584
    15.3.2 CYMK Color Space 585
    15.3.3 YIQ Color Space 585
    15.3.4 HSV Color Space 585
  15.4 YCbCr Subsampled Color Spaces 586
  15.5 Color Balance and Correction 586
    15.5.1 Color Balance 587
    15.5.2 Color Adjustment 588
    15.5.3 Gamma Correction 589
  15.6 Image Histogram 590
  15.7 Image Filtering 591
  15.8 Image Filtering Using Fast Convolution 596
  15.9 Practical Applications 597
    15.9.1 JPEG Standard 597
    15.9.2 2-D Discrete Cosine Transform 599
  15.10 Experiments and Program Examples 601
    15.10.1 YCbCr to RGB Conversion 601
    15.10.2 Using CCS Link with DSK and Simulator 604
    15.10.3 White Balance 607
    15.10.4 Gamma Correction and Contrast Adjustment 610
    15.10.5 Histogram and Histogram Equalization 611
    15.10.6 2-D Image Filtering 613
    15.10.7 Implementation of DCT and IDCT 617
    15.10.8 TMS320C55x Image Accelerator for DCT and IDCT 621
    15.10.9 TMS320C55x Hardware Accelerator Image/Video Processing Library 623
  References 625
  Exercises 625

Appendix A Some Useful Formulas and Definitions 627
  A.1 Trigonometric Identities 627
  A.2 Geometric Series 628
  A.3 Complex Variables 628
  A.4 Units of Power 630
  References 631

Appendix B Software Organization and List of Experiments 633

Index 639
Preface

In recent years, digital signal processing (DSP) has expanded beyond filtering, frequency analysis, and signal generation. More and more markets are opening up to DSP applications, where in the past, real-time signal processing was not feasible or was too expensive. Real-time signal processing using general-purpose DSP processors provides an effective way to design and implement DSP algorithms for real-world applications. However, this is very challenging work in today's engineering fields. With DSP penetrating into many practical applications, the demand for high-performance digital signal processors has expanded rapidly in recent years. Many industrial companies are currently engaged in real-time DSP research and development. Therefore, it becomes increasingly important for today's students, practicing engineers, and development researchers to master not only the theory of DSP, but also the skill of real-time DSP system design and implementation techniques.

This book provides fundamental real-time DSP principles and uses a hands-on approach to introduce DSP algorithms, system design, real-time implementation considerations, and many practical applications. This book contains many useful examples like hands-on experiment software and DSP programs using MATLAB, Simulink, C, and DSP assembly languages. Also included are various exercises for further exploring the extensions of the examples and experiments. The book uses the Texas Instruments' Code Composer Studio (CCS) with the Spectrum Digital TMS320VC5510 DSP starter kit (DSK) development tool for real-time experiments and applications.

This book emphasizes real-time DSP applications and is intended as a text for senior/graduate-level college students. The prerequisites of this book are signals and systems concepts, microprocessor architecture and programming, and basic C programming knowledge.
These topics are covered at the sophomore and junior levels of electrical and computer engineering, computer science, and other related engineering curricula. This book can also serve as a desktop reference for DSP engineers, algorithm developers, and embedded system programmers to learn DSP concepts and to develop real-time DSP applications on the job. We use a practical approach that avoids numerous theoretical derivations. A list of DSP textbooks with mathematical proofs is given at the end of each chapter. Also helpful are the manuals and application notes for the TMS320C55x DSP processors from Texas Instruments at www.ti.com, and for MATLAB and Simulink from The MathWorks at www.mathworks.com.

This is the second edition of the book titled 'Real-Time Digital Signal Processing: Implementations, Applications and Experiments with the TMS320C55x' by Kuo and Lee, John Wiley & Sons, Ltd. in 2001. The major changes included in the revision are:

1. To utilize the effective software development process that begins from algorithm design and verification using MATLAB and floating-point C, to finite-wordlength analysis, fixed-point C implementation and code optimization using intrinsics, assembly routines, and mixed C-and-assembly programming
on fixed-point DSP processors. This step-by-step software development and optimization process is applied to the finite-impulse response (FIR) filtering, infinite-impulse response (IIR) filtering, adaptive filtering, fast Fourier transform, and many real-life applications in Chapters 8–15.

2. To add several widely used DSP applications such as speech coding, channel coding, audio coding, image processing, signal generation and detection, echo cancelation, and noise reduction by expanding Chapter 9 of the first edition to eight new chapters with the necessary background to perform the experiments using the optimized software development process.

3. To design and analyze DSP algorithms using the most effective MATLAB graphic user interface (GUI) tools such as Signal Processing Tool (SPTool), Filter Design and Analysis Tool (FDATool), etc. These tools are powerful for filter designing, analysis, quantization, testing, and implementation.

4. To add step-by-step experiments to create CCS DSP/BIOS applications, configure the TMS320VC5510 DSK for real-time audio applications, and utilize MATLAB's Link for CCS feature to improve DSP development, debug, analyze, and test efficiencies.

5. To update experiments to include new sets of hands-on exercises and applications. Also, to update all programs using the most recent version of software and the TMS320C5510 DSK board for real-time experiments.

There are many existing DSP algorithms and applications available in MATLAB and floating-point C programs. This book provides a systematic software development process for converting these programs to fixed-point C and optimizing them for implementation on commercially available fixed-point DSP processors. To effectively illustrate real-time DSP concepts and applications, MATLAB is used for analysis and filter design, C programs are used for implementing DSP algorithms, and CCS is integrated into TMS320C55x experiments and applications.
To efficiently utilize the advanced DSP architecture for fast software development and maintenance, the mixing of C and assembly programs is emphasized.

This book is organized into two parts: DSP implementation and DSP application. Part I, DSP implementation (Chapters 1–7), discusses real-time DSP principles, architectures, algorithms, and implementation considerations. Chapter 1 reviews the fundamentals of real-time DSP functional blocks, DSP hardware options, fixed- and floating-point DSP devices, real-time constraints, algorithm development, selection of DSP chips, and software development. Chapter 2 introduces the architecture and assembly programming of the TMS320C55x DSP processor. Chapter 3 presents fundamental DSP concepts and practical considerations for the implementation of digital filters and algorithms on DSP hardware. Chapter 4 focuses on the design, implementation, and application of FIR filters. Digital IIR filters are covered in Chapter 5, and adaptive filters are presented in Chapter 7. The development, implementation, and application of FFT algorithms are introduced in Chapter 6.

Part II, DSP application (Chapters 8–15), introduces several popular real-world applications in signal processing that have played important roles in the realization of the systems. These selected DSP applications include signal (sinewave, noise, and multitone) generation in Chapter 8, dual-tone multifrequency detection in Chapter 9, adaptive echo cancelation in Chapter 10, speech-coding algorithms in Chapter 11, speech enhancement techniques in Chapter 12, audio coding methods in Chapter 13, error correction coding techniques in Chapter 14, and image processing fundamentals in Chapter 15.

As with any book attempting to capture the state of the art at a given time, there will certainly be updates that are necessitated by the rapidly evolving developments in this dynamic field.
We are certain that this book will serve as a guide for what has already come and as an inspiration for what will follow.
Software Availability

This text utilizes various MATLAB, floating-point and fixed-point C, DSP assembly and mixed C and assembly programs for the examples, experiments, and applications. These programs along with many other programs and real-world data files are available in the companion CD. The directory structure and the subdirectory names are explained in Appendix B. The software will assist in gaining insight into the understanding and implementation of DSP algorithms, and it is required for doing experiments in the last section of each chapter. Some of these experiments involve minor modifications of the example code. By examining, studying, and modifying the example code, the software can also be used as a prototype for other practical applications. Every attempt has been made to ensure the correctness of the code. We would appreciate readers bringing to our attention (kuo@ceet.niu.edu) any coding errors so that we can correct, update, and post them on the website http://www.ceet.niu.edu/faculty/kuo.

Acknowledgments

We are grateful to Cathy Wicks and Gene Frantz of Texas Instruments, and to Naomi Fernandes and Courtney Esposito of The MathWorks for providing us with the support needed to write this book. We would like to thank several individuals at Wiley for their support on this project: Simone Taylor, Executive Commissioning Editor; Emily Bone, Assistant Editor; and Lucy Bryan, Executive Project Editor. We also thank the staff at Wiley for the final preparation of this book. Finally, we thank our families for the endless love, encouragement, patience, and understanding they have shown throughout this period.

Sen M. Kuo, Bob H. Lee and Wenshun Tian
1 Introduction to Real-Time Digital Signal Processing

Signals can be divided into three categories: continuous-time (analog) signals, discrete-time signals, and digital signals. The signals that we encounter daily are mostly analog signals. These signals are defined continuously in time, have an infinite range of amplitude values, and can be processed using analog electronics containing both active and passive circuit elements. Discrete-time signals are defined only at a particular set of time instances. Therefore, they can be represented as a sequence of numbers that have a continuous range of values. Digital signals have discrete values in both time and amplitude; thus, they can be processed by computers or microprocessors. In this book, we will present the design, implementation, and applications of digital systems for processing digital signals using digital hardware. However, the analysis usually uses discrete-time signals and systems for mathematical convenience. Therefore, we use the terms 'discrete-time' and 'digital' interchangeably.

Digital signal processing (DSP) is concerned with the digital representation of signals and the use of digital systems to analyze, modify, store, or extract information from these signals. Much research has been conducted to develop DSP algorithms and systems for real-world applications. In recent years, the rapid advancement in digital technologies has supported the implementation of sophisticated DSP algorithms for real-time applications. DSP is now used not only in areas where analog methods were used previously, but also in areas where applying analog techniques is very difficult or impossible.

There are many advantages in using digital techniques for signal processing rather than traditional analog devices, such as amplifiers, modulators, and filters. Some of the advantages of a DSP system over analog circuitry are summarized as follows:

1. Flexibility: Functions of a DSP system can be easily modified and upgraded with software that implements the specific applications. One can design a DSP system that can be programmed to perform a wide variety of tasks by executing different software modules. A digital electronic device can be easily upgraded in the field through the onboard memory devices (e.g., flash memory) to meet new requirements or improve its features.

2. Reproducibility: The performance of a DSP system can be repeated precisely from one unit to another. In addition, by using DSP techniques, digital signals such as audio and video streams can be stored, transferred, or reproduced many times without degrading the quality. By contrast, analog circuits
will not have the same characteristics even if they are built following identical specifications due to analog component tolerances.

3. Reliability: The memory and logic of DSP hardware does not deteriorate with age. Therefore, the field performance of DSP systems will not drift with changing environmental conditions or aged electronic components as their analog counterparts do.

4. Complexity: DSP allows sophisticated applications such as speech recognition and image compression to be implemented with lightweight and low-power portable devices. Furthermore, there are some important signal processing algorithms such as error correcting codes, data transmission and storage, and data compression, which can only be performed using DSP systems.

With the rapid evolution in semiconductor technologies, DSP systems have a lower overall cost compared to analog systems for most applications. DSP algorithms can be developed, analyzed, and simulated using high-level language and software tools such as C/C++ and MATLAB (matrix laboratory). The performance of the algorithms can be verified using a low-cost, general-purpose computer. Therefore, a DSP system is relatively easy to design, develop, analyze, simulate, test, and maintain.

There are some limitations associated with DSP. For instance, the bandwidth of a DSP system is limited by the sampling rate and hardware peripherals. Also, DSP algorithms are implemented using a fixed number of bits with a limited precision and dynamic range (the ratio between the largest and smallest numbers that can be represented), which results in quantization and arithmetic errors. Thus, the system performance might be different from the theoretical expectation.

1.1 Basic Elements of Real-Time DSP Systems

There are two types of DSP applications: non-real-time and real-time.
Non-real-time signal processing involves manipulating signals that have already been collected in digital forms. This may or may not represent a current action, and the requirement for the processing result is not a function of real time. Real-time signal processing places stringent demands on DSP hardware and software designs to complete predefined tasks within a certain time frame. This chapter reviews the fundamental functional blocks of real-time DSP systems.

The basic functional blocks of DSP systems are illustrated in Figure 1.1, where a real-world analog signal is converted to a digital signal, processed by DSP hardware, and converted back into an analog

[Figure 1.1: Basic functional block diagram of a real-time DSP system. Input channel: x′(t) → amplifier → x(t) → antialiasing filter → ADC → x(n). The DSP hardware, which also exchanges data with other digital systems, produces y(n). Output channel: y(n) → DAC → reconstruction filter → y′(t) → amplifier → y(t).]
signal. Each of the functional blocks in Figure 1.1 will be introduced in the subsequent sections. For some applications, the input signal may already be in digital form and/or the output data may not need to be converted to an analog signal. For example, the processed digital information may be stored in computer memory for later use, or it may be displayed graphically. In other applications, the DSP system may be required to generate signals digitally, such as speech synthesis used for computerized services or pseudo-random number generators for CDMA (code division multiple access) wireless communication systems.

1.2 Analog Interface

In this book, a time-domain signal is denoted with a lowercase letter. For example, x(t) in Figure 1.1 is used to name an analog signal of x which is a function of time t. The time variable t and the amplitude of x(t) take on a continuum of values between −∞ and ∞. For this reason we say x(t) is a continuous-time signal. The signals x(n) and y(n) in Figure 1.1 depict digital signals which are only meaningful at time instant n. In this section, we first discuss how to convert analog signals into digital signals so that they can be processed using DSP hardware. The process of converting an analog signal to a digital signal is called the analog-to-digital conversion, usually performed by an analog-to-digital converter (ADC).

The purpose of signal conversion is to prepare real-world analog signals for processing by digital hardware. As shown in Figure 1.1, the analog signal x′(t) is picked up by an appropriate electronic sensor that converts pressure, temperature, or sound into electrical signals. For example, a microphone can be used to collect sound signals. The sensor signal x′(t) is amplified by an amplifier with gain value g. The amplified signal is

x(t) = gx′(t).    (1.1)

The gain value g is determined such that x(t) has a dynamic range that matches the ADC used by the system.
If the full-scale input range of the ADC is ±5 V, then g may be set so that the amplitude of the signal x(t) presented to the ADC stays within ±5 V. In practice, it is very difficult to set an appropriate fixed gain, because the level of x′(t) may be unknown and may change with time, especially for signals with a large dynamic range such as human speech.

Once the input digital signal has been processed by the DSP hardware, the result y(n) is still in digital form. In many DSP applications, we need to reconstruct the analog signal after the completion of digital processing: we must convert the digital signal y(n) back to the analog signal y(t) before it is applied to an appropriate analog device. This process is called digital-to-analog conversion, typically performed by a digital-to-analog converter (DAC). One example is audio CD (compact disc) players, for which the audio signals are stored in digital form on CDs. A CD player reads the encoded digital audio signals from the disc and reconstructs the corresponding analog waveform for playback via loudspeakers.

The system shown in Figure 1.1 is a real-time system if the signal to the ADC is continuously sampled and the ADC presents a new sample to the DSP hardware at the same rate. To maintain real-time operation, the DSP hardware must perform all required operations within the fixed time period, and present an output sample to the DAC before the arrival of the next sample from the ADC.

1.2.1 Sampling

As shown in Figure 1.1, the ADC converts the analog signal x(t) into the digital signal x(n). Analog-to-digital conversion, commonly referred to as digitization, consists of the sampling (digitization in time) and quantization (digitization in amplitude) processes illustrated in Figure 1.2. The sampling process depicts an analog signal as a sequence of values.
The basic sampling function can be carried out with an ideal ‘sample-and-hold’ circuit, which maintains the sampled signal level until the next sample is taken.
[Figure 1.2 Block diagram of an ADC]

The quantization process approximates a waveform by assigning a number to each sample. Therefore, analog-to-digital conversion performs the following steps:

1. The bandlimited signal x(t) is sampled at uniformly spaced instants of time nT, where n is a positive integer and T is the sampling period in seconds. This sampling process converts an analog signal into a discrete-time signal x(nT) with continuous amplitude values.

2. The amplitude of each discrete-time sample is quantized into one of 2^B levels, where B is the number of bits the ADC uses to represent each sample. The discrete amplitude levels are represented (or encoded) as distinct binary words x(n) with a fixed wordlength B.

The reason for making this distinction is that the two processes introduce different distortions: the sampling process brings in aliasing (folding) distortion, while the encoding process results in quantization noise. As shown in Figure 1.2, the sampler and quantizer are integrated on the same chip; however, high-speed ADCs typically require an external sample-and-hold device.

An ideal sampler can be considered as a switch that periodically opens and closes every T seconds. The sampling period is defined as

T = 1/fs,   (1.2)

where fs is the sampling frequency (or sampling rate) in hertz (cycles per second). The intermediate signal x(nT) is a discrete-time signal with continuous value (a number of infinite precision) at the discrete times nT, n = 0, 1, . . . , ∞, as illustrated in Figure 1.3. The analog signal x(t) is continuous in both time and amplitude; the sampled discrete-time signal x(nT) is continuous in amplitude, but is defined only at the discrete sampling instants t = nT.

[Figure 1.3 Example of analog signal x(t) and discrete-time signal x(nT)]
In order to represent an analog signal x(t) by a discrete-time signal x(nT) accurately, the sampling frequency fs must be at least twice the maximum frequency component fM in the analog signal x(t). That is,

fs ≥ 2 fM,   (1.3)

where fM is also called the bandwidth of the signal x(t). This is Shannon's sampling theorem, which states that when the sampling frequency is greater than twice the highest frequency component contained in the analog signal, the original signal x(t) can be perfectly reconstructed from the corresponding discrete-time signal x(nT). The minimum sampling rate fs = 2 fM is called the Nyquist rate. The frequency fN = fs/2 is called the Nyquist frequency or folding frequency, and the frequency interval [−fs/2, fs/2] is called the Nyquist interval. When an analog signal is sampled at fs, frequency components higher than fs/2 fold back into the frequency range [0, fs/2]. The folded-back frequency components overlap the original frequency components in the same range, so the original analog signal cannot be recovered from the sampled data. This undesired effect is known as aliasing.

Example 1.1: Consider two sinewaves of frequencies f1 = 1 Hz and f2 = 5 Hz that are sampled at fs = 4 Hz, rather than at 10 Hz as required by the sampling theorem. The analog waveforms are illustrated in Figure 1.4(a), while their digital samples and reconstructed waveforms are illustrated in Figure 1.4(b).

[Figure 1.4 Example of the aliasing phenomenon: (a) original analog waveforms and digital samples for f1 = 1 Hz and f2 = 5 Hz; (b) digital samples of f1 = 1 Hz and f2 = 5 Hz and reconstructed waveforms]

As shown in the figures, we can reconstruct the original waveform from the digital samples for the sinewave of frequency f1 = 1 Hz. However, for the original sinewave of frequency f2 = 5 Hz, the reconstructed signal is identical to the sinewave of frequency 1 Hz. Therefore, f1 and f2 are said to be aliased to one another, i.e., they cannot be distinguished by their discrete-time samples.

Note that the sampling theorem assumes the signal is bandlimited. In most practical applications, the analog signal x(t) may have significant energy outside the highest frequency of interest, or may contain noise with a wider bandwidth. In some cases, the sampling rate is predetermined by a given application. For example, most voice communication systems use an 8 kHz sampling rate. Unfortunately, the frequency components in a speech signal can be much higher than 4 kHz. To guarantee that the sampling theorem of Equation (1.3) is fulfilled, we must block the frequency components above the Nyquist frequency. This can be done with an antialiasing filter, an analog lowpass filter with cutoff frequency

fc ≤ fs/2.   (1.4)

Ideally, an antialiasing filter should remove all frequency components above the Nyquist frequency. In many practical systems, a bandpass filter is preferred, to remove frequency components above the Nyquist frequency as well as to reject undesired DC offset, 60 Hz hum, and other low-frequency noise. For example, a bandpass filter with passband from 300 to 3200 Hz is often found in telecommunication systems.

Since the antialiasing filters used in real-world applications are not ideal filters, they cannot completely remove all frequency components outside the Nyquist interval. In addition, since the phase response of an analog filter may not be linear, the frequency components of the signal will not be delayed by amounts proportional to their frequencies.
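The aliasing in Example 1.1 can be verified numerically. The following Python sketch (the book's own experiments use C on the C55x) computes only the sample values, with no reconstruction:

```python
import math

fs = 4.0                     # sampling rate (Hz), below 2*f2 = 10 Hz
n = range(16)                # sample indices
x1 = [math.sin(2 * math.pi * 1.0 * k / fs) for k in n]  # f1 = 1 Hz
x2 = [math.sin(2 * math.pi * 5.0 * k / fs) for k in n]  # f2 = 5 Hz

# 5 Hz folds back to |5 - fs| = 1 Hz, so the two sample sequences coincide.
print(all(abs(a - b) < 1e-12 for a, b in zip(x1, x2)))  # True
```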
In general, a lowpass (or bandpass) filter with a steeper roll-off will introduce more phase distortion. Higher sampling rates allow simple, low-cost antialiasing filters with minimal phase distortion to be used. This technique, known as oversampling, is widely used in audio applications.

Example 1.2: The range of sampling rates required by DSP systems is large, from approximately 1 Hz in instrumentation to 1 GHz in radar. Given the sampling rate for a specific application, the sampling period can be determined by Equation (1.2). Some real-world applications use the following sampling frequencies and periods:

1. In International Telecommunication Union (ITU) speech compression standards, the sampling rate of ITU-T G.729 and G.723.1 is fs = 8 kHz, thus the sampling period T = 1/8000 s = 125 μs. Note that 1 μs = 10^−6 s.

2. Wideband telecommunication systems, such as ITU-T G.722, use a sampling rate of fs = 16 kHz, thus T = 1/16 000 s = 62.5 μs.

3. In audio CDs, the sampling rate is fs = 44.1 kHz, thus T = 1/44 100 s = 22.676 μs.

4. High-fidelity audio systems, such as the MPEG-2 (moving picture experts group) AAC (advanced audio coding) standard, the MP3 (MPEG layer 3) audio compression standard, and Dolby AC-3, have a sampling rate of fs = 48 kHz, and thus T = 1/48 000 s = 20.833 μs. The sampling rate for MPEG-2 AAC can be as high as 96 kHz.

The speech compression algorithms will be discussed in Chapter 11, and the audio coding techniques will be introduced in Chapter 13.
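The sampling periods in Example 1.2 follow directly from Equation (1.2); a short Python check (illustrative only):

```python
# T = 1/fs for the sampling rates in Example 1.2 (periods in microseconds).
rates_hz = {"G.729/G.723.1": 8_000, "G.722": 16_000,
            "Audio CD": 44_100, "MPEG/AC-3": 48_000}

for name, fs in rates_hz.items():
    T_us = 1e6 / fs          # sampling period in microseconds
    print(f"{name}: T = {T_us:.3f} us")
# G.729/G.723.1: T = 125.000 us
# G.722: T = 62.500 us
# Audio CD: T = 22.676 us
# MPEG/AC-3: T = 20.833 us
```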
1.2.2 Quantization and Encoding

In the previous sections, we assumed that the sample values x(nT) are represented exactly with an infinite number of bits (i.e., B → ∞). We now discuss how to represent the sampled discrete-time signal x(nT) as a binary number with a finite number of bits. This is the quantization and encoding process. If the wordlength of an ADC is B bits, there are 2^B different values (levels) that can be used to represent a sample. If x(n) lies between two quantization levels, it will be either rounded or truncated: rounding replaces x(n) by the value of the nearest quantization level, while truncation replaces x(n) by the value of the level below it. Since rounding produces a less biased representation of the analog values, it is widely used by ADCs. Therefore, quantization is a process that represents an analog-valued sample x(nT) by its nearest level, which corresponds to the digital signal x(n).

For example, we can use 2 bits to define four equally spaced levels (00, 01, 10, and 11) that classify the signal into four subranges, as illustrated in Figure 1.5. In this figure, the symbol 'o' represents the discrete-time signal x(nT), and the symbol '•' represents the digital signal x(n).

[Figure 1.5 Digital samples using a 2-bit quantizer]

The spacing between two consecutive quantization levels is called the quantization width, step, or resolution. If the spacing between the levels is the same, we have a uniform quantizer; for uniform quantization, the resolution is given by dividing the full-scale range by the number of quantization levels, 2^B. In Figure 1.5, the difference between the quantized number and the original value is defined as the quantization error, which appears as noise in the converter output. It is also called quantization noise, and is assumed to be a uniformly distributed random variable. If a B-bit quantizer is used, the signal-to-quantization-noise ratio (SQNR) is approximated by (to be derived in Chapter 3)

SQNR ≈ 6B dB.   (1.5)

This is a theoretical maximum; in practice, the achievable SQNR will be less than this value due to imperfections in the fabrication of converters. However, Equation (1.5) still provides a simple guideline for determining the number of bits required for a given application: each additional bit gives a digital signal about 6 dB of gain in SQNR. The problems of quantization and their solutions will be further discussed in Chapter 3.

Example 1.3: If the input signal varies between 0 and 5 V, we have the following resolutions and SQNRs for commonly used data converters:

1. An 8-bit ADC with 256 (2^8) levels can only provide 19.5 mV resolution and 48 dB SQNR.

2. A 12-bit ADC has 4096 (2^12) levels of 1.22 mV resolution, and provides 72 dB SQNR.
3. A 16-bit ADC has 65 536 (2^16) levels, and thus provides 76.294 μV resolution with 96 dB SQNR.

Obviously, with more quantization levels, one can represent analog signals more accurately.

The dynamic range of speech signals is very large. If the uniform quantization scheme shown in Figure 1.5 is scaled to represent loud sounds adequately, most of the softer sounds may be quantized to the same small value; this means that soft sounds may not be distinguishable. To solve this problem, a quantizer whose quantization step varies according to the signal amplitude can be used. In practice, such a nonuniform quantizer uses uniform levels, but the input signal is first compressed with a logarithmic function; that is, the logarithm-scaled signal, rather than the original input signal itself, is quantized. The compressed signal can be reconstructed by expanding it. This process of compression and expansion is called companding (compressing and expanding). For example, the ITU-T G.711 μ-law (used in North America and parts of Northeast Asia) and A-law (used in Europe and most of the rest of the world) companding schemes are used in most digital telecommunication systems. The A-law companding scheme gives slightly better performance at high signal levels, while the μ-law is better at low levels.

As shown in Figure 1.1, the input signal to the DSP hardware may be a digital signal from another DSP system. In this case, the sampling rate of the digital signal from the other system must be known. The signal processing techniques called interpolation and decimation can be used to increase or decrease a digital signal's sampling rate. Sampling rate changes are required in many multirate DSP systems, for example when interconnecting DSP systems that operate at different rates.
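The companding idea can be sketched with the standard continuous μ-law curve, F(x) = sgn(x) · ln(1 + μ|x|)/ln(1 + μ) with μ = 255. Note this Python sketch shows only the continuous curve; G.711 itself uses a segmented, piecewise-linear 8-bit approximation of it:

```python
import math

MU = 255.0  # mu-law parameter used in North American/Japanese systems

def mu_compress(x):
    """Compress x in [-1, 1] with the continuous mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse of mu_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A soft sound (|x| = 0.01) is boosted before uniform quantization,
# so it occupies many more quantization levels than it would otherwise:
print(round(mu_compress(0.01), 3))  # 0.228

x = 0.25
assert abs(mu_expand(mu_compress(x)) - x) < 1e-12  # round trip recovers x
```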
1.2.3 Smoothing Filters

Most commercial DACs are zero-order-hold devices, meaning they convert the input binary number to the corresponding voltage level and then hold that level for T seconds until the next sampling instant. Therefore, the DAC produces a staircase-shaped analog waveform y′(t), as shown by the solid line in Figure 1.6: a sequence of rectangular pulses with amplitude equal to the input value and duration T. This staircase output contains high-frequency components due to the abrupt changes in signal level.

[Figure 1.6 Staircase waveform generated by a DAC]

The reconstruction or smoothing filter shown in Figure 1.1 smooths the staircase-like analog signal generated by the DAC. This lowpass filtering has the effect of rounding off the corners (high-frequency components) of the staircase signal, making it smoother, as shown by the dotted line in Figure 1.6. This analog lowpass filter may have the same specifications as the antialiasing filter, with cutoff frequency fc ≤ fs/2. High-quality DSP applications, such as professional digital audio, require reconstruction filters with very stringent specifications. To reduce the cost of a high-quality analog filter, the oversampling technique can be adopted, allowing the use of a low-cost filter with slower roll-off.

1.2.4 Data Converters

There are two schemes for connecting ADCs and DACs to DSP processors: serial and parallel. A parallel converter receives or transmits all B bits in one pass, while a serial converter receives or transmits the B bits in a serial bit stream. Parallel converters must be attached to the DSP processor's external address and data buses, which are also attached to many different types of devices. Serial converters, in contrast, can be connected directly to the built-in serial ports of DSP processors; this is why many practical DSP systems use serial ADCs and DACs.

Many applications use a single-chip device called an analog interface chip (AIC) or coder/decoder (CODEC), which integrates an antialiasing filter, an ADC, a DAC, and a reconstruction filter on a single piece of silicon. In this book, we will use Texas Instruments' TLV320AIC23 (AIC23) chip on the DSP starter kit (DSK) for real-time experiments. Typical applications using CODECs include modems, speech systems, audio systems, and industrial controllers. Many standards that specify the nature of the CODEC have evolved for the purposes of switching and transmission. Some CODECs use a logarithmic quantizer, i.e., A-law or μ-law, which must be converted into a linear format for processing. DSP processors implement the required format conversion (compression or expansion) either in hardware, or in software using a lookup table or calculation.

The most popular commercially available ADCs are the successive-approximation, dual-slope, flash, and sigma–delta types. The successive-approximation ADC produces a B-bit output in B clock cycles by comparing the input waveform with the output of a DAC.
This device uses a successive-approximation register to split the voltage range in half at each step to determine where the input signal lies. According to the comparator result, one bit is set or reset each time, proceeding from the most significant bit to the least significant bit. The successive-approximation type of ADC is generally accurate and fast at a relatively low cost. However, its ability to follow changes in the input signal is limited by its internal clock rate, so it may be slow to respond to sudden changes in the input signal.

The dual-slope ADC uses an integrator connected to the input voltage and a reference voltage. The integrator starts at the zero condition and is charged for a fixed time. It is then switched to a known negative reference voltage and charged in the opposite direction until it reaches zero volts again; simultaneously, a digital counter records the clock cycles. The number of counts required for the integrator output voltage to return to zero is directly proportional to the input voltage. This technique is very precise and can produce ADCs with high resolution. Since the same integrator is used for the input and reference voltages, small variations in temperature and aging of components have little or no effect on this type of converter. However, dual-slope ADCs are very slow and generally cost more than successive-approximation ADCs.

In a flash ADC, a voltage divider made of resistors sets the reference voltages at the comparator inputs. The major advantage of a flash ADC is its speed of conversion, which is simply the propagation delay of the comparators. Unfortunately, a B-bit flash ADC requires (2^B − 1) expensive comparators and laser-trimmed resistors; therefore, commercially available flash ADCs usually have lower resolution (fewer bits).

Sigma–delta ADCs use oversampling and quantization noise shaping to trade quantizer resolution for sampling rate.
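The noise-shaping loop can be illustrated with a first-order modulator sketch. This Python model is an idealized discrete-time simplification (real converters operate on the analog signal); the function name and the DC-input test are hypothetical:

```python
def sigma_delta_1bit(samples):
    """First-order sigma-delta modulator: integrate the error between the
    input and the fed-back 1-bit output, then quantize to +/-1."""
    integ = 0.0
    bits = []
    for x in samples:
        integ += x - (bits[-1] if bits else 0.0)    # Sigma: accumulate error
        bits.append(1.0 if integ >= 0.0 else -1.0)  # Delta: 1-bit quantizer
    return bits

# Averaging the bitstream (the decimator's lowpass role) recovers the
# input level, because the loop forces the running error to stay bounded:
bits = sigma_delta_1bit([0.3] * 10_000)
print(round(sum(bits) / len(bits), 2))  # 0.3
```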
The block diagram of a sigma–delta ADC is illustrated in Figure 1.7; it uses a 1-bit quantizer with a very high sampling rate. Thus, the requirements for the antialiasing filter are significantly relaxed (i.e., a lower roll-off rate suffices). A low-order antialiasing filter requires simple, low-cost analog circuitry and is much easier to build and maintain. In the process of quantization, the resulting noise power is spread evenly over the entire spectrum; the quantization noise beyond the required spectral range can then be filtered out with an appropriate digital lowpass filter, so the noise power within the frequency band of interest is lower. A decimator is used to match the sampling frequency with the rest of the system and to increase the resolution.

[Figure 1.7 A conceptual sigma–delta ADC block diagram]

The advantages of sigma–delta ADCs are high resolution and good noise characteristics at a competitive price, achieved by using digital filters.

Example 1.4: In this book, we use the TMS320VC5510 DSK for real-time experiments. The C5510 DSK uses an AIC23 stereo CODEC for input and output of audio signals. The ADCs and DACs within the AIC23 use multi-bit sigma–delta technology with integrated oversampling digital interpolation filters. The chip supports data wordlengths of 16, 20, 24, and 32 bits, with sampling rates from 8 to 96 kHz, including the CD-standard 44.1 kHz. Integrated analog features include stereo line inputs and a stereo headphone amplifier with analog volume control. Its power management allows selective shutdown of CODEC functions, thus extending battery life in portable applications such as portable audio and video players and digital recorders.

1.3 DSP Hardware

DSP systems are required to perform intensive arithmetic operations such as multiplication and addition. These tasks may be implemented on microprocessors, microcontrollers, digital signal processors, or custom integrated circuits. The selection of appropriate hardware is determined by the application, cost, or a combination of both. This section introduces different digital hardware implementations for DSP applications.

1.3.1 DSP Hardware Options

As shown in Figure 1.1, the processing of the digital signal x(n) is performed by the DSP hardware. Although it is possible to implement DSP algorithms on any digital computer, the real application determines the optimum hardware platform. Five hardware platforms are widely used for DSP systems:

1. special-purpose (custom) chips such as application-specific integrated circuits (ASIC);

2. field-programmable gate arrays (FPGA);

3.
general-purpose microprocessors or microcontrollers (μP/μC); 4. general-purpose digital signal processors (DSP processors); and 5. DSP processors with application-specific hardware (HW) accelerators. The hardware characteristics of these options are summarized in Table 1.1.
Table 1.1 Summary of DSP hardware implementations

                    ASIC    FPGA        μP/μC        DSP processor  DSP processor with HW accelerators
Flexibility         None    Limited     High         High           Medium
Design time         Long    Medium      Short        Short          Short
Power consumption   Low     Low–medium  Medium–high  Low–medium     Low–medium
Performance         High    High        Low–medium   Medium–high    High
Development cost    High    Medium      Low          Low            Low
Production cost     Low     Low–medium  Medium–high  Low–medium     Medium

ASIC devices are usually designed for specific tasks that require many computations, such as digital subscriber loop (DSL) modems, or for high-volume products that use mature algorithms, such as fast Fourier transforms and Reed–Solomon codes. These devices perform their limited functions much faster than general-purpose processors because of their dedicated architectures. Such application-specific products enable high-speed functions optimized in hardware, but they lack the programmability to modify the algorithms and functions. They are suitable for implementing well-defined and well-tested DSP algorithms for high-volume products, or for applications demanding extremely high speeds that can be achieved only by ASICs. Recently, the availability of core modules for some common DSP functions has simplified ASIC design tasks, but the cost of prototyping an ASIC device, the longer design cycle, and the lack of standard development tool support and reprogramming flexibility sometimes outweigh their benefits.

FPGAs have been used in DSP applications for years as glue logic, bus bridges, and peripherals for reducing system costs and affording a higher level of system integration. Recently, FPGAs have been gaining considerable attention in high-performance DSP applications, and are emerging as coprocessors for standard DSP processors that need specific accelerators. In these cases, FPGAs work in conjunction with DSP processors to integrate pre- and postprocessing functions.
FPGAs provide tremendous computational power by using highly parallel architectures for very high performance. These devices are hardware reconfigurable, allowing the system designer to optimize the hardware architecture for implementing algorithms that require higher performance and lower production cost. In addition, the designer can implement high-performance complex DSP functions in a small fraction of the total device, and use the rest to implement system logic or interface functions, resulting in both lower costs and higher system integration.

Example 1.5: Four major FPGA families are targeted for DSP systems: Cyclone and Stratix from Altera, and Virtex and Spartan from Xilinx. The Xilinx Spartan-3 FPGA family (introduced in 2003) uses a 90-nm manufacturing process to achieve low silicon die costs. To support DSP functions in an area-efficient manner, Spartan-3 includes the following features:

• embedded 18 × 18 multipliers;
• distributed RAM for local storage of DSP coefficients;
• 16-bit shift registers for capturing high-speed data; and
• large block RAM for buffers.

The current Spartan-3 family includes the XC3S50, S200, S400, S1000, and S1500 devices. With the aid of Xilinx System Generator for DSP, a tool used to port MATLAB Simulink models to Xilinx hardware models, a system designer can model, simulate, and verify DSP algorithms on the target hardware under the Simulink environment.
[Figure 1.8 Different memory architectures: (a) Harvard architecture; (b) von Neumann architecture]

General-purpose μPs/μCs are becoming faster and increasingly able to handle some DSP applications. Many electronic products are designed using these processors. For example, automotive controllers use microcontrollers for engine, brake, and suspension control. If a DSP application is added to an existing product that already contains a μP/μC, it is desirable to add the new functions in software without requiring an additional DSP processor. For example, Intel has adopted a native signal processing initiative that uses the host processor in computers to perform audio coding and decoding, sound synthesis, and so on. Software development tools for μP/μC devices are generally more sophisticated and powerful than those available for DSP processors, thus easing development for applications that are less demanding on processor performance and power consumption.

General architectures of μPs/μCs fall into two categories: the Harvard architecture and the von Neumann architecture. As illustrated in Figure 1.8(a), the Harvard architecture has separate memory spaces for the program and the data, so that both memories can be accessed simultaneously. The von Neumann architecture assumes that there is no intrinsic difference between instructions and data, as illustrated in Figure 1.8(b). Operations such as add, move, and subtract are easy to perform on μPs/μCs. However, complex instructions such as multiplication and division are slow, since they must be carried out as a series of shift, addition, and subtraction operations. These devices do not have the architecture or the on-chip facilities required for efficient DSP operations.
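The shift-and-add sequence that a processor without a hardware multiplier must step through can be sketched as follows (an illustrative Python sketch for unsigned integers only; a real μP executes one shift or add per instruction, which is why multiplication costs many cycles):

```python
def shift_add_multiply(a, b):
    """Multiply two unsigned integers using only shifts and adds,
    one partial product per bit of b."""
    product = 0
    while b:
        if b & 1:            # low bit of b set: add the shifted multiplicand
            product += a
        a <<= 1              # shift the multiplicand left one bit
        b >>= 1              # examine the next bit of b
    return product

print(shift_add_multiply(123, 45))  # 5535
```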
Their real-time DSP performance does not compare well with even the cheaper general-purpose DSP processors, and they would not be a cost-effective or power-efficient solution for many DSP applications.

Example 1.6: Microcontrollers such as the Intel 8051 and Freescale 68HC11 are typically used in industrial process control applications, in which I/O capability (serial/parallel interfaces, timers, and interrupts) and control are more important than the speed of performing functions such as multiplication and addition. Microprocessors such as the Pentium, PowerPC, and ARM are basically single-chip processors that require additional circuitry to improve their computation capability. Microprocessor instruction sets can follow either the complex instruction set computer (CISC) model, such as the Pentium, or the reduced instruction set computer (RISC) model, such as ARM. A CISC processor includes instructions for basic processor operations plus some highly sophisticated instructions for specific functions. A RISC processor uses simpler hardwired instructions, such as LOAD and STORE, that execute in a single clock cycle.
It is important to note that some microprocessors, such as the Pentium, add multimedia extension (MMX) and streaming single-instruction, multiple-data (SIMD) extensions to support DSP operations. They run at high speeds (around 3 GHz), provide single-cycle multiplication and arithmetic operations, have good memory bandwidth, and have many supporting tools and software available to ease development.

A DSP processor is basically a microprocessor optimized for processing repetitive, numerically intensive operations at high rates. DSP processors with architectures and instruction sets specifically designed for DSP applications are manufactured by Texas Instruments, Freescale, Agere, Analog Devices, and many others. The rapid growth and exploitation of DSP technology is no surprise, considering the commercial advantages in terms of the fast, flexible, low-power-consumption, and potentially low-cost design capabilities offered by these devices. In comparison to ASIC and FPGA solutions, DSP processors have the advantages of easier development and field reprogrammability, which allows product feature upgrades and bug fixes. They are often more cost-effective than custom hardware such as ASICs and FPGAs, especially for low-volume applications. In comparison to general-purpose μPs/μCs, DSP processors have better speed, better energy efficiency, and lower cost.

In many practical applications, designers face the challenge of implementing complex algorithms that require more processing power than the DSP processors in use can provide. For example, multimedia on wireless and portable devices requires efficient multimedia compression algorithms. A study of the most prevalent image coding/decoding algorithms shows that a handful of DSP functions account for approximately 80% of the processing load in multimedia compression.
These common functions are the discrete cosine transform (DCT), inverse DCT, pixel interpolation, motion estimation, and quantization. A hardware extension or accelerator lets the DSP processor achieve high-bandwidth performance for applications such as streaming video and interactive gaming on a single device. The TMS320C5510 DSP used in this book includes hardware extensions that are specifically designed to support multimedia applications. In addition, Altera has added hardware accelerators into its FPGAs as coprocessors to enhance their DSP processing abilities.

Today, DSP processors have become the foundation of many new markets beyond the traditional signal processing areas, enabling technologies and innovations in motor and motion control, automotive systems, home appliances, consumer electronics, and a vast range of communication systems and devices. These general-purpose programmable DSP processors are supported by integrated software development tools that include C compilers, assemblers, optimizers, linkers, debuggers, simulators, and emulators. In this book, we use Texas Instruments' TMS320C55x for hands-on experiments; this high-performance, ultralow-power DSP processor will be introduced in Chapter 2. In the following section, we briefly introduce some widely used DSP processors.

1.3.2 DSP Processors

In 1979, Intel introduced the 2920, a 25-bit integer processor with a 400 ns instruction cycle and a 25-bit arithmetic logic unit (ALU), for DSP applications. In 1982, Texas Instruments introduced the TMS32010, a 16-bit fixed-point processor with a 16 × 16 hardware multiplier and a 32-bit ALU and accumulator. This first commercially successful DSP processor was followed by the development of faster products and floating-point processors. The performance and price ranges among DSP processors vary widely. Today, dozens of DSP processor families are commercially available.
Table 1.2 summarizes some of the most popular DSP processors. In the low-end and low-cost group are Texas Instruments' TMS320C2000 (C24x and C28x) family, Analog Devices' ADSP-218x family, and Freescale's DSP568xx family. These conventional DSP processors include hardware multipliers and shifters, execute one instruction per clock cycle, and use complex instructions that perform multiple operations such as multiply, accumulate, and update address
pointers. They provide good performance with modest power consumption and memory usage, and thus are widely used in automobiles, appliances, hard disk drives, modems, and consumer electronics. For example, the TMS320C2000 and DSP568xx families are optimized for control applications, such as motor and automobile control, by integrating many microcontroller features and peripherals on the chip.

Table 1.2 Current commercially available DSP processors

Vendor             Family        Arithmetic type   Clock speed
Texas Instruments  TMS320C24x    Fixed-point       40 MHz
                   TMS320C28x    Fixed-point       150 MHz
                   TMS320C54x    Fixed-point       160 MHz
                   TMS320C55x    Fixed-point       300 MHz
                   TMS320C62x    Fixed-point       300 MHz
                   TMS320C64x    Fixed-point       1 GHz
                   TMS320C67x    Floating-point    300 MHz
Analog Devices     ADSP-218x     Fixed-point       80 MHz
                   ADSP-219x     Fixed-point       160 MHz
                   ADSP-2126x    Floating-point    200 MHz
                   ADSP-2136x    Floating-point    333 MHz
                   ADSP-BF5xx    Fixed-point       750 MHz
                   ADSP-TS20x    Fixed/Floating    600 MHz
Freescale          DSP56300      Fixed, 24-bit     275 MHz
                   DSP568xx      Fixed-point       40 MHz
                   DSP5685x      Fixed-point       120 MHz
                   MSC71xx       Fixed-point       200 MHz
                   MSC81xx       Fixed-point       400 MHz
Agere              DSP1641x      Fixed-point       285 MHz

Source: Adapted from [11]

The midrange processor group includes Texas Instruments' TMS320C5000 (C54x and C55x), Analog Devices' ADSP-219x and ADSP-BF5xx, and Freescale's DSP563xx. These enhanced processors achieve higher performance through a combination of increased clock rates and more advanced architectures. These families often include deeper pipelines, instruction caches, complex instruction words, multiple data buses (to access several data words per clock cycle), additional hardware accelerators, and parallel execution units to allow more operations to be executed in parallel. For example, the TMS320C55x has two multiply–accumulate (MAC) units.
These midrange processors provide better performance with lower power consumption, and thus are typically used in portable applications such as cellular phones and wireless devices, digital cameras, audio and video players, and digital hearing aids.

These conventional and enhanced DSP processors have the following features for common DSP algorithms such as filtering:

- Fast MAC units – The multiply–add or multiply–accumulate operation is required in most DSP functions, including filtering, fast Fourier transform, and correlation. To perform the MAC operation efficiently, DSP processors integrate the multiplier and accumulator into the same data path to complete the MAC operation in a single instruction cycle.

- Multiple memory accesses – Most DSP processors adopt modified Harvard architectures that keep the program memory and data memory separate to allow simultaneous fetching of instruction and data. In order to support simultaneous access of multiple data words, DSP processors provide multiple on-chip buses, independent memory banks, and on-chip dual-access data memory.
- Special addressing modes – DSP processors often incorporate dedicated data-address generation units for generating data addresses in parallel with the execution of instructions. These units usually support circular addressing and bit-reversed addressing required by some specific algorithms.

- Special program control – Most DSP processors provide zero-overhead looping, which allows the programmer to implement a loop without extra clock cycles for updating and testing loop counters, or for branching back to the top of the loop.

- Optimized instruction set – DSP processors provide special instructions that support computationally intensive DSP algorithms. For example, the TMS320C5000 processors support compare-select instructions for fast Viterbi decoding, which will be discussed in Chapter 14.

- Effective peripheral interface – DSP processors usually incorporate high-performance serial and parallel input/output (I/O) interfaces to other devices such as ADCs and DACs. They provide streamlined I/O handling mechanisms, such as buffered serial ports, direct memory access (DMA) controllers, and low-overhead interrupts, to transfer data with little or no intervention from the processor's computational units.

These DSP processors use specialized hardware and complex instructions to allow more operations to be executed in every instruction cycle. However, they are difficult to program in assembly language, and it is also difficult to design C compilers that are efficient in terms of speed and memory usage for these complex-instruction architectures. With the goals of achieving high performance and creating an architecture that supports efficient C compilers, some DSP processors, such as the TMS320C6000 (C62x, C64x, and C67x), use very simple instructions. These processors achieve a high level of parallelism by issuing and executing multiple simple instructions in parallel at higher clock rates.
For example, the TMS320C6000 uses a very long instruction word (VLIW) architecture that provides eight execution units to execute four to eight instructions per clock cycle. These instructions have few restrictions on register usage and addressing modes, thus improving the efficiency of C compilers. However, the disadvantage of using simple instructions is that the VLIW processors need more instructions to perform a given task, and thus require relatively high program memory usage and power consumption. These high-performance DSP processors are typically used in high-end video and radar systems, communication infrastructures, wireless base stations, and high-quality real-time video encoding systems.

1.3.3 Fixed- and Floating-Point Processors

A basic distinction between DSP processors is their arithmetic format: fixed-point or floating-point. This is the most important factor for system designers in determining the suitability of a DSP processor for a chosen application. The fixed-point representation of signals and arithmetic will be discussed in Chapter 3. Fixed-point DSP processors are either 16-bit or 24-bit devices, while floating-point processors are usually 32-bit devices. A typical 16-bit fixed-point processor, such as the TMS320C55x, stores numbers in a 16-bit integer or fraction format in a fixed range. Although coefficients and signals are only stored with 16-bit precision, intermediate values (products) may be kept at 32-bit precision within the internal 40-bit accumulators in order to reduce cumulative rounding errors. Fixed-point DSP devices are usually cheaper and faster than their floating-point counterparts because they use less silicon, have lower power consumption, and require fewer external pins. Most high-volume, low-cost embedded applications, such as appliance control, cellular phones, hard disk drives, modems, audio players, and digital cameras, use fixed-point processors.

Floating-point arithmetic greatly expands the dynamic range of numbers.
A typical 32-bit floating-point DSP processor, such as the TMS320C67x, represents numbers with a 24-bit mantissa and an 8-bit
exponent. The mantissa represents a fraction in the range −1.0 to +1.0, while the exponent is an integer that represents the number of places that the binary point must be shifted left or right in order to obtain the true value. A 32-bit floating-point format covers a large dynamic range, so data dynamic range restrictions may be virtually ignored in a design using floating-point DSP processors. This is in contrast to fixed-point designs, where the designer has to apply scaling factors and other techniques to prevent arithmetic overflow, which are very difficult and time-consuming processes. As a result, floating-point DSP processors are generally easier to program and use, but are usually more expensive and have higher power consumption.

Example 1.7: The precision and dynamic range of commonly used 16-bit fixed-point processors are summarized in the following table:

                    Precision   Dynamic range
Unsigned integer    1           0 ≤ x ≤ 65 535
Signed integer      1           −32 768 ≤ x ≤ 32 767
Unsigned fraction   2^−16       0 ≤ x ≤ (1 − 2^−16)
Signed fraction     2^−15       −1 ≤ x ≤ (1 − 2^−15)

The precision of 32-bit floating-point DSP processors is 2^−23, since there are 24 mantissa bits. The dynamic range is 1.18 × 10^−38 ≤ x ≤ 3.4 × 10^38.

System designers have to determine the dynamic range and precision needed for their applications. Floating-point processors may be needed in applications where coefficients vary in time, where signals and coefficients require a large dynamic range and high precision, or where large memory structures are required, such as in image processing. Floating-point DSP processors also allow for the efficient use of high-level C compilers, thus reducing the cost of development and maintenance. The faster development cycle for a floating-point processor may easily outweigh the extra cost of the DSP processor itself.
Therefore, floating-point processors can also be justified for applications where development costs are high and production volumes are low.

1.3.4 Real-Time Constraints

A limitation of DSP systems for real-time applications is that the bandwidth of the system is limited by the sampling rate. The processing speed determines the maximum rate at which the analog signal can be sampled. For example, with sample-by-sample processing, one output sample is generated when one input sample is presented to the system. Therefore, the delay between the input and the output for sample-by-sample processing is at most one sample interval (Ts). A real-time DSP system demands that the signal processing time, tp, must be less than the sampling period, T, in order to complete the processing task before a new sample arrives. That is,

tp + to < T, (1.6)

where to is the overhead of I/O operations. This hard real-time constraint limits the highest frequency signal that can be processed by DSP systems using the sample-by-sample processing approach. This limit on the real-time bandwidth fM is given as

fM = fs/2 ≤ 1/[2(tp + to)]. (1.7)
It is clear that the longer the processing time tp, the lower the signal bandwidth that can be handled by a given processor. Although new and faster DSP processors have continuously been introduced, there is still a limit to the processing that can be done in real time. This limit becomes even more apparent when system cost is taken into consideration. Generally, the real-time bandwidth can be increased by using faster DSP processors, simplified DSP algorithms, optimized DSP programs, parallel processing using multiple DSP processors, etc. However, there is still a trade-off between system cost and performance.

Equation (1.7) also shows that the real-time bandwidth can be increased by reducing the overhead of I/O operations. This can be achieved by using the block-by-block processing approach. With block processing methods, the I/O operations are usually handled by a DMA controller, which places data samples in a memory buffer. The DMA controller interrupts the processor when the input buffer is full, and a block of signal samples is processed at a time. For example, for a real-time N-point fast Fourier transform (to be discussed in Chapter 6), the N input samples have to be buffered by the DMA controller. The block of N samples is processed after the buffer is full. The block computation must be completed before the next block of N samples arrives. Therefore, the delay between input and output in block processing depends on the block size N, and this may cause a problem for some applications.

1.4 DSP System Design

A generalized DSP system design process is illustrated in Figure 1.9. For a given application, the theoretical aspects of DSP system specifications, such as system requirements, signal analysis, resource analysis, and configuration analysis, are first performed to define system requirements.
[Figure 1.9 Simplified DSP system design flow: starting from the system requirements specifications, the software path (algorithm development and simulation, DSP processor selection, software architecture, coding and debugging) and the hardware path (hardware schematic, hardware prototype) proceed in parallel to system integration and debug, then system testing and release of the application.]
1.4.1 Algorithm Development

DSP systems are often characterized by the embedded algorithm, which specifies the arithmetic operations to be performed. The algorithm for a given application is initially described using difference equations or signal-flow block diagrams with symbolic names for the inputs and outputs. In documenting an algorithm, it is sometimes helpful to further clarify which inputs and outputs are involved by means of a data-flow diagram. The next stage of the development process is to provide more details on the sequence of operations that must be performed in order to derive the output. There are two methods of characterizing the sequence of operations in a program: flowcharts or structured descriptions.

At the algorithm development stage, we most likely work with high-level language DSP tools (such as MATLAB, Simulink, or C/C++) that are capable of algorithmic-level system simulations. We then implement the algorithm using software, hardware, or both, depending on specific needs. A DSP algorithm can be simulated using a general-purpose computer so that its performance can be tested and analyzed. A block diagram of a general-purpose computer implementation is illustrated in Figure 1.10. The test signals may be internally generated by signal generators, digitized from a real environment based on the given application, or received from other computers via networks. The simulation program uses the signal samples stored in data file(s) as input(s) to produce output signals that will be saved in data file(s) for further analysis. Advantages of developing DSP algorithms using a general-purpose computer are:

1. Using high-level languages such as MATLAB, Simulink, C/C++, or other DSP software packages on computers can significantly save algorithm development time. In addition, the prototype C programs used for algorithm evaluation can be ported to different DSP hardware platforms.

2.
It is easy to debug and modify high-level language programs on computers using integrated software development tools.

3. Input/output operations based on disk files are simple to implement and the behaviors of the system are easy to analyze.

4. Floating-point data format and arithmetic can be used for computer simulations, thus easing development.

5. We can easily obtain bit-true simulations of the developed algorithms using MATLAB or Simulink for fixed-point DSP implementation.

[Figure 1.10 DSP software development using a general-purpose computer: signal generators, an ADC, or other computers supply input data files; the DSP algorithms run in MATLAB or C/C++ DSP software; the output data files go to analysis, a DAC, or other computers.]
1.4.2 Selection of DSP Processors

As discussed earlier, DSP processors are used in a wide range of applications, from high-performance radar systems to low-cost consumer electronics. As shown in Table 1.2, semiconductor vendors have responded to this demand by producing a variety of DSP processors. DSP system designers require a full understanding of the application requirements in order to select the right DSP processor for a given application. The objective is to choose the processor that meets the project's requirements with the most cost-effective solution. Some decisions can be made at an early stage based on arithmetic format, performance, price, power consumption, ease of development, integration, etc. In real-time DSP applications, the efficiency of data flow into and out of the processor is also critical. However, these criteria will probably still leave a number of candidate processors for further analysis.

Example 1.8: There are a number of ways to measure a processor's execution speed. They include:

- MIPS – millions of instructions per second;
- MOPS – millions of operations per second;
- MFLOPS – millions of floating-point operations per second;
- MHz – clock rate; and
- MMACS – millions of multiply–accumulate operations per second.

In addition, there are other metrics, such as milliwatts for measuring power consumption, MIPS per mW, or MIPS per dollar. These numbers provide only the sketchiest indication of performance, power, and price for a given application. They cannot predict exactly how the processor will measure up in the target system.

For high-volume applications, processor cost and product manufacturing integration are important factors. For portable, battery-powered products such as cellular phones, digital cameras, and personal multimedia players, power consumption is more critical.
For low- to medium-volume applications, there will be trade-offs among development time, the cost of development tools, and the cost of the DSP processor itself. The likelihood of having higher performance processors with upward-compatible software in the future is also an important factor. For high-performance, low-volume applications such as communication infrastructures and wireless base stations, the performance, ease of development, and multiprocessor configurations are paramount.

Example 1.9: A number of DSP applications, along with the relative importance of performance, price, and power consumption, are listed in Table 1.3. This table shows that the designer of a handheld device has extreme concerns about power efficiency, whereas the main criterion of DSP selection for communications infrastructure is performance.

When processing speed is at a premium, the only valid comparison between processors is on an algorithm-implementation basis. Optimum code must be written for all candidates and then the execution time must be compared. Other important factors are memory usage and on-chip peripheral devices, such as on-chip converters and I/O interfaces.
Table 1.3 Some DSP applications with the relative importance rating

Application              Performance   Price   Power consumption
Audio receiver           1             2       3
DSP hearing aid          2             3       1
MP3 player               3             1       2
Portable video recorder  2             1       3
Desktop computer         1             2       3
Notebook computer        3             2       1
Cell phone handset       3             1       2
Cellular base station    1             2       3

Source: Adapted from [12]
Note: Rating – 1–3, with 1 being the most important

In addition, a full set of development tools and support is important for DSP processor selection, including:

- Software development tools, such as C compilers, assemblers, linkers, debuggers, and simulators.
- Commercially available DSP boards for software development and testing before the target DSP hardware is available.
- Hardware testing tools, such as in-circuit emulators and logic analyzers.
- Development assistance, such as application notes, DSP function libraries, application libraries, data books, low-cost prototyping, etc.

1.4.3 Software Development

The four common measures of good DSP software are reliability, maintainability, extensibility, and efficiency. A reliable program is one that seldom (or never) fails. Since most programs will occasionally fail, a maintainable program is one that is easily correctable. A truly maintainable program is one that can be fixed by someone other than the original programmers. In order for a program to be truly maintainable, it must be portable to more than one type of hardware. An extensible program is one that can be easily modified when the requirements change.

A program is usually tested in a finite number of ways much smaller than the number of input data conditions. This means that a program can be considered reliable only after years of bug-free use in many different environments. A good DSP program often contains many small functions with only one purpose, which can be easily reused by other programs for different purposes.
Programming tricks should be avoided at all costs, as they will often not be reliable and will almost always be difficult for someone else to understand, even with lots of comments. In addition, variable names should be meaningful in the context of the program.

As shown in Figure 1.9, the hardware and software design can be conducted at the same time for a given DSP application. Since there are many interdependent factors between hardware and software, an ideal DSP designer will be a true 'system' engineer, capable of understanding issues with both hardware and software. The cost of hardware has gone down dramatically in recent years, so the majority of the cost of a DSP solution now resides in software.

The software life cycle involves the completion of a software project: the project definition, the detailed specification, coding and modular testing, integration, system testing, and maintenance. Software
maintenance is a significant part of the cost of a DSP system. Maintenance includes enhancing the software functions, fixing errors identified as the software is used, and modifying the software to work with new hardware and software. It is essential to document programs thoroughly with titles and comment statements, because this greatly simplifies the task of software maintenance.

As discussed earlier, good programming techniques play an essential role in successful DSP applications. A structured and well-documented approach to programming should be initiated from the beginning. It is important to develop an overall specification for signal processing tasks prior to writing any program. The specification includes the basic algorithm and task description, memory requirements, constraints on program size, execution time, and so on. A thoroughly reviewed specification can catch mistakes even before code has been written and prevent potential code changes at the system integration stage. A flow diagram is a very helpful design tool to adopt at this stage.

Writing and testing DSP code is a highly interactive process. With the use of integrated software development tools that include simulators or evaluation boards, code may be tested regularly as it is written. Writing code in modules or sections can help this process, as each module can be tested individually, thus increasing the chance of the entire system working at the system integration stage.

There are two commonly used methods of developing software for DSP devices: assembly programming or C/C++ programming. Assembly language is similar to the machine code actually used by the processor. Programming in assembly language gives engineers full control of processor functions and resources, thus resulting in the most efficient program for mapping the algorithm by hand.
However, this is a very time-consuming and laborious task, especially for today's highly parallel DSP architectures. A C program, on the other hand, is easier to develop, upgrade, and maintain. However, the machine code generated by a C compiler is often inefficient in both processing speed and memory usage. Recently, DSP manufacturers have improved C compiler efficiency dramatically, especially for the DSP processors that use simple instructions and general register files.

Often the ideal solution is to work with a mixture of C and assembly code. The overall program is controlled and written in C, but the run-time-critical inner loops and modules are written in assembly language. In a mixed programming environment, an assembly routine may be called as a function or intrinsic, or in-line coded into the C program. A library of hand-optimized functions may be built up and brought into the code when required. Assembly programming for the TMS320C55x will be discussed in Chapter 2.

1.4.4 High-Level Software Development Tools

Software tools are computer programs that have been written to perform specific operations. Most DSP operations can be categorized as either analysis tasks or filtering tasks. Signal analysis deals with the measurement of signal properties. MATLAB is a powerful environment for signal analysis and visualization, which are critical components in understanding and developing a DSP system. C programming is an efficient tool for performing signal processing and is portable over different DSP platforms.

MATLAB is an interactive, technical computing environment for scientific and engineering numerical analysis, computation, and visualization. Its strength lies in the fact that complex numerical problems can be solved easily in a fraction of the time required with a programming language such as C.
By using its relatively simple programming capability, MATLAB can be easily extended to create new functions, and it is further enhanced by numerous toolboxes such as the Signal Processing Toolbox and the Filter Design Toolbox. In addition, MATLAB provides many graphical user interface (GUI) tools, such as the Filter Design and Analysis Tool (FDATool).

The purpose of a programming language is to solve a problem involving the manipulation of information. The purpose of a DSP program is to manipulate signals to solve a specific signal processing problem. High-level languages such as C and C++ are computer languages that have English-like commands and
instructions. High-level language programs are usually portable, so they can be recompiled and run on many different computers. Although C/C++ is categorized as a high-level language, it can also be used for low-level device drivers. In addition, a C compiler is available for most modern DSP processors, such as the TMS320C55x. Thus C programming is the most commonly used high-level language for DSP applications.

C has become the language of choice for many DSP software development engineers, not only because it has powerful commands and data structures but also because it can easily be ported to different DSP processors and platforms. The processes of compilation, linking/loading, and execution are outlined in Figure 1.11.

[Figure 1.11 Program compilation, linking, and execution flow: a C source program is translated by the C compiler into machine code (object), which the linker/loader combines with libraries; execution with data produces the program output.]

C compilers are available for a wide range of computers and DSP processors, thus making the C program the most portable software for DSP applications. Many C programming environments include GUI debugger programs, which are useful in identifying errors in a source program. Debugger programs allow us to see values stored in variables at different points in a program, and to step through the program line by line.

1.5 Introduction to DSP Development Tools

The manufacturers of DSP processors typically provide a set of software tools for the user to develop efficient DSP software. The basic software development tools include a C compiler, assembler, linker, and simulator. In order to execute the designed DSP tasks on the target system, the C or assembly programs must be translated into machine code and then linked together to form an executable code. This code conversion process is carried out using the software development tools illustrated in Figure 1.12.
The TMS320C55x software development tools include a C compiler, an assembler, a linker, an archiver, a hex conversion utility, a cross-reference utility, and an absolute lister. The C55x C compiler generates assembly source code from the C source files. The assembler translates assembly source files, either hand-coded by DSP programmers or generated by the C compiler, into machine language object files. The assembly tools use the common object file format (COFF) to facilitate modular programming. Using COFF allows the programmer to define the system's memory map at link time. This maximizes performance by enabling the programmer to link the code and data objects into specific memory locations. The archiver allows users to collect a group of files into a single archived file. The linker combines object files and libraries into a single executable COFF object module. The hex conversion utility converts a COFF object file into a format that can be downloaded to an EPROM programmer or a flash memory program utility. In this section, we will briefly describe the C compiler, assembler, and linker. A full description of these tools can be found in the user's guides [13, 14].

1.5.1 C Compiler

C is the most popular high-level tool for evaluating algorithms and developing real-time software for DSP applications. The C compiler can generate either mnemonic assembly code or algebraic assembly code. In this book, we use the mnemonic assembly (ASM) language. The C compiler package includes a shell program, a code optimizer, and a C-to-ASM interlister. The shell program supports
automatically compiled, assembled, and linked modules. The optimizer improves the run-time and code-density efficiency of the C source file. The C-to-ASM interlister inserts the original comments from the C source code into the compiler's output assembly code so users can view the corresponding assembly instructions for each C statement generated by the compiler.

[Figure 1.12 TMS320C55x software development flow and tools: C source files pass through the C compiler to assembly source files, which the assembler (with macro source files and a macro library) translates into COFF object files; the linker combines these with run-time support libraries, archived libraries of object files, and the library-build utility into a COFF executable file for the TMS320C55x target, which the debugger tools, absolute lister, cross-reference lister, and hex converter (for an EPROM programmer) then process.]

The C55x C compiler supports American National Standards Institute (ANSI) C and its run-time support library. The run-time support library rts55.lib (or rts55x.lib for the large memory model) includes functions to support string operations, memory allocation, data conversion, trigonometry, and exponential manipulations.

The C language lacks specific features of DSP, especially the fixed-point data operations that are necessary for many DSP algorithms. To improve compiler efficiency for DSP applications, the C55x C compiler supports in-line assembly language in C programs. This allows highly efficient assembly code to be added directly into the C program. Intrinsics are another improvement, substituting DSP arithmetic operations with DSP assembly intrinsic operators. We will introduce more compiler features in Chapter 2 and subsequent chapters.

1.5.2 Assembler

The assembler translates processor-specific assembly language source files (in ASCII format) into binary COFF object files. Source files can contain assembler directives, macro directives, and instructions.
Assembler directives are used to control various aspects of the assembly process, such as the source file listing format, data alignment, section content, etc. Binary object files contain separate blocks (called sections) of code or data that can be loaded into memory space.

Once the DSP algorithm has been written in assembly, it is necessary to add important assembly directives to the source code. Assembler directives are used to control the assembly process and to enter data into the program. Assembly directives can be used to initialize memory, define global variables, set conditional assembly blocks, and reserve memory space for code and data.

1.5.3 Linker

The linker combines multiple binary object files and libraries into a single executable program for the target DSP hardware. It resolves external references and performs code relocation to create the executable module. The C55x linker handles the various requirements of different object files and libraries, as well as the target system's memory configurations. For a specific hardware configuration, the system designer needs to provide the memory mapping specification to the linker. This task can be accomplished by using a linker command file. The visual linker is also a very useful tool that directly provides a visualized memory usage map.

The linker commands support expression assignment and evaluation, and provide the MEMORY and SECTIONS directives. Using these directives, we can define the memory model for the given target system. We can also combine object file sections, allocate sections into specific memory areas, and define or redefine global symbols at link time.

An example linker command file is listed in Table 1.4. The first portion uses the MEMORY directive to identify the range of memory blocks that physically exist in the target hardware.
Table 1.4 Example of linker command file used by TMS320C55x

/* Specify the system memory map */
MEMORY
{
    RAM  (RWIX) : o = 0x000100, l = 0x00feff /* Data memory */
    RAM0 (RWIX) : o = 0x010000, l = 0x008000 /* Data memory */
    RAM1 (RWIX) : o = 0x018000, l = 0x008000 /* Data memory */
    RAM2 (RWIX) : o = 0x040100, l = 0x040000 /* Program memory */
    ROM  (RIX)  : o = 0x020100, l = 0x020000 /* Program memory */
    VECS (RIX)  : o = 0xffff00, l = 0x000100 /* Reset vector */
}

/* Specify the sections allocation into memory */
SECTIONS
{
    vectors   > VECS  /* Interrupt vector table */
    .text     > ROM   /* Code */
    .switch   > RAM   /* Switch table info */
    .const    > RAM   /* Constant data */
    .cinit    > RAM2  /* Initialization tables */
    .data     > RAM   /* Initialized data */
    .bss      > RAM   /* Global static vars */
    .stack    > RAM   /* Primary system stack */
    .sysstack > RAM   /* Secondary system stack */
    expdata0  > RAM0  /* Global static vars */
    expdata1  > RAM1  /* Global static vars */
}

These memory blocks
are available for the software to use. Each memory block has its name, starting address, and the length of the block. The address and length are given in bytes for C55x processors and in words for C54x processors. For example, the data memory block called RAM starts at the byte address 0x100, and it has a size of 0xFEFF bytes. Note that the prefix 0x indicates the following number is represented in hexadecimal (hex) form.

The SECTIONS directive provides different code section names for the linker to allocate the program and data within each memory block. For example, the program can be loaded into the .text section, and the uninitialized global variables are in the .bss section. The attributes inside the parentheses are optional to set memory access restrictions. These attributes are:

R - Memory space can be read.
W - Memory space can be written.
X - Memory space contains executable code.
I - Memory space can be initialized.

Several additional options used to initialize the memory can be found in [13].

1.5.4 Other Development Tools

Archiver is used to group files into a single archived file, that is, to build a library. The archiver can also be used to modify a library by deleting, replacing, extracting, or adding members. Hex-converter converts a COFF object file into an ASCII hex format file. The converted hex format files are often used to program EPROM and flash memory devices. Absolute lister takes linked object files to create the .abs files. These .abs files can be assembled together to produce a listing file that contains absolute addresses of the entire system program. Cross-reference lister takes all the object files to produce a cross-reference listing file. The cross-reference listing file includes symbols, definitions, and references in the linked source files.

The DSP development tools also include simulator, EVM, XDS, and DSK.
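Before turning to those tools, one detail of the linker command file in Table 1.4 is worth illustrating: the named sections expdata0 and expdata1 receive data only if the source code places variables into them. With the TI C55x compiler this is done with the DATA_SECTION pragma. The sketch below is our own illustration (the buffer names and the copy_negate function are hypothetical); a host compiler simply ignores the unfamiliar pragma, so the fragment can still be compiled and tested off-target.

```c
#include <assert.h>

/* Ask the TI C55x compiler to place these buffers in the named
   sections expdata0 and expdata1, which the linker command file of
   Table 1.4 maps to the memory blocks RAM0 and RAM1. A host compiler
   ignores the unknown pragma, so the sketch still builds for testing. */
#pragma DATA_SECTION(inBuf,  "expdata0")
#pragma DATA_SECTION(outBuf, "expdata1")
short inBuf[40];
short outBuf[40];

/* Trivial use of the buffers so the placement sketch does something
   checkable: negate each input sample into the output buffer. */
void copy_negate(void)
{
    int i;
    for (i = 0; i < 40; i++)
        outBuf[i] = (short)(0 - inBuf[i]);
}
```

The pragma only controls where the linker puts the symbols; the generated code is unchanged.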
A simulator is the software simulation tool that does not require any hardware support. The simulator can be used for code development and testing. The EVM is a hardware evaluation module including I/O capabilities to allow developers to evaluate the DSP algorithms for the specific DSP processor in real time. The EVM is usually a computer board to be connected with a host computer for evaluating the DSP tasks. The XDS usually includes in-circuit emulation and boundary scan for system development and debug. The XDS is an external stand-alone hardware device connected to a host computer and a DSP board. The DSK is a low-cost development board for the user to develop and evaluate DSP algorithms under a Windows operating system environment. In this book, we will use the Spectrum Digital's TMS320VC5510 DSK for real-time experiments. The DSK works under the Code Composer Studio (CCS) development environment. The DSK package includes a special version of the CCS [15]. The DSK communicates with CCS via its onboard universal serial bus (USB) JTAG emulator. The C5510 DSK uses a 200 MHz TMS320VC5510 DSP processor, an AIC23 stereo CODEC, 8 Mbytes synchronous DRAM, and 512 Kbytes flash memory.

1.6 Experiments and Program Examples

Texas Instruments' CCS Integrated Development Environment (IDE) is a DSP development tool that allows users to create, edit, build, debug, and analyze DSP programs. For building applications, the CCS provides a project manager to handle the programming project. For debugging purposes, it provides
breakpoints, variable watch windows, memory/register/stack viewing windows, probe points to stream data to and from the target, graphical analysis, execution profile, and the capability to display mixed disassembled and C instructions. Another important feature of the CCS is its ability to create and manage large projects from a GUI environment. In this section, we will use a simple sinewave example to introduce the basic editing features, key IDE components, and the use of the C55x DSP development tools. We also demonstrate simple approaches to software development and the debug process using the TMS320C55x simulator. Finally, we will use the C5510 DSK to demonstrate an audio loop-back example in real time.

1.6.1 Experiments of Using CCS and DSK

After installing the DSK or CCS simulator, we can start the CCS IDE. Figure 1.13 shows the CCS running on the DSK. The IDE consists of the standard toolbar, project toolbar, edit toolbar, and debug toolbar. Some basic functions are summarized and listed in Figure 1.13. Table 1.5 briefly describes the files used in this experiment. Procedures of the experiment are listed as follows:

1. Create a project for the CCS: Choose Project→New to create a new project file and save it as useCCS.pjt to the directory ..\experiments\exp1.6.1_CCSandDSK. The CCS uses the project to operate its built-in utilities to create a full-build application.

Figure 1.13 CCS IDE
EXPERIMENTS AND PROGRAM EXAMPLES 27

Table 1.5 File listing for experiment exp1.6.1_CCSandDSK

Files         Description
useCCS.c      C file for testing experiment
useCCS.h      C header file
useCCS.pjt    DSP project file
useCCS.cmd    DSP linker command file

2. Create C program files using the CCS editor: Choose File→New to create a new file, and type in the example C code listed in Tables 1.6 and 1.7. Save the C code listed in Table 1.6 as useCCS.c to ..\experiments\exp1.6.1_CCSandDSK\src, and save the C code listed in Table 1.7 as useCCS.h to the directory ..\experiments\exp1.6.1_CCSandDSK\inc. This example reads precalculated sine values from a data table, negates them, and stores the results in a reversed order to an output buffer. The programs useCCS.c and useCCS.h are included in the companion CD. However, it is recommended that we create them using the editor to become familiar with the CCS editing functions.

3. Create a linker command file for the simulator: Choose File→New to create another new file, and type in the linker command file as listed in Table 1.4. Save this file as useCCS.cmd to the directory ..\experiments\exp1.6.1_CCSandDSK. The command file is used by the linker to map different program segments into a prepartitioned system memory space.

4. Setting up the project: Add useCCS.c and useCCS.cmd to the project by choosing Project→Add Files to Project, then select files useCCS.c and useCCS.cmd. Before building a project, the search paths of the included files and libraries should be set up for the C compiler, assembler, and linker. To set up options for the C compiler, assembler, and linker, choose Project→Build Options.
We need to add search paths to include files and libraries that are not included in the C55x DSP tools directories, such as the libraries and included files we created

Table 1.6 Program example, useCCS.c

#include "useCCS.h"

short outBuffer[BUF_SIZE];

void main()
{
    short i, j;

    j = 0;
    while (1)
    {
        for (i = BUF_SIZE - 1; i >= 0; i--)
        {
            outBuffer[j++] = 0 - sineTable[i];  // <- Set breakpoint
            if (j == BUF_SIZE)
                j = 0;
        }
        j++;
    }
}
Table 1.7 Program example header file, useCCS.h

#define BUF_SIZE 40

const short sineTable[BUF_SIZE] = {
    0x0000, 0x000f, 0x001e, 0x002d, 0x003a, 0x0046, 0x0050, 0x0059,
    0x005f, 0x0062, 0x0063, 0x0062, 0x005f, 0x0059, 0x0050, 0x0046,
    0x003a, 0x002d, 0x001e, 0x000f, 0x0000, 0xfff1, 0xffe2, 0xffd3,
    0xffc6, 0xffba, 0xffb0, 0xffa7, 0xffa1, 0xff9e, 0xff9d, 0xff9e,
    0xffa1, 0xffa7, 0xffb0, 0xffba, 0xffc6, 0xffd3, 0xffe2, 0xfff1};

in the working directory. Programs written in C language require the use of the run-time support library, either rts55.lib or rts55x.lib, for system initialization. This can be done by selecting the compiler and linker dialog box and entering the C55x run-time support library, rts55.lib, and adding the header file path related to the source file directory. We can also specify different directories to store the output executable file and map file. Figure 1.14 shows an example of how to set the search paths for compiler, assembler, and linker.

5. Build and run the program: Use the Project→Rebuild All command to build the project. If there are no errors, the CCS will generate the executable output file, useCCS.out. Before we can run the program, we need to load the executable output file to the C55x DSK or the simulator. To do so, use the File→Load Program menu, select useCCS.out in the ..\experiments\exp1.6.1_CCSandDSK\Debug directory, and load it. Execute this program by choosing Debug→Run. The processor status at the bottom-left-hand corner of the CCS will change from CPU HALTED to CPU RUNNING. The running process can be stopped by the Debug→Halt command. We can continue the program by reissuing the Run command or exit the DSK or the simulator by choosing the File→Exit menu.
Figure 1.14 Setup search paths for C compiler, assembler, and linker: (a) setting the include file searching path; (b) setting the run-time support library
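Before moving on to the debugging experiments, note that the data manipulation performed by useCCS.c can be checked off-target with any host C compiler. The harness below is our own (not one of the experiment files; the function name fill_one_pass is hypothetical): it runs one pass of the inner loop from Table 1.6 over the sine table of Table 1.7 and lets us verify the negate-and-reverse behaviour described in step 2. Explicit (short) casts are added to the negative hex constants so the table also compiles cleanly on hosts where int is wider than 16 bits.

```c
#include <assert.h>

#define BUF_SIZE 40

/* Sine values copied from Table 1.7 (useCCS.h). */
const short sineTable[BUF_SIZE] = {
    0x0000, 0x000f, 0x001e, 0x002d, 0x003a, 0x0046, 0x0050, 0x0059,
    0x005f, 0x0062, 0x0063, 0x0062, 0x005f, 0x0059, 0x0050, 0x0046,
    0x003a, 0x002d, 0x001e, 0x000f, 0x0000,
    (short)0xfff1, (short)0xffe2, (short)0xffd3, (short)0xffc6,
    (short)0xffba, (short)0xffb0, (short)0xffa7, (short)0xffa1,
    (short)0xff9e, (short)0xff9d, (short)0xff9e, (short)0xffa1,
    (short)0xffa7, (short)0xffb0, (short)0xffba, (short)0xffc6,
    (short)0xffd3, (short)0xffe2, (short)0xfff1};

short outBuffer[BUF_SIZE];

/* One pass of the inner loop of useCCS.c: negate the table entries
   and store them in outBuffer in reversed order. */
void fill_one_pass(void)
{
    short i, j = 0;

    for (i = BUF_SIZE - 1; i >= 0; i--) {
        outBuffer[j++] = 0 - sineTable[i];
        if (j == BUF_SIZE)
            j = 0;
    }
}
```

On the target, the extra j++ after the inner loop in useCCS.c shifts the start index by one on every pass; the single-pass harness above leaves that out, since it only checks one traversal.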
1.6.2 Debugging Program Using CCS and DSK

The CCS IDE has extended traditional DSP code generation tools by integrating a set of editing, emulating, debugging, and analyzing capabilities in one entity. In this section, we will introduce some program building steps and software debugging capabilities of the CCS.

The standard toolbar in Figure 1.13 allows users to create and open files, cut, copy, and paste text within and between files. It also has undo and redo capabilities to aid file editing. Finding text can be done within the same file or in different files. The CCS built-in context-sensitive help menu is also located in the standard toolbar menu. More advanced editing features are in the edit toolbar menu, including mark to, mark next, find match, and find next open parenthesis for C programs. The features of out-indent and in-indent can be used to move a selected block of text horizontally. There are four bookmarks that allow users to create, remove, edit, and search bookmarks.

The project environment contains the C compiler, assembler, and linker. The project toolbar menu (see Figure 1.13) gives users different choices while working on projects. The compile only, incremental build, and build all features allow users to build the DSP projects efficiently. Breakpoints permit users to set software breakpoints in the program and halt the processor whenever the program executes at those breakpoint locations. Probe points are used to transfer data files in and out of the programs. The profiler can be used to measure the execution time of given functions or code segments, which can be used to analyze and identify critical run-time blocks of the programs.

The debug toolbar menu illustrated in Figure 1.13 contains several stepping operations: step-into-a-function, step-over-a-function, and step-out-of-a-function.
It can also perform the run-to-cursor-position operation, which is a very convenient feature, allowing users to step through the code. The next three hot buttons in the debug toolbar are run, halt, and animate. They allow users to execute, stop, and animate the DSP programs. The watch windows are used to monitor variable contents. CPU registers and data memory viewing windows provide additional information for ease of debugging programs. More custom options are available from the pull-down menus, such as graphing data directly from the processor memory.

We often need to check the changing values of variables during program execution for developing and testing programs. This can be accomplished with debugging settings such as breakpoints, step commands, and watch windows, which are illustrated in the following experiment. Procedures of the experiment are listed as follows:

1. Add and remove breakpoints: Start with Project→Open, select useCCS.pjt from the directory ..\experiments\exp1.6.2_CCSandDSK. Build and load the example project useCCS.out. Double click the C file, useCCS.c, in the project viewing window to open it in the editing window. To add a breakpoint, move the cursor to the line where we want to set a breakpoint. The command to enable a breakpoint can be given either from the Toggle Breakpoint hot button on the project toolbar or by clicking the mouse button on the line of interest. The function key F9 is a shortcut that can be used to toggle a breakpoint. Once a breakpoint is enabled, a red dot will appear on the left to indicate where the breakpoint is set. The program will run up to that line without passing it. To remove breakpoints, we can either toggle breakpoints one by one or select the Remove All Breakpoints hot button from the debug toolbar to clear all the breakpoints at once.
Now load the useCCS.out and open the source code window with source code useCCS.c, and put the cursor on the line:

outBuffer[j++] = 0 - sineTable[i];  // <- Set breakpoint

Click the Toggle Breakpoint button (or press F9) to set the breakpoint. The breakpoint will be set as shown in Figure 1.15.
Figure 1.15 CCS screen snapshot of the example using CCS

2. Set up viewing windows: CCS IDE provides many useful windows to ease code development and the debugging process. The following are some of the most often used windows:

CPU register viewing window: On the standard tool menu bar, click View→Registers→CPU Registers to open the CPU registers window. We can edit the contents of any CPU register by double clicking it. If we right click the CPU Register Window and select Allow Docking, we can move the window around and resize it. As an example, try to change the temporary register T0 and accumulator AC0 to new values of T0 = 0x1234 and AC0 = 0x56789ABC.

Command window: From the CCS menu bar, click Tools→Command Window to add the command window. We can resize and dock it as well. The command window will appear each time we rebuild the project.

Disassembly window: Click View→Disassembly on the menu bar to see the disassembly window. Every time we reload an executable out file, the disassembly window will appear automatically.

3. Workspace feature: We can customize the CCS display and settings using the workspace feature. To save a workspace, click File→Workspace→Save Workspace and give the workspace a name and path where the workspace will be stored. When we restart CCS, we can reload the workspace by clicking File→Workspace→Load Workspace and use a workspace from previous work. Now save the workspace for your current CCS settings, then exit the CCS. Restart CCS and reload the workspace. After the workspace is reloaded, you should have the identical settings restored.

4. Using the single-step features: When using C programs, the C55x system uses a function called boot from the run-time support library rts55.lib to initialize the system. After we load the useCCS.out,
  • 55. Discovering Diverse Content Through Random Scribd Documents
  • 56. Lotze. Ueberweg. The group is loosely constituted however. There was scope for diversity of view and there was diversity of view, according as the vital issue of the formula was held to lie in the relation of intellectual function to organic function or in the not quite equivalent relation of thinking to being. Moreover, few of the writers who, whatsoever it was that they baptized with the name of logic, were at least earnestly engaged in an endeavour to solve the problem of knowledge within a circle of ideas which was on the whole Kantian, were under the dominance of a single inspiration. Beneke’s philosophy is a striking instance of this, with application to Fries and affinity to Herbart conjoined with obligations to Schelling both directly and through Schleiermacher. Lotze again wove together many threads of earlier thought, though the web was assuredly his own. Finally it must not be forgotten that the host of writers who were in reaction against Hegelianism tended to take refuge in some formula of correlation, as a half-way house between that and formalism or psychologism or both, without reference to, and often perhaps without consciousness of, the way in which historically it had taken shape to meet the problem held to have been left unresolved by Kant. Lotze on the one hand held the Hegelian “deduction” to be untenable, and classed himself with those who in his own phrase “passed to the order of the day,” while on the other hand he definitely raised the question, how an “object” could be brought into forms to which it was not in some sense adapted. Accordingly, though he regards logic as formal, its forms come into relation to objectivity in some sort even within the logical field itself, while when taken in the setting of his system as a whole, its formal character is not of a kind that ultimately excludes psychological and metaphysical reference, at
  • 57. least speculatively. As a logician Lotze stands among the masters. His flair for the essentials in his problem, his subtlety of analysis, his patient willingness to return upon a difficulty from a fresh and still a fresh point of view, and finally his fineness of judgment, make his logic137 so essentially logic of the present, and of its kind not soon to be superseded, that nothing more than an indication of the historical significance of some of its characteristic features need be attempted here. In Lotze’s pure logic it is the Herbartian element that tends to be disconcerting. Logic is formal. Its unit, the logical concept, is a manipulated product and the process of manipulation may be called abstraction. Processes of the psychological mechanism lie below it. The paradox of the theory of judgment is due to the ideal of identity, and the way in which this is evaded by supplementation to produce a non-judgmental identity, followed by translation of the introduced accessories with conditions in the hypothetical judgment, is thoroughly in Herbart’s manner. The reduction of judgments is on lines already familiar. Syllogism is no instrumental method by which we compose our knowledge, but an ideal to the form of which it should be brought. It is, as it were, a schedule to be filled in, and is connected with the disjunctive judgment as a schematic setting forth of alternatives, not with the hypothetical, and ultimately the apodictic judgment with their suggestion that it is the real movement of thought that is subjected to analysis. Yet the resultant impression left by the whole treatment is not Herbartian. The concept is accounted for in Kantian terms. There is no discontinuity between the pre-logical or sub-logical conversion of impressions into “first universals” and the formation of the logical concept. Abstraction proves to be synthesis with compensatory universal marks in the place of the particular marks abstracted from. Synthesis as the work
  • 58. of thought always supplies, beside the mere conjunction or disjunction of ideas, a ground of their coherence or non-coherence. It is evident that thought, even as dealt with in pure logic, has an objectifying function. Its universals have objective validity, though this does not involve direct real reference. The formal conception of pure logic, then, is modified by Lotze in such a way as not only to be compatible with a view of the structural and functional adequacy of thought to that which at every point at which we take thinking is still distinguishable from thought, but even inevitably to suggest it. That the unit for logic is the concept and not the judgment has proved a stumbling-block to those of Lotze’s critics who are accustomed to think in terms of the act of thought as unit. Lotze’s procedure is, indeed, analogous to the way in which, in his philosophy of nature, he starts from a plurality of real beings, but by means of a reductive movement, an application of Kant’s transcendental method, arrives at the postulate or fact of a law of their reciprocal action which calls for a monistic and idealist interpretation. He starts, that is in logic, with conceptual units apparently self-contained and admitting of nothing but external relation, but proceeds to justify the intrinsic relation between the matter of his units by an appeal to the fact of the coherence of all contents of thought. Indeed, if thought admits irreducible units, what can unite? Yet he is left committed to his puzzle as to a reduction of judgment to identity, which partially vitiates his treatment of the theory of judgment. The outstanding feature of this is, nevertheless, not affected, viz. 
the attempt that he makes, inspired clearly by Hegel, “to develop the various forms of judgment systematically as members of a series of operations, each of which leaves a part of its problem unmastered and thereby gives rise to the next.”138 As to inference, finally, the ideal of the articulation of the universe of discourse, as it is for complete
  • 59. knowledge, when its disjunctions have been thoroughly followed out and it is exhaustively determined, carried the day with him against the view that the organon for gaining knowledge is syllogism. The Aristotelian formula is “merely the expression, formally expanded and complete, of the truth already embodied in disjunctive judgment, namely, that every S which is a specific form of M possesses as its predicate a particular modification of each of the universal predicates of M to the exclusion of the rest.” Schleiermacher’s separation of inference from judgment and his attribution of the power to knowledge in process cannot find acceptance with Lotze. The psychologist and the formal logician do indeed join hands in the denial of a real movement of thought in syllogism. Lotze’s logic then, is formal in a sense in which a logic which does not find the conception of synthetic truth embarrassing is not so. It is canon and not organon. In the one case, however, where it recognizes what is truly synthesis, i.e. in its account of the concept, it brings the statics of knowledge, so to speak, into integral relation with the dynamics. And throughout, wherever the survival from 1843, the identity bug-bear, is for the moment got rid of in what is really a more liberal conception, the statical doctrine is developed in a brilliant and informing manner. Yet it is in the detail of his logical investigations, something too volatile to fix in summary, that Lotze’s greatness as a logician more especially lies. With Lotze the ideal that at last the forms of thought shall be realized to be adequate to that which at any stage of actual knowledge always proves relatively intractable is an illuminating projection of faith. He takes courage from the reflection that to accept scepticism is to presume the competence of the thought that accepts. He will, however, take no easy way of parallelism. Our human thought pursues devious and circuitous methods. Its forms
  • 60. Logic as Metaphysic. are not unseldom scaffolding for the house of knowledge rather than the framework of the house itself. Our task is not to realise correspondence with something other than thought, but to make explicit those justificatory notions which condition the form of our apprehension. “However much we may presuppose an original reference of the forms of thought to that nature of things which is the goal of knowledge, we must be prepared to find in them many elements which do not directly reproduce the actual reality to the knowledge of which they are to lead us.”139 The impulse of thought to reduce coincidence to coherence reaches immediately only to objectivity or validity. The sense in which the presupposition of a further reference is to be interpreted and in which justificatory notions for it can be adduced is only determinable in a philosophic system as a whole, where feeling has a place as well as thought, value equally with validity. Lotze’s logic then represents the statical aspect of the function of thought in knowledge, while, so far as we go in knowledge thought is always engaged in the unification of a manifold, which remains contradistinguished from it, though not, of course, completely alien to and unadapted to it. The further step to the determination of the ground of harmony is not to be taken in logic, where limits are present and untranscended. The position of the search for truth, for which knowledge is a growing organism in which thought needs, so to speak, to feed on something other than itself, is conditioned in the post-Kantian period by antagonism to the speculative movement which culminated in the dialectic of Hegel. The radical thought of this movement was voiced in the demand of Reinhold140 that philosophy should
  • 61. Hegel. “deduce” it all from a single principle and by a single method. Kant’s limits that must needs be thought and yet cannot be thought must be thought away. An earnest attempt to satisfy this demand was made by Fichte whose single principle was the activity of the pure Ego, while his single method was the assertion of a truth revealed by reflection on the content of conscious experience, the characterization of this as a half truth and the supplementation of it by its other, and finally the harmonization of both. The pure ego is inferred from the fact that the non-ego is realized only in the act of the ego in positing it. The ego posits itself, but reflection on the given shows that we must add that it posits also the non-ego. The two positions are to be conciliated in the thought of reciprocal limitation of the posited ego and non-ego. And so forth. Fichte cannot be said to have developed a logic, but this rhythm of thesis, antithesis and synthesis, foreshadowed in part for Fichte in Spinoza’s formula, “omnis determinatio est negatio” and significantly in Kant’s triadic grouping of his categories, gave a cue to the thought of Hegel. Schelling, too, called for a single principle and claimed to have found it in his Absolute, “the night” said Hegel, “in which all cows are black,” but his historical influence lay, as we have seen, in the direction of a parallelism within the unity, and he also developed no logic. It is altogether otherwise with Hegel. Hegel’s logic,141 though it involves inquiries which custom regards as metaphysical, is not to be characterized as a metaphysic with a method. It is logic or a rationale of thought by thought, with a full development among other matters of all that the most separatist of logicians regards as thought forms. It offers a solution of what has throughout appeared as the logical problem. That solution lies doubtless in the evolution of the Idea, i.e. an all-inclusive in which mere or pure
  • 62. thought is cancelled in its separateness by a transfiguration, while logic is nothing but the science of the Idea viewed in the medium of pure thought. But, whatever else it be, this Panlogismus, to use the word of J. E. Erdmann, is at least a logic. Thought in its progressive unfolding, of which the history of philosophy taken in its broad outline offers a pageant, necessarily cannot find anything external to or alien from itself, though that there is something external for it is another matter. As Fichte’s Ego finds that its non-ego springs from and has its home within its very self, so with Hegel thought finds itself in its “other,” both subsisting in the Idea which is both and neither. Either of the two is the all, as, for example, the law of the convexity of the curve is the law of the curve and the law of its concavity. The process of the development of the Idea or Absolute is in one regard the immanent process of the all. Logically regarded, i.e. “in the medium of mere thought,” it is dialectical method. Any abstract and limited point of view carries necessarily to its contradictory. This can only be atoned with the original determination by fresh negation in which a new thought- determination is born, which is yet in a sense the old, though enriched, and valid on a higher plane. The limitations of this in turn cause a contradiction to emerge, and the process needs repetition. At last, however, no swing into the opposite, with its primarily conflicting, if ultimately complementary function, is any longer possible. That in which no further contradiction is possible is the absolute Idea. 
Bare or indeterminate being, for instance, the first of the determinations of Hegel’s logic, as the being of that which is not anything determinate, of Kant’s thing-in-itself, for example, positively understood, implicated at once the notion of not-being, which negates it, and is one with it, yet with a difference, so that we have the transition to determinate being, the transition being baptized as
  • 63. becoming. And so forth. It is easy to raise difficulties not only in regard to the detail in Hegel’s development of his categories, especially the higher ones, but also in regard to the essential rhythm of his method. The consideration that mere double negation leaves us precisely where we were and not upon a higher plane where the dominant concept is richer, is, of course, fatal only to certain verbal expressions of Hegel’s intent. There is a differentiation in type between the two negations. But if we grant this it is no longer obviously the simple logical operation indicated. It is inferred then that Hegel complements from the stuff of experience, and fails to make good the pretension of his method to be by itself and of itself the means of advance to higher and still higher concepts till it can rest in the Absolute. He discards, as it were, and takes in from the stock while professing to play from what he has originally in his hand. He postulates his unity in senses and at stages in which it is inadmissible, and so supplies only a schema of relations otherwise won, a view supported by the way in which he injects certain determinations in the process, e.g. the category of chemism. Has he not cooked the process in the light of the result? In truth the Hegelian logic suffers from the fact that the good to be reached is presupposed in the beginning. Nature, e.g., is not deduced as real because rational, but being real its rationality is presumed and, very imperfectly, exhibited in a way to make it possible to conceive it as in its essence the reflex of Reason. It is a vision rather than a construction. It is a “theosophical logic.” Consider the rational-real in the unity that must be, and this is the way of it, or an approximation to the way of it! It was inevitable that the epistemologists of the search for truth would have none of it. The ideal in whatsoever sense real still needs to be realized. It is from the human standpoint
  • 64. regulative and only hypothetically or formally constitutive. We must not confuse οὐσία with εἶναι, nor εἶναι with γίγνεσθαι. Yet in a less ambitious form the fundamental contentions of Hegel’s method tend to find a qualified acceptance. In any piece of presumed knowledge its partial or abstract character involves the presence of loose edges which force the conviction of inadequacy and the development of contradictions. Contradictions must be annulled by complementation, with resultant increasing coherence in ascending stages. At each successive stage in our progress fresh contradictions break out, but the ideal of a station at which the thought-process and its other, if not one, are at one, is permissible as a limiting conception. Yet if Hegel meant only this he has indeed succeeded in concealing his meaning. Hegel’s treatment of the categories or thought determinations which arise in the development of the immanent dialectic is rich in flashes of insight, but most of them are in the ordinary view of logic wholly metaphysical. In the stage, however, of his process in which he is concerned with the notion are to be found concept, judgment, syllogism. Of the last he declares that it “is the reasonable and everything reasonable” (Encyk. § 181), and has the phantasy to speak of the definition of the Absolute as being “at this stage” simply the syllogism. It is, of course, the rhythm of the syllogism that attracts him. The concept goes out from or utters itself in judgment to return to an enhanced unity in syllogism. Ueberweg (System § 101) is, on the whole, justified in exclaiming that Hegel’s rehabilitation of syllogism “did but slight service to the Aristotelian theory of syllogism,” yet his treatment of syllogism must be regarded as an acute contribution to logical criticism in the technical sense. He insists on its objectivity. The transition from judgment is not brought
  • 65. about by our subjective action. The syllogism of “all-ness” is convicted of a petitio principii (Encyk. § 190), with consequent lapse into the inductive syllogism, and, finally, since inductive syllogism is involved in the infinite process, into analogy. “The syllogism of necessity,” on the contrary, does not presuppose its conclusion in its premises. The detail, too, of the whole discussion is rich in suggestion, and subsequent logicians—Ueberweg himself perhaps, Lotze certainly in his genetic scale of types of judgment and inference, Professor Bosanquet notably in his systematic development of “the morphology of knowledge,” and others—have with reason exploited it. Hegel’s logic as a whole, however, stands and falls not with his thoughts on syllogism, but with the claim made for the dialectical method that it exhibits logic in its integral unity with metaphysic, the thought-process as the self-revelation of the Idea. The claim was disallowed. To the formalist proper it was self-condemned in its pretension to develop the content of thought and its rejection of the formula of bare-identity. To the epistemologist it seemed to confuse foundation and keystone, and to suppose itself to build upon the latter in a construction illegitimately appropriative of materials otherwise accumulated. At most it was thought to establish a schema of formal unity which might serve as a regulative ideal. To the methodologist of science in genesis it appeared altogether to fail to satisfy any practical interest. Finally, to the psychologist it spelt the failure of intellectualism, and encouraged, therefore, some form of rehabilitated experientialism. In the Hegelian school in the narrower sense the logic of the master receives some exegesis and defence upon single points of doctrine rather than as a whole. Its effect upon logic is rather to be
seen in the rethinking of the traditional body of logical doctrine in the light of an absolute presupposed as ideal, with the postulate that a regulative ideal must ultimately exhibit itself as constitutive, the justification of the postulate being held to lie in the coherence and all-inclusiveness of the result. In such a logic, if and so far as coherence should be attained, would be found something akin to the spirit of what Hegel achieves, though doubtless alien to the letter of what it is his pretension to have achieved. There is perhaps no serious misrepresentation involved in regarding a key-thought of this type, though not necessarily expressed in those verbal forms, as pervading such logic of the present as coheres with a philosophy of the absolute conceived from a point of view that is intellectualist throughout. All other contemporary movements may be said to be in revolt from Hegel.

v. Logic from 1880-1910

Logic in the present exhibits, though in characteristically modified shapes, all the main types that have been found in its past history. There is an intellectualist logic coalescent with an absolutist metaphysic as aforesaid. There is an epistemological logic with sometimes formalist, sometimes methodological leanings. There is a formal-symbolic logic engaged with the elaboration of a relational calculus. Finally, there is what may be termed psychological-voluntaryist logic. It is in the rapidity of development of logical investigations of the third and fourth types and the growing number of their exponents that the present shows most clearly the history of logic in the making. All these movements are logic of the present, and a very brief indication may be added of points of historical significance.
Of intellectualist logic Francis Herbert Bradley [142] (b. 1846) and Bernard Bosanquet [143] (1848) may be taken as typical exponents. The philosophy of the former concludes to an Absolute by the annulment of contradictions, though the ladder of Hegel is conspicuous by its absence. His metaphysical method, however, is, like Herbart’s, not identifiable with his logic, and the latter has for its central characteristic its thorough restatement of the logical forms traditional in language and the text-books, in such a way as to harmonize with the doctrine of a reality whose organic unity is all-inclusive. The thorough recasting that this involves, even of the thought of the masters when it occasionally echoes them, has resulted in a phrasing uncouth to the ear of the plain man with his world of persons and things in which the former simply think about the latter, but it is fundamentally necessary for Bradley’s purpose. The negative judgment, for example, cannot be held in one and the same undivided act to presuppose the unity of the real, project an adjective as conceivably applicable to it and assert its rejection. We need, therefore, a restatement of it. With Bradley reality is the one subject of all judgment immediate or mediate. The act of judgment “which refers an ideal content (recognized as such) to a reality beyond the act” is the unit for logic. Grammatical subject and predicate necessarily both fall under the rubric of the adjectival, that is, within the logical idea or ideal content asserted. This is a meaning or universal, which can have no detached or abstract self-subsistence. As found in judgment it may exhibit differences within itself, but it is not two, but one, an articulation of unity, not a fusion, which could only be a confusion, of differences. With a brilliant subtlety Bradley analyses the various types of judgment in his own way, with results that must be taken into account by all subsequent logicians of this type.
The view of inference with which he
complements it is only less satisfactory because of a failure to distinguish the principle of nexus in syllogism from its traditional formulation and rules, and because he is hampered by the intractability which he finds in certain forms of relational construction. Bosanquet had the advantage that his logic was a work of a slightly later date. He is, perhaps, more able than Bradley has shown himself to use material from alien sources and to penetrate to what is of value in the thought of writers from whom, whether on the whole or on particular issues, he differs. He treats the book-tradition, however, a debt to which, nowadays inevitable, he is generous in acknowledging, [144] with a judicious exercise of freedom in adaptation, i.e. constructively as datum, never eclectically. In his fundamental theory of judgment his obligation is to Bradley. It is to Lotze, however, that he owes most in the characteristic feature of his logic, viz., the systematic development of the types of judgment, and inference from less adequate to more adequate forms. His fundamental continuity with Bradley may be illustrated by his definition of inference. “Inference is the indirect reference to reality of differences within a universal, by means of the exhibition of this universal in differences directly referred to reality.” [145] Bosanquet’s Logic will long retain its place as an authoritative exposition of logic of this type. Of epistemological logic in one sense of the phrase Lotze is still to be regarded as a typical exponent. Of another type Chr. Sigwart (q.v.) may be named as representative. Sigwart’s aim was “to reconstruct logic from the point of view of methodology.” His problem was the claim to arrive at propositions universally valid, and so true of the object, whosoever the individual thinker. His solution,
within the Kantian circle of ideas, was that such principles as the Kantian principle of causality were justified as “postulates of the endeavour after complete knowledge.” “What Kant has shown is not that irregular fleeting changes can never be the object of consciousness, but only that the ideal consciousness of complete science would be impossible without the knowledge of the necessity of all events.” [146] “The universal presuppositions which form the outline of our ideal of knowledge are not so much laws which the understanding prescribes to nature ... as laws which the understanding lays down for its own regulation in its investigation and consideration of nature. They are a priori because no experience is sufficient to reveal or confirm them in unconditional universality; but they are a priori ... only in the sense of presuppositions without which we should work with no hope of success and merely at random and which therefore we must believe.” Finally they are akin to our ethical principles. With this coheres his dictum, with its far-reaching consequences for the philosophy of induction, that “the logical justification of the inductive process rests upon the fact that it is an inevitable postulate of our effort after knowledge, that the given is necessary, and can be known as proceeding from its grounds according to universal laws.” [147] It is characteristic of Sigwart’s point of view that he acknowledges obligation to Mill as well as to Ueberweg. The transmutation of Mill’s induction of inductions into a postulate is an advance of which the psychological school of logicians have not been slow to make use. The comparison of Sigwart with Lotze is instructive, in regard both to their agreement and their divergence as showing the range of the epistemological formula. Of the formal-symbolic logic all that falls to be said here is, that from the point of view of logic as a whole, it is to be regarded as a
legitimate praxis as long as it shows itself aware of the sense in which alone form is susceptible of abstraction, and is aware that in itself it offers no solution of the logical problem. “It is not an algebra,” said Kant [148] of his technical logic, and the kind of support lent recently to symbolic logic by the Gegenstandstheorie identified with the name of Alexius Meinong (b. 1853) [149] is qualified by the warning that the real activity of thought tends to fall outside the calculus of relations and to attach rather to the subsidiary function of denoting. The future of symbolic logic as coherent with the rest of logic, in the sense which the word has borne throughout its history, seems to be bound up with the question of the nature of the analysis that lies behind the symbolism, and of the way in which this is justified in the setting of a doctrine of validity. The “theory of the object,” itself, while affecting logic alike in the formal and in the psychological conception of it very deeply, does not claim to be regarded as logic or a logic, apart from a setting supplied from elsewhere. Finally we have a logic of a type fundamentally psychological, if it be not more properly characterized as a psychology which claims to cover the whole field of philosophy, including the logical field. The central and organizing principle of this is that knowledge is in genesis, that the genesis takes place in the medium of individual minds, and that this fact implies that there is a necessary reference throughout to interests or purposes of the subject which thinks because it wills and acts. Historically this doctrine was formulated as the declaration of independence of the insurgents in revolt against the pretensions of absolutist logic. It drew for support upon the psychological movement that begins with Fries and Herbart. It has been chiefly indebted to writers, who were not, or were not primarily, logicians, to Avenarius, for example, for the law of the
economy of thought, to Wundt, whose system, and therewith his logic, [150] is a pendant to his psychology, for the volitional character of judgment, to Herbert Spencer and others. A judgment is practical, and not to be divorced without improper abstraction from the purpose and will that informs it. A concept is instrumental to an end beyond itself, without any validity other than its value for action. A situation involving a need of adaptation to environment arises and the problem it sets must be solved that the will may control environment and be justified by success. Truth is the improvised machinery that is interjected, so far as this works. It is clear that we are in the presence of what is at least an important half-truth, which intellectualism with its statics of the rational order viewed as a completely articulate system has tended to ignore. It throws light on many phases of the search for truth, upon the plain man’s claim to start with a subject which he knows whose predicate which he does not know is still to be developed, or again upon his use of the negative form of judgment, when the further determination of his purposive system is served by a positive judgment from without, the positive content of which is yet to be dropped as irrelevant to the matter in hand. The movement has, however, scarcely developed its logic [151] except as polemic. What seems clear is that it cannot be the whole solution. While man must confront nature from the human and largely the practical standpoint, yet his control is achieved only by the increasing recognition of objective controls. He conquers by obedience. So truth works and is economical because it is truth. Working is proportioned to inner coherence. It is well that the view should be developed into all its consequences. The result will be to limit it, though perhaps also to justify it, save in its claim to reign alone.
  • 72. There is, perhaps, an increasing tendency to recognize that the organism of knowledge is a thing which from any single viewpoint must be seen in perspective. It is of course a postulate that all truths harmonize, but to give the harmonious whole in a projection in one plane is an undertaking whose adequacy in one sense involves an inadequacy in another. No human architect can hope to take up in succession all essential points of view in regard to the form of knowledge or to logic. “The great campanile is still to finish.”
Bibliography.—Historical: No complete history of logic in the sense in which it is to be distinguished from theoretical philosophy in general has as yet been written. The history of logic is indeed so little intelligible apart from constant reference to tendencies in philosophical development as a whole, that the historian, when he has made the requisite preparatory studies, inclines to essay the more ambitious task. Yet there are, of course, works devoted to the history of logic proper. Of these Prantl’s Geschichte der Logik im Abendlande (4 vols., 1855-1870), which traces the rise, development and fortunes of the Aristotelian logic to the close of the middle ages, is monumental. Next in importance are the works of L. Rabus, Logik und Metaphysik, i. (1868) (pp. 123-242 historical, pp. 453-518 bibliographical, pp. 514 sqq. a section on apparatus for the study of the history of logic), Die neuesten Bestrebungen auf dem Gebiete der Logik bei den Deutschen (1880), Logik (1895), especially for later writers § 17. Ueberweg’s System der Logik und Geschichte der logischen Lehren (4th ed. and last revised by the author, 1874, though it has been reissued later, Eng. trans., 1871) is alone to be named with these. Harms’ posthumously published Geschichte der Logik (1881) (Die Philosophie in ihrer Geschichte, ii.) was completed by the author only so far as Leibnitz. Blakey’s Historical Sketch of Logic (1851), though, like all this writer’s works, closing with a bibliography of some pretensions, is now negligible. Franck, Esquisse d’une histoire de la logique (1838) is the chief French contribution to the subject as a whole. Of contributions towards the history of special periods or schools of logical thought the list, from the opening chapters of
  • 74. Ramus’s Scholae Dialecticae (1569) downwards (v. Rabus loc. cit.) would be endless. What is of value in the earlier works has now been absorbed. The System der Logik (1828) of Bachmann (a Kantian logician of distinction) contains a historical survey (pp. 569-644), as does the Denklehre (1822) of van Calker (allied in thought to Fries) pp. 12 sqq.; Eberstein’s Geschichte der Logik und Metaphysik bei den Deutschen von Leibniz bis auf gegenwärtige Zeit (latest edition, 1799) is still of importance in regard to logicians of the school of Wolff and the origines of Kant’s logical thought. Hoffmann, the editor and disciple of von Baader, published Grundzüge einer Geschichte der Begriffe der Logik in Deutschland von Kant bis Baader (1851). Wallace’s prolegomena and notes to his Logic of Hegel (1874, revised and augmented 1892-1894) are of use for the history and terminology, as well as the theory. Riehl’s article entitled Logik in Die Kultur der Gegenwart, vi. 1. Systematische Philosophie (1907), is excellent, and touches on quite modern developments. Liard, Les Logiciens Anglais Contemporains (5th ed., 1907), deals only with the 19th-century inductive and formal-symbolic logicians down to Jevons, to whom the book was originally dedicated. Venn’s Symbolic Logic (1881) gave a careful history and bibliography of that development. The history of the more recent changes is as yet to be found only in the form of unshaped material in the pages of review and Jahresbericht. (H. W. B.*) 1 Cf. Heidel, “The Logic of the Pre-Socratic Philosophy,” in Dewey’s Studies in Logical Theory (Chicago, 1903). 2 Heraclitus, Fragmm. 107 (Diels, Fragmente der Vorsokratiker) and 2, on which see Burnet, Early Greek Philosophy, p. 153 note (ed. 2).
  • 75. 3 e.g. Diog. Laërt. ix. 25, from the lost Sophistes of Aristotle. 4 Plato and Platonism, p. 24. 5 Nothing is. If anything is, it cannot be known. If anything is known it cannot be communicated. 6 Metaphys. μ. 1078b 28 sqq. 7 Cf. Arist. Top. θ. i. 1 ad fin. 8 For whom see Dümmler, Antisthenica (1882, reprinted in his Kleine Schriften, 1901). 9 Aristotle, Metaphys. 1024b 32 sqq. 10 Plato, Theaetetus, 201 E. sqq., where, however, Antisthenes is not named, and the reference to him is sometimes doubted. But cf. Aristotle, Met. H 3. 1043b 24-28. 11 Diog. Laërt. ii. 107. 12 Aristotle, An. Pr. i. 31, 46a 32 sqq.; cf. 91b 12 sqq. 13 Athenaeus ii. 59c. See Usener, Organisation der wissenschaftl. Arbeit (1884; reprinted in his Vorträge und Aufsätze, 1907). 14 Socrates’ reference of a discussion to its presuppositions (Xenophon, Mem. iv. 6, 13) is not relevant for the history of the terminology of induction. 15 Theaetetus, 186c. 16 Timaeus, 37a, b (quoted in H. F. Carlill’s translation of the Theaetetus, p. 60). 17 Theaetetus, 186d. 18 Sophistes, 253d. 19 Ib. id.; cf. Theaetetus, 197d. 20 Aristotle, de An. 430b 5, and generally iii. 2, iii. 5.
  • 76. 21 For Plato’s Logic, the controversies as to the genuineness of the dialogues may be treated summarily. The Theaetetus labours under no suspicion. The Sophistes is apparently matter for animadversion by Aristotle in the Metaphysics and elsewhere, but derives stronger support from the testimonies to the Politicus which presumes it. The Politicus and Philebus are guaranteed by the use made of them in Aristotle’s Ethics. The rejection of the Parmenides would involve the paradox of a nameless contemporary of Plato and Aristotle who was inferior as a metaphysician to neither. No other dialogue adds anything to the logical content of these. Granted their genuineness, the relative dating of three of them is given, viz. Theaetetus, Sophistes and Politicus in the order named. The Philebus seems to presuppose Politicus, 283-284, but if this be an error, it will affect the logical theory not at all. There remains the Parmenides. It can scarcely be later than the Sophistes. The antinomies with which it concludes are more naturally taken as a prelude to the discussion of the Sophistes than as an unnecessary retreatment of the doctrine of the one and the many in a more negative form. It may well be earlier than the Theaetetus in its present form. The stylistic argument shows the Theaetetus relatively early. The maturity of its philosophic outlook tends to give it a place relatively advanced in the Platonic canon. To meet the problem here raised, the theory has been devised of an earlier and a later version. The first may have linked on to the series of Plato’s dialogues of search, and to put the Parmenides before it is impossible. The second, though it might still have preceded the Parmenides might equally well have followed the negative criticism of that dialogue, as the beginning of reconstruction. For Plato’s logic this question only has interest on account of the introduction of an Ἀριστοτέλης in a non-speaking part in the Parmenides. 
If this be pressed as suggesting that the philosopher Aristotle was already in full activity at the date of writing, it is of importance to know what Platonic dialogues were later than the début of his critical pupil. On the stylistic argument as applied to Platonic controversies Janell’s Quaestiones Platonicae (1901) is important. On the whole question of genuineness and dates of the dialogues, H. Raeder, Platons
philosophische Entwickelung (1905), gives an excellent conspectus of the views held and the grounds alleged. See also Plato. 22 E.g. that of essence and accident. Republic, 454. 23 E.g. the discussion of correlation, ib. 437 sqq. 24 Politicus, 285d. 25 Sophistes, 261c sqq. 26 E.g. in Nic. Eth. i. 6. 27 Philebus, 16d. 28 Principal edition still that of Waitz, with Latin commentary (2 vols., 1844-1846). Among the innumerable writers who have thrown light upon Aristotle’s logical doctrine, St Hilaire, Trendelenburg, Ueberweg, Hamilton, Mansel, G. Grote may be named. There are, however, others of equal distinction. Reference to Prantl, op. cit., is indispensable. Zeller, Die Philosophie der Griechen, ii. 2, “Aristoteles” (3rd ed., 1879), pp. 185-257 (there is an Eng. trans.), and Maier, Die Syllogistik des Aristoteles (2 vols., 1896, 1900) (some 900 pp.), are also of first-rate importance. 29 Sophist. Elench. 184, espec. b 1-3, but see Maier, loc. cit. i. 1. 30 References such as 18b 12 are the result of subsequent editing and prove nothing. See, however, Aristotle. 31 Adrastus is said to have called them πρὸ τῶν τοπικῶν. 32 Metaphys. E. 1. 33 De Part. Animal. A. 1, 639a 1 sqq.; cf. Metaphys. 1005b 2 sqq. 34 De Interpretatione 16a sqq. 35 De Interpretatione 16a 24-25. 36 Ib. 18a 28 sqq. 37 Ib. 19a 28-29.
38 As shown e.g. by the way in which the relativity of sense and the object of sense is conceived, 7b 35-37. 39 Topics 101a 27 and 36-b 4. 40 Topics 100. 41 Politics 1282a 1 sqq. 42 103b 21. 43 Topics 160a 37-b 5. 44 This is the explanation of the formal definition of induction, Prior Analytics, ii. 23, 68b 15 sqq. 45 25b 36. 46 Prior Analytics, i. 1. 24a 18-20, Συλλογισμὸς δὲ ἐστὶ λόγος ἐν ᾧ τεθέντων τινῶν ἕτερόν τι τῶν κειμένων ἐξ ἀνάγκης συμβαίνει τῷ ταῦτα εἶναι. The equivalent previously in Topics 100a 25 sqq. 47 Prior Analytics, ii. 21; Posterior Analytics, i. 1. 48 67a 33-37, μὴ συνθεωρῶν τὸ καθ᾽ ἑκάτερον. 49 67a 39-63. 50 79a 4-5. 51 24b 10-11. 52 Posterior Analytics, i. 4. καθ᾽ αὑτὸ means (1) contained in the definition of the subject; (2) having the subject contained in its definition, as being an alternative determination of the subject, crooked, e.g. is per se of line; (3) self-subsistent; (4) connected with the subject as consequent to ground. It needs stricter determination therefore. 53 73b 26 sqq., 74a 37 sqq. 54 90b 16. 55 Metaphys. Z. 12, H. 6 ground this formula metaphysically. 56 94a 12, 75b 32.
  • 79. 57 90a 6. Cf. Ueberweg, System der Logik, § 101. 58 78a 30 sqq. 59 Topics, 101b 18, 19. 60 Posterior Analytics, ii. 13. 61 Posterior Analytics, ii. 16. 62 Posterior Analytics, i. 13 ad. fin., and i. 27. The form which a mathematical science treats as relatively self-subsistent is certainly not the constitutive idea. 63 Posterior Analytics, i. 3. 64 Posterior Analytics, ii. 19. 65 De Anima, 428b 18, 19. 66 Prior Analytics, i. 30, 46a 18. 67 Topics, 100b 20, 21. 68 Topics, 101a 25, 36-37, b1-4, c. 69 Zeller (loc. cit. p. 194), who puts this formula in order to reject it. 70 Metaphys. Δ 1, 1013a 14. 71 Posterior Analytics, 72a 16 seq. 72 Posterior Analytics, 77a 26, 76a 37 sqq. 73 Metaphys. Γ. 74 Posterior Analytics, ii. 19. 75 de Anima, iii. 4-6. 76 Metaphys. M. 1087a 10-12; Zeller loc. cit. 304 sqq.; McLeod Innes, The Universal and Particular in Aristotle’s Theory of Knowledge (1886). 77 Topics, 105a 13. 78 Metaphys. 995a 8.
  • 80. 79 E.g., Topics, 108b 10, “to induce” the universal. 80 Posterior Analytics, ii. 19, 100b 3, 4. 81 Topics, i. 18, 108b 10. 82 Prior Analytics, ii. 23. 83 Παράδειγμα, Prior Analytics, ii. 24. 84 Sigwart, Logik, Eng. trans. vol. ii. p. 292 and elsewhere. 85 Ueberweg, System, § 127, with a ref. to de Partibus Animalium, 667a. 86 See 67a 17 ἐξ ἁπάντων τῶν ἀτόμων. 87 Ἐπιφορά. Ἐπι = “in” as in ἐπαγωγὴ, inductio, and -φορὰ = - ferentia, as in διαφορὰ, differentia. 88 Diog. Laërt. x. 33 seq.; Sext. Emp. Adv. Math. vii. 211. 89 Diog. Laërt. x. 87; cf. Lucretius, vi. 703 sq., v. 526 sqq. (ed. Munro). 90 Sextus Empiricus, Pyrrhon. Hypotyp. ii. 195, 196. 91 Sextus, op. cit. ii. 204. 92 Op. cit. iii. 17 sqq., and especially 28. 93 The point is raised by Aristotle, 95A. 94 See Jourdain, Recherches critiques sur l’âge et l’origine des traductions latines d’Aristote (1843). 95 See E. Cassirer, Das Erkenntnisproblem, i. 134 seq., and the justificatory excerpts, pp. 539 sqq. 96 See Riehl in Vierteljahrschr. f. wiss. Philos. (1893). 97 Bacon, Novum Organum, ii. 22, 23; cf. also Aristotle, Topics i. 12. 13, ii. 10. 11 (Stewart, ad Nic. Eth. 1139b 27) and Sextus Empiricus, Pyrr. Hypot. iii. 15.
  • 81. 98 Bacon’s Works, ed. Ellis and Spedding, iii. 164-165. 99 A notable formula of Bacon’s Novum Organum ii. 4 § 3 turns out, Valerius Terminus, cap. 11, to come from Aristotle, Post. An. i. 4 via Ramus. See Ellis in Bacon’s Works, iii. 203 sqq. 100 De Civitate Dei, xi. 26. “Certum est me esse, si fallor.” 101 Cf. Plato, Republic, 381E seq. 102 Elementa Philosophiæ, i. 3. 20, i. 6. 17 seq. 103 Hobbes, Elementa Philosophiæ, i. 1. 5. 104 Id. ib. i. 6. 16. 105 Id. ib. i. 4. 8; cf. Locke’s Essay of Human Understanding, iv. 17. 106 Id. Leviathan, i. 3. 107 Id. Elem. Philos. i. 6. 10. 108 Condillac, Langue des Calculs, p. 7. 109 Locke, Essay, iii. 3. 110 Id. ib. iv. 17. 111 Loc. cit. § 8. 112 Id. ib. iv. 4, §§ 6 sqq. 113 Berkeley, Of the Principles of Human Knowledge, § 142. 114 Hume, Treatise of Human Nature, i. 1. 7 (from Berkeley, op. cit., introd., §§ 15-16). 115 Essay, iv. 17, § 3. 116 Hume, Treatise of Human Nature, i. 3. 15. 117 Mill, Examination of Sir William Hamilton’s Philosophy, cap. 17. 118 Cf. Mill, Autobiography, p. 159. “I grappled at once with the problem of Induction, postponing that of Reasoning.” Ib. p. 182 (when he
  • 82. is preoccupied with syllogism), “I could make nothing satisfactory of Induction at this time.” 119 Autobiography, p. 181. 120 The insight, for instance, of F. H. Bradley’s criticism, Principles of Logic, II. ii. 3, is somewhat dimmed by a lack of sympathy due to extreme difference in the point of view adopted. 121 Bacon, Novum organum, i. 100. 122 Russell’s Philosophy of Leibnitz, capp. 1-5. 123 See especially remarks on the letter of M. Arnauld (Gerhardt’s edition of the philosophical works, ii. 37 sqq.). 124 Gerhardt, vi. 612, quoted by Russell, loc. cit., p. 19. 125 Ibid., ii. 62, Russell, p. 33. 126 Spinoza, ed. van Vloten and Land, i. 46 (Ethica, i. 11). 127 Nouveaux essais, iv. 2 § 9, 17 § 4 (Gerhardt v. 351, 460). 128 Critique of Judgment, Introd. § 2, ad. fin. (Werke, Berlin Academy edition, vol. v. p. 176, l. 10). 129 Kant’s Introduction to Logic and his Essay on the Mistaken Subtlety of the Four Figures, trans. T. K. Abbott (1885). 130 Loc. cit., p. 11. 131 Or antitheses. Kant follows, for example, a different line of cleavage between form and content from that developed between thought and the “given.” And these are not his only unresolved dualities, even in the Critique of Pure Reason. For the logical inquiry, however, it is permissible to ignore or reduce these differences. The determination too of the sense in which Kant’s theory of knowledge involves an unresolved antithesis is for the logical purpose necessary so far only as it throws light upon his logic and his influence upon logical developments. Historically the question of the extent to
  • 83. which writers adopted the dualistic interpretation or one that had the like consequences is of greater importance. It may be said summarily that Kant holds the antithesis between thought and “the given” to be unresolved and within the limits of theory of knowledge irreducible. The dove of thought falls lifeless if the resistant atmosphere of “the given” be withdrawn (Critique of Pure Reason, ed. 2 Introd. Kant’s Werke, ed. of the Prussian Academy, vol. iii. p. 32, ll. 10 sqq.). Nevertheless the thing-in-itself is a problematic conception and of a limiting or negative use merely. He “had woven,” according to an often quoted phrase of Goethe, “a certain sly element of irony into his method; ... he pointed as it were with a side gesture beyond the limits which he himself had drawn.” Thus (loc. cit. p. 46, ll. 8, 9) he declares that “there are two lineages united in human knowledge, which perhaps spring from a common stock, though to us unknown—namely sense and understanding.” Some indication of the way in which he would hypothetically and speculatively mitigate the antithesis is perhaps afforded by the reflection that the distinction of the mental and what appears as material is an external distinction in which the one appears outside to the other. “Yet what as thing-in-itself lies back of the phenomenon may perhaps not be so wholly disparate after all” (ib. p. 278, ll. 26 sqq.). 132 Critique of Judgment, Introd. § 2 (Werke, v., 276 ll. 9 sqq.); cf. Bernard’s “Prolegomena” to his translation of this, (pp. xxxviii. sqq.). 133 Die Logik, insbesondere die Analytik (Schleswig, 1825). August Detlev Christian Twesten (1789-1876), a Protestant theologian, succeeded Schleiermacher as professor in Berlin in 1835. 134 See Sir William Hamilton: The Philosophy of Perception, by J. Hutchison Stirling. 135 Hauptpunkte der Logik, 1808 (Werke, ed. Hartenstein, i. 465 sqq.), and specially Lehrbuch der Einleitung in die Philosophie (1813), and subsequently §§ 34 sqq. (Werke, i. 
77 sqq.). 136 See Ueberweg, System of Logic and History of Logical Doctrines, § 34.
  • 84. 137 Drei Bücher der Logik, 1874 (E.T., 1884). The Book on Pure Logic follows in essentials the line of thought of an earlier work (1843). 138 Logic, Eng. trans. 35 ad. fin. 139 Logic, Introd. § ix. 140 For whom see Höffding, History of Modern Philosophy, Eng. trans., vol. ii. pp. 122 sqq.; invaluable for the logical methods of modern philosophers. 141 Wissenschaft der Logik (1812-1816), in course of revision at Hegel’s death in 1831 (Werke, vols. iii.-v.), and Encyklopädie der philosophischen Wissenschaften, i.; Die Logik (1817; 3rd ed., 1830); Werke, vol. vi., Eng. trans., Wallace (2nd ed., 1892). 142 The Principles of Logic (1883). 143 Logic, or The Morphology of Thought (2 vols., 1888). 144 Logic, Pref. pp. 6 seq. 145 Id. vol. ii. p. 4. 146 Logik (1873, 1889), Eng. trans. ii. 17. 147 Op. cit. ii. 289. 148 Introd. to Logic., trans. Abbott, p. 10. 149 Ueber Annahmen (1902, c.). 150 Logik (1880, and in later editions). 151 Yet see Studies in Logic, by John Dewey and others (1903).
LOGOCYCLIC CURVE, STROPHOID or FOLIATE, a cubic curve generated by increasing or diminishing the radius vector of a variable point Q on a straight line AB by the distance QC of the point from the foot of the perpendicular drawn from the origin to the fixed line. The polar equation is r cos θ = a(1 ± sin θ), the upper sign referring to the case when the vector is increased, the lower when it is diminished. Both branches are included in the Cartesian equation (x² + y²)(2a − x) = a²x, where a is the distance of the line from the origin. If we take for axes the fixed line and the perpendicular through the initial point, the equation takes the form y √(a − x) = x √(a + x). The curve resembles the folium of Descartes, and has a node between x = 0, x = a, and two branches asymptotic to the line x = 2a.

LOGOGRAPHI (λόγος, γράφω, writers of prose histories or tales), the name given by modern scholars to the Greek historiographers before Herodotus.1 Thucydides, however, applies the term to all his own predecessors, and it is therefore usual to make a distinction between the older and the younger logographers. Their representatives, with one exception, came from Ionia and its
islands, which from their position were most favourably situated for the acquisition of knowledge concerning the distant countries of East and West. They wrote in the Ionic dialect, in what was called the unperiodic style, and preserved the poetic character of their epic model. Their criticism amounts to nothing more than a crude attempt to rationalize the current legends and traditions connected with the founding of cities, the genealogies of ruling families, and the manners and customs of individual peoples. Of scientific criticism there is no trace whatever. The first of these historians was probably Cadmus of Miletus (who lived, if at all, in the early part of the 6th century), the earliest writer of prose, author of a work on the founding of his native city and the colonization of Ionia (so Suïdas); Pherecydes of Leros, who died about 400, is generally considered the last. Mention may also be made of the following: Hecataeus of Miletus (550-476); Acusilaus of Argos,2 who paraphrased in prose (correcting the tradition where it seemed necessary) the genealogical works of Hesiod in the Ionic dialect; he confined his attention to the prehistoric period, and made no attempt at a real history; Charon of Lampsacus (c. 450), author of histories of Persia, Libya, and Ethiopia, of annals (ὦροι) of his native town with lists of the prytaneis and archons, and of the chronicles of Lacedaemonian kings; Xanthus of Sardis in Lydia (c. 450), author of a history of Lydia, one of the chief authorities used by Nicolaus of Damascus (fl. 
during the time of Augustus); Hellanicus of Mytilene; Stesimbrotus of Thasos, opponent of Pericles and reputed author of a political pamphlet on Themistocles, Thucydides and Pericles; Hippys and Glaucus, both of Rhegium, the first the author of histories of Italy and Sicily, the second of a treatise on ancient poets and musicians, used by Harpocration and Plutarch; Damastes of Sigeum, pupil of Hellanicus, author of genealogies of the combatants before Troy (an
ethnographic and statistical list), of short treatises on poets, sophists, and geographical subjects.

On the early Greek historians, see G. Busolt, Griechische Geschichte (1893), i. 147-153; C. Wachsmuth, Einleitung in das Studium der alten Geschichte (1895); A. Schäfer, Abriss der Quellenkunde der griechischen und römischen Geschichte (ed. H. Nissen, 1889); J. B. Bury, Ancient Greek Historians (1909), lecture i.; histories of Greek literature by Müller-Donaldson (ch. 18) and W. Mure (bk. iv. ch. 3), where the little that is known concerning the life and writings of the logographers is exhaustively discussed. The fragments will be found, with Latin notes, translation, prolegomena, and copious indexes, in C. W. Müller’s Fragmenta historicorum Graecorum (1841-1870). See also Greece: History, Ancient (section, “Authorities”).

1 The word is also used of the writers of speeches for the use of the contending parties in the law courts, who were forbidden to employ advocates.
2 There is some doubt as to whether this Acusilaus was of Peloponnesian or Boeotian Argos. Possibly there were two of the name. For an example of the method of Acusilaus see Bury, op. cit. p. 19.
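The two equations given for the logocyclic curve above (the polar form r cos θ = a(1 ± sin θ) and the Cartesian form (x² + y²)(2a − x) = a²x) can be checked against each other numerically. The following sketch is not part of the article; it assumes a = 1 and the upper sign of the polar equation, generates points from the polar form, and verifies that each satisfies the Cartesian form.

```python
import math

# Scale parameter: distance of the fixed line from the origin (assumed a = 1).
a = 1.0

# Sample the polar form r*cos(theta) = a*(1 + sin(theta)) over an interval
# where cos(theta) != 0, and confirm that each generated point (x, y)
# satisfies the Cartesian form (x^2 + y^2)*(2a - x) = a^2 * x.
for k in range(1, 20):
    theta = -math.pi / 2 + k * math.pi / 20
    r = a * (1 + math.sin(theta)) / math.cos(theta)
    x, y = r * math.cos(theta), r * math.sin(theta)
    assert abs((x * x + y * y) * (2 * a - x) - a * a * x) < 1e-9
```

The agreement follows from the substitutions x = r cos θ, y = r sin θ: the polar equation gives x = a(1 + sin θ), from which r² = a²x/(2a − x), which is the Cartesian equation with r² = x² + y².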
LOGOS (λόγος), a common term in ancient philosophy and theology. It expresses the idea of an immanent reason in the world, and, under various modifications, is met with in Indian, Egyptian and Persian systems of thought. But the idea was developed mainly in Hellenic and Hebrew philosophy, and we may distinguish the following stages:

1. The Hellenic Logos.—To the Greek mind, which saw in the world a κόσμος (ordered whole), it was natural to regard the world as the product of reason, and reason as the ruling principle in the world. So we find a Logos doctrine more or less prominent from the dawn of Hellenic thought to its eclipse. It rises in the realm of physical speculation, passes over into the territory of ethics and theology, and makes its way through at least three well-defined stages. These are marked off by the names of Heraclitus of Ephesus, the Stoics and Philo. It acquires its first importance in the theories of Heraclitus (6th century b.c.), who, trying to account for the aesthetic order of the visible universe, broke away to some extent from the purely physical conceptions of his predecessors and discerned at work in the cosmic process a λόγος analogous to the reasoning power in man. On the one hand the Logos is identified with γνώμη and connected with δίκη, which latter seems to have the function of correcting deviations from the eternal law that rules in things. On the other hand it is not positively distinguished either from the ethereal fire, or from the εἱμαρμένη and the ἀνάγκη according to which all things occur. Heraclitus holds that nothing material can be thought of without this Logos, but he does not conceive the Logos itself to be immaterial. Whether it is regarded as in any sense possessed of intelligence and consciousness is a question variously answered. But
there is most to say for the negative. This Logos is not one above the world or prior to it, but in the world and inseparable from it. Man’s soul is a part of it. It is relation, therefore, as Schleiermacher expresses it, or reason, not speech or word. And it is objective, not subjective, reason. Like a law of nature, objective in the world, it gives order and regularity to the movement of things, and makes the system rational.1

The failure of Heraclitus to free himself entirely from the physical hypotheses of earlier times prevented his speculation from influencing his successors. With Anaxagoras a conception entered which gradually triumphed over that of Heraclitus, namely, the conception of a supreme, intellectual principle, not identified with the world but independent of it. This, however, was νοῦς, not Logos. In the Platonic and Aristotelian systems, too, the theory of ideas involved an absolute separation between the material world and the world of higher reality, and though the term Logos is found the conception is vague and undeveloped. With Plato the term selected for the expression of the principle to which the order visible in the universe is due is νοῦς or σοφία, not λόγος. It is in the pseudo-Platonic Epinomis that λόγος appears as a synonym for νοῦς. In Aristotle, again, the principle which sets all nature under the rule of thought, and directs it towards a rational end, is νοῦς, or the divine spirit itself; while λόγος is a term with many senses, used as more or less identical with a number of phrases, οὗ ἕνεκα, ἐνέργεια, ἐντελέχεια, οὐσία, εἶδος, μορφή, &c.

In the reaction from Platonic dualism, however, the Logos doctrine reappears in great breadth. It is a capital element in the system of the Stoics. With their teleological views of the world they naturally predicated an active principle pervading it and determining it. This
operative principle is called both Logos and God. It is conceived of as material, and is described in terms used equally of nature and of God. There is at the same time the special doctrine of the λόγος σπερματικός, the seminal Logos, or the law of generation in the world, the principle of the active reason working in dead matter. This parts into λόγοι σπερματικοί, which are akin, not to the Platonic ideas, but rather to the λόγοι ἔνυλοι of Aristotle. In man, too, there is a Logos which is his characteristic possession, and which is ἐνδιάθετος, as long as it is a thought resident within his breast, but προφορικός when it is expressed as a word. This distinction between Logos as ratio and Logos as oratio, so much used subsequently by Philo and the Christian fathers, had been so far anticipated by Aristotle’s distinction between the ἔξω λόγος and the λόγος ἐν τῇ ψυχῇ. It forms the point of attachment by which the Logos doctrine connected itself with Christianity. The Logos of the Stoics (q.v.) is a reason in the world gifted with intelligence, and analogous to the reason in man.

2. The Hebrew Logos.—In the later Judaism the earlier anthropomorphic conception of God and with it the sense of the divine nearness had been succeeded by a belief which placed God at a remote distance, severed from man and the world by a deep chasm. The old familiar name Yahweh became a secret; its place was taken by such general expressions as the Holy, the Almighty, the Majesty on High, the King of Kings, and also by the simple word “Heaven.” Instead of the once powerful confidence in the immediate presence of God there grew up a mass of speculation regarding on the one hand the distant future, on the other the distant past. Various attempts were made to bridge the gulf between God and man, including the angels, and a number of other hybrid forms of which it is hard to say whether they are personal beings or