UNIVERSITY OF CALIFORNIA
Los Angeles
The Spell-Out Parameters:
A Minimalist Approach to Syntax
A dissertation submitted in partial satisfaction of the
requirements for the degree Doctor of Philosophy
in Linguistics
by
Andi Wu
1994
The dissertation of Andi Wu is approved.
Terry K. Au
D. Stott Parker
Dominique Sportiche
Edward P. Stabler, Jr., Committee Chair
University of California, Los Angeles
1994
Contents

1 Introduction

2 The Spell-Out Parameters
2.1 The Notion of Spell-Out
2.1.1 The Minimalist Program
2.1.2 The Timing of Spell-Out
2.1.3 The S-Parameters
2.2 The S(M)-Parameter and Word Order
2.2.1 An Alternative Approach to Word Order
2.2.2 The Invariant X-Structure Hypothesis (IXSH)
2.2.3 Modifying the IXSH
2.3 Interaction of S(F)- and S(M)-Parameters
2.4 Summary

3 An Experimental Grammar
3.1 The Categorial and Feature Systems
3.1.1 Categories
3.1.2 Features
3.1.3 Features and Categories
3.1.4 The Spell-Out of Features
3.2 The Computational System
3.2.1 Lexical Projection
3.2.2 Generalized Transformation
3.2.3 Move-α
3.3 Summary

4 The Parameter Space
4.1 The Parameter Space of S(M)-Parameters
4.1.1 An Initial Typology
4.1.2 Further Differentiation
4.1.3 Some Set-Theoretic Observations
4.2 Other Parameters
4.2.1 HD-Parameters
4.2.2 The Spell-Out of Functional Heads
4.2.3 HD-Parameters and Functional Heads
4.2.4 S(F)-Parameters
4.3 Case Studies
4.3.1 English: An SVO Language
4.3.2 Japanese: An SOV Language
4.3.3 Berber: A VSO Language
4.3.4 German: A V2 Language
4.3.5 Chinese: A Head-Final SVO Language
4.3.6 French: A Language with Clitics
4.4 Summary

5 Setting the Parameters
5.1 Basic Assumptions
5.1.1 Assumptions about the Input
5.1.2 Assumptions about the Learner
5.1.3 Assumptions about the Criterion of Successful Learning
5.2 Setting S(M)-Parameters
5.2.1 The Ordering Algorithm
5.2.2 Ordering and Learnability
5.2.3 The Learning Algorithm
5.2.4 Properties of the Learning Algorithm
5.2.5 Learning All Languages in the Parameter Space
5.3 Setting Other Parameters
5.3.1 Setting HD-Parameters
5.3.2 Setting S(F)-Parameters
5.4 Acquiring Little Languages
5.4.1 Acquiring Little English
5.4.2 Acquiring Little Japanese
5.4.3 Acquiring Little Berber
5.4.4 Acquiring Little Chinese
5.4.5 Acquiring Little French
5.4.6 Acquiring Little German
5.5 Summary

6 Parsing with S-Parameters
6.1 Distinguishing Characteristics of the Parser
6.2 A Prolog Implementation
6.2.1 Tree-Building
6.2.2 Feature-Checking
6.2.3 Leaf-Attachment
6.2.4 The Parser in Action
6.2.5 Universal vs. Language-Particular Parsers
6.3 Summary

7 Final Discussion
7.1 Possible Extensions
7.2 Potential Problems
7.3 Concluding Remarks

A Prolog Programs
A.1 pspace.pl
A.2 sets.pl
A.3 order.pl
A.4 sp.pl
A.5 sputil.pl
A.6 sp2.pl
A.7 parser.pl
A.8 parser1.pl

B Parameter Spaces
B.1 P-Space of S(M)-Parameters (1)
B.2 P-Space of S(M)-Parameters (2)
B.3 P-Space of S(M)-Parameters (with Adv)
B.4 P-Space of S(M) & HD Parameters
B.5 P-Space of S(M)-Parameters (with Aux)
B.6 P-Space of S(M) & HD Parameters (with Aux)
B.7 P-Space of S(F)-Parameters

C Partial Ordering of S(M)-Parameter Settings

D Learning Sessions
D.1 Acquiring an Individual Language (1)
D.2 Acquiring an Individual Language (2)
D.3 Acquiring All Languages in the P-Space of S(M)-Parameters
D.4 Acquiring All Languages in the P-Space of S(M)-Parameters (with Adv)
D.5 Parameter Setting with Noisy Input
D.6 Setting S(M), S(F) and HD Parameters in a Particular Language

References
List of Figures

The Derivational Process of MPLT
A Specific Interpretation of the Derivational Process
The Invariant X-bar Tree in Wu (1993)
The Coverage of S-Parameters and HD-Parameters
An Elementary X-bar Tree
Multiple Complements in a Binary Tree
Feature Value Binding in Agr1-P
VP-Projection of a Transitive Verb
GT Substitution
GT Adjunction
Base Tree of a Simple Transitive Sentence
Base Tree of a Simple Intransitive Sentence
Feature-Checking Movements
Head Movement as Substitution
Head Movement as Adjunction
The Position of Often
Head-Initial and Head-Final IPs
Japanese Trees
Berber Trees
German Trees
A Chinese Tree
Parse Tree of an SOV Sentence
Parse Tree of a VSO Sentence
A Simple DP Tree
A Hypothetical PP Tree
A Triple-Agr IP
A More Traditional Tree
List of Tables

The Parameter Space of Wu (1993)
Selectional Rules
Featurized Selectional Rules
Decision Table for the Spell-Out of V and NP
Decision Table for the Spell-Out of L-Features
Decision Table for the Spell-Out of F-Features
ACKNOWLEDGMENTS
First and foremost, I want to thank Ed Stabler, my advisor and committee
chair, whose intellectual insights, financial assistance and personal friendship have
made the writing of this dissertation one of the most memorable events in my life.
He taught me many things in computational linguistics virtually by hand and he
enabled me to see the real motivations for syntactic research. I benefited a great
deal from those hours spent in his office where his constructive criticism of my
work helped me stay closer to the truth. This dissertation would not exist if he
had not come to UCLA.
I am also very grateful to all the other members of my committee. Dominique
Sportiche has been my main source of inspiration in syntactic theory. Quick to
detect the flaws in my model, he has never let me finish our meeting without giving
me a new perspective on my ideas. Nina Hyams has been supportive of my work
ever since my M.A. years. She introduced me into the fields of language acquisition
and language processing and her constant guidance and encouragement have been
essential to my graduate education. Ed Keenan has led me to see both the empirical
and mathematical sides of languages. It was in his class that I began to view
linguistics as a science. I want to thank Stott Parker from Computer Science
and Terry Au from Psychology for listening patiently to my abstract stories and
reminding me of the computational and psychological relevance of my hypotheses.
I also owe intellectual debt to many other professors in our department. Among
them Tim Stowell, Hilda Koopman and Anoop Mahajan deserve special credit.
Their comments and suggestions on this work have been invaluable.
During the course of this project I also benefited from discussions with many
linguists on the east coast and across the Atlantic Ocean. Special thanks are
due to Stephen Crain, Robert Berwick, Mark Johnson, Robert Frank and Edward
Gibson who paid attention to my half-baked ideas and made a number of useful
suggestions. The conversations I had with Aravind Joshi, Ken Wexler, Lyn Frazier,
Amy Weinberg, Craig Thiersch, Luigi Burzio, Christer Platzack, Ian Roberts, Jean
Pollock and David Pesetsky have also been very helpful. I did not have a chance
to meet Richard Kayne, but the papers he sent me and his encouragement have
made me more confident of my project.
Another place I have been getting inspiration from is the Workshop on Theoret­
ical East-Asian Linguistics at UC Irvine. Those monthly discussions have helped
me to know more about my own language. In particular, I want to thank James
Huang who has never failed to respond to my call for help.
I would also like to thank my fellow students at UCLA for their help and
friendship. I owe a lot to Tom Cornell who “lured” me into computational lin-
guistics. In the computer lab I got much help from Karan Wallace, Johnathan
Mead, Claire Chyi and Susan Hirsh. My academic and personal life at UCLA
have also been made more meaningful by Yasushi Zenno, Kuoming Sung, Feng-
shi Liu, Mats Johansson, Sue Inouye, Chris Golston, Cheryl Chan, Emily Sityar,
Harold Crook, Bonnie Chiu, Bill Dolan, Stephan Schuetze-Coburn, Daniel Valois,
Dan Silverman, Tetsuya Sano, Dorit Ben-Shalom, Akira Nakamura, Murat Kural,
Luc Moritz, Jeannette Schaeffer, Seungho Nam, Hyunoo Lee, Jongho Jun, Filippo
Beghelli, Bonny Sands, Abby Kaun, among many others.
I feel very fortunate to have chosen the Department of Linguistics at UCLA
as the home of my graduate study. In addition to the academic nourishment and
financial support I received from this excellent institution, I especially appreciate
the warm and human environment here which makes learning more of a pleasure.
In particular, I want to extend my special thanks to Anna Meyer, John Bulger,
and the three successive Chairs: Paul Schachter, Russ Schuh and Tim Stowell,
without whom my life could be miserable. I am also fortunate to have a chance
to work at Intelligent Text Processing where I learned a lot more about parsing. I
thank Kathy Dahlgren for making it possible for me to have a rewarding job and
finish my dissertation at the same time.
The people who deserve my thanks most are those in my family: my wife
Luoqin Hou whose love, encouragement and hard work have been essential to my
survival; my 5-year-old daughter Cathy Wu who lets me work at the computer
even though she does not think that this is the best way to spend my time; my
parents who encouraged me to pursue learning in a period when knowledge was
despised in China; and my mother-in-law who did everything possible to save my
time. This dissertation is dedicated to them.
VITA

July 11, 1957    Born, Shanghai, China
1982             B.A. in English Language and Literature, Nanjing University, China
1985             M.A. in Language and Linguistics, Nanjing University, China
1985-1986        Lecturer, Nanjing University, China
1987-1991        Teaching Assistant, Dept. of Linguistics and Dept. of East Asian
                 Languages and Cultures, University of California, Los Angeles
1989             M.A. in Linguistics, University of California, Los Angeles
1991-1993        Research Assistant, University of California, Los Angeles
1993-            Programmer, Intelligent Text Processing, Los Angeles
PUBLICATIONS AND PRESENTATIONS
Wu, A. (1986) The Stylistic Effects of Left- and Right-Branching Structures. In
Journal of Nanjing University.
Wu, A. (1991) Center-Embedding and Parsing Constraints. Presented at the
Colloquium of the UCLA Dept. of Linguistics.
Wu, A. (1992) A Computational Approach to ‘Intake’ Ordering in Syntactic
Acquisition. In Proceedings of the Eleventh West Coast Conference on Formal
Linguistics, University of Chicago Press.
Wu, A. (1992) Why Both Top-Down and Bottom-Up: evidence from Chinese.
Presented at the 5th CUNY Conference on Human Sentence Processing.
Wu, A. (1992) Arguments for an Extended Head-Driven Parser. Presented at the
UC Irvine Workshop on East Asian Linguistics.
Wu, A. (1992) Acquiring Word Order Without Word Order Parameters? In
UCLA Working Papers in Psycholinguistics.
Wu, A. (1993) The P-Parameter and the Acquisition of Word Order. Presented
at the 67th annual meeting of the Linguistic Society of America (LSA).
Wu, A. (1993) A Minimalist Universal Parser. In UCLA Occasional Papers in
Linguistics, Vol. 11.
Wu, A. (1993) Parsing DS, SS and LF Simultaneously. Presented at the 6th
CUNY Conference on Human Sentence Processing.
Wu, A. (1993) The S-Parameter. Presented at the 16th GLOW Colloquium.
Wu, A. (1993) Left-Corner Parsers and Head-Driven Parsers. In Linguistics
Abroad, 56, journal of the Institute of Linguistics, Academy of Social
Sciences of China.
ABSTRACT OF THE DISSERTATION
The Spell-Out Parameters:
A Minimalist Approach to Syntax
by
Andi Wu
Doctor of Philosophy in Linguistics
University of California, Los Angeles, 1994
Professor Edward P. Stabler, Jr., Chair
This thesis explores a new parametric syntactic model which is developed from
the notion of Spell-Out in the Minimalist framework (Chomsky 1992). The main
hypothesis is that languages are identical up to the point of Spell-Out: the sets
of movements and morphological features are universal but different languages
can have different word orders and morphological paradigms depending on which
movements or features are visible. We can thus account for a wide range of cross-
linguistic variation by parameterizing the spell-out options.
The model proposed in this thesis has two sets of Spell-Out parameters. The
S(M)-parameters determine which movements occur before Spell-Out in a given
language. By varying the values of these parameters, we get a rich spectrum of
word order phenomena, including all basic word orders, V2, and a variety of scram­
bling. The S(F)-parameters determine which features are morphologically realized.
Different value combinations of these parameters produce different morphological
paradigms. The values of these two sets of parameters can also interact, resulting
in the occurrence of auxiliaries, grammatical particles and expletives.
Computational experiments are conducted on a minimal version of this model
in terms of language typology, language acquisition and language processing. It is
found that the parameter space of this model can accommodate a wide range of
cross-linguistic variation in word order and inflectional morphology. In addition,
all the languages generated in this parameter space are found to be learnable via a
linguistically motivated parameter-setting algorithm. This algorithm incorporates
Chomsky’s (1992) principle of Procrastinate into the standard induction by enu­
meration learning paradigm. It observes the Subset Principle, and consequently
every language can be exactly identified without the need of negative evidence.
Finally, this new parametric system leads to a new conception of parsing. The fact
that the underlying set of movements is universal makes it possible to have parsers
where the chain-building process is uniformly defined. An experimental parser is
presented to illustrate this new possibility. This parser can parse every language in
the parameter space by consulting the parameter settings. A language-particular
parser for each language can be derived from this “universal parser” through partial
evaluation. All these experimental results, though preliminary in nature, indicate
that the line of research suggested in this thesis is worth pursuing.
Chapter 1
Introduction
The goal of this thesis is to explore a new parametric system within the theoretical
framework of Principles and Parameters (P&P theory hereafter) which is repre­
sented by Chomsky (1981a, 1981b, 1982, 1986a, 1986b, 1991, 1992) and many other
works in generative syntax. A basic assumption of this theory is the following:
Children are endowed at birth with a certain kind of grammatical
knowledge called Universal Grammar (UG) which consists of a number
of universal principles along with a number of parameters. Each of
the parameters has a number of possible values and any possible nat­
ural language grammar results from a particular combination of those
parameter values. In acquisition, a child’s task is to figure out the pa­
rameter setting of a given language on the basis of the sentences he
hears in this language.
At the present stage, the P&P theory is still more of a research paradigm than
a fully-developed model. No final agreement has been reached as to what the
principles are and how many parameters are available. In this thesis, I will propose
a set of parameters and investigate the consequences of this new parametric system
in terms of language typology, language acquisition and language processing.
The syntactic model to be explored in this thesis was inspired by some recent
developments in syntactic theory exemplified by Chomsky (1992) and Kayne (1992,
1993). In A Minimalist Program for Linguistic Theory (Chomsky 1992), the notion
of Spell-Out was introduced into the theory. Spell-Out is a syntactic operation
which feeds a syntactic representation into the PF (Phonetic Form) component,
where a sentence is pronounced. The ideal assumption is that languages have
identical underlying structures and all surface differences are due to Spell-Out.
In the model I propose, the notion of Spell-Out applies to both movements and
features. When a movement is spelled out, we see overt movement. We see overt
inflectional morphology when one or more grammatical features are spelled out.
Different languages can have different word orders and different morphological
paradigms if they can choose to spell out different subsets of movements or features.
Since so many cross-linguistic variations have come to be associated with Spell-
Out, it is natural to assume that this syntactic operation is parameterized. We
need a set of parameters which determine which movements are overt and which
features are morphologically visible. The main objective of this thesis is to propose
and test out such a set of parameters.
As the new parameters to be proposed in this thesis are all related to Spell-Out,
we will call those parameters Spell-Out Parameters (S-parameters for short here­
after). There are two types of S-parameters: the S(M)-parameters which control
the spell-out of movements and the S(F)-parameters which control the spell-out of
features. We will see that the value combinations of these parameters can explain a
wide range of cross-linguistic variation in word order and inflectional morphology.
They also offer an interesting account for the distribution of certain functional
elements in languages, such as auxiliaries, expletives and grammatical particles.
In terms of language acquisition, this new parametric system also has some de­
sirable learnability properties. As we will see, all the languages generated in this
new parameter space are learnable via a linguistically motivated parameter-setting
algorithm. This algorithm incorporates Chomsky's (1992) principle of Procrasti­
nate into the standard induction by enumeration learning paradigm. It observes
the Subset Principle, and consequently every language can be exactly identified
without the need of negative evidence. Finally, this new parametric system leads
to a new conception of parsing. The fact that the underlying set of movements
is universal makes it possible to have parsers where the chain-building process is
uniformly defined. An experimental parser is presented to illustrate this new possi­
bility. This parser can parse every language in the parameter space by consulting
the parameter settings. A language-particular parser for each language can be
derived from this “universal parser” through partial evaluation.
All the experiments on this parametric syntactic model were performed by a
computer. The syntactic system and the learning algorithm are implemented in
Prolog. The parameter space was searched exhaustively using a Prolog program
which finds every language that can be generated by our grammar. The learning
algorithm has also been tested against every possible language in our parameter
space. These computer experiments provide additional support for my arguments.
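To give a flavor of how such a search can be organized, here is a minimal Prolog sketch of my own (the predicate names are invented for exposition and are not those of the actual programs in Appendix A). It enumerates every value combination of N binary parameters through backtracking:

    % Enumerate all settings of N binary parameters.
    % A setting is a list of 0/1 values, one value per parameter.
    binary_value(0).
    binary_value(1).

    setting(0, []).
    setting(N, [V|Vs]) :-
        N > 0,
        binary_value(V),
        N1 is N - 1,
        setting(N1, Vs).

    % all_settings(+N, -Settings): collect the entire parameter space.
    all_settings(N, Settings) :-
        findall(S, setting(N, S), Settings).

The query all_settings(3, Ss), for instance, returns the eight settings of a three-parameter space; pairing each setting with the set of strings its grammar generates yields the languages of the parameter space.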
It should be pointed out here that in this thesis the term language will often be
used in a special sense to refer to an abstract set of strings. In order to concentrate
on basic word order and basic inflectional morphology, and study those properties
in a wide variety of languages, I will represent the “sentences” of a language in an
abstract way which shows the word order and overt morphology of a sentence but
nothing else. An example of this is given in (1).
(1) s-[c1] o-[c2] v-[tns,asp]
The string in (1) represents a sentence in some SOV language. The lists attached
to S, O and V represent features that are morphologically realized. The “words”
we find in (1) are:
(a) a subject NP which is inflected for case (c1);
(b) an object NP which is inflected for a different case (c2); and
(c) a verb which is inflected for tense and aspect.
A language then consists of a set of such strings. Here is an example:
(2) { s-[c1] v-[tns,asp],
      s-[c1] o-[c2] v-[tns,asp],
      o-[c2] s-[c1] v-[tns,asp] }
What (2) represents is a verb-final language where the subject and object NPs can
scramble. The NPs in this language are overtly marked for case and the verb in
this language is inflected for tense and aspect. Such abstract string representation
makes it possible to let the computer read strings from any “language”. In many
situations we will be using the term “language” to refer to such a set of strings.
The fact that we will be conducting computer experiments with these artificial
languages does not mean that we will be detached from reality, however. Many
real languages will also be discussed in connection with these simplified languages.
Although the representation in (2) is fairly abstract, it is not hard to see what
natural language it may represent. As a matter of fact, most languages generated in
our parameter space can correspond to some natural languages. The results of our
experiments are therefore empirically meaningful. In the course of our discussion,
we will often furnish real language examples to exemplify those abstract languages.
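For concreteness, such abstract strings can be handed to Prolog directly as terms. The encoding below is only an illustrative sketch (the predicate names are mine, not the representation used in the appendices): a word is a Category-Features pair, a sentence is a list of words, and the scrambling SOV language in (2) becomes a small database:

    :- use_module(library(lists)).

    % language(Name, Sentences): the verb-final language in (2),
    % with each word encoded as Category-Features.
    language(sov_scrambling,
        [ [s-[c1], v-[tns,asp]],
          [s-[c1], o-[c2], v-[tns,asp]],
          [o-[c2], s-[c1], v-[tns,asp]] ]).

    % sentence_of(?Lang, ?Sentence): enumerate or test the strings.
    sentence_of(Lang, Sentence) :-
        language(Lang, Sentences),
        member(Sentence, Sentences).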
The rest of this thesis is organized as follows.
Chapter 2 examines the notion of Spell-Out and considers its implications for
linguistic theory. After a brief description of the Minimalist model, I propose a
syntactic model where cross-linguistic variations in word order and morphology
are mainly determined by two sets of S-parameters: S(M)-parameters and S(F)-
parameters. Arguments for this new model are given and the potential of the new
system is illustrated with examples from natural languages.
Chapter 3 describes in detail the syntactic model to be used in the experiments.
In order to implement the new system in Prolog and use computer search to find
out all the consequences of this system, we need a fully specified grammar. At
the present stage, however, not every detail of the Minimalist model has been
worked out. For this reason, I will define a partial grammar which includes only
those aspects of syntax which are directly relevant to our discussion. The partial
grammar includes a categorial system, a feature system, a parameter system, and a
computational system whose basic operations are Lexical Projection, Generalized
Transformation and Move-α. The grammar will be specific enough for computer
implementation and rich enough for the generation of various word orders and
morphological paradigms.
In Chapter 4, we consider the consequences of our experimental grammar in
terms of the language typology it predicts. It is found that our new parameter
space is capable of accommodating a wide range of linguistic phenomena. In terms
of word order, we are able to derive all basic word orders (SVO, SOV, VSO, VOS,
OSV, OVS and V2) as well as many kinds of scrambling. In terms of inflectional
morphology, we can get a variety of inflectional paradigms. There are also parame­
ter settings which account for the occurrence of auxiliaries, grammatical particles,
expletives and clitics. Many value combinations in the parameter space will be
illustrated by examples from natural languages.
The topic of Chapter 5 is learnability. We consider the question of whether all
the “languages” in our parameter space can be learned by setting parameters. It is
discovered that each of those languages can be correctly identified in Gold's (1967)
induction by enumeration paradigm if the hypothetical settings are enumerated
in a certain order. It turns out that this ordering of hypotheses can be derived
from some general linguistic principle, namely the principle of Procrastinate. Our
experiments show that, with this linguistically motivated ordering algorithm, the
learner can converge on any particular grammar in an incremental fashion without
the need of negative evidence or input ordering.
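The core of such a learner is simple enough to be sketched in a few lines of Prolog (an illustration under my own naming, with generates/2 left as an assumed predicate; the actual algorithm is developed in Chapter 5). The learner scans the hypothetical settings in the Procrastinate-based order and commits to the first setting that accounts for every input string seen so far:

    % learn(+Inputs, +OrderedHypotheses, -Setting):
    % induction by enumeration; hypotheses are tried in the given order.
    learn(Inputs, Hypotheses, Setting) :-
        member(Setting, Hypotheses),
        consistent(Setting, Inputs), !.

    % consistent(+Setting, +Inputs): no input string is left ungenerated.
    consistent(Setting, Inputs) :-
        \+ ( member(S, Inputs), \+ generates(Setting, S) ).

    % generates(+Setting, +String) is assumed: it succeeds iff the
    % grammar under Setting derives String.

Because the ordering places settings that generate smaller languages first, the first consistent hypothesis never overgenerates, which is how the Subset Principle is observed.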
In Chapter 6, we discuss the implications of our parametric syntactic model
for language processing. We will see that this new model can result in a parser
which is more universal in nature. The uniform treatment of movement in this
model makes it possible for one of the parsing processes - chain-building - to
be defined universally. The new system also facilitates the handling of empty
categories. Whether any given terminal node must dominate lexical material or
not can be uniquely determined by the parameter values. The new possibilities are
illustrated with a sample parser. This parser is capable of processing any language
in the parameter space according to the parameter values. Any language-particular
parser can be obtained by partially executing the universal parser with a particular
parameter setting.
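The idea can be sketched as follows (the predicates here are invented for illustration; the actual parser appears in Chapter 6 and Appendix A): a single pair of clauses consults the current S(M)-parameter value to decide where a moved element is pronounced, so the same program can parse any language in the space.

    % spell_out_position(+Movement, +Setting, -Position): the parser
    % consults the S(M)-parameter for Movement.  Value 1 means the
    % element is pronounced in the landing site (overt movement);
    % value 0 means it is pronounced in its base position (covert).
    spell_out_position(Movement, Setting, landing_site) :-
        s_m_value(Movement, Setting, 1).
    spell_out_position(Movement, Setting, base_position) :-
        s_m_value(Movement, Setting, 0).

    % s_m_value(+Movement, +Setting, -Value) is assumed: it looks up
    % the S(M)-parameter value for Movement in Setting.

Partially executing such clauses with respect to a fixed setting eliminates the lookups and leaves behind a language-particular parser.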
Chapter 7 concludes the thesis by considering possible extensions and potential
problems of the present model. One extension to be discussed is how our approach
can be applied to the word order variation within PP/DP/NP. It seems that we
can account for the internal structures of these phrases using a similar approach.
The main potential problem to be considered is the dependency of our model
on certain syntactic assumptions. We realize that the particular model we have
implemented does rely on some theoretical assumptions which are yet to be proved.
However, the general approach we are taking here can remain valid no matter how
the specific assumptions change. The model can be updated as the research in
linguistic theory advances.
Chapter 2
The Spell-Out Parameters
In this chapter, we examine the notion of Spell-Out and consider its implications
for cross-linguistic variations in word order and inflectional morphology. We will
see that a considerable amount of word order variation can be explained in terms
of the Spell-Out of movements, while the Spell-Out of features can account for
much morphological variation. Two sets of Spell-Out parameters are proposed:
the S(M)-parameters which determine the Spell-Out of movements and the S(F)-
parameters which are responsible for the Spell-Out of features. We shall see that
the parameter space created by these two sets of parameters can cover a wide range
of linguistic phenomena.
This chapter will only present a very general picture of how things might work
in this new model. The full account is given in Chapter 3 and Chapter 4. In
the brief sketch that follows, we will start by looking at the notion of Spell-Out
in Chomsky (1992). This notion will then be applied first to movement and then
to inflectional morphology. Finally, we will have a quick glance at the possible
interactions between the Spell-Out of movements and the Spell-Out of features.
2.1 The Notion of Spell-Out
2.1.1 The Minimalist Program
Spell-Out as a technical term is formally introduced in Chomsky's (1992) Min-
imalist Program for Linguistic Theory (MPLT hereafter), though the notion it
represents has been around for some time. The most salient feature of the Min­
imalist framework1 is the elimination of D-structure and S-structure. The levels
of representation are reduced to nothing but the two interfaces: Phonetic Form
(PF), which interacts with the articulatory-perceptual system, and Logical Form
(LF) which interacts with the conceptual-intentional system. Consequently, gram­
matical constraints have come to be associated with these two interface levels only.
Most of the well-formedness conditions that used to apply at D-structure (DS) and
S-structure (SS) have shifted their domain of application to either PF or LF. In
this new model, structural descriptions (SDs) are generated from the lexicon and
the SDs undergo syntactic derivation until they become legitimate objects at both
PF and LF. Given an SD which consists of the pair (π, λ)2, “... a derivation D
converges if it yields a legitimate SD; otherwise it crashes; D converges at PF if
π is legitimate and crashes at PF if it is not; D converges at LF if λ is legiti-
mate and crashes at LF if it is not” (MPLT, p7). The legitimacy of PF and LF
representations will be discussed later.
The derivation is carried out in the computational system which consists of three
distinct operations: lexical projection (LP),3 generalized transformation (GT), and
1Throughout this thesis I will try to make a distinction between MPLT and the Minimalist
framework. The former refers to the specific model described in MPLT while the latter refers to
the general approach to syntax initiated by MPLT.
2π stands for PF and λ stands for LF.
3This is not a term used in MPLT, but the operation denoted by this term is obviously
assumed in the paper.
move-α.
LP “selects an item X from the lexicon and projects it to an X-bar structure
of one of the forms in (3), where X = X° = [X X].” (MPLT, p30).
(3) (i) X
(ii) [X' X]
(iii) [X'' [X' X]]
The generation of a sentence typically involves the projection of a set of such
elementary phrase-markers (P-markers) which serve as the input to GT.
GT reduces the set of phrase-markers generated by LP to a single P-marker.
The operation proceeds in a binary fashion: it “takes a phrase-marker K1 and
inserts it in a designated empty position ∅ in a phrase-marker K, forming the new
phrase-marker K*, which satisfies X-bar theory” (MPLT, p30). In other words,
GT takes two trees K and K1, “targets” K by adding ∅ to K, and then substitutes
K1 for ∅. The P-markers generated by LP are combined pair-wise in this fashion
until no more reduction is possible.
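This pair-wise reduction is easy to picture in Prolog terms. The following is a sketch of my own (gt_combine/3 stands in for a single GT step that targets K, adds ∅ and substitutes K1 for it): phrase-markers are taken from the set two at a time and replaced by the result of GT until a single P-marker remains.

    :- use_module(library(lists)).

    % reduce(+PhraseMarkers, -PMarker): apply GT pair-wise until
    % only one phrase-marker is left.
    reduce([Tree], Tree).
    reduce(Trees, Tree) :-
        select(K, Trees, Rest),
        select(K1, Rest, Rest1),
        gt_combine(K, K1, KStar),
        reduce([KStar|Rest1], Tree).

    % gt_combine(+K, +K1, -KStar) is assumed: one GT step forming K*.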
Move-α is required by the satisfaction of LF constraints. Some constituents in
the sentence must be licensed or checked in more than one structural position and
the only way to achieve this kind of multiple checking is through movement. Unlike
GT which operates on pairs of trees, mapping (K, K1) to K*, move-α operates on
a single tree, mapping K to K*. It “targets K, adds ∅, and substitutes α for ∅,
where α in this case is a phrase within the targeted phrase-marker K itself. We
assume further that the operation leaves behind a trace t of α and forms the chain
(α, t).” (MPLT, p31).
There is an additional operation called Spell-Out in the computational system.
This operation feeds an SD into the PF component. The derivation of a
sentence can consist of a number of intermediate SDs but only one of them is
actually pronounced or heard.4 The function of Spell-Out is to select such an SD. It
takes a “snap-shot” of the derivational process, so to speak. According to Chomsky,
Spell-Out can occur at any point in the course of derivation. Given a sequence of
SDs in the derivation, <SD1, SD2, ..., SDn>, each SDi representing a derivational
step, the system can in principle choose to spell out any SDi, 1 ≤ i ≤ n. This
notion of Spell-Out is illustrated in (4) where the curly bracket is meant to indicate
that Spell-Out can occur anywhere along the derivational procedure.
(4)
[Figure: The Derivational Process of MPLT. The derivation runs from the
Lexicon through Lexical Projection, GT operations and Move-α toward LF;
Spell-Out can branch off to PF at any point along the way.]
However, not every SD that we choose to spell out is acceptable to the PF compo­
nent. Only those SDs which are legitimate objects at PF can be pronounced. In
other words, the SD being fed into PF must at least satisfy the PF requirements.
Once these requirements are met, an SD can be spelled out regardless of how many
LF constraints have been satisfied.
4The derivation may proceed in more than one way. In that case, we can have different
intermediate SDs depending on which particular derivational procedure is being followed.
One of the PF constraints proposed in MPLT requires that the input to PF be
a single P-marker. If the representation being spelled out “is not a single phrase
marker, the derivation crashes at PF, since PF rules cannot apply to a set of phrase
markers and no legitimate PF representation π is generated.” (MPLT, p30). In
other words, a given P-marker cannot be spelled out until all its subtrees have been
projected and reduced to a single tree.5 In normal cases, therefore, the spell-out of
any constituent must occur after the completion of LP and GT within this given
constituent.6 This leads to the conclusion that Spell-Out can only apply in the
process of move-α. So (5) is a more specific description of the derivational process.
(5)
[Figure: A Specific Interpretation of the Derivational Process. The Lexicon
feeds Lexical Projection, which yields elementary phrase markers; the GT
operation reduces these to a single phrase marker; Move-α then applies on the
way to LF, with Spell-Out branching off to PF during this movement phase.]
This diagram may seem to suggest a sequencing of the computational operations,
5One possible reason why PF can take only one tree at a time might be the following: For a
sentence to converge, it must be assigned a proper intonation. What intonation to assign depends
on the tree structure of the sentence. Apparently, there is no way of assigning a single intonation
to two unconnected trees.
6We might get sentence fragments or a broken sentence if Spell-Out occurs before the com-
pletion of GT in a CP.
with LP preceding GT which in turn precedes move-a. No such strict sequencing
is implied here, however. The picture is intended to be a logical description of
linguistic theory rather than a flow chart for procedural computation. The ordering
is relative in nature. In actual language production and language comprehension,
these computational operations can be co-routined. For instance, GT operations
may be interleaved with movement operations, as long as the starting point and the
landing site of a given movement are in a single tree before the movement applies.
The crucial point this diagram is meant to convey is that only single trees can be
accepted by PF, with the logical consequence that the spell-out of any particular
constituent can only occur after the GT operation is complete within this given
constituent.
2.1.2 The Timing of Spell-Out
Now let us take a closer look at Spell-Out which, as we have argued, normally
occurs in the process of move-α, where this operation is free to apply at any
time. What is the consequence of this freedom? Before answering this question,
we had better find out exactly what happens in move-α. In the pre-Minimalist
P&P model, some movements are forced by S-structure requirements and some by
LF requirements. The ones that are forced by SS constraints must take place in
overt syntax. In our current terminology, we can say that these movements must
occur before Spell-Out. The movements forced by LF requirements, however, can
be either overt or covert. A typical example is wh-movement which is forced by
the scope requirement on wh-phrases. It has been generally accepted since Huang
(1982) that the scope requirement is satisfied at SS in languages like English and
at LF in languages like Chinese. This is why wh-movement is overt in English
but covert in Chinese. Now that S-structure is gone, all movements are forced
by LF requirements. Consequently, every movement has become an LF movement
which, like wh-movement, can be either overt or covert. The Case Filter (Chomsky
& Lasnik 1977, Chomsky 1981b, Vergnaud 1982, etc.) and the Stray Morpheme
Filter (Lasnik’s Filter) (Lasnik 1981)7, for instance, are no longer requirements
on overt syntax only. They may apply either before Spell-Out or after Spell-Out,
as long as they do get satisfied by LF. Consequently, the movements motivated by
these filters can be either visible or invisible.
It should be mentioned here that all LF requirements in the Minimalist frame­
work are checking requirements. An SD is a legitimate object at LF only if all
its features have been checked. In cases where the checking involves two differ­
ent structural positions, movement is required to occur. In fact, movement takes
place for no reason other than feature-checking in this model. The visibility of a
movement depends on the timing of feature-checking. It is visible if the relevant
feature is checked before Spell-Out and invisible if it is checked after Spell-Out.
Now the question is why some features are checked before Spell-Out. According
to Chomsky’s principle of Procrastinate (MPLT, p43) which requires that overt
movement be avoided as much as possible, the optimal situation should be the
one where every movement is covert. There must be some other requirements that
force a movement to occur before Spell-Out. In MPLT, overt movement is forced
by a PF constraint which requires that “strong” features be checked before Spell-
Out. “... ‘strong’ features are visible at PF and ‘weak’ features invisible at PF.
These features (i.e. those features that are visible8) are not legitimate objects at
7This filter requires that morphemes designated as affixes be “supported” by lexical material
at PF. It is the primary motivation for V-to-I raising or do-support.
8Comment added by Andi Wu.
PF; they are not proper components of phonetic matrices. Therefore, if a strong
feature remains after Spell-Out, the derivation crashes.” (MPLT, p43) To prevent
a strong feature from being visible at PF, the checking of this feature must be
done before Spell-Out. Once a feature is checked, it disappears and no PF con­
straint will be violated. Chomsky cites French and English to illustrate this: “the
V-features of AGR are strong in French, weak in English. ... In French, overt
raising is a prerequisite for convergence; in English, it is not.” (MPLT, p43) The
combined effect of this “Strong Feature Filter” and the principle of Procrastinate
is a precise condition for overt movement: a movement occurs before Spell-Out if
and only if the feature it checks is strong. This account is very attractive but it
has its problems, as we will see later when we come to an alternative account in
2.1.3.
The timing of feature-checking and consequently the timing of movement are
obviously relevant to word order. This is clearly illustrated by wh-movement which
checks the scope feature. This movement is before Spell-Out in English and after
Spell-Out in Chinese. As a result, wh-phrases are sentence-initial in English but
remain in situ in Chinese. Now that every movement has the option of being
either overt or covert, the amount of word order variation that can be attributed
to movement is much greater. As we will see in 2.2.2, given current syntactic
assumptions which incorporate the VP-Internal Subject Hypothesis9 (Koopman
and Sportiche 1985, 1988, 1990, Kitagawa 1986, Kuroda 1988, Sportiche 1990,
etc.) and the Split-Infl Hypothesis10 (Pollock 1989, Belletti 1990, Chomsky 1991,
9This hypothesis assumes that every argument of a VP (including the subject) is generated
VP-internally.
10This hypothesis assumes a more articulated Infl structure where different functional elements
such as Tense and Agreement count as different categories and head their own projections.
etc.), it is possible to derive all the basic word orders (including SVO, SOV, VSO,
V2, VOS, OSV and OVS) just from movement. This suggests that movement can
have a much more important role to play in cross-linguistic word order variation
than we have previously assumed. We may even begin to wonder whether all
the variation in word order can be accounted for in terms of movement. If so,
no variation in the X-bar component will be necessary. This idea has in fact been
proposed in Kayne (1992, 1993) and implemented in a specific model by Wu (1992,
1993a, 1993b, 1993c, 1993d). We will come back to this in 2.2.2.
There is another assumption in the Minimalist theory which has made the
prospect of deriving word order variations from movement a more realistic one.
This is the assumption that all lexical items come from the lexicon fully inflected.
In pre-Minimalist models, a lexical root and its affixes are generated separately
in different positions. To pick up the inflections, the lexical root must move to
the functional category where the inflectional features reside. For instance, a verb
must move to Infl to get its tense morphology and a subject NP must move to
the Spec of IP to be assigned its case morphology. Without movement, verbs and
nouns will remain uninflected. This assumption that lexical roots depend on move­
ment for their inflectional morphology runs into difficulty whenever we find a case
where the verb or noun is inflected but no movement seems to have taken place. It
has been generally accepted that the English verb does not move to the position
where agreement morphology is supposed to be located (Chomsky 1957, Emonds
1976, Pollock 1989, among many others). To account for the fact that verbs are
inflected for subject-verb agreement in English, we have to say that, instead of the
verb moving up, the affixes are lowered onto the verb. In the Minimalist theory,
however, lowering is prohibited. The requirement that each move-α operation must
extend the target has the effect of restricting movement to raising only. In fact,
this requirement of target extension can be viewed as a notational variant of the
no-lowering requirement. At first sight, this seems to put us in a dilemma: low­
ering is not permitted, but without lowering the affixes will get stranded in many
cases. But this problem does not exist in the Minimalist model. In this model,
words come from the lexicon fully inflected. Verbs and nouns “are drawn from
the lexicon with all of their morphological features, including Case and φ-features”
(MPLT, p41). They no longer have to move in order to pick up the morphology.
Therefore, whether they carry certain overt morphological features has nothing to
do with movement. Movement is still necessary, but the purpose of movement has
changed from feature-assignment to feature-checking. The morphological features
which come with nouns and verbs must be checked in the appropriate positions.
For instance, a verb must move to T(ense) to have its tense morphology checked
and a noun must move to the Spec of some agreement phrase to have its case and
agreement morphology checked. These checking requirements are all LF require­
ments. Therefore, the movements involved in the checking can take place either
before or after Spell-Out. The cases where lowering was required are exactly those
where the checking takes place after Spell-Out.
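In a Prolog implementation, this kind of feature-checking can be modelled as unification. The sketch below is my own illustration (anticipating, but not identical to, the grammar of Chapter 3): a feature is “checked” when the value carried by the moved element unifies with the value required at the checking position.

    % check(+Features, +Requirements): every feature required at the
    % checking position must unify with a feature carried by the element.
    check(_, []).
    check(Features, [F=V|Rest]) :-
        member(F=V, Features),
        check(Features, Rest).

For example, check([tns=past, asp=perf], [tns=past]) succeeds, while check([tns=pres], [tns=past]) fails; in the latter case the derivation would crash.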
2.1.3 The S-Parameters
We have seen that the timing of Spell-Out can vary and the variation can have
consequences in word order. We have mentioned Chomsky's account of this vari­
ation: a movement occurs before Spell-Out just in case the feature it checks is
“strong”. Now, what is the distinction between strong and weak features? Ac­
cording to Chomsky, this distinction is morphologically based. He does not have a
precise definition of this distinction in MPLT, but the idea he wants to suggest is
clear: a feature is strong if it is realized in overt morphology and weak otherwise.11
Let us assume that there is an underlying set of features which are found in every
language. A given feature is realized in overt morphology when this feature is
spelled out. Then the PF constraint in Chomsky’s system simply says that a fea­
ture must be checked before it is spelled out. Given the principle of Procrastinate,
a movement will occur before Spell-Out just in case the morphological feature(s)
to be checked by this movement is overt. This bijection between overt movement
and overt morphology is conceptually very appealing. If this is true, syntactic
acquisition will be easier, since in that case overt morphology and overt movement
will be mutually predictable. The morphological knowledge children have acquired
can help them acquire the syntax while their syntactic knowledge can also aid their
acquisition of morphology. We will indeed have a much better theory if this re­
lationship actually exists. Unfortunately, this bijection does not seem to hold in
every language.12 Since Pollock (1989), where this linkage between movement and
morphology is seriously proposed, many people have challenged this correlation.
Counter-examples to this claim come in two varieties. On the one hand, there
are languages where we find overt movement but not overt morphology. Chinese
may be such a language. As has been observed in Cheng (1991) and Chiu (1992),
the subject NP in Chinese moves out of the VP-shell to a higher position in overt
11Chomsky uses the term “rich morphology” instead of “overt morphology”. The agreement
morphology in French, for example, is supposed to be richer than that in English. In this way,
French and English can be different from each other even though both have overt agreement
morphology. Unfortunately, the concept of “richness” remains a fuzzy one. Chomsky does not
specify how the rich/poor differentiation is to be computed.
12We can treat this correlation between syntax and morphology as a higher idealization of the
linguistic system. But then we must be able to tolerate the notion that some existing languages
have deviated from the ideal grammar.
syntax. But there is no overt morphological motivation for this movement, for
this NP carries no inflectional morphology at all. Other examples are found in the
Kru languages (Koopman 1984) where the verb can move to Agr just as it does
in French in spite of the fact that there is no subject-verb agreement in these lan­
guages. On the other hand, there exist languages where we find overt morphology
but not overt movement. The most frequently cited example is English. In view
of the fact that agreement features are spelled out in English, the verbs in English
are expected to move as high as those in French. This is not the case, as is well
known. The agreement in English is of course “poor”, but even in Italian which is a
language with very rich subject-verb agreement, it is still controversial whether the
verb always moves to AgrS (cf. Rizzi (1982), Hyams (1986), Belletti (1990), etc.).
There are other examples of rich agreement without overt movement. Schaufele
(1991) argues that Vedic Sanskrit is a language of this kind. If we insist on the
“iff” relationship between overt movement and overt morphology, we will face two
kinds of difficulties. In cases of overt movement without overt morphology, the
principle of Procrastinate is violated. We find movements that occur before Spell-
Out for no reason. In cases of overt morphology without overt movement, the PF
constraint will be violated which requires that overt features be checked before
Spell-Out. We cannot rule out the possibility that, under some different analyses,
all the examples cited above may cease to be counter-examples. However, we will
hesitate to base our whole model upon this assumed linkage between syntax and
morphology until we have seen more evidence for this hypothesis.
There is an additional problem with this morphology-based explanation for
overt/covert movement. Apparently, not all movements have a morphological mo­
tivation. V-movement to C and XP-movement to Spec of CP, for example, are not
very likely to be morphologically related. They are certainly related to feature-
checking, but these features are seldom morphologically realized.13 Why such
extremely “weak” features should force overt movement in many languages is a
puzzle.
Since the correspondence between overt morphology and overt movement is
not perfect, I will not rely on the strong/weak distinction for an explanation for
the timing of feature-checking. Instead of regarding overt morphology and overt
movement as two sides of the same coin, let us assume for the time being that
these two phenomena are independent of each other. In other words, whether a
feature is spelled out and whether the feature-checking movement is overt will be
treated as two separate issues. We will further assume that both the spell-out of
features and the spell-out of the feature-checking movements can vary arbitrarily
across languages. I therefore propose that two types of Spell-Out Parameters (S-
Parameters) be hypothesized. One type of S-parameters determines whether a
given feature is spelled out. The other type determines whether a given feature-
checking movement occurs before Spell-Out. Let us call the first type of parameters
S(F)-parameters (F standing for “feature”) and the second type S(M)-parameters
(M standing for “movement”). Both types of parameters are binary with two
possible values: 1 and 0. When an S(F)-parameter is set to 1, the feature it
is associated with will be morphologically visible. It is invisible when its S(F)-
parameter is set to 0. The S(M)-parameter affects the visibility of movement.
When an S(M)-parameter is set to 1, the relevant movement will be overt. The
movement will be covert if its S(M)-parameter is set to 0.
13We do not exclude the possibility that these features can be realised in some special visible
forms, such as intonation and stress.
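Stated more concretely, each feature can be thought of as carrying a pair of binary switches. The fragment below is a minimal sketch in Python (my own illustration, not anything proposed in the text); the feature names and the particular value assignments are arbitrary placeholders for one hypothetical language.

    # A hypothetical encoding of one language's S-parameter settings.
    # For each feature: S(F) = 1 iff the feature is morphologically visible,
    # S(M) = 1 iff its feature-checking movement occurs before Spell-Out.
    S_PARAMETERS = {
        #  feature      (S(F), S(M))
        "agreement":    (1, 0),    # overt morphology, covert movement
        "case":         (0, 1),    # no overt morphology, overt movement
        "operator":     (1, 1),    # both overt
    }

    def is_spelled_out(feature):
        return S_PARAMETERS[feature][0] == 1

    def moves_overtly(feature):
        return S_PARAMETERS[feature][1] == 1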
The S(F)-parameters determine, at least in part, the morphological paradigm
of a language. A language has overt agreement just in case the S(F)-parameter
for agreement features is set to 1, and it has an overt case system just in case
the S(F)-parameter for the case features is set to 1. Given a sufficiently rich set
of features, the value combinations of S(F)-parameters can generate most of the
inflectional systems we find in natural languages. All this is conceptually very
simple and no further explanation is needed. The exact correspondences between
S(F)-parameters and morphological paradigms will be discussed in Chapter 3 and
Chapter 4.
The S(M)-parameters, on the other hand, determine (at least partially) the
word order of a language. How this works is not so obvious. So we will devote the
next section (2.2) to the discussion of this question. In 2.3 we will consider the
interaction between S(F)-parameters and S(M)-parameters.
2.2 The S(M)-Parameter and Word Order
In this section, we consider the question of how word order variation can be ex­
plained in terms of the parameterization of movements. We will first look at the
traditional approach where word order is determined by the values of head-direction
parameters14 (hereafter HD-parameters for short) and then examine an alternative
approach where S(M)-parameter values are the determinants of word order. The
two approaches will be compared and a decision will be made as to what kind of
parameterization we will adopt as a working hypothesis.
14Various names have been given to this parameter in the literature. The one adopted here is
from Atkinson (1992). Other names include X-parameters and head parameters.
2.2.1 An Alternative Approach to Word Order
Traditionally, word order has been regarded mainly as a property of phrase struc­
ture. It is often assumed that different languages can generate different word orders
because their phrase structure rules can be different. In the Principles and Param­
eters theory, cross-linguistic variations in basic word order are usually explained in
X-theoretic terms (cf. Jackendoff (1977), Stowell (1981), Koopman (1984), Hoek-
stra (1984), Travis (1984), Chomsky (1986), Nyberg (1987), Gibson and Wexler
(1993), etc.) The basic phrase structure rules of this theory are all of the following
forms:
(6) XP => { X', (specifier) }
    X' => { X, (complement) }
The use of curly brackets indicates that the constituents on the right-hand side
are unspecified for linear order. Which constituent precedes the other in a partic­
ular language depends on the values of HD-parameters. There are two types of
HD-parameters: the specifier-head parameter which determines whether the spec-
ifier precedes or follows X' and the complement-head parameter which determines
whether the complement precedes or follows X. The values of these parameters are
language-particular and category-particular. When acquiring a language, a child’s
task is to set these parameters for each category.
It is true that the parameter space of HD-parameters can accommodate a fairly
wide range of word order variation. The parameterization can successfully explain
the word order differences between English and Japanese, for instance. However,
there are many word order facts which fall outside this parameter space. The most
obvious example is the VSO order. If we assume that the direct object is a verbal
complement and therefore must be base-generated adjacent to the verb, we will
not be able to get this common word order no matter how the HD-parameters are
set. The same is true of the OSV order. A more general problem is scrambling. It
has long been recognized that this word order phenomenon cannot be accounted
for in terms of HD-parameters alone. All this suggests that the HD-parameters
are at least insufficient, if not incorrect, for the explanation of word order. To
account for the complete range of word order variation, we need some additional
or alternative parameters.
The observation that not all word order facts can be explained in terms of
phrase structure is by no means new. Ever since Chomsky (1955, 1957), linguists
have found it necessary to account for word order variation in terms of move­
ment in addition to phrase structure. In fact, this is one of the main motivations
that triggered the birth of transformational grammars. All the problems with
HD-parameters mentioned above can disappear once movement is accepted as an
additional or alternative source of word order variation. The VSO order can be
derived, for example, if we assume that, while the verb and the object are adjacent
at D-structure, the verb has moved to a higher position at S-structure (Emonds
1980, Koopman 1984, Sproat 1985, Koopman and Sportiche 1988, 1990, Sportiche
1990, among others). Scrambling can also receive an elegant account in terms
of movement. As Mahajan (1990) has shown, many scrambled word orders (at
least those in Hindi) can be derived from A- and A'-movements. As a matter of
fact, very few people will challenge the assumption that movement is at least par­
tially responsible for cross-linguistic differences in word order. However, there has
not been any model where the movement options are systematically parameter­
ized. The notion that the visibility of movements can be parameterized has been
around for quite some time. It is very clearly stated, for example, in Huang (1982).
But it has not been pursued as a main explanation for cross-linguistic word order
variation until recently when Kayne (1992, 1993) proposed the antisymmetry of
syntactic structures. One reason for the lack of exploration in this area is probably
the pre-Minimalist view of movements. In the standard GB theory, most move­
ments are motivated by S-structure requirements which are often universal. As a
result, many movements do not have the option of being either overt or covert.
For instance, the A-movement forced by the Case Filter and the head movement
forced by the Stray Morpheme Filter are always required to be overt. There are
very few choices. The parameterization of movement, even if it were implemented,
would not be rich enough to account for a sufficiently wide range of word order
phenomena.
Things are different in the Minimalist framework we have adopted, as we have
seen in the previous section. In this model, every movement has the option of being
either overt or covert. We have proposed that an S(M)-parameter be associated
with each of the movements and let the value of this parameter determine whether
the given movement is to occur before Spell-Out (overt) or after Spell-Out (covert).
The word order of a particular language then depends at least partially on the
values of S(M)-parameters.
Now that we can account for word order variation in terms of S(M)-parameters,
we may want to reconsider the status of HD-parameters. Since HD-parameters by
themselves are insufficient for explaining all word order phenomena, there are two
possibilities to consider:
(7) (i) S(M)-parameters can account for all the word order facts, including those
covered by HD-parameters. In this case, HD-parameters can be replaced
by S(M)-parameters.
(ii) S(M)-parameters cannot account for all the word order facts that HD-
parameters are able to explain. In this case we will need both types of
parameters.
To choose between these two possibilities, we have to know whether all the word
orders that are derivable from the values of HD-parameters can be derived from
the values of S(M)-parameters as well. To find out the answer to this question,
we must first of all get a better understanding of the parameter space created by
S(M)-parameters. We will therefore devote the next section to the exploration of
this new parameter space.
2.2.2 The Invariant X-Structure Hypothesis (IXSH)
If S(M)-parameters can replace HD-parameters to become the only source of word
order differences, variations in phrase structure can be assumed to be non-existent.
We will be able to envision a model where X-bar structures are invariant and all
word order variations are derived from movement. Let us call this the invariant
X-structure hypothesis (IXSH). This hypothesis can be traced back to the univer­
sal base hypothesis of Wexler and Hamburger (1973): “A strong interpretation of
one version of linguistic theory (Chomsky 1965) is that there is a single univer-
sal context-free base, and every natural language is defined by a transformational
grammar on that base” (p. 173). A stronger version of this universal base was recently
put forward in Kayne (1992, 1993). He argues that X-bar structures are antisym-
metrical. In the model he proposes, the specifier invariably precedes the head and
the complement invariably follows the head. The structure is right-branching in
every language and linear order corresponds to asymmetric C-command relations.
According to this hypothesis, there is a single set of X-bar trees which are found in
all languages. All variations in word order are results of movement. HD-parameters
are thus unnecessary.
Kayne’s proposal has been explored in terms of parameterization in Wu (1992,
1993) where the new approach is tried out in the Minimalist framework. In Wu
(1993) I experimented with the IXSH in a restricted model of syntax and showed
that a surprising amount of variation in word order can be derived from a single
X-bar tree. Since the results of this experiment will give us a more concrete idea
as to what S(M)-parameters can do and cannot do, we will take a closer look at
this model.
The invariant X-bar tree I assumed in the model for a simple transitive sentence
is given in (8).15
(8)
CP
/ Xspec a
/ AC AgrSP
/ XSPEC AgrSl
/ AgrS TP
T AgrOP
SPEC AgrOl
/XAgrO VP
Subject V NP
Verb Object
The Invariant X-bar Tree in Wu (1993)
15The tree for an intransitive sentence is identical to (8) except that the verb will not have an
internal argument. It is assumed that AgrO exists even in an intransitive sentence, though it
may not be active.
The set of LF requirements which force movements in this model are listed in (9).
(9) (A) The verb must move to AgrO° to have its φ-features checked for object-
verb agreement.
(B) The verb must move to T° to have its tense/aspect features checked.
(C) The verb must move to AgrS° to have its φ-features checked for subject-
verb agreement.
(D) The verb must move to C° to have its predication feature checked.
(E) The subject NP must move to Spec-of-AgrSP to have its case and φ-
features checked.
(F) The object NP must move to Spec-of-AgrOP to have its case and φ-
features checked.
(G) The XP which has scope over the whole sentence or serves as the
topic/focus of the sentence must move to Spec-of-CP to have its op-
erator feature checked.
Each of the seven movements listed above, referred to as A, B, C, D, E, F and G,
is supposed to be associated with an S-parameter whose value determines whether
the movement under question is to be applied before or after Spell-Out. The seven
S-parameters are referred to as S(A), S(B), S(C), S(D), S(E), S(F), and S(G).
All the parameters are binary (1 or 0) except S(G) which has three values: 1, 0
and 1/0. The last value is a variable which can be either 1 or 0. The overtness of
movement is optional if this value is chosen.
The application of those movements is subject to the two constraints in (10).
(10) (i) Head Movement Constraint. This constraint requires that no intermedi­
ate head be skipped during head movement. For a verb to move from
its VP-internal position all the way to C, for instance, it must land
successively in AgrO, T and AgrS. This means that, if D occurs before
Spell-Out, A, B and C will also occur before Spell-Out. Consequently,
setting S(D) to 1 will require that S(A), S(B) and S(C) be set to 1 as
well. As a result, there is a transitive implicational relationship between
the values of S(A), S(B), S(C) and S(D): given the order here, if one
of them is set to 1, then the ones that precede it must also be set to 1.
(ii) The requirement that the subject NP and object NP must move to Spec-
of-AgrS and Spec-of-AgrO respectively to have their case/agreement
features checked before moving to Spec-of-CP. This means that S(G)
cannot be set to 1 unless S(E) or S(F) is set to 1.
It was shown that with all the assumptions given above, the parameter space
consists of fifty possible settings.16 These settings and the corresponding word
orders they account for are given in (11).
16With 6 binary parameters and one triple-valued one, logically there should be 192 possi­
ble settings. But most of these settings are ruled out as syntactically impossible by the two
constraints in (10).
(11)
 #   S(A) S(B) S(C) S(D) S(E) S(F) S(G)   Word Order
 1    0    0    0    0    0    0    0     S V (O)
 2    1    0    0    0    0    0    0     V S (O)
 3    0    0    0    0    1    0    0     S V (O)
 4    0    0    0    0    0    1    0     (O) S V
 5    1    1    0    0    0    0    0     V S (O)
 6    1    0    0    0    1    0    0     S V (O)
 7    1    0    0    0    0    1    0     (O) V S
 8    0    0    0    0    1    1    0     S (O) V
 9    1    1    1    0    0    0    0     V S (O)
10    1    1    0    0    1    0    0     S V (O)
11    1    1    0    0    0    1    0     V (O) S
12    1    0    0    0    1    1    0     S (O) V
13    1    1    1    1    0    0    0     V S (O)
14    1    1    1    0    1    0    0     S V (O)
15    1    1    1    0    0    1    0     V (O) S
16    1    1    0    0    1    1    0     S V (O)
17    1    1    1    1    1    0    0     V S (O)
18    1    1    1    1    0    1    0     V (O) S
19    1    1    1    0    1    1    0     S V (O)
20    1    1    1    1    1    1    0     V S (O)
21    0    0    0    0    1    0    1     S V (O)
22    0    0    0    0    1    1    1     S (O) V
                                          O S V
23    1    0    0    0    1    0    1     S V (O)
24    1    0    0    0    1    1    1     S (O) V
                                          O S V
25    1    1    0    0    1    0    1     S V (O)
26    1    1    0    0    1    1    1     S V (O)
                                          O S V
27    1    1    1    0    1    0    1     S V (O)
28    1    1    1    0    1    1    1     S V (O)
                                          O S V
29    1    1    1    1    1    0    1     S V (O)
30    1    1    1    1    1    1    1     S V (O)
                                          O V S
31    0    0    0    0    0    0    1/0   S V (O)
32    1    0    0    0    0    0    1/0   V S (O)
33    0    0    0    0    1    0    1/0   S V (O)
34    0    0    0    0    0    1    1/0   (O) S V
35    1    1    0    0    0    0    1/0   V S (O)
36    1    0    0    0    1    0    1/0   S V (O)
37    1    0    0    0    0    1    1/0   (O) V S
38    0    0    0    0    1    1    1/0   S (O) V
                                          O S V
39    1    1    1    0    0    0    1/0   V S (O)
40    1    1    0    0    1    0    1/0   S V (O)
41    1    1    0    0    0    1    1/0   V (O) S
                                          O V S
42    1    0    0    0    1    1    1/0   S (O) V
                                          O S V
43    1    1    1    0    1    0    1/0   S V (O)
44    1    1    1    0    0    1    1/0   V (O) S
                                          O V S
45    1    1    0    0    1    1    1/0   S V (O)
                                          O S V
46    1    1    1    1    0    0    1/0   V S (O)
47    1    1    1    0    1    1    1/0   S V (O)
                                          O S V
48    1    1    1    1    1    0    1/0   V S (O)
                                          S V (O)
49    1    1    1    1    0    1    1/0   V (O) S
                                          O V S
50    1    1    1    1    1    1    1/0   V S (O)
                                          S V (O)
                                          O V S
The Parameter Space in Wu (1993)
As we can see, the word orders accommodated in this parameter space include SVO,
SOV, VSO, V2, VOS, OSV, and OVS. In other words, all the basic word orders
are covered. The parameter space also permits a certain degree of scrambling.
In addition to this new word order typology, I also showed in Wu (1993) that
all the languages in this parameter space are learnable. I proposed a parameter-
setting algorithm which has the following properties.17
• Convergence is guaranteed without the need of negative evidence.
• The learning process is incremental: the resetting decision can be based on
the current setting and the current input string only.
• The resetting procedure is deterministic: at any point of the learning process,
there is a unique setting which will make the current string interpretable.
• Data presentation is order-independent. Convergence is achieved regardless
of the order in which the input strings are presented, as long as all the
distinguishing strings eventually appear.
17The parameter-setting algorithm is based on the principle of Procrastinate (MPLT) which
basically says “avoid overt movement as much as possible”. Following this principle, I assumed
that all the S-parameters are set to 0 at the initial stage. The parameter-setting algorithm
is basically the failure-driven one described in Gold (1967) (induction by enumeration). The
learner always tries to parse the input sentences with the current setting of parameters. The
setting remains the same if the parse is successful. If it fails, the learner will try parsing the
input sentence using some different settings until he finds one which results in a successful parse.
The order in which alternative settings are to be tried is determined by the following algorithm:
Sort the possible settings into a partial order where precedence is determined by
the following sub-algorithm:
(i) Given two settings P1 and P2, P1 < P2 if S(G) is set to 0 in P1 while it
is set to 1 or 1/0 in P2. (The setting which permits no overt movement to
Spec-of-CP is preferred.) Go to (ii) only if (i) fails to order P1 and P2.
(ii) Given two settings P1 and P2, P1 < P2 if S(G) is set to 1 in P1 while it
is set to 1/0 in P2. (If overt movement to Spec-of-CP is required, the setting
which permits no optional movement is preferred.) Go to (iii) only if (ii) fails
to order P1 and P2.
(iii) Given two settings P1(i) and P2(j), where i and j are the number of param-
eters set to 1 in the respective settings, P1 < P2 if i < j. (The setting which
permits fewer overt movements is preferred.)
The resulting order is the one given in (11). What the learner does in cases of failure is try those
settings one by one until he finds one that works. The learner is always going down the list and
no previous setting will be tried again.
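The ordering just described is easy to state procedurally. The following sketch is mine, written under the assumptions of this footnote; the parser is left as a stub since nothing in the present chapter specifies it. Sorting the fifty settings by this key reproduces the order of (11) up to ties, which the sub-algorithm, being only a partial order, leaves unresolved.

    def rank(setting):
        """Sort key for sub-algorithms (i)-(iii): prefer S(G)=0 to S(G)=1
        to S(G)=1/0, and within that prefer fewer parameters set to 1."""
        g_rank = {0: 0, 1: 1, "1/0": 2}[setting[-1]]
        ones = sum(1 for v in setting if v == 1)
        return (g_rank, ones)

    def parses(setting, sentence):
        # Stub: does this setting make the input sentence interpretable?
        raise NotImplementedError

    def reset(ranked_settings, position, sentence):
        """Failure-driven resetting: keep the current setting if it parses
        the input; otherwise walk down the ranked list, never revisiting
        any earlier setting."""
        for i in range(position, len(ranked_settings)):
            if parses(ranked_settings[i], sentence):
                return i
        return position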
2.2.3 Modifying the IXSH
So far the IXSH approach to word order has appeared to be very promising. We
have a parameter space which can accommodate all the basic word orders and
a parameter-setting algorithm which has some desirable properties. Many word
order facts that used to be derived from the values of HD-parameters have proved
to be derivable from the values of S(M)-parameters as well. In addition, the
parameter space of S-parameters can account for word order phenomena (such as V2
and scrambling) which are difficult to accommodate in the parameter space of
HD-parameters. What we have seen has certainly convinced us that movement can
have a much more important role to play in the derivation of word orders. However,
we have not yet proved that HD-parameters can be eliminated altogether. In other
words, we are not yet sure whether S(M)-parameters can account for everything
that HD-parameters are capable of accounting for.
Both Kayne (1992, 1993) and Wu (1992, 1993) have assumed a base structure
where the head invariably precedes its complement. This structure is strictly
right-branching. One consequence of this is that linear precedence corresponds
to asymmetric C-command relations in every case. Given two terminal nodes
A and B where A asymmetrically C-commands B, A necessarily precedes B in
the tree. In current theories, functional categories dominate all lexical categories
in a single IP/CP, with all the functional heads asymmetrically C-commanding the
lexical heads. This means that, in any single extended projection (in the sense
of Grimshaw (1991)18), all functional heads precede the lexical heads in the base
structure. This is apparent from the tree in (8). Let us assume that a functional
18In such extended projections, the whole CP/IP is projected from the verb.
head can be spelled out in two different ways: (i) as an affix on a lexical head
or (ii) as an independent word such as an auxiliary, a grammatical particle, an
expletive, etc. (This assumption will be discussed in detail in 2.3.) In Case (i),
the lexical head must have moved to or through the functional head, resulting
in an “amalgamation” of the lexical head and the functional head. The prefix or
suffix to the lexical head may look like a functional head preceding or following the
lexical head, but the two cannot be separated by an intervening element. Case (ii) is
possible only if the lexical head has not moved to the functional head, for otherwise
the functional head would have merged into the lexical head. Consequently, the
lexical head such as a verb must follow the functional head such as an auxiliary
in a surface string. This is so because the lexical head is base-generated lower in
the tree and will be asymmetrically C-commanded by all functional heads unless it
moves to a higher position. What all this amounts to is the prediction that, unless
we have VP-preposing or the kind of IP-preposing suggested in Kayne (1993), we
should never find a string where a functional element appears to the right of the
verb but is not adjacent to it. In other words, the sequence in (12) is predicted to
be impossible, where F+ stands for one or more overtly realized functional heads,
such as auxiliaries and grammatical particles, and X stands for any intervening
material between the verb and the functional element(s).
(12) [cp ... [ip ... Verb X F+ ]]
One may argue that this sequence is possible if excorporation in Koopman’s (1992)
sense can occur. In this kind of excorporation, a verb moves to a functional head
without getting amalgamated with it, and then moves further up. In that case, the
verb will end up in a position which is higher than some functional heads. When
these functional heads are spelled out as auxiliaries or particles, they will follow
the verb. But this does not account for all cases of (12). Consider the Chinese
sentence in (13).19
(13) Ta kan-wan nei-ben shu le ma
he finish reading that book Asp Q/A
'Has he finished reading that book?' or
'He has finished reading that book, as you know.'
This sentence fits the pattern in (12). The aspect marker le20 and the ques-
tion/affirmation particle ma21 are not adjacent to the verb, being separated from
it by a full NP. This order does not seem to be derivable from excorporation. The
aspect particle le is presumably generated in AspP (aspect phrase) and the ques­
tion/affirmation particle ma is generated in CP (Cheng 1991, Chiu 1992). In order
for both the verb and the object NP to precede le or ma, the verb must move to
a position higher than Asp° or C°, and the object NP must also be in a position
higher than Asp° or C°. This is impossible given standard assumptions. Therefore
the sentence in (13), which is perfectly grammatical, is predicted to be impossi­
ble in Wu’s (1993) model. Of course, this sentence can be generated if we accept
Kayne’s (1993) hypothesis that the whole IP can move to the Spec of CP. However,
this kind of extensive pied-piping still remains a speculation at present. We do
not feel justified in adopting this new hypothesis just to save this single construction.
19This sentence is ambiguous in its romanised form, having both an interrogative reading and
a declarative reading. (They are not ambiguous when written in Chinese characters, as the two
senses of ma are written in two different characters: “吗” and “嘛”.)
20There are two distinct le's in Chinese: an inchoative marker and a perfective marker (Teng
1973). These two aspect markers can co-occur and they occupy distinct positions in a sentence.
The inchoative le is always sentence-final in a statement while the perfective le immediately
follows the verb. The le in (13) is an instance of the inchoative marker.
21Whether it is the interrogative ma or the affirmative ma depends on the intonation.
Moreover, the adoption of this new movement can make our system too powerful,
with the result that too many unattested word orders are predicted. Finally, the
movement of IP cannot solve every problem we have here. The position of le, for
instance, would still be a mystery even if the new hypothesis were taken. All this
shows that there are linguistic facts which the Invariant X-Structure Hypothesis is
unable to account for without some strenuous stretch in our basic assumptions. By
contrast, these facts can receive a very natural explanation if some HD-parameters
are allowed for. We can assume that CP and IP (which contains AspP) are head-
final in Chinese. The verb does not move overtly in Chinese, so the heads of AspP
and CP are spelled out as an aspect marker and a question/affirmation marker
respectively. Since they are generated to the right of the whole VP, they must
occur in sentence-final positions.
The assumption that CP and IP can be head-final is supported by facts in
other languages. In Japanese, for example, we find the following sentences.
(14)22 Yamada-sensei-wa kim-ashi-ta ka
Yamada-teacher-Topic come-Hon-Past Q
'Did Professor Yamada come?'
(15)23 Kesa hatizi kara benkyoosi-te i-ru
this-morning 8 from study-Cont be-Nonpast
'I have been studying since eight this morning.'
In (14), we find the question particle ka in a post-verbal position. This particle is
clearly a functional element.24 As assumed in Cheng (1992) and Fukuda (1993),
22Example from Tetsuya Sano.
23From Kuno (1978).
24A number of other particles can appear in this position, such as no, sa, yo, to, etc. There
is no doubt that they are functional elements, though the exact functions they perform are
controversial.
question particles in Japanese are positioned in C°. In (15), the verb is followed
by i-ru which is most likely located in T. In both cases, a functional head appears
after the verb, which is to be expected if CP and TP (which is part of IP) are
head-final in Japanese. The IXSH will have difficulty explaining this unless we
assume that these particles and auxiliaries are in fact suffixes which come together
with the verb from the lexicon. But there is strong evidence that ka and i-ru are
not suffixes. We could also argue that the verb has managed to precede ka and
i-ru by left-adjoining to T° and C° through head movement. But in that case
the verb would be in C° and the word order would be VSO instead of SOV. The
excorporation story is not plausible either, for the verb in (14) would have to move
to a position higher than C in order to precede ka.
Arguments for the existence of HD-parameters in IP are also found in European
languages. In German, an auxiliary can appear after the verb, as we see in (16).
(16)25 dat Wim dat boek gekocht heeft
that Wim that book bought has
'that Wim has bought that book.'
The clause-final heeft is located in the head of some functional projection. It is not
a suffix which can be drawn from the lexicon together with the verb.26 If we stick
with the IXSH, the word order in (16) would not be possible. For the verb to get
in front of heeft, it must move to a position at least as high as heeft. But the word
order in that case would be SVO or VSO instead of SOV. However, (16) will not
be a problem if we say that IP is head-final in German.
It is becoming evident that a strong version of IXSH, where HD-parameters are
25Example from Haegeman (1991).
26The fact that heeft can appear in positions not adjacent to the verb shows that it is not an
affix.
eliminated altogether, is very difficult to maintain. We have seen that, although
the S(M)-parameters can explain things that HD-parameters fail to explain, there
are also facts which are best explained by HD-parameters. In other words, we have
come to the conclusion that the second possibility in (7) is more plausible. It seems
that the word order facts covered by S(M)-parameters and HD-parameters
intersect each other. While some facts can receive an explanation in terms of either
S(M)-parameters or HD-parameters, there are word order phenomena which can be
explained by S(M)-parameters only or HD-parameters only. The situation we have
here is graphically illustrated in (17). Therefore we need both types of parameters.
(17)
A = facts covered by S(M)-parameters
B = facts covered by head-direction parameters
C = facts covered by both parameters
The Coverage of S-Parameters and HD-Parameters
The next question is how to coordinate these two kinds of parameters in an account
of word order variation. As (17) shows, the empirical grounds covered by these two
types of parameters overlap a lot. If we keep both parameters in full, there can
be too much redundancy. The number of parameters we get will be greater than
necessary. Ideally the parameter spaces of these parameters should complement
each other. There should therefore be a new division of labor so that the two kinds
of parameters duplicate each other's work as little as possible. There are at least
two ways in which this can happen.
(i) Word order is derived mainly through movement, with variation in phrase
structures covering what is left out. In other words, the facts in the A and
C areas of (17) are accounted for by S(M)-parameters and those in B by
HD-parameters.
(ii) Word order is mainly a property of phrase structure, with movement account­
ing for what is left out. In other words, the facts in the B and C areas of (17)
are accounted for by HD-parameters and those in A by S(M)-parameters.
The choice between the two can be based on several different considerations. The
model to be favored should be syntactically simpler and typologically more ade­
quate. In addition, it should be a better model in terms of language acquisition
and language processing. In order to have a good comparison and make the correct
decision, we must have good knowledge of both approaches. The approach in (ii)
has already been explored extensively. This phrase-structure-plus-some-movement
account has been the standard story for many years. The results are familiar to
everyone. The approach in (i), however, has not received enough investigation yet.
This is an area where more research is needed. For this reason, the remainder
of this thesis will be devoted mainly to the first approach. We will examine this
approach carefully in terms of syntactic theory, language typology, language ac­
quisition, and language processing. It is hoped that such an investigation will put
us in a better position to judge different theoretical approaches.
Having decided on the main goal of the present research, we can now start
looking into the specifics of a model which implements the idea in (i). One of the
obvious questions that is encountered immediately is “how many S(M)-parameters
are there and how many HD-parameters”. This question will be addressed in
Chapter 3 when we work on a full specification of the model. It will be proposed
that the HD-parameters be restricted to functional categories only. In particu­
lar, there will be only two HD-parameters: a complement-head parameter for CP
and a complement-head parameter for IP. The former determines whether CP is
head-initial or head-final and the latter determines directionality of IP. With the
assumption that IP consists of a number of functional categories each heading its
own projection, the number of heads in IP can be more than one. Instead of as­
suming that each of these functional projections has an independent parameter, I
will assume that the complement-head parameter applies to every individual pro­
jection in IP. This is to say that the value of this parameter will apply to every
projection in IP. No situation will arise where, say, AgrP is head-initial while TP
is head-final. This decision is based on the following considerations:
(i) All previous models incorporating HD-parameters have treated IP as a single
unit.
(ii) There has been no strong evidence that IP can be “bidirectional” in the sense
that some of its projections are left-headed and some right-headed.
(iii) The organization of the IP-internal structure is still a controversial issue.
There has not been enough consensus as to how many IP-internal categories
there are and how they are hierarchically organized. It is risky, therefore, to
attach a parameter to any specific “sub-IPs”.
Similar arguments can be made for CP which is treated as a single unit in spite
of the fact that some other CP-like categories, such as FP (Focus Phrase) (Brody
1990, Horvath 1992, etc.) and TopicP, have been proposed.
By reducing the number of HD-parameters to two, we have not destroyed the
Invariant X-bar Structure Hypothesis completely. What we are left with is a
weaker version of the IXSH where the directions of specifiers are still fixed and the
directions of complements are fixed except those in functional categories. Com­
pared with the model where each category can have both a specifier-head parameter
and a complement-head parameter, the degree of invariability in X-bar structure
is much higher. As a result, the number of possible tree structures has decreased
considerably. The significance of this modified version of IXSH for language ty­
pology, language acquisition and language processing will be studied in Chapters
4, 5 and 6.
2.3 Interaction of S(F)- and S(M)-Parameters
In this section, we come back to the relationship between overt morphology and
overt movement. We have assumed for the time being that these two phenomena
are independent of each other. It has been proposed that each feature has
two separate S-parameters associated with it: the S(F)-parameter that determines
whether the feature is spelled out morphologically and the S(M)-parameter that
determines whether the movement responsible for the checking of the feature occurs
before Spell-Out. It is interesting to note that both the S(F)- and S(M)-parameters
are feature-related. Given that both the S(F)-parameter and the S(M)-parameter are
binary (1 or 0),27 there is a parameter space of 4 possible settings for each feature:
(18) S(F) S(M)
(i) 1 1
27We will see later on that the S(M)-parameter can have a third value: 1/0.
(ii) 1 0
(iii) 0 1
(iv) 0 0
What is the linguistic significance of these value combinations? Before answering
this question, let us have a closer look at how feature-checking works.
In the Minimalist framework, we can entertain the assumption that any feature
that needs to be checked is generated in two different places, one in a functional
category and one in a lexical category. Let us call these two instances of the same
feature F-feature and L-feature respectively. For instance, the tense feature is
found in both T and V. To make sure that the two tense features match, the verb
(which carries the L-feature) must move to T (which carries the F-feature) so that
the value can be checked. If the checking (which entails movement) occurs before
Spell-Out, the F-feature and the L-feature will get unified and become indistin­
guishable from each other.28 As a result, only a single instance of this feature will
be available at the point of Spell-Out. If the checking occurs after Spell-Out, how­
ever, both the F-feature and the L-feature will be present at Spell-Out and either
can be overtly realized. This assumption has important consequences. As will be
discussed in detail in Chapter 4, this approach provides a way of reconciling the
movement view and base-generation view of many syntactic issues. The checking
mechanism we assume here requires both base-generation and movement for many
syntactic phenomena. Many features are base-generated in two different places
but related to each other through movement. I will not go into the details here.
The implications of these assumptions will be fully explored in Chapter 4.
28This is similar to amalgamation, where the functional element becomes part of the lexical
element.
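The checking logic just outlined can be summarized schematically. The sketch below is my own rendering of the assumption, not a mechanism given in the text:

    def instances_at_spell_out(s_m):
        """Which copies of a feature are still present when Spell-Out applies."""
        if s_m == 1:
            # Checking (hence movement) before Spell-Out: the F-feature and
            # the L-feature have unified into a single instance.
            return ["unified F/L-feature"]
        # Checking after Spell-Out: both instances are present, and either
        # one (or, as in the Swedish case discussed below, both) may be
        # overtly realized.
        return ["F-feature", "L-feature"]

    def overt_realizations(s_f, s_m):
        """Candidates for overt realization under a given (S(F), S(M)) pair."""
        return instances_at_spell_out(s_m) if s_f == 1 else []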
Now let us examine the values in (18) and see what can possibly happen in
each case.
In (18(i)), both the S(F)-parameter and the S(M)-parameter are set to 1. This
means both the feature itself and the movement that checks this feature will be
overt. In this case, the F-feature and L-feature will be unified after the movement
and be spelled out on the lexical head in the form of, say, an affix. In addition,
this lexical item will be in a position no lower than the one where the F-feature is
located. If the features under question are agreement features, for example, we will
see an inflected verb in IP/AgrP or a higher position, with the inflection carrying
agreement information. This seems to be the “normal” case that occurs in many
languages. One example is French, where the verb does seem to appear in IP/AgrP
and it has overt morphology indicating subject-verb agreement (see (19)).29
(19) Mes parents parlent souvent espagnol
my parents speak-3P often Spanish
'My parents often speak Spanish.'
If the feature to be considered is the case feature of an NP, this NP will move to
the Spec of IP/AgrP and be overtly marked for case. Japanese seems to exemplify
this situation, as can be seen in (20).30
(20)31 Taroo-ga Hanako-o yoku mi-ru
Taroo-nom Hanako-acc often see-pres
‘Taroo often sees Hanako’
29The fact that the verb precedes the adverb souvent in this sentence tells us that the verb has
moved to IP/AgrP.
30We can assume that the subject NP in this sentence has moved to the Spec of AgrSP and
the object has moved to Spec of AgrOP.
31Example from Akira Nakamura.
If the feature to be checked is the scope feature of a wh-phrase, the wh-phrase
will move to the Spec of CP, with the scope feature overtly realized in some way.
English might serve as an example of this case, though it is not clear how the scope
feature is overtly realized.
In (18(ii)), the S(F)-parameter is set to 1 but the S(M)-parameter is set to
0. This means that the feature must be overt but the movement that checks this
feature must not. Since the checking movement takes place after Spell-Out, both
the F-feature and the L-feature will be present at the point of Spell-Out and at
least one of them must be overtly realized. There are three logical possibilities
for spelling the feature out: (a) spell out the F-feature only, (b) spell out the
L-feature only, or (c) spell out both. (a) and (b) seem to be exemplified by the
agreement/tense features in English. The English verb does not seem to move to
I before Spell-Out. Since the features appear both in I and on the verb, we can
pronounce either the L-features or the F-features. When the L-features are pro­
nounced, we see an inflected verb, as in (21). When the F-features are pronounced,
we see an auxiliary, as in (22) where does can be viewed as the overt realization
of the I-features. (In other words, Do-Support is a way of spelling out the head of
IP.) The third possibility where the feature is spelled out in both places does not
seem to be allowed in English, as (23) shows.
(21) John loves Mary.
(22) John does love Mary.
(23) * John does loves Mary.
In Swedish, however, “double” spell-out seems to be possible. When a whole VP
is fronted to first position, which probably means that the verb did not have a
chance to undergo head movement to T(ense), the tense feature is spelled out on
both the verb and the auxiliary which has moved from T to C. This is shown in
(24).
(24)32 [ Oeppnade doerren ] gjorde han
open-Past door-the do-Past he
'He opened the door.'
The value in (18(ii)) can also be illustrated with respect to case/agreement
features and the scope feature. It seems that in English the features in Spec of
IP/AgrSP must be spelled out. When overt NP movement to this position takes
place, the features appear on the subject NP. In cases where no NP movement
takes place, however, Spec of AgrSP is occupied by an expletive (as shown in (25))
which can be interpreted as the overt realization of the case/agreement features in
Spec of AgrSP.
(25) There came three men.
The situation where the scope feature is overtly realized without overt wh-movement
is found in German partial wh-movement. In German, wh-movement can be either
complete or partial, as shown in (26), (27), (28) and (29).33
(26) [mit wem]i glaubst du [cp dass Hans meint [cp t'i dass Jakob ti
with whom believe you that Hans think that Jakob
gesprochen hat ]]
talked has
'With whom do you believe that Hans thinks that Jakob talked?'
32Example from Christer Platzack.
33Examples are from McDaniel (1989) and Cheng (1993).
(27) wasi glaubst du [cp [mit wem]i Hans meint [cp dass Jakob ti
WHAT believe you with whom Hans think that Jakob
gesprochen hat ]]
talked has
'With whom do you believe that Hans thinks that Jakob talked?'
(28) wasi glaubst du [cp wasi Hans meint [cp [mit wem]i Jakob ti
WHAT believe you WHAT Hans think with whom Jakob
gesprochen hat ]]
talked has
'With whom do you believe that Hans thinks that Jakob talked?'
(29) *wasi glaubst du [cp dass Hans meint [cp [mit wem]i Jakob ti
WHAT believe you that Hans think with whom Jakob
gesprochen hat ]]
talked has
'With whom do you believe that Hans thinks that Jakob talked?'
These four sentences have the same meaning but different movement patterns. In
(26) the wh-phrase (mit wem) moves all the way to the matrix CP. In (27) and
(28), the wh-phrase also moves, but only to an intermediate CP. The Spec(s) of
CP(s) which mit wem has not moved to are filled by was which is usually called
a wh-scope marker. (29), which is ungrammatical, is identical to (28) except that
the specifier of the immediately embedded CP does not contain was. It seems
that the scope-feature in German has the value in (18(ii)) which requires that this
feature be made overt. In (26) the wh-phrase has moved through all the Specs of
CP, which means all the scope features have been checked before Spell-Out. So the
F-features and L-features have unified and we see the wh-phrase only. In (27) and
(28), the checking movement is partial and all the unchecked scope features are
spelled out as was. (29) is ungrammatical because one of the unchecked features
is not spelled out, which contradicts the value of S(F) that requires that all scope
features be made overt.
The last example to illustrate the value in (18(ii)) is from French. Previous
examples have shown that the specifiers of CP and AgrSP can be spelled out by
themselves. The French example here is intended to show that the head of CP can
also be spelled out in that way. Let us suppose that C° contains a certain predi­
cation feature which determines, for instance, whether the sentence is a statement
or a question. Let us further suppose that this feature must be spelled out in a
question in French. When I-to-C movement takes place, as in (30), this feature is
not spelled out by itself. It is merged with the verb in C°.
(30) Apprenez-vous le russe
learn you Russian
'Do you learn Russian?'
In cases where no overt I-to-C movement takes place, however, the predication
feature must be spelled out on its own. This is shown in (31) where est-ce que can
be regarded as an overtly realized head of CP.
(31) Est-ce que vous apprenez le russe
you learn Russian
'Do you learn Russian?'
In (18(iii)), the S(F)-parameter is set to 0 while the S(M)-parameter is set to 1. This
means the feature is checked through movement before Spell-Out but not overtly
realized. We will see overt movement but not overt morphology. This seems to
happen to the case/agreement features of the subject NP in Chinese, as (32) shows.
(32) Ta bu xihuan Lisi
He not like Lisi
'He doesn’t like Lisi.’
We assume that the subject NP ta moves overtly to the Spec of IP/AgrSP
(Cheng 1991, Chiu 1992). One argument for this is the following: with the assump­
tion that the subject is base-generated within VP and Neg is generated outside VP,
the subject could not precede Neg had it not moved to Spec of IP/AgrSP. However,
there is no case/agreement marking on this NP at all, which shows that the S(F)-
parameter for case/agreement features is set to 0. More examples demonstrating
the value in (18(iii)) can be found in Chapter 4.
In (18(iv)), both the S(F)-parameter and the S(M)-parameter are set to 0. In
this case, there is neither overt morphology nor overt movement. The object-verb
agreement features in many languages seem to exemplify this value: there is no
overt agreement marker and the object does not move. Examples of this will be
given in Chapter 4.
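Pulling the four cases together, the predictions surveyed in this section can be tabulated as follows. This is merely a summary of the discussion above, in the same sketch style as before; the wording of each entry is mine.

    # (S(F), S(M)) -> predicted surface pattern, with the examples cited above
    PREDICTIONS = {
        (1, 1): "overt movement plus overt morphology on the moved element"
                " (French verb inflection in (19), Japanese case in (20))",
        (1, 0): "overt morphology without overt movement: L-feature,"
                " F-feature, or both spelled out (English inflection and"
                " do-support, Swedish (24), German wh-scope marker was)",
        (0, 1): "overt movement without overt morphology"
                " (Chinese subject raising in (32))",
        (0, 0): "neither overt morphology nor overt movement"
                " (object-verb agreement in many languages)",
    }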
The examples we have seen so far are sufficient to show that the interaction
between S(F)-parameters and S(M)-parameters can give rise to a wide variety of
syntactic phenomena. It has added a new dimension to our parameter space.
2.4 Summary
In this chapter, I have proposed a syntactic model which is built on some of the new
assumptions in the Minimalist framework. By fully exploiting the notion of Spell-
Out, we discovered a model where a wide range of linguistic facts can be accounted
for in terms of two sets of S-parameters. The values of S(F)-parameters determine
which features are morphologically realized; the values of S(M)-parameters deter­
mine which movements are overt. It is found that so much variation in word order
can be derived from movement that the number of HD-parameters can be reduced.
It is also found that the interaction between S(F)-parameters and S(M)-parameters
can make interesting predictions for the distribution of functional elements, such
as affixes, auxiliaries, expletives, particles and wh-scope markers. In short, we
have found a parameter space which has rich typological implications. So far the
model has been presented in a very sketchy way with many details left out. But
the details are important. In the chapters that follow, we will make the model
more specific and put it to the test of language typology, language acquisition and
language processing.
Chapter 3
An Experimental Grammar
In the previous chapter I proposed a new approach to syntax where cross-linguistic
variations in word order and inflectional morphology are attributed to three sets of
parameters: the S(F)-parameters, the S(M)-parameters, and the HD-parameters.
So far the discussion has been very general, with many details unattended to. To
fully explore the consequences of this new hypothesis in language typology, lan­
guage acquisition and language processing, we need to work with a more concrete
model. We must have a grammar which is specific enough so that the consequences
can be computed. The goal of this chapter is to specify such an experimental gram­
mar.
Obviously, the presentation of a syntactic model which is complete in any sense
is an unrealistic goal here. In the first place, the Minimalist theory has not been
fully specified. Many issues that the standard P&P model has addressed have not
been accommodated in the new program yet, not to mention those areas that even
the standard model has left unexplored. In addition, I will not follow MPLT in
every detail, though the theory I am proposing is in the Minimalist framework.1
1As has been stated in the last chapter, I will try to make a distinction between MPLT and
the Minimalist framework. The former refers to the specific model described in MPLT while the
latter refers to the general approach to syntax initiated by MPLT.
This leaves more issues open, for even those things that are supposed to have been
discussed in MPLT may have to be reconsidered here. The best we can do at
this moment is to come up with a partial model which has full specifications for
those parts of the grammar that are relevant to the testing of our hypothesis. The
conclusions we draw from this partial grammar will not be definitive, but they can
at least provide us with some way of evaluating the new theory. Such evaluation
will give us some idea as to whether this line of research is worth pursuing at
all. For this reason, the syntactic model to be presented in this chapter will be
minimal. In particular, we will be concerned only with those parts of the grammar
which are responsible for the basic word orders and morphological characteristics
of languages.
We will start with a grammar which is restricted in the following ways.
• Declarative sentences only. We will focus on cross-linguistic variations in
statements first. In many languages, the word orders found in questions
are different from those in statements. We do not want to get into this
complication before we have a better understanding of how the parameters
work in the “basic” type of sentences, i.e. declarative sentences. Therefore,
I will put other sentence types aside in this chapter, though some of them
will be picked up in Chapter 4.
• Matrix clauses only. We will start with simple sentences with no embedding.
In other words, we will be concerned mainly with Degree-0 sentences.2 There
are two reasons for this temporary exclusion of embedded clauses. First,
main clauses and subordinate clauses have different word orders in some
2Discussions on the “degrees” of sentences can be found in Wexler and Culicover 1980, Morgan
1986 and Lightfoot 1989, 1991.
languages (e.g. German, Dutch, and many other V2 languages). Why there
is this difference deserves some special discussion. We will come to that
after matrix clauses have been analyzed. Second, the inclusion of embedded
clauses will make it necessary to deal with “long-distance” movement whose
application involves the notion of “barriers” or “minimality”. How these
notions are defined in the Minimalist framework is not clear yet. It is very
likely that the standard definitions can be transplanted in the present model
without too much tinkering. But I prefer to put these issues aside until we
have worked on aspects of the grammar which are more directly related to
basic word order.
• Two types of verbs only. Since I will be mainly concerned with the ordering
of S(ubject), O(bject) and V(erb) in this experimental study, I will only look
at two types of verbs: (a) intransitive verbs with a single NP argument (e.g.
swim) and (b) transitive verbs with two NP arguments (e.g. love).
• IP/CP only. I will experiment with the ordering in IP/CP first and leave the
internal structures of NP/DP aside for the moment. There have been many
observations on the parallels between IP/CP and DP (Stowell 1981, 1989,
Abney 1986, Szabolcsi 1989, Valois 1991, etc.). The new approach considered
here can definitely apply to the word order phenomena within DP. It is
very likely that the movement patterns in IP/CP and NP/DP are related
(Koopman, 1992). However, I will single out IP/CP for analysis first. All
NPs/DPs will be treated as unanalyzed wholes for the time being. Their
internal structures and internal word orders will be given a very preliminary
analysis in Chapter 7.
• No binding theory. Binding theory is one of those components of the gram­
mar that needs major re-working in the Minimalist framework. In order not
to get distracted from my main topic, I will not go into a Minimalist account
of binding theory.
What is listed above does not exhaust the topics which are left out in this chapter.
Other things will be noted in the course of presentation. We will see that, in spite
of these simplifications and omissions, the model will be rich enough to spell out
the basic properties of the present approach. The typology, the parser and the
acquisition procedure based on this minimal model will not be complete, but they
will be sufficient for the illustration of these properties.
It must be emphasized again that the model to be described below does not
follow MPLT in every detail. The model is in the Minimalist framework in the
sense that it keeps to the spirit of the Minimalist approach. I will try to point out
the differences as we go along. It should also be emphasized that the grammar to
be described is not the only one where the new approach will work. I am simply
trying out a particular instantiation of the theory to show that my proposal can
be put to practice in at least one given version of the model. By the time
we have completed the experiment, we will realize that the main results of our
experiment do not have to rely on this particular grammar. The approach should
apply in general to many different specifications of the theory.
We now start on our particular model. To compute the relationships between
the parameter values on the one hand and the variations in word order and mor­
phology on the other, we must specify at least the following.
(i) The categorial system of the model. This provides the building blocks of
linguistic structures.
(ii) The feature system of the model. Since both the S(F)- and S(M)- parameters
are associated with features, we will not know how many S-parameters are
needed unless we know what features can be spelled out and what features
need checking.
(iii) The computational system of the model. This includes the following sub­
systems:
• Lexical Projection (LP) which determines the phrasal projections of all
categories.
• Generalized Transformation (GT) which determines how the phrasal
projections are joined to form a single tree.
• Move-α which is responsible for feature-checking.
(iv) The PF constraints.
(v) The LF constraints.
The PF and LF constraints can be easily defined in this model. We will therefore
specify (iv) and (v) first.
There is only one PF constraint in this model which requires that the input
to PF be a single tree. Presumably, the violation of this constraint might result
in “broken" sentences. This does not mean, however, that we are not allowed to
produce sentence fragments. A single NP or PP can also constitute a single tree
and thus be a legitimate object at PF. In MPLT there is another constraint which
rejects strong features that survive to PF, as we have discussed in 2.1.2. Since we
have chosen not to resort to the strong/weak distinction as a possible explanation
for overt movement, this constraint does not exist in the present model.
The only LF requirement in this model is that all the features must be checked.
Since checking requires movement, it actually requires a set of movements to take
place in the derivation. The nature of these movements will become clear in 3.2.
In what follows, we will look at these systems one by one.
3.1 The Categorial and Feature Systems
The categorial system and the feature system will be discussed in the same section,
because they are closely related. Every feature is associated with one or more
categories and every category is basically a bundle of features.
3.1.1 Categories
The categories to be used in this mini-grammar will be limited to the ones in (33).
(33) { C(omp), Agr(eement)1, Agr(eement)2, T(ense), A(spect), V(erb), N }
Agr1 and Agr2 are equivalent to what we usually call AgrS and AgrO. We prefer not
to use AgrS and AgrO because AgrS is not always associated with the subject, nor
is AgrO always associated with the object. The use of Agr1 and Agr2 will facilitate
our discussion on ergative constructions, passive constructions, unaccusatives, etc.
We see that all categories except N are verbal in nature while some nominal
categories like D(eterminer) are missing. This is because, for the time being,
we will not look into the internal structures of NP or DP, all of which will be
treated as a single unit. For instance, John, the boy and the boy who smiled will
be treated identically as NPs. Other common categories that are absent in the list
include P(reposition), Adv(erb), Adj(ective) and Neg(ation). Some of them will
be introduced into our system in succeeding chapters when they become relevant
to our discussion.
We assume that the set of categories in (33) is universal. In other words, the
categories are innately given as part of UG. The fact that some categories do not
seem to show up in some languages can be explained in two different ways:
(i) Only a subset of those categories is used in each particular language. After
the critical period of language acquisition, the categories that are not used
are “trashed”.
(ii) All the categories are present not only in UG but in every individual adult
grammar as well. The fact that some categories are invisible simply means
they are not spelled out.
The explanation to be adopted in our present model is the one in (ii). There are
several arguments against (i). First of all, the assumption that some categories
can be trashed after the critical period implies that different languages can have
very different X-bar structures. If we assume (ii), however, the structures will be
more uniform. Secondly, the “trashing” of categories is not a simple computational
operation. It may mean a partial or total rehash of selectional relations among
categories. Suppose that in UG C selects Agr as a complement, Agr selects T,
and T selects Asp. If a language does not have overt tense and agreement, we
have to remove all those selectional rules and replace them with a new rule which
may let C select Asp as its complement. How this complicated operation can be
triggered and accomplished is a question. Finally, even in languages where certain
categories seem to be missing, the concepts represented by those categories appear
to be present. Many people have analyzed Mandarin Chinese as a language where
the category T is missing (e.g. Cheng 1991). This by no means indicates that
speakers of this language are tense-insensitive. As a matter of fact, every Chinese
sentence is interpreted in some tense frame. This is true even in cases where no
time adverbial is present. The simplest explanation for this fact is that T is present
though it is not overtly realized.
3.1.2 Features
The arguments we made above about the universal nature of categories can be
applied to features in a similar way. The features to be assumed in this model are
also supposed to be universal. They are present
in UG and they remain in the grammar of every individual language. The fact
that only a subset of those features is visible in a given language means that only
this subset is spelled out. Therefore, we can assume the existence of a feature
as long as this feature is visible in some languages. We have hypothesized that
the visibility of a feature depends on the value of its S(F)-parameter rather than
the availability of this feature itself. Some people may argue that the distinction
we are making here has no empirical import. After all, what is the difference
between invisible existence and non-existence? But there is a difference in terms
of language acquisition and language processing. As we will see, both of them can
be simplified with our assumptions.
Now let us specify the features of our model. It is commonly accepted that the
basic features are of two types: the V-features and the NP-features. What these
features should exactly be is an open question, but we can start with the tentative
set in (34).
(34) V-features: θ-grid, case, tense, aspect, φ-features, predication features
NP-features: θ-role, case, φ-features, operator features
This set is by no means complete but it will be enough for experimental purposes.
In what follows, I will give some justification for the inclusion of those features in
our model, and specify for each feature whether there is an S(F)-parameter
associated with it.
The existence of θ-grids is relatively non-controversial. Whether this feature
can be spelled out, however, is open to debate. On the one hand, we can say that
it is always overtly realized in the argument structure of a sentence; on the other
hand, there does not seem to be a language where the θ-grids are morphologically
realized on the verb. But one thing is almost certain: the spell-out of θ-grids
does not vary from language to language. We can think of this feature as being
either always spelled out (in the argument structure) or never spelled out (in verbal
morphology). Therefore there is no reason to assume an S(F)-parameter for this
feature.
It is also well accepted that NPs carry θ-roles, though their realization is mixed
up with that of case features. In many cases, it may seem that θ-roles and cases
are two sides of the same coin, morphological case being the overt realization of
θ-roles. However, while all θ-role-carrying NPs can have case markers in some
languages, not every case-marked NP carries a theta-role. We will therefore adopt
the standard view and treat case and θ-role as two different animals. Furthermore,
we will assume that all case-markers are overt realizations of case features while
θ-roles are understood but never spelled out. Consequently, there is no reason to
suppose that there is an S(F)-parameter associated with the θ-features.
The φ-feature is used as a cover term for all agreement features, such as Person,
Number and Gender. It is both a V-feature and an NP-feature. As far as overt
agreement is concerned, however, the Spell-Out of V-features is more important
than that of NP-features. A language is considered to have overt agreement as
long as the V-features are overt, regardless of the status of the NP-features.3 In
this experimental model, we will only be interested in those features which are
involved in overt agreement. For this reason, the spell-out of φ-features in NPs
will be ignored for the time being. When we say a φ-feature or an agreement feature
is spelled out, we mean the V-feature is overt. Another fact we will temporarily
ignore is that different φ-features can be spelled out independently. We could let
each φ-feature be associated with an independent S(F)-parameter. This is justified
because each feature can be overt or covert regardless of the status of other φ-
features. For instance, we can find a language where person and number features
are spelled out on the verb but the gender feature is not. However, such details
do not affect the general approach we are taking. They can be easily added to the
system after the big picture has been made clear. To simplify our parameter space
so as to concentrate on the more interesting aspects of the theory, we will assume
a single S(F)-parameter for the whole set of φ-features. Its value is 1 (spelled out)
if any subset of the φ-features is overt.
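This value-setting can be stated as a one-line rule. In the sketch below, s_f_agr/2, overt/2 and the language name lang1 are illustrative assumptions; the logic is exactly as just described:

    % S(F(agr)) is 1 iff any of the phi-features is overt in the language.
    s_f_agr(Lang, Value) :-
        (   member(F, [person, number, gender]), overt(Lang, F)
        ->  Value = 1
        ;   Value = 0
        ).

    % Illustrative facts for a language of the kind just mentioned, which
    % spells out person and number on the verb but not gender:
    overt(lang1, person).
    overt(lang1, number).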
The existence of the tense and aspect features is again non-controversial. They
are overt in some languages and covert in others. We could put these two features
in a single bundle and let them be associated with a single S(F)-parameter, as
3Functionally speaking, case-marking and agreement perform the same role, i.e. identifying
the grammatical functions of NPs. This function is realized on the NP when case is spelled out
and realized on the verb when agreement is spelled out.
we have done to the φ-features. However, the differentiation of these two features
is more important than that of φ-features in the present model because we are
focusing on the verbal system. How tense and aspect features can be realized on
their own is of interest to us. We have decided in 3.1.1 that T(ense) and Asp(ect)
constitute two different categories, each having its own features. Therefore we will
let tense and aspect be associated with two independent S(F)-parameters. This will
enable us to have a more detailed analysis of the tense/aspect system.
The case feature is usually regarded as an NP-feature, for it is often overtly
realized as case-markers on NPs. There is no doubt that there should be at least
one S(F)-parameter for the case features. Potentially we can associate a parameter
with each different case, but we will assume a simpler system where the spell-out
of all case features is determined by a single S(F)-parameter. This parameter will
have the value “1” in a language if there is any kind of morphological case in this
language.
It is not so obvious, however, whether case is also a V-feature. Most people
would think, at least initially, that verbs do not have case features. But there is
evidence that the verb does carry case features and these features are sometimes
visible. One example is found in Tagalog (Schachter 1976). In this language,
the verb can have a case marker and the case feature varies according to which
NP in the sentence is being topicalized. Consider the sentences in (35), (36),
(37) and (38). (The topic marker is ang while ng marks agent and patient, sa
marks locative and para sa beneficiary.) We could treat those markers as spelled
out theta-roles, but we will stick with our assumption that theta-roles are never
spelled out. Whatever is spelled out is always the case feature.
(35) Mag-aalis ang babae ng bigas sa sako para sa bata
A-will:take woman rice sack child
‘The woman will take rice out of a/the sack for a/the child.’
(36) Aalisin ng babae ang bigas sa sako para sa bata
O-will:take woman rice sack child
‘A/The woman will take the rice out of a/the sack for a/the child.’
(37) Aalisan ng babae ng bigas ang sako para sa bata
Loc-will:take woman rice sack child
‘A/The woman will take some rice out of the sack for a/the child.’
(38) Ipag-aalis ng babae ng bigas sa sako ang bata
B-will:take woman rice sack child
‘A/The woman will take some rice out of a/the sack for the child.’
These examples suggest that the case feature exists in both nouns and verbs.
Another possible example of verbal case features is cliticization. We can think
of cliticization as a process whereby case features are spelled out on the verb. This
view has been expressed by Borer (1984) who calls this process Clitic Spell-Out.
It is also reminiscent of the treatment of subject clitics in Safir (1985). Clitics can
be viewed as something between a pronoun and an affix. In fact, they are more
like affixes than pronouns. If we are allowed to treat them as affixes, as in the
lexical analyses of clitics,4 they will start to look like case and agreement markers
affixed to the verb. In the following French sentence, for example, me and la can
be viewed as the overt realization of case and agreement features on the verb, the
former being the feature matrix of [case:dat, person:1, number:s] and the latter
[case:acc, person:3, number:s, gender:f].
(39) Jean me-la-montre
John me-it-shows
‘John shows me it.’
4Lexical analyses claim that a clitic is in effect a derivational affix modifying the lexical entry
of a predicate. For instance, the alternation between lire un livre and le lire is taken to be an
alternation between a transitive verb lire and an intransitive le+lire.
Since languages can differ with respect to whether the verbal case feature is spelled
out, we assume that this feature has an S(F)-parameter associated with it.
The feature “predication” is supposed to contain information about sentence
type. It tells us, for instance, whether the verb (or the “predicate”) is used in a
statement or a question ([-Q] or [+Q]). Is this feature visible in some languages?
The answer seems to be positive. One way in which this feature can be said to be
realized is through intonation. It is very common for statements and questions to
have different intonational contours. The fact that the verb seems to be the main bearer
of the clausal intonation suggests that there is a verbal feature which can be overtly
realized. This feature also seems to show up morphologically sometimes. The
English word whether can be regarded as a spell-out of [+Q] in an embedded CP.
In Chinese, this feature can be morphologically realized as a question/affirmation
particle, as shown in (13) repeated here as (40), or in the A-not-A construction,
as in (41).
(40) Ta kan-wan nei-ben shu le ma
he finish reading that book Asp Q/A
‘Has he finished reading that book?’ or
‘He has finished reading that book, as you know.’
(41) Ni he-bu-he pijiu
you drink-not-drink beer
‘Do you drink beer?’
The verbal complex he-bu-he (‘drink or not’) in (41) can be viewed as an instance
where the [+Q] feature is spelled out on the verb. It seems that this feature must be
spelled out in Chinese either in a verbal form or as a grammatical particle.5 We will
5The A-not-A construction never co-occurs with the question particle ma, which suggests that
“double spell-out” is prohibited in Chinese.
assume that there is an S(F)-parameter associated with this predication feature,
as languages can vary as to whether this feature is morphologically realized.
The last feature we will discuss is “operator”. This feature is used as a cover
term for such features as “scope”, “topic” and “focus”. The status of these features
is open to discussion. We will assume that quantifier-raising (QR), topicalization
and focalization involve similar syntactic operations, i.e. putting a constituent
in a prominent position. This is the view expressed by Chomsky: “The natural
assumption is that C may have an operator feature (which we can take to be the
Q or wh-feature standardly assumed in C in such cases), and that this feature is a
morphological property of such operators as wh-. For appropriate C, the operators
raise for feature checking to the checking domain of C: [SPEC,C], or adjunction
to specifier (absorption), thereby satisfying their scopal properties. Topicalization
and focus could be treated the same way.” (MPLT p45) However, there seems
to be evidence that these operations are syntactically distinct. In Hungarian, for
instance, a quantified NP, a topic and a focus can apparently co-occur in a single
sentence. Consider (42).6
(42) Mari mindenkinek Petit mutatta be
Mary-Nom everyone-Dat Pete-Acc showed in
‘Mary introduced Pete to everyone.’
In this sentence, Mari is the topic, mindenkinek the raised quantified NP, and Petit
the focus. All three of them are raised to the beginning of the sentence and they
must appear in the order of Topic < QP < Focus. To account for these facts, we
will assume that there are distinct operator features but they are checked through
the same syntactic operation, namely, by raising a constituent to the Spec of CP
through A-bar movement. The fact that more than one constituent can be raised
6Example from Anna Szabolcsi.
in this way simply means that there is more than one operator. The multiple
A-bar movements that seem to be involved may be handled in a way analogous
to the treatment of multiple wh-movement. We can have a layered CP where the
specifier position of each layer contains one operator. The different layers might
be named Topic-P, Focus-P, etc. We can also let the operators adjoin to CP one
after another. We can even put an ordered list of operators in Spec of CP. For
the purpose of our preliminary experiments, however, there is no need to commit
ourselves to any of those options. Being minimal again, we will currently limit
ourselves to those cases where only one operator feature is checked. This may be
the scope feature, the topic feature, or the focus feature.
The next question is whether the operator feature is ever morphologically
realized. The answer seems to be “yes”. The wh-scope marker was in German, for
instance, can be taken as an overt scope feature, while the topic marker -wa in
Japanese can be considered an overt topic feature. A German example is given in
(43) (same as (28)) and a Japanese example is given in (44).
(43) was_i glaubst du [cp was_i Hans meint [ mit wem ]_i Jakob t_i
WHAT believe you WHAT Hans think with whom Jakob
gesprochen hat ]]
talked has
‘With whom do you believe that Hans thinks that Jakob talked?’
(44) Taroo-wa sensei da
Taroo-Topic teacher is
‘Taroo is a teacher.’
We will associate an S(F)-parameter with the operator feature to account for the fact
that it is overt in some languages but not in others.
In sum, we have six S(F)-parameters which are associated with the following
features: case, agreement, tense, aspect, operator and predication. They will
be called S(F(case)), S(F(agr)), S(F(tns)), S(F(asp)), S(F(op)) and S(F(pred))
respectively.
3.1.3 Features and Categories
So far we have been talking about V-features and NP-features as if only verbs
and nouns had features. This is not true, of course. Every category, whether
lexical or functional, has a set of features. Moreover, many features are found in
more than one category. Typically, a feature appears in two different places, one
in a lexical projection and one in a functional projection. The tense feature, for
instance, exists in both T and V. We have called features in functional projections
F-features and those in lexical projections L-features. Whenever a feature resides in
both a functional projection and a lexical projection, feature-checking is required.
To make sure that the F-feature and the L-feature agree in their values, the lexical
element bearing the L-feature must move to the position where the F-feature is
located. The only features that seem to have L-features only are the θ-features.
The θ-grid is found in the verb only and the θ-roles are found in NPs only. All
other features are generated in more than one position. The following table lists
the features and the categories that contain them.
(45)                 functional    lexical
     θ-grid:                       V
     θ-role:                       N
     tense:          T             V
     aspect:         Asp           V
     φ(1):           Agr1          V, NP1
     φ(2):           Agr2          V, NP2
     case(1):        Agr1          V, NP1
     case(2):        Agr2          V, NP2
     predication:    C             V
     operator:       C             NP
There are two sets of φ-features. φ(1) consists of the features involved in subject-
verb agreement. φ(2) is related to object-verb agreement. NP1 and NP2 usually
correspond to Subject and Object, but not always so. Technically, NP1 is just the
higher NP and NP2 the lower one. Parallel to the φ-features, there are also two
sets of case features. Case(1) is the case assigned/checked in Agr1, and case(2)
the one in Agr2. The NP with which the operator feature is associated can be any
NP, subject or object.
Conceptually, we can think of the F-features as representing the information
the speaker intends to convey and the L-features as the features to be physically
realized. To ensure that “we say what we mean”, so to speak, the lexical features
must be checked against the functional features.7 As we will see, the dual presence
of F-features and L-features can account for many interesting linguistic phenomena.
Each feature has a set of values. For instance, the value of the tense feature
can be instantiated to present, past, future, etc. However, we are not interested
in those specific values in this abstract model. What we will be focusing on is the
spell-out of those features: whether they are morphologically realized in a given
language, whatever their values may be.
7The fact that the θ-features need not be checked this way does not mean that they are not
checked. They do get checked, but the checking takes place in the process of lexical projection.
3.1.4 The Spell-Out of Features
In 3.1.2, we have assumed six S(F)-parameters. Here is a review of those parameters
and the features they are associated with.
(46) S(F)-Parameter Feature
S(F(case)) case
S(F(agr)) agreement
S(F(tns)) tense
S(F(asp)) aspect
S(F(op)) operator
S(F(pred)) predication
When the S(F)-parameter of a given feature is set to 1, this feature will be spelled
out in overt morphology. What is actually spelled out, however, can vary in dif­
ferent cases. First of all, there is a distinction between pre-Spell-Out checking and
post-Spell-Out checking. When a feature is checked before Spell-Out, the lexical
element carrying the feature will have moved to the functional position where this
feature is checked. As a result, the L-feature and F-feature will get unified
before Spell-Out and become indistinguishable from each other. We may say that
the F-feature disappears after checking and what is available for spell-out is the
L-feature only. Thus the feature always shows up in an inflected lexical head if it
is spelled out. By “inflected” we mean the lexical element has an affix, a special
tone, or any other form of conjugation. In cases where the feature is checked after
Spell-Out, the L-feature and the F-feature will co-exist in the representation fed
into PF. Consequently, both of them will be available for potential phonological
realization. Logically speaking, then, there are four possibilities for the spell-out
of features, as we have mentioned in the previous chapter:
(47) (i) Only the F-feature is spelled out;
(ii) Only the L-feature is spelled out;
(iii) Both the F-feature and the L-feature are spelled out;
(iv) Neither the F-feature nor the L-feature is spelled out.
The possibilities in (i) and (iii) are available only if the feature is checked after
Spell-Out.
All the four possibilities can be illustrated by examples from real languages.
Let us take the tense feature as an example. In English, this feature must be
spelled out and what is overtly realized can be either the F-feature or L-feature.
In (48) the F-feature is spelled out. The verb does not move to TP and the head
of TP is overtly realized by itself as will. In (49) the L-feature is spelled out as a
suffix to the verb.
(48) John will visit Paris.
(49) John visit-ed Paris.
It seems that double spell-out, i.e. spelling out both the F-feature and the L-
feature, is prohibited in English, for (50) is ungrammatical.
(50) *John did visit-ed Paris.
In Swedish, however, both the F-feature and L-feature can be spelled out, as we
have seen in (24) repeated here as (51).
(51) [vp Oeppnade doerren ] gjorde han
open-Past door-the do-Past he
‘He opened the door.’
In fact, the sentence will be unacceptable if the L-feature is not spelled out. In
(52), the infinitive form of the verb oeppna is used instead of the past tense form
oeppnade, and the sentence is out.8
(52) *Oeppna doerren gjorde han
open(inf) door-the do-Past he
‘He opened the door.’
On the other hand, there are also languages where neither the L-feature nor the
F-feature is spelled out. Chinese offers examples of such null spell-out:
(53) Ta meitian qu xuexiao
he/she everyday go school
‘He/she goes to school every day.’
(54) Ta mingtian qu xuexiao
he/she tomorrow go school
‘He/she will go to school tomorrow.’
(53) is in the present tense while (54) is in the future tense, but there is no overt
tense marking at all.9
We have thus seen that all the four logical possibilities in (47) are empirically
attested. This suggests that the S(F)-parameter can have two sub-parameters,
8This sentence and the judgment on it are from Christer Platzack.
9There are certain things in Chinese that are arguably tense markers. For example, jiang and
hui can be treated as future tense markers, and the perfective marker le can be treated as a past
tense marker (Chiu 1992), as we can see in (55) and (56).
(55) Ta mingtian jiang qu xuexiao
he/she tomorrow JIANG go school
‘He/she will go to school tomorrow.’
(56) Ta zuotian qu le xuexiao
he/she yesterday go LE school
‘He/she went to school yesterday.’
one for the spell-out of L-features and one for the F-features. We will represent
these two sub-parameters by splitting the value of each S(F)-parameter in the
form X-Y. The value of X determines whether the F-feature is spelled out and Y
the L-feature. Then the four possibilities in (47) will correspond to the following
parameter values.
(57)  Features Spelled Out             Values of S(F)-parameter
(i)   F-feature only                   1-0
(ii)  L-feature only                   0-1
(iii) both F- and L-features           1-1
(iv)  neither F- nor L-features        0-0
The values in (i) and (iii) are possible only if feature-checking takes place after
Spell-Out because only in these cases will both the L-feature and F-feature be
available at the time of Spell-Out. The significance of these sub-values will be
further discussed in Chapter 4 where more examples will be given to illustrate
those possibilities.
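The dependence of the sub-values on the timing of checking can be summarized in two clauses. In this sketch, s_f/3 stores hypothetical parameter settings (the Swedish and Chinese values follow (51) and (53)-(54) above), and coherent/2 encodes the restriction just stated:

    % s_f(Language, Feature, F-L): the split value of an S(F)-parameter.
    s_f(swedish, tense, 1-1).    % both gjorde (F) and oeppnade (L) overt
    s_f(chinese, tense, 0-0).    % no overt tense marking at all

    % A value with F = 1 is coherent only if the feature is checked after
    % Spell-Out, when the F-feature is still present; F = 0 is compatible
    % with either timing.
    coherent(0-_, _Timing).
    coherent(1-_, after).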
3.2 The Computational System
There are three major operations in the computational system: lexical projection
(LP), generalized transformation (GT) and Move-α. We will specify them one by
one.
3.2.1 Lexical Projection
3.2.1.1. Basic Operation
In our system every category X projects the tree in (58).
*P
/ 
(ZP) id
/ 
(58) x CYP1
An Elementary X-bar Tree
This is different from the lexical projection described in MPLT. We assume that
the projections are invariable, with every projection resulting in an XP (i.e. X2).
In MPLT, however, the projection only goes “as far as it needs to” and what is
projected can be an X0, an X1 or an XP. In addition, the specifier and complement
positions do not appear in the initial projection in MPLT. They are added later in
the GT process. Consequently, the distinction between substitution and adjunction
is in fact gone. In our system, however, this distinction is maintained. Generally
speaking, any position which is obligatory is to be generated in the process of lexical
projection. Attachment or movement to these positions is therefore substitution.
All positions that are optional are not base-generated. They are to be added to
the structure through adjunction in the process of GT or move-a.
Now let us take a closer look at (58), which will be called an elementary X-bar
tree. The fact that ZP and YP appear in upper case letters indicates that they are
empty when the tree is projected. They contain a set of features but no lexical
content. In the process of syntactic derivation, they will either be filled by a subtree
or be licensed to remain empty. In the former case, they serve as attachment points
for GT or Move-α operations. The parentheses around ZP and YP indicate their
optionality: not every projection contains them. The actual status of ZP and YP
is determined by X. ZP can appear as a specifier of X if and only if X selects a ZP
specifier. This selectional rule can be represented as (59).
(59) specifier(X,Z)
Similarly, YP can appear as a complement of X if and only if X selects YP as a
complement. This can be represented as (60).
(60) complement(X,Y)
The specifier can be absent if the rule in (61) exists and the complement can be
absent if (62) exists.
(61) specifier(X,e)
(62) complement(X,e)
The rules in (59) and (60) do not tell us the directionality of the specifier or
complement. In any actual implementation of these rules, however, the direction
has to be specified. To generate the tree in (58), for example, we must make clear
that the specifier is to occur to the left of the head and the complement to the right
of the head. We have decided in 2.2.4 that we will adopt a weaker version of the
Invariant X-bar Structure Hypothesis where there are only two HD-parameters: a
head-complement parameter for CP and a head-complement parameter for IP. The
specifiers always occur to the left of their heads. The complements always occur
to the right of their heads except those in functional categories. The positions of
heads in functional categories are determined by two HD-parameters: HD1 and
HD2. The value of HD1 determines the position of the head in CP and the value of
HD2 determines the head-position in IP. HD1 and HD2 each have two values: I
(head-initial) and F (head-final).
It is assumed here that each category can take at most one specifier and one
complement. As a result, the tree to be generated will never be more than binary
branching. The assumption that each category can take no more than one specifier
(Spec) is well accepted. The single-complement assumption, however, may seem
to be unsupported at first sight. There are obviously structures where two or
more complements are found. The double-object construction is an example of
this. However, the fact that some categories need more than one complement does
not mean that any single elementary X-bar tree has to contain more than one
complement position. Adopting the layered-VP hypothesis of Larson (1988), we
can let a category project more than one elementary X-bar tree, with each tree
taking only one complement. For instance, instead of (63(a)), we can have the
tree in (63(b)) where both UP and WP are generated while binary branching is
maintained.
(63)  (a)       XP                  (b)       XP
              /    \                        /    \
            ZP      X1                    ZP      X1
                  / | \                         /    \
                 X  UP WP                      X      XP
                                                    /    \
                                                  UP      X1
                                                        /    \
                                                       X      WP

Multiple Complements in a Binary Tree
The structure in (63(b)) will be discussed further when we come to the projection
of VP.
3.2.1.2. Selectional Rules (Simplified)
Now we define the selectional rules for each category. We will tentatively
assume that UG contains the rules in (64).
(64) specifier(c,x)           (i)
     specifier(agr1,n)        (ii)
     specifier(t,e)           (iii)
     specifier(asp,e)         (iv)
     specifier(agr2,n)        (v)
     specifier(v,n)           (vi)
     complement(c,agr1)       (vii)
     complement(agr1,t)       (viii)
     complement(t,asp)        (ix)
     complement(asp,agr2)     (x)
     complement(agr2,v)       (xi)
     complement(v,v)          (xii)
     complement(v,e)          (xiii)

Selectional Rules
Some notes on these rules are in order here.
There are no HD-parameters associated with the specifier rules: the specifier
selected by the head always occurs to the left of the head. The complement rules
which are sensitive to the values of HD-parameters are (vii), (viii), (ix), (x) and
(xi). The head in (vii) precedes the complement when HD1 is set to I and follows
the complement when HD1 is set to F. The heads in (viii), (ix), (x) and (xi) precede
their complements when HD2 is set to I and follow their complements when HD2 is
set to F. The fact that the value of HD2 applies in all of these rules reflects
what we have assumed in 2.2.4: IP is treated as a whole regardless of how many
independent projections it may contain. In other words, Agr1-P, TP, AspP and
Agr2-P are to be treated as segments of a single IP. Therefore they share the value
of a single HD-parameter. The application of (xii) is not subject to the value of
any HD-parameter. A verb always precedes its complement.
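To make the directionality fully explicit, the HD-sensitive ordering can be sketched as follows. The predicate names order/4, hd/2 and ip_cat/1 are our own illustrative choices; the settings shown are the head-initial ones used for the structures in (75) and (76) below:

    % HD-parameter settings: i = head-initial, f = head-final.
    hd(hd1, i).    % head position in CP
    hd(hd2, i).    % head position in IP

    % order(Cat, Head, Comp, List): linearize a head and its complement.
    order(c,   H, C, [H,C]) :- hd(hd1, i).
    order(c,   H, C, [C,H]) :- hd(hd1, f).
    order(Cat, H, C, [H,C]) :- ip_cat(Cat), hd(hd2, i).
    order(Cat, H, C, [C,H]) :- ip_cat(Cat), hd(hd2, f).
    order(v,   H, C, [H,C]).    % a verb always precedes its complement

    % The four IP-internal categories share the value of HD2.
    ip_cat(agr1).  ip_cat(t).  ip_cat(asp).  ip_cat(agr2).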
The Spec of CP (Cspec hereafter) can be any maximal projection. We use
“x” to represent this. The Specs of Agr1, Agr2, and V are all NPs in these rules.
(From now on, we will refer to these positions as Agr1spec, Agr2spec, and Vspec
respectively.) This is again a simplification. In a more complete system, CPs or
IPs should also be able to appear in those Spec positions.
We notice that T and Asp do not have a Spec position. This does not mean that
there is any principled reason against these categories having a specifier. There
have been many syntactic arguments which rely crucially on the presence of this
position (e.g. Bobaljik and Jonas (1993)). It is not present in this grammar simply
because we are trying to keep the system as small as possible.
The rules in (64) are applicable to both transitive and intransitive sentences.
In other words, both Agr1-P and Agr2-P will be projected no matter whether
there is an object or not. When a sentence is intransitive, one of the agreement
phrases may be inactive. I will basically follow Bobaljik (1992), Chomsky (1992),
and Laka (1992) in assuming that there is an Obligatory Case Parameter whose
value determines which case-assigner (Agr1 or Agr2) is active in an intransitive
sentence. According to this assumption, we get a nominative/accusative construc-
tion if Agr1 is active and an ergative/absolutive construction if Agr2 is active.
There is evidence that both Agr1 and Agr2 exist in a single intransitive sentence.
In some languages case marking identifies the patient of a transitive verb with
the intransitive subject, while agreement identifies the agent of a transitive verb
with the intransitive subject. It seems that, in these situations, the case system
is ergative while the agreement system is nominative. We have to conclude then
that both Agr1 and Agr2 can be partially active in an intransitive sentence.
3.2.1.3. Selectional Rules (Featurized)
So far the selectional rules and the projection trees have been presented in
a simplified form with a lot of information left out. The nodes in (58) contain
nothing but the category label, and the selectional rules in (64) only tell us the
categorial status of a specifier or complement. It is obvious that the nodes do not
consist of category labels only. Each node is a set of features, the category label
being just one of them. Take the verb catches as an example. It has at least the
syntactic features in (65).
(65) Category: v
     θ-Grid: [agent, patient]
     Tense: present
     φ-features: [person:3, number:s]
The specifier or complement selected by the category is also a bundle of features.
For instance, Agrlspec may have the following features:
(66) Category: n
     Case: 1
     φ-features: X
The value of the φ-features, represented here as a variable, is selected by the head of
Agr1. This selectional relation can be seen in (67), which is the maximal projection
of Agr1.
(67)         agr1-p([case:1, phi:X])
               /                \
    np([case:1, phi:X])        agr1-1
                              /      \
           agr1-0([case:1, phi:X])   tp([tense:Y])

Feature Value Binding in Agr1-P
The variable “X” is found in both the specifier and the head. This indicates that
the two nodes must have “unifiable” or non-distinct φ-features.10 This is the way
Spec-head agreement is achieved. The actual value of this feature is not an intrinsic
property of Agr1. It depends on (a) the verb that moves to Agr1-0 and (b) the
NP that moves to Agr1spec. What Agr1 plays is a mediating role. It ensures that,
whatever the value may be, it must be shared by the specifier and the head.
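Since each feature matrix is an ordinary Prolog term, “unifiable” can be tested directly with unification. The two queries below simply reproduce the examples given in footnote 10:

    ?- [person:X, number:p, gender:m] = [person:3, number:p, gender:Y].
    X = 3, Y = m.      % unifiable: no values clash

    ?- [person:2, number:p, gender:f] = [person:3, number:p, gender:f].
    false.             % not unifiable: the person values clash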
The structure in (67) also tells us that the NP specifier must have Case 1, i.e.
the case assigned by Agr1. The NP to be attached or moved to this position must
have the same case. In this way the case-marking of nouns will get checked. Notice
that the case feature also appears in the Agr1-0 and the Agr1-P node. This means
that case is an intrinsic feature of Agr. The specifier of Agr gets the value of its
case feature via Spec-head agreement. The presence of the case feature in Agr also
10“Unifiable” is used here in the standard sense of unification. Intuitively, two sets of features
are unifiable if they do not have incompatible values. For instance, the feature matrices [per-
son:X,number:p,gender:m] and [person:3,number:p,gender:Y] are unifiable (X and Y are variables
meaning “any value”), while [person:2,number:p,gender:f] and [person:3,number:p,gender:f] are
not unifiable. The values of person clash with each other.
explains why case is a V-feature as well. A verb acquires the case feature or has a
case feature checked when it moves to Agr-0 through head movement.
The complement in this projection tree is a TP whose tense feature has a
variable “Y” as its value. The value will be instantiated when a TP is attached to
this position.
To generate the tree in (67), we need two “featurized” selectional rules, such as
the ones in (68).
(68) specifier(agr1(case:1,phi:X), n(case:1,phi:X))
     complement(agr1,t)
Obviously, all the rules in (64) need to be featurized this way. But there is one
more point to be elaborated on before we do this. I have mentioned earlier that,
in order to preserve binary branching, we will adopt the “Layered-VP” hypothesis
of Larson (1988). The VP structure assumed in this experimental grammar is a
pseudo-Larsonian one which in a way carries Larson's idea to the extreme. In
addition to the general layered-VP structure, we also assume the following:
(i) The number of VP layers corresponds to the number of arguments a verb
takes or the number of θ-roles it assigns. In other words, each layer of VP
will contain exactly one argument.
(ii) The argument in every VP layer appears in Vspec.
The assumption in (i) in fact follows from (ii). Each layer of VP can have only one
specifier and therefore we need as many layers as the number of arguments. The
VP tree for a transitive verb will look like the following:
(69)          vp
            /    \
        np[1]     v1
                /    \
               v      vp
                    /    \
                np[2]     v1
                          |
                          v

VP-Projection of a Transitive Verb
These assumptions have the following consequences.
First, the distinction between internal and external arguments is eliminated.
What remains is just a thematic hierarchy. An argument can just be relatively
“higher” or “lower" than some others. What is traditionally called the “subject”
is simply the highest argument in the VP-shell. The Extended Projection Principle
is now translated into the requirement that every sentence must contain at least
one argument. There is no longer the need to explain, for example, why an NP
with a “Theme” role can be either an internal or external argument. The kind of
argument promotion observed in (70a) and (70b) now receives a natural account.
(70) (a) The man opened the door.
(b) The door opened.
Presumably the agent theta role, carried by the man, is higher in the thematic
hierarchy than the patient or theme role carried by the door. When both theta
roles are present in a sentence, as in (70a), the door must take a lower position in
the VP tree and be the object of the sentence. When the agent the man is absent
as in (70b), however, the door is promoted to subjecthood since there is no other
NP in the sentence that carries a “higher” theta role.
Passive and unaccusative constructions are also accounted for. The passive verb
has had its first θ-role “absorbed”. Therefore the VP projected from a passive verb
will not have the layer which is the top one in its active counterpart. The remaining
arguments are thus promoted one layer up. The argument which is originally in the
second layer is now in the first one and treated like a subject. The unaccusative
verb has only one θ-role to assign, so only one layer of VP is projected. Since
this single layer is the top layer, the argument of an unaccusative verb can enjoy
subjecthood. This is just another way of stating Burzio’s generalization11 (Burzio
1986).
Secondly, the VP structure assumed here lets θ-role assignment be performed
uniformly in the Spec-head configuration. This is conceptually appealing because
θ-role assignment, case-checking and agreement-checking now involve the same
type of operation, namely Spec-head agreement. We thus have a more general
and more consistent notion of the Spec position being the checking domain of each
category.
The structure in (69) consists of two elementary X-bar trees but they are in fact
projected from a single head. The verb exists as a chain, each V0 being one of its
links. The two links are identical except for the number of θ-roles they contain.
The higher one has two while the lower one has only one. This does not mean that
11Burzio’s Generalization:
(i) A verb which lacks an external argument fails to assign accusative case.
(ii) A verb which fails to assign accusative case fails to theta-mark an external argument.
two different verbs are involved here. The difference is used as a computational
device which makes sure that the correct number of layers are projected. We have
seen in (64) that there are two complement rules for V: it can take either a VP
complement or no complement. The choice is determined by the θ-feature. If the
θ-grid contains only one θ-role, this V will have no complement and the current
VP will be the last layer. If the θ-grid contains n + 1 (for any n > 0) θ-roles, this
V will take a VP complement. The θ-grid of this VP complement will contain n θ-
roles, with the understanding that the other role has been assigned in the higher
layer. In each layer, the θ-role being assigned is always the first one in the list.
This role is removed from the list after the assignment so that the next role can be
assigned in the next layer. The layer-by-layer stripping of θ-roles also ensures that
eventually there will be only one role left so that the VP projection will terminate.
In the case of (69), the verb has two θ-roles to assign. No more VP complement is
permitted after the second layer because there is no more θ-role to assign.
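The layer-by-layer stripping can be pictured as a recursive procedure. In the sketch below, project_vp/2 and the term encoding of VP layers are our own illustrative choices, but the two clauses correspond exactly to the two complement rules for V in (64):

    % project_vp(ThetaGrid, VP): project one VP layer per theta-role.
    % One role left: no complement (rule (xiii)); otherwise the layer
    % assigns the first role to its Vspec and takes a VP complement
    % projected from the remaining roles (rule (xii)).
    project_vp([Th], vp(np(Th), v1(v([Th]), e))).
    project_vp([Th1, Th2 | Rest],
               vp(np(Th1), v1(v([Th1, Th2 | Rest]), VP))) :-
        project_vp([Th2 | Rest], VP).

The query project_vp([agent, patient], T), for instance, returns the two-layer structure in (69), in which the higher v carries both θ-roles and the lower one only the remaining [patient].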
Now we are ready for a “featurized” version of (64). The new lexical projection
rules are given in (71). The structure F : V means that the feature F has V as its
value.
(71)
specifier(c,x(op:+))                                      (i)
specifier(agr1(case:1,phi:X),n(case:1,phi:X))             (ii)
specifier(t,e)                                            (iii)
specifier(asp,e)                                          (iv)
specifier(agr2(case:2,phi:X),n(case:2,phi:X))             (v)
specifier(v(θ-grid:[Th,...]),n(θ-role:Th))                (vi)
complement(c,agr1)                                        (vii)
complement(agr1,t)                                        (viii)
complement(t,asp)                                         (ix)
complement(asp,agr2)                                      (x)
complement(agr2,v)                                        (xi)
complement(v(θ-grid:[Th1,Th2,...]),v(θ-grid:[Th2,...]))   (xii)
complement(v(θ-grid:[Th]),e)                              (xiii)

Featurized Selectional Rules
The θ-grids in these rules contain variables like “Th1”, “Th2”, etc. instead of
names like “agent” and “patient”.12 This is done for the sake of generality. The
notation means that, given any two θ-roles, “Th1” is higher than “Th2” in the
thematic hierarchy and is to be assigned in a higher layer of VP.
The op(erator) feature in Cspec has the value “+”. This means that the NP
or any XP to be substituted into this position is going to be the operator, i.e. it
will receive the wide-scope, topic, or focus interpretation.
3.2.2 Generalized Transformation
The LP operation described in the previous section produces a set of elementary
X-bar trees. The function of Generalized Transformation (GT) is to combine those trees
into a single tree. In MPLT, there is only one type of GT operation which subsumes
both substitution and adjunction. In both cases, we add an empty position to the
target tree (which looks like adjunction) and then substitute a subtree into this
position. This will not be the version of GT to be assumed in the present model.
12The “...” in the θ-grids represents the rest of the θ-roles, which can be empty.
As I have stated earlier, we will maintain the distinction between substitution and
adjunction, the former associated with obligatory constituents and the latter with
optional ones.
In GT substitution, a subtree K is substituted into an empty position ∅ in the
target tree K’, resulting in K*. The empty position ∅ in K’ is either a specifier
position or a complement position which has already been generated in the process
of Lexical Projection. The position is empty in the sense that it has features
but no lexical content. It is an attachment site into which another tree may be
substituted. The substitution is possible only if the attachment site and the subtree
to be attached have compatible (i.e. unifiable) features. For instance, the subtree
to be attached to the Agr1spec in (67) must be an NP whose case feature has the
value 1 and whose φ-feature has the value X. If X has already been instantiated
to [person:3, number:s] in Agr1, only a third person singular NP can be attached to
this point. If the X in Agr1 is instantiated to [person:3, number:N] where N is a
variable, however, either a singular or a plural third person NP can be substituted.
(72) and (73) illustrate the two basic cases of GT substitution. In (72a), an NP
is being substituted into Agr1spec. The tree that results is (72b). Notice the
unification of feature values in the substitution process. (“3sm” is a short-hand
form of [person:3, number:s, gender:m].) In (73a), a TP is being substituted into the
complement position of Agr1-P. The result is (73b).
(Daughters are shown by indentation; <== marks the subtree being substituted.)

(72) (a)  agr1-p([case:1, phi:X])
            np([case:1, phi:X])          <== np([case:1, phi:3sm])
            agr1-1
              agr1-0([case:1, phi:X])
              tp([tense:Y])

     (b)  agr1-p([case:1, phi:3sm])
            np([case:1, phi:3sm])
            agr1-1
              agr1-0([case:1, phi:3sm])
              tp([tense:Y])

(73) (a)  agr1-p([case:1, phi:X])
            np([case:1, phi:X])
            agr1-1
              agr1-0([case:1, phi:X])
              tp([tense:Y])              <== tp([tense:pres])

     (b)  agr1-p([case:1, phi:X])
            np([case:1, phi:X])
            agr1-1
              agr1-0([case:1, phi:X])
              tp([tense:pres])

GT Substitution
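Because empty positions are feature matrices and attachment requires unifiable features, GT substitution itself reduces to term unification. A minimal sketch, assuming the node(Features, Daughters) encoding introduced earlier and empty positions of the form empty(Features):

    % gt_substitute(Target, Subtree, Result): replace one empty position
    % in Target by Subtree; their feature matrices are unified in the
    % process, so substitution fails if the features are incompatible.
    gt_substitute(empty(Fs), node(Fs, Ds), node(Fs, Ds)).
    gt_substitute(node(Fs, Ds0), Sub, node(Fs, Ds)) :-
        append(Pre, [D0|Post], Ds0),
        gt_substitute(D0, Sub, D),
        append(Pre, [D|Post], Ds).

Substituting the np of (72a) into the Agr1spec position, for example, instantiates the shared variable X to 3sm throughout the tree, which is exactly the unification effect shown in (72b).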
In GT adjunction, a subtree is added or inserted into a constituent. In this
experimental model, we will only be concerned with adjunction to X1. In other
words, we will only consider the adjunctions whose function is adding modifiers into
the structure. The subtree to be adjoined to an X1 must be a maximal projection.
In this GT process we create an additional segment of X1 which contains an empty
position ∅ and then attach a subtree to ∅. In (74), an extra segment of X1 is
created and a YP is substituted into the empty position contained in this extra X1.
(74)      x1
        /    \
       ∅      x1
               |
               yp

GT Adjunction
We assume that all adjunctions are left-adjunctions. We also assume that the
attachment point created during the adjunction has certain selectional properties,
so that each category will only accept a certain class of modifiers. For instance,
the adjunction site in a V1 may require the adjunct to be an AdvP. Therefore we
will not be able to adjoin an NP to a V1. If an adjunct is acceptable to two or
more X1’s, it can then choose to adjoin to any of them. I will not try to specify
a full theory of modifier adjunction here. Some further discussion on this will be
given when the need arises.
GT operations are applied recursively on pairs of trees until there is only a
single tree left. If there are two or more subtrees left and no GT operation can
apply to reduce them to a single tree, the derivation crashes.
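This recursion can be summarized in two clauses. The predicate name derive/2 is illustrative; gt_substitute/3 is the sketch given above, and GT adjunction is omitted for brevity:

    % derive(Trees, Tree): combine the trees pairwise until one remains;
    % the derivation crashes (the predicate fails) if several trees are
    % left and no GT operation applies.
    derive([Tree], Tree).
    derive(Trees0, Tree) :-
        select(T1, Trees0, Trees1),
        select(T2, Trees1, Trees2),
        gt_substitute(T1, T2, T12),
        derive([T12|Trees2], Tree).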
At this point, we might be interested to see what structures are produced by LP
and GT in our system. Given the rules in (71), we can get 4 different CP structures
for each type of verb by varying the values of HDl and HD2. For illustration, we
will look at one of the 4 structures where both HDl and HD2 are set to I. We
will demonstrate it with two types of verbs: a transitive verb that takes two NP
arguments and an intransitive verb that takes one argument. The former will
be illustrated by the English sentence Mary caught him and the latter by Mary
is swimming. The structures generated by LP and GT for these two sentences
are given in (75) and (76) respectively. (All the nodes in these trees should have
features other than the category label, but to save space the other features are
omitted in all but the terminal nodes.)
[Tree diagram: a CP dominating Agr1-P, TP, AspP, Agr2-P and a two-layer VP.
Mary occupies the higher Vspec with the θ-role agent, him occupies the lower
Vspec with the θ-role patient, and caught heads the VP with tense:past and
θ-grid:[agent,patient]; the case, φ, aspect and operator values are variables.]

(75) Base Tree of a Simple Transitive Sentence
[Tree diagram: the same CP, Agr1-P, TP, AspP and Agr2-P skeleton over a
single-layer VP. Mary occupies Vspec with the θ-role agent, and swimming heads
the VP with aspect:prog and θ-grid:[agent]; tense and the remaining values are
variables.]

(76) Base Tree of a Simple Intransitive Sentence
Some specific points about these trees are worth mentioning.
Firstly, we see that all the lexical items appear VP-internally. Each of the
NPs is in a position to which a θ-role is assigned, but none of them, however, is
in a position where its case can be checked. This is different from the traditional
view that internal arguments are assigned cases VP-internally under government.
In our system, there are no internal arguments and government does not play a
role in case-checking at all. Every NP is drawn from the lexicon together with
its case feature, but this feature cannot be checked VP-internally. To satisfy the checking
requirement, the NP must move to the Spec position of one of the agreement phrases.
This kind of movement will be discussed in 3.2.3.
Secondly, the copula is in Mary is swimming does not appear in the tree. This
follows from our assumption that is is an expletive which is not base-generated.
It is inserted in the Spell-Out process as a way of overtly realizing the features in
Agr1-0 and T0.
Finally, we find in those trees all the features we have assumed. The values of
these features are constants in some cases and variables in others. (All the upper-
case letters stand for variables.) The variables all represent unspecified values,
but their syntactic status can be different depending on whether the feature is an
F-feature (a feature in a functional category) or an L-feature (a feature in a lexical
category). The variables in functional projections are all used for agreement.
Two nodes are supposed to have the same value for a certain feature if the same
variable appears in both. For instance, the values of φ-features in both Agr1-P and
Agr2-P are variables. The fact that φ has X or Y as its value in both the head and
the Spec of AgrP ensures that the subject/object and the verb will agree in their
φ-features. The values of these features will be instantiated when the VP-internal
NPs move to the Specs of AgrPs and the verb moves to the heads of AgrPs.
The variables in the lexical projections indicate that the features in question are
morphologically unspecified. In other words, there are no morphemes in the lexicon
that represent the values of those features. In (75), for example, the NP Mary is
morphologically unspecified for the case feature and the verb caught is unspecified
for the aspect feature. The features will get instantiated when movement takes
place for feature-checking. In (76), swimming is specified for the aspect feature
which is morphologically realized as the suffix -ing. But it is not morphologically
specified for the tense feature. Hence the variable for the tense feature. The values
of operator features in the two NPs (Mary and him) are also variables. When the
sentence is used in a given context, however, one of them can get the “+” value
and only this NP can eventually move to Cspec.
3.2.3 Move-α
In our present system, movement takes place for no other reason than feature-
checking. Following Chomsky's principle of Procrastinate (MPLT) which requires
that no movement take place unless it is the only way to save a derivation from
crashing, we will assume that a movement occurs if and only if there is an LF check-
ing requirement whose satisfaction depends solely on this movement. We should be
reminded at this point that the movements we are discussing here are LF move­
ments which are universal. They take place in every language by LF, though only
a subset of them may be visible in a particular language.
3.2.3.1. Movement as a Feature-Checking Mechanism
The necessity of movement in feature-checking can be viewed from two different
perspectives. From the viewpoint of lexical items, we see that a given word may
have two or more features, each of which must be checked in a different structural
position. Take NP as an example. UG requires that it be assigned a θ-role and
be checked for case. However, θ-roles are assigned in Vspecs only and cases are
checked in Agrspecs only. To meet both requirements, an NP must move from one
position to the other, which forms a chain linking the two positions. Once this
occurs, the NP exists as a chain rather than a single node. It enters a structural
relation whenever one of its links is in the required position for that relation.
From the viewpoint of features, we see that most features are found in more than
one node. In (75) and (76), for instance, the tense feature appears in both TP
and VP. To make sure that a given feature has the same value throughout the
whole structure, we have to form chains to link nodes which are related by feature-
checking movements but are not in the same projection. All the chains in our
system are formed in this way.
Since movement occurs for feature-checking only, we get the set of movements
required by UG by locating all the features whose checking requires movement. As
we have seen, only those features which appear in more than one projection need
to be checked through movement. Furthermore, in all the cases where a feature
appears in two different projections, one of them is in a lexical projection and
the other in a functional projection. This is clear in (45), (75) and (76). To see
what movements are required, we only have to list all the features that are both
L-features and F-features. According to (45), they include the following: tense,
aspect, φ(1), φ(2), case(1), case(2), predication and operator.
The tense feature is found in both T and V. In order for the feature in V
to be checked against the feature in T, the verb must move to T. The aspect
feature is found in both Asp and V. Therefore, the verb must move to Asp for
feature-checking. The predication feature is found in both C and V. Forced by the
feature-checking requirement, the verb must move to C. The operator feature is
found in both Cspec and NPs. For feature-checking, one of the NPs must move to
Cspec. Since the value of the operator feature is always “+” in Cspec, only the NP
which is the operator can move there. The case feature is found in both Agrspecs
and NPs. Therefore, each NP must move to some Agrspec to have its case feature
checked. NP1 must move to Agr1spec and NP2 to Agr2spec. We assume that,
when both NP1 and NP2 are present in the VP projection, NP1 cannot move to
Agr2spec, nor can NP2 move to Agr1spec. There are various ways to account for
this restriction. In MPLT, this restriction is supposed to be derived from the notion
of equidistance. I will not go into the mechanisms that implement this notion.
At an intuitive level, we can view the restriction as a special way of observing the
Principle of Economy which requires, among other things, that short moves be
preferred over long moves. If we move NP1 to Agr1spec and NP2 to Agr2spec,
both movements will be relatively short. If we move NP1 to Agr2spec and NP2
to Agr1spec, however, the NP1-to-Agr2spec movement will be very short but the
NP2-to-Agr1spec movement will be longer than either of the two movements in the
previous case. This economy-based argument is not well-understood yet but we
can get the same effect from some simpler principles. In our model we assume
that the case hierarchy and the thematic hierarchy in a sentence must agree with
each other. Given two NPs, NPn and NPn+1, and two roles θn and θn+1, with θn
preceding θn+1 in the θ-grid of the verb, NPn must have its case checked in a higher
case position if NPn is assigned θn and NPn+1 is assigned θn+1. (A case position
(Agrspec) is higher than another one if the former asymmetrically c-commands
the latter.) Intuitively, this assumption simply means that the subject must be
assigned the subject case and the object the object case. In passive constructions,
the first θ-role in the θ-grid is suppressed and the one that follows it will become
the first. As a result, the NP assigned this promoted θ-role is free to move
to the highest case position. In unaccusative constructions, the “subject” θ-role is
missing from the θ-grid. Consequently, some other role will be the first in the grid
and the NP assigned this role can go to the highest case position.
The φ-features are similar to the case features in that they are both NP-features
and V-features. In terms of the NPs, the φ-features are found in both Agrspecs
and the NPs. During feature-checking, NP1 must move to Agr1spec and NP2
must move to Agr2spec. The movement patterns are identical to those involved
in case-checking. In terms of the verb, the φ-features are found in Agr1-0, Agr2-0
and V0. The verb therefore must move to Agr1-0 and Agr2-0 to have the features
checked. During the movement, the verb will also pick up the case features in Agr1
and Agr2. This indicates that case and agreement are two sides of the same coin.
They have the common function of identifying grammatical relations.
To sum up, we list in (77) all the movements forced by feature-checking.
(77) A. The verb must move to Agr2-0 to have its φ- and case features checked for
object-verb agreement.
B. After moving to Agr2-0, the verb must move to Asp0 to have its aspect
feature checked.
C. After moving to Asp0, the verb must move to T0 to have its tense feature
checked.
D. After moving to T0, the verb must move to Agr1-0 to have its φ- and case
features checked for subject-verb agreement.
E. After moving to Agr1-0, the verb must move to C0 to have its predication
features checked.
F. NP1 must move to Agr1spec to have its φ- and case features checked.
G. NP2 must move to Agr2spec to have its φ- and case features checked.
H. After moving to an Agrspec, one of the NPs must move to Cspec to have
its operator features checked.13
From now on, we will refer to these movements as M(agr2), M(asp), M(tns),
M(agr1), M(c), M(spec1), M(spec2) and M(cspec) respectively. These move-
ments are illustrated graphically in (78) with the English sentence Mary caught
him where Mary is NP1 and him is NP2.
13This implies that every sentence has an underlying topic or focus or an NP that receives a
wide-scope interpretation.
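The inventory in (77) can also be recorded as data; the ordering of A-E is simply the bottom-up raising path of the verb. The predicate names below are illustrative:

    % The verb raises through these heads, in this order (77A-E).
    head_path([agr2_0, asp0, t0, agr1_0, c0]).

    % A-movement landing sites for the two NPs (77F-G).
    a_landing(np1, agr1spec).
    a_landing(np2, agr2spec).

    % One NP subsequently moves on to Cspec (77H).
    abar_landing(cspec).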
[Tree diagram: the base tree of Mary caught him annotated with arrows. The verb
raises through Agr2-0, Asp0, T0 and Agr1-0 to C0; Mary (NP1) raises to
Agr1spec, him (NP2) raises to Agr2spec, and one of the two NPs raises on to
Cspec.]

(78) Feature-Checking Movements
We can see that M(agr2), M(asp), M(tns), M(agr1) and M(c) are head move-
ments, M(spec1) and M(spec2) are A-movements, and M(cspec) is an A-bar movement.
There are two instances of M(cspec) in the diagram. One involves NP1 moving
to Cspec while the other involves NP2. In a particular sentence only one of the
movements can occur. Which one occurs depends on which of the NPs is the topic
or focus of the sentence.
3.2.3.2. Movement in Operation
Having identified the set of movements involved in feature-checking, we will
now take a closer look at the computational operation involved in these move-
ments. It has been assumed that all the movements are raising movements in our
system. Lowering is prohibited. Therefore, it is illegal to have any “yoyo” type
of movement where a constituent moves up and then down or down and then up.
Other operational constraints on movement are discussed below.
M ovem ent as S ubstitution All the movements discussed here are substitution
operations. The landing site of every movement is an existing attachment point
which is an empty node created in lexical projection. The substitution is possible
only if the moved element and the landing site have identical categories and com­
patible feature values. It is not possible, for example, to move an XO to an XP
or vice versa. Nor is possible to move an NP to a position where a different value
is required for the case or ^ feature. This guarantees that all the movements are
structure-preserving.
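The structure-preservation requirement again comes down to unification: in the node(Features, Daughters) encoding used in the sketches above, a landing site accepts a moved element only if their feature matrices, category label included, unify:

    % licensed_landing(Moved, Site): Site is an empty node from lexical
    % projection; movement is licensed iff the feature matrices unify.
    % Moving an X0 to an XP fails because the category features clash.
    licensed_landing(node(Fs, _Daughters), empty(Fs)).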
The substitution operation is obvious in the cases of A-movement and A-
movement. The landing sites of these movements are all Spec positions projected
in LP: Agrlspec in the case of M(specl), Agr2spec in the case of Af(spec2), and
Cspec in the case of M(cspec). In cases of head movement, however, this is less obvious. At first sight, substitution seems to be impossible. How can a V0 substitute for a T0 or C0, for instance? For a node to serve as the landing site of a movement, it must (a) be empty, and (b) have feature values which are unifiable with those of the moved element. The condition in (a) seems to hold. The landing sites of V-movement are all heads of functional categories, which are feature matrices without lexical content. The condition in (b), however, looks a little problematic. For one thing, the landing site and the moved element do not seem to have the same categorial features. We seem to be substituting a V for a T, an Agr, a C, etc., which should be impossible. This is one of the reasons why head movements have standardly been treated as adjunction rather than substitution operations. But a second look at the status of C, Agr, T and Asp suggests that the substitution story is plausible. These categories are after all extended V-projections. Since none of these functional categories has a lexical head, all of them can be said to have been projected ultimately from the verb. In other words, they are just additional layers of the V-projection. This is the view expressed in Grimshaw (1991), according to which the V, Agr2, Asp, T, Agr1 and C projections form an extended projection. All the heads in this single projection can share the same set of categorial features. This being the case, there is no reason why substitution should be impossible. We therefore assume in this experimental grammar that head movement involves substitution instead of adjunction (cf. Emonds (1985), Roberts (1991, 1992)). When a verb is substituted into the head of a functional category, the two heads will merge into one. We will call this new head V, with the understanding that all the features of the original functional head have been preserved. We choose to call it V rather than T or Asp because the features of this
new head are spelled out on the verb in the form of verbal inflection. The diagram in (79) illustrates the substitution involved in a head movement where a verb moves to Agr2-0. (79a) shows the pre-movement structure and the movement which is taking place; (79b) shows the post-movement structure.
[tree diagrams not legible in this reproduction: (a) the pre-movement structure, (b) the post-movement structure]
(79) Head Movement as Substitution
The kind of head movement assumed here fails to make some of the predictions
that are made by the standard version of head movement. In head movement by
adjunction, the moving head gets attached to the target head either from the left
or from the right, so the head and the affix will appear in a certain linear order.
In (80), for instance, the verb has moved to C0 through Agr2-0, Asp0, T0 and Agr1-0.
[tree diagram not legible in this reproduction: the verb adjoins successively to Agr2-0, Asp0, T0, Agr1-0 and C0, forming a boxed verbal complex]
(80) Head Movement as Adjunction
The successive adjunction results in a big verbal complex, which is boxed in the diagram. Suppose that in this language agreement features and tense features are morphologically realized as suffixes. Then the structure of this verbal complex predicts that the inflected verb will be spelled out as V-T(ense)-Agr(eement)
rather than V-Agr(eement)-T(ense). This prediction is based on the Mirror Principle (Baker 1985a), which requires that morphological derivations reflect syntactic derivations (and vice versa). In the substitution story of head movement, this prediction is gone. The movement just results in a complex feature structure where no linear order is implied. This result can be good or bad depending on whether the Mirror Principle is really valid. If it is, our version of head movement will be less desirable because it has missed an important generalization. However, counter-examples to the Mirror Principle do exist. In terms of the T-suffix and the Agr-suffix, both orders seem to be possible. In Italian (81) and Chichewa (82), for example, we find T inside Agr, while in Berber (83) and Arabic (84), we find Agr inside T.
(81)14 legge-va-no
     read-imp(Asp/Tns)-3ps(Agr)
     'They read'
(82)15 Mtsuko u-na-gw-a
     waterpot SP(Agr)-past(Tns)-fall-Asp
     'The waterpot fell'
(83)16 ad-y-segh Moha ijn teddart
     fut(Tns)-3ms(Agr)-buy Moha one house
     'Moha will buy a house.'
(84)17 sa-ya-shtarii Zayd-un dar-an
     fut(Tns)-3ms(Agr)-buy Zayd-Nom house-Acc
     'Zayd will buy a house.'
14Examples from Belletti (1988)
15Example from Baker (1988)
16Example from Ouhalla (1991)
17Example from Ouhalla (1991)
In order to preserve the Mirror Principle, some (e.g. Ouhalla 1991) have proposed that the hierarchical order of AgrP and TP be parameterized, i.e. in some languages AgrP dominates TP while in other languages TP dominates AgrP. But the price we pay here to save the Mirror Principle seems too high. In our system, such reshuffling of the tree structure is not necessary. What syntax provides for each node is a feature matrix. The linear order in which the features are spelled out can be treated as an independent matter which probably falls in the domain of morphological theory. Different languages may simply choose to spell out the features in different orders. In acquisition, the linear order can be learned in the same way that other ordering rules in morphology are learned.
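To make the idea concrete, the following is a minimal Prolog sketch of such a language-specific spell-out rule. It is written in the language of the experimental programs in the appendices, but it is not taken from them; the predicate names and the sample ordering rules are purely illustrative assumptions.

    % Syntax delivers an unordered feature matrix on the merged head;
    % a language-specific morphological rule linearizes the features.
    % spell_out(+SuffixOrder, +Stem, +FeatureMatrix, -Form)
    spell_out(Order, Stem, Matrix, Form) :-
        findall(F, (member(Slot, Order), member(Slot:F, Matrix)), Suffixes),
        atomic_list_concat([Stem|Suffixes], '-', Form).

    suffix_order(italian,     [tns, agr]).  % T inside Agr: V-Tns-Agr, cf. (81)
    suffix_order(berber_like, [agr, tns]).  % Agr inside T, cf. (83)-(84)
                                            % (there realized as prefixes)

    % ?- suffix_order(italian, O),
    %    spell_out(O, legg, [agr:'3ps', tns:imp], Form).
    % Form = 'legg-imp-3ps'

The same feature matrix is thus compatible with either affix order; the choice is made by the morphological ordering rule, not by the syntactic configuration.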
Barriers to Movement I have mentioned earlier in this chapter that the bounding theory may need some revision in the Minimalist framework. In the standard model, there is a distinction between SS movement and LF movement. It is assumed that some of the barriers which constrain SS movement do not apply to LF movement. Now that all movements are LF movements, it is no longer clear what the barriers are. Fortunately, this state of affairs does not seem to affect our experimental model very much, since we are currently only concerned with simple sentences. Some barriers do exist within a single clause, but we can for the time being describe them in a case-by-case manner without attempting a general account. In what follows, we will look at head movement, A-movement and A-bar movement one by one and discuss the constraints on each of them.
For head movement, we will assume the Head Movement Constraint (HMC), which requires that no intermediate head be skipped during the movement. Given three heads H1, H2 and H3, where H1 asymmetrically c-commands H2 and H2 asymmetrically c-commands H3, no X0 can move from H3 to H1 without moving to H2 first. For a verb to move from its VP-internal position to C0, for example, it must move successively to Agr2-0, Asp0, T0 and Agr1-0 first.
For A-movement, there will be no clause-internal barriers. We usually assume that A-movement has to be local. According to Sportiche (1990), for instance, A-movement has to go in a Spec-to-Spec fashion. A movement is blocked whenever it has to go through a Spec position which is already filled by some other XP or one of the links of another XP chain. Obviously there would be problems if this locality constraint were imposed on the A-movements in our present system. For NP1 to move to Agr1spec, it would have to go through Agr2spec, but this is impossible.18 A similar problem exists for NP2, which would have to go through NP1 to reach Agr2spec. To account for the fact that M(spec1) and M(spec2) are possible, we will assume that the domain of XP movement can be extended by head movement (cf. Baker 1988). As a result, all the projections that a single head can go through will be transparent to each other. In our system, the verb moves all the way to C through Agr2, Asp, T and Agr1, so the whole CP tree is transparent for XP movement. Another way to describe this transparency is to say that there is no barrier to movement within a single extended projection (Grimshaw 1991). Within this single projection an NP can move to any Spec position without violating any locality constraint.
This domain for XP-movement also applies to A-bar movement. As a result, any NP within a single CP can move to Cspec without crossing any barriers. But there is an independent constraint which prevents an NP from moving from its
18This is impossible because (i) the NP moving to Agr1spec must have Case 1 and will not be able to unify with Agr2spec, which has Case 2, and (ii) Agr2spec belongs to the chain headed by NP2 and therefore it is already filled and should serve as a barrier for the movement of NP1.
VP-internal position directly to Cspec. It is required in our grammar that every NP move to an Agrspec to have its case & agreement features checked. If an NP moves directly to Cspec, skipping all Specs of AgrPs, the case & agreement features will fail to be checked. Once in Cspec, an NP will not be able to move to an Agrspec any more, since lowering is prohibited. Consequently, an LF constraint is violated and the derivation will crash. To avoid the crash, an NP must move to a position to have its case & agreement features checked before moving to Cspec. In other words, NP1 must move to Agr1spec first and NP2 to Agr2spec first.
If we go back to (78) now, we will realize that all the constraints discussed
above are observed there. In fact, the movements illustrated in (78) represent not
only all the possible movements in our system but also all the possible paths for
these movements. In particular, each movement has a unique path and results in
a unique chain.
Before we close this section, I will mention an apparent problem related to head movement. We have assumed that the verb always moves all the way up to C0. Superficially, however, there seem to be many cases where the verb only moves half-way up and what moves to Agr1 or C is an auxiliary. It looks as if the checking movement were broken up into two parts, one performed by verb movement and one by auxiliary movement, resulting in two separate chains. I will argue that, even in these cases, what moves to C0 at LF is still the verb and there is only a single chain. After Spell-Out the verb will move further up to the positions which the auxiliaries seem to have moved through. The movement is not blocked because auxiliaries are invisible at LF and their features are incorporated into the verb. Why the movement seems to be split at Spell-Out will be explained in Chapter 4. We will see that there are particular settings of S(M)-parameters which are
responsible for this superficial phenomenon.
3.2.3.3. The S(M)-Parameters
In 3.2.3.1, we identified a set of 8 movements: M(agr2), M(asp), M(tns), M(agr1), M(c), M(spec1), M(spec2) and M(cspec). We assume that each of these movements can occur either before or after Spell-Out. In other words, each of them has an S(M)-parameter associated with it. We will call those parameters S(M(agr2)), S(M(asp)), S(M(tns)), S(M(agr1)), S(M(c)), S(M(spec1)), S(M(spec2)) and S(M(cspec)) respectively. When S(M(X)) is set to 1, M(X) will be overt. It is covert when S(M(X)) is set to 0.
Now the question is whether an S(M)-parameter can have a third value, namely 1/0, which is a variable. What this says is that the relevant movement can be either overt or covert; hence the optionality of the movement. Our immediate reaction to this idea might be negative. According to the Principle of Economy in general and the principle of Procrastinate in particular, no movement should be optional. If a movement can be either overt or covert, it should always be covert. In addition, there are both acquisitional and processing arguments against optional movement. Optional rules are more difficult to acquire. They also make the parsing process less deterministic. However, there is empirical evidence which shows that the "no optionality" assumption is too strong. It runs into difficulty whenever a language has alternative word orders. If we insist on the binarity of S(M)-parameter values, any given movement will be either always overt or always covert. As a result, only a single word order will be permitted in any language. The fact that most languages do have alternative word orders shows that this binarity is too restrictive. We can of course say that any given language has a canonical word order. This order
is determined by the obligatory movements and all the optional movements are
“stylistic” or “discourse-driven”. But this leads to the assumption that there
are two independent sets of movements: one syntactic and one stylistic. This
assumption is not totally implausible, yet the necessity of identifying a different set
of movements in addition to the checking movements we now have makes the theory
more complicated. We will have a simpler theory if we assume that there is only
a single set of movements and the “syntactic” and “stylistic” movements are overt
manifestations of the same set of movements. In this way, we will not need to define
a separate set of movements in addition to the movements we have defined here.
All the “stylistic” movements correspond to movements whose S(M)-parameters
are set to 1/0. This value is a variable which can be instantiated to either 1
or 0. As far as the S(M)-parameter values are concerned, therefore, both overt
and covert movements are allowed. In stylistically neutral or unmarked cases, the
Principle of Economy will dictate that the variable be instantiated to 0. As a result,
the movements are invisible and the “canonical” order surfaces. In contexts where
other factors call for overt movement, the Principle of Economy may be overridden.
Consequently, the variable will be instantiated to 1 and the movement is visible.
In short, when an S(M)-parameter is set to 1/0, the movement with which the
parameter is associated will be covert unless there are some stylistic or discourse
factors calling for overt movement. So the movement is not really optional. Once
we have a stylistic or discourse theory which defines precisely when overt movement
is needed, the choice will be clear. In any given context, the variable can only be
instantiated to a single value. However, the model we are describing here is a
purely syntactic one which does not include a stylistic or discourse module. This
other module is absolutely necessary, but it falls outside the scope of the present
study. The issues involved there need to be addressed in a separate project. What we can do in syntax is to provide all the options. The choice will be made when the syntactic module is interfaced with other modules. For this reason, we will allow
some movements to be optionally overt in our grammar. In particular, we will let the three S(M)-parameters associated with XP/NP movement - S(M(spec1)), S(M(spec2)) and S(M(cspec)) - have three values: 1, 0 and 1/0. This by no means implies that head movement cannot be optional. We have simply chosen to experiment with optional movement on A-movement and A-bar movement first. There are two motivations for this choice. First, we want to try out some optional movements and find out their basic properties before generalizing optionality to all movements. Second, the main purpose of permitting optional movement in our grammar is to account for those scrambling facts which involve A-movement or A-bar movement. Optional head movement will be discussed briefly in this chapter but will be put aside in subsequent discussion.
To give the above argument more substance, we will look at two specific cases where the S(M)-parameters seem to be set to 1/0, one involving optional A-movement and one optional A-bar movement.
For optional A-movement we can find an example in English. In (85) and (86)
(same as (25)), we see an alternation between overt and covert NP movement.
(85) Three men came.
(86) There came three men.
In (85), M(spec1) (NP movement to Agr1spec) is overt. It is covert in (86). We thus conclude that S(M(spec1)) is set to 1/0 in English. This explains why both orders are possible. However, in a particular context only one of them will be
appropriate. (86) seems to be the unmarked case where there is no reason for
overt movement. In (85), however, the Principle of Economy has apparently been
overridden by some discourse considerations.
An example of optional A-bar movement can also be found in English where
topicalization produces a word order other than SVO.
(87) John likes apples.
(88) Apples, John likes.
In our system we assume that topicalization involves XP-movement to Cspec.
Then it seems that apples has moved to Cspec in (88) but not in (87). We can
conclude then that M(cspec) is optional in English and S(M(cspec)) is set to
1/0. In unmarked cases the movement does not occur overtly due to the Principle
of Economy. When a constituent needs to be overtly topicalized, however, the
Economy principle is overridden and the movement becomes visible.
Although we will put optional verb movement aside in subsequent discussion, we will assume that it is possible in principle. An example of this kind of optionality can be found in French. There we find the word order alternation between statements and questions, as shown in (89) and (90).
(89) Nous allons à la bibliothèque
     we go to the library
     'We are going to the library.'
(90) Allez-vous à la bibliothèque
     go you to the library
     'Are you going to the library?'
(89) is a statement where the verb is presumably in Agr1-0, while (90) is a question where the verb has moved to C0. It seems that the verb movement from Agr1-0 to
C0 is optional in French, since both orders are possible. We can therefore assume that S(M(c)), the S(M)-parameter for verb movement to C0, is set to 1/0. This is why the verb can either precede or follow the subject. However, the movement is non-optional in any particular case. Let us suppose that the declarative sentence constitutes the unmarked case where there is no special motivation for Agr1-to-C movement. Thus the Principle of Economy will apply, and the sentence will be ungrammatical if the movement is overt. In the case of interrogative sentences, there seems to be a special need for overt movement. We will not discuss what that need is here, but apparently it can override the Principle of Economy and require that the movement be overt. The Principle of Economy thus looks like a default principle: it applies only if no other principle is being applied.
In terms of the values of S(M(c)), French can be contrasted with V2 languages on the one hand and Chinese and Japanese on the other. In V2 languages, the Agr1-to-C movement seems to occur overtly regardless of whether the sentence is a statement or a question. This shows that S(M(c)) is set to 1 rather than 1/0 in these languages. This is why the movement is always obligatory. In Chinese and Japanese, on the other hand, the Agr1-to-C movement is never visible. This suggests that S(M(c)) is set to 0 in these languages. In this case, the verb does not have the option of moving to C even if this movement is motivated in some way.
We will see in Chapter 4 that the value 1/0 for S(M(spec1)), S(M(spec2)) and S(M(cspec)) can account for many interesting facts which would otherwise be left unexplained. The addition of this value will of course make the task of acquisition and parsing more challenging, but the challenge will give us a better understanding of the acquisitional and parsing processes.
3.3 Summary
In this chapter we have defined an experimental grammar upon which our study of syntactic typology, syntactic acquisition and syntactic processing in later chapters will be based. We defined a categorial system, a feature system and a computational system. The feature system includes a set of features and a set of S(F)-parameters which determine the morphological visibility of those features. The computational system is composed of three sub-components: lexical projection (LP), generalized transformation (GT), and move-α. For LP we defined a set of selectional rules which determine the specifier and complement of each category and two HD-parameters which determine the position of heads in functional projections. No parameterization exists in GT, which is performed in a universal way. For move-α we defined a set of feature-checking movements, each of which has an S(M)-parameter that determines the visibility of the movement. In the next chapter we will put this grammar to work. We will examine the parameter space created by the parameters and the language variations accommodated in the parameter space.
Chapter 4
The Parameter Space
In this chapter we consider the consequences of our experimental grammar in terms of the language typology it predicts. The parameters we have defined in the previous chapter can have many value combinations, each of which makes the grammar generate a particular language.1 Those different value combinations form our parameter space, where a variety of languages are found. We will explore the parameter space and find out its main properties and the languages it accommodates.
We should be reminded that the term "language" is used in a special sense here. In most cases we will be using the quoted form of this term to mean a set of strings which are composed of abstract symbols like S(ubject), O(bject) and V(erb). A string such as S V O represents a sentence where the subject precedes the verb and the object follows the verb. In addition, each symbol can carry a list of features. The features in this list represent overt morphology, i.e. the features that are spelled out. For instance, V-[agr,tns] represents a verb which is inflected for agreement and tense. A typical "language" in our system looks like (91), which
1The language generated can be empty, i.e. it contains no strings.
tells us the following facts: (a) this "language" has an SOV word order; (b) the NPs in this "language" carry case markers; (c) the verbs in this "language" are inflected for agreement and tense; (d) this "language" has both transitive and intransitive sentences; and (e) this is a nominative-accusative language where the subject in an intransitive sentence has the same case-marking as the subject in a transitive sentence.

(91) < s-[c1] v-[agr,tns],
      s-[c1] v-[agr,tns] o-[c2] >
This set of strings may resemble some subset of a real language, but it is far from a perfect representation of any natural language. It is only an abstract representation of certain properties of a human language. The properties we are interested in are word order and inflectional morphology. When we say a set of strings corresponds to an existing language, we mean that it reflects the word order and morphology of this language. All the languages that are generated in our system are such abstract languages. In spite of their abstractness, however, it will not be hard to see what languages they may represent. We will see in this chapter that many "languages" accommodated in our parameter space have real language counterparts and most real languages can find an abstract representation in our parameter space.
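In the Prolog experiments, such "languages" can be represented directly as terms. A minimal sketch of one possible encoding (ours, for illustration; the actual encoding is in the appendices):

    % A "language" is a list of strings; a string is a list of symbols,
    % each paired with the list of features it spells out overtly.
    % The "language" in (91):
    language_91([ [s-[c1], v-[agr,tns]],
                  [s-[c1], v-[agr,tns], o-[c2]] ]).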
Let us start the exploration by reviewing the parameters we have assumed.
(i) S(M)-parameters. These parameters determine what movements are overt in a
given language. There are eight S(M)-parameters corresponding to the eight
movements assumed in our theory:
S(M(agr2)) [V-to-Agr2]     S(M(c)) [Agr1-to-C]
S(M(asp)) [Agr2-to-Asp]    S(M(spec1)) [NP1-to-Agr1spec]
S(M(tns)) [Asp-to-T]       S(M(spec2)) [NP2-to-Agr2spec]
S(M(agr1)) [T-to-Agr1]     S(M(cspec)) [XP-to-Cspec]
The movement in brackets is overt (before Spell-Out) if the corresponding S(M)-parameter is set to 1 and covert (after Spell-Out) if the parameter is set to 0. We have assumed in Chapter 3 that A and A-bar movements (M(spec1), M(spec2) and M(cspec)) can be optional before Spell-Out. Therefore the value of S(M(spec1)), S(M(spec2)) or S(M(cspec)) can be a variable - 1/0 - which indicates that the associated movement can be either overt or covert.
(ii) S(F)-parameters. These parameters determine what morphological features
are overt in a language. Six of them are assumed: S(F(agr)), S(F(case)),
S(F(tns)), S(F(asp)), S(F(pred)) and S(F(op)). Each of these parameters
can have four values: 0-1 (spell out the L-feature only), 1-0 (spell out the
F-feature only), 1-1 (spell out both the L-feature and the F-feature), and 0-0
(spell out neither the L-feature nor the F-feature). Recall that most features
in our system are base-generated in two positions, one in a lexical category
(the L-feature) and one in a functional category (the F-feature). The two
features are checked against each other via movement.
(iii) Two HD-parameters: HD1, which determines whether the head of CP precedes or follows its complement, and HD2, which determines whether the heads in IP precede or follow their complements. These two parameters can be set to either I (head-initial) or F (head-final). The value of HD2 applies to every segment of IP: Agr1P, TP, AspP and Agr2P.
Putting these parameters together, we have 7 binary-valued parameters (the five S(M)-parameters for head movement plus the two HD-parameters), 3 triple-valued ones, and 6 quadruple-valued ones. They make up a parameter space where there are 14,155,776 (i.e. 2^7 x 3^3 x 4^6) value combinations. Two questions arise immediately:
(92) Does every existing human language have a corresponding "language" in our parameter space?
(93) Does every "language" in our parameter space correspond to some natural language?
As we will see in this chapter, the answers to these questions are basically positive.
In order to get the answers to these questions, we must first of all get all the value combinations, generate a language with each setting, and collect the languages that are generated together with their corresponding settings. This is a straightforward computational task and it can be accomplished using the Prolog program in Appendix A.1. There is obviously an expository problem as to how the results of this experiment can be presented and analyzed, since simply listing all the settings and the languages they generate may take a million pages. In order to describe the whole parameter space in a single chapter, I will break down the parameter space into natural sub-spaces and look at them one at a time. This can be done because the three sets of parameters in our system are independent of one another. We can vary the values of certain parameters while keeping the others constant. Some properties of the parameter space are local in the sense that they are properties of a particular sub-space or a particular type of parameters. We can get a clear idea about those properties by examining the relevant sub-
some representative cases instead of exhaustively listing all the possibilities. Such
sampling will hopefully enable us to envision the potential of the entire parameter
space.
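The shape of that computation can be sketched as follows. This is an illustrative fragment, not the actual program of Appendix A.1, and generate_language/2 stands in for the grammar defined in Chapter 3:

    % Value ranges: 7 binary parameters (the 5 head-movement
    % S(M)-parameters plus HD1 and HD2), 3 triple-valued S(M)-parameters
    % and 6 quadruple-valued S(F)-parameters:
    % 2^7 * 3^3 * 4^6 = 14,155,776 settings.
    values(binary,    [0, 1]).
    values(triple,    [0, 1, 1/0]).
    values(quadruple, [0-0, 0-1, 1-0, 1-1]).

    setting(Setting) :-
        Types = [binary, binary, binary, binary, binary,  % S(M), head mvt
                 triple, triple, triple,                  % S(M), spec mvt
                 binary, binary,                          % HD1, HD2
                 quadruple, quadruple, quadruple,         % S(F)-parameters
                 quadruple, quadruple, quadruple],
        maplist(pick, Types, Setting).

    pick(Type, Value) :- values(Type, Range), member(Value, Range).

    % Pair every setting with the "language" it generates.
    typology(Pairs) :-
        findall(S-L, (setting(S), generate_language(S, L)), Pairs).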
In what follows, we will look at the space of S(M)-parameters first and then ex­
pand it to include the HD-parameters. After that we will bring the S(F)-parameters
into the picture and consider their interaction with S(M)-parameters. Many nat­
ural language examples will be cited in the course of discussion to illustrate the
relevance of our experimental results to empirical linguistic data.
4.1 The Parameter Space of S(M)-Parameters
In this section, we will single out the S(M)-parameters and explore the range of
language variation they can account for. To do this we need to look at the value
combinations of S(M)-parameters while keeping the values of S(F)-parameters and
HD-parameters constant. In the following experiment, the HD-parameters are
constantly set to I (head-initial). In other words, we will be restricted to the tree
structures in (75) and (76) (Chapter 3) where every head precedes its complement.
We will not be concerned with morphology at this moment. The settings of S(F)-
parameters will be temporarily ignored. The “words” that appear in strings will
therefore be simplified as s (subject) o (object) and v (verb) which are to be
interpreted as NPs and verbs with any inflectional morphology.
4.1.1 An Initial Typology
We have assumed eight S(M)-parameters and we will represent their values in a vector of eight coordinates:

[ S(M(agr2)) S(M(asp)) S(M(tns)) S(M(agr1))
  S(M(c)) S(M(spec1)) S(M(spec2)) S(M(cspec)) ]

A setting like [ 1 0 0 0 0 1 0 0 ] means that S(M(agr2)) is set to 1, S(M(asp)) set to 0, S(M(tns)) set to 0, and so on.
With 5 binary-valued parameters (S(M(agr2)), S(M(asp)), S(M(tns)), S(M(agr1)), S(M(c))) and 3 triple-valued ones (S(M(spec1)), S(M(spec2)), S(M(cspec))), we have a parameter space of 864 settings. However, it is not the case that each of these value combinations is syntactically meaningful. This is true at least within this sub-space of S(M)-parameters. When the S(M)-parameter values are matched up with the values of S(F)-parameters, more of these settings will become syntactically relevant. But we will investigate this sub-space first. Only after we have understood why certain settings of S(M)-parameters are syntactically meaningless in this sub-space can we see the reason why some of these settings may make sense once the S(F)-parameters are brought into the picture.
In our current sub-space where only S(M)-parameters are active and the only
X0 category that can move is the verb, many settings are meaningless due to
the two syntactic constraints discussed in 3.2.3: the Head Movement Constraint
(HMC) and the constraint that an NP must move to an Agrspec before moving on
to Cspec.
The HMC requires that no intermediate head be skipped during head movement. For a verb to move from its VP-internal position to C0, for example, it
must move successively to Agr2-0, Asp0, T0 and Agr1-0 first. In other words, verb movement to C must consist of 5 short movements: M(agr2) (V-to-Agr2), M(asp) (Agr2-to-Asp), M(tns) (Asp-to-T), M(agr1) (T-to-Agr1) and M(c) (Agr1-to-C). The verb cannot be in C0 if any of those intermediate movements fails to occur. If the only X0 in a grammar that can undergo head movement is the verb, there will be a transitive implicational hierarchy in the form of

M(agr2) < M(asp) < M(tns) < M(agr1) < M(c)

No movement on the right-hand side of an < can occur without the one(s) on the left-hand side occurring at the same time. However, there are many value combinations of S(M(agr2)), S(M(asp)), S(M(tns)), S(M(agr1)) and S(M(c)) which are contradictory to the HMC. Consider the settings in (94).

(94) (a) 1 1 1 1 1
     (b) 0 1 1 1 1
     (c) 1 0 1 1 1
     (d) 1 1 0 1 1
     (e) 1 1 1 0 1
     (f) 0 0 1 1 1
     (g) 0 1 0 1 1
     (h) 0 1 1 0 1
     (i) 1 0 0 1 1
     (j) 1 0 1 0 1
     (k) 1 1 0 0 1
     (l) 0 0 0 1 1
     (m) 0 1 0 0 1
     (n) 0 0 1 0 1
     (o) 1 0 0 0 1
     (p) 0 0 0 0 1
All these settings have S(M(c)) set to 1, requiring that the verb move to C0 before Spell-Out. They all differ from each other, however, in the values of S(M(agr2)), S(M(asp)), S(M(tns)) and S(M(agr1)). In (94(a)) they are all set to 1; in all the other settings, however, at least one of them is set to 0. This results in a contradiction in all the cases of (94(b))-(94(p)). The fact that S(M(c)) is set to 1 requires that the verb move to Agr2-0, Asp0, T0, Agr1-0 and finally to C0 before Spell-Out. But the values of the other parameters require that at least one of those intermediate movements not occur before Spell-Out. Take the setting in (94(p)) as an example. This setting says that the verb must move to C0 before Spell-Out, but it must not move to any of the intermediate heads before Spell-Out. Since overt verb movement to C0 requires overt movement through Agr2-0, Asp0, T0 and Agr1-0, this setting makes no sense. Similar contradictions are found in all the other settings except the one in (94(a)), where no intermediate movement is blocked.
This kind of contradiction is found in many other settings. The net result of all this is that, in so far as verb movement is concerned, there are only six settings which are syntactically meaningful:2
2There are alternative ways of looking at the situation described here. It has been suggested by R. Frank (personal communication) and D. Sportiche (personal communication) that movement to a certain head need not be blocked simply by the fact that the S(M)-parameter associated with this head is set to 0. Following the HMC, the movement will go from head to head regardless of the S(M)-parameter values. If this is the case, then the settings in (94(b-p)) will all be equivalent to (94(a)) in terms of the overt movement forced: all of them require that the verb move to C0 before Spell-Out. Instead of saying that there are only six syntactically meaningful settings, we can then say that the settings can be grouped into six equivalence classes. Different settings can be in the same equivalence class by virtue of the fact that the final landing site for the movement is the same in each setting. Each of the settings in (95) then represents an equivalence class.
(95) 0 0 0 0 0   (no overt V-movement)
     1 0 0 0 0   (overt V-movement to Agr2-0)
     1 1 0 0 0   (overt V-movement to Asp0)
     1 1 1 0 0   (overt V-movement to T0)
     1 1 1 1 0   (overt V-movement to Agr1-0)
     1 1 1 1 1   (overt V-movement to C0)
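Computationally, the HMC filter on head-movement settings amounts to the requirement that the 1s form an unbroken prefix of the five-coordinate vector. A small illustrative Prolog predicate (ours, not the appendix code) enumerates exactly the six settings in (95):

    % A head-movement setting [agr2, asp, tns, agr1, c] respects the
    % HMC iff no 1 follows a 0.
    hmc_ok([]).
    hmc_ok([0|Rest]) :- all_zero(Rest).
    hmc_ok([1|Rest]) :- hmc_ok(Rest).

    all_zero([]).
    all_zero([0|Rest]) :- all_zero(Rest).

    % ?- findall(S, (length(S, 5), hmc_ok(S)), Settings).
    % Settings = [[0,0,0,0,0], [1,0,0,0,0], [1,1,0,0,0],
    %             [1,1,1,0,0], [1,1,1,1,0], [1,1,1,1,1]]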
An analogous situation seems to exist for the values of S(M(spec1)), S(M(spec2)) and S(M(cspec)), which are responsible for the overtness of XP movement. The requirement that an NP must move to an Agrspec before moving to Cspec has the following consequence. If an XP appears in Cspec at Spell-Out, it must have moved to Agr1spec or Agr2spec during the derivation. But the relationship between different settings here is a little more complicated. Consider the settings in (96).
(96) (a) [ . . . 1 1 1 ]
(b) [ . . . 1 0 1 ]
(c) [ . . . 0 1 1 ]
(d) [ . . . 0 0 1 ]
Some of these settings may seem syntactically meaningless. In (96(d)), some XP
(including the subject NP and the object NP) is required to move overtly to Cspec,
but neither the subject nor the object can move to its Agrspec position before
Spell-Out. Upon further reflection, however, we realize that all those settings
can be syntactically meaningful. Recall that the XP in Cspec can be any of the
following: the subject NP, the object NP, or some adjunct (AdvP, PP, etc.). Since
an adjunct need not move to any Agrspec before moving to Cspec, the setting in (96(d)) will make sense if the sentence contains an adjunct XP, which can move to Cspec regardless of the values of S(M(spec1)) and S(M(spec2)). All the other settings in (96) can also be meaningful, and they mean different things. They are similar in that Cspec must be filled at Spell-Out, but they may differ in terms of the XP that fills the Cspec. Here are the possible interpretations:
(97) (96(a)): Cspec can be filled by any XP;
(96(b)): Cspec can be filled by a subject NP or an adjunct, but not by an
object NP, as it does not move overtly to Agr2spec;
(96(c)): Cspec can be filled by an object NP or an adjunct, but not by a
subject NP, as it does not move overtly to Agrlspec;
(96(d)): Cspec can be filled by an adjunct only, as neither the subject NP
nor the object NP moves overtly to Agrspec.
The setting in (96(d)) has an interpretation, but it predicts that there can be a language where every sentence must contain an XP other than the subject or object NP. As such a language does not seem to exist, this setting will be disregarded. Thus the meaningful settings for S(M(spec1)), S(M(spec2)) and S(M(cspec)) are:
(98) (a) [ ... 0 0 0 ]
     (b) [ ... 1 0 0 ]
     (c) [ ... 0 1 0 ]
     (d) [ ... 1 1 0 ]
     (e) [ ... 1 0 1 ]
     (f) [ ... 0 1 1 ]
     (g) [ ... 1 1 1 ]
As each of these parameters has a third setting (1/0), we also have the following
value combinations:
(h) [ ... 1/0 0 0 ]
(i) [ ... 0 1/0 0 ]
(j) [ ... 0 0 1/0 ]
(k) [ ... 1/0 1 0 ]
(l) [ ... 1 1/0 0 ]
(m) [ ... 1/0 1/0 0 ]
(n) [ ... 1/0 0 1 ]
(o) [ ... 1 0 1/0 ]
(p) [ ... 1/0 0 1/0 ]
(q) [ ... 0 1/0 1 ]
(r) [ ... 0 1 1/0 ]
(s) [ ... 0 1/0 1/0 ]
(t) [ ... 1/0 1 1 ]
(u) [ ... 1 1/0 1 ]
(v) [ ... 1 1 1/0 ]
(w) [ ... 1/0 1/0 1 ]
(x) [ ... 1/0 1 1/0 ]
(y) [ ... 1 1/0 1/0 ]
(z) [ ... 1/0 1/0 1/0 ]
The total number of value combinations which are syntactically meaningful in our present sub-space is therefore 156 rather than 864: the 6 verb-movement settings in (95) combine freely with the 26 NP-movement settings in (98). These settings and the corresponding sets of strings they generate are shown in Appendix B.1. In this list, each entry consists of a vector of S(M)-parameter values and the set of strings generated with this value combination. We call each vector a setting and each set of strings a "language".
If we examine these setting-language pairs carefully, we will find that the correspondence between settings and "languages" is not one-to-one. For instance, both the setting in #1 ([ 0 0 0 0 0 0 0 0 ]) and the one in #3 ([ 0 0 0 0 0 1 0 0 ]) can generate the language [ s v, s v o ].3 In fact, the correspondence is many-to-one in most cases. We see in Appendix B.1 that many settings generate identical languages. The language [ v s, v s o ], for instance, can be generated with 14 different settings including #4, #8, #12, #16, #20, etc. There are 156 settings in the list, but the number of distinct languages that are generated is only 31. These 31 languages and their corresponding settings are listed in Appendix B.2. The significance of such many-to-one correspondences will be discussed in 4.1.2.
Looking at the languages listed in Appendix B.2, we find that our current parameter space is capable of accommodating all the basic word orders: SV(O) (#1), S(O)V (#4), VS(O) (#2), V(O)S (#8), (O)SV (#3), and (O)VS (#5). We also find a V2 language (#11). One of the settings with which a V2 language can be generated is [ 1 1 1 1 1 1 1 1 ], where every movement is overt. According to this setting, the verb must move overtly all the way to C0, the NPs to their respective Agrspecs, and one of the NPs must move further on to Cspec. In an intransitive sentence, there is only one NP and this NP must move to Cspec. The
3It is important to note the distinction between strings and languages. A language comprises a set of strings, and two languages are identical only if they have exactly the same set of strings. For instance, [ s v, s v o ] and [ s v, s o v ] are not the same language in spite of the fact that they both contain the string "s v".
resulting word order is SV. In a transitive sentence, either the subject NP or object
NP can fill Cspec. We have SVO when the subject is in Cspec and OVS when
the object is. This may well be what is happening in German root clauses. If so,
the German sentences in (100(a)) and (101(a)) will have the structures in (100(b))
and (101(b)) respectively.
(100)4 a. Der Mann sah den Hund
       the(nom) man saw the(acc) dog
       'The man saw the dog.'
     b. [cp Der Mann_i [c1 [c sah_j] [agr1p e_i [agr1-1 ... [agr2p den Hund_k [agr2-1 [agr2 e_j] [vp e_i e_j e_k]]]]]]]
(101) a. Den Hund sah der Mann
       the(acc) dog saw the(nom) man
       'The dog, the man saw.'
     b. [cp Den Hund_k [c1 [c sah_j] [agr1p der Mann_i [agr1-1 ... [agr2p e_k [agr2-1 [agr2 e_j] [vp e_i e_j e_k]]]]]]]
In addition to basic word orders and V2, the current parameter space also allows for a considerable amount of scrambling. Scrambling is a general term referring to word order variation in a single language, usually the variation that results from clause-internal movements of maximal projections. In our present discussion, a language is considered to have scrambling if it has more than one way of ordering S, V and O. For example, [ o s v, s v, s v o ] and [ s o v, o s v, s v ] are scrambling languages.
We can see that many of the “languages” in Appendix B.2 are scrambling
languages. We do not know whether each of those languages is empirically attested,
4Examples from Friederike Moltmann
but at least some of them are. Let us take a look at the languages in #9, #16 and #30.
The "language" in #9, [ s o v, o s v, s v ], seems to be exemplified by Japanese and Korean. These are verb-final languages where the subject and object can be put in any order as long as they precede the verb. The Japanese sentences in (102) and (103) illustrate this.
in (102) and (103) illustrate this.
(102) Taroo-ga Hanako-o nagut-ta
Taroo-Nom Hanako-Acc hit-Past
‘Taroo hit Hanako’
(103) Hanako-o Taroo-ga nagut-ta
Hanako-Acc Taroo-Nom hit-Past
‘Taroo hit Hanako'
The “language" in #16 is [ o i v, s o v , s v , s v o ] where the subject
must precede the verb but the object can appear anywhere in the sentence. Chinese
seems to be an example of this, as we can see in (104), (105) and (106).
(104) wo jian guo neige ren
I see Perf that person
‘I have seen that person before.'
(105) wo neige ren jian guo
I that person see Perf
‘I have seen that person before.'
(106) neige ren wo jian guo
      that person I see Perf
      'That person, I have seen before.'
The “language" in #30 has very extensive scrambling, permitting all the following
orders for a transitive sentence: [ o v s, s v o , s o v , o s v , v s o ]. All
those orders are found in Hindi. For instance, the Hindi sentence in (107) has the
alternative orders in (108)-(112). ‘
'Examples from Mahajan (1990)
(107) raam-ne kelaa khaayaa (SOV)
Ram-Erg banana ate
‘Ram ate a banana.’
(108) raam-ne khaayaa kelaa (SVO)
(109) kelaa raam-ne khaayaa (OSV)
(110) kelaa khaayaa raam-ne (OVS)
(111) khaayaa raam-ne kelaa (VSO)
(112) khaayaa kelaa raam-ne (VOS)
The only order which is found in Hindi but not in #30 is VOS.
Languages like Japanese, Korean, Chinese and Hindi have been described as "non-configurational" in the literature (Saito and Hoji 1983, Saito 1985, Hoji 1985, among many others). They are supposed to be problematic for any strong version of X-bar theory where phrasal projections are assumed to be universal. This problem does not seem to exist in our present model. As we have seen, all the scrambled orders can be derived from a single configuration if some A and A-bar movements are allowed to be optional at Spell-Out. This is in fact one of the current views of scrambling (e.g. Mahajan 1990).
4.1.2 Further Differentiation
One thing in Appendix B.2 that may cause concern is the fact that a single language can be derived with many different parameter settings. Those settings generate weakly equivalent languages,6 which cannot be distinguished from each other on the
6Two languages are weakly equivalent if they have identical surface strings but different grammars.
basis of surface strings. One reason why we cannot distinguish them is that many of the movements are string-vacuous. For example, in cases where the subject NP has moved to Agr1spec while the object NP remains in situ, there is no way of telling from an SVO string whether the verb is in Agr1-0, T0, Asp0, Agr2-0 or its VP-internal position. There is simply no reference point by which the movement can be detected. But this seems to be an accidental rather than an intrinsic property of the current model. The apparent indistinctness of different settings is due at least in part to the artifact that only nouns and verbs have been taken into consideration in our grammar so far. But nouns and verbs are not the only entities in natural languages. There are other constituents which may serve as reference points for movement by virtue of the fact that some movements have to go "over" them. It has been widely assumed since Pollock (1989) that negative elements and certain adverbs are constituents of this kind. Once these reference points appear in the string, many movements will cease to be string-vacuous and many otherwise indistinguishable languages/settings will become distinct from each other. To illustrate this, I will introduce just one reference point into our grammar: an adverb of the "often" type. For easy reference I will simply call it often, meaning an adverb in any language which has the syntactic properties of often. Furthermore I will assume that this temporal/aspectual adverb is a modifier of T and is therefore left-adjoined to T1. The partial tree in (113) shows the part of the structure where often appears.
[partial tree not legible in this reproduction: often is left-adjoined to T1, between Agr1-0 and T0]
(113) The Position of Often
In terms of linear order, often appears between Agr1-0 and T0. This position can serve as a diagnostic for verb movement from T0 to Agr1-0 or NP movement to Agr1spec, both of which relate a position following often to a position preceding often. Any verb before often must have moved to Agr1-0 or higher, and any NP before it must be in Agr1spec or a higher position. Given a string such as O S often V, for instance, we can tell that O is in Cspec and S in Agr1spec, since these are the only two NP positions before often. Moreover, we know that the verb cannot be in C0 or Agr1-0.
Appendix B.3 shows us what the setting-language correspondences will be if the position of often is taken into consideration. We have the same number of value combinations (156) here as we had in B.1 or B.2.7 However, the number
7To have a fair comparison, we have restricted A-bar movement to NPs only. The number of settings will be greater if the AdvP often is allowed to move to Cspec as well.
of distinct languages that are generated with these settings has increased from 31 to 66. The addition of often to the strings has helped to differentiate many "languages" that are otherwise indistinguishable from each other. Take the SVO "languages" as an example. In Appendix B.2, there is only a single SVO language, which can be generated with 32 different settings. In Appendix B.3, however, four SVO languages are differentiated:

#1  [ (often) s v, (often) s v o ]   (2 settings)
#2  [ s (often) v, s (often) v o ]   (20 settings)
#12 [ s v (often), s v (often) o ]   (8 settings)
#20 [ s (often) v o, s (often) v,
      (often) s v, (often) s v o ]   (2 settings)
We are now able to distinguish between English and French, which seem to correspond to #2 and #12 respectively. In English often precedes the verb while it follows the verb in French, as shown in (114) and (115).
(114) John often visits China.
(115) Jean visite souvent la Chine
John visits often China
‘John often visits China.’
We can expect that, with more reference points (such as Neg and other adverbs)
taken into account, even finer differentiations are available. However, it is neither
likely nor necessary that we should have enough reference points to differentiate
every setting from every other setting. There does not seem to be any principled
reason against having more than one way of generating the same language, as long
as all the different languages are identifiable. The existence of weakly equivalent
languages may just be a fact.
A natural question to ask at this point is whether the reference points that have been assumed are always reliable. So far we have assumed that constituents such as Neg and adverbs do not move. However, this assumption does not seem well supported. In English, for instance, the adverb often does seem to move around. It can appear clause-initially as in (116), clause-medially as in (117), and clause-finally as in (118).
(116) Often she plays the piano.
(117) She often plays the piano.
(118) She plays the piano very often.
In (116), often seems to have moved to Cspec. This is possible if S(M(cspec)) is set to 1 or 1/0 in English. To account for (118), we have to assume that often may be adjoined to either T1 or V1 (i.e. it can be either a T modifier or a V modifier). The order in (118) can be derived if often is attached to V1 while the verb has moved to T0 and the object NP to Agr2spec. This shows that the so-called "reference points" may not always serve as a good indicator of the movement of other constituents. There might be a canonical position for each adverb, and only this position serves as a reference point. But how a certain position can be identified as canonical by the learner is a question that remains to be answered.
4.1.3 Some Set-Theoretic Observations
In this section, we consider the set-theoretic relations between the “languages”
in our parameter space. We will also consider the parameter settings that are
responsible for those relations. The languages to be examined will be those in
Appendix B.2. There are only 31 distinct languages there and it does not take
long to compute all the relations between those languages. (The Prolog program
which does the computation is in Appendix A.2.) The number of languages we
have when all other parameters are taken into consideration will be much greater
than 31, but the properties we find in this small number of languages will hold
when the parameter space is expanded to include other parameters.
The set-theoretic relations can be considered in a pair-wise fashion. We take each of the 31 languages and check it against each of the other 30 languages.8 The total number of pairs we get this way is 465 (30*31/2). There are three possible set-theoretic relations that can hold between the two languages in each pair: disjointness, intersection and proper inclusion.9 Two languages are disjoint from each other if they have no string in common. An example of this is the relation between #1 [ s v, s v o ] and #2 [ v s, v s o ]. Two languages intersect each other if they are not identical but have at least one string in common. For instance, #1 [ s v, s v o ] and #4 [ s o v, s v ] intersect by virtue of the fact that (i) each of them has a string that the other does not have (s v o is in #1 only and s o v in #4 only), and (ii) both of them contain the string s v. A language L1 is properly included in another language L2 if the strings generated in L1 form a proper subset of those generated in L2. This relationship is found, for example, between #1 [ s v, s v o ] and #10 [ o s v, s v, s v o ]. Each string in #1 is in #10, but not vice versa. Our computation shows that the subset relationship holds in 162 of the 465 pairs. The fact that subset
8We are not interested in pairing each language with itself, since every language is identical to itself.
9No languages in any pair can be identical to each other, since all the 31 languages in B.2 are distinct.
relations do exist in a substantial number of cases will have important implications
for the learning algorithm to be discussed in Chapter 5. We will not get into the
learnability issues here. What we want to find out here is why subset relations
arise in our parameter space.
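The comparison itself is elementary set computation. The following sketch shows the kind of classification predicate involved; it is written in the spirit of, but is not, the program in Appendix A.2 (subset/2 and intersection/3 are standard list-library predicates, and the atoms sv, svo, etc. stand in for strings):

    % relation(+L1, +L2, -R): the set-theoretic relation between two
    % distinct "languages", each represented as a list of strings.
    relation(L1, L2, disjoint)  :- intersection(L1, L2, []).
    relation(L1, L2, subset)    :- subset(L1, L2), \+ subset(L2, L1).
    relation(L1, L2, superset)  :- subset(L2, L1), \+ subset(L1, L2).
    relation(L1, L2, intersect) :-
        intersection(L1, L2, [_|_]),
        \+ subset(L1, L2),
        \+ subset(L2, L1).

    % ?- relation([sv, svo], [osv, sv, svo], R).
    % R = subset.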
For convenience we will refer to a language which properly includes some other language(s) as a superset language and to a language which is properly included in some other language(s) as a subset language. These notions are of course relative in nature. We may have three languages L1, L2 and L3 where L1 properly includes L2, which in turn properly includes L3. In this case, L2 will be a subset language relative to L1 but a superset language relative to L3. Examining the languages in B.2, we find that all superset languages are scrambling languages.10 Since only two types of sentences (transitive and intransitive) are produced in our system, any language in B.2 that contains more than two distinct strings is a scrambling language. Now, what do those scrambling languages have in common with regard to their parameter values? An examination of the languages in B.2 reveals that the superset languages either have S(M(spec1)), S(M(spec2)) and S(M(cspec)) all set to 1 or have at least one of them set to 1/0. In other words, there is either
10The only exceptions are the superset languages for #6 [ o s v ] and #7 [ o v s ]. For instance, #3 [ o s v, s v ] is a superset language for #6 [ o s v ] but there is no scrambling in #3. However, the languages in #6 and #7 are odd in the sense that they do not contain any string which consists of one NP only. In other words, no intransitive sentences can occur in these languages. Such patterns do not seem to occur in natural languages. Looking at the parameter settings for #6 and #7, we see that they share the property of having S(M(spec1)) set to 0, S(M(spec2)) set to 1 or 1/0, and S(M(cspec)) set to 1. In other words, they all require that Cspec be filled before Spell-Out but only the object can move there. There is no overt subject NP movement to Agr1spec and hence no movement of the subject to Cspec. This situation will not arise if our grammar has the following constraint: if the object NP can move to Cspec in a language, then the subject NP in this language must be able to move to Cspec as well (i.e. S(M(cspec)) cannot be set to 1 unless S(M(spec1)) is). The languages in #6 and #7 will not be generated if this constraint is in place. In any case, the languages in #6 and #7 can probably be ignored. If so, all superset languages are scrambling languages.
overt A-movement of both the subject and the object, or there is at least one optional A or A-bar movement.
When S(M(spec1)), S(M(spec2)) and S(M(cspec)) are all set to 1, either the
subject NP or the object NP must move to Cspec. As a result, each transitive
sentence will have two alternative orders, depending on which of the two NPs is
in Cspec. Take the first setting in #9 as an example. In this setting, all the
S(M)-parameters for head movement are set to 0. The verb therefore remains
VP-internal at Spell-Out. The subject and object NP must move to Agr1spec and Agr2spec respectively, and one of them must continue to move to Cspec. When a sentence is transitive, the word order is SOV if the subject is in Cspec and OSV if the object is. Hence the scrambling phenomenon. Similar situations are found in the 2nd setting in #9, the 1st, 2nd and 3rd settings in #10, and the 1st setting in #11.
We now consider the second source of scrambling: optional movement. That optional movement can create scrambling is fairly obvious. If the movement is not string-vacuous, we will have one word order if the movement occurs and another one if it does not. In terms of parameter settings, any setting where one or more parameters are set to 1/0 is equivalent to the merge of two different parameter settings, one with that parameter set to 1 and the other with it set to 0. For instance, the setting [ 0 0 0 0 0 1/0 1 0 ] is equal to [ 0 0 0 0 0 1 1 0 ] plus [ 0 0 0 0 0 0 1 0 ]. The language generated with this setting is the union of the language generated with [ 0 0 0 0 0 1 1 0 ] (i.e. #4 [ s o v, s v ]) and the one generated with [ 0 0 0 0 0 0 1 0 ] (i.e. #3 [ o s v, s v ]). This is why the language generated with [ 0 0 0 0 0 1/0 1 0 ] is #9 [ s o v, o s v, s v ]. This is also why #9 is a superset language for #3
and #4. The value 1/0 is a variable which can be instantiated to either 1 or 0. In general, a setting with n parameters set to 1/0 can be instantiated to 3^n - 1 subset settings (including partial instantiations). For instance, a setting with two parameters set to 1/0 - [ ... 1/0 ... 1/0 ... ] - can have the following eight instantiations:
[ ... 1 ... 1 ... ]      [ ... 1 ... 0 ... ]
[ ... 0 ... 1 ... ]      [ ... 0 ... 0 ... ]
[ ... 1/0 ... 1 ... ]    [ ... 1/0 ... 0 ... ]
[ ... 1 ... 1/0 ... ]    [ ... 0 ... 1/0 ... ]

If none of the movements controlled by these parameters is string-vacuous, a language with n optional movements can then have 2^n subset languages. In B.2 there
are many cases where the setting has one or more parameters set to 1/0, but the language generated is not a scrambling one. These are all cases where the optional movement is string-vacuous. The subset languages are thus string-identical. Consider the setting [ 0 0 0 0 0 1/0 0 0 ], which is equivalent to [ 0 0 0 0 0 1 0 0 ] plus [ 0 0 0 0 0 0 0 0 ]. The movement which is optional here is subject NP movement to Agr1spec. Since both the verb and the object remain in situ (in the order of VO), we will have SVO regardless of whether the subject is in Vspec or in Agr1spec.
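The union effect of a 1/0 value can be made explicit with another small sketch (illustrative only; generate_language/2 again stands in for the grammar): a setting containing 1/0 coordinates denotes the union of the languages generated by its full instantiations.

    % Instantiate every 1/0 coordinate to 1 or 0; fixed values persist.
    instantiate([], []).
    instantiate([1/0|T], [V|T1]) :- member(V, [1, 0]), instantiate(T, T1).
    instantiate([V|T], [V|T1]) :- V \= 1/0, instantiate(T, T1).

    % The "language" of a setting is the union of the languages of its
    % full instantiations.
    language_of(Setting, Language) :-
        findall(L, (instantiate(Setting, Full),
                    generate_language(Full, L)),
                Languages),
        append(Languages, AllStrings),
        sort(AllStrings, Language).

    % ?- instantiate([0,0,0,0,0, 1/0,1,0], S).
    % S = [0,0,0,0,0, 1,1,0] ;
    % S = [0,0,0,0,0, 0,1,0].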
To sum up, there are two sources of scrambling or superset languages: overt A-movement and optional movement. This observation is important for learnability issues. Its significance will be discussed in Chapter 5.
4.2 Other Parameters
In 4.1 we investigated the parameter space of S(M)-parameters. We kept the values
of HD-parameters and S(F)-parameters constant in order to concentrate on the
properties of S(M)-parameters. Now we will start taking those other parameters
into consideration. In what follows, we will bring in the HD-parameters first, then
some S(F)-parameters, and finally all the S(F)-parameters.
4.2.1 HD-Parameters
Only two HD-parameters are assumed in our system: one for CP and one for IP. There is no specifier-head parameter for any category and there is no complement-head parameter for lexical categories. The parameter for CP (HD1) determines whether CP is head-initial or head-final. The parameter for IP (HD2) determines for every segment of IP (i.e. Agr1P, TP, AspP and Agr2P) whether the head precedes or follows the complement. If the parameter is set to I, all those segments will be head-initial and the whole IP will have the structure in (119(a)). The whole IP will be head-final if the parameter is set to F, as shown in (119(b)).
(119) [Tree diagrams omitted: (a) a head-initial IP and (b) a head-final IP, each showing the segments Agr1P, TP, AspP and Agr2P]
Head-Initial and Head-Final IPs
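The effect of a HD-parameter on linearization, as depicted in (119), can be stated in a single function. A minimal sketch (the function name is mine, not part of the grammar):

```python
def linearize(head, complement, hd_value):
    """Order a head and its complement according to a HD-parameter:
    'I' (head-initial) places the head before its complement,
    'F' (head-final) places it after."""
    return [head, complement] if hd_value == 'I' else [complement, head]

# HD2 orders every segment of IP (Agr1, T, Asp, Agr2) alike:
print(linearize('T', 'AspP', 'I'))  # ['T', 'AspP']  -- as in (119(a))
print(linearize('T', 'AspP', 'F'))  # ['AspP', 'T']  -- as in (119(b))
```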
It is evident that the values of HD-parameters can affect the linear order of
a sentence. In our system, they work in conjunction with the values of S(M)-
parameters in determining the word order of a language. Given a certain value
combination of S(M)-parameters, the order may vary according to the values of
HD-parameters. Our parameter vector now has 10 coordinates:
[ S(M(agr2)) S(M(asp)) S(M(tns)) S(M(agr1)) S(M(c))
S(M(spec1)) S(M(spec2)) S(M(cspec)) , HD1 HD2 ]
We can keep the values of S(M)-parameters constant and get different word orders
by changing the values of HD1 and HD2. Consider the following two settings:
(a) [ 1 1 1 0 0 1 0 0 , I I ]
(b) [ 1 1 1 0 0 1 0 0 , F F ].
Both settings require that the verb move to T0, the subject NP move to Agr1spec,
and the object NP stay VP-internal before Spell-Out, but T0 precedes the VP in
(a) and follows the VP in (b). As a result, the language generated with (a) is SVO
and the one with (b) is SOV. It should be noted that the value of a HD-parameter
has an effect on word order only if the relevant head is the final landing site of an
overt head movement. In (a) and (b), overt head movement reaches T0 which is a
head in IP but not C0 which is the head of CP. Consequently, the value of HD1
plays no role in determining the word order. The languages generated with [ 1 1
1 0 0 1 0 0 , I I ] and [ 1 1 1 0 0 1 0 0 , I F ] are identical. So are the
languages generated with [ 1 1 1 0 0 1 0 0 , F F ] and [ 1 1 1 0 0 1 0 0
, F I ]. In a setting like [ 0 0 0 0 0 . . . ] where there is no overt head
movement, the language will be the same no matter what values HD1 and HD2
may be assigned.
We observed earlier that, modulo our current grammar, there are 156 settings of
S(M)-parameters which are syntactically meaningful. Each of these settings
can be combined with any of the four settings of HD-parameters: [ I I ], [ F F
], [ I F ] , and [ F I ] . The number of possible settings is therefore 624. The
languages generated with these 624 settings are given in Appendix B.4. As in B.2,
I have grouped together all the settings that generate the same language. It is
a surprise to see that, in spite of the four-fold increase in the number of value
combinations, the number of distinct languages generated did not increase at all.
We had 31 languages when both HD1 and HD2 are fixed to I. We expected to get
more languages when the values of HD1 and HD2 are allowed to vary, but there
are still exactly 31 languages. In other words, the languages generated when HD1
and HD2 are set to [ I F ], [ F I ] and [ F F ] respectively all form subsets
of the languages generated with [ I I ]. As we can see in B.4, every language
that can be generated by setting HD1 and HD2 to [ I F ], [ F I ] or [ F F
] can also be generated with the parameters set to [ I I ]. This might suggest
that the parameterization of head directions is superfluous. What is the point
of varying the directions if the variation accounts for no more word order facts?
Why not eliminate the parameters and assume that all projections are universally
head-initial? These will certainly be arguments in favor of the views in Kayne
(1992, 1993) and Wu (1993).
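The grouping used in B.2 and B.4 amounts to partitioning settings by the string set they generate, so the number of distinct languages is just the number of groups. A sketch under the same assumption as before (generate_language, all_624 and grammar are hypothetical stand-ins, not code from the thesis):

```python
from collections import defaultdict

def group_by_language(settings, generate_language):
    """Partition settings by the string set (language) they generate.
    The number of keys is the number of distinct languages, which is
    how the groupings in B.2 and B.4 were obtained."""
    groups = defaultdict(list)
    for s in settings:
        groups[frozenset(generate_language(s))].append(s)
    return groups

# Crossing the 156 meaningful S(M)-settings with the four HD-settings
# gives 624 settings; len(group_by_language(all_624, grammar)) comes
# out to 31, i.e. varying HD1/HD2 adds no new S-O-V string sets.
```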
However, the superfluity of HD-parameters is more apparent than real. The
parameterization does not seem to increase the generative power of the grammar
because the strings in B.2 and B.4 are too simple. They contain nothing more than
S, O and V. As soon as other constituents are allowed to appear in the strings, the
difference in generative power will show up. We can add often to the strings and
see what happens.11 When often is added while the HD-parameter values are fixed
to [ I I ], 66 languages are distinguished (B.3). When the HD-parameter values
are allowed to vary, we can distinguish 86 languages if often appears in the strings.
(Due to space limitation, the list of settings and languages is not given in the
11We assume as before that often is left-adjoined to T1.
Appendix.) The increase in number is not dramatic, but it does show that there
are things that can fail to be generated if HD-parameters are eliminated altogether.
One of the languages that cannot be generated if all projections are head-initial is
[ often s v, often s o v ]. To get the SOV order while maintaining a strictly
head-initial structure, the subject NP must move to Agr1spec and the object NP
to Agr2spec. But the movement to Agr1spec must go beyond often, putting the
subject on its left side. Thus s often o v is possible while often s o v is
impossible. However there is no problem in generating the latter if head direction
is permitted to vary. We can get this order with the setting [ 1 1 1 0 0 0 0
0 , F F ], for instance. This setting makes the verb move overtly to T0 while
keeping everything else in situ. Since TP is head-final, the verb that lands in T0
will follow every other element in the string. The string that results is often s
o v.
There are other languages which cannot be generated in the absence of HD-
parameters. In Chapter 2, we mentioned languages with sentence-final auxiliaries
and grammatical particles. In the next section, we will look at them again. We
will find that the addition of the two HD-parameters can make a big difference in
terms of generative power once some functional elements are spelled out.
4.2.2 The Spell-Out of Functional Heads
So far we have ignored the values of S(F)-parameters. As a result, morphological
variation has been kept out of the picture. One consequence of this simplification is
that no functional heads have been spelled out. The head of a functional category
consists of a set of features but it has no lexical content. We assume that, unlike
the head of a lexical category which is visible regardless of the value of S(F)-
parameters, the head of a functional category is not visible unless its features
are spelled out. When all S(F)-parameters are set to 0, no functional heads are
visible and the only head that can undergo overt head movement is the verb. Once
some functional heads are spelled out, however, the situation is different. We have
assumed earlier that the feature matrix of a functional head can be spelled out
as auxiliaries, grammatical particles or expletives. Apparently, auxiliaries can also
undergo head movement. This has very interesting syntactic consequences.
Recall that each S(F)-parameter can have two sub-parameters in the form F-
L. The value of F tells us whether the F-feature (i.e. the feature residing in a
functional category) is spelled out and L tells us whether the L-feature is spelled
out. Take the parameter S(F(tns)) as an example. When it is set to 1-0, tense
features must be spelled out in T0 but not on the verb. This is possible only if
the verb does not move to T0 before Spell-Out. Otherwise, the F-feature and the
L-feature will have merged into a single one which can appear on the verb only.
Let us assume that, when spelled out, T0 appears as an auxiliary. We will use
”Aux” to represent such a morphologically realized functional head. The strings
produced by our grammar can now contain Aux in addition to S, O and V. As
a visible head, Aux can undergo head movement just as a verb does. This has a
significant effect on the parameter space of S(M)-parameters.
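The interaction between an S(F) value and overt head movement described in this paragraph can be restated as a small decision procedure. The sketch below is my own paraphrase of the text, not part of the formal system:

```python
def realize_tense(sf_tns, verb_moves_through_T0):
    """Surface realization of tense under S(F(tns)) = F-L.
    '1-0' spells out the F-feature: T0 surfaces as an auxiliary, which
    is possible only if the verb has not moved through T0 before
    Spell-Out (otherwise the F- and L-features have merged and can
    only appear on the verb).  '0-1' spells out the L-feature on the
    verb; '0-0' spells out neither."""
    if sf_tns == '1-0':
        return None if verb_moves_through_T0 else 'aux-[tns]'
    if sf_tns == '0-1':
        return 'v-[tns]'
    return 'v-[]'  # no overt tense morphology

print(realize_tense('1-0', verb_moves_through_T0=False))  # aux-[tns]
print(realize_tense('1-0', verb_moves_through_T0=True))   # None
```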
We have seen in 4.1.1 that many settings of S(M)-parameters are not syntacti­
cally meaningful when the only head that can move is the verb. Now that we have
Aux in addition to the main verb, the picture is very different. Many previously
meaningless settings now start to make sense. Consider the following setting:
(120) [ 1 1 0 1 1 . . . ]
When head movement is restricted to the verb only, this setting is syntactically
meaningless. The verb is required to move to C0, but the movement is blocked by
the value of S(M(tns)) which is set to 0. When a functional head (such as T0) is
spelled out as an Aux, however, there are two heads that can move. Consequently,
the movement required by this setting can now be “split”. The requirement that
some head move to Agr1-0 and C0 can be satisfied by moving Aux to C0, while
the verb can move to Asp0 to satisfy the requirement that some head move to
Agr2-0 and then to Asp0. The setting in (120) has therefore become syntactically
meaningful.
Many other settings of S(M)-parameters are made syntactically meaningful by
the spell-out of functional heads. An exhaustive listing of those settings will take
too much space. For illustration, we will list below just those settings which become
syntactically significant as a result of spelling out TO as an Aux.
(121) (a) [ 0 0 0 1 0 . . . ] (V in situ; Aux to Agr1-0)
(b) [ 0 0 0 1 1 . . . ] (V in situ; Aux to C0)
(c) [ 1 0 0 1 0 . . . ] (V to Agr2-0; Aux to Agr1-0)
(d) [ 1 0 0 1 1 . . . ] (V to Agr2-0; Aux to C0)
(e) [ 1 1 0 1 0 . . . ] (V to Asp0; Aux to Agr1-0)
(f) [ 1 1 0 1 1 . . . ] (V to Asp0; Aux to C0)
As a consequence of these new settings, the number of distinct languages that
can be generated in our parameter space increased significantly. Our computation
shows that with HD-parameters constantly set to I, S(F(tns)) set to 1-0 and
other S(F)-parameters set to 0-0, the number of languages that are generated by
varying the values of S(M)-parameters is 83 instead of the original 31. Those 83
languages are listed in Appendix B.5. To save space, only one possible setting for
each language is given, but this should be enough for the illustration of how each
of the languages could be derived.
4.2.3 HD-Parameters and Functional Heads
Now we try varying the values of HD-parameters. With S(F(tns)) set to 1-0 and
all other S(F)-parameters to 0-0, the number of distinct “languages” that can
be generated with all possible value combinations of HD-parameters and S(M)-
parameters is 117. These languages are listed in Appendix B.6. As in Appendix
B.5, only one setting is given for each language for the purpose of saving space.
The result of this experiment shows again that there are languages that cannot
be generated without HD-parameters. When we ignored the HD-parameter by
allowing for head-initial constructions only, 83 languages were generated (B.5).
Thirty-four additional languages are generated when the two HD-parameters are
brought into play. There are languages which can be generated only if CP is
head-initial and IP is head-final (e.g. #14). There are also languages which are
possible only if CP is head-final and IP is head-initial (e.g. #18). So far there is
no language which cannot be generated unless both CP and IP are head-final. But
there will be such cases when more than one functional head is spelled out, as we
will see.
The incorporation of auxiliaries into our system has given us a richer typology.
In B.2 or B.4, where there is no auxiliary due to the fact that all S(F)-parameters
are ignored, only one pure SVO language13 is distinguished: [ s v, s v o ]. As
a result of setting S(F(tns)) to 1-0 and thus allowing for the appearance of one
13By “pure” I mean there is no scrambling.
auxiliary, we now have four different SVO languages: #1 [ aux s v, aux s v o
], #2 [ s v aux, s v o aux ], #4 [ s aux v, s aux v o ] and #20 [ s v,
s v o ].
Certain predictions are made in this partial typology of SVO languages. Among
other things it is predicted that no T0 auxiliary can appear between the verb and
the object in an SVO language. The sequence v aux o is impossible because of
the following contradiction. The fact that the T0 Aux precedes O shows that IP
must be head-initial. In this case the verb must be higher than T0 in order to
appear to the left of the T0 Aux. However, if the verb is higher than T0, it must
have moved through T0 and become unified with it. In this case, T0 will not be
able to be spelled out by itself as an Aux. Now what if we do find the sequence s
v aux o in natural languages? One potential example can be found in Chinese:
(122) Ta mai le nei-ben shu
he buy Past that book
‘He bought that book.’
The past tense marker le13 looks like a T0 auxiliary that appears between V and
O. However, this may not be a counter-example to the prediction under question.
We may analyze le as a suffix of the verb, i.e. a tense feature which is spelled out
on the verb. The sequence we are looking at here is therefore [ s v-[tns] o ]
rather than [ s v aux o ]. The former can be generated if S(F(tns)) is set to
0-1 (spell out the L-feature only) instead of 1-0.
In some cases a tense marker can be analyzed either as an auxiliary or an affix.
Take the Japanese sentence (123) as an example.
13Le has traditionally been treated as an aspect marker. See Chiu (1992) for arguments for the
treatment of le as a tense marker.
(123) Taroo-ga Hanako-o mi-ta
Taroo-nom Hanako-acc see-past
‘Taroo saw Hanako.’
The tense marker ta can be treated either as a suffix to the verb (the string analyzed
as s o v-[tns]) or as a T0 auxiliary (the string analyzed as s o v aux). The
former is possible in a setting, for example, where S(F(tns)) is set to 0-1 and
the S(M)- and HD-parameters are set to [ 0 0 0 0 0 1 1 0 , i i ]. In this
case, both CP and IP are head-initial. The subject and object move to Agr1spec
and Agr2spec respectively and the verb remains in situ with its tense feature (the
L-feature) spelled out. The latter sequence (s o v aux) is possible, for instance,
when S(F(tns)) is set to 1-0 and the S(M)- and HD-parameters set to [ 1 1 0 0
0 0 0 0 , f f ]. In this case, both CP and IP are head-final. The subject and
object remain VP-internal while the verb moves to Asp0. T0 has not merged with
the verb and it is spelled out as an auxiliary. On the basis of (123), we cannot tell
if Japanese is s o v-[tns] or s o v aux. The interesting observation is that a
language like Japanese which has been regarded as a typical head-final language
can be generated with either a head-initial or a head-final structure in our system.
We have so far only touched upon one type of auxiliary: an overtly realized T0.
Other types of auxiliaries can be obtained by spelling out other functional heads
such as C0 and Asp0. I will not explore these possibilities exhaustively as I did
with the spell-out of T0. Some of them will be mentioned later on in this chapter
when we consider the settings for some specific languages, but it will basically
be left for the reader to figure out what will happen when, say, S(F(pred)) or
S(F(asp)) is set to 1-0.
One desirable property of the present account of auxiliaries is that no extra
machinery is needed to get a much richer typology. The S(M)-parameters and
S(F)-parameters are not specifically designed to account for auxiliaries. We need
them for independent reasons: S(M)-parameters for word order variation and S(F)-
parameters for morphological variation. It just happens that certain value combi­
nations of those two types of parameters predict the occurrence of auxiliaries. In
other words, we are able to accommodate auxiliaries in our parameter space at no
extra cost.
We conclude this section by pointing out that, no matter whether there are
auxiliary movements or not, the verb always moves all the way up to C0 at LF.
For reasons relating to the principle of Full Interpretation (Chomsky 1991, 1992),
auxiliaries are assumed to be invisible at LF. Given a setting like [ 0 0 0 1 1 .
. . ] where there is no overt verb movement but T0 moves overtly to C0 as
an auxiliary, the Aux along with the movement it has undergone will disappear at
LF where the verb will move to Agr2-0, Asp0, T0, Agr1-0 and C0.
4.2.4 S(F)-Parameters
Up till now we have examined the parameter space created by S(M)-parameters,
HD-parameters and one S(F)-parameter (S(F(tns))). When we take all the other
S(F)-parameters into consideration, combining their values with S(M)- and HD-
parameters, the number of possible settings is huge and the number of languages
that can be generated will be in the order of tens of thousands. It is impossible to
list all those languages in the Appendix, not to mention the settings each of those
languages can be generated with. The best we can do here is to look at a small
subset of them and get some idea of what kinds of languages can be generated when
all the S(F)-parameters enter the parameter space. One way to do it is to keep
the values of S(M)-parameters relatively constant while varying the values of HD-
and S(F)-parameters. In the following experiment, we will restrict the possible
settings of S(M)-parameters to just two: [ 0 0 0 1 0 1 0 0 . . . ] and
[ 1 1 1 1 0 1 0 0 . . . ]. The first setting represents the case where there
is auxiliary movement but no overt verb movement; the second is a case where
there is overt verb movement and no auxiliary shows up. The two HD-parameters
will work as usual, with four possible settings. Of the six S(F)-parameters that
have been assumed - S(F(agr)), S(F(case)), S(F(tns)), S(F(asp)), S(F(pred))
and S(F(op)) - two will be kept constant and the other four allowed to vary. The
two S(F)-parameters whose values will be kept constant in the experiment will be
S(F(pred)) and S(F(op)). They will always be set to 0-0. As a result, we will not
see in this experiment any language where C0 or Cspec is spelled out. The other
S(F)-parameters will have some of their values considered. S(F(agr)) will vary
between three values: 0-0 (no agreement features spelled out), 1-0 (agreement
features spelled out on the auxiliary), and 0-1 (agreement features spelled out on
the verb). S(F(case)) will vary between 0-0 (no case feature spelled out) and 0-1
(case feature spelled out on the noun). S(F(tns)) varies between 0-0 (no tense
feature spelled out), 1-0 (tense feature spelled out on the auxiliary), and 0-1 (tense
feature spelled out on the verb). The auxiliary which spells out T0 will continue
to be called Aux. Finally, S(F(asp)) varies between 0-0 (no aspect feature spelled
out) and 0-1 (aspect feature spelled out on the verb).14 Each parameter setting
will now be a vector of 14 coordinates:
14The value 1-0 is impossible with the two settings of S(M)-parameters we are restricted to
here.
[ S(M(agr2)) S(M(asp)) S(M(tns)) S(M(agr1))
S(M(c)) S(M(spec1)) S(M(spec2)) S(M(cspec)) , HD1 HD2 ,
S(F(agr)) S(F(case)) S(F(tns)) S(F(asp)) ]
As we can see, even the S(F)-parameters which are active will not have their full range
of value variation tried out in the experiment. Only a subset of their possible values
is to be considered. All this is done for the purpose of illustrating the range of
variation by looking at a very small sample of the “languages” that are generated.
This small sample should be enough to give us some idea as to what languages can
be accommodated in our parameter space when all parameters are fully active.
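For concreteness, the restricted space just described is a simple cross-product, sketched below. The raw count of value combinations (288) is just arithmetic over the stated ranges; the 48 distinct languages reported in B.7 come from running the actual grammar over these settings, which is not reproduced here.

```python
from itertools import product

SM = [('0','0','0','1','0','1','0','0'),   # Aux moves, V stays in situ
      ('1','1','1','1','0','1','0','0')]   # V moves overtly, no Aux
HD = [('i','i'), ('i','f'), ('f','i'), ('f','f')]
S_F_AGR  = ['0-0', '1-0', '0-1']  # none / on the auxiliary / on the verb
S_F_CASE = ['0-0', '0-1']         # none / on the noun
S_F_TNS  = ['0-0', '1-0', '0-1']  # none / on the auxiliary / on the verb
S_F_ASP  = ['0-0', '0-1']         # none / on the verb

space = [sm + hd + (agr, case, tns, asp)
         for sm, hd, agr, case, tns, asp
         in product(SM, HD, S_F_AGR, S_F_CASE, S_F_TNS, S_F_ASP)]
print(len(space))  # 288 raw 14-coordinate settings; the grammar
                   # filters the syntactically inconsistent ones, and
                   # the remainder generate the 48 languages of B.7
```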
The value combinations and the languages that are generated in this very re-
stricted parameter space are given in Appendix B.7 (only one of the possible settings
is shown for each language). Forty-eight distinct languages are generated. These
languages form a small subset of the SVO and SOV languages that can be gener­
ated in our system.
Looking at the strings in each language, we see that every terminal symbol
in these strings has a list attached to it. The list contains information about
inflectional morphology. The appearance of a feature in the list indicates that this
feature is morphologically visible (spelled out). Any symbol that has an empty
list attached to it has no overt inflectional morphology. The list can contain more
than one feature when the terminal symbol is inflected for more than one feature.
The feature list is unordered, which means that the order in which the features
are listed has no implication for the order of affixation or whatever other ordering.
A symbol like v-[agr,tns] does not necessarily mean v-agr-tns where agr and
tns are actual morphemes attached to the verb. It only indicates that the verb is
inflected for those two features. How the inflection is morphologically represented
is not our concern here.
The symbols that appear in B.7 and the syntactic entities they represent are
displayed in (124).
(124) s-[]              a subject NP with no case-marking
s-[c(1)]          a subject NP overtly marked for Case 1
o-[]              an object NP with no case-marking
o-[c(2)]          an object NP overtly marked for Case 2
v-[]              a verb with no inflection
v-[agr]           a verb inflected for agreement
v-[tns]           a verb inflected for tense
v-[asp]           a verb inflected for aspect
v-[agr,tns]       a verb inflected for both agreement and tense
v-[agr,asp]       a verb inflected for both agreement and aspect
v-[tns,asp]       a verb inflected for both tense and aspect
v-[agr,tns,asp]   a verb inflected for agreement, tense and aspect
aux-[tns]         the T0 auxiliary
aux-[tns,agr]     the T0 auxiliary inflected for agreement
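Because the feature list is unordered, a terminal symbol is fully characterized by a category label plus a set of features. A small parsing sketch (my own illustration of the notation, not code from the thesis):

```python
def parse_symbol(symbol):
    """Split a terminal like 'v-[agr,tns]' or 's-[c(1)]' into a
    category label and an *unordered* set of spelled-out features.
    'v-[]' parses to ('v', frozenset()): no overt inflection."""
    category, _, rest = symbol.partition('-[')
    features = rest.rstrip(']')
    feats = frozenset(f.strip() for f in features.split(',') if f.strip())
    return category, feats

assert parse_symbol('v-[agr,tns]') == ('v', frozenset({'agr', 'tns'}))
assert parse_symbol('v-[tns,agr]') == parse_symbol('v-[agr,tns]')  # unordered
assert parse_symbol('aux-[tns]')   == ('aux', frozenset({'tns'}))
```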
From the sample in B.7, which mainly illustrates the range of morphological varia­
tion in our system, and B.4, which illustrates word order variation, we can tell how
many typological distinctions can be made in the parameter space. With all the
parameters working together, there are more than 200,000 syntactically meaning­
ful parameter settings and approximately 30,000 languages (sets of strings) can be
distinguished. We can get languages with almost any basic word order and with
many different types of inflectional morphology. We should list all the “languages”
that can be generated in this parameter space and try to match each of them with
a natural language. Given the huge number of languages in the parameter space,
such listing is impossible in a thesis of the present size. However, to get a better
understanding of the generative power of our present system, we will try to fit at
least some real languages into the space. In what follows, therefore, we will choose
some languages for case study. These case studies will put us in a better position
to judge the potential and limitations of the present model.
4.3 Case Studies
In this section, we will look at a few natural languages and try to see to what
extent they can be accommodated in the parameter space we have assumed. It
is unrealistic to expect our parameter space to account for everything of any real
language. There are many reasons why this is unrealistic. The grammar we have
been using is only a partial UG. There are other modules of UG which have not
been taken into consideration so far. We are therefore bound to run into facts
that cannot be explained until our model is interfaced with those other modules.
The present module is only concerned with basic word order and basic inflectional
morphology. Even in these domains we have further restricted ourselves to sim­
ple declarative sentences whose only components are S, V, O, Aux and possibly
some adverb. Consequently, the “languages” generated in our parameter space can-
not be exact matches of natural languages. However, this does not prevent those
“languages” from resembling certain natural languages or some subsets of natural
languages. When we say that a certain language is accommodated in our param­
eter space, we mean that there is a parameter value combination that generates a
“language” which is a rough approximation of this natural language. We have a
long way to go before we can account for everything with our model, but there is
no reason why we should not find out how much can be done in the current
partial model. In what follows, we will be considering some subsets of English,
Japanese, Berber, German, Chinese and French. For convenience we will refer
to these subsets as English, Japanese, etc., meaning some small subsets of those
languages.
4.3.1 English: An SVO Language
The first question we have to deal with is how to represent English as a set of
strings in the format we have been using here. In terms of word order, English is
SVO. In addition, adverbs of the often type appear before the verb. The order OSV
is found in topicalization. Morphologically, English pronouns are overtly marked
for case. The verb in English shows overt tense and subject-verb agreement. We
may therefore tentatively describe English as (125).
(125) s-[c(1)] (often) v-[agr,tns,asp]
s-[c(1)] (often) v-[agr,tns,asp] o-[c(2)]
o-[c(2)] s-[c(1)] (often) v-[agr,tns,asp]
The language in (125) can be generated with many different parameter settings.15
One of them is (126).
(126) [ 1 1 0 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
15The fact that this language can be generated with so many different settings can have inter-
esting implications for learning. These issues are addressed in Chapter 5.
Other settings include:
(127) (a) [ 0 0 0 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(b) [ 1 0 0 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(c) [ 1 1 1 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(d) [ 0 0 0 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(e) [ 1 0 0 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(f) [ 1 1 0 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(g) [ 1 1 1 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(h) [ 0 0 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(i) [ 1 0 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(j) [ 1 1 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(k) [ 1 1 1 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(l) [ 0 0 0 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
(m) [ 1 0 0 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
(n) [ 1 1 0 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
(o) [ 1 1 1 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
etc.
According to the setting in (126), English is a strictly head-initial language. At
Spell-Out, the verb moves to Asp0, the subject NP to Agr1spec and the object
NP to Agr2spec. Furthermore, one of the XPs may optionally move to Cspec. We
have the SVO order when Cspec is unfilled or filled by the subject. The OSV order
occurs when the object moves to Cspec. If often appears in the sentence, it may
go to Cspec instead of the subject or object. We then have the strings in (128) in
addition to the ones in (125).
(128) often s-[c(1)] v-[agr,tns,asp]
often s-[c(1)] v-[agr,tns,asp] o-[c(2)]
Morphologically this setting requires that the agreement features be spelled out
on the verb, the case features spelled out on the noun, and the tense and aspect
features spelled out on the verb.
Several questions arise immediately. First of all, the SVO and OSV orders are
given equal status in (125). This seems undesirable for it fails to reflect the fact that
the SVO order is more basic and occurs far more frequently than the OSV order.
But this problem is more apparent than real. With our current setting, the OSV
order occurs only if the object has undergone the optional A-movement to Cspec.
We know from the principle of Procrastinate that, given the option of whether to
move overtly or not, the default choice is always “do not move”. Therefore the
object will not move to Cspec unless this default decision is overridden by some
other factor such as discourse context. As a result, we will find SVO most of
the time and find OSV only in those situations where topicalization is required.
Things would be different if we have the setting in (129) or any of the settings in
(127(h))—(127(o)).
(129) [ 1 1 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
This setting can also account for the strings in (125), but the movement to Cspec
is obligatory. So Cspec must always be filled, either by the subject NP or object
NP. If this is the setting for English, we will have to find some other explanation
for the peripheral nature of the OSV order. We have to say, for example, that
the subject NP is the topic of the sentence in most of the cases. Consequently, the
subject NP moves to Cspec far more frequently than the object NP in spite of the
fact that both S and O have equal rights to move there.
The second question concerns inflectional morphology. The values of S(F)-
parameters are currently assumed to be absolute. Each parameter is set to a
single value and no alternation between different values is permitted. This seems
to create a number of problems:
(130) We have set S(F(case)) to 0-1 but not every NP in English is overtly marked
for case. Only the pronouns are.
(131) S(F(asp)) is set to 0-1 but not every verb seems to inflect for aspect.
(132) S(F(agr)) and S(F(tns)) are set to 0-1, indicating that agreement and
tense are to be spelled out on the verb only. This seems contrary to the fact
that these features can also be spelled out in an auxiliary in English. This
actually leads to the more general problem that the setting in (126) does not
let auxiliaries occur in this language.
We will deal with these problems one by one.
The problem in (130) may be solved by refining our parameter system. So far we
have not tried to differentiate various types of NPs. The S(F(case)) parameter,
which is associated with the whole class of NP, is blind to the distinction, for
example, between pronouns and other NPs. Since the value of this parameter does
not seem to apply across the board to all types of NPs (at least in English), we
may need to distinguish two S(F(case)) parameters, one for pronouns and one
for other NPs. Once this distinction is made, the absolute nature of the S(F)-
parameter values is no longer a problem. In fact, alternation of parameter values
should not be permitted, for pronouns must be case-marked in English and other
NPs must not be case-marked. The S(F(case)) parameter for pronouns is always
set to 0-1 and that for other NPs always to 0-0. If the learner is able to distinguish
between pronouns and other NPs, the parameters can be correctly set. How the
learner becomes aware of this distinction is of course a different learning problem
that needs to be addressed separately.
The problem in (131) is not a problem if we assume that a verb is inflected
as long as it is morphologically different from other verbs. With regard to aspect
marking, the progressive aspect is spelled out as -ing and the perfective aspect as
-ed. When a verb has neither -ing nor -ed attached to it, we know that this verb
has an aspect feature which is neither progressive nor perfective. In this sense, this
verb has had its aspect features overtly marked through zero inflection.
Now we look at the problem in (132). The setting in (126) does not allow for
the following set of strings which are actually found in English.
(133) s-[c(1)] aux-[agr,tns] (often) v-[asp]
s-[c(1)] aux-[agr,tns] (often) v-[asp] o-[c(2)]
o-[c(2)] s-[c(1)] aux-[agr,tns] (often) v-[asp]
Each of these strings contains an auxiliary which spells out the agreement and
tense features of Agrl and T. Verbs are inflected for aspect only. For this set of
strings to be generated, S(F(agr)) and S(F(tns)) must be set to 1-0 rather than
0-1. In addition, S(M(agr1)) may have to be set to 1 so that the T0 auxiliary
can move to Agr1 to pick up the agreement features. In other words, we need the
following setting.
(134) [ 1 1 0 1 0 1 1 1/0 , i i , 1-0 0-1 1-0 0-1 ]
To generate the strings in both (125) and (133), we seem to need a setting which
is a merge of (126) and (134), such as the following:
(135) [ 1 1 0 1/0 0 1 1 1/0 , i i ,
1-0/0-1 0-1 1-0/0-1 0-1 ]
This setting raises several questions. First of all, the fact that S(M(agr1)) is now
set to 1/0 means that head movement can be optionally overt as well. This option
is not available in our minimal model, but what we have seen here suggests that
we may have to let the S(M)-parameters for head movement have the value 1/0
just like S(M(cspec)), S(M(spec1)) and S(M(spec2)). There is other evidence
in English which shows that head movement can also be optional. So far we
have limited our attention to declarative sentences only. As soon as we look at
questions, we find that S(M(c)) must be set to 1/0 in English: head movement
from Agr1-0 to C0 occurs overtly in questions but not in statements. If so, this
movement will be covert unless the principle of Procrastinate is overridden by some
other requirement, such as the need of checking the predication feature (which is in
C0) before Spell-Out when this feature has the value “+Q”. In any case, it seems
necessary that optional head movement should be incorporated into our parameter
system.
Another question concerns the fact that S(F(agr)) and S(F(tns)) are set to
1-0/0-1. This setting is intended to represent the fact that (a) agreement and
tense features must be spelled out in English, and (b) we can spell out either the
F-feature or the L-feature, but not both. When the F-feature is spelled out, an
auxiliary appears and this auxiliary may move to Agr1-0 or C0. When the L-
feature is spelled out, the agreement and tense features appear on the verb and
there is no auxiliary. The question is why we have to spell out the F-feature in
some cases but the L-feature in some others. We do not find an answer to this
question in our minimal model here. However, once this model is interfaced with
other modules of the grammar, the choice may have an explanation. It may turn
out that negation requires the spell-out of F-features. This might explain why
(136) and (137) are grammatical while (138) and (139) are not.
(136) John does not love Mary.
(137) John did not see Mary.
(138) John not loves Mary.
(139) John not saw Mary.
It is also possible that the F-features of agreement and tense must be spelled out
when the aspect feature is “strong” or “marked” in some sense. In English, this
seems to happen when the aspect is progressive or perfective, as shown in (140)
and (141).
(140) John is writing a poem.
(141) John has written a poem.
The auxiliaries be and have here are treated as overtly realized functional heads
and they are represented as “aux-[agr,tns]” in our system. Why “aux-[agr,tns]” is
spelled out as be in some cases, have in some other cases, and do in most of the
other cases has to be explained by theories which have not yet been incorporated
into our system.
4.3.2 Japanese: An SOV language
Japanese is a verb-final language with a considerable amount of scrambling. The
subject and the object must precede the verb but they can be ordered freely,
resulting in SOV or OSV. Japanese NPs are always case-marked16 and Japanese
verbs come with tense markers.17 There does not seem to be any overt agreement
whose function is grammatical.18
In terms of surface strings, Japanese can be described as either (142) or (143)
depending on whether we treat the tense marker as a suffix or grammatical particle.
In (142) the tense marker ta is treated as a verbal suffix while in (143) it is treated
as a grammatical particle which spells out the tense feature in TO.
(142) s-[c(1)] v-[tns]
s-[c(1)] o-[c(2)] v-[tns]
o-[c(2)] s-[c(1)] v-[tns]
(143) s-[c(1)] v-[] aux
s-[c(1)] o-[c(2)] v-[] aux
o-[c(2)] s-[c(1)] v-[] aux
These patterns are illustrated by the Japanese sentences in (144), (145) and (146).
(144) Taroo-ga ki-ta
Taroo-Nom come-Past
‘Taroo came.'
16Except in very casual speech.
17It is controversial whether there are aspect markers in Japanese. What we mean by tense
marker here will include the aspect marker.
18There is, however, agreement with respect to levels of honorificness and politeness.
(145) Taroo-ga tegami-o kai-ta
Taroo-Nom letter-Acc write-Past
‘Taroo wrote a letter.'
(146) tegami-o Taroo-ga kai-ta
letter-Acc Taroo-Nom write-Past
‘Taroo wrote a letter.'
The languages in (142) and (143) can be generated with the settings in (147) and
(148) respectively.
(147) [ 0 0 0 0 0 1 1 1/0 , i i , 0-0 0-1 0-1 0-0]
(148) [ 0 0 0 0 0 1 1 1/0 , f f , 0-0 0-1 1-0 0-0 ]
It is required in (147) as well as (148) that both the subject and object move to
their case positions (Agr1spec and Agr2spec respectively) and the verb remains
in situ. However, CP and IP are head-initial in (147) but head-final in (148). In
addition, the value of S(F(tns)) is different in the two settings. In (147) it is set
to 0-1 which requires the tense feature to be spelled out on the verb as an affix
(spelling out the L-feature). In (148), on the other hand, this feature is to be
spelled out in T0 by itself (spelling out the F-feature). The two settings produce
very different tree structures, as shown in (149(a)) and (149(b)).
(149) [Tree diagrams omitted: (a) the head-initial structure generated with (147), with Taroo-ga in Agr1spec, tegami-o in Agr2spec and kai-ta in situ in VP; (b) the head-final structure generated with (148), with kai in VP and ta in T0]
Japanese Trees
It is not possible to tell on the basis of (144), (145) and (146) which of the
two structures is more likely to be the correct one for Japanese. If (149(a)) is the
correct one, Japanese will not be a head-final language at all, contrary to common
belief. What this shows is that a verb-final language is not necessarily a head-
final one. When we look at more data from Japanese, however, we begin to see
evidence that (149(b)) is probably the right choice. The following two sentences
are examples in support of the setting in (148).
(150) Taroo-wa ki-ta ka
Taroo-Topic come-Past Q-marker
‘Did Taroo come?’
(151) Hanako-ga asoko de nai-te i-ru
Hanako-Nom there at cry-Cont be-Nonpast
‘Hanako is crying there.'
The question marker ka in (150) comes at the end of the sentence. The only way
to account for it in (149(a)) is to treat ka as a verbal suffix attached to ki. In
other words, two features are spelled out on this verb, ta being the tense feature and
ka being a predication feature which will be checked in C0 at LF. However, this
analysis does not seem to accord with the intuition of native Japanese speakers
who usually regard ka as a separate word. If ka is indeed not part of the verb,
we will have to adopt the analysis in (149(b)) where ta and ka are grammatical
particles in TO and CO respectively. Turning to (151), we again see the plausibility
of (149(b)). To maintain (149(a)) we would have to say that nai-te-i-ru forms a
single big verbal complex, which is again a bit far-fetched. In (149(b)), however,
everything is comfortably accounted for: nai-te is in V0 and i-ru is in T0. It is
also possible that nai is in V0, te in Asp0 and i-ru in T0.
4.3.3 Berber: A VSO Language
Berber is usually considered a VSO language, but other orders are also found.
The most common alternative order is SVO which is usually used in topicaliza-
tion (Sadiqi 1986). Here are some examples:19
19Examples from Sadiqi (1989)
(152) i-ara hmad tabrat
3ms-wrote Ahmed letter
'Ahmed wrote the letter.’
(153) hmad i-ara tabrat
Ahmed 3ms-wrote letter
'Ahmed wrote the letter.’
In terms of morphology, there is no overt case marking in Berber, but verbs are
inflected for tense/aspect and agreement, as we can see in (152) and (153). This
language thus has the following set of strings in our abstract representation:20
(154) v-[agr,tns,asp] s-[]
v-[agr,tns,asp] s-[] o-[]
s-[] v-[agr,tns,asp]
s-[] v-[agr,tns,asp] o-[]
This set of strings can be generated with the parameter setting in (155).
(155) [ 1 1 1 1 0 1/0 0 1/0 , i i , 0-1 0-0 0-1 0-1 ]
There are many alternative settings that are compatible with these strings. Here
are some examples:
[ 1 0 0 0 0 1/0 0 1/0 ]    [ 1 1 0 0 0 1/0 0 1/0 ]
[ 1 1 1 0 0 1/0 0 1/0 ]    [ 1 1 1 1 1 1 0 1/0 ]
According to the setting in (155), the verb must move overtly to Agr1 and the
object must stay in situ. The subject, however, can optionally move to Agr1spec
and then to Cspec. If the principle of Procrastinate is not overridden by other
20The fact that the feature list is attached to the verb on the right in our representation does
not have any implication as to whether the features are spelled out as prefixes or suffixes. It
simply means those features are realised on the verb. They can appear as any kind of affix (prefix
or suffix) or other forms of verbal conjugation.
considerations, the subject will not move and the word order is VSO. When other
factors call for overt movement, the subject can move to Agr1spec or Cspec. In
either case the order is SVO. The tree structures for (152) and (153), according to
(155), are (156(a)) and (156(b)) respectively.
(156) [Tree diagrams omitted: (a) the VSO structure for (152), with i-ara in Agr1 and hmad and tabrat lower in the tree; (b) the SVO structure for (153), with the subject moved above the verb]
Berber Trees
One general question that can be raised at this point is whether the order in which
the inflectional features appear in the list can imply anything about the actual
sequence of affixes. We may be tempted, at least in Berber, to let our feature list
have this extra ordering information. For instance, we may let v-[tns,agr]
or [agr,tns]-v mean that the affix representing agreement appears outside the
affix representing tense, as in the case of (152) and (153). Our string representation
would certainly be more informative if the ordering is encoded there. If the Mirror
Principle holds, this kind of encoding will not only be desirable but easy as well.
Unfortunately, the Mirror Principle does not seem to hold everywhere, not in
Berber at least. If we look at (152) and (153) only, we may conclude that agreement
occurs outside tense. The verb is inflected for tense and the agreement affix is
added to the tensed verb. However, we also find Berber sentences where the order
is reversed. (157) is such an example.21
(157) ad-y-acgh Mohand ijn teddart
will(Tns)-3ms-buy Mohand one house
‘Mohand will buy a house.'
In (157) tense clearly occurs outside agreement. To avoid such problems, we will
keep to our assumption that the feature list attached to each terminal symbol is
unordered. They only tell us what features are spelled out in some form of verbal
inflection. The order of affixation has to be handled separately. As a matter of fact,
we cannot exclude the possibility that the ordering is arbitrary and the learner has
to acquire it independently.
4.3.4 German: A V2 Language
German is a language where root clauses and subordinate clauses have different
word orders. In root clauses, the verb must appear in second position, the first
21Example from Ouhalla (1991)
position being occupied by a subject NP, an object NP, an AdvP, or any other XP.
This is illustrated in (158), (159) and (160).22
(158) Karl kaufte gestern das Buch
Karl bought yesterday that book
‘Karl bought that book yesterday.’
(159) das Buch kaufte Karl gestern
that book bought Karl yesterday
‘That book Karl bought yesterday.’
(160) gestern kaufte Karl das Buch
yesterday bought Karl that book
‘Yesterday Karl bought that book.’
Assuming that German NPs are inflected for case23 and German verbs are inflected
for tense, aspect and agreement, we can abstractly represent German root clauses
as the set of strings in (161) (where adv stands for an AdvP like yesterday which
is presumably left-adjoined to T1.)
(161) s-[c(1)] v-[agr,tns,asp] (adv)
adv v-[agr,tns,asp] s-[c(1)]
s-[c(1)] v-[agr,tns,asp] (adv) o-[c(2)]
o-[c(2)] v-[agr,tns,asp] s-[c(1)] (adv)
adv v-[agr,tns,asp] s-[c(1)] o-[c(2)]
This set of strings can be generated with the following parameter setting:
(162) [ 1 1 1 1 1 1 1 1 , i f , 0-1 0-1 0-1 0-1 ]
22Examples from Haegeman (1991)
23The case marking shows up on the determiner, though.
This setting requires that every movement be overt. By Spell-Out, the verb must
move to C0, the NPs to Agrspecs, and one of the XPs to Cspec. We have (158)
if the subject NP moves to Cspec, (159) if the object does, and (160) if the AdvP
does. The setting also specifies that CP is head-initial and IP is head-final.
Incidentally, the structures predicted by this setting can also account for the
fact that gestern (yesterday) can appear right after the verb in (158) but not in
(159). We have assumed that a time adverb like yesterday can be left-adjoined to
T1. In (158) the object has moved to Agr2spec but not to Cspec. This is why we
can have the order SV(Adv)O. In (159) the object has moved to Cspec and the
subject to Agr1spec. The resulting order can only be OVS(Adv), while OV(Adv)S
is impossible. The tree structures for (158) and (159) are in (163(a)) and (163(b)).
(163) [Tree diagrams omitted: (a) the structure of (158), with Karl in Cspec and kaufte in C0; (b) the structure of (159), with das Buch in Cspec, kaufte in C0, Karl in Agr1spec and gestern left-adjoined to T1]
German Trees
German is similar to English in that the tense and agreement features are spelled
out on the verb in some cases but in an auxiliary in others. When an auxiliary
exists in a sentence, the auxiliary is in second position and the verb in final position.
Here is an example:
(164) Gestern hat Karl das Buch gekauft
yesterday has Karl that book bought
‘Karl bought that book yesterday.’
Obviously, the setting in (162) will fail to account for the word order found in this
sentence. This problem may need to be handled in the same way as we handled the
English case. We can assume that tense and agreement features must be spelled
out in German, either in an auxiliary (spelling out the F-feature) or on the verb
(spelling out the L-feature), but not both. When the F-feature is spelled out, an
auxiliary appears. This auxiliary moves to C0 and the verb moves to Asp0 only.
When the L-feature is spelled out, there is no auxiliary and the verb will move to
C0. Why we choose to spell out the F-features in some cases but the L-feature in
some others is again an open question which cannot be answered until our model
is interfaced with other components of the grammar.
We have so far only discussed the word order in German root clauses. The
order in subordinate clauses is SOV rather than V2, as shown in (165) and (166).
(165) dass Karl gestern dieses Buch kaufte
that Karl yesterday this book bought
‘that Karl bought this book yesterday.’
(166) dass Karl gestern dieses Buch gekauft hat
that Karl yesterday this book bought has
‘that Karl bought this book yesterday.’
This fact again forces us to consider the possibility that some S(M)-parameters for
head movement (in this case S(M(c))) must be allowed to have the value 1/0. If
S(M(c)) is set to 1/0 in German, then the verb can move to Agr1-0, resulting in
an SOV order, or move to C0, resulting in a V2 order. Apparently, the principle
of Procrastinate is overridden in the root clause. We may conjecture that the
predication feature must be checked before Spell-Out in the root clause but not
in the subordinate clause. This checking requirement overrides the principle of
Procrastinate and forces the verb to move to C0 overtly in the root clause.
4.3.5 Chinese: A Head-Final SVO Language
We have seen in (104), (105) and (106) that Chinese is a scrambling language, its
possible word orders being SVO, SOV and OSV. All these orders can be generated
with a parameter setting like (167) where both CP and IP are head-initial.
(167) [ 0 0 0 0 0 1 1/0 1 , i i , 0-0 0-0 0-1 0-1 ]
But this setting is not able to account for the following sentence where we find
sentence-final particles which cannot possibly be spelled out on the verb.
(168) Ni kan-wan nei-ben shu le ma
you finish reading that book Asp Q/A
‘Have you finished reading that book?’ or
‘You have finished reading that book, as I know.’
In this sentence le and ma are not attached to the verb, since the object intervenes
between the verb and those functional elements. A fair assumption is that le
and ma are some overtly realized functional heads, the former being the head
of AspP and the latter the head of CP. (This presupposes that S(F(asp)) and
S(F(pred)) are both set to 1-0.) These elements cannot appear in sentence-final
positions unless both IP and CP are head-final. What this suggests is that Chinese
is a head-final language (in terms of CP and IP). The structure for (168) should
be (169) which illustrates how an SVO string can be generated in a head-final
structure.
(169) [Tree diagram omitted: a head-final CP/IP structure for (168), with Ni in Cspec, kan-wan in V, nei-ben shu as the object, le in Asp0 and ma in C0, illustrating how an SVO string can be generated in a head-final structure]
A Chinese Tree
4.3.6 French: A Language with Clitics
In this case, we are not interested in the French language as a whole, but just its
cliticization. Since we are only dealing with simple sentences with two arguments,
only sentences like the one in (170) will be considered.
(170) Je le-visitais
I him-visited
‘I visited him.'
There is a huge amount of literature on the analysis of clitics like le here, but we will
not try to go through it in this short section. What I want to point out is that our
current model may offer an alternative account of this well-known phenomenon.
Recall that we observed in Chapter 3 that case and agreement features can be
spelled out either on NPs or on the verb. (See Borer (1984) and Safir (1985) for
similar ideas.) The parameter S(F(case)) has four values: 0-0 (no case feature
spelled out), 0-1 (case features spelled out on the NP), 1-0 (case features spelled
out on the verb)24, and 1-1 (case feature spelled out on both the NP and the verb).
We have further assumed that, when spelled out on the verb, the case-features
together with the agreement features show up as clitics.
Now let us suppose that the S(F(case)) parameter has a value which is opera­
tive only when the object NP is a pronoun. Then the four values of this parameter
will have the following effects. When it is set to 0-0, no case features are spelled
out. Since a pronoun is nothing more than a set of case and agreement features,
no pronoun will be visible in this case. We call this pro-drop. When S(F(case))
is set to 0-1, the features are spelled out as a pronoun. In cases where the value
is 1-0, the features appear as a clitic instead of a pronoun. The features spelled
24The features to be spelled out in this case are the F-features which the verb can pick up and
carry along when it moves through the functional categories.
out here are some F-features of Agr2. The verb acquires those features when it
moves through Agr2-0 on its way to Agr1-0. In this sense, clitics are affixes of the
verb which spell out some case/agreement features of the verb. This explains why
clitics must be adjacent to the verb. Finally, we may have the value 1-1 which
requires that the features be spelled out on both the verb and the NP. As a result,
we may see the clitic as well as the pronoun, a case of clitic doubling. The value
of S(F(case)) seems to be 1-0 in French.
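The four-way choice just described can be summarized as a simple lookup, restated here for convenience (the value is assumed, as in the text, to be restricted to pronominal objects):

```python
# Surface realization of a pronominal object under the four values of
# the (pronoun-restricted) S(F(case)) parameter, as described above.
OBJECT_PRONOUN_REALIZATION = {
    '0-0': 'pro-drop',            # no case/agreement features visible
    '0-1': 'overt pronoun',       # features spelled out as an NP
    '1-0': 'clitic on the verb',  # F-features carried along by V
    '1-1': 'clitic doubling',     # both clitic and pronoun appear
}
print(OBJECT_PRONOUN_REALIZATION['1-0'])  # the French value
```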
In the GB literature there are basically two different accounts of cliticization.
The base-generation account has the view that clitics are base-generated on the
verb. The movement account argues that clitics are generated in NP positions
and later get attached to the verb through movement. Recently syntacticians
have been trying to reconcile the two approaches and have proposed the view that
cliticization involves both base generation and movement (e.g. Sportiche 1992).
This is intuitively very similar to our present analysis. Clitics are base generated
in the sense that the features are verbal features and they show up wherever the
verb goes. They also involve movement because the verb has to move through
Agr2-0 and the object NP has to move to Agr2spec. While the verb is in Agr2-0
and the NP in Agr2spec, the verb will get its case/agreement features checked
against the object NP through spec-head agreement. It will take more work to
see, however, whether the present account can cover all the empirical data
concerning cliticization.
The case studies above have given us some more concrete ideas as to what
linguistic phenomena can be accommodated in our parameter space. The studies
are incomplete, however, because the list of languages that can be studied this
way is an open-ended one. We should have looked at many more languages but a
complete survey is beyond the capacity of the present thesis.
4.4 Summary
In this chapter we have laid out the parameter space in our model. We have had
a bird's-eye view of all the possible languages in this space as well as a worm's-
eye view of some specific languages. We have seen that the present parameter space
is capable of accounting for a wide range of linguistic facts. In terms of word
order and inflectional morphology, most natural languages can find a corresponding
“language” in this parameter space. We have also discovered, however, that our
present system has its limitations. In order to provide a more complete account of
any natural language, the system must be enriched in the future.
Chapter 5
Setting the Parameters
This chapter will be devoted to the issue of learnability. We have defined an exper­
imental grammar with a set of parameters. We have also seen that the parameter
space thus created is capable of accounting for a wide range of cross-linguistic
variation. The next question is how a learner can acquire different languages by
setting those parameters. Is every language in our parameter space learnable? Is
there a learning algorithm whereby the parameters can be set correctly? If so,
what properties does this learning algorithm have? These are the questions that
will be addressed in this chapter.
We will see that the syntactic model we have adopted has many interesting
and often desirable learnability properties. It is found that all the languages in our
parameter space are learnable through a particular parameter setting algorithm.
Every possible language can be correctly identified in spite of the wide-spread ex­
istence of subset relations and the non-existence of negative evidence. The param­
eter setting algorithm is a variation of Gold’s (1967) identification by enumeration
learning paradigm, where the order in which hypotheses are enumerated is derived
from Chomsky’s (1992) principle of Procrastinate. The algorithm is implemented
in Prolog, which has enabled us to perform an exhaustive search of our parameter
space. The results are encouraging. Not only are all the languages identifiable, but
the learning process is incremental and independent of the order in which input
data is presented. There is even a possibility that the learning procedure may
provide a mechanism for language change. Let us now get down to the details.
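Before getting into those details, it may help to fix the overall shape of such a learner. The following is a schematic Python rendering of identification by enumeration; the actual implementation in this thesis is in Prolog, and enumerate_settings and parses are hypothetical stand-ins for the Procrastinate-ordered enumeration of settings and the parser respectively.

```python
def learn(input_strings, enumerate_settings, parses):
    """Schematic identification by enumeration (after Gold 1967):
    keep the current hypothesis while it parses the input; on failure,
    advance to the next setting in the enumeration that accounts for
    the data seen so far.  If the enumeration order makes subset
    languages come before their supersets, no negative evidence is
    needed.  (Shown here retaining all data for clarity; the procedure
    discussed in this chapter is in fact incremental.)"""
    hypotheses = enumerate_settings()  # Procrastinate-ordered iterator
    current = next(hypotheses)
    seen = []
    for s in input_strings:
        seen.append(s)
        while not all(parses(current, t) for t in seen):
            current = next(hypotheses)  # StopIteration: nothing fits
    return current
```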
5.1 Basic Assum ptions
The study of language acquisition is an enormous project which involves many sub-
areas of research. We are not trying to look at every aspect of language learning,
however. The area we will focus on is a sub-part of syntactic acquisition1. It is
assumed that there are learning modules that are responsible for the acquisition
of other linguistic knowledge such as phonology, morphology and semantics. The
learning activities to be discussed here will thus take place in an idealized situation
where other kinds of learning are supposed to be taken care of. We will take a
number of things as given, and the success or failure of the learning algorithm
is to be viewed against the background of those given assumptions. It is therefore
important to state those assumptions explicitly at the beginning. Many of the
assumptions are standard ones which have been in the literature for a long time
(Wexler and Hamburger 1973, Wexler and Culicover 1980, Pinker 1979, Berwick
1985, Lightfoot 1991, Gibson and Wexler 1993, among many others). But they
need to be specified in the context of the present syntactic model.
5.1.1 Assumptions about the Input
The input to the learning module we are concerned with comprises strings which
are abstract representations of Degree-0 declarative sentences. The degree of a
1For a general review of formal approaches to syntactic acquisition, see Atkinson (1992).
sentence represents the depth of embedding of the sentence. A Degree-0 sentence is
a simple sentence with zero embedding.2 Besides, each input string to our learning
system is assumed to be a CP, i.e. a complete sentence. In order to abstract away
from the phonology and morphology of any particular language and represent all
possible languages in a neutral and uniform way, we will let every symbol in the
string be made of a category label plus a feature list. Such input strings can be
called labeled strings3, but they are unusual in that the actual words are absent,
with the strings consisting of the category labels and features only. It is assumed
that some other learning mechanisms can enable the learner to segment a string
correctly and figure out the grammatical category of each individual symbol. In
addition, the learner is supposed to be able to identify the argument structure of
each sentence. He can presumably differentiate transitive verbs from intransitive
ones and distinguish between subject and object NPs. How such "tagging” (i.e.
the assignment of category label to each word) is achieved is not the concern of our
present study. Finally, it is assumed that the learner is capable of analyzing the
morphological structures of the target language. She can find out, for instance, that
the word does in English is overtly marked for the tense and agreement features.
The category labels that can appear in the input strings include the following:
• s (subject NP)
• o (object NP)
• iv (intransitive verb)
• tv (transitive verb)
• aux (auxiliary or grammatical particle)
• often (adverb of the "often" type)

²See Wexler and Culicover 1980, Morgan 1986 and Lightfoot 1989, 1991 for discussions of the significance of Degree-0, Degree-1 and Degree-2 sentences in language acquisition.

³A labeled string is a phrase where every word as well as the phrase itself has a category label. A sentence like John works is a labeled string when John is marked as NP, works as V, and the whole string as S or CP.
Each category label has a list of features attached to it. The features that appear in the list represent overt morphology, i.e. features that are spelled out. For instance, a string like s-[c1] aux-[agr,tns] v-[asp] o-[c2] represents a sentence where the subject and object are overtly marked for different cases, the auxiliary overtly inflects for agreement and tense, and the verb has overt inflection for aspect. A feature in a language is considered overtly represented if this feature is morphologically realized at least in some cases. The auxiliary in English will therefore be coded as aux-[agr,tns], since this is the case with does. The full array of possibilities has been illustrated in 4.2.3. It is taken for granted that the learner is able to identify the inflectional morpheme(s) in each word and the feature(s) encoded in each morpheme.
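For concreteness, a labeled string of this kind can be pictured as a Prolog term. The sketch below is only illustrative; the actual encoding used by the programs in Appendix A.4 may differ.

    % A hypothetical Prolog encoding of the labeled string
    %   s-[c1] aux-[agr,tns] v-[asp] o-[c2]
    % Each symbol is a CategoryLabel-FeatureList pair; the words
    % themselves are absent, as described above.
    example_input([s-[c1], aux-[agr,tns], v-[asp], o-[c2]]).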
The language to be acquired by a learner is composed of a set of strings. These strings are to be presented in an unordered fashion. The learner can encounter any string at any time. It is assumed that every string in the set, each of which represents a type of sentence, will eventually appear in the input, and each string can be encountered more than once.
All the input strings are supposed to be grammatical sentences in the target language. No sentence is marked "ungrammatical" to tell the learner that this is not a sentence he should generate. In other words, no negative evidence is available (cf. Brown & Hanlon (1970), Wexler & Hamburger (1973), Baker (1979), Marcus (1993), etc.). This assumption may seem too narrow, for there could be indirect negative evidence (Lasnik 1989) which might be used as a substitute for negative evidence. However, the existence of such evidence does not have to mean that the learner has to depend on it for successful acquisition. We will therefore start with the more restrictive hypothesis and conduct our experiments in an environment where there is no negative evidence.
5.1.2 Assumptions about the Learner
The learner is equipped with Universal Grammar which has a set of parameters, each having two or more values. In our case, the UG is the experimental grammar defined in Chapter 3. Whenever an input sentence is encountered, the learner tries to parse it using the grammar and the current parameter setting. At the initial stage, the parameters can be either preset or unset. In the latter case, a setting has to be chosen before the parsing starts. If we assume that the parameters are preset, then all learners will start with the same setting, which is universal. If the parameters are unset, however, the learner can choose any value combination to start with. In this case, there will not be any universal starting point for parameter setting. The learning model we will discuss is based on the assumption that the parameters are preset.
We adopt the hypothesis that the learner is failure-driven or error-driven (Wexler & Culicover 1980).⁴ He will not change his current parameter setting unless he encounters a sentence which is not syntactically analyzable with the current setting. We also assume the Greediness Constraint (Clark 1988, 1990; Gibson and Wexler 1993), according to which the learner will not adopt a new setting unless it can result in a successful parse of the current input sentence.⁵ Finally, we share with most researchers in the field the assumption that the learner has no memory of either the previous parameter settings or the sentences previously encountered.

⁴This kind of failure-driven learning paradigm has been challenged by many people. An interesting debate can be found in Valian (1990, 1993) and Kim (1993). However, the arguments made there are mainly based on the setting of the null-subject parameter. It is still an open question whether failure-driven learning is feasible in setting X-bar parameters or movement parameters.

⁵This assumption can also be challenged. See Frank and Kapur (1993) for possible arguments against it.
An ideal learning paradigm within which parameter setting can be experimented with under the above assumptions is Gold's (1967) identification by enumeration. This is a failure-driven algorithm whereby the learner goes through a number of hypotheses until the correct one is found. In our case, the algorithm can be described as follows. Given a list of parameter settings, the learner attempts to parse an input sentence S with one of the settings in the list. If S can be successfully parsed, then the setting is retained. If S is unparsable, however, the current setting will be discarded and the next setting in the list will be tried. The settings are tried one by one until the learner reaches a value combination that results in a successful parse. This happens for every S the learner encounters. Some Ss trigger resetting and some do not. Resetting will cease to occur when the learner comes to a setting which can account for any S in the input data set.
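A minimal Prolog sketch of this failure-driven loop may make the procedure concrete. Here parse/2 stands for the parser of Chapter 3 applied under a given parameter setting; both predicate names are assumptions for illustration, not the actual predicates of Appendix A.4.

    % process(+S, +Settings, -Settings1): given an input string S and the
    % list of parameter settings still to be tried, keep the current
    % setting (the head of the list) if it parses S; otherwise discard it
    % and try the next one, as the Greediness Constraint dictates.
    process(S, [Setting|Rest], [Setting|Rest]) :-
        parse(S, Setting), !.
    process(S, [_|Rest], Surviving) :-
        process(S, Rest, Surviving).

Each incoming string thus either leaves the hypothesis list untouched or shortens it; the first setting that survives every string is the one the learner converges on.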
5.1.3 Assumptions about the Criterion of Successful Learning
Will the learner described above be able to successfully acquire any language in our parameter space? The answer to this question depends on our criterion of successful learning. When we say that the learner has acquired a language, we can mean any of the following:
(171) (a) He has identified the parameter setting of the target language.
(b) He has become able to parse/generate any string in the target language, but he may also generate some strings which are not in the target language.
(c) He has become able to parse/generate any string in the target language and no string which is not in the target language.
The criterion in (171(a)) requires that the learner acquire a language which is strongly equivalent to the target language. This is a criterion that our present learner cannot meet. As we have seen again and again, a language in our parameter space can often be generated with two or more parameter settings. The failure-driven learner, however, will stop learning as soon as one of these settings is reached. If the target language is supposed to have any of the other settings, this language will not be learnable according to this criterion. Fortunately, this is not the criterion used in most theories of human language acquisition. It is acceptable to most people that a learner can be said to have acquired a language if he can parse/generate a language which is weakly equivalent (string equivalent but not necessarily setting equivalent) to the target language.
The criterion in (171(b)) is debatable. Considering the fact that people do overgenerate in their linguistic performance, we are tempted to accept this criterion. The existence of creoles also seems to show that humans can produce things which are not in their target language. However, this will not be the criterion to be used here. Once overgeneration is allowed in general, we will have to tolerate situations where children produce many sentence patterns that are not acceptable to their parents. This is definitely not the case with human language acquisition. The language of the next generation can be a little different, but never to the extent that it sounds like a different language. We will therefore assume the criterion in (171(c)) where exact identification is required. This criterion may be too strict, but it is a good working hypothesis to start with.
Now the question is whether exact identification is achievable in our learning paradigm. A well-known property of the learning algorithm we have assumed is that the enumeration of hypotheses (in our case the parameter settings) must follow the Subset Principle (Angluin 1978, 1980, Williams 1981, Berwick 1985, Manzini and Wexler 1987, Wexler and Manzini 1987, etc.). Given two languages L1 and L2 and their respective parameter settings P1 and P2, P1 must come before P2 in the enumeration if L1 constitutes a proper subset of L2. Otherwise, the learner will be stuck with the wrong setting and never try to reset it again. Suppose the target language is [ s v, s v o ] and the learner has just set the parameters to [ 1 0 0 0 0 1/0 1/0 0 ]. With this setting, he will be able to process every string in the target language. As a result, he will never change the setting again. But this is a wrong setting, for it will enable him to generate not only SV and SVO strings, but OVS, SOV, VS and VSO strings as well. He has acquired a superset language of the target language instead of the target language itself. We have seen in 4.1.4 that superset and subset languages do exist extensively in our parameter space. Since we require exact identification, we must see to it that the enumerative process of our learning algorithm follows the Subset Principle. This will be a major topic of this chapter.
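Since languages in our parameter space are finite sets of strings, the subset relation that the enumeration must respect can be checked directly. The following Prolog fragment is a sketch of such a test; the predicate names are ours, not those of the appendices.

    % proper_subset(+L1, +L2): every string of L1 is in L2, but not
    % vice versa; languages are represented as lists of strings.
    proper_subset(L1, L2) :-
        subset_of(L1, L2),
        \+ subset_of(L2, L1).

    subset_of([], _).
    subset_of([S|Rest], L) :-
        member(S, L),
        subset_of(Rest, L).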
5.2 Setting S(M)-Parameters
There are three types of parameters to be set in our model: S(M)-parameters,
S(F)-parameters and HD-parameters. S(M) and HD parameters account for word
order variation. They are reset if and only if the current setting fails to accept the
word order of an input string. S(F)-parameters, on the other hand, are responsible
for the morphological paradigm of a language. They are reset on the basis of
visible morphology only. Thus the values of S(M) and HD parameters respond
to word order only and the values of S(F)-parameters respond to morphology
only. Since word order and overt morphology are assumed to be independent of
each other in our model, there is no dependency between the values of S(M)/HD
parameters and S(F)-parameters. In other words, the former and the latter can be
set independently. We can therefore consider them in isolation of each other.
In this section we consider the setting of S(M)-parameters. The values of other parameters will be held constant for the moment, with all HD-parameters set to i and S(F)-parameters set to 0-0. Since no feature is spelled out when all S(F)-parameters are set to 0-0, the feature list will be temporarily omitted in the presentation of strings. The string s v o, for example, is understood to be an abbreviated form of s-[] v-[] o-[]. In addition, the symbol v will often be used to stand for both iv and tv.
5.2.1 The Ordering Algorithm
As we have seen in 4.1.4, some languages in the parameter space of S(M)-parameters are properly included in some other languages. This implies that the learning algorithm we have assumed can fail to result in convergence for some languages if the enumeration of parameter settings is random. In order for every language in the parameter space to be learnable, the hypothetical settings must be enumerated in a certain order. In particular, the settings of subset languages must be tried before the settings of their respective superset languages. Let us call the parameter setting for a subset language a subset setting and the one for a superset language a superset setting. A superset setting must then be ordered after all its subset settings. To ensure learnability for every language, we can simply calculate all subset relations in the parameter space, find every superset setting and its subset settings, and enumerate the settings in such a way that all subset settings come before their respective superset settings. Such an ordering is attainable. In fact, the enumeration can be made to satisfy this ordering condition in more than one way. However, arbitrary ordering of this kind is not linguistically interesting. It can certainly make our learning algorithm work, but we cannot expect a child to know the ordering unless it is built in as part of UG. We are thus in a dilemma: the learning may not succeed if there is no ordering of parameter values, but the assumption that the ordering is directly encoded in UG seems very unlikely.
However, there is a way to get out of this dilemma. The child can be expected to know the ordering without it being directly encoded in UG if the following is true: the ordering can be derived from some independent linguistic principle in UG. Such a principle does seem to exist in our current linguistic theory. One possible candidate is the principle of Procrastinate (Chomsky 1992) which requires that movement in overt syntax be avoided as much as possible. This principle has the following implications for the parameter setting problem considered in our model.⁶

⁶There is an alternative approach which goes in the opposite direction. Following Pesetsky's (1989) Earliness principle, which requires movement to occur as early as possible in the derivational process, we could assume that the learner starts from the hypothesis that all S(M)-parameters are set to 1 at the initial stage. This alternative is tried out in Wu (1992), where the learning process involves setting some S(M)-parameters from 1 to 0. Interestingly enough, this approach also works, though it is conceptually less natural and less compatible with the acquisition data.
(172) All S(M)-parameters should be set to 0 at the initial stage. Let us suppose that the principle of Procrastinate is operative in children's grammar from the very beginning. According to this principle, an "ideal" grammar should have no overt movement. Therefore, children will initially hypothesize that no movement takes place before Spell-Out in their language. They will consider overt movement (i.e. setting some S(M)-parameters to 1) only if they have encountered sentences which are not syntactically analyzable with the existing parameter setting.
(173) In cases where children are forced to change their hypothesis by allowing some movement(s) to occur before Spell-Out, they will try to move as little as possible, again following the principle of Procrastinate. They will not hypothesize more overt movement(s) than is absolutely necessary for the successful parsing of the current input sentence. As a result, given two settings, both of which can make the current input parsable, the setting with fewer S(M)-parameters set to 1 should be preferred and adopted as the new hypothesis.
(174) If the principle of Procrastinate is adhered to rigorously, there should not be any optional overt movement. Given the option of moving either before or after Spell-Out, the principle will always dictate that the movement occur after Spell-Out. Setting an S(M)-parameter to 1/0 is therefore no different from setting it to 0. So why should the value 1/0 be considered in the first place? If a movement must occur before Spell-Out, then its S(M)-parameter must be set to 1 rather than 1/0. Consequently, the value 1/0 should not be tried unless it is the only value which can make all the strings in a given language parsable.
(175) In cases where overt movement is absolutely necessary, the principle of Procrastinate will require that the movements which are more "essential" be considered first. Now which movements are more essential? According to Chomsky, the principle of Procrastinate can be overridden to let a movement occur before Spell-Out only if the feature to be checked by this movement is "strong", i.e. realized in overt morphology. In view of the fact that A-movement and head-movement often occur for morphological reasons while A-bar movements do not, the former are more essential than the latter. In our model, overt movement is independent of overt morphology, so the morphological explanation may not be available. But there is a common assumption that A-movement and head-movement are more closely related to the basic word order of a language than A-bar movements, which are more likely to be associated with interrogation, quantification, focusing and topicalization. In this sense, A-movements and head movements are more essential than A-bar movements. If overt movement is to be considered at all, priority should be given to the former rather than the latter.⁷
⁷There is a potential problem with the assumption that A-movements tend to occur earlier than A-bar movements. One possible counter-example to this hypothesis is the passive construction, which involves A-movement. According to the acquisition data, passives tend to occur fairly late in children's speech, usually after wh-movement, which is an A-bar movement. The question is then why the A-movement in passive formation is not allowed to apply before some A-bar movements are. One answer to this question might be the following: the A-movement in passive sentences might be different from other A-movements in that it is forced, not by feature-checking, but by some other grammatical operations which are active only at a later stage of development. The absorption of theta-roles might be one such operation. For a passive sentence to occur, the theta-role carried by the subject must be "absorbed". Such absorption may happen relatively late in children's grammar, thus postponing the A-movement associated with passive constructions.
To sum up, the principle of Procrastinate can provide certain constraints on or preferences for the choice of the next parameter setting to be tried in the learning process. In particular, the following ordering rules seem to follow from this general principle:

(176) (i) Given two parameter settings P1 and P2, with N1 and N2 (0 ≤ N1, 0 ≤ N2) being the respective numbers of S(M)-parameters set to 1/0 in P1 and P2, P1 ≺ P2 if N1 < N2. In other words, the setting which allows for fewer optional overt movements is to be considered first.

(ii) Given two parameter settings P1 and P2, P1 ≺ P2 if S(M(cspec)) is set to 0 in P1 and to 1 in P2. In other words, the setting which does not require overt A-bar movement is to be considered first.

(iii) Given two parameter settings P1 and P2, with N1 and N2 (0 ≤ N1, 0 ≤ N2) being the total numbers of S(M)-parameters set to 1 in P1 and P2, P1 ≺ P2 if N1 < N2.
These ordering rules are to be applied in the sequence given above. The second rule is applied only if the first one fails to decide on the precedence, and the third is applied only if the second fails to do so. This order of rule application is not directly derivable from the principle of Procrastinate, but it is not totally stipulative, either. Comparing optional overt movement and overt A-bar movement, we find the latter "less evil" than the former which, according to the principle, should not exist at all. In our particular parameter space, optional movements always result in subset relations while overt A-bar movements do so only in some contexts, as we have seen in 4.1.3. This also suggests that optional movement should be the last choice. Here is a situation where linguistic and computational considerations seem to agree with each other. The ordering of (ii) and (iii) is less justified by the principle of Procrastinate, though. We assume here that a setting without overt A-bar movement is to be preferred over a setting with A-bar movement even if the total number of overt movements in the former is greater than that in the latter. The decision here is made on qualitative rather than quantitative grounds. Overt non-A-bar movements are assumed to be "less evil" than overt A-bar movements. Therefore the latter should be avoided even at the cost of having more movements of other kinds. So far this choice has been motivated by computational considerations more than linguistic arguments. Subset relations are more likely to arise with a setting with overt A-bar movement than with a setting without it. By putting off overt A-bar movements as much as possible, learnability can be guaranteed. The linguistic intuition in support of our preference here is that A-bar movements seem to be more "peripheral" than A-movements and head movements on the whole. Whether this intuition is correct or empirically justifiable is an open question. In any event, we will suppose for the time being that there are qualitative differences between different movements. We assume that quantitative arguments apply only in cases where qualitative considerations yield no result. In this sense, (iii) acts as a default rule which applies only if nothing else works. It should be pointed out that there are many settings which will remain unordered with respect to each other after all the precedence rules have been applied. We will see that these settings can be tried in any order without a violation of the Subset Principle.
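Before moving on, it may help to see how little machinery the precedence rules require. The fragment below is a sketch, not the program of Appendix A.3: it assumes that a setting is a list of S(M)-parameter values (0, 1, or the term 1/0) with the value for S(M(cspec)) in the last position, and it compares two settings by the triple of numbers that rules (i)-(iii) inspect.

    % setting_key(+Setting, -Key): Key packs the three quantities used by
    % the precedence rules: the number of 1/0 values (rule (i)), whether
    % overt A-bar movement to Cspec is permitted (rule (ii)), and the
    % number of parameters set to 1 (rule (iii)).
    setting_key(Setting, key(NOpt, CSpec, NOvert)) :-
        count_value(1/0, Setting, NOpt),
        last_value(Setting, C),
        ( C == 0 -> CSpec = 0 ; CSpec = 1 ),
        count_value(1, Setting, NOvert).

    count_value(_, [], 0).
    count_value(V, [V|T], N) :- count_value(V, T, M), N is M + 1.
    count_value(V, [W|T], N) :- V \== W, count_value(V, T, N).

    last_value([V], V).
    last_value([_|T], V) :- last_value(T, V).

    % precedes(+P1, +P2): P1 must be tried before P2. Settings whose
    % keys are equal remain unordered, exactly as described above.
    precedes(P1, P2) :-
        setting_key(P1, K1),
        setting_key(P2, K2),
        K1 @< K2.

Because the standard order of terms compares the arguments of key/3 from left to right, the three rules are automatically applied in the required sequence.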
5.2.2 Ordering and Learnability
Applying the ordering algorithm in (176) to the value combinations of S(M)-parameters, we get a partial order of all the settings. Given any two S(M)-parameter settings P1 and P2, we may have P1 ≺ P2 or P2 ≺ P1, but never both.

The first rule of our ordering algorithm applies to all the S(M)-parameter settings, for every setting has zero or more parameters set to 1/0. (The maximum is three because only three S(M)-parameters can have this value.) This partitions all the settings into the following four groups, where (a) ≺ (b) ≺ (c) ≺ (d):
(177) (a) settings that contain zero 1/0 values.
(b) settings that contain one 1/0 value.
(c) settings that contain two 1/0 values.
(d) settings that contain three 1/0 values.
We then apply the second rule of the ordering algorithm within each group. This partitions the settings in each of the four groups into two sub-groups: those where S(M(cspec)) is set to 0 (no overt A-bar movement permitted), and those where S(M(cspec)) is set to 1 or 1/0 (overt A-bar movement permitted). The settings which do not allow overt A-bar movement will precede those which do allow such movement. Notice that the second rule does not apply across the groups. It never compares a setting in Group (a) with a setting in Group (b), for example. If P1 is in Group (a) and P2 is in Group (b), P1 will precede P2 in the partial order even if P1 allows overt A-bar movement whereas P2 does not. So given the two settings in (178), (178(a)) will be ordered before (178(b)).
(178) (a) [ 0 0 0 0 0 1 0 1 ]
(b) [ 0 0 0 0 0 1/0 1 0 ]
After the application of the first and second rules, the S(M)-parameter settings are partitioned into the eight groups in (179), where (a-a) ≺ (a-b) ≺ (b-a) ≺ (b-b) ≺ (c-a) ≺ (c-b) ≺ (d-a) ≺ (d-b).
(179) (a-a) settings with no optional movement and no overt A-bar movement;
(a-b) settings with no optional movement but with overt A-bar movement;
(b-a) settings with one optional movement but no overt A-bar movement;
(b-b) settings with one optional movement and overt A-bar movement;
(c-a) settings with two optional movements but no overt A-bar movement;
(c-b) settings with two optional movements and overt A-bar movement;
(d-a) settings with three optional movements but no overt A-bar movement;
(d-b) settings with three optional movements and overt A-bar movement.
Finally, we apply the third rule within each of these eight sub-groups. Here we just count in each setting the number of parameters which are set to 1. (The value 1/0 can be ignored as it occurs the same number of times within each group.) Each setting is thus associated with a number, and P1 will precede P2 if the number associated with P1 is less than that of P2. It is obvious that this will result in a partial order within each sub-group, since what is involved here is the ordering of natural numbers. Again it should be pointed out that the third rule never relates two settings in two different sub-groups. If P1 is in a group that precedes the group which P2 is in, P1 will precede P2 even if P1 has more parameters set to 1. In (180), for instance, (180(a)) must precede (180(b)), in spite of the fact that the absolute number of parameters set to 1 is greater in (180(a)).
(180) (a) [ 1 1 1 1 1 1 1 1 ]
(b) [ 0 0 0 0 0 1 1 1/0 ]
In each of the sub-groups, there will be settings which have the same number of
overt movements. They will occupy the same position in the partial order. The
settings in (181) are settings of this kind.
(181) (a) [ 1 1 1 0 0 0 0 0 ]
(b) [ 1 1 0 0 0 1 0 0 ]
(c) [ 1 1 0 0 0 0 1 0 ]
(d) [ 1 0 0 0 0 1 1 0 ]
None of these settings has optional movement or overt A-bar movement, but they share the property of having three S(M)-parameters set to 1. They therefore remain unordered with respect to each other, though they are ordered as a whole relative to any other setting. For example, they are all ordered before the settings in (178) and (180). The enumerative learner can try these settings in any order without having any learnability problems, as we will see.
The Prolog program that implements the ordering algorithm is given in Appendix A.3. In Appendix C, we find the complete ordered list of settings produced by this program. The settings here are listed in 50 groups and numbered in the order in which they are to be tried in the parameter setting process. We notice that the first setting in the list is [ 0 0 0 0 0 0 0 0 ], which requires no overt movement, and the last setting is [ 1 1 1 1 1 1/0 1/0 1/0 ], which allows for the maximal number of optional movements in addition to requiring every other movement to be overt. Each group number is accompanied by three digits. The first shows the number of parameters set to 1/0, the second indicates whether S(M(cspec)) is set to 1, and the last is the total number of parameters set to 1 or 1/0 in a setting.
It turns out that the Subset Principle can be observed if our enumerative learner goes through the hypothetical settings in the order given in Appendix C. This is not a surprise. We have seen in 4.1.4 that, in the parameter space of S(M)-parameters, subset relations arise from two types of settings. The first type consists of value combinations where S(M(spec1)), S(M(spec2)) and S(M(cspec)) are all set to 1. Languages generated with such settings share the property of having two alternative orders for any transitive sentence. We have one order when the subject NP is in Cspec and the other one when the object NP is. The following is a complete list of such settings (fully instantiated ones only), the languages they generate, and the subset languages contained in each language.
(182) Setting [ 0 0 0 0 0 1 1 1 ]
Language [ s v, s o v, o s v ]
Subset Languages [ s v, s o v ] [ s v, o s v ]

Setting [ 1 0 0 0 0 1 1 1 ]
Language [ s v, s o v, o s v ]
Subset Languages [ s v, s o v ] [ s v, o s v ]

Setting [ 1 1 0 0 0 1 1 1 ]
Language [ s v, s v o, o s v ]
Subset Languages [ s v, s v o ] [ s v, o s v ]

Setting [ 1 1 1 0 0 1 1 1 ]
Language [ s v, s v o, o s v ]
Subset Languages [ s v, s v o ] [ s v, o s v ]

Setting [ 1 1 1 1 0 1 1 1 ]
Language [ s v, s v o, o s v ]
Subset Languages [ s v, s v o ] [ s v, o s v ]

Setting [ 1 1 1 1 1 1 1 1 ]
Language [ s v, s v o, o v s ]
Subset Languages [ s v, s v o ] [ s v, o v s ]
It is easy to prove that, with our ordering algorithm, each of the subset languages is learnable. What the subset languages have in common is that they only have a single word order for a transitive sentence. They can all be generated with a setting where S(M(cspec)) is set to 0:⁸

(183) Setting Language
[ 0 0 0 0 0 1 1 0 ] [ s v , s o v ]
[ 0 0 0 0 0 0 1 0 ] [ s v , o s v ]
[ 0 0 0 0 0 1 0 0 ] [ s v , s v o ]
[ 1 0 0 0 0 0 1 0 ] [ s v , o v s ]
None of these settings permits overt movement to Cspec while all the settings in (182) require this movement. Therefore, the settings in (183) will be enumerated before those in (182). Once the relevant setting in (183) is reached, all the strings in the subset language will be interpretable and the failure-driven learner will never try to reset the parameters again. Therefore the superset settings in (182) will not be reachable unless the learner encounters strings which are not in the subset languages.

⁸There are many alternative settings, but one of them will suffice to illustrate the point.
The second type of settings that results in subset relations involves optional movement. The parameter value 1/0 is a variable which can be instantiated to either 1 or 0 in a particular syntactic derivation. Every setting which has n S(M)-parameters set to 1/0 has 2^n (full) instantiations, each of which is a subset setting of the original setting. For instance, the setting in (184) can be instantiated to the four settings in (185). We can see that the languages generated with the settings in (185) are all subset languages of the language generated with the setting in (184).

(184) Setting [ 1 1 1 1 0 1/0 1/0 0 ]
Language [ v o s, s v, s v o, v s, v s o ]
(185) Setting Language
[ 1 1 1 1 0 1 0 0 ] [ s v , s v o ]
[ 1 1 1 1 0 0 1 0 ] [ v s , v o s ]
[ 1 1 1 1 0 0 0 0 ] [ v s , v s o ]
[ 1 1 1 1 0 1 1 0 ] [ s v , s v o ]
In order for the languages in (185) to be learnable, the settings in (185) must precede the setting in (184) in the enumeration. This is guaranteed by the first rule of the ordering algorithm, which puts all the settings in (185) before the setting in (184) in the partial order.

However, the subset settings of (184) are not limited to those in (185). They also include the partial instantiations in (186).
(186) Setting Language
(a) [ 1 1 1 1 0 1/0 1 0 ] [ s v o , s v , v s , v o s ]
(b) [ 1 1 1 1 0 1/0 0 0 ] [ s v o , s v , v s , v s o ]
(c) [ 1 1 1 1 0 1 1/0 0 ] [ s v , s v o ]
(d) [ 1 1 1 1 0 0 1/0 0 ] [ v o s , v s , v s o ]
As we can see, the languages generated with these settings are all subset languages of the one generated with (184). These subset relations do not cause learnability problems because the settings in (186) all go before (184) in the partial order. They only contain one 1/0 while (184) contains two. In general, any setting with n parameters set to 1/0 can be "factored" into 3^n - 1 subset settings. All those settings will precede the original setting. This is because each of them will have instantiated at least one of the variables to either one or zero and thus will have fewer parameters set to 1/0.
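The "factoring" just described is easy to render in Prolog. The sketch below, again illustrative rather than the code of the appendices, enumerates the instantiations of a setting: a value of 1/0 may stay 1/0 or be fixed to 1 or to 0, while all other values are copied unchanged.

    % instantiate(+Setting, -Variant): backtracking yields the 3^n
    % variants of a setting with n values of 1/0; all but the setting
    % itself are its subset settings.
    instantiate([], []).
    instantiate([1/0|T], [V|T1]) :- member(V, [1/0, 1, 0]), instantiate(T, T1).
    instantiate([X|T], [X|T1]) :- X \== 1/0, instantiate(T, T1).

For the setting in (184), which has two values of 1/0, the call yields 3^2 = 9 solutions; removing the original setting leaves the eight subset settings listed in (185) and (186).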
There is another complication with optional movement. We find that subset relations do not arise from different numbers of optional movements only. Even settings that have the same number of parameters set to 1/0 can generate languages that are in subset relations. We have just seen such an example in (186). The settings in (186(a)) and (186(c)) have exactly the same number of optional movements, the same number of overt A-bar movements, and in fact the same total number of overt movements. Our ordering algorithm will therefore put these two settings in the same place in the partial order. Yet the language generated with (186(c)) is properly included in that of (186(a)). At first glance, this seems to be a problem. Upon closer inspection, however, we discover that the problem is not real.
First of all, if the target language is the one in (186(c)), none of the settings in (186) will ever be tried. We have seen in the previous chapter that a single language can often be generated with more than one setting. The language in (186(c)), i.e. [ s v, s v o ], can be generated with 32 different settings, most of which will be ordered before the one in (186(c)). One such setting, for instance, is [ 0 0 0 0 0 0 0 0 ], which is actually the initial hypothesis adopted by the learner. Once the learner reaches one of those settings, all the strings in his language will be analyzable and he will never consider resetting the parameters again. The settings in (186) are therefore not reachable, and the possibility of failing to identify the exact language does not exist. This is a general property of our learning algorithm. Given two settings P1 and P2 with L1 and L2 being the respective languages they generate, we find numerous instances where L2 is properly included in L1 but P1 goes before P2 in the partial order of hypothesis enumeration. In each of these cases, there is always another setting P0 which also generates L2 but P0 ≺ P1. In all these cases, the learner will stop resetting the parameters once P0 is reached. Therefore no subset problems arise.
There is a deeper reason why the apparent subset problem exemplified by (186) is not a problem. The language in (186(c)) looks like a subset language of the one in (186(a)) only because the movements are often string-vacuous in our system. Two "words" W1 and W2 may appear to be in the same position while they are not. This happens when W1 and W2 are in positions A and B respectively, while the movement from A to B is string-vacuous. Looking at (186(a)) again, we see that the setting in (186(c)) can actually generate a string which is not in the language of (186(a)) if the object NP movement from Vspec to Agr2spec is not string-vacuous. In (186(a)), the object NP can appear in one position (Agr2spec) only, while it can appear in either Vspec or Agr2spec in (186(c)). The structure where the object is in Vspec is not a structure that can be generated with (186(a)). Suppose there is an AdvP left-adjoined to Agr2'. Then the language generated with (186(a)) will be (187(a)) and the one generated with (186(c)) will be (187(b)). As we can see, (187(b)) is not a subset language of (187(a)), for s v advp o is not in (187(a)).

(187) (a) [ s v advp, s v o advp, v advp s, v o advp s ]
(b) [ s v advp, s v o advp, s v advp o ]
The general observation is that, if none of the movements in our system can ever be string-vacuous, the following two statements will be true.

(188) (a) Given two settings P1 and P2 with L1 and L2 being the respective languages they generate, P1 ≺ P2 in the enumeration if L1 ⊂ L2.
(b) If P1 ⊀ P2 and P2 ⊀ P1, then L1 ⊄ L2 and L2 ⊄ L1.
5.2.3 The Learning Algorithm
We can now describe our learning algorithm as follows.

(189) (I) Get all value combinations in a given parameter space and place them in a list P.
(II) Sort P according to the precedence rules in (176) and get the partially ordered list Pord as output.
(III) Start learning language L.
(i) Select any string S from L and try to parse S.
(ii) If S is successfully parsed, go back to (i). Otherwise, reset the parameters to the first value combination P1 in Pord, remove P1 from Pord, and go to (i).

If L is in the given parameter space, the learning process will eventually stay in (i) and never leave it. At this point, we can generate Lt, which is the set of strings that can be successfully parsed with the current parameter setting. If Lt = L, then L is learnable. We have converged on the correct value combination. If L ⊂ Lt, then L is not learnable. We have converged on a superset setting for the grammar of L. If L is not in the given parameter space at all, Pord will eventually become empty and the learning process will get stuck in (ii).
The Prolog program that implements the algorithm in (189) is given in Appendix A.4. The ordered list of parameter settings is computed off-line using the get_settings/0 predicate.⁹ (The next setting to be tried at each point can also be computed on-line, and the result will be the same. The off-line computation just makes the calculation simpler and the execution of the learning procedure more efficient.) The learning session is initiated by calling sp/0, which keeps putting out the "Next?" prompt at which we can type in

a. a string from the target language;
b. "current_setting" to have the current setting displayed;
c. "generate" to get the complete set of strings that can be generated with the current setting;
d. "initialize" to put the learner back to the initial stage; or
e. "bye" to terminate the session.

⁹The predicates in Prolog are referred to by the form X/Y where X is the predicate name and Y is the number of arguments in the predicate.
Appendix D contains a number of Prolog sessions. D.1 and D.2 illustrate the process in which the language [ s (often) iv, s (often) tv o, s (often) o tv, o s (often) tv ] (which can be Chinese) is acquired with the program in Appendix A.4. The input strings are numbered #1, #2, ... in the two sessions. The successive settings are numbered #a, #b, etc. In D.1, the strings #1 and #2 can be parsed with the initial setting, so the parameter values remain unchanged. Each of the strings in #3-#18 triggered a resetting of the parameters. After each resetting, the "generate" command is given to have all the strings accepted by the current setting generated, so that we can see the language that the learner "speaks" at that particular point. The learner converged on the correct setting at #18, after which all of the possible strings in the language (#19, #20, #21, #22, #23, #24, #25 and #26) became analyzable and no further resetting was triggered. As the output of "generate" shows, the current language is exactly the language we have tried to acquire, there being neither overgeneration nor undergeneration. In the D.2 session, the input strings were presented in a different order, but the final result is the same.
5.2.4 Properties of the Learning Algorithm
Several comments can be made on the learning sessions described above.
(190) (i) The learner is able to converge on the correct setting on the basis of
positive evidence only. Every input string presented to the learner is
a grammatical sentence in the language. This is possible because our
learning procedure respects the Subset Principle.
(ii) The learning procedure is incremental. All resetting decisions are based on the current setting and the current input string only. The learner does not have to remember any of the previous strings, nor does she have to memorize the previous settings.¹⁰
(iii) The convergence does not depend upon any particular ordering of the input strings. The string to be presented at the next prompt can be selected randomly. The sessions in D.1 and D.2 differ in terms of the order in which the input strings are presented, but their final outcome is the same. The learner does require, however, that some crucial sentence types (the distinguishing subset of the data set) be presented more than once. In D.1 the string [ s tv o ] was presented ten times and eight of them triggered resetting. These requirements are empirically plausible. Children do get exposed to some common sentence patterns over and over again.
(iv) The learner has to go through a number of intermediate grammars before she arrives at the correct setting. The intermediate settings that are traversed in the learning process can vary according to the way input strings are presented. At a given point in the learning process, different input strings can cause the parameters to be reset to different values. A comparison of the sessions in D.1 and D.2 shows this. For example, after arriving at the setting [ 0, 0, 0, 0, 0, 1, 1, 0 ] (#b in both sessions), the learner was presented different strings in the two sessions. In D.1, she was given [ s tv o ] which triggered the setting [ 1, 0, 0, 0, 0, 1, 0, 0 ]; in D.2, she was given [ o s tv ] which triggered a different setting: [ 0, 0, 0, 0, 0, 1, 1, 1 ]. Due to the fact that the presentation of input strings is different in the two sessions, the intermediate settings are different. While most settings appeared in both sessions, some settings were traversed in one session only. We notice that there are fewer intermediate settings in D.2 and the learner converged on the same correct setting with fewer input strings. The general picture seems to be the following: given a set of input strings, there is a definite set of intermediate settings that can be traversed. In a particular learning situation, however, only a subset of these settings will be reached, and the members of this subset can vary according to how the input strings are presented sequentially.
(v) No Single Value Constraint (Clark 1988, 1990, Gibson and Wexler 1993) is imposed on the parameter setting process. This constraint requires that, in the process of resetting parameters, the new setting and the current setting can differ by one parameter value only. In other words, we can never reset two parameters at the same time. This constraint is not observed here. As we can see in the sessions, two successive settings
settings in Xa and Xb differ by two values while X« and X* differ by
5 values. It is not the case, however, that the learner can arbitrarily
choose the next setting. Nor is the learner non-conservative. In fact,
198
the learner always tries to make the current input string analyzable
by making the smallest change in the parameter values. However, the
degree of change is determined by the principle of Procrastinate rather
than the absolute number of parameters being reset.
(vi) No Pendulum Problem (Randall 1990, 1992) exists for the current learning algorithm. This refers to the problem where a parameter is constantly reset between two alternative values V1 and V2. One piece of evidence triggers the resetting from V1 to V2 and another piece sets it back to V1. This does not happen in our system. Why this is so is obvious. The learner proceeds through the list of hypothetical settings in a unidirectional way. Once a setting is considered incorrect, it will not be considered again. Such determinism is made possible by the fact that the learner is conservative and would never entertain settings with more overt movements if settings with fewer overt movements had not been considered yet. Therefore, once a setting is reached, there is no need to consider settings that require fewer overt movements.

¹⁰One may argue that the learner does have to memorize the previous settings in the sense that no setting that has been tried in the past can be tried again. But in most cases the learner can tell from the current setting which settings have been previously tried. As the settings being tested have progressively more overt movements, all previous settings should have fewer S(M)-parameters set to 1. The only previously-tested settings she may have to remember are those in the same group. Those settings are in the same position in the partial order, so she cannot tell from the current setting which of the other settings in that group have been tried before.
5.2.5 Learning All Languages in the Parameter Space
The sessions in D.1 and D.2 have only shown that at least some language is learnable in our present system. We have observed in 5.2.2 that every language in this parameter space should be learnable. To provide additional proof for the points made in 5.2.2, I ran a learning session for each of the languages. This exhaustive testing can be performed by calling learn_all_langs/0, which is defined in the Prolog program in Appendix A.4. We first use get_pspace/0 to get (a) the ordered list of parameter settings (done by calling get_settings/0) and (b) all the possible languages in the parameter space (done by calling get_languages/0). The learn_all/1 predicate then feeds the languages one by one into learn1/1, which conducts a learning session for each particular language. The real work is done by learn/1, which keeps resetting the parameters until all the strings in the language become parsable. At this point, generate/1 is called to get the complete set of strings generated with the current setting. A language is learnable if the set of strings generated is exactly the target language presented to the learner and not learnable if it is a superset of the target language.¹¹

¹¹A language is also not learnable if no setting in the parameter space can make every string in this language analyzable. But this will not happen here because such a language would fall outside the given parameter space and thus not be considered a possible language to be acquired in the first place.
In D.3 and D.4, we find two Prolog sessions run with learn_all_langs/0. The two sessions differ in that in D.3 often does not appear in the input strings while it does appear in D.4. In D.3, all languages are learnable except two: [ o tv s ] and [ o s tv ]. We have observed in Chapter 4 that these are the only two languages which do not have intransitive sentences. Our syntactic model should be made more restrictive so that such languages are not generated at all. Furthermore, these two languages are not really unlearnable. They cannot be correctly identified in D.3 simply because there is not enough information in the input strings to distinguish them from other languages. As we can see, they are correctly identified in D.4, where all the languages, including these two odd ones, are proven to be learnable. This is because the appearance of often in D.4 has made some otherwise indistinguishable languages distinct from each other.
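A schematic version of the convergence test that learn/1 performs might look as follows; ordered_settings/1 and parse/2 stand in for the machinery of Appendices A.3 and A.4, and the clauses are a sketch rather than the actual code.

    % converge(+Language, +Settings, -Final): walk down the ordered list
    % of settings until one parses every string in the target language.
    % If the list runs out, the predicate fails: learning ends in failure.
    converge(Language, [Setting|Rest], Final) :-
        (   member(S, Language), \+ parse(S, Setting)
        ->  converge(Language, Rest, Final)   % some string fails: try next
        ;   Final = Setting                   % every string parses: done
        ).

    learn_sketch(Language, Final) :-
        ordered_settings(Settings),
        converge(Language, Settings, Final).

This batch formulation presents the whole language at once, whereas the real sessions feed strings one by one; but given that every string recurs in the input and that resetting is failure-driven, both arrive at the same final setting: the first setting in the enumeration that parses the entire language.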
One thing we notice in these sessions is that, while all the languages in the parameter space are learnable, not every setting in the parameter space can become a final setting for some language. This is not a surprise, though. We have seen in Chapter 4 that the relationship between settings and languages can be many-to-one. A single language can be generated with more than one setting. In the learning process, however, only one of the possible settings will be converged on as the final setting for a language. As we have seen in B.2, the language [ s iv, s tv o ] can be generated with 32 different settings. The setting which the learner has converged on in the D.1 session is the initial setting. Once this setting is in place, all the strings in the language will be parsed successfully and no resetting will be considered any more. In D.4, the appearance of often made four pure SVO languages distinct from each other:

[ (often) s v (o) ]
[ s (often) v (o) ]
[ s v (often) (o) ]
[ s (often) v (o), (often) s v (o) ]
The final settings reached for these languages by the learner are, respectively:

[ 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 1, 1, 1, 1, 0, 1, 0, 0 ]
[ 0, 0, 0, 0, 0, 1/0, 0, 0 ]
Looking at all the possible settings for each of these languages (i.e. #1, #2, #12 and #20 in B.3), we see that the final setting reached for each language is always the setting which is to be ordered first among the alternative settings by the precedence rules in (176). In general, the learner always tries to select the setting with the fewest overt movements possible to fit the current data. Further movements are
considered only if there is evidence that the current setting is inadequate. Given the string [ s iv ] or [ s tv o ], the learner will stay with the initial setting, but a new string like [ s tv often o ] will make her reset the parameters to [ 0, 0, 0, 0, 0, 1, 0, 0 ].
Similar tests to those in D.3 and D.4 have also been run on parameter spaces where some S(F)-parameters are set to 1-0 so that auxiliaries appear. Due to space limitations, these sessions are not given in the appendices. Those tests show that our current learning algorithm works just as well in acquiring languages with auxiliaries and grammatical particles.¹² When some functional heads are spelled out as auxiliaries, head movement may be instantiated as either verb movement or auxiliary movement. Consequently, the number of S(M)-parameter settings which are syntactically meaningful is greater, as we have seen in 4.2.2. However, the picture remains unchanged as far as the existence of subset relations among languages is concerned. The sources of subset relations are still overt A-bar movement and optional movement. Therefore, all the languages in this greater parameter space can still be correctly identified by using the ordering algorithm in (176) and the learning algorithm in (189).

¹²The test sessions are available upon request.
The results we get here may at first seem too good to be true. The learning task we have accomplished so far is very similar to the one attempted in Wexler and Hamburger (1973) and Wexler and Culicover (1980). Both assume a universal base of phrase structure rules and try to relate every natural language to this universal base through transformations. However, while they proved that Degree-2 sentences are required for the success of the learning task, we have managed to succeed with Degree-0 sentences. Did they make a mistake in their proofs or have we found a better learning algorithm? The answers to both questions are no. Things have changed, of course, but the crucial change which has made the difference is in the grammar rather than in the learning algorithm. The syntactic model we have developed here is very different from the Standard Theory of Chomsky (1965). There are several major differences:
(a) The base structure is more constrained and more universal in the current model. The variation in phrase structures has been restricted by X-bar theory and further restricted by the removal of the specifier-head parameter.
(b) The transformations (i.e. the movements) which relate surface strings to the
base structure are more restricted. In fact, the set of transformations is
universal. The learner does not have to learn the transformations. He only
has to decide whether a given movement must be visible.
(c) The choices a learner has to make in acquiring a language have been parameterized. The computation involved in searching for hypotheses has therefore been simplified.
(d) The economy principles in general and the principle of Procrastinate in particular have given different weights to different hypotheses, so that the decision as to which hypothesis to try next is very easy to make.
In short, the current grammar is more universal and the learner has less to learn.
The success of the learning algorithm is therefore not a surprise.
We will conclude this section by observing a special property that the current learning algorithm has with regard to noisy input. As we have seen, the learner in our model takes every piece of input data seriously and treats it as a grammatical sentence in the language. This in principle should cause problems when the input is degenerate. However, those problems are sometimes accidentally remedied in our current system. Ungrammatical input may trigger wrong settings, but as long as those settings are ordered before one of the possible target settings,¹³ the learner still has a chance to recover from the error. This is illustrated in D.5. The target language to be acquired in this session is the same as the one in D.1 and D.2, i.e. [ s (often) iv, s (often) tv o, s (often) o tv, o s (often) tv ]. The learner was presented those strings plus a number of strings which are not in the language (#2, #4, #6, #13, #18, #21, #28, #33, #37, #42, #46 and #47). The first 8 deviant strings triggered some wrong settings, but the learner still managed to converge on one of the correct settings (the setting at #z). This is however the last target setting the learner can ever reach. If she for any reason leaves this state and tries some settings further down the list, there will be no more chance of convergence. In the D.5 session, the learner was presented more deviant strings after #z was reached. These strings made the learner adopt the settings #b1 and #c1, which generate superset languages of the target language. The deviant strings that appear after #c1 made things even worse. The learner eventually ran out of further hypotheses and the learning ended in failure.

¹³Recall that there can be more than one setting which is compatible with a given language.
The implications of this peculiar property of the learning algorithm are not clear. It may mean that the current system can provide a mechanism for language change. When the input data is perfect, only one of the possible settings for a language will be reachable. When the input contains deviant strings, however, alternative settings will be considered, as we have seen in D.5 where four of the possible settings (#l, #p, #v and #a1) are reached at some point of the learning process. The languages generated with these settings are just weakly equivalent to the target language. Underlyingly, each of those settings can potentially generate a different language. At this stage, this account of language change is purely speculative. Much more careful work has to be done before it can be taken seriously. What is certain, however, is that the learning algorithm as it stands now is not robust enough. Further research has to be done to make it more empirically plausible.
5.3 Setting Other Parameters
In this section we discuss how the other two types of parameters - HD-parameters
and S(F)-parameters - can be set together with S(M)-parameters. HD-parameters
interact with S(M)-parameters in determining the word order of a language. We
want to know whether the learning algorithm presented in the previous section can
be modified to set HD-parameters as well as S(M)-parameters without losing its
basic properties. The S(F)-parameters can be set independently using a separate
algorithm.
5.3.1 Setting HD-Parameters
When HD-parameters are kept out of the picture, there is only one kind of parameter to reset when an input sentence is found to be syntactically unanalyzable in terms of word order. Now that both S(M)- and HD-parameters are available, we are given a choice. Upon failing to parse an input string, we have to decide which type of parameter to reset. This may seem to be a problem. In Gibson and Wexler (1993) (G&W hereafter), which addresses a similar problem, some target languages are found to be unlearnable. The learning process ends in a local maximum which is not the target setting but from which the learner cannot escape. The main contributor to this problem is the fact that there are two types of word order parameters that can be set when a parsing failure occurs. The parameters in their parameter space are X-bar parameters, which are called HD-parameters in our model, and the V2-parameter, which is similar to our S(M)-parameters in that it also determines whether a certain movement is overt. When an input sentence fails to be analyzed by the current grammar, the learner can reset either the V2-parameter or one of the X-bar parameters, but not both. It is discovered in G&W that, with the Single Value Constraint and the Greediness Constraint, but no parameter ordering in any sense, the learner can get stuck in an incorrect grammar. We may wonder if the same problem will occur in our system, since we also have to set two types of parameters, either of which may be responsible for the word order of a language.
Upon further reflection, however, we realize that the situation here is very different from the one in G&W where local maxima occur. First of all, we only have one kind of X-bar parameter - the complement-head parameter - while both the specifier-head and complement-head parameters are present in the model G&W assume. They discovered that local maxima can be avoided if the value of the specifier-head parameter can be fixed before the V2-parameter and the complement-head parameter are set. But this condition is satisfied by default in our system, since there is no specifier-head parameter in our model at all. It is as if the specifier-head parameter were set before the other parameters are considered. According to G&W, this should be sufficient to prevent local maxima. Secondly, the Single Value Constraint is not assumed in our system. G&W have shown that local maxima can also be avoided by removing the Single Value Constraint. This is another reason why local maxima should not occur in our model. We do assume the Greediness Constraint, though. G&W show that the removal of this constraint can help avoid local maxima as well, but will leave the learning algorithm so unconstrained that the correct grammar can only be found by chance.
The solution G&W find most plausible for the prevention of local maxima is parameter ordering. There are several ways of ordering the parameters, and they favor the one where X-bar parameters are set before the V2-parameter. We agree with them that movement operations are costly and should not be considered unless a simple flip of an X-bar parameter fails to solve the problem. We will therefore basically adopt this ordering hypothesis. When a change of parameter values is required, the learner is to try HD-parameters first. Resetting of S(M)-parameters is attempted only if the resetting of HD-parameters fails to make the input string syntactically analyzable. However, the actual implementation of the ordering has to be more sophisticated than the one suggested in G&W. While there is only one "movement parameter" - the V2-parameter - with two possible values in G&W's model, we have eight movement parameters with 864 possible value combinations. The values of the two HD-parameters in our system can interact with any of the 864 value combinations, producing a total of 3456 possible settings. In particular, we must allow the four possible value combinations of HD-parameters, [ i i ], [ i f ], [ f i ] and [ f f ], to interact with each value combination of S(M)-parameters. This can be achieved through the following algorithm.
(191) Given: an ordered list of S(M)-parameter settings Pord.
(i) Initially set all S(M)-parameters to 0 and HD-parameters to any values.
(ii) Select any string S from the target language L and try to parse S.
(iii) If S is successfully parsed, go to (ii);
Otherwise, go to (iv).
(iv) Reset HD-parameters and try to parse S with the new setting.
If S is successfully parsed, retain the new HD-parameter setting and
go back to (ii);
If none of the settings of HD-parameters results in a successful parse of
S, reset S(M)-parameters to the first value combination P1 in Pord,
and remove P1 from Pord. Go back to (ii).
Intuitively, what the above algorithm does is the following. For each of the S(M)-parameter settings, try combining it with each of the four HD-parameter settings. If none of the four makes the input string analyzable, then pick the next S(M)-parameter setting from the ordered list and try the combinations again. In other words, each setting in the ordered list is expanded into four different settings. The setting [ 1, 1, 0, 0, 0, 1, 1, 0 ], for example, shows up as the following four settings:
(192) [ 1, 1, 0, 0, 0, 1, 1, 0, i, i ]
[ 1, 1, 0, 0, 0, 1, 1, 0, i, f ]
[ 1, 1, 0, 0, 0, 1, 1, 0, f, i ]
[ 1, 1, 0, 0, 0, 1, 1, 0, f, f ]
This being the case, we can have an alternative algorithm where some of the steps
in (191) are “compiled" into the ordered list of parameter settings. We can expand
the ordered list of S(M)-parameters in Appendix C by turning each setting in the
list into a group of four settings, each having a different value combination of
HD-parameters. The ordering of settings in the original list now becomes the
ordering of groups of settings, with the original partial order preserved. Within
each group, the four settings can be ordered in any way. For convenience, we
will order them arbitrarily in the order shown in (192). This ordering has the
implication that the default setting for HD-parameters is head-initial. This does
not have to be correct, however, and the success of our learning algorithm does
not depend on this arbitrary choice. The learning algorithm will work the same
way no matter how these settings are ordered within each group, because the four
languages generated with the four settings of a single group never properly include
each other.
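For concreteness, the expansion can be sketched in Prolog as follows; the predicate name and the assumption that each S(M)-setting is an eight-element list are illustrative only and not the code of the appendices.

% A sketch of the expansion: each S(M)-setting (an 8-element list) is
% turned into a group of four settings, one per HD-value combination,
% with the original ordering of settings preserved.
expand([], []).
expand([Setting|Settings], Expanded) :-
    findall(Full,
            ( member(HD, [[i,i],[i,f],[f,i],[f,f]]),
              append(Setting, HD, Full) ),
            Group),
    expand(Settings, Rest),
    append(Group, Rest, Expanded).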
The outcome of the above expansion is obvious. Due to the length of this
expanded list, we are unable to put it in the appendices. But the beginning of the
list is given below to give the reader some concrete idea of what the new list looks
like.
[ 0 0 0 0 0 0 0 0 i i ]
[ 0 0 0 0 0 0 0 0 i f ]
[ 0 0 0 0 0 0 0 0 f i ]
[ 0 0 0 0 0 0 0 0 f f ]
[ 1 0 0 0 0 0 0 0 i i ]
[ 1 0 0 0 0 0 0 0 i f ]
[ 1 0 0 0 0 0 0 0 f i ]
[ 1 0 0 0 0 0 0 0 f f ]
[ 0 1 0 0 0 0 0 0 i i ]
[ 0 1 0 0 0 0 0 0 i f ]
[ 0 1 0 0 0 0 0 0 f i ]
[ 0 1 0 0 0 0 0 0 f f ]
[ 0 0 1 0 0 0 0 0 i i ]
[ 0 0 1 0 0 0 0 0 i f ]
[ 0 0 1 0 0 0 0 0 f i ]
[ 0 0 1 0 0 0 0 0 f f ]
...
With this list in place, we can use the simple algorithm in (189) to set both
the S(M)-parameters and HD-parameters. As a result, the Prolog program in
Appendix A.4 can be used, with very little modification, to set both types of
parameters.
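As a rough illustration (not the actual program of Appendix A.4), the failure-driven loop over this expanded list can be sketched in Prolog as follows, assuming a predicate parse(String, Setting) that succeeds iff String is analyzable under Setting:

% The learner keeps the current setting (the head of the remaining
% ordered list) as long as every input string parses, and advances to
% the next setting in the list only upon a parsing failure.
learn([], [Setting|_], Setting).            % input exhausted: converged
learn([S|Ss], [Setting|Rest], Final) :-
    (   parse(S, Setting)
    ->  learn(Ss, [Setting|Rest], Final)    % success: keep the setting
    ;   learn([S|Ss], Rest, Final)          % failure: try the next setting
    ).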
Testing sessions have been run with this new list of parameter settings. The
session in D.6 illustrates how an individual language can be acquired using the
sp/0 predicate. The target language in this case is
[ s iv aux, s o tv aux, o s tv aux ]
The parameters being set here are the 8 S(M)-parameters and 2 HD-parameters.
All S(F)-parameters are constantly set to 0-0 except S(F(tns)), which is constantly set to 1-0 to allow the occurrence of some auxiliaries. We can find in this session
all the learnability properties listed in (190). The only thing new is the setting of
HD-parameters.
The learnability of all the languages in this expanded parameter space was exhaustively tested using the learn_all_langs/0 predicate. Two sessions (one with often and one without) were run with all S(F)-parameters constantly set to 0-0 and another two sessions with S(F(tns)) set to 1-0. Auxiliaries appear when S(F(tns)) is set to 1-0. The results are again very similar to those obtained when HD-parameters have fixed values. There are a few languages which are not learnable. These languages are again [ o tv s ] and [ o s tv ], which are odd in not having intransitive sentences. When S(F(tns)) is set to 1-0, the unlearnable languages are [ o s tv ], [ o s tv aux ] and [ o tv s aux ]. No language is unlearnable when the position of often is taken into consideration. The reason why these languages are unlearnable without the appearance of often has been discussed earlier. The log files of these sessions are not included in the appendices
for reasons of space, but they are available to anyone who wants to see the actual
process.
In sum, the fact that there are two kinds of word order parameters in our
system does not seem to create any problem for learnability. The success of pa­
rameter setting is attributable to parameter ordering, the absence of the Single
Value Constraint, and the non-existence of specifier-head parameters. It is im­
portant to note that the parameter ordering here is not artificially or arbitrarily
imposed on the learning system. It is derivable from some general linguistic princi­
ple, namely the principle of Procrastinate. This principle not only tells us how the
S(M)-parameters should be ordered but also explains why we should try resetting
HD-parameters before resetting S(M)-parameters. Thus the learning algorithm is
linguistically motivated.
5.3.2 Setting S(F)-Parameters
As mentioned above, the S(F)-parameters can be set independently on the basis
of morphological evidence only. We have assumed that each S(F)-parameter can
have two sub-parameters, represented as F-L, with the value of F (1 or 0) deter­
mining whether the F-feature is spelled out and the value of L (1 or 0) determining
whether the L-feature is spelled out. There are therefore four value combinations
for each S(F)-parameter: 0-0, 0-1, 1-0 and 1-1. The two sub-parameters in each parameter can again be set independently. The value of F is based on the morphological properties of function words (such as auxiliaries) only and the value of
L on the morphology of content words (such as nouns and verbs) only. What we
do in parameter setting is the following. We have to look at the function words,
if there are any, and examine their morphological make-up. We know that no
auxiliary can appear unless F is set to 1 in at least one of the S(F)-parameters.
So the appearance of an auxiliary in the input string tells us that at least one
S(F)-parameter is set to 1-L. (L has an independent value.) Exactly how many
S(F)-parameters should be set to 1-L depends on how much information the aux­
iliary carries. If it is inflected for tense and agreement, for instance, then both
S(F(tns)) and S(F(agr)) will be set to 1-L. We also have to look at the content words and see what features are morphologically represented. For example, a noun inflected for case will tell us that S(F(case)) is set to F-1 (F has an independent value).
It has been assumed that there is a separate learning module which is respon­
sible for finding out whether a word is morphologically inflected and what features
are represented by the inflection. The learner in our system takes the output of
this module as its input. The input in our case consists of symbols which are of the form C-F, where C is a category label such as s, o, iv, tv and aux, and F is a list which contains zero or more features.14 A feature appears in the list only if it is morphologically realized in the language in question. The list is empty if a
word carries no overt inflectional morphology. Here is a sample string.
(194) [ s-[] aux-[agr,tns] iv-[asp] ]
14The only exception is often which will appear by itself without a feature list attached to it.
The string in (194) represents a sentence which consists of the following “words”
from left to right: a subject NP with no inflection, an auxiliary inflected for agree­
ment and tense, and an intransitive verb inflected for aspect. The features that are in the feature list of aux are overt F-features and the one in the feature list of iv is an overt L-feature. We assume that the learner is able to get the representation in (194) using some independent learning strategies. For instance, she is supposed to be able to conclude from words like do, does, did and have, has, had that English auxiliaries are inflected for agreement and tense. Thus the main auxiliary in English should be represented as aux-[agr,tns]. Once representations like the one in (194) are available, the setting of S(F)-parameters is straightforward. For instance, the string in (194) tells us that, in this language, S(F(agr)) and S(F(tns)) are set to 1-L while S(F(asp)) is set to F-1.
At first sight, we might think that there exist relations of proper inclusion in the morphological patterns presented here. For instance, aux-[tns] may seem to be properly included in aux-[agr,tns]. A second thought tells us that this is not true. A feature appears in the list if and only if this feature is morphologically visible. Therefore, aux-[tns] is acceptable to the parser only if S(F(agr)) is set to 0-L, and aux-[agr,tns] is acceptable only if S(F(agr)) is set to 1-L. There is no setting with which both aux-[tns] and aux-[agr,tns] can be grammatically analyzed. If S(F(agr)) is originally set to 1-L, the analysis of aux-[tns] will end in failure, which will trigger the resetting of this parameter to 0-L; likewise, aux-[agr,tns] cannot be analyzed with S(F(agr)) set to 0-L, which will cause it to be reset to 1-L. No setting in the parameter space of S(F)-parameters is a proper subset of another. Because of this, the failure-driven learning algorithm we have assumed will always succeed in setting the S(F)-parameters on the basis of positive evidence only. No parameter ordering or default setting is necessary.
However, in view of the fact that inflectional morphology is generally absent in
children’s speech at the initial stages of language acquisition, we will assume that
all S(F)-parameters are initially set to 0-0. They will retain this value if no overt inflectional morphology is found in the target language. If the target language does have overt morphology, they will be reset to 0-1, 1-0 or 1-1 when the inflections
are detected and analyzed by children in the acquisition process. Notice that this
“initial-0” hypothesis is empirically-based rather than computationally-based. We
assume this in order to make our learning algorithm more natural, but the success
of our algorithm does not depend on this hypothesis.
Now that we have both the morphological parameters (i.e. S(F)-parameters)
and word order parameters (i.e. S(M)- and HD- parameters), we have to decide
which parameters to reset first in the event of a parsing failure. A parsing failure
may occur because
(a) some S(F)-parameter(s) has the wrong value;
(b) some S(M)- or HD- parameter(s) has the wrong value; or
(c) both S(F)-parameters and S(M)/HD parameters have wrong values.
Whether an S(F)-parameter has the wrong value is easy to check. There is a one-to-one mapping between the parameter values and the features in the feature lists. A feature appears in the list if and only if its S(F)-parameter is set to 1. We find an error whenever there is a mismatch: the feature is in the list but its S(F)-parameter is set to 0. In Case (a), the input sentence should become parsable once the wrongly set S(F)-parameter is reset to 1. No resetting of S(M)/HD parameters is necessary. In fact, we will never be able to process the input if we reset S(M)/HD parameters instead of S(F)-parameters. This suggests that S(F)-parameter values should be checked first in the resetting. In Case (b), we will not find a mismatch between the overt features and the S(F)-parameter values. The sentence will not become parsable unless some S(M)/HD parameters are reset. However, we will not know if there is a mismatch unless we check the S(F)-parameter values first. Resetting S(M)/HD parameters before checking S(F)-parameters is harmless in this case, but it will be fatal in Cases (a) and (c). In Case (c), as in Case (a), we will never make the input interpretable by resetting S(M)/HD parameters alone. We have to check the values of S(F)-parameters first. We will find some error and reset the parameters. In Case (c), the input sentence will remain unparsable after the resetting, but now we are sure that the problem is in the values of S(M)/HD parameters. In each of the three cases, we see that it is always safe to check the S(F)-parameters first. The resetting of S(M)/HD parameters can be dangerous if the errors in S(F)-parameter values are not found before the resetting starts. All this indicates that we must check the values of S(F)-parameters before initiating the parameter-setting algorithm for S(M)/HD parameters.
The algorithm that checks S(F)-parameter values can be described as follows.
(195) Go through the feature list of every word in the input sentence and do
the following whenever a feature is encountered in the feature list. For an F-
feature (which is found in an auxiliary or grammatical particle in our system),
leave the S(F)-parameter value of this feature untouched if it is already set
to 1-L; otherwise reset it to 1-L. For an L-feature (which is found on nouns
and verbs), leave the S(F)-parameter value of this feature untouched if it is
already set to F-1; otherwise reset it to F-1.
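A minimal Prolog sketch of (195) might look as follows, assuming words of the form Cat-Feats as in (194) and the S(F)-values kept as Feature-(F-L) pairs; the predicate names are illustrative and this is not the code of Appendix A.6.

% check_sf(+Words, +SF0, -SF): walk through the input string and force
% the appropriate sub-parameter of every overt feature to 1.
check_sf([], SF, SF).
check_sf([often|Ws], SF0, SF) :- !,          % often carries no feature list
    check_sf(Ws, SF0, SF).
check_sf([Cat-Feats|Ws], SF0, SF) :-
    update_feats(Cat, Feats, SF0, SF1),
    check_sf(Ws, SF1, SF).

update_feats(_, [], SF, SF).
update_feats(aux, [F|Fs], SF0, SF) :- !,     % F-feature: force 1-L
    select(F-(_-L), SF0, SF1),
    update_feats(aux, Fs, [F-(1-L)|SF1], SF).
update_feats(Cat, [F|Fs], SF0, SF) :-        % L-feature: force F-1
    select(F-(Fv-_), SF0, SF1),
    update_feats(Cat, Fs, [F-(Fv-1)|SF1], SF).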
Embedding this sub-algorithm in our general acquisition procedure, we now have
the following learning algorithm:
(196) Given: an ordered list of value combinations of S(M)- and HD- parameters Pord.
(i) Initially set all S(M)-parameters to 0, all HD-parameters to i, and all
S(F)-parameters to 0-0.
(ii) Select any string S from the target language L and try to parse S with
the current setting.
(iii) If S is successfully parsed, go to (ii);
Otherwise, go to (iv).
(iv) Check the values of S(F)-parameters using the sub-algorithm in (195)
and parse S again.
If S is successfully parsed, go to (ii);
Otherwise, go to (v).
(v) Reset S(M)/HD parameters to the first value combination P1 in Pord
and remove P1 from Pord. Go back to (ii).
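Steps (iv) and (v) of (196) can be sketched as follows, reusing the check_sf/3 predicate sketched above and assuming a parse/3 relation between a string, a word-order (S(M)/HD) setting and the S(F)-values; all the names here are illustrative assumptions.

% On a failure, repair the S(F)-values first; only if the string still
% fails is the next S(M)/HD setting in Pord adopted.
recover(S, WO0, SF0, Pord0, WO, SF, Pord) :-
    check_sf(S, SF0, SF),              % step (iv): fix S(F)-values first
    (   parse(S, WO0, SF)
    ->  WO = WO0, Pord = Pord0         % the S(F)-repair was enough
    ;   Pord0 = [WO|Pord]              % step (v): adopt the next setting
    ).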
The Prolog program that implements this new algorithm can be found in Appendix A.6. We can again use sp/0 to set the parameters on-line for a given language, learnl/1 to find out if a given language is learnable, and learn_all_langs/0 to check whether all the languages in the parameter space are learnable. The search space in this case is huge, comprising tens of thousands of possible languages. Exhaustive testing by computer has shown that all the languages in this big parameter space are learnable,15 but we do not have to depend on the computer search in order to know that it is true. We know that the S(M)- and HD- parameters can be set correctly by themselves for every language in the parameter space. We also know that the S(F)-parameters can be set correctly on their own as well. Since there is no value dependency between S(F)-parameters and S(M)-/HD- parameters, the combination of those parameters does not result in any additional complexity. We can thus conclude that learnability is guaranteed for all the languages that can be generated in our experimental model.
5.4 Acquiring Little Languages
In this section we examine how the parameters are set for some little languages.
Due to the huge number of languages in the parameter space, it is not possible to put the log file of running learn_all_langs/0 in the appendices. To see the behavior of the learner in this bigger parameter space, we will look at some Prolog sessions where the learnability of several individual languages is tested. The languages being tested are Little English, Little Japanese, Little Berber, Little Chinese, Little German and Little French, which are small subsets of the corresponding languages. We will run learnl/1 on Little English, Little Japanese, Little Berber and Little Chinese to test their learnability and then run sp/0 on Little French and Little German to see the on-line processes whereby these two languages are acquired.
15Again there are those odd languages which are learnable only if often appears in the input
strings.
5.4.1 Acquiring Little English
The portion of English to be acquired is (197).
(197) Little English:
{ s-[c1] aux-[agr,tns] (often) iv-[asp],
s-[c1] aux-[agr,tns] (often) tv-[asp] o-[c2]
}
These strings are instantiated by the English sentences He was reading and He
was chasing her. The copula BE is treated as an auxiliary carrying agreement and tense information. It is a visible T0 which has moved to Agr1-0 before Spell-Out. Aux-[agr,tns] can be spelled out as HAVE, BE or DO in English. The choice of auxiliary in a particular sentence seems to be associated with the aspect feature of the sentence. It is spelled out as BE with progressive aspect, HAVE with perfective aspect, and DO with zero aspect.
Here is the Prolog session where Little English is acquired.
| ?- see(english),read(L),seen,learnl(L).
Trying to learn
[[s-[c1],aux-[agr,tns],iv-[asp]],[s-[c1],aux-[agr,tns],tv-[asp],o-[c2]],
[s-[c1],aux-[agr,tns],often,iv-[asp]],[s-[c1],aux-[agr,tns],often,tv-[as
p],o-[c2]]] ...
s(f(case)) is reset to 0-1
s(f(agr)) is reset to 1-0
s(f(tns)) is reset to 1-0
s(f(asp)) is reset to 0-1
Final setting: [0 0 0 1 0 1 0 0 i i
agr(1-0) asp(0-1) case(0-1) pred(0-0) tns(1-0) ]
Language generated:
[s-[c1],aux-[agr,tns],iv-[asp]] %1
[s-[c1],aux-[agr,tns],tv-[asp],o-[c2]] %2
[s-[c1],aux-[agr,tns],often,iv-[asp]] %3
[s-[c1],aux-[agr,tns],often,tv-[asp],o-[c2]] %4
The language
[[s-[c1],aux-[agr,tns],iv-[asp]],[s-[c1],aux-[agr,tns],tv-[asp],o-[c2]],
[s-[c1],aux-[agr,tns],often,iv-[asp]],[s-[c1],aux-[agr,tns],often,tv-[as
p],o-[c2]]] is learnable.
yes
| ?-
According to the final setting reached by the learner, Little English is a head-initial language where the subject NP moves to Agr1spec and T0 moves to Agr1-0. Agreement and tense features are spelled out on the auxiliary, the aspect feature is spelled out on the verb, and case features on NPs. The four strings generated in %1, %2, %3 and %4 can be exemplified by the English sentences in (198), (199), (200) and (201).
(198) She is running.
(199) He is chasing her.
(200) He has often succeeded.
(201) She has often amused him.
5.4.2 Acquiring Little Japanese
The subset of Japanese to be acquired is (202).
(202) Little Japanese:
{ s-[c1] (often) iv-[tns],
s-[c1] (often) o-[c2] tv-[tns],
o-[c2] s-[c1] (often) tv-[tns]
}
Instances of these strings can be found in (144), (145) and (146), repeated here as
(203), (204) and (205).
(203) Taroo-ga ki-ta
Taroo-Nom come-Past
‘Taroo came.’
(204) Taroo-ga tegami-o kai-ta
Taroo-Nom letter-Acc write-Past
‘Taroo wrote a letter.’
(205) tegami-o Taroo-ga kai-ta
letter-Acc Taroo-Nom write-Past
‘Taroo wrote a letter.’
Let us test its learnability by running learnl/1:
| ?- see(japanese),read(L),seen,learnl(L).
Trying to learn
[[s-[c1],iv-[tns]],[s-[c1],o-[c2],tv-[tns]],[o-[c2],s-[c1],tv-[tns]],
[s-[c1],often,iv-[tns]],[s-[c1],often,o-[c2],tv-[tns]],[o-[c2],s-[c1]
,often,tv-[tns]]] ...
s(f(case)) is reset to 0-1
s(f(tns)) is reset to 0-1
Final setting: [0 0 0 0 0 1 1 1 i i
agr(0-0) asp(0-0) case(0-1) pred(0-0) tns(0-1) ]
Language generated:
[s-[c1],iv-[tns]]
[s-[c1],o-[c2],tv-[tns]]
[o-[c2],s-[c1],tv-[tns]]
[s-[c1],often,iv-[tns]]
[s-[c1],often,o-[c2],tv-[tns]]
[o-[c2],s-[c1],often,tv-[tns]]
The language
[[s-[c1],iv-[tns]],[s-[c1],o-[c2],tv-[tns]],[o-[c2],s-[c1],tv-[tns]],
[s-[c1],often,iv-[tns]],[s-[c1],often,o-[c2],tv-[tns]],[o-[c2],s-[c1]
,often,tv-[tns]]] is learnable.
yes
| ?-
According to the final setting, Little Japanese is a head-initial language where the subject NP and object NP move to Agr1spec and Agr2spec respectively. In addition, one of them must move further up to Cspec. The fact that Little Japanese is identified as head-initial rather than head-final shows that a simple SOV string is not sufficient for the identification of a head-final language. There are alternative settings for Little Japanese, among which are (206) and (207) where IP is head-final. But (206) is not chosen because it requires more overt movements, and (207) is not chosen because it requires optional movement.
(206) [ 1 0 0 0 0 1 1 1 i f
agr(0-0) asp(0-0) case(0-1) pred(0-0) tns(0-1)
]
(207) [ 0 0 0 0 0 1 1 1/0 i f
agr(0-0) asp(0-0) case(0-1) pred(0-0) tns(0-1)
]
5.4.3 Acquiring Little Berber
Little Berber consists of the strings in (208).
(208) Little Berber:
{ iv-[agr,tns] (often) s-[],
s-[] iv-[agr,tns] (often),
tv-[agr,tns] (often) s-[] o-[],
s-[] tv-[agr,tns] (often) o-[]
}
The 3rd and 4th strings in this set are exemplified by (152) and (153), repeated
here as (209) and (210):
(209) i-ara hmad tabrat
3ms-wrote Ahmed letter
'Ahmed wrote the letter.'
(210) hmad i-ara tabrat
Ahmed 3ms-wrote letter
‘Ahmed wrote the letter.’
I have no data on the position of often in Berber. Its position in Little Berber is
purely hypothetical. Let us see how the parameters are set for this language:
| ?- see(berber),read(L),seen,learnl(L).
Trying to learn
[[iv-[agr,tns],s-[]],[s-[],iv-[agr,tns]],[tv-[agr,tns],s-[],o-[]],[s-[],
tv-[agr,tns],o-[]],[iv-[agr,tns],often,s-[]],[s-[],iv-[agr,tns],often],[
tv-[agr,tns],often,s-[],o-[]],[s-[],tv-[agr,tns],often,o-[]]] ...
s(f(agr)) is reset to 0-1
s(f(tns)) is reset to 0-1
Final setting: [1 1 1 1 0 1/0 0 0 i i
agr(0-1) asp(0-0) case(0-0) pred(0-0) tns(0-1) ]
Language generated:
[iv-[agr,tns],s-[]]
[s-[],iv-[agr,tns]]
[tv-[agr,tns],s-[],o-[]]
[s-[],tv-[agr,tns],o-[]]
[iv-[agr,tns],often,s-[]]
[s-[],iv-[agr,tns],often]
[tv-[agr,tns],often,s-[],o-[]]
[s-[],tv-[agr,tns],often,o-[]]
The language
[[iv-[agr,tns],s-[]],[s-[],iv-[agr,tns]],[tv-[agr,tns],s-[],o-[]],[s-[],
tv-[agr,tns],o-[]],[iv-[agr,tns],often,s-[]],[s-[],iv-[agr,tns],often],[
tv-[agr,tns],often,s-[],o-[]],[s-[],tv-[agr,tns],
often,o-[]]] is learnable.
yes
| ?-
In this session, Little Berber is identified as a language where the verb must move to Agr1-0 and the subject NP can optionally move to Agr1spec. The verb is required to have its agreement and tense features spelled out.
5.4.4 Acquiring Little Chinese
Little Chinese consists of the following strings:
(211) Little Chinese:
{ s-[] (often) iv-[] aux-[asp] aux-[pred],
s-[] (often) tv-[] o-[] aux-[asp] aux-[pred],
s-[] (often) o-[] tv-[] aux-[asp] aux-[pred],
o-[] s-[] (often) tv-[] aux-[asp] aux-[pred]
}
Some actual instances of those strings are found in (212)-(215). The grammatical particle ma in the following sentences can be either a question marker or an affirmative marker depending upon the intonation. So they are all ambiguous in their romanized forms, having both an interrogative reading and a declarative reading. (They are not ambiguous when written in Chinese characters, as the two senses of ma are written with two different characters: "吗" and "嘛".)
(212) Ta lai le ma
he come Asp Q/A
'Has he come?' / 'He has come, as you know.'
(213) Ta kan-wan nei-ben shu le ma
you finish reading that book Asp Q/A
'Have you finished reading that book?' or
'He has finished reading that book, as you know.'
(214) Ta nei-ben shu kan-wan le ma
you that book finish reading Asp Q/A
'Have you finished reading that book?' or
'He has finished reading that book, as you know.'
(215) Nei-ben shu ta kan-wan le ma
that book you finish reading Asp Q/A
'Have you finished reading that book?' or
'He has finished reading that book, as you know.'
Here is the session where Little Chinese is acquired.
| ?- see(chinese),read(L),seen,learnl(L).
Trying to learn
[[s-[],iv-[],aux-[asp],aux-[pred]],[s-[],tv-[],o-[],aux-[asp],aux-[pred]
],[s-[],o-[],tv-[],aux-[asp],aux-[pred]],[o-[],s-[],tv-[],aux-[asp],aux-
[pred]],[s-[],often,iv-[],aux-[asp],aux-[pred]],[s-[],often,tv-[],o-[],
aux-[asp],aux-[pred]],[s-[],often,o-[],tv-[],aux-[asp],aux-[pred]],[o-[]
,s-[],often,tv-[],aux-[asp],aux-[pred]]] ...
s(f(asp)) is reset to 1-0
s(f(pred)) is reset to 1-0
Final setting: [0 0 0 0 0 1 1/0 1 f f
agr(0-0) asp(1-0) case(0-0) pred(1-0) tns(0-0) ]
Language generated:
[s-[],iv-[],aux-[asp],aux-[pred]]
[s-[],tv-[],o-[],aux-[asp],aux-[pred]]
[s-[],o-[],tv-[],aux-[asp],aux-[pred]]
[o-[],s-[],tv-[],aux-[asp],aux-[pred]]
[s-[],often,iv-[],aux-[asp],aux-[pred]]
[s-[],often,tv-[],o-[],aux-[asp],aux-[pred]]
[s-[],often,o-[],tv-[],aux-[asp],aux-[pred]]
[o-[],s-[],often,tv-[],aux-[asp],aux-[pred]]
The language
[[s-[],iv-[],aux-[asp],aux-[pred]],[s-[],tv-[],o-[],aux-[asp],aux-[pred]
],[s-[],o-[],tv-[],aux-[asp],aux-[pred]],[o-[],s-[],tv-[],aux-[asp],aux-
[pred]],[s-[],often,iv-[],aux-[asp],aux-[pred]],[s-[],often,tv-[],o-[],
aux-[asp],aux-[pred]],[s-[],often,o-[],tv-[],aux-[asp],aux-[pred]],[o-[]
,s-[],often,tv-[],aux-[asp],aux-[pred]]] is learnable.
yes
| ?-
The learner identified Little Chinese as a head-final language where overt move­
ment to Agrspec is obligatory for the subject NP and optional for the object.
Cspec must be filled at Spell-Out by the subject or by the object if it has moved
to Agr2spec. The verb remains VP-internal while Asp0 and C0 are spelled out in situ.
5.4.5 Acquiring Little French
Here is the subset of French to be acquired.
(216) { s-[c1] iv-[tns,asp] (often),
s-[c1] tv-[tns,asp] (often) o-[c2]
}
It differs from Little English in that (i) there is no auxiliary, and (ii) often appears
post-verbally. The strings in (216) are exemplified by (217) and (218).
(217) Il nage souvent
he swims often
'He often swims.'
(218) Il visite souvent Paris
he visit often Paris
'He often visits Paris.'
The following session shows how Little French is acquired on-line.
| ?- sp.
The initial setting is
[0 0 0 0 0 0 0 0 i i
agr(0-0) asp(0-0) case(0-0) pred(0-0) tns(0-0) ]
Next? [s-[],iv-[]]. %1
Current setting remains unchanged.
Next? [s-[c1],iv-[]]. %2
Unable to parse [s-[c1],iv-[]]
Resetting the parameters ...
s(f(case)) is reset to 0-1
Successful parse.
Next? [s-[c1],iv-[tns]]. %3
Unable to parse [s-[c1],iv-[tns]]
Resetting the parameters ...
s(f(tns)) is reset to 0-1
Successful parse.
Next? [s-[c1],iv-[tns,asp]]. %4
Unable to parse [s-[c1],iv-[tns,asp]]
Resetting the parameters ...
s(f(asp)) is reset to 0-1
Successful parse.
Next? [s-[c1],tv-[tns,asp],o-[c2]]. %5
Current setting remains unchanged.
Next? [s-[c1],iv-[tns,asp],often]. %6
Unable to parse [s-[c1],iv-[tns,asp],often]
Resetting the parameters ...
Word order parameters reset to: [ 1 1 1 1 0 1 0 0 i i ]
Successful parse.
Next? [s-[c1],tv-[tns,asp],often,o-[c2]]. %7
Current setting remains unchanged.
Next? current_setting.
[ 1 1 1 1 0 1 0 0 i i
agr(0-0) asp(0-1) case(0-1) pred(0-0) tns(0-1) ]
Next? generate.
Language generated with current setting:
[s-[c1],iv-[tns,asp]]
[s-[c1],iv-[tns,asp],often]
[s-[c1],tv-[tns,asp],often,o-[c2]]
[s-[c1],tv-[tns,asp],o-[c2]]
Next? bye.
yes
| ?-
The presentation of input strings here is inconsistent with regard to overt morphology: %1 has no overt morphology whatsoever; %2 has overt case only; %3 has overt tense in addition to overt case; and %4 has all the overt morphology in Little French. This is done intentionally in order to mimic children's gradual acquisition of morphological knowledge. It is a common observation that children's morphological knowledge is not acquired instantaneously. At the early stages of language development, they may fail to notice part or all of the inflectional morphology in their target language. When this happens, the sentences which are actually being analyzed by children are strings where part or all of the overt morphology is missing. The string [ s-[] iv-[] ] thus represents a piece of input data whose inflectional markings are not noticed by children. As acquisition progresses, the morphology is gradually worked out and becomes available for syntactic analysis. In the above session, case morphology becomes visible at %2, where S(F(case)) is reset to 0-1. Tense and aspect morphology appears in the input (or rather intake) at %3 and %4, where S(F(tns)) and S(F(asp)) are successively set to 0-1. Up till %5, however, Little French has been treated as a language which has no overt movement. The SVO order found so far is compatible with the structure where every lexical item is in its VP-internal position. The trigger for a different setting comes at %6 where often appears. The new string cannot be parsed unless the subject moves to Agr1spec and the verb moves to Agr1-0. Hence the new setting. After that, every string in (216) is acceptable. The language generated by the current setting is exactly Little French. We thus conclude that this language has been correctly identified in the limit.
5.4.6 Acquiring Little German
Little German is a V2 language which contains the following strings:
(219) { s-[c1] aux-[agr,tns] (often) iv-[asp],
s-[c1] aux-[agr,tns] (often) o-[c2] tv-[asp],
o-[c2] aux-[agr,tns] s-[c1] (often) tv-[asp],
often aux-[agr,tns] s-[c1] o-[c2] tv-[asp]
}
This is the small subset of German main clauses where the auxiliary is in second position and the verb is clause-final. The strings in (219) can be exemplified by the German sentences in (220), (221) and (222). (The word schon in these sentences is an instance of the "often"-type adverb.)
(220) Karl hat schon das Buch gekauft
Karl has already that book bought
'Karl has already bought that book.'
(221) Das Buch hat Karl schon gekauft
that book has Karl already bought
'That book, Karl has already bought.'
(222) Schon hat Karl das Buch gekauft
already has Karl that book bought
'Already Karl has bought that book.'
The following shows how Little German is acquired on-line.
| ?- sp.
The initial setting is
[0 0 0 0 0 0 0 0 i i
agr(0-0) asp(0-0) case(0-0) pred(0-0) tns(0-0) ]
Next? [s-[],aux-[tns],iv-[]]. %1
Unable to parse [s-[],aux-[tns],iv-[]]
Resetting the parameters ...
s(f(tns)) is reset to 1-0
Word order parameters reset to: [ 0 0 0 0 0 1 0 0 i i ]
Successful parse.
Next? [s-[],aux-[tns],o-[],tv-[]]. %2
Unable to parse [s-[],aux-[tns],o-[],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 0 0 1 1 0 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[s-[],often,aux-[tns],iv-[]]
[s-[],often,aux-[tns],o-[],tv-[]]
[s-[],aux-[tns],iv-[]]
[s-[],aux-[tns],o-[],tv-[]]
Next? [o-[],aux-[tns],s-[],tv-[]]. %3
Unable to parse [o-[],aux-[tns],s-[],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 0 0 0 1 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,aux-[tns],o-[],s-[],tv-[]]
[often,aux-[tns],s-[],iv-[]]
[o-[],often,aux-[tns],s-[],tv-[]]
[o-[],aux-[tns],s-[],tv-[]]
Next? [s-[c1],aux-[tns],iv-[]]. %4
Unable to parse [s-[c1],aux-[tns],iv-[]]
Resetting the parameters ...
s(f(case)) is reset to 0-1
Word order parameters reset to: [ 1 0 0 0 0 1 0 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,s-[c1],aux-[tns],iv-[]]
[often,s-[c1],aux-[tns],tv-[],o-[c2]]
[s-[c1],often,aux-[tns],iv-[]]
[s-[c1],often,aux-[tns],tv-[],o-[c2]]
[s-[c1],aux-[tns],iv-[]]
[s-[c1],aux-[tns],tv-[],o-[c2]]
Next? [often,aux-[tns],s-[c1],iv-[]]. %5
Unable to parse [often,aux-[tns],s-[c1],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 1 0 0 0 0 0 1 1 i f ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,aux-[tns],o-[c2],s-[c1],tv-[]]
[often,aux-[tns],s-[c1],iv-[]]
[o-[c2],often,aux-[tns],s-[c1],tv-[]]
[o-[c2],s-[c1],tv-[],aux-[tns]]
Next? [s-[c1],aux-[tns],iv-[]]. %6
Unable to parse [s-[c1],aux-[tns],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 1 0 0 0 1 0 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,s-[c1],aux-[tns],iv-[]]
[often,s-[c1],aux-[tns],tv-[],o-[c2]]
[s-[c1],often,aux-[tns],iv-[]]
[s-[c1],often,aux-[tns],tv-[],o-[c2]]
[s-[c1],aux-[tns],iv-[]]
[s-[c1],aux-[tns],tv-[],o-[c2]]
Next? [often,aux-[tns],s-[c1],iv-[]]. %7
Unable to parse [often,aux-[tns],s-[c1],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 1 0 0 0 0 1 1 i i ]
Successful parse.
Next? [o-[c2],aux-[tns],s-[c1],tv-[]]. %8
Current setting remains unchanged.
Next? [s-[c1],aux-[tns],iv-[]].
Unable to parse [s-[c1],aux-[tns],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 1 0 0 1 0 1 i i ]
Successful parse.
Next? [s-[c1],aux-[agr,tns],iv-[]]. %9
Unable to parse [s-[c1],aux-[agr,tns],iv-[]]
Resetting the parameters ...
s(f(agr)) is reset to 1-0
Word order parameters reset to: [ 0 0 0 1 0 1 0 1 i i ]
Successful parse.
Next? [often,aux-[agr,tns],s-[c1],iv-[]]. %10
Unable to parse [often,aux-[agr,tns],s-[c1],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 1 0 0 1 1 i i ]
Successful parse.
Next? [o-[c2],aux-[agr,tns],s-[c1],tv-[]]. %11
Current setting remains unchanged.
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[]].
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 1 0 1 1 1 i i ]
Successful parse.
Next? [o-[c2],aux-[agr,tns],s-[c1],tv-[]]. %12
Unable to parse [o-[c2],aux-[agr,tns],s-[c1],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 1 0 0 1 1 0 1 1 i f ]
Successful parse.
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[]]. %13
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 1 0 0 1 0 1 1 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,s-[c1],aux-[agr,tns],iv-[]]
[often,s-[c1],aux-[agr,tns],o-[c2],tv-[]]
[o-[c2],s-[c1],aux-[agr,tns],often,tv-[]]
[o-[c2],s-[c1],aux-[agr,tns],tv-[]]
[s-[c1],aux-[agr,tns],often,iv-[]]
[s-[c1],aux-[agr,tns],often,o-[c2],tv-[]]
[s-[c1],aux-[agr,tns],iv-[]]
[s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[]]. %14
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 1 1 1 1 1 i i ]
Successful parse.
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[asp]]. %15
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[asp]]
Resetting the parameters ...
s(f(asp)) is reset to 0-1
Successful parse.
Next? [o-[c2],aux-[agr,tns],s-[c1],tv-[asp]]. %16
Current setting remains unchanged.
Next? [often,aux-[agr,tns],s-[c1],o-[c2],tv-[asp]]. %17
Current setting remains unchanged.
Next? current_setting.
[0 0 0 1 1 1 1 1 i i
agr(1-0) asp(0-1) case(0-1) pred(0-0) tns(1-0) ]
Next? generate.
Language generated with current setting:
[often,aux-[agr,tns],s-[c1],iv-[asp]]
[often,aux-[agr,tns],s-[c1],o-[c2],tv-[asp]]
[o-[c2],aux-[agr,tns],s-[c1],often,tv-[asp]]
[o-[c2],aux-[agr,tns],s-[c1],tv-[asp]]
[s-[c1],aux-[agr,tns],often,iv-[asp]]
[s-[c1],aux-[agr,tns],often,o-[c2],tv-[asp]]
[s-[c1],aux-[agr,tns],iv-[asp]]
[s-[c1],aux-[agr,tns],o-[c2],tv-[asp]]
Next? bye.
yes
| ?-
The first input string is not compatible with the initial setting. It becomes analyzable when S(M(spec1)) is reset to 1 and S(F(tns)) is set to 1-0. The next string (%2) triggers the resetting of S(M(spec2)) to 1. The language generated at this point is an SOV language instead of a V2 language. The next input string (%3) informs the learner of the error and resetting occurs. The learner tries to identify the input language as a non-scrambling one at first, but without success. Resetting continues through %4, %5, %6, and %7. The setting at this point is able to accept a language where the Aux is in second position while the first position can be occupied by an Adverb or an Object. But no subject can appear clause-initially, which shows the setting is still incorrect. Therefore more resetting occurs until after the string at %15 is presented. Meanwhile, the recognition of morphological inflections causes S(F(case)) and S(F(asp)) to be reset to 0-1 whereas S(F(agr)) and S(F(tns)) are reset to 1-0. From this point on, every string in Little German is acceptable. In addition, the language generated with the current setting is not a superset of the target language. The learner has thus converged on the correct grammar for Little German.
5.5 Summary
We have seen in this chapter that the parameter space associated with our experi­
mental grammar has certain desirable learnability properties. The languages gen­
erated in this parameter space are not only typologically interesting but learnable
as well. There is at least one algorithm whereby the parameters can be correctly
set for each language in the parameter space. The precedence rules employed in the learning algorithm, which are crucial to the success of learning, are derivable from a general linguistic principle, the principle of Procrastinate. This
233
makes our learning algorithm linguistically plausible as well as computationally
feasible. There are many issues to which our acquisition model is potentially rel­
evant, but they have not been addressed so far. We may want to examine the
empirical implications of this model for language development. One of the ques­
tions we can ask is how the intermediate settings the learner goes through in our
model relate to the developmental stages that children undergo. Our model may
also be theoretically interesting to the study of language change. The existence of
weakly equivalent languages in our parameter space may provide an explanation
for the kind of reanalysis phenomena where the grammar changes without imme­
diately affecting the language generated. Another issue we may want to pursue
is how plausible our learning algorithm will be when the input is noisy and what
modifications are needed to make the learning procedure more robust. All these
issues are yet to be explored and a serious discussion on them requires much more
work than has been done in this thesis.
Chapter 6
Parsing with S-Parameters
When discussing the parameter space in Chapter 4 and the parameter setting algo­
rithms in Chapter 5, we have assumed a parser which implements the experimental
grammar defined in Chapter 3. In Chapter 4, the parser is used in the generation
of all possible strings of all possible languages in our parameter space. In Chapter
5, it is used by the learner to process incoming sentences or generate all the strings
in her language.1 So far the parser itself has not received any discussion. It is the
goal of this chapter to describe this parser.
The parser to be discussed implements our version of UG, namely, the ex­
perimental grammar we have assumed. It is not strictly speaking an axiomatic
representation of the grammar, but it is equivalent to the grammar in that it ac­
cepts or rejects exactly the same set of sentences/languages as the grammar does.
The parser is universal in the sense that it is capable of processing any language in
the parameter space. In other words, the parsing procedures can be applied in a
uniform fashion no matter which particular language is being processed. The only
thing that can change from language to language is the parameter setting. This
1The Prolog programs pspace.pl in Appendix A.1, sp.pl in Appendix A.4 and spl.pl in Appendix A.6 cannot run without this parser.
should be no surprise because the parser is in fact the embodiment of UG which
does not vary except for the parameter values.
6.1 Distinguishing Characteristics of the Parser
Our discussion of the parser will be restricted to those properties of the parser
which are absent in other parsers. Those properties are independent of the ways in
which the parser might be implemented. They can be preserved no matter whether
the parsing algorithm is top-down, bottom-up, left-corner, or any combinations
thereof. The reason why the parser we build here can be different from all other
parsers is that the grammar used by the parser is different. The most salient
feature which distinguishes our grammar from all other versions of UG is the
uniform treatment of movement. In traditional grammars, most movements are
S-structure movements. The movements that a parser deals with are only those
movements which are overt. The parse trees built by the parser are either S-
structure representations or combinations of S-structure and D-structure. In the
latter case, certain nodes in the parse tree form chains. The constituent that
moves is found at the head of the chain and the tail or foot of the chain consists of a trace. Since S-structure movements can vary from language to language, the chains to be built by the parser can vary according to which language is being parsed. Consequently, the chain-building procedures cannot be universally defined. We have to learn that an X0-chain starts at C0 in German but at Agr1-0 in French. We also have to learn that the Ā-chain resulting from wh-movement must be built
in English but not in Chinese.
Things are different in our present model. We have assumed that there is an underlying set of movements which occur in every language. All those movements are LF movements in the sense that they are all necessary for the successful derivation of the LF representation. The derivation will crash if any of those movements fails to occur. This implies that the parser in our model must build an LF representation in order to find out if a sentence is grammatical. Looking at LF, we see that all languages are identical at this level of representation in that the movement chains found there are the same cross-linguistically. If a sentence is grammatical at all, then its LF representation must have those chains, no matter what language this sentence is from. This means that the process of chain-building can be defined universally as an inborn procedure. The parser goes ahead and constructs an invariant set of chains in a uniform way regardless of what language is being parsed. These chains are all LF chains. If the only representation we needed were LF, then nothing would have to be learned as far as chain-building is concerned.
LF is of course not the only structure the parser has to build. When we talk
about a chain, at least two structural positions must be involved, one represented
by the head of the chain and one by the tail. In terms of a chain resulting from LF
movement, the head is the LF position of a constituent and the tail is the base po­
sition. The base position is the position where the constituent is generated through
lexical projection. All the "content words" in our system are base-generated VP-
internally. It is roughly the D-structure (DS) position in traditional terminology.
Since every LF movement involves moving a constituent from its base position
to the LF position, the chain formed by this movement is a simultaneous repre­
sentation of both positions. When we say that the chains are universal, we in
fact mean that both the LF positions and base positions are invariant. By saying
that the chains can be built uniformly, we have actually concluded that the LF
and "DS" representations of all languages can be constructed through a uniform parsing procedure.
However, LF and DS are not the only representations at which the grammat-
icality of a sentence is determined. We also have to look at the positions where
lexical items are spelled out. In traditional terminology, this representation is
called S-structure (SS). We can borrow this term and use it to refer to the struc­
ture which is fed into the PF component. The “S” thus stands for “Spell Out”
rather than “surface”. This is the structure which is subject to cross-linguistic
variation. The source of variation in our model is the values of S(M)-parameters
which determine which subset of LF movements must be performed before Spell-
Out in a given language. The constituent involved in an LF movement appears
at the head position of a chain at S-structure if it moves before Spell-Out. It
appears at the tail position if it moves after Spell-Out. To find out if a sentence
is grammatical in a given language, we have to take the S-structure positions of
all constituents into consideration. In other words, the parse tree we build must
represent SS in addition to LF and DS. This is not difficult to do. LF and DS positions can as usual be represented by the chain, with its head invariably marking the LF position and the tail the DS position. The SS position can be indicated by the position of the lexical head in the chain. It must be at the head position if it moves
before Spell-Out and it must be at the tail position if it moves after Spell-Out. We
can thus represent LF, DS and SS in a single parse tree. Examples of such parse
trees are given in (223) and (224).
(223) [parse tree reproduced as a figure: S(M)-parameters set to [ 0 0 0 0 0 1 1 0 ]; Subject in Agr1spec, Object in Agr2spec, Verb in V0 - surface order SOV]
(224) [parse tree reproduced as a figure: S(M)-parameters set to [ 1 1 1 1 0 0 0 0 ]; Verb in C0, Subject and Object in their VP-internal positions - surface order VSO]
The chains in those parse trees are represented through coindexing and feature-
sharing. There are three chains in each of the trees:
(i) A V-chain linking the nodes C0, Agr1-0, T0, Asp0, Agr2-0 and V0. All those nodes are assigned the same index: 1. They also share the following feature values: phi:X, tns:T, asp:A, and th:[agt,pat]. This chain is formed by successive head movements from V0 to C0. The variables in the chain, X, T and A, can be instantiated to things like 3sm (3rd person singular masculine), pres (present tense), prog (progressive aspect) in an actual sentence.
(ii) An NP chain linking the nodes Cspec, Agr1spec and the first Vspec. These nodes are coindexed 2 and they share the following feature values: case:1, op:+, phi:X, and theta:agt. This chain is formed by two successive movements of the subject NP: an A-movement from Vspec to Agr1spec and an Ā-movement from Agr1spec to Cspec. The fact that both the phi feature in this NP chain and the phi feature in the V-chain have the value X indicates that the subject and the verb must agree with each other. The + value of the op feature in this NP chain shows that this NP can be the topic or focus of the sentence, or a QP which receives a wide-scope reading.
(iii) Another NP chain linking the nodes Agr2spec and the second Vspec. The index for these nodes is 3 and the feature values that are shared between them are case:2, op:-, phi:Y, and theta:pat. This chain is formed by movement of the object NP from Vspec to Agr2spec.
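In Prolog, this kind of coindexing and feature-sharing falls out naturally from unification. The following sketch (with illustrative node names, not the actual data structures of the parser) shows how the six nodes of the V-chain can literally share the variables Phi, Tns and Asp, so that instantiating one of them instantiates the whole chain:

% Illustrative sketch: the six nodes of the V-chain share one index and
% one feature bundle, so e.g. Phi = '3sm' propagates to every node.
v_chain(Index, Phi, Tns, Asp, Chain) :-
    Feats = [phi:Phi, tns:Tns, asp:Asp, th:[agt,pat]],
    Chain = [node(c0, Index, Feats),     node(agr1_0, Index, Feats),
             node(t0, Index, Feats),     node(asp0, Index, Feats),
             node(agr2_0, Index, Feats), node(v0, Index, Feats)].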
As far as these chains are concerned, (223) and (224) are identical. This should be the case because these chains are formed by LF movements which are universal.2
2Both (223) and (224) have an alternative representation where Cspec is coindexed with the object NP, which will then have the + value for the operator feature. This is possible when the object is understood to be the topic, focus, or wide-scope QP of the sentence.
What differentiates (223) and (224) are the positions of lexical items in these chains. The lexical items, which are actually pronounced, are represented by Subject, Object and Verb in those trees. In a real sentence they will be real words like he, him and likes. In (223), Subject is in Agr1spec, Object in Agr2spec, and Verb in V0. In (224), Verb is in C0, and Subject and Object are in their VP-internal positions. These positions tell us where Subject, Object and Verb are at the time of Spell-Out. In (223), Subject and Object have moved overtly to their case positions and the surface word order is SOV. In (224), Verb has overtly moved to C0 and the surface order is VSO. We get the tree in (223) when the S(M)-parameters are set to [ 0 0 0 0 0 1 1 0 ] and we get (224) when they are set to [ 1 1 1 1 0 0 0 0 ]. It is clear that three levels of representation - LF, DS and SS - are merged into one in those parse trees. The LF and DS positions are identical in the two trees but the SS positions are different.
In building trees of this kind, the parser is constructing three levels of repre­
sentations at the same time. There are three things that the parser must do. It
must (i) build the tree; (ii) build the chains; and (iii) decide where the lexical item
appears in each chain.
Chain-building is universal, as we have seen. The parsing procedures which are responsible for chain formation can therefore be invariant. They can simply be hard-wired into the parser, since they do not respond to language variation at all. So this part of the parsing mechanism can be assumed to be innate.
Tree-building is universal in that the same set of nodes are built in a given
type of sentence no matter what the language is. This is illustrated in (223) and
(224) which represent different languages but have the same nodes. Furthermore
the same set of dominance relations holds between those nodes in every language.
Agr1P is always dominated by C1, for example. However, linear precedence may vary according to the values of HD-parameters. When building CP and IP, the parser must be able to respond in two different ways. It has to build the phrase head-initially if the HD-parameter is set to i and head-finally if the parameter is set to f. These alternative actions can again be built in. We can suppose that the parser is innately able to build a phrase in either way. In parsing a particular language, it has to receive instructions from the HD-parameters in order to decide which action to take.
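A single projection step of this kind can be sketched as follows; the xp/2 term is an illustrative stand-in for whatever phrase structure the parser actually builds:

% i = head-initial, f = head-final.
project(i, Head, Comp, xp(Head, Comp)).   % head precedes its complement
project(f, Head, Comp, xp(Comp, Head)).   % head follows its complement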
The task of deciding where the lexical item appears in each chain consists of
the following computation. The parser must determine for each terminal node in
the tree whether this node must dominate a lexical item. In addition, it must
make sure that only one node in each chain dominates a lexical item. Can these
decisions again be made universally? The answer appears to be negative at first
sight. Apparently, the action the parser takes at a given terminal node can vary
cross-linguistically. In some languages the terminal node must dominate a lexical
item while in some other languages it must be empty. It seems that we may need
to specify the parser action at each terminal for each different language. This does
not have to be the case, however. It is true that these decisions are language-
specific, but the specifications exist in the parameter settings rather than in the
parser itself. In our model, the surface position of a lexical item is determined
by the values of S(M)-parameters. These parameters decide how far each lexical
item moves before Spell-Out. A terminal node must dominate a lexical item if
this lexical item has moved exactly to that node at Spell-Out. The node should
dominate nothing (i.e. empty) if the lexical item in that chain has either moved
through this node to a higher position or has not moved to this node by the time
of Spell-Out. Take T0 as an example. It must dominate a verb if S(M(tns)), S(M(asp)) and S(M(agr2)) are set to 1 while S(M(agr1)) is set to 0. In this case, the verb will overtly move to T0 but no further. T0 must be empty in either of the following situations: (a) S(M(tns)), S(M(asp)) and S(M(agr2)) are set to 1 and S(M(agr1)) is set to 1 as well (the verb moves through T0 to a higher position); or (b) S(M(tns)) is set to 0 (the verb does not move to T0 before Spell-Out). We assume that, whenever a terminal node is built, the parser is able to
consult the S(M)-parameter values and decide whether the node should be empty
or not. We can further assume that such decision-making capability of the parser
is innate. There can be a built-in mechanism which checks the parameter values
and takes the right actions accordingly. If this is true, no learning is needed in this
part of the parsing procedure, either. What has to be learned is the parameter
setting which exists independently of the parser. Once the parameters are set for
a given language, the parser will be able to process that language.
It turns out that, given a certain value combination of S(M)-parameters, the
action that the parser must take at any given terminal node is unique. The fol­
lowing table shows the S(M)-parameter conditions under which a terminal node is
to dominate a content word (V or NP).
V in C0: S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(1))) & S(M(agr2(1)))
V in Agr1-0: S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(1))) & S(M(agr2(1)))
V in T0: S(M(agr1(0))) & S(M(tns(1))) & S(M(asp(1))) & S(M(agr2(1)))
V in Asp0: S(M(tns(0))) & S(M(asp(1))) & S(M(agr2(1)))
V in Agr2-0: S(M(asp(0))) & S(M(agr2(1)))
V in V0: S(M(agr2(0)))
NP in Cspec: S(M(cspec(1))) & (S(M(spec1(1))) or S(M(spec2(1))))
NP in Agr1spec: S(M(spec1(1)))
NP in Agr2spec: S(M(spec2(1)))
NP in Vspec1: S(M(spec1(0)))
NP in Vspec2: S(M(spec2(0)))
(225) Decision Table for the Spell-Out of V and NP
Notice that the conditions for an NP to appear in Agr1spec or Agr2spec are necessary but not sufficient conditions. S(M(spec1(1))) is necessary for Agr1spec to dominate a lexical NP, but it is not sufficient. The NP may move further up to Cspec if S(M(cspec(1))) holds. However, we cannot state the condition as S(M(spec1(1))) & S(M(cspec(0))) because that would be too limiting. The subject NP may stay in Agr1spec with S(M(cspec)) set to 1 if the object NP moves to Cspec instead. By stating the necessary conditions only, we allow some kind of non-determinism which makes it possible to have alternative NPs (subject or object) in Cspec in some languages. But this non-determinism is local to the position of Agrspec. The choice can be made deterministically when a whole sentence is taken into consideration. Let us take Agr1spec again as an example. Suppose both S(M(spec1)) and S(M(cspec)) are set to 1. Upon seeing S(M(spec1(1))), the parser may decide to put the subject NP under Agr1spec. This would be the correct choice if the object NP has moved to Cspec. If the object is not there, however, Cspec will be empty, which is contradictory to S(M(cspec(1))). The parser will realize that a mistake has been made and it will try the other choice - putting the subject in Cspec and leaving Agr1spec empty - which is also permitted by the current parameter setting.
It should also be noted that the V0 in (225) is the head of the top layer of VP.
The lower V0(s) are always empty. The assumption is that the verb always moves
through all layers of VP to the top one before Spell-Out. There is no variation
there.
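The table in (225) translates almost directly into Prolog clauses. In the sketch below, an S(M)-setting is assumed to be a list [C,Agr1,Tns,Asp,Agr2,Spec1,Spec2,Cspec] of 0/1 values; this ordering, and the predicate names, are assumptions of the sketch, not of the thesis.

% spell_out_v(+Node, +Setting): the verb is pronounced at Node.
spell_out_v(c0,     [1,1,1,1,1,_,_,_]).
spell_out_v(agr1_0, [0,1,1,1,1,_,_,_]).
spell_out_v(t0,     [_,0,1,1,1,_,_,_]).
spell_out_v(asp0,   [_,_,0,1,1,_,_,_]).
spell_out_v(agr2_0, [_,_,_,0,1,_,_,_]).
spell_out_v(v0,     [_,_,_,_,0,_,_,_]).

% spell_out_np(+Node, +Setting): necessary conditions only (see the
% remarks above about the resulting local non-determinism).
spell_out_np(cspec,    [_,_,_,_,_,S1,S2,1]) :- ( S1 = 1 ; S2 = 1 ).
spell_out_np(agr1spec, [_,_,_,_,_,1,_,_]).
spell_out_np(agr2spec, [_,_,_,_,_,_,1,_]).
spell_out_np(vspec1,   [_,_,_,_,_,0,_,_]).
spell_out_np(vspec2,   [_,_,_,_,_,_,0,_]).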
In addition to deciding whether a terminal should contain a content word or
not, the parser also has to check, in cases where the terminal is not empty, what
features are overtly expressed in the word. An NP may or may not be overtly
marked for case; a verb can be overtly marked for the predication, agreement, tense
or aspect feature, or any combination of them. A sentence is accepted only if the
morphological patterns are correct as well. These patterns are determined by the
values of S(F)-parameters. An NP must be overtly marked for case if S(F(case)) is set to F-1; otherwise it must be set to F-0. A verb can be overtly marked for predication, agreement, tense or aspect if and only if S(F(pred)), S(F(agr)), S(F(tns)) or S(F(asp)) is set to 1; it must have no overt inflectional morphology if every S(F)-parameter is set to 0. The complete set of possible spell-outs of s (subject NP), o (object NP), and v (verb) and the S(F)-parameter values which
are necessary and sufficient conditions for the spell-out are given in (226). As
usual, we will indicate the overtness of a feature by placing it in the feature list of
every symbol. For conciseness, we will use v as a cover term for both iv and tv.
246
s-[]                   S(F(case(L-0)))
s-[c1]                 S(F(case(L-1)))
o-[]                   S(F(case(L-0)))
o-[c2]                 S(F(case(L-1)))
v-[]                   S(F(pred(L-0))) & S(F(agr(L-0)))
                       & S(F(tns(L-0))) & S(F(asp(L-0)))
v-[pred]               S(F(pred(L-1))) & S(F(agr(L-0)))
                       & S(F(tns(L-0))) & S(F(asp(L-0)))
v-[agr]                S(F(pred(L-0))) & S(F(agr(L-1)))
                       & S(F(tns(L-0))) & S(F(asp(L-0)))
v-[tns]                S(F(pred(L-0))) & S(F(agr(L-0)))
                       & S(F(tns(L-1))) & S(F(asp(L-0)))
v-[asp]                S(F(pred(L-0))) & S(F(agr(L-0)))
                       & S(F(tns(L-0))) & S(F(asp(L-1)))
v-[pred,agr]           S(F(pred(L-1))) & S(F(agr(L-1)))
                       & S(F(tns(L-0))) & S(F(asp(L-0)))
v-[pred,tns]           S(F(pred(L-1))) & S(F(agr(L-0)))
                       & S(F(tns(L-1))) & S(F(asp(L-0)))
v-[pred,asp]           S(F(pred(L-1))) & S(F(agr(L-0)))
                       & S(F(tns(L-0))) & S(F(asp(L-1)))
v-[agr,tns]            S(F(pred(L-0))) & S(F(agr(L-1)))
                       & S(F(tns(L-1))) & S(F(asp(L-0)))
v-[agr,asp]            S(F(pred(L-0))) & S(F(agr(L-1)))
                       & S(F(tns(L-0))) & S(F(asp(L-1)))
v-[tns,asp]            S(F(pred(L-0))) & S(F(agr(L-0)))
                       & S(F(tns(L-1))) & S(F(asp(L-1)))
v-[pred,agr,tns]       S(F(pred(L-1))) & S(F(agr(L-1)))
                       & S(F(tns(L-1))) & S(F(asp(L-0)))
v-[pred,agr,asp]       S(F(pred(L-1))) & S(F(agr(L-1)))
                       & S(F(tns(L-0))) & S(F(asp(L-1)))
v-[pred,tns,asp]       S(F(pred(L-1))) & S(F(agr(L-0)))
                       & S(F(tns(L-1))) & S(F(asp(L-1)))
v-[agr,tns,asp]        S(F(pred(L-0))) & S(F(agr(L-1)))
                       & S(F(tns(L-1))) & S(F(asp(L-1)))
v-[pred,agr,tns,asp]   S(F(pred(L-1))) & S(F(agr(L-1)))
                       & S(F(tns(L-1))) & S(F(asp(L-1)))

(226) Decision Table for the Spell-Out of L-Features
So far we have limited the lexical items that can appear in the tree to content
words (verbs and NPs) only. There is another kind of visible (and pronounceable)
element that can be dominated by terminal nodes: auxiliaries/grammatical particles,
which we have represented as Aux. Whether a terminal node can dominate an
Aux has to be determined by both S(M)-parameters and S(F)-parameters. Take
T0 as an example. There are three situations where T0 can dominate an Aux.

(i) When S(M(tns(0))) & S(F(tns(1-X))) & S(M(agr1(0))) is true. This condition
requires that (a) no constituent move to T0, (b) the F-feature of T0
be spelled out as an Aux, and (c) this Aux remain in situ. In this case, we
have aux-[tns] in T0. (A Prolog sketch of this condition is given after the list.)

(ii) When S(M(asp(0))) & S(F(asp(1-X))) & S(M(tns(1))) & S(M(agr1(0)))
& S(F(tns(0-X))) is true. This condition requires that (a) no constituent
move to Asp0, (b) the F-feature of Asp0 be spelled out as an Aux, (c) this
Aux move to T0, (d) it move no further up after moving to T0, and (e) the
F-feature of T0 not be spelled out. In this case, we have aux-[asp] in T0.

(iii) When S(M(asp(0))) & S(F(asp(1-X))) & S(M(tns(1))) & S(M(agr1(0)))
& S(F(tns(1-X))) is true. This condition requires that (a) no constituent
move to Asp0, (b) the F-feature of Asp0 be spelled out as an Aux, (c) this
Aux move to T0, (d) it move no further up after moving to T0, and (e) the
F-feature of T0 be spelled out as well. In this case, we have aux-[tns,asp]
in T0.
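As an illustration only, condition (i) can be transcribed into Prolog along the following lines. The value stores sm/2 and sf/2 are hypothetical, and we assume here that the first member of an S(F) value pair records spell-out as an Aux, as in table (227) below.

% Hypothetical parameter stores for this example.
sm(tns, 0).       % (a) no constituent moves into T0 overtly
sm(agr1, 0).      % (c) nothing moves on from T0 overtly
sf(tns, 1-0).     % (b) the tns F-feature is spelled out as an Aux

% aux_in_t0(-Feats): T0 dominates an Aux with feature list Feats.
aux_in_t0([tns]) :-
    sm(tns, 0),
    sf(tns, 1-_),     % 1-X: the Aux component of the value is 1
    sm(agr1, 0).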
We assume that an auxiliary can be overtly inflected for a certain feature only if it
has originated from or moved through the position in which this feature is found.
An Aux can have tns in its feature list only if it is in T0, or has moved from or
through T0. In order for an Aux to be spelled out as aux-[pred,agr,tns,asp],
for instance, two conditions must be met. First, the Aux must have originated from
Asp0 and moved all the way up to C0; second, S(F(pred)), S(F(agr)), S(F(tns))
and S(F(asp)) must all be set to 1-X. In short, the spell-out of Aux has to be
determined by the values of both S(M)- and S(F)-parameters. The following table
lists all the possible spell-outs of Aux and the necessary and sufficient conditions
for each possibility. Since an Aux can have a different set of spell-out possibilities
in each different position, we have to consider them one by one.³
Aux in C0:
  aux-[pred]               S(F(pred(1-X))) & S(M(c(0)))
  aux-[agr]                S(F(pred(0-X))) & S(F(agr(1-X))) & S(M(c(1)))
                           & S(M(agr1(0)))
  aux-[tns]                S(F(pred(0-X))) & S(F(agr(0-X))) & S(F(tns(1-X)))
                           & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
  aux-[asp]                S(F(pred(0-X))) & S(F(agr(0-X))) & S(F(tns(0-X)))
                           & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[pred,agr]           S(F(pred(1-X))) & S(F(agr(1-X)))
                           & S(M(c(1))) & S(M(agr1(0)))
  aux-[pred,tns]           S(F(pred(1-X))) & S(F(agr(0-X))) & S(F(tns(1-X)))
                           & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
  aux-[pred,asp]           S(F(pred(1-X))) & S(F(agr(0-X))) & S(F(tns(0-X)))
                           & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[agr,tns]            S(F(pred(0-X))) & S(F(agr(1-X))) & S(F(tns(1-X)))
                           & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
  aux-[agr,asp]            S(F(pred(0-X))) & S(F(agr(1-X)))
                           & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(1)))
                           & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
  aux-[tns,asp]            S(F(pred(0-X))) & S(F(agr(0-X))) & S(F(tns(1-X)))
                           & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[pred,agr,tns]       S(F(pred(1-X))) & S(F(agr(1-X))) & S(F(tns(1-X)))
                           & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
  aux-[pred,agr,asp]       S(F(pred(1-X))) & S(F(agr(1-X)))
                           & S(F(tns(0-X))) & S(F(asp(1-X)))
                           & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[pred,tns,asp]       S(F(pred(1-X))) & S(F(agr(0-X)))
                           & S(F(tns(1-X))) & S(F(asp(1-X)))
                           & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[agr,tns,asp]        S(F(pred(0-X))) & S(F(agr(1-X)))
                           & S(F(tns(1-X))) & S(F(asp(1-X)))
                           & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[pred,agr,tns,asp]   S(F(pred(1-X))) & S(F(agr(1-X)))
                           & S(F(tns(1-X))) & S(F(asp(1-X)))
                           & S(M(c(1))) & S(M(agr1(1)))
                           & S(M(tns(1))) & S(M(asp(0)))

Aux in Agr1-0:
  aux-[agr]                S(F(agr(1-X))) & S(M(c(0))) & S(M(agr1(0)))
  aux-[tns]                S(F(agr(0-X))) & S(F(tns(1-X))) & S(M(c(0)))
                           & S(M(agr1(1))) & S(M(tns(0)))
  aux-[asp]                S(F(agr(0-X))) & S(F(tns(0-X))) & S(F(asp(1-X)))
                           & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1)))
                           & S(M(asp(0)))
  aux-[agr,tns]            S(F(agr(1-X))) & S(F(tns(1-X))) & S(M(c(0)))
                           & S(M(agr1(1))) & S(M(tns(0)))
  aux-[agr,asp]            S(F(agr(1-X))) & S(F(tns(0-X))) & S(F(asp(1-X)))
                           & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1)))
                           & S(M(asp(0)))
  aux-[tns,asp]            S(F(agr(0-X))) & S(F(tns(1-X))) & S(F(asp(1-X)))
                           & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1)))
                           & S(M(asp(0)))
  aux-[agr,tns,asp]        S(F(agr(1-X))) & S(F(tns(1-X))) & S(F(asp(1-X)))
                           & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1)))
                           & S(M(asp(0)))

Aux in T0:
  aux-[tns]                S(F(tns(1-X))) & S(M(agr1(0))) & S(M(tns(0)))
  aux-[asp]                S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(agr1(0)))
                           & S(M(tns(1))) & S(M(asp(0)))
  aux-[tns,asp]            S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(agr1(0)))
                           & S(M(tns(1))) & S(M(asp(0)))

Aux in Asp0:
  aux-[asp]                S(F(asp(1-X))) & S(M(tns(0))) & S(M(asp(0)))

(227) Decision Table for the Spell-Out of F-Features

³Agr2-0 is being ignored here because we are restricting ourselves to situations
where there is no object-verb agreement. We can easily extend the table to
object-verb agreement, but we choose not to do so for the sake of avoiding
unnecessary complications in our exposition.
Note that Aux can never have an empty list attached to it. This is so because
Aux is the spell-out of some F-feature(s) and there can be no Aux if no feature is
spelled out.

Using the decision tables in (225), (226) and (227), the parser can uniquely
determine the status of each terminal. If the condition in (225) holds, the terminal
must contain a content word (a verb or a lexical NP). It then uses (226) to
determine the inflectional pattern of this word. If the condition in (227) holds, the
terminal must dominate an Aux of a particular morphological make-up. Notice
that there is no overlap between the conditions in (225) and (227). If neither
the conditions in (225) nor those in (227) are met, the terminal must be empty,
i.e., dominating no lexical item.
Now we sum up the characteristics of the parser in our model. Like all other
principle-based parsers, the present parser has to do at least three things: it has to
build a tree, it has to build chains, and it has to decide which terminals are empty
and which terminals contain lexical material. What makes this parser special
is the degree of universality found in the parsing procedures. As we have seen,
chain-building is universal and tree-building is universal aside from the limited
variation in head direction. Most of the language-particular decisions are made at
the terminal nodes, and these decisions can always be made correctly by consulting
the values of S-parameters.

The feature which sets this parser apart from all other parsers is the way in
which traces or empty categories are posited. For all existing parsers, the decision
of whether to posit an empty category cannot be made locally. In order to license
a trace in a given position, the parser must find in the left or right context a lexical
item which has moved from this position. With our present parser, however, the
decision can be made on the basis of the S-parameter setting alone. We can
simply look at the parameter values and decide whether a node must be empty or
non-empty. No memory of the left context or look-ahead into the right context is
necessary. The chains are built universally, so we know in advance how far a given
chain goes. This property can be especially valuable for parsing structures with
many empty categories, such as those assumed in current syntactic theory.

In the next section, we will look at a Prolog implementation of this parser.
This will clarify the discussion in this section and add some concreteness to our
understanding of the parsing algorithm.
6.2 A Prolog Implementation

In this section we look at a particular implementation of the parser described
above. It is a top-down parser implemented in the DCG (definite clause grammar)
format. We present a top-down version of the parser in this thesis because this
seems to be the simplest way to describe the underlying logic of the parser. It is
not theoretically superior, nor is it the most efficient parser that can be built in
the present model. We choose it in order to make the main characteristics of the
parser clear without getting into the complications of parsing strategies. Once the
logic is clear, we can implement it in many other ways.

The top-down parser to be discussed is presented in Appendix A.7. We shall
examine it by looking at the following sub-processes one by one: (a) tree-building;
(b) feature-checking; and (c) leaf-attachment.
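For readers unfamiliar with the DCG format, the following toy fragment (not part of the actual parser) shows what such rules look like; the DCG translator silently adds two hidden difference-list arguments to every predicate.

% A toy DCG, shown only to illustrate the notation used in Appendix A.7.
s  --> np, vp.        % translated to s(S0,S) :- np(S0,S1), vp(S1,S).
np --> [she].
vp --> [sleeps].

% ?- phrase(s, [she, sleeps]).   % succeeds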
6.2.1 Tree-Building
The tree is built by cp/3, c1/4, agr1p/5, agr1_1/4, tp/6, t1/6, asp_p/6, asp1/6,
agr2p/6, agr2_1/5, vp/5 and v1/5.⁴ These predicates implement the following
phrase structure rules, which are equivalent to the rules in (64) presented as part
of our experimental grammar in Chapter 3.
(228)
  CP → XP C1                     (i)
  C1 → { C0, Agr1P }             (ii)
  Agr1P → NP(1) Agr1-1           (iii)
  Agr1-1 → { Agr1-0, TP }        (iv)
  TP → T1                        (v)
  T1 → often T1                  (vi)
  T1 → { T0, AspP }              (vii)
  AspP → Asp1                    (viii)
  Asp1 → { Asp0, Agr2P }         (ix)
  Agr2P → NP(2) Agr2-1           (x)
  Agr2P → Agr2-1                 (xi)
  Agr2-1 → { Agr2-0, VP(1) }     (xii)
  VP(1) → NP(1) V1(1)            (xiii)
  V1(1) → V0 VP(2)               (xiv)
  V1(1) → V0                     (xv)
  VP(2) → NP(2) V1(2)            (xvi)
  V1(2) → V0                     (xvii)
The XP in (i) can be an NP (subject or object) or an AdvP such as often. The
first clause of cp/3 takes care of the case where Cspec is occupied by an NP. The
second clause lets Cspec contain an AdvP, whose instantiation is limited to often
in the present system.

Some of the rules in (228) have their right-hand side enclosed in curly brackets.
The symbols in these brackets are unspecified for linear order. Which symbol
precedes the other depends on the values of HD-parameters. This is why c1/4,
agr1_1/4, t1/6, asp1/6 and agr2_1/5 each have at least two clauses, one for the
value i (head-initial) and one for f (head-final).

⁴The predicates are again referred to by the notation X/Y, where X is the predicate name
and Y is the number of arguments in the predicate. Notice that the two arguments representing
difference lists are invisible due to the DCG format used here.
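Schematically, the two clauses for c1 look like this. This is a hypothetical simplification, not the clauses of A.7, and hd1/1 is an invented store for the HD-parameter of CP.

hd1(i).   % current HD-parameter value for the CP projection

% Head-initial order: C0 precedes Agr1P.
c1(c1(H,C)) --> { hd1(i) }, c0(H), agr1p(C).
% Head-final order: C0 follows Agr1P.
c1(c1(C,H)) --> { hd1(f) }, agr1p(C), c0(H).

% Dummy daughters so the sketch runs on its own.
c0(c0)       --> [].
agr1p(agr1p) --> [s, v].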
Two different NPs are distinguished in these rules: NP(1) and NP(2). NP(1)
is the subject NP, which is assigned Case1, and NP(2) is the object NP, which has
Case2. This distinction is implemented by the checking requirements case(NF) === c1
and case(NF) === c2 in agr1p/5, agr2p/6 and vp/5. (The variables NF, CF, Agr1F,
Agr2F, TF, AspF, VF and AdF in the program represent the feature matrices of NP, C,
Agr1, Agr2, T, Asp, V and Adv respectively.)

Differentiation is also made between two VPs: VP(1) and VP(2). VP(1) is
the top layer of VP, whose specifier is the subject NP. This layer of VP is always
present in the structure, since every sentence must have at least one argument.
VP(2) only appears in transitive sentences and its specifier is the object NP.
As we are restricting ourselves to sentences with no more than two arguments
for the time being, VP(2) will be the bottom layer of VP. Hence the rules in
(xvi) and (xvii), which close the whole VP. VP(1) may or may not also be the
bottom layer, depending on whether the verb is transitive or not. This is why
there are two different expansions of V1(1) ((xiv) and (xv)), one taking a VP(2)
complement and one closing the VP. In the Prolog implementation, the distinction
between VP(1) and VP(2) is made by theta-grid checking. The first clause of
vp/5, corresponding to (xiii), requires [agt|Ths], which shows that the theta-role
assigned in this layer of VP is the agent role. The second clause corresponds to
(xvi). It requires [pat], which shows that the agent role has already been assigned
in the layer above.
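The theta-grid dispatch can be pictured with the following simplified sketch. This is hypothetical code; the real vp/5 and v1/5 carry chain and feature arguments as well, and np_spec and v0 here are placeholders.

% vp(ThetaGrid): [agt|_] selects the VP(1) layer, [pat] the VP(2) layer.
vp([agt|Ths]) --> np_spec, v1(Ths).   % (xiii): assign the agent role
vp([pat])     --> np_spec, v1([]).    % (xvi): assign the patient role

v1([pat|_]) --> v0, vp([pat]).        % (xiv): transitive, take a VP(2)
v1([])      --> v0.                   % (xv)/(xvii): close the VP

np_spec --> [np].                     % placeholder specifier
v0      --> [].                       % the lower V0 is empty anyway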
We notice that there are two alternative ways of expanding Agr2P: (x) and
(xi). The rule in (x) applies when the sentence is transitive and (xi) applies when it
is intransitive. The distinction is again made by theta-grid checking. The specifier
of Agr2P is generated only if the theta-grid contains two theta-roles.

The rules in (vi) and (vii) permit zero or more occurrences of often to be attached
to T1. Thus T1 can be expanded recursively, allowing an infinite number of often
to be adjoined to T1. In our implementation, however, the recursion is interrupted
so that at most one often can be generated. (This is done by adding a dummy
argument to t1/6 after often is attached.) We have to do this because the parser
also functions as a generator in the system. The generator is used to generate all
possible strings in a language. By limiting recursion to depth 1, we can prevent
the generation from being infinite.
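The trick can be reproduced in miniature as follows; this is hypothetical code, since the real t1/6 threads several more arguments.

% t1(Depth): the dummy Depth argument blocks a second adjunction of often.
t1(D) --> { D < 1, D1 is D + 1 }, [often], t1(D1).
t1(_) --> [t0].

% ?- phrase(t1(0), S).
% S = [often, t0] ;
% S = [t0].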
We find that the second clause of cp/3, the third clause of agr1p/5, and the
third, fourth and fifth clauses of t1/6 are commented out by "%" in the Prolog
program. These clauses are needed only if we want often to appear in the strings.
6.2.2 Feature-Checking
There are two types of feature-checking in the parsing process. One shows up
as Spec-head agreement and the other involves movement. Both types of feature-
checking are performed in the tree-building process. Spec-head agreement is checked
whenever a specifier is built, and chains are formed as the tree grows. As we can
see in the program, all the feature-checking operations are built into the phrasal
expansions.

Let us look at Spec-head agreement first. The tree we build has five Spec
positions: Cspec, Agr1spec, Agr2spec, Vspec1 and Vspec2. The features checked
in these five positions are different. The constituent in Cspec is assigned the +
value of the operator feature. The assignment is done in cp/3 by op(XF) === '+'.
The features checked through Spec-head agreement in Agr1 are case and agreement
features. The specifier of Agr1 is assigned Case1 (case(NF) === c1 in agr1p/5). In
addition, the NP in Agr1spec must agree with Agr1 in the values of phi-features
(phi(Agr1F) === phi(NF) in agr1_1/4). Similar checking is done in Agr2, where
the NP in Agr2spec is assigned Case2.⁵ The Spec-head agreement in VPs is
responsible for theta-role assignment. The NP in Vspec1 is assigned the first
role in the theta-grid (which is always the agent role in our restricted system)
and the NP in Vspec2 is assigned the second theta-role (patient in our case).
(See theta(NF) === agt and theta(NF) === pat in vp/5.) All the feature-checking
operations are performed by ===, which is Johnson's (1990a, 1990b) unification
algorithm implemented by Ed Stabler. This algorithm (johnson.pl) is given in
Appendix A.7 as well.

⁵Verb-object agreement is being ignored in this program. This is why we do not find
phi(Agr2F) === phi(NF) in agr2_1/5.
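To see where these checks do their work without reproducing johnson.pl, one can substitute plain Prolog unification for the checking operator, as in this self-contained toy; the feature-matrix representation nf/2 and agr1f/1 is invented for the example.

% Toy feature matrices: nf(Case,Phi) for an NP, agr1f(Phi) for Agr1.
% Spec-head agreement in Agr1: assign Case1 and share the phi-features.
check_spec_head(nf(c1, Phi), agr1f(Phi)).

% ?- check_spec_head(nf(C, phi(3,sg)), agr1f(phi(3,sg))).
% C = c1.       % case checking and phi-agreement in one unification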
The chains resulting from movement are formed through feature-passing and
feature-checking. Three types of chains are built in the parsing process: X0-chains,
A-chains and A'-chains. Each of the three is represented by a separate argument
in the predicates. They are named HC (Head Chain), AC (A-Chain) and ABC (A-Bar
Chain) respectively when appearing as variables. HC is the second argument (if
any) in each predicate. Its content is x0(HF,Th), where HF is the feature matrix
being passed on in the chain and Th the theta-grid. AC is the argument following
HC (if any). It is a list because there can be more than one A-chain being formed
at the same time. Each member of this list is an np(NF), where NF consists
of the NP features of the chain. The argument following AC is ABC (if any), which
is represented as a list not because there is more than one A'-chain but because
we want to distinguish between empty and non-empty lists. In our system, ABC
may contain an NP (NP1 or NP2) or an AdvP, depending on which constituent
has moved to Cspec. A chain starts when the head of that chain is encountered
and terminates when the tail of that chain is reached.
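A small sketch of the A-chain bookkeeping follows; these are hypothetical helpers rather than the code of A.7. When Vspec1 is reached, the subject member of AC is picked out by its case value and the remainder is passed on toward Vspec2.

% select_subject(+AC, -Subj, -Rest): pick the c1-marked np(NF) from AC.
select_subject([np(NF)|Rest], np(NF), Rest) :- case_of(NF, c1).
select_subject([NP|Rest0], Subj, [NP|Rest]) :-
    select_subject(Rest0, Subj, Rest).

case_of(nf(Case, _Phi), Case).   % invented feature-matrix accessor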
Since a verb moves successively from V0 to Agr2-0, Asp0, T0, Agr1-0 and finally
to C0, the head of the X0-chain (HC) is C0 and the tail is V0. The formation of the
chain starts in c1/4, where an extra argument is created to hold the chain. It goes
through agr1p/5, agr1_1/4, tp/6, t1/6, asp_p/6, asp1/6, agr2p/6, agr2_1/5,
vp/5 and ends in v1/5, where the chain terminates. The features of this chain
are checked at each link. The checking is done through check_v_features/2 at
c1/4, agr1_1/4, t1/6, asp1/6, agr2_1/5 and v1/5. A new head is found in each
of these steps and the features of the new head are unified with those of the chain.
The features that are checked in the present program are index, tense, aspect and
phi-features.
The A-chains are created by NP-movement. In a transitive sentence, there are
two A-chains, one for the subject and one for the object. The subject A-chain has
its head in Agr1spec and its tail in Vspec1. The object chain starts in Agr2spec
and ends in Vspec2. By the time we come to VP(1), AC contains two NPs. The
subject NP, whose case feature is instantiated to c1, is selected and unified with
the NP in Vspec1. The other NP is passed on until it is unified with the NP in
Vspec2. The unification is performed by check_np_features/2, which checks the
index, operator, theta, case and phi features.
The A'-chain can consist of the subject NP, the object NP or an AdvP like
often. The chain starts in Cspec. The first clause of cp/3 deals with the cases
where an NP moves to Cspec. An np(NF) is thus put in ABC. The second clause is
used when Cspec is occupied by an AdvP. In this case ABC will contain advp(AdF).
The A'-chain is passed on and different actions are taken depending on what XP
is in the chain. If Cspec is filled by the subject NP, this chain will terminate at
Agr1spec, in which case the first clause of agr1p/5 will apply. (ABC becomes empty
after that.) If the object is in Cspec, the chain will be passed on through Agr1P
(second clause of agr1p/5), TP and AspP until it comes to Agr2P, where the tail
of the chain is found in Agr2spec (first clause of agr2p/6). If the constituent that
has moved to Cspec is an AdvP, ABC will contain an AdvP instead of an NP. It
passes through Agr1P (third clause of agr1p/5) and terminates in TP, where an
empty AdvP is adjoined to T1 and this AdvP is unified with the AdvP in ABC
(fourth clause of t1/6).
Besides building the chains, we also have to make sure that each chain contains
exactly one lexical head (a pronounced NP or verb). This checking is done through
indexing. The value of the index feature starts as a variable. When a visible NP or
verb is attached to a terminal, the variable becomes instantiated. In this particular
implementation, the verb always receives the index 1, the subject NP 2, the object
NP 3 and the AdvP 4. Aux is not considered a full lexical item, so an X0-
chain can seemingly contain more than one lexical item: a verb and one or more
auxiliaries. For this reason, an Aux shares the index of the verb instead of having
one of its own. Once the index feature of a chain has been instantiated, no other
visible NPs or verbs can be put into the chain. This is achieved through index/2,
which is applied whenever a visible head is found. It requires that the value of
index be a variable and it will refuse to incorporate a lexical item into a chain if
the value is already a constant. This prevents a chain from having more than one
lexical head. In addition, we use lexical/1 to make sure that each chain does
have a lexical head, in which case the value of index should be a constant rather
than a variable. This predicate is applied after each chain terminates, by which
time every chain is supposed to have found a lexical head. The checking of NP
chains is done in vp/5 after the termination of each chain in Vspec1 or Vspec2.
The X0-chain is checked after the parse tree is complete. The checking cannot be
done earlier, say, when V0 is reached, because the chain will not be complete at
that time if CP or IP is head-final. The joint effect of index/2 and lexical/1
ensures that each chain has exactly one lexical head.
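The two predicates can be approximated as follows (hypothetical code): index/2 succeeds only while the chain's index is still a variable, and lexical/1 later demands that it has become a constant.

index(Index, Word) :- var(Index), word_index(Word, Index).
lexical(Index)     :- nonvar(Index).

word_index(verb, 1).   word_index(subject, 2).
word_index(object, 3). word_index(often, 4).

% After index(I, verb) binds I = 1, a later index(I, subject) fails,
% so a second lexical head can never enter the same chain.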
6.2.3 Leaf-Attachment
This is the part of the parsing procedure which deals with cross-linguistic variation
due to different value combinations of S-parameters. It is applied whenever a
terminal node is created. The procedure determines, on the basis of the current
setting of S-parameters, whether the terminal node should dominate a content
word or an Aux, or be empty. How such decisions are made has been discussed in
6.1. The particular actions to take in all individual situations have been summed
up in the decision tables in (225), (226) and (227). They are directly coded in
the Prolog program as c0/5, agr1_0/5, t0/5, asp0/5, agr2_0/5, v0/5, np/5,
subject/4, object/4, verb/5 and aux/5.

NPs in different positions are differentiated by the second argument in np/5.
In each case the parser looks at S(M(cspec)), S(M(spec1)) and S(M(spec2)) to
decide whether the terminal NP should contain a subject, an object or nothing. If
the terminal node must be non-empty, a "word" (Subject/Object) will be taken
from the input string and attached to the node as a leaf. If a subject NP is to
be attached, subject/4 is called to determine whether this NP must be overtly
marked for case; object/4 is called when an object NP is to be attached. In cases
where there is overt case, the morphological case of the overt NP must get unified
with the case feature of the NP chain of which the terminal is a link. Indexing is
also done at this point.
The leaf to be attached to X0 can be a verb, an Aux, or nothing. The
computation involved here is more complex because a three-way decision has to be
made. Each of the three possibilities is handled by a separate clause in c0/5,
agr1_0/5, t0/5, asp0/5, agr2_0/5 and v0/5. The first clause finds out if a verb
can be attached here. If so, verb/5 (which implements the decision table in (226))
is called to process the morphology of the verb to be attached. If not, the second
clause is applied to find out if an Aux can be attached here. The real work is done
by aux/5, where the decision table in (227) is implemented. This predicate determines
not only the presence/absence of an Aux in a given position but the specific
morphological make-up of the Aux as well. The morphological information of the
Aux to be attached is incorporated into the X0-chain using code_features/2. If
neither the first clause nor the second applies, the third clause will be used and
the terminal will be empty.
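The three-clause pattern can be sketched as follows; this is a hypothetical simplification of t0/5, in which verb_ok/1 and aux_ok/1 merely stand in for the S-parameter conditions consulted through the decision tables.

t0(verb(W)) --> [W], { verb_ok(W) }.   % clause 1: attach a verb
t0(aux(W))  --> [W], { aux_ok(W) }.    % clause 2: attach an Aux
t0(empty)   --> [].                    % clause 3: leave the terminal empty

verb_ok(v(_)).     % stands in for the conditions of tables (225)/(226)
aux_ok(aux(_)).    % stands in for the conditions of table (227)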
Since the terminal nodes are encountered strictly from left to right and every
node is checked for its status as soon as it is encountered, the input string will
not be accepted by the parser unless it has the required word order. The string
will also be unacceptable if some symbols in the string do not have the correct
morphological pattern.
6.2.4 The Parser in Action
The parser described above can be used in several different ways. We can call
parse/0 to get all the strings in a language and graphically display their parse
trees. We have seen examples of such parse trees in (223) and (224). We can also
use parse/1 to process a string without showing the tree, and parse/2 to get both
the string and the tree.

Before a sentence is parsed, the parameters must be set to a particular value
combination. The setting can be done on-line using reset/0. The following are
two Prolog sessions illustrating the use of reset/0 and parse/1. The clauses
concerning often were commented out while running the first session but were
included when the second session was run. The parse tree is printed out for each
string that is generated. To save space, I omitted all the parse trees except for the
last string in each setting. Furthermore, only the category label for each node is
printed out, with all the other features suppressed in the tree printing.
(229) Session 1:

| ?- reset.
New setting: [1,1,1,1,0,1,1,0,i,i,0-1,0-0,0-1,0-1,0-1]. %1
yes
| ?- parse(S).
S = [s-[c1],iv-[agr,tns,asp]] ;
S = [s-[c1],tv-[agr,tns,asp],o-[c2]] ;
cp
  np
  c1
    c0
    agr1p
      np Subj-[c1]
      agr1_1
        agr1_0 Verb-[agr,tns,asp]
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np Obj-[c2]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0
                        vp
                          np
                          v1 v0
S = [s-[c1],tv-[agr,tns,asp],o-[c2]] ;
no
| ?- reset.
New setting: [0,0,0,0,0,0,0,0,f,f,0-0,1-0,0-0,1-0,0-1]. %2
yes
| ?- parse(S).
S = [s-[],tv-[asp],o-[],aux-[tns],aux-[pred]] ;
S = [s-[],iv-[asp],aux-[tns],aux-[pred]] ;
cp
  np
  c1
    agr1p
      np
      agr1_1
        tp
          t1
            asp_p
              asp1
                agr2p
                  np
                  agr2_1
                    vp
                      np Subj-[]
                      v1
                        v0 Verb-[asp]
                        vp
                          np Obj-[]
                          v1 v0
                    agr2_0
                asp0
            t0 Aux-[tns]
        agr1_0
    c0 Aux-[pred]
S = [s-[],tv-[asp],o-[],aux-[tns],aux-[pred]] ;
no
| ?- reset.
New setting: [1,1,1,1,1,1,0,0,i,i,0-1,0-0,0-1,0-1,0-1]. %3
yes
| ?- parse(S).
S = [iv-[agr,tns,asp],s-[c1]] ;
S = [tv-[agr,tns,asp],s-[c1],o-[c2]] ;
cp
  np
  c1
    c0 Verb-[agr,tns,asp]
    agr1p
      np Subj-[c1]
      agr1_1
        agr1_0
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0
                        vp
                          np Obj-[c2]
                          v1 v0
S = [tv-[agr,tns,asp],s-[c1],o-[c2]] ;
no
| ?- reset.
New setting: [0,0,0,1,0,1,0,0,i,i,0-1,0-0,1-0,1-0,0-1]. %4
yes
| ?- parse(S).
S = [s-[c1],aux-[agr,tns],tv-[asp],o-[c2]] ;
S = [s-[c1],aux-[agr,tns],iv-[asp]] ;
cp
  np
  c1
    c0
    agr1p
      np Subj-[c1]
      agr1_1
        agr1_0 Aux-[agr,tns]
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0 Verb-[asp]
                        vp
                          np Obj-[c2]
                          v1 v0
S = [s-[c1],aux-[agr,tns],tv-[asp],o-[c2]] ;
no
| ?- reset.
New setting: [1,1,0,1,1,1,1,1,i,f,0-1,0-0,1-0,1-0,0-1]. %5
yes
| ?- parse(S).
S = [s-[c1],aux-[agr,tns],o-[c2],tv-[asp]] ;
S = [s-[c1],aux-[agr,tns],iv-[asp]] ;
cp
  np Obj-[c2]
  c1
    c0 Aux-[agr,tns]
    agr1p
      np Subj-[c1]
      agr1_1
        tp
          t1
            asp_p
              asp1
                agr2p
                  np
                  agr2_1
                    vp
                      np
                      v1
                        v0
                        vp
                          np
                          v1 v0
                    agr2_0
                asp0 Verb-[asp]
            t0
        agr1_0
S = [o-[c2],aux-[agr,tns],s-[c1],tv-[asp]] ;
no
| ?- reset.
New setting: [0,0,1,0,1,1,1,0,f,i,0-0,1-0,1-0,1-0,1-0]. %6
yes
| ?- parse(S).
S = [s-[],aux-[tns,asp],o-[],tv-[],aux-[pred,agr]] ;
S = [s-[],aux-[tns,asp],iv-[],aux-[pred,agr]] ;
cp
  np
  c1
    agr1p
      np Subj-[]
      agr1_1
        agr1_0
        tp
          t1
            t0 Aux-[tns,asp]
            asp_p
              asp1
                asp0
                agr2p
                  np Obj-[]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0 Verb-[]
                        vp
                          np
                          v1 v0
    c0 Aux-[pred,agr]
S = [s-[],aux-[tns,asp],o-[],tv-[],aux-[pred,agr]] ;
no
| ?- reset.
New setting: [0,0,1,1,1,1,1,0,i,i,0-1,1-0,1-0,1-0,1-0]. %7
yes
| ?- parse(S).
S = [aux-[pred,agr,tns,asp],s-[c1],o-[c2],tv-[]] ;
S = [aux-[pred,agr,tns,asp],s-[c1],iv-[]] ;
cp
  np
  c1
    c0 Aux-[pred,agr,tns,asp]
    agr1p
      np Subj-[c1]
      agr1_1
        agr1_0
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np Obj-[c2]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0 Verb-[]
                        vp
                          np
                          v1 v0
S = [aux-[pred,agr,tns,asp],s-[c1],o-[c2],tv-[]] ;
no
| ?- reset.
New setting: [0,0,1,1,1,1,1,0,i,i,0-0,1-1,1-1,1-1,1-1]. %8
yes
| ?- parse(S).
S = [aux-[pred,agr,tns,asp],s-[],o-[],tv-[pred,agr,tns,asp]] ;
S = [aux-[pred,agr,tns,asp],s-[],iv-[pred,agr,tns,asp]] ;
cp
  np
  c1
    c0 Aux-[pred,agr,tns,asp]
    agr1p
      np Subj-[]
      agr1_1
        agr1_0
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np Obj-[]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0 Verb-[pred,agr,tns,asp]
                        vp
                          np
                          v1 v0
S = [aux-[pred,agr,tns,asp],s-[],o-[],tv-[pred,agr,tns,asp]] ;
no
| ?-
(230) Session 2:

| ?- reset.
New setting: [1,1,1,0,0,1,0,1,i,i,0-1,0-0,0-1,0-1,0-0]. %1
yes
| ?- parse(S).
S = [s-[c1],iv-[agr,tns]] ;
S = [s-[c1],tv-[agr,tns],o-[c2]] ;
S = [s-[c1],often,iv-[agr,tns]] ;
S = [s-[c1],often,tv-[agr,tns],o-[c2]] ;
S = [often,s-[c1],iv-[agr,tns]] ;
cp
  advp often
  c1
    c0
    agr1p
      np Subj-[c1]
      agr1_1
        agr1_0 Verb-[agr,tns]
        tp
          t1
            advp
            t1
              t0
              asp_p
                asp1
                  asp0
                  agr2p
                    np
                    agr2_1
                      agr2_0
                      vp
                        np
                        v1
                          v0
                          vp
                            np Obj-[c2]
                            v1 v0
S = [often,s-[c1],tv-[agr,tns],o-[c2]] ;
no
| ?- reset.
New setting: [1,1,1,1,0,1,0,1,i,i,0-1,0-0,0-1,0-1,0-0]. %2
yes
| ?- parse(S).
S = [s-[c1],iv-[agr,tns]] ;
S = [s-[c1],iv-[agr,tns],often] ;
S = [s-[c1],tv-[agr,tns],o-[c2]] ;
S = [s-[c1],tv-[agr,tns],often,o-[c2]] ;
S = [often,s-[c1],iv-[agr,tns]] ;
cp
  advp often
  c1
    c0
    agr1p
      np Subj-[c1]
      agr1_1
        agr1_0 Verb-[agr,tns]
        tp
          t1
            advp
            t1
              t0
              asp_p
                asp1
                  asp0
                  agr2p
                    np
                    agr2_1
                      agr2_0
                      vp
                        np
                        v1
                          v0
                          vp
                            np Obj-[c2]
                            v1 v0
S = [often,s-[c1],tv-[agr,tns],o-[c2]] ;
no
| ?- reset.
New setting: [1,1,1,1,1,1,1,1,i,i,0-0,0-0,0-0,0-0,0-0]. %3
yes
| ?- parse(S).
S = [s-[],iv-[]] ;
S = [s-[],iv-[],often] ;
S = [s-[],tv-[],o-[]] ;
S = [s-[],tv-[],often,o-[]] ;
S = [o-[],tv-[],s-[]] ;
S = [o-[],tv-[],s-[],often] ;
S = [often,iv-[],s-[]] ;
cp
  advp often
  c1
    c0 Verb-[]
    agr1p
      np Subj-[]
      agr1_1
        agr1_0
        tp
          t1
            advp
            t1
              t0
              asp_p
                asp1
                  asp0
                  agr2p
                    np Obj-[]
                    agr2_1
                      agr2_0
                      vp
                        np
                        v1
                          v0
                          vp
                            np
                            v1 v0
S = [often,tv-[],s-[],o-[]] ;
no
| ?-
Every time a new setting is entered, parse/1 is run exhaustively to find out all the
strings that can be parsed with this setting. In Session 1, the number of strings
which are accepted by each setting is always three: one intransitive sentence and
two transitive sentences. The two transitive sentences are different from each
other in that, at LF, Cspec is occupied by the subject NP in the first one while
it is occupied by the object NP in the second. In other words, the two sentences
differ as to whether the subject or the object is the topic/focus of the sentence. This
difference does not show up in the surface strings if there is no overt movement to
Cspec. But it does show up in the parse tree. It shows up in the surface string
when S(M(cspec)) is set to 1, as we can see in the fifth setting. In Session 2, more
strings are generated with each setting because of the optional appearance of often.
The number of strings generated is six if there is no overt movement to Cspec and
eight if overt movement to Cspec occurs. There are two additional strings in the
latter case because often itself can move to Cspec.
6.2.5 Universal vs. Language-Particular Parsers
As we have seen, the parser presented here is a universal one. By consulting the
parameter values, it can produce or analyze the strings of any language in our
current parameter space. The parser is "complete" in the sense that any language-
particular parser can be generated from this universal parser by setting the
parameters in a specific way. Every individual language is a particular combination
of parameter values, and there is a parser corresponding to any value combination.
When the parameters are set in a certain way so that just one language can be
accepted, only a subpart of the parser is used. In terms of a Prolog program,
we can say that a given parameter setting selects a subset of the clauses in the
parser. When the HD-parameters are set to i, for instance, the clauses which
require that the parameters be set to f will not be used. The parser in A.7 has
three clauses for c0/5, agr1_0/5, t0/5 and asp0/5, fifteen clauses for verb/5,
and twenty-six clauses for aux/5. Once the S-parameters are set, however, only
one of them will be used. Which one is used depends on the parameter values.
Therefore, although the program in A.7 is fairly big with many disjunctions, the
parser for any particular language can be reasonably small. In fact, we can obtain
any language-particular parser by removing all the clauses which cannot be used
with the given parameter setting. Once these clauses are removed, all the choice
points where parameter values are consulted no longer exist. As a result, we can
remove all the calls to parameter values as well. The resulting parser can parse
one language only, but it will be more efficient because the computation involving
parameter values is no longer necessary. This process is an instance of partial
evaluation or partial execution (Burstall and Darlington 1977, Clark and Sickel
1977, Clark and Tarnlund 1977, Hogger 1981, Tamaki and Sato 1984, Pereira and
Shieber 1987) which in our case results in a specialization of the parser. Once the
parameters are finally set, any language-particular parser can be derived from the
universal parser through such partial evaluation. An example of such a parser is
given in Appendix A.8. This parser is obtained by partially executing the original
parser with the following parameter setting:

(231) [1,1,0,1,1,1,1,0,i,i,1-0,0-0,1-0,1-0,0-1].

This parser can only process the language having this setting. Compared with the
parser in A.7, this parser takes less space but runs more quickly, as all unused
clauses and all the calls to parameter values are now removed.
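Applied to the head-direction sketch given in 6.2.1 (and reusing its hypothetical c0//1, agr1p//1 and hd1/1 definitions), the specialization looks roughly like this: with hd1(i) fixed once and for all, the head-final clause can never succeed and the parameter test in the head-initial clause always succeeds, so both can be removed.

% Before partial evaluation: the clause set consults hd1/1 at run time.
c1(c1(H,C)) --> { hd1(i) }, c0(H), agr1p(C).
c1(c1(C,H)) --> { hd1(f) }, agr1p(C), c0(H).

% After partial evaluation with hd1(i): one clause, no parameter call.
c1_specialized(c1(H,C)) --> c0(H), agr1p(C).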
This relationship between the universal parser and language-particular parsers
is suggestive of a certain hypothesis on learning. We can speculate that children are
born with the universal parser. This parser is used in setting the parameters. Once
the parameters are set, however, partial evaluation may take place, which makes
the parser more compact and more efficient. An interesting question is whether
the original parser is kept after the partial execution. It is very likely that it will
become less and less active after the "critical period" of language acquisition.
At least part of it will get "trashed", since it is no longer useful. A speaker whose
language has the setting in (231) may use the parser in A.8 only and discard the
parser in A.7. This may provide an explanation for the difficulty people experience
in second language learning. If our speculations happen to be correct, then the
second language learner would have to either reactivate or reconstruct the original
parser, which is of course a costly operation.
6.3 Summary

In this chapter we have seen how parsing might work in our present model. Our
parser differs from other parsers in that the procedures for chain-building are
invariable across languages. Differences between languages show up mostly
in how the leaves are attached to the tree. It is found that, given a particular
setting of S-parameters, there is a unique way to attach the leaves. The parser can
consult the parameter values and attach the leaves accordingly. It is universal in
the sense that it can parse any language in the parameter space without a single
change in the parser itself. Language-particular parsers can be derived from this
universal parser through the process of partial evaluation. A Prolog implementation
of the parser is presented to illustrate these new properties. The presentation
and the discussion show that our present syntactic model might have advantages
over traditional models in terms of parsing.
Chapter 7
Final Discussion
This thesis has been a syntactic experiment in the field of Principles and Parameters
theory. We have explored a parametric syntactic model which is based on the
notion of Spell-Out in the Minimalist framework. A specific grammar was proposed
and put to the test of language typology, language acquisition, and language
processing. We are now in a better position to see the potential and limitations
of this model. In this chapter, I will first consider some possible extensions of the
model and then discuss the validity of the present model in general.
7.1 Possible Extensions
The experimental grammar we have examined in detail here is a very restricted
one. Among the things that are left out are the internal structures of DP/NP
and PP. We have put these phrases aside in order to limit our experimental
parameter space to a manageable size. There is no principled reason why the present
approach cannot be applied to the internal word orders of these constituents as
well. As a matter of fact, a great deal of research has already been done in this
direction. The parallels between CP/IP and DP have been discussed by many
people (Abney (1987), Ritter (1988, 1990), Tellier (1988), Stowell (1989), Szabolcsi
(1990), Carstens (1991, 1993), Valois (1991), Mitchell (1993), Campbell (1993),
etc.). I will not get into a full discussion of non-verbal projections, but it is fairly
obvious how the S(M)-parameters can work there. We can use a very simple DP
structure to illustrate this. Suppose that lexical projection and GT operations
universally generate the following tree:

         DP
        /  \
     Spec   D1
           /  \
          D    NP

(232) A Simple DP Tree
Suppose also that the NP in this tree must move to the Spec of DP by LF in
order to have its case and φ-features checked against those of the determiner.
(The fact that the noun has to agree with the determiner in many languages
suggests that this checking operation is plausible.) If this checking movement has
an S(M)-parameter associated with it, then the determiner will get spelled out in
a pre-nominal position if this parameter is set to 0 and in a post-nominal position
if the parameter is set to 1. The word order inside a PP can be derived in a similar
way. Let us suppose that PP also has a Spec position, as shown in (233).

         PP
        /  \
     Spec   P1
           /  \
          P    NP

(233) A Hypothetical PP Tree
There could be an LF requirement that the prepositional object must move to
the Spec of PP to have its case features checked. If so, at Spell-Out the P will
precede its object NP when the movement is covert and follow the object when
the movement is overt.

Extensions of our current approach can also be made with respect to the notion
of feature spell-out. Take PP again as an example. In all the cases where a
preposition takes an NP object, we can treat the P as an overt realization of the
case feature of this NP. In other words, we can let the case feature of this NP
be associated with an S(F)-parameter. We see a preposition when this parameter
is set to 1. This idea is by no means my own invention. It has been proposed
recently (e.g. Kayne 1992) that every NP has an abstract P (whether overt or
covert) associated with it. The abstract P may well be a case feature which is
spelled out as a preposition in certain cases. If this is true, we will not even need
the tree in (233) and the case-checking movement to derive both prepositional and
postpositional structures. The P is simply a case-marker which can appear either
as a prefix or a suffix. What we will have to explain then is why the case marker
can have different physical realizations on different NPs in the sentence, sometimes
as an integral part of an NP/DP and sometimes as a more independent element
such as a preposition.
In many languages, including English and Chinese, there exist both NPs carrying
no case marker and prepositional NPs. If we regard P as a case marker, we
face the question of why some NPs have to be overtly marked for case (by a P) and
some do not. Here is a tentative answer to this question. As a working hypothesis,
we can assume that any NP whose case feature is not checked in the Spec of IP
(Agr1spec or Agr2spec) must have this feature spelled out as a preposition. Consider
the grammatical model used in our experiment. There are two Agr projections in an IP:
Agr1P and Agr2P. Usually the subject NP can have its case checked in Agr1spec
and the object NP can have its case checked in Agr2spec. This is probably why the
subject and object NPs almost never need a preposition. If there are other NPs
in a sentence, however, there will be no more Agrspecs for these NPs to move to
in order to have their case features checked. This can happen in many situations.
One situation is where the sentence has an adjunct modifier, such as in (234).

(234) The girl met the boy in the garden.

The subject and object NPs in this sentence, the girl and the boy, can obviously
have their case features checked in Agr1spec and Agr2spec respectively. The third
NP, the garden, however, cannot move to any Spec of AgrP. It must therefore have
its case feature spelled out as a P, as predicted by the hypothesis suggested above.
This hypothesis may also explain why the subject NP in a passive sentence has to
appear in a by-phrase. In passivization, the subject θ-role is absorbed. The object
NP is "promoted" and can thus move to Agr1spec to have its case checked. If we
want to mention the subject NP in a passive sentence, this NP cannot move to
Agr1spec, which has already been occupied by the object NP. It cannot move to
Agr2spec, either, because its case feature and the feature in Agr2spec will clash.
As a result, it must have its case feature spelled out as a preposition, namely, by.
Another way to look at it is by treating the by-phrase as an adjunct which, like in
the garden, must appear as a PP.
The fact that we have assumed two Agreement projections in IP in our
experimental model does not mean that there cannot be a third AgrP in IP. Certain
verbs may project a triple-Agr IP. One such verb might be give, which can be used
in a double-object construction such as (235).

(235) The girl gave the boy a book.

The IP projected by give may look like the following.

      Agr1-P
     /      \
  Spec     Agr1-1
          /      \
      Agr1-0      TP
                   |
                   T1
                  /  \
                T0    AspP
                       |
                      Asp1
                     /    \
                  Asp0    Agr2-P
                         /      \
                      Spec     Agr2-1
                              /      \
                          Agr2-0    Agr3-P
                                   /      \
                                Spec     Agr3-1
                                        /      \
                                    Agr3-0      VP

(236) A Triple-Agr IP

In a sentence like (235), each of the three NPs can have its case features checked
in one of the Agrspecs. There is therefore no need for a preposition.
An obvious question that arises here is why we need a preposition in (237).

(237) The girl gave a book to the boy.

This sentence contains exactly the same number of NPs as (235), but one of
them has to take a preposition. One way to tackle this problem is to assume that
the verb give is syntactically ambiguous. It may project either a triple-Agr IP or
a double-Agr IP. When a double-Agr IP is projected, the third NP in the sentence
must be an adjunct which has to be licensed by an overt case feature manifested
in a P.
The present model can also be extended to cover both nominative-accusative
languages and ergative-absolutive languages. We have assumed that an IP contains
two Agr projections even in an intransitive sentence. As a result, there are
two potential Agrspecs that the sole NP in an intransitive sentence can move to.
Let us assume that the case checked in Agr1spec is nominative/ergative and the
one checked in Agr2spec is accusative/absolutive. Then we will have a
nominative/accusative language if this NP chooses to move to Agr1spec; we get an
ergative/absolutive language if this NP moves to Agr2spec. We can then propose
a parameter which determines which Agrspec an NP moves to in an intransitive
sentence. This is again not my own invention. Similar approaches have been taken
by Bobaljik (1992), Chomsky (1992), Laka (1992), etc. They actually have a name
for this parameter: the Obligatory Case Parameter. A potential problem we can
have with the particular structure assumed in this thesis is word order. In our
IP projection, TP and AspectP come between Agr1P and Agr2P. In a language
where the verb moves to T, we can have two different word orders in an intransitive
sentence depending on which Agrspec the NP moves to. The order is NP-Verb if it
moves to Agr1spec and Verb-NP if it moves to Agr2spec. In an ergative language
where the verb moves to T, a transitive sentence will have the order NP-Verb-NP
and an intransitive sentence will be Verb-NP. To account for an ergative language
which is NP-Verb-NP when transitive and NP-Verb when intransitive, we have
to assume that the verb can never move beyond Agr2 in this ergative language.
This assumption will almost certainly turn out to be wrong. To avoid this problem,
we can try an alternative model where there is only one Agr projection in
IP when a sentence is intransitive. But this AgrP can have different case features
in different languages. The obligatory case parameter then determines which case
the AgrP has. We have a nominative/accusative language if it is Case 1 and an
ergative/absolutive language if it is Case 2.

All the extensions proposed above can make our model more complete, but a
lot more research has to be done before we can incorporate them into our theory.
7.2 Potential Problems

The present model is not perfect and it can be challenged in many different ways.
There are at least two kinds of argument that can be made against our approach.
First of all, this model may seem too theory-dependent. We may wonder what
will happen if some of the specific assumptions in our grammar turn out to be
incorrect. Secondly, one may worry about the number of parameters we may need
in a full version of the theory. It may seem that, as the grammar is expanded
to cover more and more constructions, the parameters will become so many that
learnability becomes a problem. We will address these two potential problems
in this section. We will see that our general approach can remain plausible even if
many of the specifics of this theory are thrown out.
Almost every present-day syntactic model is theory-dependent to a certain
extent. Any approach in the Principles and Parameters paradigm has to start from
some basic assumptions of this theory, such as the existence of Universal Grammar.
Our current approach is built upon some hypotheses in the Minimalist framework.
One of those hypotheses is the notion of Spell-Out. All the experiments we have
done in this thesis will be pointless if this basic notion turns out to be fallacious.
However, what we have to worry about more is not whether a model is theory-
dependent but the degree of such dependency. It is acceptable for a model to
depend on certain theoretical assumptions, but these assumptions should be as
general as possible. It is not desirable to have a model whose success hinges on
some very specific assumptions which have not been generally accepted. One of
the assumptions in our model which we may find suspicious is the structure of IP.
It has been assumed in our model that the IP consists of a TP, an AspectP and
two AgrPs. In addition, these phrasal projections must be arranged in a certain
structural configuration. We may wonder what will happen if we replace this more
articulated IP with a traditional non-split IP structure, such as the one in (238).

        CP
       /  \
    Spec    C1
           /  \
          C    IP
              /  \
           Spec    I1
                  /  \
                 I    VP
                     /    \
               subject     V1
                          /   \
                       verb    object

(238) A More Traditional Tree

Let us assume this base structure and the following LF movements:
A. The verb must move to I to have its tense, aspect and agreement features
checked.

B. After moving to I, the verb must move to C to have its predication features
checked.

C. The subject NP must move to the Spec of IP to have its case and agreement
features checked.

D. Either the subject NP or the object NP must move to the Spec of CP to have
its operator features checked.

We can let each of these movements be associated with an S(M)-parameter and
let CP and IP each have an HD-parameter, as we have done before. Then we can
derive the following word orders by varying the parameter values (only one of the
possibilities is given below for each order):

S V O  if IP is head-initial, the subject NP moves overtly to the Spec of IP and
the verb moves to I;

S O V  if IP is head-final, the subject NP moves overtly to the Spec of IP and
the verb moves to I;

V S O  if CP is head-initial, the subject NP moves overtly to the Spec of IP and
the verb moves to C;

V2  if CP is head-initial, the verb moves overtly to C, and either the subject or
the object NP moves to the Spec of CP;

O S V  if the object moves to the Spec of CP and both the subject and the verb
remain in situ;

O V S  if CP is head-initial, the object NP moves overtly to the Spec of CP, the
verb moves to C, and the subject NP remains in situ.
We notice that the V O S order cannot be derived unless we allow the object NP
to move to the Spec of IP or allow the Spec of IP to appear on the right. We
will also lose many of the scrambling orders. What this shows is mixed. On the
one hand, our model does seem too theory-dependent, since it misses certain word
orders once the Split-Infl hypothesis is removed; on the other hand, we can still get
most of the basic word orders even if the IP is non-split. In any case, while our
model is dependent on the Split-Infl hypothesis, the dependency is not critical.

Our specific theory also depends on the VP-internal Subject hypothesis. Once
this hypothesis is dismissed, many movements will not be necessary any more. The
word order variation we can derive from movement will be very limited. What does
all this show? It may mean that the Split-Infl hypothesis and the VP-internal Subject
hypothesis are correct, as they can provide us with more explanatory power. But
let us consider the worst case. Suppose that both of these hypotheses are
proven incorrect in the end. Can the model proposed in this thesis still exist?
The answer can be "yes" or "no". The specific grammar used in this thesis can
of course no longer exist. The movement patterns will have changed and so will
the S(M)-parameters. All the experimental results in the thesis will need to be
reconsidered. However, the general approach we are taking here can remain valid
even in such a situation. We can proceed in this direction as long as the following
are true:
(i) The grammar has X-bar structures and movement operations;
(ii) The X-bar structures are universal modulo head directions;
(iii) The movement operations are universal modulo the timing of Spell-Out;
(iv) Different head-directions and different spell-out of movements result in word
order variation;
(v) The head directions of X-bar structures can be parameterized;
(vi) The spell-out of movement can be parameterized.
If these assumptions hold, we can build a model of the present sort no matter
how the other specific assumptions change. The general picture of word order
typology described in this thesis will not change; the learning algorithm presented
here will still be applicable; and parsing can still proceed in the way presented
in this thesis. One of the basic problems this thesis has addressed is how to
handle word order in a syntactic model where cross-linguistic variation can result
from both X-bar structure and movement. We have found a way to describe a
word order typology in terms of both head-direction and movement. We have
also discovered a learning strategy which the learner can use to converge on any
particular grammar by simultaneously setting two different types of parameters:
X-bar parameters and movement parameters. This is a problem that has to be
addressed by any acquisition theory which accepts the view that word order can
be determined by both phrase structure and movement. Finally, we have seen
the possibility of a more universal parser which can parse different languages by
looking at the parameter settings of those languages.
Now let us consider the potential problem of "parameter explosion". The model
we have been working with is minimal, but the number of parameters we have
assumed does not seem too small. One may wonder how many parameters we
would eventually need when the model is expanded to include more constructions.
There seem to be many ways in which the number of parameters may grow. Here
are a few of them:

(239) a. In order to account for word orders within other constructions, such as
PP/DP/NP, more S(M)-parameters and HD-parameters may be needed
to control the movements and head directions internal to these
constituents.

b. Since even a single language may have different word orders in statements
and questions, in main clauses and embedded clauses, etc., we seem in
need of different parameters in different types of clauses.

c. To handle the full range of inflectional morphology in the world's languages,
a greater number of features may need to be taken into consideration.
As a result, the number of S(F)-parameters may increase.
It looks as if the parameter space could be much bigger than the one we have dealt
with. The amount of search involved in learning and parsing could then be so
great that language acquisition and language processing might become a problem.
Are the problems in (239) real problems? Let us examine them one by one.

The problem in (239(a)) exists only if the internal structures of DP/NP and
PP are totally unrelated to those of CP/IP. This does not seem to be the case.
There are more and more studies showing that DP/NP parallels CP/IP in many
ways. It is very likely that these phrases are similar not only in X-bar terms, but
in terms of movement as well. It could well be the case that a movement in CP
has a counterpart in DP. Moreover, the corresponding movements could be similar
in their overtness, i.e. they might both occur before Spell-Out or both after
Spell-Out. If so, we will not need two separate S(M)-parameters. The two movements
could be considered different instances of a single type of movement whose spell-
out is controlled by a single S(M)-parameter. Should this be true, the number of
parameters will not increase as much as we might expect. Language acquisition
will therefore not be a problem. As a matter of fact, parameter setting could be
easier, since the learner can get evidence for a parameter value from both CP/IP
and DP/NP (Koopman 1992).
The problem described in (239(b)) can be a real problem only if we adopt the
assumption that HD- and S(M)-parameters are the only determinants of word order.
This assumption seems to be false. There are obviously other factors which can
influence the word order of a language. When an S(M)-parameter has the value
1/0, for instance, whether the relevant movement occurs before Spell-Out depends
on things other than the parameter values. The principle of Procrastinate dictates
that the movement should occur after Spell-Out in this case, but this principle can
be overridden if some other factors call for overt movement. When a language
has different word orders in statements and questions, or in main clauses and
embedded clauses, the difference can often be explained by the overtness of one
or two movements. It is definitely not the case that different clauses have totally
different parameters or parameter values. In English, statements are S-Aux-V-O
and yes-no questions are Aux-S-V-O. A simple explanation for this fact is that the
auxiliary moves to C in questions but not in statements. We do not need additional
parameters to account for this if we assume that the S(M)-parameter for I-to-C
movement is set to 1/0 in English. The real question we have to answer then is
what overrides the principle of Procrastinate in interrogative sentences to make
the movement overt. This is a question that has to be addressed in any linguistic
theory regardless of the existence of S-parameters.

A similar argument can be made for German, which has the V2 order in main
clauses and SOV orders in subordinate clauses. Assuming that CP is head-initial
and IP is head-final in German, we can account for the word order difference by
supposing that the S(M)-parameters for I-to-C movement and XP-movement to
Cspec are both set to 1/0 in German. In subordinate clauses, these movements
are covert due to the principle of Procrastinate. In matrix clauses, however, the
movements are made overt by some other factors. What these factors are remains
a topic of current linguistic research. The important point is that we do not
need different parameters for questions or embedded clauses. Once the module of
linguistic theory we have studied here is interfaced with other modules, the correct
word order in each type of clause will emerge. The success of our model therefore
depends on the research in those other modules.
Finally, we address the problem in (239(c)). The number of S(F)-parameters
required depends on the number of features required by the grammar. As long as
the set of features is finite and reasonably small, there will not be too many S(F)-parameters.
The question is how many features there have to be in our system. There is no
definite answer here, but the number should be finite. This may seem false in view
of the fact that morphological variation in the world's languages is so rich. But the
seemingly infinite variation in inflectional morphology does not have to imply that
the number of morphological features is infinite. We can get tens of thousands
of surface morphological paradigms from a small set of features because of the
following:
i. Different languages can have different subsets of features spelled out;
ii. Different combinations of features can have different phonological realizations;
iii. The phonological realization of a certain feature or a combination of features
can be arbitrary.
Therefore, while we may need more features than we already have in the system,
there is very little indication that the required set of features must be infinite.
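A simple calculation makes the point concrete. Suppose, purely for illustration,
that the grammar needs 14 binary features: there are then 2^14 = 16,384 possible
feature combinations, each of which may or may not be spelled out in a given
language and, when spelled out, may receive an arbitrary phonological shape. A
feature inventory of this modest size is thus already compatible with tens of
thousands of distinct surface paradigms, which is why rich inflectional variation
does not argue for an infinite feature set.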
In conclusion, the likelihood of an explosion of S(M)-parameters and S(F)-
parameters is very small. We probably need more parameters but the increase will
not be dramatic. As long as the number of parameters is finite and reasonably
small, parameter-counting is not particularly meaningful. Given two grammars
which account for the same range of linguistic phenomena, the one with fewer
parameters is of course preferable. However, there is no principled reason why
the number of parameters should be less than 20 or less than 30. As long as
there is a learning algorithm whereby those parameters can be correctly set, the
exact number of parameters should not be an issue. In fact, a small increase in
parameters is welcome if this can result in a simplification of the principles of our
grammar.
7.3 Concluding Remarks
In this thesis, we have studied a particular model of the Principles and Parameters
theory. By fully exploiting the notion of Spell-Out, we have set up a grammar
where cross-linguistic variation in word order and inflectional morphology is mainly
determined by a set of Spell-Out Parameters. The new parametric system, though
still in its preliminary form, has been found to possess some desirable properties
in terms of language typology, language acquisition and language processing. The
experiments we have performed in this thesis are far from complete, but the initial
results are encouraging. There is reason to believe that this line of research is
at least worth pursuing, though a great deal of future work is needed to get this
model closer to the truth.
Appendix A

Prolog Programs

A.1 pspace.pl
% File:    pspace.pl
% Author:  Andi Wu
% Date:    July 15, 1993
% Purpose: Find all value combinations of S(M)-parameters, S(F)-parameters,
%          HD-parameters and AA-parameter. Try generating some language
%          (possibly empty) with each value combination and collect the set of
%          all non-empty languages that are generated and their corresponding
%          parameter settings.
%          The parser used here is in parser.pl.
:- dynamic s/1, hd1/1, hd2/1, aa/1, lang/2.

%% pspace(A list of all the setting-language pairs in the parameter space,
%%        each containing one possible parameter setting and the language
%%        it generates)
pspace(Ps) :- setof(P, sl_pair(P), Ps).
%% pspace1(A list of all the setting-language pairs in the parameter space,
%%         each containing one or more possible parameter settings and the
%%         single language they generate)
% (The settings in a single pair all generate the same language.)
pspace1(Ps) :-
    setof(P, sl_pair(P), Ps0),
    group_settings(Ps0, Ps).
%% pspace2(A list of all the distinct languages that can be generated in
%%         the parameter space)
pspace2(Ls) :-
    setof(P, sl_pair(P), Ps),
    collect_langs(Ps, Ls).
%% sl_pair([Setting,Language]).
sl_pair([S,L]) :-
    get_setting(S),
    setof(L1, sl_pair1(S,L1), Ls),
    merge_l(Ls, L).
%% sl_pair1(Setting, A language generated under one particular
%%          instantiation of the variable values in the setting)
sl_pair1(S,L) :-
    instantiate_var(S,S1),
    generate_all(S1,L).

% Find all strings that can be generated from a given (fully instantiated)
% value combination. (The body of this clause is partly illegible in the
% copy; it is restored here on the model of get_language/1 below.)
generate_all(Setting,Strings) :-
    setof(String, generate(Setting,String), Strings).
%% instantiate_var(Setting, Particular_Instantiation_of_Setting)
% (It has no effect on settings that do not contain variable values.)
% Note: s(m(spec1)), s(m(spec2)) and s(m(cspec)) may be set to 1/0 which,
% being a variable, can be instantiated to either 1 or 0 in a particular
% parse. The language generated by a setting containing such variable(s)
% is the union of the languages generated with each particular instantiation.
instantiate_var([1/0|Vs1],[V|Vs2]) :- !,
    ( V = 0 ; V = 1 ),
    instantiate_var(Vs1,Vs2).
instantiate_var([V|Vs1],[V|Vs2]) :-
    instantiate_var(Vs1,Vs2).
instantiate_var([],[]).
%% merge_l(Sets_of_Strings, Union_of_Sets_of_Strings)
% Merge the languages generated with different instantiations of a setting
merge_l([L1,L2|Ls],L) :-
    merge(L1,L2,L3),
    merge_l([L3|Ls],L).
merge_l([L],L).
%% group_settings(A list of setting-language pairs,
%%                A list of setting(s)-language pairs)
% Group together settings that generate identical languages.
group_settings(SL_Pairs, SL_Pairs1) :-
    retractall(lang(_,_)),
    pack_pairs(SL_Pairs),
    collect_pairs(SL_Pairs1).

% Assert all languages and group together the settings for each of them
pack_pairs([[S,L]|Pairs]) :-
    lang(S1,L1),              % the present language has already
    same_set(L,L1), !,        % been asserted:
    retract(lang(S1,L1)),     % add the present setting to the
    assert(lang([S|S1],L)),   % settings for this language
    pack_pairs(Pairs).
pack_pairs([[S,L]|Pairs]) :-  % the present language has not been asserted:
    assert(lang([S],L)),      % assert this new language
    pack_pairs(Pairs).
pack_pairs([]).
% assert_new_setting(Setting) :- ...
% [The remaining clauses of pspace.pl, from assert_new_setting/1 onward,
%  are illegible in this copy; the listing resumes below with the facts
%  that define the space of Spell-Out parameter values.]
% (The name of this clause is illegible in the copy; it is restored here
%  on the model of get_setting1/1 in sp2.pl below.)
get_setting([A,B,C,D,E,F,G,H,D1,D2,Case,Agr,Tns,Asp,Pred,Op]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))),
    sp(f(case(Case))), sp(f(agr(Agr))), sp(f(tns(Tns))),
    sp(f(asp(Asp))), sp(f(pred(Pred))), sp(f(op(Op))),
    c_head(D1), i_head(D2).
sp(m(agr2(0))).
sp(m(asp(0))).
sp(m(tns(0))).
sp(m(agr1(0))).
sp(m(c(0))).
sp(m(spec1(0))).
sp(m(spec2(0))).
sp(m(cspec(0))).
sp(f(case(0-0))).
sp(f(case(1-0))).
sp(f(agr(0-0))).
sp(f(agr(1-0))).
sp(f(tns(0-0))).
sp(f(tns(1-0))).
sp(f(asp(0-0))).
sp(f(asp(1-0))).
sp(f(pred(0-0))).
sp(f(pred(1-0))).
sp(f(op(0-0))).
sp(f(op(1-0))).
c_head(i).
i_head(i).
sp(m(agr2(1))).
sp(m(asp(1))).
sp(m(tns(1))).
sp(m(agr1(1))).
sp(m(c(1))).
sp(m(spec1(1))).
sp(m(spec2(1))).
sp(m(cspec(1))).
sp(f(case(0-1))).
sp(f(case(1-1))).
sp(f(agr(0-1))).
sp(f(agr(1-1))).
sp(f(tns(0-1))).
sp(f(tns(1-1))).
sp(f(asp(0-1))).
sp(f(asp(1-1))).
sp(f(pred(0-1))).
sp(f(pred(1-1))).
sp(f(op(0-1))).
sp(f(op(1-1))).
sp(m(spec1(1/0))).
sp(m(spec2(1/0))).
sp(m(cspec(1/0))).
c_head(f).
i_head(f).
A.2 sets.pl

% File:    sets.pl
% Author:  Andi Wu
% Update:  July 10, 1993
% Purpose: Compute the set-theoretic relations between languages in a given
%          parameter space.
%% disjoint_pairs(A list consisting of pairs of languages in the parameter
%%                space which are disjoint with each other)
% (Each distinct language is represented by a distinct number in the output.)
disjoint_pairs(Ps) :-
    pspace2(Ls),
    retractall(language(_,_)),
    assert_languages(Ls,1), !,
    setof(P, disjoint_pair(P), Ps).

disjoint_pair([N1,N2]) :-
    language(N1,A),
    language(N2,B),
    disjoint(A,B).
%% intersecting_pairs(A list consisting of pairs of languages in the parameter
%%                    space which intersect each other)
% (Each distinct language is represented by a distinct number in the output.)
intersecting_pairs(Ps) :-
    pspace2(Ls),
    retractall(language(_,_)),
    assert_languages(Ls,1), !,
    setof(P, intersecting_pair(P), Ps).

intersecting_pair([N1,N2]) :-
    language(N1,A),
    language(N2,B),
    intersect(A,B).

%% proper_inclusions(A list consisting of pairs of languages in the parameter
%%                   space where the first member of each pair is a proper
%%                   subset of the second member)
% (Each distinct language is represented by a distinct number in the output.)
proper_inclusions(Ps) :-
    pspace2(Ls),
    retractall(language(_,_)),
    assert_languages(Ls,1), !,
    setof(P, properly_included(P), Ps).

properly_included([N1,N2]) :-
    language(N1,A),
    language(N2,B),
    properly_includes(B,A).
%% Find out the set-theoretic relation between any two languages.
set_relation(L1,L2) :-
    ( identical(L1,L2),
      write(L1), nl, write(and), nl, write(L2), nl,
      write('are identical.')
    ; disjoint(L1,L2),
      write(L1), nl, write(and), nl, write(L2), nl,
      write('are disjoint.')
    ; intersect(L1,L2),
      write(L1), nl, write(and), nl, write(L2), nl,
      write('are intersecting.')
    ; properly_includes(L1,L2),
      write(L2), nl,
      write('is a proper subset of '), nl,
      write(L1)
    ), nl.

identical([A|As],B) :-
    member(A,B),
    select(A,B,Bs),
    identical(As,Bs).
identical([],[]).
disjoint(A,B) :-
    \+ co_member(A,B).

intersect(A,B) :-
    co_member(A,B),
    unique_member(A,B),
    unique_member(B,A), !.
properly_includes(A,B) :-
    subset(B,A),
    unique_member(A,B), !.

subset([],_).
subset([A|As],B) :-
    member(A,B),
    subset(As,B).

co_member([A|_],B) :-
    member(A,B).
co_member([_|As],B) :-
    co_member(As,B).

unique_member([A|_],B) :-
    \+ member(A,B).
unique_member([_|As],B) :-
    unique_member(As,B).

assert_languages([L|Ls],N) :-
    assert(language(N,L)),
    N1 is N+1,
    assert_languages(Ls,N1).
assert_languages([],_).
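As a quick sanity check, set_relation/2 can also be called directly on two
hand-built lists standing in for languages; an illustrative query and its
output, following the clauses above:

    % ?- set_relation([l1,l2,l3], [l1,l2]).
    % [l1,l2]
    % is a proper subset of
    % [l1,l2,l3]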
A.3 order.pl

% File:    order.pl
% Author:  Andi Wu
% Date:    August 9, 1993
% Purpose: Order the settings in a given parameter space in the spirit of
%          the principle of Procrastinate.
order_settings(S,S1) :-
    quicksort(S,S1).

% Quicksort with difference lists
quicksort(List,Sorted) :-
    quicksort2(List, Sorted-[]).

quicksort2([],Z-Z).
quicksort2([X|Tail],A1-Z2) :-
    split(X,Tail,Small,Big),
    quicksort2(Small,A1-[X|A2]),
    quicksort2(Big,A2-Z2).

split(_X,[],[],[]).
split(X,[Y|Tail],[Y|Small],Big) :-
    verify(precedes(Y,X)), !,
    split(X,Tail,Small,Big).
split(X,[Y|Tail],Small,[Y|Big]) :-
    split(X,Tail,Small,Big).
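The sorting predicate above is keyed to precedes/2, checked through verify/1;
the definitions of those two predicates are not legible in this copy. The
following stand-ins (hypothetical reconstructions, marked as such) show the
intended behavior: a setting with fewer overt (1) values precedes one with
more, so the learner tries the most Procrastinate-compliant settings first.

% Hypothetical stand-ins for the illegible verify/1 and precedes/2:
verify(Goal) :- \+ \+ Goal.          % succeed without binding variables
precedes(S1,S2) :-                   % fewer overt values come first
    count_ones(S1,N1),
    count_ones(S2,N2),
    N1 =< N2.
count_ones([],0).
count_ones([1|Vs],N) :- !, count_ones(Vs,M), N is M+1.
count_ones([_|Vs],N) :- count_ones(Vs,N).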
[Sections A.4 and A.5, which follow order.pl in the original, are illegible
in this copy except for the program fragment below. The fragment defines the
interactive top level (sp/0, sp1/0) and the learner (learn_all_langs/0) for
the S(M)-parameters alone; sp2.pl in Section A.6 extends the same interface
with the HD- and S(F)-parameters.]
get_pspace :-
    get_settings,
    get_languages.

get_settings :-
    setof(S, get_setting(S), Ss),
    order_settings(Ss,Ss1),
    assert(settings0(Ss1)).

get_languages :-
    setof(L, get_language(L), Ls),
    assert(langs(Ls)).

get_setting([A,B,C,D,E,F,G,H]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))).

get_language(L) :-
    get_setting(Setting),
    setof(String, generate(Setting,String), L).

sp :-
    initialize,
    write('The initial setting is '),
    current_setting,
    sp1.

sp1 :-
    next_input(S),
    ( S == initialize -> sp
    ; S == bye -> true
    ; S == generate -> generate, sp1
    ; S == current_setting -> current_setting, sp1
    ; process(S), !,
      write('Current setting remains unchanged.'), nl,
      sp1
    ; write('Unable to parse '),
      write(S), nl,
      write('Resetting the parameters ...'), nl, nl,
      reset_to_process(S),
      sp1
    ).

reset_to_process(S) :-
    try_next_setting, !,
    ( process(S), !,
      write('Parameters reset to: '),
      current_setting
    ; reset_to_process(S)
    ).

learn_all_langs :-
    langs(Ls),
    learn_all(Ls).

learn_all([L|Ls]) :-
    learn1(L), !,
    learn_all(Ls).
learn_all([]).
learn1(L) :-
    write('Trying to learn '),
    write(L), write(' ...'), nl,
    initialize,
    learn(L).

learn(L) :-
    process_all(L), !,
    write('Final setting: '),
    current_setting,
    generate(L1),
    write('Language generated: '),
    write(L1), nl,
    ( identical(L,L1), !,
      write('The language '),
      write(L),
      write(' is learnable.'), nl
    ; write(' which is a superset of '),
      write(L), nl,
      write('The language '),
      write(L),
      write(' is NOT learnable.'), nl, nl
    ), nl.
learn(L) :-
    try_next_setting, !,
    learn(L).
initialize :-
    retractall(current_setting(_)),
    retractall(s(_)),
    retractall(settings(_)),
    settings0([S|Ss]),
    assert(current_setting(S)),
    assert(settings(Ss)).

process_all([S|Ss]) :-
    process(S),
    process_all(Ss).
process_all([]).

process(S) :-
    current_setting(P),
    instantiate_var(P,P1),
    retract_old_value,
    assert_new_value(P1),
    parse(S).

try_next_setting :-
    retractall(current_setting(_)),
    retract(settings([S|Ss])),
    assert(current_setting(S)),
    assert(settings(Ss)).

generate :-
    current_setting(P),
    setof(S, generate(P,S), Ss),
    write('Language generated with current setting: '), nl,
    write(Ss), nl.
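To make the control flow above concrete, a typical interactive session looks
roughly as follows (an illustrative sketch: the sentence is entered at the
Next? prompt as a list of word-suffix tokens in the format expected by
parser.pl, and the messages are those coded above):

% ?- sp.
% The initial setting is ...
% Next? [s-[], v-[], o-[]].
% Current setting remains unchanged.      (if the current setting parses it)
% Next? bye.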
% [The page that follows is illegible in this copy. It presumably contains
%  the definitions of current_setting/0, retract_old_value/0 and
%  assert_new_value/1, which are called above.]
writel([S|Ss]) :-
    write(S),
    writel(Ss).
writel([]).

next_input(Input) :-
    repeat,
    write('Next? '),
    read(Input).

instantiate_var([V1|Vs1],[V2|Vs2]) :-
    var(V1), !, (V2 = 0 ; V2 = 1),
    instantiate_var(Vs1,Vs2).
instantiate_var([V|Vs1],[V|Vs2]) :-
    instantiate_var(Vs1,Vs2).
instantiate_var([],[]).
A.6 sp2.pl

% File:    sp2.pl
% Author:  Andi Wu
% Date:    August 3, 1993
% Purpose: Acquiring word orders and inflectional morphology by setting
%          S(M)-parameters, HD-parameters and S(F)-parameters.

:- ensure_loaded(library(basics)).
:- ensure_loaded(sets).
:- ensure_loaded(parser).
:- ensure_loaded(order).
:- ensure_loaded(sputil).

:- dynamic settings/1.
get_pspace :-
    get_settings,
    get_languages.

get_settings :-
    setof(S, get_setting(S), Ss),
    order_settings(Ss,Ss1),
    assert(settings0(Ss1)).

get_languages :-
    setof(L, get_language(L), Ls),
    assert(langs(Ls)).

get_setting([A,B,C,D,E,F,G,H,HD1,HD2]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))),
    c_head(HD1), i_head(HD2).

get_setting1([A,B,C,D,E,F,G,H,HD1,HD2,Case,Pred,Agr,Tns,Asp]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))),
    c_head(HD1), i_head(HD2),
    sp(f(case(Case))), sp(f(pred(Pred))), sp(f(agr(Agr))),
    sp(f(tns(Tns))), sp(f(asp(Asp))).
get_language(L) :-
    get_setting1(Setting),
    setof(String, generate1(Setting,String), L).

sp :-
    initialize,
    write('The initial setting is '), nl,
    current_setting,
    sp1.

sp1 :-
    next_input(S),
    ( S == initialize -> sp
    ; S == bye -> true
    ; S == generate -> generate, sp1
    ; S == current_setting -> current_setting, sp1
    ; process(S), !,
      write('Current setting remains unchanged.'), nl,
      sp1
    ; write('Unable to parse '),
      write(S), nl,
      write('Resetting the parameters ...'), nl, nl,
      reset_to_process(S),
      sp1
    ).
reset_to_process(S) :-
    ( reset_sfp(S),
      process(S), !,
      write('Successful parse.'), nl
    ; try_next_setting(W),
      process(S), !,
      write('Word order parameters reset to: '),
      write_values(W), nl,
      write('Successful parse.'), nl
    ; !, reset_to_process(S)
    ).

learn_all_langs :-
    langs(Ls),
    learn_all(Ls).

learn_all([L|Ls]) :-
    learn1(L), !,
    learn_all(Ls).
learn_all([]).

learn1(L) :-
    write('Trying to learn '),
    write(L), write(' ...'), nl,
    initialize,
    set_sfp(L),
    learn(L).
learn(L) :-
    process_all(L), !,
    write('Final setting: '),
    current_setting,
    generate(L1),
    write('Language generated: '),
    write(L1), nl,
    ( identical(L,L1), !,
      write('The language '),
      write(L),
      write(' is learnable.'), nl
    ; write(' which is a superset of '),
      write(L), nl,
      write('The language '),
      write(L),
      write(' is NOT learnable.'), nl, nl
    ), nl.
learn(L) :-
    try_next_setting(_), !,
    learn(L).

initialize :-
    retractall(current_setting(_)),
    retractall(s(_)),
    retractall(hd1(_)),
    retractall(hd2(_)),
    retractall(settings(_)),
    initial_setting([S|Ss]),
    assert(current_setting(S)),
    assert(settings(Ss)),
    assert(s(f(case(0-0)))),
    assert(s(f(pred(0-0)))),
    assert(s(f(agr(0-0)))),
    assert(s(f(tns(0-0)))),
    assert(s(f(asp(0-0)))).
process_all([S|Ss]) :-
    process(S),
    process_all(Ss).
process_all([]).

process(S) :-
    current_setting(P),
    instantiate_var(P,P1),
    retract_old_values1,
    assert_new_values1(P1),
    parse(S).

try_next_setting(S) :-
    retractall(current_setting(_)),
    retract(settings([S|Ss])),
    assert(current_setting(S)),
    assert(settings(Ss)).

set_sfp([S|Ss]) :-
    reset_sfp(S),
    set_sfp(Ss).
set_sfp([]).
reset_sfp([W|Ws]) :-
    check_sfp(W),
    reset_sfp(Ws).
reset_sfp([]).

check_sfp(_-[]).
check_sfp(W-[F|Fs]) :-
    check_sfp1(W-[F]),
    check_sfp(W-Fs).
check_sfp(often).

check_sfp1(aux-[F]) :- !,
    check_f_feature(F).
check_sfp1(_-[F]) :-
    check_l_feature(F).

check_f_feature(pred) :-
    ( s(f(pred(1-_))), !
    ; retract(s(f(pred(_-L)))),
      assert(s(f(pred(1-L)))),
      write('s(f(pred)) is reset to '),
      write(1-L), nl
    ).
check_f_feature(agr) :-
    ( s(f(agr(1-_))), !
    ; retract(s(f(agr(_-L)))),
      assert(s(f(agr(1-L)))),
      write('s(f(agr)) is reset to '),
      write(1-L), nl
    ).
check_f_feature(tns) :-
    ( s(f(tns(1-_))), !
    ; retract(s(f(tns(_-L)))),
      assert(s(f(tns(1-L)))),
      write('s(f(tns)) is reset to '),
      write(1-L), nl
    ).
check_f_feature(asp) :-
    ( s(f(asp(1-_))), !
    ; retract(s(f(asp(_-L)))),
      assert(s(f(asp(1-L)))),
      write('s(f(asp)) is reset to '),
      write(1-L), nl
    ).

check_l_feature(Ftr) :-
    ( Ftr = c1 ; Ftr = c2 ),
    ( s(f(case(_-1))), !
    ; retract(s(f(case(F-_)))),
      assert(s(f(case(F-1)))),
      write('s(f(case)) is reset to '),
      write(F-1), nl
    ).
check_l_feature(pred) :-
    ( s(f(pred(_-1))), !
    ; retract(s(f(pred(F-_)))),
      assert(s(f(pred(F-1)))),
      write('s(f(pred)) is reset to '),
      write(F-1), nl
    ).
check_l_feature(agr) :-
    ( s(f(agr(_-1))), !
    ; retract(s(f(agr(F-_)))),
      assert(s(f(agr(F-1)))),
      write('s(f(agr)) is reset to '),
      write(F-1), nl
    ).
check_l_feature(tns) :-
    ( s(f(tns(_-1))), !
    ; retract(s(f(tns(F-_)))),
      assert(s(f(tns(F-1)))),
      write('s(f(tns)) is reset to '),
      write(F-1), nl
    ).
check_l_feature(asp) :-
    ( s(f(asp(_-1))), !
    ; retract(s(f(asp(F-_)))),
      assert(s(f(asp(F-1)))),
      write('s(f(asp)) is reset to '),
      write(F-1), nl
    ).
generate :-
    current_setting(P),
    setof(S, generate(P,S), Ss),
    write('Language generated with current setting: '), nl,
    writel(Ss), nl.

generate(Ss) :-
    current_setting(P),
    setof(S, generate(P,S), Ss).

generate(P,S) :-
    instantiate_var(P,P1),
    retract_old_values,
    assert_new_values(P1),
    parse(S).

generate1(P,S) :-
    instantiate_var(P,P1),
    retract_old_values1,
    assert_new_values1(P1),
    parse(S).

current_setting :-
    current_setting(P),
    setof(V, s(f(V)), Vs),
    write('['), write_values(P), nl, tab(1),
    write_values(Vs), write(']'), nl.

retract_old_values :-
    retractall(s(m(_))),
    retractall(hd1(_)),
    retractall(hd2(_)).
% [One page here is mirrored and illegible in this copy. It appears to
%  contain retract_old_values1/0 and the assert_new_values/1,
%  assert_new_values1/1, assert_mp/1, assert_hdp/1 and assert_sfp/1
%  predicates that install a setting, together with a few s/1 and sp/1
%  facts.]
A.7 parser.pl

% File:    parser.pl
% Author:  Andi Wu
% Updated: August 20, 1993
% Purpose: A top-down parser implementing the S-parameter model.

:- ensure_loaded(johnson).
:- ensure_loaded(tree).

:- dynamic s/1, hd1/1, hd2/1.
parse :- cp(Tree,_,[]), d(Tree), fail.
parse :- write('No more parse.').

parse(S) :- cp(_,S,[]).

parse(S,T) :- cp(T,S,[]).
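The parser is driven by the s/1, hd1/1 and hd2/1 flags asserted by the
programs above, and each input token is a Word-Suffixes pair whose inventory
is given by the lexical rules at the end of this file. Illustrative queries
(whether they succeed depends on the parameter setting currently in force):

    % ?- parse([s-[], aux-[tns], v-[], o-[]]).
    % ?- parse([s-[c1], v-[pred,agr,tns], o-[c2]], Tree).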
c p (c p /[n p (IF )/l? ,C l]) —>
to p (I P ) « » '+ ’) ,
ap (IP ,c sp e c,IF ),
cl(C l,C n p (IF )]).
cp(cp/[advp(A dF)/[often],C 1]) —> [o fte n ],
{ s(m (c sp e c (l))),
op(AdF)»"»'+»,
index(A dF)»*4
c l(C l, [advp(AdF)]).
c1(c1/[c0(CF,Th)/C,Agr1P], ABC) -->
    { hd1(i) },
    c0(C,CF,Th),
    agr1p(Agr1P, x0(CF,Th), ABC),
    { lexical(CF) }.
c1(c1/[Agr1P,c0(CF,Th)/C], ABC) -->
    { hd1(f) },
    agr1p(Agr1P, x0(CF,Th), ABC),
    c0(C,CF,Th),
    { lexical(CF) }.
agr1p(agr1p/[np(NF)/NP,Agr1_1], HC, [np(NF1)]) -->
    { case(NF) === c1,
      check_np_features(NF,NF1)
    },
    np(NP, agr1spec, NF),
    agr1_1(Agr1_1, HC, [np(NF)], []).
agr1p(agr1p/[np(NF)/NP,Agr1_1], x0(HF,Th), [np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c1,
      case(HF) === case(NF1)
    },
    np(NP, agr1spec, NF),
    agr1_1(Agr1_1, x0(HF,Th), [np(NF)], [np(NF1)]).
agr1p(agr1p/[np(NF)/NP,Agr1_1], x0(HF,Th), [advp(AdF)]) -->
    np(NP, agr1spec, NF),
    agr1_1(Agr1_1, x0(HF,Th), [np(NF)], [advp(AdF)]).
agr1_1(agr1_1/[agr1_0(Agr1F,Th)/Agr1,TP], x0(HF,Th), [np(NF)], ABC) -->
    { hd2(i),
      phi(Agr1F) === phi(NF),
      check_v_features(Agr1F,HF)
    },
    agr1_0(Agr1,Agr1F,Th),
    tp(TP, x0(HF,Th), [np(NF)], ABC).
agr1_1(agr1_1/[TP,agr1_0(Agr1F,Th)/Agr1], x0(HF,Th), [np(NF)], ABC) -->
    { hd2(f),
      phi(Agr1F) === phi(NF),
      check_v_features(Agr1F,HF)
    },
    tp(TP, x0(HF,Th), [np(NF)], ABC),
    agr1_0(Agr1,Agr1F,Th).

tp(tp/[T1],HC,AC,ABC) -->
    t1(T1,HC,AC,ABC).
t1(t1/[t0(TF,Th)/T,AspP], x0(HF,Th), AC, ABC) -->
    { hd2(i),
      check_v_features(TF,HF),
      \+ ABC = [advp(_)]
    },
    t0(T,TF,Th),
    asp_p(AspP, x0(HF,Th), AC, ABC).
t1(t1/[AspP,t0(TF,Th)/T], x0(HF,Th), AC, ABC) -->
    { hd2(f),
      check_v_features(TF,HF),
      \+ ABC = [advp(_)]
    },
    asp_p(AspP, x0(HF,Th), AC, ABC),
    t0(T,TF,Th).
t1(t1/[advp/[often],T1], x0(HF,Th), AC, ABC) --> [often],
    { \+ ABC = [advp(_)] },
    t1(T1, x0(HF,Th), AC, ABC, _).
t1(t1/[advp(AdF)/[],T1], x0(HF,Th), AC, [advp(AdF)]) -->
    t1(T1, x0(HF,Th), AC, [], _).
t1(t1/[t0(TF,Th)/T,AspP], x0(HF,Th), AC, ABC, _) -->
    { check_v_features(TF,HF) },
    t0(T,TF,Th),
    asp_p(AspP, x0(HF,Th), AC, ABC).
asp_p(asp_p/[Asp1],HC,AC,ABC) -->
    asp1(Asp1,HC,AC,ABC).

asp1(asp1/[asp0(AspF,Th)/Asp,Agr2P], x0(HF,Th), AC, ABC) -->
    { hd2(i),
      check_v_features(AspF,HF)
    },
    asp0(Asp,AspF,Th),
    agr2p(Agr2P, x0(HF,Th), AC, ABC).
asp1(asp1/[Agr2P,asp0(AspF,Th)/Asp], x0(HF,Th), AC, ABC) -->
    { hd2(f),
      check_v_features(AspF,HF)
    },
    agr2p(Agr2P, x0(HF,Th), AC, ABC),
    asp0(Asp,AspF,Th).
agr2p(agr2p/[np(NF)/NP,Agr2_1], x0(HF,Th), AC, [np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c2,
      check_np_features(NF,NF1)
    },
    np(NP, agr2spec, NF),
    agr2_1(Agr2_1, x0(HF,Th), [np(NF)|AC]).
agr2p(agr2p/[np(NF)/NP,Agr2_1], x0(HF,Th), AC, []) -->
    { Th = [_,_],
      case(NF) === c2
    },
    np(NP, agr2spec, NF),
    agr2_1(Agr2_1, x0(HF,Th), [np(NF)|AC]).
agr2p(agr2p/[Agr2_1], x0(HF,Th), AC, []) -->
    agr2_1(Agr2_1, x0(HF,Th), AC).

agr2_1(agr2_1/[agr2_0(Agr2F,Th)/Agr2,VP], x0(HF,Th), [np(NF)|NPs]) -->
    { hd2(i),
      check_v_features(Agr2F,HF)
    },
    agr2_0(Agr2,Agr2F,Th),
    vp(VP, x0(HF,Th), [np(NF)|NPs]).
agr2_1(agr2_1/[VP,agr2_0(Agr2F,Th)/Agr2], x0(HF,Th), [np(NF)|NPs]) -->
    { hd2(f),
      check_v_features(Agr2F,HF)
    },
    vp(VP, x0(HF,Th), [np(NF)|NPs]),
    agr2_0(Agr2,Agr2F,Th).
vp(vp/[np(NF)/NP,V1], x0(HF,[agt|Ths]), AC) -->
    { theta(NF) === agt,
      select(AC, np(NF1), AC1),
      case(NF1) === c1,
      check_np_features(NF,NF1)
    },
    np(NP, vspec1, NF),
    { lexical(NF) },
    v1(V1, x0(HF,[agt|Ths]), AC1).
vp(vp/[np(NF)/NP,V1], x0(HF,[pat]), AC) -->
    { theta(NF) === pat,
      select(AC, np(NF1), AC1),
      case(NF1) === c2,
      check_np_features(NF,NF1)
    },
    np(NP, vspec2, NF),
    { lexical(NF) },
    v1(V1, x0(HF,[pat]), AC1).

v1(v1/[v0(VF,[Th1|Ths])/V,VP], x0(HF,[Th1|Ths]), AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th1|Ths]),
    vp(VP, x0(HF,Ths), AC).
v1(v1/[v0(VF,[Th])/V], x0(HF,[Th]), _AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th]).
c0(Verb,CF,Th) -->
    { v_to_c },
    verb(Verb,CF,Th).
c0(Aux,CF,_Th) -->
    aux(Aux,c,CF).
c0([],_,_) --> [],
    { \+ v_to_c,
      \+ aux(_,c,_,_,_)
    }.

agr1_0(V,Agr1F,Th) -->
    { v_to_agr1 },
    verb(V,Agr1F,Th).
agr1_0(Aux,Agr1F,_Th) -->
    aux(Aux, agr1, Agr1F).
agr1_0([],_,_) --> [],
    { \+ v_to_agr1,
      \+ aux(_,agr1,_,_,_)
    }.
t0(V,TF,Th) -->
    { v_to_t },
    verb(V,TF,Th).
t0(Aux,TF,_Th) -->
    aux(Aux,t,TF).
t0([],_,_) --> [],
    { \+ v_to_t,
      \+ aux(_,t,_,_,_)
    }.

asp0(V,AspF,Th) -->
    { v_to_asp },
    verb(V,AspF,Th).
asp0(Aux,AspF,_Th) -->
    aux(Aux, asp, AspF).
asp0([],_,_) --> [],
    { \+ v_to_asp,
      \+ aux(_,asp,_,_,_)
    }.

agr2_0(V,Agr2F,Th) -->
    { v_to_agr2 },
    verb(V,Agr2F,Th).
agr2_0([],_,_) --> [].
v0(V,VF,[agt|Ths]) -->
    { s(m(agr2(0))) },
    verb(V,VF,[agt|Ths]).
v0([],_,[agt|_]) --> [],
    { s(m(agr2(1))) }.
v0([],_,[Th|_]) --> [],
    { \+ Th = agt }.

np(Subj,cspec,NF) -->
    { s(m(cspec(1))), s(m(spec1(1))) },
    subject(Subj,NF).
np(Obj,cspec,NF) -->
    { s(m(cspec(1))), s(m(spec2(1))) },
    object(Obj,NF).
np([],cspec,_) --> [],
    { s(m(cspec(0))) }.
np(Subj,agr1spec,NF) -->
    { s(m(spec1(1))) },
    subject(Subj,NF).
np([],agr1spec,_) --> [].
np(Obj,agr2spec,NF) -->
    { s(m(spec2(1))) },
    object(Obj,NF).
np([],agr2spec,_) --> [].
np(Subj,vspec1,NF) -->
    { s(m(spec1(0))) },
    subject(Subj,NF).
np([],vspec1,_) --> [].
np(Obj,vspec2,NF) -->
    { s(m(spec2(0))) },
    object(Obj,NF).
np([],vspec2,_) --> [].
subject(['Subj-[]'/[]],NF) --> [s-[]],
    { s(f(case(0-0))),
      case(NF) === c1,
      index(NF,s)
    }.
subject(['Subj-[c1]'/[]],NF) --> [s-[c1]],
    { s(f(case(0-1))),
      case(NF) === c1,
      index(NF,s)
    }.

object(['Obj-[]'/[]],NF) --> [o-[]],
    { s(f(case(0-0))),
      case(NF) === c2,
      index(NF,o)
    }.
object(['Obj-[c2]'/[]],NF) --> [o-[c2]],
    { s(f(case(0-1))),
      case(NF) === c2,
      index(NF,o)
    }.
verb(['Verb-[]'/[]],VF,Th) --> [V-[]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([],VF),
      index(VF,V)
    }.
verb(['Verb-[pred]'/[]],VF,Th) --> [V-[pred]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([pred],VF),
      index(VF,V)
    }.
verb(['Verb-[agr]'/[]],VF,Th) --> [V-[agr]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([agr],VF),
      index(VF,V)
    }.
verb(['Verb-[tns]'/[]],VF,Th) --> [V-[tns]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([tns],VF),
      index(VF,V)
    }.
verb(['Verb-[asp]'/[]],VF,Th) --> [V-[asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([asp],VF),
      index(VF,V)
    }.
verb(['Verb-[pred,agr]'/[]],VF,Th) --> [V-[pred,agr]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([pred,agr],VF),
      index(VF,V)
    }.
verb(['Verb-[pred,tns]'/[]],VF,Th) --> [V-[pred,tns]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-0))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([pred,tns],VF),
      index(VF,V)
    }.
verb(['Verb-[pred,asp]'/[]],VF,Th) --> [V-[pred,asp]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([pred,asp],VF),
      index(VF,V)
    }.
verb(['Verb-[agr,tns]'/[]],VF,Th) --> [V-[agr,tns]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([agr,tns],VF),
      index(VF,V)
    }.
verb(['Verb-[agr,asp]'/[]],VF,Th) --> [V-[agr,asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([agr,asp],VF),
      index(VF,V)
    }.
verb(['Verb-[tns,asp]'/[]],VF,Th) --> [V-[tns,asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-1))), s(f(asp(_-1))),
      code_features([tns,asp],VF),
      index(VF,V)
    }.
verb(['Verb-[pred,agr,tns]'/[]],VF,Th) --> [V-[pred,agr,tns]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([pred,agr,tns],VF),
      index(VF,V)
    }.
verb(['Verb-[pred,agr,asp]'/[]],VF,Th) --> [V-[pred,agr,asp]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([pred,agr,asp],VF),
      index(VF,V)
    }.
verb(['Verb-[agr,tns,asp]'/[]],VF,Th) --> [V-[agr,tns,asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-1))),
      code_features([agr,tns,asp],VF),
      index(VF,V)
    }.
verb(['Verb-[pred,agr,tns,asp]'/[]],VF,Th) --> [V-[pred,agr,tns,asp]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-1))),
      code_features([pred,agr,tns,asp],VF),
      index(VF,V)
    }.
a u x ([’A ux-[prad]•/□ ],c,C F ) —> [aux-[prad]] ,
{ a ( f ( p r a d ( l- .) ) ) ,
a (a (c (0 ))),
co d a.faatu raa( [prad],CF)
au x ([ ’A ux-[agr]'/ [ ] ] ,c,CF) —> [au x -[ag r]],
C f(prad(0-_)) ) ,a (f(a g r(l-_ ))T .(a (f(p ra d (0 -_ ))),a (f(a g r(l'
a (a ( c ( l) ) ) ,a ( a ( a g r l( 0 ) ) ) ,
co d a.faatu raa( [agr],CF)
a u x ([’A ux-[tna]' / □ ] ,c,CF) —> [a u x -[tn a ]],
< a (f(p ra d (0 -.))),a (f(a g r(0 -_ ))),a (f(tn a (l-_ > )),
a ( a ( c ( l) ) ) ,a ( a ( a g r l( l) ) ) ,a ( a ( tn a ( 0 ) ) ) ,
co d a.faatu raa( [tna],C F)
).
aux(['A ux-[aap]’/ Q ] ,e,CF) —> [au x -[aap ]],
{ a (f ( p ra d ( 0 -_ ) )) ,a (f ( a g r (0 - .) )) ,a (f (tn a (0 - _ ) )) ,a (f (a a p ( l- _ )) ) ,
a (a (c (1 ) ) ) ,a (a (a g rl( lT )) »a(a(tna( 1 ) ) ) ,a(n (aap (0 )) ) ,
co d a.faatu raa( [aap],CF)
}*
au x (['A u x -[p rad ,ag r]'/ □ ] ,c,CF) —> [au x -[p rad ,ag r]] ,
{a (f (p ra d U --) ) ) . a ( f (agr ( l-_ ) ) ),
u(n(c(l))),a(a(ugrl(oT>).
co d a.faaturaa([prad.agr],C F )
>
aux([’Aux-[prad,tna]'/[]],c,CF) —> [aux-[prad,tna]],
{a(f(prad(l-.))),a(f(agr(0-_))),a(f(tna(l-_))).
a(B(c(l))),a(a(agrl(IT)),a(a(tna(0))),
coda.faaturaa([prad,tna],CF)
>.
aux([’Aux-[prad,aap]*/[]],c,CF) —> [aux-[prad,aap]],
{a(f(prad(l-_))),a(f(agr(0-_))),a(f(tna(0-_))),a(f(aap(l-_))),
a(a(c(l))),a(a(agrl(IT)),a(a(taa(l))),a(n(aap(0)}),
coda.faaturaa( [prad,aap],CF)
aux(['Aux-[agr,tna] ’/□],c,CF) —> [aux-[agr,tna]],
{a(f[prad(0-_))),a(f(agr(l-_))),a(f(tna(l-_))),
a(a(c(l))),a(n(agrl(lT)) ,a(a(tna(0))),
coda.faatnraa( [agr,tna],CF)
316
>.
aux(['A ux-[agr,aap] '/ [ ] ] ,c,CF) —> [aux-[agr,aap]] ,
{ a ( f ( p r a d ( 0 -_ ) )) ,a (f ( a g r (l- .)) ),a ( f( tn a ( 0 - _ )) ).8 ( f (a a p (l- _ )) ),
a ( n ( e ( l) ) ) ,a ( n ( a g r l( l) ) ) ,8 ( a ( tn a ( l) ) ) ,a (n (a a p (0 ))),
co d a.f aaturaa ( [agr, aap] ,CF)
] •
a u x (['A u x -[tn a ,a a p ]’/ [ ] ] ,c,CF) —> [a u x -[tn a ,a a p ]],
{ a (f ( p r a d ( 0 - _ ) ) ) ,8 ( f ( a g r ( 0 - .) ) ) ,a ( f ( tn a ( l- .) ) ) .a ( f ( a a p ( l- _ ) ) ) ,
a (a ^ c (l))),a (n (a g rl(lT )),a (n (ta 8 (l))),s (u (a a p (0 ))),
c o d a .fa a tu ra a ( [tn a ,aap],CF)
>.
a u x (['A u x -[p ra d ,a g r,tn a ]* /[]],c,C F ) —> [aux-[prad, a g r, tn a ]] ,
{ a (f(p ra d (l-_ ) ) ) , a (f(a g r( l-_ )) ) , a (f ( tn a ( 1 - .) ) ) ,
a (■(c ( 1 ) ) ) ,a (a (a g rl( 1 ) ) ) ,a(n{tna(0 ))),
co d a.f aaturaa ( [prad, a g r, tna] ,CF)
] •
a u x (['A u x -[p rad ,ag r,aap ]'/ [ ] ] , c.CF) --> [au x -[p rad ,ag r,aap ]] ,
{ a (f ( p r a d ( l- _ ) ) ) ,a ( f ( a g r ( l- .) ) ) ,a ( f ( tn a ( 0 - .) T ) ,a ( f ( a a p ( l- _ ) ) ) ,
a ( n ( c ( l) ) ) ,a ( a ( a g r l( l) ) ) ,a ( n ( tn a ( l) ) ) ,a ( n ( a a p ( 0 ) ) ) ,
^ c o d a.faatu raa( [prad, ag r, aap] ,CF)
au x (['A u x -[p ra d ,tn a,a a p ]* /[ ] ] .c.CF) —> [au x -[p rad ,tn a,aap ]] ,
{ a ( f ( p r a d ( l- _ ) ) ) ,a ( f ( a g r ( 0 - .) ) ) ,a ( f ( tn a ( l- .) ) ) ,a ( f ( a a p ( l- .) ) ) ,
a (n (c (l))),a (a i(a g ri(lT } ),a (n (tn a (l))),8 (n (a a p (0 ))),
co d a.f aaturaa ( [prad, tn a , aap] ,CF)
■
a u x ([’A ux-[agr,tna,aap] * / □ ] ,c,CF) —> [a u x -[a g r,tn a ,a a p ]],
< a ( f ( p r a d ( 0 - _ ;) ) ,a ( f ( a g r ( l- .) ) ) ,a ( f ( ta a ( l- .) > ) ,a ( f ( a a p ( l- _ ) ) ) ,
a ( a ( c ( l) ) ) ,a ( a ( a g r l( l) ) ) ,a ( n ( tn a ( l) ) ) ,a ( n ( a a p ( 0 ) ) ) ,
c o d a.f aaturaa ( [agr, tn a , aap] ,CF)
}•
a u x (['A u x -[p rad ,ag r,tn a,aap ] * /□ ] ,c.CF) —> [a u x -[p ra d ,a g r.tn a ,aa p ]] ,
{ a ( f ( p r a d 7 l- _ ) ) ) ,a ( f ( a g r ( l- _ ) ) ) ,a ( f ( tn a ( l- .) ) ) ,a ( f ( a a p ( l- _ ) )>,
b(b (c( 1) )) ,a (* (a g rl(l} ) ) ,a (n (tn a (l) )) ,a (a (a a p (0 )) ) ,
^coda_faaturaa( [p rad.agr, tn a, aap] ,CF)
a u x (['A u x -[ag r]’/n ],a g r l,A g rlF ) —> [au x -[ag r]],
< a (f(a g r(l-_ ))T ,
a (a (c 7 0 ))).a (a (a g rl(0 ))),
c o d a .fa a tu ra a ( [agr].A grlF)
aux([*A ux-[tna]* / □ ] ,agr1 ,AgrlF) —> [a u x -[tn a ]],
{ a (f (a g r(0 -_ )) ) , s (f ( tn a ( l-_ ) ) ) ,
a (a (c (0 )) ) , a (a (a g rl( 1 ) ) ) ,a (a (tn a (0 ))),
c o d a .fa a tu ra a ( [tn a ], AgrlF)
}.
aux(['A ux-[aap]’/[]],a g rl,A g rlF ) --> [au x -[aap ]],
{ a (f(a g r(0 -.))T ,a (f(tn a (0 -_ ))),a (f(a a p (l-_ ))),
a (a ( c T o )) ),a ( a (a g r l( l) ) ),a ( n (tn a ( l)) ) >a (n (a a p (0 ))),
c o d a .fa a tu ra a ( [aap ], AgrlF)
a u x (['A u x -[a g r,tn a ]'/□ ],a g rl,A g rlF ) --> [a u x -[a g r,tn a ]] ,
< a ( f u g r ( l- _ ) ) ) ,a ( f ( tn a ( l- _ ) ) ) ,
a (n (c 7 o ))),a (a (a g rl(l))),a (n (tn a (0 } )),
c o d a.f aatu raa ([a g r, tn a] .AgrlF)
a u x (['A u x -[a g r,a a p ]'/□ ],a g rl,A g rlF ) —> [au x -[ag r,aap ]] ,
{ a (f ( a g r ( l- _ ) ) ) , a (f(tn a C 0 -.)) ) ,a (f(a a p (l-_ )) ) ,
a (a (c (0) )) ,B ( n (a g r l( l)) ),a ( a ( tn a ( l)) ),a ( n (a a p (0 ) ) ),
317
I
!
WI—*
GO
ft ft « *t n ft
o o o v v v
a a » r • •• • • M ft ft
I I I r w t*th M th M- I I
III I | | i " " I i l i f E t SSH?f t f t • • t • ■ ? B A * l l f t | l
g « Q SfUUSt ~ 2 * S £ S Jg QQQQQ
3 ; < □ : •; r s 3 * s S s s s s s> • • 4 - «t t t — ■ * w to j » ■ ■ ■ ■ ^
■• w I ft ft •* << ■ ■ ■w ■a ■■■■■
• i-i wH RA>> O ■ ■ » » ■§ ■ ■ • ■<4
m? - i - 8 3 T * 1 < I £ 3 B t f l f r
g g m* m g^si zs/s/s/s*q
v - : r ?? ? s | f 3 l s s s s 3•• *■• I n n w«ftv> wwww»
w a A • |3« • • ; • • • • l
o 5 5 r a '
7ft j
e e i e I e
S>BV *ftBv iftfi V (ftfi V isB V «sBV•I I • AAAI* AAA | • AAA I* AAA | ■ AAAI* AOftiNn O/vftn P P QA.rn PftiK* S.«*ft7 ftnt7 ft. a *7 S,• n n i t n n B ( /m>^A f AAM I AAllI Ai l l I AAA IAf^A* H/tA< thAOH A AOH A Aa a p- wto B <|WQ<0 A
6.S*ft A frSHA frSKA frSlftAIftftS AAAA AAA A AAAA
I AA*0 I Aft A I AA>« I AAA
? e S ^ r f r . ?<ae^ ? 5 s ^A« a S A►*AA a toAV. BHAS „ ----- _ - — ^A O to I—i A a I»H ) A AO |—| A a t o . - , A V H I A ^ OHS A
g ’“'.I H fi O I “ B O lu d O l U |1v I . f w T l pRwlu Hw| . Rw| u Rwl u R » I A5*1 - 5Aw * Awwv Aww* Aw * ABif i AAw* AA* w | Awwn Mws/|t AwwA AAw'd AAV|-| AAA WA AO VU AO WO Ao Wo a AWI—I AS WU Ar-iao *o tnA■ I—I mAo u nA • H AA- - AAo i_i A
I f f i Z z r l - f 3 i Z 3 H § & I*-*A Hd o AAOf LJfl A I Ult I o toAI—I o 0*AH oWA *0 oAa of .. ..
►a 3 C E C - • ■ » » ■ ' . * p j.
AO 8E83 hES; hE : ???> 8Cg.~ 6
^w , &C3., 3^ a 3o a oB3-5 Sw-r a4**' I * Wt» I w o | w p p wt* to o wtoR Oww VHw | V —I P If d A- I - »• I to At|w | w| R w R d Al A A Al 4 AWo WA o w | o | I_IAWM HAWW If
~ ‘ “ A A • ■Wti tod W toA A►a w oo IMAW I AA p AtAo «q wAo l•d A>Hp Aw AAVu l_l tos A AA
A A w p A — A A • b ' w SI t o g w
/-SW(» If *H^W I
1 - j» g *♦-_ *n ^ ^ J, w
6
*. 3, S A Wg V Wgg
S ffrfi
• « In i r. _ — _____| » It « U U I ___ _
• *d P *d * i
* A A Iffl I
*TJ w Atop AtoA• J o a I M A
•J B I I B
- I g
83$ 838
3**? 3.’ a
O <t O uw if w «w i wW A W
’ 8
I
< 4 4 4 4I I I I ICfr <f Ct «t n ^
0 OOO O t* •1i i i a o h
o
a
&
s a ; s ; frE s e s s s s s s s q '*!!7-7 p s r - ’- ^ e E•• i i *• « g, £ g J*
• a<t i /->i ^ /s.
a a a a a a a a g s * s s, es?»♦to« />■0rsO ■ Pif«totob*totototo»» « I ^01 « ^ m-•• M-w
Vi *1^NH 9 I 9H H K ^ P B I 9 H - p • 9 h o W w A A *•
9 p 9 p p A I
M O H « f l ( t » l < • * O ^ O O v || f » < t 9 •
■« g Eh? ~ o » « o w w . n 5 £ £ 5
A - • M-
« 4 AH H H - i-v
■ I AH
I UM*«» ft M
I • •
v/Op>nwpwnnw> M-
•3*?'*w>Uto w• wwl| O•»w p V wwi *www v • wct99 »WW W • If• w. *w * 9 ■*w oB• tt M
E«
WWWWHtw» « 9w- 99r* 9anas*■c* ■£As| AAH
jrsl'S-n«oAHWHAHASw|))H V
x§ HWWW9
ESSS5SSgnPPMPPPPH9
33SS333L*
mmgmmmm* h
WW »• Vwmai a• a■a
■ EAHA*■•<*A>
<ti*gs10
AS^HWHMkil 1KKMp,
^es-ssES*S• i*«0Xi?«o0 »iHkfLw§AhaO»*4AlWOHAstoCt4
S s q ss tew# WWWWH*W(to WWWV/%toww wWW« »IPww ww« Q»*n«w • « M•—>
WWWtoW» was
^•>*1
f«n
SSES♦0*0VAAW**fr*W
* r^ws^»V
wwtoWWA
4- ■toAs
C4*
«09*■>*i►*10wAS'WHW
g
3
B
«tI4W«
* 4
m B
■
%   ...                        % [the head of this clause was lost with the
%   functor(Term,Att,1),       %  preceding illegible page; only these two
%   arg(1,Term,SubTerm).       %  goals survive in the copy]

unify(_,XId,_,_,YId,_) :- XId == YId, !.   % already unified
unify(_,XId,XNEqs,_,YId,YNEqs) :-          % check inequalities
    ( vl_member(XId,YNEqs)
    ; vl_member(YId,XNEqs)
    ), !, fail.
unify(XV,_,_,YV,_,_) :-                    % unify constants
    ( atomic(XV)
    ; atomic(YV)
    ), !,
    XV = YV.
unify(XV,_,XNEqs,YV,_,YNEqs) :-            % general case
    vl_union(XNEqs,YNEqs),                 % (copies the inequalities)
    unify_avpairs(XV,YV).

unify_avpairs(L1,L2) :-
    ( var(L1)
    ; var(L2)
    ), !,
    L1 = L2.
unify_avpairs(L1,L2) :-
    av_merge(L1,L2),
    av_merge(L2,L1),
    vl_tail(L1,Tail),
    vl_tail(L2,Tail).

av_merge(L,_) :- var(L), !.
av_merge([A:V1|R],L) :-
    member(A:V2,L), !,
    V1 = V2,
    av_merge(R,L).

vl_union(S1,S2) :-     % ensure every element of S1 is in S2 and vice versa
    vl_merge(S1,S2),
    vl_merge(S2,S1),
    vl_tail(S1,Tail),
    vl_tail(S2,Tail).

vl_merge(L,_) :- var(L), !.
vl_merge([E|R],S) :-
    vl_add(E,S),
    vl_merge(R,S).

vl_add(X,L) :-
    var(L), !,
    [X|_] = L.
vl_add(X,[X1|_]) :- X == X1, !.
vl_add(X,[_|R]) :- vl_add(X,R).

vl_member(E,L) :- vl_member1(X,L), X == E, !.

vl_member1(_,L) :- var(L), !, fail.   % enumerate members of L
vl_member1(E,[E|_]).
vl_member1(E,[_|L]) :- vl_member1(E,L).

vl_tail(T,T) :- var(T), !.
vl_tail([_|R],T) :- vl_tail(R,T).

X =/= Y :-
    eval(X,_,XId,XNEqs),
    eval(Y,_,YId,YNEqs),
    XId \== YId,                      % X and Y have distinct Ids
    vl_add(XId,YNEqs),
    vl_add(YId,XNEqs).

display_eqns(T) :- (atomic(T) ; var(T)), !, writeq(T === T), fail.
display_eqns(T) :- T = @(_,Id,_), display_eqns(Id,T), fail.
display_eqns(_) :- nl.

display_eqns(L,R) :- (atomic(R) ; var(R)), !, writeq(L === R), nl.
display_eqns(L,@(R,_,_)) :- atomic(R), !, writeq(L === R), nl.
display_eqns(Id,@(Pos,_,_)) :-
    vl_member1(Att:Val,Pos),
    functor(T,Att,1),
    arg(1,T,Id),
    display_eqns(T,Val).
display_eqns(Id,@(_,_,NEqs)) :-
    vl_member1(NEq,NEqs),
    writeq(Id =/= NEq), nl.
A.8 parser1.pl

% File:    parser1.pl
% Author:  Andi Wu
% Updated: August 24, 1993
% Purpose: A top-down parser implementing the S-parameter model.

:- ensure_loaded(johnson).
:- ensure_loaded(tree).

parse :- cp(Tree,_,[]), d(Tree), fail.
parse :- write('No more parse.').

parse(S) :- cp(_,S,[]).

parse(S,T) :- cp(T,S,[]).
cp(cp/[np(NF)/NP,C1]) -->
    { top(NF) === '+' },
    np(NP, cspec, NF),
    c1(C1, [np(NF)]).

c1(c1/[c0(CF,Th)/C,Agr1P], ABC) -->
    c0(C,CF,Th),
    agr1p(Agr1P, x0(CF,Th), ABC),
    { lexical(CF) }.

agr1p(agr1p/[np(NF)/NP,Agr1_1], HC, [np(NF1)]) -->
    { case(NF) === c1,
      check_np_features(NF,NF1)
    },
    np(NP, agr1spec, NF),
    agr1_1(Agr1_1, HC, [np(NF)], []).
agr1p(agr1p/[np(NF)/NP,Agr1_1], x0(HF,Th), [np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c1,
      case(HF) === case(NF1)
    },
    np(NP, agr1spec, NF),
    agr1_1(Agr1_1, x0(HF,Th), [np(NF)], [np(NF1)]).

agr1_1(agr1_1/[agr1_0(Agr1F,Th)/Agr1,TP], x0(HF,Th), [np(NF)], ABC) -->
    { phi(Agr1F) === phi(NF),
      check_v_features(Agr1F,HF)
    },
    agr1_0(Agr1,Agr1F,Th),
    tp(TP, x0(HF,Th), [np(NF)], ABC).

tp(tp/[T1],HC,AC,ABC) -->
    t1(T1,HC,AC,ABC).

t1(t1/[t0(TF,Th)/T,AspP], x0(HF,Th), AC, ABC) -->
    { check_v_features(TF,HF),
      \+ ABC = [advp(_)]
    },
    t0(T,TF,Th),
    asp_p(AspP, x0(HF,Th), AC, ABC).
t1(t1/[advp/[often],T1], x0(HF,Th), AC, ABC) --> [often],
    { \+ ABC = [advp(_)] },
    t1(T1, x0(HF,Th), AC, ABC, _).
t1(t1/[t0(TF,Th)/T,AspP], x0(HF,Th), AC, ABC, _) -->
    { check_v_features(TF,HF) },
    t0(T,TF,Th),
    asp_p(AspP, x0(HF,Th), AC, ABC).

asp_p(asp_p/[Asp1],HC,AC,ABC) -->
    asp1(Asp1,HC,AC,ABC).

asp1(asp1/[asp0(AspF,Th)/Asp,Agr2P], x0(HF,Th), AC, ABC) -->
    { check_v_features(AspF,HF)
    },
    asp0(Asp,AspF,Th),
    agr2p(Agr2P, x0(HF,Th), AC, ABC).

agr2p(agr2p/[np(NF)/NP,Agr2_1], x0(HF,Th), AC, [np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c2,
      check_np_features(NF,NF1)
    },
    np(NP, agr2spec, NF),
    agr2_1(Agr2_1, x0(HF,Th), [np(NF)|AC]).
agr2p(agr2p/[np(NF)/NP,Agr2_1], x0(HF,Th), AC, []) -->
    { Th = [_,_],
      case(NF) === c2
    },
    np(NP, agr2spec, NF),
    agr2_1(Agr2_1, x0(HF,Th), [np(NF)|AC]).
agr2p(agr2p/[Agr2_1], x0(HF,Th), AC, []) -->
    agr2_1(Agr2_1, x0(HF,Th), AC).

agr2_1(agr2_1/[agr2_0(Agr2F,Th)/Agr2,VP], x0(HF,Th), [np(NF)|NPs]) -->
    { check_v_features(Agr2F,HF)
    },
    agr2_0(Agr2,Agr2F,Th),
    vp(VP, x0(HF,Th), [np(NF)|NPs]).

vp(vp/[np(NF)/NP,V1], x0(HF,[agt|Ths]), AC) -->
    { theta(NF) === agt,
      select(AC, np(NF1), AC1),
      case(NF1) === c1,
      check_np_features(NF,NF1)
    },
    np(NP, vspec1, NF),
    { lexical(NF) },
    v1(V1, x0(HF,[agt|Ths]), AC1).
vp(vp/[np(NF)/NP,V1], x0(HF,[pat]), AC) -->
    { theta(NF) === pat,
      select(AC, np(NF1), AC1),
      case(NF1) === c2,
      check_np_features(NF,NF1)
    },
    np(NP, vspec2, NF),
    { lexical(NF) },
    v1(V1, x0(HF,[pat]), AC1).

v1(v1/[v0(VF,[Th1|Ths])/V,VP], x0(HF,[Th1|Ths]), AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th1|Ths]),
    vp(VP, x0(HF,Ths), AC).
v1(v1/[v0(VF,[Th])/V], x0(HF,[Th]), _AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th]).

c0([],_,_) --> [].

agr1_0(Aux,Agr1F,_Th) -->
    aux(Aux, agr1, Agr1F).
agr1_0([],_,_) --> [].

t0([],_,_) --> [].       % [the functor of this empty-head clause is unclear
                         %  in the copy; t1 above requires a t0/3 clause]

asp0(V,AspF,Th) -->
    verb(V,AspF,Th).

agr2_0([],_,_) --> [].

v0([],_,_) --> [].

np([],cspec,_) --> [].
np(Subj,agr1spec,NF) --> subject(Subj,NF).
np(Obj,agr2spec,NF) --> object(Obj,NF).
np([],vspec1,_) --> [].
% [The last page of parser1.pl is illegible in this copy. It presumably
%  contains the remaining empty-head and lexical rules (np, subject,
%  object, verb and aux), simplified counterparts of those in parser.pl.]
Appendix B

Parameter Spaces

B.1 P-Space of S(M)-Parameters (1)
[This section is a machine-generated listing, produced by the programs of
Appendix A, of the parameter space: each numbered entry (#1, #2, ...) pairs
one setting of the eight S(M)-parameters (each value 0, 1 or 1/0) with the
set of word-order patterns it generates, given as strings over the terminal
symbols s (subject), v (verb) and o (object), e.g. [s v, s v o]. The listing
is too badly degraded in this copy to be reproduced reliably.]
B.2 P-Space of S(M)-Parameters (2)

[The body of this table is illegible in the scanned copy. Each entry pairs an S(M)-parameter setting (eight values drawn from 0, 1 and 1/0) with the set of surface orders of s, v and o generated under that setting.]
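The tables in this appendix amount to an exhaustive walk over the S(M)-parameter space. As a rough illustration only — this is not the dissertation's code, and generate_language/2 merely stands in for the experimental grammar's generator — the P-space can be computed by enumerating every setting and grouping together the settings that generate the same language:

    :- use_module(library(pairs)).   % for group_pairs_by_key/2

    % Each of the eight S(M)-parameters takes one of three values:
    % 0, 1, or 1/0 (optional Spell-Out).
    param_value(0).
    param_value(1).
    param_value(1/0).

    setting(S) :-
        length(S, 8),
        maplist(param_value, S).

    % p_space(-Groups): pairs each generated language with the list of
    % settings that generate it -- one row of a P-space table per group.
    % generate_language/2 is an assumed interface, not defined here.
    p_space(Groups) :-
        findall(Lang-S, (setting(S), generate_language(S, Lang)), Pairs),
        keysort(Pairs, Sorted),
        group_pairs_by_key(Sorted, Groups).

With three values for each of eight parameters, the enumeration covers 3^8 = 6561 settings, which is why the listing runs to so many pages.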
B.3 P-Space of S(M)-Parameters (with Adv)

[The body of this table is illegible in the scanned copy. Its entries have the same format as the preceding tables, with an adverb, (often), added to the ordered elements: each setting is paired with a set of surface orders of s, v, o and (often).]
B.4 P-Space of S(M) & HD Parameters

[The body of this table is illegible in the scanned copy. Each entry extends an S(M)-parameter setting with a pair of head-direction values, i (head-initial) or f (head-final), and pairs the combined setting with the surface orders it generates. The heading of the next table (B.5) is also illegible; its rows appear to pair plain S(M)-settings with orders that include an auxiliary, but they are likewise unrecoverable.]
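For reference, the extra pair of values attached to each setting in these tables records head direction. A minimal sketch of what a single value controls, under the usual reading (an illustrative rendering, not the dissertation's implementation):

    % linearize(+Head, +Comp, +HD, -String): a head-initial (i) phrase
    % spells the head out before its complement; a head-final (f) phrase
    % spells it out after the complement.
    linearize(Head, Comp, i, [Head|Comp]).
    linearize(Head, Comp, f, String) :-
        append(Comp, [Head], String).

Under this reading, a verb with complement [o] surfaces as [v,o] when the value is i and as [o,v] when it is f.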
B.6 P-Space of S(M) & HD Parameters (with Aux)

[The body of this table is illegible in the scanned copy. Its entries combine an S(M)-parameter setting with head-direction values, and the generated orders include an auxiliary (aux) alongside s, v and o.]
B.7 P-Space of S(F)-Parameters

[The body of this table is illegible in the scanned copy. Each entry pairs an S(F)-parameter setting with strings in which every word is annotated with the feature bundle it spells out, e.g. s-[c(1)] aux-[tns] v-[asp], s-[c(1)] aux-[tns] v-[asp] o-[c(2)].]
Appendix C
Partial Ordering of S(M)-Parameter Settings
[The body of this appendix is largely illegible in the scanned copy. It lists S(M)-parameter settings in numbered blocks under labels such as (2) 0.0.1, (17) 1.0.1 and (43) 2.1.7; each block contains, in brackets, the eight-value settings that occupy the same position in the partial ordering.]
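The block labels appear to encode how many parameters in a setting are optional (1/0) and how many are categorically set. On that reading — an assumption about the labelling, not something recoverable from the scan — the key by which settings are grouped can be computed roughly as follows:

    % rank(+Setting, -Optional-On): count the optional (1/0) parameters
    % and the parameters categorically set to 1 in a setting.  This is a
    % guess at part of the scheme behind labels like 0.0.1 or 2.1.7.
    rank(Setting, Optional-On) :-
        include(==(1/0), Setting, Opts), length(Opts, Optional),
        include(==(1),   Setting, Ons),  length(Ons,  On).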
Appendix D
Learning Sessions
D.1 Acquiring an Individual Language (1)
| ?- sp.
The initial setting is [0 0 0 0 0 0 0 0] %a

Next? [s,iv]. %1
Current setting remains unchanged.

Next? [s,tv,o]. %2
Current setting remains unchanged.

Next? [s,o,tv]. %3
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 0] %b

Next? generate.
Language generated with current setting:
[[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]

Next? [s,tv,o]. %4
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 0 0] %c

Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]

Next? [s,o,tv]. %5
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 1 0] %d

Next? generate.
Language generated with current setting:
[[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]

Next? [s,tv,o]. %6
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 1 0 0] %e

Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]

Next? [s,o,tv]. %7
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 0 1 1] %f

Next? generate.
Language generated with current setting:
[[s,often,o,tv],[s,o,tv]]

Next? [s,tv,o]. %8
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 0 1] %g

Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]

Next? [s,o,tv]. %9
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 1] %h

Next? generate.
Language generated with current setting:
[[s,o,often,tv],[s,o,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]

Next? [s,tv,o]. %10
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 0 1] %i

Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]

Next? [s,o,tv]. %11
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 1 1] %j

Next? generate.
Language generated with current setting:
[[s,o,often,tv],[s,o,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]

Next? [s,tv,o]. %12
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 1 0 1] %k

Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]

Next? [s,o,tv]. %13
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1/0 1 0] %l

Next? generate.
Language generated with current setting:
[[s,o,tv],[often,s,o,tv],[often,s,iv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]

Next? [s,tv,o]. %14
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1/0 0] %m
l a a t T g a a a r a t a .
L a a g a a g a g a a a r a t a d w i t h c a r r a a t a a t t l a g :
[ [ a , l v ] , [ a , a , t v ] , [ a , a f t a a . l v ] . [ a . a f t a a , a , t v ) , [ a , o f t a a , t v , a ] , [ a , t v , a ] ]
359
■ a x tT [ ■ , i , t v ] . k i t
Um U i t * p u M [ • ■ • . t v ]
U i t t t l i | t k a f t r u a u i f . . .
P u i M t i r i t w i t t * : [ 0 0 0 0 0 0 1 / 0 1] k a
h i t ? p M i t U .
I n p > ( i g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ t , * f u i , i , n ] , [ i , i , t v ] ]
h r t t ( a , t v , a ] . l i t
D a a k la u p a r s * [ a , t v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
h n a a t t n r a a a t t a : [ 0 0 0 0 0 1 0 1 / 0 ] t a
S a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[[ a , i v ] , [ a , a < t a a ,1a], [ a , a f t a a , tr , a ] , [ a , t a , o ]]
■ a x t f [ a , a , t v ) . U T
O a a b la t a p a r a a [ a , o , t v ]
k a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 / 0 1 1 } I p
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ a , a f t a a , a , t v ] , [ a , a , a f t c a , t v ] , [ o . a , t v ] , [ a , l v ] , [ a , o , t v ] , [ a , o f t a a , i v ] , [ a , o f t a a , a , t v ] ]
■ a x tT [ a , t v , a ) . l i t
Q a a k la t a p a r a a [ a , t v , a ]
l a a a t t l a g t k a p a r a i M t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 1 / 0 1] l q
■ a x tT [ a . l v ] . H O
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , a f t a a , l v ] . 1 2 0
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , t v , a ) . 1 2 1
C a r r a a t a a t t l a g r a a w l a a a a c k a a g a d .
■ a x tT [ a , a , t v ) . 1 2 2
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a . a . t v ) . 1 2 3
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , a f t a a , t v , a ] . 1 2 4
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a . a . a f t a a . t v ] . 1 2 1
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , a f t a a , a , t v ) . 1 2 0
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ a , a , a f t a a . t v ] , [ a , a , t v ] , [ a , 1v ] , [ a , a , t v ] , [ a , a f t e a . 1v ] , [ a , a f t a a . a , t v ] , [ a . a f t a a , t v , a ] , [ a , t v , a ] ]
■ a x tT k p a .
J «
I T -
D .2 Acquiring an Individual Language (2)
I T- ap.
Tka laltial aattlag la [0 0 0 0 0 0 0 0 ] la
■axtT [a,a,tv). U
Oaakla ta paraa [a,a,tv]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [0 0 0 0 0 1 1 0 ] Ik
360
■ • a t ? 12
Qm U « t a p a n *
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 00 0 0 0 0 1 1 ] Xc
■ a i t ? [ a . a . t v ] . 1 3
I h u U t t a p a r a a [ a , a , t v ]
l a a a t t l a g t k a p a r a a w t a r a . . .
P a r a a a t a r a r a a a t t a ; [ 0 0 0 0 0 1 1 1 ] M
■ a i t ? [ a , t v , a } . X4
O a a k la t a p a r a a [ a , t v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [10 0 0 0 1 0 1 ] I t
■ • a t ? [ a , a , t v ] . XC
O a a k la t a p a r a a [ a . a . t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a :[ 1 0 0 0 0 1 1 1 ] I f
l a s t ? [ a , t v , a ] . XO
O a a k la t a p a r a a [ a , t v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [11 0 0 0 1 0 1 ] Xg
■ a r t ? [ a . a . t v ] . XT
O a a k la t a p a r a a [ a , a , t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a :[ 1 1 0 0 0 1 1 1 ] Xk
■ a r t ? g a a a r a t a .
L a a g a a g a g a a a r a t a * a l t k c a r r a a t a a t t l a g :
[ [ a . a . a f t a a . t v ] . [ a . c . t v ] , [ a , i v ] . [ a . a f t a a . l v ] , [ a , a f t a a , t v , a ] , [ a , t v , a ] ]
■ • a t ? [ a . a . t v ] . X*
O a a k la t a p a r a a [ a , a , t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 / 0 1 0 ) XI
■ • a t ? g a a a r a t a .
L a a g a a g a g a a a r a t a * a l t k c a r r a a t a a t t l a g :
[ [ • , a , t v ) , [ a f t a a . a , a . t v ) , [ a f t a a . a , l v ] , [ a . I v ] , [ • , a , t v ] , [ a . a f t a a . l v ] . [ a . a f t a a . a , t v ] ]
■ • a t ? [ a . t v . a ] , X t
O a a k la t a p a r a a [ a , t v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 1 / 0 0 ] XJ
■ a a t ? [ a . a . t v ] . X I0
O a a k la t a p a r a a [ a . a . t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 0 1 / 0 1 ) Xk
■ a a t ? [ a . a . t v ] . X l l
O a a k la t a p a r a a [ a . a . t v )
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 / 0 1 1 ] X I
■ a a t ? g a a a r a t a .
L a a g a a g a g a a a r a t a * a l t k c a r r a a t a a t t l a g :
[ [ a , a f t a a . a , t v ] . [ a . a . a f t a a . t v ] , [ a , a . t v ] , [ a . I v ] , [ • , « , t v ] , [ a , a f t a a . l v ] , [ a , a f t a a . a , t v ) ]
■ a a t ? [ a , t v , a ) . X l l
O a a k la t a p a r a a [ a , t v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
361
r u i M t t n r a a a t * • : ( 0 0 0 0 0 1 1 / 0 1 ] fa
■ a r t ? [ a . t r ] . X i3
C u r n i t M t t i i f r a a a l a a a a c k u g a d .
l a a t T ( a , a f t a a .I t ] . 114
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a a tT [ a . t r . a ] . t i t
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
l a a t T [ a . a . t r ] . { I t
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a a tT ( a . a . t r ) . %17
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a a t ? [ a , a f t a a . t r , a ] . t i t
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
l a s t ? [ a , a , a f t a a . t r ] . t i t
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a a t ? [ a . a f t a a . a . t r ] . 1 1 0
C a r r a a t a a t t l a g r a a w l a a a a c k a a g a d .
■ a a tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
( C a , a , a f t a a . t r ] , [ a . a . t r ] , [ a , l r ] , [ a . a . t r ] , ( a , a f t a a , I r ] , ( a . a f t a a . a . t r ] , [ a , o f t a a . t r , a ] , [ a . t r , . ] ]
■ a r t ? b y a .
?•*
I T -
D .3 Acquiring A ll Languages in the P-Space of
S(M )-Param eters
I ? - l a a r a _ a l l _ l a a g a .
T r y l a g t a l a a r a [ ( a , i r ] , ( a , t r , • ] ] . . .
F i a a l a a t t l a g :0 0 0 0 0 0 0 0
L a a g a a g a g a a a r a t a d : ( [ a , l r ] , [ a , t r , a ] ]
T k a l a a g a a g a [ [ a . i r ] , [ a , t r , a ] ] l a l a a r a a b l o .
T r y l a g t a l a a r a [ [ a , l r ] , [ a . a . t r ] , [ a , t r , a ] ] . . .
F i a a l a a t t l a g : 0 0 0 0 0 1 1 / 0 0
L a a g a a g a g a a a r a t a d : [ [ a , l r ] , [ a , a . t r ] , [ a , t r , a ] ]
T k a l a a g a a g a [ ( a , l r ] , [ a , a , t r ] , [ a , t r , a ) ] l a l a a r a a b l a .
T r y l a g t a l a a r a ( ( a , l r ] , [ a . a . t r ] ] . . .
F i a a l a a t t l a g :0 0 0 0 0 1 1 0
L a a g a a g a g a a a r a t a d : [ [ a , l r ] , [ a . a . t r ] ]
T k a l a a g a a g a [ [ a , l r ] , [ a , o , t r ] ] l a l a a r a a b l o .
T r y l a g t a l a a r a [ [ a , t r , a ] , [ a , l r ] , [ a , t r , a ] } . . .
F i a a l a a t t l a g :1 1 1 1 1 1 1 1
L a a g a a g a g a a a r a t a d : [ [ a . t r . a ] , ( a , i r ] , [ a . t r , a ] ]
T k a l a a g a a g a [ [ a , t r , a ] , [ a , i r ] , [ a , t r , a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ a , t r , a ] ] . . .
F i a a l a a t t l a g :1 0 0 0 0 0 1 0
L a a g a a g a g a a a r a t a d : [ [ l r , a ] , [ a . t r . a ] ]
a k l e h l a a a a g a r a a t a f [ [ a , t r , a ] )
T k a l a a g a a g a [ [ a , t r , a ] ] l a I 0 T l a a r a a b l a .
T r y l a g t a l a a r a [ [ a , a , t r ] , [ a , l r ] , [ a , t r , a ] ] . . .
F i a a l a a t t l a g :1 1 0 0 0 1 1 1
L a a g a a g a g a a a r a t a d : [ [ a , a . t r ) , [ a , i r ) , [ a . t r . a ] ]
T k a l a a g a a g a ( [ a , a , t r } , [ a , l r ] , [ a , t r . a ] ] l a l a a r a a b l a .
362
I r j i « | t a l a a r a [ [ a . a . t v ] , [ a , i v ] ,ta.*.t v ] . [ a . t v . a ] ] . . .
F i a i l a a t t l a g : 0 0 0 0 0 11 /0 1
l a a p t f i p i t n M : [ [ a . a . t v ] . [ a . i v ] , [ a . a . t v ] , t a , t v , a ] ]
T k a l a a g a a g a [ [ a . a . t v ] , [ a , l v ] , [ a . a . t v ] , [ a . t v . a ] ] l a l a a r a a b l a .
T r y l a g « a l a a r a [ ( a . a . t v ] , [ a , l v ] , [ a . a . t v ] ] . . .
F i a a l a a t t l a g :0 0 0 0 0 1 1 1
L a a g a a g a g a a a r a t a d : [ [ a , a , t v ] , [ a , i v ] , [ a , a , t v l ]
T k a la a g a a g a [ [ a . a . t v ] , [ a , l v ] , [ a , a , t v ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ a . a . t v ] . [ a , i t ] ] . . .
F i a a l a a t t l a g :0 0 0 0 0 0 1 0
L a a g a a g a g a a a r a t a d : [ [ a . a . t v ] . [ a , I v ] ]
T k a l a a g a a g a [ [ a . a . t v ] , [ a , i v ] ] i t l a a r a a b l a .
T r y l a g t a l a a r a [ [ a . a . t v ] , [ a , t v , a ] , [ a . i v ] . [ a . t v . a ] ] . . .
F i a a l a a t t l a g ; 1 1 0 0 0 1 / 0 1 1
L a a g a a g a g a a a r a t a d : [ [ a . a . t v ] . [ a . t v . a ] , [ a , I v ] , [ a . t v . a ] ]
T k a l a a g a a g a [ [ a . a . t v ] , [ a . t v . a ] , [ a . I v ] , [ a . t v . a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ a . a . t v ] . [ a . t v . a ] . [ a , i v ] . [ a . a . t v ] , [ a , t v , a ] ] . . .
F i a a l a a t t l a g : 1 0 0 0 0 1 / 0 1 / 0 1
L a a g a a g a g a a a r a t a d : [ [ a , a , t v ] . [ a . t v . a ] , [ a , I v ] , [ a . a . t v ] , [ a , t v , » ] ]
T k a l a a g a a g a [ [ a . a . t v ] , [ a , t v . a ] , [ a , i v ] , [ a , a , t v ] , [ a , t v , a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a ( [ a , a , t v ] , [ a , t v , a ] , [ a , l v ] , [ a , a , t v ] ] . . .
F i a a l a a t t l a g : 1 0 0 0 01 / 0 1 1
L a a g a a g a g a a a r a t a d : [ [ a . a . t v ] , [ a . t v . a ] , [ a , i v ] , [ a , a , t v ] ]
T k a l a a g a a g a [ [ a , a , t v ] , [ a , t v , a ] , [ a , l v ] , [ a , a , t v ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ a . a . t v ] } . . .
F i a a l a a t t l a g :0 0 0 0 0 0 1 0
L a a g a a g a g a a a r a t a d : [ [ a . a . t v ] , [ a . i v ] ]
■ k ic k l a a a a p a r a a t a f [ [ a . a . t v ] ]
T k a l a a g a a g a [ [ a . a . t v ] ] l a I 0 T l a a r a a b l a .
T r y l a g t a l a a r a [ [ i v , a ] , [ t v , a , a ] ] . . .
F i a a l a a t t l a g :1 0 0 0 0 0 0 0
L a a g a a g a g a a a r a t a d : [ [ l v , a ] , [ t v , a , a ] ]
T k a l a a g a a g a [ [ i v , a ] , [ t v , a , a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ i v , a ] . [ t v , a , a ] , [ t v , a , a ] ] . . .
F i a a l a a t t l a g : 1 1 0 0 0 0 1 / 0 0
L a a g a a g a g a a a r a t a d : [ [ l v , a ] . [ t v , a . a ] . [ t v , a , a j ]
T b a la a g a a g a [ [ l v . a ] . [ t v , a . a ] , [ t v , a , a ] ) l a l a a r a a b l a .
T t y l a g t a l a a r a [ [ l v , a ] , [ t v , a , a ] ] . . .
F i a a l a a t t l a g :1 1 0 0 0 0 1 0
La a g a a g a g a a a r a t a d : [ [ l v , a ] , [ t v , a , a ] ]
T b a la a g a a g a [ [ l v . a ] . [ t v . a . a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ l v , a ] , [ a , i v ] , [ a . t v . a ] , [ t v . a . a ] } . . .
F i a a l a a t t l a g : 1 0 0 0 0 1 / 0 0 0
L a a g a a g a g a a a r a t a d : [ [ i v , a ] , [ a . i v ] , [ a , t v , a ] , [ t v , a , a ] ]
T k a l a a g a a g a [ [ l v , a ] , [ a , i v ] , [ a , t v , a ] , [ t v , a , a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a [ [ l v , a ] , [ a . i v ] , [ a , t v , a ] , [ t v , a , a ] , [ t v . a . a ] ] . . .
F i a a l a a t t l a g : 1 1 0 0 0 1 / 0 1 / 0 0
L a a g a a g a g a a a r a t a d : [ [ l v . a ] , [ a . i v ] , [ a , t v , a ] . [ t v . a . a ] , [ t v , a , o ] ]
T ka l a a g a a g a [ [ l v , a ] ,[a,l v ] ,[a,t v , a ] , [ t v , a ,a],[ t v ,a,a ] ] l a l a a r a a b l a .
363
Trylag «• laara CClv,a],Cs.lv],[a,tv,a],[tv.a,a]] ...
Fiaal aattlag: 1 1 0 0 0 1/0 1 0
Laagaaga gaaaratad: [[lv,s] ,[s,iv],[s,tv,a], [tv,a,s]]
Tka laagaaga [[lv,a],[s,iv] .[a.tv.a],[tv.a.a]] la laaraabla.
Trylag ta laara [[lv,a],[a.tv.a],[tv.a.a]] ...
Fiaal aattlag: 1 0 0 0 0 0 1/0 0
Laagaaga gaaaratad: [[lv,a],[a.tv,a],[tv.a.a]]
Tka laagaaga [[lv,a],[a.tv.a], [tv.a.a]] la laaraabla.
Trylag ta laara [[tv,a],(a.tv.a],[tv.a.a].[tv.a.a]] ...
Fiaal aattlag: 1 1 0 0 0 0 1/0 1/0
Laagaaga gaaaratad: [[lv,a],[a,tv.a],[tv.a.a],[tv.a.a]]
Tka laagaaga [[lv.a],[a.tv.a].[tv.a.a].[tv.a.a]] ta laaraabla.
Trylag ta laara [[iv,a],[a,tv,a].[tv.a.a]] ...
Fiaal aattlag: 1 1 0 0 0 0 1 1/0
Laagaaga gaaaratad: [[lv.a].[e.tv.al.Ctv.a.a]]
Tka laagaaga [[lv,a],[a,tv.a],[tv,a,a]] la laaraabla.
Trylag ta laara [[lv,a],[a.tv.a],[a.iv],[a,tv.a],[tv.a.a]] ...
Fiaal aattlag: 1 1 1 1 1 1 1 1/0
Laagaaga gaaaratad: [[iv.al.Ca.tv.al.Ca.ivl.Ca.tv.ol.Ctv.a.e]]
Tka laagaaga [[lv.a],[a,tv,a].[a.iv],[a,tv,a],[tv,a,o]] la laaraabla.
Trylag *• laara [[iv.a],[a,tv,a],[a,lv],[a,tv.a],[tv,a,a],[tv.a.a]] ...
Fiaal aattlag: 1 1 1 1 1 1 / 0 1 1 / 0
Laagaaga gaaaratad: [[lv.a].[a.tv.a].[a,lv].[a.tv.a].[tv.a.a],[tv.a,a]]
Tka laagaaga [[lv,a],[a,tv,a],[a.iv],[a.tv.a],[tv,a,a],(tv,a,a]] la laaraabla.
Trylag ta laara [[lv.a],[a.tv.a],[a,lv],[a,a,tv],[a.tv.a],[tv,a,a]] ...
Fiaal aattlag; 1 0 0 0 0 1/0 1/0 0
Laagaaga gaaaratad: [[lv.a],[a,tv.a],[a.iv],[a,a.tv],[a,tv.a],[tv,a,a]]
Tka laagaaga CClv,a],[t,tv,s],[s,lv],[s,s,tv],[a,tv,al,Ctv,s,a]] la laaraabla.
Trylag ta laara [[iv,a],Ca,tv,a],[a.iv],[a.a.tv]] ...
Fiaal aattlag: 1 0 0 0 0 1/0 1 0
Laagaaga gaaaratad: [[lv,a],[o,tv,a],[a,lv],[a,a,tv]]
Tka laagaaga [[iv.a],[a,tv,a],[a.iv].[a.a.tvj] la laaraabla.
Trylag ta laara [[iv,a],[a,tv,a]] ...
Fiaal aattlag: 1 0 0 0 0 0 1 0
Laagaaga gaaaratad: [[lv,a],[a.tv.a]]
Tka laagaaga [[iv,s],[a,tv,aj] la laaraabla.
Trylag ta laara [[lv,a],[a,a,tv],[a,tv,a],[a.iv],[a,tv,a ] ,[tv,a.a],[tv.a,a]] ...
Fiaal aattlag: 1 1 0 0 0 1/0 1/0 1/0
Laagaaga gaaaratad; [[lv,a],[a,a,tv],[a,tv,a],[a,lv],[a,tv,a],[tv,a,a],[tv,a,a]]
Tka laagaaga [[lv.a],[a.a.tv],[a,tv.a],[a,iv],[a,tv,a],[tv,a,a],[tv.a.a]] la laaraabla.
Trylag ta laara [[iv,a],[a,a,tv],[a,tv,a],[a,lv],[a,tv.a],[tv,a,a]] ...
Fiaal aattlag: 1 1 0 0 0 1/0 1 1/0
Laagaaga gaaaratad: [[lv,a],[a,a,tv],[a.tv.a],[a,lv],[a.tv.a],[tv.a.a]]
Tba laagaaga [[iv,a],[a,a,tv],[a,tv,a],[a.iv],[a,tv,a],[tv,a,a]] la laaraabla.
Trylag ta laara [[iv,a],[a.a.tv],[a,tv.a],[a,lv],[a,a,tv],[a,tv,a],[tv.a.a]] ...
Fiaal aattlag: 1 0 0 0 0 1/0 1/0 1/0
Laagaaga gaaaratad: [[lv,a],[a,a,tv],[a,tv,a],[a,lv],[a,a,tv].[a.tv,s],[tv.a.a]]
Tka laagaaga [[iv,a],[a.a.tv],[a,tv,a],[a,lv],(a,a,tv],(a,tv,a],[tv.a.a]] la laaraabla.
Trylag ta laara [[lv,a],[a,a,tv],Ca,tv,a],[a,iv],[a,a,tv]] ...
Fiaal a atti^ ; 1 0 0 0 0 1/0 1 1/0
364
Laagaaga (tM tilid: [[lv.a],[a,a,tv],[a,tv,a),[a.iv],[a,a,tv]]
Tba laagaaga [[lv,a],(*,*,tv],(a,tv,a],[a,lv],|i.a,tv]] it ltu u k U .
I**
I- T
D .4 Acquiring A ll Languages in the P-Space of
S(M )-Param eters (w ith A dv)
I T- l u n . t l l .l u |i .
Trylag ta U u a , [a.aftaa, itr],[a.aftaa.tv,*],[•,<«,«]] ...
Fiaal aattlag: 0 0 0 0 0 1 0 0
Laagaaga gaaaratad: [Ca.lt],[*,aftaa,iv],[a,aftaa,tv,a],[a.tr.a]]
Tka laagaaga [[a,1*1,[a,aftaa.lt].[a.aftaa,tv,a].[a.tv.a]] la laaraabla.
Trylag tv laara
[[a,lt].[a,a,tv],[a.aftaa.it],[a,aftaa.a,ttl,[a.aftaa,tv,a],[a,tv,a]] ...
Fiaal aattlag: O 0 0 0 0 1 1/0 0
Laagaaga gaaaratad: [[a.iv].[a.a.tv].[a,aftaa.lt],[a,aftaa,a,tv],[a.aftaa.tv,a],[a.tv.a]]
Tba laagaaga
[[a.It],[a.a.tv],[a.aftaa.lv],[a.aftaa,a,tv],[a.aftaa.tv,a],[a.tv.a]]
la laaraabla-
Trylag ta laara [[a.iv],[a,a,tv],[#,aftaa.lv],[a,aftaa,a,t*]] ...
Fiaal aattlag: 0 0 0 0 0 1 1 0
Laagaaga gaaaratad: [[a.iv].[a.a.tv].[a.aftaa,iv),[a,aftaa,a,tvj]
Tba laagaaga [[a,iv],(a,a,tv].[a.aftaa,iv],[a,aftaa,a,tv]] la laaraabla.
Trylag ta laara CCa.lv],[a.1*.aftaa],[a,tv.a],[a,tv,aftaa.a]] ...
Fiaal aattlag: 1 1 1 1 0 1 0 0
Laagaaga gaaaratad: [[a.iv].Ca,lv,*ftaa],[a,tv,a),[a,tv,aftaa,a])
Tba laagaaga [[a.iv],(a.iv,aftaa],[a.tv.a],[a,tv,aftaa,a]] la laaraabla.
Trylag ta laara [(aftaa.a,lv],[aftaa.a,tv,a],[a.iv],[a,tv,*]] ...
Fiaal aattlag: 0 0 0 0 0 0 0 0
Laagaaga gaaaratad: [[aftaa,a,lv],[aftaa,a,tv,a],[a,lv],[a,tv,a]]
Tba laagaaga [[aftaa,a,iv].[aftaa,a,tv.a],[a.iv],[a,tv,a]] la laaraabla.
Trylag ta laara
[[aftaa,a.iv],[aftaa,a,tv,a],[a,lv],[a.aftaa.lv],[a.aftaa,tv,a],[a,tv.a]] ...
Fiaal aattlag: 0 0 0 0 0 1/0 0 0
Laagaaga gaaaratad:
[[aftaa,a,iv], [aftaa,a,tv,a], [a, iv], [a.aftaa.lv], [a.aftaa,tv,a], [a,tv,a]]
Tba laagaaga
[[aftaa,a,iv),[aftaa,a,tv,a],[*,!*],[a,aftaa.iv],[a,aftaa.tv,*],[a.tv.a]]
la laaraabla.
Trylag ta laara
[[a,tv,a],[a,tv,a,aftaa], [a.iv],[a,lv,aftaa],[a,tv.a],[a,tv,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1 1 1
Laagaaga gaaaratad:
[[a,tv,a),[a,tv,a,aftaa], [a,lv],[a,iv,aftaa],[a,tv,a],[a,tv,aftaa,a]]
Tba laagaaga
[[a,tv,a], [a,tv,a,aftaa], [a,lv], [a,lv,aftaa], [a,tv,a],[a,tv,aftaa,a]]
la laaraabla.
Trylag ta laara
[[a,tv,aftaa,a], [a,tv,a], [a,tv.a,aftaa], [a,lv], [a,iv,aftaa], [a,tv,a],
[a,tv,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1 / 0 1 1
365
Laagaaga pM ritri:
[[a,tV,aftaB,a] .[0,tV,a] ,[a,tV.a,aftaa] .Cl.lv] ,[a,iV,aftaa],[a,tV,a] ,
Tka laagaaga
[[alt i laftM ,t],[«>tf,«}l[i,t«>i,>ll*a]l[a ,i* ],[i,h l«ft*a]l[«,tv>t],
[iit*iafta»,*U la laaraabla.
Trylag «a laara [[a,tv,aftaa,a],[a.tv.a]] ...
Fiaal aattlag: 1 1 1 1 0 0 1 1
Laagaaga gaaaratad: [[a.tv,aftaa,a],to,tv,a]]
Tka laagaaga [[a,tv,aftaa,a],[a,tv,a]] la laaraabla.
Trylag ta laara
[[a,a,tv],[aftaa,a.a.tv],[aftaa,a,iv],[
[a.aftaa,a,tv)] ...
Fiaal aattlag: 0 0 0 0 0 1/0 1 0
Laagaaga gaaaratad:
[[a.a.tv],[aftaa.a,a, tv],[aftaa,a,lv], [
Tka laagaaga
[[a.a.tv].[oftaa,a.a,tv].[aftaa.a,1v].[
la laaraabla.
Trylag ta laara [[•,a,tv],[aftaa,a,a,tv],[aftaa.a,lv].[a.iv]] ...
Fiaal aattlag: 0 0 0 0 0 0 1 0
Laagaaga gaaaratad: [[a.a.tv],[aftaa,a,a,tv],[aftaa,a,iv],[a,lv]]
Tba laagaaga [[a,a,tv],[aftaa,a,a,tv],[aftaa.a,iv],[a,lv]] la laaraabla.
Trylag ta laara
[[a,a,tv],[aftaa,a,a,tv],[aftaa.a,lv],[aftaa.a,tv,a],[a,lv],[a,tv,a]] ...
Fiaal aattlag: 0 0 0 0 0 0 1/0 0
Laagaaga gaaaratad:
[[a,a,tv], [aftaa,a,a, tv], [aftaa,a,lv], [aftaa,a,tv,a], [a,lv], [a,tv,a]]
Tba laagaaga
[[a,a,tv],[aftaa,a,a,tv],[aftaa,a,iv],[aftaa,a,tv,a],[a.iv],[a,tv,a]] la laaraabla.
Trylag ta laara
[[a,a,tv], [aftaa,a,a,tv], [aftaa,a.iv], [aftaa.a,tv,a], [a.iv], [a,a,tv], [a,aftaa.lv],
[a,aftaa.a,tv],[a,aftaa,tv,a] ,[a,tv,a]] ...
Fiaal aattlag: 0 0 0 0 0 1/0 I/O 0
Laagaaga gaaaratad:
[[a,a,tv],[aftaa,a,a,tv],[aftaa.a,iv],[aftaa.a,tv,a],[a,iv], [a,a,tv],[a,aftaa, iv],
[a,aftaa,a,tv], [a,aftaa,tv,a], [a,tv,a]]
Tka laagaaga
[[a ,a,tv],[aftaa,a,a,tv],[aftaa,a,lv),[aftaa,a,tv ,a],[a,lv],[a,a ,tv],[a,aftaa,iv],
[a,aftaa,a,tv],[a,aftaa,tv,a],[a,tv,a]] la laaraabla.
Trylag ta laara
[[a,a,tv],[a,a,tv,aftaa],[a,lv],[a,lv,aftaa],[a,tv,a],[a,tv,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 0 1 1 1
Laagaaga gaaaratad:
[[a,a,tv], [a.a.tv,aftaa], [a.iv], [a,iv,aftaa],[a,tv,a], [a,tv,aftaa,a]]
Tka laagaaga
[[a,a,tv],[a,a,tv,aftaa],[a,lv],[a,iv,aftaa],[a,tv,a],[a,tv,aftaa,a]] la laaraabla.
Trylag ta laara
[[a.a.tv], [a,a,tv,aftaa], [a,tv,aftaa,a],[a,tv,a], [a,lv], [a,lv,aftaa], [a, tv,a],
[a,tv,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 0 1/0 1 1
Laagaaga gaaaratad:
[[a,a,tv],[a,a,tv,aftaa], [a,t v,aftaa,a],[a,tv,a],[a,1v],[a,1v,aftaa],[a,tv,a],
[a,tv,aftaa.a]]
-].
] ,[a,aftaa.a,tv]]
-), [a.aftaa.a, tv])
366
T b a l u p a p
[ [ a . a . t v ] , [ a , a , t v , a f t a a ] , [ a . t v , a f t a a , a ] , [ a . t v . a ] , [ a . l v ] , [ a . i v , o f t a a ] , [ a , t v , a ] ,
[ a , t v , a f t a a . a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a
[ [ a . a . a f t a a . t v ) , [ a , a . t v ] , [ a . i v ] . [ a , a f t a a , l v ] , [ a . a f t a a , t v . a ] . [ a , t v . a ) ] . . .
F i a a l a a t t l a g :1 1 0 0 0 1 1 1
L a a g a a g a g a a a r a t a d :
[ [ a , a , a f t a a , t v ] , [ a . a , t v ] , [ a . i v ] , [ a , a f t a a . l v ] , [ a , a f t a a , t v , a ] , [ a . t v , a ] ]
T b a l a a g a a g a
[ [ a , a , a f t a a , t v ] , [ a , a , t v ] , ( a , i v ] , [ a , a f t a a , l v ] , [ a , a f t a a , t v , a ] , [ a , t v , a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a
[ [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , l v ] , [ a . a . t v ] , [ a , a f t a a , l v ] . [ a . a f t a a , a , t v ] , [ a , a f t a a , t v , a ] ,
[ a , t v , a ] ] . . .
F i a a l a a t t l a g : 0 0 0 0 0 1 1 / 0 1
L a a g a a g a g a a a r a t a d :
[ [ a , a , a f t a a , t v ) , [ a . a . t v ] , [ a , l v ] , ( a . a . t v ] , [ a . a f t a a , i v ] . [ a . a f t a a . a . t v j .
[ a . a f t a a , t v . a ] , [ a . t v . a ] ]
T b a l a a g a a g a [ [ a , a , a f t a a , t v ] , [ o , a , t v ] , [ a . i v ] , [ a , a , t v ] , [ a , a f t a a . l v ] , [ a , o f t a a , a , t v ] , [ a , a f t a a . t v , e ] ,
[ a . t v . a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a
( [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a . i v ] , [ a , a , t v ] , [ a , o f t a a . l v ] , [ a , a f t a a , a , t v ] ] . . .
F i a a l a a t t l a g :0 0 0 0 0 1 1 1
L a a g a a g a g a a a r a t a d :
[ [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , l v ] , [ a , a , t v ] , [ a . a f t a a . l v ] , [ a , a f t a a , o , t v ] ]
T b a l a a g a a g a
[ ( a , a , a f t a a , t v ] , [ a , a , t v ) , [ a , l v ] , [ a , a , t v ] , [ a , a f t a B , i v ] , [ a , a f t a a , a , t v ] ] l a l a a r a a b l o .
T r y l a g t a l a a r a [ [ a , a f t a a , t v . a ] , [ a . t v . a ] ] . . .
F i a a l a a t t l a g :1 0 0 0 0 0 1 1
L a a g a a g a g a a a r a t a d : [ [ a , a f t a a , t v , a ] , [ a , t v , a ] )
T b a l a a g a a g a [ [ a , a f t a a , t v , a ] , [ a , t v , a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a
( [ a , a f t a a , t v . a ] , [ a . a . a f t a a , t v ] , ( a , a , t v ] , [ a . t v . a ] , [ a . i v ] , [ a , o f t o a . l v ] , [ a , a f t a a , t v , a ] ,
[ a , t v , a ] ] . . .
F i a a l a a t t l a g : 1 1 0 0 0 1 / 0 1 1
L a a g a a g a g a a a r a t a d :
( [ a , a f t a a , t v , a ] , [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , t v , a ] , [ a . i v ] , [ a , a f t a a . l v ] ,
( a , a f t a a , t v , a ] , [ a , t v , a ] )
T b a l a a g a a g a [ [ a , a f t a a , t v , a ] , [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , t v , a ] , [ a . i v ] , [ a , a f t a a . l v ] ,
[ a , a f t a a , t v , a ] , [ a . t v . a ] ] l a l a a r a a b l a .
T r y l a g t a l a a r a
[ [ a , a f t a a , t v . a ] , [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , t v , a ] , [ a , i v ] , [ a . a . t v ] , [ a , a f t a a . l v ] ,
[ a , a f t a a . a , t v ] , [ a , a f t a a , t v , a ) , [ a , t v , a ] ] . . .
F i a a l a a t t l a g : 1 0 0 0 0 1 / 0 1 / 0 1
La a g a a g a g a a a r a t a d :
[ [ a . a f t a a , t v . a ] , [ a , a , a f t a a . t v ) , [ a , a , t v ] , [ a , t v . a ] , [ a , i v ] , [ a , a , t v ] , [ a , a f t a a . l v ] ,
[ a , a f t a a , a , t v ] . [ a . a f t a a . t v , a ] , C a , t v . a ] ]
T b a l a a g a a g a
[ [ a , a f t a a , t v , a ] , [ a . a , a f t a a , t v ] , [ a , a , t v ] , [ a , t v . a ] , [ a , l v ] , [ a , a , t v ] , [ a , a f t a a . l v ] ,
[ a . a f t a a , a , t v ] , [ a , a f t a a , t v , a ] , [ a , t v , a ] ] l a l a a r a a b l a .
T r y l a g t o l a a r a
[ [ a , a f t a a , t v , a ] , [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , t v , a ] , [ o , 1v ] , [ a , a , t v ] , [ a . a f t a a , 1 v ] ,
[ a , a f t a a . a . t v ) ) . . .
F i a a l a a t t i ^ : 1 0 0 0 0 1 / 0 1 1
L a a g a a g a g a a a r a t a d :
[ [ a , a f t a a , t v , a ] , [ a , a , a f t a a , t v ] , [ a , a , t v ] , [ a , t v , a ] , [ a , i v ] , [ a , a , t v ] , [ a , a f t a a , l v ] ,
367
[a ,a fta a ,a ,tv ]]
Tba lM |M |i
[£• .a fta a , tV ,s ] ,[ a ,a , oft u ,tv] .[a ,a . tv] ,[o , tv ,a] ,[ • , lv] ,[ i,o , tv] ,[ • ,a fta a , iv ] ,
i s U u m U «.
Trylag ta la a ra
[ [a .a fta a ,a ,tv ] ,[ a ,a ,tv ] ,[ a f ta a ,a ,a ,tv ] ,[a fta a ,a ,lv ] , [s ,lv ]] . . .
Fiaal a a ttla g : 0 0 0 0 0 0 1 1/0
Laagaaga gaaarata*: [ [a .a fta a ,a ,tv ] ,[ a ,a ,tv ] .[ a f ta a ,a ,a ,tv ] .[a fta a ,a .iv ] , [a ,lv ]]
Tba laagaaga
[ [a .a fta a .a .tv ] , [a ,a ,tv ].[a fta a ,a ,a ,tv ],[a fta a .a ,lv ] , [a ,lv ]] la laaraab la.
Trylag ta la a ra
[[a .a fta a ,a ,tv ] ,[ a ,a ,tv ] ,[ a f ta a ,a ,a ,tv ] ,[a fta a ,a .iv ] ,[ a f ta a ,a ,tv ,a ] , [a ,iv ] ,
[a .tv .a ]] . . .
F iaal a a ttla g : 0 0 0 0 0 0 1/0 1/0
Laagaaga gaaaratad:
[[a ,a fta a ,a ,tv ],[a ,a ,tv ],[a fta a ,a ,a ,tv ],[a fta a ,a ,iv ] , [a fta a ,a ,tv ,a ] ,[a ,lv ] ,
[a .tv .a ]]
Tba laagaaga
[ [a ,a fta a ,a ,tv ] ,[ a ,a ,tv ] .[ a f ta a .a .a ,tv ] ,[a fta a ,a ,lv ] ,[ a f ta a ,a ,tv ,a ],[s ,lv ] , [a .tv .a ]] la laaraab la.
Trylag ta la ara [ [a ,a fta a ,a ,tv ] , [a ,a ,tv ]] . . .
Fiaal a a ttla g : 0 0 0 0 0 0 1 1
Laagaaga gaaaratad: [[a ,a fta a ,a ,tv ].[a .a .tv ]]
Tba laagaaga [[a ,a fta a ,a ,tv ] ,[ a .a .tv ] ] la laaraabla.
Trylag ta la a ra [ [a ,a fta a ,a ,tv ] , [a ,a ,a fta a ,tv ], [a ,a ,tv ] , ( a ,lv ] , [ a .a .tv ] , [a ,a fta a .lv ].[ a .a f ta a .a .tr ] ,
[ a ,a fta a ,tv ,a ],[a ,tv ,a ]] . . .
F iaal a a ttla g : 0 0 0 0 0 1 / 0 1 / 0 1
Laagaaga gaaaratad: [ [a ,a fta a ,a ,tv ] ,[ a ,a ,a f ta a ,tv ],[a ,a .tv ] ,[a .iv ] , [ a .a .tv ] ,[ a .a fta a .lv ],
[ a .a fta a .a , tv ] , [a ,a fta a ,tv .a ] , [a .tv .a ]]
Tba laagaaga
[ [a ,a fta a .a ,tv ] ,[ a ,a ,a f ta a ,tv ],[a ,a ,tv ] ,[a ,iv ] , [ a ,a ,tv ],[a ,a f ta a .lv ] ,[a ,a fta a ,a ,tv ],
[ a ,a f ta a ,tv ,a ] ,[a ,tv ,a ] ) la laaraabla.
Trylag ta la a ra
[[a ,a fta a ,a ,tv ] ,[ a ,a ,a f ta a ,tv ],[a ,a ,tv ] ,[a ,lv ] ,[a ,a ,tv ] , [ a ,a fta a ,iv ],[a .a fta a ,a ,tv ]] . . .
F iaal a a t t l ^ : 0 0 0 0 0 1/0 1 1
Laagaaga gaaaratad:
[[a , a fta a ,a ,tv ], [a, a ,a fta a , tv ], [a ,a , tv ], [a .iv ], [a,a, tv ], [a ,a f ta a, iv ], [a ,a fta a ,a , tv ]]
Tba laagaaga [[a .a fta a .a , tv ], [a ,a .a fta a .tv ], [ a ,a ,tv j, [ a .lv ] , [ a .a ,tv ] , [a ,a fta a .lv ],
[a ,a fta a ,a ,tv ]] la laaraabla.
Trylag ta la a ra
[[a ,a fta a ,a ,tv ] , (a ,a ,a f ta a ,tv ] ,[a ,a ,tv ] ,[ a f ta a ,a ,a ,tv ],[ a f ta a .a .iv ] ,[a .iv ] ,
[ a ,a ,tv ] ,[ a .a f ta a .lv ] ,[a .a fta a ,a ,tv ]] . ..
F iaal a a ttla g : 0 0 0 0 0 1/0 1 1/0
Laagaaga gaaaratad:
[[a , a f ta a ,a ,tv ] .[ a .a .a f ta a .tv ] ,[a ,a ,tv ] ,[ a f ta a ,a ,a ,tv ],[ a f ta a ,a ,iv ] , [ a .iv ] ,
(a .a ,tv ] , [a .a fta a , iv ] , [a .a fta a ,a , tv ]]
Tba laagaaga
[[a .a fta a .a ,tv ] ,[ a ,a ,a f ta a ,tv ],[a ,a ,tv ] ,[a fta a ,a ,a ,tv ] ,[ a f ta a ,a ,iv ],[a ,lv ] , [a ,a ,tv ] ,
[ a ,a fta a ,iv ],[a ,a fta a ,a ,tv ]] la laaraabla.
Trylag ta la ara
[ [ a .a f ta a .a .tr ] , [a ,a .a fta a , tv ] ,[ a ,a ,tv ] ,[ a f ta a ,a ,a ,tv ],[a fta a ,a ,lv ] ,[ a f ta a ,a ,tv ,a ],
[ a ,lv ] ,[ a ,a ,tv ] ,[ a .a f ta a .lv ] ,[a ,a fta a ,a ,tv ],[a ,a fta a .tv ,a ] , [a ,tv ,a ]] . . .
F iaal a a ttla g : 0 0 0 0 0 1/0 1/0 1/0
Laagaaga gaaaratad:
[[a ,a fta a .a ,tv ],[a ,a ,a fta a , tv ],[ a ,a ,tv ] ,[ a f ta a ,a ,a ,tv ],[a fta a ,a ,iv ] ,[ a f ta a ,a ,tv ,a ],
[a ,1v ], [a ,a ,t v ], [a ,a fta a , 1v ], [a,af ta a ,a ,t v ], [a,afta a ,t v ,a1, [ a ,t v ,a] ]
368
Tka lu p a |(
[[a,aftaa,a.tv], [a.a.aftaa, tv],[e,a, tv],[aftaa,a,a,tv],[aftaa.I,iv],[aftaa,a,t*,«],
[■,1*], [a,a ,tv],[a.aftaa,lv].[a.aftaa.a ,tv].[a,aftaa,tv,a].[a,t»,aj] la laaraabla.
Trylag ta laara [[lv.a].[aftaa.lv,a].[aftaa,tv,a,a],[tv,a,a]] ...
Fiaal aattlag: 1 0 0 0 0 0 0 0
Laagaaga gaaarata*: [[iv,a],[aftaa,lv.a],[aftaa,tv.a,a],(tv,a,a]]
Tba laagaaga [[lv,a],[aftaa.lv,a],[aftaa,tv,a,a],[tv,a,a]] la laaraabla.
Trylag ta laara
[[lv,a],[aftaa,lv.a],[aftaa,tv,a,a],[a,lv],[a,aftaa,lv],(a,aftaa,tv,a],[a,tv,a],
[tv.a.a]] ...
Fiaal aattlag: 1 0 0 0 0 1/0 0 0
Laagaaga gaaarata*: [[lv,a].[aftaa.lv,a],[aftaa.tv.a.a],[a,iv],[a,aftaa.lv],[a.aftaa,tv,a],
[a.tv.a],[tv,a,*]]
Tba laagaaga
[[iv,a], [aftaa. lv,a], [aftaa,tv,a,a], [a.iv], [a,aftaa.lv], [a,oftaa,tv,a], [a,tv,a],
[tv,a,a]] la laaraabla.
Trylag ta laara [[iv,a],[aftaa.lv,a],[aftaa,tv,a,s],[tv,a,a]] ...
Fiaal aattlag: 1 1 0 0 0 0 1 0
Laagaaga gaaarata*: [[iv,a],[aftaa.lv,a],[aftaa,tv,a,a],(tv,a,a]]
Tba laagaaga [[tv,a],[aftaa,iv,a],(aftaa,tv,a,a],[tv,a,a]] la laaraabla.
Trylag to laara
[[lv.a],[aftaa.lv,a],[aftaa.tv,a,a],[a.iv],[a,attaa.lv] .[a,aftaa,tv,a],[a,tv.a],
[tv.a.a]] ...
Fiaal aattlag: 1 1 0 0 0 1/0 1 0
Laagaaga gaaarata*:
[[lv.a],[aftaa,lv.a],[aftaa.tv,a,a], [a,lv],[a,aftaa,lv],[a,aftaa,tv,a],
[a,tv,a],[tv,a,a]]
Tba laagaaga
[[1v,a],(aftaa,lv.a],[aftaa.tv,a,a], [a,1v],[a,aftaa,iv),[a,oftaa,tv ,o],[a,tv,a],
[tv.a.a]] la laaraabla.
Trylag to laara [[lv,a],[aftaa,iv,a].[aftaa.tv,a,a],[aftaa,tv,a,a],[tv.a.a],[tv,a,a]] ...
Fiaal aattlag: 1 1 0 0 0 0 1/0 0
Laagaaga gaaarata*:
[[iv,a], [aftaa, iv,a], [aftaa,tv,a,a], [aftaa,tv,a,a], [tv,a,a], [tv,a,a]]
Tba laagaaga
[[lv,a].[aftaa.lv,a),[aftaa,tv,a,a].[aftaa,tv,a,a],[tv,a,a],[tv,a,a]] la laaraabla.
Trylag ta laara
[[iv,a),[aftaa,lv.a],[aftaa.tv.a.a]. [aftaa.tv.a.a],[a,iv], [a,aftaa.lv],[a,aftaa,tv.a),
(a,tv.a],[tv,a,a],[tv,a,a]) ...
Fiaal aattlag: 1 10 0 0 1/0 1/0 0
Laagaaga gaaarata*:
[[lv,a],(aftaa,lv,a],[aftaa,tv,a,a],[aftaa,tv,a,a],[a,lv],[a,aftaa,iv),
[a,aftaa,tv,a], [a,tv,a], [tv,a,a], [tv,a,a])
Tba laagaaga
[[lv.a],[aftaa,lv.a],[aftaa,tv,a,a],[aftaa,tv,a,a],[a,iv],[a,aftaa,lv].[a.aftaa,tv,a] ,
[a,tv,a],[tv,a,a],[tv,a,a]] la laaraabla.
Trylag ta laara
[[lv.a], [a, tv,a], [aftaa,lv.a], [aftaa,a,tv,a], [a.iv], [a,a,tv), [a,aftaa.lv],
[a,aftaa,a,tv)] ...
Fiaal aattlag: 1 0 0 0 0 1/0 1 0
Laagaaga gaaarata*:
[[lv,a], [a,tv,a), [aftaa.lv,a], [aftaa,a, tv,a], [a,lv], [a,a,tv], [a,aftaa,iv],
[a .aftaa.a,tv]]
Tba laagaaga
[[lv,a],[a,tv,a],[aftaa,iv,a],[aftaa,a,tv,a],[a,iv],[a,a,tv],[a,aftoa.lv],
369
[),•««»,•,»]] Is ln m U « .
Trylag *• laara [[lv.a],[a,tv,a],[aftaa.lv,a],[aftaa,a,tv.a],[aftaa.tv.a.a],[tv.a.a]] ...
Plaal aattlag: 1 0 0 0 0 0 1/0 0
Laagaaga gaaaratad:
[[lv,a],[a.tv.a] .[aftaa.tv,a].(aftaa,a,tv.a].[aftaa,tv,a,a],[tv.a.a]]
Tba laagaaga
[[lr.a],(a,tv,a].[aftaa,lv,a].[aftaa,a,tv,a],[aftaa,tv.a.a],[tv,a,a]] la laaraabla.
Trylag to laara
[[lv.a],[a,tv.a],[aftaa,lv.a],[aftaa,a,tv.a],[aftaa,tv,a,a],[a.iv] ,[a.a.tv],
[a,aftaa.tv],[a,aftaa,a,tv],[a,aftaa,tv,a],[a,tv,a],[tv,a,a]] ...
Plaal aattlag: 1 0 0 0 0 1/0 I/O O
Laagaaga gaaaratad:
CClv,a],[a,tv,a],[aftaa.tv,a],[aftaa.a,tv,a],[aftaa,tv,a,a],[a,lv],[a,a,tv],
[a,oftaa.lv],[a,aftaa,a,tv].[a.aftaa,tv,a],[a,tv,a],[tv,a,a]]
Tba laagaaga [[lv,a],[a,tv,a],[aftaa,lv.a],[aftaa,a,tv.a],[oftaa,tv,a,a],[a,lv],[a,a,tv],
[a,oftaa,1v],[a,aftaa,a,tv],[a.aftaa,tv,o],[a,tv.a],[tv.a,a]] la laaraabla.
Trylag ta laara [[lv.a],[a,tv,a],[aftaa.lv,a],[aftaa,a,tv.a]] ...
Plaal aattlag: 1 0 0 0 0 0 1 0
Laagaaga gaaaratad: [[lv,a],[a,tv.a].[aftaa.lv,a],[aftaa,a,tv,a}]
Tka laagaaga [[lv,a],[a,tv,a],[aftaa.lv,a],[aftaa,a,tv,a]] la laaraabla.
Trylag ta laara [[lv.a] .[a.aftaa,tv,a] ,[a,tv,a],[aftaa,lv,a].[aftaa.tv,a,a],[tv,a,a]] ...
Plaal aattlag: 1 1 0 0 0 0 1 1/0
Laagaaga gaaaratad: [[lv,a],[a,aftaa,tv.a],[a,tv.a],[aftaa,lv.a].[aftaa,tv,a,a],[tv.a.a]]
Tba laagaaga [[lv,a],[a,aftaa,tv,a],[a,tv,a],[aftaa,lv,a],[aftaa,tv.a,a],[tv,a,a]] la laaraablo.
Trylag ta laara
[[lv,a],[a,aftaa,tv,a],[a.tv.a],[aftaa.lv,a],[aftaa,tv,a,a],[aftaa,tv,a,a],
[tv.a,a],[tv,a,a]] ...
Plaal aattlag: 1 1 0 0 0 0 1/0 1/0
Laagaaga gaaaratad:
C[lv,a],[a,aftaa,tv,a].[a.tv.a],[aftaa,lv.a],[aftaa,tv.a,a],[aftaa,tv,a,a],
[tv,a,a],[tv.a,a]]
Tba laagaaga C[lv,a],[a,aftaa,tv,a],[a,tv,a],[oftaa.lv,a],[aftaa,tv,a,a],[aftaa,tv.a.a],
[tv,a,a],[tv,a,a]] la laaraablo.
Trylag ta laara
[[lv.a],[a.aftaa.tv,a],[a.tv,a],[oftaa.lv.a],[aftaa,a,tv.a],[aftaa.tv.a.a],[tv.a.a]] ...
Plaal aattlag: 1 0 0 0 0 0 1/0 1/0
Laagaaga gaaaratad:
[[lv.a].(a.aftaa.tv,a],[a.tv,a].[aftaa.lv.a].[aftaa.a.tv,a],[aftaa.tv,a,a]. [tv.a.a]]
Tba laagaaga
[[lv,a],[a,aftaa,tv,a],[a,tv,a],[aftaa,iv,a],[aftaa,a,tv,a],[aftaa,tv.a,a],[tv,a,a)]
la laaraabla.
Trylag ta laara [[lv,a],[a,aftaa,tv,a],[a,tv,a],[aftaa,lv.a],[aftaa,a,tv,a]] ...
Plaal aattlag: 1 0 0 0 0 0 1 1/0
Laagaaga gaaaratad: [[lv,a],[a,aftaa,tv,a],[a,tv,a],[oftaa.lv,a],[aftaa,a,tv,a]]
Tba laagaaga [[lv,a],[a,aftaa,tv,a],[a,tv,a],[aftaa,lv.a],[aftaa,a,tv.a]] la laaraabla.
Trylag to laara
[[lv.a],[a.aftaa,tv,a],[a,a,aftaa,tv],[a,a,tv],[a,tv,a],[aftaa.lv,a],[aftaa,tv.a,a],
(a,lv],[a.aftaa,lv],[a,aftaa,tv,a],[a,tv,a],[tv,a,a]] ...
Plaal aattlag: 1 1 0 0 0 1/0 1 1/0
Laagaaga gaaaratad:
[[lv,a], [a,aftaa.tv,a], [a,a,aftaa.tv], [a.a, tv], [a,tv,a], [aftaa.lv,a], [aftaa.tv.a.a], [a, lv],
[a,aftaa.lv], [a,aftaa, tv,a], [a,tv,a], [tv,a,a]]
Tba laagaaga
[[lv.a], [a,aftaa,tv,a], [a.a.aftaa,tv], [a.a.tv], [a,tv,a], [aftaa.lv,a], [aftaa.
370
is ItUHbl*.
Trylag ts laara
[[lv.s], |s, aftaa,tv,a], [a.s.aftaa,tv], [a.s.tv], [s,tv,a], [aftaa.lv,a], [aftaa,tv,a.a],
[aftaa,tv.s.s],[a,lv],[a.aftaa,lv],[a,aftaa,tv,a],[s,tv,s],[tv,a,s},(tv,s,a]] ...
F lu l aattlag: 1 1 0 0 0 1/0 1/0 1/0
Laagaaga gaaaratad:
[[iv.s],[a,aftaa,tv.a],[«,s,aftaa,tv],[s,s,tv],[a.tv.a],[sftss.lv,a], (aftaa.»»,»,!],
[aftaa,tv,s,a],[a,iv],[a.aftaa.tv],[a.aftaa.tv,a],[a.tv,a],[tv,a,a],[tv,a,a}]
Tba l u p t f i
[[t«,s],[*l*ftn,tT ,s]>I*.s,tfUklw ]l[« ,i,tf],(* ,U ,t],[« fm ,i« .s],t* ttu ,tf,» ,s]l
[aftaa, tv,a,a], [a,lv], [a,aftaa.tv], [s.aftaa, tv,a], [a,tv,a], [tv.a.a], [tv,a,a]] Is lssrssbls.
Trjria| ta l u n
[[lV,s],[s,8ftaa,tV,s] ,[8,S,aftaa,tv],[o,S,tv] ,[a,tV,a],[aftaa,lv,s], [aftaa.a,tV,s] ,
[a,iv], [a,a,tv], [a,aftaa.lv], [a,aftaa,a,tv]] ...
F lu l aattlag: 1 0 0 0 0 1/0 1 1/0
U ip m (u m ta i;
[[lv,a],[a.aftaa,tv,a],[a,a,aftaa,tv], [a.a.tv].[a.tv.a].[aftaa,iv.s].[aftaa.a.tv.a],
[s,is],[s,s ,tv] .[s .sftss,iv],[s,aftaa,«,tv]1
Tba laagaaga
£[1*.a],[a,aftaa,tv,s],[a,s,aftaa,tv],[a.s.tv] ,[a,tv.a] ,[aftaa,lv,a],[aftaa,a,t*,s] ,
[a,lv],[a,a,tv],[a,aftaa.lv],[a,aftaa,a,tv]] is laaraabla.
Trylag ta laara
[[lv,a], [a,aftaa,tv.a], [a,a.aftaa.tv], [a.a.tv],[a,tv,a],[aftaa,lv.s],[aftaa,s.tv,a].
[oftaa,tv,s,a],[s,iv] ,Es,a,tv},[s,aftaa,lv],[s,aftaa,a,tv],[s,oftaa,tv,o],[s,tv,a], [tv,a,a]] ...
Fiaal sa tti^ : 1 0 0 0 0 1/0 1/0 1/0
Laagaaga gaaaratad:
[[lv,s], [a.aftaa,tv,s],[a,s,aftaa,tv], [o.s,tv], [a,tv,s], [aftaa, lv,s], [aftaa,o.tv.s],
[aftaa,tv,s,a], [a, lv], [a.a.tv], [s.aftaa,iv],[»,aftaa,a, tv], [a.aftaa,tv,a], [s, tv,o], [tv,s.o]]
Tba laagaaga
[[lr .a].[a,aftaa,tv,s],[a,s,aftaa,tv],[a,s,tv],[a,tv,s],[aftaa,lv,s], [aftaa.a,tv,s].
[aftaa,tv,s,a],[s,lv],[s,a,tv],[s,aftaa,lvj,[s,aftaa,a,tv],[s,aftaa,tv,a],[s,tv,a],[tv,s,o]]
la laaraabla.
Trylag ta laara [[iv,a],[lv,s,aftaa],[tv.a,a],[tv,s,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1 0 0
Laagaaga gaaaratad: [[lv,a],[iv,s,aftaa],[tv,a,a],[tv,s,aftaa.a]]
Tba laagaaga [[lv.a],[lv.a,aftaa],[tv,a,a],[tv,s,aftaa,a]] is laaraabla.
Trylag ta laara [[lv,a],[lv,s,aftaa],[s,iv],[s,lv,aftaa],[s,tv,a],[B,tv,aftaa,a],[tv,s.a],
[tv,a,aftaa.a]] ...
Fiaal aattlag: 1 1 1 1 1 1 0 1/0
Laagaaga gaaaratad: [[iv,s],[iv,s,aftaa],[s,iv],[s,lv,aftaa],[s.tv,a],[s,tv,aftaa,a],[tv.a.a],
[tv,s,aftaa,a]]
Tba laagaaga
[[lv,a],[iv,s,aftaa],[s,iv],[a,iv,aftaa],[s,tv,a],[s,tv,aftaa.o],[tv,s,a],[tv,s,aftaa,a]]
is laaraabla.
Trylag ta laara [(lv,s],[lv,s,aftaa],[a,tv,a],[a,tv,s,aftaa],[a,lv],[s,iv,aftaa],[s,tv,a],
[s,tv,aftaa.a],[tv,s,a],[tv,s,aftaa,a]] ...
Fiaal a attl^ : 1 1 1 1 1 1 1 1 / 0
Laagaaga gaaaratad: [[lv,s],[iv,s,aftaa],[a,tv,a],[a,tv,a,aftaa],[s,iv],[s,lv,aftaa],(s,tv,a],
[a,tv,aftaa,a],[tv.s.a],[tv,s,aftaa,a]}
Tba laagaaga
[[lv,a), [iv,s,aftaa], [a,tv,s], [a,tv.a,aftaa], [s,lv], [a.iv,aftaa], [s.tv,a], [s.tv,aftaa,a],
[tv,a,a].[tv,a,aftaa,a]] ia laaraabla.
Trylag ta laara [[lv,aftaa,a],[lv,a],(tv,aftaa,a,a],[tv,a,a]] ...
Fiaal aattlag: 1 1 1 1 0 0 0 0
Laagaaga gaaaratad: [[iv,aftaa,a],[iv,a],[tv,aftaa,s,a],[tv,a,a]]
371
TIm lu |U |t [[lv.aftaa,a],[iv,a],[tv,aftaa,a,a],[tv,a,a]] la laaraabla.
Trylag ta laara [[iv,aftaa.a].[lv.a].[tv.a,a],[tv,aftaa.a.a],[tv,aftaa,a,a],[tv.a.a]] ...
Fiaal aattlag: 1 1 1 1 0 0 1 / 0 0
Laagaaga gaaaratad: [El*.aftaa.a],[lv.a],[tv.a.a],[tv,aftaa,a,a] ,[tv,aftaa,a,a], [tv.a.a]]
Tba laagaaga
[[iv,aftaa,a],[iv,a],[«v,a,a],[tv,aftaa.a,a],[tv,aftaa.a,a],[tv.a,a]] la laaraabla.
Trylag ta laara [[lv,aftaa,a],[lv,a],[tv,a,a],[tv,aftaa,a,a]] ...
Fiaal aattlagt 1 1 1 1 0 0 1 0
Laagaaga gaaaratad: [[1*.aftaa,a],[lv,a],[tv.a.a],[tv,aftaa,a,a]]
Tba laagaaga [[lv,aftaa,a],[lv,a],[tv,a,a],[tv,aftaa,a,a)] la laaraabla.
Trylag ta laara
E[lv,aftaa,a],[iv,a],[a,lv] ,[a,lv,aftaa],[a,tv,a],[a,tv,aftaa,a] ,[tv,aftaa,a,a],
[tv,a,a]] ,..
Fiaal aattlag: 1 1 1 1 0 1/0 0 0
Laagaaga gaaaratad:
[[lv .aftaa,a] .[lv.a] ,[a.iv],[a,It.oftaa], [a.tv.a]. [a, tv.aftaa,a] .[tv.aftaa.a.a],
[tv.a.a]]
Tba laagaaga [[lv,aftaa,a],[lv.a] ,[a.iv],[a.iv,aftaa],[a,tv,a],[a,tv,aftaa,a] ,(tv,aftaa,a,a] ,
[tv,a,a]] la laaraabla.
Trylag ta laara [[lv,aftaa,a],[iv,a],[a,lv],[a.iv,aftaa],[a,tv,a],[a,tv,aftaa.a],[tv,a,a],
[tv,aftaa.a,a],[tv,aftaa,a,a],[tv,a,a]] ...
Fiaal aattlag: 1 1 1 1 0 1/0 1/0 0
Laagaaga gaaaratad:
[[lv,aftaa,a], [lv,a], [a, lv], [a.iv,aftaa], [a,tv,a], [a,tv,aftaa,a), [tv,a,a],
[tv,aftaa,a,a],[tv,aftaa,s.a],[tv.a.a]]
Tba laagaaga
([lv,aftaa,a],[lv.a],[a,lv],[a,iv,aftaa],[a,tv,a],[a,tv,aftaa,a],[tv,a,a],[tv,aftaa,a,a] ,
[tv,aftaa,a,a],[tv,a,a]] la laaraabla.
Trylag ta laara [[lv.aftaa,a],(lv,a],[a,iv],[a,lv,oftaa],[a,tv,a],[a,tv,oftaa,a],[tv,o,a],
[tv,aftaa,a,a]] ...
Fiaal aattlag: 1 1 1 1 0 1 / 0 1 0
Laagaaga gaaaratad:
[[iv,aftaB,a],[iv,a],[a,iv],[a.iv,aftaa],[a,tv,a],[a,tv,aftaB,a],[tv,a,a],
[tv,aftaa,a,a]]
Tba laagaaga
[[lv,aftaa,a],[iv,a],[a,iv],[a,lv,aftaa],[a,tv,a],[a,tv,aftaa,a],[tv,a,a],[tv,aftaa,a,a]]
la laaraabla.
Trylag ta laara
[[iv,aftaa.a],[lv.a],[a,tv,aftaa.a],[a,tv,a],[tv.a,a),[tv,aftaa.a,a).[tv.aftaa.a.a],
[tv.a.a]] ...
Fiaal aattlag: 1 1 1 1 0 0 1/01/0
Laagaaga gaaaratad: [[lv,aftaa,a] ,[lv,a],[a,tv,aftaa,a] ,[a,tv,a],[tv,a,a],[tv,aftaa,a,a] ,
[tv.aftaa.a.a],[tv.a.a]]
Tba laagaaga
[[lv,aftaa,a], [lv.a], [a,tv,aftaa,a], [a,tv.a], [tv,a,a], [tv,aftaa,a,a], [tv,aftaa,a,a],
[tv,a,a]] la laaraabla.
Trylag ta laara [[iv,aftaa,a],[iv,a],[a.tv.aftaa,a],[a,tv,a],(tv,a,a],[tv,aftaa,a,a]] ...
Fiaal aattlag: 1 1 1 1 0 0 1 1/0
Laagaaga gaaaratad: [[iv,aftaa,a],[lv,a],[a,tv,aftaa,a],[a,tv,a],[tv,a,a],[tv,aftaa,a,a]]
Tba laagaaga
[[iv,aftaa,a],[tv,a],[a,tv,aftaa,a],[a,tv,a],[tv,a,a],[tv,aftaa,a,a]] la laaraabla.
Trylag ta laara [[iv,aftaa,a],[iv,a],[a,a,tv],[a,a,tv,aftaa],[a,tv,aftaa,a],[a,tv,a],[a,iv],
[a,lv,aftaa],[a,tv,a], [a,tv,aftaa,a],[tv,a,a),[tv,aftaa,a,a],[tv,aftaa,a,a],[tv,a,a]] ...
Fiaal aattlag: 1 1 1 1 0 1/0 1/0 1/0
372
L u |(t|< gaaaratad:
[[lv.aftaa,a], [lv.a], [a,a,tv], [a,a, tv,aftaa], [a.tv,aftaa,a], [a, tv,a], [a, lv],
[a,lv,aftaa], [a,tv,a], [a, tv,aftaa,a], [ta,a.a], [tv,aftaa,a,a], [tv,aftaa,a,a], [tv.a.a]]
Tka laagaaga
[[iv,aftaa,a],[lv,a],[a,a,tv],[a,a,tv,aftaa},[a,tv,aftaa,a],[a,tv,a],[a,lv],[a, iv,aftaa],
[a,tv,a],[a,tv,aftaa,a],[tv,a,a],[tv,aftaa,a,a],[tv,aftaa.a,a],[tv,a,a]] la laaraabla.
Trylag ta laara [[lv,aftaa,a],[lv,a],[a,a,tv],[a,a,tv,aftaa],(a,tv,aftaa,a],[a,tv,a],[a,iv],
[a,lv,aftaa],[a,tv,a],[a,tv,aftaa,a],[tv.a.a],[tv,aftaa,a,a]] ...
Fiaal aattlag: 1 1 1 1 0 1/0 1 1/0
Laagaaga gaaaratad:
[[lv,aftaa,a],[lv.a],[a.a.tv],[a,a,tv,aftaa],[a,tv,aftaa,a],[a,tv,a],[a.iv],
[a,lv,aftaa],[a,tv,a], [a,tv,aftaa,a],[tv,a,a],[tv.aftaa,a, a]]
Tba laagaaga [[iv,aftaa.a],[lv.a],[a,a,tv],[a,a,tv,aftaa], [a,tv,aftaa.a],[a,tv.a],[a,lv],
[a,iv,aftaa],[a.tv.a],[a,tv,aftaa,a],[tv,a,a],[tv,aftaa,a,a]] la laaraabla.
Trylag ta laara [[lv,aftaa,a], [lv.a],[lv,a,aftaa],[tv,aftaa,a,a],[tv,a,a], [tv,a,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1/0 0 0
Laagaaga gaaaratad:
[[lv,aftaa,a],[lv.a],[iv,a,aftaa],[tv.aftaa.a.a],[tv.a.a],[tv,a.aftaa.a]]
Tba laagaaga [[lv,aftaa,a],[lv,a],[lv,a.aftaa],[tv,aftaa,a,a],(tv.a,a],[tv.a,aftaa,a]] la laaraabla.
Trylag ta laara
[[lv,aftaa,a],[iv,al,[lv.a,aftaa] ,[tv,a,a],[tv.aftaa.a.a],[tv.a.a] ,[tv,a,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1 / 0 1 0
Laagaaga gaaaratad:
[[lv,aftaa,a],[lv,a],[lv,a,aftaa],[tv,a,a],(tv,aftaa,a,a] ,[tv,a,a],[tv,a,aftaa,a]]
Tba laagaaga
[[iv,aftaa,a),(lv,a],[lv,a,aftaa],[tv, a,a],(tv,aftaa,a,a],[tv,a,a],(tv,a,aftaa,a]]
la laaraabla.
Trylag ta laara
[[lv,aftaa,a],[iv,a],[iv,a,aftaa],[tv,a,a],[tv,aftaa,a,a],[tv,aftaa,a,a],[tv,a,a],
[tv.a.aftaa.a]] ...
Fiaal aattl^; 1 1 1 1 1 1 / 0 1 / 0 0
Laagaaga gaaaratad:
[[lv,aftaa,a],[lv.a],[lv.a,aftaa],[tv,a,a],[tv,aftaa,a,a],[tv,aftaa,a,a],[tv,a,a],
[tv,a,aftaa,a]]
Tba laagaaga
[[lv,aftaa,a],[iv,a],[iv,a,aftaa],[tv,a,a],[tv,aftaa,a,a],[tv,aftaa,a,a],[tv,a,a],
(tv,a,aftaa,a]] la laaraabla.
Trylag ta laara [[iv,aftaa,a],[lv.a],[lv,a,aftaa],[a,lv],[a,lv,aftaa],[a,tv,a],[a,tv,aftaa,a],
[tv,aftaa,a,a],[tv,a,a],[tv,a,aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1 / 0 0 1 / 0
Laagaaga gaaaratad:
C[lv,aftta,a],[iv,a],[iv,a,aftaa],(a,iv],[a.iv.aftaa],[a,tv,a], [a,tv,aftaa,a],
[tv,aftaa,a,a],[tv,a,a],[tv.a,aftaa.a]]
Tba laagaaga [[iv,aftaa,a],[iv,a],[iv,a,aftaa],[a,iv],[a,lv,aftaa],[a,tv,a],[a,tv,aftaa.a],
[tv,aftaa,a,a],[tv,a,a],[tv,a,aftaa,a]] la laaraabla.
Trylag ta laara
[(iv,aftaa.a],[lv,a],[lv,a,aftaa],[a,tv,aftaa,a],[a.tv.a].[a,tv,a,aftaa],[a,lv],
(a, lv,aftaa], [a,tv,a], [a,tv.aftaa,a], [tv.a,a], [tv.aftaa.a.a], [tv,a,a], [tv,a.aftaa,a]] ...
Fiaal aattlag: 1 1 1 1 1 1 / 0 1 1 / 0
Laagaaga gaaaratad:
[[lv,aftaa,a],(lv,a],[iv,a,aftaa],[a,tv,aftaa,a],[a,tv,a], [a,tv.a,aftaa],[a,lv],
[a,iv,aftaa], (a,tv,a], [a,tv,aftaa,a], [tv,a,a], [tv,aftaa,a,a), [tv,a,a], [tv,a,aftaa,a]]
Tba laagaaga
[[lv,aftaa,a],[iv,a],[iv.a,aftaa],[a,tv,aftaa,a].[a.tv,a].[a,tv.a,aftaa].[a,iv],[a.iv.aftaa].
[a.tv.a],[a,tv,aftaa,a],[tv.a,a],[tv,aftaa,a,a],[tv,a,a],[tv,a,aftaa,a]] la laaraabla.
373
Trylag U l t t r i
[[lv,aftaa.a],[lv.a],[lv,a.aftaa],[o,tv,aftaa.a],[a,tv.a],[a.tv,a,oftaa3.ta.lv].
[a, la,aftaa], [a,tv,a], [a,tv.aftaa,a], [tv.a.a], [tv.aftaa.a.a] .[tv.aftaa.a.a], [tv.a.a],
[ta,a,aftaa.a]] ...
Piaal aattlag: 1 1 1 1 1 1 / 0 1/0 1/0
Laagaaga gaaaratad:
[[la,aftaa.a],[iv,a],[la.a,aftaa],[a,tv,aftaa,a],[a.tv.a],[a,tv,a.aftaa].
[a, lv], [a.iv,aftaa], [a,tv,a], [a,tv,aftaa,a], [tv,a,a], [tv.aftaa.a.a], [tv.aftaa.a.a],
[tv,a,a],[tv.a,aftaa.a]]
Tka laagaaga
[[lv,aftaa,a],[lv,a],[lv,a,aftaa],[a,tv.aftaa,a],[a,tv,a],[a,tv,a,aftaa],[a,lv],[a,lv,aftaa],
[a,tv.a),[a,tv,aftaa,a],[tv,a,a],[tv,aftaa,a,a],[tv,aftaa,a,a],[tv,a,a],
[tv.a,aftaa,a]] la laaraaUa.
I «
I- ?
D .5 Param eter Setting w ith N oisy Input
I ?- ap.
Tka laltlal aattlag la [0 0 0 0 0 0 0 0 ] la
■aatT [a.a.tv]. 11
Oaakla ta paraa [a.a.tv]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [0 0 0 0 0 1 1 0 ] Ik
■aatT [lv.a]. 13
Oaakla ta paraa [iv,a]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [1 0 0 0 0 0 1 0 ] 1c
■axtT [a.lv], 13
Oaakla ta paraa [a,lv]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [1 0 0 0 0 1 0 0 ] Id
■aatT [a,tv,a]. 14
Oaakla ta paraa [a.tv.a]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [1 0 0 0 0 0 1 1 ] la
■aatT [a,a,tv]. 11
Oaakla ta paraa [a,a,tv]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [1 0 0 0 0 1 1 1 ] If
■art? [tv,a,a]. 10
Oaakla ta paraa [tv.a.a]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [l 0 0 0 0 1/0 0 0 ] lg
■axtT [a.tv.a]. IT
Carraat aattlag raaalaa aackaagad.
■axtT [a,a,tv]. 11
Oaakla ta paraa [a,a,tv]
laaattlag tka paraaatara ...
Paraaatara raaat ta: [l 0 0 0 0 1/0 1 0 ] lk
■axtT [a,tv.a]. It
Oaakla ta paraa [a.tv.a]
laaattlag tka paraaatara ...
374
h n a r t t n r a a a t t a : [1 0 0 0 0 1 1 / 0 0 ] X I
■ u t l ( a , l , t « ] . X lO
V w U a t a p a r a a [ a , » , t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 0 1 / 0 1 ] XJ
• a a t T [ a . a . t v ] . t i l
U a a U a t a p a r a a [ a . a . t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 / 0 1 1 ] Xk
■ a x tT [ a . t v . a ] . X I3
O a a k la t a p a r a a [ a . t v . a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 1 / 0 1 ] X I
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ a , a . a f t a a , t v ] , [ a . a . t v ] , [ a . i v ] , [ a . a , t v ) , [ a . a f t a a . l v ] , [ a . a f t a a . a . t v ] , [ a , a f t a a , t v . a ] , [ a , t v , « ] ]
■ a x tT [ l v , a ] . X l9
O a a k la t a p a r a a [ i v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 1 0 0 0 0 0 1 1 / 0 ] X a
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ l v . a ] , [ a . a f t a a , t v , a ] , [ a , t v , a ] , [ a f t a a . l v , a ] , [ o f t a a , a , t v , a ] ]
■ a x tT [ a . t v . a ] . X ld
O a a k la t a p a r a a [ a . t v . a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [10 0 0 0 1 0 1 / 0 ] Xa
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ a . i v ) , [ a , a f t a a . l v ] , [ a , a f t a a , t v , a ] , [ a , t v , a ] )
■ a x tT [ a . a . t v ] . X IS
O a a k la t a p a r a a [ a , a , t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [1 0 0 0 0 1 / 0 1 1 ] Xa
■ a x tT [ a . a . t v ] . X l«
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , t v , a ] . X1T
O a a k la t a p a r a a [ a . t v . a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [10 0 0 0 1 1 / 0 1 ] Xp
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ a , a . a f t a a . t v ] , [ a , a , t v ] , [ a , l v ] , [ a , a , t v ] , [ a , a f t a a . l v ] , [ a , a f t a a , a , t v ] , [ a . a f t a a . t v , a ] , [ a , t v . a ] ]
■ a x tT [ l v . a ] . X l l
O a a k la t a p a r a a [ l v , a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [1 1 0 0 0 0 1 1 / 0 ] Xq
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ l v . a ] , [ a . a f t a a . t v , a ) , [ a , t v . a ) , [ a f t a a . l v , a ] , [ a f t a a , t v . a , a ] , [ t v , a , a ] )
■ a x tT [ a . t v . a ] . Xll
O a a k la t a p a r a a [ a . t v . a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
375
P u t H l t n t m t M : [1 1 0 0 0 1 0 1 / 0 ] I t
■ a x tT ( i M r t M .
L u p > | t p M t i t d w i t h c a r r a a t a a t t l a g :
[ [ a , i t ] , [ a , e f t a a , i v ] , [ a . a f t a a , t * . a ] , [ a . t v , a ] ]
■ a a tT ( a , a , * v ] . 1 3 0
O a a h la t a p a r a a [ a . a . t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 / 0 1 / 0 0 ] Xa
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a * a l t k c a r r a a t a a t t l a g :
[ [ a . a , t v ] , [ a f t a a , a , a , t v ] , [ a f t a a , a , i v ] . [ a f t a a . a . t v , a ] , [ a . i v ] . [ a . a . t v ] , [ a . a f t a a . l v ] , [ a . a f t a a , a . t v ] ,
[ a , a f t a a , t v , a ] , [ a . t v . a ] ]
■ a a tT [ i t . a ] . 1 3 1
O a a k la t a p a r a a [ l v . a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [1 0 0 0 0 1 / 01 /0 0 ] X t
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ 1 * . a ] , [ a , t a . a ] , [ a f t a a , i t , a l , [ a f t a a . a , t v , a ] , [ a f t a a , t v . a , a ] , [ a , i t ] . [ a . a . t v ] , [ a , a T t a a . i t ] ,
( a , a f t a a . a , t v ] , [ a . a f t a a , t v , e ] . [ a , t v , o ] . [ t v . a , » ] ]
■ a x tT [ a . t v . o ] . X »
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , a , t v ] . U S
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a . a . t v ] . X 24
O a a k la t a p a r a a [ a , a , t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a ; [ 0 0 0 0 0 0 1 / 0 1 / 0 ] Xa
■ a x tT [ a , a , t v ] . X 2 I
O a a k la t a p a r a a [ a . a . t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [ 0 0 0 0 0 1 / 0 1 / 0 1 ] Xv
■ a x tT [ a . t v . a ] , X 30
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a , a , t v ] . 1ST
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT g a a a r a t a .
L a a g a a g a g a a a r a t a d a l t k c a r r a a t a a t t l a g :
[ [ a , a f t a a , a , t v ] , [ a , a , a f t a a , t v ] , [ a . a . t v ] , [ a . i v ] , [ a , a , t v ] , [ a , a f t a a . l v ] , [ a . a f t a a , a , t v ] , [ a , a f t a a , t v , a ] ,
[ a . t v . a ] ]
■ a x tT [ l v , a ] . X 2 I
O a a k la t a p a r a a [ l v . a ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [1 0 0 0 0 1 / 0 0 1 / 0 ] Xa
■ a x tT ( a , t v , a ) . X 30
C a r r a a t a a t t l a g r a a a l a a a a c k a a g a d .
■ a x tT [ a . a . t v ] . X 30
O a a k la t a p a r a a [ a . a . t v ]
l a a a t t l a g t k a p a r a a a t a r a . . .
P a r a a a t a r a r a a a t t a : [1 0 0 0 0 1 / 0 1 / 0 1 ] Xx
[The remainder of this session transcript (pp. 376-377 in the original pagination) survives only as badly degraded scan text. The legible fragments show the session (parameter setting with noisy input) continuing in the fixed format used throughout this appendix: each input string, drawn from strings over s, o, iv, tv and often such as [s,tv,o], [s,o,tv] and [iv,s], is answered either with "Current setting remains unchanged." or with "Unable to parse [...]", "Resetting the parameters ...", and "Parameters reset to:" followed by the revised parameter vector (e.g. [1 0 0 0 0 1/0 1 1/0]). The command "generate." prints "Language generated with current setting:" followed by the set of strings licensed by that setting; in this session the generated language grows from 8 strings to 12 and then to 15, including adverb strings such as [s,often,iv] and [often,s,tv,o]. After a final unparsable input and one more "Resetting the parameters ...", the transcript ends with Prolog's no and a return to the | ?- prompt.]
D.6 Setting S(M), S(F) and HD Parameters in a Particular Language
[This transcript (pp. 378-382 in the original pagination) is likewise legible only in fragments. It records a session in which the S(M)-, S(F)- and HD-parameters are set together for a single target language. The learner is started with | ?- sp. and announces "The initial setting is" followed by the initial parameter vector; the inputs are then strings containing an auxiliary, such as [s,iv,aux], [s,tv,o,aux] and [s,o,tv,aux]. As before, each unparsable input is answered with "Unable to parse [...]", "Resetting the parameters ..." and "Parameters reset to:" followed by the revised vector, while an input that already parses leaves the setting unchanged. An intermediate "generate." yields the two-string language [[s,iv,aux],[s,tv,o,aux]]; a later one yields [[s,iv,aux],[s,o,tv,aux],[s,tv,o,aux]]; and the final "generate." yields a four-string language that includes [s,iv,aux], [s,o,tv,aux] and [s,tv,o,aux]. The closing exchange is only partly legible, but the session is terminated and control returns to the | ?- prompt.]
References
Abney, S. and J. Cole (1985) A Government-Binding Parser. In Proceedings
of the Sixteenth North East Linguistic Society Conference, University of
Massachusetts, Amherst, MA.
Abney, S. (1987) The English Noun Phrase in its Sentential Aspect. MIT doctoral
dissertation, Cambridge, MA.
Abney, S. (1989) A Computational Model of Human Parsing. Journal of Psycholinguistic Research, 18: 129-144.
Aho, A.V., J.E. Hopcroft and J.D. Ullman (1974) The Design and Analysis of
Computer Algorithms. Addison-Wesley, Menlo Park, CA.
Angluin, D. (1978) On the Complexity of Minimum Inference of Regular Sets.
Information and Control, 39: 337-350.
Angluin, D. (1980) Inductive Inference of Formal Languages from Positive Data.
Information and Control, 45: 117-135.
Aoun, J., N. Hornstein and D. Sportiche (1981) Some Aspects of Wide-Scope
Quantification. Journal of Linguistic Research, 1: 69-95.
Atkinson, M. (1992) Children’s Syntax. Blackwell, Cambridge, MA.
Baker, C.L. (1979) Syntactic Theory and the Projection Problem. Linguistic
Inquiry 10: 533-581.
Baker, M.C. (1985a) The Mirror Principle and Morphosyntactic Explanation.
Linguistic Inquiry, 16: 373-417.
Baker, M.C. (1985b) Syntactic Affixation and English Gerunds. In Cobler et
al. (eds.) Proceedings of the West Coast Conference on Formal Linguistics,
Stanford University, Palo Alto.
Baker, M.C. (1988) Incorporation: A Theory of Grammatical Function Changing.
University of Chicago Press, Chicago.
Belletti, A. (1990) Generalized Verb Movement. Rosenberg & Sellier, Turin.
Berwick, R.C. (1985) The Acquisition of Syntactic Knowledge. The MIT Press,
Cambridge, MA.
Berwick, R.C. (1987) Parsability and Learnability. In B. MacWhinney (ed.)
Mechanisms of Language Acquisition. Lawrence Erlbaum Associates, Hills­
dale, NJ.
Berwick, R.C. (1991) Principles of Principle-Based Parsing. In R.C. Berwick et al.
(eds.) Principle-Based Parsing: computation and psycholinguistics, Kluwer,
Boston.
Berwick, R.C. and A. Weinberg (1984) The Grammatical Basis of Linguistic Performance: Language Use and Acquisition. MIT Press, Cambridge, MA.
Berwick, R.C. and S. Fong (1991) Madama Butterfly Redux: A Parsing Opera in
Two Acts or Parsing English and Japanese with a Principles and Parameters
Approach. To appear in Computational Linguistics.
Bobaljik, J. (1992) Nominally Absolutive is Not Absolutely Nominative. Presented at the Eleventh West Coast Conference on Formal Linguistics, UCLA,
Los Angeles.
Bobaljik, J. and D. Jonas (1993) Subject Positions and the Role of TP. Presented
at the Sixteenth GLOW Colloquium, Lund, Sweden.
Borer, H. (1984) Parametric Syntax: case studies in Semitic and Romance lan­
guages. Foris, Dordrecht.
Borer, H. (1992) The Ups and Downs of Hebrew Verb Movement. UMass manuscript.
Bouchard, D. (1984) On the Content of Empty Categories. Foris, Dordrecht.
Brody, M. (1990) Remarks on the Order of Elements in the Hungarian Focus
Field. In I. Kenesei (ed.) Approaches to Hungarian, Vol. 2. JATE, Szeged.
Brown, R. and C. Hanlon (1970) Derivational Complexity and the Order of Acqui­
sition of Child Speech. In J.R. Hayes (ed.) Cognition and the Development
of Language, Wiley, New York, NY.
Burzio, L. (1986) Italian Syntax. Reidel, Dordrecht.
Campbell, R. (1993) The Occupants of Spec-DP. Presented at the Sixteenth
GLOW Colloquium, Lund, Sweden.
Carstens, V. (1991) The Morphology and Syntax of Determiner Phrases in Kiswahili.
UCLA doctoral dissertation, Los Angeles.
Carstens, V. (1993) On Grammatical Gender and NP Internal Subjects. Pre­
sented at the Sixteenth GLOW Workshop, Lund, Sweden.
Carstens, V. (1993) Feature-Types, DP-Syntax and Subject Agreement. Pre­
sented at the Sixteenth GLOW Colloquium, Lund, Sweden.
Cheng, L. (1991) On the Typology of Wh-Questions. MIT doctoral dissertation,
Cambridge, MA.
Cheng, L. (1993) Wh-Scope Markers and Partial Wh-Movement. Presented at
the Colloquium of the UCLA Dept. of Linguistics.
Chiu, B. (1993) The Inflectional Structure of Mandarin Chinese. UCLA doctoral
dissertation, Los Angeles.
Chomsky, N. (1955) The Logical Structure of Linguistic Theory. Plenum, New York (1975).
Chomsky, N. (1957) Syntactic Structures. The Hague: Mouton.
Chomsky, N. (1965) Aspects of the Theory of Syntax. MIT Press, Cambridge,
MA.
Chomsky, N. (1980) On Binding. Linguistic Inquiry, 11: 1-46.
Chomsky, N. (1981a) Principles and Parameters in Syntactic Theory. In N. Horn-
stein & D. Lightfoot (eds.) Explanation in Linguistics, Longman, London.
Chomsky, N. (1981b) Lectures on Government and Binding. Foris, Dordrecht.
Chomsky, N. (1982) Some Concepts and Consequences of the Theory of Govern­
ment and Binding. MIT Press, Cambridge, MA.
Chomsky, N. (1986a) Knowledge of Language. Praeger, New York.
Chomsky, N. (1986b) Barriers. MIT Press, Cambridge, MA.
Chomsky, N. (1991) Some Notes on Economy of Derivation and Representation.
In R. Freidin (ed.) Principles and Parameters in Comparative Grammar,
MIT Press, Cambridge, MA.
Chomsky, N. (1992) A Minimalist Program for Linguistic Theory. MIT Occasional Papers in Linguistics, Number 1.
Chomsky, N. and H. Lasnik (1977) Filters and Control. Linguistic Inquiry, 8,
425-504. Reprinted in Lasnik (1990).
Chomsky, N. and H. Lasnik (1991) Principles and Parameters Theory. In J.
Jacobs, A. von Stechow, W. Sternefeld and T. Vennemann (eds.) Syntax:
An International Handbook of Contemporary Research, Walter de Gruyter,
Berlin.
Clahsen, H. (1991) Constraints on Parameter Setting: a grammatical analysis of
some acquisitional stages in German child language. Language Acquisition
1(4): 361-391.
Clark, R. (1988) On the Relationship between the Input Data and Parameter
Setting. In Proceedings of the Nineteenth North East Linguistic Society Con­
ference, Cornell University, Ithaca, NY.
Clark, R. (1990) Some Elements of a Proof for Language Learnability. Manuscript,
University of Geneva.
Clocksin, W.F. and C.S. Mellish (1984) Programming in Prolog, 2nd Edition.
Springer-Verlag, New York.
Cornell, T. (1992) Description Theory, Licensing Theory, and Principle-Based
Grammars and Parsers. UCLA doctoral dissertation, Los Angeles.
Emonds, J. (1976) A Transformational Approach to Syntax. Academic Press, New York.
Emonds, J. (1978) The Verbal Complex V’-V in French. Linguistic Inquiry, 9: 151-175.
Emonds, J. (1980) Word Order in Generative Grammar. Journal of Linguistic
Research, 1: 33-54.
Emonds, J. (1985) A Unified Theory of Syntactic Categories. Foris, Dordrecht,
The Netherlands.
Fodor, J.D. (1990) Phrase Structure Parameters. Linguistics and Philosophy, 13: 619-659.
Fodor, J.D. (1991) Learnability of Phrase Structure Grammar. To appear in
R. Levine (ed.) Formal Grammar: Theory and Implementation, Vancouver
Studies in Cognitive Science, University of British Columbia Press.
Fong, S. (1991) Computational Properties of Principle-Based Grammatical The­
ories. MIT doctoral dissertation, Cambridge, MA.
Forster, P. (1989) Identification of Zero-Reversible Languages. MA thesis, The
University of Western Ontario.
Frank, R. (1990) Computation and Linguistic Theory: A Government Binding
Theory Parser Using Tree Adjoining Grammar. MA thesis, University of
Pennsylvania.
Frank, R. and S. Kapur (1993) On the Use of Triggers in Parameter Setting.
UPenn manuscript, Philadelphia.
Frank, R. and A. Kroch (1993) Generalized Transformation in Successive Cyclic
and Long Dependencies. Presented at the Sixteenth GLOW Colloquium,
Lund, Sweden.
Frazier, L. and K. Rayner (1988) Parameterizing the Language Processing Sys­
tem: left- vs. right-branching within and across languages. In J. Hawkins
(ed.) Explaining Language Universals, Basil Blackwell, New York.
Fukuda, M. (1993) Head Government and Case Marker Drop in Japanese. In
Linguistic Inquiry, 24: 168-172.
Fukui, N. (1993) Parameters and Optionality. Linguistic Inquiry, 24: 399-420.
Gibson, E. (1991) A Computational Theory of Human Linguistic Processing:
Memory Limitations and Processing Breakdown. Carnegie Mellon Univer­
sity doctoral dissertation, Pittsburgh, PA.
Gibson, E. and K. Wexler (1993) Triggers. To appear in Linguistic Inquiry.
Gold, E. M. (1967) Language Identification in the Limit. Information and Con­
trol, 10, 447-474.
Gorrell, P. (1993) Structural Relations in the Grammar and the Parser. Manuscript,
University of Maryland.
Greenberg, J. (ed.) (1966) Universals of Language. The MIT Press, Cambridge,
MA.
Grimshaw, J. (1991) Extended Projection. Manuscript, Brandeis University.
Guilfoyle, E. and M. Noonan (1988) Functional Categories and Language Acqui­
sition. Presented at the Boston University Conference on Language Acqui­
sition, Boston.
Haegeman, L. (1991) Introduction to Government and Binding Theory. Basil
Blackwell, Oxford, UK.
Haider, H. and M. Prinzhorn (eds.) (1986) Verb Second Phenomena in Germanic
Languages. Foris, Dordrecht, The Netherlands.
Hamburger, H. and K. Wexler (1975) A Mathematical Theory of Learning Trans­
formational Grammar. Journal of Mathematical Psychology, 12: 137-177.
Hoekstra, T. (1984) Transitivity. Foris, Dordrecht, The Netherlands.
Hoji, H. (1985) Logical Form Constraints and Configurational Structures in Japanese.
University of Washington doctoral dissertation, Seattle, WA.
Horvath, J. (1986) Focus in the Theory of Grammar and the Syntax of Hungarian.
Foris, Dordrecht, The Netherlands.
Horvath, J. (1992) The Syntax of Focus and Parameters of Feature-Assignment.
Presented at the Colloquium of the UCLA Dept. of Linguistics.
Huang, J. (1982) Logical Relations in Chinese and the Theory of Grammar. MIT
doctoral dissertation, Cambridge, MA.
Hyams, N. (1986) Language Acquisition and the Theory of Parameters. Reidel,
Dordrecht, The Netherlands.
Hyams, N. (1987) The Theory of Parameters and Syntactic Development. In T.
Roeper and E. Williams (eds.) Parameter Setting, Reidel, Dordrecht, The
Netherlands.
Hyams, N. (1991) V2, Null Arguments and C Projections. To appear in T.
Hoekstra and B. Schwartz (eds.) Language Acquisition Studies in Generative
Grammar.
Jackendoff, R. (1977) X-bar Syntax. MIT Press, Cambridge, MA.
Jaeggli, O. (1980) On Some Phonologically-Null Elements in Syntax. MIT doctoral dissertation, Cambridge, MA.
Johnson, K. (1991) Object Positions. Natural Language and Linguistic Theory,
9: 577-636.
Johnson, M. (1990a) Features, Frames and Quantifier-Free Formulae. In P. Saint-Dizier and V. Dahl (eds.) Logic and Logic Grammars for Language Processing, Ellis Horwood, New York.
Johnson, M. (1990b) Expressing Disjunctive and Negative Feature Constraints
with Classical First-Order Logic. In Proceedings of the 28th Annual Meeting
of the Association for Computational Linguistics.
Kayne, R. (1984) Connectedness and Binary Branching. Foris, Dordrecht, The
Netherlands.
Kayne, R. (1992) Talks given at the Fifteenth GLOW Colloquium and UCLA.
Kayne, R. (1993) The Antisymmetry of Syntax. CUNY manuscript, New York.
Kim, J. (1993) Null Subjects: comments on Valian (1990). Cognition, 46: 183-
193.
Kitagawa, Y. (1986) Subjects in Japanese and English. University of Massachusetts
doctoral dissertation, Amherst, MA.
Koopman, H. (1984) The Syntax of Verbs. Foris, Dordrecht, The Netherlands.
Koopman, H. (1987) On the Absence of Case Chains in Bambara. UCLA
manuscript.
Koopman, H. (1992) Licensing Heads. To appear in N. Hornstein and D. Lightfoot
(eds.) Verb Movement.
Koopman, H. and D. Sportiche (1985) Theta Theory and Extraction. Abstract
in GLOW newsletter.
Koopman, H. and D. Sportiche (1988) Subjects. UCLA manuscript.
Koopman, H. and D. Sportiche (1990) The Position of Subjects. UCLA manuscript.
To appear in Lingua.
Kuno, S. (1978) Japanese: A Characteristic OV Language. In W. Lehmann, (ed.)
Syntactic Typology: studies in the phenomenology of language. University of
Texas Press, Austin.
Kuroda, S.-Y. (1988) Whether We Agree or Not: a Comparative Syntax of En­
glish and Japanese. In W. Poser (ed.) Papers on the Second International
Workshop on Japanese Syntax, CSLI, Stanford University.
Laka, I. (1990) Negation in Syntax: on the nature of functional categories and
projections. MIT doctoral dissertation, Cambridge, MA.
Laka, I. (1992) Ergative for Unergatives? Presented at the Colloquium of the UCLA Dept. of Linguistics.
Langley, P. and J. Carbonell (1987) Language Acquisition and Machine Learning.
In B. MacWhinney (ed.) Mechanisms of Language Acquisition, Lawrence Erlbaum Associates, Hillsdale, NJ.
Larson, R. (1988) On the Double Object Construction. Linguistic Inquiry, 19:
335-391.
Lasnik, H. (1981) Restricting the Theory of Transformations: a Case Study. In
N. Hornstein and D. Lightfoot (eds.) Explanation in Linguistics, Longman,
London. Reprinted in Lasnik 1990.
Lasnik, H. (1989) On Certain Substitutes for Negative Data. In R. Matthews and
W. Demopoulos (eds.) Learnability and Linguistic Theory, Kluwer, Boston.
Lasnik, H. (1990) Essays on Restrictiveness and Learnability. Reidel, Dordrecht.
Lasnik, H. and J. Uriagereka (1988) A Course in GB Syntax. The MIT Press,
Cambridge, MA.
Lasnik, H. and M. Saito (1989) Move a. The MIT Press, Cambridge, MA.
Lightfoot, D. (1989) The Child’s Trigger Experience: Degree-0 Learnability. Behavioral and Brain Sciences, 12: 321-334.
Lightfoot, D. (1991) How to Set Parameters: Arguments from Language Change.
MIT Press, Cambridge, MA.
Mahajan, A. (1990) The A/A-Bar Distinction and Movement Theory. MIT doc­
toral dissertation, Cambridge, MA.
Mallinson, G. and B. Blake (1981) Language Typology. North-Holland Publishing
Company, Amsterdam, The Netherlands.
Manzini, M.R. and K. Wexler (1987) Parameters, Binding Theory and Learnabil­
ity. Linguistic Inquiry, 18: 413-444.
Marcus, G. (1993) Negative Evidence in Language Acquisition. Cognition, 46: 53-85.
May, R. (1985) Logical Form. MIT Press, Cambridge, MA.
McDaniel, D. (1989) Partial and Multiple Wh-Movement. Natural Language and Linguistic Theory, 7: 565-604.
Mitchell (1993) The Nature and Location of Agreement Within DP. Presented at
the Sixteenth GLOW, Lund, Sweden.
Morgan, J. (1986) From Simple Input to Complex Grammar. MIT Press, Cam­
bridge, MA.
Nyberg, E. (1987) Parsing and the Acquisition of Word Order. Proceedings of
the Fourth Eastern States Conference on Linguistics, Ohio State University,
Columbus, OH.
Nyberg, E. (1990) A Limited Non-Deterministic Parameter-Setting Model. Proceedings of the Twenty-First North East Linguistic Society Conference, McGill University, Montreal, Quebec.
Osherson, D., M. Stob and S. Weinstein (1984) Learning Theory and Natural
Language. Cognition, 17: 1-28.
Osherson, D., M. Stob and S. Weinstein (1986) Systems that Learn. MIT Press,
Cambridge, MA.
Ouhalla, J. (1991) Functional Categories and Parametric Variation. Routledge,
New York and London.
Pesetsky, D. (1989) Language-Particular Processes and the Earliness Principle.
MIT manuscript.
Pesetsky, D. (1993) Cascade Syntax and Layered Syntax. Presented at the Six­
teenth GLOW, Lund, Sweden.
Pereira, F. and S. Shieber (1987) Prolog and Natural-Language Analysis. Chicago
University Press, Chicago.
Pinker, S. (1979) Formal Models of Language Learning. Cognition, 7: 217-283.
Pinker, S. (1984) Language Learnability and Language Development. Harvard
University Press, Cambridge, MA.
Poeppel, D. and K. Wexler (1993) The Full Competence Hypothesis of Clause
Structure in Early German. Language, 69: 1-33.
Pollock, J.-Y. (1989) Verb Movement, UG and the Structure of IP. Linguistic
Inquiry, 20: 365-424.
Pritchett, B. (1991) Head Position and Parsing Ambiguity. Journal of Psycholinguistic Research, 20: 251-270.
Pullum, G.K. (1983) How Many Possible Human Languages are there? Linguistic Inquiry, 14: 447-467.
Radford, A. (1990) Syntactic Theory and the Acquisition of English Syntax: the
nature of early child grammars of English. Basil Blackwell, Cambridge, MA.
Randall, J. (1987) Indirect Positive Evidence: overturning overgeneralizations in
language acquisition. Reproduced by the Indiana University Linguistics Club.
Randall, J. (1990) Catapults and Pendulums: the mechanics of language acqui­
sition. Linguistics, 28: 1381-1406.
Randall, J. (1992) The Catapult Hypothesis: grammars as machines for unlearn­
ing. In J. Weissenborn, H. Goodluck and T. Roeper (eds.) Theoretical Issues
in Language Acquisition: continuity and change in development, Lawrence
Erlbaum Associates, Inc. Hillsdale, NJ.
Ritter, E. (1988) A Head-Movement Approach to Construct-State Noun Phrases.
Linguistics, 26.
Ritter, E. (1990) Two Functional Categories in Noun Phrases: evidence from
Modern Hebrew. UQAM manuscript.
Rizzi, L. (1982) Issues in Italian Syntax. Foris, Dordrecht, The Netherlands.
Rizzi, L. (1990) Relativized Minimality. MIT Press, Cambridge, MA.
Roberts, I. (1991) Excorporation and Minimality. Linguistic Inquiry, 22: 209-218.
Roberts, I. (1992) Two Types of Head Movement in Romance. To appear in N.
Hornstein and D. Lightfoot (eds.) Verb Movement.
Sadiqi, Fatima (1989) Studies in Berber Syntax: the complex sentence. Königshausen & Neumann, Würzburg, Germany.
Safir, K. (1985) Syntactic Chains. Cambridge University Press.
Saito, M. (1985) Some Asymmetries in Japanese and Their Theoretical Conse­
quences. MIT doctoral dissertation, Cambridge, MA.
Saito, M. and H. Hoji (1983) Weak Crossover and Move Alpha in Japanese. Natural Language and Linguistic Theory.
Schachter, P. (1976) The Subject in Philippine Languages: Topic, Actor, Actor-
Topic, or None of the Above. In C.N. Li (ed.) Subject and Topic. Academic
Press, New York.
Shapiro, E.Y. (1983) Algorithmic Program Debugging. MIT Press, Cambridge,
MA.
Sportiche, D. (1988) Conditions on Silent Categories. UCLA manuscript.
Sportiche, D. (1990) Movement, Agreement and Case. UCLA manuscript.
Sportiche, D. (1992) Clitic Constructions. UCLA manuscript.
Sproat, R. (1985) Welsh Syntax and VSO Structure. Natural Language and Lin­
guistic Theory, 3: 173-216.
Stabler, E.P., Jr. (1987) Restricting Logic Grammars with Government-Binding
Theory. Computational Linguistics, 13(1-2): 1-10.
Stabler, E.P., Jr. (1988a) Parsing with Explicit Representations of Syntactic
Constraints. In V. Dahl and P. Saint-Dizier (eds.) Natural Language Under­
standing and Logic Programming, II, North-Holland, New York.
Stabler, E.P., Jr. (1988b) Implementing Government Binding Theory. To appear
in R. Levine and S. Davis (eds.) Formal Linguistics: Theory and Implementation.
Stabler, E.P., Jr. (1989a) Avoid the Pedestrian’s Paradox. In MIT Parsing Volume 1988-1989, edited by C. Tenny, MIT Center for Cognitive Science.
Revised version in R.C. Berwick, S. Abney, C. Tenny (eds.) (1991) Principle-
Based Parsing: Computation and Psycholinguistics, Kluwer, Boston.
Stabler, E.P., Jr. (1989b) What’s a Trigger? Behavioral and Brain Sciences, 12:
358-360.
Stabler, E.P., Jr. (1990) Relaxation Techniques for Relaxation Principles for
Principle-Based Parsing. UCLA Center for Cognitive Science Technical Re­
port 90-1. Revised version to appear in E. Wehrli (ed.) Proceedings of the
Geneva Workshop on GB Parsing.
Stabler, E.P., Jr. (1992) The Logical Approach to Syntax: foundations, speci­
fications and implementations of theories of government and binding, MIT
Press, Cambridge, MA.
Sterling, L. and E.Y. Shapiro (1986) The Art of Prolog: Advanced Programming Techniques. MIT Press, Cambridge, MA.
Schaufele, S. (1991) Richness of Subject-Agreement Marking and V-AGR Merger: the Verdict of Vedic. Presented at the 20th Annual Conference on South Asia, Madison, WI, Nov.
Stowell, T. (1981) Origins of Phrase Structure. MIT doctoral dissertation, Cam­
bridge, MA.
Stowell, T. (1989) Subjects, Specifiers, and X-bar Theory. In M.R. Baltin and
A.S. Kroch (eds.) Alternative Conceptions of Phrase Structure. University
of Chicago Press, Chicago.
Szabolcsi, A. (1987) Functional Categories in the Noun Phrase. In I. Kenesei (ed.)
Approaches to Hungarian, Volume Two: Theories and Analyses, Szeged.
Szabolcsi, A. (1989) Noun Phrases and Clauses: Is DP analogous to IP or CP?
To appear in J. Payne (ed.) Proceedings of the Colloquium on Noun Phrase
Structure.
Teng, S.-H. (1973) Negation and Aspect in Chinese. Journal of Chinese Linguistics, 1: 14-37.
Thiersch, C. (1978) Topics in Germanic Syntax. MIT doctoral dissertation, Cam­
bridge, MA.
Toman, J. (1981) Aspects of Multiple Wh-Movement in Polish and
Czech. In R. May and J. Koster (eds.) Levels of Syntactic Representation,
Foris, Dordrecht, The Netherlands.
Travis, L. (1984) Parameters and Effects of Word Order Variation. MIT doctoral
dissertation, Cambridge, MA.
Valian, V. (1990) Null Subjects: a problem for parameter-setting models of lan­
guage acquisition. In Cognition, 35: 105-122.
Valian, V. (1993) Parser Failure and Grammar Change. In Cognition, 46: 195-
202.
Valois, D. (1991) The Internal Syntax of DP. UCLA doctoral dissertation, Los
Angeles.
Veenstra, M. (1993) An Implementation of the Minimalist Program. Manuscript,
University of Groningen.
Vergnaud, J.-R. (1982) Dépendances et niveaux de représentation en syntaxe. Université de Paris VII, Thèse de Doctorat d’État, Paris.
Watanabe, A. (1991) Wh-in-situ, Subjacency, and Chain Formation. MIT manuscript,
Cambridge, MA.
Webelhuth, G. (1989) Syntactic Saturation Phenomena and the Modern Germanic
Languages. University of Massachusetts dissertation, Amherst, MA.
Weissenborn, J. (1990) Functional Categories and Verb Movement: The Acquisition of German Syntax Reconsidered. In M. Rothweiler (ed.) Spracherwerb
und Grammatik, Linguistische Berichte, Sonderheft 3.
Weissenborn, J., H. Goodluck and T. Roeper (eds.) (1992) Theoretical Issues
in Language Acquisition: continuity and change in development. Lawrence
Erlbaum Associates, Hillsdale, NJ.
Wexler, K. (1991) The Subset Principle is an Intensional Principle. To appear
in E. Reuland and W. Abraham (eds.) Knowledge and Language: Issues in
Representation and Acquisition, Kluwer.
Wexler, K. (1991) Optional Infinitives, Head Movement, and the Economy of
Derivation. Presented at the Verb Movement Conference at the University
of Maryland, College Park, MD.
Wexler, K. and P. Culicover (1980) Formal Principles of Language Acquisition.
MIT Press, Cambridge, MA.
Wexler, K. and H. Hamburger (1973) On the Insufficiency of Surface Data for the
Learning of Transformational Languages. In K.J. Hintikka, J.M.E. Moravcsik
and P. Suppes (eds.) Approaches to Natural Language, Reidel, Dordrecht,
The Netherlands.
Wexler, K. and M. Manzini (1987) Parameters and Learnability in Binding Theory.
In T. Roeper and E. Williams (eds.) Parameter Setting, Reidel, Dordrecht,
The Netherlands.
Williams, E. (1981) Language Acquisition, Markedness and Phrase Structure. In
S. Tavakolian (ed.) Language Acquisition and Linguistic Theory, MIT Press,
Cambridge, MA.
Wu, A. (1992) Acquiring Word Order Without Word-Order Parameters? In
UCLA Working Papers in Psycholinguistics.
Wu, A. (1993a) A Minimalist Universal Parser. In UCLA Occasional Papers in
Linguistics, 11.
Wu, A. (1993b) The P-Parameter and the Acquisition of Word Order. Presented
at the Sixty-Seventh Annual Meeting of the Linguistic Society of America.
Wu, A. (1993c) Parsing DS, SS and LF Simultaneously. Presented at the Sixth
CUNY Conference on Human Sentence Processing.
Wu, A. (1993d) The S-Parameter. Presented at the Sixteenth GLOW Collo­
quium.
394

More Related Content

PDF
How to draw manga vol. 6
PDF
Advanced photogeology lecture notes kadir dirik
PDF
Real-Time Vowel Synthesis - A Magnetic Resonator Piano Based Project_by_Vasil...
DOC
1ºdebachillerat otenses
DOC
1ºbach review of tenses
PDF
English school-books-3rd-primary-2nd-term-khawagah-2019-5
PDF
Practicas basicas piano
PDF
CAREER DEVELOPMENT
How to draw manga vol. 6
Advanced photogeology lecture notes kadir dirik
Real-Time Vowel Synthesis - A Magnetic Resonator Piano Based Project_by_Vasil...
1ºdebachillerat otenses
1ºbach review of tenses
English school-books-3rd-primary-2nd-term-khawagah-2019-5
Practicas basicas piano
CAREER DEVELOPMENT

Viewers also liked (15)

PDF
[SLIDE FACTORY] [S19] Nguyễn Thu Thảo - Bài tốt nghiệp
PDF
SHR_Brochure
PPT
I need a fundraiser now!
PDF
Jupin/ Pratt Industries "Discover, Create & Act.."
PPTX
Dynamics Business Conference 2015: Thinking about CRM not sure where to start
PPT
Kalnina ecer 20150908
PPTX
Introducing & playing with Docker | Manel Martinez | 1st Docker Crete Meetup
PPTX
Beet root insects A Lecture By Allah Dad Khan Provincial Coordinator IPM MINF...
PDF
Kentico_General_brochure
PPTX
Presentation sg v77
PPTX
Promotional Packaging
PDF
Clef 2015 Keynote Grefenstette September 8, 2015, Toulouse
PDF
Postdocpaper Career Satisfaction of postdocs
PDF
OPTIMIZE YOUR BUSINESS WITH SUCCESSFUL ONLINE ADVERTISING & ADWORDS GENERATION
[SLIDE FACTORY] [S19] Nguyễn Thu Thảo - Bài tốt nghiệp
SHR_Brochure
I need a fundraiser now!
Jupin/ Pratt Industries "Discover, Create & Act.."
Dynamics Business Conference 2015: Thinking about CRM not sure where to start
Kalnina ecer 20150908
Introducing & playing with Docker | Manel Martinez | 1st Docker Crete Meetup
Beet root insects A Lecture By Allah Dad Khan Provincial Coordinator IPM MINF...
Kentico_General_brochure
Presentation sg v77
Promotional Packaging
Clef 2015 Keynote Grefenstette September 8, 2015, Toulouse
Postdocpaper Career Satisfaction of postdocs
OPTIMIZE YOUR BUSINESS WITH SUCCESSFUL ONLINE ADVERTISING & ADWORDS GENERATION
Ad

Similar to Dissertation (20)

PPTX
Artificial Intelligence Notes Unit 4
PPT
Morphology.ppt
PPTX
lecture 1 intro NLP_lecture 1 intro NLP.pptx
PDF
Adnan: Introduction to Natural Language Processing
PDF
Natural language Processing: Word Level Analysis
PPT
haenelt.ppt
PPTX
NLP_KASHK: Introduction
PPT
NLP Finite state machine needed.ppt
PDF
9780429149207_previewpdf.pdf
PDF
Introduction to Computational Linguistics
PPTX
NL5MorphologyAndFinteStateTransducersPart1.pptx
PDF
Ontology And The Lexicon A Natural Language Processing Perspective Churen Hua...
PPTX
Computational Linguistics - Finite State Automata
PDF
Domain Specific Terminology Extraction (ICICT 2006)
PDF
Syntactic Theory A Formal Introduction Second Edition Ivan A Sag Thomas Wasow...
PPTX
PDF
Compiler Construction | Lecture 3 | Syntactic Editor Services
PDF
Lexical
PDF
learn about text preprocessing nip using nltk
Artificial Intelligence Notes Unit 4
Morphology.ppt
lecture 1 intro NLP_lecture 1 intro NLP.pptx
Adnan: Introduction to Natural Language Processing
Natural language Processing: Word Level Analysis
haenelt.ppt
NLP_KASHK: Introduction
NLP Finite state machine needed.ppt
9780429149207_previewpdf.pdf
Introduction to Computational Linguistics
NL5MorphologyAndFinteStateTransducersPart1.pptx
Ontology And The Lexicon A Natural Language Processing Perspective Churen Hua...
Computational Linguistics - Finite State Automata
Domain Specific Terminology Extraction (ICICT 2006)
Syntactic Theory A Formal Introduction Second Edition Ivan A Sag Thomas Wasow...
Compiler Construction | Lecture 3 | Syntactic Editor Services
Lexical
learn about text preprocessing nip using nltk
Ad

More from Andi Wu (11)

PDF
Chinese Word Segmentation in MSR-NLP
PDF
Correction of Erroneous Characters
PDF
Statistically-Enhanced New Word Identification
PDF
Learning Verb-Noun Relations to Improve Parsing
PDF
Dynamic Lexical Acquisition in Chinese Sentence Analysis
PDF
Word Segmentation in Sentence Analysis
PDF
Customizable Segmentation of
PDF
BibleTech2010.ppt
PDF
BibleTech2011
PDF
BibleTech2013.pptx
PPTX
BibleTech2015
Chinese Word Segmentation in MSR-NLP
Correction of Erroneous Characters
Statistically-Enhanced New Word Identification
Learning Verb-Noun Relations to Improve Parsing
Dynamic Lexical Acquisition in Chinese Sentence Analysis
Word Segmentation in Sentence Analysis
Customizable Segmentation of
BibleTech2010.ppt
BibleTech2011
BibleTech2013.pptx
BibleTech2015

Dissertation

  • 1. INFORMATION TO USERS This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type ofcomputer printer. The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margin*, and improper alignment can adversely affect reproduction. In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion. Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuingfrom left to right in equal sectionswith small overlaps. Each original is also photographed in one exposure and is included in reduced form at the backofthe book. Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order. University Microfilms international A Bell &Howell Information Company 300 North Zeeb Road. Ann Arbor. Ml 48106-1346 USA 313/761-4700 800/521-0600
  • 2. Order Number 941891* The Spell-Out parameters: A M inimalist approach to syntax Wu, Andi, Ph.D. University of California, Los Angeles, 1994 U M I300 N. Zeeb Rd. Ann Arbor, MI 4? 106
  • 3. UNIVERSITY OF CALIFORNIA Lot Angeles The Spell-Out Parameters: A Minimalist Approach to Syntax A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Linguistics by Andi Wu 1994
  • 4. The dissertation of Andi Wu is approved. Terry K. A n D. Stott Parker yfdLsi/vt-dy Dominique Sportiche fedward P. Stabler, Jr., Committee Chair University of California, Los Angeles 1994 ii
  • 5. Contents 1 Introduction 1 2 The Spell-Out Parameters 8 2.1 The Notion of S pell-O ut......................................................................... 9 2.1.1 The Minimalist Program .............................................................. 9 2.1.2 The Timing of Spell-O ut.............................................................. 13 2.1.3 The S-Parameters........................................................................... 17 2.2 The S(M)-Parameter and Word Order . ............................................. 21 2.2.1 An Alternative Approach toWord O rd e r................................... 22 2.2.2 The Invariant X-Structure Hypothesis (IX S H )........................ 25 2.2.3 Modifying the IX S H ..................................................................... 33 2.3 Interaction of S(F)- and S(M)- Param eters.......................................... 41 2.4 S u m m a ry ...................................................................... 48 3 An Experimental Grammar 50 3.1 The Categorial and Feature S y ste m s.................................................... 55 3.1.1 Categories....................................................... 55 3.1.2 F eatures........................................................................................... 57 3.1.3 Features and C ategories.............................................................. 65 3.1.4 The Spell-Out of F eatu res........................................................... 67 3.2 The Computational S y s te m .................................................................... 71 3.2.1 Lexical Projection........................................................................... 71 3.2.2 Generalized Transform ation........................................................ 83 3.2.3 Move-a ........................................................................................... 91 3.3 S u m m a ry ......................................................................................................110 4 T he P aram eter Space 111 4.1 The Parameter Space of S(M )-Param eters.............................................115 4.1.1 An Initial Typology...........................................................................116 4.1.2 Further Differentiation.................................................................... 125 4.1.3 Some Set-Theoretic O bservations................................................. 129
  • 6. 4.2 Other Param eters......................................... 134 4.2.1 HD-Param eters................................................................................ 134 4.2.2 The Spell-Out of Functional H e a d s ............................................. 138 4.2.3 HD-Parameters and Functional Heads .......................................141 4.2.4 S(F)-parameters................................................................................ 144 4.3 Case Studies...................................................................................................148 4.3.1 English: An SVO L anguage.......................................................... 149 4.3.2 Japanese: An SOV language.......................................................... 156 4.3.3 Berber: A VSO L anguage............................................................. 159 4.3.4 German: A V2 Language.................................................................162 4.3.5 Chinese: A Head-Final SVO L anguage.......................................167 4.3.6 French: A Language with Clitics .................................................169 4.4 S u m m ary ......................................................................................................171 5 S etting th e P aram eters 172 5.1 Basic A ssum ptions...................................................................................... 173 5.1.1 Assumptions about the I n p u t ....................................................... 173 5.1.2 Assumptions about the L earn er....................................................176 5.1.3 Assumptions about the Criterion of Successful Learning . . . 177 5.2 Setting S(M)-Parameters.............................................................................180 5.2.1 The Ordering A lgorithm ................................................................ 180 5.2.2 Ordering and Learnability............................................................. 186 5.2.3 The Learning A lgorithm ................................................................ 194 5.2.4 Properties of the Learning A lgorithm .......................................... 196 5.2.5 Learning All Languages in the Parameter S p a c e .......................199 5.3 Setting Other P a ram e te rs..........................................................................205 5.3.1 Setting HD-Parameters....................................................................205 5.3.2 Setting S(F)-Param eters................................................................ 211 5.4 Acquiring Little Languages..........................................................................217 5.4.1 Acquiring Little E n g lish ................................................................ 218 5.4.2 Acquiring Little Ja p a n e se ............................................................. 219 5.4.3 Acquiring Little Berber....................................................................221 5.4.4 Acquiring Little C hinese................................................................223 5.4.5 Acquiring Little FYench....................................................... 225 5.4.6 Acquiring Little G erm an................................................................ 228 5.5 S u m m ary ......................................................................................................233 6 Parsing w ith S-P aram eters 235 6.1 Distinguishing Characteristics of the Parser . ................................... 
236 6.2 A Prolog Implementation............................................................................ 252 6.2.1 TVee-Building...................................................................................253 iv
  • 7. 6.2.2 Feature-Checking............................................................................255 6.2.3 Leaf-A ttachm ent............................................................................259 6.2.4 The Parser in A ction..................................................................... 261 6.2.5 Universal vs. Language-Particular P a rs e rs ............................... 274 6.3 S u m m ary ............... 276 7 Final Discussion 277 7.1 Possible Extensions.................................................................................... 277 7.2 Potential P roblem s.................................................................................... 284 7.3 Concluding R em ark s................................................................................. 292 A Prolog Program s 294 A.l pspace.pl........................................................................................................294 A.2 s e ts .p l.......................................................................................................... 297 A.3 o rd e r.p l.............. 299 A.4 s p .p l..............................................................................................................300 A.5 sp u til.p l....................................................................................................... 303 A.6 sp2.pl ...........................................................................................................304 A.7 parser.pl....................................................................................................... 310 A.8 p a rs e rl.p l.................................................................................................... 321 B P aram eter Spaces 325 B.l P-Space of S(M)-Parameters ( 1 ) .............................................................. 325 B.2 P-Space of S(M)-Parameters ( 2 ) .............................................................. 329 B.3 P-Space of S(M)-Parameters (with A d v ) ...............................................331 B.4 P-Space of S(M) & HD P aram eters........................................................334 B.5 P-Space of S(M)-Parameters ( with A u x )...............................................339 B.6 P-Space of S(M) it HD Parameters ( with Aux ) ................................. 342 B.7 P-Space of S(F)-Param eters.....................................................................346 C P artial O rdering of S(M )-Param eter Settings 349 D Learning Sessions 358 D.l Acquiring an Individual Language (1 ) .....................................................358 D.2 Acquiring an Individual Language (2 ) .....................................................360 D.3 Acquiring All Languages in the P-Space of S(M)-Parameters . . . . 362 D.4 Acquiring All Languages in the P-Space of S(M)-Parameters (with A d v )...............................................................................................................365 D.5 Parameter Setting with Noisy In p u t........................................................374 D.6 Setting S(M), S(F) and HD Parameters in a Particular Language . 378 References 383
  • 8. List of Figures The Derivational Process of M PLT.................................................................11 A Specific Interpretation of the Derivational Process.................................. 12 The Invariant X-bar Tree in Wu (1993)........................................................ 27 The Coverage of S-Parameters and HD-Parameters...................................38 An Elementary X-bar T ree..............................................................................71 Multiple Complements in a Binary T ree........................................................ 74 Feature Value Binding in A grl-P ....................................................................78 VP-Projection of a Transitive Verb............................................................... 80 GT Substitution................................................................................................85 GT Adjunction................................................................................................ 86 Base Tree of a Simple Transitive Sentence..................................................88 Base Tree of a Simple Intransitive Sentence ................................... 89 Feature-Checking Movements ..............................................................96 Head Movement as Substitution.................................................... ... 99 Head Movement as Adjunction........................................................................100 The Position of O ften............................................................... 127 Head-Initial and Head-Final IP s .................... ............................................. 135 Japanese Trees....................................................................................... 158 Berber Trees................. ...................................... ............................... 161 German Trees . ............................................................................................. 165 A Chinese Tree ............................................................................. 168 Parse Tree of an SOV Sentence........................................ 239 Parse Tree of a VSO Sentence......................................................................... 240 A Simple DP T ree.......................................................................................... 278 A Hypothetical PP T ree..................................................................................279 A Triple-Agr I P .................................................................................................. 282 A More Traditional T ree..................................................................................285 vi
  • 9. List of Tables The Parameter Space of Wu (1993)............................................................. 30 Selectional R ules................................................................................................ 74 Featurized Selectional Rules.............................................................................82 Decision Table for the Spell-Out of V and N P ..............................................245 Decision Table for the Spell-Out of L-Features.............................................247 Decision Table for the Spell-Out of F-Features .................................249 vii
  • 10. ACKNOWLEDGMENTS First and foremost, I want to thank Ed Stabler, my advisor and committee chair, whose intellectual insights, financial assistance and personal friendship has made the writing of this dissertation one of the most memorable events in my life. He taught me many things in computational linguistics virtually by hand and he enabled me to see the real motivations for syntactic research. I benefited a great deal from those hours spent in his office where his constructive criticism of my work helped me stay closer to the truth. This dissertation would not exist if he had not come to UCLA. I am also very grateful to all the other members of my committee. Dominique Sportiche has been my main source of inspiration in syntactic theory. Quick to detect the flaws in my model, he has never let me finish our meeting without giving me a new perspective on my ideas. Nina Hyams has been supportive of my work ever since my M.A. years. She introduced me into the fields of language acquisition and language processing and her constant guidance and encouragement have been essential to my graduate education. Ed Keenan has led me to see both the empirical and mathematical sides of languages. It was in his class that I began to view linguistics as a science. I want to thank Stott Parker from Computer Science and Terry Ou from psychology for listening patiently to my abstract stories and reminding me of the computational and psychological relevance of my hypotheses. I also owe intellectual debt to many other professors in our department. Among them Tim Stowell, Hilda Koopman and Anoop Mahajan deserve special credit. Their comments and suggestions on this work have been invaluable. During the course of this project I also benefited from discussions with many linguists on the east coast and across the Atlantic Ocean. Special thanks are
  • 11. due to Stephen Crain, Robert Berwick, Mark Johnson, Robert Frank and Edward Gibson who paid attention to my half-baked ideas and made a number of useful suggestions. The conversations I had with Aravind Joshi, Ken Wexler, Lyn Frazier, Amy Weinberg, Craig Thiersch, Luigi Burzio, Christer Platzack, Ian Roberts, Jean Pollock and David Pesetsky have also been very helpful. I did not have a chance to meet Richard Kayne, but the papers he sent me and his encouragement have made me more confident of my project. Another place I have been getting inspiration from is the Workshop on Theoret­ ical East-Asian Linguistics at UC Irvine. Those monthly discussions have helped me to know more about my own language. In particular, I want to thank James Huang who has never failed to respond to my call for help. I would also like to thank my fellow students at UCLA for their help and friendship. I owe a lot to Tom Cornell who “lured" me into computational lin­ guistics. In the computer lab I got much help from Karan Wallace, Johnathan Mead, Claire Chyi and Susan Hirsh. My academic and personal life at UCLA have also been made more meaningful by Yasushi Zenno, Kuoming Sung, Feng- shi Liu, Mats Johansson, Sue Inouye, Chris Golston, Cheryl Chan, Emily Sityar, Harold Crook, Bonnie Chiu, Bill Dolan, Stephan Schuetze-Coburn, Daniel Valois, Dan Silverman, Tetsuya Sano, Dorit Ben-Shalom, Akira Nakamura, Murat Kural, Luc Moritz, Jeannette Schaeffer, Seungho Nam, Hyunoo Lee, Jongho Jun, Filippo Beghelli, Bonny Sands, Abby Kaun, among many others. I feel very fortunate to have chosen the Department of Linguistics at UCLA as the home of my graduate study. In addition to the academic nourishment and financial support I received from this excellent institution, I especially appreciate the warm and human environment here which makes learning more of a pleasure.
In particular, I want to extend my special thanks to Anna Meyer, John Bulger, and the three successive Chairs: Paul Schachter, Russ Schuh and Tim Stowell, without whom my life could have been miserable.

I am also fortunate to have had a chance to work at Intelligent Text Processing, where I learned a lot more about parsing. I thank Kathy Dahlgren for making it possible for me to have a rewarding job and finish my dissertation at the same time.

The people who deserve my thanks most are those in my family: my wife Luoqin Hou, whose love, encouragement and hard work have been essential to my survival; my 5-year-old daughter Cathy Wu, who lets me work at the computer even though she does not think that this is the best way to spend my time; my parents, who encouraged me to pursue learning in a period when knowledge was despised in China; and my mother-in-law, who did everything possible to save my time. This dissertation is dedicated to them.
VITA

July 11, 1957    Born, Shanghai, China

1982             B.A. in English Language and Literature, Nanjing University, China

1985             M.A. in Language and Linguistics, Nanjing University, China

1985-1986        Lecturer, Nanjing University, China

1987-1991        Teaching Assistant, Dept. of Linguistics and Dept. of East Asian Languages and Cultures, University of California, Los Angeles

1989             M.A. in Linguistics, University of California, Los Angeles

1991-1993        Research Assistant, University of California, Los Angeles

1993-            Programmer, Intelligent Text Processing, Los Angeles
PUBLICATIONS AND PRESENTATIONS

Wu, A. (1986) The Stylistic Effects of Left- and Right-Branching Structures. In Journal of Nanjing University.

Wu, A. (1991) Center-Embedding and Parsing Constraints. Presented at the Colloquium of the UCLA Dept. of Linguistics.

Wu, A. (1992) A Computational Approach to ‘Intake’ Ordering in Syntactic Acquisition. In Proceedings of the Eleventh West Coast Conference on Formal Linguistics, Chicago University Press.

Wu, A. (1992) Why Both Top-Down and Bottom-Up: Evidence from Chinese. Presented at the 5th CUNY Conference on Human Sentence Processing.

Wu, A. (1992) Arguments for an Extended Head-Driven Parser. Presented at the UC Irvine Workshop on East Asian Linguistics.

Wu, A. (1992) Acquiring Word Order Without Word Order Parameters? In UCLA Working Papers in Psycholinguistics.

Wu, A. (1993) The P-Parameter and the Acquisition of Word Order. Presented at the 67th annual meeting of the Linguistic Society of America (LSA).

Wu, A. (1993) A Minimalist Universal Parser. In UCLA Occasional Papers in Linguistics, Vol. 11.

Wu, A. (1993) Parsing DS, SS and LF Simultaneously. Presented at the 6th CUNY Conference on Human Sentence Processing.

Wu, A. (1993) The S-Parameter. Presented at the 16th GLOW Colloquium.

Wu, A. (1993) Left-Corner Parsers and Head-Driven Parsers. In Linguistics Abroad, 56, journal of the Institute of Linguistics, Academy of Social Sciences of China.
ABSTRACT OF THE DISSERTATION

The Spell-Out Parameters:
A Minimalist Approach to Syntax

by

Andi Wu
Doctor of Philosophy in Linguistics
University of California, Los Angeles, 1994
Professor Edward P. Stabler, Jr., Chair

This thesis explores a new parametric syntactic model which is developed from the notion of Spell-Out in the Minimalist framework (Chomsky 1992). The main hypothesis is that languages are identical up to the point of Spell-Out: the sets of movements and morphological features are universal, but different languages can have different word orders and morphological paradigms depending on which movements or features are visible. We can thus account for a wide range of cross-linguistic variation by parameterizing the spell-out options.

The model proposed in this thesis has two sets of Spell-Out parameters. The S(M)-parameters determine which movements occur before Spell-Out in a given language. By varying the values of these parameters, we get a rich spectrum of
word order phenomena, including all basic word orders, V2, and a variety of scrambling. The S(F)-parameters determine which features are morphologically realized. Different value combinations of these parameters produce different morphological paradigms. The values of these two sets of parameters can also interact, resulting in the occurrence of auxiliaries, grammatical particles and expletives.

Computational experiments are conducted on a minimal version of this model in terms of language typology, language acquisition and language processing. It is found that the parameter space of this model can accommodate a wide range of cross-linguistic variation in word order and inflectional morphology. In addition, all the languages generated in this parameter space are found to be learnable via a linguistically motivated parameter-setting algorithm. This algorithm incorporates Chomsky’s (1992) principle of Procrastinate into the standard induction by enumeration learning paradigm. It observes the Subset Principle, and consequently every language can be exactly identified without the need for negative evidence.

Finally, this new parametric system leads to a new conception of parsing. The fact that the underlying set of movements is universal makes it possible to have parsers where the chain-building process is uniformly defined. An experimental parser is presented to illustrate this new possibility. This parser can parse every language in the parameter space by consulting the parameter settings. A language-particular parser for each language can be derived from this “universal parser” through partial evaluation.

All these experimental results, though preliminary in nature, indicate that the line of research suggested in this thesis is worth pursuing.
Chapter 1

Introduction

The goal of this thesis is to explore a new parametric system within the theoretical framework of Principles and Parameters (P&P theory hereafter) which is represented by Chomsky (1981a, 1981b, 1982, 1986a, 1986b, 1991, 1992) and many other works in generative syntax. A basic assumption of this theory is the following: Children are endowed at birth with a certain kind of grammatical knowledge called Universal Grammar (UG) which consists of a number of universal principles along with a number of parameters. Each of the parameters has a number of possible values and any possible natural language grammar results from a particular combination of those parameter values. In acquisition, a child’s task is to figure out the parameter setting of a given language on the basis of the sentences he hears in this language.

At the present stage, the P&P theory is still more of a research paradigm than a fully-developed model. No final agreement has been reached as to what the principles are and how many parameters are available. In this thesis, I will propose a set of parameters and investigate the consequences of this new parametric system
in terms of language typology, language acquisition and language processing.

The syntactic model to be explored in this thesis was inspired by some recent developments in syntactic theory exemplified by Chomsky (1992) and Kayne (1992, 1993). In A Minimalist Program for Linguistic Theory (Chomsky 1992), the notion of Spell-Out was introduced into the theory. Spell-Out is a syntactic operation which feeds a syntactic representation into the PF (Phonetic Form) component, where a sentence is pronounced. The ideal assumption is that languages have identical underlying structures and all surface differences are due to Spell-Out. In the model I propose, the notion of Spell-Out applies to both movements and features. When a movement is spelled out, we see overt movement. We see overt inflectional morphology when one or more grammatical features are spelled out. Different languages can have different word orders and different morphological paradigms if they can choose to spell out different subsets of movements or features.

Since so many cross-linguistic variations have come to be associated with Spell-Out, it is natural to assume that this syntactic operation is parameterized. We need a set of parameters which determine which movements are overt and which features are morphologically visible. The main objective of this thesis is to propose and test out such a set of parameters.

As the new parameters to be proposed in this thesis are all related to Spell-Out, we will call those parameters Spell-Out Parameters (S-parameters for short hereafter). There are two types of S-parameters: the S(M)-parameters which control the spell-out of movements and the S(F)-parameters which control the spell-out of features. We will see that the value combinations of these parameters can explain a wide range of cross-linguistic variation in word order and inflectional morphology. They also offer an interesting account for the distribution of certain functional
elements in languages, such as auxiliaries, expletives and grammatical particles.

In terms of language acquisition, this new parametric system also has some desirable learnability properties. As we will see, all the languages generated in this new parameter space are learnable via a linguistically motivated parameter-setting algorithm. This algorithm incorporates Chomsky's (1992) principle of Procrastinate into the standard induction by enumeration learning paradigm. It observes the Subset Principle, and consequently every language can be exactly identified without the need for negative evidence. Finally, this new parametric system leads to a new conception of parsing. The fact that the underlying set of movements is universal makes it possible to have parsers where the chain-building process is uniformly defined. An experimental parser is presented to illustrate this new possibility. This parser can parse every language in the parameter space by consulting the parameter settings. A language-particular parser for each language can be derived from this “universal parser” through partial evaluation.

All the experiments on this parametric syntactic model were performed by a computer. The syntactic system and the learning algorithm are implemented in Prolog. The parameter space was searched exhaustively using a Prolog program which finds every language that can be generated by our grammar. The learning algorithm has also been tested against every possible language in our parameter space. These computer experiments serve as additional support for my arguments.

It should be pointed out here that in this thesis the term language will often be used in a special sense to refer to an abstract set of strings. In order to concentrate on basic word order and basic inflectional morphology, and study those properties in a wide variety of languages, I will represent the “sentences” of a language in an abstract way which shows the word order and overt morphology of a sentence but
nothing else. An example of this is given in (1).

(1) s-[c1] o-[c2] v-[tns,asp]

The string in (1) represents a sentence in some SOV language. The lists attached to S, O and V represent features that are morphologically realized. The “words” we find in (1) are: (a) a subject NP which is inflected for case (c1); (b) an object NP which is inflected for a different case (c2); and (c) a verb which is inflected for tense and aspect. A language then consists of a set of such strings. Here is an example:

(2) { s-[c1] v-[tns,asp],
     s-[c1] o-[c2] v-[tns,asp],
     o-[c2] s-[c1] v-[tns,asp] }

What (2) represents is a verb-final language where the subject and object NPs can scramble. The NPs in this language are overtly marked for case and the verb in this language is inflected for tense and aspect. Such abstract string representation makes it possible to let the computer read strings from any “language”. In many situations we will be using the term “language” to refer to such a set of strings. The fact that we will be conducting computer experiments with these artificial languages does not mean that we will be detached from reality, however. Many real languages will also be discussed in connection with these simplified languages.
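Since the experiments read such strings with a Prolog program, it may be worth showing how directly this representation can be encoded. The following is a minimal sketch of my own; the predicate name sentence/2 and the label sov_scr are invented for illustration and are not the actual code of the experiments:

    % A "word" is a Category-Features pair and a "sentence" is a list
    % of words; the three clauses together encode the language in (2).
    sentence(sov_scr, [s-[c1], v-[tns,asp]]).
    sentence(sov_scr, [s-[c1], o-[c2], v-[tns,asp]]).
    sentence(sov_scr, [o-[c2], s-[c1], v-[tns,asp]]).

A query such as sentence(sov_scr, X) then enumerates exactly the strings of this “language”, which is all that the learner and the parser ever see of it.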
Although the representation in (2) is fairly abstract, it is not hard to see what natural language it may represent. As a matter of fact, most languages generated in our parameter space can correspond to some natural languages. The results of our experiments are therefore empirically meaningful. In the course of our discussion, we will often furnish real language examples to exemplify those abstract languages.

The rest of this thesis is organized as follows.

Chapter 2 examines the notion of Spell-Out and considers its implications for linguistic theory. After a brief description of the Minimalist model, I propose a syntactic model where cross-linguistic variations in word order and morphology are mainly determined by two sets of S-parameters: S(M)-parameters and S(F)-parameters. Arguments for this new model are given and the potential of the new system is illustrated with examples from natural languages.

Chapter 3 describes in detail the syntactic model to be used in the experiments. In order to implement the new system in Prolog and use computer search to find out all the consequences of this system, we need a fully specified grammar. At the present stage, however, not every detail of the Minimalist model has been worked out. For this reason, I will define a partial grammar which includes only those aspects of syntax which are directly relevant to our discussion. The partial grammar includes a categorial system, a feature system, a parameter system, and a computational system whose basic operations are Lexical Projection, Generalized Transformation and Move-α. The grammar will be specific enough for computer implementation and rich enough for the generation of various word orders and morphological paradigms.

In Chapter 4, we consider the consequences of our experimental grammar in terms of the language typology it predicts. It is found that our new parameter space is capable of accommodating a wide range of linguistic phenomena. In terms
of word order, we are able to derive all basic word orders (SVO, SOV, VSO, VOS, OSV, OVS and V2) as well as many kinds of scrambling. In terms of inflectional morphology, we can get a variety of inflectional paradigms. There are also parameter settings which account for the occurrence of auxiliaries, grammatical particles, expletives and clitics. Many value combinations in the parameter space will be illustrated by examples from natural languages.

The topic of Chapter 5 is learnability. We consider the question of whether all the “languages” in our parameter space can be learned by setting parameters. It is discovered that each of those languages can be correctly identified in Gold's (1967) induction by enumeration paradigm if the hypothetical settings are enumerated in a certain order. It turns out that this ordering of hypotheses can be derived from some general linguistic principle, namely the principle of Procrastinate. Our experiments show that, with this linguistically motivated ordering algorithm, the learner can converge on any particular grammar in an incremental fashion without the need for negative evidence or input ordering.

In Chapter 6, we discuss the implications of our parametric syntactic model for language processing. We will see that this new model can result in a parser which is more universal in nature. The uniform treatment of movement in this model makes it possible for one of the parsing processes - chain-building - to be defined universally. The new system also facilitates the handling of empty categories. Whether any given terminal node must dominate lexical material or not can be uniquely determined by the parameter values. The new possibilities are illustrated with a sample parser. This parser is capable of processing any language in the parameter space according to the parameter values. Any language-particular parser can be obtained by partially executing the universal parser with a particular
parameter setting.

Chapter 7 concludes the thesis by considering possible extensions and potential problems of the present model. One extension to be discussed is how our approach can be applied to the word order variation within PP/DP/NP. It seems that we can account for the internal structures of these phrases using a similar approach. The main potential problem to be considered is the dependency of our model on certain syntactic assumptions. We realize that the particular model we have implemented does rely on some theoretical assumptions which are yet to be proved. However, the general approach we are taking here can remain valid no matter how the specific assumptions change. The model can be updated as the research in linguistic theory advances.
Chapter 2

The Spell-Out Parameters

In this chapter, we examine the notion of Spell-Out and consider its implications for cross-linguistic variations in word order and inflectional morphology. We will see that a considerable amount of word order variation can be explained in terms of the Spell-Out of movements, while the Spell-Out of features can account for much morphological variation. Two sets of Spell-Out parameters are proposed: the S(M)-parameters which determine the Spell-Out of movements and the S(F)-parameters which are responsible for the Spell-Out of features. We shall see that the parameter space created by these two sets of parameters can cover a wide range of linguistic phenomena.

This chapter will only present a very general picture of how things might work in this new model. The full account is given in Chapter 3 and Chapter 4. In the brief sketch that follows, we will start by looking at the notion of Spell-Out in Chomsky (1992). This notion will then be applied first to movement and then to inflectional morphology. Finally, we will have a quick glance at the possible interactions between the Spell-Out of movements and the Spell-Out of features.
2.1 The Notion of Spell-Out

2.1.1 The Minimalist Program

Spell-Out as a technical term is formally introduced in Chomsky's (1992) Minimalist Program for Linguistic Theory (MPLT hereafter), though the notion it represents has been around for some time. The most salient feature of the Minimalist framework¹ is the elimination of D-structure and S-structure. The levels of representation are reduced to nothing but the two interfaces: Phonetic Form (PF), which interacts with the articulatory-perceptual system, and Logical Form (LF), which interacts with the conceptual-intentional system. Consequently, grammatical constraints have come to be associated with these two interface levels only. Most of the well-formedness conditions that used to apply at D-structure (DS) and S-structure (SS) have shifted their domain of application to either PF or LF.

In this new model, structural descriptions (SDs) are generated from the lexicon and the SDs undergo syntactic derivation until they become legitimate objects at both PF and LF. Given an SD which consists of the pair (π, λ),² “... a derivation D converges if it yields a legitimate SD; otherwise it crashes; D converges at PF if π is legitimate and crashes at PF if it is not; D converges at LF if λ is legitimate and crashes at LF if it is not” (MPLT, p7). The legitimacy of PF and LF representations will be discussed later.

The derivation is carried out in the computational system which consists of three distinct operations: lexical projection (LP),³ generalized transformation (GT), and

¹Throughout this thesis I will try to make a distinction between MPLT and the Minimalist framework. The former refers to the specific model described in MPLT while the latter refers to the general approach to syntax initiated by MPLT.
²π stands for PF and λ stands for LF.
³This is not a term used in MPLT, but the operation denoted by this term is obviously assumed in the paper.
move-α. LP “selects an item X from the lexicon and projects it to an X-bar structure of one of the forms in (3), where X = X⁰ = [X X].” (MPLT, p30).

(3) (i) X    (ii) [X′ X]    (iii) [X″ [X′ X]]

The generation of a sentence typically involves the projection of a set of such elementary phrase-markers (P-markers) which serve as the input to GT.

GT reduces the set of phrase-markers generated by LP to a single P-marker. The operation proceeds in a binary fashion: it “takes a phrase-marker K¹ and inserts it in a designated empty position ∅ in a phrase-marker K, forming the new phrase-marker K*, which satisfies X-bar theory” (MPLT, p30). In other words, GT takes two trees K and K¹, “targets” K by adding ∅ to K, and then substitutes K¹ for ∅. The P-markers generated by LP are combined pair-wise in this fashion until no more reduction is possible.

Move-α is required by the satisfaction of LF constraints. Some constituents in the sentence must be licensed or checked in more than one structural position and the only way to achieve this kind of multiple checking is through movement. Unlike GT which operates on pairs of trees, mapping (K, K¹) to K*, move-α operates on a single tree, mapping K to K*. It “targets K, adds ∅, and substitutes α for ∅, where α in this case is a phrase within the targeted phrase-marker K itself. We assume further that the operation leaves behind a trace t of α and forms the chain (α, t).” (MPLT, p31).
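Because these operations will eventually be implemented in Prolog (see Chapter 3), it is worth noting how naturally GT can be stated as a relation between trees. The sketch below is my own illustration, not MPLT's formalism or the dissertation's actual code; node/2 and the atom hole (standing for the designated empty position ∅) are invented notation:

    % gt(K, K1, KStar): KStar is K with one empty position filled by K1.
    gt(hole, K1, K1).
    gt(node(Label, Children), K1, node(Label, NewChildren)) :-
        replace_one(Children, Child, NewChild, NewChildren),
        gt(Child, K1, NewChild).

    % replace_one(Cs, C, N, Ds): Ds is Cs with one element C replaced by N.
    replace_one([C|Cs], C, N, [N|Cs]).
    replace_one([C|Cs], D, N, [C|Ds]) :- replace_one(Cs, D, N, Ds).

Move-α can be given the same shape, except that the substituted phrase is found within the targeted phrase-marker itself and a trace is left behind.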
There is an additional operation called Spell-Out in the computational system. This operation feeds an SD into the PF component. The derivation of a sentence can consist of a number of intermediate SDs but only one of them is actually pronounced or heard.⁴ The function of Spell-Out is to select such an SD. It takes a “snapshot” of the derivational process, so to speak. According to Chomsky, Spell-Out can occur at any point in the course of derivation. Given a sequence of SDs in the derivation, ⟨SD1, SD2, ..., SDn⟩, each SDi representing a derivational step, the system can in principle choose to spell out any SDi, 1 ≤ i ≤ n. This notion of Spell-Out is illustrated in (4), where the curly bracket is meant to indicate that Spell-Out can occur anywhere along the derivational procedure.

(4)   Lexicon
         |   LEXICAL PROJECTION   }
         |   GT OPERATION         }---SPELL-OUT---> PF
         |   MOVE-ALPHA           }
         v
         LF

      The Derivational Process of MPLT

However, not every SD that we choose to spell out is acceptable to the PF component. Only those SDs which are legitimate objects at PF can be pronounced. In other words, the SD being fed into PF must at least satisfy the PF requirements. Once these requirements are met, an SD can be spelled out regardless of how many LF constraints have been satisfied.

⁴The derivation may proceed in more than one way. In that case, we can have different intermediate SDs depending on which particular derivational procedure is being followed.
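Viewed this way, Spell-Out is simply a nondeterministic choice from the derivation. A minimal Prolog rendering of the idea (my own sketch, not the actual implementation; pf_legitimate/1 abstracts the PF requirements discussed next) is:

    % spell_out(Derivation, SD): SD is any step of the derivation that
    % the PF component will accept.
    spell_out(Derivation, SD) :-
        member(SD, Derivation),
        pf_legitimate(SD).

    pf_legitimate(_).  % placeholder: the real PF conditions follow below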
One of the PF constraints proposed in MPLT requires that the input to PF be a single P-marker. If the representation being spelled out “is not a single phrase marker, the derivation crashes at PF, since PF rules cannot apply to a set of phrase markers and no legitimate PF representation π is generated.” (MPLT, p30). In other words, a given P-marker cannot be spelled out until all its subtrees have been projected and reduced to a single tree.⁵ In normal cases, therefore, the spell-out of any constituent must occur after the completion of LP and GT within this given constituent.⁶ This leads to the conclusion that Spell-Out can only apply in the process of move-α. So (5) is a more specific description of the derivational process.

(5)   Lexicon
         |   LEXICAL PROJECTION
      Elementary Phrase Markers
         |   GT OPERATION
      Single Phrase Marker
         |   MOVE-ALPHA   ---SPELL-OUT---> PF
         v
         LF

      A Specific Interpretation of the Derivational Process

This diagram may seem to suggest a sequencing of the computational operations,

⁵One possible reason why PF can take only one tree at a time might be the following: For a sentence to converge, it must be assigned a proper intonation. What intonation to assign depends on the tree structure of the sentence. Apparently, there is no way of assigning a single intonation to two unconnected trees.
⁶We might get sentence fragments or a broken sentence if Spell-Out occurs before the completion of GT in a CP.
with LP preceding GT, which in turn precedes move-α. No such strict sequencing is implied here, however. The picture is intended to be a logical description of linguistic theory rather than a flow chart for procedural computation. The ordering is relative in nature. In actual language production and language comprehension, these computational operations can be co-routined. For instance, GT operations may be interleaved with movement operations, as long as the starting point and the landing site of a given movement are in a single tree before the movement applies. The crucial point this diagram is meant to convey is that only single trees can be accepted by PF, with the logical consequence that the spell-out of any particular constituent can only occur after the GT operation is complete within this given constituent.

2.1.2 The Timing of Spell-Out

Now let us take a closer look at Spell-Out which, as we have argued, normally occurs in the process of move-α, where this operation is free to apply at any time. What is the consequence of this freedom? Before answering this question, we had better find out exactly what happens in move-α. In the pre-Minimalist P&P model, some movements are forced by S-structure requirements and some by LF requirements. The ones that are forced by SS constraints must take place in overt syntax. In our current terminology, we can say that these movements must occur before Spell-Out. The movements forced by LF requirements, however, can be either overt or covert. A typical example is wh-movement, which is forced by the scope requirement on wh-phrases. It has been generally accepted since Huang (1982) that the scope requirement is satisfied at SS in languages like English and at LF in languages like Chinese. This is why wh-movement is overt in English
but covert in Chinese. Now that S-structure is gone, all movements are forced by LF requirements. Consequently, every movement has become an LF movement which, like wh-movement, can be either overt or covert. The Case Filter (Chomsky & Lasnik 1977, Chomsky 1981b, Vergnaud 1982, etc.) and the Stray Morpheme Filter (Lasnik’s Filter) (Lasnik 1981),⁷ for instance, are no longer requirements on overt syntax only. They may apply either before Spell-Out or after Spell-Out, as long as they do get satisfied by LF. Consequently, the movements motivated by these filters can be either visible or invisible.

It should be mentioned here that all LF requirements in the Minimalist framework are checking requirements. An SD is a legitimate object at LF only if all its features have been checked. In cases where the checking involves two different structural positions, movement is required to occur. In fact, movement takes place for no reason other than feature-checking in this model. The visibility of a movement depends on the timing of feature-checking. It is visible if the relevant feature is checked before Spell-Out and invisible if it is checked after Spell-Out.

Now the question is why some features are checked before Spell-Out. According to Chomsky’s principle of Procrastinate (MPLT, p43), which requires that overt movement be avoided as much as possible, the optimal situation should be the one where every movement is covert. There must be some other requirements that force a movement to occur before Spell-Out. In MPLT, overt movement is forced by a PF constraint which requires that “strong” features be checked before Spell-Out. “... ‘strong’ features are visible at PF and ‘weak’ features invisible at PF. These features (i.e. those features that are visible⁸) are not legitimate objects at

⁷This filter requires that morphemes designated as affixes be “supported” by lexical material at PF. It is the primary motivation for V-to-I raising or do-support.
⁸Comment added by Andi Wu.
PF; they are not proper components of phonetic matrices. Therefore, if a strong feature remains after Spell-Out, the derivation crashes.” (MPLT, p43) To prevent a strong feature from being visible at PF, the checking of this feature must be done before Spell-Out. Once a feature is checked, it disappears and no PF constraint will be violated. Chomsky cites French and English to illustrate this: “the V-features of AGR are strong in French, weak in English. ... In French, overt raising is a prerequisite for convergence; in English, it is not.” (MPLT, p43) The combined effect of this “Strong Feature Filter” and the principle of Procrastinate is a precise condition for overt movement: a movement occurs before Spell-Out if and only if the feature it checks is strong. This account is very attractive but it has its problems, as we will see later when we come to an alternative account in 2.1.3.

The timing of feature-checking and consequently the timing of movement are obviously relevant to word order. This is clearly illustrated by wh-movement, which checks the scope feature. This movement is before Spell-Out in English and after Spell-Out in Chinese. As a result, wh-phrases are sentence-initial in English but remain in situ in Chinese. Now that every movement has the option of being either overt or covert, the amount of word order variation that can be attributed to movement is much greater. As we will see in 2.2.2, given current syntactic assumptions which incorporate the VP-Internal Subject Hypothesis⁹ (Koopman and Sportiche 1985, 1988, 1990, Kitagawa 1986, Kuroda 1988, Sportiche 1990, etc.) and the Split-Infl Hypothesis¹⁰ (Pollock 1989, Belletti 1990, Chomsky 1991,

⁹This hypothesis assumes that every argument of a VP (including the subject) is generated VP-internally.
¹⁰This hypothesis assumes a more articulated Infl structure where different functional elements such as Tense and Agreement count as different categories and head their own projections.
etc.), it is possible to derive all the basic word orders (including SVO, SOV, VSO, V2, VOS, OSV and OVS) just from movement. This suggests that movement can have a much more important role to play in cross-linguistic word order variation than we have previously assumed. We may even begin to wonder whether all the variation in word order can be accounted for in terms of movement. If so, no variation in the X-bar component will be necessary. This idea has in fact been proposed in Kayne (1992, 1993) and implemented in a specific model by Wu (1992, 1993a, 1993b, 1993c, 1993d). We will come back to this in 2.2.2.

There is another assumption in the Minimalist theory which has made the prospect of deriving word order variations from movement a more realistic one. This is the assumption that all lexical items come from the lexicon fully inflected. In pre-Minimalist models, a lexical root and its affixes are generated separately in different positions. To pick up the inflections, the lexical root must move to the functional category where the inflectional features reside. For instance, a verb must move to Infl to get its tense morphology and a subject NP must move to the Spec of IP to be assigned its case morphology. Without movement, verbs and nouns will remain uninflected. This assumption that lexical roots depend on movement for their inflectional morphology runs into difficulty whenever we find a case where the verb or noun is inflected but no movement seems to have taken place. It has been generally accepted that the English verb does not move to the position where agreement morphology is supposed to be located (Chomsky 1957, Emonds 1976, Pollock 1989, among many others). To account for the fact that verbs are inflected for subject-verb agreement in English, we have to say that, instead of the verb moving up, the affixes are lowered onto the verb. In the Minimalist theory, however, lowering is prohibited. The requirement that each move-α operation must
extend the target has the effect of restricting movement to raising only. In fact, this requirement of target extension can be viewed as a notational variant of the no-lowering requirement. At first sight, this seems to put us in a dilemma: lowering is not permitted, but without lowering the affixes will get stranded in many cases. But this problem does not exist in the Minimalist model. In this model, words come from the lexicon fully inflected. Verbs and nouns “are drawn from the lexicon with all of their morphological features, including Case and φ-features” (MPLT, p41). They no longer have to move in order to pick up the morphology. Therefore, whether they carry certain overt morphological features has nothing to do with movement. Movement is still necessary, but the purpose of movement has changed from feature-assignment to feature-checking. The morphological features which come with nouns and verbs must be checked in the appropriate positions. For instance, a verb must move to T(ense) to have its tense morphology checked and a noun must move to the Spec of some agreement phrase to have its case and agreement morphology checked. These checking requirements are all LF requirements. Therefore, the movements involved in the checking can take place either before or after Spell-Out. The cases where lowering was required are exactly those where the checking takes place after Spell-Out.
2.1.3 The S-Parameters

We have seen that the timing of Spell-Out can vary and the variation can have consequences in word order. We have mentioned Chomsky's account of this variation: a movement occurs before Spell-Out just in case the feature it checks is “strong”. Now, what is the distinction between strong and weak features? According to Chomsky, this distinction is morphologically based. He does not have a precise definition of this distinction in MPLT, but the idea he wants to suggest is clear: a feature is strong if it is realized in overt morphology and weak otherwise.¹¹

Let us assume that there is an underlying set of features which are found in every language. A given feature is realized in overt morphology when this feature is spelled out. Then the PF constraint in Chomsky’s system simply says that a feature must be checked before it is spelled out. Given the principle of Procrastinate, a movement will occur before Spell-Out just in case the morphological feature(s) to be checked by this movement is overt. This bijection between overt movement and overt morphology is conceptually very appealing. If this is true, syntactic acquisition will be easier, since in that case overt morphology and overt movement will be mutually predictable. The morphological knowledge children have acquired can help them acquire the syntax while their syntactic knowledge can also aid their acquisition of morphology. We will indeed have a much better theory if this relationship actually exists. Unfortunately, this bijection does not seem to hold in every language.¹²

Since Pollock (1989), where this linkage between movement and morphology was seriously proposed, many people have challenged this correlation. Counter-examples to this claim come in two varieties. On the one hand, there are languages where we find overt movement but not overt morphology. Chinese may be such a language. As has been observed in Cheng (1991) and Chiu (1992), the subject NP in Chinese moves out of the VP-shell to a higher position in overt

¹¹Chomsky uses the term “rich morphology” instead of “overt morphology”. The agreement morphology in French, for example, is supposed to be richer than that in English. In this way, French and English can be different from each other even though both have overt agreement morphology. Unfortunately, the concept of “richness” remains a fuzzy one. Chomsky does not specify how the rich/poor differentiation is to be computed.
¹²We can treat this correlation between syntax and morphology as a higher idealization of the linguistic system. But then we must be able to tolerate the notion that some existing languages have deviated from the ideal grammar.
syntax. But there is no overt morphological motivation for this movement, for this NP carries no inflectional morphology at all. Other examples are found in the Kru languages (Koopman 1984), where the verb can move to Agr just as it does in French in spite of the fact that there is no subject-verb agreement in these languages. On the other hand, there exist languages where we find overt morphology but not overt movement. The most frequently cited example is English. In view of the fact that agreement features are spelled out in English, the verbs in English are expected to move as high as those in French. This is not the case, as is well known. The agreement in English is of course “poor”, but even in Italian, which is a language with very rich subject-verb agreement, it is still controversial whether the verb always moves to AgrS (cf. Rizzi (1982), Hyams (1986), Belletti (1990), etc.). There are other examples of rich agreement without overt movement. Schaufele (1991) argues that Vedic Sanskrit is a language of this kind. If we insist on the “iff” relationship between overt movement and overt morphology, we will face two kinds of difficulties. In cases of overt movement without overt morphology, the principle of Procrastinate is violated. We find movements that occur before Spell-Out for no reason. In cases of overt morphology without overt movement, the PF constraint will be violated which requires that overt features be checked before Spell-Out. We cannot rule out the possibility that, under some different analyses, all the examples cited above may cease to be counter-examples. However, we will hesitate to base our whole model upon this assumed linkage between syntax and morphology until we have seen more evidence for this hypothesis.

There is an additional problem with this morphology-based explanation for overt/covert movement. Apparently, not all movements have a morphological motivation. V-movement to C and XP-movement to Spec of CP, for example, are not
very likely to be morphologically related. They are certainly related to feature-checking, but these features are seldom morphologically realized.¹³ Why such extremely “weak” features should force overt movement in many languages is a puzzle.

Since the correspondence between overt morphology and overt movement is not perfect, I will not rely on the strong/weak distinction for an explanation for the timing of feature-checking. Instead of regarding overt morphology and overt movement as two sides of the same coin, let us assume for the time being that these two phenomena are independent of each other. In other words, whether a feature is spelled out and whether the feature-checking movement is overt will be treated as two separate issues. We will further assume that both the spell-out of features and the spell-out of the feature-checking movements can vary arbitrarily across languages. I therefore propose that two types of Spell-Out Parameters (S-Parameters) be hypothesized. One type of S-parameters determines whether a given feature is spelled out. The other type determines whether a given feature-checking movement occurs before Spell-Out. Let us call the first type of parameters S(F)-parameters (F standing for “feature”) and the second type S(M)-parameters (M standing for “movement”). Both types of parameters are binary with two possible values: 1 and 0. When an S(F)-parameter is set to 1, the feature it is associated with will be morphologically visible. It is invisible when its S(F)-parameter is set to 0. The S(M)-parameter affects the visibility of movement. When an S(M)-parameter is set to 1, the relevant movement will be overt. The movement will be covert if its S(M)-parameter is set to 0.

¹³We do not exclude the possibility that these features can be realized in some special visible forms, such as intonation and stress.
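Treating the two kinds of parameters as independent is straightforward to state. The following sketch is my own notation rather than the actual implementation; the predicate and movement names are invented, and the particular values merely echo the examples discussed above:

    % s_f(Language, Feature, Value): 1 = feature spelled out.
    % s_m(Language, Movement, Value): 1 = movement before Spell-Out.
    s_f(english, agr, 1).           % overt subject-verb agreement ...
    s_m(english, v_to_agr, 0).      % ... without overt verb raising
    s_f(chinese, case, 0).          % no case morphology on the subject NP ...
    s_m(chinese, subj_raising, 1).  % ... yet the NP raises overtly

Because the two predicates are separate, any combination of values can be asserted, which is exactly the independence argued for above.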
The S(F)-parameters determine, at least in part, the morphological paradigm of a language. A language has overt agreement just in case the S(F)-parameter for agreement features is set to 1, and it has an overt case system just in case the S(F)-parameter for the case features is set to 1. Given a sufficiently rich set of features, the value combinations of S(F)-parameters can generate most of the inflectional systems we find in natural languages. All this is conceptually very simple and no further explanation is needed. The exact correspondences between S(F)-parameters and morphological paradigms will be discussed in Chapter 3 and Chapter 4. The S(M)-parameters, on the other hand, determine (at least partially) the word order of a language. How this works is not so obvious. So we will devote the next section (2.2) to the discussion of this question. In 2.3 we will consider the interaction between S(F)-parameters and S(M)-parameters.

2.2 The S(M)-Parameter and Word Order

In this section, we consider the question of how word order variation can be explained in terms of the parameterization of movements. We will first look at the traditional approach, where word order is determined by the values of head-direction parameters¹⁴ (hereafter HD-parameters for short), and then examine an alternative approach where S(M)-parameter values are the determinants of word order. The two approaches will be compared and a decision will be made as to what kind of parameterization we will adopt as a working hypothesis.

¹⁴Various names have been given to this parameter in the literature. The one adopted here is from Atkinson (1992). Other names include X-parameters and head parameters.
2.2.1 An Alternative Approach to Word Order

Traditionally, word order has been regarded mainly as a property of phrase structure. It is often assumed that different languages can generate different word orders because their phrase structure rules can be different. In the Principles and Parameters theory, cross-linguistic variations in basic word order are usually explained in X-bar-theoretic terms (cf. Jackendoff (1977), Stowell (1981), Koopman (1984), Hoekstra (1984), Travis (1984), Chomsky (1986), Nyberg (1987), Gibson and Wexler (1993), etc.). The basic phrase structure rules of this theory are all of the following forms:

(6) XP ⇒ { X′, (specifier) }
    X′ ⇒ { X, (complement) }

The use of curly brackets indicates that the constituents on the right-hand side are unspecified for linear order. Which constituent precedes the other in a particular language depends on the values of HD-parameters. There are two types of HD-parameters: the specifier-head parameter, which determines whether the specifier precedes or follows X′, and the complement-head parameter, which determines whether the complement precedes or follows X. The values of these parameters are language-particular and category-particular. When acquiring a language, a child’s task is to set these parameters for each category.

It is true that the parameter space of HD-parameters can accommodate a fairly wide range of word order variation. The parameterization can successfully explain the word order differences between English and Japanese, for instance. However, there are many word order facts which fall outside this parameter space. The most obvious example is the VSO order. If we assume that the direct object is a verbal
complement and therefore must be base-generated adjacent to the verb, we will not be able to get this common word order no matter how the HD-parameters are set. The same is true of the OSV order. A more general problem is scrambling. It has long been recognized that this word order phenomenon cannot be accounted for in terms of HD-parameters alone. All this suggests that the HD-parameters are at least insufficient, if not incorrect, for the explanation of word order. To account for the complete range of word order variation, we need some additional or alternative parameters.

The observation that not all word order facts can be explained in terms of phrase structure is by no means new. Ever since Chomsky (1955, 1957), linguists have found it necessary to account for word order variation in terms of movement in addition to phrase structure. In fact, this is one of the main motivations that triggered the birth of transformational grammars. All the problems with HD-parameters mentioned above can disappear once movement is accepted as an additional or alternative source of word order variation. The VSO order can be derived, for example, if we assume that, while the verb and the object are adjacent at D-structure, the verb has moved to a higher position at S-structure (Emonds 1980, Koopman 1984, Sproat 1985, Koopman and Sportiche 1988, 1990, Sportiche 1990, among others). Scrambling can also receive an elegant account in terms of movement. As Mahajan (1990) has shown, many scrambled word orders (at least those in Hindi) can be derived from A- and A′-movements. As a matter of fact, very few people will challenge the assumption that movement is at least partially responsible for cross-linguistic differences in word order. However, there has not been any model where the movement options are systematically parameterized. The notion that the visibility of movements can be parameterized has been
around for quite some time. It is very clearly stated, for example, in Huang (1982). But it has not been pursued as a main explanation for cross-linguistic word order variation until recently, when Kayne (1992, 1993) proposed the antisymmetry of syntactic structures. One reason for the lack of exploration in this area is probably the pre-Minimalist view of movements. In the standard GB theory, most movements are motivated by S-structure requirements which are often universal. As a result, many movements do not have the option of being either overt or covert. For instance, the A-movement forced by the Case Filter and the head movement forced by the Stray Morpheme Filter are always required to be overt. There are very few choices. The parameterization of movement, even if it were implemented, would not be rich enough to account for a sufficiently wide range of word order phenomena. Things are different in the Minimalist framework we have adopted, as we have seen in the previous section. In this model, every movement has the option of being either overt or covert. We have proposed that an S(M)-parameter be associated with each of the movements and let the value of this parameter determine whether the given movement is to occur before Spell-Out (overt) or after Spell-Out (covert). The word order of a particular language then depends at least partially on the values of S(M)-parameters.

Now that we can account for word order variation in terms of S(M)-parameters, we may want to reconsider the status of HD-parameters. Since HD-parameters by themselves are insufficient for explaining all word order phenomena, there are two possibilities to consider:

(7) (i) S(M)-parameters can account for all the word order facts, including those covered by HD-parameters. In this case, HD-parameters can be replaced
by S(M)-parameters.

(ii) S(M)-parameters cannot account for all the word order facts that HD-parameters are able to explain. In this case we will need both types of parameters.

To choose between these two possibilities, we have to know whether all the word orders that are derivable from the values of HD-parameters can be derived from the values of S(M)-parameters as well. To find out the answer to this question, we must first of all get a better understanding of the parameter space created by S(M)-parameters. We will therefore devote the next section to the exploration of this new parameter space.

2.2.2 The Invariant X-Structure Hypothesis (IXSH)

If S(M)-parameters can replace HD-parameters to become the only source of word order differences, variations in phrase structure can be assumed to be non-existent. We will be able to envision a model where X-bar structures are invariant and all word order variations are derived from movement. Let us call this the invariant X-structure hypothesis (IXSH). This hypothesis can be traced back to the universal base hypothesis of Wexler and Hamburger (1973): “A strong interpretation of one version of linguistic theory (Chomsky 1965) is that there is a single universal context-free base, and every natural language is defined by a transformational grammar on that base” (p173). A stronger version of this universal base is recently put forward in Kayne (1992, 1993). He argues that X-bar structures are antisymmetrical. In the model he proposes, the specifier invariably precedes the head and the complement invariably follows the head. The structure is right-branching in every language and linear order corresponds to asymmetric C-command relations.
According to this hypothesis, there is a single set of X-bar trees which are found in all languages. All variations in word order are results of movement. HD-parameters are thus unnecessary.

Kayne's proposal has been explored in terms of parameterization in Wu (1992, 1993), where the new approach is tried out in the Minimalist framework. In Wu (1993) I experimented with the IXSH in a restricted model of syntax and showed that a surprising amount of variation in word order can be derived from a single X-bar tree. Since the results of this experiment will give us a more concrete idea as to what S(M)-parameters can do and cannot do, we will take a closer look at this model.
The invariant X-bar tree I assumed in the model for a simple transitive sentence is given in (8).¹⁵

(8)  [CP SPEC [C′ C
        [AgrSP SPEC [AgrS′ AgrS
           [TP T
              [AgrOP SPEC [AgrO′ AgrO
                 [VP Subject [V′ Verb [NP Object ]]]]]]]]]]

     The Invariant X-bar Tree in Wu (1993)

¹⁵The tree for an intransitive sentence is identical to (8) except that the verb will not have an internal argument. It is assumed that AgrO exists even in an intransitive sentence, though it may not be active.
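For the computer experiments, a tree like (8) can be written down once and for all as a nested Prolog term. The encoding below is only an illustrative sketch of mine (the functor names are invented and are not the data structure actually used in Chapter 3):

    % The invariant structure of (8); the slots for the subject, verb and
    % object are anonymous variables, to be instantiated by LP and move-α.
    invariant_tree(cp(spec, c,
        agrsp(spec, agrs,
            tp(t,
                agrop(spec, agro,
                    vp(_Subject, vbar(_Verb, _Object))))))).

Every language in the model shares this single term; in effect, the S(M)-parameters only decide at which position of each chain the moved material is pronounced.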
The set of LF requirements which force movements in this model are listed in (9).

(9) (A) The verb must move to AgrO⁰ to have its φ-features checked for object-verb agreement.

(B) The verb must move to T⁰ to have its tense/aspect features checked.

(C) The verb must move to AgrS⁰ to have its φ-features checked for subject-verb agreement.

(D) The verb must move to C⁰ to have its predication feature checked.

(E) The subject NP must move to Spec-of-AgrSP to have its case and φ features checked.

(F) The object NP must move to Spec-of-AgrOP to have its case and φ features checked.

(G) The XP which has scope over the whole sentence or serves as the topic/focus of the sentence must move to Spec-of-CP to have its operator feature checked.

Each of the seven movements listed above, referred to as A, B, C, D, E, F and G, is supposed to be associated with an S-parameter whose value determines whether the movement in question is to be applied before or after Spell-Out. The seven S-parameters are referred to as S(A), S(B), S(C), S(D), S(E), S(F), and S(G). All the parameters are binary (1 or 0) except S(G), which has three values: 1, 0 and 1/0. The last value is a variable which can be either 1 or 0. The overtness of movement is optional if this value is chosen.
The application of those movements is subject to the two constraints in (10).

(10) (i) The Head Movement Constraint. This constraint requires that no intermediate head be skipped during head movement. For a verb to move from its VP-internal position all the way to C, for instance, it must land successively in AgrO, T and AgrS. This means that, if D occurs before Spell-Out, A, B and C will also occur before Spell-Out. Consequently, setting S(D) to 1 will require that S(A), S(B) and S(C) be set to 1 as well. As a result, there is a transitive implicational relationship between the values of S(A), S(B), S(C) and S(D): given the order here, if one of them is set to 1, then the ones that precede it must also be set to 1.

(ii) The requirement that the subject NP and object NP must move to Spec-of-AgrSP and Spec-of-AgrOP respectively to have their case/agreement features checked before moving to Spec-of-CP. This means that S(G) cannot be set to 1 unless S(E) or S(F) is set to 1.

It was shown that with all the assumptions given above, the parameter space consists of fifty possible settings.¹⁶ These settings and the corresponding word orders they account for are given in (11) below.

¹⁶With 6 binary parameters and one triple-valued one, logically there should be 192 possible settings. But most of these settings are ruled out as syntactically impossible by the two constraints in (10).
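The two constraints in (10) are easy to state mechanically. The sketch below is my own reconstruction, not the search program actually used in the experiments; it implements (10i) and (10ii) as literally stated, so its output need not reproduce the enumeration in (11) exactly. Settings are lists [A,B,C,D,E,F,G], with opt standing for the value 1/0:

    % implies(X, Y): if X = 1 then Y = 1 (used for the HMC chain in (10i)).
    implies(0, _).
    implies(1, 1).

    valid_setting([A, B, C, D, E, F, G]) :-
        member(A, [0,1]), member(B, [0,1]), member(C, [0,1]),
        member(D, [0,1]), member(E, [0,1]), member(F, [0,1]),
        member(G, [0, 1, opt]),
        implies(D, C), implies(C, B), implies(B, A),   % constraint (10i)
        ( G == 1 -> ( E == 1 ; F == 1 ) ; true ).      % constraint (10ii)

A query such as findall(S, valid_setting(S), Settings) then collects the admissible settings for the kind of exhaustive search described in Chapter 1.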
(11)
  #   S(A)  S(B)  S(C)  S(D)  S(E)  S(F)  S(G)    Word Order
  1    0     0     0     0     0     0     0      S V (O)
  2    1     0     0     0     0     0     0      V S (O)
  3    0     0     0     0     1     0     0      S V (O)
  4    0     0     0     0     0     1     0      (O) S V
  5    1     1     0     0     0     0     0      V S (O)
  6    1     0     0     0     1     0     0      S V (O)
  7    1     0     0     0     0     1     0      (O) V S
  8    0     0     0     0     1     1     0      S (O) V
  9    1     1     1     0     0     0     0      V S (O)
 10    1     1     0     0     1     0     0      S V (O)
 11    1     1     0     0     0     1     0      V (O) S
 12    1     0     0     0     1     1     0      S (O) V
 13    1     1     1     1     0     0     0      V S (O)
 14    1     1     1     0     1     0     0      S V (O)
 15    1     1     1     0     0     1     0      V (O) S
 16    1     1     0     0     1     1     0      S V (O)
 17    1     1     1     1     1     0     0      V S (O)
 18    1     1     1     1     0     1     0      V (O) S
 19    1     1     1     0     1     1     0      S V (O)
 20    1     1     1     1     1     1     0      V S (O)
 21    0     0     0     0     1     0     1      S V (O)
 22    0     0     0     0     1     1     1      S (O) V / O S V
 23    1     0     0     0     1     0     1      S V (O)
 24    1     0     0     0     1     1     1      S (O) V / O S V
 25    1     1     0     0     1     0     1      S V (O)
 26    1     1     0     0     1     1     1      S V (O) / O S V
 27    1     1     1     0     1     0     1      S V (O)
 28    1     1     1     0     1     1     1      S V (O) / O S V
 29    1     1     1     1     1     0     1      S V (O)
 30    1     1     1     1     1     1     1      S V (O) / O V S
 31    0     0     0     0     0     0     1/0    S V (O)
 32    1     0     0     0     0     0     1/0    V S (O)
  #   S(A)  S(B)  S(C)  S(D)  S(E)  S(F)  S(G)    Word Order
 33    0     0     0     0     1     0     1/0    S V (O)
 34    0     0     0     0     0     1     1/0    (O) S V
 35    1     1     0     0     0     0     1/0    V S (O)
 36    1     0     0     0     1     0     1/0    S V (O)
 37    1     0     0     0     0     1     1/0    (O) V S
 38    0     0     0     0     1     1     1/0    S (O) V / O S V
 39    1     1     1     0     0     0     1/0    V S (O)
 40    1     1     0     0     1     0     1/0    S V (O)
 41    1     1     0     0     0     1     1/0    V (O) S / O V S
 42    1     0     0     0     1     1     1/0    S (O) V / O S V
 43    1     1     1     0     1     0     1/0    S V (O)
 44    1     1     1     0     0     1     1/0    V (O) S / O V S
 45    1     1     0     0     1     1     1/0    S V (O) / O S V
 46    1     1     1     1     0     0     1/0    V S (O)
 47    1     1     1     0     1     1     1/0    S V (O) / O S V
 48    1     1     1     1     1     0     1/0    V S (O) / S V (O)
 49    1     1     1     1     0     1     1/0    V (O) S / O V S
 50    1     1     1     1     1     1     1/0    V S (O) / S V (O) / O V S

The Parameter Space in Wu (1993)

As we can see, the word orders accommodated in this parameter space include SVO, SOV, VSO, V2, VOS, OSV, and OVS. In other words, all the basic word orders are covered. The parameter space also permits a certain degree of scrambling.

In addition to this new word order typology, I also showed in Wu (1993) that all the languages in this parameter space are learnable. I proposed a parameter-setting algorithm which has the following properties.¹⁷
  • 48. setting algorithm which has the following properties.17 • Convergence is guaranteed without the need of negative evidence. • The learning process is incrementat the resetting decision can be based on the current setting and the current input string only. • The resetting procedure is deterministic at any point of the learning process, there is a unique setting which will make the current string interpretable. • Data presentation is order-independent. Convergence is achieved regardless of the order in which the input strings are presented, as long as all the distinguishing strings eventually appear. 17The parameter-setting algorithm is based on the principle of Procrastinate (MPLT) which basically says "avoid overt movement as much as possible". Following this principle, I assumed that all the S-parameters are set to 0 at the initial stage. The parameter-setting algorithm is basically the failure-driven one described in Gold (1967) (induction by enumeration). The learner always tries to parse the input sentences with the current setting of parameters. The setting remains the same if the parse is successful. If it fails, the learner wilt try parsing the input sentence using some different settings until he finds one which results in a successful parse. The order in which alternative settings are to be tried are determined by the following algorithm: Sort the possible settings into a partial order where precedence is determined by the following sub-algorithm: (i) Given two settings P I and PI, P I < P2 if S(G) is set to 0 in P I while it is set to 1 or 1/0 in P2. (The setting which permits no overt movement to Spec-of-CP is preferred.) Go to (ii) only if (i) fails to order P I and P2. (ii) Given two settings P I and P2, P I < P2 if S(G) is set to 1 in P I while it is set to 1/0 in P2. (If overt movement to Spec-of-C is required, the setting which permits no optional movement is preferred.) Go to (iii) only if (ii) fails to order P I and P2. (iii) Given two settings P l(i) and P2(j) where i and j are the number of param­ eters set to 1 in the respective settings, P I < P2 if i < j. (The setting permits fewer overt movements is preferred.) The resulting order is the one given in (11). What the learner does in cases of failure is try those settings one by one until he finds one that works. The learner is always going down the list and no previous setting will be tried again. 32
2.2.3 Modifying the IXSH

So far the IXSH approach to word order has appeared to be very promising. We have a parameter space which can accommodate all the basic word orders and a parameter-setting algorithm which has some desirable properties. Many word order facts that used to be derived from the values of HD-parameters have proved to be derivable from the values of S(M)-parameters as well. In addition, the parameter space of S-parameters can account for word order phenomena (such as V2 and scrambling) which are difficult to accommodate in the parameter space of HD-parameters. What we have seen has certainly convinced us that movement can have a much more important role to play in the derivation of word order. However, we have not yet proved that HD-parameters can be eliminated altogether. In other words, we are not yet sure whether S(M)-parameters can account for everything that HD-parameters are capable of accounting for.

Both Kayne (1992, 1993) and Wu (1992, 1993) have assumed a base structure where the head invariably precedes its complement. This structure is strictly right-branching. One consequence of this is that linear precedence corresponds to asymmetric C-command relations in every case. Given two terminal nodes A and B where A asymmetrically C-commands B, A necessarily precedes B in the tree. In current theories, functional categories dominate all lexical categories in a single IP/CP, with all the functional heads asymmetrically C-commanding the lexical heads. This means that, in any single extended projection (in the sense of Grimshaw (1991)18), all functional heads precede the lexical heads in the base structure. This is apparent from the tree in (8).

18 In such extended projections, the whole CP/IP is projected from the verb.
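The precedence claim can be made concrete with a small sketch. The nested-pair encoding of the right-branching base and the category labels below are my own illustrative assumptions:

```python
# In a strictly right-branching base, each head asymmetrically C-commands
# everything inside its complement, so the left-to-right yield puts every
# functional head before the lexical head it dominates.

def terminals(tree):
    """Left-to-right terminal yield of a tree given as a leaf string
    or a (head, complement) pair."""
    if isinstance(tree, str):
        return [tree]
    head, complement = tree
    return terminals(head) + terminals(complement)

# One extended projection: C > Agr > T > Asp > V, projected from the verb.
base = ("C", ("Agr", ("T", ("Asp", ("V", "NP")))))
print(terminals(base))   # ['C', 'Agr', 'T', 'Asp', 'V', 'NP']
```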
Let us assume that a functional head can be spelled out in two different ways: (i) as an affix on a lexical head or (ii) as an independent word such as an auxiliary, a grammatical particle, an expletive, etc. (This assumption will be discussed in detail in 2.3.) In Case (i), the lexical head must have moved to or through the functional head, resulting in an "amalgamation" of the lexical head and the functional head. The prefix or suffix on the lexical head may look like a functional head preceding or following the lexical head, but the two cannot be separated by an intervening element. Case (ii) is possible only if the lexical head has not moved to the functional head, for otherwise the functional head would have merged into the lexical head. Consequently, a lexical head such as a verb must follow a functional head such as an auxiliary in a surface string. This is so because the lexical head is base-generated lower in the tree and will be asymmetrically C-commanded by all functional heads unless it moves to a higher position.

What all this amounts to is the prediction that, unless we have VP-preposing or the kind of IP-preposing suggested in Kayne (1993), we should never find a string where a functional element appears to the right of the verb but is not adjacent to it. In other words, the sequence in (12) is predicted to be impossible, where F+ stands for one or more overtly realized functional heads, such as auxiliaries and grammatical particles, and X stands for any intervening material between the verb and the functional element(s).

(12) [CP ... [IP ... Verb X F+ ]]

One may argue that this sequence is possible if excorporation in Koopman's (1992) sense can occur. In this kind of excorporation, a verb moves to a functional head without getting amalgamated with it, and then moves further up. In that case, the verb will end up in a position which is higher than some functional heads.
When these functional heads are spelled out as auxiliaries or particles, they will follow the verb. But this does not account for all cases of (12). Consider the Chinese sentence in (13).19

(13) Ta kan-wan nei-ben shu le ma
     he finish-reading that book Asp Q/A
     'Has he finished reading that book?' or 'He has finished reading that book, as you know.'

This sentence fits the pattern in (12). The aspect marker le20 and the question/affirmation particle ma21 are not adjacent to the verb, being separated from it by a full NP. This order does not seem to be derivable from excorporation. The aspect particle le is presumably generated in AspP (aspect phrase) and the question/affirmation particle ma is generated in CP (Cheng 1991, Chiu 1992). In order for both the verb and the object NP to precede le or ma, the verb must move to a position higher than Asp° or C°, and the object NP must also be in a position higher than Asp° or C°. This is impossible given standard assumptions. Therefore the sentence in (13), which is perfectly grammatical, is predicted to be impossible in Wu's (1993) model. Of course, this sentence can be generated if we accept Kayne's (1993) hypothesis that the whole IP can move to the Spec of CP. However, this kind of extensive pied-piping still remains a speculation at present. We do not feel justified in adopting this new hypothesis just to save this single construction.

19 This sentence is ambiguous in its romanized form, having both an interrogative reading and a declarative reading. (The readings are not ambiguous when written in Chinese characters, as the two senses of ma are written with two different characters.)
20 There are two distinct le's in Chinese: an inchoative marker and a perfective marker (Teng 1973). These two aspect markers can co-occur and they occupy distinct positions in a sentence. The inchoative le is always sentence-final in a statement while the perfective le immediately follows the verb. The le in (13) is an instance of the inchoative marker.
21 Whether it is the interrogative ma or the affirmative ma depends on the intonation.
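The configuration excluded by (12) is easy to state mechanically. In the sketch below, the category tags and the segmentation of (13) are my own annotations, added purely to show that (13) instantiates the banned Verb-X-F+ sequence:

```python
# Scan a tagged surface string for a verb followed by an overt functional
# head (F) from which it is separated by intervening material (X).

def fits_pattern_12(tags):
    """tags: sequence over {'V', 'F', 'X'}."""
    for i, t in enumerate(tags):
        if t == "V":
            seen_intervener = False
            for later in tags[i + 1:]:
                if later == "X":
                    seen_intervener = True
                elif later == "F" and seen_intervener:
                    return True
    return False

# (13): Ta kan-wan nei-ben shu le ma
#       X  V       X           F  F
print(fits_pattern_12(["X", "V", "X", "F", "F"]))   # True
```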
Moreover, the adoption of this new movement can make our system too powerful, with the result that too many unattested word orders are predicted. Finally, the movement of IP cannot solve every problem we have here. The position of le, for instance, would still be a mystery even if the new hypothesis were adopted. All this shows that there are linguistic facts which the Invariant X-Structure Hypothesis is unable to account for without some strenuous stretch in our basic assumptions. By contrast, these facts can receive a very natural explanation if some HD-parameters are allowed. We can assume that CP and IP (which contains AspP) are head-final in Chinese. The verb does not move overtly in Chinese, so the heads of AspP and CP are spelled out as an aspect marker and a question/affirmation marker respectively. Since they are generated to the right of the whole VP, they must occur in sentence-final positions.

The assumption that CP and IP can be head-final is supported by facts in other languages. In Japanese, for example, we find the following sentences.

(14)22 Yamada-sensei-wa ki-masi-ta ka
      Yamada-teacher-Topic come-Hon-Past Q
      'Did Professor Yamada come?'

(15)23 Kesa hatizi kara benkyoosi-te i-ru
      this-morning 8 from study-Cont be-Nonpast
      'I have been studying since eight this morning.'

In (14), we find the question particle ka in a post-verbal position. This particle is clearly a functional element.24

22 Example from Tetsuya Sano.
23 From Kuno (1978).
24 A number of other particles can appear in this position, such as no, sa, yo, to, etc. There is no doubt that they are functional elements, though the exact functions they perform are controversial.
As assumed in Cheng (1992) and Fukuda (1993), question particles in Japanese are positioned in C°. In (15), the verb is followed by i-ru, which is most likely located in T. In both cases, a functional head appears after the verb, which is to be expected if CP and TP (which is part of IP) are head-final in Japanese. The IXSH will have difficulty explaining this unless we assume that these particles and auxiliaries are in fact suffixes which come together with the verb from the lexicon. But there is strong evidence that ka and i-ru are not suffixes. We could also argue that the verb has managed to precede ka and i-ru by left-adjoining to T° and C° through head movement. But in that case the verb would be in C° and the word order would be VSO instead of SOV. The excorporation story is not plausible either, for the verb in (14) would have to move to a position higher than C in order to precede ka.

Arguments for the existence of HD-parameters in IP are also found in European languages. In Dutch, an auxiliary can appear after the verb, as we see in (16).

(16)25 dat Wim dat boek gekocht heeft
      that Wim that book bought has
      'that Wim has bought that book.'

The clause-final heeft is located in the head of some functional projection. It is not a suffix which can be drawn from the lexicon together with the verb.26 If we stick with the IXSH, the word order in (16) would not be possible. For the verb to get in front of heeft, it must move to a position at least as high as heeft. But the word order in that case would be SVO or VSO instead of SOV. However, (16) will not be a problem if we say that IP is head-final in Dutch.

25 Example from Haegeman (1991).
26 The fact that heeft can appear in positions not adjacent to the verb shows that it is not an affix.
It is becoming evident that a strong version of the IXSH, where HD-parameters are eliminated altogether, is very difficult to maintain. We have seen that, although the S(M)-parameters can explain things that HD-parameters fail to explain, there are also facts which are best explained by HD-parameters. In other words, we have come to the conclusion that the second possibility in (7) is more plausible. It seems that the word order facts covered by S(M)-parameters and HD-parameters intersect each other. While some facts can receive an explanation in terms of either S(M)-parameters or HD-parameters, there are word order phenomena which can be explained by S(M)-parameters only or by HD-parameters only. The situation we have here is graphically illustrated in (17). Therefore we need both types of parameters.

(17) [Diagram: two overlapping circles.]
     A = facts covered by S(M)-parameters
     B = facts covered by head-direction parameters
     C = facts covered by both parameters
     The Coverage of S-Parameters and HD-Parameters

The next question is how to coordinate these two kinds of parameters in an account of word order variation. As (17) shows, the empirical grounds covered by these two types of parameters overlap considerably. If we keep both sets of parameters in full, there can be too much redundancy. The number of parameters we get will be greater than necessary. Ideally, the parameter spaces of these parameters should complement each other. There should therefore be a new division of labor so that the two kinds of parameters duplicate each other's work as little as possible. There are at least two ways in which this can happen.
(i) Word order is derived mainly through movement, with variation in phrase structures covering what is left out. In other words, the facts in the A and C areas of (17) are accounted for by S(M)-parameters and those in B by HD-parameters.

(ii) Word order is mainly a property of phrase structure, with movement accounting for what is left out. In other words, the facts in the B and C areas of (17) are accounted for by HD-parameters and those in A by S(M)-parameters.

The choice between the two can be based on several different considerations. The model to be favored should be syntactically simpler and typologically more adequate. In addition, it should be a better model in terms of language acquisition and language processing. In order to have a good comparison and make the correct decision, we must have good knowledge of both approaches. The approach in (ii) has already been explored extensively. This phrase-structure-plus-some-movement account has been the standard story for many years. The results are familiar to everyone. The approach in (i), however, has not received enough investigation yet. This is an area where more research is needed. For this reason, the remainder of this thesis will be devoted mainly to the first approach. We will examine this approach carefully in terms of syntactic theory, language typology, language acquisition, and language processing. It is hoped that such an investigation will put us in a better position to judge different theoretical approaches.

Having decided on the main goal of the present research, we can now start looking into the specifics of a model which implements the idea in (i). One of the obvious questions encountered immediately is "how many S(M)-parameters are there and how many HD-parameters?".
This question will be addressed in Chapter 3 when we work on a full specification of the model. It will be proposed that the HD-parameters be restricted to functional categories only. In particular, there will be only two HD-parameters: a complement-head parameter for CP and a complement-head parameter for IP. The former determines whether CP is head-initial or head-final and the latter determines the directionality of IP. With the assumption that IP consists of a number of functional categories, each heading its own projection, the number of heads in IP can be more than one. Instead of assuming that each of these functional projections has an independent parameter, I will assume that the complement-head parameter applies to every individual projection in IP. That is to say, the value of this parameter will apply to every projection in IP; no situation will arise where, say, AgrP is head-initial while TP is head-final. This decision is based on the following considerations:

(i) All previous models incorporating HD-parameters have treated IP as a single unit.

(ii) There has been no strong evidence that IP can be "bidirectional" in the sense that some of its projections are left-headed and some right-headed.

(iii) The organization of the IP-internal structure is still a controversial issue. There has not been enough consensus as to how many IP-internal categories there are and how they are hierarchically organized. It is risky, therefore, to attach a parameter to any specific "sub-IP".

Similar arguments can be made for CP, which is treated as a single unit in spite of the fact that some other CP-like categories, such as FP (Focus Phrase) (Brody 1990, Horvath 1992, etc.) and TopicP, have been proposed.
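A toy linearization of the clausal spine shows what the two proposed HD-parameters buy us. The function and the category inventory below are illustrative assumptions; in particular, one directionality value is shared by every projection inside IP, exactly as stipulated above:

```python
IP_HEADS = {"Agr1", "Agr2", "T", "Asp"}   # every IP-internal projection

def place(head, complement, hd):
    """Linearize a head against its already-linearized complement,
    consulting the CP parameter for C and the single IP parameter
    for all IP-internal heads; lexical heads stay head-initial."""
    if head == "C":
        final = hd["CP"] == "final"
    elif head in IP_HEADS:
        final = hd["IP"] == "final"
    else:
        final = False
    return complement + [head] if final else [head] + complement

def spine(hd):
    """Build C > Agr1 > T > Asp > V bottom-up."""
    out = ["V"]
    for h in ("Asp", "T", "Agr1", "C"):
        out = place(h, out, hd)
    return out

print(spine({"CP": "initial", "IP": "initial"}))
# ['C', 'Agr1', 'T', 'Asp', 'V']
print(spine({"CP": "final", "IP": "final"}))
# ['V', 'Asp', 'T', 'Agr1', 'C'] -- Asp and C end up clause-final,
# as with Chinese le and ma in (13)
```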
By reducing the number of HD-parameters to two, we have not destroyed the Invariant X-bar Structure Hypothesis completely. What we are left with is a weaker version of the IXSH where the directions of specifiers are still fixed and the directions of complements are fixed except in functional categories. Compared with a model where each category can have both a specifier-head parameter and a complement-head parameter, the degree of invariability in X-bar structure is much higher. As a result, the number of possible tree structures has decreased considerably. The significance of this modified version of the IXSH for language typology, language acquisition and language processing will be studied in Chapters 4, 5 and 6.

2.3 Interaction of S(F)- and S(M)-Parameters

In this section, we come back to the relationship between overt morphology and overt movement. We have assumed for the time being that these two phenomena are independent of each other. It has been proposed that each feature has two separate S-parameters associated with it: the S(F)-parameter that determines whether the feature is spelled out morphologically and the S(M)-parameter that determines whether the movement responsible for the checking of the feature occurs before Spell-Out. It is interesting to note that both the S(F)- and S(M)-parameters are feature-related. Given that both the S(F)-parameter and the S(M)-parameter are binary (1 or 0),27 there is a parameter space of four possible settings for each feature:
(18)    S(F)  S(M)
  (i)     1     1
  (ii)    1     0
  (iii)   0     1
  (iv)    0     0

What is the linguistic significance of these value combinations? Before answering this question, let us have a closer look at how feature-checking works.

In the Minimalist framework, we can entertain the assumption that any feature that needs to be checked is generated in two different places, one in a functional category and one in a lexical category. Let us call these two instances of the same feature the F-feature and the L-feature respectively. For instance, the tense feature is found in both T and V. To make sure that the two tense features match, the verb (which carries the L-feature) must move to T (which carries the F-feature) so that the value can be checked. If the checking (which entails movement) occurs before Spell-Out, the F-feature and the L-feature will get unified and become indistinguishable from each other.28 As a result, only a single instance of this feature will be available at the point of Spell-Out. If the checking occurs after Spell-Out, however, both the F-feature and the L-feature will be present at Spell-Out and either can be overtly realized.

This assumption has important consequences. As will be discussed in detail in Chapter 4, this approach provides a way of reconciling the movement view and the base-generation view of many syntactic issues. The checking mechanism we assume here requires both base-generation and movement for many syntactic phenomena. Many features are base-generated in two different places but related to each other through movement. I will not go into the details here. The implications of these assumptions will be fully explored in Chapter 4.

27 We will see later on that the S(M)-parameter can have a third value: 1/0.
28 This is similar to amalgamation, where the functional element becomes part of the lexical element.
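The timing half of this story can be summarized in a few lines. The sketch below is schematic and my own — features and copies are just strings — but it records the one asymmetry that matters: overt checking leaves a single unified copy, covert checking leaves two.

```python
def copies_at_spellout(s_m):
    """Instances of a feature present when Spell-Out applies,
    given the S(M) value for that feature."""
    if s_m == 1:   # checking movement precedes Spell-Out
        return ["unified F/L copy on the lexical head"]
    return ["F-copy in the functional head", "L-copy on the lexical item"]

for s_m in (1, 0):
    print(f"S(M) = {s_m}:", copies_at_spellout(s_m))
```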
Now let us examine the values in (18) and see what can possibly happen in each case.

In (18(i)), both the S(F)-parameter and the S(M)-parameter are set to 1. This means both the feature itself and the movement that checks this feature will be overt. In this case, the F-feature and the L-feature will be unified after the movement and be spelled out on the lexical head in the form of, say, an affix. In addition, this lexical item will be in a position no lower than the one where the F-feature is located. If the features in question are agreement features, for example, we will see an inflected verb in IP/AgrP or a higher position, with the inflection carrying agreement information. This seems to be the "normal" case that occurs in many languages. One example is French, where the verb does seem to appear in IP/AgrP and it has overt morphology indicating subject-verb agreement (see (19)).29

(19) Mes parents parlent souvent espagnol
     my parents speak-3P often Spanish
     'My parents often speak Spanish.'

If the feature to be considered is the case feature of an NP, this NP will move to the Spec of IP/AgrP and be overtly marked for case. Japanese seems to exemplify this situation, as can be seen in (20).30

(20)31 Taroo-ga Hanako-o yoku mi-ru
      Taroo-nom Hanako-acc often see-pres
      'Taroo often sees Hanako.'

29 The fact that the verb precedes the adverb souvent in this sentence tells us that the verb has moved to IP/AgrP.
30 We can assume that the subject NP in this sentence has moved to the Spec of AgrSP and the object has moved to the Spec of AgrOP.
31 Example from Akira Nakamura.
If the feature to be checked is the scope feature of a wh-phrase, the wh-phrase will move to the Spec of CP, with the scope feature overtly realized in some way. English might serve as an example of this case, though it is not clear how the scope feature is overtly realized.

In (18(ii)), the S(F)-parameter is set to 1 but the S(M)-parameter is set to 0. This means that the feature must be overt but the movement that checks this feature must not. Since the checking movement takes place after Spell-Out, both the F-feature and the L-feature will be present at the point of Spell-Out and at least one of them must be overtly realized. There are three logical possibilities for spelling the feature out: (a) spell out the F-feature only, (b) spell out the L-feature only, or (c) spell out both. (a) and (b) seem to be exemplified by the agreement/tense features in English. The English verb does not seem to move to I before Spell-Out. Since the features appear both in I and on the verb, we can pronounce either the L-features or the F-features. When the L-features are pronounced, we see an inflected verb, as in (21). When the F-features are pronounced, we see an auxiliary, as in (22), where does can be viewed as the overt realization of the I-features. (In other words, Do-Support is a way of spelling out the head of IP.) The third possibility, where the feature is spelled out in both places, does not seem to be allowed in English, as (23) shows.

(21) John loves Mary.
(22) John does love Mary.
(23) *John does loves Mary.

In Swedish, however, "double" spell-out seems to be possible.
When a whole VP is fronted to first position, which probably means that the verb did not have a chance to undergo head movement to T(ense), the tense feature is spelled out on both the verb and the auxiliary which has moved from T to C. This is shown in (24).

(24)32 [VP Oeppnade doerren ] gjorde han
      open-Past door-the do-Past he
      'He opened the door.'

The value in (18(ii)) can also be illustrated with respect to case/agreement features and the scope feature. It seems that in English the features in the Spec of IP/AgrSP must be spelled out. When overt NP movement to this position takes place, the features appear on the subject NP. In cases where no NP movement takes place, however, the Spec of AgrSP is occupied by an expletive (as shown in (25)), which can be interpreted as the overt realization of the case/agreement features in the Spec of AgrSP.

(25) There came three men.

The situation where the scope feature is overtly realized without overt wh-movement is found in German partial wh-movement. In German, wh-movement can be either complete or partial, as shown in (26), (27), (28) and (29).33

(26) [ mit wem ]i glaubst du [CP dass Hans meint [CP t'i dass Jakob ti
     with whom believe you that Hans think that Jakob
     gesprochen hat ]]
     talked has
     'With whom do you believe that Hans thinks that Jakob talked?'

32 Example from Christer Platzack.
33 Examples are from McDaniel (1989) and Cheng (1993).
(27) was glaubst du [CP [ mit wem ]i Hans meint [CP dass Jakob ti
     WHAT believe you with whom Hans think that Jakob
     gesprochen hat ]]
     talked has
     'With whom do you believe that Hans thinks that Jakob talked?'

(28) wasi glaubst du [CP wasi Hans meint [CP [ mit wem ]i Jakob ti
     WHAT believe you WHAT Hans think with whom Jakob
     gesprochen hat ]]
     talked has
     'With whom do you believe that Hans thinks that Jakob talked?'

(29) *wasi glaubst du [CP dass Hans meint [CP [ mit wem ]i Jakob ti
     WHAT believe you that Hans think with whom Jakob
     gesprochen hat ]]
     talked has
     'With whom do you believe that Hans thinks that Jakob talked?'

These four sentences have the same meaning but different movement patterns. In (26) the wh-phrase (mit wem) moves all the way to the matrix CP. In (27) and (28), the wh-phrase also moves, but only to an intermediate CP. The Spec(s) of the CP(s) which mit wem has not moved to are filled by was, which is usually called a wh-scope marker. (29), which is ungrammatical, is identical to (28) except that the specifier of the immediately embedded CP does not contain was. It seems that the scope feature in German has the value in (18(ii)), which requires that this feature be made overt. In (26) the wh-phrase has moved through all the Specs of CP, which means all the scope features have been checked before Spell-Out. So the F-features and the L-feature have unified and we see the wh-phrase only. In (27) and (28), the checking movement is partial and all the unchecked scope features are spelled out as was. (29) is ungrammatical because one of the unchecked features is not spelled out, which contradicts the value of S(F) that requires that all scope features be made overt.
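An illustrative sketch of the generalization just drawn from (26)-(29): every Spec-of-CP on the scope path above the wh-phrase's overt landing site must be spelled out as the scope marker was. The list encoding (matrix CP first) is invented for the example, not part of the analysis itself.

```python
def scope_specs(n_cps, landing):
    """Spell out the CP specifiers on the path, given the index of the
    CP whose Spec the wh-phrase overtly reaches (0 = matrix)."""
    specs = []
    for i in range(n_cps):
        if i < landing:
            specs.append("was")        # unchecked scope feature, made overt
        elif i == landing:
            specs.append("mit wem")    # the wh-phrase itself
        else:
            specs.append(None)         # below the landing site: no overt material needed
    return specs

print(scope_specs(3, 0))   # (26): ['mit wem', None, None]
print(scope_specs(3, 1))   # (27): ['was', 'mit wem', None]
print(scope_specs(3, 2))   # (28): ['was', 'was', 'mit wem']
# (29) is out: a 'was' above the landing site is missing.
```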
The last example to illustrate the value in (18(ii)) is from French. Previous examples have shown that the specifiers of CP and AgrSP can be spelled out by themselves. The French example here is intended to show that the head of CP can also be spelled out in that way. Let us suppose that C° contains a certain predication feature which determines, for instance, whether the sentence is a statement or a question. Let us further suppose that this feature must be spelled out in a question in French. When I-to-C movement takes place, as in (30), this feature is not spelled out by itself. It is merged with the verb in C°.

(30) Apprenez-vous le russe
     learn you Russian
     'Do you learn Russian?'

In cases where no overt I-to-C movement takes place, however, the predication feature must be spelled out on its own. This is shown in (31), where est-ce que can be regarded as an overtly realized head of CP.

(31) Est-ce que vous apprenez le russe
     you learn Russian
     'Do you learn Russian?'

In (18(iii)), the S(F)-parameter is set to 0 while the S(M)-parameter is set to 1. This means the feature is checked through movement before Spell-Out but is not overtly realized. We will see overt movement but no overt morphology. This seems to happen to the case/agreement features of the subject NP in Chinese, as (32) shows.

(32) Ta bu xihuan Lisi
     he not like Lisi
     'He doesn't like Lisi.'

We assume that the subject NP ta moves overtly to the Spec of IP/AgrSP (Cheng 1991, Chiu 1992).
One argument for this is the following: with the assumption that the subject is base-generated within VP and Neg is generated outside VP, the subject could not precede Neg had it not moved to the Spec of IP/AgrSP. However, there is no case/agreement marking on this NP at all, which shows that the S(F)-parameter for case/agreement features is set to 0. More examples demonstrating the value in (18(iii)) can be found in Chapter 4.

In (18(iv)), both the S(F)-parameter and the S(M)-parameter are set to 0. In this case, there is neither overt morphology nor overt movement. The object-verb agreement features in many languages seem to exemplify this value: there is no overt agreement marker and the object does not move. Examples of this will be given in Chapter 4.

The examples we have seen so far are sufficient to show that the interaction between S(F)-parameters and S(M)-parameters can give rise to a wide variety of syntactic phenomena. It has added a new dimension to our parameter space.

2.4 Summary

In this chapter, I have proposed a syntactic model which is built on some of the new assumptions in the Minimalist framework. By fully exploiting the notion of Spell-Out, we discovered a model where a wide range of linguistic facts can be accounted for in terms of two sets of S-parameters. The values of S(F)-parameters determine which features are morphologically realized; the values of S(M)-parameters determine which movements are overt. It is found that so much variation in word order can be derived from movement that the number of HD-parameters can be reduced. It is also found that the interaction between S(F)-parameters and S(M)-parameters can make interesting predictions about the distribution of functional elements, such as affixes, auxiliaries, expletives, particles and wh-scope markers. In short, we have found a parameter space which has rich typological implications.
So far the model has been presented in a very sketchy way, with many details left out. But the details are important. In the chapters that follow, we will make the model more specific and put it to the test of language typology, language acquisition and language processing.
Chapter 3

An Experimental Grammar

In the previous chapter I proposed a new approach to syntax where cross-linguistic variations in word order and inflectional morphology are attributed to three sets of parameters: the S(F)-parameters, the S(M)-parameters, and the HD-parameters. So far the discussion has been very general, with many details unattended to. To fully explore the consequences of this new hypothesis in language typology, language acquisition and language processing, we need to work with a more concrete model. We must have a grammar which is specific enough that its consequences can be computed. The goal of this chapter is to specify such an experimental grammar.

Obviously, the presentation of a syntactic model which is complete in any sense is an unrealistic goal here. In the first place, the Minimalist theory has not been fully specified. Many issues that the standard P&P model has addressed have not been accommodated in the new program yet, not to mention those areas that even the standard model has left unexplored. In addition, I will not follow MPLT in every detail, though the theory I am proposing is in the Minimalist framework.1

1 As has been stated in the last chapter, I will try to make a distinction between MPLT and the Minimalist framework. The former refers to the specific model described in MPLT while the latter refers to the general approach to syntax initiated by MPLT.
This leaves more issues open, for even those things that are supposed to have been discussed in MPLT may have to be reconsidered here. The best we can do at this moment is to come up with a partial model which has full specifications for those parts of the grammar that are relevant to the testing of our hypothesis. The conclusions we draw from this partial grammar will not be definitive, but they can at least provide us with some way of evaluating the new theory. Such evaluation will give us some idea as to whether this line of research is worth pursuing at all.

For this reason, the syntactic model to be presented in this chapter will be minimal. In particular, we will be concerned only with those parts of the grammar which are responsible for the basic word orders and morphological characteristics of languages. We will start with a grammar which is restricted in the following ways.

• Declarative sentences only. We will focus on cross-linguistic variations in statements first. In many languages, the word orders found in questions are different from those in statements. We do not want to get into this complication before we have a better understanding of how the parameters work in the "basic" type of sentence, i.e. declarative sentences. Therefore, I will put other sentence types aside in this chapter, though some of them will be picked up in Chapter 4.

• Matrix clauses only. We will start with simple sentences with no embedding. In other words, we will be concerned mainly with Degree-0 sentences.2 There are two reasons for this temporary exclusion of embedded clauses. First, main clauses and subordinate clauses have different word orders in some languages (e.g. German, Dutch, and many other V2 languages).

2 Discussions on the "degrees" of sentences can be found in Wexler and Culicover 1980, Morgan 1986 and Lightfoot 1989, 1991.
Why there is this difference deserves some special discussion. We will come to that after matrix clauses have been analyzed. Second, the inclusion of embedded clauses will make it necessary to deal with "long-distance" movement, whose application involves the notion of "barriers" or "minimality". How these notions are defined in the Minimalist framework is not clear yet. It is very likely that the standard definitions can be transplanted into the present model without too much tinkering. But I prefer to put these issues aside until we have worked on aspects of the grammar which are more directly related to basic word order.

• Two types of verbs only. Since I will be mainly concerned with the ordering of S(ubject), O(bject) and V(erb) in this experimental study, I will only look at two types of verbs: (a) intransitive verbs with a single NP argument (e.g. swim) and (b) transitive verbs with two NP arguments (e.g. love).

• IP/CP only. I will experiment with the ordering in IP/CP first and leave the internal structures of NP/DP aside for the moment. There have been many observations on the parallels between IP/CP and DP (Stowell 1981, 1989, Abney 1986, Szabolcsi 1989, Valois 1991, etc.). The new approach considered here can definitely apply to the word order phenomena within DP. It is very likely that the movement patterns in IP/CP and NP/DP are related (Koopman, 1992). However, I will single out IP/CP for analysis first. All NPs/DPs will be treated as unanalyzed wholes for the time being. Their internal structures and internal word orders will be given a very preliminary analysis in Chapter 7.
• No binding theory. Binding theory is one of those components of the grammar that need major re-working in the Minimalist framework. In order not to get distracted from my main topic, I will not go into a Minimalist account of binding theory.

What is listed above does not exhaust the topics which are left out in this chapter. Other things will be noted in the course of presentation. We will see that, in spite of these simplifications and omissions, the model will be rich enough to spell out the basic properties of the present approach. The typology, the parser and the acquisition procedure based on this minimal model will not be complete, but they will be sufficient for the illustration of these properties.

It must be emphasized again that the model to be described below does not follow MPLT in every detail. The model is in the Minimalist framework in the sense that it keeps to the spirit of the Minimalist approach. I will try to point out the differences as we go along. It should also be emphasized that the grammar to be described is not the only one where the new approach will work. I am simply trying out a particular instantiation of the theory to show that my proposal can be put into practice in at least one given version of the model. By the time we have completed the experiment, we will realize that the main results of our experiment do not have to rely on this particular grammar. The approach should apply in general to many different specifications of the theory.

We now start on our particular model. To compute the relationships between the parameter values on the one hand and the variations in word order and morphology on the other, we must specify at least the following.
(i) The categorial system of the model. This provides the building blocks of linguistic structures.

(ii) The feature system of the model. Since both the S(F)- and S(M)-parameters are associated with features, we will not know how many S-parameters are needed unless we know what features can be spelled out and what features need checking.

(iii) The computational system of the model. This includes the following subsystems:

• Lexical Projection (LP), which determines the phrasal projections of all categories.

• Generalized Transformation (GT), which determines how the phrasal projections are joined to form a single tree.

• Move-α, which is responsible for feature-checking.

(iv) The PF constraints.

(v) The LF constraints.

The PF and LF constraints can be easily defined in this model. We will therefore specify (iv) and (v) first. There is only one PF constraint in this model, which requires that the input to PF be a single tree. Presumably, the violation of this constraint might result in "broken" sentences. This does not mean, however, that we are not allowed to produce sentence fragments. A single NP or PP can also constitute a single tree and thus be a legitimate object at PF.
In MPLT there is another constraint which rejects strong features that survive to PF, as we discussed in 2.1.2. Since we have chosen not to resort to the strong/weak distinction as a possible explanation for overt movement, this constraint does not exist in the present model. The only LF requirement in this model is that all the features must be checked. Since checking requires movement, it actually requires a set of movements to take place in the derivation. The nature of these movements will become clear in 3.2.

In what follows, we will look at these systems one by one.

3.1 The Categorial and Feature Systems

The categorial system and the feature system will be discussed in the same section because they are closely related. Every feature is associated with one or more categories and every category is basically a bundle of features.

3.1.1 Categories

The categories to be used in this mini-grammar will be limited to the ones in (33).

(33) { C(omp), Agr(eement)1, Agr(eement)2, T(ense), A(spect), V(erb), N }

Agr1 and Agr2 are equivalent to what we usually call AgrS and AgrO. We prefer not to use AgrS and AgrO because AgrS is not always associated with the subject, nor is AgrO always associated with the object. The use of Agr1 and Agr2 will facilitate our discussion of ergative constructions, passive constructions, unaccusatives, etc. We see that all categories except N are verbal in nature, while some nominal categories like D(eterminer) are missing. This is because, for the time being, we will not look into the internal structures of NP or DP, all of which will be treated as a single unit.
For instance, John, the boy and the boy who smiled will be treated identically as NPs. Other common categories that are absent from the list include P(reposition), Adv(erb), Adj(ective) and Neg(ation). Some of them will be introduced into our system in succeeding chapters when they become relevant to our discussion.

We assume that the set of categories in (33) is universal. In other words, the categories are innately given as part of UG. The fact that some categories do not seem to show up in some languages can be explained in two different ways:

(i) Only a subset of those categories is used in each particular language. After the critical period of language acquisition, the categories that are not used are "trashed".

(ii) All the categories are present not only in UG but in every individual adult grammar as well. The fact that some categories are invisible simply means they are not spelled out.

The explanation to be adopted in our present model is the one in (ii). There are several arguments against (i). First of all, the assumption that some categories can be trashed after the critical period implies that different languages can have very different X-bar structures. If we assume (ii), however, the structures will be more uniform. Secondly, the "trashing" of categories is not a simple computational operation. It may mean a partial or total rehash of selectional relations among categories. Suppose that in UG C selects Agr as a complement, Agr selects T, and T selects Asp. If a language does not have overt tense and agreement, we have to remove all those selectional rules and replace them with a new rule which may let C select Asp as its complement. How this complicated operation can be triggered and accomplished is a question.
Finally, even in languages where certain categories seem to be missing, the concepts represented by those categories appear to be present. Many people have analyzed Mandarin Chinese as a language where the category T is missing (e.g. Cheng 1991). This by no means indicates that speakers of this language are tense-insensitive. As a matter of fact, every Chinese sentence is interpreted in some tense frame. This is true even in cases where no time adverbial is present. The simplest explanation for this fact is that T is present though it is not overtly realized.

3.1.2 Features

The arguments we made above about the universal nature of categories can be applied to features in a similar way. The features to be assumed in this model are also supposed to be universal. They are in UG and they remain in the grammar of every individual language. The fact that only a subset of those features is visible in a given language means that only this subset is spelled out. Therefore, we can assume the existence of a feature as long as this feature is visible in some language. We have hypothesized that the visibility of a feature depends on the value of its S(F)-parameter rather than on the availability of the feature itself. Some people may argue that the distinction we are making here has no empirical import. After all, what is the difference between invisible existence and non-existence? But there is a difference in terms of language acquisition and language processing. As we will see, both of them can be simplified with our assumptions.

Now let us specify the features of our model. It is commonly accepted that the basic features are of two types: the V-features and the NP-features. What these features should exactly be is an open question, but we can start with the tentative set in (34).
(34) V-features: θ-grid, case, tense, aspect, φ-features, predication features
     NP-features: θ-role, case, φ-features, operator features

This set is by no means complete, but it will be enough for experimental purposes. In what follows, I will give some justification for the inclusion of those features in our model, and specify for each feature whether there is an S(F)-parameter associated with it.

The existence of θ-grids is relatively non-controversial. Whether this feature can be spelled out, however, is open to debate. On the one hand, we can say that it is always overtly realized in the argument structure of a sentence; on the other hand, there does not seem to be a language where θ-grids are morphologically realized on the verb. But one thing is almost certain: the spell-out of θ-grids does not vary from language to language. We can think of this feature as being either always spelled out (in the argument structure) or never spelled out (in verbal morphology). Therefore there is no reason to assume an S(F)-parameter for this feature.

It is also well accepted that NPs carry θ-roles, though their realization is mixed up with that of case features. In many cases, it may seem that θ-roles and cases are two sides of the same coin, morphological case being the overt realization of θ-roles. However, while all θ-role-carrying NPs can have case markers in some languages, not every case-marked NP carries a θ-role. We will therefore adopt the standard view and treat case and θ-role as two different animals. Furthermore, we will assume that all case markers are overt realizations of case features while θ-roles are understood but never spelled out.
Consequently, there is no reason to suppose that there is an S(F)-parameter associated with the θ-features.

The φ-feature is used as a cover term for all agreement features, such as Person, Number and Gender. It is both a V-feature and an NP-feature. As far as overt agreement is concerned, however, the Spell-Out of the V-features is more important than that of the NP-features. A language is considered to have overt agreement as long as the V-features are overt, regardless of the status of the NP-features.3 In this experimental model, we will only be interested in those features which are involved in overt agreement. For this reason, the spell-out of φ-features in NPs will be ignored for the time being. When we say a φ-feature or an agreement feature is spelled out, we mean the V-feature is overt. Another fact we will temporarily ignore is that different φ-features can be spelled out independently. We could let each φ-feature be associated with an independent S(F)-parameter. This is justified because each feature can be overt or covert regardless of the status of the other φ-features. For instance, we can find a language where person and number features are spelled out on the verb but the gender feature is not. However, such details do not affect the general approach we are taking. They can easily be added to the system after the big picture has been made clear. To simplify our parameter space so as to concentrate on the more interesting aspects of the theory, we will assume a single S(F)-parameter for the whole set of φ-features. Its value is 1 (spelled out) if any subset of the φ-features is overt.

The existence of the tense and aspect features is again non-controversial. They are overt in some languages and covert in others. We could put these two features in a single bundle and let them be associated with a single S(F)-parameter, as we have done for the φ-features.

3 Functionally speaking, case-marking and agreement perform the same role, i.e. identifying the grammatical functions of NPs. This function is realized on the NP when case is spelled out and realized on the verb when agreement is spelled out.
However, the differentiation of these two features is more important than that of the φ-features in the present model because we are focusing on the verbal system. How tense and aspect features can be realized on their own is of interest to us. We decided in 3.1.1 that T(ense) and Asp(ect) constitute two different categories, each having its own features. Therefore we will let tense and aspect be associated with two independent S(F)-parameters. This will enable us to have a more detailed analysis of the tense/aspect system.

The case feature is usually regarded as an NP-feature, for it is often overtly realized as case markers on NPs. There is no doubt that there should be at least one S(F)-parameter for the case features. Potentially we could associate a parameter with each different case, but we will assume a simpler system where the spell-out of all case features is determined by a single S(F)-parameter. This parameter will have the value 1 in a language if there is any kind of morphological case in that language. It is not so obvious, however, whether case is also a V-feature. Most people would think, at least initially, that verbs do not have case features. But there is evidence that the verb does carry case features and that these features are sometimes visible. One example is found in Tagalog (Schachter 1976). In this language, the verb can have a case marker, and the case feature varies according to which NP in the sentence is being topicalized. Consider the sentences in (35), (36), (37) and (38). (The topic marker is ang, while ng marks agent and patient, sa marks locative and para sa beneficiary.) We could treat those markers as spelled-out theta-roles, but we will stick with our assumption that theta-roles are never spelled out. Whatever is spelled out is always the case feature.
(35) Mag-salis ang babae ng bigas sa sako para sa bata
     A-will:take woman rice sack child
     'The woman will take rice out of a/the sack for a/the child.'

(36) Aalisin ng babae ang bigas sa sako para sa bata
     O-will:take woman rice sack child
     'A/The woman will take the rice out of a/the sack for a/the child.'

(37) Aalisan ng babae ng bigas ang sako para sa bata
     Loc-will:take woman rice sack child
     'A/The woman will take some rice out of the sack for a/the child.'

(38) Ipagsalis ng babae ng bigas sa sako ang bata
     B-will:take woman rice sack child
     'A/The woman will take some rice out of a/the sack for the child.'

These examples suggest that the case feature exists in both nouns and verbs. Another possible example of verbal case features is cliticization. We can think of cliticization as a process whereby case features are spelled out on the verb. This view has been expressed by Borer (1984), who calls this process Clitic Spell-Out. It is also reminiscent of the treatment of subject clitics in Safir (1985). Clitics can be viewed as something between a pronoun and an affix. In fact, they are more like affixes than pronouns. If we are allowed to treat them as affixes, as in the lexical analyses of clitics,4 they will start to look like case and agreement markers affixed to the verb. In the following French sentence, for example, me and la can be viewed as the overt realization of case and agreement features on the verb, the former being the feature matrix [case:dat, person:1, number:s] and the latter [case:acc, person:3, number:s, gender:f].

(39) Jean me-la-montre
     John me-it-shows
     'John shows me it.'

4 Lexical analyses claim that a clitic is in effect a derivational affix modifying the lexical entry of a predicate. For instance, the alternation between lire un livre and le lire is taken to be an alternation between a transitive verb lire and an intransitive le+lire.
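On the Clitic Spell-Out view, (39) can be pictured as a lookup from feature matrices to clitic forms. The little table and function below are invented for the illustration; they merely encode the two matrices given in the text.

```python
# Spelling out case/agreement feature matrices as French clitics, in the
# spirit of Borer's (1984) Clitic Spell-Out. Forms and features follow (39).

CLITIC_FORMS = {
    frozenset({("case", "dat"), ("person", 1), ("number", "s")}): "me",
    frozenset({("case", "acc"), ("person", 3), ("number", "s"),
               ("gender", "f")}): "la",
}

def spell_out_clitics(verb, matrices):
    """Realize each feature matrix as a clitic on the verb."""
    forms = [CLITIC_FORMS[frozenset(m.items())] for m in matrices]
    return "-".join(forms + [verb])

print(spell_out_clitics("montre", [
    {"case": "dat", "person": 1, "number": "s"},
    {"case": "acc", "person": 3, "number": "s", "gender": "f"},
]))   # me-la-montre
```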
Since languages can differ with respect to whether the verbal case feature is spelled out, we assume that this feature has an S(F)-parameter associated with it.

The feature "predication" is supposed to contain information about sentence type. It tells us, for instance, whether the verb (or the "predicate") is used in a statement or a question ([-Q] or [+Q]). Is this feature visible in some languages? The answer seems to be positive. One way in which this feature can be said to be realized is through intonation. It is very common for statements and questions to have different intonational contours. The fact that the verb seems to be the main bearer of the clausal intonation suggests that there is a verbal feature which can be overtly realized. This feature also seems to show up morphologically sometimes. The English word whether can be regarded as a spell-out of [+Q] in an embedded CP. In Chinese, this feature can be morphologically realized as a question/affirmation particle, as shown in (13), repeated here as (40), or in the A-not-A construction, as in (41).

(40) Ta kan-wan nei-ben shu le ma
     he finish-reading that book Asp Q/A
     'Has he finished reading that book?' or 'He has finished reading that book, as you know.'

(41) Ni he-bu-he pijiu
     you drink-not-drink beer
     'Do you drink beer?'

The verbal complex he-bu-he ('drink or not') in (41) can be viewed as an instance where the [+Q] feature is spelled out on the verb. It seems that this feature must be spelled out in Chinese either in a verbal form or as a grammatical particle.5 We will assume that there is an S(F)-parameter associated with this predication feature, as languages can vary as to whether this feature is morphologically realized.

5 The A-not-A construction never co-occurs with the question particle ma, which suggests that "double spell-out" is prohibited in Chinese.
The last feature we will discuss is "operator". This feature is used as a cover term for such features as "scope", "topic" and "focus". The status of these features is open to discussion. We will assume that quantifier raising (QR), topicalization and focalization involve similar syntactic operations, i.e. putting a constituent in a prominent position. This is the view expressed by Chomsky:

"The natural assumption is that C may have an operator feature (which we can take to be the Q or wh-feature standardly assumed in C in such cases), and that this feature is a morphological property of such operators as wh-. For appropriate C, the operators raise for feature checking to the checking domain of C: [SPEC,C], or adjunction to specifier (absorption), thereby satisfying their scopal properties. Topicalization and focus could be treated the same way." (MPLT p. 45)

However, there seems to be evidence that these operations are syntactically distinct. In Hungarian, for instance, a quantified NP, a topic and a focus can apparently co-occur in a single sentence. Consider (42).6

(42) Mari mindenkinek Petit mutatta be
     Mary-Nom everyone-Dat Pete-Acc showed in
     'Mary introduced Pete to everyone.'

In this sentence, Mari is the topic, mindenkinek the raised quantified NP, and Petit the focus. All three of them are raised to the beginning of the sentence and they must appear in the order Topic < QP < Focus. To account for these facts, we will assume that there are distinct operator features but that they are checked through the same syntactic operation, namely, by raising a constituent to the Spec of CP through A-bar movement.

6 Example from Anna Szabolcsi.
The fact that more than one constituent can be raised in this way simply means that there is more than one operator. The multiple A-bar movements that seem to be involved may be handled in a way analogous to the treatment of multiple wh-movement. We can have a layered CP where the specifier position of each layer contains one operator. The different layers might be named Topic-P, Focus-P, etc. We can also let the operators adjoin to CP one after another. We can even put an ordered list of operators in the Spec of CP. For the purpose of our preliminary experiments, however, there is no need to commit ourselves to any of those options. Being minimal again, we will currently limit ourselves to those cases where only one operator feature is checked. This may be the scope feature, the topic feature, or the focus feature.

The next question is whether the operator feature is ever morphologically realized. The answer seems to be "yes". The wh-scope marker was in German, for instance, can be taken as an overt scope feature, while the topic marker -wa in Japanese can be considered an overt topic feature. A German example is given in (43) (same as (28)) and a Japanese example is given in (44).

(43) wasi glaubst du [CP wasi Hans meint [CP [ mit wem ]i Jakob ti
     WHAT believe you WHAT Hans think with whom Jakob
     gesprochen hat ]]
     talked has
     'With whom do you believe that Hans thinks that Jakob talked?'

(44) Taroo-wa sensei da
     Taroo-Topic teacher is
     'Taroo is a teacher.'

We will associate an S(F)-parameter with the operator feature to account for the fact that it is overt in some languages but not in others.

In sum, we have six S(F)-parameters, which are associated with the following features: case, agreement, tense, aspect, operator and predication. They will be called S(F(case)), S(F(agr)), S(F(tns)), S(F(asp)), S(F(op)) and S(F(pred)) respectively.
3.1.3 Features and Categories

So far we have been talking about V-features and NP-features as if only verbs and nouns had features. This is not true, of course. Every category, whether lexical or functional, has a set of features. Moreover, many features are found in more than one category. Typically, a feature appears in two different places, one in a lexical projection and one in a functional projection. The tense feature, for instance, exists in both T and V. We have called features in functional projections F-features and those in lexical projections L-features. Whenever a feature resides in both a functional projection and a lexical projection, feature-checking is required. To make sure that the F-feature and the L-feature agree in their values, the lexical element bearing the L-feature must move to the position where the F-feature is located. The only features that seem to have L-features only are the θ-features. The θ-grid is found in the verb only and the θ-roles are found in NPs only. All other features are generated in more than one position. The following table lists the features and the categories that contain them.
(45)               functional   lexical
     θ-grid        —            V
     θ-role        —            N
     tense         T            V
     aspect        Asp          V
     φ(1)          Agr1         V, NP1
     φ(2)          Agr2         V, NP2
     case(1)       Agr1         V, NP1
     case(2)       Agr2         V, NP2
     predication   C            V
     operator      C            NP

There are two sets of φ-features. φ(1) consists of the features involved in subject-verb agreement. φ(2) is related to object-verb agreement. NP1 and NP2 usually correspond to Subject and Object, but not always. Technically, NP1 is just the higher NP and NP2 the lower one. Parallel to the φ-features, there are also two sets of case features. Case(1) is the case assigned/checked in Agr1, and case(2) the one checked in Agr2. The NP with which the operator feature is associated can be any NP, subject or object.

Conceptually, we can think of the F-features as representing the information the speaker intends to convey and the L-features as the features to be physically realized. To ensure that "we say what we mean", so to speak, the lexical features must be checked against the functional features.7 As we will see, the dual presence of F-features and L-features can account for many interesting linguistic phenomena.

Each feature has a set of values. For instance, the value of the tense feature can be instantiated as present, past, future, etc. However, we are not interested in those specific values in this abstract model. What we will be focusing on is the spell-out of those features: whether they are morphologically realized in a given language, whatever their values may be.

7 The fact that the θ-features need not be checked this way does not mean that they are not checked. They do get checked, but the checking takes place in the process of lexical projection.
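The table in (45) can be recast as a small data structure; a feature requires checking exactly when it has both a functional and a lexical host. The encoding below is an illustrative assumption, not part of the grammar being specified:

```python
FEATURES = {
    # feature:      (functional host, lexical host(s))
    "theta-grid":   (None,   ("V",)),
    "theta-role":   (None,   ("N",)),
    "tense":        ("T",    ("V",)),
    "aspect":       ("Asp",  ("V",)),
    "phi(1)":       ("Agr1", ("V", "NP1")),
    "phi(2)":       ("Agr2", ("V", "NP2")),
    "case(1)":      ("Agr1", ("V", "NP1")),
    "case(2)":      ("Agr2", ("V", "NP2")),
    "predication":  ("C",    ("V",)),
    "operator":     ("C",    ("NP",)),
}

# Checking via movement is needed wherever an F-copy exists:
needs_checking = [f for f, (func, _) in FEATURES.items() if func]
print(needs_checking)   # every feature except the two theta-features
```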
3.1.4 The Spell-Out of Features

In 3.1.2, we assumed six S(F)-parameters. Here is a review of those parameters and the features they are associated with.

(46)  S(F)-Parameter   Feature
      S(F(case))       case
      S(F(agr))        agreement
      S(F(tns))        tense
      S(F(asp))        aspect
      S(F(op))         operator
      S(F(pred))       predication

When the S(F)-parameter of a given feature is set to 1, this feature will be spelled out in overt morphology. What is actually spelled out, however, can vary in different cases. First of all, there is a distinction between pre-Spell-Out checking and post-Spell-Out checking. When a feature is checked before Spell-Out, the lexical element carrying the feature will have moved to the functional position where this feature is checked. As a result, the L-feature and the F-feature will get unified before Spell-Out and become indistinguishable from each other. We may say that the F-feature disappears after checking and what is available for spell-out is the L-feature only. Thus the feature always shows up on an inflected lexical head if it is spelled out. By "inflected" we mean the lexical element has an affix, a special tone, or any other form of conjugation. In cases where the feature is checked after Spell-Out, the L-feature and the F-feature will co-exist in the representation fed into PF. Consequently, both of them will be available for potential phonological realization. Logically speaking, then, there are four possibilities for the spell-out of features, as we mentioned in the previous chapter:
• 84. of features, as we mentioned in the previous chapter:

(47) (i) Only the F-feature is spelled out;
     (ii) Only the L-feature is spelled out;
     (iii) Both the F-feature and the L-feature are spelled out;
     (iv) Neither the F-feature nor the L-feature is spelled out.

The possibilities in (i) and (iii) are available only if the feature is checked after Spell-Out. All four possibilities can be illustrated with examples from real languages. Let us take the tense feature as an example. In English, this feature must be spelled out, and what is overtly realized can be either the F-feature or the L-feature. In (48) the F-feature is spelled out: the verb does not move to TP, and the head of TP is overtly realized by itself as will. In (49) the L-feature is spelled out as a suffix to the verb.

(48) John will visit Paris.
(49) John visit-ed Paris.

It seems that double spell-out, i.e. spelling out both the F-feature and the L-feature, is prohibited in English, for (50) is ungrammatical.

(50) *John did visit-ed Paris.

In Swedish, however, both the F-feature and the L-feature can be spelled out, as we have seen in (24), repeated here as (51).

(51) [vp Oeppnade doerren ] gjorde han
     open-Past door-the do-Past he
     'He opened the door.'
68
• 85. In fact, the sentence will be unacceptable if the L-feature is not spelled out. In (52), the infinitive form of the verb oeppna is used instead of the past tense form oeppnade, and the sentence is out.8

(52) *Oeppna doerren gjorde han
     open (inf) door-the do-Past he
     'He opened the door.'

On the other hand, there are also languages where neither the L-feature nor the F-feature is spelled out. Chinese offers examples of such null spell-out:

(53) Ta meitian qu xuexiao
     he/she everyday go school
     'He/she goes to school every day.'

(54) Ta mingtian qu xuexiao
     he/she tomorrow go school
     'He/she will go to school tomorrow.'

(53) is in the present tense while (54) is in the future tense, but there is no overt tense marking at all.9 We have thus seen that all four logical possibilities in (47) are empirically attested. This suggests that the S(F)-parameter can have two sub-parameters,

8This sentence and the judgment on it are from Christer Platzack.
9There are certain things in Chinese that are arguably tense markers. For example, jiang and hui can be treated as future tense markers, and the perfective marker le can be treated as a past tense marker (Chiu 1992), as we can see in (55) and (56).

(55) Ta mingtian jiang qu xuexiao
     he/she tomorrow JIANG go school
     'He/she will go to school tomorrow.'

(56) Ta zuotian qu le xuexiao
     he/she yesterday go LE school
     'He/she went to school yesterday.'

But even if these are tense markers, it still remains true that at least in some sentences the tense feature is not overt.
69
• 86. one for the spell-out of L-features and one for the F-features. We will represent these two sub-parameters by splitting the value of each S(F)-parameter in the form X-Y. The value of X determines whether the F-feature is spelled out and Y the L-feature. The four possibilities in (47) will then correspond to the following parameter values.

(57) Features Spelled Out            Value of S(F)-parameter
     (i) F-feature only              1-0
     (ii) L-feature only             0-1
     (iii) both F- and L-features    1-1
     (iv) neither F- nor L-features  0-0

The values in (i) and (iii) are possible only if feature-checking takes place after Spell-Out, because only in these cases will both the L-feature and the F-feature be available at the time of Spell-Out. The significance of these sub-values will be further discussed in Chapter 4, where more examples will be given to illustrate those possibilities.
70
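The X-Y split and its dependence on the timing of checking can be stated compactly. Here is a small sketch in the same Prolog-style notation (my own encoding; the predicate names spelled_out/2 and consistent/2 are hypothetical):

    % An S(F)-parameter value X-Y: X governs the F-feature, Y the
    % L-feature (1 = spelled out, 0 = not spelled out).
    spelled_out(1-_, f_feature).
    spelled_out(_-1, l_feature).

    % A value is consistent with a checking regime. Spelling out the
    % F-feature (X = 1) presupposes post-Spell-Out checking, since after
    % pre-Spell-Out checking the F-feature is no longer available.
    consistent(X-Y, Checking) :-
        member(X, [0, 1]),
        member(Y, [0, 1]),
        (   X =:= 1
        ->  Checking = post_spell_out
        ;   member(Checking, [pre_spell_out, post_spell_out])
        ).

The query ?- consistent(1-1, C). yields only C = post_spell_out, while ?- consistent(0-1, C). allows either regime, matching the restriction just stated for (i) and (iii).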
• 87. 3.2 The Computational System

There are three major operations in the computational system: lexical projection (LP), generalized transformation (GT) and move-α. We will specify them one by one.

3.2.1 Lexical Projection

3.2.1.1. Basic Operation

In our system every category X projects the tree in (58).

(58)        XP
          /    \
      (ZP)      X1
              /    \
             X      (YP)

     An Elementary X-bar Tree

This is different from the lexical projection described in MPLT. We assume that the projections are invariable, with every projection resulting in an XP (i.e. X2). In MPLT, however, the projection only goes "as far as it needs to" and what is projected can be an X0, an X1 or an XP. In addition, the specifier and complement positions do not appear in the initial projection in MPLT. They are added later in the GT process. Consequently, the distinction between substitution and adjunction is in fact gone. In our system, however, this distinction is maintained. Generally speaking, any position which is obligatory is generated in the process of lexical projection. Attachment or movement to these positions is therefore substitution. All positions that are optional are not base-generated. They are added to the structure through adjunction in the process of GT or move-α.
71
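To make the notion of an elementary tree with optionally selected, initially empty positions concrete, here is a sketch of my own (the predicates project/2 and the sample facts are hypothetical, anticipating the selectional rules of 3.2.1.2; 'e' marks an absent position, and a selected but unfilled position is simply an unbound variable awaiting substitution):

    % An elementary X-bar tree is the term xp(Spec, x1(Head, Comp)).
    % Absent positions are marked 'e'; selected positions start out as
    % unbound variables, i.e. empty slots to be filled by GT or move-alpha.
    project(X, xp(Spec, x1(X, Comp))) :-
        ( specifier(X, e)  -> Spec = e ; true ),
        ( complement(X, e) -> Comp = e ; true ).

    % Two sample selectional facts (the full set is given in 3.2.1.2):
    specifier(v, n).      % V selects an NP specifier
    complement(v, e).     % an intransitive V takes no complement

Here ?- project(v, T). returns T = xp(_Spec, x1(v, e)): the specifier slot is a genuine empty position awaiting attachment, while the complement is absent altogether.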
• 88. Now let us take a closer look at (58), which will be called an elementary X-bar tree. The fact that ZP and YP appear in upper-case letters indicates that they are empty when the tree is projected. They contain a set of features but no lexical content. In the process of syntactic derivation, they will either be filled by a subtree or be licensed to remain empty. In the former case, they serve as attachment points for GT or move-α operations. The parentheses around ZP and YP indicate their optionality: not every projection contains them. The actual status of ZP and YP is determined by X. ZP can appear as a specifier of X if and only if X selects a ZP specifier. This selectional rule can be represented as (59).

(59) specifier(X,Z)

Similarly, YP can appear as a complement of X if and only if X selects YP as a complement. This can be represented as (60).

(60) complement(X,Y)

The specifier can be absent if the rule in (61) exists and the complement can be absent if (62) exists.

(61) specifier(X,e)
(62) complement(X,e)

The rules in (59) and (60) do not tell us the directionality of the specifier or complement. In any actual implementation of these rules, however, the direction has to be made specific. To generate the tree in (58), for example, we must make clear that the specifier is to occur to the left of the head and the complement to the right of the head. We have decided in 2.2.4 that we will adopt a weaker version of the
72
• 89. Invariant X-bar Structure Hypothesis where there are only two HD-parameters: a head-complement parameter for CP and a head-complement parameter for IP. The specifiers always occur to the left of their heads. The complements always occur to the right of their heads except those in functional categories. The positions of heads in functional categories are determined by two HD-parameters: HD1 and HD2. The value of HD1 determines the position of the head in CP and the value of HD2 determines the head position in IP. HD1 and HD2 each have two values: I (head-initial) and F (head-final). It is assumed here that each category can take at most one specifier and one complement. As a result, the tree to be generated will never be more than binary branching. The assumption that each category can take no more than one specifier (Spec) is well accepted. The single-complement assumption, however, may seem to be unsupported at first sight. There are obviously structures where two or more complements are found. The double-object construction is an example of this. However, the fact that some categories need more than one complement does not mean that any single elementary X-bar tree has to contain more than one complement position. Adopting the layered-VP hypothesis of Larson (1988), we can let a category project more than one elementary X-bar tree, with each tree taking only one complement. For instance, instead of (63(a)), we can have the tree in (63(b)) where both UP and WP are generated while binary branching is maintained.
73
• 90. (63) (a)     xp              (b)     xp
             /    \                  /    \
           zp      x1              zp      x1
                 / |  \                  /    \
                x  up  wp               x      xp
                                             /    \
                                           up      x1
                                                 /    \
                                                x      wp

     Multiple Complements in a Binary Tree

The structure in (63(b)) will be discussed further when we come to the projection of VP.

3.2.1.2. Selectional Rules (Simplified)

Now we define the selectional rules for each category. We will tentatively assume that UG contains the rules in (64).

(64) specifier(c,x)      (i)
     specifier(agr1,n)   (ii)
     specifier(t,e)      (iii)
     specifier(asp,e)    (iv)
     specifier(agr2,n)   (v)
     specifier(v,n)      (vi)
74
• 91.
     complement(c,agr1)    (vii)
     complement(agr1,t)    (viii)
     complement(t,asp)     (ix)
     complement(asp,agr2)  (x)
     complement(agr2,v)    (xi)
     complement(v,v)       (xii)
     complement(v,e)       (xiii)

     Selectional Rules

Some notes on these rules are in order here. There are no HD-parameters associated with the specifier rules: the specifier selected by the head always occurs to the left of the head. The complement rules which are sensitive to the values of HD-parameters are (vii), (viii), (ix), (x) and (xi). The head in (vii) precedes the complement when HD1 is set to I and follows the complement when HD1 is set to F. The heads in (viii), (ix), (x) and (xi) precede their complements when HD2 is set to I and follow their complements when HD2 is set to F. The fact that the value of HD2 applies in all of these rules reflects what we assumed in 2.2.4: IP is treated as a whole regardless of how many independent projections it may contain. In other words, Agr1-P, TP, AspP and Agr2-P are to be treated as segments of a single IP. Therefore they share the value of a single HD-parameter. The application of (xii) is not subject to the value of any HD-parameter: a verb always precedes its complement. The Spec of CP (Cspec hereafter) can be any maximal projection. We use "x" to represent this. The Specs of Agr1, Agr2, and V are all NPs in these rules. (From now on, we will refer to these positions as Agr1spec, Agr2spec, and Vspec
75
• 92. respectively.) This is again a simplification. In a more complete system, CPs or IPs should also be able to appear in those Spec positions. We notice that T and Asp do not have a Spec position. This does not mean that there is any principled reason against these categories having a specifier. There have been many syntactic arguments which rely crucially on the presence of this position (e.g. Bobaljik and Jonas (1993)). It is not present in this grammar simply because we are trying to keep the system as small as possible. The rules in (64) are applicable to both transitive and intransitive sentences. In other words, both Agr1-P and Agr2-P will be projected no matter whether there is an object or not. When a sentence is intransitive, one of the agreement phrases may be inactive. I will basically follow Bobaljik (1992), Chomsky (1992), and Laka (1992) in assuming that there is an Obligatory Case Parameter whose value determines which case-assigner (Agr1 or Agr2) is active in an intransitive sentence. According to this assumption, we get a nominative/accusative construction if Agr1 is active and an ergative/absolutive construction if Agr2 is active. There is evidence that both Agr1 and Agr2 exist in a single intransitive sentence. In some languages case marking identifies the patient of a transitive verb with the intransitive subject, while agreement identifies the agent of a transitive verb with the intransitive subject. In these situations, the case system is ergative while the agreement system is nominative. We have to conclude, then, that both Agr1 and Agr2 can be partially active in an intransitive sentence.

3.2.1.3. Selectional Rules (Featurized)

So far the selectional rules and the projection trees have been presented in a simplified form with a lot of information left out. The nodes in (58) contain
76
• 93. nothing but the category label, and the selectional rules in (64) only tell us the categorial status of a specifier or complement. It is obvious that the nodes do not consist of category labels only. Each node is a set of features, the category label being just one of them. Take the verb catches as an example. It has at least the syntactic features in (65).

(65) Category: v
     θ-grid: [agent, patient]
     Tense: present
     φ-features: [person:3, number:s]

The specifier or complement selected by the category is also a bundle of features. For instance, Agr1spec may have the following features:

(66) Category: n
     Case: 1
     φ-features: X

The value of the φ-features, represented here as a variable, is selected by the head of Agr1. This selectional relation can be seen in (67), which is the maximal projection of Agr1.
77
• 94. (67) [tree not reproduced here: the Agr1-P projection, in which the Agr1-P node and its head Agr1-0 bear [case:1, phi:X], the NP specifier bears [case:1, phi:X], and the TP complement bears [tense:Y]]

     Feature Value Binding in Agr1-P

The variable "X" is found in both the specifier and the head. This indicates that the two nodes must have "unifiable" or non-distinct φ-features.10 This is the way Spec-head agreement is achieved. The actual value of this feature is not an intrinsic property of Agr1. It depends on (a) the verb that moves to Agr1-0 and (b) the NP that moves to Agr1spec. What Agr1 plays is a mediating role. It ensures that, whatever the value may be, it must be shared by the specifier and the head. The structure in (67) also tells us that the NP specifier must have Case 1, i.e. the case assigned by Agr1. The NP to be attached or moved to this position must have the same case. In this way the case-marking of nouns will get checked. Notice that the case feature also appears in the Agr1-0 and the Agr1-P node. This means that case is an intrinsic feature of Agr. The specifier of Agr gets the value of its case feature via Spec-head agreement. The presence of the case feature in Agr also

10"Unifiable" is used here in the standard sense of unification. Intuitively, two sets of features are unifiable if they do not have incompatible values. For instance, the feature matrices [person:3,number:p,gender:m] and [person:3,number:p,gender:Y] are unifiable (X and Y are variables meaning "any value"), while [person:2,number:p,gender:f] and [person:3,number:p,gender:f] are not unifiable: the values of person clash with each other.
• 95. explains why case is a V-feature as well. A verb acquires the case feature, or has a case feature checked, when it moves to Agr-0 through head movement. The complement in this projection tree is a TP whose tense feature has a variable "Y" as its value. The value will be instantiated when a TP is attached to this position. To generate the tree in (67), we need two "featurized" selectional rules, such as the ones in (68).

(68) specifier(agr1(case:1,phi:X), n(case:1,phi:X))
     complement(agr1,t)

Obviously, all the rules in (64) need to be featurized this way. But there is one more point to be elaborated on before we do this. I have mentioned earlier that, in order to preserve binary branching, we will adopt the "Layered-VP" hypothesis of Larson (1988). The VP structure assumed in this experimental grammar is a pseudo-Larsonian one which in a way carries Larson's idea to the extreme. In addition to the general layered-VP structure, we also assume the following:

(i) The number of VP layers corresponds to the number of arguments a verb takes, or the number of θ-roles it assigns. In other words, each layer of VP will contain exactly one argument.

(ii) The argument in every VP layer appears in Vspec.

The assumption in (i) in fact follows from (ii). Each layer of VP can have only one specifier and therefore we need as many layers as the number of arguments. The VP tree for a transitive verb will look like the following:
79
• 96. (69)        vp
            /    \
        np[1]     v1
                /    \
               v      vp
                    /    \
                np[2]     v1
                          |
                          v

     VP-Projection of a Transitive Verb

These assumptions have the following consequences. First, the distinction between internal and external arguments is eliminated. What remains is just a thematic hierarchy. An argument can just be relatively "higher" or "lower" than some others. What is traditionally called the "subject" is simply the highest argument in the VP-shell. The Extended Projection Principle is now translated into the requirement that every sentence must contain at least one argument. There is no longer the need to explain, for example, why an NP with a "Theme" role can be either an internal or an external argument. The kind of argument promotion observed in (70a) and (70b) now receives a natural account.

(70) (a) The man opened the door.
     (b) The door opened.

Presumably the agent theta role, carried by the man, is higher in the thematic hierarchy than the patient or theme role carried by the door. When both theta roles are present in a sentence, as in (70a), the door must take a lower position in the VP tree and be the object of the sentence. When the agent the man is absent
80
• 97. as in (70b), however, the door is promoted to subjecthood since there is no other NP in the sentence that carries a "higher" theta role. Passive and unaccusative constructions are also accounted for. The passive verb has had its first θ-role "absorbed". Therefore the VP projected from a passive verb will not have the layer which is the top one in its active counterpart. The remaining arguments are thus promoted one layer up. The argument which is originally in the second layer is now in the first one and treated like a subject. The unaccusative verb has only one θ-role to assign, so only one layer of VP is projected. Since this single layer is the top layer, the argument of an unaccusative verb can enjoy subjecthood. This is just another way of stating Burzio's generalization11 (Burzio 1986). Secondly, the VP structure assumed here lets θ-role assignment be performed uniformly in the Spec-head configuration. This is conceptually appealing because θ-role assignment, case-checking and agreement-checking now involve the same type of operation, namely Spec-head agreement. We thus have a more general and more consistent notion of the Spec position being the checking domain of each category. The structure in (69) consists of two elementary X-bar trees, but they are in fact projected from a single head. The verb exists as a chain, each V0 being one of its links. The two links are identical except for the number of θ-roles they contain. The higher one has two while the lower one has only one. This does not mean that

11Burzio's Generalization:
(i) A verb which lacks an external argument fails to assign accusative case.
(ii) A verb which fails to assign accusative case fails to theta-mark an external argument.
81
• 98. two different verbs are involved here. The difference is used as a computational device which makes sure that the correct number of layers is projected. We have seen in (64) that there are two complement rules for V: it can take either a VP complement or no complement. The choice is determined by the θ-feature. If the θ-grid contains only one θ-role, this V will have no complement and the current VP will be the last layer. If the θ-grid contains n+1 (for any n > 0) θ-roles, this V will take a VP complement. The θ-grid of this VP complement will contain n θ-roles, with the understanding that the other role has been assigned in the higher layer. In each layer, the θ-role being assigned is always the first one in the list. This role is removed from the list after the assignment so that the next role can be assigned in the next layer. The layer-by-layer stripping of θ-roles also ensures that eventually there will be only one role left, so that the VP projection will terminate. In the case of (69), the verb has two θ-roles to assign. No more VP complement is permitted after the second layer because there is no more theta-role to assign. Now we are ready for a "featurized" version of (64). The new lexical projection rules are given in (71). The structure F:V means that the feature F has V as its value.

(71) specifier(c,x(op:+))                               (i)
     specifier(agr1(case:1,phi:X),n(case:1,phi:X))      (ii)
     specifier(t,e)                                     (iii)
     specifier(asp,e)                                   (iv)
     specifier(agr2(case:2,phi:X),n(case:2,phi:X))      (v)
     specifier(v(θ-grid:[Th,...]),n(θ-role:Th))         (vi)
82
• 99.
     complement(c,agr1)                                      (vii)
     complement(agr1,t)                                      (viii)
     complement(t,asp)                                       (ix)
     complement(asp,agr2)                                    (x)
     complement(agr2,v)                                      (xi)
     complement(v(θ-grid:[Th1,Th2,...]),v(θ-grid:[Th2,...])) (xii)
     complement(v(θ-grid:[Th]),e)                            (xiii)

     Featurized Selectional Rules

The θ-grids in these rules contain variables like "Th1", "Th2", etc. instead of names like "agent" and "patient".12 This is done for the sake of generality. The notation means that, given any two θ-roles, "Th1" is higher than "Th2" in the thematic hierarchy and is to be assigned in a higher layer of VP. The op(erator) feature in Cspec has the value "+". This means that the NP or any XP to be substituted into this position is going to be the operator, i.e. it will receive the wide-scope, topic, or focus interpretation.

3.2.2 Generalized Transformation

The LP operation described in the previous section produces a set of elementary X-bar trees. The function of Generalized Transformation (GT) is to combine those trees into a single tree. In MPLT, there is only one type of GT operation, which subsumes both substitution and adjunction. In both cases, we add an empty position to the target tree (which looks like adjunction) and then substitute a subtree into this position. This will not be the version of GT to be assumed in the present model.

12The "..." in the θ-grids represents the rest of the theta-roles, which can be empty.
83
• 100. As I have stated earlier, we will maintain the distinction between substitution and adjunction, the former associated with obligatory constituents and the latter with optional ones. In GT substitution, a subtree K is substituted into an empty position ∅ in the target tree K', resulting in K*. The empty position ∅ in K' is either a specifier position or a complement position which has already been generated in the process of Lexical Projection. The position is empty in the sense that it has features but no lexical content. It is an attachment site into which another tree may be substituted. The substitution is possible only if the attachment site and the subtree to be attached have compatible (i.e. unifiable) features. For instance, the subtree to be attached to the Agr1spec in (67) must be an NP whose case feature has the value 1 and whose φ-feature has the value X. If X has already been instantiated to [person:3, number:s] in Agr1, only a third person singular NP can be attached to this point. If the X in Agr1 is instantiated to [person:3, number:N] where N is a variable, however, either a singular or a plural third person NP can be substituted. (72) and (73) illustrate the two basic cases of GT substitution. In (72a), an NP is being substituted into Agr1spec. The tree that results is (72b). Notice the unification of feature values in the substitution process. ("3sm" is a short-hand form of [person:3, number:s, gender:m].) In (73a), a TP is being substituted into the complement position of Agr1-P. The result is (73b).
84
• 101. (72) [tree pair not reproduced here: in (a), an NP bearing [case:1, phi:3sm] is being substituted into the empty Agr1spec of an Agr1-P whose head bears [case:1, phi:X]; in (b), the resulting tree, where unification has instantiated X to 3sm]

(73) [tree pair not reproduced here: in (a), a TP bearing [tense:pres] is being substituted into the empty complement position of the Agr1-P; in (b), the resulting tree, where the tense variable Y has been instantiated to pres]

     GT Substitution
85
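Since attachment sites and subtrees are both feature matrices, GT substitution is just unification, which Prolog performs natively on terms. A minimal sketch of my own (with feature lists kept in a fixed attribute order so that plain term unification does the work; the predicate names are hypothetical):

    % A node is a feature list [cat:C, case:K, phi:Phi]. Substituting a
    % subtree into an attachment site succeeds iff the two unify.
    substitute(Site, Subtree) :- Site = Subtree.

    % The Agr1spec attachment site of (67): an NP slot with Case 1 and
    % phi-features left open (the variable X of the text).
    agr1spec_site([cat:n, case:1, phi:_X]).

    % Attaching a third person singular masculine NP ("3sm" in (72)):
    demo(Site) :-
        agr1spec_site(Site),
        substitute(Site, [cat:n, case:1, phi:[person:3, number:s, gender:m]]).

Here ?- demo(S). instantiates the open phi value to [person:3, number:s, gender:m], the effect shown in (72b); a subtree with case:2 would simply fail to unify, so the substitution would be blocked.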
• 102. In GT adjunction, a subtree is added or inserted into a constituent. In this experimental model, we will only be concerned with adjunction to X1. In other words, we will only consider the adjunctions whose function is to add modifiers into the structure. The subtree to be adjoined to an X1 must be a maximal projection. In this GT process we create an additional segment of X1 which contains an empty position ∅ and then attach a subtree to ∅. In (74), an extra segment of X1 is created and AP is substituted into the empty position contained in this extra X1.

(74) [tree not reproduced here: a new X1 segment is stacked on top of the original X1, and the modifier is substituted into the empty position ∅ contained in the new segment]

     GT Adjunction

We assume that all adjunctions are left-adjunctions. We also assume that the attachment point created during the adjunction has certain selectional properties, so that each category will only accept a certain class of modifiers. For instance, the adjunction site in a V1 may require the adjunct to be an AdvP. Therefore we will not be able to adjoin an NP to a V1. If an adjunct is acceptable to two or more X1's, it can then choose to adjoin to any of them. I will not try to specify a full theory of modifier adjunction here. Some further discussion of this will be given when the need arises. GT operations are applied recursively on pairs of trees until there is only a
86
• 103. single tree left. If there are two or more subtrees left and no GT operation can apply to reduce them to a single tree, the derivation crashes. At this point, we might be interested to see what structures are produced by LP and GT in our system. Given the rules in (71), we can get 4 different CP structures for each type of verb by varying the values of HD1 and HD2. For illustration, we will look at one of the 4 structures, the one where both HD1 and HD2 are set to I. We will demonstrate it with two types of verbs: a transitive verb that takes two NP arguments and an intransitive verb that takes one argument. The former will be illustrated by the English sentence Mary caught him and the latter by Mary is swimming. The structures generated by LP and GT for these two sentences are given in (75) and (76) respectively. (All the nodes in these trees should have features other than the category label, but to save space the other features are omitted in all but the terminal nodes.)
87
• 104. (75) [tree not reproduced here: the base tree of Mary caught him — a CP dominating Agr1-P, TP, AspP, Agr2-P and a two-layer VP, with Mary in the higher Vspec bearing the agent role, him in the lower Vspec bearing the patient role, and caught in V0 with tense:past and θ-grid:[agent,patient]]

     Base Tree of a Simple Transitive Sentence
• 105. (76) [tree not reproduced here: the base tree of Mary is swimming — the same functional skeleton over a single-layer VP, with Mary in Vspec bearing the agent role and swimming in V0 with aspect:prog and θ-grid:[agent]]

     Base Tree of a Simple Intransitive Sentence

Some specific points about these trees are worth mentioning.
89
• 106. Firstly, we see that all the lexical items appear VP-internally. Each of the NPs is in a position to which a θ-role is assigned; none of them, however, is in a position where its case can be checked. This is different from the traditional view that internal arguments are assigned case VP-internally under government. In our system, there are no internal arguments, and government does not play a role in case-checking at all. Every NP is drawn from the lexicon together with its case feature, but this feature cannot be checked VP-internally. To satisfy the checking requirement, the NP must move to the Spec position of one of the agreement phrases. This kind of movement will be discussed in 3.2.3. Secondly, the copula is in Mary is swimming does not appear in the tree. This follows from our assumption that is is an expletive which is not base-generated. It is inserted in the Spell-Out process as a way of overtly realizing the features in Agr1-0 and T0. Finally, we find in those trees all the features we have assumed. The values of these features are constants in some cases and variables in others. (All the upper-case letters stand for variables.) The variables all represent unspecified values, but their syntactic status can differ depending on whether the feature is an F-feature (a feature in a functional category) or an L-feature (a feature in a lexical category). The variables in functional projections are all used for agreement. Two nodes are supposed to have the same value for a certain feature if the same variable appears in both. For instance, the values of φ-features in both Agr1-P and Agr2-P are variables. The fact that φ has X or Y as its value in both the head and the Spec of AgrP ensures that the subject/object and the verb will agree in their φ-features. The values of these features will be instantiated when the VP-internal NPs move to the Specs of AgrPs and the verb moves to the heads of AgrPs.
90
• 107. The variables in the lexical projections indicate that the features in question are morphologically unspecified. In other words, there are no morphemes in the lexicon that represent the values of those features. In (75), for example, the NP Mary is morphologically unspecified for the case feature and the verb caught is unspecified for the aspect feature. The features will get instantiated when movement takes place for feature-checking. In (76), swimming is specified for the aspect feature, which is morphologically realized as the suffix -ing. But it is not morphologically specified for the tense feature. Hence the variable for the tense feature. The values of the operator features in the two NPs (Mary and him) are also variables. When the sentence is used in a given context, however, one of them can get the "+" value, and only this NP can eventually move to Cspec.

3.2.3 Move-α

In our present system, movement takes place for no other reason than feature-checking. Following Chomsky's principle of Procrastinate (MPLT), which requires that no movement take place unless it is the only way to save a derivation from crashing, we will assume that a movement occurs if and only if there is an LF checking requirement whose satisfaction depends solely on this movement. We should be reminded at this point that the movements we are discussing here are LF movements, which are universal. They take place in every language by LF, though only a subset of them may be visible in a particular language.
91
• 108. 3.2.3.1. Movement as a Feature-Checking Mechanism

The necessity of movement in feature-checking can be viewed from two different perspectives. From the viewpoint of lexical items, we see that a given word may have two or more features, each of which must be checked in a different structural position. Take NP as an example. UG requires that it be assigned a θ-role and be checked for case. However, θ-roles are assigned in Vspecs only and cases are checked in Agrspecs only. To meet both requirements, an NP must move from one position to the other, which forms a chain linking the two positions. Once this occurs, the NP exists as a chain rather than a single node. It enters a structural relation whenever one of its links is in the required position for that relation. From the viewpoint of features, we see that most features are found in more than one node. In (75) and (76), for instance, the tense feature appears in both TP and VP. To make sure that a given feature has the same value throughout the whole structure, we have to form chains to link nodes which are related by feature-checking movements but are not in the same projection. All the chains in our system are formed in this way. Since movement occurs for feature-checking only, we get the set of movements required by UG by locating all the features whose checking requires movement. As we have seen, only those features which appear in more than one projection need to be checked through movement. Furthermore, in all the cases where a feature appears in two different projections, one of them is in a lexical projection and the other in a functional projection. This is clear in (45), (75) and (76). To see what movements are required, we only have to list all the features that are both L-features and F-features. According to (45), they include the following: tense, aspect, φ(1), φ(2), case(1), case(2), predication and operator.
92
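This procedure — read one movement off every feature with both an L-side and an F-side — can be made mechanical. A sketch of my own, reusing the hypothetical feature/3 encoding suggested for the table in 3.1.3:

    % feature(Name, FunctionalHost, LexicalHosts), as in 3.1.3.
    feature(tense,       t,    [v]).
    feature(aspect,      asp,  [v]).
    feature(phi(1),      agr1, [v, np1]).
    feature(phi(2),      agr2, [v, np2]).
    feature(case(1),     agr1, [v, np1]).
    feature(case(2),     agr2, [v, np2]).
    feature(predication, c,    [v]).
    feature(operator,    c,    [np]).

    % Each lexical host must raise to the projection hosting the F-feature.
    required_movement(move(LexicalHost, to(FunctionalHost))) :-
        feature(_Name, FunctionalHost, LexicalHosts),
        member(LexicalHost, LexicalHosts).

The query ?- setof(M, required_movement(M), Ms). returns the verb's raisings to t, asp, agr1, agr2 and c, together with np1-to-agr1, np2-to-agr2 and np-to-c — essentially the inventory listed in (77) below.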
• 109. The tense feature is found in both T and V. In order for the feature in V to be checked against the feature in T, the verb must move to T. The aspect feature is found in both Asp and V. Therefore, the verb must move to Asp for feature-checking. The predication feature is found in both C and V. Forced by the feature-checking requirement, the verb must move to C. The operator feature is found in both Cspec and NPs. For feature-checking, one of the NPs must move to Cspec. Since the value of the operator feature is always "+" in Cspec, only the NP which is the operator can move there. The case feature is found in both Agrspecs and NPs. Therefore, each NP must move to some Agrspec to have its case feature checked. NP1 must move to Agr1spec and NP2 to Agr2spec. We assume that, when both NP1 and NP2 are present in the VP projection, NP1 cannot move to Agr2spec, nor can NP2 move to Agr1spec. There are various ways to account for this restriction. In MPLT, this restriction is supposed to be derived from the notion of equidistance. I will not go into the mechanisms that implement this notion. At an intuitive level, we can view the restriction as a special way of observing the Principle of Economy, which requires, among other things, that short moves be preferred over long moves. If we move NP1 to Agr1spec and NP2 to Agr2spec, both movements will be relatively short. If we move NP1 to Agr2spec and NP2 to Agr1spec, however, the NP1-to-Agr2spec movement will be very short but the NP2-to-Agr1spec movement will be longer than either of the two movements in the previous case. This economy-based argument is not well understood yet, but we can get the same effect from some simpler principles. In our model we assume that the case hierarchy and the thematic hierarchy in a sentence must agree with each other. Given two NPs, NPn and NPn+1, and two θ-roles θn and θn+1 with θn preceding θn+1 in the θ-grid of the verb, NPn must have its case checked in a higher
93
• 110. case position if NPn is assigned θn and NPn+1 is assigned θn+1. (A case position (Agrspec) is higher than another one if the former asymmetrically C-commands the latter.) Intuitively, this assumption simply means that the subject must be assigned the subject case and the object the object case. In passive constructions, the first θ-role in the θ-grid is suppressed and the one that follows it will become the first. As a result, the NP assigned this promoted θ-role is free to move to the highest case position. In unaccusative constructions, the "subject" θ-role is missing from the θ-grid. Consequently, some other role will be the first in the grid and the NP assigned this role can go to the highest case position. The φ-features are similar to the case features in that they are both NP-features and V-features. In terms of the NPs, the φ-features are found in both Agrspecs and the NPs. During feature-checking, NP1 must move to Agr1spec and NP2 must move to Agr2spec. The movement patterns are identical to those involved in case-checking. In terms of the verb, the φ-features are found in Agr1-0, Agr2-0 and V0. The verb therefore must move to Agr1-0 and Agr2-0 to have the features checked. During the movement, the verb will also pick up the case features in Agr1 and Agr2. This indicates that case and agreement are two sides of the same coin. They have the common function of identifying grammatical relations.
94
• 111. To sum up, we list in (77) all the movements forced by feature-checking.

(77) A. The verb must move to Agr2-0 to have its φ- and case features checked for object-verb agreement.
B. After moving to Agr2-0, the verb must move to Asp0 to have its aspect feature checked.
C. After moving to Asp0, the verb must move to T0 to have its tense feature checked.
D. After moving to T0, the verb must move to Agr1-0 to have its φ- and case features checked for subject-verb agreement.
E. After moving to Agr1-0, the verb must move to C0 to have its predication feature checked.
F. NP1 must move to Agr1spec to have its φ- and case features checked.
G. NP2 must move to Agr2spec to have its φ- and case features checked.
H. After moving to an Agrspec, one of the NPs must move to Cspec to have its operator feature checked.13

From now on, we will refer to these movements as M(agr2), M(asp), M(tns), M(agr1), M(c), M(spec1), M(spec2) and M(cspec) respectively. These movements are illustrated graphically in (78) with the English sentence Mary caught him, where Mary is NP1 and him is NP2.

13This implies that every sentence has an underlying topic or focus or an NP that receives a wide-scope interpretation.
95
• 112. (78) [diagram not reproduced here: the base tree of Mary caught him annotated with the eight movements — the verb raising successively through Agr2-0, Asp0, T0 and Agr1-0 to C0, NP1 raising to Agr1spec, NP2 raising to Agr2spec, and either NP raising on to Cspec]

     Feature-Checking Movements
96
• 113. We can see that M(agr2), M(asp), M(tns), M(agr1) and M(c) are head movements, M(spec1) and M(spec2) are A-movements, and M(cspec) is an A-bar movement. There are two instances of M(cspec) in the diagram. One involves NP1 moving to Cspec while the other involves NP2. In a particular sentence only one of the movements can occur. Which one occurs depends on which of the NPs is the topic or focus of the sentence.

3.2.3.2. Movement in Operation

Having identified the set of movements involved in feature-checking, we will now take a closer look at the computational operation involved in these movements. It has been assumed that all the movements in our system are raising movements. Lowering is prohibited. Therefore, it is illegal to have any "yoyo" type of movement where a constituent moves up and then down, or down and then up. Other operational constraints on movement are discussed below.

Movement as Substitution. All the movements discussed here are substitution operations. The landing site of every movement is an existing attachment point, i.e. an empty node created in lexical projection. The substitution is possible only if the moved element and the landing site have identical categories and compatible feature values. It is not possible, for example, to move an X0 to an XP or vice versa. Nor is it possible to move an NP to a position where a different value is required for the case or φ feature. This guarantees that all the movements are structure-preserving. The substitution operation is obvious in the cases of A-movement and A-bar movement. The landing sites of these movements are all Spec positions projected in LP: Agr1spec in the case of M(spec1), Agr2spec in the case of M(spec2), and
97
• 114. Cspec in the case of M(cspec). In cases of head movement, however, this is less obvious. At first sight, substitution seems to be impossible. How can a V0 substitute for a T0 or C0, for instance? For a node to serve as the landing site of a movement, it must (a) be empty, and (b) have feature values which are unifiable with those of the moved element. The condition in (a) seems to hold. The landing sites of V-movement are all heads of functional categories, which are feature matrices without lexical content. The condition in (b), however, looks a little problematic. For one thing, the landing site and the moved element do not seem to have the same categorial features. We seem to be substituting a V for a T, an Agr, a C, etc., which should be impossible. This is one of the reasons why head movements have been standardly treated as adjunction rather than substitution operations. But a second thought on the status of C, Agr, T and Asp suggests that the substitution story is plausible. These categories are after all extended V-projections. Since none of these functional categories has a lexical head, all of them can be said to have been projected ultimately from the verb. In other words, they are just some additional layers of the V-projection. This is the view expressed in Grimshaw (1991), according to which the V, Agr2, Asp, T, Agr1 and C projections form an extended projection. All the heads in this single projection can share the same set of categorial features. This being the case, there should be no reason why substitution is impossible. We therefore assume in this experimental grammar that head movement involves substitution instead of adjunction (cf. Emonds (1985), Roberts (1991, 1992)). When a verb is substituted into the head of a functional category, the two heads will merge into one. We will call this new head V, with the understanding that all the features of the original functional head have been preserved. We choose to call it V rather than T or Asp because the features of this
98
• 115. new head are spelled out on the verb in the form of verbal inflection. The diagram in (79) illustrates the substitution involved in a head movement where a verb moves to the head Agr2-0. (79a) shows the pre-movement structure and the movement which is taking place. (79b) shows the post-movement structure.

(79) [tree pair not reproduced here: in (a), the V0, bearing [aspect:A, tense:past, θ-grid:[agt,pat]], is moving to the empty Agr2-0 head, which bears [case:2, phi:Y]; in (b), the two heads have merged into a single V0 bearing the union of those features]

     Head Movement as Substitution

The kind of head movement assumed here fails to make some of the predictions that are made by the standard version of head movement. In head movement by adjunction, the moving head gets attached to the target head either from the left or from the right, so the head and the affix will appear in a certain linear order.
99
• 116. In (80), for instance, the verb has moved to C0 through Agr2-0, Asp0, T0 and Agr1-0.

(80) [diagram not reproduced here: head movement by successive adjunction, with the verb adjoining first to Agr2-0, then to Asp0, T0 and Agr1-0, and finally to C0, building up a verbal complex (boxed in the original diagram)]

     Head Movement as Adjunction

The successive adjunction results in a big verbal complex, which is boxed in the diagram. Suppose that in this language agreement features and tense features are morphologically realized as suffixes. Then the structure of this verbal complex predicts that the inflected verb will be spelled out as V-T(ense)-Agr(eement)
100
• 117. rather than V-Agr(eement)-T(ense). This prediction is based on the Mirror Principle (Baker 1985a), which requires that morphological derivations reflect syntactic derivations (and vice versa). In the substitution story of head movement, this prediction is gone. The movement just results in a complex feature structure where no linear order is implied. This result can be good or bad depending on whether the Mirror Principle is really valid. If it is, our version of head movement will be less desirable because it has missed an important generalization. However, counter-examples to the Mirror Principle do exist. In terms of the T-suffix and the Agr-suffix, both orders seem to be possible. In Italian (81) and Chichewa (82), for example, we find T inside Agr, while in Berber (83) and Arabic (84), we find Agr inside T.

(81)14 legge-va-no
     read-imp(Asp/Tns)-3ps(Agr)
     'They read'

(82)15 Mtsuko u-na-gw-a
     waterpot SP(Agr)-past(Tns)-fall-Asp
     'The waterpot fell'

(83)16 ad-y-segh Moha ijn teddart
     fut(Tns)-3ms(Agr)-buy Moha one house
     'Moha will buy a house.'

(84)17 sa-ya-shtarii Zayd-un dar-an
     fut(Tns)-3ms(Agr)-buy Zayd-Nom house-Acc
     'Zayd will buy a house.'

14Examples from Belletti (1988)
15Example from Baker (1988)
16Example from Ouhalla (1991)
17Example from Ouhalla (1991)
101
• 118. In order to preserve the Mirror Principle, some (e.g. Ouhalla 1991) have proposed that the hierarchical order of AgrP and TP be parameterized, i.e. in some languages AgrP dominates TP while in other languages TP dominates AgrP. But the price we pay here to save the Mirror Principle seems too expensive. In our system, such reshuffling in the tree structure is not necessary. What syntax provides for each node is a feature matrix. The linear order in which the features are spelled out can be treated as an independent matter which probably falls in the domain of morphological theory. Different languages may simply choose to spell out the features in different orders. In acquisition the linear order can be learned in the same way that other ordering rules in morphology are learned.

Barriers to Movement. I have mentioned earlier in this chapter that the bounding theory may need some revision in the Minimalist framework. In the standard model, there is the distinction between SS movement and LF movement. It is assumed that some of the barriers which constrain SS movements do not apply to LF movement. Now that all movements are LF movements, it is no longer clear what the barriers are. Fortunately, this state of affairs does not seem to affect our experimental model very much, since we are currently only concerned with simple sentences. Some barriers do exist within a single clause, but we can for the time being describe them in a case-by-case manner without attempting a general account. In what follows, we will look at head movement, A-movement and A-bar movement one by one and discuss the constraints on each of them. For head movement, we will assume the Head Movement Constraint (HMC), which requires that no intermediate head be skipped during the movement. Given three heads, H1, H2 and H3, where H1 asymmetrically C-commands H2 and H2
102
• 119. asymmetrically C-commands H3, no X0 can move from H3 to H1 without moving to H2 first. For a verb to move from its VP-internal position to C0, for example, it must move successively to Agr2-0, Asp0, T0 and Agr1-0 first. For A-movement, there will be no clause-internal barriers. We usually assume that A-movement has to be local. According to Sportiche (1990), for instance, A-movement has to go in a Spec-to-Spec fashion. A movement is blocked whenever it has to go through a Spec position which is already filled by some other XP or by one of the links of another XP chain. Obviously there would be problems if this locality constraint were imposed on the A-movements in our present system. For NP1 to move to Agr1spec, it would have to go through Agr2spec, but this is impossible.18 A similar problem exists for NP2, which would have to go through NP1 to reach Agr2spec. To account for the fact that M(spec1) and M(spec2) are possible, we will assume that the domain of XP movement can be extended by head movement (cf. Baker 1988). As a result, all the projections that a single head can go through will be transparent to each other. In our system, the verb moves all the way to C through Agr2, Asp, T and Agr1. So the whole CP tree is transparent for XP movement. Another way to describe this transparency is to say that there is no barrier to movement in a single extended projection (Grimshaw 1991). Within this single projection an NP can move to any Spec position without violating any locality constraint. This domain for XP-movement also applies to A-bar movement. As a result, any NP within a single CP can move to Cspec without crossing any barriers. But there is an independent constraint which prevents an NP from moving from its

18This is impossible because (i) the NP moving to Agr1spec must have Case 1 and will not be able to unify with Agr2spec, which has Case 2, and (ii) Agr2spec belongs to the chain headed by NP2 and therefore it is already filled and should serve as a barrier for the movement of NP1.
103
• 120. VP-internal position directly to Cspec. It is required in our grammar that every NP move to an Agrspec to have its case & agreement features checked. If an NP moves directly to Cspec, skipping all Specs of AgrPs, the case & agreement features will fail to be checked. Once in Cspec, an NP will not be able to move to an Agrspec any more, since lowering is prohibited. Consequently, an LF constraint is violated and the derivation will crash. To avoid the crash, an NP must move to a position where its case & agreement features are checked before moving to Cspec. In other words, NP1 must move to Agr1spec first and NP2 to Agr2spec first. If we go back to (78) now, we will realize that all the constraints discussed above are observed there. In fact, the movements illustrated in (78) represent not only all the possible movements in our system but also all the possible paths for these movements. In particular, each movement has a unique path and results in a unique chain. Before we close this section, I will mention an apparent problem related to head movement. We have assumed that the verb always moves all the way up to C0. Superficially, however, there seem to be many cases where the verb only moves half-way up and what moves to Agr1 or C is an auxiliary. It looks as if the checking movement were broken up into two parts, one performed by verb movement and one by auxiliary movement, resulting in two separate chains. I will argue that, even in these cases, what moves to C0 at LF is still the verb and there is only a single chain. After Spell-Out the verb will move further up to the positions which the auxiliaries seem to have moved through. The movement is not blocked because auxiliaries are invisible at LF and their features are incorporated into the verb. Why the movement seems to be split at Spell-Out will be explained in Chapter 4. We will see that there are particular settings of S(M)-parameters which are
104
• 121. responsible for this superficial phenomenon.

3.2.3.3. The S(M)-Parameters

In 3.2.3.1, we identified a set of 8 movements: M(agr2), M(asp), M(tns), M(agr1), M(c), M(spec1), M(spec2) and M(cspec). We assume that each of these movements can occur either before or after Spell-Out. In other words, each of them has an S(M)-parameter associated with it. We will call those parameters S(M(agr2)), S(M(asp)), S(M(tns)), S(M(agr1)), S(M(c)), S(M(spec1)), S(M(spec2)) and S(M(cspec)) respectively. When S(M(X)) is set to 1, M(X) will be overt. It is covert when S(M(X)) is set to 0. Now the question is whether an S(M)-parameter can have a third value, namely 1/0, which is a variable. What this says is that the relevant movement can be either overt or covert; hence the optionality of the movement. Our immediate reaction to this idea might be negative. According to the Principle of Economy in general and the principle of Procrastinate in particular, no movement should be optional. If a movement can be either overt or covert, it should always be covert. In addition, there are both acquisitional and processing arguments against optional movement. Optional rules are more difficult to acquire. They also make the parsing process less deterministic. However, there is empirical evidence which shows that the "no optionality" assumption is too strong. It runs into difficulty whenever a language has alternative word orders. If we insist on the binarity of S(M)-parameter values, any given movement will be either always overt or always covert. As a result, only a single word order will be permitted in any language. The fact that most languages do have alternative word orders shows that this binarity is too restrictive. We can of course say that any given language has a canonical word order. This order
105
• 122. is determined by the obligatory movements, and all the optional movements are "stylistic" or "discourse-driven". But this leads to the assumption that there are two independent sets of movements: one syntactic and one stylistic. This assumption is not totally implausible, yet the necessity of identifying a different set of movements in addition to the checking movements we now have makes the theory more complicated. We will have a simpler theory if we assume that there is only a single set of movements and the "syntactic" and "stylistic" movements are overt manifestations of the same set of movements. In this way, we will not need to define a separate set of movements in addition to the movements we have defined here. All the "stylistic" movements correspond to movements whose S(M)-parameters are set to 1/0. This value is a variable which can be instantiated to either 1 or 0. As far as the S(M)-parameter values are concerned, therefore, both overt and covert movements are allowed. In stylistically neutral or unmarked cases, the Principle of Economy will dictate that the variable be instantiated to 0. As a result, the movements are invisible and the "canonical" order surfaces. In contexts where other factors call for overt movement, the Principle of Economy may be overridden. Consequently, the variable will be instantiated to 1 and the movement is visible. In short, when an S(M)-parameter is set to 1/0, the movement with which the parameter is associated will be covert unless there are some stylistic or discourse factors calling for overt movement. So the movement is not really optional. Once we have a stylistic or discourse theory which defines precisely when overt movement is needed, the choice will be clear. In any given context, the variable can only be instantiated to a single value. However, the model we are describing here is a purely syntactic one which does not include a stylistic or discourse module. This other module is absolutely necessary, but it falls outside the scope of the present
106
• 123. study. The issues involved there need to be addressed in a separate project. What we can do in syntax is to provide all the options. The choice will be made when the syntactic module is interfaced with other modules. For this reason, we will allow some movements to be optionally overt in our grammar. In particular, we will let the three S(M)-parameters associated with XP/NP movement — S(M(spec1)), S(M(spec2)) and S(M(cspec)) — have three values: 1, 0 and 1/0. This by no means implies that head movement cannot be optional. We have simply chosen to experiment with optional movement on A-movement and A-bar movement first. There are two motivations for this choice. First, we want to try out some optional movements and find out their basic properties before generalizing optionality to all movements. Second, the main purpose of permitting optional movement in our grammar is to account for those scrambling facts which involve A-movement or A-bar movement. Optional head movement will be discussed briefly in this chapter but will be put aside in subsequent discussion. To give the above argument more substance, we will look at two specific cases where the S(M)-parameters seem to be set to 1/0, one involving optional A-movement and one optional A-bar movement. For optional A-movement we can find an example in English. In (85) and (86) (same as (25)), we see an alternation between overt and covert NP movement.

(85) Three men came.
(86) There came three men.

In (85), M(spec1) (NP movement to Agr1spec) is overt. It is covert in (86). We thus conclude that S(M(spec1)) is set to 1/0 in English. This explains why both orders are possible. However, in a particular context only one of them will be
107
• 124. appropriate. (86) seems to be the unmarked case where there is no reason for overt movement. In (85), however, the Principle of Economy has apparently been overridden by some discourse considerations. An example of optional A-bar movement can also be found in English, where topicalization produces a word order other than SVO.

(87) John likes apples.
(88) Apples, John likes.

In our system we assume that topicalization involves XP-movement to Cspec. Then it seems that apples has moved to Cspec in (88) but not in (87). We can conclude, then, that M(cspec) is optional in English and S(M(cspec)) is set to 1/0. In unmarked cases the movement does not occur overtly, due to the Principle of Economy. When a constituent needs to be overtly topicalized, however, the Economy principle is overridden and the movement becomes visible. Although we will put optional verb movement aside in subsequent discussion, we will assume that it is possible in principle. An example of this kind of optionality can be found in French. There we find the word order alternation between statements and questions, as shown in (89) and (90).

(89) Nous allons à la bibliothèque
     we go to the library
     'We are going to the library.'

(90) Allez-vous à la bibliothèque
     go you to the library
     'Are you going to the library?'

(89) is a statement where the verb is presumably in Agr1-0 while (90) is a question where the verb has moved to C0. It seems that the verb movement from Agr1-0 to
108
• 125. C0 is optional in French, since both orders are possible. We can therefore assume that S(M(c)), the S(M)-parameter for verb movement to C0, is set to 1/0. This is why the verb can either precede or follow the subject. However, the movement is non-optional in any particular case. Let us suppose that the declarative sentence constitutes the unmarked case where there is no special motivation for Agr1-to-C movement. Thus the Principle of Economy will apply and the sentence will be ungrammatical if the movement is overt. In the case of interrogative sentences, there seems to be a special need for overt movement. We will not discuss what the need is here, but apparently it can override the Principle of Economy and require that the movement be overt. The Principle of Economy thus looks like a default principle: it applies only if no other principle is being applied. In terms of the values of S(M(c)), French can be contrasted with V2 languages on the one hand and Chinese and Japanese on the other. In V2 languages, the Agr1-to-C movement seems to occur overtly regardless of whether the sentence is a statement or a question. This shows that S(M(c)) is set to 1 rather than 1/0 in these languages. This is why the movement is always obligatory. In Chinese and Japanese, on the other hand, the Agr1-to-C movement is never visible. This suggests that S(M(c)) is set to 0 in these languages. In this case, the verb does not have the option to move to C even if this movement is motivated in some way. We will see in Chapter 4 that the value 1/0 for S(M(spec1)), S(M(spec2)) and S(M(cspec)) can account for many interesting facts which would otherwise be left unexplained. The addition of this value will of course make the task of acquisition and parsing more challenging, but the challenge will give us a better understanding of the acquisitional and parsing processes.
109
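The three-valued parameter is easy to model if 1/0 is read as a choice point that a discourse module later resolves, with Economy supplying the default. A sketch of my own (the particular settings and the discourse_marked flag are hypothetical illustrations, not claims about the grammar's implementation):

    % S(M)-parameter settings: 1 (always overt), 0 (always covert), or 1/0.
    s_m(m(spec1), 1/0).    % hypothetical English-like setting, cf. (85)-(86)
    s_m(m(cspec), 1/0).    % cf. (87)-(88)
    s_m(m(c),     0).      % hypothetical Chinese/Japanese-like setting

    % Resolving overtness in a context. With value 1/0, Economy picks
    % covert (0) unless the context specifically demands overt movement.
    overt(Movement, Context, Overtness) :-
        s_m(Movement, Value),
        (   Value == 1/0
        ->  ( Context == discourse_marked -> Overtness = 1 ; Overtness = 0 )
        ;   Overtness = Value
        ).

On this sketch, ?- overt(m(cspec), neutral, O). gives O = 0 (no topicalization), while ?- overt(m(cspec), discourse_marked, O). gives O = 1, the topicalized order of (88).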
3.3 Summary

In this chapter we have defined an experimental grammar upon which our study of syntactic typology, syntactic acquisition and syntactic processing in later chapters will be based. We defined a categorial system, a feature system and a computational system. The feature system includes a set of features and a set of S(F)-parameters which determine the morphological visibility of those features. The computational system is composed of three sub-components: lexical projection (LP), generalized transformation (GT), and move-α. For LP we defined a set of selectional rules which determine the specifier and complement of each category and two HD-parameters which determine the position of heads in functional projections. No parameterization exists in GT, which is performed in a universal way. For move-α we defined a set of feature-checking movements, each of which has an S(M)-parameter that determines the visibility of the movement. In the next chapter we will put this grammar to work. We will examine the parameter space created by the parameters and the language variations accommodated in that space.
Chapter 4

The Parameter Space

In this chapter we consider the consequences of our experimental grammar in terms of the language typology it predicts. The parameters we have defined in the previous chapter can have many value combinations, each of which makes the grammar generate a particular language.¹ Those different value combinations form our parameter space, where a variety of languages are found. We will explore the parameter space and find out its main properties and the languages it accommodates.

It should be borne in mind that the term “language” is used in a special sense here. In most cases we will be using the quoted form of this term to mean a set of strings which are composed of abstract symbols like S(ubject), O(bject) and V(erb). A string such as S V O represents a sentence where the subject precedes the verb and the object follows the verb. In addition, each symbol can carry a list of features. The features in this list represent overt morphology, i.e. the features that are spelled out. For instance, V-[agr,tns] represents a verb which is inflected for agreement and tense.

[Footnote 1: The language generated can be empty, i.e. it contains no string.]
A typical “language” in our system looks like (91), which tells us the following facts: (a) this “language” has an SOV word order; (b) the NPs in this “language” carry case markers; (c) the verbs in this “language” are inflected for agreement and tense; (d) this “language” has both transitive and intransitive sentences; and (e) this is a nominative-accusative language where the subject in an intransitive sentence has the same case-marking as the subject in a transitive sentence.

(91) [ s-[c1] v-[agr,tns], s-[c1] o-[c2] v-[agr,tns] ]

This set of strings may resemble some subset of a real language, but it is far from a perfect representation of any natural language. It is only an abstract representation of certain properties of a human language. The properties we are interested in are word order and inflectional morphology. When we say a set of strings corresponds to an existing language, we mean that it reflects the word order and morphology of this language. All the languages that are generated in our system are such abstract languages. In spite of their abstractness, however, it will not be hard to see what languages they may represent. We will see in this chapter that many “languages” accommodated in our parameter space have real language counterparts, and most real languages can find an abstract representation in our parameter space.

Let us start the exploration by reviewing the parameters we have assumed.

(i) S(M)-parameters. These parameters determine what movements are overt in a given language. There are eight S(M)-parameters corresponding to the eight movements assumed in our theory:
    S(M(agr2))  [V-to-Agr2]        S(M(c))      [Agr1-to-C]
    S(M(asp))   [Agr2-to-Asp]      S(M(spec1))  [NP1-to-Agr1spec]
    S(M(tns))   [Asp-to-T]         S(M(spec2))  [NP2-to-Agr2spec]
    S(M(agr1))  [T-to-Agr1]        S(M(cspec))  [XP-to-Cspec]

The movement in brackets is overt (before Spell-Out) if the corresponding S(M)-parameter is set to 1 and covert (after Spell-Out) if the parameter is set to 0. We have assumed in Chapter 3 that A and A-bar movements (M(spec1), M(spec2) and M(cspec)) can be optional before Spell-Out. Therefore the value of S(M(spec1)), S(M(spec2)) or S(M(cspec)) can be a variable, 1/0, which indicates that the associated movement can be either overt or covert.

(ii) S(F)-parameters. These parameters determine what morphological features are overt in a language. Six of them are assumed: S(F(agr)), S(F(case)), S(F(tns)), S(F(asp)), S(F(pred)) and S(F(op)). Each of these parameters can have four values: 0-1 (spell out the L-feature only), 1-0 (spell out the F-feature only), 1-1 (spell out both the L-feature and the F-feature), and 0-0 (spell out neither the L-feature nor the F-feature). Recall that most features in our system are base-generated in two positions, one in a lexical category (the L-feature) and one in a functional category (the F-feature). The two features are checked against each other via movement.

(iii) Two HD-parameters: HD1, which determines whether the head of CP precedes or follows its complement, and HD2, which determines whether the heads in IP precede or follow their complements. These two parameters can be set to either I (head-initial) or F (head-final). The value of HD2 applies to every segment of IP: Agr1P, TP, AspP and Agr2P.
Putting these parameters together, we have 7 binary-valued parameters, 3 triple-valued ones, and 6 quadruple-valued ones. They make up a parameter space where there are 14,155,776 (i.e. 2⁷ × 3³ × 4⁶) value combinations. Two questions arise immediately:

(92) Does every existing human language have a corresponding “language” in our parameter space?

(93) Does every “language” in our parameter space correspond to some natural language?

As we will see in this chapter, the answers to these questions are basically positive. In order to get the answers to these questions, we must first of all get all the value combinations, generate a language with each setting, and collect the languages that are generated together with their corresponding settings. This is a straightforward computational task and it can be accomplished using the Prolog program in Appendix A.1. There is obviously an expository problem as to how the results of this experiment can be presented and analyzed, since simply listing all the settings and the languages they generate may take a million pages. In order to describe the whole parameter space in a single chapter, I will break the parameter space down into natural sub-spaces and look at them one at a time. This can be done because the three sets of parameters in our system are independent of one another. We can vary the values of certain parameters while keeping the others constant. Some properties of the parameter space are local in the sense that they are properties of a particular sub-space or a particular type of parameter. We can get a clear idea about those properties by examining the relevant sub-spaces.
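The raw size of the space is easy to verify mechanically. The sketch below is written in the spirit of the program in Appendix A.1, but the predicate names are mine and the head-direction values are abstracted to a plain binary choice; the counting query uses SWI-Prolog's aggregate_all/3.

    binary(V)    :- member(V, [1, 0]).        % 5 S(M)- and 2 HD-parameters
    triple(V)    :- member(V, [1, 0, 1/0]).   % the three XP-movement parameters
    quadruple(V) :- member(V, ['1-1', '1-0', '0-1', '0-0']).  % S(F)-parameters

    setting(S) :-
        length(Bs, 7), maplist(binary, Bs),
        length(Ts, 3), maplist(triple, Ts),
        length(Qs, 6), maplist(quadruple, Qs),
        append([Bs, Ts, Qs], S).

    % ?- aggregate_all(count, setting(_), N).
    % N = 14155776.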
In areas where different types of parameters interact, we will concentrate on some representative cases instead of exhaustively listing all the possibilities. Such sampling will hopefully enable us to envision the potential of the entire parameter space.

In what follows, we will look at the space of S(M)-parameters first and then expand it to include the HD-parameters. After that we will bring the S(F)-parameters into the picture and consider their interaction with the S(M)-parameters. Many natural language examples will be cited in the course of the discussion to illustrate the relevance of our experimental results to empirical linguistic data.

4.1 The Parameter Space of S(M)-Parameters

In this section, we will single out the S(M)-parameters and explore the range of language variation they can account for. To do this we need to look at the value combinations of the S(M)-parameters while keeping the values of the S(F)-parameters and HD-parameters constant. In the following experiment, the HD-parameters are constantly set to I (head-initial). In other words, we will be restricted to the tree structures in (75) and (76) (Chapter 3) where every head precedes its complement. We will not be concerned with morphology at this moment. The settings of the S(F)-parameters will be temporarily ignored. The “words” that appear in strings will therefore be simplified as s (subject), o (object) and v (verb), which are to be interpreted as NPs and verbs with any inflectional morphology.
4.1.1 An Initial Typology

We have assumed eight S(M)-parameters and we will represent their values in a vector of eight coordinates:

    [ S(M(agr2)) S(M(asp)) S(M(tns)) S(M(agr1)) S(M(c)) S(M(spec1)) S(M(spec2)) S(M(cspec)) ]

A setting like [ 1 0 0 0 0 1 0 0 ] means that S(M(agr2)) is set to 1, S(M(asp)) set to 0, S(M(tns)) set to 0, and so on. With 5 binary-valued parameters (S(M(agr2)), S(M(asp)), S(M(tns)), S(M(agr1)), S(M(c))) and 3 triple-valued ones (S(M(spec1)), S(M(spec2)), S(M(cspec))), we have a parameter space of 864 settings.

However, it is not the case that each of these value combinations is syntactically meaningful. This is true at least within this sub-space of S(M)-parameters. When the S(M)-parameter values are matched up with the values of S(F)-parameters, more of these settings will become syntactically relevant. But we will investigate this sub-space first. Only after we have understood why certain settings of S(M)-parameters are syntactically meaningless in this sub-space can we see why some of these settings may make sense once the S(F)-parameters are brought into the picture.

In our current sub-space, where only S(M)-parameters are active and the only X0 category that can move is the verb, many settings are meaningless due to the two syntactic constraints discussed in 3.2.3: the Head Movement Constraint (HMC) and the constraint that an NP must move to an Agrspec before moving on to Cspec.

The HMC requires that no intermediate head be skipped during head movement.
For a verb to move from its VP-internal position to C0, for example, it must move successively to Agr2-0, Asp0, T0 and Agr1-0 first. In other words, verb movement to C must consist of 5 short movements: M(agr2) (V-to-Agr2), M(asp) (Agr2-to-Asp), M(tns) (Asp-to-T), M(agr1) (T-to-Agr1) and M(c) (Agr1-to-C). The verb cannot be in C0 if any of those intermediate movements fails to occur. If the only X0 in a grammar that can undergo head movement is the verb, there will be a transitive implicational hierarchy of the form

    M(agr2) < M(asp) < M(tns) < M(agr1) < M(c)

No movement on the right-hand side of an < can occur without the one(s) on the left-hand side occurring at the same time. However, there are many value combinations of S(M(agr2)), S(M(asp)), S(M(tns)), S(M(agr1)) and S(M(c)) which contradict the HMC. Consider the settings in (94).

(94)
    (a) 1 1 1 1 1    (b) 0 1 1 1 1
    (c) 1 0 1 1 1    (d) 1 1 0 1 1
    (e) 1 1 1 0 1    (f) 0 0 1 1 1
    (g) 0 1 0 1 1    (h) 0 1 1 0 1
    (i) 1 0 0 1 1    (j) 1 0 1 0 1
    (k) 1 1 0 0 1    (l) 0 0 0 1 1
    (m) 0 1 0 0 1    (n) 0 0 1 0 1
    (o) 1 0 0 0 1    (p) 0 0 0 0 1
All these settings have S(M(c)) set to 1, requiring that the verb move to C0 before Spell-Out. They all differ from each other, however, in the values of S(M(agr2)), S(M(asp)), S(M(tns)) and S(M(agr1)). In (94(a)) they are all set to 1; in all the other settings, however, at least one of them is set to 0. This results in a contradiction in all the cases of (94(b))-(94(p)). The fact that S(M(c)) is set to 1 requires that the verb move to Agr2-0, Asp0, T0, Agr1-0 and finally to C0 before Spell-Out. But the values of the other parameters require that at least one of those intermediate movements not occur before Spell-Out. Take the setting in (94(p)) as an example. This setting says that the verb must move to C0 before Spell-Out, but that it must not move to any of the intermediate heads before Spell-Out. Since overt verb movement to C0 requires overt movement through Agr2-0, Asp0, T0 and Agr1-0, this setting makes no sense. Similar contradictions are found in all the other settings except the one in (94(a)), where no intermediate movement is blocked.

This kind of contradiction is found in many other settings. The net result of all this is that, in so far as verb movement is concerned, there are only six settings which are syntactically meaningful:²

[Footnote 2: There are alternative ways of looking at the situation described here. It has been suggested by R. Frank (personal communication) and D. Sportiche (personal communication) that movement to a certain head need not be blocked simply by the fact that the S(M)-parameter associated with this head is set to 0. Following the HMC, the movement will go from head to head regardless of the S(M)-parameter values. If this is the case, then the settings in (94(b-p)) will all be equivalent to (94(a)) in terms of the overt movement forced: all of them require that the verb move to C0 before Spell-Out. Instead of saying that there are only six syntactically meaningful settings, we can then say that the settings can be grouped into six equivalence classes. Different settings can be in the same equivalence class by virtue of the fact that the final landing site for the movement is the same in each setting. Each of the settings in (95) then represents an equivalence class.]
(95)
    [ 0 0 0 0 0 ]    (no overt V-movement)
    [ 1 0 0 0 0 ]    (overt V-movement to Agr2-0)
    [ 1 1 0 0 0 ]    (overt V-movement to Asp0)
    [ 1 1 1 0 0 ]    (overt V-movement to T0)
    [ 1 1 1 1 0 ]    (overt V-movement to Agr1-0)
    [ 1 1 1 1 1 ]    (overt V-movement to C0)

An analogous situation exists for the values of S(M(spec1)), S(M(spec2)) and S(M(cspec)), which are responsible for the overtness of XP movement. The requirement that an NP must move to an Agrspec before moving to Cspec has the following consequence: if an XP appears in Cspec at Spell-Out, it must have moved to Agr1spec or Agr2spec during the derivation. But the relationship between different settings here is a little more complicated. Consider the settings in (96).

(96)
    (a) [ . . . 1 1 1 ]
    (b) [ . . . 1 0 1 ]
    (c) [ . . . 0 1 1 ]
    (d) [ . . . 0 0 1 ]

Some of these settings may seem syntactically meaningless. In (96(d)), some XP (including the subject NP and the object NP) is required to move overtly to Cspec, but neither the subject nor the object can move to its Agrspec position before Spell-Out. Upon further reflection, however, we realize that all those settings can be syntactically meaningful. Recall that the XP in Cspec can be any of the following: the subject NP, the object NP, or some adjunct (AdvP, PP, etc.).
Since an adjunct need not move to any Agrspec before moving to Cspec, the setting in (96(d)) will make sense if the sentence contains an adjunct XP which can move to Cspec regardless of the values of S(M(spec1)) and S(M(spec2)). All the other settings in (96) can also be meaningful, and they mean different things. They are similar in that Cspec must be filled at Spell-Out, but they may differ in terms of the XP that fills the Cspec. Here are the possible interpretations:

(97)
    (96(a)): Cspec can be filled by any XP;
    (96(b)): Cspec can be filled by a subject NP or an adjunct, but not by an object NP, as it does not move overtly to Agr2spec;
    (96(c)): Cspec can be filled by an object NP or an adjunct, but not by a subject NP, as it does not move overtly to Agr1spec;
    (96(d)): Cspec can be filled by an adjunct only, as neither the subject NP nor the object NP moves overtly to an Agrspec.

The setting in (96(d)) has an interpretation, but it predicts that there can be a language where every sentence must contain an XP other than the subject or object NP. As such a language does not seem to exist, this setting will be disregarded. Thus the meaningful settings for S(M(spec1)), S(M(spec2)) and S(M(cspec)) are:

(98)
    (a) [ . . . 0 0 0 ]
    (b) [ . . . 1 0 0 ]
    (c) [ . . . 0 1 0 ]
    (d) [ . . . 1 1 0 ]
    (e) [ . . . 1 0 1 ]
    (f) [ . . . 0 1 1 ]
    (g) [ . . . 1 1 1 ]
As each of these parameters has a third value (1/0), we also have the following value combinations:

    (h) [ . . . 1/0 0 0 ]
    (i) [ . . . 0 1/0 0 ]
    (j) [ . . . 0 0 1/0 ]
    (k) [ . . . 1/0 1 0 ]
    (l) [ . . . 1 1/0 0 ]
    (m) [ . . . 1/0 1/0 0 ]
    (n) [ . . . 1/0 0 1 ]
    (o) [ . . . 1 0 1/0 ]
    (p) [ . . . 1/0 0 1/0 ]
    (q) [ . . . 0 1/0 1 ]
    (r) [ . . . 0 1 1/0 ]
    (s) [ . . . 0 1/0 1/0 ]
    (t) [ . . . 1/0 1 1 ]
    (u) [ . . . 1 1/0 1 ]
    (v) [ . . . 1 1 1/0 ]
    (w) [ . . . 1/0 1/0 1 ]
    (x) [ . . . 1/0 1 1/0 ]
    (y) [ . . . 1 1/0 1/0 ]
    (z) [ . . . 1/0 1/0 1/0 ]

The total number of value combinations which are syntactically meaningful in our present sub-space is therefore 156 rather than 864. These settings and the corresponding sets of strings they generate are shown in Appendix B.1.
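The figure of 156 can be checked mechanically: the six HMC-consistent head-movement settings combine freely with the twenty-six XP-movement settings. Here is a hedged Prolog sketch (my predicate names, not the implementation in Appendix A.1; the counting query again uses SWI-Prolog's aggregate_all/3):

    % The six settings of (95): K overt head movements followed by
    % 5 - K covert ones, for K = 0..5.
    head_setting(Setting) :-
        between(0, 5, K),
        length(Ones, K),  maplist(=(1), Ones),
        Z is 5 - K,
        length(Zeros, Z), maplist(=(0), Zeros),
        append(Ones, Zeros, Setting).

    % The XP-movement settings of (98): all combinations over
    % {1, 0, 1/0} except [0, 0, 1], i.e. (96(d)).
    spec_setting([S1, S2, C]) :-
        member(S1, [1, 0, 1/0]),
        member(S2, [1, 0, 1/0]),
        member(C,  [1, 0, 1/0]),
        [S1, S2, C] \== [0, 0, 1].

    % ?- aggregate_all(count, (head_setting(_), spec_setting(_)), N).
    % N = 156.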
In the list in B.1, each entry consists of a vector of S(M)-parameter values and the set of strings generated with this value combination. We call each vector a setting and each set of strings a “language”.

If we examine these setting-language pairs carefully, we will find that the correspondence between settings and “languages” is not one-to-one. For instance, both the setting in #1 ([ 0 0 0 0 0 0 0 0 ]) and the one in #3 ([ 0 0 0 0 0 1 0 0 ]) can generate the language [ s v, s v o ].³ In fact, the correspondence is many-to-one in most cases. We see in Appendix B.1 that many settings generate identical languages. The language [ v s, v s o ], for instance, can be generated with 14 different settings, including #4, #8, #12, #16, #20, etc. There are 156 settings in the list, but the number of distinct languages that are generated is only 31. These 31 languages and their corresponding settings are listed in Appendix B.2. The significance of such many-to-one correspondences will be discussed in 4.1.2.

Looking at the languages listed in Appendix B.2, we find that our current parameter space is capable of accommodating all the basic word orders: SV(O) (#1), S(O)V (#4), VS(O) (#2), V(O)S (#8), (O)SV (#3), and (O)VS (#5). We also find a V2 language (#11). One of the settings with which a V2 language can be generated is [ 1 1 1 1 1 1 1 1 ], where every movement is overt. According to this setting, the verb must move overtly all the way to C0, the NPs to their respective Agrspecs, and one of the NPs must move further on to Cspec.

[Footnote 3: It is important to note the distinction between strings and languages. A language comprises a set of strings, and two languages are identical only if they have exactly the same set of strings. For instance, [ s v, s v o ] and [ s v, s o v ] are not the same language in spite of the fact that they both contain the string s v.]
In an intransitive sentence, there is only one NP and this NP must move to Cspec. The resulting word order is SV. In a transitive sentence, either the subject NP or the object NP can fill Cspec. We have SVO when the subject is in Cspec and OVS when the object is. This may well be what is happening in German root clauses. If so, the German sentences in (100(a)) and (101(a)) will have the structures in (100(b)) and (101(b)) respectively.

(100)⁴
    a. Der Mann sah den Hund
       the(nom) man saw the(acc) dog
       'The man saw the dog.'
    b. [CP Der Mann_i [C1 [C sah_j] [Agr1P e_i [Agr1-1 ... [Agr2P den Hund_k [Agr2-1 [Agr2 e_j] [VP e_i e_j e_k]]]]]]]

(101)
    a. Den Hund sah der Mann
       the(acc) dog saw the(nom) man
       'The dog, the man saw.'
    b. [CP Den Hund_k [C1 [C sah_j] [Agr1P der Mann_i [Agr1-1 ... [Agr2P e_k [Agr2-1 [Agr2 e_j] [VP e_i e_j e_k]]]]]]]

In addition to basic word orders and V2, the current parameter space also allows for a considerable amount of scrambling. Scrambling is a general term referring to word order variation within a single language, usually the variation that results from clause-internal movements of maximal projections. In our present discussion, a language is considered to have scrambling if it has more than one way of ordering S, V and O. For example, [ o s v, s v, s v o ] and [ s o v, o s v, s v ] are scrambling languages. We can see that many of the “languages” in Appendix B.2 are scrambling languages.

[Footnote 4: Examples from Federika Moltmann]
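This working definition of scrambling is easy to operationalize. The following sketch of mine assumes the list-of-atoms encoding of strings used in our Prolog implementation (the predicate itself is illustrative, not part of the appendices):

    % A "language" scrambles if it contains two distinct orderings of
    % the same constituents.  msort/2 sorts a list without removing
    % duplicates, so two strings are reorderings of each other iff
    % they msort to the same canonical form.
    scrambling(Language) :-
        member(S1, Language),
        member(S2, Language),
        S1 \== S2,
        msort(S1, Canon),
        msort(S2, Canon).

    % ?- scrambling([[o,s,v], [s,v], [s,v,o]]).
    % true.
    % ?- scrambling([[s,v], [s,v,o]]).
    % false.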
We do not know whether each of those languages is empirically attested, but at least some of them are. Let us take a look at the languages in #9, #16 and #30.

The “language” in #9, [ s o v, o s v, s v ], seems to be exemplified by Japanese and Korean. These are verb-final languages where the subject and object can be put in any order as long as they precede the verb. The Japanese sentences in (102) and (103) illustrate this.

(102) Taroo-ga Hanako-o nagut-ta
      Taroo-Nom Hanako-Acc hit-Past
      'Taroo hit Hanako'

(103) Hanako-o Taroo-ga nagut-ta
      Hanako-Acc Taroo-Nom hit-Past
      'Taroo hit Hanako'

The “language” in #16 is [ o s v, s o v, s v, s v o ], where the subject must precede the verb but the object can appear anywhere in the sentence. Chinese seems to be an example of this, as we can see in (104), (105) and (106).

(104) wo jian guo neige ren
      I see Perf that person
      'I have seen that person before.'

(105) wo neige ren jian guo
      I that person see Perf
      'I have seen that person before.'

(106) neige ren wo jian guo
      that person I see Perf
      'That person, I have seen before.'

The “language” in #30 has very extensive scrambling, permitting all of the following orders for a transitive sentence: [ o v s, s v o, s o v, o s v, v s o ]. All those orders are found in Hindi. For instance, the Hindi sentence in (107) has the alternative orders in (108)-(112).⁵

[Footnote 5: Examples from Mahajan (1990)]
(107) raam-ne kelaa khaayaa (SOV)
      Ram-Erg banana ate
      'Ram ate a banana.'

(108) raam-ne khaayaa kelaa (SVO)
(109) kelaa raam-ne khaayaa (OSV)
(110) kelaa khaayaa raam-ne (OVS)
(111) khaayaa raam-ne kelaa (VSO)
(112) khaayaa kelaa raam-ne (VOS)

The only order which is found in Hindi but not in #30 is VOS.

Languages like Japanese, Korean, Chinese and Hindi have been described as “non-configurational” in the literature (Saito and Hoji 1983, Saito 1985, Hoji 1985, among many others). They are supposed to be problematic for any strong version of X-bar theory where phrasal projections are assumed to be universal. This problem does not seem to exist in our present model. As we have seen, all the scrambled orders can be derived from a single configuration if some A and A-bar movements are allowed to be optional at Spell-Out. This is in fact one of the current views of scrambling (e.g. Mahajan 1990).

4.1.2 Further Differentiation

One thing in Appendix B.2 that may cause concern is the fact that a single language can be derived with many different parameter settings.
Those settings generate weakly equivalent languages,⁶ which cannot be distinguished from each other on the basis of surface strings. One reason why we cannot distinguish them is that many of the movements are string-vacuous. For example, in cases where the subject NP has moved to Agr1spec while the object NP remains in situ, there is no way of telling from an SVO string whether the verb is in Agr1-0, T0, Asp0, Agr2-0 or its VP-internal position. There is simply no reference point by which the movement can be detected. But this seems to be an accidental rather than an intrinsic property of the current model. The apparent indistinguishability of different settings is due at least in part to the artifact that only nouns and verbs have been taken into consideration in our grammar so far. But nouns and verbs are not the only entities in natural languages. There are other constituents which may serve as reference points for movement by virtue of the fact that some movements have to go “over” them. It has been widely assumed since Pollock (1989) that negative elements and certain adverbs are constituents of this kind. Once these reference points appear in the string, many movements will cease to be string-vacuous and many otherwise indistinguishable languages/settings will become distinct from each other. To illustrate this, I will introduce just one reference point into our grammar: an adverb of the often type. For easy reference I will simply call it often, meaning an adverb in any language which has the syntactic properties of often. Furthermore, I will assume that this temporal/aspectual adverb is a modifier of T and is therefore left-adjoined to T1. The partial tree in (113) shows the part of the structure where often appears.

[Footnote 6: Two languages are weakly equivalent if they have identical surface strings but different grammars.]
(113) [Tree diagram: “The Position of Often”. often is left-adjoined to T1, so it appears below Agr1-0 and above T0 and AspP.]

In terms of linear order, often appears between Agr1-0 and T0. This position can serve as a diagnostic for verb movement from T0 to Agr1-0 or NP movement to Agr1spec, both of which relate a position following often to a position preceding often. Any verb before often must have moved to Agr1-0 or higher, and any NP before it must be in Agr1spec or a higher position. Given a string such as O S often V, for instance, we can tell that O is in Cspec and S in Agr1spec, since these are the only two NP positions before often. Moreover, we know that the verb cannot be in C0 or Agr1-0.
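Such positional reasoning is straightforward to mechanize. A small sketch, again assuming the list-of-atoms string encoding (precedes/3 is an illustrative helper of mine):

    % X precedes Y somewhere in the string.
    precedes(X, Y, String) :-
        append(_, [X|Rest], String),
        member(Y, Rest).

    % ?- precedes(s, often, [o, s, often, v]).  % S is in Agr1spec or higher
    % true.
    % ?- precedes(v, often, [o, s, often, v]).  % the verb is below Agr1-0
    % false.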
Appendix B.3 shows what the setting-language correspondences will be if the position of often is taken into consideration. We have the same number of value combinations (156) here as we had in B.1 or B.2.⁷ However, the number of distinct languages that are generated with these settings has increased from 31 to 66. The addition of often to the strings has helped to differentiate many “languages” that are otherwise indistinguishable from each other. Take the SVO “languages” as an example. In Appendix B.2, there is only a single SVO language, which can be generated with 32 different settings. In Appendix B.3, however, four SVO languages are differentiated:

    #1  [ (often) s v, (often) s v o ]                               (2 settings)
    #2  [ s (often) v, s (often) v o ]                               (20 settings)
    #12 [ s v (often), s v (often) o ]                               (8 settings)
    #20 [ s (often) v o, s (often) v, (often) s v, (often) s v o ]   (2 settings)

We are now able to distinguish between English and French, which seem to correspond to #2 and #12 respectively. In English often precedes the verb, while it follows the verb in French, as shown in (114) and (115).

(114) John often visits China.

(115) Jean visite souvent la Chine
      John visits often China
      'John often visits China.'

We can expect that, with more reference points (such as Neg and other adverbs) taken into account, even finer differentiations are available. However, it is neither likely nor necessary that we should have enough reference points to differentiate every setting from every other setting. There does not seem to be any principled reason against having more than one way of generating the same language, as long as all the different languages are identifiable. The existence of weakly equivalent languages may just be a fact.

[Footnote 7: To have a fair comparison, we have restricted A-bar movement to NPs only. The number of settings will be greater if the AdvP often is allowed to move to Cspec as well.]
A natural question to ask at this moment is whether the reference points that have been assumed are always reliable. So far we have assumed that constituents such as Neg and adverbs do not move. However, this assumption does not seem well supported. In English, for instance, the adverb often does seem to move around. It can appear clause-initially as in (116), clause-medially as in (117), and clause-finally as in (118).

(116) Often she plays the piano.
(117) She often plays the piano.
(118) She plays the piano very often.

In (116), often seems to have moved to Cspec. This is possible if S(M(cspec)) is set to 1 or 1/0 in English. To account for (118), we have to assume that often may be adjoined to either T1 or V1 (i.e. it can be either a T modifier or a V modifier). The order in (118) can be derived if often is attached to V1 while the verb has moved to T0 and the object NP to Agr2spec. This shows that the so-called “reference points” may not always serve as a good indicator of the movement of other constituents. There might be a canonical position for each adverb, and only this position serves as a reference point. But how a certain position can be identified as canonical by the learner is a question that remains to be answered.

4.1.3 Some Set-Theoretic Observations

In this section, we consider the set-theoretic relations between the “languages” in our parameter space. We will also consider the parameter settings that are responsible for those relations. The languages to be examined will be those in Appendix B.2.
There are only 31 distinct languages there and it does not take long to compute all the relations between those languages. (The Prolog program which does the computation is in Appendix A.2.) The number of languages we have when all other parameters are taken into consideration will be much greater than 31, but the properties we find in this small number of languages will hold when the parameter space is expanded to include other parameters.

The set-theoretic relations can be considered in a pairwise fashion. We take each of the 31 languages and check it against each of the other 30 languages.⁸ The total number of pairs we get this way is 465 (31 × 30 / 2). There are three possible set-theoretic relations that can hold between the two languages in each pair: disjunction, intersection and proper inclusion.⁹ Two languages are disjoint from each other if they have no string in common. An example of this is the relation between #1 [ s v, s v o ] and #2 [ v s, v s o ]. Two languages intersect each other if they are not identical but have at least one string in common. For instance, #1 [ s v, s v o ] and #4 [ s o v, s v ] intersect by virtue of the fact that (i) each of them has a string that the other does not have (s v o is in #1 only and s o v in #4 only), and (ii) both of them contain the string s v. A language L1 is properly included in another language L2 if the strings generated in L1 form a proper subset of those generated in L2. This relationship is found, for example, between #1 [ s v, s v o ] and #10 [ o s v, s v, s v o ]. Each string in #1 is in #10, but not vice versa.

[Footnote 8: We are not interested in pairing each language with itself, since every language is identical to itself.]

[Footnote 9: No languages in any pair can be identical to each other, since all the 31 languages in B.2 are distinct.]
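A classifier for these three relations can be sketched in a few Prolog clauses, in the spirit of the program in Appendix A.2 (the clauses below are my reconstruction, not the program itself; a language is assumed to be a list of strings):

    % Exactly one of the three relations holds for any pair of
    % distinct languages.
    relation(L1, L2, disjoint) :-
        \+ (member(S, L1), member(S, L2)).
    relation(L1, L2, proper_inclusion) :-
        ( included(L1, L2) ; included(L2, L1) ).
    relation(L1, L2, intersection) :-
        member(S, L1), member(S, L2), !,      % they share a string
        \+ included(L1, L2),
        \+ included(L2, L1).

    % Every string of Sub is in Super, and Super has a string
    % that Sub lacks.
    included(Sub, Super) :-
        forall(member(S, Sub), member(S, Super)),
        member(X, Super), \+ member(X, Sub).

    % ?- relation([[s,v],[s,v,o]], [[o,s,v],[s,v],[s,v,o]], R).
    % R = proper_inclusion.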
Our computation shows that the subset relationship holds in 162 of the 465 pairs. The fact that subset relations exist in a substantial number of cases will have important implications for the learning algorithm to be discussed in Chapter 5. We will not get into the learnability issues here. What we want to find out now is why subset relations arise in our parameter space.

For convenience we will refer to a language which properly includes some other language(s) as a superset language and a language which is properly included in some other language(s) as a subset language. These notions are of course relative in nature. We may have three languages L1, L2 and L3 where L1 properly includes L2, which in turn properly includes L3. In this case, L2 will be a subset language relative to L1 but a superset language relative to L3.

Examining the languages in B.2, we find that all superset languages are scrambling languages.¹⁰ Since only two types of sentences (transitive and intransitive) are produced in our system, any language in B.2 that contains more than two distinct strings is a scrambling language. Now, what do those scrambling languages have in common with regard to their parameter values? An examination of the languages in B.2 reveals that the superset languages either have S(M(spec1)), S(M(spec2)) and S(M(cspec)) all set to 1 or have at least one of them set to 1/0.

[Footnote 10: The only exceptions are the superset languages for #6 [ o s v ] and #7 [ o v s ]. For instance, #3 [ o s v, s v ] is a superset language for #6 [ o s v ] but there is no scrambling in #3. However, the languages in #6 and #7 are odd in the sense that they do not contain any string which consists of one NP only. In other words, no intransitive sentences can occur in these languages. Such patterns do not seem to occur in natural languages. Looking at the parameter settings for #6 and #7, we see that they share the property of having S(M(spec1)) set to 0, S(M(spec2)) set to 1 or 1/0, and S(M(cspec)) set to 1. In other words, they all require that Cspec be filled before Spell-Out but only the object can move there. There is no overt subject NP movement to Agr1spec and hence no movement of the subject to Cspec. This situation will not arise if our grammar has the following constraint: if the object NP can move to Cspec in a language, then the subject NP in this language must be able to move to Cspec as well (i.e. S(M(cspec)) cannot be set to 1 unless S(M(spec1)) is). The languages in #6 and #7 will not be generated if this constraint is in place. In any case, the languages in #6 and #7 can probably be ignored. If so, all superset languages are scrambling languages.]
In other words, there is either overt A-movement of both the subject and the object or there is at least one optional A- or A-bar movement.

When S(M(spec1)), S(M(spec2)) and S(M(cspec)) are all set to 1, either the subject NP or the object NP must move to Cspec. As a result, each transitive sentence will have two alternative orders, depending on which of the two NPs is in Cspec. Take the first setting in #9 as an example. In this setting, all the S(M)-parameters for head movement are set to 0. The verb therefore remains VP-internal at Spell-Out. The subject and object NP must move to Agr1spec and Agr2spec respectively, and one of them must continue on to Cspec. When a sentence is transitive, the word order is SOV if the subject is in Cspec and OSV if the object is. Hence the scrambling phenomenon. Similar situations are found in the 2nd setting in #9, the 1st, 2nd and 3rd settings in #10, and the 1st setting in #11.

We now consider the second source of scrambling: optional movement. That optional movement can create scrambling is fairly obvious. If the movement is not string-vacuous, we will have one word order if the movement does occur and another one if it does not. In terms of parameter settings, any setting where one or more parameters are set to 1/0 is equivalent to the merge of two different parameter settings, one with that parameter set to 1 and the other with it set to 0. For instance, the setting [ 0 0 0 0 0 1/0 1 0 ] is equal to [ 0 0 0 0 0 1 1 0 ] plus [ 0 0 0 0 0 0 1 0 ]. The language generated with this setting is the union of the language generated with [ 0 0 0 0 0 1 1 0 ] (i.e. #4 [ s o v, s v ]) and the one generated with [ 0 0 0 0 0 0 1 0 ] (i.e. #3 [ o s v, s v ]). This is why the language generated with [ 0 0 0 0 0 1/0 1 0 ] is #9 [ s o v, o s v, s v ].
This is also why #9 is a superset language for #3 and #4.

The value 1/0 is a variable which can be instantiated to either 1 or 0. In general, a setting with n parameters set to 1/0 can be instantiated to 3ⁿ - 1 subset settings (including partial instantiations). For instance, a setting with two parameters set to 1/0 - [ . . . 1/0 . . . 1/0 . . . ] - can have the following eight instantiations:

    [ . . . 1 . . . 1 . . . ]
    [ . . . 1 . . . 0 . . . ]
    [ . . . 1 . . . 1/0 . . . ]
    [ . . . 0 . . . 1 . . . ]
    [ . . . 0 . . . 0 . . . ]
    [ . . . 0 . . . 1/0 . . . ]
    [ . . . 1/0 . . . 1 . . . ]
    [ . . . 1/0 . . . 0 . . . ]

If none of the movements controlled by these parameters is string-vacuous, a language with n optional movements can then have 2ⁿ subset languages. In B.2 there are many cases where the setting has one or more parameters set to 1/0, but the language generated is not a scrambling one. These are all cases where the optional movement is string-vacuous. The subset languages are thus string-identical. Consider the setting [ 0 0 0 0 0 1/0 0 0 ], which is equivalent to [ 0 0 0 0 0 1 0 0 ] plus [ 0 0 0 0 0 0 0 0 ]. The movement which is optional here is subject NP movement to Agr1spec. Since both the verb and the object remain in situ (in the order of VO), we will have SVO regardless of whether the subject is in Vspec or in Agr1spec.

To sum up, there are two sources for scrambling or superset languages: overt A-bar movement and optional movement. This observation is important for learnability issues. Its significance will be discussed in Chapter 5.
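The instantiation relation itself can be stated in a few Prolog clauses (mine, for illustration; settings are encoded as lists of values as above):

    % Each 1/0 in a setting may resolve to 1, resolve to 0, or stay
    % as it is; all other values are carried over unchanged.
    instantiate([], []).
    instantiate([1/0|Vs], [W|Ws]) :-
        member(W, [1, 0, 1/0]),
        instantiate(Vs, Ws).
    instantiate([V|Vs], [V|Ws]) :-
        V \== 1/0,
        instantiate(Vs, Ws).

    % ?- findall(I, instantiate([0,0,0,0,0,1/0,1,0], I), Is).
    % Is = [[0,0,0,0,0,1,1,0], [0,0,0,0,0,0,1,0], [0,0,0,0,0,1/0,1,0]].

A setting with n occurrences of 1/0 yields 3ⁿ solutions here, i.e. 3ⁿ - 1 instantiations besides the setting itself, matching the count given above.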
4.2 Other Parameters

In 4.1 we investigated the parameter space of the S(M)-parameters. We kept the values of the HD-parameters and S(F)-parameters constant in order to concentrate on the properties of the S(M)-parameters. Now we will start taking those other parameters into consideration. In what follows, we will bring in the HD-parameters first, then some S(F)-parameters, and finally all the S(F)-parameters.

4.2.1 HD-Parameters

Only two HD-parameters are assumed in our system: one for CP and one for IP. There is no specifier-head parameter for any category and there is no complement-head parameter for lexical categories. The parameter for CP (HD1) determines whether CP is head-initial or head-final. The parameter for IP (HD2) determines for every segment of IP (i.e. Agr1P, TP, AspP and Agr2P) whether the head precedes or follows the complement. If the parameter is set to I, all those segments will be head-initial and the whole IP will have the structure in (119(a)). The whole IP will be head-final if the parameter is set to F, as shown in (119(b)).
(119) [Tree diagrams: “Head-Initial and Head-Final IPs”. In (a), the heads Agr1, T, Asp and Agr2 each precede their complements in the projection line Agr1P > TP > AspP > Agr2P > VP; in (b), each of these heads follows its complement.]

It is evident that the values of the HD-parameters can affect the linear order of a sentence. In our system, they work in conjunction with the values of the S(M)-parameters in determining the word order of a language. Given a certain value combination of S(M)-parameters, the order may vary according to the values of the HD-parameters. Our parameter vector now has 10 coordinates:
    [ S(M(agr2)) S(M(asp)) S(M(tns)) S(M(agr1)) S(M(c)) S(M(spec1)) S(M(spec2)) S(M(cspec)) , HD1 HD2 ]

We can keep the values of the S(M)-parameters constant and get different word orders by changing the values of HD1 and HD2. Consider the following two settings:

    (a) [ 1 1 1 0 0 1 0 0 , I I ]
    (b) [ 1 1 1 0 0 1 0 0 , F F ]

Both settings require that the verb move to T0, the subject NP move to Agr1spec, and the object NP stay VP-internal before Spell-Out, but T0 precedes the VP in (a) and follows the VP in (b). As a result, the language generated with (a) is SVO and the one generated with (b) is SOV.

It should be noted that the value of an HD-parameter has an effect on word order only if the relevant head is the final landing site of an overt head movement. In (a) and (b), overt head movement reaches T0, which is a head in IP, but not C0, which is the head of CP. Consequently, the value of HD1 plays no role in determining the word order. The languages generated with [ 1 1 1 0 0 1 0 0 , I I ] and [ 1 1 1 0 0 1 0 0 , I F ] are identical. So are the languages generated with [ 1 1 1 0 0 1 0 0 , F F ] and [ 1 1 1 0 0 1 0 0 , F I ]. In a setting like [ 0 0 0 0 0 . . . ], where there is no overt head movement, the language will be the same no matter what values HD1 and HD2 may be assigned.

We observed earlier that, modulo our current grammar, there are 156 settings of S(M)-parameters which are syntactically meaningful. Each of these settings can be combined with any of the four settings of HD-parameters: [ I I ], [ F F ], [ I F ] and [ F I ]. The number of possible settings is therefore 624. The languages generated with these 624 settings are given in Appendix B.4.
As in B.2, I have grouped together all the settings that generate the same language. It is surprising to see that, in spite of the four-fold increase in the number of value combinations, the number of distinct languages generated did not increase at all. We had 31 languages when both HD1 and HD2 were fixed to I. We expected to get more languages when the values of HD1 and HD2 are allowed to vary, but there are still exactly 31 languages. In other words, the languages generated when HD1 and HD2 are set to [ I F ], [ F I ] and [ F F ] respectively all form subsets of the languages generated with [ I I ]. As we can see in B.4, every language that can be generated by setting HD1 and HD2 to [ I F ], [ F I ] or [ F F ] can also be generated with the parameters set to [ I I ].

This might suggest that the parameterization of head directions is superfluous. What is the point of varying the directions if the variation accounts for no more word order facts? Why not eliminate the parameters and assume that all projections are universally head-initial? These would certainly be arguments in favor of the views in Kayne (1992, 1993) and Wu (1993). However, the superfluity of the HD-parameters is more apparent than real. The parameterization does not seem to increase the generative power of the grammar because the strings in B.2 and B.4 are too simple. They contain nothing more than S, O and V. As soon as other constituents are allowed to appear in the strings, the difference in generative power will show up. We can add often to the strings and see what happens.¹¹ When often is added while the HD-parameter values are fixed to [ I I ], 66 languages are distinguished (B.3). When the HD-parameter values are allowed to vary, we can distinguish 86 languages if often appears in the strings. (Due to space limitations, the list of settings and languages is not given in the Appendix.)

[Footnote 11: We assume as before that often is left-adjoined to T1.]
The increase in number is not dramatic, but it does show that there are things that fail to be generated if the HD-parameters are eliminated altogether.

One of the languages that cannot be generated if all projections are head-initial is [ often s v, often s o v ]. To get the SOV order while maintaining a strictly head-initial structure, the subject NP must move to Agr1spec and the object NP to Agr2spec. But the movement to Agr1spec must go beyond often, putting the subject on its left side. Thus s often o v is possible while often s o v is impossible. However, there is no problem in generating the latter if head direction is permitted to vary. We can get this order with the setting [ 1 1 1 0 0 0 0 0 , F F ], for instance. This setting makes the verb move overtly to T0 while keeping everything else in situ. Since TP is head-final, the verb that lands in T0 will follow every other element in the string. The string that results is often s o v.

There are other languages which cannot be generated in the absence of HD-parameters. In Chapter 2, we mentioned languages with sentence-final auxiliaries and grammatical particles. In the next section, we will look at them again. We will find that the addition of the two HD-parameters can make a big difference in terms of generative power once some functional elements are spelled out.

4.2.2 The Spell-Out of Functional Heads

So far we have ignored the values of the S(F)-parameters. As a result, morphological variation has been kept out of the picture. One consequence of this simplification is that no functional heads have been spelled out. The head of a functional category consists of a set of features but it has no lexical content.
We assume that, unlike the head of a lexical category, which is visible regardless of the value of the S(F)-parameters, the head of a functional category is not visible unless its features are spelled out. When all S(F)-parameters are set to 0, no functional heads are visible and the only head that can undergo overt head movement is the verb. Once some functional heads are spelled out, however, the situation is different. We have assumed earlier that the feature matrix of a functional head can be spelled out as auxiliaries, grammatical particles or expletives. Apparently, auxiliaries can also undergo head movement. This has very interesting syntactic consequences.

Recall that each S(F)-parameter can have two sub-parameters in the form F-L. The value of F tells us whether the F-feature (i.e. the feature residing in a functional category) is spelled out, and L tells us whether the L-feature is spelled out. Take the parameter S(F(tns)) as an example. When it is set to 1-0, tense features must be spelled out in T0 but not on the verb. This is possible only if the verb does not move to T0 before Spell-Out. Otherwise, the F-feature and the L-feature will have merged into a single feature which can appear on the verb only. Let us assume that, when spelled out, T0 appears as an auxiliary. We will use “Aux” to represent such a morphologically realized functional head. The strings produced by our grammar can now contain Aux in addition to S, O and V. As a visible head, Aux can undergo head movement just as a verb does. This has a significant effect on the parameter space of the S(M)-parameters.

We have seen in 4.1.1 that many settings of S(M)-parameters are not syntactically meaningful when the only head that can move is the verb. Now that we have Aux in addition to the main verb, the picture is very different. Many previously meaningless settings now start to make sense. Consider the following setting:

(120) [ 1 1 0 1 1 . . . ]
When head movement is restricted to the verb only, this setting is syntactically meaningless. The verb is required to move to C0, but the movement is blocked by the value of S(M(tns)), which is set to 0. When a functional head (such as T0) is spelled out as an Aux, however, there are two heads that can move. Consequently, the movement required by this setting can now be “split”. The requirement that some head move to Agr1-0 and C0 can be satisfied by moving Aux to C0, while the verb can move to Asp0 to satisfy the requirement that some head move to Agr2-0 and then to Asp0. The setting in (120) has therefore become syntactically meaningful.

Many other settings of S(M)-parameters are made syntactically meaningful by the spell-out of functional heads. An exhaustive listing of those settings would take too much space. For illustration, we will list below just those settings which become syntactically significant as a result of spelling out T0 as an Aux.

(121)
    (a) [ 0 0 0 1 0 . . . ]    (V in situ; Aux to Agr1-0)
    (b) [ 0 0 0 1 1 . . . ]    (V in situ; Aux to C0)
    (c) [ 1 0 0 1 0 . . . ]    (V to Agr2-0; Aux to Agr1-0)
    (d) [ 1 0 0 1 1 . . . ]    (V to Agr2-0; Aux to C0)
    (e) [ 1 1 0 1 0 . . . ]    (V to Asp0; Aux to Agr1-0)
    (f) [ 1 1 0 1 1 . . . ]    (V to Asp0; Aux to C0)

As a consequence of these new settings, the number of distinct languages that can be generated in our parameter space increases significantly. Our computation shows that with the HD-parameters constantly set to I, S(F(tns)) set to 1-0 and the other S(F)-parameters set to 0-0, the number of languages that are generated by varying the values of the S(M)-parameters is 83 instead of the original 31.
Those 83 languages are listed in Appendix B.5. To save space, only one possible setting for each language is given, but this should be enough to illustrate how each of the languages could be derived.

4.2.3 HD-Parameters and Functional Heads

Now we try varying the values of the HD-parameters. With S(F(tns)) set to 1-0 and all other S(F)-parameters set to 0-0, the number of distinct “languages” that can be generated with all possible value combinations of HD-parameters and S(M)-parameters is 117. These languages are listed in Appendix B.6. As in Appendix B.5, only one setting is given for each language for the purpose of saving space.

The result of this experiment shows again that there are languages that cannot be generated without HD-parameters. When we ignored the HD-parameters by allowing for head-initial constructions only, 83 languages were generated (B.5). Thirty-four additional languages are generated when the two HD-parameters are brought into play. There are languages which can be generated only if CP is head-initial and IP is head-final (e.g. #14). There are also languages which are possible only if CP is head-final and IP is head-initial (e.g. #18). So far there is no language which cannot be generated unless both CP and IP are head-final. But there will be such cases when more than one functional head is spelled out, as we will see.

The incorporation of auxiliaries into our system has given us a richer typology. In B.2 or B.4, where there is no auxiliary, due to the fact that all S(F)-parameters are ignored, only one pure SVO language¹² is distinguished: [ s v, s v o ].

[Footnote 12: By “pure” I mean there is no scrambling.]
As a result of setting S(F(tns)) to 1-0, and thus allowing for the appearance of one auxiliary, we now have four different SVO languages: #1 [ aux s v, aux s v o ], #2 [ s v aux, s v o aux ], #4 [ s aux v, s aux v o ] and #20 [ s v, s v o ].

Certain predictions are made in this partial typology of SVO languages. Among other things, it is predicted that no T0 auxiliary can appear between the verb and the object in an SVO language. The sequence v aux o is impossible because of the following contradiction. The fact that the T0 Aux precedes O shows that IP must be head-initial. In this case the verb must be higher than T0 in order to appear to the left of the T0 Aux. However, if the verb is higher than T0, it must have moved through T0 and become unified with it. In that case, T0 will not be able to be spelled out by itself as an Aux.

Now what if we do find the sequence s v aux o in natural languages? One potential example can be found in Chinese:

(122) Ta mai le nei ben shu
      he buy Past that book
      'He bought that book.'

The past tense marker le¹³ looks like a T0 auxiliary that appears between V and O. However, this may not be a counter-example to the prediction in question. We may analyze le as a suffix of the verb, i.e. a tense feature which is spelled out on the verb. The sequence we are looking at here is therefore [ s v-[tns] o ] rather than [ s v aux o ]. The former can be generated if S(F(tns)) is set to 0-1 (spell out the L-feature only) instead of 1-0.

In some cases a tense marker can be analyzed either as an auxiliary or as an affix. Take the Japanese sentence in (123) as an example.

[Footnote 13: Le has traditionally been treated as an aspect marker. See Chiu (1992) for arguments for the treatment of le as a tense marker.]
(123) Taroo-ga Hanako-o mi-ta
      Taroo-nom Hanako-acc see-past
      'Taroo saw Hanako'

The tense marker ta can be treated either as a suffix to the verb (the string analyzed as s o v-[tns]) or as a T0 auxiliary (the string analyzed as s o v aux). The former is possible in a setting, for example, where S(F(tns)) is set to 0-1 and the S(M)- and HD-parameters are set to [ 0 0 0 0 0 1 1 0 , I I ]. In this case, both CP and IP are head-initial. The subject and object move to Agr1spec and Agr2spec respectively and the verb remains in situ with its tense feature (the L-feature) spelled out. The latter sequence (s o v aux) is possible, for instance, when S(F(tns)) is set to 1-0 and the S(M)- and HD-parameters are set to [ 1 1 0 0 0 0 0 0 , F F ]. In this case, both CP and IP are head-final. The subject and object remain VP-internal while the verb moves to Asp0. T0 has not merged with the verb and it is spelled out as an auxiliary. On the basis of (123), we cannot tell whether Japanese is s o v-[tns] or s o v aux. The interesting observation is that a language like Japanese, which has been regarded as a typical head-final language, can be generated with either a head-initial or a head-final structure in our system.

We have so far only touched upon one type of auxiliary: an overtly realized T0. Other types of auxiliaries can be obtained by spelling out other functional heads such as C0 and Asp0. I will not explore these possibilities as exhaustively as I did the spell-out of T0. Some of them will be mentioned later on in this chapter when we consider the settings for some specific languages, but it will basically be left to the reader to figure out what will happen when, say, S(F(pred)) or S(F(asp)) is set to 1-0.

One desirable property of the present account of auxiliaries is that no extra machinery is needed to get a much richer typology.
The S(M)-parameters and S(F)-parameters are not specifically designed to account for auxiliaries. We need them for independent reasons: the S(M)-parameters for word order variation and the S(F)-parameters for morphological variation. It just happens that certain value combinations of those two types of parameters predict the occurrence of auxiliaries. In other words, we are able to accommodate auxiliaries in our parameter space at no extra cost.

We conclude this section by pointing out that, no matter whether there are auxiliary movements or not, the verb always moves all the way up to C0 at LF. For reasons relating to the principle of Full Interpretation (Chomsky 1991, 1992), auxiliaries are assumed to be invisible at LF. Given a setting like [ 0 0 0 1 1 . . . ], where there is no overt verb movement but T0 moves overtly to C0 as an auxiliary, the Aux, along with the movement it has undergone, will disappear at LF, where the verb will move to Agr2-0, Asp0, T0, Agr1-0 and C0.

4.2.4 S(F)-Parameters

Up till now we have examined the parameter space created by the S(M)-parameters, the HD-parameters and one S(F)-parameter (S(F(tns))). When we take all the other S(F)-parameters into consideration, combining their values with the S(M)- and HD-parameters, the number of possible settings is huge and the number of languages that can be generated will be in the order of tens of thousands. It is impossible to list all those languages in the Appendix, not to mention the settings each of those languages can be generated with. The best we can do here is to look at a small subset of them and get some idea of what kinds of languages can be generated when all the S(F)-parameters enter the parameter space.
One way to do this is to keep the values of the S(M)-parameters relatively constant while varying the values of the HD- and S(F)-parameters. In the following experiment, we will restrict the possible settings of the S(M)-parameters to just two: [ 0 0 0 1 0 1 0 0 . . ] and [ 1 1 1 1 0 1 0 0 . . ]. The first setting represents a case where there is auxiliary movement but no overt verb movement; the second is a case where there is overt verb movement and no auxiliary shows up. The two HD-parameters will work as usual, with four possible settings. Of the six S(F)-parameters that have been assumed - S(F(agr)), S(F(case)), S(F(tns)), S(F(asp)), S(F(pred)) and S(F(op)) - two will be kept constant and the other four allowed to vary. The two S(F)-parameters whose values will be kept constant in the experiment are S(F(pred)) and S(F(op)). They will always be set to 0-0. As a result, we will not see in this experiment any language where C0 or Cspec is spelled out. The other S(F)-parameters will have some of their values considered. S(F(agr)) will vary among three values: 0-0 (no agreement features spelled out), 1-0 (agreement features spelled out on the auxiliary), and 0-1 (agreement features spelled out on the verb). S(F(case)) will vary between 0-0 (no case feature spelled out) and 0-1 (case feature spelled out on the noun). S(F(tns)) varies among 0-0 (no tense feature spelled out), 1-0 (tense feature spelled out on the auxiliary), and 0-1 (tense feature spelled out on the verb). The auxiliary which spells out T0 will continue to be called Aux. Finally, S(F(asp)) varies between 0-0 (no aspect feature spelled out) and 0-1 (aspect feature spelled out on the verb).¹⁴

[Footnote 14: The value 1-0 is impossible with the two settings of S(M)-parameters we are restricted to here.]

Each parameter setting will now be a vector of 14 coordinates:
    [ S(M(agr2)) S(M(asp)) S(M(tns)) S(M(agr1)) S(M(c)) S(M(spec1)) S(M(spec2)) S(M(cspec)) , HD1 HD2 , S(F(case)) S(F(agr)) S(F(tns)) S(F(asp)) ]

As we can see, even the S(F)-parameters which are active will not have their full range of value variation tried out in the experiment. Only a subset of their possible values is to be considered. All this is done for the purpose of illustrating the range of variation by looking at a very small sample of the “languages” that are generated. This small sample should be enough to give us some idea as to what languages can be accommodated in our parameter space when all parameters are fully active.

The value combinations and the languages that are generated in this very restricted parameter space are given in Appendix B.7 (only one of the possible settings is shown for each language). Forty-eight distinct languages are generated. These languages form a small subset of the SVO and SOV languages that can be generated in our system.
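For concreteness, the size of this restricted space can be read off the value ranges just listed; the arithmetic below is mine, not a figure quoted from the appendices:

    % 2 S(M)-settings x 4 HD-settings x 3 values for S(F(agr))
    % x 2 for S(F(case)) x 3 for S(F(tns)) x 2 for S(F(asp)).
    % ?- N is 2 * 4 * 3 * 2 * 3 * 2.
    % N = 288.

So 288 settings collapse onto the 48 distinct languages of B.7, another instance of the many-to-one correspondence observed in 4.1.2.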
The symbols that appear in B.7 and the syntactic entities they represent are displayed in (124).

(124)
s-[]                a subject NP with no case-marking
s-[c(1)]            a subject NP overtly marked for Case 1
o-[]                an object NP with no case-marking
o-[c(2)]            an object NP overtly marked for Case 2
v-[]                a verb with no inflection
v-[agr]             a verb inflected for agreement
v-[tns]             a verb inflected for tense
v-[asp]             a verb inflected for aspect
v-[agr,tns]         a verb inflected for both agreement and tense
v-[agr,asp]         a verb inflected for both agreement and aspect
v-[tns,asp]         a verb inflected for both tense and aspect
v-[agr,tns,asp]     a verb inflected for agreement, tense and aspect
aux-[tns]           the T0 auxiliary
aux-[tns,agr]       the T0 auxiliary inflected for agreement

From the sample in B.7, which mainly illustrates the range of morphological variation in our system, and B.4, which illustrates word order variation, we can tell how many typological distinctions can be made in the parameter space. With all the parameters working together, there are more than 200,000 syntactically meaningful parameter settings, and approximately 30,000 languages (sets of strings) can be distinguished.
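To give a concrete sense of how such counts arise, the restricted space of this section can be written down as a generator. The sketch below is illustrative only; the term shape setting/3 and the predicate names are assumptions of this sketch, not the representation used in Appendix A.3. It enumerates the two fixed S(M)-vectors, the four HD combinations, and the S(F)-values allowed above.

    % The two S(M)-settings allowed in the experiment of 4.2.4.
    sm_vector([0,0,0,1,0,1,0,0]).   % aux movement, no overt verb movement
    sm_vector([1,1,1,1,0,1,0,0]).   % overt verb movement, no auxiliary

    % A setting pairs an S(M)-vector with HD values and the four varying
    % S(F)-values, in the order agr, case, tns, asp of the 14-place vector.
    setting(setting(SM, [H1,H2], [Agr,Case,Tns,Asp])) :-
        sm_vector(SM),
        member(H1, [i,f]), member(H2, [i,f]),
        member(Agr,  ['0-0','1-0','0-1']),
        member(Case, ['0-0','0-1']),
        member(Tns,  ['0-0','1-0','0-1']),
        member(Asp,  ['0-0','0-1']).

    % ?- aggregate_all(count, setting(_), N).
    % N = 288, i.e. 2 x 4 x 3 x 2 x 3 x 2 value combinations, which
    % Appendix B.7 groups into the 48 distinct "languages" noted above.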
We can get languages with almost any basic word order and with many different types of inflectional morphology. Ideally, we would list all the "languages" that can be generated in this parameter space and try to match each of them with a natural language. Given the huge number of languages in the parameter space, however, such a listing is impossible in a thesis of the present size. To get a better understanding of the generative power of our present system, we will nevertheless try to fit at least some real languages into the space. In what follows, therefore, we will choose some languages for case study. These case studies will put us in a better position to judge the potential and limitations of the present model.

4.3 Case Studies

In this section, we will look at a few natural languages and try to see to what extent they can be accommodated in the parameter space we have assumed. It is unrealistic to expect our parameter space to account for everything in any real language. There are many reasons why this is so. The grammar we have been using is only a partial UG. There are other modules of UG which have not been taken into consideration so far. We are therefore bound to run into facts that cannot be explained until our model is interfaced with those other modules. The present module is only concerned with basic word order and basic inflectional morphology. Even in these domains we have further restricted ourselves to simple declarative sentences whose only components are S, V, O, Aux and possibly some adverb. Consequently, the "languages" generated in our parameter space cannot be exact matches of natural languages. However, this does not prevent those "languages" from resembling certain natural languages or some subsets of natural languages. When we say that a certain language is accommodated in our
parameter space, we mean that there is a parameter value combination that generates a "language" which is a rough approximation of this natural language. We have a long way to go before we can account for everything with our model, but there is no reason why we should not find out how much can be done in the current partial model. In what follows, we will be considering some subsets of English, Japanese, Berber, German, Chinese and French. For convenience we will refer to these subsets simply as English, Japanese, etc., meaning some small subsets of those languages.

4.3.1 English: An SVO Language

The first question we have to deal with is how to represent English as a set of strings in the format we have been using here. In terms of word order, English is SVO. In addition, adverbs of the often type appear before the verb. The order OSV is found in topicalization. Morphologically, English pronouns are overtly marked for case. The verb in English shows overt tense and subject-verb agreement. We may therefore tentatively describe English as (125).

(125)
s-[c(1)] (often) v-[agr,tns,asp]
s-[c(1)] (often) v-[agr,tns,asp] o-[c(2)]
o-[c(2)] s-[c(1)] (often) v-[agr,tns,asp]

The language in (125) can be generated with many different parameter settings.15 One of them is (126).

(126) [ 1 1 0 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]

15 The fact that this language can be generated with so many different settings can have interesting implications for learning. These issues are addressed in Chapter 5.
Other settings include:

(127)
(a) [ 0 0 0 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(b) [ 1 0 0 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(c) [ 1 1 1 0 0 1 1 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(d) [ 0 0 0 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(e) [ 1 0 0 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(f) [ 1 1 0 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(g) [ 1 1 1 0 0 1 1/0 1/0 , i i , 0-1 0-1 0-1 0-1 ]
(h) [ 0 0 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(i) [ 1 0 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(j) [ 1 1 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(k) [ 1 1 1 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]
(l) [ 0 0 0 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
(m) [ 1 0 0 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
(n) [ 1 1 0 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
(o) [ 1 1 1 0 0 1 1/0 1 , i i , 0-1 0-1 0-1 0-1 ]
etc.

According to the setting in (126), English is a strictly head-initial language. At Spell-Out, the verb moves to Asp0, the subject NP to Agr1spec and the object NP to Agr2spec. Furthermore, one of the XPs may optionally move to Cspec. We have the SVO order when Cspec is unfilled or filled by the subject. The OSV order occurs when the object moves to Cspec. If often appears in the sentence, it may go to Cspec instead of the subject or object. We then have the strings in (128) in addition to the ones in (125).
(128)
often s-[c(1)] v-[agr,tns,asp]
often s-[c(1)] v-[agr,tns,asp] o-[c(2)]

Morphologically, this setting requires that the agreement features be spelled out on the verb, the case features on the noun, and the tense and aspect features on the verb.

Several questions arise immediately. First of all, the SVO and OSV orders are given equal status in (125). This seems undesirable, for it fails to reflect the fact that the SVO order is more basic and occurs far more frequently than the OSV order. But this problem is more apparent than real. With our current setting, the OSV order occurs only if the object has undergone the optional A-bar movement to Cspec. We know from the principle of Procrastinate that, given the option of whether to move overtly or not, the default choice is always "do not move". Therefore the object will not move to Cspec unless this default decision is overridden by some other factor such as discourse context. As a result, we will find SVO most of the time and find OSV only in those situations where topicalization is required.

Things would be different if we had the setting in (129) or any of the settings in (127(h))-(127(o)).

(129) [ 1 1 0 0 0 1 1 1 , i i , 0-1 0-1 0-1 0-1 ]

This setting can also account for the strings in (125), but the movement to Cspec is obligatory. So Cspec must always be filled, either by the subject NP or the object NP. If this is the setting for English, we will have to find some other explanation for the peripheral nature of the OSV order. We would have to say, for example, that the subject NP is the topic of the sentence in most cases. Consequently, the subject NP moves to Cspec far more frequently than the object NP in spite of the fact that both S and O have equal rights to move there.
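Since a "language" here is just a set of strings, the tentative description in (125) can also be stated directly as a small grammar. The DCG below is a sketch of (125) only; the predicate names are illustrative, and this is not the experimental grammar of Chapter 3, which derives these strings from a parameter setting rather than listing them. The strings in (128) could be added analogously.

    % The English string set (125) as Prolog DCG templates.
    english(String) :- phrase(eng, String).

    eng --> subj, opt_adv, verb.          % SV
    eng --> subj, opt_adv, verb, obj.     % SVO
    eng --> obj, subj, opt_adv, verb.     % OSV (topicalization)

    subj --> [s-[c(1)]].
    obj  --> [o-[c(2)]].
    verb --> [v-[agr,tns,asp]].

    % "(often)" in (125) marks an optional adverb; it is given an empty
    % feature list here, following the input format assumed in Chapter 5.
    opt_adv --> [].
    opt_adv --> [often-[]].

    % ?- english([s-[c(1)], often-[], v-[agr,tns,asp], o-[c(2)]]).   % true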
The second question concerns inflectional morphology. The values of S(F)-parameters are currently assumed to be absolute. Each parameter is set to a single value and no alternation between different values is permitted. This seems to create a number of problems:

(130) We have set S(F(case)) to 0-1, but not every NP in English is overtly marked for case. Only the pronouns are.

(131) S(F(asp)) is set to 0-1, but not every verb seems to inflect for aspect.

(132) S(F(agr)) and S(F(tns)) are set to 0-1, indicating that agreement and tense are to be spelled out on the verb only. This seems contrary to the fact that these features can also be spelled out in an auxiliary in English. This actually leads to the more general problem that the setting in (126) does not let auxiliaries occur in this language.

We will deal with these problems one by one. The problem in (130) may be solved by refining our parameter system. So far we have not tried to differentiate various types of NPs. The S(F(case)) parameter, which is associated with the whole class of NPs, is blind to the distinction, for example, between pronouns and other NPs. Since the value of this parameter does not seem to apply across the board to all types of NPs (at least in English), we may need to distinguish two S(F(case)) parameters, one for pronouns and one for other NPs. Once this distinction is made, the absolute nature of the S(F)-parameter values is no longer a problem. In fact, alternation of parameter values should not be permitted, for pronouns must be case-marked in English and other
NPs must not be case-marked. The S(F(case)) parameter for pronouns is always set to 0-1 and that for other NPs always to 0-0. If the learner is able to distinguish between pronouns and other NPs, the parameters can be correctly set. How the learner becomes aware of this distinction is of course a different learning problem that needs to be addressed separately.

The problem in (131) is not a problem if we assume that a verb is inflected as long as it is morphologically different from other verbs. With regard to aspect marking, the progressive aspect is spelled out as -ing and the perfective aspect as -ed. When a verb has neither -ing nor -ed attached to it, we know that this verb has an aspect feature which is neither progressive nor perfective. In this sense, this verb has had its aspect features overtly marked through zero inflection.

Now we look at the problem in (132). The setting in (126) does not allow for the following set of strings, which are actually found in English.

(133)
s-[c(1)] aux-[agr,tns] (often) v-[asp]
s-[c(1)] aux-[agr,tns] (often) v-[asp] o-[c(2)]
o-[c(2)] s-[c(1)] aux-[agr,tns] (often) v-[asp]

Each of these strings contains an auxiliary which spells out the agreement and tense features of Agr1 and T. Verbs are inflected for aspect only. For this set of strings to be generated, S(F(agr)) and S(F(tns)) must be set to 1-0 rather than 0-1. In addition, S(M(agr1)) may have to be set to 1 so that the T0 auxiliary can move to Agr1 to pick up the agreement features. In other words, we need the following setting.

(134) [ 1 1 0 1 0 1 1 1/0 , i i , 1-0 0-1 1-0 0-1 ]
To generate the strings in both (125) and (133), we seem to need a setting which is a merge of (126) and (134), such as the following:

(135) [ 1 1 0 1/0 0 1 1 1/0 , i i , 1-0/0-1 0-1 1-0/0-1 0-1 ]

This setting raises several questions. First of all, the fact that S(M(agr1)) is now set to 1/0 means that head movement can be optionally overt as well. This option is not available in our minimal model, but what we have seen here suggests that we may have to let the S(M)-parameters for head movement have the value 1/0 just like S(M(cspec)), S(M(spec1)) and S(M(spec2)). There is other evidence in English which shows that head movement can also be optional. So far we have limited our attention to declarative sentences only. As soon as we look at questions, we find that S(M(c)) must be set to 1/0 in English: head movement from Agr1-0 to C0 occurs overtly in questions but not in statements. If so, this movement will be covert unless the principle of Procrastinate is overridden by some other requirement, such as the need to check the predication feature (which is in C0) before Spell-Out when this feature has the value "+Q". In any case, it seems necessary that optional head movement should be incorporated into our parameter system.

Another question concerns the fact that S(F(agr)) and S(F(tns)) are set to 1-0/0-1. This setting is intended to represent the facts that (a) agreement and tense features must be spelled out in English, and (b) we can spell out either the F-feature or the L-feature, but not both. When the F-feature is spelled out, an auxiliary appears and this auxiliary may move to Agr1-0 or C0. When the L-feature is spelled out, the agreement and tense features appear on the verb and
there is no auxiliary. The question is why we have to spell out the F-feature in some cases but the L-feature in some others. We do not find an answer to this question in our minimal model here. However, once this model is interfaced with other modules of the grammar, the choice may have an explanation. It may turn out that negation requires the spell-out of F-features. This might explain why (136) and (137) are grammatical while (138) and (139) are not.

(136) John does not love Mary.
(137) John did not see Mary.
(138) John not loves Mary.
(139) John not saw Mary.

It is also possible that the F-features of agreement and tense must be spelled out when the aspect feature is "strong" or "marked" in some sense. In English, this seems to happen when the aspect is progressive or perfective, as shown in (140) and (141).

(140) John is writing a poem.
(141) John has written a poem.

The auxiliaries be and have here are treated as overtly realized functional heads and they are represented as "aux-[agr,tns]" in our system. Why "aux-[agr,tns]" is spelled out as be in some cases, have in some other cases, and do in most of the other cases has to be explained by theories which have not yet been incorporated into our system.
4.3.2 Japanese: An SOV Language

Japanese is a verb-final language with a considerable amount of scrambling. The subject and the object must precede the verb, but they can be ordered freely, resulting in SOV or OSV. Japanese NPs are always case-marked16 and Japanese verbs come with tense markers.17 There does not seem to be any overt agreement whose function is grammatical.18 In terms of surface strings, Japanese can be described as either (142) or (143), depending on whether we treat the tense marker as a suffix or a grammatical particle. In (142) the tense marker ta is treated as a verbal suffix, while in (143) it is treated as a grammatical particle which spells out the tense feature in T0.

(142)
s-[c(1)] v-[tns]
s-[c(1)] o-[c(2)] v-[tns]
o-[c(2)] s-[c(1)] v-[tns]

(143)
s-[c(1)] v-[] aux
s-[c(1)] o-[c(2)] v-[] aux
o-[c(2)] s-[c(1)] v-[] aux

These patterns are illustrated by the Japanese sentences in (144), (145) and (146).

(144) Taroo-ga ki-ta
      Taroo-Nom come-Past
      'Taroo came.'

16 Except in very casual speech.
17 It is controversial whether there are aspect markers in Japanese. What we mean by tense marker here will include the aspect marker.
18 There is, however, agreement with respect to levels of honorificness and politeness.
(145) Taroo-ga tegami-o kai-ta
      Taroo-Nom letter-Acc write-Past
      'Taroo wrote a letter.'

(146) tegami-o Taroo-ga kai-ta
      letter-Acc Taroo-Nom write-Past
      'Taroo wrote a letter.'

The languages in (142) and (143) can be generated with the settings in (147) and (148) respectively.

(147) [ 0 0 0 0 0 1 1 1/0 , i i , 0-0 0-1 0-1 0-0 ]
(148) [ 0 0 0 0 0 1 1 1/0 , f f , 0-0 0-1 1-0 0-0 ]

It is required in (147) as well as in (148) that both the subject and the object move to their case positions (Agr1spec and Agr2spec respectively) and that the verb remain in situ. However, CP and IP are head-initial in (147) but head-final in (148). In addition, the value of S(F(tns)) is different in the two settings. In (147) it is set to 0-1, which requires the tense feature to be spelled out on the verb as an affix (spelling out the L-feature). In (148), on the other hand, this feature is to be spelled out in T0 by itself (spelling out the F-feature). The two settings produce very different tree structures, as shown in (149(a)) and (149(b)).
(149) [Tree diagrams, not reproduced: (a) "Tree generated with (147)", with head-initial CP and IP and the tense affix on the verb (kai-ta); (b) "Tree generated with (148)", with head-final CP and IP and ta in T0. Caption: "Japanese Trees".]

It is not possible to tell on the basis of (144), (145) and (146) which of the two structures is more likely to be the correct one for Japanese. If (149(a)) is the correct one, Japanese will not be a head-final language at all, contrary to common belief. What this shows is that a verb-final language is not necessarily a head-final one. When we look at more data from Japanese, however, we begin to see
evidence that (149(b)) is probably the right choice. The following two sentences are examples in support of the setting in (148).

(150) Taroo-wa ki-ta ka
      Taroo-Topic come-Past Q-marker
      'Did Taroo come?'

(151) Hanako-ga asoko de nai-te i-ru
      Hanako-Nom there at cry-Cont be-Nonpast
      'Hanako is crying there.'

The question marker ka in (150) comes at the end of the sentence. The only way to account for it in (149(a)) is to treat ka as a verbal suffix attached to ki. In other words, two features are spelled out on this verb, ta being the tense feature and ka being a predication feature which will be checked in C0 at LF. However, this analysis does not seem to accord with the intuition of native Japanese speakers, who usually regard ka as a separate word. If ka is indeed not part of the verb, we will have to adopt the analysis in (149(b)), where ta and ka are grammatical particles in T0 and C0 respectively. Turning to (151), we again see the plausibility of (149(b)). To maintain (149(a)) we would have to say that nai-te-i-ru forms a single big verbal complex, which is again a bit far-fetched. In (149(b)), however, everything is comfortably accounted for: nai-te is in V0 and i-ru is in T0. It is also possible that nai is in V0, te in Asp0 and i-ru in T0.

4.3.3 Berber: A VSO Language

Berber is usually considered a VSO language, but other orders are also found. The most common alternative order is SVO, which is usually used in topicalization (Sadiqi 1986). Here are some examples:19

19 Examples from Sadiqi (1989).
(152) i-ara hmad tabrat
      3ms-wrote Ahmed letter
      'Ahmed wrote the letter.'

(153) hmad i-ara tabrat
      Ahmed 3ms-wrote letter
      'Ahmed wrote the letter.'

In terms of morphology, there is no overt case marking in Berber, but verbs are inflected for tense/aspect and agreement, as we can see in (152) and (153). This language thus has the following set of strings in our abstract representation:20

(154)
v-[agr,tns,asp] s-[]
v-[agr,tns,asp] s-[] o-[]
s-[] v-[agr,tns,asp]
s-[] v-[agr,tns,asp] o-[]

This set of strings can be generated with the parameter setting in (155).

(155) [ 1 1 1 1 0 1/0 0 1/0 , i i , 0-1 0-0 0-1 0-1 ]

There are many alternative settings that are compatible with these strings. Here are some examples:

[ 1 0 0 0 0 1/0 0 1/0 ]
[ 1 1 0 0 0 1/0 0 1/0 ]
[ 1 1 1 0 0 1/0 0 1/0 ]
[ 1 1 1 1 1 1 0 1/0 ]

According to the setting in (155), the verb must move overtly to Agr1 and the object must stay in situ. The subject, however, can optionally move to Agr1spec and then to Cspec. If the principle of Procrastinate is not overridden by other

20 The fact that the feature list is attached to the verb on the right in our representation does not have any implication as to whether the features are spelled out as prefixes or suffixes. It simply means those features are realised on the verb. They can appear as any kind of affix (prefix or suffix) or other form of verbal conjugation.
considerations, the subject will not move and the word order is VSO. When other factors call for overt movement, the subject can move to Agr1spec or Cspec. In either case the order is SVO. The tree structures for (152) and (153), according to (155), are (156(a)) and (156(b)) respectively.

(156) [Tree diagrams, not reproduced: (a) the VSO structure for (152), with i-ara in Agr1; (b) the SVO structure for (153). Caption: "Berber Trees".]

One general question that can be raised at this point is whether the order in which the inflectional features appear in the list can imply anything about the actual
sequence of affixes. We may be tempted, at least in Berber, to let our feature list carry this extra ordering information. For instance, we may let v-[tns,agr] or [agr,tns]-v mean that the affix representing agreement appears outside the affix representing tense, as in the case of (152) and (153). Our string representation would certainly be more informative if the ordering were encoded there. If the Mirror Principle held, this kind of encoding would be not only desirable but easy as well. Unfortunately, the Mirror Principle does not seem to hold everywhere, not in Berber at least. If we look at (152) and (153) only, we may conclude that agreement occurs outside tense. The verb is inflected for tense and the agreement affix is added to the tensed verb. However, we also find Berber sentences where the order is reversed. (157) is such an example.21

(157) ad-y-acgh Mohand ijn teddart
      will(Tns)-3ms-buy Mohand one house
      'Mohand will buy a house.'

In (157) tense clearly occurs outside agreement. To avoid such problems, we will keep to our assumption that the feature list attached to each terminal symbol is unordered. It only tells us what features are spelled out in some form of verbal inflection. The order of affixation has to be handled separately. As a matter of fact, we cannot exclude the possibility that the ordering is arbitrary and the learner has to acquire it independently.

4.3.4 German: A V2 Language

German is a language where root clauses and subordinate clauses have different word orders. In root clauses, the verb must appear in second position, the first

21 Example from Ouhalla (1991).
position being occupied by a subject NP, an object NP, an AdvP, or any other XP. This is illustrated in (158), (159) and (160).22

(158) Karl kaufte gestern das Buch
      Karl bought yesterday that book
      'Karl bought that book yesterday.'

(159) das Buch kaufte Karl gestern
      that book bought Karl yesterday
      'That book Karl bought yesterday.'

(160) gestern kaufte Karl das Buch
      yesterday bought Karl that book
      'Yesterday Karl bought that book.'

Assuming that German NPs are inflected for case23 and German verbs are inflected for tense, aspect and agreement, we can abstractly represent German root clauses as the set of strings in (161) (where adv stands for an AdvP like yesterday, which is presumably left-adjoined to T1).

(161)
s-[c(1)] v-[agr,tns,asp] (adv)
adv v-[agr,tns,asp] s-[c(1)]
s-[c(1)] v-[agr,tns,asp] (adv) o-[c(2)]
o-[c(2)] v-[agr,tns,asp] s-[c(1)] (adv)
adv v-[agr,tns,asp] s-[c(1)] o-[c(2)]

This set of strings can be generated with the following parameter setting:

(162) [ 1 1 1 1 1 1 1 1 , i f , 0-1 0-1 0-1 0-1 ]

22 Examples from Haegeman (1991).
23 The case marking shows up on the determiner, though.
This setting requires that every movement be overt. By Spell-Out, the verb must move to C0, the NPs to the Agr specs, and one of the XPs to Cspec. We have (158) if the subject NP moves to Cspec, (159) if the object does, and (160) if the AdvP does. The setting also specifies that CP is head-initial and IP is head-final. Incidentally, the structures predicted by this setting can also account for the fact that gestern (yesterday) can appear right after the verb in (158) but not in (159). We have assumed that a time adverb like yesterday can be left-adjoined to T1. In (158) the object has moved to Agr2spec but not to Cspec. This is why we can have the order SV(Adv)O. In (159) the object has moved to Cspec and the subject to Agr1spec. The resulting order can only be OVS(Adv), while OV(Adv)S is impossible. The tree structures for (158) and (159) are in (163(a)) and (163(b)).
(163) [Tree diagrams, not reproduced: (a) the structure for (158), with Karl in Cspec and kaufte in C0; (b) the structure for (159), with das Buch in Cspec and kaufte in C0. Caption: "German Trees".]

German is similar to English in that the tense and agreement features are spelled out on the verb in some cases but in an auxiliary in others. When an auxiliary
exists in a sentence, the auxiliary is in second position and the verb in final position. Here is an example:

(164) Gestern hat Karl das Buch gekauft
      yesterday has Karl that book bought
      'Karl bought that book yesterday.'

Obviously, the setting in (162) will fail to account for the word order found in this sentence. This problem may need to be handled in the same way as we handled the English case. We can assume that tense and agreement features must be spelled out in German, either in an auxiliary (spelling out the F-feature) or on the verb (spelling out the L-feature), but not both. When the F-feature is spelled out, an auxiliary appears. This auxiliary moves to C0 and the verb moves to Asp0 only. When the L-feature is spelled out, there is no auxiliary and the verb will move to C0. Why we choose to spell out the F-features in some cases but the L-feature in some others is again an open question which cannot be answered until our model is interfaced with other components of the grammar.

We have so far only discussed the word order in German root clauses. The order in subordinate clauses is SOV rather than V2, as shown in (165) and (166).

(165) dass Karl gestern dieses Buch kaufte
      that Karl yesterday this book bought
      'that Karl bought this book yesterday.'

(166) dass Karl gestern dieses Buch gekauft hat
      that Karl yesterday this book bought has
      'that Karl bought this book yesterday.'

This fact again forces us to consider the possibility that some S(M)-parameters for head movement (in this case S(M(c))) must be allowed to have the value 1/0. If S(M(c)) is set to 1/0 in German, then the verb can move to Agr1-0, resulting in an SOV order, or move to C0, resulting in a V2 order. Apparently, the principle
of Procrastinate is overridden in the root clause. We may conjecture that the predication feature must be checked before Spell-Out in the root clause but not in the subordinate clause. This checking requirement overrides the principle of Procrastinate and forces the verb to move to C0 overtly in the root clause.

4.3.5 Chinese: A Head-Final SVO Language

We have seen in (104), (105) and (106) that Chinese is a scrambling language, its possible word orders being SVO, SOV and OSV. All these orders can be generated with a parameter setting like (167), where both CP and IP are head-initial.

(167) [ 0 0 0 0 0 1 1/0 1 , i i , 0-0 0-0 0-1 0-1 ]

But this setting is not able to account for the following sentence, where we find sentence-final particles which cannot possibly be spelled out on the verb.

(168) Ni kan-wan nei-ben shu le ma
      you finish-reading that book Asp Q/A
      'Have you finished reading that book?' or 'You have finished reading that book, as I know.'

In this sentence le and ma are not attached to the verb, since the object intervenes between the verb and those functional elements. A fair assumption is that le and ma are overtly realized functional heads, the former being the head of AspP and the latter the head of CP. (This presupposes that S(F(asp)) and S(F(pred)) are both set to 1-0.) These elements cannot appear in sentence-final position unless both IP and CP are head-final. What this suggests is that Chinese is a head-final language (in terms of CP and IP). The structure for (168) should be (169), which illustrates how an SVO string can be generated in a head-final structure.
(169) [Tree diagram, not reproduced: the head-final structure for (168), with Ni in Agr1spec, kan-wan and nei-ben shu inside the VP, le in Asp0 and ma in C0. Caption: "A Chinese Tree".]
4.3.6 French: A Language with Clitics

In this case, we are not interested in the French language as a whole, but just in its cliticization. Since we are only dealing with simple sentences with two arguments, only sentences like the one in (170) will be considered.

(170) Je le-visitais
      I him-visited
      'I visited him.'

There is a huge amount of literature on the analysis of clitics like le here, but we will not try to go through it in this short section. What I want to point out is that our current model may offer an alternative account of this well-known phenomenon. Recall that we observed in Chapter 3 that case and agreement features can be spelled out either on NPs or on the verb. (See Borer (1984) and Safir (1985) for similar ideas.) The parameter S(F(case)) has four values: 0-0 (no case feature spelled out), 0-1 (case features spelled out on the NP), 1-0 (case features spelled out on the verb)24, and 1-1 (case features spelled out on both the NP and the verb). We have further assumed that, when spelled out on the verb, the case features together with the agreement features show up as clitics.

Now let us suppose that the S(F(case)) parameter has a value which is operative only when the object NP is a pronoun. Then the four values of this parameter will have the following effects. When it is set to 0-0, no case features are spelled out. Since a pronoun is nothing more than a set of case and agreement features, no pronoun will be visible in this case. We call this pro-drop. When S(F(case)) is set to 0-1, the features are spelled out as a pronoun. In cases where the value is 1-0, the features appear as a clitic instead of a pronoun. The features spelled

24 The features to be spelled out in this case are the F-features which the verb can pick up and carry along when it moves through the functional categories.
out here are some F-features of Agr2. The verb acquires those features when it moves through Agr2-0 on its way to Agr1-0. In this sense, clitics are affixes of the verb which spell out some case/agreement features of the verb. This explains why clitics must be adjacent to the verb. Finally, we may have the value 1-1, which requires that the features be spelled out on both the verb and the NP. As a result, we may see the clitic as well as the pronoun, a case of clitic doubling. The value of S(F(case)) seems to be 1-0 in French.

In the GB literature there are basically two different accounts of cliticization. The base-generation account holds that clitics are base-generated on the verb. The movement account argues that clitics are generated in NP positions and later get attached to the verb through movement. Recently syntacticians have been trying to reconcile the two approaches and have proposed the view that cliticization involves both base generation and movement (e.g. Sportiche 1992). This is intuitively very similar to our present analysis. Clitics are base-generated in the sense that the features are verbal features and they show up wherever the verb goes. They also involve movement because the verb has to move through Agr2-0 and the object NP has to move to Agr2spec. While the verb is in Agr2-0 and the NP in Agr2spec, the verb will get its case/agreement features checked against the object NP through spec-head agreement. It will take more work to see, however, whether the present account can cover all the empirical data concerning cliticization.

The case studies above have given us some more concrete ideas as to what linguistic phenomena can be accommodated in our parameter space. The studies are incomplete, however, because the list of languages that can be studied this
way is an open-ended one. We should have looked at many more languages, but a complete survey is beyond the capacity of the present thesis.

4.4 Summary

In this chapter we have laid out the parameter space in our model. We have had a bird's-eye view of all the possible languages in this space as well as a worm's-eye view of some specific languages. We have seen that the present parameter space is capable of accounting for a wide range of linguistic facts. In terms of word order and inflectional morphology, most natural languages can find a corresponding "language" in this parameter space. We have also discovered, however, that our present system has its limitations. In order to provide a more complete account of any natural language, the system must be enriched in the future.
Chapter 5

Setting the Parameters

This chapter will be devoted to the issue of learnability. We have defined an experimental grammar with a set of parameters. We have also seen that the parameter space thus created is capable of accounting for a wide range of cross-linguistic variation. The next question is how a learner can acquire different languages by setting those parameters. Is every language in our parameter space learnable? Is there a learning algorithm whereby the parameters can be set correctly? If so, what properties does this learning algorithm have? These are the questions that will be addressed in this chapter.

We will see that the syntactic model we have adopted has many interesting and often desirable learnability properties. It is found that all the languages in our parameter space are learnable through a particular parameter setting algorithm. Every possible language can be correctly identified in spite of the widespread existence of subset relations and the non-existence of negative evidence. The parameter setting algorithm is a variation of Gold's (1967) identification-by-enumeration learning paradigm, where the order in which hypotheses are enumerated is derived from Chomsky's (1992) principle of Procrastinate. The algorithm is implemented in Prolog, which has enabled us to perform an exhaustive search of our parameter
space. The results are encouraging. Not only are all the languages identifiable, but the learning process is incremental and independent of the order in which input data is presented. There is even a possibility that the learning procedure may provide a mechanism for language change. Let us now get down to the details.

5.1 Basic Assumptions

The study of language acquisition is an enormous project which involves many sub-areas of research. We are not trying to look at every aspect of language learning, however. The area we will focus on is a sub-part of syntactic acquisition.1 It is assumed that there are learning modules that are responsible for the acquisition of other linguistic knowledge such as phonology, morphology and semantics. The learning activities to be discussed here will thus take place in an idealized situation where other kinds of learning are supposed to be taken care of. We will take a number of things as given, and the success or failure of the learning algorithm is to be viewed against the background of those given assumptions. It is therefore important to state those assumptions explicitly at the beginning. Many of the assumptions are standard ones which have been in the literature for a long time (Wexler and Hamburger 1973, Wexler and Culicover 1980, Pinker 1979, Berwick 1985, Lightfoot 1991, Gibson and Wexler 1993, among many others). But they need to be specified in the context of the present syntactic model.

5.1.1 Assumptions about the Input

The input to the learning module we are concerned with comprises strings which are abstract representations of Degree-0 declarative sentences. The degree of a

1 For a general review of formal approaches to syntactic acquisition, see Atkinson (1992).
sentence represents the depth of embedding of the sentence. A Degree-0 sentence is a simple sentence with zero embedding.2 Besides, each input string to our learning system is assumed to be a CP, i.e. a complete sentence. In order to abstract away from the phonology and morphology of any particular language and represent all possible languages in a neutral and uniform way, we will let every symbol in the string be made up of a category label plus a feature list. Such input strings can be called labeled strings3, but they are unusual in that the actual words are absent, with the strings consisting of the category labels and features only. It is assumed that some other learning mechanism can enable the learner to segment a string correctly and figure out the grammatical category of each individual symbol. In addition, the learner is supposed to be able to identify the argument structure of each sentence. He can presumably differentiate transitive verbs from intransitive ones and distinguish between subject and object NPs. How such "tagging" (i.e. the assignment of a category label to each word) is achieved is not the concern of our present study. Finally, it is assumed that the learner is capable of analyzing the morphological structures of the target language. She can find out, for instance, that the word does in English is overtly marked for the tense and agreement features.

The category labels that can appear in the input strings include the following:

• s (subject NP)
• o (object NP)

2 See Wexler and Culicover 1980, Morgan 1986 and Lightfoot 1989, 1991 for discussions of the significance of Degree-0, Degree-1 and Degree-2 sentences in language acquisition.
3 A labeled string is a phrase where every word as well as the phrase itself has a category label. A sentence like John works is a labeled string when John is marked as NP, works as V, and the whole string as S or CP.
• iv (intransitive verb)
• tv (transitive verb)
• aux (auxiliary or grammatical particle)
• often (adverb of the "often" type)

Each category label has a list of features attached to it. The features that appear in the list represent overt morphology, i.e. features that are spelled out. For instance, a string like s-[c(1)] aux-[agr,tns] v-[asp] o-[c(2)] represents a sentence where the subject and object are overtly marked for different cases, the auxiliary overtly inflects for agreement and tense, and the verb has overt inflection for aspect. A feature in a language is considered overtly represented if this feature is morphologically realized at least in some cases. The auxiliary in English will therefore be coded as aux-[agr,tns], since this is the case with does. The full array of possibilities has been illustrated in 4.2.3. It is taken for granted that the learner is able to identify the inflectional morpheme(s) in each word and the feature(s) encoded in each morpheme.

The language to be acquired by a learner is composed of a set of strings. These strings are presented in an unordered fashion. The learner can encounter any string at any time. It is assumed that every string in the set, each of which represents a sentence type, will eventually appear in the input, and each string can be encountered more than once.

All the input strings are supposed to be grammatical sentences in the target language. No sentence is marked "ungrammatical" to tell the learner that this is not a sentence he should generate. In other words, no negative evidence is available
(cf. Brown & Hanlon (1970), Wexler & Hamburger (1973), Baker (1979), Marcus (1993), etc.). This assumption may seem too narrow, for there could be indirect negative evidence (Lasnik 1989) which might be used as a substitute for negative evidence. However, the existence of such evidence does not mean that the learner has to depend on it for successful acquisition. We will therefore start with the more restrictive hypothesis and conduct our experiments in an environment where there is no negative evidence.

5.1.2 Assumptions about the Learner

The learner is equipped with Universal Grammar, which has a set of parameters, each having two or more values. In our case, the UG is the experimental grammar defined in Chapter 3. Whenever an input sentence is encountered, the learner tries to parse it using the grammar and the current parameter setting. At the initial stage, the parameters can be either preset or unset. In the latter case, a setting has to be chosen before the parsing starts. If we assume that the parameters are preset, then all learners will start with the same setting, which is universal. If the parameters are unset, however, the learner can choose any value combination to start with. In this case, there will not be any universal starting point for parameter setting. The learning model we will discuss is based on the assumption that the parameters are preset.

We adopt the hypothesis that the learner is failure-driven or error-driven (Wexler & Culicover (1980)).4 He will not change his current parameter setting unless he

4 This kind of failure-driven learning paradigm has been challenged by many people. An interesting debate can be found in Valian (1990, 1993) and Kim (1993). However, the arguments made there are mainly based on the setting of the null-subject parameter. It is still an open question whether failure-driven learning is feasible in setting X-bar parameters or movement parameters.
encounters a sentence which is not syntactically analyzable with the current setting. We also assume the Greediness Constraint (Clark (1988, 1990), Gibson and Wexler (1993)), according to which the learner will not adopt a new setting unless it can result in a successful parse of the current input sentence.5 Finally, we share with most researchers in the field the assumption that the learner has no memory of either the previous parameter settings or the sentences previously encountered.

An ideal learning paradigm within which parameter setting can be experimented with under the above assumptions is Gold's (1967) identification by enumeration. This is a failure-driven algorithm whereby the learner goes through a number of hypotheses until the correct one is found. In our case, the algorithm can be described as follows. Given a list of parameter settings, the learner attempts to parse an input sentence S with one of the settings in the list. If S can be successfully parsed, then the setting is retained. If S is unparsable, however, the current setting will be discarded and the next setting in the list will be tried. The settings are tried one by one until the learner reaches a value combination that results in a successful parse. This happens with every S the learner encounters. Some Ss trigger resetting and some do not. Resetting will cease to occur when the learner comes to a setting which can account for any S in the input data set.

5 This assumption can also be challenged. See Frank and Kapur (1993) for possible arguments against it.
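The loop just described can be sketched in Prolog as follows. Here parse/2 stands in for the parser of Chapter 3 and later_setting/2 for the enumeration to be defined in 5.2.1; both predicate names are illustrative assumptions of this sketch, not the implementation given in the appendices.

    % learn(+CurrentSetting, +InputSentences, -FinalSetting)
    % Error-driven: the current setting is kept as long as it parses the
    % input; on failure, settings further down the enumeration are tried
    % until one parses the current sentence (the Greediness Constraint).
    learn(Setting, [], Setting).
    learn(Current, [S|Sentences], Final) :-
        (   parse(Current, S)
        ->  Next = Current                 % no resetting needed
        ;   later_setting(Current, Next),  % enumerate subsequent settings
            parse(Next, S)                 % greedy: Next must parse S
        ),
        learn(Next, Sentences, Final).

    % later_setting/2 is assumed to enumerate, on backtracking, the
    % settings ordered after Current in the enumeration of 5.2.1.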
5.1.3 Assumptions about the Criterion of Successful Learning

Will the learner described above be able to successfully acquire any language in our parameter space? The answer to this question depends on our criterion of successful learning. When we say that the learner has acquired a language, we can mean any of the following:

(171) (a) He has identified the parameter setting of the target language.
(b) He has become able to parse/generate any string in the target language, but he may also generate some strings which are not in the target language.
(c) He has become able to parse/generate any string in the target language and no string which is not in the target language.

The criterion in (171(a)) requires that the learner acquire a language which is strongly equivalent to the target language. This is a criterion that our present learner cannot meet. As we have seen again and again, a language in our parameter space can often be generated with two or more parameter settings. The failure-driven learner, however, will stop learning as soon as one of these settings is reached. If the target language is supposed to have any of the other settings, this language will not be learnable according to this criterion. Fortunately, this is not the criterion used in most theories of human language acquisition. It is acceptable to most people that a learner can be said to have acquired a language if he can parse/generate a language which is weakly equivalent (string-equivalent but not necessarily setting-equivalent) to the target language.

The criterion in (171(b)) is debatable. Considering the fact that people do overgenerate in their linguistic performance, we are tempted to accept this criterion. The existence of creoles also seems to show that humans can produce things which are not in their target language. However, this will not be the criterion used here. Once overgeneration is allowed in general, we will have to tolerate situations where children produce many sentence patterns that are not acceptable
to their parents. This is definitely not the case with human language acquisition. The language of the next generation can be a little different, but never to the extent that it sounds like a different language. We will therefore assume the criterion in (171(c)), where exact identification is required. This criterion may be too strict, but it is a good working hypothesis to start with.

Now the question is whether exact identification is achievable in our learning paradigm. A well-known property of the learning algorithm we have assumed is that the enumeration of hypotheses (in our case the parameter settings) must follow the Subset Principle (Angluin 1978, 1980, Williams 1981, Berwick 1985, Manzini and Wexler 1987, Wexler and Manzini 1987, etc.). Given two languages L1 and L2 and their respective parameter settings P1 and P2, P1 must come before P2 in the enumeration if L1 constitutes a proper subset of L2. Otherwise, the learner will be stuck with the wrong setting and never try to reset it again. Suppose the target language is {sv, svo} and the learner has just set the parameters to [ 1 0 0 0 0 1/0 1/0 0 ]. With this setting, he will be able to process every string in the target language. As a result, he will never change the setting again. But this is a wrong setting, for it will enable him to generate not only SV and SVO strings, but OVS, SOV, VS and VSO strings as well. He has acquired a superset language of the target language instead of the target language itself.

We have seen in 4.1.4 that superset and subset languages exist extensively in our parameter space. Since we require exact identification, we must see to it that the enumerative process of our learning algorithm follows the Subset Principle. This will be a major topic of this chapter.
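Because the "languages" compared in this chapter are finite string sets, the subset relation that the enumeration must respect can be checked directly. The following is a minimal sketch (the predicate names are illustrative), using abbreviated strings of the kind in the example above:

    % L1 is a (not necessarily proper) subset of L2.
    subset_lang(L1, L2) :-
        forall(member(S, L1), member(S, L2)).

    % L1 is a proper subset of L2.
    proper_subset_lang(L1, L2) :-
        subset_lang(L1, L2),
        \+ subset_lang(L2, L1).

    % The target language {sv, svo} is a proper subset of the language
    % generated by the setting [ 1 0 0 0 0 1/0 1/0 0 ]:
    % ?- proper_subset_lang([[s,v],[s,v,o]],
    %        [[s,v],[s,v,o],[v,s],[v,s,o],[o,v,s],[s,o,v]]).
    % true.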
5.2 Setting S(M)-Parameters

There are three types of parameters to be set in our model: S(M)-parameters, S(F)-parameters and HD-parameters. S(M)- and HD-parameters account for word order variation. They are reset if and only if the current setting fails to accept the word order of an input string. S(F)-parameters, on the other hand, are responsible for the morphological paradigm of a language. They are reset on the basis of visible morphology only. Thus the values of S(M)- and HD-parameters respond to word order only, and the values of S(F)-parameters respond to morphology only. Since word order and overt morphology are assumed to be independent of each other in our model, there is no dependency between the values of S(M)/HD-parameters and S(F)-parameters. In other words, the former and the latter can be set independently. We can therefore consider them in isolation from each other.

In this section we consider the setting of S(M)-parameters. The values of other parameters will be held constant for the moment, with all HD-parameters set to i and all S(F)-parameters set to 0-0. Since no feature is spelled out when all S(F)-parameters are set to 0-0, the feature list will be temporarily omitted in the presentation of strings. The string s v o, for example, is understood to be an abbreviated form of s-[] v-[] o-[]. In addition, the symbol v will often be used to stand for both iv and tv.

5.2.1 The Ordering Algorithm

As we have seen in 4.1.4, some languages in the parameter space of S(M)-parameters are properly included in some other languages. This implies that the learning algorithm we have assumed can fail to result in convergence for some languages if
the enumeration of parameter settings is random. In order for every language in the parameter space to be learnable, the hypothetical settings must be enumerated in a certain order. In particular, the settings of subset languages must be tried before the settings of their respective superset languages. Let us call the parameter setting for a subset language a subset setting and the one for a superset language a superset setting. A superset setting must then be ordered after all its subset settings. To ensure learnability for every language, we can simply calculate all subset relations in the parameter space, find every superset setting and its subset settings, and enumerate the settings in such a way that all subset settings come before their relative superset settings. Such an ordering is attainable. In fact, the enumeration can be made to satisfy this ordering condition in more than one way. However, arbitrary ordering of this kind is not linguistically interesting. It can certainly make our learning algorithm work, but we cannot expect a child to know the ordering unless it is built in as part of UG. We are thus in a dilemma: the learning may not succeed if there is no ordering of parameter values, but the assumption that the ordering is directly encoded in UG seems very unlikely.

However, there is a way to get out of this dilemma. The child can be expected to know the ordering without it being directly encoded in UG if the following is true: the ordering can be derived from some independent linguistic principle in UG. Such a principle does seem to exist in our current linguistic theory. One possible candidate is the principle of Procrastinate (Chomsky 1992), which requires that movement in overt syntax be avoided as much as possible. This principle has the following implications for the parameter setting problem considered in our model.6

6 There is an alternative approach which goes in the opposite direction. Following Pesetsky's (1989) Earliness principle, which requires movement to occur as early as possible in the derivational process, we could assume that the learner starts from the hypothesis that all S(M)-parameters are set to 1 at the initial stage. This alternative is tried out in Wu (1992), where the learning process involves setting some S(M)-parameters from 1 to 0. Interestingly enough, this approach also works, though it is conceptually less natural and less compatible with the acquisition data.
(172) All S(M)-parameters should be set to 0 at the initial stage. Let us suppose that the principle of Procrastinate is operative in children's grammar from the very beginning. According to this principle, an "ideal" grammar should have no overt movement. Therefore, children will initially hypothesize that no movement takes place before Spell-Out in their language. They will consider overt movement (i.e. setting some S(M)-parameters to 1) only if they have encountered sentences which are not syntactically analyzable with the existing parameter setting.

(173) In cases where children are forced to change their hypothesis by allowing some movement(s) to occur before Spell-Out, they will try to move as little as possible, again following the principle of Procrastinate. They will not hypothesize more overt movement(s) than is absolutely necessary for the successful parsing of the current input sentence. As a result, given two settings, both of which can make the current input parsable, the setting with fewer S(M)-parameters set to 1 should be preferred and adopted as the new hypothesis.

(174) If the principle of Procrastinate is adhered to rigorously, there should not be any optional overt movement. Given the option of moving either before or after Spell-Out, the principle will always dictate that the movement occur after Spell-Out. Setting an S(M)-parameter to 1/0 is therefore no different from setting it to 0. So why should the value 1/0 be considered in the first
place? If a movement must occur before Spell-Out, then its S(M)-parameter must be set to 1 rather than 1/0. Consequently, the value 1/0 should not be tried unless it is the only value which can make all the strings in a given language parsable.

(175) In cases where overt movement is absolutely necessary, the principle of Procrastinate will require that the movements which are more "essential" be considered first. Now which movements are more essential? According to Chomsky, the principle of Procrastinate can be overridden to let a movement occur before Spell-Out only if the feature to be checked by this movement is "strong", i.e. realized in overt morphology. In view of the fact that A-movement and head movement often occur for morphological reasons while A-bar movements do not, the former are more essential than the latter. In our model, overt movement is independent of overt morphology, so the morphological explanation may not be available. But there is a common assumption that A-movement and head movement are more closely related to the basic word order of a language than A-bar movements, which are more likely to be associated with interrogation, quantification, focusing and topicalization. In this sense, A-movements and head movements are more essential than A-bar movements. If overt movement is to be considered at all, priority should be given to the former rather than the latter.7

7 There is a potential problem with the assumption that A-movements tend to occur earlier than A-bar movements. One possible counter-example to this hypothesis is the passive construction, which involves A-movement. According to the acquisition data, passives tend to occur fairly late in children's speech, usually after wh-movement, which is an A-bar movement. The question is then why the A-movement in passive formation is not allowed to apply before some A-bar movements are. One answer to this question might be the following: the A-movement in passive sentences might be different from other A-movements in that it is forced, not by feature-checking, but by some other grammatical operation which is active only at a later stage of development. The absorption of theta-roles might be one such operation. For a passive sentence to occur, the theta-role carried by the subject must be "absorbed". Such absorption may happen relatively late in children's grammar, thus postponing the A-movement associated with passive constructions.
To sum up, the principle of Procrastinate can provide certain constraints on, or preferences for, the choice of the next parameter setting to be tried in the learning process. In particular, the following ordering rules seem to follow from this general principle:

(176) (i) Given two parameter settings P1 and P2, with N1 and N2 (0 ≤ N1, 0 ≤ N2) being the respective numbers of S(M)-parameters set to 1/0 in P1 and P2, P1 ≺ P2 if N1 < N2. In other words, the setting which allows for fewer optional overt movements is to be considered first.

(ii) Given two parameter settings P1 and P2, P1 ≺ P2 if S(M(cspec)) is set to 0 in P1 and 1 in P2. In other words, the setting which does not require overt A-bar movement is to be considered first.

(iii) Given two parameter settings P1 and P2, with N1 and N2 (0 ≤ N1, 0 ≤ N2) being the total numbers of S(M)-parameters set to 1 in P1 and P2, P1 ≺ P2 if N1 < N2.

These ordering rules are to be applied in the sequence given above. The second rule is applied only if the first one fails to decide on the precedence, and the third is applied only if the second fails to do so. This order of rule application is not directly derivable from the principle of Procrastinate, but it is not totally stipulative, either. Comparing optional overt movement and overt A-bar movement, we find the latter "less evil" than the former, which, according to the principle, should not exist at all. In our particular parameter space, optional movements always result in subset relations while overt A-bar movements do so only in some contexts, as
we have seen in 4.1.3. This also suggests that optional movement should be the last choice. Here is a situation where linguistic and computational considerations seem to agree with each other. The ordering of (ii) and (iii) is less justified by the principle of Procrastinate, though. We assume here that a setting without overt A-bar movement is to be preferred over a setting with A-bar movement even if the total number of overt movements in the former is greater than that in the latter. The decision here is made on qualitative rather than quantitative grounds. Overt non-A-bar movements are assumed to be "less evil" than overt A-bar movements. Therefore the latter should be avoided even at the cost of having more of the former. So far this choice has been motivated by computational considerations more than linguistic arguments. Subset relations are more likely to arise with a setting with overt A-bar movement than with a setting without it. By putting off overt A-bar movements as much as possible, learnability can be guaranteed. The linguistic intuition in support of our preference here is that A-bar movements seem to be more "peripheral" than A-movements and head movements on the whole. Whether this intuition is correct or empirically justifiable is an open question. In any event, we will suppose for the time being that there are qualitative differences between different movements. We assume that quantitative arguments apply only in cases where qualitative considerations yield no result. In this sense, (iii) acts as a default rule which applies only if nothing else works. It should be pointed out that many settings will remain unordered with respect to each other after all the precedence rules have been applied. We will see that these settings can be tried in any order without a violation of the Subset Principle.
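The three rules amount to comparing settings by a composite key. The sketch below is a reconstruction for illustration, not the program of Appendix A.3; it assumes that a setting is given as a list of S(M)-values in which S(M(cspec)) is the last coordinate (as in the vector of 4.2.4) and that the value 1/0 is written as the atom opt.

    % count_value(+X, +List, -N): N is the number of occurrences of X.
    count_value(X, List, N) :- include(==(X), List, Xs), length(Xs, N).

    % key(+SM, -Key): the three-part key behind rules (176)(i)-(iii).
    key(SM, k(Opt, ABar, Ones)) :-
        count_value(opt, SM, Opt),              % rule (i): 1/0 values
        last(SM, Cspec),
        ( Cspec == 0 -> ABar = 0 ; ABar = 1 ),  % rule (ii): overt A-bar?
        count_value(1, SM, Ones).               % rule (iii): 1 values

    % precedes(+P1, +P2): P1 is to be tried before P2.  Standard order on
    % k/3 terms compares the arguments left to right, so rule (i) decides
    % first, then rule (ii), then rule (iii); equal keys leave the two
    % settings unordered, as noted above.
    precedes(P1, P2) :-
        key(P1, K1), key(P2, K2),
        K1 @< K2.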
5.2.2 Ordering and Learnability

Applying the ordering algorithm in (176) to the value combinations of S(M)-parameters, we get a partial order of all the settings. Given any two S(M)-parameter settings P1 and P2, either P1 ≺ P2, or P2 ≺ P1, or neither, but never both.

The first rule of our ordering algorithm applies to all the S(M)-parameter settings, for every setting has zero or more parameters set to 1/0. (The maximum is three because only three S(M)-parameters can have this value.) This partitions all the settings into the following four groups, where (a) ≺ (b) ≺ (c) ≺ (d):

(177) (a) settings that contain zero 1/0 values;
(b) settings that contain one 1/0 value;
(c) settings that contain two 1/0 values;
(d) settings that contain three 1/0 values.

We then apply the second rule of the ordering algorithm within each group. This partitions the settings in each of the four groups into two sub-groups: those where S(M(cspec)) is set to 0 (no overt Ā-movement permitted), and those where S(M(cspec)) is set to 1 or 1/0 (overt Ā-movement permitted). The settings which do not allow overt Ā-movement will precede those which do allow such movement. Notice that the second rule does not apply across the groups. It never compares a setting in Group (a) with a setting in Group (b), for example. If P1 is in Group (a) and P2 is in Group (b), P1 will precede P2 in the partial order even if P1 allows overt Ā-movement whereas P2 does not. So given the two settings in (178), (178(a)) will be ordered before (178(b)).
(178) (a) [ 0 0 0 0 0 1 0 1 ]
(b) [ 0 0 0 0 0 1/0 1 0 ]

After the application of the first and second rules, the S(M)-parameter settings are partitioned into the eight groups in (179), where (a-a) ≺ (a-b) ≺ (b-a) ≺ (b-b) ≺ (c-a) ≺ (c-b) ≺ (d-a) ≺ (d-b).

(179) (a-a) settings with no optional movement and no overt Ā-movement;
(a-b) settings with no optional movement but with overt Ā-movement;
(b-a) settings with one optional movement but no overt Ā-movement;
(b-b) settings with one optional movement and overt Ā-movement;
(c-a) settings with two optional movements but no overt Ā-movement;
(c-b) settings with two optional movements and overt Ā-movement;
(d-a) settings with three optional movements but no overt Ā-movement;
(d-b) settings with three optional movements and overt Ā-movement.

Finally, we apply the third rule within each of these eight sub-groups. Here we just count in each setting the number of parameters which are set to 1. (The value 1/0 can be ignored, as it occurs the same number of times within each group.) Each setting is thus associated with a number, and P1 will precede P2 if the number associated with P1 is less than that of P2. It is obvious that this will result in a partial order within each sub-group, since what is involved here is the ordering of natural numbers. Again it should be pointed out that the third rule never relates two settings in two different sub-groups. If P1 is in a group that precedes the group which P2 is in, P1 will
precede P2 even if P1 has more parameters set to 1. In (180), for instance, (180(a)) must precede (180(b)), in spite of the fact that the absolute number of parameters set to 1 is greater in (180(a)).

(180) (a) [ 1 1 1 1 1 1 1 1 ]
(b) [ 0 0 0 0 0 1 1 1/0 ]

In each of the sub-groups, there will be settings which have the same number of overt movements. They will occupy the same position in the partial order. The settings in (181) are settings of this kind.

(181) (a) [ 1 1 1 0 0 0 0 0 ]
(b) [ 1 1 0 0 0 1 0 0 ]
(c) [ 1 1 0 0 0 0 1 0 ]
(d) [ 1 0 0 0 0 1 1 0 ]

None of these settings has optional movement or overt Ā-movement, but they share the property of having three S(M)-parameters set to 1. They therefore remain unordered with respect to each other, though they are ordered as a whole relative to any other setting. For example, they are all ordered before the settings in (178) and (180). The enumerative learner can try these settings in any order without having any learnability problems, as we will see.

The Prolog program that implements the ordering algorithm is given in Appendix A.3. In Appendix C, we find the complete ordered list of settings produced by this program. The settings there are listed in 50 groups and numbered in the order in which they are to be tried in the parameter-setting process. We notice that the first setting in the list is [ 0 0 0 0 0 0 0 0 ], which requires no overt movement, and the last setting is [ 1 1 1 1 1 1/0 1/0 1/0 ], which allows for
the maximal number of optional movements in addition to requiring every other movement to be overt. Each group number is accompanied by three digits. The first shows the number of parameters set to 1/0, the second indicates whether S(M(cspec)) is set to 1, and the last is the total number of parameters set to 1 or 1/0 in the setting.

It turns out that the Subset Principle can be observed if our enumerative learner goes through the hypothetical settings in the order given in Appendix C. This is not a surprise. We have seen in 4.1.4 that, in the parameter space of S(M)-parameters, subset relations arise from two types of settings. The first type consists of value combinations where S(M(spec1)), S(M(spec2)) and S(M(cspec)) are all set to 1. Languages generated with such settings share the property of having two alternative orders for any transitive sentence. We have one order when the subject NP is in Cspec and the other one when the object NP is. The following is a complete list of such settings (fully instantiated ones only), the languages they generate, and the subset languages contained in each language.

(182) Setting [ 0 0 0 0 0 1 1 1 ]
Language [ s v, s o v, o s v ]
Subset Languages [ s v, s o v ] [ s v, o s v ]

Setting [ 1 0 0 0 0 1 1 1 ]
Language [ s v, s o v, o s v ]
Subset Languages [ s v, s o v ] [ s v, o s v ]

Setting [ 1 1 0 0 0 1 1 1 ]
Language [ s v, s v o, o s v ]
Subset Languages [ s v, s v o ] [ s v, o s v ]

Setting [ 1 1 1 0 0 1 1 1 ]
Language [ s v, s v o, o s v ]
Subset Languages [ s v, s v o ] [ s v, o s v ]
Setting [ 1 1 1 1 0 1 1 1 ]
Language [ s v, s v o, o s v ]
Subset Languages [ s v, s v o ] [ s v, o s v ]

Setting [ 1 1 1 1 1 1 1 1 ]
Language [ s v, s v o, o v s ]
Subset Languages [ s v, s v o ] [ s v, o v s ]

It is easy to prove that, with our ordering algorithm, each of the subset languages is learnable. What the subset languages have in common is that they only have a single word order for a transitive sentence. They can all be generated with a setting where S(M(cspec)) is set to 0:

(183) Setting / Language
[ 0 0 0 0 0 1 1 0 ] [ s v, s o v ]
[ 0 0 0 0 0 0 1 0 ] [ s v, o s v ]
[ 0 0 0 0 0 1 0 0 ] [ s v, s v o ]
[ 1 0 0 0 0 0 1 0 ] [ s v, o v s ]

None of these settings permits overt movement to Cspec, while all the settings in (182) require this movement. Therefore, the settings in (183) will be enumerated before those in (182). Once the relevant setting in (183) is reached, all the strings in the subset language will be interpretable and the failure-driven learner will never try to reset the parameters again. Therefore the superset settings in (182) will not be reachable unless the learner encounters strings which are not in the subset languages.

Footnote: There are many alternative settings, but one of them will suffice to illustrate the point.
The second type of settings that results in subset relations involves optional movement. The parameter value 1/0 is a variable which can be instantiated to either 1 or 0 in a particular syntactic derivation. Every setting which has n S(M)-parameters set to 1/0 has 2^n (full) instantiations, each of which is a subset setting of the original setting. For instance, the setting in (184) can be instantiated to the four settings in (185). We can see that the languages generated with the settings in (185) are all subset languages of the language generated with the setting in (184).

(184) Setting [ 1 1 1 1 0 1/0 1/0 0 ]
Language [ v o s, s v, s v o, v s, v s o ]

(185) Setting / Language
[ 1 1 1 1 0 1 0 0 ] [ s v, s v o ]
[ 1 1 1 1 0 0 1 0 ] [ v s, v o s ]
[ 1 1 1 1 0 0 0 0 ] [ v s, v s o ]
[ 1 1 1 1 0 1 1 0 ] [ s v, s v o ]

In order for the languages in (185) to be learnable, the settings in (185) must precede the setting in (184) in the enumeration. This is guaranteed by the first rule of the ordering algorithm, which puts all the settings in (185) before the setting in (184) in the partial order. However, the subset settings of (184) are not limited to those in (185). They also include the partial instantiations in (186).
(186) Setting / Language
(a) [ 1 1 1 1 0 1/0 1 0 ] [ s v o, s v, v s, v o s ]
(b) [ 1 1 1 1 0 1/0 0 0 ] [ s v o, s v, v s, v s o ]
(c) [ 1 1 1 1 0 1 1/0 0 ] [ s v, s v o ]
(d) [ 1 1 1 1 0 0 1/0 0 ] [ v o s, v s, v s o ]

As we can see, the languages generated with these settings are all subset languages of the one generated with (184). These subset relations do not cause learnability problems because the settings in (186) all go before (184) in the partial order. They only contain one 1/0 while (184) contains two. In general, any setting with n parameters set to 1/0 can be "factored" into 3^n − 1 subset settings. All those settings will precede the original setting. This is because each of them will have instantiated at least one of the variables to either one or zero and will thus have fewer parameters set to 1/0. The sketch below illustrates this factoring.
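The following minimal Prolog sketch, again hypothetical rather than taken from the appendices, makes the factoring concrete. It reuses the list representation assumed earlier, with the atom opt standing for the value 1/0.

% instantiate(+P, -Q): Q instantiates each opt in P to 1, to 0, or leaves
% it as opt; backtracking enumerates all 3^n variants, P itself included.
instantiate([], []).
instantiate([opt|T], [V|T1]) :- member(V, [1, 0, opt]), instantiate(T, T1).
instantiate([V|T], [V|T1])   :- V \== opt, instantiate(T, T1).

% subset_setting(+P, -Q): one of the 3^n - 1 proper instantiations of P.
% Each has at least one opt resolved, hence fewer parameters set to 1/0,
% so rule (i) of (176) places it before P in the enumeration.
subset_setting(P, Q) :- instantiate(P, Q), Q \== P.

For the setting in (184), written here as [1,1,1,1,0,opt,opt,0], the query findall(Q, subset_setting([1,1,1,1,0,opt,opt,0], Q), Qs) returns the four full instantiations of (185) together with the four partial instantiations of (186): 3^2 − 1 = 8 settings in all.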
There is another complication with optional movement. We find that subset relations do not arise from different numbers of optional movements only. Even settings that have the same number of parameters set to 1/0 can generate languages that are in subset relations. We have just seen such an example in (186). The settings in (186(a)) and (186(c)) have exactly the same number of optional movements, the same number of overt Ā-movements, and in fact the same total number of overt movements. Our ordering algorithm will therefore put these two settings in the same place in the partial order. Yet the language generated with (186(c)) is properly included in that of (186(a)). At first glance, this seems to be a problem. Upon closer inspection, however, we discover that the problem is not real.

First of all, if the target language is the one in (186(c)), none of the settings in (186) will ever be tried. We have seen in the previous chapter that a single language can often be generated with more than one setting. The language in (186(c)), i.e. [ s v, s v o ], can be generated with 32 different settings, most of which will be ordered before the one in (186(c)). One such setting, for instance, is [ 0 0 0 0 0 0 0 0 ], which is actually the initial hypothesis adopted by the learner. Once the learner reaches one of those settings, all the strings in his language will be analyzable and he will never consider resetting the parameters again. The settings in (186) are therefore not reachable, and the possibility of failing to identify the exact language does not exist. This is a general property of our learning algorithm. Given two settings P1 and P2, with L1 and L2 being the respective languages they generate, we find numerous instances where L2 is properly included in L1 but P1 goes before P2 in the partial order of hypothesis enumeration. In each of these cases, there is always another setting P0 which also generates L2 but P0 ≺ P1. In all these cases, the learner will stop resetting the parameters once P0 is reached. Therefore no subset problems arise.

There is a deeper reason why the apparent subset problem exemplified by (186) is not a problem. The language in (186(c)) looks like a subset language of the one in (186(a)) only because the movements are often string-vacuous in our system. Two "words" W1 and W2 may appear to be in the same position while they are not. This happens when W1 and W2 are in positions A and B respectively, while the movement from A to B is string-vacuous. Looking at (186(a)) again, we see that the setting in (186(c)) can actually generate a string which is not in the language of (186(a)) if the object NP movement from Vspec to Agr2spec is not string-vacuous.
In (186(a)), the object NP can appear in one position (Agr2spec) only, while it can appear in either Vspec or Agr2spec in (186(c)). The structure where the object is in Vspec is not a structure that can be generated with (186(a)). Suppose there is an AdvP left-adjoined to Agr2-1. Then the language generated with (186(a)) will be (187(a)) and the one generated with (186(c)) will be (187(b)). As we can see, (187(b)) is not a subset language of (187(a)), for s v advp o is not in (187(a)).

(187) (a) [ s v advp, s v o advp, v advp s, v o advp s ]
(b) [ s v advp, s v o advp, s v advp o ]

The general observation is that, if none of the movements in our system can ever be string-vacuous, the following two statements will be true.

(188) (a) Given two settings P1 and P2 with L1 and L2 being the respective languages they generate, P1 ≺ P2 in the enumeration if L1 ⊂ L2.
(b) If P1 ⊀ P2 and P2 ⊀ P1, then L1 ⊄ L2 and L2 ⊄ L1.

5.2.3 The Learning Algorithm

We can now describe our learning algorithm as follows.

(189) (I) Get all value combinations in a given parameter space and place them in a list P.
(II) Sort P according to the precedence rules in (176) and get the partially ordered list P_ord as output.
(III) Start learning language L:
(i) Select any string S from L and try to parse S.
(ii) If S is successfully parsed, go back to (i). Otherwise, reset the parameters to the first value combination P1 in P_ord, remove P1 from P_ord, and go to (i).
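Before turning to the implementation, here is a hypothetical Prolog sketch of the failure-driven loop in (189). It assumes a predicate parse/2 that succeeds iff a string is analyzable under a given setting, and it adopts one reading of step (ii), namely that the learner keeps advancing through P_ord until the triggering string becomes parsable; the actual program is the one in Appendix A.4, and the predicate names here are invented.

% step(+String, +Setting0, -Setting, +Ord0, -Ord): process one input string.
step(String, Setting, Setting, Ord, Ord) :-
    parse(String, Setting), !.         % parsed: keep the current setting
step(String, _, Setting, [P|Ps], Ord) :-
    step(String, P, Setting, Ps, Ord). % failure: try the next hypothesis

% session(+Strings, +Setting0, +Ord0, -Final): thread the current setting
% and the remaining hypothesis list through a whole learning session.
% If the target language lies outside the parameter space, the hypothesis
% list is eventually exhausted and the call fails ("stuck in (ii)").
session([], Setting, _, Setting).
session([S|Ss], Setting0, Ord0, Final) :-
    step(S, Setting0, Setting1, Ord0, Ord1),
    session(Ss, Setting1, Ord1, Final).

Note that the sketch is incremental in exactly the sense discussed below: each call to step/5 consults only the current setting, the current string, and the remaining hypotheses.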
If L is in the given parameter space, the learning process will eventually stay in (i) and never leave it. At this point, we can generate L_t, which is the set of strings that can be successfully parsed with the current parameter setting. If L_t = L, then L is learnable: we have converged on the correct value combination. If L ⊂ L_t, then L is not learnable: we have converged on a superset setting for the grammar of L. If L is not in the given parameter space at all, P_ord will eventually become empty and the learning process will get stuck in (ii).

The Prolog program that implements the algorithm in (189) is given in Appendix A.4. The ordered list of parameter settings is computed off-line using the get_settings/0 predicate.⁹ (The next setting to be tried at each point can also be computed on-line, and the result will be the same. The off-line computation just makes the calculation simpler and the execution of the learning procedure more efficient.) The learning session is initiated by calling sp/0, which keeps putting out the "Next?" prompt, at which we can type in:

a. a string from the target language;
b. "current_setting" to have the current setting displayed;
c. "generate" to get the complete set of strings that can be generated with the current setting;
d. "initialize" to put the learner back to the initial stage; or
e. "bye" to terminate the session.

Footnote 9: Predicates in Prolog are referred to by the form X/Y, where X is the predicate name and Y is the number of arguments of the predicate.
Appendix D contains a number of Prolog sessions. D.1 and D.2 illustrate the process in which the language [ s (often) iv, s (often) tv o, s (often) o tv, o s (often) tv ] (which can be Chinese) is acquired with the program in Appendix A.4. The input strings are numbered "%1", "%2", ... in the two sessions. The successive settings are numbered "%a", "%b", etc. In D.1, the strings %1 and %2 can be parsed with the initial setting, so the parameter values remain unchanged. Each of the strings in %3 through %18 triggered a resetting of the parameters. After each resetting, the "generate" command is given to have all the strings accepted by the current setting generated, so that we can see the language that the learner "speaks" at that particular point. The learner converged on the correct setting at %18, after which all of the possible strings in the language (%19, %20, %21, %22, %23, %24, %25 and %26) became analyzable and no further resetting was triggered. As the output of "generate" shows, the current language is exactly the language we have tried to acquire, there being neither overgeneration nor undergeneration. In the D.2 session, the input strings were presented in a different order, but the final result is the same.

5.2.4 Properties of the Learning Algorithm

Several comments can be made on the learning sessions described above.

(190) (i) The learner is able to converge on the correct setting on the basis of positive evidence only. Every input string presented to the learner is a grammatical sentence in the language. This is possible because our learning procedure respects the Subset Principle.
(ii) The learning procedure is incremental. All resetting decisions are based on the current setting and the current input string only. The learner does not have to remember any of the previous strings, nor does she have to memorize the previous settings.¹⁰

(iii) The convergence does not depend upon any particular ordering of the input strings. The string to be presented at the next prompt can be selected randomly. The sessions in D.1 and D.2 differ in terms of the order in which the input strings are presented, but their final outcome is the same. The learner does require, however, that some crucial sentence types (the distinguishing subset of the data set) be presented more than once. In D.1 the string [ s tv o ] was presented ten times, and eight of those presentations triggered resetting. These requirements are empirically plausible. Children do get exposed to some common sentence patterns over and over again.

(iv) The learner has to go through a number of intermediate grammars before she arrives at the correct setting. The intermediate settings that are traversed in the learning process can vary according to the way input strings are presented. At a given point in the learning process, different input strings can cause the parameters to be reset to different values. A comparison of the sessions in D.1 and D.2 shows this.

Footnote 10: One may argue that the learner does have to memorize the previous settings in the sense that no setting that has been tried in the past can be tried again. But in most cases the learner can tell from the current setting which settings have been previously tried. As the settings being tested have progressively more overt movements, all previous settings should have fewer S(M)-parameters set to 1. The only previously-tested settings she may have to remember are those in the same group. Those settings are in the same position in the partial order, so she cannot tell from the current setting which of the other settings in that group have been tried before.
For example, after arriving at the setting [ 0, 0, 0, 0, 0, 1, 1, 0 ] (%b in both sessions), the learner was presented different strings in the two sessions. In D.1, she was given [ s tv o ], which triggered the setting [ 1, 0, 0, 0, 0, 1, 0, 0 ]; in D.2, she was given [ o s tv ], which triggered a different setting: [ 0, 0, 0, 0, 0, 1, 1, 1 ]. Because the presentation of input strings is different in the two sessions, the intermediate settings are different. While most settings appeared in both sessions, some settings were traversed in one session only. We notice that there are fewer intermediate settings in D.2 and the learner converged on the same correct setting with fewer input strings. The general picture seems to be the following: given a set of input strings, there is a definite set of intermediate settings that can be traversed. In a particular learning situation, however, only a subset of these settings will be reached, and the members of this subset can vary according to how the input strings are presented sequentially.

(v) No Single Value Constraint (Clark 1988, 1990; Gibson and Wexler 1993) is imposed on the parameter-setting process. This constraint requires that, in the process of resetting parameters, the new setting and the current setting differ by one parameter value only. In other words, we can never reset two parameters at the same time. This constraint is not observed here. As we can see in the sessions, two successive settings can differ by more than one parameter value. In D.1, for instance, the settings in %a and %b differ by two values, while two later successive settings differ by five values. It is not the case, however, that the learner can arbitrarily choose the next setting. Nor is the learner non-conservative. In fact,
the learner always tries to make the current input string analyzable by making the smallest change in the parameter values. However, the degree of change is determined by the principle of Procrastinate rather than by the absolute number of parameters being reset.

(vi) No Pendulum Problem (Randall 1990, 1992) exists for the current learning algorithm. This refers to the problem where a parameter is constantly reset between two alternative values V1 and V2: one piece of evidence triggers the resetting from V1 to V2, and another piece sets it back to V1. This does not happen in our system, and the reason is obvious. The learner proceeds through the list of hypothetical settings in a unidirectional way. Once a setting is considered incorrect, it will not be considered again. Such determinism is made possible by the fact that the learner is conservative and would never entertain settings with more overt movements if settings with fewer overt movements had not been considered yet. Therefore, once a setting is reached, there is no need to consider settings that require fewer overt movements.

5.2.5 Learning All Languages in the Parameter Space

The sessions in D.1 and D.2 have only shown that at least some language is learnable in our present system. We have observed in 5.2.2 that every language in this parameter space should be learnable. To have an additional proof for the points made in 5.2.2, I ran a learning session for each of the languages. This exhaustive testing can be performed by calling learn_all_langs/0, which is defined in the Prolog program in Appendix A.4. We first use get_pspace/0 to get (a) the ordered list of parameter settings (done by calling get_settings/0) and (b) all
the possible languages in the parameter space (done by calling get_languages/0). The learn_all/1 predicate then feeds the languages one by one into learnl/1, which conducts a learning session for each particular language. The real work is done by learn/1, which keeps resetting the parameters until all the strings in the language become parsable. At this point, generate/1 is called to get the complete set of strings generated with the current setting. A language is learnable if the set of strings generated is exactly the target language presented to the learner, and not learnable if it is a superset of the target language.¹¹

In D.3 and D.4, we find two Prolog sessions run with learn_all_langs/0. The two sessions differ in that often does not appear in the input strings in D.3 while it does appear in D.4. In D.3, all languages are learnable except two: [ o tv s ] and [ o s tv ]. We have observed in Chapter 4 that these are the only two languages which do not have intransitive sentences. Our syntactic model should be made more restrictive so that such languages are not generated at all. Furthermore, these two languages are not really unlearnable. They cannot be correctly identified in D.3 simply because there is not enough information in the input strings to distinguish them from other languages. As we can see, they are correctly identified in D.4, where all the languages, including these two odd ones, are proven to be learnable. This is because the appearance of often in D.4 has made some otherwise indistinguishable languages distinct from each other.

Footnote 11: A language is also not learnable if no setting in the parameter space can make every string in this language analyzable. But this will not happen here, because such a language would fall outside the given parameter space and thus not be considered a possible language to be acquired in the first place.
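The check that learnl/1 performs after convergence can be pictured with the following hypothetical sketch, which simply compares the converged language L_t with the target L; the predicate names and the list representation are assumptions made for illustration, not the code of Appendix A.4.

% verdict(+Target, +Generated, -Result): classify the outcome of a session.
verdict(L, Lt, learnable) :- seteq(L, Lt), !.      % L_t = L
verdict(L, Lt, superset)  :- subset_of(L, Lt), !.  % L properly included in
                                                   % L_t: converged on a
                                                   % superset setting
verdict(_, _, no_convergence).   % unreachable once every target string
                                 % parses, since then L is a subset of L_t

seteq(A, B) :- msort(A, S), msort(B, S).
subset_of(A, B) :- forall(member(X, A), memberchk(X, B)).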
One thing we notice in these sessions is that, while all the languages in the parameter space are learnable, not every setting in the parameter space can become a final setting for some language. This is not a surprise, though. We have seen in Chapter 4 that the relationship between settings and languages can be many-to-one. A single language can be generated with more than one setting. In the learning process, however, only one of the possible settings will be converged on as the final setting for a language. As we have seen in B.2, the language [ s iv, s tv o ] can be generated with 32 different settings. The setting which the learner has converged on in the D.1 session is the initial setting. Once this setting is in place, all the strings in the language will be parsed successfully and no resetting will be considered any more. In D.4, the appearance of often made four pure SVO languages distinct from each other:

[ (often) s v (o) ]
[ s (often) v (o) ]
[ s v (often) (o) ]
[ s (often) v (o), (often) s v (o) ]

The final settings reached for these languages by the learner are, respectively,

[ 0, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 0, 0, 0, 1, 0, 0 ]
[ 1, 1, 1, 1, 0, 1, 0, 0 ]
[ 0, 0, 0, 0, 0, 1/0, 0, 0 ]

Looking at all the possible settings for each of these languages (i.e. #1, #2, #12 and #20 in B.3), we see that the final setting reached for each language is always the setting which is ordered first among the alternative settings by the precedence rules in (176). In general, the learner always tries to select the setting with the fewest overt movements possible to fit the current data. Further movements are
considered only if there is evidence that the current setting is inadequate. Given the string [ s iv ] or [ s tv o ], the learner will stay with the initial setting, but a new string like [ s tv often o ] will make her reset the parameters to [ 0, 0, 0, 0, 0, 1, 0, 0 ].

Similar tests to those in D.3 and D.4 have also been run on parameter spaces where some S(F)-parameters are set to 1-0 so that auxiliaries appear. Due to space limitations, these sessions are not given in the appendices.¹² Those tests show that our current learning algorithm works just as well in acquiring languages with auxiliaries and grammatical particles. When some functional heads are spelled out as auxiliaries, head movement may be instantiated as either verb movement or auxiliary movement. Consequently, the number of S(M)-parameter settings which are syntactically meaningful is greater, as we have seen in 4.2.2. However, the picture remains unchanged as far as the existence of subset relations among languages is concerned. The sources of subset relations are still overt Ā-movement and optional movement. Therefore, all the languages in this greater parameter space can still be correctly identified by using the ordering algorithm in (176) and the learning algorithm in (189).

The results we get here may at first seem too good to be true. The learning task we have accomplished so far is very similar to the one attempted in Wexler and Hamburger (1973) and Wexler and Culicover (1980). Both assume a universal base of phrase structure rules and try to relate every natural language to this universal base through transformations. However, while they proved that Degree-2 sentences are required for the success of the learning task, we have managed to succeed with Degree-0 sentences. Did they make a mistake in their proofs, or have we found a

Footnote 12: The test sessions are available upon request.
better learning algorithm? The answer to both questions is no. Things have changed, of course, but the crucial change which has made the difference is in the grammar rather than in the learning algorithm. The syntactic model we have developed here is very different from the Standard Theory of Chomsky (1965). There are several major differences:

(a) The base structure is more constrained and more universal in the current model. The variation in phrase structures has been restricted by X-bar theory and further restricted by the removal of the specifier-head parameter.

(b) The transformations (i.e. the movements) which relate surface strings to the base structure are more restricted. In fact, the set of transformations is universal. The learner does not have to learn the transformations. He only has to decide whether a given movement must be visible.

(c) The choices a learner has to make in acquiring a language have been parameterized. The computation involved in searching for hypotheses has therefore been simplified.

(d) The economy principles in general and the principle of Procrastinate in particular have given different weights to different hypotheses, so that the decision as to which hypothesis to try next is very easy to make.

In short, the current grammar is more universal and the learner has less to learn. The success of the learning algorithm is therefore not a surprise.

We will conclude this section by observing a special property that the current learning algorithm has with regard to noisy input. As we have seen, the learner in our model takes every piece of input data seriously and treats it as a grammatical
sentence in the language. This in principle should cause problems when the input is degenerate. However, those problems are sometimes accidentally remedied in our current system. Ungrammatical input may trigger wrong settings, but as long as those settings are ordered before one of the possible target settings,¹³ the learner still has a chance to recover from the error. This is illustrated in D.5. The target language to be acquired in this session is the same as the one in D.1 and D.2, i.e. [ s (often) iv, s (often) tv o, s (often) o tv, o s (often) tv ]. The learner was presented those strings plus a number of strings which are not in the language (%2, %4, %6, %13, %18, %21, %28, %33, %37, %42, %46 and %47). The first eight deviant strings triggered some wrong settings, but the learner still managed to converge on one of the correct settings (the setting at %z). This is, however, the last target setting the learner can ever reach. If she for any reason leaves this state and tries some settings further down the list, there will be no more chance of convergence. In the D.5 session, the learner was presented more deviant strings after %z was reached. These strings made the learner adopt the settings %b1 and %c1, which generate superset languages of the target language. The deviant strings that appear after %c1 made things even worse. The learner eventually ran out of further hypotheses and the learning ended in failure.

The implications of this peculiar property of the learning algorithm are not clear. It may mean that the current system can provide a mechanism for language change. When the input data is perfect, only one of the possible settings for a language will be reachable. When the input contains deviant strings, however, alternative settings will be considered, as we have seen in D.5, where four of the possible settings (%l, %p, %v and %a1) are reached at some point of the learning

Footnote 13: Recall that there can be more than one setting which is compatible with a given language.
process. The languages generated with these settings are just weakly equivalent to the target language. Underlyingly, each of those settings can potentially generate a different language. At this stage, this account of language change is purely speculative. Much more careful work has to be done before it can be taken seriously. What is certain, however, is that the learning algorithm as it stands now is not robust enough. Further research has to be done to make it more empirically plausible.

5.3 Setting Other Parameters

In this section we discuss how the other two types of parameters - HD-parameters and S(F)-parameters - can be set together with S(M)-parameters. HD-parameters interact with S(M)-parameters in determining the word order of a language. We want to know whether the learning algorithm presented in the previous section can be modified to set HD-parameters as well as S(M)-parameters without losing its basic properties. The S(F)-parameters can be set independently using a separate algorithm.

5.3.1 Setting HD-Parameters

When HD-parameters are kept out of the picture, there is only one kind of parameter to reset when an input sentence is found to be syntactically unanalyzable in terms of word order. Now that both S(M)- and HD-parameters are available, we are given a choice: upon failing to parse an input string, we have to decide which type of parameter to reset. This may seem to be a problem. In Gibson and Wexler (1993) (G&W hereafter), which addresses a similar problem, some target languages are found to be unlearnable. The learning process ends in a local maximum
which is not the target setting but from which the learner cannot escape. The main contributor to this problem is the fact that there are two types of word-order parameters that can be set when a parsing failure occurs. The parameters in their parameter space are X-parameters, which are called HD-parameters in our model, and the V2-parameter, which is similar to our S(M)-parameters in that it also determines whether a certain movement is overt. When an input sentence fails to be analyzed by the current grammar, the learner can reset either the V2-parameter or one of the X-parameters, but not both. It is discovered in G&W that, with the Single Value Constraint and the Greediness Constraint, but no parameter ordering in any sense, the learner can get stuck in an incorrect grammar. We may wonder if the same problem will occur in our system, since we also have to set two types of parameters, either of which may be responsible for the word order of a language.

Upon further reflection, however, we realize that the situation here is very different from the one in G&W where local maxima occur. First of all, we only have one kind of X-parameter - the complement-head parameter - while both the specifier-head and complement-head parameters are present in the model G&W assume. They discovered that local maxima can be avoided if the value of the specifier-head parameter can be fixed before the V2-parameter and the complement-head parameter are set. But this condition is satisfied by default in our system, since there is no specifier-head parameter in our model at all. It is as if the specifier-head parameter were set before the other parameters are considered. According to G&W, this should be sufficient to prevent local maxima. Secondly, the Single Value Constraint is not assumed in our system. G&W have shown that local maxima can also be avoided by removing the Single Value Constraint. This is another reason why local maxima should not occur in our model. We do assume the Greediness
Constraint, though. G&W show that the removal of this constraint can help avoid local maxima as well, but it will leave the learning algorithm so unconstrained that the correct grammar can only be found by chance. The solution G&W find most plausible for the prevention of local maxima is parameter ordering. There are several ways of ordering the parameters, and they favor the one where X-parameters are set before the V2-parameter. We agree with them that movement operations are costly and should not be considered unless a simple flip of an X-parameter fails to solve the problem. We will therefore basically adopt this ordering hypothesis. When a change of parameter values is required, the learner is to try HD-parameters first. Resetting of S(M)-parameters is attempted only if the resetting of HD-parameters fails to make the input string syntactically analyzable.

However, the actual implementation of the ordering has to be more sophisticated than the one suggested in G&W. While there is only one "movement parameter" - the V2-parameter - with two possible values in G&W's model, we have eight movement parameters with 864 possible value combinations. The values of the two HD-parameters in our system can interact with any of the 864 value combinations, producing a total of 3456 possible settings. In particular, we must allow the four possible value combinations of HD-parameters, [ i i ], [ i f ], [ f i ] and [ f f ], to interact with each value combination of S(M)-parameters. This can be achieved through the following algorithm.

(191) Given: an ordered list of S(M)-parameter settings P_ord.
(i) Initially set all S(M)-parameters to 0 and HD-parameters to any values.
(ii) Select any string S from the target language L and try to parse S.
(iii) If S is successfully parsed, go to (ii);
otherwise, go to (iv).
(iv) Reset the HD-parameters and try to parse S with the new setting. If S is successfully parsed, retain the new HD-parameter setting and go back to (ii); if none of the settings of HD-parameters results in a successful parse of S, reset the S(M)-parameters to the first value combination P1 in P_ord, remove P1 from P_ord, and go back to (ii).

Intuitively, what the above algorithm does is the following. For each of the S(M)-parameter settings, try combining it with any of the four HD-parameter settings. If none of the four makes the input string analyzable, then pick the next S(M)-parameter setting from the ordered list and try the combinations again. In other words, each setting in the ordered list is expanded into four different settings. The setting [ 1, 1, 0, 0, 0, 1, 1, 0 ], for example, shows up as the following four settings:

(192) [ 1, 1, 0, 0, 0, 1, 1, 0, i, i ]
[ 1, 1, 0, 0, 0, 1, 1, 0, i, f ]
[ 1, 1, 0, 0, 0, 1, 1, 0, f, i ]
[ 1, 1, 0, 0, 0, 1, 1, 0, f, f ]

This being the case, we can have an alternative algorithm where some of the steps in (191) are "compiled" into the ordered list of parameter settings. We can expand the ordered list of S(M)-parameters in Appendix C by turning each setting in the list into a group of four settings, each having a different value combination of HD-parameters. The ordering of settings in the original list now becomes the ordering of groups of settings, with the original partial order preserved. Within each group, the four settings can be ordered in any way; a sketch of this expansion follows.
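A hypothetical Prolog sketch of the expansion is given below. The HD-values i and f and the list representation follow the text, but the predicate names are invented for illustration and do not come from the appendices.

% expand_setting(+SM, -Group): pair one S(M)-setting with each of the four
% HD-parameter value combinations, in the (arbitrary) order of (192).
expand_setting(SM, Group) :-
    findall(Full,
            ( member(HD, [[i,i], [i,f], [f,i], [f,f]]),
              append(SM, HD, Full) ),
            Group).

% expand_list(+Ordered, -Expanded): "compile" the HD-parameters into the
% ordered list, preserving the original partial order group by group.
expand_list([], []).
expand_list([SM|Rest], Expanded) :-
    expand_setting(SM, Group),
    expand_list(Rest, Tail),
    append(Group, Tail, Expanded).

For example, expand_setting([1,1,0,0,0,1,1,0], G) yields exactly the four settings listed in (192).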
For convenience, we will order them arbitrarily in the order shown in (192). This ordering has the implication that the default setting for HD-parameters is head-initial. This does not have to be correct, however, and the success of our learning algorithm does not depend on this arbitrary choice. The learning algorithm will work the same way no matter how these settings are ordered within each group, because the four languages generated with the four settings of a single group never properly include each other.

The outcome of the above expansion is obvious. Due to the length of this expanded list, we are unable to put it in the appendices. But the beginning of the list is given below to give the reader some concrete idea of what the new list looks like.

[ 0 0 0 0 0 0 0 0 i i ]
[ 0 0 0 0 0 0 0 0 i f ]
[ 0 0 0 0 0 0 0 0 f i ]
[ 0 0 0 0 0 0 0 0 f f ]
[ 1 0 0 0 0 0 0 0 i i ]
[ 1 0 0 0 0 0 0 0 i f ]
[ 1 0 0 0 0 0 0 0 f i ]
[ 1 0 0 0 0 0 0 0 f f ]
[ 0 1 0 0 0 0 0 0 i i ]
[ 0 1 0 0 0 0 0 0 i f ]
[ 0 1 0 0 0 0 0 0 f i ]
[ 0 1 0 0 0 0 0 0 f f ]
[ 0 0 1 0 0 0 0 0 i i ]
[ 0 0 1 0 0 0 0 0 i f ]
[ 0 0 1 0 0 0 0 0 f i ]
[ 0 0 1 0 0 0 0 0 f f ]
...

With this list in place, we can use the simple algorithm in (189) to set both the S(M)-parameters and the HD-parameters. As a result, the Prolog program in Appendix A.4 can be used, with very little modification, to set both types of parameters.

Testing sessions have been run with this new list of parameter settings. The session in D.6 illustrates how an individual language can be acquired using the sp/0 predicate. The target language in this case is

[ s iv aux, s o tv aux, o s tv aux ]

The parameters being set here are the 8 S(M)-parameters and the 2 HD-parameters. All S(F)-parameters are constantly set to 0-0 except S(F(tns)), which is constantly set to 1-0 to allow the occurrence of some auxiliaries. We can find in this session all the learnability properties listed in (190). The only thing new is the setting of HD-parameters.

The learnability of all the languages in this expanded parameter space was exhaustively tested using the learn_all_langs/0 predicate. Two sessions (one with often and one without) were run with all S(F)-parameters constantly set to 0-0, and another two sessions with S(F(tns)) set to 1-0. Auxiliaries appear when S(F(tns)) is set to 1-0. The results are again very similar to those obtained when HD-parameters have fixed values. There are a few languages which are not learnable. These languages are again [ o tv s ] and [ o s tv ], which are odd
in not having intransitive sentences. When S(F(tns)) is set to 1-0, the unlearnable languages are [ o s tv ], [ o s tv aux ] and [ o tv s aux ]. No language is unlearnable when the position of often is taken into consideration. The reason why these languages are unlearnable without the appearance of often has been discussed earlier. The log files of these sessions are not included in the appendices for reasons of space, but they are available to anyone who wants to see the actual process.

In sum, the fact that there are two kinds of word-order parameters in our system does not seem to create any problem for learnability. The success of parameter setting is attributable to parameter ordering, the absence of the Single Value Constraint, and the non-existence of specifier-head parameters. It is important to note that the parameter ordering here is not artificially or arbitrarily imposed on the learning system. It is derivable from a general linguistic principle, namely the principle of Procrastinate. This principle not only tells us how the S(M)-parameters should be ordered but also explains why we should try resetting HD-parameters before resetting S(M)-parameters. Thus the learning algorithm is linguistically motivated.

5.3.2 Setting S(F)-Parameters

As mentioned above, the S(F)-parameters can be set independently on the basis of morphological evidence only. We have assumed that each S(F)-parameter can have two sub-parameters, represented as F-L, with the value of F (1 or 0) determining whether the F-feature is spelled out and the value of L (1 or 0) determining whether the L-feature is spelled out. There are therefore four value combinations for each S(F)-parameter: 0-0, 0-1, 1-0 and 1-1. The two sub-parameters in each
parameter can again be set independently. The value of F is based on the morphological properties of function words (such as auxiliaries) only, and the value of L on the morphology of content words (such as nouns and verbs) only. What we do in parameter setting is the following. We have to look at the function words, if there are any, and examine their morphological make-up. We know that no auxiliary can appear unless F is set to 1 in at least one of the S(F)-parameters. So the appearance of an auxiliary in the input string tells us that at least one S(F)-parameter is set to 1-L. (L has an independent value.) Exactly how many S(F)-parameters should be set to 1-L depends on how much information the auxiliary carries. If it is inflected for tense and agreement, for instance, then both S(F(tns)) and S(F(agr)) will be set to 1-L. We also have to look at the content words and see what features are morphologically represented. For example, a noun inflected for case will tell us that S(F(case)) is set to F-1. (F has an independent value.)

It has been assumed that there is a separate learning module which is responsible for finding out whether a word is morphologically inflected and what features are represented by the inflection. The learner in our system takes the output of this module as its input. The input in our case consists of symbols of the form C-F, where C is a category label such as s, o, iv, tv and aux, and F is a list which contains zero or more features.¹⁴ A feature appears in the list only if it is morphologically realized in the language in question. The list is empty if a word carries no overt inflectional morphology. Here is a sample string.

(194) [ s-[] aux-[agr,tns] iv-[asp] ]

Footnote 14: The only exception is often, which will appear by itself without a feature list attached to it.
The string in (194) represents a sentence which consists of the following "words" from left to right: a subject NP with no inflection, an auxiliary inflected for agreement and tense, and an intransitive verb inflected for aspect. The features in the feature list of aux are overt F-features, and the one in the feature list of iv is an overt L-feature. We assume that the learner is able to get the representation in (194) using some independent learning strategies. For instance, she is supposed to be able to conclude from words like do, does, did and have, has, had that English auxiliaries are inflected for agreement and tense. Thus the main auxiliary in English should be represented as aux-[agr,tns]. Once representations like the one in (194) are available, the setting of S(F)-parameters is straightforward. For instance, the string in (194) tells us that, in this language, S(F(agr)) and S(F(tns)) are set to 1-L while S(F(asp)) is set to F-1.

At first sight, we might think that there exist relations of proper inclusion in the morphological patterns presented here. For instance, aux-[tns] may seem to be properly included in aux-[agr,tns]. A second thought tells us that this is not true. A feature appears in the list if and only if this feature is morphologically visible. Therefore, aux-[tns] is acceptable to the parser only if S(F(agr)) is set to 0-L, and aux-[agr,tns] is acceptable only if S(F(agr)) is set to 1-L. There is no setting with which both aux-[tns] and aux-[agr,tns] can be grammatically analyzed. If S(F(agr)) is originally set to 1-L, the analysis of aux-[tns] will end in failure, which will trigger the resetting of this parameter to 0-L; likewise, aux-[agr,tns] cannot be analyzed with S(F(agr)) set to 0-L, which will cause it to be reset to 1-L. No setting in the parameter space of S(F)-parameters is a proper subset of another. Because of this, the failure-driven learning algorithm we have assumed will always succeed in setting the S(F)-parameters on the basis
of positive evidence only. No parameter ordering or default setting is necessary. However, in view of the fact that inflectional morphology is generally absent in children's speech at the initial stages of language acquisition, we will assume that all S(F)-parameters are initially set to 0-0. They will retain this value if no overt inflectional morphology is found in the target language. If the target language does have overt morphology, they will be reset to 0-1, 1-0 or 1-1 when the inflections are detected and analyzed by children in the acquisition process. Notice that this "initial-0" hypothesis is empirically based rather than computationally based. We assume this in order to make our learning algorithm more natural, but the success of our algorithm does not depend on this hypothesis.

Now that we have both the morphological parameters (i.e. S(F)-parameters) and the word-order parameters (i.e. S(M)- and HD-parameters), we have to decide which parameters to reset first in the event of a parsing failure. A parsing failure may occur because

(a) some S(F)-parameter(s) has the wrong value;
(b) some S(M)- or HD-parameter(s) has the wrong value; or
(c) both S(F)-parameters and S(M)/HD-parameters have wrong values.

Whether an S(F)-parameter has the wrong value is easy to check. There is a one-to-one mapping between the parameter values and the features in the feature lists: a feature appears in the list if and only if its S(F)-parameter is set to 1. We find an error whenever there is a mismatch: the feature is in the list but its S(F)-parameter is set to 0. In Case (a), the input sentence should become parsable once the wrongly set S(F)-parameter is reset to 1. No resetting of S(M)/HD-parameters is necessary. In fact, we will never be able to process the input if we reset S(M)/HD-
parameters instead of S(F)-parameters. This suggests that S(F)-parameter values should be checked first in the resetting. In Case (b), we will not find a mismatch between the overt features and the S(F)-parameter values. The sentence will not become parsable unless some S(M)/HD-parameters are reset. However, we will not know whether there is a mismatch unless we check the S(F)-parameter values first. Resetting S(M)/HD-parameters before checking S(F)-parameters is harmless in this case, but it will be fatal in Cases (a) and (c). In Case (c), as in Case (a), we will never make the input interpretable by resetting S(M)/HD-parameters alone. We have to check the values of S(F)-parameters first. We will find some error and reset the parameters. In Case (c), the input sentence will remain unparsable after the resetting, but now we are sure that the problem lies in the values of the S(M)/HD-parameters. In each of the three cases, we see that it is always safe to check the S(F)-parameters first. The resetting of S(M)/HD-parameters can be dangerous if the errors in S(F)-parameter values are not found before the resetting starts. All this indicates that we must check the values of S(F)-parameters before initiating the parameter-setting algorithm for S(M)/HD-parameters. The algorithm that checks S(F)-parameter values can be described as follows.

(195) Go through the feature list of every word in the input sentence and do the following whenever a feature is encountered in the feature list. For an F-feature (which is found on an auxiliary or grammatical particle in our system), leave the S(F)-parameter value of this feature untouched if it is already set to 1-L; otherwise reset it to 1-L. For an L-feature (which is found on nouns and verbs), leave the S(F)-parameter value of this feature untouched if it is already set to F-1; otherwise reset it to F-1.
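A hypothetical Prolog rendering of (195) might look as follows. It assumes that a word is a Cat-Features pair, as in (194), and that the current S(F)-values are kept in a list of Feature-(F-L) pairs such as [agr-(1-0), tns-(1-0), asp-(0-1)]; these representations and predicate names are assumptions made for this sketch, not the Appendix A.6 code.

% check_sf(+Words, +Values0, -Values): scan every feature list in the
% input and force the corresponding S(F)-value, as described in (195).
check_sf([], Values, Values).
check_sf([often|Words], Values0, Values) :- !,   % often carries no features
    check_sf(Words, Values0, Values).
check_sf([Cat-Feats|Words], Values0, Values) :-
    side(Cat, Side),
    force(Feats, Side, Values0, Values1),
    check_sf(Words, Values1, Values).

side(aux, f) :- !.   % F-features live on auxiliaries and particles
side(_, l).          % L-features live on content words (s, o, iv, tv)

% force(+Feats, +Side, +V0, -V): set the F (or L) sub-parameter of each
% overt feature to 1, leaving it untouched if it already has that value.
force([], _, V, V).
force([Feat|Fs], Side, V0, V) :-
    selectchk(Feat-(F0-L0), V0, V1),
    (  Side == f -> F = 1, L = L0
    ;  F = F0, L = 1
    ),
    force(Fs, Side, [Feat-(F-L)|V1], V).

On the sample string in (194), written [s-[], aux-[agr,tns], iv-[asp]], this sketch moves agr and tns to 1-0 and asp to 0-1 from an all-0-0 start, matching the discussion of that example above.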
Embedding this sub-algorithm in our general acquisition procedure, we now have the following learning algorithm:

(196) Given: an ordered list of value combinations of S(M)- and HD-parameters, P_ord.
(i) Initially set all S(M)-parameters to 0, all HD-parameters to i, and all S(F)-parameters to 0-0.
(ii) Select any string S from the target language L and try to parse S with the current setting.
(iii) If S is successfully parsed, go to (ii); otherwise, go to (iv).
(iv) Check the values of the S(F)-parameters using the sub-algorithm in (195) and parse S again. If S is successfully parsed, go to (ii); otherwise, go to (v).
(v) Reset the S(M)/HD-parameters to the first value combination P1 in P_ord and remove P1 from P_ord. Go back to (ii).

The Prolog program that implements this new algorithm can be found in Appendix A.6. We can again use sp/0 to set the parameters on-line for a given language, learnl/1 to find out whether a given language is learnable, and learn_all_langs/0 to check whether all the languages in the parameter space are learnable. The search space in this case is huge, comprising tens of thousands of possible languages. Exhaustive testing by computer has shown that all the languages in this big parameter
space are learnable,¹⁵ but we do not have to depend on the computer search in order to know that this is true. We know that the S(M)- and HD-parameters can be set correctly by themselves for every language in the parameter space. We also know that the S(F)-parameters can be set correctly on their own as well. Since there is no value dependency between S(F)-parameters and S(M)-/HD-parameters, the combination of those parameters does not result in any additional complexity. We can thus conclude that learnability is guaranteed for all the languages that can be generated in our experimental model.

5.4 Acquiring Little Languages

In this section we examine how the parameters are set for some little languages. Due to the huge number of languages in the parameter space, it is not possible to put the log file of running learn_all_langs/0 in the appendices. To see the behavior of the learner in this bigger parameter space, we will look at some Prolog sessions where the learnability of several individual languages is tested. The languages being tested are Little English, Little Japanese, Little Berber, Little Chinese, Little German and Little French, which are small subsets of the corresponding languages. We will run learnl/1 on Little English, Little Japanese, Little Berber and Little Chinese to test their learnability, and then run sp/0 on Little French and Little German to see the on-line processes whereby these two languages are acquired.

Footnote 15: Again there are those odd languages which are learnable only if often appears in the input strings.
5.4.1 Acquiring Little English

The portion of English to be acquired is (197).

(197) Little English:
{ s-[c1] aux-[agr,tns] (often) iv-[asp],
  s-[c1] aux-[agr,tns] (often) tv-[asp] o-[c2] }

These strings are instantiated by the English sentences He was reading and He was chasing her. The copula BE is treated as an auxiliary carrying agreement and tense information. It is a visible T0 which has moved to Agr1-0 before Spell-Out. Aux-[agr,tns] can be spelled out as HAVE, BE or DO in English. The choice of auxiliary in a particular sentence seems to be associated with the aspect feature of the sentence. It is spelled out as BE with progressive aspect, HAVE with perfective aspect, and DO with zero aspect. Here is the Prolog session where Little English is acquired.

| ?- see(english), read(L), seen, learnl(L).

Trying to learn
[[s-[c1],aux-[agr,tns],iv-[asp]],
 [s-[c1],aux-[agr,tns],tv-[asp],o-[c2]],
 [s-[c1],aux-[agr,tns],often,iv-[asp]],
 [s-[c1],aux-[agr,tns],often,tv-[asp],o-[c2]]] ...

s(f(case)) is reset to 0-1
s(f(agr)) is reset to 1-0
s(f(tns)) is reset to 1-0
s(f(asp)) is reset to 0-1

Final setting:
[0 0 0 1 0 1 0 0 i i agr(1-0) asp(0-1) case(0-1) pred(0-0) tns(1-0) ]

Language generated:
[s-[c1],aux-[agr,tns],iv-[asp]]                 %1
[s-[c1],aux-[agr,tns],tv-[asp],o-[c2]]          %2
[s-[c1],aux-[agr,tns],often,iv-[asp]]           %3
[s-[c1],aux-[agr,tns],often,tv-[asp],o-[c2]]    %4

The language
[[s-[c1],aux-[agr,tns],iv-[asp]],[s-[c1],aux-[agr,tns],tv-[asp],o-[c2]],
 [s-[c1],aux-[agr,tns],often,iv-[asp]],[s-[c1],aux-[agr,tns],often,tv-[asp],o-[c2]]]
is learnable.

yes
| ?-

According to the final setting reached by the learner, Little English is a head-initial language where the subject NP moves to Agr1spec and T0 moves to Agr1-0. Agreement and tense features are spelled out on the auxiliary, the aspect feature is spelled out on the verb, and case features on the NPs. The four strings generated in %1, %2, %3 and %4 can be exemplified by the English sentences in (198), (199), (200) and (201).

(198) She is running.
(199) He is chasing her.
(200) He has often succeeded.
(201) She has often amused him.

5.4.2 Acquiring Little Japanese

The subset of Japanese to be acquired is (202).

(202) Little Japanese:
{ s-[c1] (often) iv-[tns],
  s-[c1] (often) o-[c2] tv-[tns],
  o-[c2] s-[c1] (often) tv-[tns] }
Instances of these strings can be found in (144), (145) and (146), repeated here as (203), (204) and (205).

(203) Taroo-ga ki-ta
Taroo-Nom come-Past
'Taroo came.'

(204) Taroo-ga tegami-o kai-ta
Taroo-Nom letter-Acc write-Past
'Taroo wrote a letter.'

(205) tegami-o Taroo-ga kai-ta
letter-Acc Taroo-Nom write-Past
'Taroo wrote a letter.'

Let us test its learnability by running learnl/1:

| ?- see(japanese), read(L), seen, learnl(L).

Trying to learn
[[s-[c1],iv-[tns]],[s-[c1],o-[c2],tv-[tns]],[o-[c2],s-[c1],tv-[tns]],
 [s-[c1],often,iv-[tns]],[s-[c1],often,o-[c2],tv-[tns]],
 [o-[c2],s-[c1],often,tv-[tns]]] ...

s(f(case)) is reset to 0-1
s(f(tns)) is reset to 0-1

Final setting:
[0 0 0 0 0 1 1 1 i i agr(0-0) asp(0-0) case(0-1) pred(0-0) tns(0-1) ]

Language generated:
[s-[c1],iv-[tns]]
[s-[c1],o-[c2],tv-[tns]]
[o-[c2],s-[c1],tv-[tns]]
[s-[c1],often,iv-[tns]]
[s-[c1],often,o-[c2],tv-[tns]]
[o-[c2],s-[c1],often,tv-[tns]]

The language
[[s-[c1],iv-[tns]],[s-[c1],o-[c2],tv-[tns]],[o-[c2],s-[c1],tv-[tns]],
 [s-[c1],often,iv-[tns]],[s-[c1],often,o-[c2],tv-[tns]],
 [o-[c2],s-[c1],often,tv-[tns]]]
is learnable.

yes
| ?-
According to the final setting, Little Japanese is a head-initial language where the subject NP and object NP move to Agr1spec and Agr2spec respectively. In addition, one of them must move further up to Cspec. The fact that Little Japanese is identified as a head-initial rather than a head-final language shows that a simple SOV string is not sufficient for the identification of a head-final language. There are alternative settings for Little Japanese among which are (206) and (207) where IP is head-final. But (206) is not chosen because it requires more overt movements, and (207) is not chosen because it requires optional movement.

(206) [ 1 0 0 0 0 1 1 1 i f agr(0-0) asp(0-0) case(0-1) pred(0-0) tns(0-1) ]

(207) [ 0 0 0 0 0 1 1 1/0 i f agr(0-0) asp(0-0) case(0-1) pred(0-0) tns(0-1) ]

5.4.3 Acquiring Little Berber

Little Berber consists of the strings in (208).

(208) Little Berber:
{ iv-[agr,tns] (often) s-[],
  s-[] iv-[agr,tns] (often),
  tv-[agr,tns] (often) s-[] o-[],
  s-[] tv-[agr,tns] (often) o-[] }
221
The 3rd and 4th strings in this set are exemplified by (152) and (153), repeated here as (209) and (210):

(209) i-ara hmad tabrat
      3ms-wrote Ahmed letter
      'Ahmed wrote the letter.'

(210) hmad i-ara tabrat
      Ahmed 3ms-wrote letter
      'Ahmed wrote the letter.'

I have no data on the position of often in Berber. Its position in Little Berber is purely hypothetical. Let us see how the parameters are set for this language:

| ?- see(berber),read(L),seen,learn1(L).
Trying to learn
[[iv-[agr,tns],s-[]],
 [s-[],iv-[agr,tns]],
 [tv-[agr,tns],s-[],o-[]],
 [s-[],tv-[agr,tns],o-[]],
 [iv-[agr,tns],often,s-[]],
 [s-[],iv-[agr,tns],often],
 [tv-[agr,tns],often,s-[],o-[]],
 [s-[],tv-[agr,tns],often,o-[]]] ...
s(f(agr)) is reset to 0-1
s(f(tns)) is reset to 0-1
Final setting:
[1 1 1 1 0 1/0 0 0 i i agr(0-1) asp(0-0) case(0-0) pred(0-0) tns(0-1) ]
Language generated:
[iv-[agr,tns],s-[]]
[s-[],iv-[agr,tns]]
[tv-[agr,tns],s-[],o-[]]
[s-[],tv-[agr,tns],o-[]]
[iv-[agr,tns],often,s-[]]
[s-[],iv-[agr,tns],often]
[tv-[agr,tns],often,s-[],o-[]]
[s-[],tv-[agr,tns],often,o-[]]
The language
[[iv-[agr,tns],s-[]],[s-[],iv-[agr,tns]],[tv-[agr,tns],s-[],o-[]],
[s-[],tv-[agr,tns],o-[]],[iv-[agr,tns],often,s-[]],[s-[],iv-[agr,tns],often],
[tv-[agr,tns],often,s-[],o-[]],[s-[],tv-[agr,tns],often,o-[]]] is learnable.
yes
| ?-
222
In this session, Little Berber is identified as a language where the verb must move to Agr1-0 and the subject NP can optionally move to Agr1spec. The verb is required to have its agreement and tense features spelled out.

5.4.4 Acquiring Little Chinese

Little Chinese consists of the following strings:

(211) Little Chinese:
{ s-[] (often) iv-[] aux-[asp] aux-[pred],
  s-[] (often) tv-[] o-[] aux-[asp] aux-[pred],
  s-[] (often) o-[] tv-[] aux-[asp] aux-[pred],
  o-[] s-[] (often) tv-[] aux-[asp] aux-[pred] }

Some actual instances of those strings are found in (212)-(215). The grammatical particle ma in the following sentences can be either a question marker or an affirmative marker depending upon the intonation. So they are all ambiguous in their romanized forms, having both an interrogative reading and a declarative reading. (They are not ambiguous when written in Chinese characters, as the two senses of ma are written with two different characters: "吗" and "嘛".)

(212) Ta lai le ma
      he come Asp Q/A
      'Has he come?' or 'He has come, as you know.'

(213) Ta kan-wan nei-ben shu le ma
      he finish-reading that book Asp Q/A
      'Has he finished reading that book?' or 'He has finished reading that book, as you know.'
223
(214) Ta nei-ben shu kan-wan le ma
      he that book finish-reading Asp Q/A
      'Has he finished reading that book?' or 'He has finished reading that book, as you know.'

(215) Nei-ben shu ta kan-wan le ma
      that book he finish-reading Asp Q/A
      'Has he finished reading that book?' or 'He has finished reading that book, as you know.'

Here is the session where Little Chinese is acquired.

| ?- see(chinese),read(L),seen,learn1(L).
Trying to learn
[[s-[],iv-[],aux-[asp],aux-[pred]],
 [s-[],tv-[],o-[],aux-[asp],aux-[pred]],
 [s-[],o-[],tv-[],aux-[asp],aux-[pred]],
 [o-[],s-[],tv-[],aux-[asp],aux-[pred]],
 [s-[],often,iv-[],aux-[asp],aux-[pred]],
 [s-[],often,tv-[],o-[],aux-[asp],aux-[pred]],
 [s-[],often,o-[],tv-[],aux-[asp],aux-[pred]],
 [o-[],s-[],often,tv-[],aux-[asp],aux-[pred]]] ...
s(f(asp)) is reset to 1-0
s(f(pred)) is reset to 1-0
Final setting:
[0 0 0 0 0 1 1/0 1 f f agr(0-0) asp(1-0) case(0-0) pred(1-0) tns(0-0) ]
Language generated:
[s-[],iv-[],aux-[asp],aux-[pred]]
[s-[],tv-[],o-[],aux-[asp],aux-[pred]]
[s-[],o-[],tv-[],aux-[asp],aux-[pred]]
[o-[],s-[],tv-[],aux-[asp],aux-[pred]]
[s-[],often,iv-[],aux-[asp],aux-[pred]]
[s-[],often,tv-[],o-[],aux-[asp],aux-[pred]]
[s-[],often,o-[],tv-[],aux-[asp],aux-[pred]]
[o-[],s-[],often,tv-[],aux-[asp],aux-[pred]]
The language
[[s-[],iv-[],aux-[asp],aux-[pred]],[s-[],tv-[],o-[],aux-[asp],aux-[pred]],
[s-[],o-[],tv-[],aux-[asp],aux-[pred]],[o-[],s-[],tv-[],aux-[asp],aux-[pred]],
[s-[],often,iv-[],aux-[asp],aux-[pred]],[s-[],often,tv-[],o-[],aux-[asp],aux-[pred]],
[s-[],often,o-[],tv-[],aux-[asp],aux-[pred]],[o-[],s-[],often,tv-[],aux-[asp],aux-[pred]]]
is learnable.
yes
| ?-
224
The learner identified Little Chinese as a head-final language where overt movement to Agrspec is obligatory for the subject NP and optional for the object. Cspec must be filled at Spell-Out by the subject, or by the object if it has moved to Agr2spec. The verb remains VP-internal while Asp0 and C0 are spelled out in situ.

5.4.5 Acquiring Little French

Here is the subset of French to be acquired.

(216) { s-[c1] iv-[tns,asp] (often),
        s-[c1] tv-[tns,asp] (often) o-[c2] }

It differs from Little English in that (i) there is no auxiliary, and (ii) often appears post-verbally. The strings in (216) are exemplified by (217) and (218).

(217) Il nage souvent
      he swims often
      'He often swims.'

(218) Il visite souvent Paris
      he visits often Paris
      'He often visits Paris.'

The following session shows how Little French is acquired on-line.

| ?- sp.
The initial setting is
[0 0 0 0 0 0 0 0 i i agr(0-0) asp(0-0) case(0-0) pred(0-0) tns(0-0) ]
Next? [s-[],iv-[]].                        %1
Current setting remains unchanged.
Next? [s-[c1],iv-[]].                      %2
Unable to parse [s-[c1],iv-[]]
Resetting the parameters ...
225
s(f(case)) is reset to 0-1
Successful parse.
Next? [s-[c1],iv-[tns]].                   %3
Unable to parse [s-[c1],iv-[tns]]
Resetting the parameters ...
s(f(tns)) is reset to 0-1
Successful parse.
Next? [s-[c1],iv-[tns,asp]].               %4
Unable to parse [s-[c1],iv-[tns,asp]]
Resetting the parameters ...
s(f(asp)) is reset to 0-1
Successful parse.
Next? [s-[c1],tv-[tns,asp],o-[c2]].        %5
Current setting remains unchanged.
Next? [s-[c1],iv-[tns,asp],often].         %6
Unable to parse [s-[c1],iv-[tns,asp],often]
Resetting the parameters ...
Word order parameters reset to: [ 1 1 1 1 0 1 0 0 i i ]
Successful parse.
Next? [s-[c1],tv-[tns,asp],often,o-[c2]].  %7
Current setting remains unchanged.
Next? current_setting.
[1 1 1 1 0 1 0 0 i i agr(0-0) asp(0-1) case(0-1) pred(0-0) tns(0-1) ]
Next? generate.
Language generated with current setting:
[s-[c1],iv-[tns,asp]]
[s-[c1],iv-[tns,asp],often]
[s-[c1],tv-[tns,asp],often,o-[c2]]
[s-[c1],tv-[tns,asp],o-[c2]]
Next? bye.
yes
| ?-
226
The presentation of input strings here is inconsistent with regard to overt morphology: %1 has no overt morphology whatsoever; %2 has overt case only; %3 has overt tense in addition to overt case; and %4 has all the overt morphology in Little French. This is done intentionally in order to mimic children's gradual acquisition of morphological knowledge. It is a common observation that children's morphological knowledge is not acquired instantaneously. At the early stages of language development, they may fail to notice part or all of the inflectional morphology in their target language. When this happens, the sentences which are actually being analyzed by children are strings where part or all of the overt morphology is missing. The string [s-[],iv-[]] thus represents a piece of input data whose inflectional markings are not noticed by children. As acquisition progresses, the morphology is gradually worked out and becomes available for syntactic analysis. In the above session, case morphology becomes visible at %2 where S(F(case)) is reset to 0-1. Tense and aspect morphology appears in the input (or rather intake) at %3 and %4 where S(F(tns)) and S(F(asp)) are successively set to 0-1. Up till %5, however, Little French has been treated as a language which has no overt movement. The SVO order found so far is compatible with the structure where every lexical item is in its VP-internal position. The trigger for a different setting comes at %6 where often appears. The new string cannot be parsed unless the subject moves to Agr1spec and the verb moves to Agr1-0. Hence the new setting. After that, every string in (216) is acceptable. The language generated by the current setting is exactly Little French. We thus conclude that this language has been correctly identified in the limit.
227
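To make the shape of such a session concrete, the following is a minimal sketch of the read-parse-reset loop behind sp/0. The helper predicates parse_with/2 and reset_parameters/3, and the representation of the setting, are hypothetical simplifications for illustration, not the actual code of Appendix A.4.

% A schematic on-line learner loop (hypothetical simplification).
% Setting is a term holding the current values of the S-parameters.
sp_loop(Setting) :-
    write('Next? '), read(Input),
    (  Input == bye
    -> true
    ;  parse_with(Setting, Input)              % current setting parses the string
    -> write('Current setting remains unchanged.'), nl,
       sp_loop(Setting)
    ;  reset_parameters(Setting, Input, New),  % minimal resetting that succeeds
       write('Successful parse.'), nl,
       sp_loop(New)
    ).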
5.4.6 Acquiring Little German

Little German is a V2 language which contains the following strings:

(219) { s-[c1] aux-[agr,tns] (often) iv-[asp],
        s-[c1] aux-[agr,tns] (often) o-[c2] tv-[asp],
        o-[c2] aux-[agr,tns] s-[c1] (often) tv-[asp],
        often aux-[agr,tns] s-[c1] o-[c2] tv-[asp] }

This is the small subset of German main clauses where the auxiliary is in second position and the verb is clause-final. The strings in (219) can be exemplified by the German sentences in (220), (221) and (222). (The word schon in these sentences is an instance of the "often"-type adverb.)

(220) Karl hat schon das Buch gekauft
      Karl has already that book bought
      'Karl has already bought that book.'

(221) Das Buch hat Karl schon gekauft
      that book has Karl already bought
      'That book, Karl has already bought.'

(222) Schon hat Karl das Buch gekauft
      already has Karl that book bought
      'Already Karl has bought that book.'

The following shows how Little German is acquired on-line.

| ?- sp.
The initial setting is
[0 0 0 0 0 0 0 0 i i agr(0-0) asp(0-0) case(0-0) pred(0-0) tns(0-0) ]
Next? [s-[],aux-[tns],iv-[]].              %1
Unable to parse [s-[],aux-[tns],iv-[]]
Resetting the parameters ...
228
s(f(tns)) is reset to 1-0
Word order parameters reset to: [ 0 0 0 0 0 1 0 0 i i ]
Successful parse.
Next? [s-[],aux-[tns],o-[],tv-[]].         %2
Unable to parse [s-[],aux-[tns],o-[],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 0 0 1 1 0 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[s-[],often,aux-[tns],iv-[]]
[s-[],often,aux-[tns],o-[],tv-[]]
[s-[],aux-[tns],iv-[]]
[s-[],aux-[tns],o-[],tv-[]]
Next? [o-[],aux-[tns],s-[],tv-[]].         %3
Unable to parse [o-[],aux-[tns],s-[],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 0 0 0 1 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,aux-[tns],o-[],s-[],tv-[]]
[often,aux-[tns],s-[],iv-[]]
[o-[],often,aux-[tns],s-[],tv-[]]
[o-[],aux-[tns],s-[],tv-[]]
Next? [s-[c1],aux-[tns],iv-[]].            %4
Unable to parse [s-[c1],aux-[tns],iv-[]]
Resetting the parameters ...
s(f(case)) is reset to 0-1
Word order parameters reset to: [ 1 0 0 0 0 1 0 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,s-[c1],aux-[tns],iv-[]]
[often,s-[c1],aux-[tns],tv-[],o-[c2]]
[s-[c1],often,aux-[tns],iv-[]]
229
[s-[c1],often,aux-[tns],tv-[],o-[c2]]
[s-[c1],aux-[tns],iv-[]]
[s-[c1],aux-[tns],tv-[],o-[c2]]
Next? [often,aux-[tns],s-[c1],iv-[]].      %5
Unable to parse [often,aux-[tns],s-[c1],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 1 0 0 0 0 0 1 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,aux-[tns],o-[c2],s-[c1],tv-[]]
[often,aux-[tns],s-[c1],iv-[]]
[o-[c2],often,aux-[tns],s-[c1],tv-[]]
[o-[c2],aux-[tns],s-[c1],tv-[]]
Next? [s-[c1],aux-[tns],iv-[]].            %6
Unable to parse [s-[c1],aux-[tns],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 1 0 0 0 1 0 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,s-[c1],aux-[tns],iv-[]]
[often,s-[c1],aux-[tns],tv-[],o-[c2]]
[s-[c1],often,aux-[tns],iv-[]]
[s-[c1],often,aux-[tns],tv-[],o-[c2]]
[s-[c1],aux-[tns],iv-[]]
[s-[c1],aux-[tns],tv-[],o-[c2]]
Next? [often,aux-[tns],s-[c1],iv-[]].      %7
Unable to parse [often,aux-[tns],s-[c1],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 1 0 0 0 0 1 1 i i ]
Successful parse.
Next? [o-[c2],aux-[tns],s-[c1],tv-[]].     %8
Current setting remains unchanged.
Next? [s-[c1],aux-[tns],iv-[]].
Unable to parse [s-[c1],aux-[tns],iv-[]]
230
Resetting the parameters ...
Word order parameters reset to: [ 0 0 1 0 0 1 0 1 i i ]
Successful parse.
Next? [s-[c1],aux-[agr,tns],iv-[]].        %9
Unable to parse [s-[c1],aux-[agr,tns],iv-[]]
Resetting the parameters ...
s(f(agr)) is reset to 1-0
Word order parameters reset to: [ 0 0 0 1 0 1 0 1 i i ]
Successful parse.
Next? [often,aux-[agr,tns],s-[c1],iv-[]].  %10
Unable to parse [often,aux-[agr,tns],s-[c1],iv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 1 0 0 1 1 i i ]
Successful parse.
Next? [o-[c2],aux-[agr,tns],s-[c1],tv-[]]. %11
Current setting remains unchanged.
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[]]. %12
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 1 0 1 1 1 i i ]
Successful parse.
Next? [o-[c2],aux-[agr,tns],s-[c1],tv-[]]. %13
Unable to parse [o-[c2],aux-[agr,tns],s-[c1],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 1 0 0 1 1 0 1 1 i f ]
Successful parse.
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[]].
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 1 0 0 1 0 1 1 1 i i ]
Successful parse.
Next? generate.
Language generated with current setting:
[often,s-[c1],aux-[agr,tns],iv-[]]
[often,s-[c1],aux-[agr,tns],o-[c2],tv-[]]
[o-[c2],s-[c1],aux-[agr,tns],often,tv-[]]
231
[o-[c2],s-[c1],aux-[agr,tns],tv-[]]
[s-[c1],aux-[agr,tns],often,iv-[]]
[s-[c1],aux-[agr,tns],often,o-[c2],tv-[]]
[s-[c1],aux-[agr,tns],iv-[]]
[s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[]].           %14
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[]]
Resetting the parameters ...
Word order parameters reset to: [ 0 0 0 1 1 1 1 1 i i ]
Successful parse.
Next? [s-[c1],aux-[agr,tns],o-[c2],tv-[asp]].        %15
Unable to parse [s-[c1],aux-[agr,tns],o-[c2],tv-[asp]]
Resetting the parameters ...
s(f(asp)) is reset to 0-1
Successful parse.
Next? [o-[c2],aux-[agr,tns],s-[c1],tv-[asp]].        %16
Current setting remains unchanged.
Next? [often,aux-[agr,tns],s-[c1],o-[c2],tv-[asp]].  %17
Current setting remains unchanged.
Next? current_setting.
[0 0 0 1 1 1 1 1 i i agr(1-0) asp(0-1) case(0-1) pred(0-0) tns(1-0) ]
Next? generate.
Language generated with current setting:
[often,aux-[agr,tns],s-[c1],iv-[asp]]
[often,aux-[agr,tns],s-[c1],o-[c2],tv-[asp]]
[o-[c2],aux-[agr,tns],s-[c1],often,tv-[asp]]
[o-[c2],aux-[agr,tns],s-[c1],tv-[asp]]
[s-[c1],aux-[agr,tns],often,iv-[asp]]
[s-[c1],aux-[agr,tns],often,o-[c2],tv-[asp]]
[s-[c1],aux-[agr,tns],iv-[asp]]
[s-[c1],aux-[agr,tns],o-[c2],tv-[asp]]
Next? bye.
yes
| ?-
232
The first input string is not compatible with the initial setting. It becomes analyzable when S(M(spec1)) is reset to 1 and S(F(tns)) is set to 1-0. The next string (%2) triggers the resetting of S(M(spec2)) to 1. The language generated at this point is an SOV language instead of a V2 language. The next input string (%3) informs the learner of the error and resetting occurs. The learner tries at first to identify the input language as a non-scrambling one, but without success. Resetting continues through %4, %5, %6, and %7. The setting at this point is able to accept a language where the Aux is in second position while the first position can be occupied by an Adverb or an Object. But no subject can appear clause-initially, which shows the setting is still incorrect. Therefore more resetting occurs until after the string at %15 is presented. Meanwhile, the recognition of morphological inflections causes S(F(case)) and S(F(asp)) to be reset to 0-1 whereas S(F(agr)) and S(F(tns)) are reset to 1-0. From this point on, every string in Little German is acceptable. In addition, the language generated with the current setting is not a superset of the target language. The learner has thus converged on the correct grammar for Little German.

5.5 Summary

We have seen in this chapter that the parameter space associated with our experimental grammar has certain desirable learnability properties. The languages generated in this parameter space are not only typologically interesting but learnable as well. There is at least one algorithm whereby the parameters can be correctly set for each language in the parameter space. The precedence rules employed in the learning algorithm, which are crucial for the success of learning, are derivable from a general linguistic principle, the principle of Procrastinate. This
233
makes our learning algorithm linguistically plausible as well as computationally feasible. There are many issues to which our acquisition model is potentially relevant, but they have not been addressed so far. We may want to examine the empirical implications of this model for language development. One of the questions we can ask is how the intermediate settings the learner goes through in our model relate to the developmental stages that children undergo. Our model may also be theoretically interesting to the study of language change. The existence of weakly equivalent languages in our parameter space may provide an explanation for the kind of reanalysis phenomena where the grammar changes without immediately affecting the language generated. Another issue we may want to pursue is how plausible our learning algorithm will be when the input is noisy and what modifications are needed to make the learning procedure more robust. All these issues are yet to be explored and a serious discussion of them requires much more work than has been done in this thesis.
234
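The role these precedence rules play can be pictured schematically: among the candidate settings compatible with the input, the learner prefers the one requiring the fewest overt (pre-Spell-Out) movements, which is the preference Procrastinate derives. The sketch below is a hypothetical illustration of this ranking only; parse_with/2 and count_overt_movements/2 are invented names, not predicates from the appendices.

% Hypothetical illustration of Procrastinate-based precedence:
% among the settings that can parse the input, pick the one
% requiring the fewest overt (pre-Spell-Out) movements.
prefer(Settings, Input, Best) :-
    findall(N-S,
            ( member(S, Settings),
              parse_with(S, Input),            % S is compatible with the input
              count_overt_movements(S, N) ),   % N overt movements under S
            Ranked),
    keysort(Ranked, [_-Best|_]).               % the minimal N comes first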
Chapter 6

Parsing with S-Parameters

When discussing the parameter space in Chapter 4 and the parameter setting algorithms in Chapter 5, we have assumed a parser which implements the experimental grammar defined in Chapter 3. In Chapter 4, the parser is used in the generation of all possible strings of all possible languages in our parameter space. In Chapter 5, it is used by the learner to process incoming sentences or generate all the strings in her language.¹ So far the parser itself has not received any discussion. It is the goal of this chapter to describe this parser.

The parser to be discussed implements our version of UG, namely, the experimental grammar we have assumed. It is not strictly speaking an axiomatic representation of the grammar, but it is equivalent to the grammar in that it accepts or rejects exactly the same set of sentences/languages as the grammar does. The parser is universal in the sense that it is capable of processing any language in the parameter space. In other words, the parsing procedures can be applied in a uniform fashion no matter which particular language is being processed. The only thing that can change from language to language is the parameter setting. This

¹The Prolog programs pspace.pl in Appendix A.1, sp.pl in Appendix A.4 and spl.pl in Appendix A.6 cannot run without this parser.
235
should be no surprise because the parser is in fact the embodiment of UG which does not vary except for the parameter values.

6.1 Distinguishing Characteristics of the Parser

Our discussion of the parser will be restricted to those properties of the parser which are absent in other parsers. Those properties are independent of the ways in which the parser might be implemented. They can be preserved no matter whether the parsing algorithm is top-down, bottom-up, left-corner, or any combination thereof. The reason why the parser we build here can be different from all other parsers is that the grammar used by the parser is different. The most salient feature which distinguishes our grammar from all other versions of UG is the uniform treatment of movement. In traditional grammars, most movements are S-structure movements. The movements that a parser deals with are only those movements which are overt. The parse trees built by the parser are either S-structure representations or combinations of S-structure and D-structure. In the latter case, certain nodes in the parse tree form chains. The constituent that moves is found at the head of the chain and the tail or foot of the chain consists of a trace. Since S-structure movements can vary from language to language, the chains to be built by the parser can vary according to which language is being parsed. Consequently, the chain-building procedures cannot be universally defined. We have to learn that an X0-chain starts at C0 in German but at Agr1-0 in French. We also have to learn that the A'-chain resulting from wh-movement must be built in English but not in Chinese.

Things are different in our present model. We have assumed that there is an underlying set of movements which occur in every language. All those movements
236
are LF movements in the sense that they are all necessary for the successful derivation of the LF representation. The derivation will crash if any of those movements fails to occur. This implies that the parser in our model must build an LF representation in order to find out if a sentence is grammatical. Looking at LF, we see that all languages are identical at this level of representation in that the movement chains found there are the same cross-linguistically. If a sentence is grammatical at all, then its LF representation must have those chains, no matter what language this sentence is from. This means that the process of chain-building can be defined universally as an inborn procedure. The parser goes ahead and constructs an invariant set of chains in a uniform way regardless of what language is being parsed. These chains are all LF chains. If the only representation we needed were LF, then nothing would have to be learned as far as chain-building is concerned.

LF is of course not the only structure the parser has to build. When we talk about a chain, at least two structural positions must be involved, one represented by the head of the chain and one by the tail. In terms of a chain resulting from LF movement, the head is the LF position of a constituent and the tail is the base position. The base position is the position where the constituent is generated through lexical projection. All the "content words" in our system are base-generated VP-internally. It is roughly the D-structure (DS) position in traditional terminology. Since every LF movement involves moving a constituent from its base position to the LF position, the chain formed by this movement is a simultaneous representation of both positions. When we say that the chains are universal, we in fact mean that both the LF positions and base positions are invariant. By saying that the chains can be built uniformly, we have actually concluded that the LF and "DS" representations of all languages can be constructed through a uniform
237
parsing procedure.

However, LF and DS are not the only representations at which the grammaticality of a sentence is determined. We also have to look at the positions where lexical items are spelled out. In traditional terminology, this representation is called S-structure (SS). We can borrow this term and use it to refer to the structure which is fed into the PF component. The "S" thus stands for "Spell-Out" rather than "surface". This is the structure which is subject to cross-linguistic variation. The source of variation in our model is the values of S(M)-parameters which determine which subset of LF movements must be performed before Spell-Out in a given language. The constituent involved in an LF movement appears at the head position of a chain at S-structure if it moves before Spell-Out. It appears at the tail position if it moves after Spell-Out. To find out if a sentence is grammatical in a given language, we have to take the S-structure positions of all constituents into consideration. In other words, the parse tree we build must represent SS in addition to LF and DS. This is not difficult to do. The LF and DS positions can as usual be represented by the chain, with its head invariably marking the LF position and the tail the DS position. The SS position can be indicated by the position of the lexical head in the chain. It must be at the head position if it moves before Spell-Out and it must be at the tail position if it moves after Spell-Out. We can thus represent LF, DS and SS in a single parse tree. Examples of such parse trees are given in (223) and (224).
238
[Parse tree (223): Subject spelled out in Agr1spec, Object in Agr2spec, and Verb in V0, yielding SOV order.]
239
[Parse tree (224): Verb spelled out in C0, with Subject and Object in their VP-internal positions, yielding VSO order.]
240
The chains in those parse trees are represented through coindexing and feature-sharing. There are three chains in each of the trees:

(i) A V-chain linking the nodes C0, Agr1-0, T0, Asp0, Agr2-0 and V0. All those nodes are assigned the same index: 1. They also share the following feature values: phi:X, tns:T, asp:A, and th:[agt,pat]. This chain is formed by successive head movements from V0 to C0. The variables in the chain, X, T and A, can be instantiated to things like 3sm (3rd person singular masculine), pres (present tense), prog (progressive aspect) in an actual sentence.

(ii) An NP chain linking the nodes Cspec, Agr1spec and the first Vspec. These nodes are coindexed 2 and they share the following feature values: case:1, op:+, phi:X, and theta:agt. This chain is formed by two successive movements of the subject NP: an A-movement from Vspec to Agr1spec and an A'-movement from Agr1spec to Cspec. The fact that both the phi feature in this NP chain and the phi feature in the V-chain have the value X indicates that the subject and the verb must agree with each other. The + value of the op feature in this NP chain shows that this NP can be the topic or focus of the sentence, or a QP which receives a wide-scope reading.

(iii) Another NP chain linking the nodes Agr2spec and the second Vspec. The index for these nodes is 3 and the feature values that are shared between them are case:2, op:-, phi:Y, and theta:pat. This chain is formed by movement of the object NP from Vspec to Agr2spec.

As far as these chains are concerned, (223) and (224) are identical. This should be the case because these chains are formed by LF movements which are universal.²

²Both (223) and (224) have an alternative representation where Cspec is coindexed with the object NP, which will then have the + value for the operator feature. This is possible when the object is understood to be the topic, focus, or wide-scope QP of the sentence.
241
What differentiates (223) and (224) are the positions of lexical items in these chains. The lexical items, which are actually pronounced, are represented by Subject, Object and Verb in those trees. In a real sentence they will be real words like he, him and likes. In (223), Subject is in Agr1spec, Object in Agr2spec, and Verb in V0. In (224), Verb is in C0, and Subject and Object are in their VP-internal positions. These positions tell us where Subject, Object and Verb are at the time of Spell-Out. In (223), Subject and Object have moved overtly to their case positions and the surface word order is SOV. In (224), Verb has overtly moved to C0 and the surface order is VSO. We get the tree in (223) when the S(M)-parameters are set to [ 0 0 0 0 0 1 1 0 ] and we get (224) when they are set to [ 1 1 1 1 1 0 0 0 ]. It is clear that three levels of representation (LF, DS and SS) are merged into one in those parse trees. The LF and DS positions are identical in the two trees but the SS positions are different.

In building trees of this kind, the parser is constructing three levels of representation at the same time. There are three things that the parser must do. It must (i) build the tree; (ii) build the chains; and (iii) decide where the lexical item appears in each chain. Chain-building is universal, as we have seen. The parsing procedures which are responsible for chain-formation can therefore be invariant. They can simply be hard-wired into the parser, since they do not respond to language variation at all. So this part of the parsing mechanism can be assumed to be innate.

Tree-building is universal in that the same set of nodes is built in a given type of sentence no matter what the language is. This is illustrated in (223) and
242
(224) which represent different languages but have the same nodes. Furthermore, the same set of dominance relations holds between those nodes in every language. Agr1P is always dominated by C', for example. However, linear precedence may vary according to the values of HD-parameters. When building CP and IP, the parser must be able to respond in two different ways. It has to build the phrase head-initially if the HD-parameter is set to i and build the phrase head-finally if the parameter is set to f. These alternative actions can again be built in. We can suppose that the parser is innately able to build a phrase in either way. In parsing a particular language, it has to receive instructions from the HD-parameters in order to decide which action to take.

The task of deciding where the lexical item appears in each chain consists of the following computation. The parser must determine for each terminal node in the tree whether this node must dominate a lexical item. In addition, it must make sure that only one node in each chain dominates a lexical item. Can these decisions again be made universally? The answer appears to be negative at first sight. Apparently, the action the parser takes at a given terminal node can vary cross-linguistically. In some languages the terminal node must dominate a lexical item while in some other languages it must be empty. It seems that we may need to specify the parser action at each terminal for each different language. This does not have to be the case, however. It is true that these decisions are language-specific, but the specifications exist in the parameter settings rather than in the parser itself. In our model, the surface position of a lexical item is determined by the values of S(M)-parameters. These parameters decide how far each lexical item moves before Spell-Out. A terminal node must dominate a lexical item if this lexical item has moved exactly to that node at Spell-Out. The node should
243
dominate nothing (i.e. be empty) if the lexical item in that chain has either moved through this node to a higher position or has not moved to this node by the time of Spell-Out. Take T0 as an example. It must dominate a verb if S(M(tns)), S(M(asp)) and S(M(agr2)) are set to 1 while S(M(agr1)) is set to 0. In this case, the verb will overtly move to T0 but no further. T0 must be empty in either of the following situations: (a) S(M(tns)), S(M(asp)) and S(M(agr2)) are set to 1 and S(M(agr1)) is set to 1 as well (the verb moves through T0 to a higher position); or (b) S(M(tns)) is set to 0 (the verb does not move to T0 before Spell-Out). We assume that, whenever a terminal node is built, the parser is able to consult the S(M)-parameter values and decide whether the node should be empty or not. We can further assume that such decision-making capability of the parser is innate. There can be a built-in mechanism which checks the parameter values and takes the right actions accordingly. If this is true, no learning is needed in this part of the parsing procedure, either. What has to be learned is the parameter setting which exists independently of the parser. Once the parameters are set for a given language, the parser will be able to process that language.

It turns out that, given a certain value combination of S(M)-parameters, the action that the parser must take at any given terminal node is unique. The following table shows the S(M)-parameter conditions under which a terminal node is to dominate a content word (V or NP).
244
V in C0         S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(1))) & S(M(agr2(1)))
V in Agr1-0     S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(1))) & S(M(agr2(1)))
V in T0         S(M(agr1(0))) & S(M(tns(1))) & S(M(asp(1))) & S(M(agr2(1)))
V in Asp0       S(M(tns(0))) & S(M(asp(1))) & S(M(agr2(1)))
V in Agr2-0     S(M(asp(0))) & S(M(agr2(1)))
V in V0         S(M(agr2(0)))
NP in Cspec     S(M(cspec(1))) & (S(M(spec1(1))) or S(M(spec2(1))))
NP in Agr1spec  S(M(spec1(1)))
NP in Agr2spec  S(M(spec2(1)))
NP in Vspec1    S(M(spec1(0)))
NP in Vspec2    S(M(spec2(0)))

(225) Decision Table for the Spell-Out of V and NP

Notice that the conditions for an NP to appear in Agr1spec or Agr2spec are necessary but not sufficient conditions. S(M(spec1(1))) is necessary for Agr1spec to dominate a lexical NP, but it is not sufficient. The NP may move further up to Cspec if S(M(cspec(1))) holds. However, we cannot state the condition as S(M(spec1(1))) & S(M(cspec(0))) because that would be too limiting. The subject NP may stay in Agr1spec with S(M(cspec)) set to 1 if the object NP moves to Cspec instead. By stating the necessary conditions only, we allow some kind of non-determinism which makes it possible to have alternative NPs (subject or object) in Cspec in some languages. But this non-determinism is local to the position of Agrspec. The choice can be made deterministically when a whole sentence is taken into consideration. Let us take Agr1spec again as an example. Suppose both S(M(spec1)) and S(M(cspec)) are set to 1. Upon seeing S(M(spec1(1))), the parser may decide to put the subject NP under Agr1spec. This would be the correct choice if the object NP has moved to Cspec.
245
If the object is not there, however, Cspec will be empty, which is contradictory to S(M(cspec(1))). The parser will realize that a mistake has been made and it will try the other choice, putting the subject in Cspec and leaving Agr1spec empty, which is also permitted by the current parameter setting.

It should also be noted that the V0 in (225) is the head of the top layer of VP. The lower V0(s) are always empty. The assumption is that the verb always moves through all layers of VP to the top one before Spell-Out. There is no variation there.

In addition to deciding whether a terminal should contain a content word or not, the parser also has to check, in cases where the terminal is not empty, what features are overtly expressed in the word. An NP may or may not be overtly marked for case; a verb can be overtly marked for the predication, agreement, tense or aspect feature, or any combination of them. A sentence is accepted only if the morphological patterns are correct as well. These patterns are determined by the values of S(F)-parameters. An NP must be overtly marked for case if S(F(case)) is set to L-1; it must be unmarked if S(F(case)) is set to L-0. A verb can be overtly marked for predication, agreement, tense or aspect if and only if S(F(pred)), S(F(agr)), S(F(tns)) or S(F(asp)) is set to L-1; it must have no overt inflectional morphology if every S(F)-parameter is set to L-0. The complete set of possible spell-outs of S (subject NP), O (object NP) and V (verb) and the S(F)-parameter values which are necessary and sufficient conditions for the spell-out are given in (226). As usual, we will indicate the overtness of a feature by placing it in the feature list of every symbol. For conciseness, we will use v as a cover term for both iv and tv.
246
s   s-[]      S(F(case(L-0)))
    s-[c1]    S(F(case(L-1)))
o   o-[]      S(F(case(L-0)))
    o-[c2]    S(F(case(L-1)))
v   v-[]                   S(F(pred(L-0))) & S(F(agr(L-0))) & S(F(tns(L-0))) & S(F(asp(L-0)))
    v-[pred]               S(F(pred(L-1))) & S(F(agr(L-0))) & S(F(tns(L-0))) & S(F(asp(L-0)))
    v-[agr]                S(F(pred(L-0))) & S(F(agr(L-1))) & S(F(tns(L-0))) & S(F(asp(L-0)))
    v-[tns]                S(F(pred(L-0))) & S(F(agr(L-0))) & S(F(tns(L-1))) & S(F(asp(L-0)))
    v-[asp]                S(F(pred(L-0))) & S(F(agr(L-0))) & S(F(tns(L-0))) & S(F(asp(L-1)))
    v-[pred,agr]           S(F(pred(L-1))) & S(F(agr(L-1))) & S(F(tns(L-0))) & S(F(asp(L-0)))
    v-[pred,tns]           S(F(pred(L-1))) & S(F(agr(L-0))) & S(F(tns(L-1))) & S(F(asp(L-0)))
    v-[pred,asp]           S(F(pred(L-1))) & S(F(agr(L-0))) & S(F(tns(L-0))) & S(F(asp(L-1)))
    v-[agr,tns]            S(F(pred(L-0))) & S(F(agr(L-1))) & S(F(tns(L-1))) & S(F(asp(L-0)))
    v-[agr,asp]            S(F(pred(L-0))) & S(F(agr(L-1))) & S(F(tns(L-0))) & S(F(asp(L-1)))
    v-[tns,asp]            S(F(pred(L-0))) & S(F(agr(L-0))) & S(F(tns(L-1))) & S(F(asp(L-1)))
    v-[pred,agr,tns]       S(F(pred(L-1))) & S(F(agr(L-1))) & S(F(tns(L-1))) & S(F(asp(L-0)))
    v-[pred,agr,asp]       S(F(pred(L-1))) & S(F(agr(L-1))) & S(F(tns(L-0))) & S(F(asp(L-1)))
    v-[pred,tns,asp]       S(F(pred(L-1))) & S(F(agr(L-0))) & S(F(tns(L-1))) & S(F(asp(L-1)))
    v-[agr,tns,asp]        S(F(pred(L-0))) & S(F(agr(L-1))) & S(F(tns(L-1))) & S(F(asp(L-1)))
    v-[pred,agr,tns,asp]   S(F(pred(L-1))) & S(F(agr(L-1))) & S(F(tns(L-1))) & S(F(asp(L-1)))

(226) Decision Table for the Spell-Out of L-Features

So far we have limited the lexical items that can appear in the tree to content
247
words (verbs and NPs) only. There is another kind of visible (and pronounceable) element that can be dominated by terminal nodes: auxiliaries/grammatical particles which we have represented as Aux. Whether a terminal node can dominate an Aux has to be determined by both S(M)-parameters and S(F)-parameters. Take T0 as an example. There are three situations where T0 can dominate an Aux.

(i) When S(M(tns(0))) & S(F(tns(1-X))) & S(M(agr1(0))) is true. This condition requires that (a) no constituent move to T0, (b) the F-feature of T0 be spelled out as an Aux, and (c) this Aux remain in situ. In this case, we have aux-[tns] in T0.

(ii) When S(M(asp(0))) & S(F(asp(1-X))) & S(M(tns(1))) & S(M(agr1(0))) & S(F(tns(0-X))) is true. This condition requires that (a) no constituent move to Asp0, (b) the F-feature of Asp0 be spelled out as an Aux, (c) this Aux move to T0, (d) it move no further up after moving to T0, and (e) the F-feature of T0 not be spelled out. In this case, we have aux-[asp] in T0.

(iii) When S(M(asp(0))) & S(F(asp(1-X))) & S(M(tns(1))) & S(M(agr1(0))) & S(F(tns(1-X))) is true. This condition requires that (a) no constituent move to Asp0, (b) the F-feature of Asp0 be spelled out as an Aux, (c) this Aux move to T0, (d) it move no further up after moving to T0, and (e) the F-feature of T0 be spelled out as well. In this case, we have aux-[tns,asp] in T0.

We assume that an auxiliary can be overtly inflected for a certain feature only if it has originated from or moved through the position in which this feature is found. An Aux can have tns in its feature list only if it is in T0, or has moved from or through T0. In order for an Aux to be spelled out as aux-[pred,agr,tns,asp],
248
for instance, two conditions must be met. First, the Aux must have originated from Asp0 and moved all the way up to C0; second, S(F(pred)), S(F(agr)), S(F(tns)) and S(F(asp)) must all be set to 1-X. In short, the spell-out of Aux has to be determined by the values of both S(M)- and S(F)-parameters. The following table lists all the possible spell-outs of Aux and the necessary and sufficient conditions for each possibility. Since an Aux can have a different set of spell-out possibilities in each different position, we have to consider them one by one.³

Aux in C0:
aux-[pred]           S(F(pred(1-X))) & S(M(c(0)))
aux-[agr]            S(F(pred(0-X))) & S(F(agr(1-X))) & S(M(c(1))) & S(M(agr1(0)))
aux-[tns]            S(F(pred(0-X))) & S(F(agr(0-X))) & S(F(tns(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
aux-[asp]            S(F(pred(0-X))) & S(F(agr(0-X))) & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[pred,agr]       S(F(pred(1-X))) & S(F(agr(1-X))) & S(M(c(1))) & S(M(agr1(0)))
aux-[pred,tns]       S(F(pred(1-X))) & S(F(agr(0-X))) & S(F(tns(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
aux-[pred,asp]       S(F(pred(1-X))) & S(F(agr(0-X))) & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[agr,tns]        S(F(pred(0-X))) & S(F(agr(1-X))) & S(F(tns(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))
aux-[agr,asp]        S(F(pred(0-X))) & S(F(agr(1-X))) & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[tns,asp]        S(F(pred(0-X))) & S(F(agr(0-X))) & S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[pred,agr,tns]   S(F(pred(1-X))) & S(F(agr(1-X))) & S(F(tns(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(0)))

(Continued on the next page)

³Agr2-0 is being ignored here because we are restricting ourselves to situations where there is no object-verb agreement. We can easily extend the table to object-verb agreement but we choose not to do so for the sake of avoiding unnecessary complications in our exposition.
249
aux-[pred,agr,asp]      S(F(pred(1-X))) & S(F(agr(1-X))) & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[pred,tns,asp]      S(F(pred(1-X))) & S(F(agr(0-X))) & S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[agr,tns,asp]       S(F(pred(0-X))) & S(F(agr(1-X))) & S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[pred,agr,tns,asp]  S(F(pred(1-X))) & S(F(agr(1-X))) & S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(c(1))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))

Aux in Agr1-0:
aux-[agr]            S(F(agr(1-X))) & S(M(c(0))) & S(M(agr1(0)))
aux-[tns]            S(F(agr(0-X))) & S(F(tns(1-X))) & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(0)))
aux-[asp]            S(F(agr(0-X))) & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[agr,tns]        S(F(agr(1-X))) & S(F(tns(1-X))) & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(0)))
aux-[agr,asp]        S(F(agr(1-X))) & S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[tns,asp]        S(F(agr(0-X))) & S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))
aux-[agr,tns,asp]    S(F(agr(1-X))) & S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(c(0))) & S(M(agr1(1))) & S(M(tns(1))) & S(M(asp(0)))

Aux in T0:
aux-[tns]            S(F(tns(1-X))) & S(M(agr1(0))) & S(M(tns(0)))
aux-[asp]            S(F(tns(0-X))) & S(F(asp(1-X))) & S(M(agr1(0))) & S(M(tns(1))) & S(M(asp(0)))
aux-[tns,asp]        S(F(tns(1-X))) & S(F(asp(1-X))) & S(M(agr1(0))) & S(M(tns(1))) & S(M(asp(0)))

Aux in Asp0:
aux-[asp]            S(F(asp(1-X))) & S(M(tns(0))) & S(M(asp(0)))

(227) Decision Table for the Spell-Out of F-Features
250
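To relate the table to the program, one row can be pictured as a Prolog clause. The sketch below renders the aux-[tns] row of the T0 sub-table; the accessors smp/2 and sfp/2, standing for the current S(M)- and S(F)-values, are hypothetical names used for illustration, not the actual predicates of Appendix A.7.

% Hypothetical rendering of one row of (227): aux-[tns] in T0.
% smp(P,V): the S(M)-parameter P currently has value V.
% sfp(F,V): the S(F)-parameter F currently has value V.
aux_in_t0(aux-[tns]) :-
    sfp(tns, 1-_),     % S(F(tns(1-X))): tns is spelled out as an Aux
    smp(agr1, 0),      % S(M(agr1(0))): the Aux does not move on to Agr1-0
    smp(tns, 0).       % S(M(tns(0))): nothing moves to T0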
Note that Aux can never have an empty list attached to it. This is so because Aux is the spell-out of some F-feature(s) and there can be no Aux if no feature is spelled out.

Using the decision tables in (225), (226) and (227), the parser can uniquely determine the status of each terminal. If the condition in (225) holds, the terminal must contain a content word (a verb or a lexical NP). It then uses (226) to determine the inflectional pattern of this word. If the condition in (227) holds, the terminal must dominate an Aux of a particular morphological make-up. Notice that there is no overlapping between the conditions in (225) and (227). If neither the conditions in (225) nor those in (227) are met, the terminal must be empty, i.e. dominate no lexical item.

Now we sum up the characteristics of the parser in our model. Like all other principle-based parsers, the present parser has to do at least three things: it has to build a tree, it has to build chains, and it has to decide which terminals are empty and which terminals contain lexical material. What makes this parser special is the degree of universality found in the parsing procedures. As we have seen, chain-building is universal and tree-building is universal aside from the limited variation in head direction. Most of the language-particular decisions are made at the terminal nodes and these decisions can always be made correctly by consulting the values of S-parameters.

The feature which sets this parser apart from all other parsers is the way in which traces or empty categories are posited. For all existing parsers, the decision of whether to posit an empty category cannot be made locally. In order to license a trace in a given position, the parser must find in the left or right context a lexical item which has moved from this position. With our present parser, however, the
251
decision can be made on the basis of the S-parameter setting alone. We can simply look at the parameter values and decide whether a node must be empty or non-empty. No memory of the left context or look-ahead into the right context is necessary. The chains are built universally, so we know in advance how far a given chain goes. This property can be especially valuable for parsing structures with many empty categories, such as those assumed in current syntactic theory.

In the next section, we will look at a Prolog implementation of this parser. This will clarify the discussion in this section and add some concreteness to our understanding of the parsing algorithm.

6.2 A Prolog Implementation

In this section we look at a particular implementation of the parser described above. It is a top-down parser implemented in the DCG (definite clause grammar) format. The choice of presenting a top-down version of the parser in this thesis is motivated by the consideration that this seems to be the simplest way to describe the underlying logic of the parser. It is not theoretically superior, nor is it the most efficient parser that can be built in the present model. We choose it in order to make the main characteristics of the parser clear without getting into the complications of parsing strategies. Once the logic is clear, we can implement it in many other ways.

The top-down parser to be discussed is presented in Appendix A.7. We shall examine it by looking at the following sub-processes one by one: (a) tree-building; (b) feature-checking; and (c) leaf-attachment.
252
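For readers unfamiliar with the DCG format, the following schematic fragment shows the general shape of the clauses discussed below, using the head-direction alternation as an example. It is a simplified sketch with invented argument structure, not the actual code of Appendix A.7.

% Schematic DCG fragment: two clauses for C', one per HD-parameter value.
% Hd is the head-direction value (i or f); HC threads the X0-chain.
c1(i, HC) --> c0(HC), agr1p(HC).   % head-initial: C0 precedes Agr1P
c1(f, HC) --> agr1p(HC), c0(HC).   % head-final: C0 follows Agr1P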
6.2.1 Tree-Building

The tree is built by cp/3, c1/4, agr1p/5, agr1_1/4, tp/6, t1/6, asp_p/6, asp1/6, agr2p/6, agr2_1/5, vp/5 and v1/5.⁴ These predicates implement the following phrase structure rules, which are equivalent to the rules in (64) presented as part of our experimental grammar in Chapter 3.

(228) CP → XP C'                     (i)
      C' → { C0, Agr1P }             (ii)
      Agr1P → NP(1) Agr1'            (iii)
      Agr1' → { Agr1-0, TP }         (iv)
      TP → T'                        (v)
      T' → often T'                  (vi)
      T' → { T0, AspP }              (vii)
      AspP → Asp'                    (viii)
      Asp' → { Asp0, Agr2P }         (ix)
      Agr2P → NP(2) Agr2'            (x)
      Agr2P → Agr2'                  (xi)
      Agr2' → { Agr2-0, VP(1) }      (xii)
      VP(1) → NP(1) V'(1)            (xiii)
      V'(1) → V0 VP(2)               (xiv)
      V'(1) → V0                     (xv)
      VP(2) → NP(2) V'(2)            (xvi)
      V'(2) → V0                     (xvii)

The XP in (i) can be an NP (subject or object) or an AdvP such as often. The first clause of cp/3 takes care of the case where Cspec is occupied by an NP. The second clause lets Cspec contain an AdvP whose instantiation is limited to often in the present system.

Some of the rules in (228) have their right-hand side enclosed in curly brackets. The symbols in these brackets are unspecified for linear order. Which symbol

⁴The predicates are again referred to by the notation X/Y where X is the predicate name and Y is the number of arguments in the predicate. Notice that the two arguments representing difference lists are invisible due to the DCG format used here.
253
precedes the other depends on the values of HD-parameters. This is why c1/4, agr1_1/4, t1/6, asp1/6 and agr2_1/5 each have at least two clauses, one for the value i (head-initial) and one for f (head-final).

Two different NPs are distinguished in these rules: NP(1) and NP(2). NP(1) is the subject NP which is assigned Case1 and NP(2) is the object NP which has Case2. This distinction is implemented by the checking requirement case(NF) === c1 or case(NF) === c2 in agr1p/5, agr2p/6 and vp/5. (The variables NF, CF, Agr1F, Agr2F, TF, AspF, VF and AdF in the program represent the feature matrices of NP, C, Agr1, Agr2, T, Asp, V and Adv respectively.)

Differentiation is also made between two VPs: VP(1) and VP(2). VP(1) is the top layer of VP whose specifier is the subject NP. This layer of VP is always present in the structure, since every sentence must have at least one argument. VP(2) only appears in transitive sentences and its specifier is the object NP. As we are restricting ourselves to sentences with no more than two arguments for the time being, VP(2) will be the bottom layer of VP. Hence the rules in (xvi) and (xvii) which close the whole VP. VP(1) may or may not also be the bottom layer, depending on whether the verb is transitive or not. This is why there are two different expansions of V'(1) ((xiv) and (xv)), one taking a VP(2) complement and one closing the VP.

In the Prolog implementation, the distinction between VP(1) and VP(2) is made by theta-grid checking. The first clause of vp/5, corresponding to (xiii), requires [agt|Ths] which shows that the theta-role assigned in this layer of VP is the agent role. The second clause corresponds to (xvi). It requires [pat] which shows that the agent role has already been assigned in the layer above.
254
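In outline, the two clauses of vp just described have the following shape; the sketch simplifies the real argument structure of vp/5 and uses schematic non-terminals, so it should be read as an illustration of the theta-grid dispatch only.

% Schematic: clause 1 handles VP(1), whose theta-grid starts with the
% agent role; clause 2 handles VP(2), where only the patient role is left.
vp(HC, AC, [agt|Ths]) --> np_spec(AC), v1(HC, AC, Ths).  % VP(1): assign agt
vp(HC, AC, [pat])     --> np_spec(AC), v1(HC, AC, []).   % VP(2): assign pat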
We notice that there are two alternative ways of expanding Agr2P: (x) and (xi). The rule in (x) applies when the sentence is transitive and (xi) applies when it is intransitive. The distinction is again made by theta-grid checking. The specifier of Agr2P is generated only if the theta-grid contains two theta-roles.

The rules in (vi) and (vii) permit zero or more instances of often to be attached to T'. Thus T' can be expanded recursively, allowing an infinite number of often to be adjoined to T'. In our implementation, however, the recursion is interrupted so that at most one often can be generated. (This is done by adding a dummy argument to t1/6 after often is attached.) We have to do this because the parser also functions as a generator in the system. The generator is used to generate all possible strings in a language. By limiting recursion to depth 1, we can prevent the generation from being infinite. We find that the second clause of cp/3, the third clause of agr1p/5, and the third, fourth and fifth clauses of t1/6 are commented out by "%" in the Prolog program. These clauses are needed only if we want often to appear in the strings.

6.2.2 Feature-Checking

There are two types of feature-checking in the parsing process. One shows up as Spec-head agreement and the other involves movement. Both types of feature-checking are performed in the tree-building process. Spec-head agreement is checked whenever a specifier is built, and chains are formed as the tree grows. As we can see in the program, all the feature-checking operations are built into the phrasal expansions.

Let us look at spec-head agreement first. The tree we build has five Spec positions: Cspec, Agr1spec, Agr2spec, Vspec1 and Vspec2. The features checked
255
in these five positions are different. The constituent in Cspec is assigned the + value of the operator feature. The assignment is done in cp/3 by op(XF) === '+'. The features checked through spec-head agreement in Agr1 are case and agreement features. The specifier of Agr1 is assigned Case1 (case(NF) === c1 in agr1p/5). In addition, the NP in Agr1spec must agree with Agr1 in the values of phi-features (phi(Agr1F) === phi(NF) in agr1_1/4). Similar checking is done in Agr2 where the NP in Agr2spec is assigned Case2.⁵ The spec-head agreement in VPs is responsible for theta-role assignment. The NP in Vspec1 is assigned the first role in the theta-grid (which is always the agent role in our restricted system) and the NP in Vspec2 is assigned the second theta-role (patient in our case). (See theta(NF) === agt and theta(NF) === pat in vp/5.) All the feature-checking operations are performed by ===, which is Johnson's (1990a, 1990b) unification algorithm implemented by Ed Stabler. This algorithm (johnson.pl) is given in Appendix A.7 as well.

The chains resulting from movement are formed through feature-passing and feature-checking. Three types of chains are built in the parsing process: X0-chains, A-chains and A'-chains. Each of the three is represented by a separate argument in the predicates. They are named HC (Head Chain), AC (A-Chain) and ABC (A-Bar Chain) respectively when appearing as variables. HC is the second argument (if any) in each predicate. Its content is x0(HF,Th) where "HF" is the feature matrix being passed on in the chain and "Th" the theta-grid. AC is the argument following HC (if any). It is a list because there can be more than one A-chain being formed at the same time. Each member of this list is an "np(NF)" where "NF" consists

⁵Verb-object agreement is being ignored in this program. This is why we do not find phi(Agr2F) === phi(NF) in agr2_1/5.
256
of the NP features of the chain. The argument following AC is ABC (if any) which is represented as a list not because there is more than one A'-chain but because we want to distinguish between empty and non-empty lists. In our system, ABC may contain an NP (NP1 or NP2) or an AdvP depending on which constituent has moved to Cspec.

A chain starts when the head of that chain is encountered and terminates when the tail of that chain is reached. Since a verb moves successively from V0 to Agr2-0, Asp0, T0, Agr1-0 and finally to C0, the head of the X0-chain (HC) is C0 and the tail is V0. The formation of the chain starts in c1/4 where an extra argument is created to hold the chain. It goes through agr1p/5, agr1_1/4, tp/6, t1/6, asp_p/6, asp1/6, agr2p/6, agr2_1/5, vp/5 and ends in v1/5 where the chain terminates. The features of this chain are checked at each link. The checking is done through check_v_features/2 at c1/4, agr1_1/4, t1/6, asp1/6, agr2_1/5 and v1/5. A new head is found in each of these steps and the features of the new head are unified with those of the chain. The features that are checked in the present program are index, tense, aspect and phi-features.

The A-chains are created by NP-movement. In a transitive sentence, there are two A-chains, one for the subject and one for the object. The subject A-chain has its head in Agr1spec and its tail in Vspec1. The object chain starts in Agr2spec and ends in Vspec2. By the time we come to VP(1), AC contains two NPs. The subject NP, whose case feature is instantiated to c1, is selected and unified with the NP in Vspec1. The other NP is passed on until it is unified with the NP in Vspec2. The unification is performed by check_np_features/2 which checks the index, operator, theta, case and phi features.

The A'-chain can consist of the subject NP, the object NP or an AdvP like
257
often. The chain starts in Cspec. The first clause of cp/3 deals with the cases where an NP moves to Cspec. An "np(NF)" is thus put in ABC. The second clause is used when Cspec is occupied by an AdvP. In this case ABC will contain advp(AdF). The A'-chain is passed on and different actions are taken depending on what XP is in the chain. If Cspec is filled by the subject NP, this chain will terminate at Agr1spec in which case the first clause of agr1p/5 will apply. (ABC becomes empty after that.) If the object is in Cspec, the chain will be passed on through Agr1P (second clause of agr1p/5), TP and AspP until it comes to Agr2P where the tail of the chain is found in Agr2spec (first clause of agr2p/6). If the constituent that has moved to Cspec is an AdvP, ABC will contain an AdvP instead of an NP. It passes through Agr1P (third clause of agr1p/5) and terminates in TP where an empty AdvP is adjoined to T' and this AdvP is unified with the AdvP in ABC (fourth clause of t1/6).

Besides building the chains, we also have to make sure that each chain contains exactly one lexical head (a pronounced NP or verb). This checking is done through indexing. The value of the index feature starts as a variable. When a visible NP or verb is attached to a terminal, the variable becomes instantiated. In this particular implementation, the verb always receives the index 1, the subject NP 2, the object NP 3 and the AdvP 4. Aux is not considered a full lexical item, so an X0-chain can seemingly contain more than one lexical item: a verb and one or more auxiliaries. For this reason, an Aux shares the index of the verb instead of having one of its own. Once the index feature of a chain has been instantiated, no other visible NPs or verbs can be put into the chain. This is achieved through index/2 which is applied whenever a visible head is found. It requires that the value of index be a variable and it will refuse to incorporate a lexical item into a chain if
258
This prevents a chain from having more than one lexical head. In addition, we use lexical/1 to make sure that each chain does have a lexical head, in which case the value of index should be a constant rather than a variable. This predicate is applied after each chain terminates, by which time every chain is supposed to have found a lexical head. The checking of NP chains is done in vp/5 after the termination of each chain in Vspec1 or Vspec2. The X0-chain is checked after the parse tree is complete. The checking cannot be done earlier, say, when V0 is reached, because the chain will not be complete at that time if CP or IP is head-final. The joint effect of index/2 and lexical/1 ensures that each chain has exactly one lexical head.
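The two checks are very small if we assume, as a simplification, that the index is carried around as a plain Prolog variable; the following sketch (hypothetical clause bodies, not the actual code in A.7) shows the logic:

    % index/2 admits a visible head only while the chain's index is still a
    % variable; unification then instantiates it, so a second lexical item
    % will fail the var/1 test and be rejected.
    index(ChainIndex, WordIndex) :- var(ChainIndex),
                                    ChainIndex = WordIndex.

    % lexical/1 is applied once the chain has terminated: by then the index
    % must have been instantiated by exactly one visible head.
    lexical(ChainIndex) :- nonvar(ChainIndex).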
6.2.3 Leaf-Attachment

This is the part of the parsing procedure which deals with cross-linguistic variation due to different value combinations of S-parameters. It is applied whenever a terminal node is created. The procedure determines, on the basis of the current setting of the S-parameters, whether the terminal node should dominate a content word or an Aux, or be empty. How such decisions are made has been discussed in 6.1. The particular actions to take in all individual situations have been summed up in the decision tables in (225), (226) and (227). They are directly coded in the Prolog program as c0/5, agr1_0/5, t0/5, asp0/5, agr2_0/5, v0/5, np/5, subject/4, object/4, verb/5 and aux/5.

NPs in different positions are differentiated by the second argument in np/5. In each case the parser looks at S(M(cspec)), S(M(spec1)) and S(M(spec2)) to decide whether the terminal NP should contain a subject, an object or nothing. If the terminal node must be non-empty, a "word" (Subject/Object) will be taken from the input string and attached to the node as a leaf. If a subject NP is to be attached, subject/4 is called to determine whether this NP must be overtly marked for case. object/4 is called when an object NP is to be attached. In cases where there is overt case, the morphological case of the overt NP must get unified with the case feature of the NP chain of which the terminal is a link. Indexing is also done at this point.

The leaf to be attached to X0 can be a verb, an Aux, or nothing. The computation involved here is more complex because a three-way decision has to be made. Each of the three possibilities is handled by a separate clause in c0/5, agr1_0/5, t0/5, asp0/5, agr2_0/5 and v0/5. The first clause finds out if a verb can be attached here. If so, verb/5 (which implements the decision table in (226)) is called to process the morphology of the verb to be attached. If not, the second clause is applied to find out if an Aux can be attached here. The real work is done by aux/5, where the decision table in (227) is implemented. This predicate determines not only the presence/absence of an Aux in a given position but the specific morphological make-up of the Aux as well. The morphological information of the Aux to be attached is incorporated into the X0-chain using code_features/2. If neither the first clause nor the second applies, the third clause will be used and the terminal will be empty.

Since the terminal nodes are encountered strictly from left to right and every node is checked for its status as soon as it is encountered, the input string will not be accepted by the parser unless it has the required word order. The string will also be unacceptable if some symbols in the string do not have the correct morphological pattern.
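The clause-per-possibility organization can be pictured with a self-contained toy version (verb_here/2 and aux_here/2 are invented stand-ins for the decision tables in (226) and (227); the actual clauses in A.7 carry more arguments):

    % Three clauses, tried in order: attach a verb if the current setting
    % licenses one at this position, otherwise an Aux, otherwise leave the
    % terminal empty (the third clause is the default).
    x0_leaf(Pos, Setting, verb)  :- verb_here(Pos, Setting).
    x0_leaf(Pos, Setting, aux)   :- aux_here(Pos, Setting).
    x0_leaf(_Pos, _Setting, empty).

    % Toy decision table: at t0, a verb is licensed when S(M(tns)) = 1 and
    % an Aux when S(M(tns)) = 0.
    verb_here(t0, setting(tns-1)).
    aux_here(t0, setting(tns-0)).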
6.2.4 The Parser in Action

The parser described above can be used in several different ways. We can call parse/0 to get all the strings in a language and graphically display their parse trees. We have seen examples of such parse trees in (223) and (224). We can also use parse/1 to process a string without showing the tree and parse/2 to get both the string and the tree. Before a sentence is parsed, the parameters must be set to a particular value combination. The setting can be done on-line using reset/0. The following are two Prolog sessions illustrating the use of reset/0 and parse/1. The clauses concerning often were commented out while running the first session but were included when the second session was run. The parse tree is printed out for each string that is generated. To save space, I have omitted all the parse trees except the one for the last string in each setting. Furthermore, only the category label for each node is printed out, with all the other features suppressed in the tree printing.

(229) Session 1:

| ?- reset.
New setting: [1,1,1,1,0,1,1,0,i,i,0-1,0-0,0-1,0-1,0-1].   %1

yes
| ?- parse(S).

S = [s-[c1],iv-[agr,tns,asp]] ;

S = [s-[c1],tv-[agr,tns,asp],o-[c2]] ;
cp
  np
  c1
    c0
    agr1p
      np  Subj-[c1]
      agr1_1
        agr1_0  Verb-[agr,tns,asp]
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np  Obj-[c2]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0
                        vp
                          np
                          v1
                            v0

S = [s-[c1],tv-[agr,tns,asp],o-[c2]] ;

no
| ?- reset.
New setting: [0,0,0,0,0,0,0,0,f,f,0-0,1-0,0-0,1-0,0-1].   %2

yes
| ?- parse(S).

S = [s-[],tv-[asp],o-[],aux-[tns],aux-[pred]] ;

S = [s-[],iv-[asp],aux-[tns],aux-[pred]] ;
cp
  np
  c1
    agr1p
      np
      agr1_1
        tp
          t1
            asp_p
              asp1
                agr2p
                  np
                  agr2_1
                    vp
                      np  Subj-[]
                      v1
                        v0  Verb-[asp]
                        vp
                          np  Obj-[]
                          v1
                            v0
                    agr2_0
                asp0
            t0  Aux-[tns]
        agr1_0
    c0  Aux-[pred]

S = [s-[],tv-[asp],o-[],aux-[tns],aux-[pred]] ;

no
| ?- reset.
New setting: [1,1,1,1,1,1,0,0,i,i,0-1,0-0,0-1,0-1,0-1].   %3

yes
| ?- parse(S).

S = [iv-[agr,tns,asp],s-[c1]] ;

S = [tv-[agr,tns,asp],s-[c1],o-[c2]] ;
cp
  np
  c1
    c0  Verb-[agr,tns,asp]
    agr1p
      np  Subj-[c1]
      agr1_1
        agr1_0
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0
                        vp
                          np  Obj-[c2]
                          v1
                            v0

S = [tv-[agr,tns,asp],s-[c1],o-[c2]] ;

no
| ?- reset.
New setting: [0,0,0,1,0,1,0,0,i,i,0-1,0-0,1-0,1-0,0-1].   %4

yes
| ?- parse(S).

S = [s-[c1],aux-[agr,tns],tv-[asp],o-[c2]] ;

S = [s-[c1],aux-[agr,tns],iv-[asp]] ;
cp
  np
  c1
    c0
    agr1p
      np  Subj-[c1]
      agr1_1
        agr1_0  Aux-[agr,tns]
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0  Verb-[asp]
                        vp
                          np  Obj-[c2]
                          v1
                            v0

S = [s-[c1],aux-[agr,tns],tv-[asp],o-[c2]] ;

no
| ?- reset.
New setting: [1,1,0,1,1,1,1,1,i,f,0-1,0-0,1-0,1-0,0-1].   %5

yes
| ?- parse(S).

S = [s-[c1],aux-[agr,tns],o-[c2],tv-[asp]] ;

S = [s-[c1],aux-[agr,tns],iv-[asp]] ;
cp
  np  Obj-[c2]
  c1
    c0  Aux-[agr,tns]
    agr1p
      np  Subj-[c1]
      agr1_1
        tp
          t1
            asp_p
              asp1
                agr2p
                  np
                  agr2_1
                    vp
                      np
                      v1
                        v0
                        vp
                          np
                          v1
                            v0
                    agr2_0
                asp0  Verb-[asp]
            t0
        agr1_0

S = [o-[c2],aux-[agr,tns],s-[c1],tv-[asp]] ;

no
| ?- reset.
New setting: [0,0,1,0,1,1,1,0,f,i,0-0,1-0,1-0,1-0,1-0].   %6

yes
| ?- parse(S).

S = [s-[],aux-[tns,asp],o-[],tv-[],aux-[pred,agr]] ;

S = [s-[],aux-[tns,asp],iv-[],aux-[pred,agr]] ;
cp
  np
  c1
    agr1p
      np  Subj-[]
      agr1_1
        agr1_0
        tp
          t1
            t0  Aux-[tns,asp]
            asp_p
              asp1
                asp0
                agr2p
                  np  Obj-[]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0  Verb-[]
                        vp
                          np
                          v1
                            v0
    c0  Aux-[pred,agr]

S = [s-[],aux-[tns,asp],o-[],tv-[],aux-[pred,agr]] ;

no
| ?- reset.
New setting: [0,0,1,1,1,1,1,0,i,i,0-1,1-0,1-0,1-0,1-0].   %7

yes
| ?- parse(S).

S = [aux-[pred,agr,tns,asp],s-[c1],o-[c2],tv-[]] ;

S = [aux-[pred,agr,tns,asp],s-[c1],iv-[]] ;
cp
  np
  c1
    c0  Aux-[pred,agr,tns,asp]
    agr1p
      np  Subj-[c1]
      agr1_1
        agr1_0
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np  Obj-[c2]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0  Verb-[]
                        vp
                          np
                          v1
                            v0

S = [aux-[pred,agr,tns,asp],s-[c1],o-[c2],tv-[]] ;

no
| ?- reset.
New setting: [0,0,1,1,1,1,1,0,i,i,0-0,1-1,1-1,1-1,1-1].   %8

yes
| ?- parse(S).

S = [aux-[pred,agr,tns,asp],s-[],o-[],tv-[pred,agr,tns,asp]] ;

S = [aux-[pred,agr,tns,asp],s-[],iv-[pred,agr,tns,asp]] ;
cp
  np
  c1
    c0  Aux-[pred,agr,tns,asp]
    agr1p
      np  Subj-[]
      agr1_1
        agr1_0
        tp
          t1
            t0
            asp_p
              asp1
                asp0
                agr2p
                  np  Obj-[]
                  agr2_1
                    agr2_0
                    vp
                      np
                      v1
                        v0  Verb-[pred,agr,tns,asp]
                        vp
                          np
                          v1
                            v0

S = [aux-[pred,agr,tns,asp],s-[],o-[],tv-[pred,agr,tns,asp]] ;

no
| ?-

(230) Session 2:

| ?- reset.
New setting: [1,1,1,0,0,1,0,1,i,i,0-1,0-0,0-1,0-1,0-0].   %1

yes
| ?- parse(S).

S = [s-[c1],iv-[agr,tns]] ;

S = [s-[c1],tv-[agr,tns],o-[c2]] ;

S = [s-[c1],often,iv-[agr,tns]] ;
S = [s-[c1],often,tv-[agr,tns],o-[c2]] ;

S = [often,s-[c1],iv-[agr,tns]] ;

cp
  advp  often
  c1
    c0
    agr1p
      np  Subj-[c1]
      agr1_1
        agr1_0  Verb-[agr,tns]
        tp
          t1
            advp
            t1
              t0
              asp_p
                asp1
                  asp0
                  agr2p
                    np
                    agr2_1
                      agr2_0
                      vp
                        np
                        v1
                          v0
                          vp
                            np  Obj-[c2]
                            v1
                              v0

S = [often,s-[c1],tv-[agr,tns],o-[c2]] ;

no
| ?- reset.
New setting: [1,1,1,1,0,1,0,1,i,i,0-1,0-0,0-1,0-1,0-0].   %2

yes
| ?- parse(S).
S = [s-[c1],iv-[agr,tns]] ;

S = [s-[c1],iv-[agr,tns],often] ;

S = [s-[c1],tv-[agr,tns],o-[c2]] ;

S = [s-[c1],tv-[agr,tns],often,o-[c2]] ;

S = [often,s-[c1],iv-[agr,tns]] ;

cp
  advp  often
  c1
    c0
    agr1p
      np  Subj-[c1]
      agr1_1
        agr1_0  Verb-[agr,tns]
        tp
          t1
            advp
            t1
              t0
              asp_p
                asp1
                  asp0
                  agr2p
                    np
                    agr2_1
                      agr2_0
                      vp
                        np
                        v1
                          v0
                          vp
                            np  Obj-[c2]
                            v1
                              v0

S = [often,s-[c1],tv-[agr,tns],o-[c2]] ;

no
| ?- reset.
New setting: [1,1,1,1,1,1,1,1,i,i,0-0,0-0,0-0,0-0,0-0].   %3

yes
| ?- parse(S).

S = [s-[],iv-[]] ;

S = [s-[],iv-[],often] ;

S = [s-[],tv-[],o-[]] ;

S = [s-[],tv-[],often,o-[]] ;

S = [o-[],tv-[],s-[]] ;

S = [o-[],tv-[],s-[],often] ;

S = [often,iv-[],s-[]] ;
cp
  advp  often
  c1
    c0  Verb-[]
    agr1p
      np  Subj-[]
      agr1_1
        agr1_0
        tp
          t1
            advp
            t1
              t0
              asp_p
                asp1
                  asp0
                  agr2p
                    np  Obj-[]
                    agr2_1
                      agr2_0
                      vp
                        np
                        v1
                          v0
                          vp
                            np
                            v1
                              v0

S = [often,tv-[],s-[],o-[]] ;

no
| ?-

Every time a new setting is entered, parse/1 is run exhaustively to find out all the strings that can be parsed with this setting. In Session 1, the number of strings which are accepted by each setting is always three: one intransitive sentence and two transitive sentences. The two transitive sentences are different from each other in that, at LF, Cspec is occupied by the subject NP in the first one while it is occupied by the object NP in the second. In other words, the two sentences
differ as to whether the subject or the object is the topic/focus of the sentence. This difference does not show up in the surface strings if there is no overt movement to Cspec. But it does show up in the parse tree. It shows up in the surface string when S(M(cspec)) is set to 1, as we can see in the fifth setting. In Session 2, more strings are generated with each setting because of the optional appearance of often. The number of strings generated is six if there is no overt movement to Cspec and eight if overt movement to Cspec occurs. There are two additional strings in the latter case because often itself can move to Cspec.

6.2.5 Universal vs. Language-Particular Parsers

As we have seen, the parser presented here is a universal one. By consulting the parameter values, it can produce or analyze the strings of any language in our current parameter space. The parser is "complete" in the sense that any language-particular parser can be generated from this universal parser by setting the parameters in a specific way. Every individual language is a particular combination of parameter values and there is a parser corresponding to any value combination. When the parameters are set in a certain way so that just one language can be accepted, only a subpart of the parser is used. In terms of a Prolog program, we can say that a given parameter setting selects a subset of the clauses in the parser. When the HD-parameters are set to i, for instance, the clauses which require that the parameters be set to f will not be used. The parser in A.7 has three clauses for c0/5, agr1_0/5, t0/5, and asp0/5, fifteen clauses for verb/5, and twenty-six clauses for aux/5. Once the S-parameters are set, however, only one of them will be used. Which one is used depends on the parameter values. Therefore, although the program in A.7 is fairly big with many disjunctions, the
parser for any particular language can be reasonably small. In fact, we can obtain any language-particular parser by removing all the clauses which cannot be used with the given parameter setting. Once these clauses are removed, all the choice points where parameter values are consulted no longer exist. As a result, we can remove all the calls to parameter values as well. The resulting parser can parse one language only, but it will be more efficient because the computation involving parameter values is no longer necessary. This process is an instance of partial evaluation or partial execution (Burstall and Darlington 1977, Clark and Sickel 1977, Clark and Tarnlund 1977, Hogger 1981, Tamaki and Sato 1984, Pereira and Shieber 1987), which in our case results in a specialization of the parser. Once the parameters are finally set, any language-particular parser can be derived from the universal parser through such partial evaluation.
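Schematically (with invented clause shapes rather than the actual code of A.7 and A.8), the specialization looks like this:

    % Universal version: the leaf decision consults the parameter database
    % at run time.
    s(m(tns(1))).                      % the value asserted for this language
    t0_leaf(verb) :- s(m(tns(1))).
    t0_leaf(aux)  :- s(m(tns(0))).

    % After partial evaluation with s(m(tns(1))) fixed, the dead second
    % clause and the now-trivial parameter call are both removed:
    t0_leaf_specialized(verb).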
An example of such a parser is given in Appendix A.8. This parser is obtained by partially executing the original parser with the following parameter setting:

(231) [1,1,0,1,1,1,1,0,i,i,1-0,0-0,1-0,1-0,0-1].

This parser can only process the language having this setting. Compared with the parser in A.7, this parser takes less space but runs more quickly, as all unused clauses and all the calls to parameter values are now removed.

This relationship between the universal parser and language-particular parsers is suggestive of a certain hypothesis on learning. We can speculate that children are born with the universal parser. This parser is used in setting the parameters. Once the parameters are set, however, partial evaluation may take place, which makes the parser more compact and more efficient. An interesting question is whether the original parser is kept after the partial execution. It is very likely that it will become less and less active after the "critical period" of language acquisition. At least part of it will get "trashed", since it is no longer useful. A speaker whose language has the setting in (231) may use the parser in A.8 only and discard the parser in A.7. This may provide an explanation for the difficulty people experience in second language learning. If our speculations happen to be correct, then the second language learner would have to either reactivate or reconstruct the original parser, which is of course a costly operation.

6.3 Summary

In this chapter we have seen how parsing might work in our present model. Our parser differs from other parsers in that the procedures for chain-building are invariable across languages. Differences between languages show up mostly in how the leaves are attached to the tree. It is found that, given a particular setting of the S-parameters, there is a unique way to attach the leaves. The parser can consult the parameter values and attach the leaves accordingly. It is universal in the sense that it can parse any language in the parameter space without a single change in the parser itself. Language-particular parsers can be derived from this universal parser through the process of partial evaluation. A Prolog implementation of the parser is presented to illustrate these new properties. The presentation and the discussion show that our present syntactic model might have advantages over traditional models in terms of parsing.
Chapter 7

Final Discussion

This thesis has been a syntactic experiment in the field of Principles and Parameters theory. We have explored a parametric syntactic model which is based on the notion of Spell-Out in the Minimalist framework. A specific grammar is proposed and this grammar is put to the test of language typology, language acquisition, and language processing. We are now in a better position to see the potential and the limitations of this model. In this chapter, I will first consider some possible extensions of the model and then discuss the validity of the present model in general.

7.1 Possible Extensions

The experimental grammar we have examined in detail here is a very restricted one. Among the things that are left out are the internal structures of DP/NP and PP. We have put these phrases aside in order to limit our experimental parameter space to a manageable size. There is no principled reason why the present approach cannot be applied to the internal word orders of these constituents as well. As a matter of fact, a great deal of research has already been done in this direction. The parallels between CP/IP and DP have been discussed by many
people (Abney (1987), Ritter (1988, 1990), Tellier (1988), Stowell (1989), Szabolcsi (1990), Carstens (1991, 1993), Valois (1991), Mitchell (1993), Campbell (1993), etc.). I will not get into a full discussion of non-verbal projections, but it is fairly obvious how the S(M)-parameters can work there. We can use a very simple DP structure to illustrate this. Suppose that lexical projection and GT operations universally generate the following tree:

      DP
        Spec
        D1
          D
          NP

(232) A Simple DP Tree

Suppose also that the NP in this tree must move to the Spec of DP by LF in order to have its case and phi-features checked against those of the determiner. (The fact that the noun has to agree with the determiner in many languages suggests that this checking operation is plausible.) If this checking movement has an S(M)-parameter associated with it, then the determiner will get spelled out in a pre-nominal position if this parameter is set to 0 and in a post-nominal position if the parameter is set to 1.
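In the notation of the implementation, the effect of such a parameter could be sketched as follows (dp_order/2 is a hypothetical predicate, not part of the programs in Appendix A):

    % With S(M) = 0 the NP-to-SpecDP movement is covert and D is spelled
    % out before its NP; with S(M) = 1 the movement is overt and D ends up
    % post-nominal.
    dp_order(0, [d, np]).
    dp_order(1, [np, d]).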
The word order inside a PP can be derived in a similar way. Let us suppose that PP also has a Spec position, as shown in (233).

      PP
        Spec
        P1
          P
          NP

(233) A Hypothetical PP Tree

There could be an LF requirement that the prepositional object must move to the Spec of PP to have its case features checked. If so, at Spell-Out the P will precede its object NP when the movement is covert and follow the object when the movement is overt.

Extensions of our current approach can also be made with respect to the notion of feature spell-out. Take PP again as an example. In all the cases where a preposition takes an NP object, we can treat the P as an overt realization of the case feature of this NP. In other words, we can let the case feature of this NP be associated with an S(F)-parameter. We see a preposition when this parameter is set to 1. This idea is by no means my own invention. It has been proposed recently (e.g. Kayne 1992) that every NP has an abstract P (whether overt or covert) associated with it. The abstract P may well be a case feature which is spelled out as a preposition in certain cases. If this is true, we will not even need the tree in (233) and the case-checking movement to derive both prepositional and postpositional structures. The P is simply a case-marker which can appear either as a prefix or a suffix. What we will have to explain then is why the case marker can have different physical realizations on different NPs in the sentence, sometimes
  • 296. as an integral part of an NP/DP and sometimes as an more independent element such as a preposition. In many languages, including English and Chinese, there exist both NPs car­ rying no case marker and prepositional NPs. If we regard P as a case marker, we face the question of why some NPs have to be overtly marked for case (by a P) and some do not. Here is a tentative answer to this question. As a working hypothesis, we can assume that any NP whose case feature is not checked in the Spec of IP (Agrlspec or Agr2spec) must be spelled out as a preposition. Consider the gram­ matical model used in our experiment. There are two Agr projections in an IP: AgrlP and Agr2P. Usually the subject NP can have its case checked in Agrlspec and the object NP can have its case checked in Agr2P. This is probably why the subject and object NPs almost never need a preposition. If there are other NPs in a sentence, however, there will be no more Agrspecs for these NPs to move to in order to have their cass features checked. This can happen in many situations. One situation is where the sentence has an adjunct modifier, such as in (234). (234) The girl met the boy in the garden. The subject and object NPs in this sentence, the girl and the boy, can obviously have their case features checked in Agrlspec and Agr2spec respectively. The third NP the garden, however, cannot move to any Spec of AgrP. It must therefore have its case feature spelled out as a P, as predicted by the hypothesis suggested above. This hypothesis may also explain why the subject NP in a passive sentence has to appear in a by-phrase. In passivization, the subject 0-role is absorbed. The object NP is “promoted” and can thus move to Agrlspec to have its case checked. If we want to mention the subject NP in a passive sentence, this NP can not move to 280
Agr1spec, which has already been occupied by the object NP. It cannot move to Agr2spec, either, because its case feature and the feature in Agr2spec will clash. As a result, it must have its case feature spelled out as a preposition, namely by. Another way to look at it is by treating the by-phrase as an adjunct which, like in the garden, must appear as a PP.

The fact that we have assumed two Agreement projections in IP in our experimental model does not mean that there cannot be a third AgrP in IP. Certain verbs may project a triple-Agr IP. One such verb might be give, which can be used in a double-object construction such as (235).

(235) The girl gave the boy a book.

The IP projected by give may look like the following.
      Agr1-P
        Spec
        Agr1-1
          Agr1-0
          TP
            T1
              T0
              Asp-P
                Asp1
                  Asp0
                  Agr2-P
                    Spec
                    Agr2-1
                      Agr2-0
                      Agr3-P

(236) A Triple-Agr IP

In a sentence like (235), each of the three NPs can have its case features checked in one of the Agrspecs. There is therefore no need for a preposition. An obvious question that arises here is why we need a preposition in (237).

(237) The girl gave a book to the boy.

This sentence contains exactly the same number of NPs as in (235), but one of them has to take a preposition. One way to tackle this problem is to assume that
the verb give is syntactically ambiguous. It may project either a triple-Agr IP or a double-Agr IP. When a double-Agr IP is projected, the third NP in the sentence must be an adjunct which has to be licensed by an overt case feature manifested in a P.

The present model can also be extended to cover both nominative-accusative languages and ergative-absolutive languages. We have assumed that an IP contains two Agr projections even in an intransitive sentence. As a result, there are two potential Agrspecs that the sole NP in an intransitive sentence can move to. Let us assume that the case checked in Agr1spec is nominative/ergative and the one checked in Agr2spec is accusative/absolutive. Then we will have a nominative/accusative language if this NP chooses to move to Agr1spec; we get an ergative/absolutive language if this NP moves to Agr2spec. We can then propose a parameter which determines which Agrspec an NP moves to in an intransitive sentence. This is again not my own invention. Similar approaches have been taken by Bobaljik (1992), Chomsky (1992), Laka (1992), etc. They actually have a name for this parameter: the Obligatory Case Parameter.

A potential problem we can have with the particular structure assumed in this thesis is word order. In our IP projection, TP and AspectP come between Agr1P and Agr2P. In a language where the verb moves to T, we can have two different word orders in an intransitive sentence depending on which Agrspec the NP moves to. The order is NP-Verb if it moves to Agr1spec and Verb-NP if it moves to Agr2spec. In an ergative language where the verb moves to T, a transitive sentence will have the order NP-Verb-NP and an intransitive sentence will be Verb-NP. To account for an ergative language which is NP-Verb-NP when transitive and NP-Verb when intransitive, we have to assume that the verb can never move beyond Agr2 in this ergative language.
This assumption will almost certainly turn out to be wrong. To avoid this problem, we can try an alternative model where there is only one Agr projection in IP when a sentence is intransitive. But this AgrP can have different case features in different languages. The obligatory case parameter then determines which case the AgrP has. We have a nominative/accusative language if it is Case 1 and an ergative/absolutive language if it is Case 2.

All the extensions proposed above can make our model more complete, but a lot more research has to be done before we can incorporate them into our theory.

7.2 Potential Problems

The present model is not perfect and it can be challenged in many different ways. There are at least two kinds of argument that can be made against our approach. First of all, this model may seem too theory-dependent. We may wonder what will happen if some of the specific assumptions in our grammar turn out to be incorrect. Secondly, one may worry about the number of parameters we may need in a full version of the theory. It may seem that, as the grammar is expanded to cover more and more constructions, the parameters will become so many that learnability can become a problem. We will address these two potential problems in this section. We will see that our general approach can remain plausible even if many of the specifics of this theory are thrown out.

Almost every present-day syntactic model is theory-dependent to a certain extent. Any approach in the Principles and Parameters paradigm has to start from some basic assumptions of this theory, such as the existence of Universal Grammar. Our current approach is built upon some hypotheses in the Minimalist framework. One of those hypotheses is the notion of Spell-Out. All the experiments we have
done in this thesis will be pointless if this basic notion turns out to be fallacious. However, what we have to worry more about is not whether a model is theory-dependent but the degree of such dependency. It is acceptable for a model to depend on certain theoretical assumptions, but these assumptions should be as general as possible. It is not desirable to have a model whose success hinges on some very specific assumptions which have not been generally accepted.

One of the assumptions in our model which we may find suspicious is the structure of IP. It has been assumed in our model that the IP consists of a TP, an AspectP and two AgrPs. In addition, these phrasal projections must be arranged in a certain structural configuration. We may wonder what will happen if we replace this more articulated IP with a traditional non-split IP structure, such as the one in (238).

      CP
        Spec
        C1
          C
          IP
            Spec
            I1
              I
              VP
                subject
                V1
                  V
                  object

(238) A More Traditional Tree

Let us assume this base structure and the following LF movements:
A. The verb must move to I to have its tense, aspect and agreement features checked.

B. After moving to I, the verb must move to C to have its predication features checked.

C. The subject NP must move to the Spec of IP to have its case and agreement features checked.

D. Either the subject NP or the object NP must move to the Spec of CP to have its operator features checked.

We can let each of these movements be associated with an S(M)-parameter and let CP and IP each have an HD-parameter, as we have done before. Then we can derive the following word orders by varying the parameter values (only one of the possibilities is given below for each order; the whole mapping is summarized in the sketch after this list):

S V O if IP is head-initial, the subject NP moves overtly to the Spec of IP and the verb moves to I;

S O V if IP is head-final, the subject NP moves overtly to the Spec of IP and the verb moves to I;

V S O if CP is head-initial, the subject NP moves overtly to the Spec of IP and the verb moves to C;

V2 if CP is head-initial, the verb moves overtly to C, and either the subject or the object NP moves to the Spec of CP;

O S V if the object moves to the Spec of CP and both the subject and the verb remain in situ;

O V S if CP is head-initial, the object NP moves overtly to the Spec of CP, the verb moves to C, and the subject NP remains in situ.
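As a summary, the same mapping can be written down as a small Prolog table (order/3 and its atom names are purely illustrative):

    % order(SurfaceOrder, Headedness, OvertMovements): one parameter
    % combination that yields each order, transcribed from the list above.
    order(svo, ip_initial, [subj_to_spec_ip, v_to_i]).
    order(sov, ip_final,   [subj_to_spec_ip, v_to_i]).
    order(vso, cp_initial, [subj_to_spec_ip, v_to_i, i_to_c]).
    order(v2,  cp_initial, [v_to_i, i_to_c, xp_to_spec_cp]).
    order(osv, any,        [obj_to_spec_cp]).
    order(ovs, cp_initial, [obj_to_spec_cp, v_to_i, i_to_c]).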
We notice that the V O S order cannot be derived unless we allow the object NP to move to the Spec of IP or allow the Spec of IP to appear on the right. We will also lose many of the scrambling orders. What this shows is mixed. On the one hand, our model does seem too theory-dependent, since it misses certain word orders once the Split-Infl hypothesis is removed; on the other hand, we can still get most of the basic word orders even if the IP is non-split. In any case, while our model is dependent on the Split-Infl hypothesis, the dependency is not critical.

Our specific theory also depends on the VP-internal Subject hypothesis. Once this hypothesis is dismissed, many movements will not be necessary any more. The word order variation we can derive from movement will be very limited. What does all this show? It may mean that the Split-Infl hypothesis and the VP-internal Subject hypothesis are correct, as they can provide us with more explanatory power. But let us consider the worst case. Suppose that both of these two hypotheses are proven incorrect in the end. Can the model proposed in this thesis still exist? The answer can be "yes" or "no". The specific grammar used in this thesis can of course no longer exist. The movement patterns will have changed and so will the S(M)-parameters. All the experimental results in the thesis will need to be reconsidered. However, the general approach we are taking here can remain valid even in such a situation. We can proceed in this direction as long as the following are true:

(i) The grammar has X-bar structures and movement operations;

(ii) The X-bar structures are universal modulo head directions;
(iii) The movement operations are universal modulo the timing of Spell-Out;

(iv) Different head directions and different spell-outs of movements result in word order variation;

(v) The head directions of X-bar structures can be parameterized;

(vi) The spell-out of movement can be parameterized.

If these assumptions hold, we can build a model of the present sort no matter how the other specific assumptions change. The general picture of word order typology described in this thesis will not change; the learning algorithm presented here will still be applicable; and parsing can still proceed in the way presented in this thesis. One of the basic problems this thesis has addressed is how to handle word order in a syntactic model where cross-linguistic variation can result from both X-bar structure and movement. We have found a way to describe a word order typology in terms of both head direction and movement. We have also discovered a learning strategy which the learner can use to converge on any particular grammar by simultaneously setting two different types of parameters: X-bar parameters and movement parameters. This is a problem that has to be addressed by any acquisition theory which accepts the view that word order can be determined by both phrase structure and movement. Finally, we have seen the possibility of a more universal parser which can parse different languages by looking at the parameter settings of those languages.

Now let us consider the potential problem of "parameter explosion". The model we have been working with is minimal, but the number of parameters we have assumed does not seem too small. One may wonder how many parameters we
would eventually need when the model is expanded to include more constructions. There seem to be many ways in which the number of parameters may grow. Here are a few of them:

(239) a. In order to account for word orders within other constructions, such as PP/DP/NP, more S(M)-parameters and HD-parameters may be needed to control the movements and head directions internal to these constituents.

b. Since even a single language may have different word orders in statements and questions, in main clauses and embedded clauses, etc., we seem in need of different parameters in different types of clauses.

c. To handle the full range of inflectional morphology in the world's languages, a greater number of features may need to be taken into consideration. As a result, the number of S(F)-parameters may increase.

It looks as if the parameter space could be much bigger than the one we have dealt with. The amount of search involved in learning and parsing could then be so great that language acquisition and language processing might become a problem. Are the problems in (239) real problems? Let us examine them one by one.

The problem in (239(a)) exists only if the internal structures of DP/NP and PP are totally unrelated to those of CP/IP. This does not seem to be the case. There are more and more studies showing that DP/NP parallels CP/IP in many ways. It is very likely that these phrases are similar not only in X-bar terms, but in terms of movement as well. It could well be the case that a movement in CP has a counterpart in DP. Moreover, the corresponding movements could be similar in their overtness, i.e. they might both occur before Spell-Out or both after Spell-
Out. If so, we will not need two separate S(M)-parameters. The two movements could be considered different instances of a single type of movement whose spell-out is controlled by a single S(M)-parameter. Should this be true, the number of parameters will not increase as much as we might expect. Language acquisition will therefore not be a problem. As a matter of fact, parameter setting could be easier, since the learner can get evidence for a parameter value from both CP/IP and DP/NP (Koopman 1992).

The problem described in (239(b)) can be a real problem only if we adopt the assumption that the HD- and S(M)-parameters are the only determinants of word order. This assumption seems to be false. There are obviously other factors which can influence the word order of a language. When an S(M)-parameter has the value 1/0, for instance, whether the relevant movement occurs before Spell-Out depends on things other than the parameter values. The principle of Procrastinate dictates that the movement should be after Spell-Out in this case, but this principle can be overridden if some other factors call for overt movement. When a language has different word orders in statements and questions, or in main clauses and embedded clauses, the difference can often be explained by the overtness of one or two movements. It is definitely not the case that different clauses have totally different parameters or parameter values. In English, statements are S-Aux-V-O and yes-no questions are Aux-S-V-O. A simple explanation for this fact is that the auxiliary moves to C in questions but not in statements. We do not need additional parameters to account for this if we assume that the S(M)-parameter for I-to-C movement is set to 1/0 in English. The real question we have to answer then is what overrides the principle of Procrastinate in interrogative sentences to make the movement overt. This is a question that has to be addressed in any linguistic
theory regardless of the existence of S-parameters. A similar argument can be made for German, which has the V2 order in main clauses and SOV orders in subordinate clauses. Assuming that CP is head-initial and IP is head-final in German, we can account for the word order difference by supposing that the S(M)-parameters for I-to-C movement and XP-movement to Cspec are both set to 1/0 in German. In subordinate clauses, these movements are covert due to the principle of Procrastinate. In matrix clauses, however, the movements are made overt by some other factors. What these factors are remains a topic of current linguistic research. The important point is that we do not need different parameters for questions or embedded clauses. Once the module of linguistic theory we have studied here is interfaced with other modules, the correct word order in each type of clause will emerge. The success of our model therefore depends on the research in those other modules.

Finally we address the problem in (239(c)). The number of S(F)-parameters required depends on the number of features required by the grammar. As long as the set of features is finitely small, there will not be too many S(F)-parameters. The question is how many features have to be there in our system. There is no definite answer here, but the number should be finite. This may seem false in view of the fact that morphological variation in the world's languages is so rich. But the seemingly infinite variation in inflectional morphology does not have to imply that the number of morphological features is infinite. We can get tens of thousands of surface morphological paradigms from a small set of features because of the following:

i. Different languages can have different subsets of features spelled out;
ii. Different combinations of features can have different phonological realizations;

iii. The phonological realization of a certain feature or a combination of features can be arbitrary.

Therefore, while we may need more features than we already have in the system, there is very little indication that the required set of features must be infinite.
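These three points can be made concrete with a toy table (the forms and language names below are invented for illustration): two languages draw on the same feature set, spell out different subsets of it, and attach arbitrary phonology to each spelled-out combination.

    % spellout(Language, SpelledOutFeatures, SurfaceForm): a handful of
    % facts like these already yield distinct surface paradigms for each
    % language, with no new features added.
    spellout(lang1, [tns],     '-ed').
    spellout(lang1, [tns,agr], '-s').
    spellout(lang2, [agr],     '-t').
    spellout(lang2, [tns,agr], '-ta').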
In conclusion, the likelihood of an explosion of S(M)-parameters and S(F)-parameters is very small. We probably need more parameters, but the increase will not be dramatic. As long as the number of parameters is finite and reasonably small, parameter-counting is not particularly meaningful. Given two grammars which account for the same range of linguistic phenomena, the one with fewer parameters is of course preferable. However, there is no principled reason why the number of parameters should be less than 20 or less than 30. As long as there is a learning algorithm whereby those parameters can be correctly set, the exact number of parameters should not be an issue. In fact, a small increase in parameters is welcome if this can result in a simplification of the principles of our grammar.

7.3 Concluding Remarks

In this thesis, we have studied a particular model of the Principles and Parameters theory. By fully exploiting the notion of Spell-Out, we have set up a grammar where cross-linguistic variation in word order and inflectional morphology is mainly determined by a set of Spell-Out Parameters. The new parametric system, though still in its preliminary form, has been found to possess some desirable properties in terms of language typology, language acquisition and language processing. The experiments we have performed in this thesis are far from complete, but the initial results are encouraging. There is reason to believe that this line of research is at least worth pursuing, though a great deal of future work is needed to get this model closer to the truth.
Appendix A

Prolog Programs

A.1 pspace.pl

% File:    pspace.pl
% Author:  Andi Wu
% Date:    July 15, 1993
% Purpose: Find all value combinations of S(M)-parameters, S(F)-parameters,
%          HD-parameters and AA-parameter. Try generating some language
%          (possibly empty) with each value combination and collect the set of
%          all non-empty languages that are generated and their corresponding
%          parameter settings.
% The parser used here is in parser.pl

:- dynamic s/1, hd1/1, hd2/1, aa/1, lang/2.

%% pspace(A list of all the setting-language pairs in the parameter space,
%%        each containing one possible parameter setting and the language
%%        it generates)
pspace(Ps) :- setof(P, sl_pair(P), Ps).

%% pspace1(A list of all the setting-language pairs in the parameter space,
%%         each containing one or more possible parameter settings and the
%%         single language they generate)
%  (The settings in a single pair all generate the same language.)
pspace1(Ps) :- setof(P, sl_pair(P), Ps0),
               group_settings(Ps0, Ps).

%% pspace2(A list of all the distinct languages that can be generated in the
%%         parameter space)
pspace2(Ls) :- setof(P, sl_pair(P), Ps),
               collect_langs(Ps, Ls).

%% sl_pair([Setting,Language]).
sl_pair([S,L]) :- get_setting(S),
                  setof(L1, sl_pair1(S,L1), Ls),
                  merge_l(Ls, L).
%% sl_pair1(Setting, A list of setting-language pairs, each with a different
%%          variable instantiation of the setting)
sl_pair1(S,L) :- instantiate_var(S,S1),
                 generate_all(S1,L).

% Find all strings that can be generated from a given (fully instantiated)
% value combination
generate_all(Setting,Strings) :-
    setof(String, generate(Setting,String), Strings).

%% instantiate_var(Setting,Particular_Instantiation_of_Setting)
% (It has no effect on settings that do not contain variables.)
% Note: s(m(spec1)), s(m(spec2)) and s(m(cspec)) may be set to 1/0 which,
% being a variable, can be instantiated to either 1 or 0 in a particular
% parse. The language generated by a setting containing such variable(s)
% is the union of the languages generated with each particular instantiation.
% (It only has effect on settings containing variable values.)
instantiate_var([1/0|Vs1],[V|Vs2]) :- !,
    (V = 0 ; V = 1),
    instantiate_var(Vs1,Vs2).
instantiate_var([V|Vs1],[V|Vs2]) :- instantiate_var(Vs1,Vs2).
instantiate_var([],[]).

%% merge_l(Sets_of_Strings,Union_of_Sets_of_Strings)
% Merge languages generated with different instantiations of a setting
merge_l([L1,L2|Ls],L) :- merge(L1,L2,L3),
                         merge_l([L3|Ls],L).
merge_l([L],L).

%% group_settings
%%   (A list of setting-language pairs,
%%    A list of setting(s)-language pairs
%%   ).
% Group together settings that generate identical languages.
group_settings(SL_Pairs,SL_Pairs1) :-
    retractall(lang(_,_)),
    pack_pairs(SL_Pairs),
    collect_pairs(SL_Pairs1).

% Assert all languages and group together the settings for each of them
pack_pairs([[S,L]|Pairs]) :-
    lang(S1,L1),              % the present language has already
    same_set(L,L1), !,        % been asserted:
    retract(lang(S1,L1)),     % add the present setting to the
    assert(lang([S|S1],L)),   % settings for this language.
    pack_pairs(Pairs).
pack_pairs([[S,L]|Pairs]) :- % the present language has not been asserted:
    assert(lang([S],L)),     % assert this new language
    pack_pairs(Pairs).
pack_pairs([]).
% (The remainder of pspace.pl, including assert_new_setting/1,
%  collect_pairs/1, collect_langs/2 and same_set/2, appears here;
%  the page is illegible in the source scan.)
get_setting([A,B,C,D,E,F,G,H,D1,D2,Case,Agr,T,Asp,Pred,Op]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))),
    sp(f(case(Case))), sp(f(agr(Agr))), sp(f(tns(T))), sp(f(asp(Asp))),
    sp(f(pred(Pred))), sp(f(op(Op))),
    c_head(D1), i_head(D2).

sp(m(agr2(0))).    sp(m(agr2(1))).
sp(m(asp(0))).     sp(m(asp(1))).
sp(m(tns(0))).     sp(m(tns(1))).
sp(m(agr1(0))).    sp(m(agr1(1))).
sp(m(c(0))).       sp(m(c(1))).
sp(m(spec1(0))).   sp(m(spec1(1))).   sp(m(spec1(1/0))).
sp(m(spec2(0))).   sp(m(spec2(1))).   sp(m(spec2(1/0))).
sp(m(cspec(0))).   sp(m(cspec(1))).   sp(m(cspec(1/0))).

sp(f(case(0-0))).  sp(f(case(1-0))).  sp(f(case(0-1))).  sp(f(case(1-1))).
sp(f(agr(0-0))).   sp(f(agr(1-0))).   sp(f(agr(0-1))).   sp(f(agr(1-1))).
sp(f(tns(0-0))).   sp(f(tns(1-0))).   sp(f(tns(0-1))).   sp(f(tns(1-1))).
sp(f(asp(0-0))).   sp(f(asp(1-0))).   sp(f(asp(0-1))).   sp(f(asp(1-1))).
sp(f(pred(0-0))).  sp(f(pred(1-0))).  sp(f(pred(0-1))).  sp(f(pred(1-1))).
sp(f(op(0-0))).    sp(f(op(1-0))).    sp(f(op(0-1))).    sp(f(op(1-1))).

c_head(i).  c_head(f).
i_head(i).  i_head(f).

A.2 sets.pl

% File:    sets.pl
% Author:  Andi Wu
% Update:  July 10, 1993
% Purpose: Compute the set-theoretic relations between languages in a given
%          parameter space.

%% disjoint_pairs(A list consisting of pairs of languages in the parameter
%%                space which are disjoint with each other)
% (Each distinct language represented by a distinct number in the output)
disjoint_pairs(Ps) :- pspace2(Ls),
                      retractall(language(_,_)),
                      assert_languages(Ls,1), !,
                      setof(P, disjoint_pair(P), Ps).

disjoint_pair([N1,N2]) :- language(N1,A),
                          language(N2,B),
                          disjoint(A,B).
%% intersecting_pairs(A list consisting of pairs of languages in the parameter
%%                    space which intersect each other)
% (Each distinct language represented by a distinct number in the output)
intersecting_pairs(Ps) :- pspace2(Ls),
                          retractall(language(_,_)),
                          assert_languages(Ls,1), !,
                          setof(P, intersecting_pair(P), Ps).

intersecting_pair([N1,N2]) :- language(N1,A),
                              language(N2,B),
                              intersecting(A,B).

%% proper_inclusions(A list consisting of pairs of languages in the parameter
%%                   space where the first member of each pair is a proper
%%                   subset of the second member)
% (Each distinct language represented by a distinct number in the output)
proper_inclusions(Ps) :- pspace2(Ls),
                         retractall(language(_,_)),
                         assert_languages(Ls,1), !,
                         setof(P, properly_included(P), Ps).

properly_included([N1,N2]) :- language(N1,A),
                              language(N2,B),
                              properly_includes(B,A).

%% Find out the set-theoretic relation between any two languages.
set_relation(L1,L2) :-
    ( identical(L1,L2),
      write(L1), nl, write(and), nl, write(L2), nl,
      write('are identical.')
    ; disjoint(L1,L2),
      write(L1), nl, write(and), nl, write(L2), nl,
      write('are disjoint.')
    ; intersect(L1,L2),
      write(L1), nl, write(and), nl, write(L2), nl,
      write('are intersecting.')
    ; properly_includes(L1,L2),
      write(L2), nl,
      write('is a proper subset of '), nl,
      write(L1)
    ), nl.

identical([A|As],B) :- member(A,B),
                       select(A,B,Bs),
                       identical(As,Bs).
identical([],[]).

disjoint(A,B) :- \+ co_member(A,B).

intersect(A,B) :- co_member(A,B),
                  unique_member(A,B),
                  unique_member(B,A), !.
properly_includes(A,B) :- subset(B,A),
                          unique_member(A,B), !.

subset([],_).
subset([A|As],B) :- member(A,B),
                    subset(As,B).

co_member([A|_],B) :- member(A,B).
co_member([_|As],B) :- co_member(As,B).

unique_member([A|_],B) :- \+ member(A,B).
unique_member([_|As],B) :- unique_member(As,B).

assert_languages([L|Ls],N) :- assert(language(N,L)),
                              N1 is N+1,
                              assert_languages(Ls,N1).
assert_languages([],_).

A.3 order.pl

% File:    order.pl
% Author:  Andi Wu
% Date:    August 9, 1993
% Purpose: Order the settings in a given parameter space in the spirit of
%          the principle of Procrastinate

order_settings(S,S1) :- quicksort(S,S1).

quicksort(List,Sorted) :- quicksort2(List,Sorted-[]).

quicksort2([],Z-Z).
quicksort2([X|Tail],A1-Z2) :-
    split(X,Tail,Small,Big),
    quicksort2(Small,A1-[X|A2]),
    quicksort2(Big,A2-Z2).

split(_X,[],[],[]).
split(X,[Y|Tail],[Y|Small],Big) :-
    verify(precedes(Y,X)), !,
    split(X,Tail,Small,Big).
split(X,[Y|Tail],Small,[Y|Big]) :-
    split(X,Tail,Small,Big).
% (The remaining clauses of order.pl, including precedes/2 and verify/1,
%  together with the heading and opening of the next appendix program,
%  appear here; the page is illegible in the source scan.)
get_pspace :- get_settings,
              get_languages.

get_settings :- setof(S, get_setting(S), Ss),
                order_settings(Ss,Ss1),
                assert(settings0(Ss1)).

get_languages :- setof(L, get_language(L), Ls),
                 assert(langs(Ls)).

get_setting([A,B,C,D,E,F,G,H]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))).

get_language(L) :- get_setting(Setting),
                   setof(String, generate(Setting,String), L).

sp :- initialize,
      write('The initial setting is '),
      current_setting,
      sp1.

sp1 :- next_input(S),
       ( S == initialize -> sp
       ; S == bye -> true
       ; S == generate -> generate, sp1
       ; S == current_setting -> current_setting, sp1
       ; process(S), !,
         write('Current setting remains unchanged.'), nl,
         sp1
       ; write('Unable to parse '), write(S), nl,
         write('Resetting the parameters ...'), nl, nl,
         reset_to_process(S),
         sp1
       ).

reset_to_process(S) :- try_next_setting, !,
                       ( process(S), !,
                         write('Parameters reset to: '),
                         current_setting
                       ; reset_to_process(S)
                       ).

learn_all_langs :- langs(Ls),
                   learn_all(Ls).

learn_all([L|Ls]) :- learn1(L), !,
                     learn_all(Ls).
learn_all([]).
learn1(L) :- write('Trying to learn '),
             write(L), write(' ...'), nl,
             initialize,
             learn(L).

learn(L) :- process_all(L), !,
            write('Final setting: '),
            current_setting,
            generate(L1),
            write('Language generated: '),
            write(L1), nl,
            ( identical(L,L1), !,
              write('The language '), write(L),
              write(' is learnable.'), nl
            ; write(' which is a superset of '), write(L), nl,
              write('The language '), write(L),
              write(' is NOT learnable.'), nl, nl
            ), nl.
learn(L) :- try_next_setting, !,
            learn(L).

initialize :- retractall(current_setting(_)),
              retractall(s(_)),
              retractall(settings(_)),
              settings0([S|Ss]),
              assert(current_setting(S)),
              assert(settings(Ss)).

process_all([S|Ss]) :- process(S),
                       process_all(Ss).
process_all([]).

process(S) :- current_setting(P),
              instantiate_var(P,P1),
              retract_old_values,
              assert_new_values(P1),
              parse(S).

try_next_setting :- retractall(current_setting(_)),
                    retract(settings([S|Ss])),
                    assert(current_setting(S)),
                    assert(settings(Ss)).

generate :- current_setting(P),
            setof(S, generate(P,S), Ss),
            write('Language generated with current setting: '), nl,
            write(Ss), nl.
% (current_setting/0, generate/1, generate/2, retract_old_values/0,
%  assert_new_values/1 and related clauses appear here; the page is
%  illegible in the source scan.)
writel([S|Ss]) :- write(S),
                  writel(Ss).
writel([]).

next_input(Input) :- repeat,
                     write('Next? '),
                     read(Input).

instantiate_var([V1|Vs1],[V2|Vs2]) :- var(V1), !,
                                      (V2 = 0 ; V2 = 1),
                                      instantiate_var(Vs1,Vs2).
instantiate_var([V|Vs1],[V|Vs2]) :- instantiate_var(Vs1,Vs2).
instantiate_var([],[]).

A.6 sp2.pl

% File:    sp2.pl
% Author:  Andi Wu
% Date:    August 3, 1993
% Purpose: Acquiring word orders and inflectional morphology by setting
%          S(M)-parameters, HD-parameters and S(F)-parameters.

:- ensure_loaded(library(basics)).
:- ensure_loaded(sets).
:- ensure_loaded(parser).
:- ensure_loaded(order).
:- ensure_loaded(sputil).

:- dynamic settings/1.

get_pspace :- get_settings,
              get_languages.

get_settings :- setof(S, get_setting(S), Ss),
                order_settings(Ss,Ss1),
                assert(settings0(Ss1)).

get_languages :- setof(L, get_language(L), Ls),
                 assert(langs(Ls)).

get_setting([A,B,C,D,E,F,G,H,HD1,HD2]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))),
    c_head(HD1), i_head(HD2).

get_setting1([A,B,C,D,E,F,G,H,HD1,HD2,Case,Pred,Agr,Tns,Asp]) :-
    sp(m(agr2(A))), sp(m(asp(B))), sp(m(tns(C))), sp(m(agr1(D))),
    sp(m(c(E))), sp(m(spec1(F))), sp(m(spec2(G))), sp(m(cspec(H))),
    c_head(HD1), i_head(HD2),
    sp(f(case(Case))), sp(f(pred(Pred))), sp(f(agr(Agr))),
    sp(f(tns(Tns))), sp(f(asp(Asp))).
get_language(L) :- get_setting1(Setting),
                   setof(String, generate1(Setting,String), L).

sp :- initialize,
      write('The initial setting is '), nl,
      current_setting,
      sp1.

sp1 :- next_input(S),
       ( S == initialize -> sp
       ; S == bye -> true
       ; S == generate -> generate, sp1
       ; S == current_setting -> current_setting, sp1
       ; process(S), !,
         write('Current setting remains unchanged.'), nl,
         sp1
       ; write('Unable to parse '), write(S), nl,
         write('Resetting the parameters ...'), nl, nl,
         reset_to_process(S),
         sp1
       ).

reset_to_process(S) :-
    ( reset_sfp(S),
      process(S), !,
      write('Successful parse.'), nl
    ; try_next_setting(W),
      process(S), !,
      write('Word order parameters reset to: '),
      write_values(W), nl,
      write('Successful parse.'), nl
    ; !, reset_to_process(S)
    ).

learn_all_langs :- langs(Ls),
                   learn_all(Ls).

learn_all([L|Ls]) :- learn1(L), !,
                     learn_all(Ls).
learn_all([]).

learn1(L) :- write('Trying to learn '),
             write(L), write(' ...'), nl,
             initialize,
             set_sfp(L),
             learn(L).
learn(L) :-
    process_all(L), !,
    write('Final setting: '),
    current_setting,
    generate(L1),
    write('Language generated: '), write(L1), nl,
    ( identical(L,L1), !,
      write('The language '), write(L),
      write(' is learnable.'), nl
    ; write(' which is a superset of '), write(L), nl,
      write('The language '), write(L),
      write(' is NOT learnable.'), nl, nl
    ), nl.
learn(L) :-
    try_next_setting(_), !,
    learn(L).

initialize :-
    retractall(current_setting(_)),
    retractall(s(_)),
    retractall(hd1(_)),
    retractall(hd2(_)),
    retractall(settings(_)),
    initial_setting([S|Ss]),
    assert(current_setting(S)),
    assert(settings(Ss)),
    assert(s(f(case(0-0)))),
    assert(s(f(pred(0-0)))),
    assert(s(f(agr(0-0)))),
    assert(s(f(tns(0-0)))),
    assert(s(f(asp(0-0)))).

process_all([S|Ss]) :-
    process(S),
    process_all(Ss).
process_all([]).

process(S) :-
    current_setting(P),
    instantiate_var(P,P1),
    retract_old_values,
    assert_new_values(P1),
    parse(S).

try_next_setting(S) :-
    retractall(current_setting(_)),
    retract(settings([S|Ss])),
    assert(current_setting(S)),
    assert(settings(Ss)).

set_sfp([S|Ss]) :-
    reset_sfp(S),
    set_sfp(Ss).
set_sfp([]).
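Since instantiate_var/2 (defined at the top of the preceding listing) enumerates 0 and 1 for every parameter left uninstantiated, process/1 in effect tries all concrete settings compatible with the current partial setting. A sketch of this behaviour at the Prolog top level (query and bindings illustrative):

% ?- instantiate_var([X,1,Y], S).
% S = [0,1,0] ;
% S = [0,1,1] ;
% S = [1,1,0] ;
% S = [1,1,1].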
reset_sfp([W|Ws]) :-
    check_sfp(W),
    reset_sfp(Ws).
reset_sfp([]).

check_sfp(_-[]).
check_sfp(W-[F|Fs]) :-
    check_sfp1(W-[F]),
    check_sfp(W-Fs).
check_sfp(often).

check_sfp1(aux-[F]) :- !, check_f_feature(F).
check_sfp1(_-[F]) :- check_l_feature(F).

check_f_feature(pred) :-
    ( s(f(pred(1-_))), !
    ; retract(s(f(pred(_-L)))),
      assert(s(f(pred(1-L)))),
      write('s(f(pred)) is reset to '), write(1-L), nl
    ).
check_f_feature(agr) :-
    ( s(f(agr(1-_))), !
    ; retract(s(f(agr(_-L)))),
      assert(s(f(agr(1-L)))),
      write('s(f(agr)) is reset to '), write(1-L), nl
    ).
check_f_feature(tns) :-
    ( s(f(tns(1-_))), !
    ; retract(s(f(tns(_-L)))),
      assert(s(f(tns(1-L)))),
      write('s(f(tns)) is reset to '), write(1-L), nl
    ).
check_f_feature(asp) :-
    ( s(f(asp(1-_))), !
    ; retract(s(f(asp(_-L)))),
      assert(s(f(asp(1-L)))),
      write('s(f(asp)) is reset to '), write(1-L), nl
    ).

check_l_feature(Ftr) :-
    ( Ftr = c1 ; Ftr = c2 ),
    ( s(f(case(_-1))), !
    ; retract(s(f(case(F-_)))),
      assert(s(f(case(F-1)))),
      write('s(f(case)) is reset to '), write(F-1), nl
    ).
check_l_feature(pred) :-
    ( s(f(pred(_-1))), !
    ; retract(s(f(pred(F-_)))),
      assert(s(f(pred(F-1)))),
      write('s(f(pred)) is reset to '),
      write(F-1), nl
    ).
check_l_feature(agr) :-
    ( s(f(agr(_-1))), !
    ; retract(s(f(agr(F-_)))),
      assert(s(f(agr(F-1)))),
      write('s(f(agr)) is reset to '), write(F-1), nl
    ).
check_l_feature(tns) :-
    ( s(f(tns(_-1))), !
    ; retract(s(f(tns(F-_)))),
      assert(s(f(tns(F-1)))),
      write('s(f(tns)) is reset to '), write(F-1), nl
    ).
check_l_feature(asp) :-
    ( s(f(asp(_-1))), !
    ; retract(s(f(asp(F-_)))),
      assert(s(f(asp(F-1)))),
      write('s(f(asp)) is reset to '), write(F-1), nl
    ).

generate :-
    current_setting(P),
    setof(S, generate(P,S), Ss),
    write('Language generated with current setting: '), nl,
    writel(Ss), nl.

generate(Ss) :-
    current_setting(P),
    setof(S, generate(P,S), Ss).

generate(P,S) :-
    instantiate_var(P,P1),
    retract_old_values,
    assert_new_values(P1),
    parse(S).

generate1(P,S) :-
    instantiate_var(P,P1),
    retract_old_values1,
    assert_new_values1(P1),
    parse(S).

current_setting :-
    current_setting(P),
    setof(V, s(f(V)), Vs),
    write('['), write_values(P), nl,
    tab(1), write_values(Vs), write(']'), nl.

retract_old_values :-
    retractall(s(m(_))),
    retractall(hd1(_)),
    retractall(hd2(_)).
retract_old_values1 :-
    retractall(s(_)),
    retractall(hd1(_)),
    retractall(hd2(_)).

assert_new_values([A,B,C,D,E,F,G,H,HD1,HD2]) :-
    assert_mp([A,B,C,D,E,F,G,H]),
    assert_hdp([HD1,HD2]).

assert_new_values1([A,B,C,D,E,F,G,H,HD1,HD2,Case,Pred,Agr,Tns,Asp]) :-
    assert_mp([A,B,C,D,E,F,G,H]),
    assert_hdp([HD1,HD2]),
    assert_sfp([Case,Pred,Agr,Tns,Asp]).

assert_mp([A,B,C,D,E,F,G,H]) :-
    assert(s(m(agr2(A)))), assert(s(m(asp(B)))),
    assert(s(m(tns(C)))), assert(s(m(agr1(D)))),
    assert(s(m(c(E)))), assert(s(m(spec1(F)))),
    assert(s(m(spec2(G)))), assert(s(m(cspec(H)))).

assert_hdp([HD1,HD2]) :-
    assert(hd1(HD1)),
    assert(hd2(HD2)).

assert_sfp([Case,Pred,Agr,Tns,Asp]) :-
    assert(s(f(case(Case)))), assert(s(f(pred(Pred)))),
    assert(s(f(agr(Agr)))), assert(s(f(tns(Tns)))),
    assert(s(f(asp(Asp)))).

sp(m(agr2(0))).   sp(m(agr2(1))).
sp(m(asp(0))).    sp(m(asp(1))).
sp(m(tns(0))).    sp(m(tns(1))).
sp(m(agr1(0))).   sp(m(agr1(1))).
sp(m(c(0))).      sp(m(c(1))).
sp(m(spec1(0))).  sp(m(spec1(1))).
sp(m(spec2(0))).  sp(m(spec2(1))).
sp(m(cspec(0))).  sp(m(cspec(1))).

c_head(i).  c_head(f).
i_head(i).  i_head(f).
A.7 parser.pl

% File:    parser.pl
% Author:  Andi Wu
% Updated: August 20, 1993
% Purpose: A top-down parser implementing the S-parameter model.

:- ensure_loaded(johnson).
:- ensure_loaded(tree).

:- dynamic s/1, hd1/1, hd2/1.

parse :-
    cp(Tree,_,[]),
    d(Tree),
    fail.
parse :-
    write('No more parse.').

parse(S) :- cp(_,S,[]).

parse(S,T) :- cp(T,S,[]).

cp(cp/[np(NF)/NP,C1]) -->
    { top(NF) === '+' },
    np(NP,cspec,NF),
    c1(C1,[np(NF)]).

cp(cp/[advp(AdF)/[often],C1]) -->
    [often],
    { s(m(cspec(1))),
      op(AdF) === '+',
      index(AdF) === adv },
    c1(C1,[advp(AdF)]).

c1(c1/[c0(CF,Th)/C,Agr1P],ABC) -->
    { hd1(i) },
    c0(C,CF,Th),
    agr1p(Agr1P,x0(CF,Th),ABC),
    { lexical(CF) }.

c1(c1/[Agr1P,c0(CF,Th)/C],ABC) -->
    { hd1(f) },
    agr1p(Agr1P,x0(CF,Th),ABC),
    c0(C,CF,Th),
    { lexical(CF) }.

agr1p(agr1p/[np(NF)/NP,Agr1_1],HC,[np(NF1)]) -->
    { case(NF) === c1,
      check_np_features(NF,NF1) },
    np(NP,agr1spec,NF),
    agr1_1(Agr1_1,HC,[np(NF)],[]).

agr1p(agr1p/[np(NF)/NP,Agr1_1],x0(HF,Th),[np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c1,
      case(NF) =/= case(NF1) },
    np(NP,agr1spec,NF),
    agr1_1(Agr1_1,x0(HF,Th),[np(NF)],[np(NF1)]).

agr1p(agr1p/[np(NF)/NP,Agr1_1],x0(HF,Th),[advp(AdF)]) -->
    np(NP,agr1spec,NF),
    agr1_1(Agr1_1,x0(HF,Th),[np(NF)],[advp(AdF)]).
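The input to parse/1 and parse/2 is a list of word-feature tokens of the shape accepted by the subject, object, verb and aux rules later in this listing, e.g. s-[c1] for a case-marked subject or V-[pred,agr,tns] for a verb spelling out those features. A hypothetical query, for illustration; whether it succeeds depends on the parameter values currently asserted:

% ?- parse([s-[c1], love-[pred,agr,tns], o-[c2]]).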
agr1_1(agr1_1/[agr1_0(Agr1F,Th)/Agr1,TP],x0(HF,Th),[np(NF)],ABC) -->
    { hd2(i),
      phi(Agr1F) === phi(NF),
      check_v_features(Agr1F,HF) },
    agr1_0(Agr1,Agr1F,Th),
    tp(TP,x0(HF,Th),[np(NF)],ABC).

agr1_1(agr1_1/[TP,agr1_0(Agr1F,Th)/Agr1],x0(HF,Th),[np(NF)],ABC) -->
    { hd2(f),
      phi(Agr1F) === phi(NF),
      check_v_features(Agr1F,HF) },
    tp(TP,x0(HF,Th),[np(NF)],ABC),
    agr1_0(Agr1,Agr1F,Th).

tp(tp/[T1],HC,AC,ABC) -->
    t1(T1,HC,AC,ABC).

t1(t1/[t0(TF,Th)/T,AspP],x0(HF,Th),AC,ABC) -->
    { hd2(i),
      check_v_features(TF,HF),
      \+ ABC = [advp(_)] },
    t0(T,TF,Th),
    asp_p(AspP,x0(HF,Th),AC,ABC).

t1(t1/[AspP,t0(TF,Th)/T],x0(HF,Th),AC,ABC) -->
    { hd2(f),
      check_v_features(TF,HF),
      \+ ABC = [advp(_)] },
    asp_p(AspP,x0(HF,Th),AC,ABC),
    t0(T,TF,Th).

t1(t1/[advp/[often],T1],x0(HF,Th),AC,ABC) -->
    [often],
    { \+ ABC = [advp(_)] },
    t1(T1,x0(HF,Th),AC,ABC,_).

t1(t1/[advp(AdF)/[],T1],x0(HF,Th),AC,[advp(AdF)]) -->
    t1(T1,x0(HF,Th),AC,[],_).

t1(t1/[t0(TF,Th)/T,AspP],x0(HF,Th),AC,ABC,_) -->
    { check_v_features(TF,HF) },
    t0(T,TF,Th),
    asp_p(AspP,x0(HF,Th),AC,ABC).

asp_p(asp_p/[Asp1],HC,AC,ABC) -->
    asp1(Asp1,HC,AC,ABC).

asp1(asp1/[asp0(AspF,Th)/Asp,Agr2P],x0(HF,Th),AC,ABC) -->
    { hd2(i),
      check_v_features(AspF,HF) },
    asp0(Asp,AspF,Th),
    agr2p(Agr2P,x0(HF,Th),AC,ABC).

asp1(asp1/[Agr2P,asp0(AspF,Th)/Asp],x0(HF,Th),AC,ABC) -->
    { hd2(f),
      check_v_features(AspF,HF) },
    agr2p(Agr2P,x0(HF,Th),AC,ABC),
    asp0(Asp,AspF,Th).
agr2p(agr2p/[np(NF)/NP,Agr2_1],x0(HF,Th),AC,[np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c2,
      check_np_features(NF,NF1) },
    np(NP,agr2spec,NF),
    agr2_1(Agr2_1,x0(HF,Th),[np(NF)|AC]).

agr2p(agr2p/[np(NF)/NP,Agr2_1],x0(HF,Th),AC,[]) -->
    { Th = [_,_],
      case(NF) === c2 },
    np(NP,agr2spec,NF),
    agr2_1(Agr2_1,x0(HF,Th),[np(NF)|AC]).

agr2p(agr2p/[Agr2_1],x0(HF,Th),AC,[]) -->
    agr2_1(Agr2_1,x0(HF,Th),AC).

agr2_1(agr2_1/[agr2_0(Agr2F,Th)/Agr2,VP],x0(HF,Th),[np(NF)|NPs]) -->
    { hd2(i),
      check_v_features(Agr2F,HF) },
    agr2_0(Agr2,Agr2F,Th),
    vp(VP,x0(HF,Th),[np(NF)|NPs]).

agr2_1(agr2_1/[VP,agr2_0(Agr2F,Th)/Agr2],x0(HF,Th),[np(NF)|NPs]) -->
    { hd2(f),
      check_v_features(Agr2F,HF) },
    vp(VP,x0(HF,Th),[np(NF)|NPs]),
    agr2_0(Agr2,Agr2F,Th).

vp(vp/[np(NF)/NP,V1],x0(HF,[agt|Ths]),AC) -->
    { theta(NF) === agt,
      select(AC,np(NF1),AC1),
      case(NF1) === c1,
      check_np_features(NF,NF1) },
    np(NP,vspec1,NF),
    { lexical(NF) },
    v1(V1,x0(HF,[agt|Ths]),AC1).

vp(vp/[np(NF)/NP,V1],x0(HF,[pat]),AC) -->
    { theta(NF) === pat,
      select(AC,np(NF1),AC1),
      case(NF1) === c2,
      check_np_features(NF,NF1) },
    np(NP,vspec2,NF),
    { lexical(NF) },
    v1(V1,x0(HF,[pat]),AC1).

v1(v1/[v0(VF,[Th1|Ths])/V,VP],x0(HF,[Th1|Ths]),AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th1|Ths]),
    vp(VP,x0(HF,Ths),AC).

v1(v1/[v0(VF,[Th])/V],x0(HF,[Th]),_AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th]).
c0(Verb,CF,Th) -->
    { v_to_c },
    verb(Verb,CF,Th).
c0(Aux,CF,_Th) -->
    aux(Aux,c,CF).
c0([],_,_) --> [],
    { \+ v_to_c,
      \+ aux(_,c,_,_,_) }.

agr1_0(V,Agr1F,Th) -->
    { v_to_agr1 },
    verb(V,Agr1F,Th).
agr1_0(Aux,Agr1F,_Th) -->
    aux(Aux,agr1,Agr1F).
agr1_0([],_,_) --> [],
    { \+ v_to_agr1,
      \+ aux(_,agr1,_,_,_) }.

t0(V,TF,Th) -->
    { v_to_t },
    verb(V,TF,Th).
t0(Aux,TF,_Th) -->
    aux(Aux,t,TF).
t0([],_,_) --> [],
    { \+ v_to_t,
      \+ aux(_,t,_,_,_) }.

asp0(V,AspF,Th) -->
    { v_to_asp },
    verb(V,AspF,Th).
asp0(Aux,AspF,_Th) -->
    aux(Aux,asp,AspF).
asp0([],_,_) --> [],
    { \+ v_to_asp,
      \+ aux(_,asp,_,_,_) }.

agr2_0(V,Agr2F,Th) -->
    { v_to_agr2 },
    verb(V,Agr2F,Th).
agr2_0([],_,_) --> [].

v0(V,VF,[agt|Ths]) -->
    { s(m(agr2(0))) },
    verb(V,VF,[agt|Ths]).
v0([],_,[agt|_]) --> [],
    { s(m(agr2(1))) }.
v0([],_,[Th|_]) --> [],
    { \+ Th = agt }.

np(Subj,cspec,NF) -->
    { s(m(cspec(1))),
      s(m(spec1(1))) },
    subject(Subj,NF).
np(Obj,cspec,NF) -->
    { s(m(cspec(1))),
      s(m(spec2(1))) },
    object(Obj,NF).
np([],cspec,_) --> [],
    { s(m(cspec(0))) }.
np(Subj,agr1spec,NF) -->
    { s(m(spec1(1))) },
    subject(Subj,NF).
np([],agr1spec,_) --> [].
np(Obj,agr2spec,NF) -->
    { s(m(spec2(1))) },
    object(Obj,NF).
np([],agr2spec,_) --> [].
np(Subj,vspec1,NF) -->
    { s(m(spec1(0))) },
    subject(Subj,NF).
np([],vspec1,_) --> [].
np(Obj,vspec2,NF) -->
    { s(m(spec2(0))) },
    object(Obj,NF).
np([],vspec2,_) --> [].

subject(['Subj-[]'/[]],NF) --> [s-[]],
    { s(f(case(0-0))),
      case(NF) === c1,
      index(NF,s) }.
subject(['Subj-[c1]'/[]],NF) --> [s-[c1]],
    { s(f(case(0-1))),
      case(NF) === c1,
      index(NF,s) }.

object(['Obj-[]'/[]],NF) --> [o-[]],
    { s(f(case(0-0))),
      case(NF) === c2,
      index(NF,o) }.
object(['Obj-[c2]'/[]],NF) --> [o-[c2]],
    { s(f(case(0-1))),
      case(NF) === c2,
      index(NF,o) }.

verb(['Verb-[]'/[]],VF,Th) --> [V-[]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([],VF),
      index(VF,V) }.

verb(['Verb-[pred]'/[]],VF,Th) --> [V-[pred]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([pred],VF),
      index(VF,V) }.
verb(['Verb-[agr]'/[]],VF,Th) --> [V-[agr]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([agr],VF),
      index(VF,V) }.

verb(['Verb-[tns]'/[]],VF,Th) --> [V-[tns]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([tns],VF),
      index(VF,V) }.

verb(['Verb-[asp]'/[]],VF,Th) --> [V-[asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([asp],VF),
      index(VF,V) }.

verb(['Verb-[pred,agr]'/[]],VF,Th) --> [V-[pred,agr]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-0))),
      code_features([pred,agr],VF),
      index(VF,V) }.

verb(['Verb-[pred,tns]'/[]],VF,Th) --> [V-[pred,tns]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-0))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([pred,tns],VF),
      index(VF,V) }.

verb(['Verb-[pred,asp]'/[]],VF,Th) --> [V-[pred,asp]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-0))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([pred,asp],VF),
      index(VF,V) }.

verb(['Verb-[agr,tns]'/[]],VF,Th) --> [V-[agr,tns]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([agr,tns],VF),
      index(VF,V) }.

verb(['Verb-[agr,asp]'/[]],VF,Th) --> [V-[agr,asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([agr,asp],VF),
      index(VF,V) }.

verb(['Verb-[tns,asp]'/[]],VF,Th) --> [V-[tns,asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-0))), s(f(tns(_-1))), s(f(asp(_-1))),
      code_features([tns,asp],VF),
      index(VF,V) }.

verb(['Verb-[pred,agr,tns]'/[]],VF,Th) --> [V-[pred,agr,tns]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-0))),
      code_features([pred,agr,tns],VF),
      index(VF,V) }.
verb(['Verb-[pred,agr,asp]'/[]],VF,Th) --> [V-[pred,agr,asp]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-0))), s(f(asp(_-1))),
      code_features([pred,agr,asp],VF),
      index(VF,V) }.

verb(['Verb-[agr,tns,asp]'/[]],VF,Th) --> [V-[agr,tns,asp]],
    { th_grid(V,Th),
      s(f(pred(_-0))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-1))),
      code_features([agr,tns,asp],VF),
      index(VF,V) }.

verb(['Verb-[pred,agr,tns,asp]'/[]],VF,Th) --> [V-[pred,agr,tns,asp]],
    { th_grid(V,Th),
      s(f(pred(_-1))), s(f(agr(_-1))), s(f(tns(_-1))), s(f(asp(_-1))),
      code_features([pred,agr,tns,asp],VF),
      index(VF,V) }.

aux(['Aux-[pred]'/[]],c,CF) --> [aux-[pred]],
    { s(f(pred(1-_))),
      s(m(c(0))),
      code_features([pred],CF) }.

aux(['Aux-[agr]'/[]],c,CF) --> [aux-[agr]],
    { s(f(pred(0-_))), s(f(agr(1-_))),
      s(m(c(1))), s(m(agr1(0))),
      code_features([agr],CF) }.

aux(['Aux-[tns]'/[]],c,CF) --> [aux-[tns]],
    { s(f(pred(0-_))), s(f(agr(0-_))), s(f(tns(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(0))),
      code_features([tns],CF) }.

aux(['Aux-[asp]'/[]],c,CF) --> [aux-[asp]],
    { s(f(pred(0-_))), s(f(agr(0-_))), s(f(tns(0-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([asp],CF) }.

aux(['Aux-[pred,agr]'/[]],c,CF) --> [aux-[pred,agr]],
    { s(f(pred(1-_))), s(f(agr(1-_))),
      s(m(c(1))), s(m(agr1(0))),
      code_features([pred,agr],CF) }.

aux(['Aux-[pred,tns]'/[]],c,CF) --> [aux-[pred,tns]],
    { s(f(pred(1-_))), s(f(agr(0-_))), s(f(tns(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(0))),
      code_features([pred,tns],CF) }.

aux(['Aux-[pred,asp]'/[]],c,CF) --> [aux-[pred,asp]],
    { s(f(pred(1-_))), s(f(agr(0-_))), s(f(tns(0-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([pred,asp],CF) }.

aux(['Aux-[agr,tns]'/[]],c,CF) --> [aux-[agr,tns]],
    { s(f(pred(0-_))), s(f(agr(1-_))), s(f(tns(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(0))),
      code_features([agr,tns],CF) }.
aux(['Aux-[agr,asp]'/[]],c,CF) --> [aux-[agr,asp]],
    { s(f(pred(0-_))), s(f(agr(1-_))), s(f(tns(0-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([agr,asp],CF) }.

aux(['Aux-[tns,asp]'/[]],c,CF) --> [aux-[tns,asp]],
    { s(f(pred(0-_))), s(f(agr(0-_))), s(f(tns(1-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([tns,asp],CF) }.

aux(['Aux-[pred,agr,tns]'/[]],c,CF) --> [aux-[pred,agr,tns]],
    { s(f(pred(1-_))), s(f(agr(1-_))), s(f(tns(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(0))),
      code_features([pred,agr,tns],CF) }.

aux(['Aux-[pred,agr,asp]'/[]],c,CF) --> [aux-[pred,agr,asp]],
    { s(f(pred(1-_))), s(f(agr(1-_))), s(f(tns(0-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([pred,agr,asp],CF) }.

aux(['Aux-[pred,tns,asp]'/[]],c,CF) --> [aux-[pred,tns,asp]],
    { s(f(pred(1-_))), s(f(agr(0-_))), s(f(tns(1-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([pred,tns,asp],CF) }.

aux(['Aux-[agr,tns,asp]'/[]],c,CF) --> [aux-[agr,tns,asp]],
    { s(f(pred(0-_))), s(f(agr(1-_))), s(f(tns(1-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([agr,tns,asp],CF) }.

aux(['Aux-[pred,agr,tns,asp]'/[]],c,CF) --> [aux-[pred,agr,tns,asp]],
    { s(f(pred(1-_))), s(f(agr(1-_))), s(f(tns(1-_))), s(f(asp(1-_))),
      s(m(c(1))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([pred,agr,tns,asp],CF) }.

aux(['Aux-[agr]'/[]],agr1,Agr1F) --> [aux-[agr]],
    { s(f(agr(1-_))),
      s(m(c(0))), s(m(agr1(0))),
      code_features([agr],Agr1F) }.

aux(['Aux-[tns]'/[]],agr1,Agr1F) --> [aux-[tns]],
    { s(f(agr(0-_))), s(f(tns(1-_))),
      s(m(c(0))), s(m(agr1(1))), s(m(tns(0))),
      code_features([tns],Agr1F) }.

aux(['Aux-[asp]'/[]],agr1,Agr1F) --> [aux-[asp]],
    { s(f(agr(0-_))), s(f(tns(0-_))), s(f(asp(1-_))),
      s(m(c(0))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([asp],Agr1F) }.

aux(['Aux-[agr,tns]'/[]],agr1,Agr1F) --> [aux-[agr,tns]],
    { s(f(agr(1-_))), s(f(tns(1-_))),
      s(m(c(0))), s(m(agr1(1))), s(m(tns(0))),
      code_features([agr,tns],Agr1F) }.

aux(['Aux-[agr,asp]'/[]],agr1,Agr1F) --> [aux-[agr,asp]],
    { s(f(agr(1-_))), s(f(tns(0-_))), s(f(asp(1-_))),
      s(m(c(0))), s(m(agr1(1))), s(m(tns(1))), s(m(asp(0))),
      code_features([agr,asp],Agr1F) }.
[The next two pages of the parser.pl listing, containing the remainder of the aux rules and the definitions that follow them, are reproduced rotated and are illegible in this copy; the listing resumes below in mid-clause.]
    functor(Term,Att,1),
    arg(1,Term,Subterm).

unify(_,XId,_,_,YId,_) :-             % already unified
    XId == YId, !.
unify(_,XId,XNEqs,_,YId,YNEqs) :-     % check inequalities
    ( vl_member(XId,YNEqs)
    ; vl_member(YId,XNEqs)
    ), !,
    fail.
unify(XV,_XId,_,YV,_YId,_) :-         % unify constants
    ( atomic(XV) ; atomic(YV) ), !,
    XV = YV.
unify(XV,_XId,XNEqs,YV,_YId,YNEqs) :- % general case
    vl_union(XNEqs,YNEqs),
    unify_avpairs(XV,YV).

unify_avpairs(L1,L2) :-               % copy inequalities
    ( var(L1) ; var(L2) ), !,
    L1 = L2.
unify_avpairs(L1,L2) :-
    av_merge(L1,L2),
    av_merge(L2,L1),
    vl_tail(L1,Tail),
    vl_tail(L2,Tail).

av_merge(L,_) :-
    var(L), !.
av_merge([A:V1|R],L) :-
    member(A:V2,L), !,
    V1 = V2,
    av_merge(R,L).

vl_union(S1,S2) :-    % ensure every elt of S1 is in S2 and vice versa
    vl_merge(S1,S2),
    vl_merge(S2,S1),
    vl_tail(S1,Tail),
    vl_tail(S2,Tail).

vl_merge(L,_) :-
    var(L), !.
vl_merge([E|Es],S) :-
    vl_add(E,S),
    vl_merge(Es,S).

vl_add(X,L) :-
    var(L), !,
    [X|_] = L.
vl_add(X,[N|_]) :-
    X == N, !.
vl_add(X,[_|M]) :-
    vl_add(X,M).

vl_member(E,L) :-
    vl_member1(X,L),
    X == E, !.

vl_member1(_,L) :-    % enumerate members of L
    var(L), !, fail.
vl_member1(E,[E|_]).
vl_member1(E,[_|L]) :-
    vl_member1(E,L).

vl_tail(T,T) :-
    var(T), !.
vl_tail([_|R],T) :-
    vl_tail(R,T).
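The vl_ predicates above operate on open-ended lists, so a set can grow by unifying new members into its unbound tail. An illustrative query (bindings shown as a Prolog top level would report them):

% ?- vl_add(a,L), vl_add(b,L).
% L = [a,b|_].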
X =/= Y :-
    eval(X,_,XId,XNEqs),
    eval(Y,_,YId,YNEqs),
    XId \== YId,          % X and Y have distinct Ids
    vl_add(XId,YNEqs),
    vl_add(YId,XNEqs).

display_eqns(T) :-
    ( atomic(T) ; var(T) ), !,
    writeq(T === T),
    fail.
display_eqns(T) :-
    T = e(_,Id,_),
    display_eqns(Id,T),
    fail.
display_eqns(_) :- nl.

display_eqns(L,R) :-
    ( atomic(R) ; var(R) ), !,
    writeq(L === R), nl.
display_eqns(L,e(R,_,_)) :-
    atomic(R), !,
    writeq(L === R), nl.
display_eqns(Id,e(Pos,_,_)) :-
    vl_member1(Att:Val,Pos),
    functor(T,Att,1),
    arg(1,T,Id),
    display_eqns(T,Val).
display_eqns(Id,e(_,_,NEqs)) :-
    vl_member1(NEq,NEqs),
    writeq(Id =/= NEq), nl.

A.8 parser1.pl

% File:    parser1.pl
% Author:  Andi Wu
% Updated: August 24, 1993
% Purpose: A top-down parser implementing the S-parameter model.

:- ensure_loaded(johnson).
:- ensure_loaded(tree).

parse :-
    cp(Tree,_,[]),
    d(Tree),
    fail.
parse :-
    write('No more parse.').

parse(S) :- cp(_,S,[]).

parse(S,T) :- cp(T,S,[]).

cp(cp/[np(NF)/NP,C1]) -->
    { top(NF) === '+' },
    np(NP,cspec,NF),
    c1(C1,[np(NF)]).

c1(c1/[c0(CF,Th)/C,Agr1P],ABC) -->
    c0(C,CF,Th),
    agr1p(Agr1P,x0(CF,Th),ABC),
    { lexical(CF) }.

agr1p(agr1p/[np(NF)/NP,Agr1_1],HC,[np(NF1)]) -->
    { case(NF) === c1,
      check_np_features(NF,NF1) },
    np(NP,agr1spec,NF),
    agr1_1(Agr1_1,HC,[np(NF)],[]).
agr1p(agr1p/[np(NF)/NP,Agr1_1],x0(HF,Th),[np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c1,
      case(NF) =/= case(NF1) },
    np(NP,agr1spec,NF),
    agr1_1(Agr1_1,x0(HF,Th),[np(NF)],[np(NF1)]).

agr1_1(agr1_1/[agr1_0(Agr1F,Th)/Agr1,TP],x0(HF,Th),[np(NF)],ABC) -->
    { phi(Agr1F) === phi(NF),
      check_v_features(Agr1F,HF) },
    agr1_0(Agr1,Agr1F,Th),
    tp(TP,x0(HF,Th),[np(NF)],ABC).

tp(tp/[T1],HC,AC,ABC) -->
    t1(T1,HC,AC,ABC).

t1(t1/[t0(TF,Th)/T,AspP],x0(HF,Th),AC,ABC) -->
    { check_v_features(TF,HF),
      \+ ABC = [advp(_)] },
    t0(T,TF,Th),
    asp_p(AspP,x0(HF,Th),AC,ABC).

t1(t1/[advp/[often],T1],x0(HF,Th),AC,ABC) -->
    [often],
    { \+ ABC = [advp(_)] },
    t1(T1,x0(HF,Th),AC,ABC,_).

t1(t1/[t0(TF,Th)/T,AspP],x0(HF,Th),AC,ABC,_) -->
    { check_v_features(TF,HF) },
    t0(T,TF,Th),
    asp_p(AspP,x0(HF,Th),AC,ABC).

asp_p(asp_p/[Asp1],HC,AC,ABC) -->
    asp1(Asp1,HC,AC,ABC).

asp1(asp1/[asp0(AspF,Th)/Asp,Agr2P],x0(HF,Th),AC,ABC) -->
    { check_v_features(AspF,HF) },
    asp0(Asp,AspF,Th),
    agr2p(Agr2P,x0(HF,Th),AC,ABC).

agr2p(agr2p/[np(NF)/NP,Agr2_1],x0(HF,Th),AC,[np(NF1)]) -->
    { Th = [_,_],
      case(NF) === c2,
      check_np_features(NF,NF1) },
    np(NP,agr2spec,NF),
    agr2_1(Agr2_1,x0(HF,Th),[np(NF)|AC]).

agr2p(agr2p/[np(NF)/NP,Agr2_1],x0(HF,Th),AC,[]) -->
    { Th = [_,_],
      case(NF) === c2 },
    np(NP,agr2spec,NF),
    agr2_1(Agr2_1,x0(HF,Th),[np(NF)|AC]).
agr2p(agr2p/[Agr2_1],x0(HF,Th),AC,[]) -->
    agr2_1(Agr2_1,x0(HF,Th),AC).

agr2_1(agr2_1/[agr2_0(Agr2F,Th)/Agr2,VP],x0(HF,Th),[np(NF)|NPs]) -->
    { check_v_features(Agr2F,HF) },
    agr2_0(Agr2,Agr2F,Th),
    vp(VP,x0(HF,Th),[np(NF)|NPs]).

vp(vp/[np(NF)/NP,V1],x0(HF,[agt|Ths]),AC) -->
    { theta(NF) === agt,
      select(AC,np(NF1),AC1),
      case(NF1) === c1,
      check_np_features(NF,NF1) },
    np(NP,vspec1,NF),
    { lexical(NF) },
    v1(V1,x0(HF,[agt|Ths]),AC1).

vp(vp/[np(NF)/NP,V1],x0(HF,[pat]),AC) -->
    { theta(NF) === pat,
      select(AC,np(NF1),AC1),
      case(NF1) === c2,
      check_np_features(NF,NF1) },
    np(NP,vspec2,NF),
    { lexical(NF) },
    v1(V1,x0(HF,[pat]),AC1).

v1(v1/[v0(VF,[Th1|Ths])/V,VP],x0(HF,[Th1|Ths]),AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th1|Ths]),
    vp(VP,x0(HF,Ths),AC).

v1(v1/[v0(VF,[Th])/V],x0(HF,[Th]),_AC) -->
    { check_v_features(VF,HF) },
    v0(V,VF,[Th]).

c0([],_,_) --> [].
agr1_0(Aux,Agr1F,_Th) --> aux(Aux,agr1,Agr1F).
t0([],_,_) --> [].
asp0(V,AspF,Th) --> verb(V,AspF,Th).
agr2_0([],_,_) --> [].
v0([],_,_) --> [].

np([],cspec,_) --> [].
np(Subj,agr1spec,NF) --> subject(Subj,NF).
np(Obj,agr2spec,NF) --> object(Obj,NF).
np([],vspec1,_) --> [].
[The final page of the parser1.pl listing is reproduced rotated and is illegible in this copy.]
Appendix B

Parameter Spaces

B.1 P-Space of S(M)-Parameters (1)

[This section tabulates the parameter space of the eight S(M)-parameters. Each numbered setting lists the parameter values, given in the order agr2, asp, tns, agr1, c, spec1, spec2, cspec used by get_setting/1 in Appendix A, where each value is 0, 1, or 1/0 ("either"), followed by the set of word-order patterns over s, v and o that the setting generates. The table is too degraded in this reproduction to be transcribed reliably.]

B.2 P-Space of S(M)-Parameters (2)

[A second tabulation of S(M)-parameter settings and the word-order patterns they generate, in the same format; likewise illegible in this reproduction.]

B.3 P-Space of S(M)-Parameters (with Adv)

[The same kind of tabulation with the adverb "often" included in the test sentences, so that the generated patterns are strings over s, v, o and (often); illegible in this reproduction.]

B.4 P-Space of S(M) & HD Parameters

[Settings here pair the S(M)-parameter values with the two head-direction (HD) values, i or f, corresponding to c_head/1 and i_head/1 in Appendix A, and the generated patterns include the auxiliary (aux); illegible in this reproduction.]
  • 358. [ M I a • ■ m i v a. a a a] •73 1 0 0 1 1 1/0 t I/O [a »u « i, f i u • v , a h i a , a i u a a, m i a a, m i a a a, h i a a, m i a a a] •74 1 [a aai a 0 0 1 1 1/0 1/0 1 •73 1 1 0 1 1 1/0 1 1/0 [a aai a a aax a a, aax a a, aax a a a. j” t 1 1 1 1 1/0 1 1/0 [a a a, a a a] *77 0 0 0 0 0 1/0 1/0 1/0 *p a aax a a, a a aax a. a aax a a. •78 1 0 0 0 0 1/0 1/0 1/0 ». a aai a aai a a, a, a a , aax a aax a, a a] a aax a a, »7» 1 1 0 0 0 1/0 1/0 I/O a, a a aai aai a a a, a aax a a, a] aax a a. MO 0 0 0 1 1 1/0 1/0 1/0 a aax a, aai a a, a aax aax a a a, a a a] aax a *. Ml 1 1 1 0 0 1/0 1/0 1/0 a a, a a a) j- s 0 0 1 1 1/0 1/0 1/0 aax a a a, a aai a a, a aax a a, a aax a, aax a a, a aai aax a a a, a a a] aax a a, MS 1 1 0 1 1 1 / 0 1 / 0 1/0 [ a u a a a, aax a a a , m i a a, a m i a a, a aai a, a ami a a, a m i a a, m i a a. aai a a a] B .6 P-Space o f S(M ) & H D Param eters ( w ith A ux ) • 1 0 0 0 0 0 0 0 0 , 1 [aai a a. m i a a a] • 3 0 0 0 0 0 0 0 0 , 1 [a a aai, a a a aai] • 3 1 0 0 0 0 0 0 0 , 1 [a a aai, a a a aax] • 4 0 0 0 0 0 1 0 0 , 1 [a aai a, a aai a a] • 3 0 0 0 0 0 0 1 0 , 1 [aai a a a, aui a a] • 6 0 0 0 0 0 0 1 0 , 1 [a a a aai. a a aai] •7 1 0 0 0 0 0 0 0 , 1 [aax a a, aai a a a] M 0 0 0 0 0 1 1 0 , 1 [a aai a, a aai a a] • • 0 0 0 [a aax a a] •10 0 0 0 0 0 0 1 1 , 1 [a a a aai] •111 0 0 0 0 0 1 0 , 1 [aai a a, aax a a a] •13 1 1 . 1 [a a. a a a] •13 1 1 1 0 0 0 0 0 . 1 [a a. a a a] •14 0 0 0 0 0 1 1 1 . 1 [aax a a aax a a a) •16 1 0 0 0 0 0 1 1 . 1 (a aax a a] •17 1 1 0 0 0 0 1 0 , 1 [aax a a, aai a a a] •16 1 0 0 1 1 0 0 0 , f [a a aai, a a a aax] •!• 0 0 0 0 0 1 1 1, 1 [a a aax a, a aai a, a aai a a •301 1 1 0 0 1 0 0 , 1 1 342
  • 359. [a a. « f •] •21 1 1 1 « [v • , * • ■) •22 1 1 1 0 0 0 1 0 , 1 f [a • a, a a] •23 1 0 0 1 1 0 1 0 . f i (a a a u , a a a aax] •24 1 1 1 0 0 0 1 1 . 1 1 MS 1 1 1 0 0 0 1 1 , I f •20 0 0 0 1 1 1 1 1 . 1 1 [a aax a a, a aax a, a aax a a] n o 1 1 0 1 i o i o , f i [a a aax, a a a aax] n oi 1 0 0 0 1 1 1 , 1 1 [a a au a, a aax a, a aax a a] •30 1 1 1 0 0 1 1 1 , i f •32 1 1 0 1 1 1 1 1 , 1 1 [a aax a a, a aax a, a aax a a] •33 1 1 0 1 1 1 1 1 , f i Ca a a aax, a a aax, a a a aax] • 3 4 1 1 1 1 1 1 1 1 , 1 1 aax a a a] • 3 0 0 0 0 0 0 0 1/0 0 , 1 1 [aax a a a, aax a a, aax a a a] • 3 7 0 0 0 0 0 0 1 1/0 , *1 [a aax a a, aax a a a, aax a a] •30 1 0 00 0 1/0 0 0 , 11 [a aax a a, a aax a, aax aa, aax a a a] • 3 0 0 0 0 0 0 1 1/0 0 , 1 f [a a a aax, a a aax, a a a aax] • 4 0 1 0 0 0 0 0 1/0 0 , 1 1 [aax a a a, aax a a, aax a a a] • 4 1 0 0 0 0 0 1 1/0 0 , 1 1 [a aax a a. aaax a, a aax a a] 0 4 2 0 0 0 0 0 1/0 1 0 , 1 1 [a aax a a, a aax a,aax aaa, aax a a] •431 1 0 0 0 0 1/0 0 , 1 1 [aax a a a, aax a a, aax a a a] • 4 4 0 0 0 0 0 1 I/O 1 , 1 f [a a a aax, a a a aax, a a aax, a a a aax] •4S1 0 0 0 0 0 1 1 /0 ,1 1 [a aax a a, aax a a, aax a a a] •40 0 0 0 0 0 1 1/0 1 , 1 1 [a a aax a, aaax a a, a aax a. a aax a a] MT 1 0 0 0 0 1/0 1 0 , 1 1 [a aax a a, a aax a, aax a a, aax a a a] • 4 0 0 0 0 0 0 1/0 1 1 , 1 1 [a aax a a, a aax a, a a aax a, a aax a a] • 4 9 1 1 1 0 0 1/0 0 0 , 1 1 [a a a. a a, a a. a a a] •SO 1 1 1 0 0 0 1/0 0 , 1 1 [a a a, a a, a a a] •SI 1 1 0 0 0 0 1 1/0 , 1 1 [a aax a a, aax aa, aax a a a] •S3 1 0 0 11 0 1/0 0 , 1 f [aax a a a, aaxa a a, aax a a] OSS 1 0 0 1 1 1/0 0 0 , 1 1 [aax a a a, aax a a, aax a a, aax a a aj •04 1 0 0 1 1 1/0 0 0 . f 1 [a a a aax, a a aax, a a aax, a a a aax] OSS 1 1 O0 0 1/0 1 0 , 1 1 [a aax a a, a aax a, aax a a. aax a a a] •SO 0 0 0 1 1 1 1/0 0 , 1 1 [aax a a a, aax a a, aax a a a] •ST 1 0 O1 1 0 1/0 0 , f 1 [a a a aax, a a aax. a a a aax] ■00 1 0 00 0 1/0 1 1 , 1 1 343
  • 360. [■ M> on i 1 0 1 1 1/0 1 1 . 1 1 • HI a a] [a aax a aax a a] H f 1 1 1 0 0 0 1 1 / 0 , 1 1 [a 7 a. * a• a a a] 070 1 1 0 1 1 1/0 1 1 , t 1 [a a a aax, a a aax, a a a aax, MO 1 1 1 0 0 1 / 0 1 0 , 1 1 a a a aax] [•* » •77 i 1 1 1 1 1 1 1/0 , 1 1 Ml 0 0 0 1 1 1 1 1 /0 ,1 1 Ca a a [a IU a a, a aax a, a aax a a, •as ■ a, aax a a a] •7# 0 0 0 0 0 0 1/0 1/0 . 1r___ Ma l 0 0 1 1 1 0 1 /0 ,1 1 aax a a a] [• M I •ax • a a] •TO0 0 0 0 0 1/0 1/0 0 . 1-------- -------— [aax a a a, a aax a a, a aax a, M3 1 1 0 1 1 0 1/0 0 , « i a aax a a. axx a a, aax a a a] [a • a aax, a a aax, a a a aax] MO 1 0 0 0 0 0 1/0 1/0 . 1 m i 1 0 0 0 1/0 1 1 , 1 1 [aax a a a, a aax a a, aax a a, [• aax aax a a a] a aax a a] M l 1 0 0 0 0 1/0 1/0 0 . 1 Ml 0 0 0 1 1 1 1/0 1 , 1 1 [aax a a a. a aax a a, a aax a. Cx aax 7 a, a aax a, a aax a a. a aax a a. aax a a, aax a a a] a aax a a] M3 0 0 0 0 0 1/0 1 1/0 , 1 MO 1 0 0 1 1 1/0 1 0 , 1 1 [a aax [aax a a a, aax a a, aax a a. a aax a a, aax a a a, aax a 7] aax a a a] Ml 0 0 0 0 0 1 /0 1 /0 1 , 1 MT 1 0 0 1 1 1/0 1 0 , 1 1 [a aax a, [a a a aax, a a aax, a a aax, a aax a a aax a a] a a x aax] •M 1 1 0 0 0 0 1/0 1/0 , 1 MO 1 1 1 0 0 1/0 1 1 , 1 1 [aax a a a, a aax a a, aax a a, [a a a,> a a* a a a, a a a] aax a a a] OOO 1 1 0 1 1 0 1 1 /0 , « 1 MS 1 1 0 0 0 1 /0 1 /0 0 . 1 [a a a aax, a a aax, a a a aax] [aax a a a, a aax a, a aax a a. aax a a, aax a a a] 070 1 1 0 1 1 1/0 1 0 , 1 1 [aax a * a, aax a a, aax a a, •M 1 0 0 0 0 1/0 1 1/0 . 1 aax a a a] [a aax a a, a a aax a, a aax 7, a aax a *. aax a a, aax a a a] •71 1 1 0 1 1 1/0 1 0 , « 1 [a a a aax, a a aax, a a aax, •07 1 0 0 0 0 1/0 1 /0 1 , 1 a a a aax] [a aax 7, a aax a *i a aax a a] 073 1 0 0 1 1 1/0 1 1 , 1 1 [s aax MO 0 0 0 1 1 1 /0 1 /0 0 . 1 a aax a a] aax a a a] •73 1 0 0 1 1 1/0 1 1 , f 1 [a a a aax. a a aax, a a a aax, •M 1 1 1 0 0 0 1/0 1 /0 . i a a a aax] [v a a, 070 I 1 0 1 1 1 1 1 /0 ,1 1 •90 1 1 1 0 0 1 /0 1 /0 0 . 1 [a aax [a a a, a a• a a a, a a, a a a] aax a a, aax a a a] -------- •91 1 0 0 1 1 0 1/0 1/0 . 1 344 i 1 1 1 1 1 1 i 1 1 i i 1 «
  • 361. [MI . I f , MI I I I ] 1 1 [a aai a a. a aai a aai a a, aai a m i a a, m i a a a, aai a a a, •] 1 1 M3 1 0 0 [m i i f I. ■ IU f 1, MS 0 o o [• MI • f, MI I l f , 1 1 1 / 0 0 1/0 . 1 u i a a, aai a a a] 1 1 1 / 0 1 1/0 , 1 a aai a a, a aai a, aai a a a, aai a a] •107 1 1 0 1 1 1 / 0 [a a a aai, a a a m i, a a a aai, a a aai, a •100 1 1 1 1 1 1 / 0 [a a a, a a, a a a. a 1 1 /0 ,1 a a aai, a a aai] 1 1 /0 ,1 M4 1 1 0 0 0 1/0 1 1/0 . 1 1 •1M 0 0 0 0 0 1/0 I/O 1/0 , 1 1 [• MI * I, a a aai a, a m i a. [aai a a a, a aai a a, • U I Y» I MI a •, aai a a, aai a a aj a aai a a, a a aai a. • ««x a ?, aai a a, aai a a a] MS 4 0 4 1 1 1 1 / 0 1/0 , 1 1 -------------------------- — [■ MI f 0. •110 1 0 0 0 0 1/0 1/0 1/0 * 1 1 • MI S *, ««1 • • «tl 1 V, MI • * a] MS 1 0 0 1 1 1/0 1/0 0 ,1 1 [•U 1 f 1, aai a a a, m i a a, •111 1 1 0 0 0 1/0 1/0 1/0 , 1 1 MI I f * , aai a a, aai a a a] ------------------------------------------ a a aai a, a aai a a. aai a a, M7 1 0 0 1 1 1/0 1/0 0 , t 1 aai a a a] [a * a i u . a a a aai, a a aai. ■ T • MI, a a m i, a a a aai] •113 0 0 0 1 1 1/0 1/0 1/0 , 1 1 [aai a a a, aai a • a, a aai a a. MS 1 1 1 0 0 1/0 1 1/0 . 1 1 a aai a, a aai a a, a aai a a. aai a a, aai a a a] * • •] --------- ••••1 1a ia i ta a i •113 1 1 1 0 0 1/0 1/0 1/0 , 1 1 SM 1 1 0 1 1 0 1/0 1/0 , f 1 [a a a. a a a . a [f I I MI, a a a aai, a a aai, a a a, a a, a a a] V I* Ml] •114 1 0 0 1 1 1/0 1/0 1/0 , 1 1 MM 1 1 0 1 1 1/0 1/0 0 , 1 1 [aai a a a, au a a a, aai a a. [m i » • I, a u a a, aai a a a, aai a a a, a aai a a. a aai a a. m i t a , i u * a a] aai a a, aai a a a] S101 1 1 0 1 1 1/0 1/0 0 , « 1 [f * i aai. a a aai, a a a aai, •116 1 0 0 1 1 1/0 1/0 1/0 , f 1 v a m i , v a a aai) [a a a aai, a a a aai, a a aai, a a a aai, a a a aai, a a aai. S103 1 0 0 r , ___ _ . 1 1 1 / 0 1 1/0 , 1 1 a a a aai] a aai a a, aai a a, aai a a a. •US 1 1 0 1 1 1/0 1/0 1/0 , 1 1 m i a a, aai a a a] [aai a a a, aai a a a , aai a a, ------------------ a aai a a, a aai a, a aai a a, •10S 1 0 0 1 1 1 / 0 1 1/0 , f 1 [a a a aai, a a aai, a a a aai, a a aai, a a a aai] •117 1 1 0 1 1 1/0 1/0 1/0 , f 1 [a a a aai, a a a aai, a a a u , #104 1 0 0 1 1 1/0 1/0 1 , 1 1 a a a aai, a a a aai, a a a u , [mi a a , a aai a, a aai a a, a a a aai] a aai a a, a aai a a] S10S 1 0 0 1 1 1/0 1/0 1 , f 1 [a a a aai. a a aai, a a a aai, a a a aai. a a a aai] •106 1 1 0 1 1 1 / 0 1 1/0 , i i 345
B.7 P-Space of S(F)-Parameters

[Each numbered entry pairs an S(F)-parameter setting with the spell-out patterns it generates, each word annotated with the features it spells out, for example s-[c(1)] aux-[tns] v-[asp]; the listing is largely illegible in this reproduction.]
Appendix C

Partial Ordering of S(M)-Parameter Settings

[The S(M)-parameter settings are listed here as vectors of eight values, each value being 0, 1, or the ambiguous value 1/0, and are arranged into numbered groups that partially order the settings, for example [ 0 0 0 0 0 0 0 0 ], [ 0 0 0 0 0 1/0 1/0 0 ] and [ 0 0 0 0 0 1/0 0 1/0 ]; most of the listing is illegible in this reproduction.]
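The group labels in the listing above are not recoverable from the scan. One mechanical way to index such groups, offered here purely as an illustration and not as the dissertation's own procedure, is to classify each setting by how many of its values are ambiguous (1/0) and how many are fixed at 1; classify/2 below is a hypothetical predicate name introduced for this sketch.

% A minimal sketch (not the dissertation's code): classify an
% 8-value S(M)-parameter setting by its number of ambiguous (1/0)
% values and its number of values fixed at 1.
:- use_module(library(apply)).          % include/3

classify(Setting, Ambiguous-Ones) :-
    include(==(1/0), Setting, As), length(As, Ambiguous),
    include(==(1),   Setting, Os), length(Os, Ones).

% ?- classify([0,0,0,0,0,1/0,1/0,0], C).
% C = 2-0.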
Appendix D

Learning Sessions

D.1 Acquiring an Individual Language (1)

| ?- sp.
The initial setting is [0 0 0 0 0 0 0 0]   %a
Next? [s,iv].   %1
Current setting remains unchanged.
Next? [s,tv,o].   %2
Current setting remains unchanged.
Next? [s,o,tv].   %3
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 0]   %b
Next? generate.
Language generated with current setting:
[[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
Next? [s,tv,o].   %4
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 0 0]   %c
Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]
[The session continues in the same error-driven fashion: the conflicting inputs [s,o,tv] and [s,tv,o] in turn force resets through the settings labeled %d to %q, ending at the ambiguous setting [0 0 0 0 0 1 1/0 1]. The inputs %19 to %26 then leave the setting unchanged, and a final generate step yields a language containing the [s,o,tv] and [s,tv,o] orders both with and without often.]
Next? bye.
yes
| ?-

D.2 Acquiring an Individual Language (2)

| ?- sp.
The initial setting is [0 0 0 0 0 0 0 0]   %a
Next? [s,o,tv].   %1
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 0]   %b
[The session continues through the resets labeled %c to %m, again ending at the ambiguous setting [0 0 0 0 0 1 1/0 1]; the remaining inputs %13 to %20 leave the setting unchanged, and the final generate step yields the same language as in D.1.]
Next? bye.
yes
| ?-
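The two sessions above follow the same error-driven loop: an input sentence that parses under the current setting leaves the setting untouched, and one that fails to parse triggers a reset to a new candidate setting. A minimal sketch of that loop is given below; parse/2 and next_setting/2 are hypothetical stand-ins for the experimental grammar's actual predicates, and the stub clauses exist only so that the sketch runs on its own.

% A minimal sketch of the error-driven loop seen in D.1 and D.2;
% parse/2 and next_setting/2 are stand-ins, not the grammar's code.
learn(Setting, [], Setting).
learn(Setting, [Sent|Rest], Final) :-
    (   parse(Sent, Setting)             % current setting remains unchanged
    ->  learn(Setting, Rest, Final)
    ;   next_setting(Setting, New),      % resetting the parameters ...
        learn(New, Rest, Final)
    ).

% Illustrative stubs only; the real definitions belong to the
% experimental grammar of Chapter 3.
parse([s,iv], _).                        % pretend only [s,iv] parses
next_setting(_, [0,0,0,0,0,1,1,0]).      % pretend this is the next setting

% ?- learn([0,0,0,0,0,0,0,0], [[s,iv],[s,o,tv]], F).
% F = [0,0,0,0,0,1,1,0].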
D.3 Acquiring All Languages in the P-Space of S(M)-Parameters

| ?- learn_all_langs.
Trying to learn [[s,iv],[s,tv,o]] ...
Final setting: 0 0 0 0 0 0 0 0
Language generated: [[s,iv],[s,tv,o]]
The language [[s,iv],[s,tv,o]] is learnable.

Trying to learn [[s,iv],[s,o,tv],[s,tv,o]] ...
Final setting: 0 0 0 0 0 1 1/0 0
Language generated: [[s,iv],[s,o,tv],[s,tv,o]]
The language [[s,iv],[s,o,tv],[s,tv,o]] is learnable.

Trying to learn [[s,iv],[s,o,tv]] ...
Final setting: 0 0 0 0 0 1 1 0
Language generated: [[s,iv],[s,o,tv]]
The language [[s,iv],[s,o,tv]] is learnable.

[The session tries every language in the P-space in the same way, feeding the learner the target's own sentences and then comparing the language generated by the final setting with the target. Almost all targets are learnable; the exceptions are single-pattern targets whose sentences are consistent only with settings that generate additional patterns, for example:]

Trying to learn [[s,tv,o]] ...
Final setting: 1 0 0 0 0 0 1 0
Language generated: [[iv,s],[s,tv,o]]
which is a superset of [[s,tv,o]]
The language [[s,tv,o]] is NOT learnable.

[The target [[s,o,tv]] fails in the same way, its final setting generating [[s,o,tv],[s,iv]]; all of the multi-pattern targets tried on these pages are reported learnable.]
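The learnability verdicts above reduce to a set comparison: a target language is learnable just in case the language generated by the setting the learner converges on contains exactly the target's sentences, and a proper superset, as with [[s,tv,o]], means the target is NOT learnable. The sketch below illustrates this check under the same assumptions as the learn/3 sketch after D.2; generate_language/2 is a hypothetical stand-in with a stub definition.

% A minimal sketch of the learnability check reported in D.3 and D.4;
% learn/3 is the sketch given after D.2, and generate_language/2 is a
% stand-in, not the grammar's code.
learnable(Target) :-
    learn([0,0,0,0,0,0,0,0], Target, Final),
    generate_language(Final, Generated),
    msort(Generated, Sorted),
    msort(Target, Sorted).               % exactly the same sentences

% Illustrative stub: pretend every final setting generates this language.
generate_language(_, [[s,iv],[s,tv,o]]).

% ?- learnable([[s,iv],[s,tv,o]]).   % succeeds: languages coincide
% ?- learnable([[s,tv,o]]).          % fails: generated a proper superset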
  • 381. Laagaaga (tM tilid: [[lv.a],[a,a,tv],[a,tv,a),[a.iv],[a,a,tv]] Tba laagaaga [[lv,a],(*,*,tv],(a,tv,a],[a,lv],|i.a,tv]] it ltu u k U . I** I- T D .4 Acquiring A ll Languages in the P-Space of S(M )-Param eters (w ith A dv) I T- l u n . t l l .l u |i . Trylag ta U u a , [a.aftaa, itr],[a.aftaa.tv,*],[•,<«,«]] ... Fiaal aattlag: 0 0 0 0 0 1 0 0 Laagaaga gaaaratad: [Ca.lt],[*,aftaa,iv],[a,aftaa,tv,a],[a.tr.a]] Tka laagaaga [[a,1*1,[a,aftaa.lt].[a.aftaa,tv,a].[a.tv.a]] la laaraabla. Trylag tv laara [[a,lt].[a,a,tv],[a.aftaa.it],[a,aftaa.a,ttl,[a.aftaa,tv,a],[a,tv,a]] ... Fiaal aattlag: O 0 0 0 0 1 1/0 0 Laagaaga gaaaratad: [[a.iv].[a.a.tv].[a,aftaa.lt],[a,aftaa,a,tv],[a.aftaa.tv,a],[a.tv.a]] Tba laagaaga [[a.It],[a.a.tv],[a.aftaa.lv],[a.aftaa,a,tv],[a.aftaa.tv,a],[a.tv.a]] la laaraabla- Trylag ta laara [[a.iv],[a,a,tv],[#,aftaa.lv],[a,aftaa,a,t*]] ... Fiaal aattlag: 0 0 0 0 0 1 1 0 Laagaaga gaaaratad: [[a.iv].[a.a.tv].[a.aftaa,iv),[a,aftaa,a,tvj] Tba laagaaga [[a,iv],(a,a,tv].[a.aftaa,iv],[a,aftaa,a,tv]] la laaraabla. Trylag ta laara CCa.lv],[a.1*.aftaa],[a,tv.a],[a,tv,aftaa.a]] ... Fiaal aattlag: 1 1 1 1 0 1 0 0 Laagaaga gaaaratad: [[a.iv].Ca,lv,*ftaa],[a,tv,a),[a,tv,aftaa,a]) Tba laagaaga [[a.iv],(a.iv,aftaa],[a.tv.a],[a,tv,aftaa,a]] la laaraabla. Trylag ta laara [(aftaa.a,lv],[aftaa.a,tv,a],[a.iv],[a,tv,*]] ... Fiaal aattlag: 0 0 0 0 0 0 0 0 Laagaaga gaaaratad: [[aftaa,a,lv],[aftaa,a,tv,a],[a,lv],[a,tv,a]] Tba laagaaga [[aftaa,a,iv].[aftaa,a,tv.a],[a.iv],[a,tv,a]] la laaraabla. Trylag ta laara [[aftaa,a.iv],[aftaa,a,tv,a],[a,lv],[a.aftaa.lv],[a.aftaa,tv,a],[a,tv.a]] ... Fiaal aattlag: 0 0 0 0 0 1/0 0 0 Laagaaga gaaaratad: [[aftaa,a,iv], [aftaa,a,tv,a], [a, iv], [a.aftaa.lv], [a.aftaa,tv,a], [a,tv,a]] Tba laagaaga [[aftaa,a,iv),[aftaa,a,tv,a],[*,!*],[a,aftaa.iv],[a,aftaa.tv,*],[a.tv.a]] la laaraabla. Trylag ta laara [[a,tv,a],[a,tv,a,aftaa], [a.iv],[a,lv,aftaa],[a,tv.a],[a,tv,aftaa,a]] ... Fiaal aattlag: 1 1 1 1 1 1 1 1 Laagaaga gaaaratad: [[a,tv,a),[a,tv,a,aftaa], [a,lv],[a,iv,aftaa],[a,tv,a],[a,tv,aftaa,a]] Tba laagaaga [[a,tv,a], [a,tv,a,aftaa], [a,lv], [a,lv,aftaa], [a,tv,a],[a,tv,aftaa,a]] la laaraabla. Trylag ta laara [[a,tv,aftaa,a], [a,tv,a], [a,tv.a,aftaa], [a,lv], [a,iv,aftaa], [a,tv,a], [a,tv,aftaa,a]] ... Fiaal aattlag: 1 1 1 1 1 1 / 0 1 1 365
  • 382. Laagaaga pM ritri: [[a,tV,aftaB,a] .[0,tV,a] ,[a,tV.a,aftaa] .Cl.lv] ,[a,iV,aftaa],[a,tV,a] , Tka laagaaga [[alt i laftM ,t],[«>tf,«}l[i,t«>i,>ll*a]l[a ,i* ],[i,h l«ft*a]l[«,tv>t], [iit*iafta»,*U la laaraabla. Trylag «a laara [[a,tv,aftaa,a],[a.tv.a]] ... Fiaal aattlag: 1 1 1 1 0 0 1 1 Laagaaga gaaaratad: [[a.tv,aftaa,a],to,tv,a]] Tka laagaaga [[a,tv,aftaa,a],[a,tv,a]] la laaraabla. Trylag ta laara [[a,a,tv],[aftaa,a.a.tv],[aftaa,a,iv],[ [a.aftaa,a,tv)] ... Fiaal aattlag: 0 0 0 0 0 1/0 1 0 Laagaaga gaaaratad: [[a.a.tv],[aftaa.a,a, tv],[aftaa,a,lv], [ Tka laagaaga [[a.a.tv].[oftaa,a.a,tv].[aftaa.a,1v].[ la laaraabla. Trylag ta laara [[•,a,tv],[aftaa,a,a,tv],[aftaa.a,lv].[a.iv]] ... Fiaal aattlag: 0 0 0 0 0 0 1 0 Laagaaga gaaaratad: [[a.a.tv],[aftaa,a,a,tv],[aftaa,a,iv],[a,lv]] Tba laagaaga [[a,a,tv],[aftaa,a,a,tv],[aftaa.a,iv],[a,lv]] la laaraabla. Trylag ta laara [[a,a,tv],[aftaa,a,a,tv],[aftaa.a,lv],[aftaa.a,tv,a],[a,lv],[a,tv,a]] ... Fiaal aattlag: 0 0 0 0 0 0 1/0 0 Laagaaga gaaaratad: [[a,a,tv], [aftaa,a,a, tv], [aftaa,a,lv], [aftaa,a,tv,a], [a,lv], [a,tv,a]] Tba laagaaga [[a,a,tv],[aftaa,a,a,tv],[aftaa,a,iv],[aftaa,a,tv,a],[a.iv],[a,tv,a]] la laaraabla. Trylag ta laara [[a,a,tv], [aftaa,a,a,tv], [aftaa,a.iv], [aftaa.a,tv,a], [a.iv], [a,a,tv], [a,aftaa.lv], [a,aftaa.a,tv],[a,aftaa,tv,a] ,[a,tv,a]] ... Fiaal aattlag: 0 0 0 0 0 1/0 I/O 0 Laagaaga gaaaratad: [[a,a,tv],[aftaa,a,a,tv],[aftaa.a,iv],[aftaa.a,tv,a],[a,iv], [a,a,tv],[a,aftaa, iv], [a,aftaa,a,tv], [a,aftaa,tv,a], [a,tv,a]] Tka laagaaga [[a ,a,tv],[aftaa,a,a,tv],[aftaa,a,lv),[aftaa,a,tv ,a],[a,lv],[a,a ,tv],[a,aftaa,iv], [a,aftaa,a,tv],[a,aftaa,tv,a],[a,tv,a]] la laaraabla. Trylag ta laara [[a,a,tv],[a,a,tv,aftaa],[a,lv],[a,lv,aftaa],[a,tv,a],[a,tv,aftaa,a]] ... Fiaal aattlag: 1 1 1 1 0 1 1 1 Laagaaga gaaaratad: [[a,a,tv], [a.a.tv,aftaa], [a.iv], [a,iv,aftaa],[a,tv,a], [a,tv,aftaa,a]] Tka laagaaga [[a,a,tv],[a,a,tv,aftaa],[a,lv],[a,iv,aftaa],[a,tv,a],[a,tv,aftaa,a]] la laaraabla. Trylag ta laara [[a.a.tv], [a,a,tv,aftaa], [a,tv,aftaa,a],[a,tv,a], [a,lv], [a,lv,aftaa], [a, tv,a], [a,tv,aftaa,a]] ... Fiaal aattlag: 1 1 1 1 0 1/0 1 1 Laagaaga gaaaratad: [[a,a,tv],[a,a,tv,aftaa], [a,t v,aftaa,a],[a,tv,a],[a,1v],[a,1v,aftaa],[a,tv,a], [a,tv,aftaa.a]] -]. ] ,[a,aftaa.a,tv]] -), [a.aftaa.a, tv]) 366
The language [[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o]] is learnable.

Trying to learn [[o,s,often,tv],[o,s,tv],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]] ...
Final setting: 1 1 0 0 0 1 1 1
Language generated: [[o,s,often,tv],[o,s,tv],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]
The language [[o,s,often,tv],[o,s,tv],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]] is learnable.

Trying to learn [[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] ...
Final setting: 0 0 0 0 0 1 1/0 1
Language generated: [[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
The language [[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] is learnable.

Trying to learn [[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] ...
Final setting: 0 0 0 0 0 1 1 1
Language generated: [[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
The language [[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] is learnable.

Trying to learn [[o,often,tv,s],[o,tv,s]] ...
Final setting: 1 0 0 0 0 0 1 1
Language generated: [[o,often,tv,s],[o,tv,s]]
The language [[o,often,tv,s],[o,tv,s]] is learnable.

Trying to learn [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]] ...
Final setting: 1 1 0 0 0 1/0 1 1
Language generated: [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]
The language [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]] is learnable.

Trying to learn [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] ...
Final setting: 1 0 0 0 0 1/0 1/0 1
Language generated: [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
The language [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] is learnable.

Trying to learn [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] ...
Final setting: 1 0 0 0 0 1/0 1 1
Language generated: [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
The language [[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[s,iv]] ...
Final setting: 0 0 0 0 0 0 1 1/0
Language generated: [[o,often,s,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[s,iv]]
The language [[o,often,s,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[s,iv]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,tv,o]] ...
Final setting: 0 0 0 0 0 0 1/0 1/0
Language generated: [[o,often,s,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,tv,o]]
The language [[o,often,s,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,tv,o]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,tv]] ...
Final setting: 0 0 0 0 0 0 1 1
Language generated: [[o,often,s,tv],[o,s,tv]]
The language [[o,often,s,tv],[o,s,tv]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] ...
Final setting: 0 0 0 0 0 1/0 1/0 1
Language generated: [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
The language [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] ...
Final setting: 0 0 0 0 0 1/0 1 1
Language generated: [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
The language [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] ...
Final setting: 0 0 0 0 0 1/0 1 1/0
Language generated: [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
The language [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] is learnable.

Trying to learn [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] ...
Final setting: 0 0 0 0 0 1/0 1/0 1/0
Language generated: [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
The language [[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]] is learnable.

Trying to learn [[iv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]] ...
Final setting: 1 0 0 0 0 0 0 0
Language generated: [[iv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]]
The language [[iv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[often,iv,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,s,o]] ...
Final setting: 1 0 0 0 0 1/0 0 0
Language generated: [[iv,s],[often,iv,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,s,o]]
The language [[iv,s],[often,iv,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[often,iv,s],[often,tv,o,s],[tv,o,s]] ...
Final setting: 1 1 0 0 0 0 1 0
Language generated: [[iv,s],[often,iv,s],[often,tv,o,s],[tv,o,s]]
The language [[iv,s],[often,iv,s],[often,tv,o,s],[tv,o,s]] is learnable.

Trying to learn [[iv,s],[often,iv,s],[often,tv,o,s],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s]] ...
Final setting: 1 1 0 0 0 1/0 1 0
Language generated: [[iv,s],[often,iv,s],[often,tv,o,s],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s]]
The language [[iv,s],[often,iv,s],[often,tv,o,s],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s]] is learnable.

Trying to learn [[iv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[tv,o,s],[tv,s,o]] ...
Final setting: 1 1 0 0 0 0 1/0 0
Language generated: [[iv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[tv,o,s],[tv,s,o]]
The language [[iv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[tv,o,s],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s],[tv,s,o]] ...
Final setting: 1 1 0 0 0 1/0 1/0 0
Language generated: [[iv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s],[tv,s,o]]
The language [[iv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] ...
Final setting: 1 0 0 0 0 1/0 1 0
Language generated: [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
The language [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] is learnable.
Trying to learn [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[tv,s,o]] ...
Final setting: 1 0 0 0 0 0 1/0 0
Language generated: [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[tv,s,o]]
The language [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]] ...
Final setting: 1 0 0 0 0 1/0 1/0 0
Language generated: [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]]
The language [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]] ...
Final setting: 1 0 0 0 0 0 1 0
Language generated: [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]]
The language [[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]] ...
Final setting: 1 1 0 0 0 0 1 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]]
The language [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[tv,o,s],[tv,s,o]] ...
Final setting: 1 1 0 0 0 0 1/0 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[tv,o,s],[tv,s,o]]
The language [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[tv,o,s],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[tv,s,o]] ...
Final setting: 1 0 0 0 0 0 1/0 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[tv,s,o]]
The language [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]] ...
Final setting: 1 0 0 0 0 0 1 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]]
The language [[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,s,o]] ...
Final setting: 1 1 0 0 0 1/0 1 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,s,o]]
The language [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,s,o]] is learnable.
Trying to learn [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s],[tv,s,o]] ...
Final setting: 1 1 0 0 0 1/0 1/0 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s],[tv,s,o]]
The language [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,tv,o,s],[often,tv,s,o],[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o],[tv,o,s],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] ...
Final setting: 1 0 0 0 0 1/0 1 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]]
The language [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv]] is learnable.

Trying to learn [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]] ...
Final setting: 1 0 0 0 0 1/0 1/0 1/0
Language generated: [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]]
The language [[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]] is learnable.

Trying to learn [[iv,s],[iv,s,often],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1 0 0
Language generated: [[iv,s],[iv,s,often],[tv,s,o],[tv,s,often,o]]
The language [[iv,s],[iv,s,often],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,s],[iv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1 0 1/0
Language generated: [[iv,s],[iv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,s],[iv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,s],[iv,s,often],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1 1 1/0
Language generated: [[iv,s],[iv,s,often],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,s],[iv,s,often],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[tv,often,s,o],[tv,s,o]] ...
Final setting: 1 1 1 1 0 0 0 0
Language generated: [[iv,often,s],[iv,s],[tv,often,s,o],[tv,s,o]]
The language [[iv,often,s],[iv,s],[tv,often,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] ...
Final setting: 1 1 1 1 0 0 1/0 0
Language generated: [[iv,often,s],[iv,s],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]]
The language [[iv,often,s],[iv,s],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[tv,o,s],[tv,often,o,s]] ...
Final setting: 1 1 1 1 0 0 1 0
Language generated: [[iv,often,s],[iv,s],[tv,o,s],[tv,often,o,s]]
The language [[iv,often,s],[iv,s],[tv,o,s],[tv,often,o,s]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,often,s,o],[tv,s,o]] ...
Final setting: 1 1 1 1 0 1/0 0 0
Language generated: [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,often,s,o],[tv,s,o]]
The language [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,often,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] ...
Final setting: 1 1 1 1 0 1/0 1/0 0
Language generated: [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]]
The language [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s]] ...
Final setting: 1 1 1 1 0 1/0 1 0
Language generated: [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s]]
The language [[iv,often,s],[iv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[o,tv,often,s],[o,tv,s],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] ...
Final setting: 1 1 1 1 0 0 1/0 1/0
Language generated: [[iv,often,s],[iv,s],[o,tv,often,s],[o,tv,s],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]]
The language [[iv,often,s],[iv,s],[o,tv,often,s],[o,tv,s],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[o,tv,often,s],[o,tv,s],[tv,o,s],[tv,often,o,s]] ...
Final setting: 1 1 1 1 0 0 1 1/0
Language generated: [[iv,often,s],[iv,s],[o,tv,often,s],[o,tv,s],[tv,o,s],[tv,often,o,s]]
The language [[iv,often,s],[iv,s],[o,tv,often,s],[o,tv,s],[tv,o,s],[tv,often,o,s]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] ...
Final setting: 1 1 1 1 0 1/0 1/0 1/0
Language generated: [[iv,often,s],[iv,s],[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]]
The language [[iv,often,s],[iv,s],[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s]] ...
Final setting: 1 1 1 1 0 1/0 1 1/0
Language generated: [[iv,often,s],[iv,s],[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s]]
The language [[iv,often,s],[iv,s],[o,s,tv],[o,s,tv,often],[o,tv,often,s],[o,tv,s],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[iv,s,often],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1/0 0 0
Language generated: [[iv,often,s],[iv,s],[iv,s,often],[tv,often,s,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,often,s],[iv,s],[iv,s,often],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[iv,s,often],[tv,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1/0 1 0
Language generated: [[iv,often,s],[iv,s],[iv,s,often],[tv,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,often,s],[iv,s],[iv,s,often],[tv,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[iv,s,often],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1/0 1/0 0
Language generated: [[iv,often,s],[iv,s],[iv,s,often],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,often,s],[iv,s],[iv,s,often],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[iv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1/0 0 1/0
Language generated: [[iv,often,s],[iv,s],[iv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,often,s,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,often,s],[iv,s],[iv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] is learnable.

Trying to learn [[iv,often,s],[iv,s],[iv,s,often],[o,tv,often,s],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1/0 1 1/0
Language generated: [[iv,often,s],[iv,s],[iv,s,often],[o,tv,often,s],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,often,s],[iv,s],[iv,s,often],[o,tv,often,s],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] is learnable.
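Every run in this section has the same shape: a target language is drawn from the P-space of S(M)-parameter settings, the learner is run on it, and the language generated by the final setting is compared with the target. The fragment below is a minimal, self-contained Prolog sketch of such a driver, given for illustration only. The predicate names (try_all/0, report/1, learn/2, language_of/2) and the toy two-parameter grammar are assumptions of this sketch, not the program that produced the sessions reproduced here, and learn/2 here simply searches for a setting whose language matches the input, whereas the actual learner adjusts the eight S(M)-parameters incrementally on failed parses.

    % Toy stand-in grammar: two binary parameters map to four word-order
    % languages.  Each language is stored in Prolog standard order so
    % that a sorted input can be compared with it directly.
    language_of([0,0], [[s,iv],[s,o,tv]]).
    language_of([0,1], [[s,iv],[s,tv,o]]).
    language_of([1,0], [[iv,s],[tv,s,o]]).
    language_of([1,1], [[iv,s],[tv,o,s]]).

    % A language counts as learnable here if some setting generates
    % exactly the input sample (generate-and-compare, not triggering).
    learn(Input, Setting) :-
        language_of(Setting, Lang),
        msort(Input, Sorted),
        Sorted == Lang.

    % Try every language in the parameter space and print a report in
    % the format of the transcripts above.
    try_all :-
        forall(language_of(_, Target), report(Target)).

    report(Target) :-
        format("Trying to learn ~w ...~n", [Target]),
        (   learn(Target, Setting)
        ->  language_of(Setting, Generated),
            format("Final setting: ~w~n", [Setting]),
            format("Language generated: ~w~n", [Generated]),
            format("The language ~w is learnable.~n~n", [Target])
        ;   format("The language ~w is NOT learnable.~n~n", [Target])
        ).

Run under SWI-Prolog, ?- try_all. prints one block per setting in the same Trying/Final/Language/learnable format as the transcripts; the exhaustive sweep, rather than the learning method, is the point of the sketch.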
Trying to learn [[iv,often,s],[iv,s],[iv,s,often],[o,tv,often,s],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] ...
Final setting: 1 1 1 1 1 1/0 1/0 1/0
Language generated: [[iv,often,s],[iv,s],[iv,s,often],[o,tv,often,s],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]]
The language [[iv,often,s],[iv,s],[iv,s,often],[o,tv,often,s],[o,tv,s],[o,tv,s,often],[s,iv],[s,iv,often],[s,tv,o],[s,tv,often,o],[tv,o,s],[tv,often,o,s],[tv,often,s,o],[tv,s,o],[tv,s,often,o]] is learnable.

yes
| ?-

D.5 Parameter Setting with Noisy Input

| ?- sp.

The initial setting is [0 0 0 0 0 0 0 0] %a

Next? [s,o,tv]. %1
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 0] %b
Next? [iv,s]. %2
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 0 1 0] %c
Next? [s,iv]. %3
Unable to parse [s,iv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 0 0] %d
Next? [s,tv,o]. %4
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 0 1 1] %e
Next? [s,o,tv]. %5
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 1 1] %f
Next? [tv,s,o]. %6
Unable to parse [tv,s,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 0 0] %g
Next? [s,tv,o]. %7
Current setting remains unchanged.
Next? [s,o,tv]. %8
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 1 0] %h
Next? [s,tv,o]. %9
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 1/0 0] %i
Next? [s,o,tv]. %10
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 0 1/0 1] %j
Next? [s,o,tv]. %11
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1/0 1 1] %k
Next? [s,tv,o]. %12
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1/0 1] %l
Next? generate.
Language generated with current setting:
[[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %13
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 0 1 1/0] %m
Next? generate.
Language generated with current setting:
[[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,o,tv,s]]
Next? [s,tv,o]. %14
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 0 1/0] %n
Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]
Next? [s,o,tv]. %15
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 1 1] %o
Next? [s,o,tv]. %16
Current setting remains unchanged.
Next? [s,tv,o]. %17
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 1/0 1] %p
Next? generate.
Language generated with current setting:
[[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %18
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 0 1 1/0] %q
Next? generate.
Language generated with current setting:
[[iv,s],[o,often,tv,s],[o,tv,s],[often,iv,s],[often,tv,s,o],[tv,s,o]]
Next? [s,tv,o]. %19
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 1 0 1/0] %r
Next? generate.
Language generated with current setting:
[[s,iv],[s,often,iv],[s,often,tv,o],[s,tv,o]]
Next? [s,o,tv]. %20
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1/0 1/0 0] %s
Next? generate.
Language generated with current setting:
[[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %21
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 1/0 0] %t
Next? generate.
Language generated with current setting:
[[iv,s],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]]
Next? [s,tv,o]. %22
Current setting remains unchanged.
Next? [s,o,tv]. %23
Current setting remains unchanged.
Next? [o,s,tv]. %24
Unable to parse [o,s,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 0 1/0 1/0] %u
Next? [s,o,tv]. %25
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1/0 1/0 1] %v
Next? [s,tv,o]. %26
Current setting remains unchanged.
Next? [s,o,tv]. %27
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[o,often,s,tv],[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %28
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 0 1/0] %w
Next? [s,tv,o]. %29
Current setting remains unchanged.
Next? [s,o,tv]. %30
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 1/0 1] %x
Next? [s,tv,o]. %31
Current setting remains unchanged.
Next? [s,o,tv]. %32
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[...,[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %33
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 1 1/0] %y
Next? [s,tv,o]. %34
Unable to parse [s,tv,o]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 1/0 1/0] %z
Next? [s,o,tv]. %35
Current setting remains unchanged.
Next? [s,o,tv]. %36
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[o,s,often,tv],[o,s,tv],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %37
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 1/0 0 1/0] %a1
Next? [s,tv,o]. %38
Current setting remains unchanged.
Next? [s,o,tv]. %39
Unable to parse [s,o,tv]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1/0 1/0 1/0] %b1
Next? [s,tv,o]. %40
Current setting remains unchanged.
Next? [s,o,tv]. %41
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[o,often,s,tv],[o,s,often,tv],[o,s,tv],[often,o,s,tv],[often,s,iv],[often,s,tv,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o]]
Next? [iv,s]. %42
Unable to parse [iv,s]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1/0 1/0 1/0] %c1
Next? [s,tv,o]. %43
Current setting remains unchanged.
Next? [s,o,tv]. %44
Current setting remains unchanged.
Next? [s,o,tv]. %45
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[iv,s],[o,often,tv,s],[o,s,often,tv],[o,s,tv],[o,tv,s],[often,iv,s],[often,o,tv,s],[often,tv,s,o],[s,iv],[s,o,tv],[s,often,iv],[s,often,o,tv],[s,often,tv,o],[s,tv,o],[tv,s,o]]
Next? [s,tv,o]. %46
Current setting remains unchanged.
Next? [tv,s,o].
Unable to parse [tv,s,o]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 1/0 1/0 1/0] %d1
Next? [s,tv,o]. %47
Current setting remains unchanged.
Next? [s,o,tv]. %48
Unable to parse [s,o,tv]
Resetting the parameters ...

yes
| ?-

D.6 Setting S(M), S(F) and HD Parameters of a Particular Language

| ?- sp.

The initial setting is [0 0 0 0 0 0 0 0 1 1 1-0]

Next? [s,iv,aux].
Unable to parse [s,iv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 0 0 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Current setting remains unchanged.
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 0 0 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 0 0 0 1 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 0 0 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 0 0 0 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 1 0 0 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 1 1 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1 0 0 1 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 0 0 1 1 1-0]
Next? [s,tv,o,aux].
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[s,iv,aux],[s,tv,o,aux]]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 0 0 1/0 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 1 0 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 1 0 1/0 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 0 1 1 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 1 0 1 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 1 0 1 1 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 1 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 1 0 1 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 1 1 1 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1 0 1 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1 0 1 1/0 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 0 1 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 0 1 1/0 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 1 1 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 1 1 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 ...]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 0 1/0 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 ...]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1/0 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[s,iv,aux],[s,o,tv,aux],[s,tv,o,aux]]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1/0 1 0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 0 1/0 0 ... 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 1 1/0 0 ... 1 1-0]
Next? [s,tv,o,aux].
Current setting remains unchanged.
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 1/0 1 0 ... 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1/0 0 0 ... 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1/0 0 0 ... 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 1 1/0 0 ... 1 1-0]
Next? [s,o,tv,aux].
Current setting remains unchanged.
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 1/0 1 0 ... 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1/0 0 0 ... 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1/0 0 0 ... 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1 1/0 0 ... 1/0 1-0]
Next? [s,o,tv,aux].
Current setting remains unchanged.
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 1 1 1/0 1 0 ... 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 1/0 0 ...]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1 1/0 0 1/0 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1/0 1 0 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 1 1 1/0 1 0 1/0 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 0 0 1/0 1 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 0 0 0 0 0 0 1/0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 0 0 0 1/0 1 1/0 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [1 1 0 0 0 0 0 1/0 1 1/0 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 1 1 0 0 1/0 1/0 1 1-0]
Next? [s,o,tv,aux].
Unable to parse [s,o,tv,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1 1/0 1 1 1-0]
Next? [s,tv,o,aux].
Unable to parse [s,tv,o,aux]
Resetting the parameters ...
Parameters reset to: [0 0 0 0 0 1 1/0 1 1 1/0 1-0]
Next? [s,o,tv,aux].
Current setting remains unchanged.
Next? [s,o,tv,aux].
Current setting remains unchanged.
Next? generate.
Language generated with current setting:
[[o,s,tv,aux],[s,iv,aux],[s,o,tv,aux],[s,tv,o,aux]]
Next? bye.

yes
| ?-
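The D.5 and D.6 sessions follow a fixed protocol: each input sentence is parsed under the current setting; when the parse succeeds the setting is kept, and when it fails the parameters are reset and the session continues. The fragment below reconstructs that loop in Prolog for illustration only. The predicate sp/2 is named after the sp query that opens these sessions, but its actual definition does not appear in this appendix; reset/2 here crudely jumps to any setting compatible with the offending input, whereas the thesis learner revises the current setting, and language_of/2 is the toy grammar from the sketch following D.4 above.

    % A sentence parses if the current setting generates it (a crude
    % stand-in for the parser; assumes language_of/2 from the D.4 sketch).
    parse(Setting, Sentence) :-
        language_of(Setting, Lang),
        memberchk(Sentence, Lang).

    % On failure, jump to some setting that can parse the input.
    % Fails if no setting in the space parses it.
    reset(Sentence, New) :-
        language_of(New, _),
        parse(New, Sentence),
        !.

    % Main loop, echoing the messages in the transcripts above.
    sp(Setting, []) :-
        format("Final setting: ~w~n", [Setting]).
    sp(Setting, [Sentence|Rest]) :-
        (   parse(Setting, Sentence)
        ->  format("Current setting remains unchanged.~n"),
            sp(Setting, Rest)
        ;   format("Unable to parse ~w~nResetting the parameters ...~n",
                   [Sentence]),
            reset(Sentence, New),
            format("Parameters reset to: ~w~n", [New]),
            sp(New, Rest)
        ).

For example, ?- sp([0,0], [[s,tv,o],[iv,s],[s,iv]]). triggers two resets in the toy space before settling, reproducing in miniature the unchanged/reset alternation seen with noisy input.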
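Throughout these transcripts a value of 1/0 marks a parameter that the input seen so far leaves undecided: either value is still compatible with the data. One possible bookkeeping for such entries is sketched below; the atom u and the predicate instantiate/2 are illustrative assumptions, not the encoding actually used by the thesis program.

    % A slot in a setting vector is 0, 1, or u ("either value works so
    % far", printed as 1/0 in the sessions).  instantiate/2 enumerates
    % the concrete settings that an ambiguous vector abbreviates.
    instantiate([], []).
    instantiate([u|Rest], [Value|Out]) :-
        member(Value, [0, 1]),      % an undecided slot expands both ways
        instantiate(Rest, Out).
    instantiate([Value|Rest], [Value|Out]) :-
        Value \== u,                % a fixed slot is copied unchanged
        instantiate(Rest, Out).

On backtracking, ?- instantiate([1,u,0,u], S). yields the four concrete settings abbreviated by [1 1/0 0 1/0], which is how a setting containing 1/0 entries can be read as a set of grammars rather than a single one.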
  • 399. 1 References Abney, S. and J. Cole (1985) A Government-Binding Parser. In Proceedings of the Sixteenth North East Linguistic Society Conference, University of Massechusetts, Amherst, MA. Abney, S. (1987) The English Noun Phrase in its Sentential Aspect. MIT doctoral dissertation, Cambridge, MA. Abney, S. (1989) A Computational Model of Human Parsing. Journal of Psy­ cholinguists Research, IS. 129-144. Aho, A.V., J.E. Hopcroft and J.D. Ullman (1974) The Design and Analysis of Computer Algorithms. Addison-Wesley, Menlo Park, CA. Angluin, D. (1978) On the Complexity of Minimum Inference of Regular Sets. Information and Control, 39: 337-350. Angluin, D. (1980) Inductive Inference of Formal Languages from Positive Data. Information and Control, 45: 117-135. Aoun, J., N. Hornstein and D. Sportiche (1981) Some Aspects of Wide-Scope Quantification. Journal of Linguistic Research, I: 69-95. Atkinson, M. (1992) Children’s Syntax. Blackwell, Cambridge, MA. Baker, C.L. (1979) Syntactic Theory and the Projection Problem. Linguistic Inquiry 10: 533-581. Baker, M.C. (1985a) The Mirror Principle and Morphosyntactic Explanation. Linguistic Inquiry, 16: 373-417. Baker, M.C. (1985b) Syntactic Affixation and English Gerunds. In Cobler et al. (eds.) Proceedings of the West Coast Conference of Formal Linguistics, Stanford University, Palo Alto. Baker, M.C. (1988) Incorporation: A Theory of Grammatical Function Changing. University of Chicago Press, Chicago. Belletti, A. (1990) Generalized Verb Movement. Rosenberg & Sellier, Turin. Berwick, R.C. (1985) The Acquisition of Syntactic Knowledge. The MIT Press, Cambridge, MA. Berwick, R.C. (1987) Parsability and Learnability. In B. MacWhinney (ed.) Mechanisms of Language Acquisition. Lawrence Erlbaum Associates, Hills­ dale, NJ. Berwick, R.C. (1991) Principles of Principle-Based Parsing. In R.C. Berwick et al. (eds.) Principle-Based Parsing: computation and psycholinguistics, Kluwer, Boston. 383
  • 400. Berwick, R.C. and A. Weinberg (1984) The Grammatical Basis o f Linguistic Per­ formance: Language Use and Acquisition. MIT Press, Cambridge, MA. Berwick, R.C. and S. Fong (1991) Madama Butterfly Redux: A Parsing Opera in Two Acts or Parsing English and Japanese with a Principles and Parameters Approach. To appear in Computational Linguistics. Bobaljik, J. (1992) Norminally Absolutive is Not Absolutely Nominative. Pre­ sented at the Eleventh West Coast Conference on Formal Linguistics, UCLA, Los Angeles. Bobaljik, J. and D. Jonas (1993) Subject Positions and the Role of TP. Presented at the Sixteenth GLOW Colloquium, Lund, Sweden. Borer, H. (1984) Parametric Syntax: case studies in Semitic and Romance lan­ guages. Foris, Dordrecht. Borer, H. (1992) The Ups and Downs of Hebrew Verb Movement. Umass manuscript. Bouchard, D. (1984) On the Content of Empty Categories. Foris, Dordrecht. Brody, M. (1990) Remarks on the Order of Elements in the Hungarian Focus Field. In I. Kenesei (ed.) Approaches to Hungarian, Vol. 2. JATE, Szeged. Brown, R. and C. Hanlon (1970) Derivational Complexity and the Order of Acqui­ sition of Child Speech. In J.R. Hayes (ed.) Cognition and the Development of Language, Wiley, New York, NY. Burzio, L. (1986) Italian Syntax. Reidel, Dordrecht. Campbell, R. 11993) The Occupants of Spec-DP. Presented at the Sixteenth GLOW Colloquium, Lund, Sweden. Carstens, V. (1991) The Morphology and Syntax of Determiner Phrases in Kiswahili. UCLA doctoral dissertation, Los Angeles. Carstens, V. (1993) On Grammatical Gender and NP Internal Subjects. Pre­ sented at the Sixteenth GLOW Workshop, Lund, Sweden. Carstens, V. (1993) Feature-Types, DP-Syntax and Subject Agreement. Pre­ sented at the Sixteenth GLOW Colloquium, Lund, Sweden. Cheng, L. (1991) On the Typology of Wh-Questions. MIT doctoral dissertation, Cambridge, MA. Cheng, L. (1993) Wh-Scope Markers and Partial Wh-Movement. Presented at the Colloquium of the UCLA Dept, of Linguistics. Chiu, B. (1993) The Inflectional Structure of Mandarin Chinese. UCLA doctoral dissertation, Los Angeles. Chomsky, N. (1955) The Logical Structure of Linguistic Theory. Plenum, New York (1975). Cambridge, MA. 384
  • 401. Chomsky, N. (1957) Syntactic Structures. The Hague: Mouton. Chomsky, N. (1965) Aspects of the Theory of Syntax. MIT Press, Cambridge, MA. Chomsky, N. (1980) On binding. Linguistic Inquiry, 11: 1-46 Chomsky, N. (1981a) Principles and Parameters in Syntactic Theory. In N. Horn- stein & D. Lightfoot (eas.) Explanation in Linguistics, Longman, London. Chomsky, N. (1981b) Lectures on Government and Binding. Foris, Dordrecht. Chomsky, N. (1982) Some Concepts and Consequences of the Theory of Govern­ ment and Binding. MIT Press, Cambridge, MA. Chomsky, N. (1986a) Knowledge of Language. Praeger, New York. Chomsky, N. (1986b) Barriers. MIT Press, Cambridge, MA. Chomsky, N. (1991) Some Notes on Economy of Derivation and Representation. In R. Freidin (ed.) Principles and Parameters in Comparative Grammar, MIT Press, Cambridge, MA. Chomsky, N. (1992) A Minimalist Program for Linguistic Theory. M IT Occa­ sional Papers in Linguistics, Number 1. Chomsky, N. and H. Lasnik (1977) Filters and Control. Linguistic Inquiry, 8, 425-504. Reprinted in Lasnik (1990). Chomsky, N. and H. Lasnik (1991) Principles and Parameters Theory. In J. Jacobs, A. von Stechow, W. Stemefeld and T. Vennemann (eds.) Syntax: An International Handbook of Contemporary Research, Walter de Gruyter, Berlin. Clahsen, H. (1991) Constraints on Parameter Setting: a grammatical analysis of some acquisitional stages in German child language. Language Acquisition 1(4): 361-391. Clark, R. (1988) On the Relationship between the Input Data and Parameter Setting. In Proceedings of the Nineteenth North East Linguistic Society Con­ ference, Cornell University, Ithaca, NY. Clark, R. (1990) Some Elements of a Proof for Language Learnability. Manuscript, University of Geneva. Clocksin, W.F. and C.S. Mellish (1984) Programming in Prolog, 2nd Edition. Springer-Verlag, New York. Cornell, T. (1992) Description Theory, Licensing Theory, and Principle-Based Grammars and Parsers. UCLA doctoral dissertation, Los Angeles. 385
  • 402. Emonds, J. (1976) A Transformational Approach to Syntax. Academic Press, New York. Emonds, J. (1978) The Verbal Complex V’-V in French. Linguistic Inquiry, ft 151-175 Emonds, J. (1980) Word Order in Generative Grammar. Journal of Linguistic Research, 1: 33-54. Emonds, J. (1985) A Unified Theory of Syntactic Categories. Foris, Dordrecht, The Netherlands. Fodor, J,D. (1990) Phrase Structure Parameters. Linguistics and Philosophy, IS: 619-659. Fodor, J.D. (1991) Learnability of Phrase Structure Grammar. To appear in R. Levine (ed.) Formal Grammar: Theory and Implementation, Vancouver Studies in Cognitive Science, University of British Columbia Press. Fong, S. (1991) Computational Properties of Principle-Based Grammatical The­ ories. MIT doctoral dissertation, Cambridge, MA. Forster, P. (19891 Identification of Zero-Reversible Languages. MA thesis, The University of Western Ontario. Frank, R. (1990) Computation and Linguistic Theory: A Government Binding Theory Parser Using Tree Adjoining Grammar. MA thesis, University of Pennsylvania. Frank, R. and S. Kapur (1993) On the Use of Triggers in Parameter Setting. Upenn manuscript, Philadelphia. Frank, R. and A. Kroch (1993) Generalized Transformation in Successive Cyclic and Long Dependencies. Presented at the Sixteenth GLOW Colloquium, Lund, Sweden. Frazier, L. and K. Rayner (1988) Parameterizing the Language Processing Sys­ tem: left- vs. right-branching within and across languages. In J. Hawkins (ed.) Explaining Language Universals, Basil Blackwell, New York. Fukuda, M. (1993) Head Government and Case Marker Drop in Japanese. In Linguistic Inquiry, 24' 168-172. Fukui, N. (1993) Parameters and Optionality. Linguistic Inquiry, 24: 399-420. Gibson, E. (1991) A Computational Theory of Human Linguistic Processing: Memory Limitations and Processing Breakdown. Carnegie Mellon Univer­ sity doctoral doctoral dissertation, Pittsburgh, PA. Gibson, E. and K. Wexler (1993) Triggers. To appear in Linguistic Inquiry. Gold, E. M. (1967) Language Identification in the Limit. Information and Con­ trol, 10, 447-474. 386
  • 403. Gorrell, P. (1993) Structural Relations in the Grammar and the Parser. Manuscript, University of Maryland. Greenberg, J. (ed.) (1966) Universal* of Language. The MIT Press, Cambridge, MA. Grimshaw, J. (1991) Extended Projection. Manuscript, Brandies University. Guilfoyle, E. and M. Noonan (1988) Functional Categories and Language Acqui­ sition. Presented at the Boston University Conference on Language Acqui­ sition, Boston. Haegeman, L. (1991) Introduction to Government and Binding Theory. Basil Blackwell, Oxford, UK. Haider, H. and M. Prinzhorn (eds.) (1986) Verb Second Phenomena in Germanic Languages. Foris, Dordrecht, The Netherlands. Hamburger, H. and K. Wexler (1975) A Mathematical Theory of Learning Trans­ formational Grammar. Journal of Mathematical Psychology, IS: 137-177. Hoekstra, T. (1984) Transitity. Foris, Dordrecht, The Netherlands. Hoji, H .(1985) Logical Form Constraints and Configurational Structures in Japanese. University of Washington doctoral dissertation, Seattle, WA. Horvath, J. (1986) Focus In the Theory of Grammar and the Syntax of Hungarian. Foris, Dordrecht, The Netherlands. Horvath, J. (1992) The Syntax of Focus and Parameters of Feature-Assignment. Presented at the Colloquium of the UCLA Dept, of Linguistics. Huang, J. (1982) Logical Relations in Chinese and the Theory o f Grammar. MIT doctoral dissertation, Cambridge, MA. Hyams, N. (1986) Language Acquisition and the Theory o f Parameters. Reidel, Dordrecht, The Netherlands. Hyairis, N. (1987) The Theory of Parameters and Syntactic Development. In T. Roeper and E. Williams (eds.) Parameter Setting, Reidel, Dordrecht, The Netherlands. Hyams, N. (1991) V2, Null Arguments and C Projections. To appear in T. Hoestra and B. Schwartz (eas.) Language Acquisition Studies in Generative Grammar. Jackendoff R. (1977) X-bar Syntax. MIT Press, Cambridge, MA. Jaeggli, 0 . (1980) On Some Phonologically-Null Elements in Syntax. MIT doc­ toral dissertation, Cambridge, MA. Johnson, K. (1991) Object Positions. Natural Language and Linguistic Theory, 9: 577-636. 387
  • 404. Johnson, M. (1990a) Features, Frames and Quantifier-Free Formulae. In P. Saint- Dizter and V. Dahl (eds.) Logic and Logic Grammars for Language Process- ing, Ellis Horwood, New York. Johnson, M. (1990b) Expressing Disjunctive and Negative Feature Constraints with Classical First-Order Logic. In Proceedings of the £8th Annual Meeting of the Association for Computational Linguistics. Kayne, R. (1984) Connectedness and Binary Branching. Foris, Dordrecht, The Netherlands. Kayne, R. (1992) Talks given at the Fifteenth GLOW Colloquium and UCLA. Kayne, R. (1993) The Antisymmetry of Syntax. CUNY manuscript, New York. Kim, J. (1993) Null Subjects: comments on Valian (1990). Cognition, 46: 183- 193. Kitagawa, Y. (1986) Subjects in Japanese and English. University of Massechusetts doctoral dissertation, Amherst, MA. koopman, H. (1984) The Syntax of Verbs. Foris, Dordrecht, The Netherlands. Koopman, H. (1987) On the Absence of Case Chains in Bambara. UCLA manuscript. Koopman, H. (1992) Licensing Heads. To appear in N. Hornstein and D. Lightfoot (eds.) Verb Movement. Koopman, H. and D. Sportiche (1985) Theta Theory and Extraction. Abstract in GLOW newsletter. Koopman, H. and D. Sportiche (1988) Subjects. UCLA manuscript. Koopman, H. and D. Sportiche (1990) The Position of Subjects. UCLA manuscript. To appear in Lingua. Kuno, S. (1978) Japanese: A Characteristic OV Language. In W. Lehmann, (ed.) Syntactic Typology: studies in the phenomenology of language. University of Texas Press, Austin. Kuroda, S.-Y. (1988) Whether We Agree or Not: a Comparative Syntax of En­ glish and Japanese. In W. Poser (ed.) Papers on the Second International Workshop on Japanese Syntax, CSLI, Stanford University. Laka, I. (1990) Negation in Syntax: on the nature of functional categories and projections. MIT doctoral dissertation, Cambridge, MA. Laka. I. (1992) Ergative for Unergatives? Presented at the Colloquium of the UCLA Dept, of Linguistics. Langley, P. and J. Carbonell (1987) Language Acquisition and Machine Learning. In B. MacWhinney (ed.) Mechanisms of Language Acquisition, Lawrcuce Erlbaum Associates, Hillsdale, NJ. 388
Larson, R. (1988) On the Double Object Construction. Linguistic Inquiry, 19: 335-391.
Lasnik, H. (1981) Restricting the Theory of Transformations: a Case Study. In N. Hornstein and D. Lightfoot (eds.) Explanation in Linguistics, Longman, London. Reprinted in Lasnik 1990.
Lasnik, H. (1989) On Certain Substitutes for Negative Data. In R. Matthews and W. Demopoulos (eds.) Learnability and Linguistic Theory, Kluwer, Boston.
Lasnik, H. (1990) Essays on Restrictiveness and Learnability. Reidel, Dordrecht.
Lasnik, H. and J. Uriagereka (1988) A Course in GB Syntax. The MIT Press, Cambridge, MA.
Lasnik, H. and M. Saito (1989) Move α. The MIT Press, Cambridge, MA.
Lightfoot, D. (1989) The Child's Trigger Experience: Degree-0 Learnability. Behavioral and Brain Sciences, 12: 321-334.
Lightfoot, D. (1991) How to Set Parameters: Arguments from Language Change. MIT Press, Cambridge, MA.
Mahajan, A. (1990) The A/A-Bar Distinction and Movement Theory. MIT doctoral dissertation, Cambridge, MA.
Mallinson, G. and B. Blake (1981) Language Typology. North-Holland Publishing Company, Amsterdam, The Netherlands.
Manzini, M.R. and K. Wexler (1987) Parameters, Binding Theory and Learnability. Linguistic Inquiry, 18: 413-444.
Marcus, G. (1993) Negative Evidence in Language Acquisition. Cognition.
May, R. (1985) Logical Form. MIT Press, Cambridge, MA.
McDaniel, D. (1989) Partial and Multiple Wh-Movement. Natural Language and Linguistic Theory, 7: 565-604.
Mitchell (1993) The Nature and Location of Agreement Within DP. Presented at the Sixteenth GLOW, Lund, Sweden.
Morgan, J. (1986) From Simple Input to Complex Grammar. MIT Press, Cambridge, MA.
Nyberg, E. (1987) Parsing and the Acquisition of Word Order. Proceedings of the Fourth Eastern States Conference on Linguistics, Ohio State University, Columbus, OH.
Nyberg, E. (1990) A Limited Non-Deterministic Parameter-Setting Model. Proceedings of the Twenty-First North East Linguistic Society Conference, McGill University, Montreal, Quebec.
Osherson, D., M. Stob and S. Weinstein (1984) Learning Theory and Natural Language. Cognition, 17: 1-28.
Osherson, D., M. Stob and S. Weinstein (1986) Systems that Learn. MIT Press, Cambridge, MA.
Ouhalla, J. (1991) Functional Categories and Parametric Variation. Routledge, New York and London.
Pesetsky, D. (1989) Language-Particular Processes and the Earliness Principle. MIT manuscript.
Pesetsky, D. (1993) Cascade Syntax and Layered Syntax. Presented at the Sixteenth GLOW, Lund, Sweden.
Pereira, F. and S. Shieber (1987) Prolog and Natural-Language Analysis. University of Chicago Press, Chicago.
Pinker, S. (1979) Formal Models of Language Learning. Cognition, 7: 217-283.
Pinker, S. (1984) Language Learnability and Language Development. Harvard University Press, Cambridge, MA.
Poeppel, D. and K. Wexler (1993) The Full Competence Hypothesis of Clause Structure in Early German. Language, 69: 1-33.
Pollock, J.-Y. (1989) Verb Movement, UG and the Structure of IP. Linguistic Inquiry, 20: 365-424.
Pritchett, B. (1991) Head Position and Parsing Ambiguity. Journal of Psycholinguistic Research, 20: 251-270.
Pullum, G. (1983) How Many Possible Human Languages Are There? Linguistic Inquiry, 14: 447-467.
Radford, A. (1990) Syntactic Theory and the Acquisition of English Syntax: the nature of early child grammars of English. Basil Blackwell, Cambridge, MA.
Randall, J. (1987) Indirect Positive Evidence: overturning overgeneralizations in language acquisition. Reproduced by the Indiana University Linguistics Club.
Randall, J. (1990) Catapults and Pendulums: the mechanics of language acquisition. Linguistics, 28: 1381-1406.
Randall, J. (1992) The Catapult Hypothesis: grammars as machines for unlearning. In J. Weissenborn, H. Goodluck and T. Roeper (eds.) Theoretical Issues in Language Acquisition: continuity and change in development, Lawrence Erlbaum Associates, Hillsdale, NJ.
Ritter, E. (1988) A Head-Movement Approach to Construct-State Noun Phrases. Linguistics, 26.
Ritter, E. (1990) Two Functional Categories in Noun Phrases: evidence from Modern Hebrew. UQAM manuscript.
Rizzi, L. (1982) Issues in Italian Syntax. Foris, Dordrecht, The Netherlands.
Rizzi, L. (1990) Relativized Minimality. MIT Press, Cambridge, MA.
Roberts, I. (1991) Excorporation and Minimality. Linguistic Inquiry, 22: 209-218.
Roberts, I. (1992) Two Types of Head Movement in Romance. To appear in N. Hornstein and D. Lightfoot (eds.) Verb Movement.
Sadiqi, F. (1989) Studies in Berber Syntax: the complex sentence. Königshausen & Neumann, Würzburg, Germany.
Safir, K. (1985) Syntactic Chains. Cambridge University Press, Cambridge, UK.
Saito, M. (1985) Some Asymmetries in Japanese and Their Theoretical Consequences. MIT doctoral dissertation, Cambridge, MA.
Saito, M. and H. Hoji (1983) Weak Crossover and Move Alpha in Japanese. Natural Language and Linguistic Theory.
Schachter, P. (1976) The Subject in Philippine Languages: Topic, Actor, Actor-Topic, or None of the Above. In C.N. Li (ed.) Subject and Topic, Academic Press, New York.
Shapiro, E.Y. (1983) Algorithmic Program Debugging. MIT Press, Cambridge, MA.
Sportiche, D. (1988) Conditions on Silent Categories. UCLA manuscript.
Sportiche, D. (1990) Movement, Agreement and Case. UCLA manuscript.
Sportiche, D. (1992) Clitic Constructions. UCLA manuscript.
Sproat, R. (1985) Welsh Syntax and VSO Structure. Natural Language and Linguistic Theory, 3: 173-216.
Stabler, E.P., Jr. (1987) Restricting Logic Grammars with Government-Binding Theory. Computational Linguistics, 13(1-2): 1-10.
Stabler, E.P., Jr. (1988a) Parsing with Explicit Representations of Syntactic Constraints. In V. Dahl and P. Saint-Dizier (eds.) Natural Language Understanding and Logic Programming, II, North-Holland, New York.
Stabler, E.P., Jr. (1988b) Implementing Government Binding Theory. To appear in Levine and S. Davis (eds.) Formal Linguistics: Theory and Implementation.
Stabler, E.P., Jr. (1989a) Avoid the Pedestrian's Paradox. In MIT Parsing Volume 1988-1989, edited by C. Tenny, MIT Center for Cognitive Science. Revised version in R.C. Berwick, S. Abney and C. Tenny (eds.) (1991) Principle-Based Parsing: Computation and Psycholinguistics, Kluwer, Boston.
Stabler, E.P., Jr. (1989b) What's a Trigger? Behavioral and Brain Sciences, 12: 358-360.
Stabler, E.P., Jr. (1990) Relaxation Techniques for Principle-Based Parsing. UCLA Center for Cognitive Science Technical Report 90-1. Revised version to appear in E. Wehrli (ed.) Proceedings of the Geneva Workshop on GB Parsing.
Stabler, E.P., Jr. (1992) The Logical Approach to Syntax: foundations, specifications and implementations of theories of government and binding. MIT Press, Cambridge, MA.
Sterling, L. and E.Y. Shapiro (1986) The Art of Prolog: Advanced Programming Techniques. MIT Press, Cambridge, MA.
Schaufele, S. (1991) Richness of Subject-Agreement Marking and V-AGR Merger: the Verdict of Vedic. Presented at the 20th Annual Conference on South Asia, Madison, WI, November.
Stowell, T. (1981) Origins of Phrase Structure. MIT doctoral dissertation, Cambridge, MA.
Stowell, T. (1989) Subjects, Specifiers, and X-bar Theory. In M.R. Baltin and A.S. Kroch (eds.) Alternative Conceptions of Phrase Structure, University of Chicago Press, Chicago.
Szabolcsi, A. (1987) Functional Categories in the Noun Phrase. In I. Kenesei (ed.) Approaches to Hungarian, Volume Two: Theories and Analyses, Szeged.
Szabolcsi, A. (1989) Noun Phrases and Clauses: Is DP Analogous to IP or CP? To appear in J. Payne (ed.) Proceedings of the Colloquium on Noun Phrase Structure.
Teng, S.-H. (1973) Negation and Aspect in Chinese. Journal of Chinese Linguistics, 1: 14-37.
Thiersch, C. (1978) Topics in Germanic Syntax. MIT doctoral dissertation, Cambridge, MA.
Toman, J. (1981) Aspects of Multiple Wh-Movement in Polish and Czech. In R. May and J. Koster (eds.) Levels of Syntactic Representation, Foris, Dordrecht, The Netherlands.
Travis, L. (1984) Parameters and Effects of Word Order Variation. MIT doctoral dissertation, Cambridge, MA.
Valian, V. (1990) Null Subjects: a problem for parameter-setting models of language acquisition. Cognition, 35: 105-122.
Valian, V. (1993) Parser Failure and Grammar Change. Cognition, 46: 195-202.
Valois, D. (1991) The Internal Syntax of DP. UCLA doctoral dissertation, Los Angeles.
Veenstra, M. (1993) An Implementation of the Minimalist Program. Manuscript, University of Groningen.
Vergnaud, J.-R. (1982) Dépendances et niveaux de représentation en syntaxe. Thèse de Doctorat d'État, Université de Paris VII, Paris.
Watanabe, A. (1991) Wh-in-situ, Subjacency, and Chain Formation. MIT manuscript, Cambridge, MA.
Webelhuth, G. (1989) Syntactic Saturation Phenomena and the Modern Germanic Languages. University of Massachusetts dissertation, Amherst, MA.
Weissenborn, J. (1990) Functional Categories and Verb Movement: The Acquisition of German Syntax Reconsidered. In M. Rothweiler (ed.) Spracherwerb und Grammatik, Linguistische Berichte, Sonderheft 3.
Weissenborn, J., H. Goodluck and T. Roeper (eds.) (1992) Theoretical Issues in Language Acquisition: continuity and change in development. Lawrence Erlbaum Associates, Hillsdale, NJ.
Wexler, K. (1991) The Subset Principle is an Intensional Principle. To appear in E. Reuland and W. Abraham (eds.) Knowledge and Language: Issues in Representation and Acquisition, Kluwer.
Wexler, K. (1991) Optional Infinitives, Head Movement, and the Economy of Derivation. Presented at the Verb Movement Conference at the University of Maryland, College Park, MD.
Wexler, K. and P. Culicover (1980) Formal Principles of Language Acquisition. MIT Press, Cambridge, MA.
Wexler, K. and H. Hamburger (1973) On the Insufficiency of Surface Data for the Learning of Transformational Languages. In K.J. Hintikka, J.M.E. Moravcsik and P. Suppes (eds.) Approaches to Natural Language, Reidel, Dordrecht, The Netherlands.
Wexler, K. and M. Manzini (1987) Parameters and Learnability in Binding Theory. In T. Roeper and E. Williams (eds.) Parameter Setting, Reidel, Dordrecht, The Netherlands.
Williams, E. (1981) Language Acquisition, Markedness and Phrase Structure. In S. Tavakolian (ed.) Language Acquisition and Linguistic Theory, MIT Press, Cambridge, MA.
Wu, A. (1992) Acquiring Word Order Without Word-Order Parameters? In UCLA Working Papers in Psycholinguistics.
Wu, A. (1993a) A Minimalist Universal Parser. In UCLA Occasional Papers in Linguistics, 11.
Wu, A. (1993b) The P-Parameter and the Acquisition of Word Order. Presented at the Sixty-Seventh Annual Meeting of the Linguistic Society of America.
Wu, A. (1993c) Parsing DS, SS and LF Simultaneously. Presented at the Sixth CUNY Conference on Human Sentence Processing.
Wu, A. (1993d) The S-Parameter. Presented at the Sixteenth GLOW Colloquium.