Frampton & Gutmann (2002) argue that a language design that allows “crashing derivations” would seem to be less computationally efficient than a design which outputs only convergent derivations; they therefore advocate a “crash-proof” syntax, which requires constraining all the computational operations. This paper draws a distinction between a fatal (strict) crash and a non-fatal (soft) crash. I argue that in a model with Feature Inheritance (Chomsky 2000, 2001, 2004), a mechanism that supersedes Agree, seemingly non-convergent derivations can be salvaged as long as every mechanism available in the grammar is exhausted. Drawing on data from Tamazight Berber, I argue that the three logical possibilities of Feature Inheritance, namely DONATE, KEEP, and SHARE, proposed in Ouali (2006, 2008), whose application is ranked so that KEEP applies only if DONATE fails and SHARE applies only if KEEP fails, can, despite requiring seemingly different derivations, be accounted for within a less strict crash-proof syntax.
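The ranked application of the three Feature Inheritance options amounts to an ordered fallback: each option is tried in turn, and a later option applies only if the earlier ones fail. The sketch below is purely illustrative; the function name, the `derivation` argument, and the `converges` predicate are hypothetical placeholders, not part of Ouali's formalism.

```python
# Illustrative sketch of the ranked Feature Inheritance options:
# try DONATE first; if the derivation would not converge, try KEEP;
# if that also fails, fall back to SHARE. The convergence test is a
# hypothetical placeholder, not an actual syntactic computation.

def apply_feature_inheritance(derivation, converges):
    """Return the first option (DONATE > KEEP > SHARE) under which
    the derivation converges, or None if all three fail."""
    for option in ("DONATE", "KEEP", "SHARE"):
        if converges(derivation, option):
            return option
    return None  # a fatal crash: every available mechanism is exhausted

# Example: a derivation that converges only under SHARE.
outcome = apply_feature_inheritance(
    {"clause": "example"},
    lambda d, opt: opt == "SHARE",
)
print(outcome)  # SHARE
```

On this picture, a derivation counts as a fatal crash only when all three ranked options have been exhausted; failure of DONATE or KEEP alone is a non-fatal (soft) crash.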
I argue that grammatical gender is semantically empty but intrinsically valued, so the strict linkage between uninterpretable and unvalued in Chomsky (2001) cannot be correct. I then demonstrate that gender is infinitely reusable as an “activity” feature; in contrast, abstract Case activates a DP for just one Agree relation. This asymmetry suggests that valuation via Agree causes goal deactivation, and that deactivation is not necessary for every uninterpretable feature (uF). I accordingly analyze deactivation as arising from PF illegibility of multiple values for a single feature. Agree relations value Case, but never value nominal gender, so the legibility problem does not arise. I demonstrate that in Bantu, adjunction of N to D makes gender accessible to all probes outside DP. This and the reusability of gender as an activity feature lead to a cluster of systematic contrasts between Bantu and Indo-European languages: Bantu DPs A-move much more freely than Indo-European DPs, and value iterating subject agreement. The facts thus demonstrate that the internal syntax of DP impacts its feature matrix; it is not the case that a DP automatically inherits all φ-features of its subparts, as syntactic theory generally assumes. Finally, I illustrate that Bantu C and T can agree with different expressions, casting doubt on the Feature Inheritance approach to uF in Chomsky (2007, 2008) and Richards (2007). The facts of grammatical gender argue that valued uFs Transfer to the Conceptual-Intentional interface without inducing crashes.
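The proposed asymmetry between one-shot Case activation and reusable gender can be pictured as a toy model; the class, method names, and deactivation logic below are hypothetical illustrations, not the paper's formalism.

```python
# Toy model of the proposed asymmetry: an Agree relation VALUES Case,
# and a second valuation would yield multiple values for one feature,
# which is PF-illegible (modeled here as an error). Gender is
# intrinsically valued and only ever read, never valued, so it can
# keep the DP active for probe after probe.

class DP:
    def __init__(self, gender):
        self.gender = gender   # intrinsically valued, never overwritten
        self.case = None       # unvalued until an Agree relation applies

    def agree_case(self, value):
        """Value Case once; revaluation is PF-illegible."""
        if self.case is not None:
            raise RuntimeError("PF illegibility: Case already valued")
        self.case = value

    def agree_gender(self):
        """Gender is only read, never valued, so Agree can reuse it."""
        return self.gender

dp = DP(gender="class 7")
dp.agree_case("NOM")        # one-shot: deactivates the DP for Case
first = dp.agree_gender()   # gender remains accessible...
second = dp.agree_gender()  # ...for any number of further probes
```

The contrast in the two methods mirrors the abstract's claim: deactivation follows from valuation, so a feature that is never valued by Agree never deactivates its bearer.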
Argument drop is commonly subject to the Empty Left Edge Condition, ELEC, requiring that the left edge of the clause not be spelled out. ELEC can be explained in terms of minimality, as an intervention effect (blocking context-linking of the null-argument). We argue that sensitivity to this effect is the most important ‘pro drop parametric’ factor and that there are no inherent or lexical differences between ‘different types’ of null-arguments. However, we also present striking evidence from Icelandic that emptiness conditions of this sort are operative in PF, a conclusion that suggests that much of ‘syntax’ in the traditional sense is actually morphosyntax or ‘PF syntax’, invisible to the semantic interface. If so, derivational crashes may occur (in the PF derivation), even though narrow syntax itself is crash-proof.
It is argued that the notions “well-formedness” and “grammaticality,” inspired by formal-language theory, are not necessarily relevant for the study of natural language. The assumption that a [± grammatical] distinction exists, i.e. that I-language generates only certain structures but not others, is empirically questionable and presumably requires a richly structured UG. Some aspects of “crash-proof” models of syntax that assume such a distinction are discussed and contrasted with an alternative proposal (the Minimalist Program as pursued by Chomsky), which dispenses entirely with grammaticality, allowing syntax to generate freely. The latter program aims not at distinguishing “grammatical” from “ungrammatical” sentences, but at providing a true theory of the mechanisms that assign interpretations to structures at the interfaces.
This paper consists of four sections. Section 1 identifies an important unclarity regarding the central concept “crash” and suggests a way to rectify it. Section 2 reveals a pervasive empirical problem confronting Chomsky’s (2007, 2008) attractively deductive valuation-transfer analysis. Section 3 offers a possible solution to this problem, reanalyzing the relation between uninterpretable features and Transfer. Section 4 presents a possible modification of a crash-proof aspect of the proposed model and briefly discusses a remaining question.
Survive-minimalism, as developed in Stroik (1999, 2009) and Putnam (2007), argues for a “crash-proof” syntax that is divested of all derivation-to-derivation and derivation-to-interface operations, such as Internal Merge and Transfer. In this paper, we extend our investigations into Minimalist syntax by showing how it is possible to derive crash-proof syntactic relations using the External Merge operation only. Central to our analysis is the active role that the Numeration plays in building derivations. We demonstrate here that our approach to syntactic relations is in many respects conceptually superior to other Minimalist alternatives, mainly on the grounds that our analysis offers a conceptually grounded explication of how a derivation begins, proceeds and (successfully) terminates without relying on theory-internal stipulations or labels. Contra Boeckx (this volume) and Ott (this volume), we conclude that an optimal design of the CHL is indeed ‘crash-proof’ after all.
Starting from several undesirable consequences that Merge gives rise to in the mainstream minimalist approach to phrase structure, this paper develops a strongly derivational model that dispenses with the narrow-syntactic Merge operation. Representations and recursion are argued to be properties of the interface components only, and to be absent from narrow syntax. Transfer, implementing feature checking in a local fashion and instructing interface computations, is defined as an iterative operation mapping Lexical Items to the interface components directly. In the absence of Merge, narrow-syntactic overgeneration is eliminated in toto, since no narrow-syntactic representations are created and filtering of Transfer operations by the interface modules is immediate. It is argued that of the twin (overlapping) objectives of making syntax crash-proof and restricting syntactic overgeneration, only the latter is of relevance to the architecture of grammar.
This paper looks at how Crash-Proof Syntax (CPS; Frampton & Gutmann 1999, 2002), a particular computational mechanism instantiating the Minimalist Program (Chomsky 1995), can be understood from the point of view of the foundations of mathematics that captured the spotlight among mathematicians during the nineteenth century. I claim that CPS can be analyzed by analogy with the classical Peano axioms, which generate the theory of the natural numbers. Rather than by its computational efficiency, CPS is driven by the economization of the axioms of formal systems. Further comparisons between syntax and the natural numbers reveal that the central tenets of CPS can be defined mathematically on the one hand, and highlight the significance of the ‘third factor’ as a design feature of language (Chomsky 2005) on the other.
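For concreteness, the classical Peano axioms, with zero 0 and successor function S, can be stated as follows (the final axiom is the induction principle, given here in its second-order form):

```latex
\begin{align*}
&\text{(P1)} \quad 0 \in \mathbb{N} \\
&\text{(P2)} \quad \forall n \in \mathbb{N}:\ S(n) \in \mathbb{N} \\
&\text{(P3)} \quad \forall n \in \mathbb{N}:\ S(n) \neq 0 \\
&\text{(P4)} \quad \forall m, n \in \mathbb{N}:\ S(m) = S(n) \rightarrow m = n \\
&\text{(P5)} \quad \bigl(0 \in X \wedge \forall n\,(n \in X \rightarrow S(n) \in X)\bigr) \rightarrow \mathbb{N} \subseteq X
\end{align*}
```

On the analogy the abstract draws, every object obtainable from 0 by repeated application of S is a well-formed natural number; a crash-proof syntax likewise aims for every object its operations build to be a well-formed derivation.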
This article argues that even if it turns out to be possible to develop a crash-proof syntax that only generates well-formed objects satisfying the interface conditions, filters on the output of the computational system will remain an essential ingredient of the theory of syntax. This does not necessarily imply, however, that the more general and modest aim of the crash-proof syntax project, namely to limit the output of the derivational system to “objects that are well-formed and satisfy conditions imposed by the interface systems,” should be dismissed as irrelevant.
The problem of obtaining a ‘crash-proof syntax’ has proved a difficult one for the Minimalist Program (Chomsky, 1995). This paper argues that this difficulty stems from the intrinsic enumerative-generative nature of the framework, since model-theoretic frameworks of grammar are crash-proof by definition (Pullum & Scholz, 2001). The latter do not describe, define, or produce derivations, or any kind of linguistic structure for that matter. The production of linguistic structures is left to the performance modules (i.e. comprehension and production), which consult the competence grammar module in order to determine which structures are possible. On the other hand, it is clear that the construction of syntactic structure performed by performance modules can – and often does – go awry during production and comprehension. A proper general theory of language should account for such empirically motivated performance ‘crashes’. Because they lack the notion of derivation, model-theoretic frameworks are better suited to be integrated with theories of how linguistic structure is actually built in production and comprehension. It is unclear what psychological correlate, if any, there is to derivations and crashes in a Minimalist setting. It has been known since Fodor et al. (1974) that a derivational theory of complexity has no psycholinguistic grounding. Model-theoretic frameworks do not have this problem precisely because they are process-neutral.