


Motivations for reversible grammars can be divided into linguistic, language-technological, and psychological motivations. Each kind is discussed in turn.

Linguistic motivation

If we assume a reversible grammar, then we make two claims. The first claim is that a language should be described by a single grammar, rather than by one grammar for understanding and another for production. The second claim is that this single grammar, moreover, can be used effectively both for parsing and for generation.

The first claim can be motivated linguistically as follows. The primary goal of (theoretical) linguistics is to characterize languages; how such languages are used by humans (or computers) is a different question. Given a language such as English, the primary goal of linguistics is thus to define the possible English utterances and their corresponding meanings. A single language should therefore be described by a single grammar.

The second claim, that this single grammar should moreover be (effectively) reversible, can be motivated as follows. Given that the goal of linguistics is to define the relationship between utterances and meanings, it seems that, to check a possible theory, we should be able to find out the predictions such a theory makes. That is, for a given utterance it should be possible to `know' what the possible meanings are, according to the grammar (and vice versa). Thus, for each grammar, we want to be able to compute the corresponding meaning representations for a given utterance, and to compute the corresponding utterances for a given meaning representation.
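These two directions can be sketched with a minimal toy fragment (entirely hypothetical, not a grammar from this work): a single declarative rule set, here a lexicon plus one combination rule, is queried in both directions, once from utterance to meaning and once from meaning to utterance.

```python
# Hypothetical toy fragment: one declarative grammar (a lexicon plus the
# rule S -> NP VP, with S' = VP'(NP')), used both for parsing and for
# generation by two different search procedures over the same rules.

LEXICON = {
    # word: (category, meaning constant)
    "john":   ("NP", "john'"),
    "mary":   ("NP", "mary'"),
    "sleeps": ("VP", "sleep'"),
    "snores": ("VP", "snore'"),
}

def parse(words):
    """Utterance -> all meanings the grammar assigns to it."""
    if len(words) != 2:
        return []
    subj, pred = LEXICON.get(words[0]), LEXICON.get(words[1])
    if subj and pred and subj[0] == "NP" and pred[0] == "VP":
        return [(pred[1], subj[1])]   # apply the S -> NP VP rule
    return []

def generate(meaning):
    """Meaning -> all utterances the same grammar relates to it."""
    pred, arg = meaning
    return [f"{w1} {w2}"
            for w1, (c1, m1) in LEXICON.items() if c1 == "NP" and m1 == arg
            for w2, (c2, m2) in LEXICON.items() if c2 == "VP" and m2 == pred]
```

Both functions consult only the declarative LEXICON; neither direction has private knowledge, so the predictions of the grammar can be inspected from either side.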

Language-technological motivations

In order to build practical NLP systems, the use of reversible grammars can be motivated both on methodological grounds (as a means to obtain `better' systems) and on practical grounds (as a means to obtain systems in a more efficient way).


An important motivation for reversible grammars in NLP is of a methodological nature. If we are to use grammars both for parsing and generation we are forced to write grammars declaratively. This in turn implies that a more abstract analysis of the linguistic facts is necessary in the general case. If we are to write a declarative grammar which is used only for, say, parsing, it is quite easy to `cheat' and `adapt' the grammar to the parsing algorithm that is being used. In a reversible grammar this is much harder because at any moment there are two algorithms for which the grammar must be applicable.

The claim is that the use of reversible grammars will eventually lead to better grammars. For example, a grammar that is written for parsing will typically over-generate quite a lot; i.e., it will assign logical forms to sentences that are in fact ungrammatical. Not only is such a state of affairs undesirable if we are interested in describing the relation between form and meaning; it can also be argued that over-generation is a problem for parsing itself, even if we are only interested in parsing well-formed utterances, because over-generation typically leads to `false ambiguities'.

An example may clarify this point (this example was actually encountered in the development of a working system). Consider a grammar for English that is intended to handle auxiliaries. Suppose that the English auxiliaries are analyzed as verbs that take an obligatory VP-complement. Moreover, each auxiliary may restrict the vform (participle, infinitive) of this complement. This allows the analysis of sentences such as

Graham will have been traveling with his aunt
However, the possible order of English auxiliaries (e.g., `have' should precede `be') is not accounted for, and the analysis sketched above will, for example, allow sentences such as

*Graham will be having traveled with his aunt
In the case of a reversible grammar such constructions should clearly not be generated, hence the analysis will be changed accordingly. However, even if the grammar is only used for parsing, this analysis runs into problems because it will assign two meanings to the sentence:

Graham is having grilled meat
The meanings that are assigned roughly correspond to the sentences:

1. Graham ordered grilled meat
2. Graham has been grilling the meat

where only the first reading is acceptable. Thus, over-generation is not acceptable, even for grammars which are used only for parsing, because over-generation typically implies that `false ambiguities' are produced.
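The auxiliary analysis described above can be sketched as follows (a hypothetical encoding, not the actual system's): each auxiliary records its own verb form and the verb form it requires of its VP complement, and nothing constrains the relative order of the auxiliaries themselves.

```python
# Hypothetical encoding of the auxiliary-as-VP-complement analysis:
# every auxiliary carries (its own vform, the vform it requires of
# its complement).  No rule orders HAVE before BE.

AUX = {
    # word: (own vform, vform required of the complement)
    "will":   ("finite", "base"),
    "have":   ("base",   "psp"),   # perfect HAVE selects a past participle
    "having": ("prp",    "psp"),
    "be":     ("base",   "prp"),   # progressive BE selects a present participle
    "been":   ("psp",    "prp"),
}
MAIN = {"traveling": "prp", "traveled": "psp"}   # main-verb forms

def accepts(words, expected="finite"):
    """Recognize an auxiliary chain ending in a main verb."""
    head, rest = words[0], words[1:]
    if not rest:
        return head in MAIN and MAIN[head] == expected
    if head in AUX and AUX[head][0] == expected:
        return accepts(rest, AUX[head][1])   # check the complement's chain
    return False
```

Here accepts("will have been traveling".split()) returns True, but so does accepts("will be having traveled".split()): the vform chain is locally consistent at every step, even though the resulting auxiliary order is ungrammatical. This is exactly the over-generation described above.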

In some cases, over-generation may also lead to an explosion of local possibilities during parsing. If the grammar is more constrained, this can be good from an efficiency point of view, because there are then fewer local ambiguities the parser has to keep track of.

Thus, a reversible architecture may be a useful methodology to obtain a good parsing system.

Similarly, a grammar that is built for generation will usually under-generate; i.e., it will only generate a canonical sentence for a given logical form, even if there are several possibilities. Again, from a theoretical perspective this is clearly undesirable. It can also be argued that a reversible architecture leads to a better generation system. It has often been argued that, in particular situations, a generation system should produce an unambiguous utterance. In other situations, however, ambiguous utterances are harmless because the hearer can easily disambiguate them. In this article we propose a model of language production in which a generator instructs its grammatical component whether or not it should check its proposed utterance for ambiguity. The grammatical component, quite independently, computes an unambiguous utterance if so desired. For this model to be possible at all, the grammatical component must have at its disposal several utterances for a given semantic structure, in order to find the most appropriate one in a given situation. Note that `ambiguity' is only one of several parameters that may or may not be instantiated in a given situation. Summarizing this point, the claim is that under-generation is undesirable from the point of view of extensibility.


Consider an NLP system which is used both to convey messages to a user and to understand the requests of the user. Given the state of the art, the sentences the system is able to produce and to understand are necessarily somewhat limited. This may not be problematic for a user, as she might adapt herself to this restriction. However, a user will invariably assume that she can use the sort of sentences the system produces itself. That is, a reasonable constraint for such a system is that the sentences it produces are a subset of the sentences it is able to understand. With a reversible grammar, the problem of checking that the parser and generator are consistent in this respect (i.e., that the system is able to understand the types of sentences it produces itself) is solved automatically; hence no special consistency-checking device is needed.


From a practical point of view it may be argued that it is cheaper to construct one reversible grammar than two separate grammars. The same argument can then be applied to the costs of maintaining the grammars. These two arguments of course extend to the lexical entries in the grammar: in a reversible architecture only one lexicon needs to be built and maintained, although clearly non-reversible grammars may share their lexicon too.

Furthermore, a reversible grammar provides grammar writers with a very effective debugging tool. To check whether the grammar accepts ungrammatical sentences, one can run the generator and see whether ungrammatical sentences are produced. Clearly, this technique cannot guarantee that a grammar never produces ungrammatical sentences; in practice, however, such a tool turns out to be quite useful, as many errors in the grammar are detected this way.
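This debugging technique can be sketched as follows (with a hypothetical toy CFG, not the actual grammar): enumerate every string the grammar generates up to a length bound, and inspect the output for ungrammatical sentences.

```python
# Hypothetical toy CFG with a deliberately missing constraint: the Aux
# rule does not restrict the verb form of its complement, so the grammar
# over-generates.  Enumerating its output makes the error visible.

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["graham"]],
    "VP":  [["Aux", "VP"], ["traveled"]],
    "Aux": [["will"], ["have"]],
}

def generate_all(symbols, max_words):
    """Yield all terminal strings derivable from `symbols` (leftmost
    derivation), pruned once the sentential form exceeds max_words."""
    if len(symbols) > max_words:
        return
    if all(s not in GRAMMAR for s in symbols):
        yield " ".join(symbols)
        return
    for i, s in enumerate(symbols):
        if s in GRAMMAR:
            for expansion in GRAMMAR[s]:
                yield from generate_all(
                    symbols[:i] + expansion + symbols[i + 1:], max_words)
            return

for sentence in sorted(set(generate_all(["S"], 4))):
    print(sentence)
```

Among the output is the ungrammatical `graham will traveled', immediately exposing the missing verb-form constraint; the grammar writer can then repair the rule and re-run the enumeration.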

Psychological motivation

An interesting question might be whether humans base their language production and language understanding on a single body of grammatical knowledge. Clearly, this would explain why humans speak the same language they understand and vice versa. Some empirical evidence for shared processors or facilities is discussed in [Garrett1982], [Frazier1982] and [Jackendoff1987].

In practice, speakers often understand sentences they would never produce. This observation may have several explanations.

Many differences are due to the fact that people are often able to understand otherwise mysterious utterances because of the context and situation -- using intelligence rather than grammatical knowledge. For example, hearers may understand an utterance even if it contains a word they hear for the first time (and hence could never have produced such an utterance themselves), provided the situation or context makes it clear what this word means. Thus `learning' often takes over from natural language understanding proper.

Alternatively, it may simply be the case that people understand sentences they never utter because they do not come up with the meaning in the first place. This situation might occur either because they are not able to come up with the meaning (Einstein's case), or because they do not want to come up with that meaning (Dan Quayle's case). The first time Einstein explained the theory of relativity to his colleagues, they were probably able to understand him; however, none of them would have been able to produce Einstein's utterances. As another example, consider the case where someone uses special stylistic effects. A hearer may recognize the social register associated with these effects, which will thus be part of the `meaning' of that speaker's utterances. However, the hearer may belong to a different social class, and hence his or her language components will generally be instructed with a different `meaning' representation to that effect. Thus, it seems that some of the differences between understanding and production are not to be explained linguistically, but are due to a difference at another level of cognitive behavior.

Thus, it may be possible to maintain that the grammatical part of language understanding and production can be modeled by assuming it is based on a reversible grammar. If, on the other hand, this claim cannot be maintained in its full generality, then I believe that the model proposed here provides a good starting point for a more realistic model of language behavior.

Noord G.J.M. van