Chapter 2 defines a formalism, called (), which is in several respects typical of formalisms in use in computational linguistics. It can be characterized as a constraint-based formalism in which the constraints are restricted to the path equations known from PATR II. Unlike PATR II, however, the formalism does not prescribe that phrases are built by concatenation. The formalism is therefore comparable to pure Prolog, except that its data structures are feature structures (defined by path equations) rather than first-order terms. Furthermore, the formalism is defined within the very general setting provided by the work of [32]. This makes it possible to add more powerful constraints to the formalism without affecting some of its properties, provided that the necessary constraint-solving techniques are available for these more powerful constraints.
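To fix intuitions, path equations over feature structures can be sketched in a few lines of Python, with feature structures modeled as nested dictionaries. This is an illustrative simplification, not the definition used in the thesis: values are atomic, and reentrancy (path-path equations) is omitted.

```python
def unify(fs1, fs2):
    """Unify two feature structures represented as nested dicts.

    Atomic values must match exactly; complex values are merged
    recursively.  Returns the unified structure, or None on failure.
    """
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for attr, val in fs2.items():
            if attr in result:
                sub = unify(result[attr], val)
                if sub is None:
                    return None          # clash on a shared attribute
                result[attr] = sub
            else:
                result[attr] = val
        return result
    return fs1 if fs1 == fs2 else None   # atomic values

def constrain(fs, path, value):
    """Impose a PATR-II-style path equation <path> = value on fs."""
    for attr in reversed(path):
        value = {attr: value}
    return unify(fs, value)
```

For example, `constrain({}, ["agr", "num"], "sg")` builds a structure that unifies with a structure specifying `{"agr": {"per": "3"}}` but fails against one specifying `{"agr": {"num": "pl"}}`.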

The resulting formalism,
(), is used in the thesis both as a language
in which to define grammars and as a language
in which to define meta-interpreters. A grammar of
() is a definite clause specification
of the relation
*sign*/1: each clause defines a sign in terms of path equations
and, possibly, other signs. Such clauses will generally be written
in matrix (attribute-value) notation.
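The thesis's own example clause is not reproduced here; as a purely hypothetical illustration (the categories and the representation are assumptions, not taken from the thesis), a simplified clause of *sign*/1 could be rendered as a head feature structure together with a list of body goals:

```python
# A definite clause as a pair (head, body): the head is the feature
# structure of the sign being defined, the body lists the feature
# structures of signs that must themselves be derivable.
# Hypothetical clause: a sentence sign with an np and a vp daughter.
clause = (
    {"cat": "s"},                    # head: the defined sign
    [{"cat": "np"}, {"cat": "vp"}],  # body: its daughter signs
)
```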

The procedural semantics of the formalism is comparable to Prolog's proof procedure. Refutation of a goal proceeds in a top-down manner; a leftmost computation rule is assumed, so that the leftmost atom of a goal is reduced first; and the search tree is traversed in a depth-first, backtracking fashion. Alternative proof strategies for () grammars are defined later as meta-interpreters in ().
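This proof procedure can be mimicked by a vanilla meta-interpreter. The sketch below works with propositional atoms rather than feature structures, which suffices to exhibit the leftmost computation rule and the depth-first, backtracking traversal of the search tree; the program and its clauses are illustrative, not taken from the thesis.

```python
# A program is a list of definite clauses (head, body).
program = [
    ("s",  ["np", "vp"]),   # illustrative grammar-like clauses
    ("np", []),
    ("vp", ["v", "np"]),
    ("vp", ["v"]),
    ("v",  []),
]

def solve(goals, trace=None):
    """Refute `goals` top-down: leftmost atom first, alternatives
    explored depth-first, backtracking on failure.  Yields the
    sequence of clause heads used, once per successful proof."""
    if trace is None:
        trace = []
    if not goals:
        yield list(trace)                # a successful refutation
        return
    first, rest = goals[0], goals[1:]    # leftmost computation rule
    for head, body in program:           # clauses tried in textual order
        if head == first:
            trace.append(head)
            # depth-first: exhaust this alternative before the next
            yield from solve(body + rest, trace)
            trace.pop()                  # backtrack
```

Running `list(solve(["s"]))` enumerates the two proofs of `s`, in the order a depth-first backtracking search visits them.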

When we use
() for grammars, the *p*-parsing problem for a
path *p* is defined as follows. Assume the input for parsing is a
feature structure whose attribute
*phon* carries some phonological representation.
The answers to the
*phon*-parsing
problem are those compatible signs in the grammar
which carry that same phonological representation as the value of their
*phon* attribute.
Similarly, assume the input for generation is a feature structure
whose attribute
*sem* carries some semantic representation.
The answers to the
*sem*-parsing problem
are those compatible signs in the grammar which carry
that semantic representation as the value of their
*sem* attribute.
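The two instances of the problem can be pictured with a toy Python example, treating the grammar, only for illustration, as a finite list of fully specified signs (in the formalism proper, signs are defined recursively by clauses; the strings and names below are invented):

```python
# Illustration only: a "grammar" here is a finite list of fully
# specified signs; in the formalism itself signs are defined by
# definite clauses and compatibility is tested by unification.
signs = [
    {"phon": "john sleeps", "sem": "sleep(john)"},
    {"phon": "john dreams", "sem": "dream(john)"},
]

def p_parse(p, value):
    """Solve the p-parsing problem for this toy grammar: return the
    signs whose value for attribute p is the given one."""
    return [s for s in signs if s.get(p) == value]

# Parsing fixes phon, generation fixes sem -- the same problem
# instantiated for different paths:
parsed    = p_parse("phon", "john sleeps")
generated = p_parse("sem", "dream(john)")
```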

It is shown that in general the *p*-parsing problem for
() grammars is not solvable: an undecidable
problem, Post's Correspondence Problem, can be defined as a
()
grammar. It can thus be concluded that, in the general case, grammars
of
() are not reversible.
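The shape of the argument can be made concrete. An instance of Post's Correspondence Problem is a finite list of string pairs, and asking whether some nonempty index sequence makes the concatenated top and bottom strings equal is undecidable; a proof procedure that solved every *p*-parsing problem would decide it. The hedged sketch below searches one (solvable, invented) instance with an explicit bound, precisely because the unbounded search need not terminate:

```python
from itertools import product

# A PCP instance: pairs of strings.  A solution is a nonempty index
# sequence i1..ik with top(i1)+...+top(ik) == bot(i1)+...+bot(ik).
pairs = [("a", "ab"), ("bb", "b")]     # solvable: sequence (0, 1)

def pcp_solution(pairs, max_len):
    """Search index sequences up to length max_len.  The bound keeps
    this sketch terminating; an unbounded depth-first search -- like
    the formalism's proof procedure -- need not terminate, which is
    the source of the undecidability result."""
    for k in range(1, max_len + 1):
        for seq in product(range(len(pairs)), repeat=k):
            top = "".join(pairs[i][0] for i in seq)
            bot = "".join(pairs[i][1] for i in seq)
            if top == bot:
                return seq
    return None
```

Here the sequence (0, 1) yields "a" + "bb" = "ab" + "b" = "abb" on both tiers.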

In practice, however, the grammars computational linguists tend to
write *are* reversible. An important task therefore consists
in constructing proof procedures which solve the *p*-parsing
problem for the grammars typically encountered. This is the goal of
the third and fourth chapters of the thesis. In these chapters
generation and parsing techniques are presented which are more
generally applicable than some competing techniques. Moreover, these
techniques are linguistically motivated, in that they exploit certain
*head-driven* and *lexicalist* aspects of most modern grammars.
It is hoped that the linguistic
foundation of these processing techniques leads to increased
efficiency.

1998-09-30