Next: Discussion Up: The Head-corner Parser for Previous: An example


Definite clause specification

To explain the basic ideas behind the head-corner parser for LTAGs, I present a definite clause specification of the parser. Such a specification can be considered a declarative characterization: it defines a search space which can then be investigated with different techniques. One can use SLD resolution with backtracking (as in Prolog), possibly with memo-ization and packing, or Earley deduction. The definite clause definition can thus be seen as an abstract specification of different implementations of the head-corner parser. A worst-case efficiency bound is easy to obtain if we view the definite clause specification as an abstract characterization of an Earley deduction system: the number of possible instantiations of a clause then approximates the upper bound on the computational resources needed in the worst case.

The basics of the parser are very simple. A tree to be recognized by the parser is represented as a pair consisting of the head-corner and a list of triples: the chain of nodes from the head-corner up to the root. Each triple consists of a node, a list of trees representing the daughters of this node to the left of its head, and a list of trees representing the daughters to the right of its head.

t(Term,Chain). % for tree with terminal symbol as head-corner
e(Cat, Chain). % for tree with empty string as head-corner
s(Subs,Chain). % for tree with substitution node as head-corner

As an example, consider the initial tree α4 of figure 2. The head-corner of this tree is the terminal node saw. The chain of nodes consists of the nodes v, vp, s, sbar and the root node s. This tree is therefore represented as follows:

t(saw,[c(v   ,[]        ,[]),
       c(vp  ,[]        ,[]),
       c(s   ,[s(np,[])],[]),
       c(sbar,[e(c ,[])],[]),
       c(s   ,[s(wh,[])],[])])
Note that this is a simple example in the sense that each of the non-head daughters of the tree is a non-complex tree. In general, however, chains appear recursively within chains.
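For illustration, the same representation can be written down in an ordinary programming language. The following Python sketch (my own encoding, mirroring the t/e/s and c constructors above) spells out the α4 example:

```python
# Trees are tagged tuples mirroring the definite clause representation:
#   ('t', Word, Chain)  - head-corner is a terminal symbol
#   ('e', Cat, Chain)   - head-corner is the empty string
#   ('s', Cat, Chain)   - head-corner is a substitution node
# A chain is a list of triples ('c', Cat, LeftDaughters, RightDaughters)
# running from the node just above the head-corner up to the root;
# the daughters are themselves trees of the same shape.

alpha4 = ('t', 'saw',
          [('c', 'v',    [],                []),
           ('c', 'vp',   [],                []),
           ('c', 's',    [('s', 'np', [])], []),
           ('c', 'sbar', [('e', 'c', [])],  []),
           ('c', 's',    [('s', 'wh', [])], [])])

def root_category(tree):
    """The root of a tree is the category of the last triple in its chain."""
    _tag, _head_corner, chain = tree
    return chain[-1][1]

print(root_category(alpha4))  # prints: s
```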

The parse predicate identifies three cases depending on the nature of the head-corner of the tree to be recognized. The head-corner is either a substitution node, a terminal symbol, or an empty element.

In the first case the parser predicts an initial tree whose root node matches the substitution node (the ini predicate). The chain of nodes needed to connect the anchor of this initial tree to its root node, and the chain of nodes from the substitution node up to its root node are both traversed via the hc_na predicate.

In the second case (head-corner is a terminal symbol) the parser checks whether there is indeed a terminal symbol in a possible position (i.e. between the current extreme positions). The chain of nodes to connect the head-corner to the root is then traversed via the hc_na predicate.

In the third case (head-corner is empty element) the parser selects a position where the empty element might occur and continues to traverse the chain via the hc predicate.

goal(s(Subs,Chain),P0,P,E0,E) :-
    ini(Subs,Chain0,Word,Q0,Q),                 % select initial tree 
    between(Q0,Q,E0,E),                         % in appropriate position
    hc_na(Chain0,lex(Word),Mid,Q0,Q,R0,R,E0,E), % recognize initial tree
    hc_na(Chain,Mid,_,R0,R,P0,P,E0,E).          % proceed with Chain

goal(t(Word,Chain),P0,P,E0,E) :-
    connects(Word,Q0,Q),                        % select word in string
    between(Q0,Q,E0,E),                         % in appropriate position
    hc_na(Chain,lex(Word),_,Q0,Q,P0,P,E0,E).    % proceed with Chain

goal(e(Cat,Chain),P0,P,E0,E) :-
    between(Q,Q,E0,E),                          % gap in appr. position
    hc(Chain,Cat,_,Q,Q,P0,P,E0,E).              % proceed with Chain

The predicates hc and hc_na both define a head-driven bottom-up tree walk. The difference is that in the latter case no adjunction is possible at the current node. Note that this predicate is used if the current node is a terminal symbol.

The predicate hc identifies two cases: either no adjunction occurs at the current node, or an adjunction does occur. In the first case the predicate hc_na is called (the predicate unify_node is relevant only in the case of feature structures, cf. below). If an adjunction is possible, an appropriate auxiliary tree is selected. The chain of nodes of this auxiliary tree is traversed first; after that, we continue to traverse the current chain of nodes.

The predicate hc_na succeeds trivially if the chain is empty. Otherwise it takes the first triple of the chain, recognizes each of the daughters to the left and right of the current head, and proceeds with a head-driven traversal of the tail of the chain.

hc(Chain,Cat,Goal,Q0,Q,P0,P,E0,E) :-
    hc_na(Chain,Cat,Goal,Q0,Q,P0,P,E0,E). % proceed without adjunction

hc(Chain,Cat,Goal,Q0,Q,P0,P,E0,E) :-            
    aux(Cat,Chain0,_,E0,Q0,Q,E),          % select aux, anchor in E0-Q0 or Q-E
    hc_na(Chain0,_,Mid,Q0,Q,R0,R,E0,E),   % recognize aux, no adjs. at foot
    hc_na(Chain,Mid,Goal,R0,R,P0,P,E0,E). % proceed

hc_na([],G,G,P0,P,P0,P,_,_).              % finish at root
hc_na([c(Cat,L,R)|Chain],_,Goal,Q1,Q2,P0,P,E0,E) :-
    goal_l(L,Q0,Q1,E0),                   % recognize material left of head
    goal_r(R,Q2,Q,E),                     % and right of head
    hc(Chain,Cat,Goal,Q0,Q,P0,P,E0,E).    % proceed with mother

To parse a list of trees, the predicates goal_l and goal_r are defined, which parse to the left and to the right respectively. To simplify the first predicate, it is assumed that the daughters to the left of a head are represented in right-to-left order. Note the use of the indices representing the extreme positions.
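The definitions of goal_l and goal_r do not appear in this extracted text. As an illustration of the intended control flow (the function names and signatures below are my own, not the paper's), here is a Python sketch in which a "goal" is a function yielding possible positions and the extreme positions act purely as a filter:

```python
def goal_r(trees, q0, e, recognize):
    """Recognize the daughters right of the head, left to right.
    recognize(tree, start, limit) yields possible end positions <= limit;
    goal_r yields every position where the rightmost daughter may end."""
    if not trees:
        yield q0
        return
    for q1 in recognize(trees[0], q0, e):
        yield from goal_r(trees[1:], q1, e, recognize)

def goal_l(trees, q, e0, recognize_l):
    """Recognize the daughters left of the head.  As in the definite
    clause specification, `trees` is ordered right to left.
    recognize_l(tree, end, limit) yields possible start positions >= limit."""
    if not trees:
        yield q
        return
    for q0 in recognize_l(trees[0], q, e0):
        yield from goal_l(trees[1:], q0, e0, recognize_l)

# Toy stand-ins for the goal predicate: a "tree" here is just a string
# spanning exactly len(tree) positions.
def rec_r(tree, start, limit):
    if start + len(tree) <= limit:
        yield start + len(tree)

def rec_l(tree, end, limit):
    if end - len(tree) >= limit:
        yield end - len(tree)

print(list(goal_r(['ab', 'c'], 0, 5, rec_r)))  # prints: [3]
print(list(goal_l(['c', 'ab'], 3, 0, rec_l)))  # prints: [0]
```

In the Prolog version the same threading of string positions is achieved by the extra argument pairs of goal_l and goal_r.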



If a sentence is grammatical then, after possible initial and auxiliary trees have been selected from the lexicon, a goal of the form goal(s(TopCat,[]),0,End,0,End) should succeed, where TopCat represents the top category and End the length of the sentence.


This completes the definite clause description of the head-corner parser for lexicalized TAGs.

Feature structures

The head-corner parser deals with feature structures straightforwardly; they are represented here as first-order terms. The details are hidden in the definitions of the predicates aux and unify_node. The latter simply unifies the bottom and top parts of a node; it is therefore called for a node once it is known that no adjunctions are possible at that node. In the predicate aux the appropriate unifications are carried out with respect to the foot and root node of the auxiliary tree and the target node of the adjunction.
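For concreteness, the effect of unifying two first-order terms can be sketched in Python (an illustrative toy unifier under my own term encoding, without occurs check; the specification itself simply relies on Prolog unification):

```python
def walk(t, subst):
    """Dereference a term: follow variable bindings in subst."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(x, y, subst):
    """Unify two first-order terms.  A term is either ('var', Name) or
    ('term', Functor, [Arg, ...]).  Returns an extended substitution,
    or None on failure.  (No occurs check, as in plain Prolog.)"""
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if x[0] == 'var':
        return {**subst, x[1]: y}
    if y[0] == 'var':
        return {**subst, y[1]: x}
    _, f, xs = x
    _, g, ys = y
    if f != g or len(xs) != len(ys):
        return None
    for a, b in zip(xs, ys):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Unifying the top and bottom terms of a node:
top    = ('term', 'np', [('var', 'Agr')])
bottom = ('term', 'np', [('term', 'sg', [])])
print(unify(top, bottom, {}))  # prints: {'Agr': ('term', 'sg', [])}
```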

Note that it is assumed that adjunction constraints are all expressed via constraints on feature structures. Also note that adjunctions at foot nodes are generally disallowed (although it would not be difficult to allow such adjunctions if there were linguistic motivation to do so).

Derived trees and derivation trees

The head-corner parser can be easily extended to deliver derived trees, and/or derivation trees. It is not difficult to extend the parser with a mechanism to obtain structure sharing in the case of ambiguities (packing). The implementation of the head-corner parser in the appendix indeed uses packing, and produces items on the basis of which derivation trees are recovered.


It is possible to obtain an efficient parser for LTAGs on the basis of the specification given above. Rather than using `chains', a pointer can be used to implement the tree walk. As there is only a polynomial number of different instantiations of each of the predicates, a polynomially bounded implementation can be obtained, for example by using Earley deduction.

In fact, if we remove from all of the clauses the pair of arguments that represent the extreme positions, then it is quite easy to see that an O(n^6) parser can be obtained (note that this does not affect the correctness of the algorithm, as the extreme positions are used only as a filter). This can be seen if we regard the definite clauses above as inference rules for an Earley deduction system. The most complex inference rule--the second clause of hc--then has six position markers; the other arguments are all bounded by the size of the grammar.
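The counting argument behind this bound can be spelled out as follows (a reformulation of the statement above, not a formula from the original text):

```latex
% Items are instantiations of the second clause of hc (extreme
% positions dropped).  The position arguments Q0, Q, R0, R, P0, P each
% range over the n+1 string positions; all remaining arguments range
% over a set whose size c(|G|) depends only on the grammar G.
\[
\#\mathit{items} \;\le\; c(|G|) \cdot (n+1)^{6} \;=\; O(n^{6})
\]
```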

The version that we presented above--including a pair of markers indicating the extreme positions--has an obvious implementation with an O(n^7) worst-case bound, but it can be turned into an O(n^6) chart parser with some additional effort. This effort consists in representing `goals' as separate items, along the lines of [5].

If we take feature structures into account, a polynomial worst-case bound can only be obtained by placing some (arbitrary) limit on the size of the feature structures. In such cases, however, it is far from clear whether chart-based techniques lead to practical benefits. In my experience, a backtracking scheme, possibly enriched with memo-ization of a few well-chosen predicates (in this case the goal predicate), is then much more practical.
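Memo-ization of a nondeterministic predicate can be sketched as follows in Python (a minimal illustration, assuming a "predicate" is a function that yields its solutions; full tabling, which also copes with left recursion, is considerably more involved):

```python
def memo_solutions(f):
    """Memo-ize a nondeterministic 'predicate': f(*args) yields its
    solutions.  All solutions for a given argument tuple are computed
    once and stored in a table; later calls replay the stored answers
    (cf. memo-izing the goal predicate in a backtracking parser)."""
    table = {}
    def wrapped(*args):
        if args not in table:
            table[args] = list(f(*args))
        return iter(table[args])
    return wrapped

calls = []

@memo_solutions
def spans(cat, i):
    # Toy 'goal': which positions can category cat reach from i?
    calls.append((cat, i))
    yield i + 1
    yield i + 2

assert list(spans('np', 0)) == [1, 2]
assert list(spans('np', 0)) == [1, 2]  # replayed from the table
assert calls == [('np', 0)]            # the body ran only once
```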

Noord G.J.M. van