For our current purposes, the co-routining facilities offered by
Sicstus Prolog are powerful enough to implement a delayed evaluation
strategy for the cases discussed above. For each constraint we declare
the conditions for evaluating a constraint of that type by means of a
block declaration. For example, the concat constraint is
associated with the declaration:
:- block concat(-,?,-).
This declaration says that evaluation of a call to concat must be delayed as long as both the first and the third argument are uninstantiated variables (of type TOP). It is clear from the definition of concat that if at least one of these arguments is instantiated, then we can evaluate the constraint in a top-down manner without risking non-termination. For example, the goal concat([A,B],C,D) succeeds by instantiating D as the list [A,B|C].
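In SICStus Prolog this can be written as follows (a minimal sketch, using the standard two-clause definition of list concatenation):

```prolog
:- block concat(-,?,-).

% concat(A,B,C): list C is the concatenation of lists A and B.
concat([],Xs,Xs).
concat([X|Xs],Ys,[X|Zs]) :-
    concat(Xs,Ys,Zs).

% ?- concat([A,B],C,D).
% D = [A,B|C]           (first argument instantiated: top-down evaluation)
%
% ?- concat(A,B,C).
% (the goal is suspended: both A and C are variables)
```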
Note that block declarations apply recursively. If the third argument
to a call to concat is instantiated as a list with a variable
tail, then the evaluation of the recursive application of that goal
might be blocked; e.g. evaluation of the goal
concat(A,[Sj],[B|C]) succeeds either with both A and C instantiated as the empty list and by
unifying Sj and B, or with A instantiated as the list [B|D]
for which the constraint
concat(D,[Sj],C) has to be satisfied.
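Concretely, with the standard two-clause definition of concat and the block declaration above, the goal behaves as follows (a sketch; Sj stands for an arbitrary term):

```prolog
% ?- concat(A,[Sj],[B|C]).      % third argument has a variable tail C
%
% Clause 1:  A = [], B = Sj, C = []
% Clause 2:  A = [B|D], with residual goal concat(D,[Sj],C);
%            this recursive call is blocked, since both D and C
%            are uninstantiated variables.
```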
Similarly, for each of the other constraints we declare the
conditions under which the constraint can be evaluated. For the
add_adj constraint we define:
:- block add_adj(?,-,?,?).
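The effect of this declaration can be sketched as follows (only the declaration itself is from the text; the comments describe the standard semantics of SICStus block declarations):

```prolog
:- block add_adj(?,-,?,?).

% A call such as  ?- add_adj(Cat,Subcat,Mods,Result)  is suspended as
% long as Subcat is an uninstantiated variable; as soon as Subcat
% becomes instantiated -- e.g. by unification with a subcat list
% hypothesized by the parser -- the suspended goal is resumed.
```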
One may wonder whether in such an architecture enough information will ever become available to allow the evaluation of any of the constraints. In general such a problem may surface: the parser then finishes a derivation with a large collection of constraints that it is not allowed to evaluate -- and hence it is not clear whether the sentence associated with that derivation is in fact grammatical (as there may be no solutions to these constraints).
The strategy we have used successfully so far is to use the structure hypothesized by the parser as a `generator' of information. For example, given that the parser hypothesizes the application of rules, and hence certain instantiations of the subcat list of the (lexical) head of such rules, this provides information on the subcat list of lexical categories. Keeping in mind the definition of a lexical entry as in figure 7, we are then able to evaluate each of the constraints on the value of the subcat list in turn, starting with the push_slash constraint, up through the inflection and add_adj constraints. Thus, rather than using the constraints as `builders' of subcat lists, the constraints are evaluated by checking whether a subcat list hypothesized by the parser can be related to a subcat list provided by a verb stem. In other words, the flow of information in the definition of lexical_entry is not as the order of the constraints might suggest (from top to bottom) but rather the other way around (from bottom to top).
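This bottom-up flow of information can be sketched as follows. This is a hypothetical skeleton: only the constraint names, the block declarations, and the evaluation order are from the text; the arities, argument positions, and the helper verb_stem are invented for illustration.

```prolog
% Hypothetical skeleton of a lexical entry.  Each constraint carries a
% block declaration, so the conjunction below merely *states* the
% constraints.  When the parser instantiates Subcat, push_slash wakes
% and instantiates Subcat2, which wakes inflection, which instantiates
% Subcat1, which wakes add_adj -- finally relating the hypothesized
% subcat list to the list provided by the verb stem.
lexical_entry(Word,Subcat) :-
    verb_stem(Word,SubcatStem),           % invented helper
    add_adj(SubcatStem,Subcat1,_Mods,_),  % delayed until Subcat1 is known
    inflection(Subcat1,Subcat2),          % delayed until Subcat2 is known
    push_slash(Subcat2,Subcat).           % delayed until Subcat is known
```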