The delayed evaluation techniques discussed above can be easily implemented in parsers which rely on backtracking for their search. For the grammars that we have worked with, a simple bottom-up (shift-reduce) parser combined with delayed evaluation guarantees termination of the parsing process.
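The basic mechanism can be sketched as follows (a minimal illustration in Python; all names are invented, not part of the system described): a constraint whose arguments are not yet sufficiently instantiated is placed on an agenda rather than evaluated, and the agenda is re-inspected whenever the parser binds further variables.

```python
# Minimal sketch of delayed evaluation (hypothetical names): a constraint is
# queued until its arguments are instantiated, then checked; a failing check
# signals the parser to backtrack.

def delay(constraint, args, agenda):
    """Queue a constraint instead of evaluating it right away."""
    agenda.append((constraint, args))

def flush(agenda, bindings):
    """Evaluate every delayed constraint whose arguments are now bound.

    Returns (ok, remaining_agenda); ok is False when a constraint fails,
    in which case the calling parser should backtrack.
    """
    remaining = []
    for constraint, args in agenda:
        if all(a in bindings for a in args):
            if not constraint(*(bindings[a] for a in args)):
                return False, remaining
        else:
            remaining.append((constraint, args))  # still delayed
    return True, remaining
```

A shift-reduce parser would call `flush` after each reduction; since unresolved constraints merely stay on the agenda, the parser never loops on under-instantiated constraints.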
To obtain an efficient parser, more sophisticated search strategies are required. However, chart-based search techniques are not easily generalized to grammars which make use of complex constraints. Even if the theoretical problems can be solved, severe practical problems may surface if the constraints are as complex as the ones proposed here.
As an alternative we have implemented chart-based parsers using the `non-interleaved pruning' strategy (terminology from ). Using this strategy, the parser first builds a parse forest for a sentence on the basis of the context-free backbone of the grammar. In a second processing phase, parses are recovered from the parse forest and the corresponding constraints are applied. This is advantageous if the context-free backbone of the grammar is `informative' enough to filter out many unsuccessful partial derivations that the parser would otherwise have to check.
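A minimal sketch of the two phases, assuming a binary context-free backbone (Python; the function names and the grammar encoding are ours, not part of the system described):

```python
from collections import defaultdict

def build_forest(words, lexicon, rules):
    """Phase 1: CKY recognition over the context-free backbone alone."""
    n = len(words)
    chart = defaultdict(set)       # (i, j) -> categories found over that span
    back = defaultdict(list)       # (cat, i, j) -> packed analyses
    for i, w in enumerate(words):
        chart[i, i + 1] |= lexicon[w]
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for parent, left, right in rules:
                    if left in chart[i, k] and right in chart[k, j]:
                        chart[i, j].add(parent)
                        back[parent, i, j].append((left, right, k))
    return back

def unpack(back, cat, i, j, ok):
    """Phase 2: recover parses from the forest, applying constraints only now."""
    if (cat, i, j) not in back:
        yield cat                  # lexical leaf
        return
    for left, right, k in back[cat, i, j]:
        for l in unpack(back, left, i, k, ok):
            for r in unpack(back, right, k, j, ok):
                tree = (cat, l, r)
                if ok(tree):       # a failing constraint prunes this analysis
                    yield tree
```

The constraint check `ok` stands in for the application of the grammar's complex constraints; only analyses that survive the context-free phase ever reach it.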
Since a CUG grammar clearly does not contain such an informative context-free backbone, a further step is to use `selective feature movement' (cf. again ). In this approach the base grammar is compiled into an equivalent modified grammar, in which certain constraints from the base grammar are converted into a more complex context-free backbone.
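The idea can be illustrated on a toy case (Python; the agreement example and all names are ours). A constraint over a feature with a small, finite value set, here number agreement shared between a mother and its daughters, is multiplied out into the context-free categories:

```python
def move_feature(rules, values):
    """Compile a finite-valued, shared feature into the context-free backbone.

    Each rule is a (mother, daughters) pair; the result contains one copy of
    the rule per feature value, with the value folded into every category.
    (A real compiler would refine only the daughters that share the feature.)
    """
    return [(f"{mother}[{v}]", [f"{d}[{v}]" for d in daughters])
            for mother, daughters in rules
            for v in values]
```

For instance, a rule NP -> Det N with number values sg and pl becomes the two rules NP[sg] -> Det[sg] N[sg] and NP[pl] -> Det[pl] N[pl].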
Again, this technique does not easily give good results for grammars of the type described here. It is not at all clear where to begin extracting appropriate features for such a modified grammar, because most information passing is simply too `indirect' to be compiled easily into a context-free backbone.
We achieved the best results by using a `hand-fabricated' context-free grammar as the first phase of parsing. This context-free grammar builds a parse forest that is then used by the `real' grammar to obtain appropriate representation(s) for the input sentence. This turned out to reduce parsing times considerably.
Clearly, such a strategy raises questions about the relation between this context-free grammar and the CUG. The context-free grammar is required to produce a superset of the derivations allowed by the CUG. Given the problems mentioned above, it is difficult to show that this is indeed the case (if it were easy, it would probably also be easy to obtain such a context-free grammar automatically).
The strategy can be described in somewhat more detail as follows. The context-free phase of processing builds a number of items defining the parse forest, in a format that can be used by the second processing phase. Such items are four-tuples ⟨r, i, j, ds⟩, where r is a rule name (consistent with the rule names from the CUG), i and j are string positions, and ds describes the string positions associated with each daughter of the rule (indicating which part of the string is covered by that daughter).
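In an implementation these items might be represented as follows (Python; the field names and the example rule name are ours):

```python
from typing import NamedTuple, Tuple

class Item(NamedTuple):
    rule: str                     # rule name, shared with the CUG
    begin: int                    # left string position of the mother
    end: int                      # right string position of the mother
    daughters: Tuple[Tuple[int, int], ...]  # one (begin, end) span per daughter

# E.g. an analysis over positions 1..4 by a hypothetical VP -> V NP rule,
# with the verb covering positions 1..2 and the object covering 2..4:
item = Item('vp_v_np', 1, 4, ((1, 2), (2, 4)))
```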
Through a head-driven recursive descent, the second processing phase recovers derivations on the basis of these items. Note that the delayed evaluation technique for complex constraints is essential here. Alternative solutions are obtained by backtracking. If the first phase has done a good job of pruning failing search branches, this is not too expensive, and we do not have to worry about the interaction of caching and complex constraints.
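The recovery phase can be sketched roughly as follows (Python; for brevity the descent below visits daughters left to right rather than head-first, and all names are invented). The check `ok` stands in for the application of the delayed CUG constraints; failing it simply moves the generator on to the next alternative, which is the backtracking referred to above.

```python
def recover(items, signature, goal, begin, end, ok):
    """Yield derivations of category `goal' over (begin, end).

    `items' holds the four-tuples (rule, begin, end, daughter_spans) of the
    first phase; `signature' maps a rule name to (mother, daughter_categories).
    """
    found = False
    for rule, i, j, spans in items:
        mother, daughter_cats = signature[rule]
        if (i, j) != (begin, end) or mother != goal:
            continue
        found = True
        for daughters in each_daughter(items, signature, daughter_cats,
                                       spans, ok):
            tree = (rule, daughters)
            if ok(tree):               # delayed constraints are applied here
                yield tree
    if not found:
        yield goal                     # no item: treat as a lexical leaf

def each_daughter(items, signature, cats, spans, ok):
    """Yield one tuple of subderivations per combination of alternatives."""
    if not cats:
        yield ()
        return
    (b, e), rest = spans[0], spans[1:]
    for first in recover(items, signature, cats[0], b, e, ok):
        for others in each_daughter(items, signature, cats[1:], rest, ok):
            yield (first,) + others
```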