The main conclusion to be drawn from the experiments discussed above is that the influence of the grammar can hardly be overestimated. The parser that works best for one grammar may easily turn out to be the most inefficient one for a different grammar. This observation holds even for the two grammars discussed above, although both are lexicalist grammars.
Head-corner parsing appears to be superior for grammars in which the head-corner table contains discriminating information. A typical DCG grammar for a head-final language such as Dutch is an example of such a grammar. On the other hand, for grammars in which top-down filtering is difficult to implement, strictly bottom-up parsing strategies are more useful, especially if the number of active items can be reduced: either by a lazy strategy, which never enters active items into the chart, or, even more successfully for the CUG grammar for English we considered, by a head-driven strategy.
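The filtering role of a discriminating head-corner table can be illustrated with a minimal sketch. The table and category names below are invented for illustration and do not reproduce the grammars or the Prolog implementation used in the experiments; the table is assumed to hold, for each goal category, the set of categories that can occur as its head corner.

```python
# Hypothetical head-corner table: for each goal category, the
# categories that can be the head corner of a derivation rooted in
# that goal (the reflexive, transitive closure of the head relation).
# All names here are invented for illustration.
hc_table = {
    "s":  {"s", "vp", "v"},   # the head of s is vp, whose head is v
    "vp": {"vp", "v"},
    "np": {"np", "n"},
}

def head_corner_ok(goal, lexical_cat):
    """Prediction step: only try to prove `goal` starting from a word
    whose category can actually be a head corner of `goal`."""
    return lexical_cat in hc_table.get(goal, set())
```

With a discriminating table, `head_corner_ok("s", "n")` fails, so no attempt is made to build a sentence headed by a noun; if every cell of the table contained every category, the check would succeed vacuously and no filtering would take place.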
Clearly, many other factors may be relevant in finding the best parser for a particular grammar. For example, the cost of unification turns out to be an important factor. As indicated above, a cheap unification procedure may favor an inactive chart parser, even if that parser attempts many useless reductions. If, however, the cost of unification is relatively high, then paying the overhead of active items in order to reduce the number of useless reductions, for example by a head-driven strategy, may be worthwhile.
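The trade-off can be made concrete with a back-of-envelope cost model. All numbers below are invented for illustration; the model simply assumes that an inactive chart parser attempts more (often useless) unifications, while an active parser pays a per-item overhead to avoid many of them.

```python
# Hypothetical cost model: total work = unification work plus the
# bookkeeping cost of active items. All figures are invented.
def total_cost(n_unifications, unify_cost, n_active_items, item_cost):
    return n_unifications * unify_cost + n_active_items * item_cost

# Cheap unification: the extra useless unifications hardly matter.
inactive = total_cost(10_000, 1, 0, 5)      # 10000
active   = total_cost(2_000, 1, 3_000, 5)   # 17000 -> inactive wins

# Expensive unification: pruning useless reductions pays off.
inactive = total_cost(10_000, 50, 0, 5)     # 500000
active   = total_cost(2_000, 50, 3_000, 5)  # 115000 -> active wins
```

The crossover point depends on the ratio of unification cost to item overhead, which is exactly why no single strategy dominates across grammars.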
Another result we obtained during the experiments is that the use of a head-corner or left-corner table may itself lead to inefficiency. It may be the case that, on the basis of the left-corner (resp. head-corner) table, very few derivations are actually filtered out. Moreover, the use of the table may even lead to more derivations, as certain subcases are now distinguished that a parser without prediction treats as a single derivation. An important problem, therefore, is to come up with the most useful left-corner (resp. head-corner) table for a given grammar.
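A left-corner table of the kind referred to here can be sketched as the reflexive, transitive closure of the direct left-corner relation of the grammar. The following Python sketch, with an invented toy grammar, shows both how such a table is computed and how discriminating it is (or is not): if every cell contains nearly every category, the table filters almost nothing.

```python
# Hypothetical sketch: the left-corner table as the reflexive,
# transitive closure of the direct left-corner relation.
# The toy grammar and category names are invented for illustration.

def left_corner_table(rules):
    """rules: list of (lhs, [rhs, ...]) productions.
    Returns {cat: set of categories that can be a left corner of cat}."""
    def corners(cat, seen):
        if cat in seen:            # avoid looping on left recursion
            return set()
        seen.add(cat)
        result = {cat}             # reflexive step
        for lhs, rhs in rules:
            if lhs == cat and rhs:
                result |= corners(rhs[0], seen)   # transitive step
        return result
    return {lhs: corners(lhs, set()) for lhs, _ in rules}

grammar = [("s", ["np", "vp"]), ("np", ["det", "n"]), ("vp", ["v", "np"])]
lc = left_corner_table(grammar)
# lc["s"] == {"s", "np", "det"}: a verb cannot begin a sentence in
# this toy grammar, so v-initial analyses of an s goal are filtered.
```

For a grammar in which almost any category can begin almost any constituent, the computed sets approach the full set of categories and the prediction step degenerates into pure overhead, which is the inefficiency noted above.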
A final factor in determining the best parser is the actual use we want to make of it. For example, are we interested only in recognition times, or do we need to consider the time required for the recovery of parse trees as well? In some systems these parse trees are never actually built; instead, the semantic and pragmatic components work directly on the items built by the parser. We conjecture that even in such applications it is probably worthwhile to limit the size of the parse forest, but the importance of doing so may vary from application to application.