Lexicalist grammar formalisms, such as Head-driven Phrase Structure Grammar (HPSG) and Categorial Unification Grammar (CUG), have two characteristic properties. First, lexical elements and phrases are associated with categories that have considerable internal structure. Second, instead of construction-specific rules, a small set of generic rule schemata is used. Consequently, the set of constituent structures defined by a grammar cannot be `read off' the rule set directly, but is defined by the interaction of the rule schemata and the lexical categories.
Applying standard parsing algorithms to such grammars is unsatisfactory for a number of reasons. Earley parsing is intractable in general, as the rule set is simply too general. For some grammars, naive top-down prediction may even fail to terminate. A modified version of the Earley parser has therefore been proposed, using restricted top-down prediction. While this modification guarantees that the prediction step terminates, in practice the restricted prediction step easily becomes trivial, resulting in inferior performance.
Bottom-up parsing is far more attractive for lexicalist formalisms, as it is driven by the syntactic information associated with lexical elements. Certain inadequacies remain, however. Most importantly, the selection of rules to be considered for application may not be very efficient. Consider, for instance, a DCG rule in which the leftmost daughter is an argument selected by the (rightmost) head daughter: such an argument category is little more than a variable, so a parser that selects rules on the basis of their leftmost daughter obtains no filtering from the rule.
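In DCG notation, such a rule might look as follows; the functor names and the subcategorization-list encoding are illustrative assumptions, not the actual rules of the grammars discussed below.

```prolog
% Sketch of a CUG-style application rule (head-final version).  The
% head daughter carries a subcategorization list; the leftmost daughter
% is just the variable Arg taken from that list, so a parser that
% selects rules by their leftmost daughter obtains no filtering:
x(cat(C, Args)) -->
    x(Arg),                      % argument: any category whatsoever
    x(cat(C, [Arg|Args])).       % head: subcategorizes for Arg

% The head-initial mirror image; here the *rightmost* daughter is the
% uninformative one, which is equally problematic for parsers driven
% by the rightmost daughter:
x(cat(C, Args)) -->
    x(cat(C, [Arg|Args])),       % head: subcategorizes for Arg
    x(Arg).                      % argument: any category whatsoever
```

Given rules of this form, neither leftmost- nor rightmost-daughter indexing distinguishes one rule from another; only the head daughter carries useful information.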
Parsers in which the application of a rule is driven by the rightmost daughter, such as shift-reduce and inactive bottom-up chart parsers, encounter a similar problem for the mirror-image case, in which the head is leftmost and the uninformative argument daughter is rightmost.
Several authors have suggested parsing algorithms which appear to be more suitable for lexicalist grammars. Kay discusses the concept of head-driven parsing. The key idea underlying this concept is that the linguistic notion of a head can be used to obtain parsing algorithms which are better suited for typical natural language grammars. Most linguistic formalisms assume that among the daughters introduced by a rule or rule schema there is one daughter which can be identified as the head of that rule. There are several criteria for deciding which daughter is the head. Two of these criteria seem relevant for parsing. First, the head of a rule determines to a large extent what other daughters may or must be present, as the head subcategorizes for the other daughters. Second, the syntactic category and morphological properties of the mother node are, in the default case, identical to those of the head daughter. These two properties suggest that it might be possible to design a parsing strategy in which one first identifies a potential head of a rule before starting to parse the non-head daughters. By starting with the head, important information about the remaining daughters is obtained. Furthermore, since the head is to a large extent identical to the mother category, effective top-down identification of a potential head should be possible. A head-driven parsing strategy is particularly interesting for lexicalist grammars, as these grammars normally suffer most from the problem that rules or rule schemata hardly constrain the search space of the parser.
Kay presents two different head-driven parsers. First, a `head-driven' shift-reduce parser is presented, which differs from a standard shift-reduce parser in that it considers the application of a rule (i.e. a reduce step) only if a category matching the head of the rule has been found. Furthermore, it may shift elements onto the parse stack which are in a sense similar to the active items (or `dotted rules') of active chart parsers. By using the head of a rule to determine whether the rule is applicable, the head-driven shift-reduce parser avoids the disadvantages of parsers in which either the leftmost or rightmost daughter is used to drive the selection of rules.
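The rule-selection criterion of such a parser can be sketched in Prolog as follows; the rule/3 encoding, with the head daughter singled out as a separate argument, is an assumed representation rather than one taken from Kay's presentation.

```prolog
% rule(Mother, Daughters, Head): a grammar rule whose daughter list
% Daughters contains the designated head daughter Head (hypothetical
% encoding).  A reduce step with this rule is considered only if some
% category already on the parse stack is unifiable with the head --
% rather than keying the rule on its leftmost or rightmost daughter:
applicable(rule(_Mother, _Daughters, Head), Stack) :-
    member(Cat, Stack),
    \+ Cat \= Head.              % unifiable, without binding either
```

For rules of the CUG style shown earlier, this test is discriminating even though the leftmost (or rightmost) daughter is a bare variable.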
Kay also presents a `head-corner' parser. The striking property of this parser is that it does not parse a phrase from left to right, but instead operates `bidirectionally': it starts by locating a potential head of the phrase and then proceeds by parsing the daughters to the left and the right of the head. Again, this strategy avoids the disadvantages of parsers in which rule selection is uniformly driven by either the leftmost or rightmost daughter. Furthermore, by selecting potential heads on the basis of a `head-corner table' (comparable to the left-corner table of a left-corner parser), it may use top-down filtering to minimize the search space. Head-corner parsing has also been considered elsewhere. In [7,9] chart-based head-corner parsing for context-free grammars is considered. It is shown that, although bidirectional parsing seemingly leads to more overhead than left-to-right parsing, the worst-case complexity of a head-corner parser does not exceed that of an Earley parser. In [10,11] it is argued that head-corner parsing is especially useful for parsing with non-concatenative grammar formalisms. A head-driven parsing strategy for Lexicalized Tree Adjoining Grammars has also been presented.
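The core control structure of a head-corner parser can be sketched as the following Prolog program; the predicate names, the rule encoding (with the head daughter separated from its left and right sisters), and the toy grammar are all illustrative assumptions, not the parser used in the experiment.

```prolog
% Illustrative grammar: rule(Mother, LeftDs, Head, RightDs), where
% Head is the head daughter and LeftDs/RightDs its left/right sisters.
rule(s,  [np], vp, []).          % vp is the head of s
rule(vp, [],   v,  [np]).        % v is the head of vp
rule(np, [],   n,  []).          % n is the head of np

lex(mary, n).  lex(john, n).  lex(sees, v).

% Example input ``mary sees john'' as position-indexed facts:
word(0, 1, mary).  word(1, 2, sees).  word(2, 3, john).

% parse(Goal, P0, P): recognize a phrase of category Goal between
% string positions P0 and P.  First a potential lexical head spanning
% Q0-Q is predicted; head_corner/6 then grows the parse outwards.
parse(Goal, P0, P) :-
    predict(Goal, Small, Q0, Q),
    head_corner(Small, Q0, Q, Goal, P0, P).

% Prediction: pick any word as a potential lexical head.  A real
% head-corner parser would consult the head-corner table here to
% discard heads whose category cannot be a head-corner of Goal.
predict(_Goal, Cat, Q0, Q) :-
    word(Q0, Q, W),
    lex(W, Cat).

% Base case: the category built so far is the goal itself.
head_corner(Goal, P0, P, Goal, P0, P).
% Recursive case: Small heads some rule; parse the daughters to its
% left and right, then continue upwards with the mother category.
head_corner(Small, Q0, Q, Goal, P0, P) :-
    rule(Mother, Left, Small, Right),
    parse_left(Left, QL, Q0),
    parse_right(Right, Q, QR),
    head_corner(Mother, QL, QR, Goal, P0, P).

% Left daughters are parsed right-to-left, moving outward from the
% head; right daughters left-to-right.
parse_left([], P, P).
parse_left(Ds0, P0, P) :-
    append(Ds, [D], Ds0),
    parse(D, Q, P),
    parse_left(Ds, P0, Q).
parse_right([], P, P).
parse_right([D|Ds], P0, P) :-
    parse(D, P0, Q),
    parse_right(Ds, Q, P).
```

With this toy grammar, the query `parse(s, 0, 3)` succeeds by first predicting the verb `sees` as a lexical head, building `vp` rightwards, and then attaching the subject `np` to the left.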
Although it has been suggested that head-driven parsing has benefits for lexicalist grammars, this has not been established in practice. The potential efficiency gains of a head-driven parser are often outweighed by the cost of additional overhead. This is particularly true for the (bidirectional) head-corner parser. The results of the experiment we describe in section 3 establish that efficient head-driven parsing is possible. That is, we show that for a radical lexicalist grammar (based on CUG) a bottom-up head-driven chart parser (a chart-based, breadth-first implementation of Kay's head-driven shift-reduce parser) is more efficient than standard pure bottom-up chart parsers. Also, we show that for a lexicalist (definite clause) grammar in which the rules still contain a substantial amount of information, (bidirectional) head-corner parsing, in which a bottom-up parsing strategy is guided by top-down prediction, is more efficient than pure bottom-up parsing as well as left-corner parsing.
Before turning to the experiment, however, we first discuss the two head-driven parsers it employs, and how they relate to standard parsing algorithms.