In this section we compare the two alternative approaches with respect to accuracy and the number of states of the resulting transducers. We distinguish between exact and approximating implementations. An implementation is exact if it produces the right result for all possible inputs.
Assume we have a transducer T which correctly implements an OT analysis, except that it perhaps fails to distinguish between different numbers of constraint violations for one or more relevant constraints. We can decide whether this T is exact as follows. T is exact if and only if T is exact with respect to each of the relevant constraints, i.e., for each constraint, T distinguishes between different numbers of constraint violations. In order to check whether T is exact in this sense for constraint C we create the transducer is_exact(T,C):
macro(is_exact(T,C), T o mark_violation(C) o {(? - @) x [], @}*).

If there are inputs for which this transducer produces multiple outputs, then we know that T is not exact for C; otherwise T is exact for C. This reduces to the question of whether is_exact(T,C) is ambiguous. The question of whether a given transducer is ambiguous is shown to be decidable in [1]; an efficient algorithm is proposed in [20].^{12} Therefore, checking a given transducer T for exactness amounts to verifying that, for each of the constraints C, is_exact(T,C) is unambiguous.
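The idea behind is_exact(T,C) can be illustrated outside FSA Utilities with a small brute-force sketch in Python. All names below are hypothetical: a transducer is modeled simply as a function from an input string to its set of optimal output strings, and the check is restricted to a finite sample of inputs, whereas the ambiguity test of [1,20] decides the question for all inputs.

```python
def is_exact_for(transducer, mark_violation, inputs):
    """Brute-force exactness check for one constraint (illustrative sketch).

    transducer:     input string -> set of optimal output strings
    mark_violation: output string -> set of marked strings, with the
                    marker '@' inserted at each violation of the constraint
    inputs:         a finite, length-bounded sample of inputs to test
    """
    for s in inputs:
        # Counting the '@' markers corresponds to deleting every symbol
        # except the marker, as in the expression {(? - @) x [], @}*.
        marks = {m.count('@')
                 for out in transducer(s)
                 for m in mark_violation(out)}
        if len(marks) > 1:
            # Ambiguous: two optimal outputs disagree on the number of
            # violations, so T is not exact for this constraint.
            return False
    return True
```

For instance, a transducer that returns both 'a' and 'aa' as optimal outputs for the input 'a' fails the check whenever the constraint marks each a as a violation, since the two outputs carry different numbers of marks.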
If a transducer T is not exact, we characterize the quality of the approximation by considering the maximum length of input strings for which T is exact. For example, even though T fails the exactness check, it might be the case that
[? ^,? ^,? ^,? ^,? ^] o T

in fact is exact, indicating that T produces the correct result for all inputs of length at most 5.
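The set of inputs covered by such a bounded check can also be enumerated directly. The helper below (hypothetical, not part of FSA Utilities) plays the role of the sequence of optional arbitrary symbols: composing that sequence with T restricts attention to exactly these inputs.

```python
from itertools import product

def inputs_up_to(alphabet, max_len):
    """Enumerate every string over `alphabet` of length at most `max_len`.

    This mirrors prefixing T with max_len optional arbitrary symbols:
    the composition only considers inputs of length up to max_len.
    """
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            yield ''.join(tup)
```

Over a two-symbol alphabet with bound 5 this yields 2^0 + 2^1 + ... + 2^5 = 63 strings, a feasible set for an exhaustive test.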
Suppose we are given the sequence of constraints:
have_ons >> fill_ons >> parse >> fill_nuc >> no_coda

and suppose furthermore that we require that the implementation, using the counting approach, must be exact for all strings up to a given length. How can we determine the level of precision required for each of the constraints? A simple algorithm (which does not necessarily produce the smallest transducer) proceeds as follows. First, we determine the precision of the first, most important, constraint by checking exactness of the transducer
gen oo P :: have_ons

for increasing values of P. As soon as we find the minimal P for which the exactness check succeeds (in this case P=0), we continue by determining the precision required for the next constraint by finding the minimal value of P in:
gen oo 0 :: have_ons oo P :: fill_ons

We continue in this way until we have determined precision values for each of the constraints. In this case we obtain a transducer with 8269 states implementing:
gen oo 0 :: have_ons oo 1 :: fill_ons oo 8 :: parse oo 5 :: fill_nuc oo 4 :: no_coda

In contrast, using the matching approach, an exact implementation is obtained with a precision of 1 for the fill_nuc constraint; all other constraints have a precision of 0. The resulting transducer contains only 28 states.
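The greedy search just described can be sketched in Python as follows; compose and exact are hypothetical stand-ins for building t oo P :: c and for the length-bounded exactness check, respectively.

```python
def find_precisions(gen, constraints, compose, exact):
    """Greedy per-constraint precision search (illustrative sketch).

    gen:         the candidate-generating transducer
    constraints: the constraints in ranking order, most important first
    compose:     (transducer, precision, constraint) -> transducer,
                 a stand-in for t oo P :: c
    exact:       transducer -> bool, the bounded exactness check

    For each constraint, the minimal precision that passes the check is
    frozen before moving on to the next constraint; as noted in the text,
    the result is not guaranteed to be the smallest overall transducer.
    """
    t = gen
    precisions = []
    for c in constraints:
        p = 0
        while not exact(compose(t, p, c)):
            p += 1
        t = compose(t, p, c)
        precisions.append(p)
    return precisions
```

Because each precision is fixed as soon as its own check succeeds, the search is linear in the number of constraints times the largest precision tried, at the cost of possibly missing smaller global combinations.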
The assumption in OT is that each of the constraints is universal, whereas the constraint order differs from language to language. Prince and Smolensky identify nine interestingly different constraint orderings. These nine ``languages'' are presented in table 2.

In table 3 we compare the size of the resulting automata for the matching approach and for the counting approach, the latter in three different variants, each constructed to guarantee exactness for strings up to a given length.

Finally, constructing the transducer with the matching approach is typically much faster as well. Some comparisons are summarized in table 4.
