Algorithm
As with BLEU, the basic unit of evaluation is the sentence. The algorithm first creates an alignment (see illustrations) between two sentences: the candidate translation string and the reference translation string. The alignment is a set of mappings between unigrams; a mapping can be thought of as a line between a unigram in one string and a unigram in the other. The constraint is that every unigram in the candidate translation must map to zero or one unigram in the reference. Mappings are selected so as to produce an alignment as defined above. If two alignments have the same number of mappings, the one with the fewest crosses is chosen, that is, with fewer intersections between mappings. From the two alignments shown, alignment (a) would be selected at this point. Matching stages are run consecutively, and each stage adds to the alignment only those unigrams that have not been matched in previous stages. Once the final alignment is computed, the score is computed from unigram precision and recall, as follows.
Examples of pairs of words which will be mapped by each module:

| Module | Candidate | Reference | Match |
|---|---|---|---|
| Exact | good | good | Yes |
| Stemmer | goods | good | Yes |
| Synonymy | well | good | Yes |
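The staged matching described above can be sketched as follows. This is a simplified illustration, not the original implementation: the real modules use the Porter stemmer and WordNet synonymy, while the `stem` function here is a crude suffix-stripping stand-in, and the selection of the alignment with the fewest crossings is omitted.

```python
# Sketch of METEOR's staged unigram matching (simplified).
def align(candidate, reference, stages):
    """Greedily map candidate unigrams to reference unigrams, one stage
    at a time; each stage only considers still-unmatched positions."""
    mapping = {}          # candidate index -> reference index
    used_ref = set()
    for match in stages:  # stages run consecutively: exact, stem, synonym
        for i, c_tok in enumerate(candidate):
            if i in mapping:
                continue  # already matched in an earlier stage
            for j, r_tok in enumerate(reference):
                if j not in used_ref and match(c_tok, r_tok):
                    mapping[i] = j
                    used_ref.add(j)
                    break
    return mapping

exact = lambda a, b: a == b
# crude stand-in for a real stemming module
stem = lambda a, b: a.rstrip("s") == b.rstrip("s")

cand = "on the mat sat the cat".split()
ref = "the cat sat on the mat".split()
alignment = align(cand, ref, [exact, stem])  # all 6 unigrams map
```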
Unigram precision P is calculated as

P = m / w_t

where m is the number of unigrams in the candidate translation that are also found in the reference translation, and w_t is the number of unigrams in the candidate translation. Unigram recall R is computed as

R = m / w_r

where m is as above, and w_r is the number of unigrams in the reference translation. Precision and recall are combined using the harmonic mean, with recall weighted 9 times more than precision:

F_mean = 10 P R / (R + 9 P)
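With the alignment in hand, the weighted harmonic mean is a few lines of arithmetic. A minimal sketch, assuming m is simply the number of mapped unigrams:

```python
def f_mean(m, w_t, w_r):
    """Harmonic mean of unigram precision and recall, recall weighted 9:1."""
    precision = m / w_t   # m = matched unigrams, w_t = candidate length
    recall = m / w_r      # w_r = reference length
    if precision + recall == 0:
        return 0.0
    return 10 * precision * recall / (recall + 9 * precision)

# e.g. 6 matches, candidate and reference both 6 unigrams long:
score = f_mean(6, 6, 6)   # precision = recall = 1, so F_mean = 1.0
```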
The measures that have been introduced so far only account for congruity with respect to single words but not with respect to larger segments that appear in both the reference and the candidate sentence. In order to take these into account, longer n-gram matches are used to compute a penalty for the alignment. The more mappings there are that are not adjacent in the reference and the candidate sentence, the higher the penalty will be.
In order to compute this penalty, unigrams are grouped into the fewest possible chunks, where a chunk is defined as a set of unigrams that are adjacent in the hypothesis and in the reference. The longer the adjacent mappings between the candidate and the reference, the fewer chunks there are; a translation that is identical to the reference gives just one chunk. The penalty p is computed as follows:

p = 0.5 (c / u_m)^3

where c is the number of chunks, and u_m is the number of unigrams that have been mapped. The final score for a segment is calculated as

M = F_mean (1 - p)

The penalty has the effect of reducing the F_mean by up to 50% if there are no bigram or longer matches.
To calculate a score over a whole corpus, or collection of segments, the aggregate values of P, R and p are taken and then combined using the same formulas. The algorithm also works for comparing a candidate translation against more than one reference translation; in this case the candidate is compared against each reference and the highest score is selected.
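The multi-reference case reduces to taking a maximum over single-reference scores. A minimal sketch, assuming a single-reference scoring function `score_pair` (a hypothetical name standing in for the full segment-level computation above):

```python
def meteor_multi_ref(candidate, references, score_pair):
    """Score a candidate against several references; keep the best score."""
    return max(score_pair(candidate, ref) for ref in references)

# toy scorer for illustration: 1.0 on an exact match, else 0.0
best = meteor_multi_ref("the cat", ["the cat", "a dog"],
                        lambda c, r: 1.0 if c == r else 0.0)
```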