Document Type

Article

Language

eng

Format of Original

8 p.

Publication Date

10-2003

Publisher

Elsevier

Source Publication

Computer Speech and Language

Source ISSN

0885-2308

Original Item ID

doi: 10.1016/S0885-2308(02)00052-9

Abstract

Large vocabulary continuous speech recognition can benefit from an efficient data structure for representing a large number of acoustic hypotheses compactly. Word graphs, or lattices, have been adopted as such an efficient interface between acoustic recognition engines and subsequent language processing modules. This paper first investigates the effect of pruning during acoustic decoding on the quality of word lattices and shows that by combining different pruning options (at the model level and the word level), we can obtain word lattices of manageable size with accuracy comparable to that of the original lattices. To serve as input to a post-processing language module, word lattices should preserve the target hypotheses and their scores while being as small as possible. In this paper, we introduce a word graph compression algorithm that significantly reduces the number of words in the graphical representation without eliminating utterance hypotheses or distorting their acoustic scores. We compare this algorithm with several other lattice size-reducing approaches and demonstrate its relative strength in decreasing the number of words in the representation. Experiments are conducted across corpora and vocabulary sizes to determine the consistency of the pruning and compression results.
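The abstract's key constraint is that compression must shrink the graph without losing any utterance hypothesis or changing its score. As an illustrative sketch only (this is not the paper's algorithm; the toy lattice, state names, and scores are invented), the snippet below represents a lattice as a DAG of `(word, score, successor)` arcs and applies a generic suffix-merging pass that collapses states with identical outgoing arc sets, which preserves exactly the same hypotheses and totals:

```python
# Illustrative sketch, NOT the algorithm from the paper: a word lattice as a
# DAG of (word, score, successor) arcs, plus a suffix-merging pass that
# collapses states with identical outgoing arc sets. Two such states accept
# the same suffixes with the same scores, so merging them shrinks the graph
# without removing hypotheses or distorting scores.

def hypotheses(lattice, start, end, prefix=(), score=0.0):
    """Enumerate every (word sequence, total score) path from start to end."""
    if start == end:
        return [(prefix, round(score, 6))]
    paths = []
    for word, arc_score, nxt in lattice.get(start, []):
        paths.extend(hypotheses(lattice, nxt, end,
                                prefix + (word,), score + arc_score))
    return paths

def merge_suffixes(lattice):
    """Repeatedly merge states whose outgoing arc sets are identical."""
    lattice = {s: sorted(arcs) for s, arcs in lattice.items()}
    changed = True
    while changed:
        changed = False
        signature = {}  # arc-set signature -> representative state
        alias = {}      # merged state -> its representative
        for state, arcs in lattice.items():
            key = tuple(arcs)
            if key in signature:
                alias[state] = signature[key]
                changed = True
            else:
                signature[key] = state
        if alias:  # redirect arcs into merged states, drop the duplicates
            lattice = {
                s: sorted({(w, sc, alias.get(n, n)) for (w, sc, n) in arcs})
                for s, arcs in lattice.items() if s not in alias
            }
    return lattice

# Toy lattice: state 0 is the start, state 5 the final state.
lattice = {
    0: [("the", -1.0, 1), ("a", -1.5, 2)],
    1: [("cat", -2.0, 3)],
    2: [("cat", -2.0, 4)],
    3: [("sat", -0.5, 5)],
    4: [("sat", -0.5, 5)],
    5: [],
}

compact = merge_suffixes(lattice)
before = sorted(hypotheses(lattice, 0, 5))
after = sorted(hypotheses(compact, 0, 5))
assert before == after              # same hypotheses, same scores
assert len(compact) < len(lattice)  # fewer states after merging
```

Here states 3 and 4 (then 1 and 2) carry identical suffixes and are merged, shrinking six states to four while both hypotheses, "the cat sat" and "a cat sat", survive with unchanged scores. The paper's actual algorithm targets the number of words in the representation and is evaluated against other size-reducing approaches; this sketch only conveys the lossless-compression idea.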

Comments

Accepted version. Computer Speech and Language, Vol. 17, No. 4 (October 2003): 329-356. DOI. © 2003 Elsevier. Used with permission.

johnson_5657acc.docx (286 kB)
ADA Accessible Version
