Natural Language Engineering

A fast and flexible architecture for very large word n-gram datasets

MICHAEL FLOR

NLP and Speech Group, Educational Testing Service, Princeton, NJ 08541, USA
e-mail: mflor@ets.org

Abstract

This paper presents TrendStream, a versatile architecture for very large word n-gram datasets. Designed for speed, flexibility, and portability, TrendStream uses a novel trie-based architecture, features lossless compression, and is optimized for both speed and memory use. In addition to literal queries, it supports fast pattern-matching searches (with wildcards or regular expressions) on the same data structure, without any additional indexing. Language models are updateable directly in the compiled binary format, allowing rapid encoding of existing tabulated collections, incremental generation of n-gram models from streaming text, and merging of encoded compiled files. The architecture offers flexible choices for loading and memory utilization: fast memory-mapping of a multi-gigabyte model, or on-demand partial data loading with very modest memory requirements. The implemented system runs successfully on several different platforms, under different operating systems, even when the n-gram model file is much larger than available memory. Experimental evaluation results are presented for the Google Web1T collection and the Gigaword corpus.

(Received April 11 2011)

(Revised November 23 2011)

(Accepted November 30 2011)

(Online publication January 10 2012)