Proceedings of ACL 2011

Event Discovery in Social Media Feeds

Edward Benson, Me, and Regina Barzilay
We present a novel method for record extraction from social streams such as Twitter. Unlike typical extraction setups, these environments are characterized by short, one-sentence messages with heavily colloquial speech. To further complicate matters, individual messages may not express the full relation to be uncovered, as is often assumed in extraction tasks. We develop a graphical model that addresses these problems by learning a latent set of records and a record-message alignment simultaneously; the output of our model is a set of canonical records, the values of which are consistent with aligned messages. We demonstrate that our approach is able to accurately induce event records from Twitter messages, evaluated against events from a local city guide. Our method achieves significant error reduction over baseline methods.
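
The model itself is a factor graph over latent records and alignments; as a rough illustration of the record-induction intuition only (not the paper's actual model), here is a hypothetical greedy sketch in Python, where extract is an assumed candidate-field extractor:

    from collections import Counter, defaultdict

    def induce_records(messages, extract):
        # extract(msg) -> dict of candidate field values, e.g.
        # {"artist": "the xx", "venue": "the showbox"}; extract is an
        # assumed component, not part of the paper's model.
        groups = defaultdict(list)
        for msg in messages:
            fields = extract(msg)
            if fields.get("artist"):
                groups[fields["artist"]].append(fields)
        records = []
        for artist, aligned in groups.items():
            # Canonical record: majority vote over the aligned messages.
            votes = Counter(f["venue"] for f in aligned if f.get("venue"))
            venue = votes.most_common(1)[0][0] if votes else None
            records.append({"artist": artist, "venue": venue})
        return records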

Proceedings of ACL 2011

Content Models with Attitude

Christina Sauper, Me, and Regina Barzilay
We present a probabilistic topic model for jointly identifying properties and attributes of social media review snippets. Our model simultaneously learns a set of properties of a product and captures aggregate user sentiments towards these properties. This approach directly enables discovery of highly rated or inconsistent properties of a product. Our model admits an efficient variational mean-field inference algorithm which can be parallelized and run on large snippet collections. We evaluate our model on a large corpus of snippets from Yelp reviews to assess property and attribute prediction. We demonstrate that it outperforms applicable baselines by a considerable margin.
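
The inference the abstract refers to is coordinate-ascent mean-field. As a hedged illustration (the standard LDA-style updates of Blei et al. (2003) for a single snippet, not the paper's exact model), note that per-snippet updates touch only the shared topic parameters, which is what makes the procedure parallelizable:

    import numpy as np
    from scipy.special import digamma

    def mean_field_snippet(word_ids, beta, alpha=0.1, iters=50):
        # beta: K x V topic-word probabilities (held fixed here);
        # alpha: symmetric Dirichlet prior. Returns the variational
        # Dirichlet parameters gamma and per-word topic posteriors phi.
        K = beta.shape[0]
        gamma = np.full(K, alpha + len(word_ids) / K)
        for _ in range(iters):
            # q(z_n = k) proportional to beta[k, w_n] * exp(E_q[log theta_k])
            log_theta = digamma(gamma) - digamma(gamma.sum())
            phi = beta[:, word_ids].T * np.exp(log_theta)
            phi /= phi.sum(axis=1, keepdims=True)
            gamma = alpha + phi.sum(axis=0)  # q(theta) update
        return gamma, phi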

Proceedings of CoNLL 2011

Modeling Syntactic Context Improves Morphological Segmentation

Yoong Keok Lee, Me, and Regina Barzilay
The connection between part-of-speech (POS) categories and morphological properties is well-documented in linguistics but underutilized in text processing systems. This paper proposes a novel model for morphological segmentation that is driven by this connection. Our model learns that words with common affixes are likely to be in the same syntactic category and uses learned syntactic categories to refine the segmentation boundaries of words. Our results demonstrate that incorporating POS categorization yields substantial performance gains on morphological segmentation of Arabic.

Proceedings of EMNLP 2010

Incorporating Content Structure into Text Analysis Applications

Christina Sauper, Me, and Regina Barzilay

Proceedings of EMNLP 2010

Simple Type-Level Unsupervised POS Tagging

Yoong Keok Lee, Me, and Regina Barzilay
Part-of-speech (POS) tag distributions are known to exhibit sparsity: a word is likely to take a single predominant tag in a corpus. Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy. However, in existing systems, this accuracy gain comes with a steep increase in model complexity. This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments. In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training. Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts. On several languages, we report performance exceeding that of more complex state-of-the-art systems.
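
To make the sparsity property concrete, here is a small illustrative measurement (not part of the model itself): the token accuracy of an oracle that assigns every word type its single most frequent tag.

    from collections import Counter, defaultdict

    def one_tag_per_type_ceiling(tagged_corpus):
        # tagged_corpus: iterable of (word, tag) tokens. Returns the token
        # accuracy of mapping each word type to its one most frequent tag,
        # a direct measure of the sparsity the model exploits.
        by_type = defaultdict(Counter)
        for word, tag in tagged_corpus:
            by_type[word][tag] += 1
        covered = sum(c.most_common(1)[0][1] for c in by_type.values())
        total = sum(sum(c.values()) for c in by_type.values())
        return covered / total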

Proceedings of ACL 2010

An Entity-Level Approach to Information Extraction

Me and Dan Klein
We present a generative model of template-filling in which coreference resolution and role assignment are jointly determined. Underlying template roles first generate abstract entities, which in turn generate concrete textual mentions. On the standard corporate acquisitions dataset, joint resolution in our entity-level model reduces error over a mention-level discriminative approach by up to 20%.

Proceedings of NAACL 2010 [BEST PAPER AWARD]

Coreference Resolution in a Modular, Entity-Centered Model

Me and Dan Klein
Coreference resolution is governed by syntactic, semantic, and discourse constraints. We present a generative, model-based approach in which each of these factors is modularly encapsulated and learned in a primarily unsupervised manner. Our semantic representation first hypothesizes an underlying set of latent entity types, which generate specific entities that in turn render individual mentions. By sharing lexical statistics at the level of abstract entity types, our model is able to substantially reduce semantic compatibility errors, resulting in the best results to date on the complete end-to-end coreference task.

Proceedings of NAACL 2009

Exploring Content Models for Multi-Document Summarization

Me and Lucy Vanderwende
We present an exploration of generative probabilistic models for multi-document summarization. Beginning with a simple word frequency-based model (Nenkova and Vanderwende, 2005), we construct a sequence of models each injecting more structure into the representation of document set content and exhibiting ROUGE gains along the way. Our final model, HIERSUM, utilizes a hierarchical LDA-style model (Blei et al., 2004) to represent content specificity as a hierarchy of topic vocabulary distributions. At the task of producing generic DUC-style summaries, HIERSUM yields state-of-the-art ROUGE performance and in pairwise user evaluation strongly outperforms Toutanova et al. (2007)'s state-of-the-art discriminative system. We also explore HIERSUM's capacity to produce multiple 'topical summaries' in order to facilitate content discovery and navigation.
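
The starting point, the word-frequency model of Nenkova and Vanderwende (2005), is easy to sketch. The version below is a minimal reconstruction of that baseline, not HIERSUM itself:

    from collections import Counter

    def sumbasic(sentences, length_budget):
        # Greedy word-frequency summarization: repeatedly pick the sentence
        # whose words are most probable on average, then square the
        # probabilities of the words it used to discourage redundancy.
        # Assumes non-empty sentences.
        tokenized = [s.lower().split() for s in sentences]
        counts = Counter(w for toks in tokenized for w in toks)
        total = sum(counts.values())
        prob = {w: c / total for w, c in counts.items()}
        summary, used = [], set()
        while len(summary) < length_budget and len(used) < len(sentences):
            best = max(
                (i for i in range(len(sentences)) if i not in used),
                key=lambda i: sum(prob[w] for w in tokenized[i])
                              / max(1, len(tokenized[i])),
            )
            summary.append(sentences[best])
            used.add(best)
            for w in tokenized[best]:
                prob[w] **= 2  # down-weight words already covered
        return summary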

Proceedings of ACL 2009

Better Word Alignments with Supervised ITG Models

Me, John Blitzer, and Dan Klein
This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints. We consider maximum margin and conditional likelihood objectives, including the presentation of a new normal form grammar for canonicalizing derivations. Even for non-ITG sentence pairs, we show that it is possible to learn ITG alignment models by simple relaxations of structured discriminative learning objectives. For efficiency, we describe a set of pruning techniques that together allow us to align sentences two orders of magnitude faster than naive bitext CKY parsing. Finally, we introduce many-to-one block alignment features, which significantly improve our ITG models. Altogether, our method results in the best reported AER numbers for Chinese-English and a performance improvement of 1.1 BLEU over GIZA++ alignments.

Proceedings of EMNLP 2009

Simple Coreference Resolution with Rich Syntactic and Semantic Features

Me and Dan Klein
Coreference systems are driven by syntactic, semantic, and discourse constraints. We present a simple approach which completely modularizes these three aspects. In contrast to much current work, which focuses on learning and on the discourse component, our system is deterministic and is driven entirely by syntactic and semantic compatibility as learned from a large, unlabeled corpus. Despite its simplicity and discourse naivete, our system substantially outperforms all unsupervised systems and most supervised ones. Primary contributions include (1) the presentation of a simple-to-reproduce, high-performing baseline and (2) the demonstration that most remaining errors can be attributed to syntactic and semantic factors external to the coreference phenomenon (and perhaps best addressed by non-coreference systems).
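
A minimal sketch of such a deterministic resolver, with hypothetical gender/number attributes standing in for the syntactic and semantic compatibility the real system learns from unlabeled text:

    def compatible(m, a):
        # Hypothetical attribute check; the actual system derives semantic
        # compatibility statistics from a large unlabeled corpus.
        return (m["number"] == a["number"]
                and (m["gender"] == a["gender"]
                     or "unknown" in (m["gender"], a["gender"])))

    def resolve(mentions):
        # Link each mention to the closest preceding compatible mention;
        # mentions: list of dicts with "text", "gender", "number" keys.
        links = {}
        for i, m in enumerate(mentions):
            for j in range(i - 1, -1, -1):
                if compatible(m, mentions[j]):
                    links[i] = j
                    break
        return links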

Proceedings of ACL 2008

Learning Bilingual Lexicons from Monolingual Corpora

Me, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein
We present a method for learning bilingual translation lexicons from monolingual corpora. Word types in each language are characterized by purely monolingual features, such as context counts and orthographic substrings. Translations are induced using a generative model based on canonical correlation analysis, which explains the monolingual lexicons in terms of latent matchings. We show that high-precision lexicons can be learned in a variety of language pairs and from a range of corpus types.
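
As a rough sketch of the CCA idea (simplified to a seed-supervised projection with scikit-learn rather than the paper's generative matching model; featurize, the seed-pair convention, and the dimensionality k are assumptions):

    import numpy as np
    from sklearn.cross_decomposition import CCA

    def match_lexicons(src_words, tgt_words, featurize, n_seed, k=20):
        # Assumes the first n_seed entries of src_words and tgt_words are
        # known translation pairs, and featurize(word) returns a fixed-size
        # monolingual feature vector (e.g. character n-gram counts plus
        # context counts). k must be at most min(n_seed, feature dims).
        X = np.array([featurize(w) for w in src_words], dtype=float)
        Y = np.array([featurize(w) for w in tgt_words], dtype=float)
        cca = CCA(n_components=k).fit(X[:n_seed], Y[:n_seed])
        Xc, Yc = cca.transform(X, Y)
        Xc /= np.linalg.norm(Xc, axis=1, keepdims=True)
        Yc /= np.linalg.norm(Yc, axis=1, keepdims=True)
        sims = Xc @ Yc.T  # cosine similarity in the shared latent space
        return {s: tgt_words[i] for s, i in zip(src_words, sims.argmax(axis=1))}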

Proceedings of EMNLP 2008

Coarse-to-Fine Syntactic Machine Translation using Language Projections

Slav Petrov, Me, and Dan Klein
The intersection of tree transducer-based translation models with n-gram language models results in huge dynamic programs for machine translation decoding. We propose a multipass, coarse-to-fine approach in which the language model complexity is incrementally introduced. In contrast to previous order-based bigram-to-trigram approaches, we focus on encoding-based methods, which use a clustered encoding of the target language. Across various hierarchical encoding schemes and for multiple language pairs, we show speed-ups of up to 50 times over single-pass decoding while improving BLEU score. Moreover, our entire decoding cascade for trigram language models is faster than the corresponding bigram pass alone of a bigram-to-trigram decoder.

Proceedings of ICML 2008

Fully Distributed EM for Very Large Datasets

Jason Wolfe, Me, and Dan Klein
In EM and related algorithms, E-step computations distribute easily, because data items are independent given parameters. For very large data sets, however, even storing all of the parameters in a single node for the M-step can be impractical. We present a framework that fully distributes the entire EM procedure. Each node interacts only with parameters relevant to its data, sending messages to other nodes along a junction-tree topology. We demonstrate improvements over a MapReduce topology, on two tasks: word alignment and topic modeling.
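
A simplified in-process simulation of one round; the real system runs across machines and routes the count messages along a junction-tree topology. The expected_counts function and the parameter layout are assumptions:

    from collections import Counter, defaultdict

    def distributed_em_round(shards, params, expected_counts, owner):
        # params: dict (context, outcome) -> prob. All outcomes of a context
        # live on the node owner(context), so M-step renormalization is
        # purely local to that node; no node ever holds all parameters.
        # expected_counts(shard, params) -> dict (context, outcome) -> count.
        inboxes = defaultdict(Counter)  # node id -> partial counts received
        # E-step: each shard's node computes counts only for the parameters
        # its data touches, then "sends" each count to that parameter's owner.
        for shard in shards:
            for (ctx, out), c in expected_counts(shard, params).items():
                inboxes[owner(ctx)][(ctx, out)] += c
        # M-step: each owner renormalizes just the contexts it owns.
        new_params = {}
        for counts in inboxes.values():
            totals = Counter()
            for (ctx, out), c in counts.items():
                totals[ctx] += c
            for (ctx, out), c in counts.items():
                new_params[(ctx, out)] = c / totals[ctx]
        return new_params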

Computational Linguistics 2008

A Global Joint Model for Semantic Role Labeling

Kristina Toutanova, Me, and Christopher D. Manning

Proceedings of ACL 2007

Unsupervised Coreference Resolution in a Nonparametric Bayesian Model

Me and Dan Klein
We present an unsupervised, nonparametric Bayesian approach to coreference resolution which models both global entity identity across a corpus as well as the sequential anaphoric structure within each document. While most existing coreference work is driven by pairwise decisions, our model is fully generative, producing each mention from a combination of global entity properties and local attentional state. Despite being unsupervised, our system achieves a 70.3 MUC F1 measure on the MUC-6 test set, broadly in the range of some recent supervised results.
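
A hedged sketch of the corpus-level part only: one Chinese-restaurant-process sweep over mention head words, with an assumed head vocabulary size for the base distribution. The paper's model adds per-document anaphoric structure on top of this global entity component:

    import random
    from collections import Counter

    VOCAB = 50  # assumed head-word vocabulary size for the base distribution

    def crp_sweep(heads, alpha=1.0):
        # Each mention joins an existing entity with weight proportional to
        # entity size times a smoothed head-word likelihood, or opens a new
        # entity with weight proportional to alpha times the base measure.
        entities, assignment = [], []
        for head in heads:
            weights = []
            for ent in entities:
                size = sum(ent.values())
                weights.append(size * (ent[head] + 0.1) / (size + 0.1 * VOCAB))
            weights.append(alpha / VOCAB)  # open a fresh entity
            k = random.choices(range(len(weights)), weights=weights)[0]
            if k == len(entities):
                entities.append(Counter())
            entities[k][head] += 1
            assignment.append(k)
        return assignment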

Proceedings of NAACL 2007

Approximate Factoring for A* Search

Me, John DeNero, and Dan Klein
We present a novel method for creating A* estimates for structured search problems. In our approach, we project a complex model onto multiple simpler models for which exact inference is efficient. We use an optimization framework to estimate parameters for these projections in a way which bounds the true costs. Similar to Klein and Manning (2003), we then combine completion estimates from the simpler models to guide search in the original complex model. We apply our approach to bitext parsing and lexicalized parsing, demonstrating its effectiveness in these domains.
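
A generic sketch of the search side, assuming the projections have already been fit so that their completion estimates sum to a lower bound on the true cost-to-go (which keeps the heuristic admissible and the search exact):

    import heapq
    import itertools

    def astar(start, is_goal, successors, estimates):
        # successors(s) -> iterable of (next_state, edge_cost >= 0);
        # estimates: list of functions s -> lower bound on cost-to-go from s,
        # one per projected model. Their sum is the combined heuristic.
        h = lambda s: sum(est(s) for est in estimates)
        tie = itertools.count()  # tiebreaker so states never get compared
        frontier = [(h(start), 0.0, next(tie), start)]
        best_g = {}
        while frontier:
            f, g, _, state = heapq.heappop(frontier)
            if is_goal(state):
                return g, state
            if best_g.get(state, float("inf")) <= g:
                continue
            best_g[state] = g
            for nxt, cost in successors(state):
                heapq.heappush(
                    frontier, (g + cost + h(nxt), g + cost, next(tie), nxt))
        return None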

Proceedings of AAAI 2007

A* Search via Approximate Factoring

Me, John DeNero, and Dan Klein

Proceedings of NAACL 2006

Prototype-driven Learning for Sequence Models

Me and Dan Klein
We investigate prototype-driven learning for primarily unsupervised sequence modeling. Prior knowledge is specified declaratively, by providing a few canonical examples of each target annotation label. This sparse prototype information is then propagated across a corpus using distributional similarity features in a log-linear generative model. On part-of-speech induction in English and Chinese, as well as an information extraction task, prototype features provide substantial error rate reductions over competitive baselines and outperform previous work. For example, we can achieve an English part-of-speech tagging accuracy of 80.5% using only three examples of each tag and no dictionary constraints. We also compare to semi-supervised learning and discuss the system's error trends.
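
A minimal sketch of the propagation step, assuming raw context-count vectors and a hypothetical similarity cutoff (the full model uses reduced context representations, and the resulting features enter a log-linear generative sequence model rather than being used directly):

    import numpy as np

    def prototype_features(context_vecs, prototypes, threshold=0.35):
        # context_vecs: dict word -> np.array of context counts;
        # prototypes: dict label -> list of canonical example words;
        # threshold: assumed cosine-similarity cutoff. Returns, per word,
        # the set of prototype-tie features to add to the model.
        def unit(v):
            return v / (np.linalg.norm(v) + 1e-12)
        feats = {w: set() for w in context_vecs}
        for label, examples in prototypes.items():
            for proto in examples:
                p = unit(context_vecs[proto])
                for w, v in context_vecs.items():
                    if unit(v) @ p > threshold:
                        feats[w].add("proto=" + proto)
        return feats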

Proceedings of ACL 2006

Prototype-driven Grammar Induction

Me and Dan Klein
We investigate prototype-driven learning for primarily unsupervised grammar induction. Prior knowledge is specified declaratively, by providing a few canonical examples of each target phrase type. This sparse prototype information is then propagated across a corpus using distributional similarity features, which augment an otherwise standard PCFG model. We show that distributional features are effective at distinguishing bracket labels, but not determining bracket locations. To improve the quality of the induced trees, we combine our PCFG induction with the CCM model of Klein and Manning (2002), which has complementary strengths: it identifies brackets but does not label them. Using only a handful of prototypes, we show substantial improvements over naive PCFG induction for English and Chinese grammar induction.

Proceedings of EMNLP 2005

Robust Textual Inference via Graph Matching

Me, Andrew Y. Ng, and Christopher D. Manning

Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment 2005

Robust Textual Inference Using Diverse Knowledge Sources

Rajat Raina, Me, Christopher Cox, Jenny Finkel, Jeff Michels, Kristina Toutanova, Bill MacCartney, Marie-Catherine de Marneffe, Christopher D. Manning, and Andrew Y. Ng

Proceedings of CoNLL 2005

A Joint Model for Semantic Role Labeling

Kristina Toutanova, Me, and Christopher D. Manning

Proceedings of ACL 2005

Joint Learning Improves Semantic Role Labeling

Kristina Toutanova, Me, and Christopher D. Manning