Proceedings of WSDM 2022 [BEST PAPER AWARD]

Learning Stance Embeddings from Signed Social Graphs

John Pougué-Biyong, Akshay Gupta, Me, and Ahmed El-Kishky
A key challenge in social network analysis is understanding the position, or stance, of people in the graph on a large set of topics. While past work has modeled (dis)agreement in social networks using signed graphs, these approaches have not modeled agreement patterns across a range of correlated topics. For instance, disagreement on one topic may make disagreement (or agreement) more likely for related topics. We propose the Stance Embeddings Model (SEM), which jointly learns embeddings for each user and topic in signed social graphs with distinct edge types for each topic. By jointly learning user and topic embeddings, SEM is able to perform cold-start topic stance detection, predicting the stance of a user on topics for which we have not observed their engagement. We demonstrate the effectiveness of SEM using two large-scale Twitter signed graph datasets we open-source. One dataset, TwitterSG, labels (dis)agreements using engagements between users via tweets to derive topic-informed, signed edges. The other, BirdwatchSG, leverages community reports on misinformation and misleading content. On TwitterSG and BirdwatchSG, SEM shows a 39% and 26% error reduction respectively against strong baselines.
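
The core idea lends itself to a compact sketch. Below is a minimal, hypothetical version (plain logistic loss over toy edges; the paper's actual objective is skip-gram-based): user and topic vectors are trained so that the sign of a topic-tagged edge is predicted from both user embeddings plus the topic embedding, which is what lets the model generalize to unseen user-topic pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_topics, dim, lr = 100, 5, 16, 0.05
U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
T = rng.normal(scale=0.1, size=(n_topics, dim))  # topic embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# signed, topic-tagged edges: (user u, user v, topic, 1=agree / 0=disagree)
edges = [(1, 2, 0, 1), (1, 3, 0, 0), (2, 3, 4, 1)]

for _ in range(200):
    for u, v, t, s in edges:
        ctx = U[v] + T[t]              # topic-conditioned view of user v
        p = sigmoid(U[u] @ ctx)        # predicted probability of agreement
        g = p - s                      # logistic-loss gradient
        gu, gc = g * ctx, g * U[u]
        U[u] -= lr * gu
        U[v] -= lr * gc
        T[t] -= lr * gc
```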

Proceedings of KDD 2022

kNN-Embed: Locally Smoothed Embedding Mixtures for Multi-interest Candidate Retrieval

Ahmed El-Kishky, Thomas Markovich, Kenny Leung, Frank Portman, Me, and Ying Xiao
Candidate generation is the first stage in recommendation systems, where a light-weight system is used to retrieve potentially relevant items for an input user. These candidate items are then ranked and pruned in later stages of recommender systems using a more complex ranking model. Since candidate generation sits at the top of the recommendation funnel, it is important to retrieve a high-recall candidate set to feed into downstream ranking models. A common approach for candidate generation is to leverage approximate nearest neighbor (ANN) search from a single dense query embedding; however, this approach can yield a low-diversity result set with many near duplicates. As users often have multiple interests, candidate retrieval should ideally return a diverse set of candidates reflective of the user’s multiple interests. To this end, we introduce kNN-Embed, a general approach to improving diversity in dense ANN-based retrieval. kNN-Embed represents each user as a smoothed mixture over learned item clusters that represent distinct ‘interests’ of the user. By querying each of a user’s mixture components in proportion to their mixture weights, we retrieve a high-diversity set of candidates reflecting elements from each of a user’s interests. We experimentally compare kNN-Embed to standard ANN candidate retrieval, and show significant improvements in overall recall and improved diversity across three datasets. Accompanying this work, we open-source a large Twitter follow-graph dataset to spur further research in graph-mining and representation learning for recommender systems.
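
A toy sketch of the retrieval scheme (brute-force scoring stands in for the ANN index; the variable names and additive smoothing are illustrative, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(1000, 32))           # learned item embeddings
centroids = rng.normal(size=(8, 32))          # learned 'interest' clusters
cluster_of = ((items[:, None] - centroids[None]) ** 2).sum(-1).argmin(1)

def retrieve(history, k=20, alpha=1.0):
    # Represent the user as a smoothed mixture over item clusters.
    counts = np.bincount(cluster_of[history], minlength=len(centroids))
    weights = (counts + alpha) / (counts.sum() + alpha * len(centroids))
    results = []
    for c, w in enumerate(weights):
        slots = int(round(k * w))             # budget proportional to weight
        if slots == 0:
            continue
        members = np.where(cluster_of == c)[0]
        # Per-cluster query: mean of the user's items in this cluster,
        # falling back to the global mean for clusters they never touched.
        in_c = history[cluster_of[history] == c]
        q = items[in_c].mean(0) if len(in_c) else items[history].mean(0)
        scores = items[members] @ q           # brute force standing in for ANN
        results.extend(members[np.argsort(-scores)[:slots]])
    return results

print(len(retrieve(rng.integers(0, 1000, size=50))))
```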

Proceedings of NAACL 2022

CTM: A Model for Large-Scale Multi-View Tweet Topic Classification

Vivek Kulkarni, Kenny Leung, and Me
Automatically associating social media posts with topics is an important prerequisite for effective search and recommendation on many social media platforms. However, topic classification of such posts is quite challenging because of (a) a large topic space, (b) short text with weak topical cues, and (c) multiple topic associations per post. In contrast to most prior work, which focuses only on classifying posts into a small number of topics, we consider the task of large-scale topic classification in the context of Twitter, where the topic space is far larger and each Tweet may have multiple topic associations. We address the challenges above by proposing a novel neural model, CTM, that (a) supports a large topic space and (b) takes a holistic approach to tweet content modeling -- leveraging multi-modal content, author context, and deeper semantic cues in the Tweet. Our method offers an effective way to classify Tweets into topics at scale, yielding superior performance to other approaches (a sizable relative lift in median average precision score), and has been successfully deployed in production at Twitter.

Proceedings of NeurIPS 2022

TweetNERD: End to End Entity Linking Benchmark for Tweets

Shubhanshu Mishra, Aman Saini, Raheleh Makki, Sneha Mehta, Me, and Ali Mollahosseini
Named Entity Recognition and Disambiguation (NERD) systems are foundational for information retrieval, question answering, event detection, and other natural language processing (NLP) applications. We introduce TweetNERD, a dataset of 340K+ Tweets across 2010-2021, for benchmarking NERD systems on Tweets. This is the largest and most temporally diverse open-sourced benchmark dataset for NERD on Tweets and can be used to facilitate research in this area. We describe the evaluation setup with TweetNERD for three NERD tasks: Named Entity Recognition (NER), Entity Linking with True Spans (EL), and End to End Entity Linking (End2End); and provide performance of existing publicly available methods on specific TweetNERD splits. TweetNERD is available at https://doi.org/10.5281/zenodo.6617192 under the Creative Commons Attribution 4.0 International (CC BY 4.0) license [Mishra et al., 2022]. Check out more details at https://github.com/twitter-research/TweetNERD.

arXiv preprint 2022

TwHIN: Embedding the Twitter Heterogeneous Information Network for Personalized Recommendation

Ahmed El-Kishky, Thomas Markovich, Serim Park, Chetan Verma, Baekjin Kim, Ramy Eskander, Yury Malkov, Frank Portman, Sofía Samaniego, Ying Xiao, and Me
Social networks, such as Twitter, form a heterogeneous information network (HIN) where nodes represent domain entities (e.g., user, content, advertiser, etc.) and edges represent one of many entity interactions (e.g., a user re-sharing content or "following" another). Interactions from multiple relation types can encode valuable information about social network entities not fully captured by a single relation; for instance, a user's preference for accounts to follow may depend on both user-content engagement interactions and the other users they follow. In this work, we investigate knowledge-graph embeddings for entities in the Twitter HIN (TwHIN); we show that these pretrained representations yield significant offline and online improvement for a diverse range of downstream recommendation and classification tasks: personalized ads rankings, account follow-recommendation, offensive content detection, and search ranking. We discuss design choices and practical challenges of deploying industry-scale HIN embeddings, including compressing them to reduce end-to-end model latency and handling parameter drift across versions.
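
At its core this is knowledge-graph embedding over typed edges. A toy translation-style (TransE-like) trainer, using a squared-distance hinge for simplicity, conveys the shape of the objective; the names and data here are hypothetical, and the production system trains far larger models on specialized infrastructure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ent, n_rel, dim, lr, margin = 50, 3, 16, 0.01, 1.0
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation translations

def dist(h, r, t):
    return np.linalg.norm(E[h] + R[r] - E[t])  # small = plausible edge

# (head, relation, tail), e.g. (user, 'follows', user), (user, 'favs', tweet)
triples = [(0, 1, 2), (3, 0, 4), (0, 2, 5)]
for _ in range(200):
    for h, r, t in triples:
        t_neg = int(rng.integers(n_ent))       # corrupted tail as a negative
        if dist(h, r, t) + margin > dist(h, r, t_neg):
            d_pos = E[h] + R[r] - E[t]
            d_neg = E[h] + R[r] - E[t_neg]
            E[h] -= lr * 2 * (d_pos - d_neg)   # descend the squared hinge
            R[r] -= lr * 2 * (d_pos - d_neg)
            E[t] += lr * 2 * d_pos
            E[t_neg] -= lr * 2 * d_neg
```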

Proceedings of Findings EMNLP 2021

LMSOC: An Approach for Socially Sensitive Pretraining

Vivek Kulkarni, Shubhanshu Mishra, and Me
While large-scale pretrained language models have been shown to learn effective linguistic representations for many NLP tasks, there remain many real-world contextual aspects of language that current approaches do not capture. For instance, consider a cloze-test 'I enjoyed the ____ game this weekend': the correct answer depends heavily on where the speaker is from, when the utterance occurred, and the speaker's broader social milieu and preferences. Although language depends heavily on the geographical, temporal, and other social contexts of the speaker, these elements have not been incorporated into modern transformer-based language models. We propose a simple but effective approach to incorporate speaker social context into the learned representations of large-scale language models. Our method first learns dense representations of social contexts using graph representation learning algorithms and then primes language model pretraining with these social context representations. We evaluate our approach on geographically-sensitive language-modeling tasks and show a substantial improvement (more than 100% relative lift on MRR) compared to baselines.
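
One plausible wiring consistent with this description (hypothetical names; a frozen, graph-derived context vector is simply given its own input position so self-attention can condition on it during pretraining):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 64
tok_emb = rng.normal(size=(vocab, dim))       # ordinary token embeddings

def socially_primed_input(token_ids, social_ctx_vec):
    # Prepend the pretrained (frozen) social-context embedding as an
    # extra position, so the model sees who/where/when alongside the text.
    tokens = tok_emb[np.asarray(token_ids)]
    return np.vstack([social_ctx_vec[None, :], tokens])

geo_vec = rng.normal(size=dim)   # e.g. a graph embedding of the speaker's region
x = socially_primed_input([5, 42, 7], geo_vec)
print(x.shape)                   # (4, 64): one context slot + three tokens
```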

Proceedings of Workshop on Noisy User-generated Text (W-NUT) @ EMNLP 2021

Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction

Shubhanshu Mishra and Me
We evaluate a simple approach to improving zero-shot multilingual transfer of mBERT on social media corpora by adding a pretraining task called translation pair prediction (TPP), which predicts whether a pair of cross-lingual texts is a valid translation. Our approach assumes access to translations (exact or approximate) between source-target language pairs, where we fine-tune a model on source language task data and evaluate the model in the target language. In particular, we focus on language pairs where transfer learning is difficult for mBERT: those where source and target languages differ in script, vocabulary, and linguistic typology. We show improvements from TPP pretraining over mBERT alone in zero-shot transfer from English to Hindi, Arabic, and Japanese on two social media tasks: NER (a 37% average relative improvement in F1 across target languages) and sentiment classification (12% relative improvement in F1) on social media text, while also benchmarking on a non-social media task of Universal Dependency POS tagging (6.7% relative improvement in accuracy). Our results are promising given the lack of social media bitext corpora. Our code can be found at: https://github.com/twitter-research/multilingual-alignment-tpp.
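
The TPP task itself is just binary classification over text pairs. A self-contained toy version (a crude character featurizer stands in for mBERT, and the example pairs are made up; labels say whether the pair is a translation):

```python
import numpy as np

def featurize(src, tgt, dim=256):
    # Stand-in for mBERT over "[CLS] src [SEP] tgt [SEP]": a fixed-length
    # character encoding, just so the sketch runs end to end.
    x = np.zeros(dim)
    for i, ch in enumerate((src + "\x00" + tgt)[:dim]):
        x[i] = ord(ch) / 128.0
    return x

# (source text, target text, 1 = valid translation pair)
pairs = [("the cat sleeps", "billi soti hai", 1),
         ("the cat sleeps", "kal baarish hogi", 0)]

w, b, lr = np.zeros(256), 0.0, 0.1
for _ in range(100):
    for src, tgt, y in pairs:
        x = featurize(src, tgt)
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # P(pair is a translation)
        w -= lr * (p - y) * x                   # logistic-regression update
        b -= lr * (p - y)
```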

Proceedings of ACM SIGMOD 2020

Entity Matching in the Wild: A Consistent and Versatile Framework to Unify Data in Industrial Applications

Yan Yan, Stephen Meyles, Me, and Dan Suciu
Entity matching--the task of clustering duplicated database records to underlying entities--has become an increasingly critical component in modern data integration management. Amperity provides a platform for businesses to manage customer data that utilizes a machine-learning approach to entity matching, resolving billions of customer records on a daily basis. We face several challenges in deploying entity matching in industrial applications at scale, challenges that are less prominent in the literature. These challenges include: (1) Providing not just a single entity clustering, but supporting clusterings at multiple confidence levels to enable downstream applications with varying precision/recall trade-off needs. (2) Many customer record attributes may be systematically missing from different sources of data, creating many pairs of records in a cluster that appear to not match due to incomplete, rather than conflicting …
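
Challenge (1) has a compact illustration: build one hierarchy over records and cut it at several distances, so each downstream application picks its own precision/recall point. This is a toy with random feature vectors; the real system clusters from learned pairwise match scores:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
records = rng.normal(size=(6, 4))        # toy feature vectors for 6 records
Z = linkage(records, method="average")   # one tree over all records

# One tree, many clusterings: tighter cuts yield higher-precision entities,
# looser cuts yield higher-recall ones.
for cut in (1.0, 2.0, 4.0):
    print(cut, fcluster(Z, t=cut, criterion="distance"))
```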

Proceedings of ACL 2011

Event Discovery in Social Media Feeds

Edward Benson, Me, and Regina Barzilay
We present a novel method for record extraction from social streams such as Twitter. Unlike typical extraction setups, these environments are characterized by short, one-sentence messages with heavily colloquial speech. To further complicate matters, individual messages may not express the full relation to be uncovered, as is often assumed in extraction tasks. We develop a graphical model that addresses these problems by learning a latent set of records and a record-message alignment simultaneously; the output of our model is a set of canonical records, the values of which are consistent with aligned messages. We demonstrate that our approach is able to accurately induce event records from Twitter messages, evaluated against events from a local city guide. Our method achieves significant error reduction over baseline methods.

Proceedings of ACL 2011

Content Models with Attitude

Christina Sauper, Me, and Regina Barzilay
We present a probabilistic topic model for jointly identifying properties and attributes of social media review snippets. Our model simultaneously learns a set of properties of a product and captures aggregate user sentiments towards these properties. This approach directly enables discovery of highly rated or inconsistent properties of a product. Our model admits an efficient variational mean-field inference algorithm which can be parallelized and run on large snippet collections. We evaluate our model on a large corpus of snippets from Yelp reviews to assess property and attribute prediction. We demonstrate that it outperforms applicable baselines by a considerable margin.

Proceedings of CoNLL 2011

Modeling Syntactic Context Improves Morphological Segmentation

Yoong Keok Lee, Me, and Regina Barzilay
The connection between part-of-speech (POS) categories and morphological properties is well-documented in linguistics but underutilized in text processing systems. This paper proposes a novel model for morphological segmentation that is driven by this connection. Our model learns that words with common affixes are likely to be in the same syntactic category and uses learned syntactic categories to refine the segmentation boundaries of words. Our results demonstrate that incorporating POS categorization yields substantial performance gains on morphological segmentation of Arabic.

Proceedings of EMNLP 2010

Incorporating Content Structure into Text Analysis Applications

Christina Sauper, Me, and Regina Barzilay

Proceedings of EMNLP 2010

Simple Type-Level Unsupervised POS Tagging

Yoong Keok Lee, Me, and Regina Barzilay
Part-of-speech (POS) tag distributions are known to exhibit sparsity -- a word is likely to take a single predominant tag in a corpus. Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy. However, in existing systems, this expansion comes with a steep increase in model complexity. This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments. In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training. Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts. On several languages, we report performance exceeding that of more complex state-of-the-art systems.

Proceedings of ACL 2010

An Entity-Level Approach to Information Extraction

Me and Dan Klein
We present a generative model of template-filling in which coreference resolution and role assignment are jointly determined. Underlying template roles first generate abstract entities, which in turn generate concrete textual mentions. On the standard corporate acquisitions dataset, joint resolution in our entity-level model reduces error over a mention-level discriminative approach by up to 20%.

Proceedings of NAACL 2010 [BEST PAPER AWARD]

Coreference Resolution in a Modular, Entity-Centered Model

Me and Dan Klein
Coreference resolution is governed by syntactic, semantic, and discourse constraints. We present a generative, model-based approach in which each of these factors is modularly encapsulated and learned in a primarily unsupervised manner. Our semantic representation first hypothesizes an underlying set of latent entity types, which generate specific entities that in turn render individual mentions. By sharing lexical statistics at the level of abstract entity types, our model is able to substantially reduce semantic compatibility errors, resulting in the best results to date on the complete end-to-end coreference task.

Proceedings of NAACL 2009

Exploring Content Models for Multi-Document Summarization

Me and Lucy Vanderwende
We present an exploration of generative probabilistic models for multi-document summarization. Beginning with a simple word frequency based model (Nenkova and Vanderwende, 2005), we construct a sequence of models each injecting more structure into the representation of document set content and exhibiting ROUGE gains along the way. Our final model, HIERSUM, utilizes a hierarchical LDA-style model (Blei et al., 2004) to represent content specificity as a hierarchy of topic vocabulary distributions. At the task of producing generic DUC-style summaries, HIERSUM yields state-of-the-art ROUGE performance and in pairwise user evaluation strongly outperforms Toutanova et al. (2007)'s state-of-the-art discriminative system. We also explore HIERSUM's capacity to produce multiple 'topical summaries' in order to facilitate content discovery and navigation.
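
The word-frequency starting point (Nenkova and Vanderwende, 2005) is simple enough to sketch: greedily pick the sentence whose words are most probable on average, then square the probabilities of used words to discourage redundancy. This is a paraphrase of SumBasic, not the paper's code:

```python
from collections import Counter

def sumbasic(sentences, max_words=100):
    toks = [s.lower().split() for s in sentences]
    counts = Counter(w for s in toks for w in s)
    total = sum(counts.values())
    p = {w: c / total for w, c in counts.items()}
    chosen, used = [], 0
    remaining = list(range(len(sentences)))
    while remaining and used < max_words:
        # score = average word probability; favors frequent content words
        best = max(remaining,
                   key=lambda i: sum(p[w] for w in toks[i]) / max(len(toks[i]), 1))
        remaining.remove(best)
        chosen.append(sentences[best])
        used += len(toks[best])
        for w in toks[best]:
            p[w] **= 2        # squaring update penalizes re-used words
    return chosen

print(sumbasic(["the game was close", "the game ran long", "rain fell"], 8))
```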

Proceedings of ACL 2009

Better Word Alignments with Supervised ITG Models

Me, John Blitzer, and Dan Klein
This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints. We consider maximum margin and conditional likelihood objectives, including the presentation of a new normal form grammar for canonicalizing derivations. Even for non-ITG sentence pairs, we show that it is possible to learn ITG alignment models by simple relaxations of structured discriminative learning objectives. For efficiency, we describe a set of pruning techniques that together allow us to align sentences two orders of magnitude faster than naive bitext CKY parsing. Finally, we introduce many-to-one block alignment features, which significantly improve our ITG models. Altogether, our method results in the best reported AER numbers for Chinese-English and a performance improvement of 1.1 BLEU over GIZA++ alignments.

Proceedings of EMNLP 2009

Simple Coreference Resolution with Rich Syntactic and Semantic Features

Me and Dan Klein
Coreference systems are driven by syntactic, semantic, and discourse constraints. We present a simple approach which completely modularizes these three aspects. In contrast to much current work, which focuses on learning and on the discourse component, our system is deterministic and is driven entirely by syntactic and semantic compatibility as learned from a large, unlabeled corpus. Despite its simplicity and discourse naivete, our system substantially outperforms all unsupervised systems and most supervised ones. Primary contributions include (1) the presentation of a simple-to-reproduce, high-performing baseline and (2) the demonstration that most remaining errors can be attributed to syntactic and semantic factors external to the coreference phenomenon (and perhaps best addressed by non-coreference systems).

Proceedings of ACL 2008

Learning Bilingual Lexicons from Monolingual Corpora

Me, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein
We present a method for learning bilingual translation lexicons from monolingual corpora. Word types in each language are characterized by purely monolingual features, such as context counts and orthographic substrings. Translations are induced using a generative model based on canonical correlation analysis, which explains the monolingual lexicons in terms of latent matchings. We show that high-precision lexicons can be learned in a variety of language pairs and from a range of corpus types.
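
A simplified sketch of the matching step using scikit-learn's CCA. Two simplifications to flag: synthetic features stand in for context counts and orthographic substrings, and a seed set of aligned pairs is assumed, whereas the paper learns the matching itself via a latent-matching generative model:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Monolingual feature vectors for 200 word types in each language,
# constructed so that row i of X translates to row i of Y.
X = rng.normal(size=(200, 50))
Y = X @ rng.normal(size=(50, 40)) + 0.1 * rng.normal(size=(200, 40))

cca = CCA(n_components=10).fit(X, Y)
Zx, Zy = cca.transform(X, Y)
Zx /= np.linalg.norm(Zx, axis=1, keepdims=True)
Zy /= np.linalg.norm(Zy, axis=1, keepdims=True)

# Induce translations as nearest neighbors in the shared latent space.
pred = (Zx @ Zy.T).argmax(axis=1)
print("precision@1:", (pred == np.arange(200)).mean())
```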

Proceedings of EMNLP 2008

Coarse-to-Fine Syntactic Machine Translation using Language Projections

Slav Petrov, Me, and Dan Klein
The intersection of tree transducer-based translation models with n-gram language models results in huge dynamic programs for machine translation decoding. We propose a multipass, coarse-to-fine approach in which the language model complexity is incrementally introduced. In contrast to previous order-based bigram-to-trigram approaches, we focus on encoding-based methods, which use a clustered encoding of the target language. Across various hierarchical encoding schemes and for multiple language pairs, we show speed-ups of up to 50 times over single-pass decoding while improving BLEU score. Moreover, our entire decoding cascade for trigram language models is faster than the corresponding bigram pass alone of a bigram-to-trigram decoder.

Proceedings of ICML 2008

Fully Distributed EM for Very Large Datasets

Jason Wolfe, Me, and Dan Klein
In EM and related algorithms, E-step computations distribute easily, because data items are independent given parameters. For very large data sets, however, even storing all of the parameters in a single node for the M-step can be impractical. We present a framework that fully distributes the entire EM procedure. Each node interacts only with parameters relevant to its data, sending messages to other nodes along a junction-tree topology. We demonstrate improvements over a MapReduce topology on two tasks: word alignment and topic modeling.
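
A miniature of the core observation, for a token-level mixture with uniform topic weights (a hypothetical setup: two 'nodes' whose shards use disjoint vocabulary, so each E-step touches only the parameter columns that node actually needs; the paper's contribution is doing this exchange along a junction tree at much larger scale):

```python
import numpy as np

rng = np.random.default_rng(0)
V, K = 10, 2                                   # vocabulary size, topics
shards = [rng.integers(0, 5, size=200),        # node 0 only sees words 0-4
          rng.integers(5, 10, size=200)]       # node 1 only sees words 5-9
theta = rng.dirichlet(np.ones(V), size=K)      # topic-word distributions

for _ in range(25):
    new_counts = np.zeros((K, V))
    for docs in shards:                        # runs independently per node
        words, n_w = np.unique(docs, return_counts=True)
        resp = theta[:, words]                 # only this node's columns
        resp = resp / resp.sum(axis=0, keepdims=True)   # p(topic | word)
        new_counts[:, words] += resp * n_w     # expected counts (the 'message')
    theta = new_counts / new_counts.sum(axis=1, keepdims=True)  # M-step
```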

Computational Linguistics 2008

A Global Joint Model for Semantic Role Labeling

Kristina Toutanova, Me, and Christopher D. Manning

Proceedings of ACL 2007

Unsupervised Coreference Resolution in a Nonparametric Bayesian Model

Me and Dan Klein
We present an unsupervised, nonparametric Bayesian approach to coreference resolution which models both global entity identity across a corpus as well as the sequential anaphoric structure within each document. While most existing coreference work is driven by pairwise decisions, our model is fully generative, producing each mention from a combination of global entity properties and local attentional state. Despite being unsupervised, our system achieves a 70.3 MUC F1 measure on the MUC-6 test set, broadly in the range of some recent supervised results.

Proceedings of NAACL 2007

Approximate Factoring for A* Search

Me, John DeNero, and Dan Klein
We present a novel method for creating A* estimates for structured search problems. In our approach, we project a complex model onto multiple simpler models for which exact inference is efficient. We use an optimization framework to estimate parameters for these projections in a way which bounds the true costs. Similar to Klein and Manning (2003), we then combine completion estimates from the simpler models to guide search in the original complex model. We apply our approach to bitext parsing and lexicalized parsing, demonstrating its effectiveness in these domains.

Proceedings of AAAI 2007

A* Search via Approximate Factoring

Me, John DeNero, and Dan Klein

Proceedings of NAACL 2006

Prototype-driven Learning for Sequence Models

Me and Dan Klein
We investigate prototype-driven learning for primarily unsupervised sequence modeling. Prior knowledge is specified declaratively, by providing a few canonical examples of each target annotation label. This sparse prototype information is then propagated across a corpus using distributional similarity features in a log-linear generative model. On part-of-speech induction in English and Chinese, as well as an information extraction task, prototype features provide substantial error rate reductions over competitive baselines and outperform previous work. For example, we can achieve an English part-of-speech tagging accuracy of 80.5% using only three examples of each tag and no dictionary constraints. We also compare to semi-supervised learning and discuss the system's error trends.
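
The prototype-feature mechanism in isolation (toy context vectors; in the paper these come from corpus statistics via SVD, and the features feed a log-linear generative sequence model rather than a standalone lookup):

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["the", "a", "dog", "cat", "ran", "slept"]
vecs = rng.normal(size=(len(words), 8))        # toy distributional vectors
vecs[1] += vecs[0]; vecs[3] += vecs[2]; vecs[5] += vecs[4]  # correlate pairs
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

prototypes = ["the", "dog", "ran"]             # a few canonical examples

def prototype_features(word, threshold=0.4):
    # Emit PROTO=<p> for each prototype the word is distributionally
    # similar to; the sparse prototype labels propagate via these features.
    v = vecs[words.index(word)]
    return [f"PROTO={p}" for p in prototypes
            if v @ vecs[words.index(p)] > threshold]

print(prototype_features("cat"))   # likely ['PROTO=dog']
```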

Proceedings of ACL 2006

Prototype-driven Grammar Induction

Me and Dan Klein
We investigate prototype-driven learning for primarily unsupervised grammar induction. Prior knowledge is specified declaratively, by providing a few canonical examples of each target phrase type. This sparse prototype information is then propagated across a corpus using distributional similarity features, which augment an otherwise standard PCFG model. We show that distributional features are effective at distinguishing bracket labels, but not determining bracket locations. To improve the quality of the induced trees, we combine our PCFG induction with the CCM model of Klein and Manning (2002), which has complementary strengths: it identifies brackets but does not label them. Using only a handful of prototypes, we show substantial improvements over naive PCFG induction for English and Chinese grammar induction.

Proceedings of EMNLP 2005

Robust Textual Inference via Graph Matching

Me, Andrew Y. Ng, and Christopher D. Manning

Proceedings of PASCAL Challenge Workshop on Recognizing Textual Entailment 2005

Robust Textual Inference Using Diverse Knowledge Sources

Rajat Raina, Me, Christopher Cox, Jenny Finkel, Jeff Michels, Kristina Toutanova, Bill MacCartney, Marie-Catherine de Marneffe, Christopher D. Manning, and Andrew Y. Ng

Proceedings of CoNLL 2005

A Joint Model for Semantic Role Labeling

Kristina Toutanova, Me, and Christopher D. Manning

Proceedings of ACL 2005

Joint Learning Improves Semantic Role Labeling

Kristina Toutanova, Me, and Christopher D. Manning