Jonathan Berant, Percy Liang
Imitation Learning of Agenda-based Semantic Parsers
Transactions of ACL 3 pp. 545-558 (2015).
Semantic parsers conventionally construct
logical forms bottom-up in a fixed order, resulting
in the generation of many extraneous
partial logical forms. In this paper, we combine
ideas from imitation learning and agenda-based
parsing to train a semantic parser that
searches partial logical forms in a more strategic
order. Empirically, our parser reduces the
number of constructed partial logical forms by
an order of magnitude, and obtains a 6x-9x
speedup over fixed-order parsing, while maintaining comparable accuracy.
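The core idea can be pictured as best-first search: partial logical forms sit on a priority queue (the agenda) and are popped in order of a learned score rather than in a fixed bottom-up schedule. The sketch below is a toy illustration; the state representation, expansion rules, and scoring function are invented stand-ins, not the paper's actual parser or trained policy.

```python
import heapq
import itertools

def agenda_parse(initial, expand, score, is_complete, max_pops=1000):
    """Best-first search over partial derivations.

    expand(state) yields successor partial derivations; score(state) is a
    stand-in for a learned heuristic. Returns the first complete derivation
    popped, plus the number of pops performed.
    """
    tie = itertools.count()                      # tie-breaker for equal scores
    agenda = [(-score(s), next(tie), s) for s in initial]
    heapq.heapify(agenda)
    pops = 0
    while agenda and pops < max_pops:
        _, _, state = heapq.heappop(agenda)
        pops += 1
        if is_complete(state):
            return state, pops
        for succ in expand(state):
            heapq.heappush(agenda, (-score(succ), next(tie), succ))
    return None, pops

# Toy demo: "derivations" are spans over a token list that grow left or right.
tokens = ["capital", "of", "france"]

def expand(state):
    (i, j), text = state
    succs = []
    if j < len(tokens):
        succs.append(((i, j + 1), text + " " + tokens[j]))
    if i > 0:
        succs.append(((i - 1, j), tokens[i - 1] + " " + text))
    return succs

def score(state):
    return state[0][1] - state[0][0]             # prefer longer spans

def is_complete(state):
    return state[0] == (0, len(tokens))

initial = [((i, i + 1), tokens[i]) for i in range(len(tokens))]
derivation, pops = agenda_parse(initial, expand, score, is_complete)
```

With a score that prefers longer spans, the search pops only three states before reaching the full derivation, whereas a fixed-order chart parser would construct every span.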
Roy Bar-Haim, Ido Dagan, Jonathan Berant
Knowledge-based Textual Inference via Parse-Tree transformations
Journal of Artificial Intelligence Research 54 pp.1-57 (2015).
Textual inference is an important component in many applications for understanding
natural language. Classical approaches to textual inference rely on logical representations
for meaning, which may be regarded as “external” to the natural language itself. However,
practical applications usually adopt shallower lexical or lexical-syntactic representations,
which correspond closely to language structure. In many cases, such approaches lack a
principled meaning representation and inference framework. We describe an inference formalism
that operates directly on language-based structures, particularly syntactic parse trees. New
trees are generated by applying inference rules, which provide a unified representation for
varying types of inferences. We use manual and automatic methods to generate these rules,
which cover generic linguistic structures as well as specific lexical-based inferences. We also
present a novel packed data structure and a corresponding inference algorithm that allow
efficient implementation of this formalism. We proved the correctness of the new algorithm
and established its efficiency analytically and empirically. The utility of our approach was
illustrated on two tasks: unsupervised relation extraction from a large corpus, and the
Recognizing Textual Entailment (RTE) benchmarks.
Yushi Wang*, Jonathan Berant*, Percy Liang
Building a Semantic Parser Overnight
Long paper in ACL 2015.
* equal contribution
How do we build a semantic parser in a new domain starting with zero training examples? We introduce a new methodology for this setting: First, we use a simple grammar to generate logical forms paired with canonical utterances. The logical forms are meant to cover the desired set of compositional operators, and the canonical utterances are meant to capture the meaning of the logical forms (although clumsily). We then use crowdsourcing to paraphrase these canonical utterances into natural utterances. The resulting data is used to train the semantic parser. We further study the role of compositionality in the resulting paraphrases. Finally, we test our methodology on seven domains and show that we can build an adequate semantic parser in just a few hours.
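A toy version of the seed step might look like the following: a tiny grammar pairs each logical-form template with a clumsy canonical utterance, and enumerating the grammar yields the pairs that would then be sent out for crowdsourced paraphrasing. The relation and entity names below are invented for illustration, not taken from the paper's domains.

```python
# Hypothetical domain vocabulary: each item pairs a logical-form symbol
# with the phrase used in canonical utterances.
RELATIONS = {"author": "author", "publicationDate": "publication date"}
ENTITIES = {"article1": "article 1", "article2": "article 2"}

def generate_pairs():
    """Enumerate (logical form, canonical utterance) pairs from the grammar.

    Canonical utterances are deliberately clumsy but meaning-preserving;
    crowd workers would paraphrase them into natural questions, producing
    the training data for the semantic parser.
    """
    pairs = []
    for rel, rel_phrase in RELATIONS.items():
        for ent, ent_phrase in ENTITIES.items():
            pairs.append((f"({rel} {ent})", f"{rel_phrase} of {ent_phrase}"))
    return pairs
```

A worker might turn "publication date of article 1" into "when was article 1 published", giving a natural utterance aligned to a known logical form.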
Jonathan Berant, Noga Alon, Ido Dagan, Jacob Goldberger
Efficient Global Learning of Entailment Graphs
Long paper in The Journal of Computational Linguistics 41(2), pp. 221-264 (2015).
Entailment rules between predicates are fundamental to many semantic-inference applications.
Consequently, learning such rules has been an active field of research in recent years.
Methods for learning entailment rules between predicates that take into account
dependencies between different rules (e.g., entailment is a transitive relation)
have been shown to improve rule quality, but suffer from scalability issues, that is,
the number of predicates handled is often quite small. In this paper, we present methods
for learning transitive graphs that contain tens of thousands of nodes, where nodes represent
predicates and edges correspond to entailment rules (termed entailment graphs). Our methods
are able to scale to a large number of predicates by exploiting structural properties of
entailment graphs such as the fact that they exhibit a “tree-like" property. We apply our
methods on two datasets and demonstrate that (a) our methods find high-quality solutions
much faster than methods proposed in the past, and (b) our methods for the first time scale
to large graphs containing 20,000 nodes and more than 100,000 edges.
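One way to picture the "tree-like" property that the scaling methods exploit: after merging strongly connected components, a forest-reducible entailment graph is one in which, under transitive reduction, every predicate directly entails at most one other predicate (its single "parent"). The check below is a simplified sketch that assumes the input graph is already transitively closed and acyclic; it is not the paper's algorithm.

```python
def is_forest_reducible(nodes, edges):
    """Tree-like check on a transitively closed, acyclic entailment graph,
    with edge (i, j) read as "i entails j": in the transitive reduction,
    every node must entail at most one predicate directly."""
    succ = {n: {j for (i, j) in edges if i == n} for n in nodes}
    for n in nodes:
        # j is a *direct* successor of n unless another successor k of n
        # also entails j (i.e., n -> k -> j already implies n -> j).
        direct = [j for j in succ[n]
                  if not any(j in succ[k] for k in succ[n] if k != j)]
        if len(direct) > 1:
            return False
    return True
```

A transitive chain (a entails b entails c, closed with a entails c) passes, while a predicate with two incomparable generalizations fails, which is exactly the structure the approximation forbids.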
Jonathan Berant*, Vivek Srikumar*, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, Christopher D. Manning
Modeling Biological Processes for Reading Comprehension
Long paper in EMNLP 2014. (Best long paper award)
* equal contribution
Machine reading calls for programs that read and understand text,
but most current work only attempts to extract facts from redundant
web-scale corpora. In this paper, we focus on a new reading
comprehension task that requires complex reasoning over a single
document. The input is a paragraph describing a biological process,
and the goal is to answer questions that require an understanding of
the relations between entities and events in the process. To answer
the questions, we first predict a rich structure representing the
process in the paragraph. Then, we map the question to a formal
query, which is executed against the predicted structure. We
demonstrate that answering questions via predicted structures
substantially improves accuracy over baselines that use shallower representations.
Jonathan Berant, Percy Liang
Semantic Parsing via Paraphrasing
Long paper in ACL 2014. (best long paper honorable mention)
A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves state-of-the-art accuracies on two recently released question-answering datasets.
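The selection step can be sketched with a deliberately crude association score, token overlap standing in for the paper's trained association and vector-space models. Candidate logical forms, each paired with its canonical realization, are reranked by how well the realization paraphrases the input; the candidates below are invented examples.

```python
def paraphrase_score(utterance, canonical):
    """Toy association score: Jaccard overlap between token sets
    (a crude stand-in for the learned paraphrase models)."""
    u = set(utterance.lower().split())
    c = set(canonical.lower().split())
    return len(u & c) / len(u | c)

def choose_logical_form(utterance, candidates):
    """candidates: (logical_form, canonical_utterance) pairs, generated
    deterministically from the utterance. Return the logical form whose
    canonical realization best paraphrases the input."""
    return max(candidates,
               key=lambda pair: paraphrase_score(utterance, pair[1]))[0]

candidates = [
    ("(birthplace obama)", "where was obama born"),
    ("(children obama)", "who are the children of obama"),
]
best = choose_logical_form("where was obama born", candidates)
```

The design point is that scoring happens entirely in natural language, so any amount of raw paraphrase text, untied to the knowledge base, can train the model.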
Xuchen Yao, Jonathan Berant, Benjamin Van Durme
Freebase QA: Information Extraction or Semantic Parsing?
Workshop on Semantic Parsing 2014.
We contrast two seemingly distinct approaches to the task of question answering (QA) using Freebase: one based on information extraction techniques, the other on semantic parsing. Results over the same test set were collected from two state-of-the-art, open-source systems, then analyzed in consultation with those systems’ creators. We conclude that the differences between these technologies, both in task performance and in how they get there, are not significant. This suggests that the semantic parsing community should target answering more compositional open-domain questions that are beyond the reach of more direct information extraction methods.
Jonathan Berant, Andrew Chou, Roy Frostig, Percy Liang
Semantic Parsing on Freebase from Question-Answer Pairs
Long paper in EMNLP 2013.
In this paper, we train a semantic parser that scales up to Freebase. Instead of relying on annotated logical forms, which is especially expensive to obtain at large
scale, we learn from question-answer pairs.
The main challenge in this setting is narrowing down the huge number of possible
logical predicates for a given question.
We tackle this problem in two ways: First, we build a coarse mapping from phrases to predicates
using a knowledge base and a large text corpus.
Second, we use a bridging operation to generate additional predicates based on neighboring predicates.
On the dataset of Cai and Yates (2013),
despite not having annotated logical forms,
our system outperforms their state-of-the-art parser.
Additionally, we collected a more realistic and challenging dataset of
question-answer pairs, on which our system improves over a natural baseline.
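In miniature, the training signal works like this: a coarse lexicon proposes candidate predicates for a question, the candidates are executed against the knowledge base, and only those producing the annotated answer are kept as positive examples. The KB triples and the lexicon below are invented for illustration, not Freebase content.

```python
# Invented toy KB and phrase-to-predicate lexicon.
KB = {
    ("france", "capital"): "paris",
    ("germany", "capital"): "berlin",
    ("france", "population"): "67m",
}
LEXICON = {"capital": ["capital", "population"], "city": ["capital"]}

def candidate_forms(question, entity):
    """Coarse mapping from question phrases to candidate (entity, predicate)
    logical forms; deliberately over-generates."""
    predicates = set()
    for word in question.lower().split():
        predicates.update(LEXICON.get(word, []))
    return [(entity, p) for p in predicates]

def consistent_forms(question, entity, answer):
    """Keep only candidates whose execution yields the gold answer --
    the supervision signal when no logical forms are annotated."""
    return [lf for lf in candidate_forms(question, entity)
            if KB.get(lf) == answer]
```

The over-generated candidate "population" is filtered out because executing it does not produce the annotated answer, which is how question-answer pairs substitute for annotated logical forms.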
Aju Thalappillil Scaria*, Jonathan Berant*, Mengqiu Wang, Peter Clark, Justin Lewis, Brittany Harding and Christopher D. Manning
Learning Biological Processes with Global Constraints
Long paper in EMNLP 2013.
* equal contribution
Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) -- specifically "How?" and "Why?" questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement compared to baselines that disregard process structure.
Oren Melamud, Jonathan Berant, Ido Dagan, Jacob Goldberger and Idan Szpektor
A Two Level Model for Context Sensitive Inference Rules
Long paper in ACL 2013. (best paper runner-up)
Automatic acquisition of inference rules for predicates has been commonly addressed by computing distributional similarity between vectors of argument words, operating at the word space level. A recent line of work, which addresses context sensitivity of rules, represented contexts in a latent topic space and computed similarity over topic vectors. We propose a novel two-level model, which computes similarities between word-level vectors that are biased by topic-level context representations. Evaluations on a naturally-distributed dataset show that our model significantly outperforms prior word-level and topic-level models. We also release a first context-sensitive inference rule set.
Hila Weisman, Jonathan Berant, Idan Szpektor and Ido Dagan
Learning Verb Inference Rules from Linguistically-Motivated Evidence
Long paper in EMNLP 2012.
Learning inference relations between verbs is at the heart of many semantic applications. However, most prior work on learning such rules focused on a rather narrow set of information sources: mainly distributional similarity, and to a lesser extent manually constructed verb co-occurrence patterns. In this
paper, we claim that it is imperative to utilize information from various textual scopes: verb co-occurrence within a sentence, verb co-occurrence within a document, as well as overall corpus statistics. To this end, we propose a much richer novel set of linguistically motivated cues for detecting entailment between verbs and combine them as features in a supervised classification framework. We empirically demonstrate that our model significantly outperforms previous methods and that information from each textual scope contributes to the verb entailment learning task.
Jonathan Berant, Ido Dagan, Meni Adler and Jacob Goldberger
Efficient Tree-based Approximation for Entailment Graph Learning
Long paper in ACL 2012.
Learning entailment rules is fundamental in many semantic-inference applications and has been an active field of research in recent years. In this paper we address the problem of learning transitive graphs that describe entailment rules between predicates (termed entailment graphs). We first identify that entailment graphs exhibit a “tree-like” property and are very similar to a novel type of graph termed
forest-reducible graph. We utilize this property to develop an iterative efficient approximation algorithm for learning the graph edges, where each iteration takes linear time. We compare our approximation algorithm to a recently-proposed state-of-the-art exact algorithm and show that it is more efficient and scalable both theoretically and empirically, while its output quality is close to that given by the optimal solution of the exact algorithm.
Naomi Zeichner, Jonathan Berant and Ido Dagan
Crowdsourcing Inference-Rule Evaluation
Short paper in ACL 2012.
The importance of inference rules to semantic applications has long been recognized and extensive work has been carried out to automatically acquire inference-rule resources. However, evaluating such resources has turned out to be a non-trivial task, slowing progress in the field. In this paper, we suggest a framework for evaluating inference-rule resources. Our framework simplifies a previously proposed
“instance-based evaluation” method that involved substantial annotator training, making it suitable for crowdsourcing. We show that our method produces a large amount of annotations with high inter-annotator agreement, at a low cost and within a short period of time, without requiring the training of expert annotators.
Meni Adler, Jonathan Berant and Ido Dagan
Entailment-based Text Exploration with Application to the Health-care Domain
Demo paper in ACL 2012.
We present a novel text exploration model, which extends the scope of state-of-the-art technologies by moving from standard concept-based exploration to statement-based exploration. The proposed scheme utilizes the textual entailment relation between statements as the basis of the exploration process. A user of our system can explore the result space of a query by drilling down/up from one statement to another, according to entailment relations specified by an entailment graph and an optional concept taxonomy. As a prominent use case, we apply our exploration system and illustrate its benefit on the health-care domain. To the best of our knowledge this is the first implementation of an exploration system at the statement level that is based on the textual entailment relation.
Asher Stern, Amnon Lotan, Shachar Mirkin, Eyal Shnarch, Lili Kotlerman, Jonathan Berant and Ido Dagan
Knowledge and Tree-Edits in Learnable Entailment Proofs.
Proceedings of TAC 2011.
This paper describes BIUTEE, the Bar-Ilan University Textual Entailment Engine. BIUTEE is a natural language inference system in which the hypothesis is proven by the text, based on linguistic- and world-knowledge resources, as well as syntactically motivated tree transformations.
The main progress in BIUTEE in the last year is a new confidence model that estimates the validity of the proof found by BIUTEE.
Jonathan Berant, Ido Dagan and Jacob Goldberger
Learning Entailment Relations by Global Graph Structure Optimization.
Long paper in The Journal of Computational Linguistics 38(1) pp. 73-111 (2012)
Identifying entailment relations between predicates is an important part of applied semantic
inference. In this article we propose a global inference algorithm that learns such entailment
rules. First, we define a graph structure over predicates that represents entailment relations as
directed edges. Then, we use a global transitivity constraint on the graph to learn the optimal set
of edges, formulating the optimization problem as an Integer Linear Program. The algorithm is
applied in a setting where given a target concept, the algorithm learns on-the-fly all entailment
rules between predicates that co-occur with this concept. Results show that our global algorithm
improves performance over baseline algorithms by more than 10%.
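On a tiny graph the optimization can be written out exhaustively: choose the edge set that maximizes the sum of local entailment scores subject to transitivity. The real system hands this to an Integer Linear Program solver; the brute-force version below, with invented predicates and scores, only illustrates the objective and the constraint.

```python
import itertools

def learn_entailment_graph(nodes, score):
    """score[(i, j)]: local score for the rule "i entails j"
    (positive = likely entailment, negative = unlikely)."""
    pairs = [(i, j) for i in nodes for j in nodes if i != j]
    best, best_val = frozenset(), 0.0
    for r in range(len(pairs) + 1):
        for edges in itertools.combinations(pairs, r):
            chosen = set(edges)
            violated = any(
                (i, j) in chosen and (j, k) in chosen and (i, k) not in chosen
                for i in nodes for j in nodes for k in nodes
                if len({i, j, k}) == 3)
            if violated:
                continue                      # transitivity constraint
            val = sum(score[e] for e in chosen)
            if val > best_val:
                best, best_val = frozenset(chosen), val
    return best

nodes = ["buy", "acquire", "own"]
score = {(i, j): -2.0 for i in nodes for j in nodes if i != j}
score[("buy", "acquire")] = 2.0
score[("acquire", "own")] = 2.0
score[("buy", "own")] = -1.0
graph = learn_entailment_graph(nodes, score)
```

Even though "buy entails own" scores negatively on its own, transitivity forces it in once the two stronger rules are selected, and the global objective still prefers all three edges (total 3.0) over dropping either strong rule.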
Jonathan Berant, Ido Dagan and Jacob Goldberger
Global Learning of Typed Entailment Rules.
Long paper in the proceedings of ACL 2011 (best student paper)
Extensive knowledge bases of entailment rules between predicates are crucial for applied semantic inference. In this paper we propose an algorithm that utilizes transitivity constraints to learn a globally-optimal set of entailment rules for typed predicates. We model the task as a graph learning problem and suggest methods that scale the algorithm to larger graphs. We apply the algorithm over a large data set
of extracted predicate instances, from which a resource of typed entailment rules has been recently released (Schoenmackers et al., 2010).
Our results show that using global transitivity information substantially improves performance over this resource and several baselines, and that our scaling methods allow us to increase the scope of global learning of entailment-rule graphs.
Catherine L. Caldwell-Harris, Jonathan Berant and Shimon Edelman
Measuring Mental Entrenchment of Phrases with Perceptual Identification, Familiarity Ratings, and Corpus Frequency Statistics.
To appear in S. T. Gries and D. Divjak (eds.), Frequency effects in cognitive linguistics (Vol. 1): Statistical effects in learnability, processing and change, The Hague, The Netherlands: De Gruyter Mouton (2011).
Asher Stern, Eyal Shnarch, Amnon Lotan, Shachar Mirkin, Lili Kotlerman, Naomi Zeichner, Jonathan Berant and Ido Dagan
Rule Chaining and Approximate Match in Textual Inference.
Text Analysis Conference 2010 (RTE-6)
This paper describes the participation of Bar-Ilan University in the sixth RTE challenge. Our textual-entailment engine, BiuTee, was enhanced with new components that introduce chaining of lexical-entailment rules, and tackle the problem of approximately matching the text and the hypothesis after all available knowledge of entailment rules has been utilized. We have also re-engineered our system, aiming at an open-source, open architecture. BiuTee's performance is better than the median of all submissions, and significantly outperforms an IR-oriented baseline.
Shachar Mirkin, Jonathan Berant, Ido Dagan and Eyal Shnarch
Recognising Entailment within Discourse.
Proceedings of COLING, 2010.
Texts are commonly interpreted based on the entire discourse in which they are situated. Discourse processing has been shown useful for inference-based applications; yet, most systems for textual entailment - a popular paradigm for applied inference - have only addressed discourse considerations via off-the-shelf coreference resolvers. In this paper we explore various discourse aspects in entailment inference, suggest initial solutions for them and investigate their impact on entailment performance. Our experiments suggest that discourse provides useful information, which significantly improves entailment inference, and should be better addressed by future entailment systems.
Jonathan Berant, Ido Dagan and Jacob Goldberger
Global Learning of Focused Entailment Graphs.
Long paper in the proceedings of ACL, 2010.
We propose a global algorithm for learning entailment relations between predicates. We define a graph structure over predicates that represents entailment relations as directed edges, and use a global transitivity constraint on the graph to learn the optimal set of edges, by formulating the optimization problem as an Integer Linear Program. We motivate this graph
with an application that provides a hierarchical summary for a set of propositions that focus on a target concept, and show that our global algorithm improves performance by more than 10% over baseline algorithms.
Shachar Mirkin, Roy Bar-Haim, Jonathan Berant, Ido Dagan, Eyal Shnarch, Asher Stern
and Idan Szpektor
Addressing Discourse and Document Structure in the RTE Search Task
Proceedings of TAC, 2009.
This paper describes Bar-Ilan University's submissions to RTE-5. This year we focused on the Search pilot, enhancing our entailment system to address two main issues introduced by this new setting: scalability and, primarily, document-level discourse. Our system achieved the highest score on the Search task amongst participating groups, and proposes first steps towards addressing this challenging setting.
Roy Bar-Haim, Jonathan Berant and Ido Dagan
A Compact Forest for Scalable Inference over Entailment and Paraphrase Rules
Proceedings of EMNLP, 2009.
A large body of recent research has been investigating the acquisition and application of applied inference knowledge. Such knowledge may be typically captured as entailment rules, applied over syntactic representations. Efficient inference with such knowledge then becomes a fundamental problem. Starting out from a formalism for entailment-rule application, we present a novel packed data structure and a corresponding algorithm for its scalable implementation. We proved the validity of
the new algorithm and established its efficiency analytically and empirically.
Jonathan Berant, Ido Dagan, Iddo Greental, Shachar Mirkin, Eyal Shnarch
and Idan Szpektor
Efficient Semantic Deduction and Approximate Matching over Compact Parse Forests
Proceedings of TAC, 2008.
Semantic inference is often modeled as application of entailment rules, which specify generation of entailed sentences from a source sentence. Efficient generation and representation of entailed consequents is a fundamental problem common to such inference methods. We present a new data structure, termed compact forest, which allows efficient generation and representation of entailed consequents, each represented as a parse tree. Rule-based inference is complemented with a new approximate
matching measure inspired by tree kernels, which is computed efficiently over compact forests. Our system also makes use of novel large-scale entailment rule bases, derived from Wikipedia as well as from information about predicates and their argument mapping,
gathered from available lexicons and complemented by unsupervised learning.
Jonathan Berant, Catherine Caldwell-Harris
and Shimon Edelman
Tracks in the Mind: Differential Entrenchment of Common and Rare Liturgical and Everyday Multiword Phrases in Religious and Secular Hebrew Speakers
Proceedings of CogSci, 2008.
We tested the hypothesis that more frequent exposure to multiword phrases results in deeper entrenchment of their representations, by examining the performance of subjects of different religiosity in the recognition of briefly presented liturgical and secular phrases drawn from several frequency classes. Three of the sources were prayer texts that religious Jews are required to recite on a daily, weekly, and annual basis, respectively; two others were common and rare expressions encountered in the general secular Israeli culture. As expected, linear dependence of recognition score on frequency was found for the religious subjects (being most pronounced for men, who are usually more observant than women); both religious and secular subjects performed better on common than on rare general culture items. Our results support the notion of graded entrenchment introduced by Langacker and shared by several cognitive linguistic theories of language comprehension and production.
Jonathan Berant, Yaron Gross, Matan Mussel, Ben Sandbank, Eytan Ruppin and Shimon Edelman
Boosting Unsupervised Grammar Induction by Splitting Complex Sentences on Function Words
Proceedings of BUCLD, 2007.
Semantic parsing on Freebase from Question-Answer pairs, Apple, August 2014.
Semantic Parsing via Paraphrasing, Facebook AI Research Lab, June 2014.
Semantic Parsing on Freebase from Question-Answer pairs, Google, October 2013, and Advanced vision seminar, Stanford.
Global Learning of Textual Entailment Graphs, thesis public lecture, Tel-Aviv University, August 2012.
Global Learning of Entailment Graphs, NYU, Columbia, MIT and UIUC seminars, January 2011.
Global Learning of Focused Entailment Graphs, University of Washington AI seminar, Seattle, October 2010.
An Entailment-based Ontology for Domain-Specific Relations, ITCH workshop, Trento, September 2009.
Standard and Non-standard Parse Trees Equally Improve Grammar Induction, ISCOL, Ramat-Gan, September 2008.
Short presentation about the argument from the poverty of the stimulus.