Members of the Stanford NLP Group pursue research in a wide
variety of topics:
Information Extraction
Extraction of structured information from unstructured text.
This includes identifying named entities, resolving anaphora, linking
entities to a global namespace, and identifying the relations between them.
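As an illustrative sketch of the extraction pipeline's first step, a minimal gazetteer-based named entity recognizer in Python (real systems use statistical sequence models; the gazetteer, entity types, and function name here are illustrative, not from any Stanford system):

```python
import re

# Toy gazetteer mapping known entity surface forms to types.
# Illustrative only; real NER uses learned sequence models.
GAZETTEER = {
    "Stanford University": "ORG",
    "California": "LOC",
}

def extract_entities(text):
    """Return (surface form, type, character span) for each gazetteer match."""
    entities = []
    for name, etype in GAZETTEER.items():
        for m in re.finditer(re.escape(name), text):
            entities.append((name, etype, m.span()))
    return sorted(entities, key=lambda e: e[2])

print(extract_entities("Stanford University is located in California."))
```

Each matched span could then feed downstream steps such as entity linking and relation extraction.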
Sentiment and Social Meaning
We are interested in the extraction of sentiment and other kinds of social meaning, including politeness, bias, friendliness, and flirtation, from speech and text.
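A toy lexicon-based sketch of sentiment scoring from text; the word list, polarity weights, and negation handling are illustrative assumptions, not a published resource:

```python
# Illustrative polarity lexicon and negator list (not a real resource).
POLARITY = {"great": 1.0, "friendly": 0.5, "rude": -1.0, "terrible": -1.5}
NEGATORS = {"not", "never", "no"}

def sentiment(tokens):
    """Sum word polarities, flipping the sign of the word after a negator."""
    score, flip = 0.0, 1.0
    for tok in tokens:
        tok = tok.lower()
        if tok in NEGATORS:
            flip = -1.0
            continue
        score += flip * POLARITY.get(tok, 0.0)
        flip = 1.0
    return score

print(sentiment("the staff was friendly".split()))       # positive score
print(sentiment("not friendly and quite rude".split()))  # negative score
```

Scores above zero indicate positive sentiment; richer social meanings such as politeness would need dedicated lexicons or learned models.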
Dropout Learning and Feature Noising
Algorithms that aim to prevent feature co-adaptation through fast dropout training,
by sampling from or integrating a Gaussian approximation to the dropout noise; these can
equivalently be viewed as adaptive regularizers, and can be generalized to semi-supervised learning settings.
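A minimal sketch of the Gaussian-approximation idea behind fast dropout: the pre-activation under Bernoulli dropout masks is approximately Gaussian, so its mean and variance can be computed in closed form instead of by sampling. The function names and the single-neuron setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_preactivation_mc(w, x, p, n_samples=100_000):
    """Monte-Carlo estimate: sample Bernoulli keep-masks and average."""
    keep = rng.random((n_samples, x.size)) > p
    z = (keep * (w * x)).sum(axis=1)
    return z.mean(), z.var()

def dropout_preactivation_gauss(w, x, p):
    """Closed-form Gaussian approximation: match the mean and variance
    of the Bernoulli-masked sum, avoiding sampling entirely."""
    s = w * x
    mean = (1 - p) * s.sum()
    var = p * (1 - p) * (s ** 2).sum()
    return mean, var

w = rng.normal(size=20)
x = rng.normal(size=20)
mc_mean, mc_var = dropout_preactivation_mc(w, x, 0.5)
g_mean, g_var = dropout_preactivation_gauss(w, x, 0.5)
print(mc_mean, g_mean)  # the two means agree closely
print(mc_var, g_var)    # so do the variances
```

Integrating over this Gaussian, rather than sampling dropout masks, is what makes the training fast.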
Deep Learning in Natural Language Processing
The use of continuous-space distributed representations
(neural nets) for tackling
various problems in natural language processing and
vision, including parsing, sentiment analysis and paraphrase detection.
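As a small sketch of what a continuous-space distributed representation looks like, a toy model that averages dense word vectors into a sentence vector and scores it with a logistic unit; the vocabulary, dimensionality, and (untrained) weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy distributed representation: each word is a dense 4-dimensional
# vector, and a sentence is the average of its word vectors.
VOCAB = ["good", "bad", "movie", "plot"]
EMB = {w: rng.normal(size=4) for w in VOCAB}

def sentence_vector(tokens):
    """Average the embeddings of in-vocabulary tokens."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A logistic unit over the averaged embedding; the weights here are
# random and untrained, purely for illustration.
w = rng.normal(size=4)
score = sigmoid(w @ sentence_vector("good movie".split()))
print(f"P(positive) = {score:.3f}")
```

Real models for parsing, sentiment, and paraphrase detection compose vectors with learned, often recursive, neural architectures rather than a simple average.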
Parsing & Tagging
Machine Translation
Language modeling, re-ordering models, phrase extraction
techniques, syntactic methods, and better training for
statistical machine translation.
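Language modeling is the first ingredient listed for statistical machine translation; as a minimal sketch, a bigram model with add-k smoothing over a toy corpus (the corpus, function names, and smoothing constant are illustrative assumptions):

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Count unigrams and bigrams; sentence boundary markers are added here."""
    uni, bi = Counter(), Counter()
    for toks in sentences:
        toks = ["<s>"] + toks + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def bigram_prob(uni, bi, prev, word, vocab_size, k=1.0):
    """Add-k smoothed conditional probability P(word | prev)."""
    return (bi[(prev, word)] + k) / (uni[prev] + k * vocab_size)

corpus = [s.split() for s in ["the cat sat", "the dog sat", "the cat ran"]]
uni, bi = train_bigram_lm(corpus)
V = len(uni)
print(bigram_prob(uni, bi, "the", "cat", V))  # seen bigram: higher probability
print(bigram_prob(uni, bi, "the", "ran", V))  # unseen bigram: smoothed, lower
```

In a phrase-based translation system, such a language model scores the fluency of candidate target-side outputs alongside the translation and re-ordering models.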
Dialog and Speech Processing
The History of Computational Linguistics
Unsupervised and Semisupervised Learning of Linguistic Structure
A variety of NLP investigations in Chinese, Arabic, and
German, including tagging, segmentation, probabilistic
syntactic parsing, and semantic role labeling, as well as work on speech processing.
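As a sketch of the segmentation task mentioned above, greedy forward maximum matching, a classic baseline for Chinese word segmentation; the tiny dictionary is illustrative:

```python
# Illustrative dictionary of known Chinese words (not a real lexicon).
DICT = {"北京", "大学", "北京大学", "生", "学生"}

def forward_max_match(text, max_len=4):
    """Greedily take the longest dictionary word at each position,
    falling back to a single character when nothing matches."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in DICT or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

print(forward_max_match("北京大学生"))  # → ['北京大学', '生']
```

The example also shows the baseline's well-known weakness: the greedy longest match picks 北京大学/生 where 北京/大学生 is the intended reading, which is why statistical segmenters outperform dictionary matching.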
Linguistics @ Stanford
Other groups at Stanford doing NLP-related research:
Semantics Lab at CSLI
Statistical NLP resources