Members of the Stanford NLP Group pursue research in a broad variety of topics:

Information Extraction

Semantic Parsing

  • We are interested in mapping utterances to deep meaning representations that take into account the compositional and quantificational structure of language.

Sentiment and Social Meaning

Dropout Learning and Feature Noising

  • Algorithms that aim to prevent feature co-adaptation through fast dropout training, which samples from or integrates over a Gaussian approximation (equivalently, acts as an adaptive regularizer) and can be generalized to semi-supervised learning settings.
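The Gaussian-approximation idea can be sketched in a few lines. This is an illustrative toy, not the group's code: with dropout mask bits z_i ~ Bernoulli(1 - p), the pre-activation sum of masked inputs has a closed-form mean and variance, so one can sample from (or integrate over) a matching Gaussian instead of drawing many binary masks.

```python
import numpy as np

def fast_dropout_moments(x, w, p_drop=0.5):
    """Gaussian approximation to input dropout (illustrative sketch).

    With mask bits z_i ~ Bernoulli(1 - p_drop), the pre-activation
    s = sum_i z_i * w_i * x_i has
        E[s]   = (1 - p_drop) * (w . x)
        Var[s] = p_drop * (1 - p_drop) * sum_i (w_i * x_i)^2
    so s can be approximated by N(E[s], Var[s]) rather than by
    Monte Carlo sampling of dropout masks.
    """
    keep = 1.0 - p_drop
    mean = keep * float(np.dot(w, x))
    var = keep * (1.0 - keep) * float(np.sum((w * x) ** 2))
    return mean, var

# Approximate dropout pre-activations without sampling any masks:
rng = np.random.default_rng(0)
mean, var = fast_dropout_moments(np.array([1.0, 1.0]), np.array([1.0, 1.0]))
samples = rng.normal(mean, np.sqrt(var), size=1000)
```

The function name and toy inputs are invented for the example; the point is that one Gaussian draw replaces an average over exponentially many mask configurations.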

Deep Learning in Natural Language Processing

  • Algorithms that tackle various problems in natural language processing such as parsing, sentiment analysis, or paraphrase detection. The methods that we explore are general in nature and can also be applied to vision problems.

Parsing & Tagging

  • Algorithms for assigning part-of-speech tags and syntactic structure, emphasizing probabilistic and discriminative models.
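A minimal concrete instance of probabilistic tagging is Viterbi decoding in a first-order HMM. The sketch below is a toy illustration (all model structure here is generic, not a specific Stanford system):

```python
import math

def viterbi(words, tags, log_init, log_trans, log_emit):
    """Most probable tag sequence for `words` under a first-order HMM.

    log_init[t]     : log P(t at position 0)
    log_trans[s][t] : log P(t | previous tag s)
    log_emit[t][w]  : log P(w | t); unseen words get log 0 = -inf
    """
    neg_inf = -math.inf
    trellis = [{t: log_init[t] + log_emit[t].get(words[0], neg_inf) for t in tags}]
    backptr = []
    for word in words[1:]:
        col, bp = {}, {}
        for t in tags:
            prev = max(tags, key=lambda s: trellis[-1][s] + log_trans[s][t])
            col[t] = trellis[-1][prev] + log_trans[prev][t] + log_emit[t].get(word, neg_inf)
            bp[t] = prev
        trellis.append(col)
        backptr.append(bp)
    # Follow back-pointers from the best final tag.
    best = max(tags, key=lambda t: trellis[-1][t])
    path = [best]
    for bp in reversed(backptr):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

With an invented two-tag toy model (noun/verb probabilities chosen by hand), the decoder recovers the intended noun-verb reading of "dogs run"; discriminative taggers replace the generative scores with learned feature weights but keep the same dynamic program.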

Machine Translation

  • Language modeling, re-ordering models, phrase extraction techniques, syntactic methods, and better training for statistical machine translation.
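To make "phrase extraction" concrete, here is a hedged sketch of the standard consistency-based extraction of phrase pairs from a word alignment. It is simplified (it does not extend pairs over unaligned boundary words), and the sentence pair in the usage note is invented:

```python
def extract_phrase_pairs(src, tgt, alignment, max_len=4):
    """Phrase pairs consistent with a word alignment (simplified sketch).

    `alignment` is a set of (i, j) pairs linking src[i] to tgt[j].
    A source/target span pair is consistent if no alignment link
    crosses the span boundary on either side.
    """
    pairs = set()
    for i1 in range(len(src)):
        for i2 in range(i1, min(len(src), i1 + max_len)):
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            if j2 - j1 + 1 > max_len:
                continue
            # Reject if a link from outside the source span enters the target span.
            if any(j1 <= j <= j2 and not i1 <= i <= i2 for (i, j) in alignment):
                continue
            pairs.add((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return pairs
```

For example, aligning "das Haus" to "the house" word-for-word yields the pairs ("das", "the"), ("Haus", "house"), and ("das Haus", "the house"), which would then be scored to build a phrase table.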

Dialog and Speech Processing

The History of Computational Linguistics

Unsupervised and Semi-supervised Learning of Linguistic Structure

Multilingual NLP

  • A variety of NLP investigations in Chinese, Arabic, and German, including tagging, segmentation, probabilistic syntactic parsing, and semantic role parsing, as well as work on speech processing.

Past Projects


  • Computational Linguistics @ Stanford
  • Other groups at Stanford doing NLP-related research:
      • The Computational Semantics Lab at CSLI
      • The LinGO/LKB project at CSLI
      • Martin Kay
  • Other links:
      • Statistical NLP resources
      • Linguistics resources