Jason Naradowsky



Hi! I'm a research scientist at Johns Hopkins University, where I specialize in machine learning and natural language processing, and at Preferred Networks, an AI and robotics startup in Tokyo. Previously I did postdoctoral work at the University of Cambridge (with Anna Korhonen) and at University College London (with Sebastian Riedel). Prior to that I completed my PhD in Computer Science at UMass Amherst with David Smith and Mark Johnson.

I've bounced around a lot, both geographically and topically, so it's difficult to claim my research has a single unifying theme, but some overarching trends are:

Recently I've become increasingly interested in computational creativity and the potential for AI in music creation, and I do some consulting work in these areas. If you're also interested in these topics and would like to pursue a project with me, please feel free to get in touch.

Links: CV · Google Scholar · GitHub

My Erdős–Bacon number is arguably no greater than 8.


Wolfe:
Wolfe is a probabilistic programming language that enables practitioners to develop machine learning models in a declarative manner. Wolfe models are written in Scala and compiled by Wolfe into highly-optimized inference and learning routines (using Scala's own abstract syntax trees!), enabling researchers to focus on modelling while Wolfe does the heavy lifting. It currently features matrix factorization, message passing, and alternating directions dual decomposition; it can perform many structured prediction tasks, visualize inference in factor graphs, and more.
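To give a flavor of the message-passing inference that Wolfe automates, here is a tiny hand-rolled sum-product computation on a two-variable factor graph. This is an illustrative Python sketch only, not Wolfe's API (Wolfe models are written in Scala), and the potentials are made-up numbers:

```python
import itertools

# Potentials over two binary variables x1, x2 (arbitrary illustrative values).
unary1 = [1.0, 2.0]           # phi1(x1)
unary2 = [3.0, 1.0]           # phi2(x2)
pair = [[2.0, 1.0],
        [1.0, 4.0]]           # psi(x1, x2)

# Sum-product message from the pairwise factor into x1:
#   m(x1) = sum_{x2} psi(x1, x2) * phi2(x2)
msg = [sum(pair[x1][x2] * unary2[x2] for x2 in (0, 1)) for x1 in (0, 1)]

# Normalized marginal of x1.
marg = [unary1[x1] * msg[x1] for x1 in (0, 1)]
Z = sum(marg)
marg = [m / Z for m in marg]

# Sanity check against brute-force enumeration of the joint distribution.
brute = [0.0, 0.0]
for x1, x2 in itertools.product((0, 1), repeat=2):
    brute[x1] += unary1[x1] * unary2[x2] * pair[x1][x2]
Zb = sum(brute)
brute = [b / Zb for b in brute]
```

On a tree-structured graph this message passing gives exact marginals, which is what the brute-force check confirms here; the point of a system like Wolfe is to generate such routines from a declarative model description instead of writing them by hand.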

Natural Language Toolkit (NLTK):
The Natural Language Toolkit is a collection of open source Python modules that can be used freely for research or pedagogical purposes. There's also a book documenting how to use the NLTK, which doubles as an introductory computational linguistics coursebook. In the summer of 2008 I worked on the NLTK under the Google Summer of Code program, during which time I implemented a suite of dependency parsers under the supervision of Sebastian Riedel and Jason Baldridge.
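As a sketch of what a transition-based dependency parser does, here is a from-scratch toy arc-standard parser (this is not the NLTK implementation; it just illustrates the transition system). Given gold head indices, a static oracle recovers the arcs of a projective tree one SHIFT / LEFT-ARC / RIGHT-ARC step at a time:

```python
def arc_standard_parse(words, heads):
    """Recover arcs (head, dependent) of a projective dependency tree.

    Tokens are 1-indexed; index 0 is the artificial ROOT.
    heads[i] is the head of token i+1 (0 means attached to ROOT).
    """
    buffer = list(range(1, len(words) + 1))
    stack = [0]                      # starts with the artificial ROOT
    arcs = []
    attached = set()

    def has_all_deps(i):
        # A token may be attached only after all of its own dependents are.
        return all(j in attached
                   for j in range(1, len(words) + 1) if heads[j - 1] == i)

    while buffer or len(stack) > 1:
        if len(stack) >= 2:
            top, below = stack[-1], stack[-2]
            # LEFT-ARC: second-from-top is a dependent of the top.
            if below != 0 and heads[below - 1] == top and has_all_deps(below):
                arcs.append((top, below))
                attached.add(below)
                stack.pop(-2)
                continue
            # RIGHT-ARC: top is a dependent of the second-from-top.
            if heads[top - 1] == below and has_all_deps(top):
                arcs.append((below, top))
                attached.add(top)
                stack.pop()
                continue
        stack.append(buffer.pop(0))  # SHIFT the next token onto the stack
    return sorted(arcs)
```

For example, `arc_standard_parse(["She", "saw", "him"], [2, 0, 2])` yields the arcs `[(0, 2), (2, 1), (2, 3)]`: "saw" attaches to ROOT, and both pronouns attach to "saw". A real parser replaces the oracle with a learned classifier that scores the transitions.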

Papers by year


Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, and Anna Korhonen
TACL 2018
[abstract] [paper] [bib]

A Structured Variational Autoencoder for Morphological Inflection
Lawrence Wolf-Sonkin, Jason Naradowsky, Sebastian J. Mielke, and Ryan Cotterell
ACL 2018
[abstract] [paper] [bib]

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme
*SEM 2018
[abstract] [paper] [bib]
Best Paper Award

Gender Bias in Coreference Resolution
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme
NAACL 2018
[abstract] [paper] [bib]


Programming with a differentiable Forth interpreter
Matko Bosnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel
ICML 2017
[abstract] [paper] [bib]

Modeling exclusion with a differentiable factor graph constraint
Jason Naradowsky and Sebastian Riedel
ICML 2017, DeepStruct
[abstract] [paper] [bib]

Break it down for me: A study in automated lyric annotation
Lucas Sterckx, Jason Naradowsky, Bill Byrne, Thomas Demeester, and Chris Develder
EMNLP 2017
[abstract] [paper] [bib]

A neural Forth abstract machine
Matko Bosnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel
[abstract] [paper] [bib]


Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing
James Goodman, Andreas Vlachos and Jason Naradowsky
ACL 2016
[abstract] [paper] [bib]

UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an α-bound
James Goodman, Andreas Vlachos and Jason Naradowsky
SemEval 2016
[abstract] [paper] [bib]


Learning with Joint Inference and Latent Linguistic Structure in Graphical Models
Jason Naradowsky
Doctoral Dissertation, 2014
Supervisors: David Smith and Mark Johnson
[abstract] [paper] [bib]


Improving NLP through Marginalization of Hidden Syntactic Structure
Jason Naradowsky, Sebastian Riedel, and David Smith
EMNLP 2012
[abstract] [paper] [bib]

Grammarless Parsing for Joint Inference
Jason Naradowsky, Tim Vieira, and David Smith
[abstract] [paper] [bib]

Combinatorial Constraints for Constituency Parsing in Graphical Models
Jason Naradowsky and David Smith
Technical Report, University of Massachusetts Amherst, 2012.


Unsupervised Bilingual Morpheme Segmentation and Alignment with Context-rich Hidden Semi-Markov Models
Jason Naradowsky and Kristina Toutanova
ACL 2011
[abstract] [paper] [slides] [bib]

A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing
John Lee, Jason Naradowsky, and David Smith
ACL 2011
[abstract] [paper] [bib]

Feature Induction for Online Constraint-based Phonology Acquisition
Jason Naradowsky, Joe Pater, and David Smith
Synthesis Project, Presented at NECPHON 2011
[abstract] [paper] [bib]


Learning Hidden Metrical Structure with a Log-linear Model of Grammar
Jason Naradowsky, Joe Pater, David Smith, and Robert Staubs
Workshop on Computational Modelling of Sound Pattern Acquisition 2010


Polylingual Topic Models
David Mimno, Hanna Wallach, Jason Naradowsky, David Smith and Andrew McCallum
EMNLP 2009
[abstract] [paper] [bib]

Improving Morphology Induction by Learning Spelling Rules
Jason Naradowsky and Sharon Goldwater
IJCAI 2009
[abstract] [paper] [slides] [bib]

Polylingual Topic Models
David Mimno, Hanna Wallach, Limin Yao, Jason Naradowsky and Andrew McCallum
The Learning Workshop (Snowbird) 2009