Jason Naradowsky



Hi! I'm a researcher at Square Enix AI & Arts Alchemy, working toward ML-powered, next-generation interactive experiences in gaming. I am also a project assistant professor at the University of Tokyo (Miyao Lab), where I lead the dialogue and LLM group.

Recently I've become increasingly interested in computational creativity and the potential for AI in music creation, and I often work on projects in these areas in my spare time. If you're also interested in these topics, especially if you're located in Japan, feel free to get in touch.

Previously I worked at the Japanese AI/robotics startup Preferred Networks. Before moving to Japan, I was a faculty research scientist at Johns Hopkins University, and I did postdoctoral work at the University of Cambridge (with Anna Korhonen) and at University College London (with Sebastian Riedel). I completed my PhD in Computer Science at UMass Amherst, advised by David Smith and Mark Johnson.

Links: CV · Google Scholar · GitHub

My Erdős–Bacon number is arguably no greater than 8.


Wolfe:
Wolfe is a probabilistic programming language that lets practitioners develop machine learning models declaratively. Wolfe models are written in Scala and compiled by Wolfe into highly optimized inference and learning routines (using Scala's own abstract syntax trees!), so researchers can focus on modelling while Wolfe does the heavy lifting. It currently features matrix factorization, message passing, and alternating directions dual decomposition; it can perform many structured prediction tasks, visualize inference in factor graphs, and more.

Natural Language Toolkit (NLTK):
The Natural Language Toolkit is a collection of open-source Python modules that can be used freely for research or pedagogical purposes. There's also a book documenting how to use the NLTK, which doubles as an introductory computational linguistics coursebook. During the summer of 2008 I worked on the NLTK under the Google Summer of Code program, implementing a suite of dependency parsers under the supervision of Sebastian Riedel and Jason Baldridge.
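Those parsers still live in NLTK's parse module; a minimal sketch of the grammar-based projective dependency parser, assuming NLTK 3.x is installed (the toy grammar below is the classic example from the NLTK documentation):

```python
from nltk.grammar import DependencyGrammar
from nltk.parse import ProjectiveDependencyParser

# A toy dependency grammar: each rule lists a head word and its possible dependents.
grammar = DependencyGrammar.fromstring("""
    'shot' -> 'I' | 'elephant' | 'in'
    'elephant' -> 'an' | 'in'
    'in' -> 'pajamas'
    'pajamas' -> 'my'
""")

parser = ProjectiveDependencyParser(grammar)
sentence = "I shot an elephant in my pajamas".split()

# The PP-attachment ambiguity ("in my pajamas" modifying the shooting
# or the elephant) yields two projective parses.
for tree in parser.parse(sentence):
    print(tree)
```

NLTK also ships probabilistic and non-projective variants in the same module for sentences whose dependency structure crosses.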

Papers by year


Mind the gap between conversations for improved long-term dialogue generation
Qiang Zhang, Jason Naradowsky, and Yusuke Miyao
EMNLP Findings 2023
[abstract] [paper] [bib]

Ask an Expert: Leveraging Language Models to Improve Strategic Reasoning in Goal-Oriented Dialogue Models
Qiang Zhang, Jason Naradowsky, and Yusuke Miyao
ACL Findings 2023
[abstract] [paper] [bib]

Fiction-Writing Mode: An Effective Control for Human-Machine Collaborative Writing
Wenjie Zhong, Jason Naradowsky, Hiroya Takamura, Ichiro Kobayashi, and Yusuke Miyao
EACL 2023
[abstract] [paper] [bib]

Emergent Communication with Attention
Ryokan Ri, Ryo Ueda, and Jason Naradowsky
CogSci 2023
[abstract] [paper] [bib]


Rethinking Offensive Text Detection as a Multi-Hop Reasoning Problem
Qiang Zhang, Jason Naradowsky, and Yusuke Miyao
ACL Findings 2022
[abstract] [paper] [bib]


Amp-Space: A Large-scale Dataset for Fine-grained Timbre Transformation
Jason Naradowsky
DAFx 2021
[abstract] [paper] [bib] [code]


Machine Translation System Selection from Bandit Feedback
Jason Naradowsky, Xuan Zhang, and Kevin Duh
AMTA 2020
[abstract] [paper] [bib]

Pow-Wow: A dataset and study on collaborative communication in Pommerman
Takuma Yoneda, Matthew Walter, and Jason Naradowsky
Language in Reinforcement Learning (LaReL), 2020
[abstract] [paper] [bib]

Meta-learning Extractors for Music Source Separation
David Samuel, Aditya Ganeshan, and Jason Naradowsky
[abstract] [paper] [bib] [code] [colab]


Emergent Communication with World Models
Alex Cowen-Rivers and Jason Naradowsky
NeurIPS 2019 Workshop on Emergent Communication (EmeCom)
[abstract] [paper] [bib]


Language Modeling for Morphologically Rich Languages: Character-Aware Modeling for Word-Level Prediction
Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, and Anna Korhonen
TACL 2018
[abstract] [paper] [bib]

A Structured Variational Autoencoder for Morphological Inflection
Lawrence Wolf-Sonkin, Jason Naradowsky, Sebastian J. Mielke, and Ryan Cotterell
ACL 2018
[abstract] [paper] [bib]

Hypothesis Only Baselines in Natural Language Inference
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme
*SEM 2018
[abstract] [paper] [bib]
Best Paper Award

Gender Bias in Coreference Resolution
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme
NAACL 2018
[abstract] [paper] [bib]

Improvised Robotic Design with Found Objects
Azumi Maekawa, Ayaka Kume, Hironori Yoshida, Jun Hatori, Jason Naradowsky, and Shunta Saito
NeurIPS Machine Learning for Creativity and Design 2018
[abstract] [paper] [bib]

Automatic Illumination Effects for 2D Characters
Zhengyan Gao, Taizan Yonetsuji, Tatsuya Takamura, Toru Matsuoka, and Jason Naradowsky
NeurIPS Machine Learning for Creativity and Design 2018
[abstract] [paper] [bib]

The Hitachi/JHU CHiME-5 system: Advances in speech recognition for everyday home environments using multiple microphone arrays
Naoyuki Kanda, Rintaro Ikeshita, Shota Horiguchi, Yusuke Fujita, Kenji Nagamatsu (Hitachi, Ltd), Xiaofei Wang, Vimal Manohar, Nelson Enrique Yalta Soplin, Matthew Maciejewski, Szu-Jui Chen, Aswin Shanmugam Subramanian, Ruizhi Li, Zhiqi Wang, Jason Naradowsky, L. Paola Garcia-Perera, and Gregory Sell
CHiME 2018
[abstract] [paper] [bib]


Programming with a Differentiable Forth Interpreter
Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel
ICML 2017
[abstract] [paper] [bib]

Modeling exclusion with a differentiable factor graph constraint
Jason Naradowsky and Sebastian Riedel
ICML 2017, DeepStruct
[abstract] [paper] [bib]

Break it down for me: A study in automated lyric annotation
Lucas Sterckx, Jason Naradowsky, Bill Byrne, Thomas Demeester, and Chris Develder
EMNLP 2017
[abstract] [paper] [bib]

A Neural Forth Abstract Machine
Matko Bošnjak, Tim Rocktäschel, Jason Naradowsky, and Sebastian Riedel
[abstract] [paper] [bib]


Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing
James Goodman, Andreas Vlachos, and Jason Naradowsky
ACL 2016
[abstract] [paper] [bib]

UCL+Sheffield at SemEval-2016 Task 8: Imitation learning for AMR parsing with an α-bound
James Goodman, Andreas Vlachos, and Jason Naradowsky
SemEval 2016
[abstract] [paper] [bib]


Learning with Joint Inference and Latent Linguistic Structure in Graphical Models
Jason Naradowsky
Doctoral Dissertation, 2014
Supervisors: David Smith and Mark Johnson
[abstract] [paper] [bib]


Improving NLP through Marginalization of Hidden Syntactic Structure
Jason Naradowsky, Sebastian Riedel, and David Smith
EMNLP 2012
[abstract] [paper] [bib]

Grammarless Parsing for Joint Inference
Jason Naradowsky, Tim Vieira, and David Smith
[abstract] [paper] [bib]

Combinatorial Constraints for Constituency Parsing in Graphical Models
Jason Naradowsky and David Smith
Technical Report, University of Massachusetts Amherst, 2012.


Unsupervised Bilingual Morpheme Segmentation and Alignment with Context-rich Hidden Semi-Markov Models
Jason Naradowsky and Kristina Toutanova
ACL 2011
[abstract] [paper] [slides] [bib]

A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing
John Lee, Jason Naradowsky, and David Smith
ACL 2011
[abstract] [paper] [bib]

Feature Induction for Online Constraint-based Phonology Acquisition
Jason Naradowsky, Joe Pater, and David Smith
Synthesis Project, Presented at NECPHON 2011
[abstract] [paper] [bib]


Learning Hidden Metrical Structure with a Log-linear Model of Grammar
Jason Naradowsky, Joe Pater, David Smith, and Robert Staubs
Workshop on Computational Modelling of Sound Pattern Acquisition 2010


Polylingual Topic Models
David Mimno, Hanna Wallach, Jason Naradowsky, David Smith and Andrew McCallum
EMNLP 2009
[abstract] [paper] [bib]

Improving Morphology Induction by Learning Spelling Rules
Jason Naradowsky and Sharon Goldwater
IJCAI 2009
[abstract] [paper] [slides] [bib]

Polylingual Topic Models
David Mimno, Hanna Wallach, Limin Yao, Jason Naradowsky and Andrew McCallum
The Learning Workshop (Snowbird) 2009