About me

I aim to develop machines that can read, write, and use language as a tool, both in static datasets and in interactive worlds.

Education

  • B.S. in Communications Engineering, Beijing University of Technology, 2011
  • M.S. in Computer Science, New York University, 2015

Work experience

Service

  • Reviewer: ACL, EMNLP, NAACL, ICML, NeurIPS, SIGIR, Montreal AI Symposium (MAIS), Reasoning for Complex Question Answering (RCQA) Workshop.
  • Outstanding/Highlighted Reviewer: EMNLP 2020, NeurIPS 2021, ICLR 2022, ICML 2022.
  • Organizer:
    • Wordplay: When Language Meets Games Workshop at NeurIPS 2020, NAACL 2021. Website
    • Knowledge-Based Reinforcement Learning (KBRL) Workshop at IJCAI-PRICAI 2020, Yokohama, Japan. Website

Selected publications

  • Machines that can read:
    • Building Dynamic Knowledge Graphs from Text using Machine Reading Comprehension. R. Das, T. Munkhdalai, X. Yuan, A. Trischler, A. McCallum. In ICLR, 2019. ArXiv
    • Machine Comprehension by Text-to-Text Neural Question Generation. X. Yuan*, T. Wang*, C. Gulcehre*, A. Sordoni*, P. Bachman, S. Zhang, S. Subramanian, A. Trischler. In RepL4NLP Workshop, ACL, 2017. ArXiv
    • NewsQA: A Machine Comprehension Dataset. A. Trischler*, T. Wang*, X. Yuan*, J. Harris, A. Sordoni, P. Bachman, K. Suleman. In RepL4NLP Workshop, ACL, 2017. ArXiv
    • Natural Language Comprehension with the EpiReader. A. Trischler, Z. Ye, X. Yuan, P. Bachman, A. Sordoni, K. Suleman. In EMNLP, 2016. ArXiv
  • Machines that can write:
    • General-to-Specific Transfer Labeling for Domain Adaptable Keyphrase Generation. R. Meng, T. Wang, X. Yuan, Y. Zhou, D. He. In ACL Findings, 2023. ArXiv
    • An Empirical Study on Neural Keyphrase Generation. R. Meng, X. Yuan, T. Wang, S. Zhao, A. Trischler, D. He. In NAACL, 2021. ArXiv
    • One Size Does Not Fit All: Generating and Evaluating Variable Number of Keyphrases. X. Yuan*, T. Wang*, R. Meng*, K. Thaker, P. Brusilovsky, D. He, A. Trischler. In ACL, 2020. ArXiv
  • Machines that can use language as a tool:
    • Deep Language Networks: Joint Prompt Training of Stacked LLMs using Variational Inference. A. Sordoni, X. Yuan, M.-A. Côté, M. Pereira, A. Trischler, Z. Xiao, A. Hosseini, F. Niedtner, N. Le Roux. ArXiv
    • Augmenting Autotelic Agents with Large Language Models. C. Colas, L. Teodorescu, P.-Y. Oudeyer, X. Yuan, M.-A. Côté. In CoLLAs, 2023. ArXiv
    • One-Shot Learning from a Demonstration with Hierarchical Latent Language. N. Weir, X. Yuan, M.-A. Côté, M. Hausknecht, R. Laroche, I. Momennejad, H. Van Seijen, B. Van Durme. In AAMAS, 2023. ArXiv
    • Asking for Knowledge (AFK): Training RL Agents to Query External Knowledge Using Language. I.-J. Liu*, X. Yuan*, M.-A. Côté*, P.-Y. Oudeyer, A. Schwing. In ICML, 2022. ArXiv
    • Interactive Machine Comprehension with Dynamic Knowledge Graphs. X. Yuan. In EMNLP, 2021. ArXiv
    • ALFWorld: Aligning Text and Embodied Environments for Interactive Learning. M. Shridhar, X. Yuan, M.-A. Côté, Y. Bisk, A. Trischler, M. Hausknecht. In ICLR, 2021. ArXiv
    • Learning Dynamic Knowledge Graphs to Generalize on Text-Based Games. A. Adhikari*, X. Yuan*, M.-A. Côté*, M. Zelinka, M.-A. Rondeau, R. Laroche, P. Poupart, J. Tang, A. Trischler, W. Hamilton. In NeurIPS, 2020. ArXiv
    • Interactive Fiction Games: A Colossal Adventure. M. Hausknecht, P. Ammanabrolu, M.-A. Côté, X. Yuan. In AAAI, 2020. ArXiv
    • Interactive Machine Comprehension with Information Seeking Agents. X. Yuan*, J. Fu*, M.-A. Côté, Y. Tay, C. Pal, A. Trischler. In ACL, 2020. ArXiv
    • Interactive Language Learning by Question Answering. X. Yuan*, M.-A. Côté*, J. Fu, Z. Lin, C. Pal, Y. Bengio, A. Trischler. In EMNLP, 2019. ArXiv
    • TextWorld: A Learning Environment for Text-based Games. M.-A. Côté, Á. Kádár, X. Yuan, B. Kybartas, T. Barnes, E. Fine, J. Moore, M. Hausknecht, L. El Asri, M. Adada, W. Tay, A. Trischler. In Computer Games Workshop, ICML/IJCAI, 2018. ArXiv

Fun stuff