I lead the GLAMOR Lab at USC. My research brings together natural language processing and robotics (RoboNLP) to connect language to the world. I am interested in grounding language in agent perception and action, and in lifelong learning through interaction.
Assistant Professor @ Thomas Lord Department of Computer Science |
jessetho🙃usc.edu |
I am not hiring new PhD students. |
CS PhD FAQ |
News |
Invited Talk CoRL | Workshop on Language and Robot Learning: Flip the Script: Bring Robotics to NLP | website | slides | November 2024 |
Invited Talk Georgia Tech | Summit on Responsible Computing, AI, and Society: Use AI Grout without Losing AI Grit | website | slides | October 2024 |
Invited Talk RO-MAN | HRI4Wellbeing Workshop: Bringing LPTMs and Symbolic Reasoning Together for Robots | website | slides | August 2024 |
Featured Research Scientific American | Scientists Are Putting ChatGPT Brains Inside Robot Bodies. What Could Possibly Go Wrong? | website | March 2024 |
Invited Talk University of Utah | @ Utah Robotics Center Seminar: Language Guided Robots | slides | January 2024 |
Invited Talk NeurIPS | 6th Robot Learning Workshop: LPTMs Can Help Robots Without Ignoring Robotics | website | slides | December 2023 |
Invited Talk CMU | LTI Colloquium: Using Large Models as Duct Tape, Not Hammers | website | slides | video | October 2023 |
Invited Talk ICML | Workshop on Interactive Learning with Implicit Human Feedback | website | slides | July 2023 |
Organizer CoRL | Workshop on Language and Robot Learning (LangRob) | website | December 2022 |
Dataset Release Amazon | TEACh: Task-driven Embodied Agents that Chat | website | October 2021 |
Organizer IROS | Semantic Policy and Action Representations for Autonomous Robots (SPAR) Workshop | website | September 2021 |
New Position University of Southern California | Assistant Professor - Viterbi Department of Computer Science | website | August 2021 |
Invited Talk USC/ISI | @ USC/ISI NL Seminar | website | slides | video | February 2021 |
Outreach PhD Recruiting | 2020-2021 CS[-ish] PhD Recruiting | website | November 2020 |
Invited Talk Stanford | @ Stanford NLP Seminar | website | slides | October 2020 |
Organizer ECCV | Embodied Vision, Actions & Language (EVAL) Workshop | website | August 2020 |
New Position Amazon | Visiting Academic at Alexa AI | August 2020 |
Invited Talk ACL–NLP4ConvAI | @ Second Workshop on NLP for Conversational AI | website | slides | July 2020 |
Organizer ACL | First Workshop on Advances in Language and Vision Research (ALVR) | website | July 2020 |
Invited Talk NeurIPS–ViGIL | @ Visually Grounded Interaction and Language (ViGIL) Workshop | website | slides | video | December 2019 |
Invited Talk University of Southern California | @ USC AI Rising Stars Symposium | slides | December 2019 |
Invited Talk University of Utah | @ Utah Robotics Center Seminar | slides | November 2019 |
Invited Talk IROS–SPAR | @ Semantic Policy and Action Representations for Autonomous Robots (SPAR) Workshop | website | November 2019 |
Invited Talk Microsoft Research | Vision-and-Dialog Navigation | slides | video | July 2019 |
Co-Chair NAACL | Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP) | website | June 2019 |
Organizer SIGdial | Special Session on Physically Situated Dialogue | website | July 2018 |
Organizer RSS | Workshop on Models and Representations for Natural Human-Robot Communication | website | June 2018 |
New Position UW | Postdoc with Luke Zettlemoyer | June 2018 |
Dissertation Defense UT Austin | Continually Improving Grounded Natural Language Understanding through Human-Robot Dialog | April 2018 | |
• Fall 2024 [ongoing] | syllabus
• Spring 2024 | syllabus
• Spring 2023 | website | syllabus
• Fall 2022 | syllabus
• Spring 2022 | syllabus
Papers and Preprints |
2024 |
The American Sign Language Knowledge Graph: Infusing ASL Models with Linguistic Knowledge Lee Kezar, Nidhi Munikote, Zian Zeng, Zed Sevcikova Sehyr, Naomi Caselli, and Jesse Thomason. arXiv, 2024. categories: sign language, neurosymbolic, benchmark preprint paper @article{kezar:aslkg, title={The American Sign Language Knowledge Graph: Infusing {ASL} Models with Linguistic Knowledge}, author={Lee Kezar and Nidhi Munikote and Zian Zeng and Zed Sevcikova Sehyr and Naomi Caselli and Jesse Thomason}, journal={arXiv}, year={2024}, url={https://arxiv.org/abs/2411.03568} } |
When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models Ting-Yun Chang, Jesse Thomason, and Robin Jia. Empirical Methods in Natural Language Processing (EMNLP), 2024. categories: interpretability conference paper @inproceedings{chang:partsgtsums, title={When Parts are Greater Than Sums: Individual {LLM} Components Can Outperform Full Models}, author={Ting-Yun Chang and Jesse Thomason and Robin Jia}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2024}, url={https://arxiv.org/abs/2406.13131} } |
Evaluating Creativity and Deception in Large Language Models: A Simulation Framework for Multi-Agent Balderdash Parsa Hejabi, Elnaz Rahmati, Alireza S. Ziabari, Preni Golazizian, Jesse Thomason, and Morteza Dehghani. Wordplay: When Language Meets Games @ ACL, 2024. categories: benchmark, evaluation workshop paper @inproceedings{hejabi:balderdash, title={Evaluating Creativity and Deception in Large Language Models: A Simulation Framework for Multi-Agent {B}alderdash}, author={Parsa Hejabi and Elnaz Rahmati and Alireza S. Ziabari and Preni Golazizian and Jesse Thomason and Morteza Dehghani}, booktitle={Wordplay: When Language Meets Games @ ACL}, year={2024}, url={https://arxiv.org/abs/2411.10422} } |
Contrast Sets for Evaluating Language-Guided Robot Policies Abrar Anwar, Rohan Gupta, and Jesse Thomason. Conference on Robot Learning (CoRL), 2024. categories: physical robots, evaluation, language and robotics conference paper @inproceedings{anwar:robotcontrasteval, title={Contrast Sets for Evaluating Language-Guided Robot Policies}, author={Abrar Anwar and Rohan Gupta and Jesse Thomason}, booktitle={Conference on Robot Learning (CoRL)}, year={2024}, url={https://arxiv.org/abs/2406.13636} } |
ViSaRL: Visual Reinforcement Learning Guided by Human Saliency Anthony Liang, Jesse Thomason, and Erdem Biyik. Intelligent Robots and Systems (IROS), 2024. categories: physical robots conference paper @inproceedings{liang:visarl, title={{ViSaRL}: Visual Reinforcement Learning Guided by Human Saliency}, author={Anthony Liang and Jesse Thomason and Erdem Biyik}, booktitle={Intelligent Robots and Systems (IROS)}, year={2024}, url={https://arxiv.org/abs/2403.10940} } |
Selective "Selective Prediction": Reducing Unnecessary Abstention in Vision-Language Reasoning Tejas Srinivasan, Jack Hessel, Tanmay Gupta, Bill Yuchen Lin, Yejin Choi, Jesse Thomason, and Khyathi Raghavi Chandu. Findings of Association for Computational Linguistics (ACL Findings), 2024. categories: language and vision, neurosymbolic conference paper @inproceedings{srinivasan:recoverr, title={Selective {"}Selective Prediction{"}: Reducing Unnecessary Abstention in Vision-Language Reasoning}, author={Tejas Srinivasan and Jack Hessel and Tanmay Gupta and Bill Yuchen Lin and Yejin Choi and Jesse Thomason and Khyathi Raghavi Chandu}, booktitle={Findings of Association for Computational Linguistics (ACL Findings)}, year={2024}, url={https://arxiv.org/abs/2402.15610} } |
Generating Contextually-Relevant Navigation Instructions for Blind and Low Vision People Zain Merchant, Abrar Anwar, Emily Wang, Souti Chattopadhyay, and Jesse Thomason. Interactive AI for Human-Centered Robotics (InterAI) Workshop @ Ro-MAN, 2024. *Best Paper Award, 2nd Place. categories: language and vision, evaluation workshop paper |
The COLOSSEUM: A Benchmark for Evaluating Generalization for Robotic Manipulation Wilbert Pumacay, Ishika Singh, Jiafei Duan, Ranjay Krishna, Jesse Thomason, and Dieter Fox. Robotics: Science and Systems (RSS), 2024. categories: benchmark, physical robots, evaluation conference paper @inproceedings{pumacay:colosseum, title={{The COLOSSEUM}: A Benchmark for Evaluating Generalization for Robotic Manipulation}, author={Wilbert Pumacay and Ishika Singh and Jiafei Duan and Ranjay Krishna and Jesse Thomason and Dieter Fox}, booktitle={Robotics: Science and Systems (RSS)}, year={2024}, url={https://arxiv.org/abs/2402.08191} } |
Language Models can Infer Action Semantics for Classical Planners from Environment Feedback Wang Zhu, Ishika Singh, Robin Jia, and Jesse Thomason. arXiv, 2024. categories: language and planning, neurosymbolic preprint paper @article{zhu:psalm, title={Language Models can Infer Action Semantics for Classical Planners from Environment Feedback}, author={Wang Zhu and Ishika Singh and Robin Jia and Jesse Thomason}, journal={arXiv}, year={2024}, url={https://arxiv.org/abs/2406.02791} } |
TwoStep: Multi-agent Task Planning using Classical Planners and Large Language Models Ishika Singh, David Traum, and Jesse Thomason. arXiv, 2024. categories: language and planning, neurosymbolic preprint paper | website @article{singh:twostep, title={{TwoStep}: Multi-agent Task Planning using Classical Planners and Large Language Models}, author={Ishika Singh and David Traum and Jesse Thomason}, journal={arXiv}, year={2024}, url={https://arxiv.org/abs/2403.17246} } |
Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding Chancharik Mitra, Abrar Anwar, Rodolfo Corona, Dan Klein, Trevor Darrell, and Jesse Thomason. North American Chapter of the Association for Computational Linguistics (NAACL), 2024. categories: language and vision conference paper @inproceedings{mitra:whichone, title={Which One? Leveraging Context Between Objects and Multiple Views for Language Grounding}, author={Chancharik Mitra and Abrar Anwar and Rodolfo Corona and Dan Klein and Trevor Darrell and Jesse Thomason}, booktitle={North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2024}, url={https://arxiv.org/abs/2311.06694} } |
Do Localization Methods Actually Localize Memorized Data in LLMs? A Tale of Two Benchmarks Ting-Yun Chang, Jesse Thomason, and Robin Jia. North American Chapter of the Association for Computational Linguistics (NAACL), 2024. categories: interpretability conference paper @inproceedings{chang:localization, title={Do Localization Methods Actually Localize Memorized Data in {LLMs}? {A} Tale of Two Benchmarks }, author={Ting-Yun Chang and Jesse Thomason and Robin Jia}, booktitle={North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2024}, url={https://arxiv.org/abs/2311.09060} } |
Efficient End-to-End Visual Document Understanding with Rationale Distillation Wang Zhu, Alekh Agarwal, Mandar Joshi, Robin Jia, Jesse Thomason, and Kristina Toutanova. North American Chapter of the Association for Computational Linguistics (NAACL), 2024. categories: neurosymbolic, language and vision conference paper @inproceedings{zhu:vizdoc, title={Efficient End-to-End Visual Document Understanding with Rationale Distillation}, author={Wang Zhu and Alekh Agarwal and Mandar Joshi and Robin Jia and Jesse Thomason and Kristina Toutanova}, booktitle={North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2024}, url={https://arxiv.org/abs/2311.09612} } |
WinoViz: Probing Visual Properties of Objects Under Different States Woojeong Jin, Tejas Srinivasan, Jesse Thomason, and Xiang Ren. Workshop on Secure and Trustworthy Large Language Models (SeT LLM) @ ICLR, 2024. categories: benchmark, language and vision workshop paper @inproceedings{jin:winoviz, title={{WinoViz}: Probing Visual Properties of Objects Under Different States}, author={Woojeong Jin and Tejas Srinivasan and Jesse Thomason and Xiang Ren}, booktitle={Workshop on Secure and Trustworthy Large Language Models (SeT LLM) @ ICLR}, year={2024}, url={https://arxiv.org/abs/2402.13584} } |
2023 |
Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering Wang Zhu, Jesse Thomason, and Robin Jia. Empirical Methods in Natural Language Processing (EMNLP), 2023. categories: neurosymbolic, semantic parsing conference paper @inproceedings{zhu:chainofquestions, title={Chain-of-Questions Training with Latent Answers for Robust Multistep Question Answering}, author={Wang Zhu and Jesse Thomason and Robin Jia}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2023}, url={https://arxiv.org/abs/2305.14901} } |
Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation Yuliang Cai, Jesse Thomason, and Mohammad Rostami. Findings of Empirical Methods in Natural Language Processing (EMNLP Findings), 2023. categories: continual learning, language and vision conference paper @inproceedings{cai:taskattentive, title={Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation}, author={Yuliang Cai and Jesse Thomason and Mohammad Rostami}, booktitle={Findings of Empirical Methods in Natural Language Processing (EMNLP Findings)}, year={2023}, url={https://arxiv.org/abs/2303.14423} } |
Exploring Strategies for Efficient Real-World VLN Evaluation Abrar Anwar, Rohan Gupta, Elle Szabo, and Jesse Thomason. Workshop on Language and Robot Learning (LangRob) @ CoRL, 2023. categories: vln, language and robotics workshop paper @inproceedings{anwar:langrob23, title={Exploring Strategies for Efficient Real-World {VLN} Evaluation}, author={Abrar Anwar and Rohan Gupta and Elle Szabo and Jesse Thomason}, booktitle={Workshop on Language and Robot Learning (LangRob) @ CoRL}, year={2023}, url={https://openreview.net/forum?id=uABEHp6tjy} } |
The Sem-Lex Benchmark: Modeling ASL Signs and Their Phonemes Lee Kezar, Elana Pontecorvo, Adele Daniels, Connor Baer, Ruth Ferster, Lauren Berger, Jesse Thomason, Zed Sevcikova Sehyr, and Naomi Caselli. Conference on Computers and Accessibility (ASSETS), 2023. categories: sign language, benchmark conference paper @inproceedings{kezar:semlex, title={The {Sem-Lex} Benchmark: Modeling {ASL} Signs and Their Phonemes}, author={Lee Kezar and Elana Pontecorvo and Adele Daniels and Connor Baer and Ruth Ferster and Lauren Berger and Jesse Thomason and Zed Sevcikova Sehyr and Naomi Caselli}, booktitle={Conference on Computers and Accessibility (ASSETS)}, year={2023}, url={https://doi.org/10.1145/3597638.3608408} } |
Exploring Strategies for Modeling Sign Language Phonology Lee Kezar, Riley Carlin, Tejas Srinivasan, Zed Sevcikova Sehyr, Naomi Caselli, and Jesse Thomason. European Symposium on Artificial Neural Networks (ESANN), 2023. categories: sign language, continual learning conference paper @inproceedings{kezar:esann, title={Exploring Strategies for Modeling Sign Language Phonology}, author={Lee Kezar and Riley Carlin and Tejas Srinivasan and Zed Sevcikova Sehyr and Naomi Caselli and Jesse Thomason}, booktitle={European Symposium on Artificial Neural Networks (ESANN)}, year={2023}, url={https://www.esann.org/sites/default/files/proceedings/2023/ES2023-83.pdf} } |
RREx-BoT: Remote Referring Expressions with a Bag of Tricks Gunnar Sigurdsson, Jesse Thomason, Gaurav Sukhatme, and Robinson Piramuthu. Intelligent Robots and Systems (IROS), 2023. categories: physical robots, vln, language and robotics conference paper @inproceedings{sigurdsson:rrexbot, title={{RREx-BoT}: Remote Referring Expressions with a Bag of Tricks}, author={Gunnar Sigurdsson and Jesse Thomason and Gaurav Sukhatme and Robinson Piramuthu}, booktitle={Intelligent Robots and Systems (IROS)}, year={2023}, url={https://arxiv.org/abs/2301.12614} } |
ProgPrompt: Program generation for situated robot task planning using large language models Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. Autonomous Robots (AURO), 2023. categories: language and planning, language and robotics, physical robots journal paper | coverage @article{singh:progprompt:ar, title={{ProgPrompt}: Program generation for situated robot task planning using large language models}, author={Ishika Singh and Valts Blukis and Arsalan Mousavian and Ankit Goyal and Danfei Xu and Jonathan Tremblay and Dieter Fox and Jesse Thomason and Animesh Garg}, journal={Autonomous Robots (AURO)}, year={2023}, url={https://link.springer.com/article/10.1007/s10514-023-10135-3} } |
I2I: Initializing Adapters with Improvised Knowledge Tejas Srinivasan, Furong Jia, Mohammad Rostami, and Jesse Thomason. Conference on Lifelong Learning Agents (CoLLAs), 2023. categories: continual learning, language and vision conference paper @inproceedings{srinivasan:i2i, title={{I2I}: Initializing Adapters with Improvised Knowledge}, author={Tejas Srinivasan and Furong Jia and Mohammad Rostami and Jesse Thomason}, booktitle={Conference on Lifelong Learning Agents (CoLLAs)}, year={2023}, url={https://arxiv.org/abs/2304.02168} } |
Multimodal Speech Recognition for Language-Guided Embodied Agents Allen Chang, Xiaoyuan Zhu, Aarav Monga, Seoho Ahn, Tejas Srinivasan, and Jesse Thomason. Annual Conference of the International Speech Communication Association (INTERSPEECH), 2023. categories: language and vision, speech recognition conference paper @inproceedings{chang:embodiedspeech, title={Multimodal Speech Recognition for Language-Guided Embodied Agents}, author={Allen Chang and Xiaoyuan Zhu and Aarav Monga and Seoho Ahn and Tejas Srinivasan and Jesse Thomason}, booktitle={Annual Conference of the International Speech Communication Association (INTERSPEECH)}, year={2023}, url={https://arxiv.org/abs/2302.14030} } |
Iterative Vision-and-Language Navigation Jacob Krantz, Shurjo Banerjee, Wang Zhu, Jason J. Corso, Peter Anderson, Stefan Lee, and Jesse Thomason. Computer Vision and Pattern Recognition (CVPR), 2023. categories: continual learning, vln conference paper | website @inproceedings{krantz:ivln, title={Iterative Vision-and-Language Navigation}, author={Jacob Krantz and Shurjo Banerjee and Wang Zhu and Jason J. Corso and Peter Anderson and Stefan Lee and Jesse Thomason}, booktitle={Computer Vision and Pattern Recognition (CVPR)}, year={2023}, url={https://arxiv.org/abs/2210.03087} } |
Does VLN Pretraining Work with Nonsensical or Irrelevant Instructions? Wang Zhu, Ishika Singh, Yuan Huang, Robin Jia, and Jesse Thomason. Workshop on Open-Domain Reasoning Under Multi-Modal Settings (ODRUM) @ CVPR, 2023. categories: language and vision, vln workshop paper @inproceedings{zhu:nonsensevln, title={Does {VLN} Pretraining Work with Nonsensical or Irrelevant Instructions?}, author={Wang Zhu and Ishika Singh and Yuan Huang and Robin Jia and Jesse Thomason}, booktitle={Workshop on Open-Domain Reasoning Under Multi-Modal Settings (ODRUM) @ CVPR}, year={2023}, url={https://arxiv.org/abs/2311.17280} } |
Curriculum Learning for Data-Efficient Vision-Language Alignment Tejas Srinivasan, Xiang Ren, and Jesse Thomason. Workshop on Open-Domain Reasoning Under Multi-Modal Settings (ODRUM) @ CVPR, 2023. categories: language and vision workshop paper @inproceedings{srinivasan:tonics, title={Curriculum Learning for Data-Efficient Vision-Language Alignment}, author={Tejas Srinivasan and Xiang Ren and Jesse Thomason}, booktitle={Workshop on Open-Domain Reasoning Under Multi-Modal Settings (ODRUM) @ CVPR}, year={2023}, url={https://arxiv.org/abs/2207.14525} } |
ProgPrompt: Generating Situated Robot Task Plans using Large Language Models Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. International Conference on Robotics and Automation (ICRA), 2023. categories: language and planning, physical robots, language and robotics conference paper | website | coverage @inproceedings{singh:progprompt:icra, title={{ProgPrompt}: Generating Situated Robot Task Plans using Large Language Models}, author={Ishika Singh and Valts Blukis and Arsalan Mousavian and Ankit Goyal and Danfei Xu and Jonathan Tremblay and Dieter Fox and Jesse Thomason and Animesh Garg}, booktitle={International Conference on Robotics and Automation (ICRA)}, year={2023}, url={https://arxiv.org/abs/2209.11302} } |
Improving Sign Recognition with Phonology Lee Kezar, Jesse Thomason, and Zed Sevcikova Sehyr. European Chapter of the Association for Computational Linguistics (EACL), 2023. categories: sign language, language and vision conference paper @inproceedings{kezar:islr_phonology, title={Improving Sign Recognition with Phonology}, author={Lee Kezar and Jesse Thomason and Zed Sevcikova Sehyr}, booktitle={European Chapter of the Association for Computational Linguistics (EACL)}, year={2023}, url={https://arxiv.org/abs/2302.05759} } |
Geolocated Social Media Posts are Happier: Understanding the Characteristics of Check-in Posts on Twitter Julie Jiang, Jesse Thomason, Francesco Barbieri, and Emilio Ferrara. Web Sciences (WebSci), 2023. categories: language and vision conference paper @inproceedings{jiang:geolocatedhappy, title={Geolocated Social Media Posts are Happier: Understanding the Characteristics of Check-in Posts on Twitter}, author={Julie Jiang and Jesse Thomason and Francesco Barbieri and Emilio Ferrara}, booktitle={Web Sciences (WebSci)}, year={2023}, url={https://arxiv.org/abs/2207.10887} } |
Multimodal embodied attribute learning by robots for object-centric action policies Xiaohan Zhang, Saeid Amiri, Jivko Sinapov, Jesse Thomason, Peter Stone, and Shiqi Zhang. Autonomous Robots (AURO), 2023. categories: language and robotics journal paper @article{zhang:multimodal_embodied_ar23, title={Multimodal embodied attribute learning by robots for object-centric action policies}, author={Xiaohan Zhang and Saeid Amiri and Jivko Sinapov and Jesse Thomason and Peter Stone and Shiqi Zhang}, journal={Autonomous Robots (AURO)}, year={2023}, url={https://link.springer.com/article/10.1007/s10514-023-10098-5} } |
2022 |
CLIP-Nav: Using CLIP for Zero-Shot Vision-and-Language Navigation Vishnu Sashank Dorbala, Gunnar Sigurdsson, Robinson Piramuthu, Jesse Thomason, and Gaurav Sukhatme. Workshop on Language and Robot Learning (LangRob) @ CoRL, 2022. categories: vln workshop paper |
ALFRED-L: Investigating the Role of Language for Action Learning in Interactive Visual Environments Arjun Akula, Spandana Gella, Aishwarya Padmakumar, Mahdi Namazifar, Mohit Bansal, Jesse Thomason, and Dilek Hakkani-Tur. Empirical Methods in Natural Language Processing (EMNLP), 2022. categories: language and action, vln conference paper @inproceedings{akula:alfredl, title={{ALFRED-L}: Investigating the Role of Language for Action Learning in Interactive Visual Environments}, author={Arjun Akula and Spandana Gella and Aishwarya Padmakumar and Mahdi Namazifar and Mohit Bansal and Jesse Thomason and Dilek Hakkani-Tur}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2022}, url={https://aclanthology.org/2022.emnlp-main.636/} } |
Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems Wang Zhu, Jesse Thomason, and Robin Jia. Findings of Empirical Methods in Natural Language Processing (EMNLP Findings), 2022. categories: neurosymbolic, evaluation, language and vision conference paper | source @inproceedings{zhu:multi_image_contrast_vqa, title={Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems}, author={Wang Zhu and Jesse Thomason and Robin Jia}, booktitle={Findings of Empirical Methods in Natural Language Processing (EMNLP Findings)}, year={2022}, url={https://arxiv.org/abs/2210.15037} } |
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, and Jesse Thomason. Neural Information Processing Systems (NeurIPS), 2022. categories: language and vision, continual learning, benchmark conference paper | source @inproceedings{srinivasan:climb, title={{CLiMB}: A Continual Learning Benchmark for Vision-and-Language Tasks}, author={Tejas Srinivasan and Ting-Yun Chang and Leticia Leonor Pinto Alva and Georgios Chochlakis and Mohammad Rostami and Jesse Thomason}, booktitle={Neural Information Processing Systems (NeurIPS)}, year={2022}, url={https://arxiv.org/abs/2206.09059} } |
VAuLT: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations Georgios Chochlakis, Tejas Srinivasan, Jesse Thomason, and Shrikanth Narayanan. arXiv, 2022. categories: language and vision preprint paper | source @article{chocklakis:vault, title={{VAuLT}: Augmenting the Vision-and-Language Transformer with the Propagation of Deep Language Representations}, author={Georgios Chochlakis and Tejas Srinivasan and Jesse Thomason and Shrikanth Narayanan}, journal={arXiv}, year={2022}, url={https://arxiv.org/abs/2208.09021} } |
Interactive Learning from Natural Language and Demonstrations using Signal Temporal Logic Sara Mohammadinejad, Jesse Thomason, and Jyotirmoy V. Deshmukh. arXiv, 2022. categories: language and planning preprint paper @article{mohammadinejad:dialoguestl, title={Interactive Learning from Natural Language and Demonstrations using Signal Temporal Logic}, author={Sara Mohammadinejad and Jesse Thomason and Jyotirmoy V. Deshmukh}, journal={arXiv}, year={2022}, url={https://arxiv.org/abs/2207.00627} } |
Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions Jing Gu, Eliana Stefani, Qi Wu, Jesse Thomason, and Xin Eric Wang. Association for Computational Linguistics (ACL), 2022. categories: language and action, vln conference paper | source @inproceedings{gu:acl22, title={Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions}, author={Jing Gu and Eliana Stefani and Qi Wu and Jesse Thomason and Xin Eric Wang}, booktitle={Association for Computational Linguistics (ACL)}, year={2022}, url={https://arxiv.org/abs/2203.12667} } |
TEACh: Task-driven Embodied Agents that Chat Aishwarya Padmakumar, Jesse Thomason, Ayush Shrivastava, Patrick Lange, Anjali Narayan-Chen, Spandana Gella, Robinson Piramuthu, Gokhan Tur, and Dilek Hakkani-Tur. Conference on Artificial Intelligence (AAAI), 2022. categories: language and action, benchmark, dialogue conference paper | website | source | coverage @inproceedings{padmakumar:teach, title={{TEACh}: Task-driven Embodied Agents that Chat}, author={Aishwarya Padmakumar and Jesse Thomason and Ayush Shrivastava and Patrick Lange and Anjali Narayan-Chen and Spandana Gella and Robinson Piramuthu and Gokhan Tur and Dilek Hakkani-Tur}, booktitle={Conference on Artificial Intelligence (AAAI)}, year={2022}, url={https://arxiv.org/abs/2110.00534} } |
2021 |
LUMINOUS: Indoor Scene Generation for Embodied AI Challenges Yizhou Zhao, Kaixiang Lin, Zhiwei Jia, Qiaozi Gao, Govind Thattai, Jesse Thomason, and Gaurav Sukhatme. Controllable Generative Modeling in Language and Vision (CtrlGen) Workshop @ NeurIPS, 2021. categories: language and action workshop paper | source @inproceedings{zhao:luminous, title={{LUMINOUS}: Indoor Scene Generation for Embodied AI Challenges}, author={Yizhou Zhao and Kaixiang Lin and Zhiwei Jia and Qiaozi Gao and Govind Thattai and Jesse Thomason and Gaurav Sukhatme}, booktitle={Controllable Generative Modeling in Language and Vision (CtrlGen) Workshop @ NeurIPS}, year={2021}, url={https://arxiv.org/abs/2111.05527} } |
Language Grounding with 3D Objects Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, and Luke Zettlemoyer. Conference on Robot Learning (CoRL), 2021. categories: benchmark, language and vision conference paper | video | source @inproceedings{thomason:snare, title={Language Grounding with {3D} Objects}, author={Jesse Thomason and Mohit Shridhar and Yonatan Bisk and Chris Paxton and Luke Zettlemoyer}, booktitle={Conference on Robot Learning (CoRL)}, year={2021}, url={https://arxiv.org/abs/2107.12514} } |
Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion Alessandro Suglia, Qiaozi Gao, Jesse Thomason, Govind Thattai, and Gaurav Sukhatme. Novel Ideas in Learning-to-Learn through Interaction (NILLI) Workshop @ EMNLP, 2021. categories: language and action workshop paper | source @inproceedings{suglia:embert, title={Embodied {BERT}: A Transformer Model for Embodied, Language-guided Visual Task Completion}, author={Alessandro Suglia and Qiaozi Gao and Jesse Thomason and Govind Thattai and Gaurav Sukhatme}, booktitle={Novel Ideas in Learning-to-Learn through Interaction (NILLI) Workshop @ EMNLP}, year={2021}, url={https://arxiv.org/abs/2108.04927} } |
2020 |
The RobotSlang Benchmark: Dialog-guided Robot Localization and Navigation Shurjo Banerjee, Jesse Thomason, and Jason J. Corso. Conference on Robot Learning (CoRL), 2020. categories: vln, language and robotics, physical robots, dialogue conference paper | website | video | source | coverage @inproceedings{banerjee:corl20, title={{The RobotSlang Benchmark}: Dialog-guided Robot Localization and Navigation}, author={Shurjo Banerjee and Jesse Thomason and Jason J. Corso}, booktitle={Conference on Robot Learning (CoRL)}, year={2020}, url={https://arxiv.org/abs/2010.12639} } |
Experience Grounds Language Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. Empirical Methods in Natural Language Processing (EMNLP), 2020. categories: language and vision, language and robotics, language and action conference paper | video | coverage @inproceedings{bisk:emnlp20, title={Experience Grounds Language}, author={Yonatan Bisk and Ari Holtzman and Jesse Thomason and Jacob Andreas and Yoshua Bengio and Joyce Chai and Mirella Lapata and Angeliki Lazaridou and Jonathan May and Aleksandr Nisnevich and Nicolas Pinto and Joseph Turian}, booktitle={Empirical Methods in Natural Language Processing (EMNLP)}, year={2020}, url={https://arxiv.org/abs/2004.10151} } |
RMM: A Recursive Mental Model for Dialog Navigation Homero Roman Roman, Yonatan Bisk, Jesse Thomason, Asli Celikyilmaz, and Jianfeng Gao. Findings of Empirical Methods in Natural Language Processing (EMNLP Findings), 2020. categories: dialogue, vln — Also presented at the Third International Workshop on Spatial Language Understanding (SpLU), 2020 (SpLU website). conference paper | source @inproceedings{roman:emnlpf20, title={{RMM}: A Recursive Mental Model for Dialog Navigation}, author={Homero Roman Roman and Yonatan Bisk and Jesse Thomason and Asli Celikyilmaz and Jianfeng Gao}, booktitle={Findings of Empirical Methods in Natural Language Processing (EMNLP Findings)}, year={2020}, url={https://arxiv.org/abs/2005.00728} } |
Interpreting Black Box Models via Hypothesis Testing Collin Burns, Jesse Thomason, and Wesley Tansey. Foundations of Data Science (FODS), 2020. categories: interpretability conference papersource @inproceedings{burns:fods20, title={Interpreting Black Box Models via Hypothesis Testing}, author={Collin Burns and Jesse Thomason and Wesley Tansey}, booktitle={Foundations of Data Science (FODS)}, year={2020}, url={https://arxiv.org/abs/1904.00045} } |
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. Computer Vision and Pattern Recognition (CVPR), 2020. categories: language and action, benchmark conference paperwebsitevideosource @inproceedings{shridhar:cvpr20, title={{ALFRED}: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks}, author={Mohit Shridhar and Jesse Thomason and Daniel Gordon and Yonatan Bisk and Winson Han and Roozbeh Mottaghi and Luke Zettlemoyer and Dieter Fox}, booktitle={Computer Vision and Pattern Recognition (CVPR)}, year={2020}, url={https://arxiv.org/abs/1912.01734} } |
Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J. Mooney. The Journal of Artificial Intelligence Research (JAIR) 67, 2020. categories: physical robots, dialogue, language and robotics — Also presented at the IJCAI Journal Track (IJCAI), 2021. journal paper · IJCAI website @article{thomason:jair20, title={Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog}, author={Jesse Thomason and Aishwarya Padmakumar and Jivko Sinapov and Nick Walker and Yuqian Jiang and Harel Yedidsion and Justin Hart and Peter Stone and Raymond J. Mooney}, journal={The Journal of Artificial Intelligence Research (JAIR)}, volume={67}, year={2020}, url={https://jair.org/index.php/jair/article/view/11485} } |
2019 |
Vision-and-Dialog Navigation. Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. Conference on Robot Learning (CoRL), 2019. categories: dialogue, vln, benchmark. conference paper · website · video · demo · source · poster @inproceedings{thomason:corl19, title={Vision-and-Dialog Navigation}, author={Jesse Thomason and Michael Murray and Maya Cakmak and Luke Zettlemoyer}, booktitle={Conference on Robot Learning (CoRL)}, year={2019}, url={https://arxiv.org/abs/1907.04957} } |
Improving Robot Success Detection using Static Object Data. Rosario Scalise, Jesse Thomason, Yonatan Bisk, and Siddhartha Srinivasa. Intelligent Robots and Systems (IROS), 2019. categories: physical robots, language and vision, language and robotics — Also presented at the Combined Workshop on Spatial Language Understanding & Grounded Communication for Robotics (SpLU-RoboNLP), 2019. conference paper · video · source · slides · SpLU-RoboNLP poster @inproceedings{scalise:iros19, title={Improving Robot Success Detection using Static Object Data}, author={Rosario Scalise and Jesse Thomason and Yonatan Bisk and Siddhartha Srinivasa}, booktitle={Intelligent Robots and Systems (IROS)}, year={2019}, url={https://arxiv.org/abs/1904.01650} } |
Augmenting Knowledge through Statistical, Goal-oriented Human-Robot Dialog. Saeid Amiri, Sujay Bajracharya, Cihangir Goktolga, Jesse Thomason, and Shiqi Zhang. Intelligent Robots and Systems (IROS), 2019. categories: language and robotics, dialogue. conference paper · video · slides @inproceedings{amiri:iros19, title={Augmenting Knowledge through Statistical, Goal-oriented Human-Robot Dialog}, author={Saeid Amiri and Sujay Bajracharya and Cihangir Goktolga and Jesse Thomason and Shiqi Zhang}, booktitle={Intelligent Robots and Systems (IROS)}, year={2019}, url={https://arxiv.org/abs/1907.03390} } |
Shifting the Baseline: Single Modality Performance on Visual Navigation & QA. Jesse Thomason, Daniel Gordon, and Yonatan Bisk. North American Chapter of the Association for Computational Linguistics (NAACL), 2019. categories: language and vision, vln, evaluation. conference paper · poster @inproceedings{thomason:naacl19, title={Shifting the Baseline: Single Modality Performance on Visual Navigation \& {QA}}, author={Jesse Thomason and Daniel Gordon and Yonatan Bisk}, booktitle={North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2019}, url={https://arxiv.org/abs/1811.00613} } |
Improving Grounded Natural Language Understanding through Human-Robot Dialog. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, and Raymond J. Mooney. International Conference on Robotics and Automation (ICRA), 2019. categories: dialogue, language and robotics, physical robots — Also presented at the SIGDIAL Special Session on Physically Situated Dialogue (RoboDIAL), 2018. Also presented at the RSS Workshop on Models and Representations for Natural Human-Robot Communication (MRHRC), 2018. conference paper · video · poster · RoboDIAL paper · RoboDIAL video · MRHRC paper · MRHRC poster @inproceedings{thomason:icra19, title={Improving Grounded Natural Language Understanding through Human-Robot Dialog}, author={Jesse Thomason and Aishwarya Padmakumar and Jivko Sinapov and Nick Walker and Yuqian Jiang and Harel Yedidsion and Justin Hart and Peter Stone and Raymond J. Mooney}, booktitle={International Conference on Robotics and Automation (ICRA)}, year={2019}, url={https://arxiv.org/abs/1903.00122} } |
Prospection: Interpretable Plans From Language By Predicting the Future. Chris Paxton, Yonatan Bisk, Jesse Thomason, Arunkumar Byravan, and Dieter Fox. International Conference on Robotics and Automation (ICRA), 2019. categories: language and robotics, language and planning. conference paper @inproceedings{paxton:icra19, title={Prospection: Interpretable Plans From Language By Predicting the Future}, author={Chris Paxton and Yonatan Bisk and Jesse Thomason and Arunkumar Byravan and Dieter Fox}, booktitle={International Conference on Robotics and Automation (ICRA)}, year={2019}, url={https://arxiv.org/abs/1903.08309} } |
2018 |
Interaction and Autonomy in RoboCup@Home and Building-Wide Intelligence. Justin Hart, Harel Yedidsion, Yuqian Jiang, Nick Walker, Rishi Shah, Jesse Thomason, Aishwarya Padmakumar, Rolando Fernandez, Jivko Sinapov, Raymond J. Mooney, and Peter Stone. AI-HRI AAAI Fall Symposium Series (AAAI-FSS), 2018. categories: language and robotics. workshop paper @inproceedings{hart:aaai-fss18, title={Interaction and Autonomy in RoboCup@Home and Building-Wide Intelligence}, author={Justin Hart and Harel Yedidsion and Yuqian Jiang and Nick Walker and Rishi Shah and Jesse Thomason and Aishwarya Padmakumar and Rolando Fernandez and Jivko Sinapov and Raymond J. Mooney and Peter Stone}, booktitle={AI-HRI AAAI Fall Symposium Series (AAAI-FSS)}, year={2018}, url={https://arxiv.org/abs/1810.02919} } |
Multi-modal Predicate Identification using Dynamically Learned Robot Controllers. Saeid Amiri, Suhua Wei, Shiqi Zhang, Jivko Sinapov, Jesse Thomason, and Peter Stone. International Joint Conference on Artificial Intelligence (IJCAI), 2018. categories: physical robots, language and robotics. conference paper @inproceedings{amiri:ijcai18, title={Multi-modal Predicate Identification using Dynamically Learned Robot Controllers}, author={Saeid Amiri and Suhua Wei and Shiqi Zhang and Jivko Sinapov and Jesse Thomason and Peter Stone}, booktitle={International Joint Conference on Artificial Intelligence (IJCAI)}, year={2018}, url={https://www.ijcai.org/proceedings/2018/0645.pdf} } |
Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions. Jesse Thomason, Jivko Sinapov, Raymond J. Mooney, and Peter Stone. Conference on Artificial Intelligence (AAAI), 2018. categories: language and robotics — Also presented at the Workshop on Language Grounding for Robotics (RoboNLP), 2017. conference paper · source · slides · RoboNLP paper · RoboNLP poster @inproceedings{thomason:aaai18, title={Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions}, author={Jesse Thomason and Jivko Sinapov and Raymond J. Mooney and Peter Stone}, booktitle={Conference on Artificial Intelligence (AAAI)}, year={2018}, url={https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16512/} } |
Maximum-Variance Total Variation Denoising for Interpretable Spatial Smoothing. Wesley Tansey, Jesse Thomason, and James G. Scott. Conference on Artificial Intelligence (AAAI), 2018. categories: interpretability — Also presented at the ICML Workshop on Human Interpretability in Machine Learning (ICML-WHI), 2017. conference paper · poster · ICML-WHI paper · ICML-WHI poster @inproceedings{tansey:aaai18, title={Maximum-Variance Total Variation Denoising for Interpretable Spatial Smoothing}, author={Wesley Tansey and Jesse Thomason and James G. Scott}, booktitle={Conference on Artificial Intelligence (AAAI)}, year={2018}, url={https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16974} } |
2017 |
Opportunistic Active Learning for Grounding Natural Language Descriptions. Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney. Conference on Robot Learning (CoRL), 2017. categories: dialogue, physical robots, language and robotics. conference paper · video · source · poster @inproceedings{thomason:corl17, title={Opportunistic Active Learning for Grounding Natural Language Descriptions}, author={Jesse Thomason and Aishwarya Padmakumar and Jivko Sinapov and Justin Hart and Peter Stone and Raymond J. Mooney}, booktitle={Conference on Robot Learning (CoRL)}, year={2017}, url={http://proceedings.mlr.press/v78/thomason17a/thomason17a.pdf} } |
Improving Black-box Speech Recognition using Semantic Parsing. Rodolfo Corona, Jesse Thomason, and Raymond J. Mooney. International Joint Conference on Natural Language Processing (IJCNLP), 2017. categories: speech recognition, semantic parsing. conference paper · poster @inproceedings{corona:ijcnlp17, title={Improving Black-box Speech Recognition using Semantic Parsing}, author={Rodolfo Corona and Jesse Thomason and Raymond J. Mooney}, booktitle={International Joint Conference on Natural Language Processing (IJCNLP)}, year={2017}, url={https://www.aclweb.org/anthology/I17-2021/} } |
Multi-Modal Word Synset Induction. Jesse Thomason and Raymond J. Mooney. International Joint Conference on Artificial Intelligence (IJCAI), 2017. categories: language and vision. conference paper · poster · slides @inproceedings{thomason:ijcai17, title={Multi-Modal Word Synset Induction}, author={Jesse Thomason and Raymond J. Mooney}, booktitle={International Joint Conference on Artificial Intelligence (IJCAI)}, year={2017}, url={https://www.ijcai.org/proceedings/2017/0575.pdf} } |
Integrated Learning of Dialog Strategies and Semantic Parsing. Aishwarya Padmakumar, Jesse Thomason, and Raymond J. Mooney. European Chapter of the Association for Computational Linguistics (EACL), 2017. categories: semantic parsing, dialogue. conference paper @inproceedings{padmakumar:eacl17, title={Integrated Learning of Dialog Strategies and Semantic Parsing}, author={Aishwarya Padmakumar and Jesse Thomason and Raymond J. Mooney}, booktitle={European Chapter of the Association for Computational Linguistics (EACL)}, year={2017}, url={http://www.cs.utexas.edu/users/ml/papers/padmakumar.eacl17.pdf} } |
BWIBots: A platform for bridging the gap between AI and human-robot interaction research. Piyush Khandelwal, Shiqi Zhang, Jivko Sinapov, Matteo Leonetti, Jesse Thomason, Fangkai Yang, Ilaria Gori, Maxwell Svetlik, Priyanka Khante, Vladimir Lifschitz, J. K. Aggarwal, Raymond J. Mooney, and Peter Stone. The International Journal of Robotics Research (IJRR), 2017. categories: language and robotics. journal paper @article{khandelwal:ijrr17, title={BWIBots: A platform for bridging the gap between AI and human--robot interaction research}, author={Piyush Khandelwal and Shiqi Zhang and Jivko Sinapov and Matteo Leonetti and Jesse Thomason and Fangkai Yang and Ilaria Gori and Maxwell Svetlik and Priyanka Khante and Vladimir Lifschitz and J. K. Aggarwal and Raymond J. Mooney and Peter Stone}, journal={The International Journal of Robotics Research (IJRR)}, publisher={Sage}, year={2017}, url={http://www.cs.utexas.edu/users/pstone/Papers/bib2html-links/IJRR17-khandelwal.pdf} } |
2016 |
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy". Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Peter Stone, and Raymond J. Mooney. International Joint Conference on Artificial Intelligence (IJCAI), 2016. categories: physical robots, language and robotics, dialogue. conference paper · video · source · poster · slides @inproceedings{thomason:ijcai16, title={Learning Multi-Modal Grounded Linguistic Semantics by Playing ``{I} Spy''}, author={Jesse Thomason and Jivko Sinapov and Maxwell Svetlik and Peter Stone and Raymond J. Mooney}, booktitle={International Joint Conference on Artificial Intelligence (IJCAI)}, year={2016}, url={http://www.ijcai.org/Proceedings/16/Papers/491.pdf} } |
2015 |
Learning to Interpret Natural Language Commands through Human-Robot Dialog. Jesse Thomason, Shiqi Zhang, Raymond J. Mooney, and Peter Stone. International Joint Conference on Artificial Intelligence (IJCAI), 2015. categories: dialogue, semantic parsing, language and robotics, physical robots. conference paper · video · source · poster · slides @inproceedings{thomason:ijcai15, title={Learning to Interpret Natural Language Commands through Human-Robot Dialog}, author={Jesse Thomason and Shiqi Zhang and Raymond J. Mooney and Peter Stone}, booktitle={International Joint Conference on Artificial Intelligence (IJCAI)}, year={2015}, url={https://www.ijcai.org/Proceedings/15/Papers/273.pdf} } |
2014 |
Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild. Jesse Thomason, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, and Raymond J. Mooney. Conference on Computational Linguistics (COLING), 2014. categories: language and vision. conference paper · poster @inproceedings{thomason:coling14, title={Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild}, author={Jesse Thomason and Subhashini Venugopalan and Sergio Guadarrama and Kate Saenko and Raymond J. Mooney}, booktitle={Conference on Computational Linguistics (COLING)}, year={2014}, url={http://anthology.aclweb.org/C/C14/C14-1115.pdf} } |
2013 |
Prosodic Entrainment and Tutoring Dialogue Success. Jesse Thomason, Huy Nguyen, and Diane Litman. Artificial Intelligence in Education (AIED), 2013. categories: dialogue. conference paper · poster @inproceedings{thomason:aied13, title={Prosodic Entrainment and Tutoring Dialogue Success}, author={Jesse Thomason and Huy Nguyen and Diane Litman}, booktitle={Artificial Intelligence in Education (AIED)}, year={2013}, url={https://link.springer.com/chapter/10.1007/978-3-642-39112-5_104} } |
Differences in User Responses to a Wizard-of-Oz versus Automated System. Jesse Thomason and Diane Litman. North American Chapter of the Association for Computational Linguistics (NAACL), 2013. categories: dialogue. conference paper · slides @inproceedings{thomason:naacl13, title={Differences in User Responses to a Wizard-of-Oz versus Automated System}, author={Jesse Thomason and Diane Litman}, booktitle={North American Chapter of the Association for Computational Linguistics (NAACL)}, year={2013}, url={http://www.aclweb.org/anthology/N13-1098} } |
Thesis work |
2018 |
Continually Improving Grounded Natural Language Understanding through Human-Robot Dialog. Jesse Thomason. Department of Computer Science, The University of Texas at Austin, 2018. categories: dialogue, semantic parsing, language and vision, language and robotics. thesis paper · slides @phdthesis{thomason:thesis18, title={Continually Improving Grounded Natural Language Understanding through Human-Robot Dialog}, author={Jesse Thomason}, school={Department of Computer Science, The University of Texas at Austin}, year={2018}, url={http://www.cs.utexas.edu/users/ml/papers/thomason.thesis18.pdf} } |
2016 |
Continuously Improving Natural Language Understanding for Robotic Systems through Semantic Parsing, Dialog, and Multi-modal Perception. Jesse Thomason. Doctoral Dissertation Proposal, 2016. categories: semantic parsing, language and vision, dialogue, language and robotics. thesis paper · slides @inproceedings{thomason:proposal16, title={Continuously Improving Natural Language Understanding for Robotic Systems through Semantic Parsing, Dialog, and Multi-modal Perception}, author={Jesse Thomason}, booktitle={Doctoral Dissertation Proposal}, year={2016}, url={http://www.cs.utexas.edu/users/ml/papers/thomason.proposal16.pdf} } |