Current
USC-Capital One Center for Responsible AI and Decision Making in Finance (CREDIF) grant for Human-in-the-loop Multi-Agentic AI Systems for Task-oriented Dialogue. PI: Jesse Thomason; Aug 2025–Jul 2026 We propose to empower multi-agentic AI systems with transparency and explainability by surfacing their reasoning processes in natural language, including frictive dialogue with human users to clarify information, agent-agent introspection to overcome single-agent uncertainties, and human-initiated interruption for human-in-the-loop control.
DARPA ARC Safe and Assured Foundation Robots for Open Environments (SAFRON) grant for Assuring Pretrained Robot Policy Behavior by Combining Statistical ML and Software Testing Methods (Award HR0011-25-3-0154). PI: Jesse Thomason; Co-PIs: Souti Chattopadhyay, William G.J. Halfond; Jul 2025–Jun 2026 We propose a method to assure that the robot behaves consistently with the expectations of a human operator by combining the strengths of statistical machine learning and systematic robustness assurance methods and applying them to pretrained vision-language-action (VLA) policies. We propose methods to quantify 1) whether inputs to such controllers are out-of-distribution and could lead to unpredictable behavior and 2) whether inputs are likely to lead to policy behavior that is not consistent with human expectations.
IARPA Bias Effects and Notable Generative AI Limitations (BENGAL) grant, in final negotiations with USC, for Enabling Efficient Unlearning in Pretrained Large Language Models through Information Localization. PI: Jesse Thomason; Co-PI: Robin Jia; Jan 2025–Jan 2027 The main objective of the proposed work is to enable safe information flow in sensitive environments by developing algorithms to identify and "unlearn" specified information in large language models (LLMs). Pretrained large language models have been shown to memorize sensitive information from their training data verbatim. We propose to formally define LLM memorization and introduce standardized evaluation protocols for methods that localize such memorized information and methods that then remove information from specified parameters in a pretrained model.
Army Research Laboratory: Army Artificial Intelligence Innovation Institute (A2I2) grant for Communicating with Natural Language Dialogue for Teams of Intelligent Systems and Humans (Award W911NF-23-2-0010). PI: David Traum; Co-PI: Jesse Thomason; Feb 2022–Nov 2025 We investigate closing the perception-action-communication loop between heterogeneous agents and human teammates using natural language dialogue, leveraging natural language generation and understanding to enable interpretable, dialogue-based communication in mixed teams of agents and humans.
Completed
DARPA Friction and Accountability in Conversational Transactions (FACT) AI Exploration grant for BECAREFUL: Building Embodied Conversational Agent Reliability by Exerting Friction through Uncertain Language (Award HR00112490376). PI: Dilek Hakkani-Tur; Co-PIs: Gokhan Tur, Malihe Alikhani, Jesse Thomason, Julia Hockenmaier; Mar 2024–Aug 2025 The objective of BECAREFUL is to enhance decision-making mechanisms for conversational embodied AI agents by reducing user over-reliance on possible misinformation from AI systems, whether arising from AI hallucinations, AI sycophancy, or misunderstanding the user in the low-bandwidth, unreliable communication situations common to humanitarian relief and search-and-rescue. Our approach involves creating dialogue systems that can track intentions based on the interaction history, assess the accountability of potential actions and responses, and encourage critical thinking by introducing friction when appropriate.
USC Undergraduate Research Associates Program (URAP) gift for Glass is to Shatter as Rubber is to Bounce: Analogies for Natural Language Processing. PI: Jesse Thomason; Aug 2023–Apr 2024 We propose to collect and curate a benchmark of linear word analogies to test the understanding capabilities of modern large language models. Benchmarks that probe physical and social understanding are difficult and expensive to create, so we propose analogies as a minimal probe that requires little human effort to create, and will collect analogies that are easy for humans and hard for models via an online game.
Laboratory for Analytic Sciences grant for Multimodal Transformers with Compositional Modules for Continual Learning. PI: Mohammad Rostami; Co-PI: Jesse Thomason; Jan 2023–Apr 2023 We aim to develop a method for learning compositional, adapter-based modules for multimodal transformers in continual learning (CL) settings. We will develop this method for language and vision classification tasks, then explore applying it to audiovisual classification and visuolinguistic, sequential decision making.
NSF Convergence Accelerator Track H grant for Determining Community Needs for Accessibility Tools that Facilitate Programming Education and Workforce Readiness for Persons with Disabilities (Award 2236320). PI: Maja Matarić; Co-PIs: Stephen Aguilar, Sook-Lei Liew, Gisele Ragusa, Jesse Thomason; Dec 2022–Nov 2024 The objective of this planning grant is to develop a means of ameliorating the negative labor outcomes faced by persons with disabilities (PWD) with a set of early prototypes of multimodal interfaces (e.g., speech, eye tracking, pedals) that enable PWD to learn programming skills. Our project aims to develop a means for PWD to train for—and ultimately enter—the tech workforce, bridging the programming career gap that currently blocks most PWD from such career paths.
UPenn's Alzheimer's Disease Research Center, supported by the NIH 'Penn Artificial Intelligence and Technology Collaboratory for Healthy Aging', subaward for An Accessible Machine Learning-Based ADRD Screening Tool for Families and Caregivers (Award 5-P30-AG-073105-02). PI: Maja Matarić; Co-PI: Jesse Thomason; Dec 2022–Nov 2024 The objective of the Penn AI Tech Collab is to perform early detection of dementia symptoms by leveraging multimodal speech, gaze, and pencil pressure inputs during a standard dementia diagnostic suite of tasks embedded in an easy-to-use mobile application.
USC Undergraduate Research Associates Program (URAP) gift for Language-Guided Mobile Manipulators. PI: Jesse Thomason; Aug 2022–Apr 2023 This work will build a real-world dataset to investigate challenges of robot learning with human collaborators. We will leverage our robotics infrastructure at USC to collect data and demonstrate learned robot policies that collaborate efficiently with humans.
Laboratory for Analytic Sciences grant for Continual Learning of Few Shot Learners for Natural Language Processing. PI: Mohammad Rostami; Co-PI: Jesse Thomason; May 2022–Dec 2022 The proposed work aims to enable NLP models to learn a downstream task using only a few task-specific annotated data points, relaxing the need to generate large-scale annotated datasets. We expect the learned model to exhibit strong few-shot generalization ability as a result of positive knowledge transfer from past learned tasks when a new task is learned, while not suffering from catastrophic forgetting of past learned tasks.
Amazon AWS Credits for Amazon Visiting Academics gift for Language-Guided Mobile Manipulators. PI: Jesse Thomason; Sep 2021–Aug 2022 We will overcome noisy actuators and sensors while executing language instructions by building and maintaining a visuolinguistic memory. Combining language and multimodal sensory inputs, such memory will ground language instructions in physical observations.