Robotic Grounding and LLMs: Advancements and Challenges
Casey Kennington<https://www.caseyreddkennington.com/>
Computer Science, Boise State
Thursday, 09-Nov, 10:30
https://uqam.zoom.us/j/83002459798
ABSTRACT: Large Language Models (LLMs) are primarily trained on large amounts of text, but there have also been noteworthy advancements in incorporating vision and other sensory information into LLMs. Does that mean LLMs are ready for embodied agents such as robots? While there have been important advancements, technical and theoretical challenges remain, including the use of closed language models like ChatGPT, model size requirements, data size requirements, speed requirements, representing the physical world, and updating the model with information about the world in real time. In this talk, I review recent advances in incorporating LLMs into robot platforms and discuss challenges and opportunities for future work.
Casey Kennington is an associate professor in the Department of Computer Science at Boise State University, where he does research on spoken dialogue systems on embodied platforms. His long-term research goal is to understand what it means for humans to understand, represent, and produce language. His National Science Foundation CAREER award focuses on enriching small language models with multimodal information such as vision and emotion for interactive learning on robotic platforms. Kennington obtained his PhD in Linguistics from Bielefeld University, Germany.
Josue Torres-Foncesca, Catherine Henry, Casey Kennington. Symbol and Communicative Grounding through Object Permanence with a Mobile Robot<https://aclanthology.org/2022.sigdial-1.14/>. In Proceedings of SigDial, 2022.
Clayton Fields and Casey Kennington. Vision Language Transformers: A Survey<https://arxiv.org/abs/2307.03254>. arXiv, 2023.
Casey Kennington. Enriching Language Models with Visually-grounded Word Vectors and the Lancaster Sensorimotor Norms<https://aclanthology.org/2021.conll-1.11/>. In Proceedings of CoNLL, 2021.
Casey Kennington. On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion<https://arxiv.org/abs/2307.04518>. arXiv, 2023.
SCHEDULE:
14-Sep: Benjamin Bergen (UCSD), LLMs are Impressive But We Still Need Grounding
21-Sep: Dimitri C. Mollo (Umeå), Grounding in LLMs: Functional AI Ontologies
28-Sep: Dave Chalmers (NYU), Does Thinking Require Grounding?
05-Oct: Ellie Pavlick (Brown), Symbols and Grounding in LLMs
12-Oct: Paul Rosenbloom (USC), Rethinking the Physical Symbol Systems Hypothesis
19-Oct: Melanie Mitchell (Santa Fe Institute), Language and Grounding
26-Oct: Dor Abrahamson (Berkeley), Enactive Symbol Grounding in Mathematics Education
02-Nov: Eric Schulz (Tübingen), Machine Psychology
09-Nov: Casey Kennington (Boise State), Robotic Grounding and LLMs
16-Nov: Usef Faghihi (UQTR), Causal Fuzzy Deep Learning Algorithms
23-Nov: Anders Søgaard (Copenhagen), LLMs: Indication or Representation?
30-Nov: Christoph Durt (Freiburg IAS), LLMs, Patterns, and Understanding
07-Dec: Jake Hanson (ASU), Falsifying the Integrated Information Theory of Consciousness
14-Dec: Frédéric Alexandre (Bordeaux), Continual Learning and Cognitive Control