Robotic Grounding and LLMs: Advancements and Challenges
Casey Kennington (https://www.caseyreddkennington.com/), Computer Science, Boise State University
09-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798

ABSTRACT: Large Language Models (LLMs) are primarily trained on large amounts of text, but there have also been noteworthy advances in incorporating vision and other sensory information into LLMs. Does that mean LLMs are ready for embodied agents such as robots? Important technical and theoretical challenges remain, including the use of closed language models like ChatGPT, model size requirements, data size requirements, speed requirements, representing the physical world, and updating the model with information about the world in real time. In this talk, I explain recent advances in incorporating LLMs into robot platforms, the remaining challenges, and opportunities for future work.

Casey Kennington is Associate Professor in the Department of Computer Science at Boise State University, where he does research on spoken dialogue systems on embodied platforms. His long-term research goal is to understand what it means for humans to understand, represent, and produce language. His National Science Foundation CAREER award focuses on enriching small language models with multimodal information such as vision and emotion for interactive learning on robotic platforms. Kennington obtained his PhD in Linguistics from Bielefeld University, Germany.

Josue Torres-Fonseca, Catherine Henry, Casey Kennington. Symbol and Communicative Grounding through Object Permanence with a Mobile Robot. In Proceedings of SIGDIAL, 2022. https://aclanthology.org/2022.sigdial-1.14/
Clayton Fields and Casey Kennington. Vision Language Transformers: A Survey. arXiv, 2023. https://arxiv.org/abs/2307.03254
Casey Kennington. Enriching Language Models with Visually-grounded Word Vectors and the Lancaster Sensorimotor Norms. In Proceedings of CoNLL, 2021. https://aclanthology.org/2021.conll-1.11/
Casey Kennington. On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion. arXiv, 2023. https://arxiv.org/abs/2307.04518
« Algorithmes de Deep Learning flous causaux »
Usef Faghihi (https://github.com/joseffaghihi), Informatique, UQTR
16-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: I will give a brief overview of causal inference and of how fuzzy-logic rules can improve causal reasoning (Faghihi, Robert, Poirier & Barkaoui, 2020). I will then explain how we integrated fuzzy-logic rules with deep learning algorithms such as the Big Bird transformer architecture (Zaheer et al., 2020). I will show how our fuzzy deep-learning causal model outperformed ChatGPT on several datasets in reasoning tasks (Kalantarpour, Faghihi, Khelifi & Roucaut, 2023). I will also present some applications of our model in domains such as healthcare and industry. Finally, time permitting, I will present two key components of our causal reasoning model that we recently developed: the Probabilistic Easy Variational Causal Effect (PEACE) and the Probabilistic Variational Causal Effect (PACE) (Faghihi & Saki, 2023).
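To make the fuzzy-rule idea concrete, here is a minimal sketch in Python (my own illustration, not the speaker's fuzzy Big Bird integration or the PEACE/PACE measures): a single Mamdani-style rule, "IF exposure is HIGH THEN risk is HIGH", evaluated with a ramp membership function and centroid defuzzification. The variable names and scales are invented for the example.

```python
import numpy as np

# Hedged toy sketch: one Mamdani-style fuzzy rule,
# "IF exposure is HIGH THEN risk is HIGH".
# All variable names and scales are invented for illustration.

def ramp(x, a, b):
    """Increasing 'shoulder' membership: 0 below a, 1 above b, linear in between."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

exposure = 7.2                                  # observed value on a 0-10 scale
mu_high_exposure = ramp(exposure, 5.0, 10.0)    # degree to which exposure counts as HIGH

# Min-implication: the rule's firing strength clips the consequent's membership;
# defuzzify by taking the centroid of the clipped HIGH-risk fuzzy set.
risk_axis = np.linspace(0.0, 10.0, 101)
mu_high_risk = np.minimum([ramp(r, 5.0, 10.0) for r in risk_axis], mu_high_exposure)
estimated_risk = float(np.sum(risk_axis * mu_high_risk) / np.sum(mu_high_risk))

print(f"rule activation {mu_high_exposure:.2f} -> estimated risk {estimated_risk:.2f}/10")
```

Unlike a crisp if/else, the conclusion scales continuously with how strongly the antecedent holds.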
Usef Faghihi is an Assistant Professor at the Université du Québec à Trois-Rivières (UQTR). Previously, he was a professor at the University of Indianapolis in the United States. He obtained his PhD in Cognitive Computing from UQAM and then went to Memphis, in the United States, for a postdoctoral fellowship with Professor Stan Franklin, one of the pioneers of artificial intelligence. His research interests are cognitive architectures and their integration with deep learning algorithms.
LLMs: Indication or Representation?
Anders Søgaard (https://anderssoegaard.github.io/), Computer Science & Philosophy, University of Copenhagen
23-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: People talk to LLMs - their new assistants, tutors, or partners - about the world they live in, but are LLMs parroting, or do they (also) have internal representations of the world? There are five popular views, it seems:
(i) LLMs are all syntax, no semantics.
(ii) LLMs have inferential semantics, no referential semantics.
(iii) LLMs (also) have referential semantics through picturing.
(iv) LLMs (also) have referential semantics through causal chains.
(v) Only chatbots have referential semantics (through causal chains).

I present three sets of experiments suggesting that LLMs induce inferential and referential semantics, and that they do so by inducing human-like representations, lending some support to view (iii). I briefly compare the representations that seem to fall out of these experiments with the representations to which others have appealed in the past.
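As a hedged illustration of one standard way such comparisons can be run (my sketch, not a reconstruction of the experiments in the talk), representational similarity analysis correlates the pairwise-distance structure of model embeddings with human similarity judgments over the same items. The embeddings and ratings below are invented stand-ins.

```python
import numpy as np

# Hedged sketch: representational similarity analysis between (invented) model
# embeddings and (invented) human similarity ratings for the same items.

rng = np.random.default_rng(1)
items = ["dog", "wolf", "cat", "car", "truck"]
model_vecs = rng.normal(size=(len(items), 16))   # stand-in model embeddings
human_sim = np.array([                            # stand-in human ratings (0-1)
    [1.0, 0.9, 0.6, 0.1, 0.1],
    [0.9, 1.0, 0.5, 0.1, 0.1],
    [0.6, 0.5, 1.0, 0.1, 0.1],
    [0.1, 0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.1, 0.8, 1.0],
])

def upper(mat):
    """Flatten the strictly upper triangle of a square matrix."""
    i, j = np.triu_indices(len(mat), k=1)
    return mat[i, j]

# Pairwise distances in model space vs. human dissimilarity (1 - similarity).
model_dist = np.linalg.norm(model_vecs[:, None] - model_vecs[None, :], axis=-1)
human_dist = 1.0 - human_sim
r = np.corrcoef(upper(model_dist), upper(human_dist))[0, 1]
print(f"correlation between model and human dissimilarity structure: {r:.2f}")
```

With random stand-in embeddings the correlation hovers near zero; a high correlation on real data is what "human-like representations" claims typically point to.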
Anders Søgaard is University Professor of Computer Science and Philosophy and leads the newly established Center for Philosophy of Artificial Intelligence at the University of Copenhagen. Known primarily for work on multilingual NLP, multi-task learning, and using cognitive and behavioral data to bias NLP models, Søgaard is an ERC Starting Grant and Google Focused Research Award recipient and the author of Semi-Supervised Learning and Domain Adaptation for NLP (2013), Cross-Lingual Word Embeddings (2019), and Explainable Natural Language Processing (2021).

Søgaard, A. (2023). Grounding the Vector Space of an Octopus. Minds and Machines 33, 33-54. https://link.springer.com/article/10.1007/s11023-023-09622-4
Li, J., et al. (2023). Large Language Models Converge on Brain-Like Representations. arXiv preprint arXiv:2306.01930. https://arxiv.org/pdf/2306.01930.pdf
Abdou, M., et al. (2021). Can Language Models Encode Perceptual Structure Without Grounding? CoNLL. https://aclanthology.org/2021.conll-1.9/
Garneau, N., et al. (2021). Analogy Training Multilingual Encoders. AAAI. https://ojs.aaai.org/index.php/AAAI/article/view/17524
LLMs, Patterns, and Understanding
Christoph Durt (https://www.durt.de/), Philosophy, U. Heidelberg
30-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: It is widely known that the performance of LLMs is contingent on their being trained with very large text corpora. But what in the text corpora allows LLMs to extract the parameters that enable them to produce text that sounds as if it had been written by an understanding being? In my presentation, I argue that the text corpora reflect not just “language” but language use. Language use is permeated with patterns, and the statistical contours of the patterns of written language use are modelled by LLMs. LLMs do not model understanding directly, but statistical patterns that correlate with patterns of language use. Although the recombination of statistical patterns does not require understanding, it enables the production of novel text that continues a prompt and conforms to patterns of language use, and thus can make sense to humans.
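A minimal sketch (my illustration, not part of Durt's argument) of what recombining the statistical patterns of language use means at toy scale: a bigram model records word-to-word transition statistics from a tiny corpus and samples novel strings that conform to those patterns, with nothing resembling understanding involved.

```python
import random
from collections import defaultdict

# Hedged toy sketch: a bigram model of "patterns of language use". It stores
# word-to-word transition statistics and recombines them into novel strings
# that conform to those patterns, without any understanding.

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

transitions = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    transitions[w1].append(w2)          # record each observed continuation

random.seed(0)
word, generated = "the", ["the"]
for _ in range(8):
    word = random.choice(transitions[word])   # sample a plausible continuation
    generated.append(word)
print(" ".join(generated))
```

Scaled up by many orders of magnitude, with longer contexts and learned representations, this is the sense in which LLMs model the statistical contours of written language use rather than understanding itself.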
Christoph Durt is a philosophical and interdisciplinary researcher at Heidelberg University. He investigates the human mind and its relation to technology, especially AI. Going beyond the usual side-by-side comparison of artificial and human intelligence, he studies the multidimensional interplay between the two. This involves the study of human experience and language, as well as the relation between them. If you would like to join an international online exchange on these issues, please check the "courses and lectures" section on his website (http://www.durt.de/).
Durt, Christoph, Tom Froese, and Thomas Fuchs (preprint). "Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Use." http://philsci-archive.pitt.edu/21983/
Durt, Christoph (2023). "The Digital Transformation of Human Orientation: An Inquiry into the Dawn of a New Era." Winner of the $10,000 HFPO Essay Prize. http://www.bit.ly/3R5JdN7
Durt, Christoph (2022). "Artificial Intelligence and Its Integration into the Human Lifeworld." In The Cambridge Handbook of Responsible Artificial Intelligence, Cambridge University Press. https://doi.org/10.1017/9781009207898.007
Durt, Christoph (2020). "The Computation of Bodily, Embodied, and Virtual Reality." Winner of the Essay Prize "What Can Corporality as a Constitutive Condition of Experience (Still) Mean in the Digital Age?" Phänomenologische Forschungen, no. 2: 25-39. http://phaenomenologische-forschung.de/site/ophen/dgpf/dox/Durt.pdf
Falsifying the Integrated Information Theory of Consciousness
Jake R. Hanson (https://jakerhanson.weebly.com/), Sr. Data Scientist; PhD, Astrophysics
07-Dec jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: Integrated Information Theory (IIT) is a prominent theory of consciousness in contemporary neuroscience, based on the premise that feedback, quantified by a mathematical measure called Phi, corresponds to subjective experience. A straightforward application of the mathematical definition of Phi fails to produce a unique solution, owing to unresolved degeneracies inherent in the theory; this undermines nearly all published Phi values to date. As for the relationship between feedback and input-output behavior in finite-state systems, automata theory shows that feedback can always be disentangled from a system's input-output behavior, yielding Phi = 0 for all possible input-output behaviors. This process, known as "unfolding," can be accomplished without increasing the system's size, leading to the conclusion that Phi measures something fundamentally disconnected from anything that could ground the theory experimentally. These findings demonstrate that IIT lacks a well-defined mathematical framework and may either be already falsified or inherently unfalsifiable by scientific standards.
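A hedged toy illustration of the unfolding idea (my sketch, not taken from Hanson and Walker's papers): a recurrent system whose output depends on an internal feedback state and a feed-forward counterpart computed directly from the input history have identical input-output behavior, so a measure that depends only on feedback structure cannot be pinned down by behavior.

```python
from itertools import product

# Hedged toy sketch: two systems with identical input-output behavior, one
# recurrent (feedback) and one "unfolded" feed-forward over the input history.
# A quantity defined purely by feedback structure cannot tell them apart
# behaviorally.

def recurrent_system(inputs):
    """XOR-style system with feedback: each output uses an internal state
    that stores the previous output."""
    state = 0
    outputs = []
    for x in inputs:
        state = state ^ x          # feedback: new state depends on old state
        outputs.append(state)
    return outputs

def unfolded_system(inputs):
    """Feed-forward counterpart: each output is a running parity computed
    directly from the input history, with no persistent state."""
    return [sum(inputs[: t + 1]) % 2 for t in range(len(inputs))]

# Exhaustively confirm behavioral equivalence on all binary strings up to length 4.
for n in range(1, 5):
    for seq in product([0, 1], repeat=n):
        assert recurrent_system(seq) == unfolded_system(seq)
print("Identical input-output behavior; only the internal causal structure differs.")
```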
Jake Hanson is a Senior Data Scientist at a financial tech company in Salt Lake City, Utah. His doctoral research in Astrophysics from Arizona State University focused on the origin of life via the relationship between information processing and fundamental physics. He demonstrated that there were multiple foundational issues with IIT, ranging from poorly defined mathematics to problems with experimental falsifiability and pseudoscientific handling of core ideas.
Hanson, J.R., & Walker, S.I. (2019). Integrated information theory and isomorphic feed-forward philosophical zombies. Entropy, 21(11), 1073. https://www.mdpi.com/1099-4300/21/11/1073
Hanson, J.R., & Walker, S.I. (2021). Formalizing falsification for theories of consciousness across computational hierarchies. Neuroscience of Consciousness, 2021(2), niab014.
Hanson, J.R. (2021). Falsification of the Integrated Information Theory of Consciousness. Doctoral dissertation, Arizona State University. https://www.proquest.com/docview/2532092940?pq-origsite=gscholar&fromopenview=true
Hanson, J.R., & Walker, S.I. (2023). On the non-uniqueness problem in Integrated Information Theory. Neuroscience of Consciousness, 2023(1), niad014. https://doi.org/10.1093/nc/niad014
« Apprentissage continu et contrôle cognitif chez les humains et les LLMs » (Continuous Learning and Cognitive Control in Humans and LLMs)
Frédéric Alexandre (https://www.labri.fr/perso/falexand/), Inria, Bordeaux
14-Dec https://uqam.zoom.us/j/83002459798
Résumé : J’explore la différence entre l'efficacité de l'apprentissage humain et celle des grands modèles de langage en termes de temps de calcul et de coûts énergétiques. L'étude se focalise sur le caractère continu de l'apprentissage humain et les défis associés, tels que l'oubli catastrophique. Deux types de mémoires, la mémoire de travail et la mémoire épisodique, sont examinés. Le cortex préfrontal est décrit comme essentiel pour le contrôle cognitif et la mémoire de travail, tandis que l'hippocampe est central pour la mémoire épisodique. Alexandre suggère que ces deux régions collaborent pour permettre un apprentissage continu et efficace, facilitant ainsi la pensée et l'imagination.
Abstract: I explore the difference between the efficiency of human learning and that of large language models in terms of computational time and energy costs. The study focuses on the continuous nature of human learning and associated challenges, such as catastrophic forgetting. Two types of memory, working memory and episodic memory, are examined. The prefrontal cortex is described as essential for cognitive control and working memory, while the hippocampus is central for episodic memory. Alexandre suggests that these two regions collaborate to enable continuous and effective learning, thus facilitating thought and imagination.
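A minimal sketch of the catastrophic-forgetting problem mentioned above (my illustration, not one of the Mnemosyne team's models): a single linear classifier trained sequentially on two conflicting toy tasks, with no rehearsal and no complementary memory system, loses most of its competence on the first task.

```python
import numpy as np

# Hedged toy sketch: catastrophic forgetting in a logistic-regression classifier
# trained sequentially on two tasks that demand conflicting decision boundaries.

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs; label 1 for points near +center, 0 for points near -center."""
    X = np.vstack([rng.normal(center, 0.5, (100, 2)), rng.normal(-center, 0.5, (100, 2))])
    y = np.array([1] * 100 + [0] * 100)
    return X, y

def train(w, X, y, epochs=200, lr=0.1):
    """Plain gradient descent on logistic loss; no rehearsal of earlier data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(int) == y).mean())

task_a = make_task(np.array([2.0, 2.0]))
task_b = make_task(np.array([2.0, -2.0]))   # requires an orthogonal boundary

w = np.zeros(2)
w = train(w, *task_a)
print("Task A accuracy after training on A:", accuracy(w, *task_a))
w = train(w, *task_b)                        # continue training on Task B only
print("Task A accuracy after training on B:", accuracy(w, *task_a))  # drops toward chance
```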
Frédéric Alexandre is a research director at Inria and heads the Mnemosyne team in Bordeaux, which specializes in artificial intelligence and computational neuroscience. The team studies the brain's different forms of memory and their roles in cognitive functions such as reasoning and decision-making. They explore the dichotomy between explicit and implicit memories and how the two interact. Their recent projects range from language acquisition to planning and deliberation. The resulting models are validated experimentally and have applications in medicine and industry as well as in the humanities and social sciences, notably education, law, linguistics, economics, and philosophy.
Frédéric Alexandre. A global framework for a systemic view of brain modeling. Brain Informatics, 2021, 8(1). https://braininformatics.springeropen.com/articles/10.1186/s40708-021-00126-4
Snigdha Dagar, Frédéric Alexandre, Nicolas P. Rougier. From concrete to abstract rules: A computational sketch. 15th International Conference on Brain Informatics, Jul 2022. https://inria.hal.science/hal-03695814
Randa Kassab, Frédéric Alexandre. Pattern Separation in the Hippocampus: Distinct Circuits under Different Conditions. Brain Structure and Function, 2018, 223(6), pp. 2785-2808. https://link.springer.com/article/10.1007/s00429-018-1659-4
Hugo Chateau-Laurent, Frédéric Alexandre. The Opportunistic PFC: Downstream Modulation of a Hippocampus-inspired Network is Optimal for Contextual Memory Recall. 36th Conference on Neural Information Processing Systems, Dec 2022. https://hal.science/hal-03885715
Pramod Kaushik, Jérémie Naudé, Surampudi Bapi Raju, Frédéric Alexandre. A VTA GABAergic computational model of dissociated reward prediction error computation in classical conditioning. Neurobiology of Learning and Memory, 2022, 193, 107653. https://www.sciencedirect.com/science/article/abs/pii/S1074742722000776
Mechanistic Explanation in Deep Learning
Raphaël Millière, Department of Philosophy, Macquarie University
Thursday/jeudi 10h30 am, 11 January 2024
https://uqam.zoom.us/j/87530510617
Abstract: Deep neural networks such as large language models (LLMs) have achieved impressive performance across almost every domain of natural language processing, but there remains substantial debate about which cognitive capabilities can be ascribed to these models. Drawing inspiration from mechanistic explanations in life sciences, the nascent field of "mechanistic interpretability" seeks to reverse-engineer human-interpretable features to explain how LLMs process information. This raises some questions: (1) Are causal claims about neural network components, based on coarse intervention methods (such as “activation patching”), genuine mechanistic explanations? (2) Does the focus on human-interpretable features risk imposing anthropomorphic assumptions? My answer will be "yes" to (1) and "no" to (2), closing with a discussion of some ongoing challenges.
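For readers unfamiliar with the intervention method named in the abstract, here is a hedged toy sketch of activation patching (mine, not Millière's and not any particular interpretability library): cache an intermediate activation from a clean run, splice it into a corrupted run, and measure how much of the clean output is restored. The network and inputs are invented.

```python
import numpy as np

# Hedged toy sketch of activation patching. If patching a component's activation
# from a clean run into a corrupted run restores the clean behavior, that
# component is treated as causally implicated in the computation.

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))  # toy 2-layer network

def forward(x, patch_hidden=None):
    h = np.tanh(x @ W1)            # intermediate activation (the "site" to patch)
    if patch_hidden is not None:
        h = patch_hidden           # intervention: overwrite with the cached value
    return h @ W2

x_clean = np.array([1.0, 0.0, 1.0, 0.0])
x_corrupt = np.array([0.0, 1.0, 0.0, 1.0])

h_clean = np.tanh(x_clean @ W1)    # cache the clean activation
y_clean = forward(x_clean)
y_corrupt = forward(x_corrupt)
y_patched = forward(x_corrupt, patch_hidden=h_clean)

# Relative distance to the clean output after patching (0.0 = fully restored).
# Here patching the whole hidden layer trivially restores the clean output;
# real interpretability work patches individual components and measures
# partial restoration.
restored = np.linalg.norm(y_patched - y_clean) / np.linalg.norm(y_corrupt - y_clean)
print(f"relative distance to clean output after patching: {restored:.3f}")
```

Question (1) in the abstract asks whether causal claims based on such coarse interventions count as genuine mechanistic explanations.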
Raphaël Millière is Lecturer in Philosophy of Artificial Intelligence at Macquarie University in Sydney, Australia. His interests are in the philosophy of artificial intelligence, cognitive science, and mind, particularly in understanding artificial neural networks based on deep learning architectures such as Large Language Models. He has investigated syntactic knowledge, semantic competence, compositionality, variable binding, and grounding.
Elhage, N., et al. (2021). A mathematical framework for transformer circuits. Transformer Circuits Thread. https://transformer-circuits.pub/2021/framework/index.html
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about Mechanisms. Philosophy of Science, 67(1), 1-25. https://doi.org/10.1086/392759
Millière, R. (2023). The Alignment Problem in Context. arXiv preprint arXiv:2311.02147. https://arxiv.org/abs/2311.02147
Mollo, D. C., & Millière, R. (2023). The vector grounding problem. arXiv preprint arXiv:2304.01481. https://arxiv.org/abs/2304.01481
Yousefi, S., et al. (2023). In-Context Learning in Large Language Models: A Neuroscience-inspired Analysis of Representations. arXiv preprint arXiv:2310.00313. https://arxiv.org/abs/2310.00313
Toward AGI via Embodied Neural-Symbolic-Evolutionary Cognition
Ben Goertzel (https://singularitynet.io/), SingularityNET
jeudi/thursday 10h30 am 18 January, 2024
https://uqam.zoom.us/s/9921472228
ABSTRACT: A concrete path toward AGI with capability at the human level and beyond is outlined, centered on a common mathematical meta-representation capable of integrating neural, symbolic, evolutionary and autopoietic aspects of intelligence. The instantiation of these ideas in the OpenCog Hyperon software framework is discussed. An in-progress research programme is reviewed, in which this sort of integrative AGI system is induced to ground its natural language dialogue in its experience, via embodiment in physical robots and virtual-world avatars.
Ben Goertzel (https://scholar.google.ca/citations?user=kTfdhRcAAAAJ&hl=en&oi=ao) is a cross-disciplinary scientist, entrepreneur and author. He leads the SingularityNET Foundation, the OpenCog Foundation, and the AGI Society which runs the annual Artificial General Intelligence conference. His research work encompasses multiple areas including artificial general intelligence, natural language processing, cognitive science, machine learning, computational finance, bioinformatics, virtual worlds, gaming, parapsychology, theoretical physics and more. He has published 25+ scientific books, ~150 technical papers, and numerous journalistic articles, and given talks at a vast number of events of all sorts around the globe.
Goertzel, B. (2023). Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs. arXiv preprint arXiv:2309.10371. https://arxiv.org/abs/2309.10371
Rodionov, S., Goertzel, Z. A., & Goertzel, B. (2023). An Evaluation of GPT-4 on the ETHICS Dataset. arXiv preprint arXiv:2309.10492. https://arxiv.org/abs/2309.10492
Huang, K., Wang, Y., Goertzel, B., & Saliba, T. (2023). ChatGPT and Web3 Applications. In Beyond AI: ChatGPT, Web3, and the Business Landscape of Tomorrow (pp. 69-95). Cham: Springer Nature Switzerland.
Language Writ Large: LLMs, ChatGPT, Meaning and Understanding
Stevan Harnad UQÀM, McGill
Thursday, 25 January 2024, 10h30 am
ZOOM LINK: https://uqam.zoom.us/s/9921472228
*The scheduled speaker for this date, Sander vandeCruys, U Antwerp, has contracted long-covid; his seminar is postponed.
ABSTRACT: Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations, and their huge number of parameters, its next-word training, etc.). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved to be able to do with these resources. It has even driven some of us to conclude that it actually understands. It’s not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign “biases” -- convergent constraints that emerge at LLM-scale that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself, at LLM-scale, and they are closely linked to what it is that ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These benign biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the “mirroring” of language production and comprehension, (4) iconicity in propositions at LLM-scale, (5) computational counterparts of human “categorical perception” in category learning by neural nets, and perhaps also (6) a conjecture by Chomsky about the laws of thought.
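One of these benign biases, the circularity of verbal definition, can be made concrete with a toy sketch (mine, loosely inspired by Vincent-Lamarre et al. 2016 rather than their actual algorithm): treat a tiny dictionary as a graph of definitions and search for a minimal grounding set of words that, once understood non-verbally, lets every remaining word be learned from definitions alone.

```python
from itertools import combinations

# Hedged toy sketch: a tiny circular "dictionary" where every word is defined
# in terms of other words. A "grounding set" is a subset of words that, if
# already understood (e.g., via direct sensorimotor grounding), lets all the
# remaining words be learned from definitions alone. The lexicon is invented.

dictionary = {
    "animal": {"living", "thing"},
    "dog":    {"animal", "barks"},
    "barks":  {"dog", "sound"},
    "sound":  {"thing"},
    "living": {"thing"},
    "thing":  {"thing"},          # fully circular: defined only via itself
}

def learnable(grounded):
    """Expand the set of known words until no further definition can be learned;
    return True if the whole vocabulary becomes known."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defs in dictionary.items():
            if word not in known and defs <= known:
                known.add(word)
                changed = True
    return known == dictionary.keys()

# Brute-force the smallest grounding sets for this toy lexicon.
words = sorted(dictionary)
for size in range(1, len(words) + 1):
    hits = [c for c in combinations(words, size) if learnable(c)]
    if hits:
        print(f"minimal grounding sets (size {size}):", hits)
        break
```

Even in this six-word lexicon, purely verbal definitions bottom out in circles ("dog" needs "barks", "barks" needs "dog"), so some words have to be grounded some other way before the rest can ride on indirect verbal grounding.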
Stevan Harnad (https://scholar.google.ca/citations?user=_HQz-vEAAAAJ&hl=en&oi=ao) is Professor of psychology and cognitive science at UQÀM. His research is on category-learning, symbol-grounding, language-evolution, and Turing-Testing.
Discussion of this talk with ChatGPT: https://generic.wordpress.soton.ac.uk/skywritings/2024/01/14/language-writ-large-llms-chatgpt-meaning-and-understanding/
Bonnasse-Gahot, L., & Nadal, J. P. (2022). Categorical perception: a groundwork for deep learning. Neural Computation, 34(2), 437-475. https://direct.mit.edu/neco/article/34/2/437/107914
Harnad, S. (2012). From sensorimotor categories and pantomime to grounded symbols and propositions. In: Gibson, K.R. & Tallerman, M. (eds.) The Oxford Handbook of Language Evolution, 387-392. https://eprints.soton.ac.uk/271439/1/Harnad-Tallerman-Gibsonrev.pdf
Harnad, S. (2008). The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence. In: Epstein, R., Roberts, G., & Beber, G. (eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, pp. 23-66. https://eprints.soton.ac.uk/262954/1/turing.html
Thériault, C., Pérez-Gay, F., Rivas, D., & Harnad, S. (2018). Learning-induced categorical perception in a neural network model. arXiv preprint arXiv:1805.04567. https://arxiv.org/pdf/1805.04567.pdf
Vincent-Lamarre, P., Blondin-Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The latent structure of dictionaries. Topics in Cognitive Science, 8(3), 625-659. https://onlinelibrary.wiley.com/doi/full/10.1111/tops.12211
Pérez-Gay Juárez, F., Sicotte, T., Thériault, C., & Harnad, S. (2019). Category learning can alter perception and its neural correlates. PLoS ONE, 14(12), e0226000.
Learning Categories by Creating New Descriptions
Robert Goldstone (https://pc.cogs.indiana.edu/people/), Indiana University
jeudi/thursday 10h30 am Feb 1 2024
https://uqam.zoom.us/s/9921472228
ABSTRACT: In Bongard problems, problem-solvers must come up with a rule for distinguishing visual scenes that fall into two categories. Only a handful of examples of each category are presented. This requires the open-ended creation of new descriptions. Physical Bongard Problems (PBPs) require perceiving and predicting the spatial dynamics of the scenes. We compare the performance of a new computational model (PATHS) to human performance. During continual perception of new scene descriptions over the course of category learning, hypotheses are constructed by combining descriptions into rules for distinguishing the categories. Spatially or temporally juxtaposing similar scenes promotes category learning when the scenes belong to different categories but hinders learning when the similar scenes belong to the same category.
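As a hedged, drastically simplified sketch of rule construction from scene descriptions (my illustration, not the PATHS model): scenes are described by a few invented physical features, and single-feature threshold hypotheses are searched until one separates the two categories.

```python
# Hedged toy sketch: induce a simple rule that separates two categories of
# "scenes", each described by perceptual features, by searching over
# single-feature threshold hypotheses. Features and values are invented.

left_scenes = [            # category A: e.g., scenes where objects end up close together
    {"num_objects": 2, "max_height": 1.0, "final_spread": 0.5},
    {"num_objects": 3, "max_height": 2.0, "final_spread": 0.7},
]
right_scenes = [           # category B: scenes where objects end up spread out
    {"num_objects": 2, "max_height": 1.5, "final_spread": 2.4},
    {"num_objects": 3, "max_height": 1.0, "final_spread": 3.1},
]

def candidate_rules(scenes_a, scenes_b):
    """Yield (feature, threshold) hypotheses built from observed feature values."""
    for feature in scenes_a[0]:
        values = sorted({s[feature] for s in scenes_a + scenes_b})
        for lo, hi in zip(values, values[1:]):
            yield feature, (lo + hi) / 2

# Search hypotheses until one cleanly distinguishes the two categories.
for feature, threshold in candidate_rules(left_scenes, right_scenes):
    if all(s[feature] < threshold for s in left_scenes) and \
       all(s[feature] >= threshold for s in right_scenes):
        print(f"rule found: category A iff {feature} < {threshold:.2f}")
        break
```

The real problems are much harder because the useful descriptions (here, hand-coded features) must themselves be constructed during perception, which is the point of the talk.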
Robert Goldstone is a Distinguished Professor in the Department of Psychological and Brain Sciences and Program in Cognitive Science at Indiana University. His research interests include concept learning and representation, perceptual learning, educational applications of cognitive science, and collective behavior.
Goldstone, R. L., Dubova, M., Aiyappa, R., & Edinger, A. (2023). The spread of beliefs in partially modularized communities. Perspectives on Psychological Science, 0(0). https://doi.org/10.1177/17456916231198238
Goldstone, R. L., Andrade-Lotero, E., Hawkins, R. D., & Roberts, M. E. (2023). The emergence of specialized roles within groups. Topics in Cognitive Science, DOI: 10.1111/tops.12644.
Weitnauer, E., Goldstone, R. L., & Ritter, H. (2023). Perception and simulation during concept learning. Psychological Review. https://doi.org/10.1037/rev0000433
2023-2024 seminar schedule:

14-Sep-2023  Benjamin Bergen (UCSD): LLMs are Impressive But We Still Need Grounding
21-Sep  Dimitri C. Mollo (U Umeå): Grounding in LLMs: Functional AI Ontologies
28-Sep  Dave Chalmers (NYU): Does Thinking Require Grounding?
05-Oct  Ellie Pavlick (Brown U): Symbols and Grounding in LLMs
12-Oct  Paul Rosenbloom (USC): Rethinking the Physical Symbol Systems Hypothesis
19-Oct  Melanie Mitchell (Santa Fe Institute): Language and Grounding
26-Oct  Dor Abrahamson (UC Berkeley): Enactive Symbol Grounding in Mathematics Education
02-Nov
09-Nov  Eric Schulz (Tübingen): Machine Psychology; Casey Kennington (Boise State U): Robotic Grounding and LLMs
16-Nov  Usef Faghihi (UQTR): « Algorithmes de Deep Learning flous causaux »
23-Nov  Anders Søgaard (U Copenhagen): LLMs: Indication or Representation?
30-Nov  Christoph Durt (U Heidelberg): LLMs, Patterns, and Understanding
07-Dec  Jake Hanson (ASU): Falsifying the Integrated Information Theory of Consciousness
14-Dec  Frédéric Alexandre (Inria Bordeaux): « Apprentissage continu et contrôle cognitif »
11-Jan-2024  Raphaël Millière (Macquarie U): Mechanistic Explanation in Deep Learning
18-Jan  Ben Goertzel (SingularityNET): AGI and Symbol Grounding
25-Jan  Stevan Harnad (UQÀM, McGill): Language Writ Large: LLMs, ChatGPT, Meaning and Understanding
01-Feb  Robert Goldstone (Indiana U): New Perceptual Descriptions During Category Learning
15-Feb  Alessandro Lenci (U Pisa): Human and Artificial Language Understanding Gap
22-Feb  Gary Lupyan (U Wisconsin): What Counts As Understanding
07-Mar  Erica Huynh (McGill U): Musical Timbre Category Learning
14-Mar  Andy Lücking (U Frankfurt): Deixis, Reference and Iconicity
21-Mar  Pierre-Yves Oudeyer (INRIA Bordeaux): Autotelic Agents that Use and Ground LLMs
28-Mar  Matt Fredrikson (Carnegie Mellon): Transferable Attacks on Aligned Language Models