Intuitive Physical Reasoning and Mental Simulation
Todd Gureckis, Psychology, NYU
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am, December 15, 2022
Zoom: https://uqam.zoom.us/j/88481835073 (videos of past seminars: https://youtu.be/XePaBMc_HFg)

Abstract: The ability to reason about the physics of our world (e.g., what arrangements of objects are stable, how things will fall or move under a force) is central to human intelligence. One influential hypothesis is that this capacity stems from the ability to perform "mental simulations" of physical events (in effect, playing a mental "movie" of the future evolution of a scene according to the laws of physics). In this talk, I'll try to pin down several core commitments of the mental simulation approach that must be present for the general theory to be viable. I will then describe experiments we recently conducted to test these commitments. Along the way, we stumbled into several curious and novel errors and biases in human physical reasoning ability that we believe represent limits to the universality of contemporary simulation theories. If there is time, I will discuss a related project examining how efficient or optimal people are when they "experiment" in the physical world in order to learn the covert properties of objects, such as mass, or attractive/repulsive forces like magnetism.
Todd M. Gureckis, Professor of Psychology, New York University, studies how people actively explore their world in order to learn, including everyday reasoning capacities for the physical and social world. His research combines methods from computational modeling, developmental psychology, cognitive neuroscience, and online data collection. He is the founder and a lead developer of psiTurk (https://psiturk.org/), a tool for facilitating online experiments that is used in hundreds of research labs. His work has been recognized by the NSF CAREER award, the Presidential Early Career Award for Scientists and Engineers (PECASE) from the Office of Science and Technology Policy at the White House, the James S. McDonnell Foundation Scholar award, and several paper and conference awards with his students, including the Marr Prize from the Cognitive Science Society and the Clifford T. Morgan Prize from the Psychonomic Society. He has served as Associate Editor for Cognitive Science, Topics in Cognitive Science, and Computational Brain & Behavior.
References
https://gureckislab.org/
https://gureckislab.org/papers/#/ref/ludwin2021limits
https://gureckislab.org/papers/#/ref/ludwinpeery2020broken
https://gureckislab.org/papers/#/ref/bramley2018intuitive
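One common way to make the mental-simulation hypothesis concrete is as "noisy simulation": run an approximate physics simulation many times under perceptual noise and read graded judgments off the sample statistics. The sketch below is only an illustration of that general idea, not the speaker's model; the block geometry, noise level, and stability rule are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_tower_falls(base_x, top_x, noise_sd=0.1, n_sims=2000):
    """Monte-Carlo 'noisy simulation' sketch: estimate the probability that a
    two-block tower topples, assuming the top block's perceived horizontal
    position is corrupted by Gaussian noise and the tower falls whenever the
    (perturbed) top-block centre overhangs the base block's half-width."""
    base_half_width = 0.5
    # Sample many noisy 'mental' readings of the top block's position
    samples = top_x + rng.normal(0.0, noise_sd, size=n_sims)
    return float(np.mean(np.abs(samples - base_x) > base_half_width))

print(p_tower_falls(0.0, 0.0))    # well-centred: judged almost surely stable
print(p_tower_falls(0.0, 0.45))   # near the edge: graded, uncertain judgment
```

The graded output for near-threshold configurations is the signature such accounts use to explain why human stability judgments are probabilistic rather than all-or-none.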
The enduring myth of Categorical Perception: A view from its source in Speech Perception
Bob McMurray, Dept. of Psychological and Brain Sciences and Dept. of Linguistics, University of Iowa, and Haskins Laboratories
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST), February 2, 2023
Zoom *new*: https://uqam.zoom.us/j/89902403751

Abstract: Categorical perception (CP) is the finding from speech perception with the largest impact on cognitive science. However, within speech perception, it is known to be an artifact of task demands. CP is empirically defined as a relationship between phoneme identification and discrimination. As discrimination tasks do not appear to require categorization, this relationship was thought to imply that listeners perceive speech solely in terms of categories. However, 50 years of work using discrimination tasks, priming, the Visual World Paradigm, and event-related potentials has rejected the strongest forms of CP and provides little strong evidence for any form of it. This talk reviews the origins and impact of this scientific meme and the work challenging it. Critically, CP stands in the way of a modern theoretical synthesis in speech perception in which listeners preserve fine-grained detail to enable more flexible processing. The demise of CP leads to a new understanding of how to use and interpret the most basic experimental paradigms (identification along a continuum) and has implications for language and hearing disorders, development, and multilingualism. Critically, the rise and fall of CP in speech perception, and the theoretical and empirical reasons for it, have large implications for the variety of other fields in which CP has been invoked.

Bob McMurray, F. Wendell Miller Professor of Psychological and Brain Sciences (https://psychology.uiowa.edu/people/bob-mcmurray) at the University of Iowa, has done fundamental work on speech perception, word recognition in reading, how these fundamental language skills develop, and how they vary across a variety of language and communicative impairments. His work leverages psycholinguistic techniques such as eye-tracking in the visual world paradigm, computational modeling, and electrophysiology with machine learning. He is currently director of the longitudinal Growing Words Project (http://GrowingWords.lab.uiowa.edu), which examines the development of language and reading skills in school-age children, and associate director of the Iowa Cochlear Implant Clinical Research Center.

References
McMurray, B. (2022). The myth of categorical perception. Journal of the Acoustical Society of America, 152(6), 3819-3842. https://asa.scitation.org/doi/full/10.1121/10.0016614 (preprint: https://psyarxiv.com/dq7ej/)
McMurray, B. (in press). The acquisition of speech categories: Beyond perceptual narrowing, beyond unsupervised learning and beyond infancy. Language, Cognition and Neuroscience. https://www.tandfonline.com/doi/full/10.1080/23273798.2022.2105367 (preprint: https://psyarxiv.com/njm3r/)
McMurray, B., Apfelbaum, K., & Tomblin, J. B. (2022). The slow development of real-time processing: Spoken word recognition as a crucible for new thinking about language acquisition and disorders. Current Directions in Psychological Science, 31(4), 305-315. https://journals.sagepub.com/doi/abs/10.1177/09637214221078325 (preprint: https://psyarxiv.com/uebfc/)
Kapnoula, E., & McMurray, B. (2021). On the locus of individual differences in perceptual flexibility: ERP evidence for perceptual warping of speech sounds. Brain & Language, 223, 105031. https://www.sciencedirect.com/science/article/pii/S0093934X21001255 (preprint: https://psyarxiv.com/q9stn)
McMurray, B., Danelz, A., Rigler, H., & Seedorff, M. (2018). Speech categorization develops slowly through adolescence. Developmental Psychology, 54(8), 1472-1491.
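The abstract's empirical definition of CP (discrimination predicted from identification) can be made concrete with a toy model: under the strongest reading of CP, listeners discriminate two sounds only when they assign them different category labels and guess otherwise, so predicted discrimination peaks at the category boundary and sits near chance within a category. A minimal sketch of that prediction; the logistic identification curve and continuum values are invented for illustration:

```python
import numpy as np

def identify(x, boundary=0.0, slope=8.0):
    """Logistic identification function: P(label B) for continuum stimulus x."""
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

def predicted_discrimination(x1, x2):
    """Strong-CP prediction: two stimuli are discriminated only when labelled
    differently; when the labels match, the listener guesses (P = .5)."""
    p1, p2 = identify(x1), identify(x2)
    p_diff = p1 * (1 - p2) + p2 * (1 - p1)   # P(different labels)
    return p_diff + 0.5 * (1 - p_diff)

steps = np.linspace(-1, 1, 9)                 # a 9-step acoustic continuum
pairs = list(zip(steps[:-2], steps[2:]))      # two-step discrimination pairs
acc = [predicted_discrimination(a, b) for a, b in pairs]
# Accuracy peaks for the pair straddling the boundary (x = 0) and stays near
# chance for within-category pairs -- the classic CP signature.
print([round(a, 3) for a in acc])
```

The empirical critique the talk reviews is precisely that real listeners beat this chance-level prediction for within-category pairs, showing they retain fine-grained acoustic detail.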
"Ethics of Artificial Intelligence": A Critical Analysis
Catherine Tessier (https://www.onera.fr/fr/staff/catherine-tessier), ONERA, Toulouse (https://www.onera.fr/fr/centres/toulouse)
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST) (4:30 pm GMT+1), January 19, 2023
Zoom *new*: https://uqam.zoom.us/j/89902403751

Abstract: The profusion of documents and bodies dealing with the "ethics of artificial intelligence" leads us to ask why artificial intelligence has recently become a particular object of attention, and which ethics is at stake. Starting from an examination of a few international documents, we will highlight problems with the vocabulary used and with the underlying assumptions, and will point out tensions and paradoxes. As an example, we will dwell on the principle of "human control". We will conclude on the risks of ethics being hijacked and on the need for genuine ethical reflection in the research on, as well as the design and use of, artificial intelligence systems.
Catherine Tessier is Research Director at ONERA in Toulouse, France, and ONERA's officer for scientific integrity and research ethics. She teaches at ISAE-SUPAERO. Her research focuses on modeling ethical frameworks and on ethical questions raised by robot "autonomy". At the French national level, she is a member of the Comité national pilote d'éthique du numérique and of the Comité d'éthique de la défense. She was a member of UNESCO's ad hoc expert group for drafting the Recommendation on the Ethics of Artificial Intelligence.
References
Tessier, C. (2022). «Autonomie» dans les systèmes d'armes : questionnements sémantiques, techniques, éthiques. Enjeux de l'autonomie des systèmes d'armes létaux. https://hal.archives-ouvertes.fr/hal-03740293/document
Tessier, C. (2021). Éthique et IA : analyse et discussion. In CNIA 2021 : Conférence Nationale en Intelligence Artificielle. https://hal.science/hal-03321149v1/document
Tessier, C. (2019). Éthique de la robotique et «robot éthique». Journal Polethis, 2. https://hal.archives-ouvertes.fr/hal-02294488/document
Spaun 2.0: A large-scale model of biological cognition
Chris Eliasmith, Centre for Theoretical Neuroscience, University of Waterloo
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST), January 26, 2023
Zoom *new*: https://uqam.zoom.us/j/89902403751

Abstract: The large-scale brain model Spaun has undergone significant development. In this talk, I describe how it has more than doubled in size, to 6.3 million neurons and 20 billion connections, and has significantly increased in functionality. New functions include the ability to adapt online to changes in motor dynamics, classification of over 1,000 categories of images, and, perhaps most importantly, the ability to perform simple 'mental gymnastics'. I will also describe the Semantic Pointer Architecture (SPA) used to construct the model, demonstrate Spaun's abilities, and discuss future plans for improving on what is currently the world's largest functional brain model.

Chris Eliasmith, Director of the Centre for Theoretical Neuroscience (CTN; http://uwaterloo.ca/centre-for-theoretical-neuroscience/) at the University of Waterloo, is the co-inventor of the Neural Engineering Framework (NEF), the Neural Engineering Objects (Nengo) software environment, and the Semantic Pointer Architecture (SPA), all dedicated to understanding how the brain works. His team has developed the Semantic Pointer Architecture Unified Network (Spaun), which is the most realistic functional brain simulation yet developed.
Chris is the author of How to Build a Brain (Oxford University Press; http://compneuro.uwaterloo.ca/research/spa.html) and Neural Engineering (MIT Press; http://compneuro.uwaterloo.ca/research/nef.html).
References
Duggins, P., & Eliasmith, C. (2022). Constructing functional models from biophysically-detailed neurons. PLOS Computational Biology, 18(9), e1010461. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010461
Gosmann, J., & Eliasmith, C. (2021). CUE: A unified spiking neuron model of short-term and long-term memory. Psychological Review, 128(1), 104-124. http://compneuro.uwaterloo.ca/publications/gosmann2021.html
Voelker, A. R., Blouw, P., Choo, X., Dumont, N. S. Y., Stewart, T. C., & Eliasmith, C. (2021). Simulating and predicting dynamical systems with spatial semantic pointers. Neural Computation, 33(8), 2033-2067. https://direct.mit.edu/neco/article/33/8/2033/102625
Choo, F. X. (2018). Spaun 2.0: Extending the world's largest functional brain model. https://uwspace.uwaterloo.ca/bitstream/handle/10012/13308/Choo_Feng-Xuan.pdf?sequence=3&isAllowed=y
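The Neural Engineering Framework underlying Spaun and Nengo rests on a simple principle: a heterogeneous population of nonlinear neurons encodes a value, and linear decoders solved by least squares read the value (or a function of it) back out. Below is a bare NumPy sketch of that encoding/decoding principle; the tuning-curve shapes and parameter ranges are illustrative choices, not Spaun's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of rate neurons representing a scalar x in [-1, 1]
n_neurons = 60
encoders = rng.choice([-1.0, 1.0], size=n_neurons)  # preferred directions
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear tuning curves: a_i(x) = max(0, gain_i*e_i*x + bias_i)."""
    return np.maximum(0.0, gains * encoders * np.asarray(x)[:, None] + biases)

# Solve for linear decoders d such that A @ d ~ x (least squares over samples)
xs = np.linspace(-1, 1, 200)
A = rates(xs)                                  # (200, n_neurons) activities
d, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = A @ d                                  # linearly decoded estimate
rmse = float(np.sqrt(np.mean((x_hat - xs) ** 2)))
print(f"decoding RMSE: {rmse:.4f}")
```

Connecting populations amounts to composing one population's decoders with the next population's encoders; larger NEF models, up to and including Spaun, are built out of this primitive.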
Learning through the eyes and ears of a child
Brenden M. Lake, Department of Psychology and Center for Data Science, New York University
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST), February 9, 2023
Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2021), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks; 2) based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity; 3) based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child's first-person experience.
Brenden Lake, Department of Psychology (http://psych.nyu.edu/psychology.html) and Center for Data Science (http://cds.nyu.edu/), New York University, uses advances in machine intelligence to better understand human intelligence, and vice versa, with a focus on concept learning, compositional generalization, question asking, goal generation, and abstract reasoning. The technical focus includes neuro-symbolic modeling and learning "through the eyes of a child" on developmentally realistic datasets. In cognitive science, if people have abilities beyond the reach of algorithms, then we do not fully understand how these abilities work. In AI, these abilities are important open problems with opportunities to reverse-engineer the human solutions.

References
Wang, W., Vong, W. K., Kim, N., & Lake, B. M. (2022). Finding structure in one child's linguistic experience. Preprint on PsyArXiv: https://psyarxiv.com/85k3y
Orhan, E., Gupta, V., & Lake, B. M. (2020). Self-supervised learning through the eyes of a child. In Advances in Neural Information Processing Systems 33, pp. 9960-9971. https://proceedings.neurips.cc/paper/2020/file/7183145a2a3e0ce2b68cd3735186b1d5-Paper.pdf
Sullivan, J., Mei, M., Perfors, A., Wojcik, E. H., & Frank, M. C. (2021). SAYCam: A large, longitudinal audiovisual dataset recorded from the infant's perspective. Open Mind, 5, 20-29. https://psyarxiv.com/fy8zx/
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14, 179-211. https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1402_1
From Eliza to ChatGPT, or a Journey to the Wonderland of Chatbots/Conversational Agents
Marc Bidan (https://www.researchgate.net/profile/Marc_Bidan), LEMNA (https://lemna.univ-nantes.fr/), Nantes
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST), February 16, 2023
Zoom: https://uqam.zoom.us/j/89902403751
Abstract: A chatbot, or conversational agent, is above all a decision-support tool intended for humans. It has a long history, beginning in 1966 with "Eliza", one of the first chatbots ever created, designed by Joseph Weizenbaum to imitate human conversation using chained-response techniques. In 1972 came "Parry", a chatbot developed by Kenneth Colby to imitate the conversation of people suffering from schizophrenia. Then, in 1995, came "A.L.I.C.E." (Artificial Linguistic Internet Computer Entity), one of the first chatbots to use natural language processing techniques to better imitate human conversation. Next, in 2001, appeared "SmarterChild", one of the first chatbots available on instant-messaging networks such as AOL Instant Messenger. Apple launched "Siri" in 2011 as a voice assistant based on chatbot technology. Microsoft then launched "Tay" in 2016, which was quickly corrupted and so could not imitate the conversations of 18-to-24-year-olds. In 2019, the OpenAI laboratory released "GPT-2", at first just a language model capable of generating text autonomously, followed in 2022 by "ChatGPT", based on GPT-3 and designed specifically to generate answers to the questions it is asked. How can we step back and bring to light, behind the genuine technological feats, all the impacts, both the good and the less good?
Marc Bidan (https://www.researchgate.net/profile/Marc_Bidan) is a researcher in the management of information systems and technologies in the NTO team of the Laboratoire d'économie et de management de Nantes-Atlantique (LEMNA). He studies the processes of acceptance, use, and appropriation of digital technologies (enterprise software packages, chatbots, conversational agents, serious games, etc.) in companies and organizations. https://theconversation.com/profiles/marc-bidan-196740
References
Quinio, B., & Bidan, M. (2023). ChatGPT : Un robot conversationnel peut-il enseigner ? Management & Datascience. https://management-datascience.org/articles/22060/
Michel, S., Gerbaix, S., & Bidan, M. (2022). A plea for choosing ex ante an ethical theoretical position for a relevant response to ethical issues posed by algorithmic systems. In 2022 3rd International Conference on Next Generation Computing Applications (NextComp), pp. 1-6. doi: 10.1109/NextComp55567.2022.9932235
Bidan, M., Truex, D., & Rowe, F. (2012). An empirical study of IS architectures in French SMEs: Integration approaches. European Journal of Information Systems, 21, 287-302.
Bidan, M. (2004). Fédération et intégration des applications du système d'information de gestion. Revue Systèmes d'Information et Management, 9(4), 5-25.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433-460. (French translation: http://denisevellachemla.eu/transc-Turing.pdf)
Expanding grounded cognition
Adolfo García, Centro de Neurociencias Cognitivas, Universidad de San Andrés, Argentina
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST), March 2, 2023
Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: I will review evidence on action semantics across behavioral, neural, genetic, and experiential levels. The recycling of non-linguistic mechanisms during language processing extends beyond primary sensorimotor systems, with reactivation of face-processing and inhibitory mechanisms during the processing of facial concepts and negation markers, respectively. I will also summarize new findings on social semantics. All these themes include evidence from clinical populations, leading to translational innovations for detecting and differentiating brain diseases.
Adolfo García (http://www.adolfogarcia.com.ar/) is Director of the Cognitive Neuroscience Center (https://www.udesa.edu.ar/centro-de-neurociencias-cognitivas) at Universidad de San Andrés, Argentina. He has authored more than 200 publications (https://www.conicet.gov.ar/new_scp/detalle.php?id=33842&keywords=adolfo+garc%C3%ADa+biling%C3%BCismo&datos_academicos=yes). His contributions have been recognized by awards and distinctions from the Linguistic Association of Canada and the United States, the Argentine Association of Behavioral Science, the Legislature of the City of Buenos Aires, and the Alzheimer's Association.
References
Cervetto, S., Birba, A., Pérez, G., Amoruso, L., & García, A. M. (2022). Body into narrative: Behavioral and neurophysiological signatures of action text processing after ecological motor training. Neuroscience, 507, 52-63. https://www.sciencedirect.com/science/article/abs/pii/S0306452222005413
Birba, A., Fittipaldi, S., Cediel Escobar, J., Gonzalez Campo, C., Legaz, A., Galiani, A., Díaz Rivera, M., Martorell Caro, M., Alifano, F., Piña-Escudero, S., Cardona, J. F., Neely, A., Forno, G., Carpinella, M., Slachevsky, A., Serrano, C., Sedeño, L., Ibáñez, A., & García, A. M. (2021). Multimodal neurocognitive markers of naturalistic discourse typify diverse neurodegenerative diseases. Cerebral Cortex, 32(16), 3377-3391. https://academic.oup.com/cercor/advance-article-abstract/doi/10.1093/cercor/bhab421/6455662
Cervetto, S., Díaz-Rivera, M., Petroni, A., Birba, A., Martorell Caro, M., Sedeño, L., Ibáñez, A., & García, A. M. (2021). The neural blending of words and movement: ERP signatures of semantic and action processes during motor-language coupling. Journal of Cognitive Neuroscience, 33(8), 1413-1427. https://direct.mit.edu/jocn/article/33/8/1413/101851
García, A. M., & Ibáñez, A. (2016). A touch with words: Dynamic synergies between manual actions and language. Neuroscience and Biobehavioral Reviews, 68, 59-95. https://www.sciencedirect.com/science/article/abs/pii/S0149763415302918
Expanding grounded cognition
Adolfo García, Cognitive Neuroscience Center, Universidad de San Andrés, Argentina
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST) March 2, 2023 Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: I will review evidence on action semantics, across behavioral, neural, genetic, and experiential levels. The recycling of non-linguistic mechanisms during language processing extends beyond primary sensorimotor systems, with reactivation of face-processing and inhibitory mechanisms during the processing of facial concepts and negation markers, respectively. I will also summarize new findings on social semantics. All these themes include evidence from clinical populations, leading to translational innovations for detecting and differentiating brain diseases.
Adolfo García (http://www.adolfogarcia.com.ar/) is Director of the Cognitive Neuroscience Center (https://www.udesa.edu.ar/centro-de-neurociencias-cognitivas) at Universidad de San Andrés, Argentina. He has authored more than 200 publications (https://www.conicet.gov.ar/new_scp/detalle.php?id=33842&keywords=adolfo+garc%C3%ADa+biling%C3%BCismo&datos_academicos=yes). His contributions have been recognized by awards and distinctions from the Linguistic Association of Canada and the United States, the Argentine Association of Behavioral Science, the Legislature of the City of Buenos Aires, and the Alzheimer’s Association.
References
Cervetto, S., Birba, A., Pérez, G., Amoruso, L., & García, A. M. (2022). Body into narrative: Behavioral and neurophysiological signatures of action text processing after ecological motor training. Neuroscience 507, 52-63. https://www.sciencedirect.com/science/article/abs/pii/S0306452222005413
Birba, A., Fittipaldi, S., Cediel Escobar, J., Gonzalez Campo, C., Legaz, A., Galiani, A., Díaz Rivera, M., Martorell Caro, M., Alifano, F., Piña-Escudero, S., Cardona, J. F., Neely, A., Forno, G., Carpinella, M., Slachevsky, A., Serrano, C., Sedeño, L., Ibáñez, A., & García, A. M. (2021). Multimodal neurocognitive markers of naturalistic discourse typify diverse neurodegenerative diseases. Cerebral Cortex 32(16), 3377-3391. https://academic.oup.com/cercor/advance-article-abstract/doi/10.1093/cercor/bhab421/6455662?redirectedFrom=fulltext
Cervetto, S., Díaz-Rivera, M., Petroni, A., Birba, A., Martorell Caro, M., Sedeño, L., Ibáñez, A., & García, A. M. (2021). The neural blending of words and movement: ERP signatures of semantic and action processes during motor-language coupling. Journal of Cognitive Neuroscience 33(8), 1413-1427. https://direct.mit.edu/jocn/article/33/8/1413/101851/The-Neural-Blending-of-Words-and-Movement-Event
García, A. M., & Ibáñez, A. (2016). A touch with words: Dynamic synergies between manual actions and language. Neuroscience and Biobehavioral Reviews 68, 59-95. https://www.sciencedirect.com/science/article/abs/pii/S0149763415302918
Introduction à l’éthique de l’IA
Martin Giberthttps://www.lecre.umontreal.ca/chercheur-e/martin-gibert/ Centre de recherche en éthique Université de Montréal
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST) (4:30 pm GMT+1) March 9, 2023 Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: The ethics of artificial intelligence (AI) is the subfield of the ethics of technology that morally evaluates AI and other automated information-processing systems: are they good, just, or virtuous? How should they best be programmed? Although these questions have long been pondered (Asimov’s famous laws of robotics date from 1943), recent advances in machine learning raise novel and sometimes pressing questions. To help clarify matters, I will present some of the concepts mobilized in moral philosophy: moral agents and moral patients, the ethics of algorithms, artificial moral agents, “virtuous” robots. I will argue that recent developments in AI press us to make choices whose moral consequences could be major, particularly with respect to recommender systems such as YouTube’s.
Martin Gibert is a researcher in the ethics of artificial intelligence at the Université de Montréal, affiliated with the Centre de Recherche en Éthique (CRÉ) and the Institut de valorisation des données (IVADO). He has published three books, L’imagination en morale (2014), Voir son steak comme un animal mort (2015), and Faire la morale aux robots (2020), as well as several articles available on his website and his blog « La quatrième blessure ».
References:
Introduction à l’éthique de l’IA, online course on Edulib (2022). https://catalogue.edulib.org/fr/cours/PIA-ETHIA/
The case for virtuous robots, AI Ethics (2022). https://link.springer.com/article/10.1007/s43681-022-00185-1
In search of the moral status of AI: why sentience is a strong argument, AI and Society (2021). https://link.springer.com/article/10.1007/s00146-021-01179-z
Automatiser les théories morales, Giornale di Filosofia (2021). https://mimesisjournals.com/ojs/index.php/giornale-filosofia/article/view/1694
Rethinking behavior in the light of evolution
Paul Cisek Department of Neuroscience University of Montreal
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am March 30, 2023 Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: In psychology and neuroscience, the brain is usually described as an information processing system that encodes and manipulates representations of knowledge to produce plans of action. This leads to a decomposition of brain functions into processes like object recognition, memory, decision-making, action planning, etc. However, neurophysiological data do not support many of these subdivisions. I will explore a different set of functional subdivisions, guided by data on the evolutionary process that produced the human brain. I will summarize a sequence of innovations that appeared in nervous systems from the earliest multicellular animals to humans. Along the way, functional subdivisions and elaborations will be introduced in parallel with the neural specializations that made them possible, gradually building up an alternative conceptual taxonomy of brain functions. These functions emphasize mechanisms for real-time interaction with the world, rather than for building explicit knowledge of the world, and the relevant representations emphasize pragmatic outcomes rather than decoding accuracy, mixing variables in the way seen in real neural data. This alternative taxonomy may better delineate the real functional pieces into which the human brain is organized, offering a more natural mapping between behavior and neural mechanisms.
Paul Cisek is a professor in the Department of Neuroscience at the University of Montreal. He has a background in computer science, artificial intelligence, and neurophysiology. His work combines these in an interdisciplinary approach toward understanding how the brain controls our interactions with the world, suggesting that the brain is organized as a system of parallel sensorimotor streams that have been differentiated and elaborated over millions of years of evolution. His empirical work investigates the neural dynamics of how potential actions are specified and how they compete in cortical and subcortical circuits.
References
Cisek, P. (2022). Evolution of behavioural control from chordates to primates. Philosophical Transactions of the Royal Society B 377(1844): 20200522. https://cisek.org/pavel/Pubs/Cisek2022-PTRSB.pdf
Cisek, P. (2019). Resynthesizing behavior through phylogenetic refinement. Attention, Perception, & Psychophysics 81(7): 2265-2287. https://cisek.org/pavel/Pubs/Cisek2019.pdf
Pezzulo, G., & Cisek, P. (2016). Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Trends in Cognitive Sciences 20(6): 414-424. https://cisek.org/pavel/Pubs/PezzuloCisek2016.pdf
Perceptions of vegans and vegetarians across time and cultures
MATTHEW RUBY Psychology Department, La Trobe University, Australia
Abstract: The talk will examine the perceptions, by themselves and by others, of vegetarians and vegans over the past decade in a diverse array of cultural contexts (Argentina, Australia, Brazil, Canada, France, India, Switzerland, the UK, and the USA). The studies are based on direct methods (e.g., asking participants their perceptions of and attitudes toward veg/ns) and indirect methods (the Asch impressions paradigm).
DATE: Mar 31, 9:30 am ZOOM: https://uqam.zoom.us/j/81686998498 Salle SU-1550, UQÀM, 100 Sherbrooke W., Montreal https://plancampus.uqam.ca/pavillon-su
Matthew Ruby’s research focuses on “the dilemma of the modern omnivore” – the conflict between people's desire for meat and the costs of satisfying that desire. He studies how people decide which foods (of animal origin) are acceptable and which are not, how people reconcile the dissonance between loving meat and loving animals, and how omnivores and vegetarians/ vegans perceive themselves.
Multimodal Grounding of Abstract Concepts
Penny Pexman Department of Psychology University of Calgary
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST) April 13, 2023 Zoom: https://uqam.zoom.us/j/89902403751
Abstract: Abstract concepts, like wisdom, joy, and friendship, are central to our mental and social lives and yet they cannot be directly experienced through the senses. As such, they pose a challenge for cognitive models that assume a central role for sensorimotor information in the way we learn and understand concepts. There is growing recognition, however, that it is possible for meaning to be ‘grounded’ in other ways. In a series of studies, my colleagues and I have explored the roles of language, emotion, and socialness in the acquisition and representation of abstract concepts. I will describe that research and its implications for our understanding of human cognition.
Penny Pexman is Professor of Psychology and Associate Vice-President (Research) at the University of Calgary. She directs the Language Processing Lab (https://ucalgary.ca/labs/language-processing/language-processing) at UCalgary and is a member of both the Hotchkiss Brain Institute (https://hbi.ucalgary.ca/) and the Alberta Children’s Hospital Research Institute (https://research4kids.ucalgary.ca/). Her research expertise is in cognitive development, psycholinguistics, and cognitive neuroscience. In broad terms, she is interested in how we derive meaning from language, and how those processes are changed by damage or experience. An award-winning researcher and mentor, Penny has published over 150 journal articles and book chapters on those topics.
References
Lund, T. C., Sidhu, D. M., & Pexman, P. M. (2019). Sensitivity to emotion information in children’s lexical processing. Cognition, 190, 61-71. https://www.sciencedirect.com/science/article/abs/pii/S001002771930099X
Pexman, P. M. (2019). The role of embodiment in conceptual development. Language, Cognition and Neuroscience, 34, 1274-1283. doi: 10.1080/23273798.2017.1303522
Pexman, P. M., Diveica, V., & Binney, R. J. (2023). Social semantics: The organisation and grounding of abstract concepts. Philosophical Transactions of the Royal Society B 378: 1870. doi: 10.1098/rstb.2021.0363
Zdrazilova, L., Sidhu, D. M., & Pexman, P. M. (2018). Communicating abstract meaning: Concepts revealed in words and gestures. Philosophical Transactions of the Royal Society B 373: 20170138. doi: 10.1098/rstb.2017.0138
The Machine That Will Soon Need No Minding
Peter Hancock Department of Psychology University of Central Florida
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST) April 20, 2023 Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: Ergonomics is the discipline focused on the “laws of work”. Any future research endeavor will have to keep re-examining what is meant by “work”. The future of work may prove to be a bleak one. The driving economic forces embrace the greater utility of automated and increasingly autonomous systems. Human-centered endeavors like ergonomics often find themselves in opposition to efficiency/profit imperatives. More optimistic approaches seek to harmonize these conflicting forces, envisaging harmonious cooperation between humans and machines of increasing “intelligence” and capability. I will describe why that positive narrative is unlikely, at least within the foreseeable future.
Peter A. Hancock is Provost Distinguished Research Professor in the Department of Psychology, University of Central Florida (UCF), where he directs the MIT2 Research Laboratories. He is the author of more than 1,000 refereed scientific articles, chapters, and reports, as well as more than twenty books, including Transports of Delight: How Technology Materializes Human Imagination (Springer, 2018).
Reference:
Hancock, P. A. (2022). Machining the mind to mind the machine. Theoretical Issues in Ergonomics Science, 1-18. https://www.tandfonline.com/doi/full/10.1080/1463922X.2022.2062067
Hancock, P. A. (2019). The humane use of human beings? Applied Ergonomics, 79, 91-97. https://www.sciencedirect.com/science/article/abs/pii/S0003687018301996
This year’s Zoom seminars focus on ChatGPT and its implications for cognition, language, and education.
Séminaires en informatique cognitive *** Seminars in Cognitive Informatics
jeudi/thursday 10h30
https://uqam.zoom.us/j/83002459798
résumés ci-dessous full abstracts on following pages
14-Sep: Benjamin Bergen (UCSD), LLMs are Impressive But We Still Need Grounding
21-Sep: Dimitri C. Mollo (Umeå), Grounding in LLMs: Functional AI Ontologies
28-Sep: Dave Chalmers (NYU), Does Thinking Require Grounding?
05-Oct: Ellie Pavlick (Brown), Symbols and Grounding in LLMs
12-Oct: Paul Rosenbloom (USC), Rethinking the Physical Symbol Systems Hypothesis
19-Oct: Melanie Mitchell (Santa Fe Institute), Language and Grounding
26-Oct: Dor Abrahamson (Berkeley), Enactive Symbol Grounding in Mathematics Education
02-Nov: Eric Schulz (Tuebingen), Machine Psychology
09-Nov: Casey Kennington (Boise State), Robotic Grounding and LLMs
16-Nov: Usef Faghihi (UQTR), « Algorithmes de Deep Learning flous causaux » (Causal Fuzzy Deep Learning Algorithms)
23-Nov: Anders Søgaard (Copenhagen), LLMs: Indication or Representation?
30-Nov: Christoph Durt (Freiburg IAS), LLMs, Patterns, and Understanding
07-Dec: Jake Hanson (ASU), Falsifying the Integrated Information Theory of Consciousness
14-Dec: Frédéric Alexandre (Bordeaux), « Apprentissage continu et contrôle cognitif » (Continual Learning and Cognitive Control)
LLMs are impressive but we still need grounding to explain human cognition
Benjamin Bergen (https://cogsci.ucsd.edu/people/faculty/benjamin-bergen.html), Cognitive Science, UCSD
jeudi/thursday 10h30 14 sept https://uqam.zoom.us/j/83002459798
ABSTRACT: Human cognitive capacities are often explained as resulting from grounded, embodied, or situated learning. But Large Language Models, which only learn on the basis of word co-occurrence statistics, now rival human performance in a variety of tasks that would seem to require these very capacities. This raises the question: is grounding still necessary to explain human cognition? I report on studies addressing three aspects of human cognition: Theory of Mind, Affordances, and Situation Models. In each case, we run both human and LLM participants on the same task and ask how much of the variance in human behavior is explained by the LLMs. As it turns out, in all cases, human behavior is not fully explained by the LLMs. This entails that, at least for now, we need grounding (or, more accurately, something that goes beyond statistical language learning) to explain these aspects of human cognition. I’ll conclude by asking but not answering a number of questions, like, How long will this remain the case? What are the right criteria for an LLM that serves as a proxy for human statistical language learning? and, How could one tell conclusively whether LLMs have human-like intelligence?
Ben Bergen is Professor of Cognitive Science at UC San Diego, where he directs the Language and Cognition Lab. His research focuses on language processing and production with a special interest in meaning. He’s also the author of 'Louder than Words: The New Science of How the Mind Makes Meaning' and 'What the F: What Swearing Reveals about Our Language, Our Brains, and Ourselves.’
Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). Do Large Language Models know what humans know? Cognitive Science 47(7): e13309. https://arxiv.org/abs/2209.01515
Chang, T., & Bergen, B. (2023). Language Model Behavior: A Comprehensive Survey. Computational Linguistics. https://arxiv.org/abs/2303.11504
Michaelov, J., Coulson, S., & Bergen, B. (2023). Can Peanuts Fall in Love with Distributional Semantics? Proceedings of the 45th Annual Meeting of the Cognitive Science Society. Austin, TX: Cognitive Science Society. https://arxiv.org/abs/2301.08731
Jones, C., Chang, T., Coulson, S., Michaelov, J., Trott, S., & Bergen, B. (2022). Distributional Semantics Still Can't Account for Affordances. Proceedings of the 44th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. https://pages.ucsd.edu/~bkbergen/papers/cogsci_2022_nlm_affordances_final.pdf
Grounding in Large Language Models: Functional Ontologies for AI
Dimitri Coelho Mollo (https://dimitrimollo.academicwebsite.com/), Philosophy of AI, Umeå University
jeudi/thursday 10h30 21 sept https://uqam.zoom.us/j/83002459798
ABSTRACT: I will describe joint work with Raphaël Millière, arguing that language grounding (but not language understanding) is possible in some current Large Language Models (LLMs). This does not mean, however, that the way language grounding works in LLMs is similar to how grounding works in humans. The differences open up two options: narrowing the notion of grounding to cover only the phenomenon in humans; or pluralism about grounding, extending the notion more broadly to systems that fulfil the requirements for intrinsic content. Pluralism invites applying recent work in comparative and cognitive psychology to AI, especially the search for appropriate ontologies to account for cognition and intelligence. This can help us better understand the capabilities and limitations of current AI systems, as well as potential ways forward.
Dimitri Coelho Mollo is Assistant Professor with a focus in Philosophy of Artificial Intelligence at the Department of Historical, Philosophical and Religious Studies at Umeå University, Sweden, and focus-area coordinator at TAIGA (Centre for Transdisciplinary AI) for the area 'Understanding and Explaining Artificial Intelligence'. He is also an external Principal Investigator at the Science of Intelligence Cluster in Berlin, Germany. His research focuses on foundational and epistemic questions within artificial intelligence and cognitive science, looking for ways to improve our understanding of mind, cognition, and intelligence in biological and artificial systems. His work often intersects issues in Ethics of Artificial Intelligence, Philosophy of Computing, and Philosophy of Biology.
Coelho Mollo & Millière (2023). The Vector Grounding Problem. https://arxiv.org/abs/2304.01481
Francken, Slors, & Craver (2022). Cognitive ontology and the search for neural mechanisms: three foundational problems. https://link.springer.com/article/10.1007/s11229-022-03701-2
From the History of Philosophy to AI:
Does Thinking Require Sensing?
David Chalmers (https://consc.net/), Center for Mind, Brain & Consciousness, NYU
jeudi/thursday 10h30 28-Sep https://uqam.zoom.us/j/83002459798
ABSTRACT: There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will discuss the underlying issue and will break down the strongest reasons for and against. I suggest that given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that extensions and successors to large language models may be conscious in the not-too-distant future.
David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing The World (2010), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He is known for formulating the “hard problem” of consciousness, and (with Andy Clark) for the idea of the “extended mind,” according to which the tools we use can become parts of our minds.
Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103. https://arxiv.org/pdf/2303.07103.pdf
Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. Penguin.
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219. https://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/chalmers1995.pdf
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19. https://web-archive.southampton.ac.uk/cogprints.org/320/1/extended.html
Symbols and Grounding in LLMs
Ellie Pavlick (https://cs.brown.edu/people/epavlick/), Computer Science, Brown
jeudi/thursday 10h30 05-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: Large language models (LLMs) appear to exhibit human-level abilities on a range of tasks, yet they are notoriously considered to be "black boxes", and little is known about the internal representations and mechanisms that underlie their behavior. This talk will discuss recent work that seeks to illuminate the processing that takes place under the hood. I will focus in particular on questions related to LLMs' ability to represent abstract, compositional, and content-independent operations of the type assumed to be necessary for advanced cognitive functioning in humans.
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Ellie’s research is on cognitively-inspired approaches to language acquisition, focusing on grounded language learning and on the emergence of structure (or lack thereof) in neural language models. Ellie leads the language understanding and representation (LUNAR) lab, which collaborates with Brown’s Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.
Tenney, I., Das, D., & Pavlick, E. (2019). BERT Rediscovers the Classical NLP Pipeline. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. https://arxiv.org/pdf/1905.05950.pdf
Pavlick, E. (2023). Symbols and grounding in large language models. Philosophical Transactions of the Royal Society A 381.2251: 20220041. https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2022.0041
Lepori, M. A., Serre, T., & Pavlick, E. (2023). Break it down: evidence for structural compositionality in neural networks. arXiv preprint arXiv:2301.10884. https://arxiv.org/pdf/2301.10884.pdf
Merullo, J., Eickhoff, C., & Pavlick, E. (2023). Language Models Implement Simple Word2Vec-style Vector Arithmetic. arXiv preprint arXiv:2305.16130. https://arxiv.org/pdf/2305.16130.pdf
Rethinking the Physical Symbol Systems Hypothesis
Paul Rosenbloom (https://viterbi.usc.edu/directory/faculty/Rosenbloom/Paul)
Computer Science, USC
jeudi/thursday 10h30 12-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: It is now more than a half-century since the Physical Symbol Systems Hypothesis (PSSH) was first articulated as an empirical hypothesis. More recent evidence from work with neural networks and cognitive architectures has weakened it, but it has not yet been replaced in any satisfactory manner. Based on a rethinking of the nature of computational symbols, as atoms or placeholders, and thus also of the systems in which they participate, a hybrid approach is introduced that responds to these challenges while also helping to bridge the gap between symbolic and neural approaches. This yields two new hypotheses: the Hybrid Symbol Systems Hypothesis (HSSH), which is to replace the PSSH, and a second focused more directly on cognitive architectures. This overall approach has been inspired by how hybrid symbol systems are central to the Common Model of Cognition and the Sigma cognitive architecture, both of which will be introduced, along with the general notion of a cognitive architecture, via "flashbacks" during the presentation.
Paul S. Rosenbloom is Professor Emeritus of Computer Science in the Viterbi School of Engineering at the University of Southern California (USC). His research has focused on cognitive architectures (models of the fixed structures and processes that together yield a mind), such as Soar and Sigma; the Common Model of Cognition (a partial consensus about the structure of a human-like mind); dichotomic maps (structuring the space of technologies underlying AI and cognitive science); “essential” definitions of key concepts in AI and cognitive science (such as intelligence, theories, symbols, and architectures); and the relational model of computing as a great scientific domain (akin to the physical, life and social sciences).
Rosenbloom, P. S. (2023). Rethinking the Physical Symbol Systems Hypothesishttps://www.dropbox.com/s/l9v7mjddktlokgo/Rosenbloom-PSSH-HSSH%20Final%20D.pdf?dl=0. In Proceedings of the 16th International Conference on Artificial General Intelligence (pp. 207-216). Cham, Switzerland: Springer.
Laird, J. E., Lebiere, C. & Rosenbloom, P. S. (2017). A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligencehttps://www.dropbox.com/s/z50a70vl8sn3all/LLR-SMM-AI%20Magazine-Published-Personal.pdf?dl=0, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38, 13-26.
Rosenbloom, P. S., Demski, A. & Ustun, V. (2016). The Sigma cognitive architecture and system: Towards functionally elegant grand unificationhttps://www.dropbox.com/s/hwv6eok7uhcps91/jagi-2016-0001.pdf?dl=0. Journal of Artificial General Intelligence, 7, 1-103.
Rosenbloom, P. S., Demski, A. & Ustun, V. (2016). Rethinking Sigma’s graphical architecture: An extension to neural networkshttps://www.dropbox.com/s/3q0mhigs9gv7mid/RSGA%20AGI%202016%20Final%20D.pdf?dl=0. Proceedings of the 9th Conference on Artificial General Intelligence (pp. 84-94).
The Debate Over “Understanding” in AI’s Large Language Models
Melanie Mitchellhttps://melaniemitchell.me/ Santa Fe Institute
jeudi/thursday 10h30 19-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said -- in any important sense -- to "understand" language and the physical and social situations language encodes. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.
Melanie Mitchell is Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux) is a finalist for the 2023 Cosmos Prize for Scientific Writing.
Mitchell, M. (2023). How do we know how smart AI systems are?https://www.science.org/doi/10.1126/science.adj5957 Science, 381(6654), adj5957.
Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language modelshttps://arxiv.org/pdf/2210.13966.pdf. Proceedings of the National Academy of Sciences, 120(13), e2215907120.
Millhouse, T., Moses, M., & Mitchell, M. (2022). Embodied, Situated, and Grounded Intelligence: Implications for AIhttps://arxiv.org/pdf/2210.13589.pdf. arXiv preprint arXiv:2210.13589.
Enactivist Symbol Grounding:
From Attentional Anchors to Mathematical Discourse
Dor Abrahamsonhttps://bse.berkeley.edu/dor-abrahamson Faculty of Education, UC-Berkeley
jeudi/thursday 10h30 26-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: According to the embodiment hypothesis, knowledge is the capacity for perceptuomotor enactment, situated in the world as much as in the body: a way of engaging the environment in anticipation of accomplishing interactions. What does this mean for educational practice? What is the embodiment or enactment of abstract ideas, like justice, photosynthesis, or algebra? What is the teacher’s role in embodied designs for learning? I will describe my lab’s educational design-based collaborative research on mathematical learning, and how we came to view perceptuomotor enactment as central to the analysis and promotion of content learning. I will describe how students spontaneously generate perceptual solutions to motor-control problems; these solutions then become verbal through the adoption of symbolic artifacts provided by the teacher. This approach can also help students with diverse sensorimotor capacities.
Dor Abrahamson is Professor in the Graduate School of Education at the University of California Berkeley, where he established the Embodied Design Research Laboratory devoted to pedagogical technologies for teaching and learning mathematics. He is particularly interested in relations between learning to move in new ways and learning mathematical concepts. His research draws on embodied cognition, dynamic systems theory, and sociocultural theory.
Abrahamson, D., & Sánchez-García, R. (2016). Learning is moving in new ways: The ecological dynamics of mathematics educationhttps://doi.org/10.1080/10508406.2016.1143370. Journal of the Learning Sciences, 25(2), 203-239.
Abrahamson, D. (2021). Grasp actually: An evolutionist argument for enactivist mathematics education. Human Development, 65(2), 1–17. https://doi.org/10.1159/000515680
Shvarts, A., & Abrahamson, D. (2023). Coordination dynamics of semiotic mediation: A functional dynamic systems perspective on mathematics teaching/learning. In T. Veloz, R. Videla, & A. Riegler (Eds.), Education in the 21st century [Special issue]. Constructivist Foundations, 18(2), 220–234. https://constructivist.info/18/2
Machine Psychology
Eric Schulzhttps://cpilab.org/eric.html MPI Tuebingen
jeudi/thursday 10h30 02-Nov https://uqam.zoom.us/j/83002459798
ABSTRACT: Large language models are on the cusp of transforming society as they permeate many applications. Understanding how they work is, therefore, of great value. We propose to use insights and tools from psychology to study and better understand these models. Psychology can add to our understanding of LLMs and provide a new toolkit for explaining them through theoretical concepts, experimental designs, and computational analysis approaches. This can lead to a machine psychology for foundation models that focuses on computational insights and precise experimental comparisons instead of performance measures alone. I will showcase the utility of this approach by showing how current LLMs behave across a variety of cognitive tasks, as well as how one can make them more human-like by fine-tuning on psychological data directly.
Eric Schulz, Max-Planck Research Group Leader at Tuebingen, works on the building blocks of intelligence using a mixture of computational, cognitive, and neuroscientific methods. He has worked with Maarten Speekenbrink on generalization as function learning, and with Sam Gershman and Josh Tenenbaum.
Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3https://www.pnas.org/doi/full/10.1073/pnas.2218523120. Proceedings of the National Academy of Sciences, 120(6), e2218523120.
Akata, E., Schulz, L., Coda-Forno, J., Oh, S. J., Bethge, M., & Schulz, E. (2023). Playing repeated games with Large Language Modelshttps://arxiv.org/pdf/2305.16867.pdf. arXiv preprint arXiv:2305.16867.
Allen, K. R., Brändle, F., Botvinick, M., Fan, J., Gershman, S. J., Griffiths, T. L., ... & Schulz, E. (2023). Using Games to Understand the Mindhttps://psyarxiv.com/hbsvj/download?format=pdf.
Binz, M., & Schulz, E. (2023). Turning large language models into cognitive modelshttps://arxiv.org/pdf/2306.03917.pdf. arXiv preprint.
Robotic Grounding and LLMs: Advancements and Challenges
Casey Kenningtonhttps://www.caseyreddkennington.com/ Computer Science, Boise State
09-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: Large Language Models (LLMs) are primarily trained using large amounts of text, but there have also been noteworthy advancements in incorporating vision and other sensory information into LLMs. Does that mean LLMs are ready for embodied agents such as robots? While there have been important advancements, technical and theoretical challenges remain, including the use of closed language models like ChatGPT, model size requirements, data size requirements, speed requirements, representing the physical world, and updating the model with information about the world in real time. In this talk, I explain recent advances in incorporating LLMs into robot platforms, along with challenges and opportunities for future work.
Casey Kennington is Associate Professor in the Department of Computer Science at Boise State University, where he does research on spoken dialogue systems on embodied platforms. His long-term research goal is to understand what it means for humans to understand, represent, and produce language. His National Science Foundation CAREER award focuses on enriching small language models with multimodal information such as vision and emotion for interactive learning on robotic platforms. Kennington obtained his PhD in Linguistics from Bielefeld University, Germany.
Josue Torres-Foncesca, Catherine Henry, Casey Kennington. Symbol and Communicative Grounding through Object Permanence with a Mobile Robothttps://aclanthology.org/2022.sigdial-1.14/. In Proceedings of SigDial, 2022.
Clayton Fields and Casey Kennington. Vision Language Transformers: A Surveyhttps://arxiv.org/abs/2307.03254. arXiv, 2023.
Casey Kennington. Enriching Language Models with Visually-grounded Word Vectors and the Lancaster Sensorimotor Normshttps://aclanthology.org/2021.conll-1.11/. In Proceedings of CoNLL, 2021.
Casey Kennington. On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotionhttps://arxiv.org/abs/2307.04518. arXiv, 2023.
« Algorithmes de Deep Learning flous causaux »
Usef Faghihihttps://github.com/joseffaghihi Informatique, UQTR
16-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: I will give a brief overview of causal inference and of how fuzzy-logic rules can improve causal reasoning (Faghihi, Robert, Poirier & Barkaoui, 2020). I will then explain how we integrated fuzzy-logic rules with deep learning algorithms such as the Big Bird transformer architecture (Zaheer et al., 2020). I will show how our fuzzy deep-learning causal model outperformed ChatGPT on various databases in reasoning tasks (Kalantarpour, Faghihi, Khelifi & Roucaut, 2023). I will also present some applications of our model in domains such as health and industry. Finally, time permitting, I will present two essential elements of our causal reasoning model that we recently developed: the Probabilistic Easy Variational Causal Effect (PEACE) and the Probabilistic Variational Causal Effect (PACE) (Faghihi & Saki, 2023).
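The abstract mentions integrating fuzzy-logic rules with deep learning models. As a generic illustration of what evaluating fuzzy rules looks like (a minimal sketch only; the rule set, membership functions, and numbers below are invented for illustration and are not the speaker's model):

```python
# Minimal fuzzy-rule evaluation: two rules fire to the degree their
# antecedents hold, and the outputs are combined by weighted average.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp):
    """IF temp is low THEN fan slow (0.2); IF temp is high THEN fan fast (0.9)."""
    mu_low = tri(temp, 0, 10, 25)    # degree to which temp counts as "low"
    mu_high = tri(temp, 15, 30, 40)  # degree to which temp counts as "high"
    den = mu_low + mu_high
    if den == 0:
        return 0.5  # no rule fires: fall back to a neutral fan speed
    # Defuzzify: firing-strength-weighted average of the rule consequents.
    return (mu_low * 0.2 + mu_high * 0.9) / den

print(infer(5))   # clearly "low"  -> near 0.2
print(infer(35))  # clearly "high" -> near 0.9
```

Unlike crisp rules, a mid-range temperature fires both rules partially, giving a graded output; this gradedness is what makes such rules composable with differentiable models.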
Usef Faghihi is Assistant Professor at the Université du Québec à Trois-Rivières. Previously, he was a professor at the University of Indianapolis in the United States. He obtained his PhD in Cognitive Informatics at UQAM, then went to Memphis, in the United States, for a postdoc with Professor Stan Franklin, one of the pioneers of artificial intelligence. His research interests are cognitive architectures and their integration with deep learning algorithms.
LLMs: Indication or Representation?
Anders Søgaardhttps://anderssoegaard.github.io/ Computer Science & Philosophy, University of Copenhagen
23-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: People talk to LLMs - their new assistants, tutors, or partners - about the world they live in, but are LLMs parroting, or do they (also) have internal representations of the world? There are five popular views, it seems:
(i) LLMs are all syntax, no semantics.
(ii) LLMs have inferential semantics, no referential semantics.
(iii) LLMs (also) have referential semantics through picturing.
(iv) LLMs (also) have referential semantics through causal chains.
(v) Only chatbots have referential semantics (through causal chains).
I present three sets of experiments to suggest LLMs induce inferential and referential semantics and do so by inducing human-like representations, lending some support to view (iii). I briefly compare the representations that seem to fall out of these experiments to the representations to which others have appealed in the past.
Anders Søgaard is University Professor of Computer Science and Philosophy and leads the newly established Center for Philosophy of Artificial Intelligence at the University of Copenhagen. Known primarily for work on multilingual NLP, multi-task learning, and using cognitive and behavioral data to bias NLP models, Søgaard is an ERC Starting Grant and Google Focused Research Award recipient and the author of Semi-Supervised Learning and Domain Adaptation for NLP (2013), Cross-Lingual Word Embeddings (2019), and Explainable Natural Language Processing (2021).
Søgaard, A. (2023). Grounding the Vector Space of an Octopushttps://link.springer.com/article/10.1007/s11023-023-09622-4. Minds and Machines 33, 33-54.
Li, J., et al. (2023). Large Language Models Converge on Brain-Like Representationshttps://arxiv.org/pdf/2306.01930.pdf. arXiv preprint arXiv:2306.01930.
Abdou, M., et al. (2021). Can Language Models Encode Perceptual Structure Without Grounding?https://aclanthology.org/2021.conll-1.9/ CoNLL.
Garneau, N., et al. (2021). Analogy Training Multilingual Encodershttps://ojs.aaai.org/index.php/AAAI/article/view/17524. AAAI.
LLMs, Patterns, and Understanding
Christoph Durthttps://www.durt.de/ Philosophy, U. Heidelberg
30-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: It is widely known that the performance of LLMs is contingent on their being trained with very large text corpora. But what in the text corpora allows LLMs to extract the parameters that enable them to produce text that sounds as if it had been written by an understanding being? In my presentation, I argue that the text corpora reflect not just “language” but language use. Language use is permeated with patterns, and the statistical contours of the patterns of written language use are modelled by LLMs. LLMs do not model understanding directly, but statistical patterns that correlate with patterns of language use. Although the recombination of statistical patterns does not require understanding, it enables the production of novel text that continues a prompt and conforms to patterns of language use, and thus can make sense to humans.
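The claim that recombining statistical patterns of language use can yield novel, pattern-conforming text without any understanding can be illustrated at toy scale. The sketch below is a deliberately crude bigram model over an invented miniature corpus (LLMs capture vastly richer statistics, but the point carries over): it recombines word-pair patterns into sequences it has never seen, with no grasp of what they mean.

```python
import random
from collections import defaultdict

# Tiny corpus of "language use"; the model only ever sees which word
# follows which.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(start="the", n=8, seed=0):
    """Recombine bigram statistics into a novel, pattern-conforming sequence."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n - 1):
        nxt = follows.get(words[-1])
        if not nxt:
            break
        words.append(rng.choice(nxt))
    return " ".join(words)

print(generate())
```

Every adjacent pair in the output is attested in the corpus, so the result "sounds right" to a reader, yet nothing in the model represents cats, dogs, or sitting.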
Christoph Durt is a philosophical and interdisciplinary researcher at Heidelberg University. He investigates the human mind and its relation to technology, especially AI. Going beyond the usual side-by-side comparison of artificial and human intelligence, he studies the multidimensional interplay between the two. This involves the study of human experience and language, as well as the relation between them. If you would like to join an international online exchange on these issues, please check the “courses and lectures” section on his websitehttp://www.durt.de/.
Durt, Christoph, Tom Froese, and Thomas Fuchs. Preprint. “Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Usehttp://philsci-archive.pitt.edu/21983/.”
Durt, Christoph. 2023. “The Digital Transformation of Human Orientation: An Inquiry into the Dawn of a New Erahttp://www.bit.ly/3R5JdN7.” Winner of the $10,000 HFPO Essay Prize.
Durt, Christoph. 2022. “Artificial Intelligence and Its Integration into the Human Lifeworldhttps://doi.org/10.1017/9781009207898.007.” In The Cambridge Handbook of Responsible Artificial Intelligence. Cambridge University Press.
Durt, Christoph. 2020. “The Computation of Bodily, Embodied, and Virtual Realityhttp://phaenomenologische-forschung.de/site/ophen/dgpf/dox/Durt.pdf.” Winner of the essay prize “What Can Corporality as a Constitutive Condition of Experience (Still) Mean in the Digital Age?” Phänomenologische Forschungen, no. 2: 25–39.
Falsification of the Integrated Information Theory of Consciousness
Jake R Hansonhttps://jakerhanson.weebly.com/ Sr. Data Scientist, Astrophysics
07-Dec jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
Abstract: Integrated Information Theory (IIT) is a prominent theory of consciousness in contemporary neuroscience, based on the premise that feedback, quantified by a mathematical measure called Phi, corresponds to subjective experience. A straightforward application of the mathematical definition of Phi fails to produce a unique solution, due to unresolved degeneracies inherent in the theory; this undermines nearly all published Phi values to date. As for the mathematical relationship between feedback and input-output behavior, automata theory shows that in finite-state systems feedback can always be disentangled from a system's input-output behavior, resulting in Phi=0 for all possible input-output behaviors. This process, known as "unfolding," can be accomplished without increasing the system's size, leading to the conclusion that Phi measures something fundamentally disconnected from what could ground the theory experimentally. These findings demonstrate that IIT lacks a well-defined mathematical framework and may either be already falsified or inherently unfalsifiable according to scientific standards.
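The "unfolding" idea can be illustrated with a toy finite-state system (a minimal sketch of the general principle only, not the construction used in the cited papers): a machine whose output is the XOR of the current and previous input relies on feedback through an internal state, yet its input-output behavior is reproduced exactly by a feed-forward function of a bounded input window.

```python
def feedback_machine(inputs):
    """Stateful (feedback) system: output_t = input_t XOR input_{t-1}."""
    state, outputs = 0, []
    for x in inputs:
        outputs.append(x ^ state)
        state = x  # feedback: the state carries the previous input forward
    return outputs

def feedforward_machine(inputs):
    """No recurrent state: each output is a function of a sliding input window."""
    padded = [0] + list(inputs)  # initial state 0 becomes a padding input
    return [padded[i + 1] ^ padded[i] for i in range(len(inputs))]

# Input-output equivalent on every sequence, though only the first uses feedback.
seq = [1, 0, 1, 1, 0, 1]
print(feedback_machine(seq) == feedforward_machine(seq))  # True
```

Any behavioral test applied from outside cannot distinguish the two systems, which is the sense in which a feedback-based measure can come apart from what is experimentally accessible.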
Jake Hanson is a Senior Data Scientist at a financial tech company in Salt Lake City, Utah. His doctoral research in Astrophysics at Arizona State University focused on the origin of life via the relationship between information processing and fundamental physics. He demonstrated that there were multiple foundational issues with IIT, ranging from poorly defined mathematics to problems with experimental falsifiability and pseudoscientific handling of core ideas.
Hanson, J.R., & Walker, S.I. (2019). Integrated information theory and isomorphic feed-forward philosophical zombieshttps://www.mdpi.com/1099-4300/21/11/1073. Entropy, 21.11, 1073.
Hanson, J.R., & Walker, S.I. (2021). Formalizing falsification for theories of consciousness across computational hierarchieshttps://watermark.silverchair.com/niab014.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAA1IwggNOBgkqhkiG9w0BBwagggM_MIIDOwIBADCCAzQGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQMFqV1MQgKDexHNyFtAgEQgIIDBZ-C-NbPZuaziQ0gJPwKQhpNjZDfFvPFMb1BucW. Neuroscience of Consciousness, 2021.2, niab014.
Hanson, J.R., & Walker, S.I. (2021). Falsification of the Integrated Information Theory of Consciousnesshttps://www.proquest.com/docview/2532092940?pq-origsite=gscholar&fromopenview=true. Diss. Arizona State University, 2021.
Hanson, J.R., & Walker, S.I. (2023). On the non-uniqueness problem in Integrated Information Theoryhttps://doi.org/10.1093/nc/niad014. Neuroscience of Consciousness, 2023.1, niad014.
« Apprentissage continu et contrôle cognitif »
Frédéric Alexandrehttps://www.labri.fr/perso/falexand/ Inria, Bordeaux
14-Dec jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: I explore the difference between the efficiency of human learning and that of large language models in terms of computation time and energy costs. The focus is on the continual character of human learning and the associated challenges, such as catastrophic forgetting. Two types of memory, working memory and episodic memory, are examined. The prefrontal cortex is described as essential for cognitive control and working memory, while the hippocampus is central to episodic memory. Alexandre suggests that these two regions collaborate to enable continual, efficient learning, thereby facilitating thought and imagination.
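Catastrophic forgetting, mentioned in the abstract, can be demonstrated in miniature (an illustrative sketch with an invented one-parameter model, not the Mnemosyne team's actual setup): a single linear unit trained by gradient descent on one task and then on a second loses its fit to the first, because sequential training simply overwrites the earlier solution.

```python
# Minimal catastrophic-forgetting demo: one linear unit y = w*x trained
# first on task A (target slope 2), then on task B (target slope -1).

def train(w, target, steps=200, lr=0.1):
    """Gradient descent on squared error for a fixed input x = 1."""
    for _ in range(steps):
        err = w - target       # prediction error at x = 1
        w -= lr * err          # gradient step
    return w

def task_a_loss(w):
    return (w - 2.0) ** 2      # squared error against task A's target slope

w = 0.0
w = train(w, target=2.0)       # phase 1: learn task A
loss_after_a = task_a_loss(w)  # near zero: task A mastered
w = train(w, target=-1.0)      # phase 2: learn task B...
loss_after_b = task_a_loss(w)  # large again: task A "forgotten"
print(loss_after_a < 1e-6 < loss_after_b)  # True
```

The abstract's proposal is, in effect, that prefrontal and hippocampal mechanisms let the brain avoid this overwriting that plain sequential gradient training exhibits.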
Frédéric Alexandre is a research director at Inria and heads the Mnemosyne team in Bordeaux, specializing in Artificial Intelligence and Computational Neuroscience. The team studies the brain's different forms of memory and their role in cognitive functions such as reasoning and decision-making. They explore the dichotomy between explicit and implicit memories and how the two interact. Their recent projects range from language acquisition to planning and deliberation. The models they build are validated experimentally and have applications in medicine and industry, as well as in the human sciences, notably education, law, linguistics, economics, and philosophy.
Frédéric Alexandre. A global framework for a systemic view of brain modelinghttps://braininformatics.springeropen.com/articles/10.1186/s40708-021-00126-4. Brain Informatics, 2021, 8 (1).
Snigdha Dagar, Frédéric Alexandre, Nicolas P. Rougier. From concrete to abstract rules: A computational sketchhttps://inria.hal.science/hal-03695814. 15th International Conference on Brain Informatics, Jul 2022.
Randa Kassab, Frédéric Alexandre. Pattern Separation in the Hippocampus: Distinct Circuits under Different Conditionshttps://link.springer.com/article/10.1007/s00429-018-1659-4. Brain Structure and Function, 2018, 223 (6), pp. 2785-2808.
Hugo Chateau-Laurent, Frédéric Alexandre. The Opportunistic PFC: Downstream Modulation of a Hippocampus-inspired Network is Optimal for Contextual Memory Recallhttps://hal.science/hal-03885715. 36th Conference on Neural Information Processing Systems, Dec 2022.
Pramod Kaushik, Jérémie Naudé, Surampudi Bapi Raju, Frédéric Alexandre. A VTA GABAergic computational model of dissociated reward prediction error computation in classical conditioninghttps://www.sciencedirect.com/science/article/abs/pii/S1074742722000776. Neurobiology of Learning and Memory, 2022, 193 (107653).
This year’s zoom seminars are focused on ChatGPT and its implications for cognition, language, and education.
THURSDAY 21 September 10h30 EST https://uqam.zoom.us/j/83002459798
full abstracts on following pages
14-Sep Benjamin Bergen UCSD LLMs are Impressive But We Still Need Grounding
21-Sep Dimitri C. Mollo Umeå Grounding in LLMs: Functional AI Ontologies
28-Sep Dave Chalmers NYU Does Thinking Require Grounding?
05-Oct Ellie Pavlick Brown Symbols and Grounding in LLMs
12-Oct Paul Rosenbloom USC Rethinking the Physical Symbol Systems Hypothesis
19-Oct Melanie Mitchell Santa Fe Institute Language and Grounding
26-Oct Dor Abrahamson Berkeley Enactive Symbol Grounding in Mathematics Education
02-Nov Eric Schulz Tuebingen Machine Psychology
09-Nov Casey Kennington Boise State Robotic Grounding and LLMs
16-Nov Usef Faghihi UQTR « Algorithmes de Deep Learning flous causaux »
23-Nov Anders Søgaard Copenhagen LLMs: Indication or Representation?
30-Nov Christoph Durt Freiburg IAS LLMs, Patterns, and Understanding
07-Dec Jake Hanson ASU Falsifying the Integrated Information Theory of Consciousness
14-Dec Frédéric Alexandre Bordeaux « Apprentissage continu et contrôle cognitif »
Grounding in Large Language Models: Functional Ontologies for AI
Dimitri Coelho Mollohttps://dimitrimollo.academicwebsite.com/ Philosophy of AI, Umeå University
jeudi/thursday 10h30 21 sept https://uqam.zoom.us/j/83002459798
ABSTRACT: I will describe joint work with Raphaël Millière, arguing that language grounding (but not language understanding) is possible in some current Large Language Models (LLMs). This does not mean that the way language grounding works in LLMs is similar to how grounding works in humans. The differences open up two options: narrowing the notion of grounding to only the phenomenon in humans; or pluralism about grounding, extending the notion more broadly to systems that fulfil the requirements for intrinsic content. Pluralism invites applying recent work in comparative and cognitive psychology to AI, especially the search for appropriate ontologies to account for cognition and intelligence. This can help us better understand the capabilities and limitations of current AI systems, as well as potential ways forward.
Dimitri Coelho Mollo is Assistant Professor in Philosophy of Artificial Intelligence at the Department of Historical, Philosophical and Religious Studies at Umeå University, Sweden, and focus area coordinator at TAIGA (Centre for Transdisciplinary AI) for the area 'Understanding and Explaining Artificial Intelligence'. He is also an external Principal Investigator at the Science of Intelligence Cluster in Berlin, Germany. His research focuses on foundational and epistemic questions within artificial intelligence and cognitive science, looking for ways to improve our understanding of mind, cognition, and intelligence in biological and artificial systems. His work often intersects issues in Ethics of Artificial Intelligence, Philosophy of Computing, and Philosophy of Biology.
Coelho Mollo and Millière (2023), The Vector Grounding Problemhttps://arxiv.org/abs/2304.01481.
Francken, Slors, Craver (2022), Cognitive ontology and the search for neural mechanisms: three foundational problemshttps://link.springer.com/article/10.1007/s11229-022-03701-2.
From the History of Philosophy to AI:
Does Thinking Require Sensing?
David Chalmershttps://consc.net/ Center for Mind, Brain & Consciousness, NYU
jeudi/thursday 10h30 28-Sep https://uqam.zoom.us/j/83002459798
ABSTRACT: There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will discuss the underlying issue and will break down the strongest reasons for and against. I suggest that given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that extensions and successors to large language models may be conscious in the not-too-distant future.
David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing The World (2010), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He is known for formulating the “hard problem” of consciousness, and (with Andy Clark) for the idea of the “extended mind,” according to which the tools we use can become parts of our minds.
Chalmers, D. J. (2023). Could a large language model be conscious?https://arxiv.org/pdf/2303.07103.pdf arXiv preprint arXiv:2303.07103.
Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophyhttps://scholar.google.ca/citations?view_op=view_citation&hl=en&user=o8AfF3MAAAAJ&sortby=pubdate&citation_for_view=o8AfF3MAAAAJ:7T_dCfhhGW4C. Penguin.
Chalmers, D. J. (1995). Facing up to the problem of consciousnesshttps://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/chalmers1995.pdf. Journal of Consciousness Studies, 2(3), 200-219.
Clark, A., & Chalmers, D. (1998). The extended mindhttps://web-archive.southampton.ac.uk/cogprints.org/320/1/extended.html?source=post_page---------------------------. Analysis, 58(1), 7-19.
Symbols and Grounding in LLMs
Ellie Pavlickhttps://cs.brown.edu/people/epavlick/ Computer Science, Brown
jeudi/thursday 10h30 05-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: Large language models (LLMs) appear to exhibit human-level abilities on a range of tasks, yet they are notoriously considered to be "black boxes", and little is known about the internal representations and mechanisms that underlie their behavior. This talk will discuss recent work which seeks to illuminate the processing that takes place under the hood. I will focus in particular on questions related to LLMs' ability to represent abstract, compositional, and content-independent operations of the type assumed to be necessary for advanced cognitive functioning in humans.
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Ellie’s research is on cognitively-inspired approaches to language acquisition, focusing on grounded language learning and on the emergence of structure (or lack thereof) in neural language models. Ellie leads the language understanding and representation (LUNAR) lab, which collaborates with Brown’s Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.
Tenney, Ian, Dipanjan Das, and Ellie Pavlick. "BERT Rediscovers the Classical NLP Pipeline." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. https://arxiv.org/pdf/1905.05950.pdf Pavlick, Ellie. "Symbols and grounding in large language models." Philosophical Transactions of the Royal Society A 381.2251 (2023): 20220041. https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2022.0041 Lepori, Michael A., Thomas Serre, and Ellie Pavlick. "Break it down: evidence for structural compositionality in neural networks." arXiv preprint arXiv:2301.10884 (2023). https://arxiv.org/pdf/2301.10884.pdf Merullo, Jack, Carsten Eickhoff, and Ellie Pavlick. "Language Models Implement Simple Word2Vec-style Vector Arithmetic." arXiv preprint arXiv:2305.16130 (2023). https://arxiv.org/pdf/2305.16130.pdf
Rethinking the Physical Symbol Systems Hypothesis
Paul Rosenbloomhttps://viterbi.usc.edu/directory/faculty/Rosenbloom/Paul
Computer Science, USC
jeudi/thursday 10h30 12-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: It is now more than a half-century since the Physical Symbol Systems Hypothesis (PSSH) was first articulated as an empirical hypothesis. More recent evidence from work with neural networks and cognitive architectures has weakened it, but it has not yet been replaced in any satisfactory manner. Based on a rethinking of the nature of computational symbols – as atoms or placeholders – and thus also of the systems in which they participate, a hybrid approach is introduced that responds to these challenges while also helping to bridge the gap between symbolic and neural approaches, resulting in two new hypotheses, one – the Hybrid Symbol Systems Hypothesis (HSSH) – that is to replace the PSSH and the other focused more directly on cognitive architectures. This overall approach has been inspired by how hybrid symbol systems are central in the Common Model of Cognition and the Sigma cognitive architectures, both of which will be introduced – along with the general notion of a cognitive architecture – via “flashbacks” during the presentation.
Paul S. Rosenbloom is Professor Emeritus of Computer Science in the Viterbi School of Engineering at the University of Southern California (USC). His research has focused on cognitive architectures (models of the fixed structures and processes that together yield a mind), such as Soar and Sigma; the Common Model of Cognition (a partial consensus about the structure of a human-like mind); dichotomic maps (structuring the space of technologies underlying AI and cognitive science); “essential” definitions of key concepts in AI and cognitive science (such as intelligence, theories, symbols, and architectures); and the relational model of computing as a great scientific domain (akin to the physical, life and social sciences).
Rosenbloom, P. S. (2023). Rethinking the Physical Symbol Systems Hypothesis. In Proceedings of the 16th International Conference on Artificial General Intelligence (pp. 207-216). Cham, Switzerland: Springer. https://www.dropbox.com/s/l9v7mjddktlokgo/Rosenbloom-PSSH-HSSH%20Final%20D.pdf?dl=0
Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38, 13-26. https://www.dropbox.com/s/z50a70vl8sn3all/LLR-SMM-AI%20Magazine-Published-Personal.pdf?dl=0
Rosenbloom, P. S., Demski, A., & Ustun, V. (2016). The Sigma cognitive architecture and system: Towards functionally elegant grand unification. Journal of Artificial General Intelligence, 7, 1-103. https://www.dropbox.com/s/hwv6eok7uhcps91/jagi-2016-0001.pdf?dl=0
Rosenbloom, P. S., Demski, A., & Ustun, V. (2016). Rethinking Sigma’s graphical architecture: An extension to neural networks. Proceedings of the 9th Conference on Artificial General Intelligence (pp. 84-94). https://www.dropbox.com/s/3q0mhigs9gv7mid/RSGA%20AGI%202016%20Final%20D.pdf?dl=0
The Debate Over “Understanding” in AI’s Large Language Models
Melanie Mitchell (https://melaniemitchell.me/) Santa Fe Institute
jeudi/thursday 10h30 19-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said -- in any important sense -- to "understand" language and the physical and social situations language encodes. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.
Melanie Mitchell is Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux) is a finalist for the 2023 Cosmos Prize for Scientific Writing.
Mitchell, M. (2023). How do we know how smart AI systems are? Science, 381(6654), adj5957. https://www.science.org/doi/10.1126/science.adj5957
Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120. https://arxiv.org/pdf/2210.13966.pdf
Millhouse, T., Moses, M., & Mitchell, M. (2022). Embodied, Situated, and Grounded Intelligence: Implications for AI. arXiv preprint arXiv:2210.13589. https://arxiv.org/pdf/2210.13589.pdf
Enactivist Symbol Grounding:
From Attentional Anchors to Mathematical Discourse
Dor Abrahamson (https://bse.berkeley.edu/dor-abrahamson) Faculty of Education, UC-Berkeley
jeudi/thursday 10h30 26-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: According to the embodiment hypothesis, knowledge is the capacity for perceptuomotor enactment, situated in the world as much as in the body: a way of engaging the environment in anticipation of accomplishing interactions. What does this mean for educational practice? What is the embodiment or enactment of abstract ideas, like justice, photosynthesis, or algebra? What is the teacher’s role in embodied designs for learning? I will describe my lab’s educational design-based collaborative research on mathematical learning, and how we came to view attentional anchors as central to the analysis and promotion of content learning. I will describe how students spontaneously generate perceptual solutions to motor-control problems; these solutions then become verbal through the adoption of symbolic artifacts provided by the teacher. This approach can also help students with diverse sensorimotor capacities.
Dor Abrahamson is Professor in the Graduate School of Education at the University of California Berkeley, where he established the Embodied Design Research Laboratory devoted to pedagogical technologies for teaching and learning mathematics. He is particularly interested in relations between learning to move in new ways and learning mathematical concepts. His research draws on embodied cognition, dynamic systems theory, and sociocultural theory.
Abrahamson, D., & Sánchez-García, R. (2016). Learning is moving in new ways: The ecological dynamics of mathematics education. Journal of the Learning Sciences, 25(2), 203-239. https://doi.org/10.1080/10508406.2016.1143370
Abrahamson, D. (2021). Grasp actually: An evolutionist argument for enactivist mathematics education. Human Development, 65(2), 1–17. https://doi.org/10.1159/000515680
Shvarts, A., & Abrahamson, D. (2023). Coordination dynamics of semiotic mediation: A functional dynamic systems perspective on mathematics teaching/learning. In T. Veloz, R. Videla, & A. Riegler (Eds.), Education in the 21st century [Special issue]. Constructivist Foundations, 18(2), 220–234. https://constructivist.info/18/2
Machine Psychology
Eric Schulz (https://cpilab.org/eric.html) MPI Tuebingen
jeudi/thursday 10h30 02-Nov https://uqam.zoom.us/j/83002459798
ABSTRACT: Large language models are on the cusp of transforming society as they permeate ever more applications. Understanding how they work is, therefore, of great value. We propose to use insights and tools from psychology to study and better understand these models. Psychology can add to our understanding of LLMs, offering theoretical concepts, experimental designs, and computational analysis approaches as a new toolkit for explaining them. This can lead to a machine psychology for foundation models that focuses on computational insights and precise experimental comparisons instead of performance measures alone. I will showcase the utility of this approach by showing how current LLMs behave across a variety of cognitive tasks, as well as how one can make them more human-like by fine-tuning on psychological data directly.
Eric Schulz, Max Planck Research Group Leader in Tuebingen, works on the building blocks of intelligence using a mixture of computational, cognitive, and neuroscientific methods. He has worked with Maarten Speekenbrink on generalization as function learning, and has collaborated with Sam Gershman and Josh Tenenbaum.
Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120. https://www.pnas.org/doi/full/10.1073/pnas.2218523120
Akata, E., Schulz, L., Coda-Forno, J., Oh, S. J., Bethge, M., & Schulz, E. (2023). Playing repeated games with Large Language Models. arXiv preprint arXiv:2305.16867. https://arxiv.org/pdf/2305.16867.pdf
Allen, K. R., Brändle, F., Botvinick, M., Fan, J., Gershman, S. J., Griffiths, T. L., ... & Schulz, E. (2023). Using Games to Understand the Mind. https://psyarxiv.com/hbsvj/download?format=pdf
Binz, M., & Schulz, E. (2023). Turning large language models into cognitive models. arXiv preprint. https://arxiv.org/pdf/2306.03917.pdf
Robotic Grounding and LLMs: Advancements and Challenges
Casey Kennington (https://www.caseyreddkennington.com/) Computer Science, Boise State
09-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: Large Language Models (LLMs) are primarily trained using large amounts of text, but there have also been noteworthy advancements in incorporating vision and other sensory information into LLMs. Does that mean LLMs are ready for embodied agents such as robots? While there have been important advancements, technical and theoretical challenges remain, including the use of closed language models like ChatGPT, model size requirements, data size requirements, speed requirements, representing the physical world, and updating the model with information about the world in real time. In this talk, I explain recent advances in incorporating LLMs into robot platforms, along with challenges and opportunities for future work.
Casey Kennington is Associate Professor in the Department of Computer Science at Boise State University, where he does research on spoken dialogue systems on embodied platforms. His long-term research goal is to understand what it means for humans to understand, represent, and produce language. His National Science Foundation CAREER award focuses on enriching small language models with multimodal information such as vision and emotion for interactive learning on robotic platforms. Kennington obtained his PhD in Linguistics from Bielefeld University, Germany.
Josue Torres-Foncesca, Catherine Henry, & Casey Kennington (2022). Symbol and Communicative Grounding through Object Permanence with a Mobile Robot. In Proceedings of SigDial. https://aclanthology.org/2022.sigdial-1.14/
Clayton Fields & Casey Kennington (2023). Vision Language Transformers: A Survey. arXiv. https://arxiv.org/abs/2307.03254
Casey Kennington (2021). Enriching Language Models with Visually-grounded Word Vectors and the Lancaster Sensorimotor Norms. In Proceedings of CoNLL. https://aclanthology.org/2021.conll-1.11/
Casey Kennington (2023). On the Computational Modeling of Meaning: Embodied Cognition Intertwined with Emotion. arXiv. https://arxiv.org/abs/2307.04518
« Algorithmes de Deep Learning flous causaux »
Usef Faghihi (https://github.com/joseffaghihi) Computer Science, UQTR
16-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: I will give a brief overview of causal inference and of how fuzzy-logic rules can improve causal reasoning (Faghihi, Robert, Poirier & Barkaoui, 2020). I will then explain how we integrated fuzzy-logic rules with deep learning algorithms, such as the Big Bird transformer architecture (Zaheer et al., 2020). I will show how our fuzzy deep-learning causality model outperformed ChatGPT on different databases in reasoning tasks (Kalantarpour, Faghihi, Khelifi & Roucaut, 2023). I will also present some applications of our model in fields such as healthcare and industry. Finally, time permitting, I will present two essential components of our causal reasoning model that we recently developed: the Probabilistic Easy Variational Causal Effect (PEACE) and the Probabilistic Variational Causal Effect (PACE) (Faghihi & Saki, 2023).
Usef Faghihi is an Assistant Professor at the Université du Québec à Trois-Rivières. Previously, he was a professor at the University of Indianapolis in the United States. He obtained his PhD in Cognitive Computer Science at UQAM, then went to Memphis, in the United States, for a postdoctoral fellowship with Professor Stan Franklin, one of the pioneers of artificial intelligence. His research interests are cognitive architectures and their integration with deep learning algorithms.
LLMs: Indication or Representation?
Anders Søgaard (https://anderssoegaard.github.io/) Computer Science & Philosophy, University of Copenhagen
23-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: People talk to LLMs - their new assistants, tutors, or partners - about the world they live in, but are LLMs parroting, or do they (also) have internal representations of the world? There are five popular views, it seems:
(i) LLMs are all syntax, no semantics.
(ii) LLMs have inferential semantics, no referential semantics.
(iii) LLMs (also) have referential semantics through picturing.
(iv) LLMs (also) have referential semantics through causal chains.
(v) Only chatbots have referential semantics (through causal chains).
I present three sets of experiments suggesting that LLMs induce inferential and referential semantics, and do so by inducing human-like representations, lending some support to view (iii). I briefly compare the representations that seem to fall out of these experiments to representations to which others have appealed in the past.
Anders Søgaard is University Professor of Computer Science and Philosophy and leads the newly established Center for Philosophy of Artificial Intelligence at the University of Copenhagen. Known primarily for work on multilingual NLP, multi-task learning, and using cognitive and behavioral data to bias NLP models, Søgaard is an ERC Starting Grant and Google Focused Research Award recipient and the author of Semi-Supervised Learning and Domain Adaptation for NLP (2013), Cross-Lingual Word Embeddings (2019), and Explainable Natural Language Processing (2021).
Søgaard, A. (2023). Grounding the Vector Space of an Octopus. Minds and Machines, 33, 33-54. https://link.springer.com/article/10.1007/s11023-023-09622-4
Li, J., et al. (2023). Large Language Models Converge on Brain-Like Representations. arXiv preprint arXiv:2306.01930. https://arxiv.org/pdf/2306.01930.pdf
Abdou, M., et al. (2021). Can Language Models Encode Perceptual Structure Without Grounding? CoNLL. https://aclanthology.org/2021.conll-1.9/
Garneau, N., et al. (2021). Analogy Training Multilingual Encoders. AAAI. https://ojs.aaai.org/index.php/AAAI/article/view/17524
LLMs, Patterns, and Understanding
Christoph Durt (https://www.durt.de/) Philosophy, U. Heidelberg
30-Nov jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: It is widely known that the performance of LLMs is contingent on their being trained with very large text corpora. But what in the text corpora allows LLMs to extract the parameters that enable them to produce text that sounds as if it had been written by an understanding being? In my presentation, I argue that the text corpora reflect not just “language” but language use. Language use is permeated with patterns, and the statistical contours of the patterns of written language use are modelled by LLMs. LLMs do not model understanding directly, but statistical patterns that correlate with patterns of language use. Although the recombination of statistical patterns does not require understanding, it enables the production of novel text that continues a prompt and conforms to patterns of language use, and thus can make sense to humans.
Christoph Durt is a philosophical and interdisciplinary researcher at Heidelberg University. He investigates the human mind and its relation to technology, especially AI. Going beyond the usual side-by-side comparison of artificial and human intelligence, he studies the multidimensional interplay between the two. This involves the study of human experience and language, as well as the relation between them. If you would like to join an international online exchange on these issues, please check the “courses and lectures” section on his website (http://www.durt.de/).
Durt, Christoph, Tom Froese, and Thomas Fuchs (preprint). “Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Use.” http://philsci-archive.pitt.edu/21983/
Durt, Christoph (2023). “The Digital Transformation of Human Orientation: An Inquiry into the Dawn of a New Era.” Winner of the $10,000 HFPO Essay Prize. http://www.bit.ly/3R5JdN7
Durt, Christoph (2022). “Artificial Intelligence and Its Integration into the Human Lifeworld.” In The Cambridge Handbook of Responsible Artificial Intelligence, Cambridge University Press. https://doi.org/10.1017/9781009207898.007
Durt, Christoph (2020). “The Computation of Bodily, Embodied, and Virtual Reality.” Winner of the Essay Prize “What Can Corporality as a Constitutive Condition of Experience (Still) Mean in the Digital Age?” Phänomenologische Forschungen, no. 2: 25–39. http://phaenomenologische-forschung.de/site/ophen/dgpf/dox/Durt.pdf
Falsification of the Integrated Information Theory of Consciousness
Jake R. Hanson (https://jakerhanson.weebly.com/) Sr. Data Scientist, Astrophysics
07-Dec jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
Abstract: Integrated Information Theory (IIT) is a prominent theory of consciousness in contemporary neuroscience, based on the premise that feedback, quantified by a mathematical measure called Phi, corresponds to subjective experience. A straightforward application of the mathematical definition of Phi fails to produce a unique solution, due to unresolved degeneracies inherent in the theory. This undermines nearly all published Phi values to date. Furthermore, regarding the mathematical relationship between feedback and input-output behavior in finite-state systems, automata theory shows that feedback can always be disentangled from a system's input-output behavior, resulting in Phi=0 for all possible input-output behaviors. This process, known as "unfolding," can be accomplished without increasing the system's size, leading to the conclusion that Phi measures something fundamentally disconnected from anything that could ground the theory experimentally. These findings demonstrate that IIT lacks a well-defined mathematical framework and may either be already falsified or inherently unfalsifiable by scientific standards.
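The "unfolding" idea in the abstract can be illustrated with a deliberately tiny toy (this sketch is our own construction for intuition only, not the formal automata-theoretic result from Hanson's papers): a binary system whose output depends on a fed-back internal state, next to a feed-forward computation over the raw input history that reproduces exactly the same input-output behavior, so no behavioral test can distinguish the two.

```python
# Toy "unfolding" sketch (illustrative assumption, not Hanson's construction):
# a finite-state system with feedback vs. a feed-forward system with
# identical input-output behavior.

def recurrent_system(inputs):
    """Output depends on an internal state updated via feedback."""
    state, outputs = 0, []
    for x in inputs:
        outputs.append(state ^ x)  # output = state XOR current input
        state = x                  # feedback: state stores the last input
    return outputs

def feedforward_system(inputs):
    """No fed-back state: each output is computed directly from input history."""
    return [(inputs[t - 1] if t > 0 else 0) ^ inputs[t]
            for t in range(len(inputs))]

# Every input sequence yields identical behavior from both systems,
# even though only the first has feedback (and hence nonzero "integration").
seq = [1, 0, 1, 1, 0]
assert recurrent_system(seq) == feedforward_system(seq)
```

The point of the toy is the talk's dissociation claim in miniature: a measure defined over internal feedback structure can differ between the two systems while their observable input-output behavior is identical.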
Jake Hanson is a Senior Data Scientist at a financial tech company in Salt Lake City, Utah. His doctoral research in Astrophysics at Arizona State University focused on the origin of life via the relationship between information processing and fundamental physics. He demonstrated that there were multiple foundational issues with IIT, ranging from poorly defined mathematics to problems with experimental falsifiability and pseudoscientific handling of core ideas.
Hanson, J.R., & Walker, S.I. (2019). Integrated information theory and isomorphic feed-forward philosophical zombies. Entropy, 21(11), 1073. https://www.mdpi.com/1099-4300/21/11/1073
Hanson, J.R., & Walker, S.I. (2021). Formalizing falsification for theories of consciousness across computational hierarchies. Neuroscience of Consciousness, 2021(2), niab014. https://doi.org/10.1093/nc/niab014
Hanson, J.R. (2021). Falsification of the Integrated Information Theory of Consciousness. Diss. Arizona State University. https://www.proquest.com/docview/2532092940?pq-origsite=gscholar&fromopenview=true
Hanson, J.R., & Walker, S.I. (2023). On the non-uniqueness problem in Integrated Information Theory. Neuroscience of Consciousness, 2023(1), niad014. https://doi.org/10.1093/nc/niad014
« Apprentissage continu et contrôle cognitif »
Frédéric Alexandre (https://www.labri.fr/perso/falexand/) Inria, Bordeaux
14-Dec jeudi/thursday 10h30 https://uqam.zoom.us/j/83002459798
ABSTRACT: I explore the difference between the efficiency of human learning and that of large language models in terms of computation time and energy costs. The study focuses on the continual character of human learning and its associated challenges, such as catastrophic forgetting. Two types of memory, working memory and episodic memory, are examined. The prefrontal cortex is described as essential for cognitive control and working memory, while the hippocampus is central to episodic memory. I suggest that these two regions collaborate to enable continual and efficient learning, thereby facilitating thought and imagination.
Frédéric Alexandre is a research director at Inria and leads the Mnemosyne team in Bordeaux, which specializes in Artificial Intelligence and Computational Neuroscience. The team studies the different forms of memory in the brain and their role in cognitive functions such as reasoning and decision making. They explore the dichotomy between explicit and implicit memories and how they interact. Their recent projects range from language acquisition to planning and deliberation. The models they build are validated experimentally and have applications in medicine and industry as well as in the humanities and social sciences, notably education, law, linguistics, economics, and philosophy.
Frédéric Alexandre (2021). A global framework for a systemic view of brain modeling. Brain Informatics, 8(1). https://braininformatics.springeropen.com/articles/10.1186/s40708-021-00126-4
Snigdha Dagar, Frédéric Alexandre, & Nicolas P. Rougier (2022). From concrete to abstract rules: A computational sketch. 15th International Conference on Brain Informatics, Jul 2022. https://inria.hal.science/hal-03695814
Randa Kassab & Frédéric Alexandre (2018). Pattern Separation in the Hippocampus: Distinct Circuits under Different Conditions. Brain Structure and Function, 223(6), 2785-2808. https://link.springer.com/article/10.1007/s00429-018-1659-4
Hugo Chateau-Laurent & Frédéric Alexandre (2022). The Opportunistic PFC: Downstream Modulation of a Hippocampus-inspired Network is Optimal for Contextual Memory Recall. 36th Conference on Neural Information Processing Systems, Dec 2022. https://hal.science/hal-03885715
Pramod Kaushik, Jérémie Naudé, Surampudi Bapi Raju, & Frédéric Alexandre (2022). A VTA GABAergic computational model of dissociated reward prediction error computation in classical conditioning. Neurobiology of Learning and Memory, 193(107653). https://www.sciencedirect.com/science/article/abs/pii/S1074742722000776
From the History of Philosophy to AI:
Does Thinking Require Sensing?
David Chalmers (https://consc.net/) Center for Mind, Brain & Consciousness, NYU
jeudi/thursday 10h30 28-Sep https://uqam.zoom.us/j/83002459798
ABSTRACT: There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will discuss the underlying issue and will break down the strongest reasons for and against. I suggest that given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, their lack of recurrent processing, a global workspace, and unified agency. At the same time, it is quite possible that these obstacles will be overcome in the next decade or so. I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that extensions and successors to large language models may be conscious in the not-too-distant future.
David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996), Constructing The World (2010), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He is known for formulating the “hard problem” of consciousness, and (with Andy Clark) for the idea of the “extended mind,” according to which the tools we use can become parts of our minds.
Chalmers, D. J. (2023). Could a large language model be conscious? arXiv preprint arXiv:2303.07103. https://arxiv.org/pdf/2303.07103.pdf
Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. Penguin. https://scholar.google.ca/citations?view_op=view_citation&hl=en&user=o8AfF3MAAAAJ&sortby=pubdate&citation_for_view=o8AfF3MAAAAJ:7T_dCfhhGW4C
Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219. https://personal.lse.ac.uk/ROBERT49/teaching/ph103/pdf/chalmers1995.pdf
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19. https://web-archive.southampton.ac.uk/cogprints.org/320/1/extended.html
14-Sep: Benjamin Bergen (UCSD), LLMs are Impressive But We Still Need Grounding
21-Sep: Dimitri C. Mollo (Umeå), Grounding in LLMs: Functional AI Ontologies
28-Sep: David Chalmers (NYU), Does Thinking Require Grounding?
05-Oct: Ellie Pavlick (Brown), Symbols and Grounding in LLMs
12-Oct: Paul Rosenbloom (USC), Rethinking the Physical Symbol Systems Hypothesis
19-Oct: Melanie Mitchell (Santa Fe Institute), Language and Grounding
26-Oct: Dor Abrahamson (Berkeley), Enactive Symbol Grounding in Mathematics Education
02-Nov: Eric Schulz (Tuebingen), Machine Psychology
09-Nov: Casey Kennington (Boise State), Robotic Grounding and LLMs
16-Nov: Usef Faghihi (UQTR), « Algorithmes de Deep Learning flous causaux »
23-Nov: Anders Søgaard (Copenhagen), LLMs: Indication or Representation?
30-Nov: Christoph Durt (Freiburg IAS), LLMs, Patterns, and Understanding
07-Dec: Jake Hanson (ASU), Falsifying the Integrated Information Theory of Consciousness
14-Dec: Frédéric Alexandre (Bordeaux), « Apprentissage continu et contrôle cognitif »
Symbols and Grounding in LLMs
Ellie Pavlick (https://cs.brown.edu/people/epavlick/) Computer Science, Brown
Thursday 10h30 October 5 https://uqam.zoom.us/j/83002459798
ABSTRACT: Large language models (LLMs) appear to exhibit human-level abilities on a range of tasks, yet they are notoriously considered to be "black boxes", and little is known about the internal representations and mechanisms that underlie their behavior. This talk will discuss recent work which seeks to illuminate the processing that takes place under the hood. I will focus in particular on questions related to LLMs' ability to represent abstract, compositional, and content-independent operations of the type assumed to be necessary for advanced cognitive functioning in humans.
Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Ellie’s research is on cognitively-inspired approaches to language acquisition, focusing on grounded language learning and on the emergence of structure (or lack thereof) in neural language models. Ellie leads the language understanding and representation (LUNAR) lab, which collaborates with Brown’s Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.
Tenney, Ian, Dipanjan Das, and Ellie Pavlick (2019). BERT Rediscovers the Classical NLP Pipeline. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. https://arxiv.org/pdf/1905.05950.pdf
Pavlick, Ellie (2023). Symbols and grounding in large language models. Philosophical Transactions of the Royal Society A, 381(2251), 20220041. https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2022.0041
Lepori, Michael A., Thomas Serre, and Ellie Pavlick (2023). Break it down: Evidence for structural compositionality in neural networks. arXiv preprint arXiv:2301.10884. https://arxiv.org/pdf/2301.10884.pdf
Merullo, Jack, Carsten Eickhoff, and Ellie Pavlick (2023). Language Models Implement Simple Word2Vec-style Vector Arithmetic. arXiv preprint arXiv:2305.16130. https://arxiv.org/pdf/2305.16130.pdf
Symbols and Grounding in LLMs
Ellie Pavlick, Computer Science, Brown https://cs.brown.edu/people/epavlick/
ABSTRACT: Large language models (LLMs) appear to exhibit human-level abilities on a range of tasks, yet they are notoriously considered to be "black boxes", and little is known about the internal representations and mechanisms that underlie their behavior. This talk will discuss recent work which seeks to illuminate the processing that takes place under the hood. I will focus in particular on questions related to LLM's ability to represent abstract, compositional, and content-independent operations of the type assumed to be necessary for advanced cognitive functioning in humans.
[A person smiling at the camera Description automatically generated]Ellie Pavlick is an Assistant Professor of Computer Science at Brown University. She received her PhD from University of Pennsylvania in 2017, where her focus was on paraphrasing and lexical semantics. Ellie’s research is on cognitively-inspired approaches to language acquisition, focusing on grounded language learning and on the emergence of structure (or lack thereof) in neural language models. Ellie leads the language understanding and representation (LUNAR) lab, which collaborates with Brown’s Robotics and Visual Computing labs and with the Department of Cognitive, Linguistic, and Psychological Sciences.
Tenney, Ian, Dipanjan Das, and Ellie Pavlick. "BERT Rediscovers the Classical NLP Pipeline." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. https://arxiv.org/pdf/1905.05950.pdf Pavlick, Ellie. "Symbols and grounding in large language models." Philosophical Transactions of the Royal Society A 381.2251 (2023): 20220041. https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.2022.0041 Lepori, Michael A., Thomas Serre, and Ellie Pavlick. "Break it down: evidence for structural compositionality in neural networks." arXiv preprint arXiv:2301.10884 (2023). https://arxiv.org/pdf/2301.10884.pdf Merullo, Jack, Carsten Eickhoff, and Ellie Pavlick. "Language Models Implement Simple Word2Vec-style Vector Arithmetic." arXiv preprint arXiv:2305.16130 (2023). https://arxiv.org/pdf/2305.16130.pdf
Full Autumn Series:
14-Sep Benjamin Bergen, UCSD: LLMs are Impressive But We Still Need Grounding
21-Sep Dimitri C Mollo, Umea: Grounding in LLMs: Functional AI Ontologies
28-Sep Dave Chalmers, NYU: Does Thinking Require Grounding?
05-Oct Ellie Pavlick, Brown: Symbols and Grounding in LLMs
12-Oct Paul Rosenbloom, USC: Rethinking the Physical Symbol Systems Hypothesis
19-Oct Melanie Mitchell, Santa Fe Institute: The Debate Over “Understanding” in AI’s Large Language Models
26-Oct Dor Abrahamson, Berkeley: Enactive Symbol Grounding in Mathematics Education
02-Nov Eric Schulz, Tuebingen: Machine Psychology
09-Nov Casey Kennington, Boise State: Robotic Grounding and LLMs
16-Nov Usef Faghihi, UQTR: Causal Fuzzy Deep Learning Algorithms (« Algorithmes de Deep Learning flous causaux »)
23-Nov Anders Søgaard, Copenhagen: LLMs: Indication or Representation?
30-Nov Christoph Durt, Freiburg IAS: LLMs, Patterns, and Understanding
07-Dec Jake Hanson, ASU: Falsifying the Integrated Information Theory of Consciousness
14-Dec Frédéric Alexandre, Bordeaux: Continual Learning and Cognitive Control (« Apprentissage continu et contrôlé cognitif »)
The Debate Over “Understanding” in AI’s Large Language Models
Melanie Mitchell (https://melaniemitchell.me/) Santa Fe Institute
Thursday 10h30 EDT 19-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said -- in any important sense -- to "understand" language and the physical and social situations language encodes. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems. I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions.
Melanie Mitchell is Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux) is a finalist for the 2023 Cosmos Prize for Scientific Writing.
References:
Mitchell, M. (2023). How do we know how smart AI systems are? Science, 381(6654), adj5957. https://www.science.org/doi/10.1126/science.adj5957
Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120. https://arxiv.org/pdf/2210.13966.pdf
Millhouse, T., Moses, M., & Mitchell, M. (2022). Embodied, situated, and grounded intelligence: Implications for AI. arXiv preprint arXiv:2210.13589. https://arxiv.org/pdf/2210.13589.pdf
Enactivist Symbol Grounding:
From Attentional Anchors to Mathematical Discourse
Dor Abrahamson (https://bse.berkeley.edu/dor-abrahamson) Faculty of Education, UC Berkeley
Thursday 10h30 26-Oct https://uqam.zoom.us/j/83002459798
ABSTRACT: According to the embodiment hypothesis, knowledge is the capacity for perceptuomotor enactment, situated in the world as much as in the body: a way of engaging the environment in anticipation of accomplishing interactions. What does this mean for educational practice? What is the embodiment or enactment of abstract ideas, such as justice, photosynthesis, or algebra? What is the teacher’s role in embodied designs for learning? I will describe my lab’s collaborative, design-based research on mathematical learning, and the role this perspective has come to play in our analysis and promotion of content learning. I will describe how students spontaneously generate perceptual solutions to motor-control problems, which then become verbalized as students adopt symbolic artifacts provided by the teacher. This approach can also serve students with diverse sensorimotor capacities.
Dor Abrahamson is Professor in the Graduate School of Education at the University of California, Berkeley, where he established the Embodied Design Research Laboratory, devoted to pedagogical technologies for teaching and learning mathematics. He is particularly interested in relations between learning to move in new ways and learning mathematical concepts. His research draws on embodied cognition, dynamic systems theory, and sociocultural theory.
References:
Abrahamson, D., & Sánchez-García, R. (2016). Learning is moving in new ways: The ecological dynamics of mathematics education. Journal of the Learning Sciences, 25(2), 203-239. https://doi.org/10.1080/10508406.2016.1143370
Abrahamson, D. (2021). Grasp actually: An evolutionist argument for enactivist mathematics education. Human Development, 65(2), 1-17. https://doi.org/10.1159/000515680
Shvarts, A., & Abrahamson, D. (2023). Coordination dynamics of semiotic mediation: A functional dynamic systems perspective on mathematics teaching/learning. In T. Veloz, R. Videla, & A. Riegler (Eds.), Education in the 21st century [Special issue]. Constructivist Foundations, 18(2), 220-234. https://constructivist.info/18/2
Machine Psychology
Eric Schulz (https://cpilab.org/eric.html) MPI Tuebingen
Thursday 10h30 EDT 02-Nov https://uqam.zoom.us/j/83002459798
ABSTRACT: Large language models are on the cusp of transforming society as they permeate many applications. Understanding how they work is therefore of great value. We propose to use insights and tools from psychology to study and better understand these models. Psychology can deepen our understanding of LLMs, and provide a new toolkit for explaining them, by contributing theoretical concepts, experimental designs, and computational analysis approaches. This can lead to a machine psychology for foundation models that focuses on computational insights and precise experimental comparisons instead of performance measures alone. I will showcase the utility of this approach by showing how current LLMs behave across a variety of cognitive tasks, as well as how one can make them more human-like by fine-tuning directly on psychological data.
Eric Schulz, Max Planck Research Group Leader in Tuebingen, works on the building blocks of intelligence using a mixture of computational, cognitive, and neuroscientific methods. He has worked with Maarten Speekenbrink on generalization as function learning, and has collaborated with Sam Gershman and Josh Tenenbaum.
References:
Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120. https://www.pnas.org/doi/full/10.1073/pnas.2218523120
Akata, E., Schulz, L., Coda-Forno, J., Oh, S. J., Bethge, M., & Schulz, E. (2023). Playing repeated games with large language models. arXiv preprint arXiv:2305.16867. https://arxiv.org/pdf/2305.16867.pdf
Allen, K. R., Brändle, F., Botvinick, M., Fan, J., Gershman, S. J., Griffiths, T. L., ... & Schulz, E. (2023). Using games to understand the mind. https://psyarxiv.com/hbsvj/download?format=pdf
Binz, M., & Schulz, E. (2023). Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917. https://arxiv.org/pdf/2306.03917.pdf
Cognitive architectures and the crucial role of motivation: Linking intrinsic needs, goals, effort, and performance
Ron Sun Cognitive Science Department Rensselaer Polytechnic Institute
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am (EST) April 27, 2023 Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: Motivation is a crucially important aspect of human psychology, but “motivation” can denote a number of different things. Effects of motivation on cognition and performance have been found empirically in different fields, and the relationship between them seems complex and multi-faceted. There are many seemingly inconsistent studies, as well as many different theories from different disciplines. I will show that many of these can actually be synthesized within the unifying framework of a computational cognitive architecture. The framework can account for empirical phenomena across a wide range of domains, based on intrinsic needs/motives, utility calculation, and their effects on cognitive processes. The characteristics that enable it to address this fundamental aspect of human psychology are among those that most distinguish it from other popular models.
Ron Sun, Professor of Cognitive Science at Rensselaer Polytechnic Institute, studies, models, and simulates human cognitive agents, including their abilities to learn, reason, and act in the real world. His research can be roughly categorized into the following main strands: (1) cognitive architectures, (2) hybrid connectionist (“neurosymbolic”) models, and (3) cognitive social simulation and the cognitive social sciences. See his personal webpage (https://sites.google.com/site/drronsun) as well as the Clarion project (https://sites.google.com/site/drronsun/clarion/clarion-project).
References:
Sun, R., Bugrov, S., & Dai, D. (2022). A unified framework for interpreting a range of motivation-performance phenomena. Cognitive Systems Research, 71, 24-40. http://www.david-dai.net/s/2021SunBugrovDai.pdf
Bretz, S., & Sun, R. (2018). Two models of moral judgment. Cognitive Science, 42(S1), 4-37. https://onlinelibrary.wiley.com/doi/pdf/10.1111/cogs.12517
Rethinking behavior in the light of evolution
Paul Cisek Department of Neuroscience University of Montreal
UQÀM ISC DIC CRIA Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am March 30, 2023 Zoom *new*: https://uqam.zoom.us/j/89902403751
Abstract: In psychology and neuroscience, the brain is usually described as an information processing system that encodes and manipulates representations of knowledge to produce plans of action. This leads to a decomposition of brain functions into processes like object recognition, memory, decision-making, action planning, etc. However, neurophysiological data do not support many of these subdivisions. I will explore a different set of functional subdivisions, guided by data on the evolutionary process that produced the human brain. I will summarize a sequence of innovations that appeared in nervous systems from the earliest multicellular animals to humans. Along the way, functional subdivisions and elaborations will be introduced in parallel with the neural specializations that made them possible, gradually building up an alternative conceptual taxonomy of brain functions. These functions emphasize mechanisms for real-time interaction with the world, rather than for building explicit knowledge of the world, and the relevant representations emphasize pragmatic outcomes rather than decoding accuracy, mixing variables in the way seen in real neural data. This alternative taxonomy may better delineate the real functional pieces into which the human brain is organized, offering a more natural mapping between behavior and neural mechanisms.
Paul Cisek is a professor in the Department of Neuroscience at the University of Montreal. He has a background in computer science, artificial intelligence, and neurophysiology. His work combines these in an interdisciplinary approach toward understanding how the brain controls our interactions with the world, suggesting that the brain is organized as a system of parallel sensorimotor streams that have been differentiated and elaborated over millions of years of evolution. His empirical work investigates the neural dynamics of how potential actions are specified and how they compete in cortical and subcortical circuits.
References
Cisek, P. (2022). Evolution of behavioural control from chordates to primates. Philosophical Transactions of the Royal Society B, 377(1844), 20200522. https://cisek.org/pavel/Pubs/Cisek2022-PTRSB.pdf
Cisek, P. (2019). Resynthesizing behavior through phylogenetic refinement. Attention, Perception, & Psychophysics, 81(7), 2265-2287. https://cisek.org/pavel/Pubs/Cisek2019.pdf
Pezzulo, G., & Cisek, P. (2016). Navigating the affordance landscape: Feedback control as a process model of behavior and cognition. Trends in Cognitive Sciences, 20(6), 414-424. https://cisek.org/pavel/Pubs/PezzuloCisek2016.pdf