Intuitive Physical Reasoning and Mental Simulation
Todd Gureckis
Psychology, NYU
UQÀM ISC DIC CRIA
Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am
December 15, 2022
Zoom: https://uqam.zoom.us/j/88481835073
Abstract: The ability to reason about the physics of our world (e.g., what arrangements of objects are stable, how things will fall or move under a force) is central to human intelligence. One influential hypothesis is that this capacity stems from the ability to perform “mental simulations” of physical events (in effect, playing a mental “movie” of the future evolution of a scene according to the laws of physics). In this talk, I’ll try to pin down several core commitments of the mental simulation approach that must hold for the general theory to be viable. I will then describe experiments we recently conducted to test these commitments. Along the way, we stumbled into several curious and novel errors and biases in human physical reasoning that we believe represent limits to the universality of contemporary simulation theories. If there is time, I will discuss a related project examining how efficiently or optimally people “experiment” in the physical world in order to learn the covert properties of objects, such as mass or attractive/repulsive forces like magnetism.
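To make the “mental movie” idea concrete, here is a toy sketch (not Gureckis’s actual model; all function names, dynamics, and noise parameters are illustrative assumptions) of how simulation accounts are often formalized: run many noisy rollouts of a scene forward under approximate physics, then read a probabilistic judgment off the distribution of outcomes.

```python
import random

def simulate_drop(x0, height=1.0, noise_sd=0.05, dt=0.01, g=9.8):
    """One noisy 'rollout' of a ball dropped from `height` at horizontal
    position `x0`; Gaussian drift stands in for perceptual/dynamics noise."""
    x, y, vy = x0, height, 0.0
    while y > 0.0:
        vy -= g * dt                           # gravity
        y += vy * dt                           # vertical update
        x += random.gauss(0.0, noise_sd) * dt  # noisy horizontal drift
    return x  # landing position

def p_lands_in(x0, lo, hi, n_rollouts=1000):
    """Estimate P(landing in [lo, hi]) by averaging many noisy rollouts."""
    hits = sum(lo <= simulate_drop(x0) <= hi for _ in range(n_rollouts))
    return hits / n_rollouts
```

On this view, graded human judgments fall out of averaging over stochastic simulations; the experiments described in the talk probe where such accounts break down.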
Todd M. Gureckis, Professor of Psychology, New York University, studies how people actively explore their world in order to learn, including everyday reasoning capacities for the physical and social world. His research combines methods of computational modeling, developmental psychology, cognitive neuroscience, and online data collection. He is the founder and a lead developer of the psiTurk (https://psiturk.org/) package, a tool for facilitating online experiments used in hundreds of research labs. His work has been recognized by the NSF CAREER award, the Presidential Early Career Award (PECASE) from the Office of Science and Technology Policy at the White House, and the James S. McDonnell Foundation Scholar award, as well as several paper and conference awards with his students, including the Marr Prize from the Cognitive Science Society and the Clifford T. Morgan Prize from the Psychonomic Society. He has served as Associate Editor for Cognitive Science, Topics in Cognitive Science, and Computational Brain & Behavior.
References
https://gureckislab.org/
https://gureckislab.org/papers/#/ref/ludwin2021limits
https://gureckislab.org/papers/#/ref/ludwinpeery2020broken
https://gureckislab.org/papers/#/ref/bramley2018intuitive
Atlas of Forecasts: Modeling and Mapping Desirable Futures
Katy Börner
Victor H. Yngve Distinguished Professor of Engineering and Information Science
Luddy School of Informatics, Computing, and Engineering, Indiana University, USA
UQÀM ISC DIC CRIA
Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am
December 1, 2022
Zoom: https://uqam.zoom.us/j/88481835073
Abstract: Envisioning and implementing desirable futures requires a deep understanding of developments in science and technology as well as the ability to both simulate and communicate the likely impact of alternative actions. At a time when our relationship to a vulnerable planet Earth is especially important, such a profound awareness of complex, interlinked systems is needed more than ever. Atlas of Forecasts uses advanced data visualizations to introduce different types of computational models and demonstrates how model results can be used to inform effective decision-making. The models aim to capture the structure and dynamics of developments in education and the job market, progress in science and technology, and the impact of government policies—all from the micro to the macro levels. Model results can help us decide which human skills are needed in an artificial intelligence–empowered economy; which courses and degrees are most effective in upskilling and reskilling the current and future workforce; what progress in science and technology is likely to happen; and how policymakers can future-proof regions or nations.
Katy Börner’s research focuses on the development of data analysis and visualization techniques for information access, understanding, and management. She is particularly interested in the formalization, measurement, and systematic improvement of people’s data visualization literacy; the study of the structure and evolution of scientific disciplines; the construction and usage of a Human Reference Atlas; and the development of cyberinfrastructures for large-scale scientific collaboration and computation.
References
Börner, Katy. 2021. Atlas of Forecasts: Modeling and Mapping Desirable Futures. Cambridge, MA: The MIT Press.
Börner, Katy, Andreas Bueckle, and Michael Ginda. 2019. Data visualization literacy: Definitions, conceptual frameworks, exercises, and assessments. PNAS 116 (6): 1857-1864. https://www.pnas.org/content/116/6/1857
Börner, Katy. 2015. Atlas of Knowledge: Anyone Can Map. Cambridge, MA: The MIT Press. http://scimaps.org/atlas2
Börner, Katy. 2010. Atlas of Science: Visualizing What We Know. Cambridge, MA: The MIT Press. http://scimaps.org/atlas/
The observer’s grounding problem in human-robot interaction
Tom Ziemke
Department of Computer and Information Science, Linköping University, Sweden
UQÀM ISC DIC CRIA
Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am
November 17, 2022
ZOOM: https://uqam.zoom.us/j/88481835073
Abstract: People commonly attribute intentional mental states, such as beliefs and goals, to robots (Thellman et al., 2022; Ziemke, 2020). In a recent paper we formulated the perceptual belief attribution problem (Thellman & Ziemke, 2021): how can people interacting with robots understand what those robots know about the shared physical environment without knowing much about the robots’ sensors, perception, memory, etc.? In this talk I will focus on the observer’s grounding problem, which is the other side of the same coin: in interaction with a robot, people tend to make anthropomorphic, folk-psychological attributions based on their own grounding rather than the robot’s.
Bio: Tom Ziemke is Professor of Cognitive Systems at Linköping University, Sweden. His main research interests are in situated/embodied cognition and social interaction, with a current focus on people’s interaction with different types of autonomous technologies, ranging from social robots to automated vehicles. A long-standing research interest is the relation between cognition and computation, and the resulting (mis)conceptions of AI among both researchers and the general public.
References:
Understanding robots https://www.science.org/doi/10.1126/scirobotics.abe2987
Explainability in Social Robotics https://doi.org/10.1145/3461781
Mental State Attribution to Robots https://doi.org/10.1145/3526112
AI/robotics and active visual and tactile perception
Lorenzo Natale
Italian Institute of Technology, Genoa
UQÀM ISC DIC CRIA
Séminaire en informatique cognitive/Cognitive Informatics Seminar
Thursday, 10:30 am
November 10, 2022
Zoom: https://uqam.zoom.us/j/88481835073
Abstract: Modern AI algorithms provide exceptional performance but require long training times and large datasets that are expensive to annotate. Robots, by contrast, can actively interact with the environment and with humans, using their sensory systems to learn online how to perceive and interact with objects. To extract structured information, however, a robot must be endowed with appropriate sensors, fast learning algorithms, and exploratory behaviors that guide its interaction with the world.
In this talk I will introduce the sensory system we developed for the iCub humanoid robot, and in particular its tactile sensing technology. I will then review work in which we studied how to use visual and tactile feedback to explore unknown objects and to control the interaction between the hand and the object for shape modelling, object discrimination, and tracking. Finally, I will present recent work in which we developed fast learning algorithms for object segmentation that leverage interaction with a teacher and use active learning to adapt to new contexts.
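The teacher-in-the-loop idea mentioned above can be illustrated by a minimal active-learning sketch (purely illustrative, not the iCub pipeline; the uncertainty measure, function names, and budget are assumptions): the learner ranks unlabeled samples by how uncertain its current predictions are and asks the teacher to label only the most uncertain ones.

```python
def uncertainty(p):
    """Margin-based uncertainty for a probability p in [0, 1]:
    1.0 when p is 0.5 (maximally uncertain), 0.0 when p is 0 or 1."""
    return 1.0 - abs(p - 0.5) * 2

def active_learning_round(pool, predict, teacher, budget=5):
    """Query the teacher on the `budget` most uncertain samples in `pool`.
    `predict` maps a sample to a class probability; `teacher` returns
    its label. Returns the newly labeled (sample, label) pairs."""
    ranked = sorted(pool, key=lambda x: uncertainty(predict(x)), reverse=True)
    return [(x, teacher(x)) for x in ranked[:budget]]
```

In practice such a loop spends the annotation budget where the model is least confident, which is one way a robot can learn quickly from a human teacher.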
Lorenzo Natale, Senior Researcher at the Italian Institute of Technology and coordinator of the Center for Robotics and Intelligent Systems, was one of the main contributors to the design and development of the iCub humanoid robot. His research interests span artificial vision, tactile perception and software architectures for robotics.
References:
Ceola, F., Maiettini, E., Pasquale, G., Meanti, G., Rosasco, L., and Natale, L. Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot. IEEE Transactions on Robotics, 2022.
Maiettini, E., Tikhanoff, V., and Natale, L. Weakly-Supervised Object Detection Learning through Human-Robot Interaction. In Proc. International Conference on Humanoid Robotics, Munich, Germany, 2021.
Vezzani, G., Pattacini, U., Battistelli, G., Chisci, L., and Natale, L. Memory Unscented Particle Filter for 6-DOF Tactile Localization. IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1139-1155, 2017.