Do we attribute intentional agency to humanoid robots?
Agnieszka Wykowska
Istituto Italiano Di Tecnologia
Genoa, Italy
UQÀM ISC DIC CRIA
Cognitive Informatics Seminar
Séminaire en informatique cognitive
Thursday, 10:30 am ET (Montreal time)
October 6, 2022
Zoom: https://uqam.zoom.us/j/88481835073
Abstract: When predicting and explaining the behavior of other humans, we adopt the intentional stance, and refer to mental states in order to understand others’ actions. It is not clear, however, whether and when we adopt the intentional stance also towards artificial agents, such as humanoid robots.
This talk will provide an overview of research conducted in my lab which addresses this question. I will present a tool for measuring the adoption of the intentional stance. The likelihood of adopting the intentional stance is coded in specific patterns of neural activity at rest. Interactive scenarios influence adoption of the intentional stance more than mere observation of subtle human-like characteristics of a robot’s behavior.
Experiments using interactive joint action protocols with a humanoid robot to study the vicarious and joint sense of agency show that the robot's motor repertoire, and our ability to represent its actions within our own sensorimotor repertoire, influence the vicarious sense of agency. Embedding a non-verbal adaptation of a “Turing test” in a human-robot joint action task showed that human-like variability in the robot's simple button presses makes the robot pass the test.
The talk will conclude with a discussion of the role of the intentional stance and sense of agency in other mechanisms of social cognition, and their implications in applied domains of social robotics in healthcare.
References:
Bossi, F., Willemse, C., Cavazza, J., Marchesi, S., Murino, V., & Wykowska, A. (2020). The human brain reveals resting state activity patterns that are predictive of biases in attitudes towards robots. Science Robotics, 5(46), eabb6652: https://www.science.org/doi/10.1126/scirobotics.abb6652
Marchesi, S., De Tommaso, D., Perez-Osorio, J., & Wykowska, A. (2022). Belief in sharing the same phenomenological experience increases the likelihood of adopting the intentional stance towards a humanoid robot. Technology, Mind, and Behavior, 3(3): https://www.apa.org/pubs/journals/releases/tmb-tmb0000072.pdf
Ciardo, F., De Tommaso, D., & Wykowska, A. (2022). Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test. Science Robotics, 7, eabo1241: https://www.science.org/doi/10.1126/scirobotics.abo1241
Roselli, C., Ciardo, F., De Tommaso, D., & Wykowska, A. (2022). Human-likeness and attribution of intentionality predict vicarious sense of agency over humanoid robot actions. Scientific Reports, 12, 13845: https://www.nature.com/articles/s41598-022-18151-6.pdf
Note: for papers behind a paywall, please visit our website, where you can find access links to all papers: https://instanceproject.eu/publications/list-of-publications
Professor Agnieszka Wykowska leads the unit “Social Cognition in Human-Robot Interaction” at the Italian Institute of Technology (Genoa, Italy). Her research foci are interdisciplinary, bridging psychology, cognitive neuroscience, robotics, and healthcare. She combines cognitive neuroscience methods with human-robot interaction to understand the brain mechanisms involved in interaction with other humans and with robots. Her research is also dedicated to applications of social robotics in healthcare: her team develops robot-assisted training protocols to help children diagnosed with autism spectrum disorder improve their social skills.
08-Sep: Bernard Baars, Conscious computing is only a metaphor
15-Sep: Jean-Pierre Briot, Music creation with deep learning techniques
22-Sep: Mehdi Khamassi, Active exploration in reinforcement learning
29-Sep: Murray Shanahan, Animal cognition and AI
06-Oct: Agnieszka Wykowska, Do we attribute intentional agency to humanoid robots?
20-Oct: Christian Lebière, Cognitive architectures and their applications
03-Nov: Baptiste Caramiaux, Interactive Machine Learning: Principles and Applications
10-Nov: Lorenzo Natale, AI/robotics and active visual and tactile perception
17-Nov: Tom Ziemke, The observer’s grounding problem in human-robot interaction
24-Nov: Christian Keysers, Neural Basis of Empathy and Prosociality Across Species
01-Dec: Katy Börner, Atlas of Forecasts: Modeling and Mapping Desirable Futures
08-Dec: Karl Friston, Active inference and artificial curiosity
15-Dec: Todd Gureckis, Intuitive Physical Reasoning and Mental Simulation
Animal Cognition and AI
Murray Shanahan
Cognitive Robotics, Imperial College & DeepMind
UQÀM ISC DIC CRIA
Cognitive Informatics Seminar
September 29, 2022
Thursday, 10:30 am
Zoom: https://uqam.zoom.us/j/88481835073
Abstract:
Common sense in humans is founded on a set of basic capacities that are possessed by many other animals, capacities pertaining to the understanding of objects, space, and causality. The field of animal cognition has developed numerous experimental protocols for studying these capacities and, thanks to progress in deep reinforcement learning (RL), it is now possible to apply these methods directly to evaluate RL agents in 3D environments. The Animal-AI Environment aims to apply the ability-oriented testing used in comparative psychology to AI systems. Besides evaluation, the animal cognition literature offers a rich source of behavioural data, which can serve as inspiration for RL tasks and curricula.
Bio:
Murray Shanahan is Professor of Cognitive Robotics at Imperial College London and Senior Research Scientist at DeepMind. His publications span artificial intelligence, robotics, logic, dynamical systems, computational neuroscience, and philosophy of mind. His work up to 2000 was in the tradition of classical, symbolic AI. He then turned his attention to the brain and its embodiment. His current interests include neurodynamics, consciousness, machine learning, and the impacts of artificial intelligence.
References:
Shanahan, M., Crosby, M., Beyret, B., & Cheke, L. (2020). Artificial intelligence and the common sense of animals. Trends in Cognitive Sciences, 24(11), 862-872: https://www.sciencedirect.com/science/article/pii/S1364661320302163
Voudouris, K., Crosby, M., Beyret, B., Hernández-Orallo, J., Shanahan, M., Halina, M., & Cheke, L. G. (2022). Direct Human-AI Comparison in the Animal-AI Environment. Frontiers in Psychology, 13, 711821: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.711821/full
Shanahan, M., & Mitchell, M. (2022). Abstraction for Deep Reinforcement Learning. IJCAI 2022; arXiv preprint arXiv:2202.05839: https://arxiv.org/pdf/2202.05839.pdf
Shanahan, M. (2010). Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press: https://www.doc.ic.ac.uk/~mpsha/EIL.html (full text: https://www.doc.ic.ac.uk/~mpsha/ShanahanBook2010.pdf)
Music creation with deep learning techniques
Jean-Pierre Briot
Sorbonne Université, Paris
Cognitive Informatics Seminar
UQÀM ISC DIC CRIA
Thursday, 10:30 am
September 15, 2022
Zoom: https://uqam.zoom.us/j/88481835073
Abstract: A growing application area for the current wave of deep learning (the return of artificial neural networks, on steroids) is the generation of creative content, notably music (but also images and text). The motivation is to use machine learning techniques to automatically learn musical styles from arbitrary musical corpora and then to generate musical samples from the estimated distribution, with some degree of control over the generation. This talk will survey recent achievements in deep-learning-based music generation using dedicated generative architectures such as VAEs, GANs, and Transformers, analyzing principles and successes as well as challenges, including the limits of automated generation versus providing assistance to human musicians.
Jean-Pierre Briot is a senior researcher (research director) in computer science at LIP6, the joint computer science research lab of CNRS (Centre National de la Recherche Scientifique) and Sorbonne Université in Paris, France. He is also a permanent visiting professor at PUC-Rio in Rio de Janeiro, Brazil. His general research interest is the design of intelligent, adaptive, and cooperative software, at the crossroads of artificial intelligence, distributed systems, and software engineering, with applications in the Internet of Things, decision support systems, and computer music. His current interest is the use of AI techniques (notably deep learning) within music creation processes. He is the principal author of a recent reference book on deep learning techniques for music generation:
Briot, J. P., Hadjeres, G., & Pachet, F. D. (2020). Deep Learning Techniques for Music Generation. Heidelberg: Springer: https://link.springer.com/book/10.1007/978-3-319-70163-9
For more details (including access to publications): http://webia.lip6.fr/~briot/cv/
Briot, J. P. (2021). From artificial neural networks to deep learning for music generation: history, concepts and trends. Neural Computing and Applications, 33(1), 39-65.
https://hal.sorbonne-universite.fr/hal-02539189v3/file/nn4music-hal-v3.pdf
Briot, J. P. (2019). Apprentissage profond et génération de musique, Hors série Intelligence artificielle, Tangente - L'aventure mathématique, (68):30-37, September 2019.
https://webia.lip6.fr/~briot/cv/apgm-2019
Dear Colleagues,
Professor Nicolas Burra from the University of Geneva (https://www.unige.ch/fapse/cognition/burra.php) will be presenting his work on the neural correlates of the perception of human gaze on Thursday, April 14, 2022, from 11.00am-12.30pm EDT (Zoom).
Talk Title: The role of top-down mechanisms in direct gaze perception.
Abstract: Human beings, as a social species, have a heightened ability to detect and perceive visual features involved in social exchange, such as faces and eyes. In particular, eye gaze conveys information crucial for social interactions. Researchers have posited that in order to engage in dynamic face-to-face communication in real time, our brains need to process another person's gaze direction rapidly and automatically. Evidence indicates that direct gaze enhances face encoding and attentional capture and that direct gaze is perceived and processed more quickly than averted gaze. These findings are summarized as the “direct gaze effect”. However, recent literature suggests that the mode of visual information processing modulates the effect of direct gaze. In the project we are presenting, we claim that top-down processing, and specifically the task relevance of eye features, promotes early preferential processing of direct compared to indirect gaze. We propose that low relevance of eye features in the task will prevent differences in the processing of gaze direction because its encoding will be superficial. The differential treatment of direct and indirect gaze will only occur when the eyes are task-relevant. To assess the implication of task relevance for the time course of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of the typical ERP markers such as the P1, N170, or P300, we will measure lateralized components such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. We hypothesize that the task relevance of eye features is crucial in the direct gaze effect and propose to reexamine previous studies that had cast doubt on the existence of the direct gaze effect. In this talk, we will present the planned experiments as well as some preliminary data.
Overall, these studies contribute to the gaze processing literature both at empirical and theoretical levels by assessing systematically the role of top-down processing in the early perception of direct gaze.
ZOOM Link: https://mcgill.zoom.us/j/81914134311
I hope many of you can join us for this event to round up the Spring semester!
Best,
Jelena Ristic
------------------------------------------
Jelena Ristic, PhD
Professor & William Dawson Scholar
Department of Psychology, McGill University
1205 Dr. Penfield Avenue, Montreal, QC, H3A 1B1
Phone: 514.398.2091
Email: jelena.ristic@mcgill.ca
Web: http://www.mcgill.ca/asc
Dear Colleagues,
Professor Reiko Graham from Texas State University (https://www.psych.txstate.edu/faculty/psydirectory/Reiko-Graham.html) will be presenting her work TODAY, April 8, 2022, from 11.30am-12.30pm. Professor Graham is visiting McGill on her sabbatical.
Talk Title: Can I eat this? Event-related potentials are modulated by feedback regarding edibility
Abstract: Not all mistakes are created equal, and the consequences of errors vary widely. To examine the neural correlates of error magnitude, paradigms using extrinsic rewards and punishments (e.g. monetary gains and losses) are often used. We endeavored to create a task that tapped into intrinsic motivations by asking participants to make judgments about the edibility of ambiguous objects, which was then followed by feedback. We reasoned that edibility judgments would engage evaluative processes associated with potential oral incorporation, such that incorrectly stating that an inedible object was edible would be considered a more serious error (violating the body boundary) than the opposite. Twenty-five undergraduates (15 male, mean age = 21.5 years) viewed close-ups of food/drinks or nonfood/drinks, and indicated whether they could consume the objects. Feedback about stimulus type (unambiguous, zoomed-out images) was then provided. Analyses focused on ERPs to feedback trials; specifically, an earlier frontocentral negativity (feedback-related negativity or FRN) that is sensitive to reward and error magnitude and the centroparietally-distributed P300 that is sensitive to motivationally relevant stimuli. In line with our expectations, a stimulus type by outcome interaction was observed for the FRN, such that amplitude was largest when participants incorrectly identified nonfoods as foods, suggesting that this error was more significant than incorrectly identifying foods as nonfoods. The P300 was also sensitive to feedback, and amplitudes were highest when participants correctly identified foods. These results provide support for the hypothesis that the FRN is an index of error magnitude. Additionally, the enhanced P300 amplitudes to correct feedback regarding food items may index the salience and reinforcing properties of making correct judgments regarding edibility.
ZOOM Link: https://mcgill.zoom.us/j/84653117031
I hope many of you can join us for this event!
Best,
Jelena Ristic
------------------------------------------
Jelena Ristic, PhD
Professor & William Dawson Scholar
Department of Psychology, McGill University
1205 Dr. Penfield Avenue, Montreal, QC, H3A 1B1
Phone: 514.398.2091
Email: jelena.ristic@mcgill.ca
Web: http://www.mcgill.ca/asc
Hi Everyone,
Pauline Palma, a graduate student in our department, will be presenting
her work at the upcoming CRBLM symposium on November 15th, 2021 at 10AM
EST. Please see below for more information or attached for the complete
program.
Thank you,
The CRAM team
Symposium: Cultural Evolution of Communication
Despite the tremendous structural diversity across languages and across
communication systems in non-human animals, many patterns are prevalent
across languages, across individuals within animal species, and across
species. This symposium will feature talks about cognitive, perceptual,
and production biases that contribute to the formation of common
patterns in humans and non-human animals. There will be several short
presentations, and a keynote address by Professor Kenny Smith
(University of Edinburgh).
When: November 15, 10:00am-noon (Montreal time)
Registration and details here:
https://crblm.ca/cultural-evolution-of-communication/
Hi Everyone,
Sean Devine, a graduate student in our department, will be holding a
workshop on multilevel models in R on June 29th and July 6th. Please
see below for more details. All are welcome.
Thanks,
Kevin
---------
From: Alexa Ruel
Sent: Thursday, June 24, 2021 3:40:42 PM
To: ccd-brownbag@lists.concordia.ca
Cc: Sean Devine <seandamiandevine@gmail.com>
Subject: Multilevel Modelling Workshop Information
Hi everyone! 👋😊
Here is everything you need to know for Sean’s workshops on Multilevel
Modeling (MLM) which will be happening June 29th and July 6th from 9:00
– 11:00am.
The workshops focus mainly on the theory behind MLM, but Sean will work
through some specific examples in R.
Although you do not need any formal training in R, it helps to
understand the basics; keep reading for a link to a tutorial Sean
recommends.
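To give a flavor of what a multilevel model looks like in R, here is a minimal, illustrative sketch using the widely used lme4 package and its bundled sleepstudy dataset. This example is not taken from Sean's workshop materials; it is just the standard lme4 random-effects example.

```r
# Random-intercept and random-slope model: reaction time as a function
# of days of sleep deprivation, with both the intercept and the slope
# allowed to vary across subjects.
library(lme4)
data(sleepstudy)
m <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
summary(m)   # fixed effects plus random-effect variances
fixef(m)     # population-level intercept and slope
```

The `(Days | Subject)` term is what makes the model multilevel: it tells lmer to estimate subject-level deviations from the population-level effect of `Days`.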
Downloading R and RStudio:
If you don’t have R installed, you can download it here:
https://utstat.toronto.edu/cran/ [4]
You can also download RStudio, an integrated development environment
for R (this is the program where code will be entered and
executed): https://www.rstudio.com/products/rstudio/
If you are completely new to R, here is a link to a crash course that
can be explored at your leisure: http://www.r-tutor.com/r-introduction
Workshop Materials:
Sean has kindly provided us with the slides he will use for the
workshops, which can be found here:
https://docs.google.com/presentation/d/1BkCCvfx8W2HOe89tX4FT7uZXQdZK3NWUyvm…
[5]
All the code and data we will be using is also available for download
ahead of time on Github:
https://github.com/seandamiandevine/MLMTutorial_2021
Zoom Link:
For both workshops, we will be meeting here:
https://mcgill.zoom.us/my/sdevine [6].
If you know of someone else or another group that might be interested in
joining, they are of course welcome to do so, but please let Alexa
(organizer: alexa.ruel@mail.concordia.ca) or Sean (presenter:
seandamiandevine@gmail.com) know in advance.
Finally, if you cannot make it to one or both of the workshop sessions,
not to worry, they will be uploaded to the GitHub repository with the
materials after each workshop
(https://github.com/seandamiandevine/MLMTutorial_2021). _This being
said, if you do not want to be seen or heard in the video, please let us
know so we can make sure to do our very best to keep you out of it._
We look forward to seeing you at these workshops!
If you have any questions or concerns, please do not hesitate to contact
Alexa at alexa.ruel@mail.concordia.ca
Best,
------
Alexa Ruel, M.A.
Ph.D. Student in Psychology
Lifespan and Decision-Making Laboratory
Concordia University
7141 Sherbrooke St. West
Montreal, QC H4B 1R6
www.ldmlab.org [1]
www.alexaruel.github.io [2]
Concordia’s Journal of Accessible Psychology (CJAP)
Founder & Editor-in-Chief
Concordia’s Journal of Psychology and Neuroscience (CJPN)
Co-Founder & Psychology Journals Liaison
www.concordiapsychjournals.ca [3]
Links:
------
[1] http://www.ldmlab.org
[2] http://www.alexaruel.github.io
[3] http://www.concordiapsychjournals.ca
[4] https://utstat.toronto.edu/cran/
[5]
https://docs.google.com/presentation/d/1BkCCvfx8W2HOe89tX4FT7uZXQdZK3NWUyvm…
[6] https://mcgill.zoom.us/my/sdevine