

Michael Nunez: Using neural data to identify the unidentifiable components of cognition during decision-making

December 12, 2023, REC GS.08. Drift-Diffusion Models (DDMs) are a widely used class of models that assume an accumulation of evidence during a quick decision. These models are often used as measurement models to assess individual differences in cognitive processes, such as an individual's evidence accumulation rate and response caution. An additional underlying assumption of these models is that there is internal noise in the evidence accumulation process. In fact, individual differences in internal noise are often a more parsimonious explanation than individual differences in both response caution and evidence accumulation rate. However, fitting DDMs to experimental choice-response time data…
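For readers unfamiliar with the identifiability problem the abstract alludes to: in a DDM, multiplying the drift rate, boundary, and internal noise by the same constant leaves the predicted choice and response-time distributions unchanged, which is why internal noise is conventionally fixed (e.g., to 1) rather than estimated. A minimal simulation sketch in plain NumPy; the parameter values are illustrative, not from the talk:

```python
import numpy as np

def simulate_ddm(drift, boundary, noise, n_trials=5000, dt=0.001, max_t=3.0, seed=0):
    """Simulate choices and response times from a simple drift-diffusion
    process: evidence x starts at 0 and evolves as
    dx = drift*dt + noise*sqrt(dt)*N(0,1) until |x| reaches the boundary."""
    rng = np.random.default_rng(seed)
    n_steps = int(max_t / dt)
    x = np.zeros(n_trials)
    choices = np.zeros(n_trials)
    rts = np.full(n_trials, np.nan)
    active = np.ones(n_trials, dtype=bool)
    for step in range(1, n_steps + 1):
        x[active] += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(active.sum())
        crossed = active & (np.abs(x) >= boundary)
        choices[crossed] = np.sign(x[crossed])
        rts[crossed] = step * dt
        active &= ~crossed
        if not active.any():
            break
    return choices, rts

# Scaling drift, boundary, and noise by the same constant yields identical
# choice/RT predictions, so internal noise is unidentifiable from behaviour alone.
c1, r1 = simulate_ddm(drift=1.0, boundary=1.0, noise=1.0)
c2, r2 = simulate_ddm(drift=2.0, boundary=2.0, noise=2.0)
print(f"upper-boundary rate: {np.mean(c1 == 1):.3f} vs {np.mean(c2 == 1):.3f}")
print(f"mean RT (s): {np.nanmean(r1):.3f} vs {np.nanmean(r2):.3f}")
```

The sketch only shows why behaviour alone cannot break the degeneracy; the talk's proposal is that adding neural data can.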

Claire Stevenson: Kids are smarter than ChatGPT

November 21, 2023, GS.09. Recent work with OpenAI's GPT language models concludes that analogical reasoning, using what you know about one thing to infer knowledge about a new, related instance, is emerging in these systems. My work shows something different: the newest GPT models are autoregression at its best, excelling at next-word prediction, but they cannot generalize what they have learned to novel domains, and they are not "reasoning". I will present two studies on analogical reasoning and demonstrate, through a series of experiments comparing GPT's performance to children's, that analogical reasoning has not emerged in these systems. I will…
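As an illustration of what "generalizing to novel domains" can mean here, consider letter-string analogies of the Hofstadter kind ("abc is to abd as ijk is to ?"), a common item type in this literature; whether these match the talk's exact stimuli is an assumption. A tiny sketch that constructs near items in the Latin alphabet and far-transfer items in a shuffled, unfamiliar alphabet, against which a model's completions could be scored:

```python
import random
import string

def successor(ch, alphabet):
    # the next symbol in the given (possibly unfamiliar) alphabet
    return alphabet[(alphabet.index(ch) + 1) % len(alphabet)]

def solve(item, alphabet):
    """Ground-truth solver: the A:B rule 'increment the last symbol',
    applied to C."""
    a, b, c = item
    assert b == a[:-1] + successor(a[-1], alphabet)   # sanity-check the rule
    return c[:-1] + successor(c[-1], alphabet)

latin = string.ascii_lowercase
print(solve(("abc", "abd", "ijk"), latin))            # -> "ijl"

# A far-transfer variant: the same rule in a shuffled alphabet. A system that
# has abstracted the relation, rather than memorized surface patterns, should
# still solve it.
rng = random.Random(0)
novel = "".join(rng.sample(latin, 26))
far_item = (novel[:3], novel[:2] + novel[3], novel[8:11])
print(f"{far_item[0]} : {far_item[1]} :: {far_item[2]} : ?",
      "->", solve(far_item, novel))
```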

Andy Keller: Topographic Organization and Traveling Waves as Natural Representational Structure in Artificial Neural Networks 

October 31, 2023, GS.09. In the machine learning community, structured representations have proven hugely beneficial for efficient learning from limited data and for generalization far beyond the training set. Examples of such structured representations include the spatially organized feature maps of convolutional neural networks and the group-structured activations of other equivariant models. To date, however, the integration of such structured representations with deep neural networks has been limited to explicitly geometric transformations (such as spatial translation or rotation) known a priori to model developers. In the real world, we know that natural intelligence is able to efficiently…
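For concreteness, the "structured representations" mentioned here can be stated as an equivariance property: transforming the input and then computing features gives the same result as computing features and then transforming them. A minimal NumPy/SciPy sketch for the translation case (circular shifts, so the equality is exact):

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = rng.random((16, 16))
kernel = rng.random((3, 3))

def conv_features(img):
    # circular cross-correlation: the core operation of one conv-layer channel
    return correlate2d(img, kernel, mode="same", boundary="wrap")

# Equivariance: shifting the input and then filtering equals filtering and
# then shifting the feature map. The representation is structured, not merely
# invariant: the shift is preserved, in an organized way, in the features.
shifted_then_filtered = conv_features(np.roll(image, 2, axis=1))
filtered_then_shifted = np.roll(conv_features(image), 2, axis=1)
print(np.allclose(shifted_then_filtered, filtered_then_shifted))  # True
```

The talk's question is what plays the role of `np.roll` for transformations that are not geometric and not known a priori.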

Micha Heilbron: A generative AI framework for studying the predictive brain

September 26, 2023, location TBA. Advances in AI are providing new ways to study cognition. I will argue that this will have a profound impact on the study of the mind, comparable to the impact of cognitive neuroscience in prior decades. I will illustrate this by focusing on one domain, showcasing my own work using generative AI to test the predictive brain hypothesis. This approach relies on generative AI to approximate the predictions that the brain is hypothesised to make, and compares these predictions to brain responses in natural conditions. This 'generative AI framework' allows for strong and precise tests…
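The core computation in such a framework is comparing model-derived predictions (for language, typically word-by-word surprisal) with recorded brain responses. A minimal sketch of the idea, assuming the Hugging Face transformers package and GPT-2; the "brain" vector here is simulated noise standing in for real recordings:

```python
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "It was a bright cold day in April and the clocks were striking"
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits

# surprisal of each token given its left context: -log p(token | context)
logp = torch.log_softmax(logits[0, :-1], dim=-1)
surprisal = -logp[torch.arange(ids.shape[1] - 1), ids[0, 1:]].numpy()

# placeholder per-token brain response; a real analysis would use EEG/MEG/fMRI
# responses aligned to the same stimulus
brain = np.random.default_rng(0).normal(size=surprisal.shape)
r = np.corrcoef(surprisal, brain)[0, 1]
print(f"correlation between model surprisal and brain signal: {r:.2f}")
```

In practice the comparison is a regression over many naturalistic stimuli rather than a single correlation, but the logic is the same: the generative model supplies the predictions the brain is hypothesised to compute.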

Ingmar Visser: Deep problems with deep learning and good-old-fashioned hidden Markov models to the rescue

June 2, 2023 at 15:00 in REC GS.08. Deep learning models are increasingly, and quite successfully, used to automate detection and classification tasks. Examples range from image classification and X-ray interpretation to eye-movement event detection. In all these cases, however, much is left to be desired: interpretability and reproducibility of modeling exercises are hard to come by in these networks, and training-dataset bias is a pervasive problem. A more classical approach suffers less from these drawbacks. I will present work we did on model-based classification of eye movements. The modeling work is based on good-old-fashioned hidden Markov…
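To make the hidden-Markov approach concrete: eye-tracking samples can be modeled as emissions from latent states (fixation vs. saccade) with state-dependent velocity distributions, so classification falls out of probabilistic inference rather than hand-tuned velocity thresholds. A minimal sketch on synthetic speed data, assuming the hmmlearn package (not necessarily the library used in the talk):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# synthetic eye-movement speeds (deg/s): long slow fixations interrupted by
# brief fast saccades; real data would come from an eye tracker
speeds = []
for _ in range(50):
    speeds.extend(rng.normal(5, 2, size=rng.integers(20, 60)))    # fixation
    speeds.extend(rng.normal(250, 50, size=rng.integers(2, 6)))   # saccade
X = np.abs(np.array(speeds)).reshape(-1, 1)

# two-state Gaussian HMM: after fitting, the low-mean state corresponds to
# fixations, the high-mean state to saccades; the transition matrix captures
# event durations
hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=100,
                  random_state=0)
hmm.fit(X)
states = hmm.predict(X)
print("state means (deg/s):", hmm.means_.ravel())
```

Because the HMM is generative, its parameters (state means, transition probabilities) are directly interpretable, which is the contrast with deep-learning classifiers drawn in the abstract.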

Sander Bohte: Biologically plausible models of neural reinforcement learning in working memory tasks

February 17, 2023 at 15:00 in REC GS.08. A key function of brains is the abstraction and maintenance of information from the environment for later use. To this end, we are able to recognize an event or a sequence of events and learn to respond appropriately. The challenge is to learn to recognize both what is important to remember or respond to, and also when to act. Reinforcement Learning (RL) is typically used to solve such complex tasks: to learn the what. Over the last few years, we have developed a biologically plausible neural network model that explains how neurons learn to represent task-relevant information in delayed response tasks where information…
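A toy version of the "what and when" problem can be captured even in tabular RL: give the agent an explicit memory register plus a store action, and let Q-learning discover that latching the cue and waiting for the go signal is what pays off. This is a didactic sketch, not the biologically plausible neural model discussed in the talk:

```python
import random
from collections import defaultdict

CUES = (1, 2)                 # two sample stimuli
DELAY = 3                     # blank steps between cue and go signal
ACTIONS = ("noop", "store", "respond_1", "respond_2")
T = DELAY + 2                 # cue step + delay steps + go step

def observe(t, cue):
    if t == 0:
        return cue            # sample stimulus
    if t == T - 1:
        return 3              # go signal
    return 0                  # blank delay screen

def run_episode(Q, eps=0.1, alpha=0.1, gamma=0.95):
    cue, memory = random.choice(CUES), 0
    for t in range(T):
        obs = observe(t, cue)
        state = (t, obs, memory)
        if random.random() < eps:                      # epsilon-greedy choice
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        reward, done = 0.0, t == T - 1
        if ACTIONS[a] == "store":
            memory = obs                               # latch the observation
        elif ACTIONS[a].startswith("respond"):
            done = True                                # responding ends the trial
            if obs == 3:                               # rewarded only at the go signal
                reward = 1.0 if ACTIONS[a] == f"respond_{cue}" else -1.0
        nxt = None if done else (t + 1, observe(t + 1, cue), memory)
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][a] += alpha * (target - Q[state][a])  # Q-learning update
        if done:
            return reward
    return 0.0

Q = defaultdict(lambda: [0.0] * len(ACTIONS))
for _ in range(30000):
    run_episode(Q)
score = sum(run_episode(Q, eps=0.0, alpha=0.0) for _ in range(200)) / 200
print(f"mean reward with greedy policy: {score:.2f}")  # approaches 1.0
```

The agent only succeeds if it learns both the what (store the cue, not the blank screen) and the when (respond at the go signal, not earlier); the talk's model achieves the analogous feat with neurally plausible learning rules.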

Jorge Mejias: Large-scale brain models: from neural dynamics to cognitive functions

April 14, 2023 at 15:00 in REC GS.08. Computational models of large-scale brain networks have traditionally focused on reproducing brain dynamics such as resting-state activity. However, embedding the mechanisms and structure needed to reproduce brain functions related to perception and cognition, in a way that also matches the neuroanatomical and electrophysiological evidence, has been more challenging. In this talk, I will present two recent examples of computational models of brain networks which include rudimentary but behaviorally relevant functions. The first example will focus on how the delay activity underlying working memory may emerge as a distributed phenomenon…
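The "delay activity" referred to here is the classic picture of working memory: recurrent excitation makes a population bistable, so a brief cue can switch it into a self-sustained high-rate state that outlasts the stimulus. A single-population caricature in NumPy (the talk's models are multi-area and distributed, which is exactly the point of the first example):

```python
import numpy as np

def f(x):
    # sigmoidal transfer function of the population firing rate
    return 1.0 / (1.0 + np.exp(-(x - 3.0)))

dt, tau, w = 0.001, 0.02, 6.0          # step (s), time constant (s), recurrent weight
T = int(2.0 / dt)
r = np.zeros(T)                        # population rate over time
stim = np.zeros(T)
stim[int(0.5 / dt):int(0.6 / dt)] = 4.0   # brief cue from 0.5 s to 0.6 s

# Euler integration of tau * dr/dt = -r + f(w*r + stim)
for t in range(T - 1):
    r[t + 1] = r[t] + dt * (-r[t] + f(w * r[t] + stim[t])) / tau

print(f"rate before cue: {r[int(0.4 / dt)]:.2f}")
print(f"rate after cue has ended: {r[-1]:.2f}  (persistent delay activity)")
```

With these illustrative parameters the network has a low and a high stable state; the cue pushes it over the unstable middle point, and the high state persists after the input is gone.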

Jelle Zuidema: Blackbox meets Blackbox: interpreting deep learning models in cognitive neuroscience

December 13, 2022 at 15:00. Deep learning models have become important tools in the cognitive neuroscience of language, vision, mathematics, music, and many other domains. In particular, the internal states of these deep learning models can successfully be used to predict (with "encoder-decoder models") the brain activation that one can observe using brain imaging techniques. However, it is unclear whether such successful predictions also lead to a better understanding of how the brain processes language, images, math, or music. Are the observed alignments a mere curiosity, or can they inform theories of cognitive processing? In this talk, I argue that the…
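Concretely, the prediction step described here is usually a regularized linear regression from a network layer's activations to measured responses, evaluated on held-out stimuli. A minimal sketch with simulated data, assuming scikit-learn; a real analysis would substitute actual network activations and fMRI/EEG responses:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 200, 50, 10

# hypothetical internal states of a deep network, one row per stimulus
features = rng.normal(size=(n_stimuli, n_units))

# simulated brain responses: a linear readout of the features plus noise
true_w = rng.normal(size=(n_units, n_voxels))
responses = features @ true_w + rng.normal(scale=2.0, size=(n_stimuli, n_voxels))

# fit the encoding model on one split, test predictions on held-out stimuli
Xtr, Xte, ytr, yte = train_test_split(features, responses, random_state=0)
enc = Ridge(alpha=1.0).fit(Xtr, ytr)
print(f"held-out R^2: {enc.score(Xte, yte):.2f}")
```

The talk's question starts where this sketch ends: a high held-out score establishes alignment, but not what that alignment tells us about cognitive processing.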

Adrien Doerig: The neuroconnectionist research programme

November 9, 2022 at 15:00 in REC B3.08. Artificial Neural Networks (ANNs) inspired by biology are beginning to be widely used to model behavioral and neural data, an approach we call neuroconnectionism. ANNs have been lauded as the current best models of information processing in the brain, but also criticized for failing to account for basic cognitive functions. We propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who…

Rule-Like Behavior in Artificial Neural Networks – The Role of Top-Down Control and Mediating Concepts

June 21, 2022. Daniel v/d Meer (supervisors: Maartje Raijmakers, Raoul Grasman, & Han van der Maas). As intrinsically connectionist models, artificial neural networks (ANNs) are not built to deal explicitly with symbolic representations. Still, in their parallel distributed way of processing data, their mechanisms have promising similarities to biological neural networks. Using ANNs as models of human processing, Raijmakers and colleagues (1996) investigated rule-like behavior in ANNs by applying a binary-encoded version of the discrimination-shift paradigm. Based on their findings, they concluded that simple ANNs do not show rule-like behavior. By replicating the results of Raijmakers and colleagues,…
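For readers unfamiliar with the paradigm: stimuli vary on two binary dimensions (say, colour and shape), only one of which predicts reward; after learning, the rule either reverses within the relevant dimension (a reversal, or intradimensional, shift) or moves to the other dimension (an extradimensional shift). Older children and adults typically relearn reversal shifts faster, a signature attributed to mediating concepts, so the relative relearning speed is the diagnostic for rule-like behavior. A minimal sketch in the spirit of the binary-encoded setup, with illustrative architecture and learning-rate choices, not those of the thesis:

```python
import numpy as np

# four binary-encoded stimuli varying on two dimensions (e.g. colour, shape)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
pre   = X[:, 0]        # pre-shift rule: the colour dimension predicts reward
rev   = 1 - X[:, 0]    # reversal (intradimensional) shift
extra = X[:, 1]        # extradimensional shift: shape becomes relevant

def make_net(seed, hidden=4):
    r = np.random.default_rng(seed)
    return [r.normal(0, 0.5, (2, hidden)), np.zeros(hidden),
            r.normal(0, 0.5, hidden), np.zeros(1)]

def epochs_to_criterion(net, y, lr=0.5, max_epochs=5000):
    """Plain gradient descent on cross-entropy until all four stimuli are
    classified correctly; returns the number of epochs needed."""
    W1, b1, W2, b2 = net
    for epoch in range(max_epochs):
        h = np.tanh(X @ W1 + b1)
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))
        if np.all((p > 0.5) == (y > 0.5)):
            return epoch
        d2 = (p - y) / len(X)               # gradient at the output
        W2 -= lr * h.T @ d2
        b2 -= lr * d2.sum()
        d1 = np.outer(d2, W2) * (1 - h**2)  # backpropagate through tanh
        W1 -= lr * X.T @ d1
        b1 -= lr * d1.sum(axis=0)
    return max_epochs

for name, shifted in [("reversal", rev), ("extradimensional", extra)]:
    net = make_net(seed=1)                  # identical initial weights per run
    epochs_to_criterion(net, pre)           # learn the original discrimination
    print(f"{name} shift: relearned in {epochs_to_criterion(net, shifted)} epochs")
```

Comparing the two epoch counts across many seeds, and against children's relearning patterns, is the kind of evidence used in this line of work to ask whether simple ANNs show the reversal advantage that marks rule-like behavior.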