Psychologists have always used behavioral tests to make inferences about the brain and, although it is most commonly associated with psychometrics, the study of individual differences can serve this aim as well. But how should individual differences be measured? As Haaf and Rouder point out in the main article, raw behavioral data is noisy and contaminated by errors. For this reason, it is becoming increasingly common to measure individual differences with sophisticated computational models, and then use the parameters of these models to analyze neural data. Such models include, for example, decay models of memory (Sense et al., 2016), drift-diffusion models (White et al., 2014), and reinforcement learning models (Collins, 2018). In this approach, “qualitative” differences do not really occur; individual differences show up as different parameter values, and differences between individuals are all “quantitative” in nature. This modeling approach, however, performs a sort of sleight of hand, making qualitative differences disappear by imposing constraints on behavior through the equations it is built upon. This raises another problem: how do we know which model is correct? For most phenomena, multiple possible modeling approaches exist. For example, intrusive fearful memories in Post-Traumatic Stress Disorder have been modeled through reinforcement learning (Myers et al., 2013) or through differently decaying memory traces (Smith et al., in press). These models provide very different and incompatible views of the underlying mechanisms, and it would be desirable to adjudicate between them before analyzing quantitative differences between individuals.
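The logic of this parameter-based approach can be sketched in a few lines. The following is a minimal, hypothetical illustration (not any specific published model): recall accuracy is simulated for two subjects under an exponential forgetting curve, and each subject's decay rate is then recovered by least squares. The function names (`forgetting`, `fit_alpha`) and all numerical values are invented for the example.

```python
import numpy as np

def forgetting(t, alpha):
    """Toy exponential forgetting model: recall probability after delay t."""
    return np.exp(-alpha * t)

def fit_alpha(delays, recall, grid=np.linspace(0.01, 1.0, 1000)):
    """Least-squares fit of the individual decay rate via a grid search."""
    errors = [np.sum((forgetting(delays, a) - recall) ** 2) for a in grid]
    return grid[int(np.argmin(errors))]

rng = np.random.default_rng(0)
delays = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # retention intervals

# Simulate noisy recall data for two subjects with different true rates,
# then fit each subject's decay rate from their own data alone.
true_alphas = {"s1": 0.05, "s2": 0.20}
fitted = {
    s: fit_alpha(delays, forgetting(delays, a) + rng.normal(0, 0.02, delays.size))
    for s, a in true_alphas.items()
}
```

The fitted decay rates, rather than the raw accuracies, would then serve as the individual-difference measure that is related to neural data.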
In many ways, this problem is analogous to the one singled out by Newell (1973) in his “You can’t play twenty questions with nature and win” paper. Newell argued that, instead of developing problem-specific models, different phenomena should be modeled within the same, invariant “cognitive architecture”. However, exactly as the same behavior can be explained by different modeling frameworks, it can also be accounted for at the level of the architecture or at the level of the specific model.
The work by Haaf and Rouder provides a useful way out of this impasse. Almost by definition, a qualitatively invariant effect must be part of the architecture, as it must be due to processing constraints that are shared across all humans. In other words, each qualitatively invariant effect that exists represents a constraint that any architecture must respect. Cognitive architectures could even be compared on the basis of the number of qualitatively invariant effects they predict. Once an architecture is established, quantitative differences between individuals might still be interpreted as reflecting individual differences in specific architecture parameters.
Although Newell was specifically addressing the architecture of the mind, the problem can be easily extended to understanding the brain—after all, the mantra of biology is that function is determined by structure, and the structure that supports cognition is the brain. In this sense, the architecture can be conceived of as the large-scale organization of pathways that connect functionally specialized brain regions. These pathways impose fundamental constraints on how information must flow through the brain and, although our brains differ, they also share a common organization. Indeed, we have recently examined precisely whether this common organization of pathways can be identified in the human brain, using neuroimaging data collected in the Human Connectome Project. The results show that such a high-level, common organization can be identified (Stocco et al., 2021) and that it closely reflects the main tenets that most cognitive architectures have converged upon (Laird et al., 2017). Such a common organization can be interpreted as the source of invariant cognitive effects across individuals.
As Haaf and Rouder point out, many psychologists believe that truly invariant effects might not exist and that every behavioral effect must exhibit some form of qualitative difference. The architectural view, however, provides a framework for interpreting rare and exceptional qualitative differences. Although an architecture should be invariant across individuals, some exceptions are easy to imagine. For example, patients suffering from brain damage or neurological disease can be expected to have such a high degree of neural diversity that they can be legitimately claimed to have a different architecture. Consider, for example, an extreme case such as patient HM (Corkin, 2002). Due to bilateral removal of the hippocampus, HM was unable to form new memories. His post-surgical behavior cannot be interpreted on a quantitative scale of, for instance, extreme memory decay (Zhou et al., 2021); instead, it is consistent with what would be expected if we take any cognitive architecture and selectively remove one specific module.
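The contrast between the two interpretations can be made concrete with a toy simulation. The following sketch (purely illustrative; the functions and parameter values are invented) compares a quantitative change—an extremely fast but finite decay rate—with a qualitative one, in which the memory-encoding module is removed altogether:

```python
import numpy as np

def recall_decay(t, alpha):
    """Intact architecture: recall decays with delay t at rate alpha."""
    return np.exp(-alpha * t)

def recall_no_module(t):
    """Module removed: nothing is ever encoded, so recall is zero."""
    return np.zeros_like(np.asarray(t, dtype=float))

delays = np.array([0.1, 1.0, 10.0, 100.0])

extreme = recall_decay(delays, alpha=5.0)  # very fast, but still gradual
lesioned = recall_no_module(delays)        # flat zero at every delay
```

However extreme the decay rate, recall at very short delays remains near ceiling and falls off gradually; removing the module predicts no new memories at any delay, which better matches HM's pattern of intact immediate performance but absent long-term retention.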
In summary, the distinction between qualitative and quantitative effects can be put into correspondence with the distinction between a common architecture and individually parametrized models developed within it. Just as quantitative differences in model parameters have been used to interpret individual differences in neural data (Zhou et al., 2021), qualitative effects might provide the key to refining our understanding of the cognitive architecture and the fundamental organization of the human brain.
The author has no competing interests to declare.
Collins, A. G. E. (2018). The Tortoise and the Hare: Interactions between Reinforcement Learning and Working Memory. Journal of Cognitive Neuroscience, 30(10), 1422–1432. DOI: https://doi.org/10.1162/jocn_a_01238
Corkin, S. (2002). What’s new with the amnesic patient HM? Nature Reviews Neuroscience, 3(2), 153–160. DOI: https://doi.org/10.1038/nrn726
Laird, J. E., Lebiere, C., & Rosenbloom, P. S. (2017). A Standard Model of the Mind: Toward a Common Computational Framework Across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine, 38(4). DOI: https://doi.org/10.1609/aimag.v38i4.2744
Myers, C. E., Moustafa, A. A., Sheynin, J., VanMeenen, K. M., Gilbertson, M. W., Orr, S. P., … & Servatius, R. J. (2013). Learning to obtain reward, but not avoid punishment, is affected by presence of PTSD symptoms in male veterans: empirical data and computational model. PLoS One, 8(8), e72508. DOI: https://doi.org/10.1371/journal.pone.0072508
Newell, A. (1973). You can’t play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual information processing (pp. 283–308). Academic Press. DOI: https://doi.org/10.1016/B978-0-12-170150-5.50012-3
Sense, F., Behrens, F., Meijer, R. R., & van Rijn, H. (2016). An individual’s rate of forgetting is stable over time but differs across materials. Topics in Cognitive Science, 8(1), 305–321. DOI: https://doi.org/10.1111/tops.12183
Smith, B. M., Chiu, M., Yang, Y., Sibert, C., & Stocco, A. (in press). When fear shrinks the brain: A computational model of the effects of post-traumatic stress on hippocampal volume. Topics in Cognitive Science.
Stocco, A., Sibert, C., Steine-Hanson, Z., Koh, N., Laird, J. E., Lebiere, C. J., & Rosenbloom, P. (2021). Analysis of the human connectome data supports the notion of a “Common Model of Cognition” for human and human-like intelligence across domains. NeuroImage, 235, 118035. DOI: https://doi.org/10.1016/j.neuroimage.2021.118035
White, C. N., Congdon, E., Mumford, J. A., Karlsgodt, K. H., Sabb, F. W., Freimer, N. B., London, E. D., Cannon, T. D., Bilder, R. M., & Poldrack, R. A. (2014). Decomposing decision components in the stop-signal task: a model-based approach to individual differences in inhibitory control. Journal of Cognitive Neuroscience, 26(8), 1601–1614. DOI: https://doi.org/10.1162/jocn_a_00567
Zhou, P., Sense, F., Van Rijn, H., & Stocco, A. (2021). Reflections of idiographic long-term memory characteristics in resting-state neuroimaging data. Cognition, 212, 104660. DOI: https://doi.org/10.1016/j.cognition.2021.104660