Frontiers in Psychology, April 24, 2026 – A recent opinion piece published in Frontiers in Psychology argues for a fundamental shift in how cognitive science approaches its subject matter, advocating for the embrace of continuity as a core theoretical principle. The authors, Irene Di Pietro and Gianluca Viviani, contend that by defaulting to discrete, binary, or multilevel categorizations of cognitive phenomena, the field has historically misrepresented the fluid, dynamic, and inherently continuous nature of the mind. This "discretization bias," they argue, not only hinders theoretical development but also compromises the validity and predictive power of research.

The Historical Reliance on Discrete Models

For decades, cognitive psychology has relied on simplifying complex mental processes into discrete categories to facilitate empirical investigation and detect significant effects. This pragmatic approach, while useful for isolating variables and identifying robust findings, has often led to these simplified models being mistaken for accurate representations of reality. The article posits that this practice runs counter to the principle of continuity, famously articulated by Leibniz as "Natura non facit saltus" (nature does not make leaps). This philosophical tenet suggests that phenomena, including those of the mind, generally vary gradually rather than abruptly.

This principle, the authors assert, extends across the entire architecture of cognitive science. In artificial intelligence, it is seen in the gradients of machine learning models. In linguistics, meaning itself is understood as a continuum rather than a set of discrete units. Social coordination and the intricate coupling between agents and their environments are likewise characterized by continuous dynamics.

Neurobiological and Computational Foundations of Continuity

The argument for continuity is not purely philosophical; it is deeply rooted in our understanding of the brain's biological and computational underpinnings. The brain, far from operating on simple binary switches, encodes information through graded population codes. Neural tuning functions map stimuli onto smooth probability distributions, enabling the complex computations necessary for cognition. This suggests that neural representations are best understood as vectors within continuous, high-dimensional spaces rather than discrete states.

Recent research, including findings from 2025, indicates that even the large-scale organization of the brain adheres to continuous gradients rather than sharply defined modular boundaries. This biological reality, the authors argue, necessitates an epistemological shift: dimensionality should be the default assumption for psychological constructs. Unless a distinct categorical entity, or "taxon," can be definitively demonstrated, the scientific default should be that cognitive phenomena exist on a continuum.

The Pitfalls of Discretization

The persistence of discrete frameworks for inherently continuous phenomena creates a dual trap. Theoretically, it violates "structural fidelity," misrepresenting continuous mechanisms as discrete steps and thereby undermining the validity of our theories. Methodologically, discretization reduces statistical power and masks the true functional form of effects, leading to incomplete or misleading conclusions. For example, reducing a continuous variable like reaction time to a few discrete bins (e.g., fast, medium, slow) loses valuable information about the underlying distribution and the precise dynamics of the decision-making process.
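To make the cost of binning concrete, the minimal simulation below (not drawn from the article; the data, effect size, and cut points are illustrative assumptions) compares a regression on a graded predictor of reaction time with an ANOVA run on the same data after collapsing the predictor into three bins.

```python
# Minimal simulation (not from the article) of how binning a continuous
# predictor discards information. Assumes a hypothetical linear relation
# between a graded stimulus property and reaction time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n = 300
similarity = rng.uniform(0, 1, n)                   # graded predictor (e.g., feature overlap)
rt = 600 + 80 * similarity + rng.normal(0, 60, n)   # reaction time in ms

# Continuous analysis: linear regression on the graded predictor
slope, intercept, r, p_cont, _ = stats.linregress(similarity, rt)
print(f"continuous model: r^2 = {r**2:.3f}, p = {p_cont:.2g}")

# Discretized analysis: collapse the same data into low / medium / high bins
bins = np.digitize(similarity, [1/3, 2/3])          # 0, 1, 2
groups = [rt[bins == b] for b in range(3)]
f, p_binned = stats.f_oneway(*groups)
eta_sq = (f * 2) / (f * 2 + n - 3)                  # effect size from F with (2, n-3) df
print(f"binned ANOVA:    eta^2 = {eta_sq:.3f}, p = {p_binned:.2g}")
# The binned analysis typically recovers less variance and lower power,
# and it cannot reveal the functional form (linear here by construction).
```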
Bridging Theory and Measurement: Semantic Representation

The article highlights semantic representation as a prime example of the limitations of discrete models. Historically, studies of linguistic interference, such as the classic Stroop task, have largely relied on binary contrasts like congruent versus incongruent stimuli. The authors point out, however, that while congruency is a single configuration, incongruency spans a continuous spectrum of mismatches. Naming the ink color of a word, for instance, is not simply a matter of being "correct" or "incorrect," but involves varying degrees of interference depending on the semantic and perceptual relationship between the word and its color. The interference experienced when naming the ink color of a color-associated word such as "sky" is demonstrably greater than for a neutral word such as "house," indicating that the relationship between task-relevant and task-irrelevant dimensions is not binary. This observation, made as early as 1964 by Klein, suggests that stimulus relations exist along a continuum of representational overlap, a complexity that binary contrasts fail to capture.

This reductionism is particularly stark in semantic processing experiments. Traditional Picture-Word Interference tasks often treat conceptual relatedness as a binary state: either related (e.g., dog-cat) or unrelated (e.g., dog-table). Even designs that introduce ordinal similarity levels (high, medium, low) impose arbitrary cutoffs, equating subtle and substantial differences. This discretization fails to account for the human ability to form flexible, goal-dependent groupings based on graded, "ad hoc" similarity, such as identifying "things I can play with."

Empirical evidence increasingly supports a continuous view. Studies have shown that interference effects scale linearly with fine-grained feature overlap, and mouse-tracking data reveal that motor trajectories are continuously shaped by semantic similarity. This suggests that conceptual competition operates within a continuous landscape. Modern semantic memory models, based on networks, features, and distributional approaches, compute similarity via vector proximity, aligning with the idea of continuous embeddings in neural representations.
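As an illustration of what "similarity via vector proximity" means in practice, the toy sketch below computes cosine similarity between made-up word vectors; the five-dimensional embeddings and word choices are hypothetical, standing in for representations a distributional model would learn from large corpora.

```python
# Toy illustration of graded semantic relatedness as vector proximity,
# in the spirit of distributional models. The 5-dimensional "embeddings"
# below are made up for the example; real models learn them from corpora.
import numpy as np

embeddings = {
    "dog":   np.array([0.90, 0.80, 0.10, 0.30, 0.20]),
    "cat":   np.array([0.85, 0.75, 0.15, 0.35, 0.25]),
    "horse": np.array([0.70, 0.60, 0.30, 0.40, 0.20]),
    "table": np.array([0.10, 0.20, 0.90, 0.80, 0.10]),
}

def cosine(u, v):
    """Cosine similarity: graded representational overlap between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for word in ("cat", "horse", "table"):
    print(f"similarity(dog, {word}) = {cosine(embeddings['dog'], embeddings[word]):.3f}")
# The output is a continuum (cat > horse > table), not a binary related/unrelated
# split; such values can serve directly as a trial-level continuous predictor
# of interference instead of a categorical distractor condition.
```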
Adaptive Control and the Dynamics of Learning

The argument for continuity extends beyond semantic representation to the domain of adaptive control and statistical learning. Cognitive processes are viewed as graded, unfolding through continuous statistical learning. Predictive processing frameworks describe cognition as the incremental updating of probabilistic expectations: the brain continuously integrates weighted evidence to minimize uncertainty and support inference, dynamically adjusting learning rates according to surprise and volatility. This suggests that cognition operates through continuous expectation learning rather than abrupt shifts between discrete contextual states. Cognitive psychology, however, often creates an ontological mismatch by freezing these dynamics into static, block-level measurements.

Adaptive control, a key area in the study of goal-directed behavior, exemplifies this. Control is continuously adjusted to contextual demands inferred through statistical learning. Experience-based regularities, such as the frequency of conflict, are translated into expectations that drive graded control adjustments. These adjustments emerge gradually, trial by trial, rather than in response to discrete contextual states.

To investigate this, researchers have historically manipulated conflict probability using blocked designs, varying the List-Wide Proportion of Incongruent trials (LWPI) in tasks like the Stroop. While effective in inducing a global conflict context, this approach rests on two fallacies: it unrealistically assumes that participants have immediate access to the block's statistical structure and that conflict probability is homogeneous within a block. This overlooks the learning process itself, implying that participants uniformly adjust to a context they have not yet fully experienced. Instead, within LWPI blocks, conflict expectations update trial by trial as evidence from recent experience accumulates. Control adaptations track the predictive value of recent events, a dynamically evolving expectation, rather than the experimenter-assigned block context. Consequently, the actual driver of control is a latent, trial-by-trial estimate of conflict probability that should be quantified at the level of the trial.
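One simple way to obtain such a trial-level estimate is a delta-rule running average of recent conflict, sketched below; the fixed learning rate, prior, and simulated trial sequence are illustrative assumptions rather than the authors' specific model.

```python
# Sketch of a trial-level estimate of conflict probability via a delta rule.
# The learning rate, prior, and simulated trial sequence are illustrative
# assumptions, not values taken from the article.
import numpy as np

rng = np.random.default_rng(seed=7)
alpha = 0.1                             # assumed fixed learning rate
p_block = 0.75                          # experimenter-assigned incongruent proportion (LWPI block)
conflict = rng.random(120) < p_block    # 1 = incongruent trial, 0 = congruent

expectation = 0.5                       # prior belief before the block starts
estimates = []
for c in conflict:
    estimates.append(expectation)             # expectation *entering* the trial
    expectation += alpha * (c - expectation)  # prediction-error update after the trial

# 'estimates' is a latent, trial-by-trial predictor of conflict probability:
# it starts at the prior and converges toward the block's true rate, so it
# can replace the static block label in a trial-level regression.
print([round(e, 2) for e in estimates[:10]])
```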
Unifying Principle Across Cognitive Domains

The authors argue that continuity is not limited to semantic representation and statistical learning but serves as a unifying principle across cognitive domains and even timescales.

Intra-trial Scale: Binary responses in perceptual decision-making are the final readout of continuous internal dynamics. Evidence-accumulation models and neural recordings demonstrate that decisions evolve through the gradual build-up of sensory evidence, cascading seamlessly into action.

Trial-by-trial Scale: Reward-guided behavior extends statistical learning to value-based domains. Humans continuously update expectations about reward probability and volatility, and dopaminergic neurons signal graded prediction errors proportional to expectation violations. This continuous updating generalizes beyond control to reinforcement learning.

Lifespan Scale: Continuity in learning extends across the lifespan, reflecting the cumulative integration of statistical regularities in vision. Long-term priors tune perceptual mechanisms and high-level visual processes. Object-scene co-occurrences are naturally represented as graded likelihoods, suggesting that "congruency effects" may be snapshots of an underlying graded predictive system.

Social and Cultural Dynamics: The principle of continuity scales up to collective dynamics. Frameworks in social predictive processing and variational neuroethology conceptualize social and cultural interaction as the continuous synchronization of probabilistic expectations between agents. Treating cultural patterns as graded distributions of shared priors, rather than discrete categories, can improve our understanding of how meaning emerges and stabilizes across different scales of social organization.

Implications for Research and Theory

The central claim is that scientific methods must reflect the continuity of cognitive phenomena. When theoretically continuous constructs are modeled accordingly, explanatory power increases and previously obscured latent structures become visible.

The implications for measurement are profound. Discrete experimental manipulations, like blocked designs or binary contrasts, can be useful probes. Problems arise, however, when these design choices are reified as inherent properties of the cognitive system. In semantic paradigms, "related" and "unrelated" distractors sample a graded similarity space, but cognition operates over that entire space. Similarly, in adaptive control, block-wise manipulations of conflict frequency provide statistical regularities, but expectations and control adaptations evolve through trial-by-trial learning. Measures that inherit the discreteness of the design as a theoretical commitment risk mischaracterizing the phenomenon. Instead, measures should explicitly capture the underlying continuous structure, using trial-level estimates or continuous predictors.

Statistical and Inferential Advantages of Continuous Modeling

Beyond aligning with latent structure, continuous modeling offers substantial inferential advantages. Discretization is known to reduce statistical power and measurement reliability and to obscure functional forms. Sampling only two points along a continuum leaves many competing theories indistinguishable. Continuous approaches, in contrast, allow researchers to map the full gradient using tools such as mixed-effects or Bayesian hierarchical models. This increases falsifiability by tightening the link between theoretical structure and observable data.

Continuity as the Default Null Hypothesis

Consequently, continuity should be treated as the natural null hypothesis for psychological constructs. Conceptualizing constructs as discrete or continuous carries significant epistemological weight. When scientific methods fail to model this continuity, they risk mistaking measurement artifacts for cognitive structure. Any decision to discretize must be justified by both theoretical goals and statistical implications. Discrete distinctions should be tested hypotheses, not assumptions.

The authors conclude that identifying robust effects has been a major achievement, but relying on existence proofs is insufficient. The field must now model how these effects emerge and evolve. Aligning measurement with the continuous nature of cognitive processes is a prerequisite for enhancing validity, statistical power, reliability, and falsifiability. By treating continuity as the default, cognitive science can move from static descriptions to predictive, neurobiologically grounded theories. This embrace of continuity offers a framework capable of capturing the seamless transition between neural activity, individual behavior, and the distributed dynamics of social and cultural systems, reflecting the reality that the mind, like the world, is rarely black and white.