In both cases – in mammals and octopuses – the brain serves as an adaptive organ that processes information about the external world and makes decisions based on the organism's current needs. However, while mammals evolved a centralized brain to coordinate actions and social interactions, octopuses rely on a largely decentralized nervous system, with large ganglia in each arm, that grants their body parts a high degree of independence. This difference reflects distinct evolutionary survival strategies: mammals rely on collective behavior and complex social interaction, while octopuses depend on individual decision-making and flexibility in manipulating their environment.
The Bayesian Approach to the Mind: The Free Energy Principle and Predictive Coding Theory
Predictive coding, together with its Bayesian foundations, plays a central role in the contemporary understanding of how the brain perceives and processes information. Unlike traditional views of perception, in which the brain simply reacts to sensory data, the theory of predictive coding argues that the brain actively constructs models of the world and uses them to predict future events. These predictions are then compared with the sensory information actually received through the senses. The prediction error – the difference between what the brain expects and what it actually perceives – serves as a signal for updating the mental model. This process allows the brain to minimize energy costs, accelerating perception and increasing adaptability, which underlies the efficient functioning of cognitive processes.
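To make the mechanism concrete, here is a minimal sketch (mine, not the author's) of error-driven updating in Python; the scalar "world", the noise level, and the learning rate are illustrative assumptions, not a claim about any specific neural implementation:

```python
import random

# A minimal sketch of error-driven updating: the "brain" holds a single
# scalar estimate of a hidden quantity and corrects it in proportion to
# the prediction error. All parameters are illustrative assumptions.
true_value = 10.0      # hidden state of the world
estimate = 0.0         # the brain's current prediction
learning_rate = 0.1    # how strongly an error updates the model

for step in range(100):
    sensation = true_value + random.gauss(0, 1.0)  # noisy sensory input
    prediction_error = sensation - estimate        # expectation vs. reality
    estimate += learning_rate * prediction_error   # update to reduce future error

print(f"final estimate: {estimate:.2f}")  # settles near the true value
```

As long as predictions roughly match sensation, the error stays small and the model barely changes; a systematic mismatch produces a sustained error signal that reshapes the model.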
In recent decades, the theory of predictive coding has increasingly been seen as part of the broader Free Energy Principle, which links it with Bayesian inference, Active Inference, and other approaches focused on minimizing uncertainty and adapting to environmental changes.¹ However, despite the growing interest in this integrative approach, predictive coding itself remains a fundamental concept for understanding how the brain constructs models of the world and updates them based on new data. This work will focus primarily on predictive coding, its neurobiological mechanisms, and its role in cognitive processes.
The historical roots of the theory of predictive coding trace back to the works of Pierre-Simon Laplace, who helped lay the foundation for the concept of determinism. Laplace, among the first to consider probability and determinism in the context of predicting the future, proposed that complete knowledge of the current state of the universe would allow the future to be predicted with absolute certainty. His thought experiment of "Laplace's Demon" – an intellect that could predict the future with perfect accuracy – rested on the idea that if all the parameters of the microstates were known, including the position and velocity of every particle, then all events, including human thoughts and actions, could be predicted.
This idea of an all-knowing observer and the ability to predict future events based on complete knowledge of present conditions provided an early conceptual foundation for understanding how the brain processes information and makes predictions about the future. Predictive coding and the free energy principle are modern extensions of this concept, where the brain continually updates its internal models of the world to minimize prediction errors and uncertainty.
However, the concepts of prediction and world modeling developed much later. In the 19th century, Laplace's strict determinism began to be questioned as probabilistic methods matured in the work of Carl Friedrich Gauss and others, and ideas of probabilistic calculation and uncertainty gained currency with the development of statistics and thermodynamics.
The shift toward probabilistic thinking marked a key turning point in the evolution of predictive models. It became increasingly clear that the world is not fully deterministic and that knowledge of the present state is often insufficient to predict the future with absolute certainty. This uncertainty was formally recognized in statistical mechanics, which introduced the concept of entropy – a measure of disorder or uncertainty in a system. As a result, the idea that the brain might work with probabilities, updating predictions based on new information, became more plausible and relevant in the context of cognitive neuroscience.
In the 20th century, the work of physicists such as Werner Heisenberg, Richard Feynman, and Yakov Frenkel represented a significant step toward understanding how predictions can operate under conditions of uncertainty and how hypotheses can be formed in probabilistic, imperfect settings. Their mathematical approaches helped lay the groundwork on which the theory of predictive coding in neurobiology would later draw.
Equally important contributions to the development of the ideas behind prediction and coding theory came from mid-20th-century neuroscientists such as Benjamin Libet and Nobel laureate Roger Sperry. For example, Libet conducted experiments demonstrating that the brain initiates the processes leading to a voluntary action several hundred milliseconds before a person becomes consciously aware of the decision, challenging the idea of full conscious control over behavior.
However, theories similar to predictive coding began to develop actively only in the late 20th and early 21st centuries. A key role was played by research into neuroplasticity and the brain's adaptive mechanisms. Neurobiological studies, including investigations of neurotransmitters such as dopamine and of neural network dynamics, yielded significant insights into how the brain uses prediction and internal models to perceive the surrounding world. Founders of predictive coding theory, such as Rajesh Rao and Dana Ballard, together with Karl Friston, proposed that the brain constantly forms hypotheses about the future based on past experience and checks them against incoming sensory information.
Bayes’ theorem, proposed by the English mathematician Thomas Bayes in the 18th century, became an important mathematical tool for analyzing and updating probabilistic hypotheses in light of new data.
The essence of the theorem is that it describes how belief in a hypothesis – its probability – should be recalculated in response to new information. In the context of the brain, the theorem can be used to explain how neural networks update their predictions about the future, weighing both old and new experience.
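In standard notation (the formula is reproduced here for reference, since this page describes it only in words), the theorem states:

$$
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
$$

where $P(H)$ is the prior probability of the hypothesis, $P(D \mid H)$ is the likelihood of the data given the hypothesis, $P(D)$ is the overall probability of the data, and $P(H \mid D)$ is the updated (posterior) probability of the hypothesis.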
In the context of predictive coding theory, this theorem and formula illustrate how the brain updates its hypotheses (or predictions) about the world based on new sensory data. When the brain encounters new events (data), it revises its prior probability (predictions) to incorporate these data, which helps improve the accuracy of future predictions.
Thus, this process reflects a key feature of predictive coding: the brain does not simply react to data, but actively revises its expectations based on new inputs, always striving to minimize prediction errors.
The application of Bayes' theorem to neurobiology and cognitive science became possible in the 1980s, when scientists began to understand how the brain could use probabilistic methods to cope with uncertainty. In this paradigm, the brain is viewed as a "Bayesian inference machine" – an interpreter that formulates hypotheses about the world and updates them in response to sensory information according to the principles of probability. The Bayesian model suggests that the brain maintains probabilistic models of future events and adjusts them on the basis of prediction errors, which connects it directly to the theory of predictive coding.
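As a hedged illustration of this paradigm, the following Python sketch applies Bayes' theorem repeatedly, with each posterior serving as the prior for the next observation; the hypotheses ("rain"/"no_rain") and all probabilities are invented purely for the example:

```python
# Sequential Bayesian updating: the posterior after one observation
# becomes the prior for the next. Hypotheses and numbers are invented
# for illustration only.
prior = {"rain": 0.3, "no_rain": 0.7}              # initial beliefs (assumed)
likelihood = {                                      # P(observation | hypothesis)
    "wet_street": {"rain": 0.9, "no_rain": 0.2},
    "dry_street": {"rain": 0.1, "no_rain": 0.8},
}

def update(belief, observation):
    """Apply Bayes' theorem and return the normalized posterior."""
    unnormalized = {h: likelihood[observation][h] * p for h, p in belief.items()}
    total = sum(unnormalized.values())              # P(observation)
    return {h: v / total for h, v in unnormalized.items()}

belief = prior
for obs in ["wet_street", "wet_street", "dry_street"]:
    belief = update(belief, obs)
    print(obs, {h: round(p, 3) for h, p in belief.items()})
```

Each piece of evidence shifts the belief only as far as its likelihood warrants, which is exactly the gradual, error-sensitive revision the Bayesian brain hypothesis attributes to neural prediction.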
This updating of probabilistic hypotheses is crucial because it allows the brain not only to adapt to changes in the environment but also to account for uncertainty in the world, even when information is incomplete. In this sense, Bayes’ theorem and its applications have become fundamental to understanding how the brain, when faced with uncertainty, can improve its predictions and forecast the future based on prior knowledge.
Thus, the connection between predictive coding theory and Bayes' theorem became a key step in the development of neurobiological models explaining how the brain processes information and uses probabilistic computation to predict the future. Bayesian theory, as a foundation for handling uncertainty and adaptation, provided an important mathematical and cognitive tool for understanding how the brain functions amid constant uncertainty in an ever-changing world.
Predictive Coding as an Adaptive Mechanism
The principle behind the theory of predictive coding is that the brain does not simply react to external stimuli, but actively predicts them using existing models of the world. The brain constructs hypotheses about what will happen in the future and compares them with current sensory information. If the predictions match reality, the prediction error is minimized, allowing the brain to use its resources efficiently. If an error occurs – when there is a mismatch between the prediction and reality – the brain updates its models of the world, which helps improve perception and adaptation.
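A sketch of this adaptive loop, under the simplifying assumption of a one-dimensional world that changes abruptly halfway through (the shift point, noise level, and learning rate are illustrative choices, not part of the theory): when prediction and reality diverge, the error grows and drives the model back into alignment.

```python
import random

# Sketch of adaptation: the model tracks a hidden value that changes
# abruptly, showing how a large prediction error drives re-learning.
# All numbers are illustrative assumptions.
estimate, learning_rate = 0.0, 0.2

for step in range(100):
    hidden = 5.0 if step < 50 else -3.0          # the world changes at step 50
    sensation = hidden + random.gauss(0, 0.5)    # noisy sensory evidence
    error = sensation - estimate                 # prediction error
    estimate += learning_rate * error            # error-driven model update
    if step in (49, 55, 99):                     # inspect before/after the shift
        print(step, round(estimate, 2), round(error, 2))
```

Before the shift the error hovers near zero and the model is stable; immediately after, the error spikes and the estimate re-converges on the new state of the world, mirroring the update-on-mismatch behavior described above.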