Ontology Full Database
Work in progress — email if you would like to contribute!
Full Ontology
Columns: Term | Tag | Ref_1–5 | Def_1–5 | Proposed_Def_1–4 | Notes_1–13 | SUMO superclass, instanceOf, superrelation | Subclasses, subrelations | Domains | Other SUMO relations
1
Accuracy
Core
Accuracy is simply the surprise about sensations that are expected under the recognition density
*Accuracy is a subclass of PsychologicalAttribute
An instance of *Accuracy is the number 2 argument of abstractCounterpart.
*Accuracy is internally related to TruthValue
2
Action
Core
Action A : S × R → ℝ... corresponds to action emitted by an agent and depends on its sensory and internal states.
RECOGNITION models update internal parameters that correspond to external states (including hidden causes of the environment), blanket states, and internal states (meta-modeling). In contrast, GENERATIVE models take those same internal parameter estimates and emit expected or plausible observations.
*AbstractAction is a subclass of IntentionalProcess.
3
Action Planning
Core
The requirement for an adaptive organism to predict the future consequences of engaging in this or that action. Also called the problem of mere versus adaptive active inference, or planning as inference.
Planning is a subclass of IntentionalPsychologicalProcess.
4
Action prediction
Core
In principle, the recognition of action goals might be implemented in perceptual and associative brain areas, similar to the way other events such as visual scenes are (believed to be) recognized, predicted and understood semantically.
We propose that the brain automatically predicts others’ future actions while perceiving their current actions. The human perceptual system constantly engages in this kind of reflexive prediction.
Predicting is a subclass of IntentionalPsychologicalProcess.
5
Active Inference
Core
(??)
Under the free energy principle, systems can be interpreted as engaging in active inference in order to minimize their free energy. A system can be described to engage in active inference in the sense of performing belief updating and acting such as to fulfil prior preferences about observations. Describing a self-organizing system in terms of active inference means that the system acts upon its external milieu to maintain itself in its preferred states (cf. homoeostasis). Active inference provides a mechanism to derive the dynamics of sensory and active states such that they minimize a variational free energy functional. This allows us to describe an agent as engaging in actions that will get them closer to their preferred sensory states. Belief updates, in turn, contribute to the optimization of internal states, which tightens the (free energy) bound on surprisal, thus enabling action to avoid (statistically) “surprising” sensations; and corresponds to perception.
An extension of predictive coding (and part of the free energy principle), which says that agents can suppress prediction errors by performing actions to bring about sensory states in line with predictions.
1. Computational process in which prediction error is minimized by acting on the world ("making the world more similar to the model"), as opposed to minimizing prediction error by changing the internal model, i.e. perceptual inference ("making the model more similar to the world"). 2. Also used as a generic term for the computational processes which underpin both action and perception, and, in the context of FEP, for all computational processes that minimize free energy.
The minimisation of free energy through changing internal states (perception) and sensory states by acting on the world (action).
active inference is a self-organising process of action policy selection.
Judging is a subclass of Selecting.
*ActiveInference is internally related to Judging.
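A minimal numerical sketch of the perception/action loop described above. All matrices, preferences, and the deterministic action model are invented for illustration and are not taken from any published agent: perception is a Bayes-rule belief update, and action selection picks the action whose predicted outcomes diverge least from prior preferences.

```python
# Toy active-inference step; all numbers and names are invented.
import numpy as np

A = np.array([[0.9, 0.2],        # likelihood p(o|s): rows = outcomes,
              [0.1, 0.8]])       # columns = hidden states
prior_s = np.array([0.5, 0.5])   # prior belief over hidden states
C = np.array([0.9, 0.1])         # prior preferences over outcomes

# Perception: Bayes-rule belief update after observing outcome o = 0
o = 0
posterior_s = A[o] * prior_s
posterior_s /= posterior_s.sum()

# Action: assume action u drives the world into hidden state u; choose
# the action whose predicted outcomes diverge least (KL) from C.
def risk(u):
    predicted_o = A[:, u]
    return np.sum(predicted_o * np.log(predicted_o / C))

u_best = min((0, 1), key=risk)
print(posterior_s, u_best)
```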
6
Active Learning
Core
To reduce the latter type of uncertainty, agents can expose themselves to observations that complete ‘knowledge gaps’ and thereby learn the probabilistic structure of unknown and unexplored (novel) contingencies – hence active learning allowing for ‘model parameter exploration’.
Learning is a subclass of IntentionalPsychologicalProcess.
7
Active States
Core
Active states are a subset of blanket states that mediate the influence of internal states on external states. Conversely, sensory states are a subset of blanket states that mediate the influence of external states on internal states.
Which states are internal/external? Which are autonomous states?
Action vs Active states?
Partitioning of blanket states into incoming (sense) and outgoing (action) statistical dependencies.
Have outgoing statistical dependencies towards external states
*ActiveState is a subset of PhysiologicProcess.
8
Active Vision
Core
Our vision incorporates both the information that falls on the retina and the consequences of eye movement, in particular the consequences of rapid or saccadic eye movements. We refer to this vision as active vision
*ActiveVision is a subclass of Looking. *ActiveVision is a subclass of Searching. Looking is a subclass of IntentionalProcess. Searching is a subclass of Investigating.
9
Affordance
Core
Gibson defined affordances as action possibilities latent in the environment [3], objectively measurable and independent of the ability to recognize them but always in relation to the actor [4].
*Affordance is equivalent to resource. resource is an instance of CaseRole. resource is a subrelation of patient.
10
Agency
Core
a sense of agency as a probabilistic representation of control that is distinct from the action actually emitted
a sense of agency rests upon prior beliefs about how one will behave
IntentionalProcess is a subclass of Process.
*Agency is internally related to IntentionalProcess.
11
Agent
Core
Agents defined under active inference: A) sample their environment and calibrate their internal generative model to best explain sensory observations (i.e., reduce surprise) and B) perform actions under the objective of reducing their uncertainty about the environment.
Agent is a subclass of Object. (relation) agent is a subrelation of involvedInEvent. agent is an instance of CaseRole.
Group is a subclass of Agent. The number 1 argument of agent is an instance of Process. The number 2 argument of agent is an instance of Agent.
12
Ambiguity
Core
expected uncertainty of observations, expected under a policy. Ambiguity can be thought of as the expected inaccuracy, where marginal likelihood is equal to accuracy minus complexity
ambiguity is the uncertainty about outcomes given the state of the world.
Thus, ambiguity is the expectation of the conditional entropy — or uncertainty about outcomes — under the current policy
Ambiguity is the loss of a precise or definitive mapping between external states of the world and observed sensory states (as quantified by entropy, denoted by H).
In this paper, we address the issue of ambiguity of objective image quality assessment. We propose an approach to obtain an ambiguity interval of an objective metric, within which the quality score difference is not perceptually significant. In particular, we use the visual difference predictor, which can consider viewing conditions that are important for visual quality perception. In order to demonstrate the usefulness of the proposed approach, we conduct experiments with 33 state-of-the-art image quality metrics in the viewpoint of their accuracy and ambiguity for three image quality databases. The results show that the ambiguity intervals can be applied as an additional figure of merit when conventional performance measurement does not determine superiority between the metrics. The effect of the viewing distance on the ambiguity interval is also shown.
Cannot describe when one thing becomes another -- e.g. forest and trees, or what makes someone rich, thresholds.
Some kind of undecidable uncertainty?
Only related to Observations~State mappings? Or other parameters? Does this have similar use as informal deployment of the term or not?
*Ambiguity is a subclass of StateOfMind.
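As a concrete reading of the definitions above (ambiguity as the expected conditional entropy of outcomes given states), here is a small numerical sketch; the likelihood matrix and state beliefs are made-up numbers.

```python
# Ambiguity as expected conditional entropy E_q(s)[ H[p(o|s)] ].
import numpy as np

A = np.array([[0.9, 0.5],    # p(o|s); state 0 maps precisely to outcomes,
              [0.1, 0.5]])   # state 1 is maximally ambiguous
q_s = np.array([0.3, 0.7])   # beliefs over hidden states

cond_entropy = -np.sum(A * np.log(A), axis=0)  # H[p(o|s)] for each state
ambiguity = q_s @ cond_entropy
print(cond_entropy, ambiguity)   # the ambiguous state dominates the score
```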
13
Attention
Core
Inferring the level of uncertainty or precision during hierarchical perception.
Here, we pursue the notion that attention is the optimum weighting of prediction error in the context of action preparation
It is argued here that attention is frequently used to refer to two very different phenomena. One relates to salience, and is fundamentally a property of action plans and epistemic affordance. The other involves the biasing of inference towards sensory channels providing precise information.
Attentional states, s(2), modulate the confidence in sensory observations, and meta-awareness states, s(3), modulate the confidence in higher-order observations.
Machine learning // Conscious or Aware attention
Regimes of attention: https://www.frontiersin.org/articles/10.3389/fpsyg.2016.01090/full and https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00679/full. How is this related to motor/visual and salience "Attention"?
Can non-cultural ActInf agents have shared regimes of attention
Attention and Information?
*Attention is a subclass of IntentionalProcess.
*Attention nominalizes the attends CaseRole
14
Autopoiesis
Core
*Autopoiesis is internally related to Reproduction. Replication is a subclass of OrganismProcess.
15
Bayesian Inference
Core
Minimisation of variational free energy with respect to a probability distribution over (fictive) hidden states causing sensory states (observations). Variational Bayesian inference is approximate because it minimises a (free energy) bound approximation to surprise. When free energy equals surprise, inference is exact.
Updating a model in accordance with Bayes' rule, i.e. computing the posterior distribution: p(c|s) = p(s|c)p(c)/p(s).
*BayesianInference is a subclass of PhysiologicProcess.
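The formula in the second definition can be computed directly. A two-hypothesis sketch with illustrative numbers (exact inference, so here free energy would equal surprise):

```python
# Exact Bayes rule p(c|s) = p(s|c) p(c) / p(s) on a two-cause example.
import numpy as np

p_c = np.array([0.7, 0.3])          # prior over causes c
p_s_given_c = np.array([0.2, 0.9])  # likelihood of the observed s per cause

p_s = np.sum(p_s_given_c * p_c)     # marginal likelihood (model evidence)
p_c_given_s = p_s_given_c * p_c / p_s
print(p_c_given_s)                  # posterior shifts belief toward c = 1
```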
16
Behavior
Core
Here we use the term behavior to mean what a plant or animal does, in the course of an individual's lifetime, in response to some event or change in its environment
*Behavior is a near synonym of BodyMotion. *Behavior is a near synonym of Process.
17
Belief
Core
The term ‘belief’ is used in the sense of ‘belief propagation’ and ‘Bayesian belief updating’, which are just ways of talking about probability distributions or densities. ‘Beliefs’ in Bayesian terms are posteriors and priors, corresponding to the probability distributions (a world of possible states) that are shaped by physically realized states (i.e., the internal states that parameterize distributions over external states). In general, although we use the term ‘beliefs’ to describe the probability densities defined over external states, it is generally recognized that these densities are not themselves the same as propositional beliefs.
In short, self-evidencing appears to require perceptual inference, in the sense that a belief is formed that approximates the probability of the causes of sensations. Note that the term “belief” is used here in the technical sense of Bayesian belief updating—not to indicate a conscious, propositional belief.
Technically, these (Bayesian) beliefs are referred to as approximate posteriors, variational densities or recognition densities. The recognition model is the inverse of a likelihood model: it is a statistical mapping from observable consequences to hidden causes. This explains why forming a posterior belief is often referred to as model inversion, where Q(s) ≈ P(s|o)
*Belief is a subclass of PsychologicalProcess. (instance believes PropositionalAttitude)
18
Belief updating
Core
a parameterization of posterior beliefs about the past and future that makes state estimation
Belief updating mediates inference and learning, where inference means optimising expectations about hidden states (policies and precision), while learning refers to optimising model parameters. This optimisation entails finding the sufficient statistics of posterior beliefs that minimise variational free energy.
*BeliefUpdating is a subclass of IntentionalPsychologicalProcess. IntentionalPsychologicalProcess is a subclass of IntentionalProcess.
19
Blanket States
Core
Blanket states comprise active states and sensory states. Generally, external states do not influence active states and internal states do not influence sensory states.
If the states of a system, whose dynamics can be described with random or stochastic differential equations (e.g., the Langevin equation), possess a Markov blanket, then... the conditional independence in question means that a set of (internal) states are independent of another (external) set, when conditioned upon blanket states. The internal states can then be cast as representing, in a probabilistic fashion, external states. From this, one can elaborate a physics of sentience or Bayesian mechanics that would be recognised in theoretical neuroscience and biology.
"A Markov blanket is defined in terms of conditional dependencies entailed by the joint density over some states"
Dave asks: If we enforce as a definition the observation that "internal states do not influence sensory states," do we exclude some examples of predictive processing internal to the CNS? E.g. a wine-taster systematically scans (differentially activates) various olfactory centers (S. Barwich, Smellosophy: "Olfactory receptors, as the interface of the olfactory system, actively structure stimulus input;" Jordan et al., “Active Sampling State Dynamically Enhances Olfactory Bulb Odor Representation,” Neuron 98).
Cognitive psychologist Ulric Neisser coined the term "perceptual cycling," to describe perception as a cyclical process in the brain, suggesting that search patterns in foraging behavior filter input information. Alternating oscillation phases mirror the periodic sampling of sensory input and govern the responsiveness of particular brain regions, including their connectivity. Several neural populations are actively competing at any given time. So the brain is primed by its own mechanisms of input selectivity.
What are Markov Blankets? What is the usage in FEP?
Interface / Boundary states for systems and their environments
(Sense and Action)
Values of parameters of the (Markov) Blanket
Blanket states mediate Internal and External states
Things or "Boundaries between things"
Partitioning, how does this relate to boundaries in the real world?
Thermodynamic & Homeostatic systems. H systems do have T properties. But they are not the same thing. Two poles of the analogy. Media & Message. Memeology.
https://pubmed.ncbi.nlm.nih.gov/33607182/ "Recent characterisations of self-organising systems depend upon the presence of a 'Markov blanket': a statistical boundary that mediates the interactions between the inside and outside of a system."
What's the alternative here?
*ThermodynamicBlanketStates are PhysicalStates. *HomeostaticBlanketStates are InternalAttributes.
20
Cognition
Core
Cognition is the updating of hierarchical, probabilistic models of the world.
In active inference, cognition is viewed as an action-inference loop. The beliefs (priors) of an individual, and the expectations that derive from them, lead to action. Action impacts the environment, producing some effects. The individual senses those effects (perhaps imperfectly), and compares sensations to those that were expected based on beliefs. If they differ—if observations are surprising—then learning (updating of the individual’s generative model) might be necessary.
CognitiveAgent is a subclass of SentientAgent.
21
Complexity
Core
complexity is the divergence between posterior and prior beliefs.
complexity is defined in relation to empirical priors based on the approximate posterior expectations of the preceding (forward) and subsequent (backward) states
*Complexity is a subclass of ObjectiveNorm.
22
Cue
Core
In turn, the deontic cue is the probability of an outcome at any given state, which depends upon concentration parameters α that the environment learns as a function of how agents act on the world, which changes the value of the concentration parameters.
A *Cue is internally related to an instance of Perception. AgentPatientProcess is a subclass of Process.
23
Culture
Core
inter-group behavioural and cognitive variations that arise through social learning within members of the same species
Culture is an umbrella term which encompasses the social behavior and norms found in human societies, as well as the knowledge, beliefs, arts, laws, customs, capabilities, and habits of the individuals in these groups.
*Culture is a subclass of Proposition.
Agreement is a subclass of Proposition.
24
Data
Core
Data are units of information, often numeric, that are collected through observation.[1] In a more technical sense, data are a set of values of qualitative or quantitative variables about one or more persons or objects,[1] while a datum (singular of data) is a single value of a single variable.[2]
sensory data is given by Bayes' rule
InformationMeasure is a subclass of ConstantQuantity. Stating is a subclass of LinguisticCommunication.
*Data is a near synonym of InformationMeasure. *Data is a near synonym of FactualText. *Data is a near synonym of Stating.
25
Decision-making
Core
This paper considers decision-making and action selection as variational Bayesian inference. It tries to place heuristics in decision theory (in psychology) and expected utility theory (in economics) within the setting of embodied or active inference.
Deciding is a subclass of Selecting. Selecting is a subclass of IntentionalPsychologicalProcess.
26
Ensemble
Core
Here, we consider simulations of a primordial soup reported in [11] to illustrate the emergence of active inference of a simple and prebiotic sort. This soup comprises an ensemble of dynamical subsystems, each with its own structural and functional states, that are coupled through short-range interactions.
ensembles: groups of neurons that tend to fire in synchrony. Importantly, spontaneously active ensembles are similar to those evoked by sensory stimuli, suggesting that ensembles encode features of the sensory environment and that their spontaneous activation reflects an intrinsic capacity of the brain to generate an internal model of the environment
*Ensemble is a subclass of Collection.
27
Epistemic value
Core
Epistemic value is the expected information gain under predicted outcomes. In other words, it reports the reduction in uncertainty about hidden states afforded by observations
Interestingly, (35) tells us that maximizing the epistemic value of the policy maximizes opportunity, while at the same time minimizing risk. In the EFE (20), epistemic value is related with the mutual information between states and outcomes. In the CBFE [constrained Bethe Free Energy], the epistemic value of the policy is more inclusive, because it accounts for the information opportunity as well as the risk of the policy.
“epistemic value” (e.g., information gain or the resolution of uncertainty implicit in exploration or curiosity).
Epistemic value is......
From Karl: Epistemic value is the information gain or reduction in uncertainty about latent states afforded by the outcomes of a particular policy. It is variously known as Bayesian surprise, epistemic affordance, the value of information, intrinsic motivation and so on. Mathematically, it is the KL divergence between beliefs about latent states before and after the outcomes of a policy. Epistemic value is the value of a policy that is a functional of beliefs about the causes of sensations.
(Pragmatic and Epistemic) & (Extrinsic & Intrinsic) ---> Whats the relationship, are these the same?
Intrinsic motivation involves performing a task because it’s personally rewarding to you. Extrinsic motivation involves completing a task or exhibiting a behavior because of outside causes such as avoiding punishment or receiving a reward. The main difference between intrinsic and extrinsic motivation is that intrinsic motivation comes from within, and extrinsic motivation comes from outside. While both types of motivation are important, they have different effects on how you work. https://www.rochester.edu/emerging-leaders/understanding-intrinsic-and-extrinsic-motivation/
https://www.tandfonline.com/doi/abs/10.1080/17588928.2015.1020053?journalCode=pcns20 Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes).
*EpistemicValue is a subclass of PsychologicalProcess. *EpistemicValue is a subclass of SubjectiveAssessmentAttribute. *EpistemicValue is an instance of InternalAttribute.
The abstract counterpart of an *EpistemicValue is an *AbstractEpistemicValue. *EpistemicValue is a relatedInternalConcept to Investigating.
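Taking the quoted KL-divergence reading literally, a small sketch of epistemic value as information gain; the prior, likelihood, and observed outcome are invented.

```python
# Epistemic value (Bayesian surprise) as KL[posterior || prior].
import numpy as np

prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.2])   # p(o|s) for the outcome actually seen
posterior = likelihood * prior / np.sum(likelihood * prior)

info_gain = np.sum(posterior * np.log(posterior / prior))
print(posterior, info_gain)   # nats of uncertainty resolved by the outcome
```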
28
Ergodicity
Core
A process is ergodic if its long term time-average converges to its ensemble average. Ergodic processes that evolve for a long time forget their initial states.
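A quick numerical illustration of the definition: for an ergodic two-state Markov chain (transition matrix invented), the long-run time average of state occupancy converges to the stationary (ensemble) density.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[0.9, 0.5],   # T[i, j] = p(next = i | current = j)
              [0.1, 0.5]])

s, visits = 0, np.zeros(2)
for _ in range(100_000):
    visits[s] += 1
    s = rng.choice(2, p=T[:, s])
time_avg = visits / visits.sum()

w, v = np.linalg.eig(T)                        # stationary density is the
stationary = np.real(v[:, np.argmax(w.real)])  # eigenvector of eigenvalue 1
stationary /= stationary.sum()
print(time_avg, stationary)   # the two averages agree closely
```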
29
Evidence
Core
Bayesian model evidence is effectively simplicity plus accuracy. Negative surprise is the same as log evidence; namely, the logarithm of Bayesian model evidence.
*Evidence is internally related to IntentionalPsychologicalProcess.
30
Expected Free Energy
Core
The expected free energy is a functional of posterior beliefs about states.
*ExpectedFreeEnergy is a subclass of RelationalAttribute.
*ExpectedFreeEnergy is internally related to InformationMeasure.
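One common decomposition in the active inference literature writes expected free energy as risk plus ambiguity. A toy computation with invented numbers, not a full agent:

```python
# EFE(policy) = KL[q(o|policy) || p(o)] + E_q(s|policy)[ H[p(o|s)] ].
import numpy as np

A = np.array([[0.8, 0.3],    # likelihood p(o|s)
              [0.2, 0.7]])
C = np.array([0.75, 0.25])   # preferred (prior) outcome density
q_s = np.array([0.6, 0.4])   # predicted states under the policy

q_o = A @ q_s                # predicted outcomes under the policy
risk = np.sum(q_o * np.log(q_o / C))
ambiguity = q_s @ (-np.sum(A * np.log(A), axis=0))
print(risk, ambiguity, risk + ambiguity)
```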
31
External States
Core
Statistically, the existence of a Markov blanket means external states are conditionally independent of internal states, and vice versa, given blanket states. Generally, external states only influence themselves and sensory states.
Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not.
External states correspond to environmental causes that generate sensory samples (also known as sensory input, sensory outcomes, sensory data, or evidence), which affect the system’s internal state.
Realist and Instrumentalist
External/Internal/Interface
Is it important that the partitioning be this way?
Qualify & Quantify
What is the relevance of this partitioning scheme (e.g. with a blanket separating Internal/External states)
The partitions are set by us on something else observed/modeled.
From Colombo & Wright: "For free-energy theorists, the dynamics of such systems will appear to place an upper bound on their informational entropy, and to maximize the evidence for a model M of external states “entailed” by their characteristic properties. This behavior—they would conclude—can be expressed as approximate Bayesian (active) inference about the causes of sensory states in terms of minimizing variational free energy.Footnote6"
What happens to the generative model when we are asleep? Do we have a generative model at that time?
Is it the same model being used differently, but then is it a different model?
Internal & External states -- they are a partitioning, have to be separate.
*ExternalState is a subset of PhysiologicProcess.
32
Free Energy
Core
An information theory measure that bounds (is greater than) the surprise on sampling some data, given a generative model.
In the context of Friston's FEP, free energy is not a thermodynamic quantity, but an information-theoretic quantity that constitutes an upper bound on surprisal. If this bound is tight, the surprisal of sensory signals can therefore be reduced if free energy is minimized by bringing about changes in the world.
*FreeEnergy is a subclass of PhysicalDimension. *FreeEnergy is a subclass of RelationalAttribute.
*FreeEnergy is internally related to InformationMeasure.
33
Free Energy Principle
Core
https://en.wikipedia.org/wiki/Free_energy_principle
"The brain aims at reducing surprise, where this surprise (or surprisal) is quantified as accuracy (expected log likelihood) minus complexity (informational divergence between the posterior probability and prior probability). This complexity is also known as Bayesian surprise (or salience), and represents the extent to which the new data is 'surprising' to the prior model."
"Systems minimise a free energy function of their internal states, which entail beliefs about hidden states in their environment. The implicit minimisation of free energy is formally related to variational Bayesian methods."
A generalization of predictive coding (PC) according to which organisms minimize an upper bound on the entropy of sensory signals (the free energy). Under specific assumptions, free energy translates to prediction error.
"A self organising system—that entails a generative model— minimises the free energy of its sensory and internal states; where internal states encode a recognition probability distribution over (fictive) hidden states causing sensory states."
*FreeEnergyPrinciple is an instance of Proposition.
34
Friston Blanket
Core
For this reason, we propose to distinguish between ‘Pearl blankets’ to refer to the standard use of Markov blankets and ‘Friston blankets’ to refer to the new construct. While Pearl blankets are unambiguously part of the map, Friston blankets are best understood as part of the territory. Since these are different formal constructs with different metaphysical implications, the scientific credibility of Pearl blankets should not automatically be extended to Friston blankets.
*FristonBlanket is a subclass of ProbabilityRelation. *FristonBlanket is a subclass of Proposition.
35
Generalized Free Energy
Core
Equation 14
*GeneralizedFreeEnergy is a subclass of ProbabilityRelation. *GeneralizedFreeEnergy is a subclass of Proposition.
36
Generative model
Core
Generative model or forward model is a probabilistic mapping from causes to observed consequences (data). It is usually specified in terms of the likelihood of getting some data given their causes (parameters of a model) and priors on the parameters.
A probabilistic model that links (hidden) causes and data, usually specified in terms of likelihoods (of observing some data given their causes) and priors (on these causes). Generative models can be used to generate predictions of data, given their causes.
The joint probability distribution of two or more random variables, often given in terms of a prior and a likelihood: p(s,c) = p(s|c)p(c). (Sometimes, only the likelihood p(s|c) is called a "generative model".) The model is generative in the sense that it models how sensory signals s are generated by hidden causes c. Furthermore, it can be used to generate mock sensory signals, given an estimate of hidden causes.
A generative model is a probabilistic mapping from causes in the environment to observed consequences (e.g., sensory data);
A formalism that describes the mapping between inferred hidden states/causes, and expected outcomes/observations
RECOGNITION models update internal parameters that correspond to external states (including hidden causes of environmental states), blanket states, and internal states (meta-modeling). In contrast, GENERATIVE models take those same internal parameter estimates and emit expected or plausible observations.
Generative model of what?
Generative model is starting concept of Active Inference -- Distinction from FEP (?) -- Dynamics and behavior are starting from GM, action as well. All talks about terms should include this.
Generative Model &/of a Generative Process
Recognition models and Generative models.
Recognition model is from empirical observations to updates of inferred hidden states. Generative model is from inferred hidden states to plausible emitted observed states. This is the "tale of two densities" because models are distributions which are statistical densities.
"Processing" is often used in a uni-directional Recognition Model type way -- "Predictive Processing" entails and requires a Prediction
What is the relationship between conscious experience & Generative models? Why are some GM experienced or not? Meta-modeling?
GM of Sense + Action
Enactivism + Predictive Processing ---> PP does deal with action. However in Maria's perspective they do not pay as much attention to the environment, e.g. they are more focused on the organismal dynamics perhaps.
Generative Modeling is the key for perception?
Deep GM / Deep Inference.
Where is the body in GM?
Broadest most-applicable definition ----> Then we specify Computational, Embodied, Enacted.
*GenerativeModel is a subclass of Process.
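To make the "tale of two densities" above concrete, a sketch of a discrete generative model p(s, c) = p(s|c) p(c) run in both directions, generation and recognition; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p_c = np.array([0.4, 0.6])            # prior over hidden causes
p_s_given_c = np.array([[0.9, 0.1],   # likelihood p(s|c)
                        [0.1, 0.9]])

# Generative direction: sample a cause, then a mock sensory sample
c = rng.choice(2, p=p_c)
s = rng.choice(2, p=p_s_given_c[:, c])

# Recognition direction (model inversion): posterior over causes given s
posterior_c = p_s_given_c[s] * p_c
posterior_c /= posterior_c.sum()
print(c, s, posterior_c)
```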
37
Generative Process
Core
"a generative process 𝑝𝐺𝑃(𝑦, 𝑥𝐺𝑃) [captures] the actual causal structure of the environment where hidden states 𝑥 generate observations"
*GenerativeProcess is a subclass of ProbabilityRelation. *GenerativeProcess is a subclass of Proposition. *GenerativeProcess is a subclass of Process.
38
Hidden state
Core
Hidden states Ψ : Ψ × A × Ω → ℝ... constitute the dynamics of states of the world that cause sensory states and depend on action.
*AbstractHiddenState is a subclass of ProbabilityRelation.
39
Hierarchical Model
Core
Predictive Processing posits a hierarchy of estimators, which operate at different spatio-temporal timescales (so they track features at different scales). The hierarchy does not necessarily have a top level (but it might have a center — think of the levels as rings on a disc or a sphere).
*HierarchicalModel is a subclass of ProbabilityRelation. *HierarchicalModel is a subclass of Proposition. *HierarchicalModel is a subclass of Process.
40
Inference
Core
inference means optimising expectations about hidden states (policies and precision),
Inference in the generative model lies in finding posterior p(s|o) — the probability that the fruit is an apple (or orange) if it lies at a specific location.
*AbstractInference is a subclass of Learning. (This looks wrong. Abstract classes are non-temporal, and Learning changes across time.)
41
Information
Core
Information can be thought of as the resolution of uncertainty; it answers the question of "What an entity is" and thus defines both its essence and the nature of its characteristics. The concept of information has different meanings in different contexts.[1] Thus the concept becomes synonymous to notions of constraint, communication, control, data, form, education, knowledge, meaning, understanding, mental stimuli, pattern, perception, proposition, representation, and entropy.
This notion, known as Bayesian surprise, conceptualises a unit of surprise – a “wow” – in terms of the difference between the prior and posterior beliefs about the world. This allows us to formulate epistemic foraging in terms of the mutual information between an observation, and the unobservable (hidden) states of the world that give rise to it.
*Information is internally related to InformationMeasure.
42
Information Geometry
Core
The central idea that underwrites information geometry [9] is that we can define a space of parameters (a statistical manifold), where each point in that space corresponds to a probability density (e.g. the expectation and variance of a normal density).
One can take this metric treatment further and equip spaces of the sufficient statistics (i.e., parameters) of a density with an information geometry. In brief, information geometry rests on Riemannian metrics that can be used to measure distances on statistical manifolds (Amari, 1998; Ay, 2015)
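A small numerical check of the idea that the Fisher metric measures distances on a statistical manifold: for a univariate Gaussian the metric is g_mu = 1/sigma^2 and g_sigma = 2/sigma^2, and the KL divergence to a nearby density is approximately half the corresponding quadratic form. Parameter values below are arbitrary.

```python
import numpy as np

mu, sigma = 0.0, 2.0
dmu, dsigma = 0.05, 0.05

def kl_gauss(m1, s1, m2, s2):
    # exact KL[N(m1, s1^2) || N(m2, s2^2)]
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

kl = kl_gauss(mu, sigma, mu + dmu, sigma + dsigma)
approx = 0.5 * (dmu**2 / sigma**2 + 2 * dsigma**2 / sigma**2)
print(kl, approx)   # agree to second order in the parameter shift
```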
43
Internal States
Core
Internal states R : R × S × Ω → ℝ... constitute the dynamics of states of the agent that cause action and depend on sensory states.
A system has a boundary that separates it from its environmental niche. It has internal processes (its parts interact). A system behaves. As it does so, there are changes to its internal state.
A common rhetoric used to unpack this is that the blanket states of a given internal state are the parents (things that cause it), children (things that it causes), and parents of its children. The parents of internal states are the sensory states that mediate the influence of the outside world, and their children are the active states that mediate their influence on the outside world.
The existence of a Markov blanket means that internal states will appear to minimize a free energy functional of the states of their Markov blanket.
Since the internal states of the Markov blanket are those states that constitute the system, we can think of the extended phenotype of the organism as literally embodying or encoding information that parameterises a recognition density.
What is an Internal state?
From Demakas et al 2020: "Imagine that every single state of being has a position in an abstract state space. There are 4 important kinds of states (i.e., dimensions) in this space: sensory states (e.g., the sound of a voice), active states (e.g., listening1), internal states (e.g., thoughts, feelings), and environmental states (e.g., location, context). These states are by definition the partitions afforded by Markov blanket."
Is this the only possible partitioning? Only one possible or only one for ActInf? ActInf with other partitionings?
Do partitions align with natural aspects/features/"systems" in the world?
Relationship with the topology of the action loop?
What are Internal States? Nesting of internal states depending on regime of attention, scale, Homeostatic, Cognitive
Utility of separating formal terms and definitions/notions -- Markov Blanket Action: is ...., separation from applications in domains (cell, psychology, social, computational)
Starting with the purely statistical Markov, Pearl, Friston, Beyond --> applications across domains.
Is enactivism realism?
Computational systems to test space of possible loops/partitions since empirical measurements are not always directly suggestive of particular partition
Tale of Two Densities: "The ‘causal bite’ of the generative model comes from the fact that it plays a role in policy selection by inducing free energy gradients (which then guide changes to beliefs about action). In other words, generative models are normative models of ‘what ought to be the case, given the kind of creature that I am’– they are realised physically through adaptive, belief-guided, normative actions that maintain the creature in its phenotypic states."
Inconsistent or incompatible use of realism/instrumentalism, action-perception loops, interpretations of the priority of blanket vs. co-equal partition, etc.
From Karl: Statistically, the existence of a Markov blanket means external states are conditionally independent of internal states, and vice versa, Given the blanket states. Generally, internal states can only influence active states.
*InternalState is a subset of PhysiologicProcess.
44
Latent cause
Core
The generative model shown in Fig. 2 links exteroceptive, proprioceptive, and interoceptive information by jointly representing their hidden or latent causes (e.g., a cause can embody the prior knowledge that the sight of a burger will produce certain visual and gustatory sensations, that it affords a grasping action, and ultimately that it increases the level of glucose in the blood).
The simplest encoding corresponds to representing the belief with the expected value, or expectation, of a hidden or latent cause. These causes are referred to as hidden because they have to be inferred from sensory consequences.
45
Living system
Core
free-energy theorists assume that any living system possesses a random dynamical attractor—a set of states towards which a dynamical system tends to evolve for a wide variety of initial conditions of the system’s state.
In this paper we have argued that the autonomous organization of living systems consists of the hierarchical assembly of Markov blankets of Markov blankets through adaptive active inference.
*Attractor is internally related to SubjectiveAssessmentAttribute.
46
Markov Blanket
Core
A Markov blanket defines the boundary between a system of interest and its environment in a statistical sense. More specifically, it provides a statistical partitioning of the system into internal and external states that are separated by blanket states. In this context, a Markov blanket is a set of variables through which states internal and external to a system interact.
Markov blankets were first proposed by Judea Pearl (1988) in the context of graphical models. These models express the statistical dependencies (edges) between different factors or states (nodes). Given a state of the model, its Markov blanket is the set of sufficient states of the network needed to predict that state.
First, a 'thing' is defined stipulatively in terms of a Markov blanket, such that something’s internal states are independent of its external states, when conditioned on its blanket states. Blanket states can be further partitioned into active and sensory states that are not influenced by internal and external states, respectively. This partition is not part of the definition of a Markov blanket but describes a way of characterising the blanket states.
A Markov blanket is a statement of conditional independence between internal and external states given blanket states.
(Statistical) partitioning of system of interest, from environment, by an interface or boundary. A minimal Markov blanket is known as a Markov boundary.
Who is Markov / What is Markovian?
What does "blanket" or "blanketing" mean?
https://en.wikipedia.org/wiki/Markov_property A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past.
Representing boundaries / boundary conditions -- where we have liminality. Conditional independence separates 'things' out from their environment.
Node partitioning scheme (where nodes are statistical variables) -- separating into Internal, External, and Blanket states. Blanket states render the Internal and External states conditionally independent.
Starting with System of Interest -- To define separation of System and Environment, we define the boundary of the system.
Difference between Physical boundary & Statistical insulation?
Separation of system from environment requires persistent boundary / Blanket. Will depend on the scale of analysis.
Nested Markov Blankets will have different realizations.
Examples of [Internal, External, Blanket states]
Cell [Cytoplasm Internal, Environment External, Membrane = Markov Blanket]
*ThermodynamicSystem is a subclass of *System
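The Pearl-style definition quoted above (parents, children, and the other parents of its children) is easy to compute for a directed acyclic graphical model; the toy graph below is invented purely for illustration.

```python
# Pearl-style Markov blanket of a node in a DAG.
parents = {               # node -> set of parents
    "a": set(), "b": set(), "c": set(),
    "x": {"a", "b"},      # a, b -> x
    "y": {"x"},           # x -> y
    "z": {"x", "c"},      # x, c -> z
}

def markov_blanket(node):
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = {p for child in children for p in parents[child]} - {node}
    return parents[node] | children | co_parents

print(markov_blanket("x"))   # {'a', 'b', 'c', 'y', 'z'}
```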
47
Markov Decision Process
Core
A Markov decision process (MDP) is a form of probabilistic generative model, defined in a discrete state space. The latent variables of an MDP are hidden states s_τ and policies π.
Deciding is a subclass of Selecting
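A minimal sketch of the MDP generative model described above, using the A (likelihood) and B (transition) notation common in active inference papers; the matrices and the example policy are invented.

```python
import numpy as np

A = np.array([[0.9, 0.1],            # likelihood p(o|s)
              [0.1, 0.9]])
B = np.array([[[1.0, 1.0],           # action 0: always go to state 0
               [0.0, 0.0]],
              [[0.0, 0.0],           # action 1: always go to state 1
               [1.0, 1.0]]])         # B[u][s_next, s_current]

q_s = np.array([0.5, 0.5])           # current belief over hidden states
policy = [1, 1, 0]                   # a policy = sequence of actions

for u in policy:
    q_s = B[u] @ q_s                 # propagate beliefs through transitions
q_o = A @ q_s                        # predicted outcome distribution
print(q_s, q_o)
```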
48
Markovian Monism
Core
Markovian monism holds that Markov blankets are the type of thing/property that constitutes both mind and matter, and in this sense, their metaphysical monism is dubbed Markovian.
49
Model Inversion
Core
Model inversion uses available experimental observations of the output to determine the set of input parameters that maximize the predictive potential of a model.
When you calculate a prior from a posterior and a likelihood, is that an example of model inversion?
If you must re-calculate specific priors - and these are priors that CANNOT be altered (the evolutionarily-cast-in-base-pairs homeostatic set points) - does something special happen? - maybe fugue, dissociation, fainting, panic, depressive paralysis, shell-shock, repression, "cognitive dissonance," delusion, hysteria?
50
Multi-scale system
Core
Multiscale modeling refers to a style of modeling in which multiple models at different scales are used simultaneously to describe a system. The different models usually focus on different scales of resolution.
51
Narrative (model)
Core
Narratives are reports of real or imagined events, which can be presented in language (verbally or textually) or through sequences of images or other symbols.
52
Niche
Core
Niche construction refers to any (implicit or explicit) modification by organisms of the (biotic or abiotic) states of the niche that they and others inhabit
niche construction “[…] refers to the activities, choices, and metabolic processes of organisms, through which they define, choose, modify, and partly create their own niches”
In cognitive science, cognitive niche construction can be viewed as a form of instrumental intelligence whereby organisms “create and maintain cause–effect models of the world as guides for prejudging which courses of action will lead to which results”
Attribute subsumes *Niche
53
Non-Equilibrium Steady State
Core
https://www.sciencedirect.com/science/article/pii/S1571064517301409?
Under appropriate conditions, any system possessing a random dynamical attractor can be shown to be formally equivalent to any system at a steady state far from equilibrium, where the system’s “characteristic” variables are within homeostatic bounds (Friston 2012; Ramstead et al. 2018, p. 2).
living systems carve out and inhabit minima in free energy landscapes, precluding the dissipation of their states over phase space. This (nonequilibrium steady-state) behaviour differentiates living states from other states, like decay and death
Technically, a steady-state requires a solution to the Fokker Planck equation (i.e., density dynamics). A nonequilibrium steady-state solution entails solenoidal (i.e., conservative or divergence free) dynamics that break detailed balance (and underwrite stochastic chaos). In other words, the dynamics of systems at nonequilibrium steady-state are not time reversible (unlike equilibrium steady states, in which the flow is entirely dissipative).
*NonEquilibriumSteadyState is a subset of Attribute.
54
Novelty
Core
when resolving uncertainty about the contingencies, the corresponding epistemic affordance becomes novelty, i.e. the opportunity to resolve uncertainty about ‘what would happen if I did that?’
hidden contingencies encoded by the parameters of the agent’s generative model, i.e. novelty
*Novelty is a subclass of SubjectiveAssessmentAttribute. SubjectiveAssessmentAttribute is a subclass of NormativeAttribute
55
Observation
Core
We assume the agent has sensory states that register observations or outcomes õ, where outcomes are a function of the state of the agent’s environment, or hidden states s̃. These states are called “hidden” because they are “shielded off” from internal states by observation states.
but AI proposes a twist — rewarding observations are assumed to be likely under the agent's innate beliefs.
In the free energy principle (FEP), proposed by Friston, it is supposed that agents seek to minimize the “surprise”–the negative log (marginal) likelihood of observations (i.e., sensory stimuli)–given the agents’ current belief.
*Observation is internally related to CognitiveAgent.
56
Particle
Core
each particle is distinguished from other particles, in virtue of possessing a Markov blanket.
57
Perception
Core
Perception is an inference about the causes of activity in sensory pathways.
Perception is a subclass of PsychologicalProcess.
58
Policy
Core
Policy is defined as a sequence of actions at time τ that enable an agent to transition between hidden states. ... From definition, in active inference, a policy is simply a sequence of choices for actions through time (i.e., a sequential policy)
Policy is a subclass of Proposition.
59
Policy selection
Core
In active inference, policy selection not only requires Bayesian belief updating; it also entails the imperatives for action. In brief, actions are considered more likely if they maximize the evidence expected under the consequences of that action. Mathematically, this means selecting actions that optimize expected free energy. This expected free energy comprises different terms, such that action policy selection depends on: (i) the potential for information gain about future states of the world (i.e., epistemic value or affordance), and (ii) the potential for fulfilling preferred sensory outcomes (i.e., pragmatic value or affordance).
The endpoint of this argument is that action or policy selection becomes a form of Bayesian model selection, where the evidence for a particular policy becomes the free energy expected in the future.
60
Posterior
Core
After specifying priors, Bayesian inference – also called estimation or inversion – furnishes a posterior probability over the parameters. These posteriors are related to the priors and likelihood by Bayes rule.
We can obtain the answer from the probabilistic model p0 by doing Bayesian inference, yielding the Bayes’ posterior
*Posterior is a subclass of Proposition.
61
Pragmatic value
Core
the expected value of a policy defined in terms of outcomes that are preferred a priori, where the equivalent cost corresponds to prior surprise.
our beliefs about states of the world are as close as possible to the true state of affairs, given the sensory evidence at hand. Based on these beliefs, we can then form beliefs about “what to do” by choosing those actions that minimize expected free energy. The important move here is to separate the divergence and evidence parts of free energy and understand what their expected values mean. It turns out that they correspond to epistemic and pragmatic (i.e., motivational) value respectively. This is remarkable because exactly the same separation emerges from the treatment of cognitive consistency, namely, into epistemic and motivational value. Furthermore, these two components appear to underpin nonspecific and specific closure. In other words, the maximization of epistemic value offers a formal description of nonspecific closure, whereas the maximization of motivational (pragmatic) value corresponds to specific closure
Finally, the pragmatic value just is the value of a policy with respect to its potential of fulfilling preferred outcomes (i.e., potential for supplying expected sensory states).
In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits.
Pragmatic value is the benefit to an organism of a given policy or action, measured in terms of probability of a policy leading to a characteristic outcome (for the agent in question)
Pragmatic value describes the extent to which a given action is aligned with rewarding preferences over sensory outcomes.
What are the connections between Pragmatic/Epistemic Value and Affordances?
Generative model is performing action-selection (as constrained/weighted by E affordance matrix). The value of the Action decomposes into P/E Value ---> We also talk about P/E "actions" but this may not be proper use
Where does niche modification come into play --> e.g. preparing a book shelf.
Utility is defined by the specific situation.
How do the stories/beliefs we have in the world influence action selection? Affective inference.
*PragmaticValue is a subclass of StateOfMind.
*PragmaticValue is a relatedInternalConcept to Selecting.
62
Prediction
Core
A prediction is a deterministic function of an estimate, which can be compared to another estimate (the predicted estimate). Predictions are not necessarily about the future (note that a variable can be predictive of another variable if the first carries information about the second, i.e., if there is a correlation, cf. Anderson and Chemero 2013, p. 204). Still, many estimates in PP are also predictive in the temporal sense.
63
Principle
Core
A principle is a proposition or value that is a guide for behavior or evaluation. In law, it is a rule that has to be or usually is to be followed, or can be desirably followed, or is an inevitable consequence of something, such as the laws observed in nature or the way that a system is constructed. The principles of such a system are understood by its users as the essential characteristics of the system, or reflecting the system's designed purpose, and the effective operation or use of which would be impossible if any one of the principles was to be ignored.
*Principle is an instance of Proposition.
*Principle is internally related to Reasoning. Reasoning is a subclass of IntentionalPsychologicalProcess.
64
Process Theory
Core
a "process theory or hypothesis [is] about how that principle is realized."
65
Recognition Models
Core
which harness posterior beliefs that represent the system’s observationally informed ‘best guess’ about the causes of its sensations
*Recognition is internally related to Realization.
66
Regime of Attention
Core
central kind of patterned cultural practice, and higher level expectations encoded in higher levels of the cortical hierarchy, guide agents’ attentional styles
shared style of allocating attentional resources that characterizes a given cultural group
67
Representation
Core
internal representations: organizational aspects (e.g., having some variable inside a system that is separated from that which it represents outside that system), structural aspects (e.g., having representational vehicles that are structurally similar to the state of affairs in the world that they stand in for), content-related aspects (e.g., having internal models that either encode environmental contingencies or sensorimotor contingencies; specification or description of how the world is taken to be in turn analysed in terms of correctness or truth conditions) and functional role aspects (e.g., supporting vicarious use before or in the absence of external events) of internal variables of a model.
We consider a deflationary account of mental representation, according to which the explanatorily relevant contents of neural representations are mathematical, rather than cognitive, and a fictionalist or instrumentalist account, according to which representations are scientifically useful fictions that serve explanatory (and other) aims
68
Risk
Core
Risk, in this setting, is simply the difference between predicted and prior beliefs about final states. It can be thought of as the expected complexity, where marginal likelihood is equal to accuracy minus complexity.
risk is the relative entropy or uncertainty about outcomes, in relation to preferences,
69
Salience
Core
Stimulus salience is the degree to which a stimulus is likely to attract attention based on its low-level properties and independently of the internal mental state of the observer. This is the driving force behind bottom-up or exogenous attentional control. In active inference, it is just the epistemic value or affordance of a particular eye movement or attentional orientation. Note that salience is an attribute of action; i.e., a function of the stimulus that would be sampled actively.
70
Sense States
Core
Sensory states S : Ψ × A × Ω → ℝ... correspond to the agent’s sensations and constitute a probabilistic mapping from action and hidden states.
What do edges represent? E.g. labeling the edges
Have incoming statistical dependencies with external states
Have outgoing statistical dependencies towards internal states
*SenseState is a subset of PhysiologicProcess
71
State
Core
State variables are variables whose values evolve over time in a way that depends on the values they have at any given time and on the externally imposed values of input variables. Output variables’ values depend on the values of the state variables.
[See Active, Blanket, Control, Equilibrium, External, Hidden, Internal, Sense, Steady, World States]
“State” is a near synonym of Attribute. Attribute is a subclass of Abstract.
72
State space
Core
abstract space that allows us to describe the time evolution of a system in terms of all the possible states in which it can find itself.
Set of variables/parameters that describe a system.
A state space is the set of all possible configurations of a system
Where is time in the state space? Synchronic & Diachronic.
State = Variable? Value the variable can hold? Space = area within which the variables can exist?
How do we represent CHANGE in state spaces? Constant updating? "Betweenness". It is "OF" a (dynamic) system, and "AS" the system itself.
https://en.wikipedia.org/wiki/State-space_representation "The state of the system can be represented as a state vector within that space."
Realist & Instrumentalist --- State space as being actually what occurs, vs. how we model it.
Any time you abstract out a system, there is a state space.
The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time.
https://en.wikipedia.org/wiki/Phase_space a phase space is a space in which all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space.
Set of all variables/parameters that contextual or describe an action or outcome.
Stationary processes, Ergodicity, etc.
*StateSpace is a subset of Attribute
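The state-space-representation reading above ("the state of the system can be represented as a state vector within that space") in a few lines; the matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[0.9, 0.1],    # state-transition matrix
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])        # input matrix
C = np.array([[1.0, 0.0]])   # output matrix

x = np.zeros(2)              # initial state vector
u = np.array([1.0])          # constant external input
for t in range(100):
    x = A @ x + B @ u        # state evolves from its current value + input
y = C @ x                    # output depends only on the state variables
print(x, y)                  # x approaches its fixed point [0.5, 0.5]
```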
73
Surprise
Core
The negative log-probability of an outcome. An improbable outcome is therefore surprising. Also called self-information.
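The definition in one line of arithmetic; the probabilities are arbitrary.

```python
import numpy as np

for p in (0.9, 0.5, 0.01):
    print(p, -np.log(p))   # surprise in nats: rarer outcomes score higher
```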
74
System
Core
A thermodynamic system is a body of matter and/or radiation, confined in space by boundaries that separate it from its surroundings. It comprises an ensemble of (intensive and extensive) state variables.
Thermodynamic & Homeostatic systems. H systems do have T properties. But they are not the same thing. Two poles of the analogy. Media & Message. Memeology. "What is Life" - Schrodinger's question -- Ramstead et al. 2018 -- Informational Aperiodic Quasicrystal & Anti-dissipation
cognitive systems can be described as instantiating a form of Bayesian inference. That is, their physical properties and patterns of behaviour come to match (or infer, in a statistical sense) those of their embedding ecological niche (Bruineberg, Kiverstein, & Rietveld, 2016; Kiefer, 2017).
A random dynamical system has equations of motion with an element of randomness. These systems are characterized by a state space, the set of all possible equations of motion, and a probability distribution on the set.
Set of relations
Differentiable and Integratable in terms of Variables and functions.
Betweenness/Relational (observer & ecology)
System is the physical parts? Systems Engineering
Synergetics Subsystem, 265.05-06, 266.05, 1053.801, 1071.21 System, 168, 223.67, 224.30, 251.26, 261.01, 264.01, 265.04, 361-63, Chapter 4, 430.06, 501.10-11, 505.64, 505.71-74, 524.11, 526.10-19, 526.22-23, 526.25, 526.30-33, 527.25-26, 530.11, 531.04, 532.17, 538.03, 538.11, 542.01-05, 812.01, 831.01, 960.08, 986.730, 986.819, 986.850-57, 987.011-13, 1006.13, 1007.26, 1007.29, 1011.10-11, 1023.10-16, 1044.03-05, 1044.08, 1050.10, 1054.55, 1071.00-28, 1072.21, 1073.12, 1073.14, 1075.23, 1076.11
Open/Closed system from Thermo?
Open, Closed, Active Inference Systems? However we define system we want to make sure it is in the spirit of what it is for
from the SEBoK (1) A set of elements in interaction. (von Bertalanffy 1968) (2) combination of interacting elements organized to achieve one or more stated purposes (ISO/IEC/IEEE 2015) (3) A system is an arrangement of parts or elements that together exhibit behavior or meaning that the individual constituents do not. (INCOSE Fellows, 2019)
Varela & Maturana, Autopoiesis, Open systems. How to think about systems.
Realism and Instrumentalism
Nested subsystem (what Fuller calls "the system" as opposed to Universe).
Static, dynamic, open, closed -- what is the common feature? Where is the "between"? In e.g. a thermodynamic system. Where is the overlap among the different uses?
One aspect: Two or more system elements and their betweenness
Second aspect: Relational insight
Systems, Agents. Intentionality of "betweenness" of the agent in their niche.
Function, Modularity, Physical Place
*System is a subclass of Agent
75
Temporal Depth
Core
In this sense, counterfactual beliefs pertain to the future consequences of action and necessarily entail temporal depth.
76
Uncertainty
Core
A measure of unpredictability or expected surprise (cf. entropy). The uncertainty about a variable is often quantified with its variance (inverse precision).
77
Variational
Core
https://www.sciencedirect.com/science/article/pii/S1053811908002462?
Variational Bayes or ensemble learning (Feynman, 1972; Hinton and von Cramp, 1993; MacKay, 1995; Attias, 2000) is a generic approach to model inversion that approximates the conditional density p(ϑ|y,m) on some model parameters, ϑ, given a model m and data y. We will call the approximating conditional density, q(ϑ), a variational or ensemble density. Variational Bayes also provides a lower-bound on the evidence (marginal or integrated likelihood) p(y|m) of the model itself. These two quantities are used for inference on parameter and model-space respectively.
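A standard identity makes the two quantities explicit (notation follows the quote: data y, parameters ϑ, model m; this decomposition is standard variational calculus, not drawn from the quoted paper):

$$\ln p(y \mid m) \;=\; \mathbb{E}_{q(\vartheta)}\!\left[\ln p(y, \vartheta \mid m) - \ln q(\vartheta)\right] \;+\; \mathrm{KL}\!\left[q(\vartheta) \,\|\, p(\vartheta \mid y, m)\right]$$

Since the KL term is non-negative, the expectation term (the negative free energy) is a lower bound on the log evidence; maximising it with respect to q(ϑ) simultaneously tightens the bound (inference on model-space) and drives q(ϑ) toward the true conditional density p(ϑ|y,m) (inference on parameter-space).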
78
Variational free-energy
Core
A statistical measure used in problems of approximate Bayesian inference as an effective upper bound to surprisal, a (usually incomputable) quantity that represents the negative log-probability of an outcome, e.g., the sensory states for an organism. Under Gaussian assumptions, variational free energy reduces to a weighted sum of prediction errors.
A functional of sensory states and a probability distribution over hidden states that cause sensory states. The variational free energy is an upper bound on the surprise (self information) of sensory states, under a (generative) model. Surprise is the negative logarithm of the Bayesian model evidence or marginal likelihood.
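The two definitions can be summarised in one line (a standard rearrangement, with notation assumed here rather than taken from the sources: sensory states y, variational density q over hidden states ϑ, generative model m):

$$F \;=\; -\ln p(y \mid m) \;+\; \mathrm{KL}\!\left[q(\vartheta) \,\|\, p(\vartheta \mid y, m)\right] \;\geq\; -\ln p(y \mid m)$$

Free energy therefore equals surprisal exactly when q matches the true posterior. Under Gaussian assumptions, F reduces (up to constants) to a sum of precision-weighted squared prediction errors, $F \approx \tfrac{1}{2}\sum_i \pi_i \varepsilon_i^2$.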
79
Abstract Action
Supplement
Action A : S × R → ℝ... corresponds to action emitted by an agent and depends on its sensory and internal states.
RECOGNITION models update internal parameters that correspond to external states (including hidden causes of the environment), blanket states, and internal states (meta-modeling). In contrast, GENERATIVE models take those same internal parameter estimates and emit expected or plausible observations.
80
Abstract action prediction
Supplement
In principle, the recognition of action goals might be implemented in perceptual and associative brain areas, similar to the way other events such as visual scenes are (believed to be) recognized, predicted and understood semantically.
We propose that the brain automatically predicts others’ future actions while perceiving their current actions. The human perceptual system constantly engages in this kind of reflexive prediction.
81
Abstract Bayesian Inference
Supplement
Minimisation of variational free energy with respect to a probability distribution over (fictive) hidden states causing sensory states (observations). Variational Bayesian inference is approximate because it minimises a (free energy) bound on surprise, rather than surprise itself. When free energy equals surprise, inference is exact.
Updating a model in accordance with Bayes' rule, i.e. computing the posterior distribution: p(c|s) = p(s|c)p(c)/p(s).
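A worked numeric instance of the posterior computation above, for a hypothetical binary cause c and an observed sensory state s (all numbers illustrative):
```python
# Prior p(c) over a binary cause, and likelihood p(s|c) of the observed s.
prior = {"c0": 0.8, "c1": 0.2}
likelihood = {"c0": 0.1, "c1": 0.7}

# Marginal likelihood p(s) = sum over c of p(s|c) p(c).
p_s = sum(likelihood[c] * prior[c] for c in prior)

# Posterior p(c|s) = p(s|c) p(c) / p(s).
posterior = {c: likelihood[c] * prior[c] / p_s for c in prior}
print(posterior)  # {'c0': ~0.364, 'c1': ~0.636}
```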
82
Abstract epistemic value
Supplement
83
Abstract External State
Supplement
84
Abstract Generative Model
Supplement
85
Abstract Hidden State
Supplement
86
Abstract Internal State
Supplement
87
Abstract Sensory State
Supplement
88
Abstract System
Supplement
89
AbstractAccuracy
Supplement
Accuracy is simply the surprise about sensations that are expected under the recognition density
90
Action
Supplement
Action A : S × R → ℝ... corresponds to action emitted by an agent and depends on its sensory and internal states.
RECOGNITION models update internal parameters that correspond to external states (including hidden causes of the environment), blanket states, and internal states (meta-modeling). In contrast, GENERATIVE models take those same internal parameter estimates and emit expected or plausible observations.
91
Action and Planning as Divergence Minimization
Supplement
Action and Perception as Divergence Minimization unifies perception and action under a single framework. This framework proposes that both action and perception can be modelled as an agent trying to minimize a divergence functional between two distributions: an ‘actual’ distribution A(x;o) and a target distribution T(x;o).
Divergence minimization... connects deep reinforcement learning to the free energy principle (Friston, 2010; 2019), while simplifying and overcoming limitations of its active inference implementations.
92
Action at a distance
Supplement
Complexity can be thought of as the degrees of freedom used by the plant to anticipate and predict its sensory exchange. This leads naturally to a principle of minimum redundancy (well established in the neurosciences [77]), whereby a good plant will retain just those sparse, frugal structures that are necessary to anticipate the world. This can be evident in the phenotypic form (as unpacked earlier by analogy with robotic gloves) or in terms of conditional dependencies and ‘action at a distance’ mediated in plants by channels and electrochemical waves (very much like axonal connections and electrochemical synaptic transmission in the brain). In short, under the free-energy principle—and the active inference that this entails—one would anticipate that plants would come to distil the essential causal structure in their environment in terms of their physical form and biophysical function. It is this form and functional architecture that constitutes the generative model and underwrites their existence.
93
Action Integral
Supplement
94
Active Inference
Supplement
95
Agency based model
Supplement
A generative model (probability) over hidden states that include control states.
96
Agency free model
Supplement
A generative model (probability) over hidden states that preclude control states.
97
Alignment (of internal states)
Supplement
98
Appraisal theories of emotion
Supplement
A long-standing tradition, dating back to James (but not Lange), according to which emotions depend on cognitive interpretations of physiological changes.
99
Attenuation of response
Supplement
The attenuation of responses encoding prediction error, with perceptual learning, explains repetition suppression (e.g. mismatch negativity in electroencephalography).
100
Augmented reality
Supplement
A technique in which virtual images can be combined with real-world real-time visual input to create hybrid perceptual scenes that are usually presented to a subject via a head-mounted display.
101
Bayes-optimal control
Supplement
Acting to minimise the free energy bound on the (negative logarithm of) Bayesian model evidence – with or without agency.
102
Bayesian
Supplement
103
Bayesian Brain
Supplement
104
Bayesian surprise
Supplement
A measure of salience based on the Kullback–Leibler divergence between the recognition density (which encodes posterior beliefs) and the prior density; it quantifies the information gained by an observation.
105
Belief updating
Supplement
a parameterization of posterior beliefs about the past and future that makes state estimation...
Belief updating mediates inference and learning, where inference means optimising expectations about hidden states (policies and precision), while learning refers to optimising model parameters. This optimisation entails finding the sufficient statistics of posterior beliefs that minimise variational free energy.
106
Bottom-up attentional control
Supplement
Attentional control that is driven by factors external to the observer, such as stimulus salience (e.g., 'pop-out' stimuli that contrast strongly with surrounding items based on a simple feature value, sudden flashes of light, or loud noises in an otherwise quiet environment). Awh, Belopolsky and Theeuwes view this as the same concept as 'exogenous attentional control'.
107
Cognitive Science
Supplement
108
Cognitive System
Supplement
cognitive systems can be described as instantiating a form of Bayesian inference. That is, their physical properties and patterns of behaviour come to match (or infer, in a statistical sense) those of their embedding ecological niche (Bruineberg, Kiverstein, & Rietveld, 2016; Kiefer, 2017).
A random dynamical system has equations of motion with an element of randomness. These systems are characterized by a state space, the set of all possible equations of motion, and a probability distribution on the set.
109
Cognitivism
Supplement
110
Collective behavior
Supplement
111
Conditional density
Supplement
Conditional density q(ψ̃) := q(ψ̃ | μ̃) [is] an arbitrary probability density function over hidden states ψ̃ ∈ Ψ that is parameterized by internal states μ̃ ∈ ℝ.
112
Conditional Probability
Supplement
One of the basic ProbabilityRelations. ConditionalProbability is used to state the numeric value of a conditional probability. (conditionalProbability ?FORMULA1 ?FORMULA2 ?NUMBER) means that the probability of ?FORMULA2 being true given that ?FORMULA1 is true is ?NUMBER.
113
Congruence
Supplement
114
Connectionism
Supplement
115
Control (states)
Supplement
(Fictive) hidden states that are used to explain the consequences of action. Control states are inferred or represented in the generative model.
116
Control theory
Supplement
117
Counterfactual
Supplement
A counterfactual model is a conditional probability distribution that relates possible actions to possible future states (at least following Friston).
118
Cybernetics
Supplement
119
Density
Supplement
120
Deontic Action
Supplement
121
Development
Supplement
Example of regulative development: the two normal bodies resulting when an early embryo is cut in half. Deep evolutionary conservation of ion channel and neurotransmitter mechanisms highlights a fundamental isomorphism between developmental and behavioral processes. At the dawn of developmental biology, Roux wrote of the struggle of the parts in an embryo.
122
Dissipation
Supplement
123
Divergence (Kullback–Leibler)
Supplement
Information divergence, information gain or relative entropy is a non-commutative measure of the difference between two probability distributions.
A measure of the distance or difference between two probability densities.
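A small sketch of the non-commutativity noted above, for discrete distributions (illustrative numbers):
```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats, for discrete p, q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.90, 0.05, 0.05]
q = [1/3, 1/3, 1/3]
print(kl(p, q))  # ~0.70
print(kl(q, p))  # ~0.93 -- the two directions differ, so KL is not a metric
```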
124
Domain
Supplement
125
Domain-generality
Supplement
126
Domain-specificity
Supplement
127
Dynamic causal modelling
Supplement
128
Dynamic expectation maximization
Supplement
129
Dynamicism
Supplement
130
Ecology
Supplement
131
Ecology, Evolution, Development
Supplement
132
Embedded Embodied Encultured Enactive Inference
Supplement
133
Embodied Cybernetic Complexity
Supplement
complexity is the divergence between posterior and prior beliefs.
complexity is defined in relation to empirical priors based on the approximate posterior expectations of the preceding (forward) and subsequent (backward) states
134
EmbodiedBelief
Supplement
135
Emotion
Supplement
An affective state with psychological, experiential, behavioral, and visceral components. Emotional awareness refers to conscious awareness of an emotional state.
136
Empirical prior
Supplement
Priors that are induced by hierarchical models; they provide constraints on the recognition density in the usual way but depend on the data.
137
Enactivism
Supplement
Friston Symposium, 6/22/2021 (Pt 1, 22:06-23:30)
[The] enactive perspective... is inference about the consequences of action.
138
Entropy
Supplement
The average surprise of outcomes sampled from a probability distribution or density. A density with low entropy means, on average, the outcome is relatively predictable (certain).
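Entropy as average surprisal, in a minimal sketch (illustrative distributions):
```python
import math

def entropy(p):
    """Average surprisal of a discrete distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

print(entropy([0.98, 0.01, 0.01]))  # ~0.11: outcomes highly predictable
print(entropy([1/3, 1/3, 1/3]))     # ~1.10: maximal uncertainty over 3 outcomes
```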
139
Estimator
Supplement
A statistical estimator is a function of random variables that are conceived as samples; so an estimator specifies how to compute an estimate from observed data. An estimate is a particular value of an estimator (which is computed when particular samples, i.e., realizations of random variables, have been obtained).
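The estimator/estimate distinction in code (a toy sketch; the sample mean stands in for any estimator):
```python
import random

random.seed(1)

def sample_mean(xs):
    """An estimator: a rule mapping observed samples to a number."""
    return sum(xs) / len(xs)

# Particular realizations of a random variable with true mean 5.0 ...
samples = [random.gauss(5.0, 2.0) for _ in range(1000)]

# ... yield an estimate: one particular value of the estimator.
print(sample_mean(samples))  # close to 5.0
```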
140
Event-related potential (ERP)
Supplement
141
Evolution
Supplement
142
Expectation maximization
Supplement
An iterative scheme that estimates conditional expectations and maximum likelihoods of model parameters, in an E- and M-step, respectively.
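A compact sketch of the E- and M-steps for a two-component 1-D Gaussian mixture (synthetic data and initial values are illustrative, not from the source):
```python
import math
import random

random.seed(0)

# Synthetic data from two Gaussian components (all parameters illustrative).
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mu = [-1.0, 6.0]     # initial component means
sigma = [1.0, 1.0]   # initial component standard deviations
w = [0.5, 0.5]       # initial mixing weights

for _ in range(50):
    # E-step: conditional expectations (responsibilities) for each component.
    resp = []
    for x in data:
        joint = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in (0, 1)]
        total = sum(joint)
        resp.append([j / total for j in joint])

    # M-step: maximum-likelihood updates of the model parameters.
    for k in (0, 1):
        nk = sum(r[k] for r in resp)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(
            sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)
        w[k] = nk / len(data)

print(mu, sigma, w)  # means should approach ~0 and ~5
```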
143
Expected Utility Theory
Supplement
144
Experience of body ownership (EBO)
Supplement
The experience of certain parts of the world as belonging to one’s body. EBO can be distinguished into that related to body parts (e.g., a hand) and a global sense of identification with a whole body.
145
Explaining Away
Supplement
The notion of "explaining away" is ambiguous. 1. Some authors write that sensory signals are explained away by top-down predictions. 2. Another sense in which the term is used is that competing hypotheses or models are explained away. 3. A third sense is as in explaining prediction error away.
146
Explanation
Supplement
147
Extended Cognition
Supplement
148
Falsification
Supplement
149
Far-from-equilibrium
Supplement
150
Fokker-Planck Equation
Supplement
151
Foraging
Supplement
152
Friston's Law
Supplement
"All the quantities that can change; i.e. that are part of the system, will change to minimise free energy. These quantities are the internal parameters λ and the action parameters, α."
Friston’s law finds its expression in the fundamental hypothesis of a cathectic signal flowing from a predictive neuron to a sensory one, with the first neuron transmitting the expected signal to the second. These two classes of neuron reflect the basic function of the nervous system, since predicting the incoming signal minimizes surprisal and therefore F. (Solms)
[S]elf-organisation is... how ‘selfhood’ comes about (through the formation of something called a ‘Markov blanket’). Now Friston had reduced self-organisation to a simple law. I call it Friston’s law (see below). Armed with this law, everything that we call mental life becomes mathematically tractable, and reducible to physics: to the laws of thermodynamics (and behind that, statistical mechanics) to be exact. (Solms)
153
functional magnetic resonance imaging (fMRI)
Supplement
154
Gaussian distribution
Supplement
The famous bell-shaped probability distribution (also called the normal distribution). Its prominence is grounded in the central limit theorem, which basically states that many distributions can be approximated by Gaussian distributions.
155
Generalized coordinates
Supplement
Generalized coordinates of motion cover the value of a variable, its motion, acceleration, jerk and higher orders of motion. A point in generalized coordinates corresponds to a path or trajectory over time.
156
Generalized Synchrony
Supplement
157
Generative density
Supplement
Generative density p(s̃, ψ̃ | m) [is] a probability density function over external (sensory and hidden) states under a generative model denoted by m. This model specifies the Gibbs energy of any external states: G(s̃, ψ̃) = −ln p(s̃, ψ̃ | m).
158
Generative modelling
Supplement
159
Gestalt
Supplement
160
Goal-driven selection
Supplement
The imposition of a selection bias based only on the current selection goals of the observer. This definition of goal-driven selection excludes selection biases that are a lingering consequence of past selection episodes or goals, because it is possible for such influences to countermand current selection goals.
161
Gradient Descent
Supplement
An optimization scheme that finds a minimum of a function by changing its arguments in proportion to the negative of the gradient of the function at the current value.
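A minimal instance of the scheme described above, minimising f(x) = (x − 3)², whose gradient is 2(x − 3):
```python
x = 0.0       # starting argument
rate = 0.1    # proportionality constant (step size)

for _ in range(100):
    grad = 2 * (x - 3)   # gradient of f at the current value
    x -= rate * grad     # change the argument against the gradient
print(x)  # ~3.0, the minimum of f
```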
162
Group Renormalization Theory
Supplement
163
Guidance signal
Supplement
164
Habit learning/formation
Supplement
165
Hamilton's Principle of Least Action
Supplement
166
Helmholtz (inference) machine
Supplement
A device or scheme that uses a generative model to furnish a recognition density. Such machines learn hidden structure in data by optimizing the parameters of generative models.
167
Hierarchically Mechanistic Mind
Supplement
168
Homeostasis
Supplement
The process whereby an open or closed system regulates its internal environment to maintain its states within bounds
169
Homeostatic system
Supplement
A cybernetic or homeostatic system is a 'good regulator' of its external variables and necessarily behaves as a model of the external system being regulated.
171
Hyperprior
Supplement
172
Hypothesis
Supplement
173
Information bottleneck (IB)
Supplement
Information bottleneck (IB) depicts a trade-off between the accuracy and conciseness of encoded representations.
174
Interoception
Supplement
The sense of the internal physiological condition of the body.
175
Interoceptive sensitivity
Supplement
A characterological trait that reflects individual sensitivity to interoceptive signals, usually operationalized via heartbeat detection tasks.
176
Inverse problem
Supplement
From the point of view of predictive coding, the problem of perception requires inverting the mapping from hidden causes to sensory signals. This problem is difficult, to say the least, because there is not usually a unique solution, and sensory signals are typically noisy (which means that the mapping from hidden causes to sensory signals is not deterministic).
177
Lateral geniculate nucleus
Supplement
A relay center in the thalamus for the visual pathway.
178
Likelihood
Supplement
179
Marr's Levels of Description
Supplement
180
Material science
Supplement
181
Memory
Supplement
182
Message Passing
Supplement
[In] active inference... higher cortical levels send descending proprioceptive predictions, rather than motor commands. This process mirrors perceptual inference in sensory cortex, where descending connections convey predictions, while ascending connections convey prediction errors. The anatomical substrate of this recurrent message passing is a hierarchical system consisting of functionally asymmetric driving (ascending) and modulatory (descending) connections: an arrangement that we show is almost exactly recapitulated in the motor system, in terms of its laminar, topographic and physiological characteristics.
183
Mismatch negativity
Supplement
184
Model
Supplement
185
Model accuracy
Supplement
186
Morphogenesis
Supplement
187
Multisensory integration
Supplement
188
Neuronal Ensemble
Supplement
Functionally distinct neuronal ensembles exist within a single memory engram
189
Niche construction
Supplement
190
Noisy signal
Supplement
191
Non-linear dynamical systems
Supplement
192
Optimal control
Supplement
Acting to minimise expected cost.
193
Precision
Supplement
(In general statistical usage) means the inverse variance or dispersion of a random variable. The precision matrix of several variables is also called a concentration matrix. It quantifies the degree of certainty about the variables.
The precision of a random variable is the inverse of its variance. In other words, the greater the average divergence from its mean, the lower the precision of a random variable (and vice versa).
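A small numeric illustration of precision as inverse variance (numbers are made up):
```python
from statistics import variance

reliable = [5.0, 5.1, 4.9, 5.0, 5.05]  # low dispersion around the mean
noisy = [5.0, 8.0, 2.0, 6.5, 3.5]      # high dispersion around the same mean

for xs in (reliable, noisy):
    var = variance(xs)
    print(f"variance={var:.3f}  precision={1 / var:.3f}")
# The low-dispersion sample has high precision (certainty); the noisy one, low.
```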
194
Prediction error
Supplement
195
Prediction error minimization
Supplement
196
Predictive Coding
Supplement
Predictive coding is a biologically plausible process theory for which there is a considerable amount of anatomical and physiological evidence.
197
Predictive coding (PC)
Supplement
A data processing strategy whereby signals are represented by generative models. PC is typically implemented by functional architectures in which top-down signals convey predictions and bottom-up signals convey prediction errors.
198
Predictive Processing
Supplement
199
Prior
Supplement
The probability distribution or density on the causes of data that encode beliefs about those causes prior to observing the data.
200
Random variable
Supplement
A random variable is a measurable function between a probability space and a measurable space. For instance, a six-sided die can be modeled as a random variable, which maps each of six equally likely events to one of the numbers in the set {1,2,3,4,5,6}.
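The die example as code, a minimal sketch of a random variable as a map from equally likely events to {1, 2, 3, 4, 5, 6}:
```python
import random

random.seed(42)

def die():
    """A six-sided die: maps each equally likely event to a face value."""
    return random.choice([1, 2, 3, 4, 5, 6])

rolls = [die() for _ in range(6000)]
for face in range(1, 7):
    print(face, rolls.count(face) / len(rolls))  # each relative frequency ~ 1/6
```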
201
Receptive field
Supplement
202
Recognition density
Supplement
Recognition density or approximating conditional density is a probability distribution over the causes of data. It is the product of (approximate) inference or inverting a generative model. It is sometimes referred to as a proposal or ensemble density in machine learning.
203
Representationalism
Supplement
204
Reservoir Computing
Supplement
205
Reward
Supplement
206
Salience
Supplement
207
Sample space
Supplement
A sample space Ω [is a] non-empty set from which random fluctuations or outcomes ω ∈ Ω are drawn.
208
Selection bias
Supplement
An early perceptual bias towards a specific defining feature, such as color or location, such that stimuli with that feature are prioritized over other stimuli during initial encoding. This can be distinguished from the ability to render a stimulus-specific response in the absence of a selection bias. For example, without imposing a selection bias, an observer could search for a specific target in a search array by evaluating each item in turn until the target-defining properties are encountered.
209
Selection history
Supplement
The bias to prioritize items that have been previously attended in a given context. Because such selection history effects may contradict current selection goals, we argue that selection history and current goals should be viewed as distinct categories of control.
210
Self-organization
Supplement
211
Selfhood
Supplement
The experience of being a distinct, holistic entity, capable of global self-control and attention, possessing a body and a location in space and time. Selfhood operates on multiple levels – from basic physiological representations to metacognitive and narrative aspects.
212
Semi-Markovian
Supplement
A modified or elaborated Markovian model; e.g. one employing the Partially-Observable Markovian Decision (POMD) structure for visual consciousness.
213
Sense of agency
Supplement
A probabilistic representation of hidden control states, encoded by the internal states (sufficient statistics) of an agency based model.
214
Sensory attenuation
Supplement
Reduced neural responses to self-induced compared to externally generated stimuli.
[A]ttenuating the precision of sensory evidence during movement (or, equivalently, attending away from the consequences of self-made acts) [can resolve] a conflict between action and perception.
A failure of sensory attenuation leads to false (active, Bayesian) inference and beliefs about agency.
215
Sensory Data
Supplement
Data are units of information, often numeric, that are collected through observation. In a more technical sense, data are a set of values of qualitative or quantitative variables about one or more persons or objects, while a datum (singular of data) is a single value of a single variable.
[The probability of] sensory data is given by Bayes' rule.
216
Sensory input
Supplement
217
Sensory outcome
Supplement
218
Shared Generative Model ('Shared Narrative')
Supplement
219
Signal
Supplement
220
Simulation
Supplement
221
Sophisticated Inference
Supplement
222
spike-timing dependent plasticity
Supplement
223
Stigmergy
Supplement
224
Stochastic
Supplement
The successive states of stochastic processes are governed by random effects.
225
Subjective feeling states
Supplement
Consciously experienced emotional states that underlie emotional awareness.
226
Surprisal
Supplement
Surprisal or self-information is the negative log-probability of an outcome. An improbable outcome is therefore surprising.
An information-theoretic notion which specifies how unlikely an event is, given a model. More specifically, it refers to the negative logarithm of an event's probability (also just called "surprise"). It is important not to confuse this subpersonal, information-theoretic concept with the personal-level, phenomenological notion of "surprise".
227
Synergetics
Supplement
228
Teams
Supplement
229
Theory
Supplement
230
Thermodynamic system
Supplement
A thermodynamic system is a body of matter and/or radiation, confined in space by boundaries that separate it from its surroundings. It comprises an ensemble of (intensive and extensive) state variables.
231
Thermostatistics
Supplement
232
Thinking Through Other Minds
Supplement
233
Top-down attentional control
Supplement
Attentional control that is driven by factors that are 'internal' to the observer. Awh, Belopolsky and Theeuwes view this as the same concept as 'endogenous attentional control'. The key problem they highlight with this construct is that grouping together control signals that are 'internal' (i.e., control signals unrelated to stimulus salience) conflates the effects of current selection goals and selection history. Because current goals and selection history may generate conflicting selection biases, they argue that these should be viewed as distinct categories of control.
234
Umwelt
Supplement
235
Unidirectionality or "mere" active inference
Supplement
The dynamics of Huygens' pendulums exemplify a Markov chain over time. A Markov chain is a special case of a Markov blanket, in which the dependencies among states are restricted to a chain of successive influences with no reciprocal influences or loops. This means that the core properties of a Markov chain do not generalize to all Markov blankets, e.g. the conditional independencies induced by a Markov chain are unidirectional. When applied to successive states over time, Markov chains capture the notion that events are conditionally independent of previous or past events given the current states of the system [12]. Systems with unidirectional conditional independencies are non-autonomous. The reason is that such systems cannot modulate their relation to the world, since a Markov chained system is entirely ‘enslaved’ by its here-and-now—and, in particular, its precedents.
236
Variational Niche Construction
Supplement
237
Von Economo neurons (VENs)
Supplement
Long-range projection neurons found selectively in hominid primates and certain other species. VENs are found preferentially in the AIC and ACC.
238
Weak mixing
Supplement
239
Working memory
Supplement
Working memory can be thought of as a process of evidence accumulation in deep temporal hierarchies. This represents the evidence for competing hypotheses about the causes of sensory data. By representing policies as sequences of actions, working memory can be used to predict future states according to the trajectory defined by a given policy. In doing so, it allows for optimal policy selection with respect to future outcomes.
240
World States (World Systems)
Supplement
241
Interface
Supplement
242
active
Entailed
243
area
Entailed
244
attitude
Entailed
245
backbone
Entailed
246
causality
Entailed
247
computer
Entailed
248
concentration
Entailed
249
concept
Entailed
250
consensus
Entailed
251
conversation
Entailed
252
current
Entailed
253
default-mode
Entailed
254
dynamics
Entailed
255
ego
Entailed
256
energy
Entailed
257
environment
Entailed
258
error
Entailed
259
feedback
Entailed
260
field
Entailed
261
framework
Entailed
262
free
Entailed
263
genetic
Entailed
264
hierarchical
Entailed
265
idea
Entailed
266
increase
Entailed
267
influence
Entailed
268
interpretation
Entailed
269
inverse
Entailed
270
language
Entailed
271
machine
Entailed
272
metaphor
Entailed
273
neuronal
Entailed
274
object
Entailed
275
objective
Entailed
276
observer
Entailed
277
parameter
Entailed
278
part
Entailed
279
perceptual inference
Entailed
280
perspective
Entailed
281
phase
Entailed
282
physics
Entailed
283
play
Entailed
284
probability
Entailed
285
Probably Approximately Correct (PAC)
Entailed
286
problem
Entailed
287
propositional
Entailed
288
purpose
Entailed
289
question
Entailed
290
random
Entailed
291
recognition
Entailed
292
role
Entailed
293
science
Entailed
294
selection
Entailed
295
self-organization
Entailed
296
social
Entailed
297
states
Entailed
298
technology
Entailed
299
understanding
Entailed
300
resource
Entailed
301
tree
Entailed
302
abstractCounterpart
Entailed
303
represents
Entailed