Active Inference for the Social Sciences ~ AII 2023

Questions & Answers

Please use the form below to write any questions to the course teachers.
Some will be addressed during Discussion sessions, and all discourse will be published in final form at the end of the course along with the transcript of lectures/discussions.

Questions for Distribution
Course Response
On the slide on allostasis it was argued that the “most expected” predictions are about “homeostatic setpoints within our own body”. I wonder whether these ‘first priors’ are all interoceptive, i.e. about some state of the body? Some explanations of the framework seem to focus on interoception at this point, but I guess that can’t be the whole story, because how could such a claim be compatible with the existence of ‘prosocial motivations’? For some background: Tomasello et al. (2005) argue that humans are capable of engaging in social practices because of their mind-reading skills (ToM) and their unique prosocial motivations. That is, they are inclined to align emotions and goals with others, help others, etc. And they feel terrible when a caregiver or their group rejects them. These vital dispositions look a lot like ‘first priors’. Yet they are not about some state of the body, but about some aspect of the social environment, like the presence or emotions of a caregiver.
Could you please help me with that? Is it correct that first priors are not solely about the system’s own body, but also about those aspects of the environment which are vital for the organism? Could you point me to a paper discussing these matters? Thanks!
Good question! I have no definitive answer, but here are some thoughts.
To begin with, there is no clear distinction between interoception and exteroception. Or rather, there is one, but it relies on a specific model of what constitutes “me”. For example, an embryo is treated by its mother’s physiology as a part of her own body. Once the child is born, it is no longer directly a component of the mother’s metabolism, although it remains embedded within the mother’s self-model; at some point, it will be disembedded from this model entirely. What happened? Birth provides a clear cutting point from a metabolic perspective, but psychologically there is no clear point where the child becomes an alien, definitively and irreversibly. So is the relation between the two “interoceptive”? I don’t know, but I do know that a rigid conception of either response will lead to prediction error.
I assume the article that most directly resonates with your question is Ciaunica, Anna, Axel Constant, Hubert Preissl, and Katerina Fotopoulou. 2021. “The First Prior: From Co-Embodiment to Co-Homeostasis in Early Life.” Consciousness and Cognition 91 (May): 103117. I would say the notions of “co-embodiment” and “co-homeostasis” apply more widely to social groups, although with widely different forms of coordination than in the case of mother-child dyadic relationships.
Generally speaking, human cognition is based on a permanent dance between internalizing and externalizing systems, treating them either as part of my (self-assessed) extended phenotype or as part of the environment being navigated. Assuming there is a “true” scale of individuation would lead to a never-ending chase toward the one scale that “really” drives the others, whose dynamics would ultimately depend heavily on elements of the “environment”. The only elegant solution I can conceive of is that there is no “true” scale of agency, or rather that the scale of agency (and the nature of the “interoception” it entails) is relative to specific perception-action cycles.
When examining the agent-arena relationship, there is understandably a great deal of emphasis on the agent. However, what are your thoughts on the functions and imperatives of the artefacts that yield affordances? More specifically, I’ve been thinking that, for there to be an optimal relationship between agent and arena, in a way that facilitates something akin to a flow state, the artefacts themselves need to be in the business of doing PEM and self-evidencing, and in that way the agent becomes the external part of the Markov blanket. How can we integrate this consideration into our model of agentic behaviour and well-being?
I do not think there exists a straightforward answer to this question.
Strictly speaking, the FEP entails a total symmetry between an agent and its environment. So the environment (taken as a whole) can equally be considered to be “self-evidencing”, with the agent as its environment.
I think a good theory of cognition would integrate what the enactivists call “interactional asymmetry”, i.e. the fact that the agent has the exclusive ability to define the terms of the interaction. This resonates with the requirement that, within the FEP (and by construction of the underlying information geometry), “external states” are states that the agent can have beliefs over, states that are somehow cognitively meaningful. This opens the question of what exactly defines which states happen to be meaningful for a given agent (or agentive interaction, or Markov blanket, or...).
Overall, I cannot quite wrap my head around the role of artefacts in the agentive interaction. I think that they can constrain an agent’s self-model in a way that properly constitutes normativity (Guénin--Carlut, Avel, and Mahault Albarracin. 2023. “On Embedded Normativity - An Active Inference Account of Agency beyond Flesh.” OSF Preprints.), opening the door for a full-fledged model of embedded, distributed agency. I don’t think artefacts are agentive in and of themselves, but rather that they are means through which bona fide agents coordinate. All in all, I’d be open to a pretty internalist model of agency, where an agent’s internal dynamics (although constrained by its environment’s) are ultimately what produce its world.
The math that would enable me to ground this claim is not quite there yet; see Guénin--Carlut, Avel. 2022. “Physics of Creation - Symmetry Breaking, (En)Active Inference, and Unfolding Statespaces.” OSF Preprints, for a full-length discussion of this specific issue.
What is the relationship between error dynamics and the precision of error/prior messages?
Precision is understood within Active Inference as the confidence of a given prediction (precisely, the inverse of its variance). A nice paper on it is: Feldman, Harriet, and Karl Friston. 2010. “Attention, Uncertainty, and Free-Energy.” Frontiers in Human Neuroscience 4.
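To make the role of precision concrete, here is a minimal sketch (not from the course materials; the function name and numbers are my own) of how a Gaussian prior belief and a sensory observation combine: the posterior mean is a precision-weighted average, so the more precise message dominates.

```python
def precision_weighted_update(mu_prior, pi_prior, observation, pi_sensory):
    """Combine a Gaussian prior belief with a sensory observation.

    Precision (pi) is the inverse of the variance; the posterior mean
    is the precision-weighted average of prior mean and observation.
    """
    pi_post = pi_prior + pi_sensory
    mu_post = (pi_prior * mu_prior + pi_sensory * observation) / pi_post
    return mu_post, pi_post

# A precise sensation (pi = 9.0) dominates a vague prior (pi = 1.0):
mu, pi = precision_weighted_update(mu_prior=0.0, pi_prior=1.0,
                                   observation=10.0, pi_sensory=9.0)
# mu = (1*0 + 9*10) / 10 = 9.0, and the posterior precision is 10.0
```

Attention, in the Feldman and Friston sense, can then be read as modulating `pi_sensory`: boosting the precision of a sensory channel makes its prediction errors weigh more in the update.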
In Ben White’s lecture titled “Basics of Active Inference,” Lisa Feldman Barrett’s “constructed emotion” is referenced. I am curious to know why you’ve chosen to highlight this work, which puts forward the claim that emotions emerge from the brain’s predictions and interpretations, when we have quite compelling evidence against it. By compelling evidence, I’m referring to the evidence Mark Solms presents and the “prioritization triangle” he proposes.
I do not have prior knowledge of Solms’s work. A quick search turned up several papers where he claims that the predictive mind pretty much explains away consciousness (see below). You’d have to be more specific about which of his arguments constitute compelling evidence against the idea that prediction underlies emotion.
Solms, Mark. 2019. “The Hard Problem of Consciousness and the Free Energy Principle.” Frontiers in Psychology 9.
———. 2020. “New Project for a Scientific Psychology: General Scheme.” Neuropsychoanalysis 22 (1–2): 5–35.
Is there a Coda space with topics for the voluntary assignment? I want to write one, but the field is still a little too big for me to tackle without some guided research questions or essay topics.
We do not propose pre-defined topics, but we can have a chat about it! Write me at with “[AII Course] Voluntary assignment” as the subject and present yourself to start a discussion about it.
How can computational models based on the Free Energy Principle aid in understanding the dynamics of collective behaviors in non-human species and digital environments, like social networks or multiplayer online games?
In essence, the approach is to model collective dynamics while modelling the individual systems as prediction-error minimizers. This constitutes a more informative, although formally equivalent, alternative to a classical dynamical-systems approach.

See Heins, Conor, Beren Millidge, Lancelot da Costa, Richard Mann, Karl Friston, and Iain Couzin. 2023. “Collective Behavior from Surprise Minimization.” arXiv.
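As a toy illustration of this modeling style (my own sketch, far simpler than the Heins et al. model), each agent below descends the gradient of a crude “surprise”: its squared distance to the group mean, standing in for predictions about its neighbours. Collective cohesion then emerges from individual prediction-error minimization.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.normal(size=(30, 2)) * 5.0   # 30 agents on a 2D plane
lr = 0.1                                     # gradient step size

def surprise_gradient(pos):
    """Gradient of each agent's 'surprise', taken here as its squared
    distance to the group mean (a crude stand-in for its predictions
    about neighbours)."""
    return pos - pos.mean(axis=0)

spread_before = positions.var(axis=0).sum()
for _ in range(100):
    positions -= lr * surprise_gradient(positions)
spread_after = positions.var(axis=0).sum()
# The group coheres: the spread around the (unchanged) mean shrinks.
```

The same trajectory could be written as a plain dynamical system; the “prediction error minimization” reading is what makes the individual-level quantities interpretable as beliefs.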
How are higher-order beliefs about doing “better than expected” at PEM integrated into the architecture of the agent’s internal model of external dynamics (and itself in those dynamics)? My thought here is that we have prior beliefs to fulfil certain self-evidencing expectations (e.g. homeostatic predictions), but how do those interact with these higher-order predictions which appear, to my mind, to deny or negate the fundamental priors, in so far as they predict the agent to go over-and-above the levels of PEM which otherwise might be achieved by fulfilling those fundamental priors?
How can or does a framework/theory/principle of every “thing”, articulate with moral/normative/ethical/deontic considerations?
Addressed in some way by Enactivists: Biological Individuation → Theory of Ethics.
Varela, Francisco J. 1999. Ethical Know-How: Action, Wisdom, and Cognition. Stanford University Press.
Virtue ethics: things act according to their nature, so cultivate one’s nature to make it good. A recursive/circular definition of goodness, but a practically relevant insight given the relationship between agents.
Slingerland, Edward. 2014. Trying Not to Try: The Ancient Art of Effortlessness and the Surprising Power of Spontaneity. Canongate Books.
With enactive Active Inference as a sensemaking theory, one can describe the biological and social processes together. This may bring reflexivity or distance, though that distancing is not itself evaluation.
Biological individuation is “good” for/from whom?
Mike Levin, multiscale Individuality, Livestream #025 series.
Can’t get “ought” from “is” — normative claims have goals (e.g. directive, aspirational).
What is, is good? Not a strong position to take? — Nature is ethically unmotivated.
Grounded in the biological basis of the agent (autopoietic, etc.).
Many results come from descriptive work (how ethics/norms are actually practiced).
Hard to derive positive (prescriptive) results.
Flows well with the idea that distributed empowerment is good... everything knowing its right place / most expected place is good.
Pangloss = best of all possible worlds. How is this similar to / different from the “most likely” world?
Nature/Niche is good, bad?
Social Normativity and Active Inference
A descriptive theory of human behavior: to find out which trait distributions are found where.
“Good” by construction of ethics. However, how overfit is that in space or time? And how is this different from e.g. fitness-based ‘survival of the fittest’ (another tautology) or ‘might is right’ (coupling survival/winning with correctness/justification)?
Making a model → Decide what the parameters of interest are.
Empowerment? In nested/hierarchical models.
What are the challenges and potential solutions in using Active Inference to model collective behaviors with various environmental constraints?
In any model, you must pick what you do and do not include.
Initial, Boundary conditions.
Modeling country in X or Y status.
Specifying the state space (e.g. all possible states a variable can take).
Then novel state spaces can arise.
These general comments above apply to any modeling
Guénin--Carlut, Avel. 2022. “Cognitive Agency in Sociocultural Evolution.” OSF Preprints.
Lawful relationships among variables.
As the outcome of modeling, we are usually looking for a converging / definite distribution.
We are not always interested in, or attentive to, the open-ended.
It is not pre-statable what the state space will be for a cognitive agent.
Then how does this become useful / relevant in modeling
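To illustrate what “specifying a state space and looking for a converging distribution” means in practice, here is a minimal sketch (my own toy example, not from the course): a fixed three-state space with lawful dynamics given by a column-stochastic transition matrix, iterated until the belief distribution converges.

```python
import numpy as np

# A toy 3-state space (e.g. a country in status X, Y, or Z) with fixed
# "lawful relationships": a column-stochastic transition matrix.
B = np.array([[0.8, 0.1, 0.2],
              [0.1, 0.7, 0.3],
              [0.1, 0.2, 0.5]])

belief = np.array([1.0, 0.0, 0.0])  # start certain of state X
for _ in range(500):
    belief = B @ belief

# 'belief' converges to the stationary distribution of B: the definite,
# closed-form kind of answer such models deliver. Note that a genuinely
# novel state (one not listed in B) can never appear, because the state
# space was fixed in advance.
```

The last comment is the crux of the open-endedness worry: whatever dynamics you write down, the model can only redistribute probability over states you pre-stated.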

Active Inference advantages.
Different specification of constraints.
Different accounts of explanation
Duality between informational constraints and the beliefs/intentions/desires that the organism/thing has, rather than:
Purely non-representational dynamics, e.g. Hebbian learning → no “beliefs” or “norms” can emerge as non-trivial causal entities, or a fortiori create open-ended evolution.
Purely representational frameworks, e.g. utility theory → the intentional states must be specified preemptively, so no open-ended evolution either. Active Inference offers the possibility to indirectly specify normativity and intentionality; this may not be the case for other frameworks.
Language(s) of Nature?
Message Passing, Quantum Information, Category Theory, ... — possible that these languages are complementary and non-reducible.
Why does specific language exist in a certain context?
Important inter-operability results among various Physics and formal sciences.
Baez, J., and M. Stay. 2011. “Physics, Topology, Logic and Computation: A Rosetta Stone.” In New Structures for Physics, edited by Bob Coecke, 95–172. Lecture Notes in Physics. Berlin, Heidelberg: Springer.
Bayesian coherence:
Fuchs, Christopher A., and Rüdiger Schack. 2011. “A Quantum-Bayesian Route to Quantum-State Space.” Foundations of Physics 41 (3): 345–56.
Felin, Teppo, Stuart Kauffman, Roger Koppl, and Giuseppe Longo. 2014. “Economic Opportunity and Evolution: Beyond Landscapes and Bounded Rationality.” Strategic Entrepreneurship Journal 8 (4): 269–82.
Dictionary of cognitive patterns → Open-ended evolution with other agents and with environment

Where can I find the recommended reading lists mentioned in the lectures?
Where we have slides, they are
Active Inference Journal, we can curate this.
