Scratchpad

If institutional facts are constructed, where is the building material?

Ritual is the earthly distributed process by which socially distributed "code books" stay in sync with the "Big data" of our lived worlds.

Data, Code and Randomness: Building materials for the Social Construction of Reality

The goal

I would like us to reduce the practical gap between building construction and social construction. Basically, level the playing field between brute facts and institutional facts – to use Searle's terms (a genius of a philosopher, who suffered from ).

A bit of personal motivation

My wife is a structural engineer who helps build football stadia, parking garages, data centers and prisons using prestressed and precast concrete. Would you really blame this AI engineer for envying the concreteness of her constructions versus the ones I obsess over?

Definitions

So, with that context, here goes. Let's start with a few definitions that the field of AI and many lay-people might be able to agree with:
data: numbers, words, letters, chemicals, rocks, strings, wax, and things (analog or digital, material or immaterial)
code: combinations, transformations, actions, substitutions, relations, functions, calculations, computations, recipes, networks
randomness: change, entropy, unconstrained novelty, free – super rare – generation of code/data, the Force

Substituting in Wittgenstein

Let's now use the definitions above to try to interpret one of Wittgenstein's more frustratingly, annoyingly accurate remarks: "The world is the totality of facts, not things."
At a cursory level, data seems like a thing. This feels unproblematic.
A fact is not quite code, though. It is better understood as code executed with data of particular values. So a valid fact looks like 4 = 2 * 2, and an invalid fact looks like 3 = 2 * 1. To clarify: * is code; 1, 2, 3, 4 are data.
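To make this reading concrete, here is a minimal Python sketch (the names CODEBOOK and is_valid_fact are illustrative assumptions, not any existing library): a fact bundles a piece of code, some data, and a claimed value, and is valid only when executing the code on the data returns the claim.

```python
import operator

# Illustrative sketch: a "codebook" maps symbols (code) to executable operations.
CODEBOOK = {"*": operator.mul, "+": operator.add}

def is_valid_fact(claim, symbol, left, right):
    """A fact = code executed with data; it is valid iff the execution yields the claim."""
    return claim == CODEBOOK[symbol](left, right)

print(is_valid_fact(4, "*", 2, 2))  # True  -> the valid fact 4 = 2 * 2
print(is_valid_fact(3, "*", 2, 1))  # False -> the invalid fact 3 = 2 * 1
```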
We might then propose a metaphysics of randomness and code.
Where randomness generates code and/or data – which could take quite a while.
Randomness then combines code and/or data to generate mostly impossible facts, and slowly, every now and then – the totality of facts – the world, as Wittgenstein defined it.
Under this makeshift AI metaphysical engine, the universe of impossible worlds far exceeds the universe of possible ones. And the universe of possible worlds far exceeds the universe of probable ones, which exceeds the world I am now typing in.
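To see the "mostly impossible facts" point in miniature, here is a toy sketch in the same spirit (the setup and names are assumptions for illustration): randomness proposes candidate facts by combining code and data, and only a small fraction survive as valid.

```python
import operator
import random

CODEBOOK = {"*": operator.mul, "+": operator.add}  # code
DATA = [1, 2, 3, 4]                                # data

def random_fact():
    """Randomness combines code and data into a candidate fact (claim, symbol, left, right)."""
    return (random.choice(DATA), random.choice(list(CODEBOOK)),
            random.choice(DATA), random.choice(DATA))

trials = [random_fact() for _ in range(10_000)]
valid = [t for t in trials if t[0] == CODEBOOK[t[1]](t[2], t[3])]
print(f"{len(valid)} of {len(trials)} randomly proposed facts turn out valid")
```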
In any case, I see in this brief section a more or less acceptable interpretation of "The world is the totality of facts, not things". As any software engineer will tell you – we need both code and data. And code is tougher to generate and usually more valuable than data (you, no, You – please hold those corner cases for now).

Concretizing "social construction"

Where does this leave our concretizing "social construction" project?
Solved, about as well as Wittgenstein solved it with his Tractatus. What we have so far are the terms clarified: a framework for representation that we will be able to use to analyze various configurations of actors, acts, utterances, and objects.
But, to offer a bit of a sneak preview, let's go a bit further, and try to use these terms to explore how a socially constructed institutional fact might exist in one brain or even many brains:
Let's take a small group of people who have two pieces of code represented in their brains. Let's call them: * and +.
Let's also assume they have a few pieces of data represented in their brains: 1,2,3,4.
Assume, too, that the code and data scripts agree with what you and I believe they should do – e.g. 2 = 2 * 1 and not 3 = 2 * 1.
We might construct a valid fact by using the = from one person's brain and the * from another's and the 2 and 1 from yet others: [2] {=} (2) |*| <1>, using the funny parentheses to indicate different brains.
This fact is validly constructed – socially – only if someone's * isn't someone else's +, or someone's 1 isn't someone else's 2, and so on. Why, you ask? Because this makes it so that a fact I generate in my head can then correspond to something generated by a group of others, or by any one of those others.
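A hypothetical sketch of that socially assembled fact, in the same toy Python as above (the brain names and their contents are invented for illustration): each brain holds only fragments of code and data, the {=} is the validity check itself, and the fact comes out right only because every brain's * means the same thing.

```python
import operator

# Hypothetical brains, each holding fragments of the shared codebook and data.
brains = {
    "alice":  {"code": {"*": operator.mul}, "data": []},   # supplies |*|
    "bob":    {"code": {}, "data": [2]},                    # supplies (2)
    "carol":  {"code": {}, "data": [1]},                    # supplies <1>
    "dmitri": {"code": {}, "data": [2]},                    # supplies the claimed [2]
}

def socially_constructed_fact():
    """Assemble [2] {=} (2) |*| <1> from pieces held in different brains."""
    claim = brains["dmitri"]["data"][0]
    code = brains["alice"]["code"]["*"]
    left, right = brains["bob"]["data"][0], brains["carol"]["data"][0]
    return claim == code(left, right)   # the {=} is the check itself

print(socially_constructed_fact())  # True only because everyone's * agrees
```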
Institutional facts are the practical way this calibration of code happens.
The world, as the totality of (consistent) facts, doesn't care how those facts were generated, so long as they are replaceable – which is a way of constraining the code that generated them.
An orderly world has this property.
Sure, we could try to experiment with worlds where my code for + is your code for *, but the nonsense that would follow would be quickly forgotten.
The worlds we experience are the ones where the code snippets we are carrying in our brains must overlap, and correspond to the data of the world, a process secured in place by socially constructed institutional facts. Our project aims to prove that ritual is the earthly distributed process by which socially distributed "code books" stay in sync with the "Big data" of our lived worlds.
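One way to picture "keeping codebooks in sync" in this toy vocabulary (a sketch under invented names, not a claim about how ritual actually operates): two codebooks count as calibrated when they agree on a shared battery of test cases, and drift – my + being your * – shows up immediately.

```python
import operator

def codebooks_in_sync(codebook_a, codebook_b, test_cases):
    """Two codebooks are calibrated if they agree on every shared test case."""
    return all(codebook_a[sym](x, y) == codebook_b[sym](x, y) for sym, x, y in test_cases)

shared_tests = [("*", 2, 1), ("*", 2, 2), ("+", 1, 3)]
mine    = {"*": operator.mul, "+": operator.add}
yours   = {"*": operator.mul, "+": operator.add}
drifted = {"*": operator.add, "+": operator.mul}  # my + is your *

print(codebooks_in_sync(mine, yours, shared_tests))    # True  -> an orderly world
print(codebooks_in_sync(mine, drifted, shared_tests))  # False -> nonsense, soon forgotten
```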

Comments from Wlodek to address (April 6, 2022)

I'm not sure that arithmetic would be the right subject of experiment, as no interpretation is involved, and there's no need for re-interpretation.
However, making sense of weather, life events, and other important and unpredictable stuff is a problem. Here, we encounter individual words and individual models --- each of us has them.
But the need for joint action requires (a) some agreement on truth and (b) some common priors. (Mathematically, sheaves are appropriate tools, but I'm not sure we want to go there.)
So the agreement on truth is achievable by abstracting; and natural language and symbolic language are great tools for (a). But priors are hidden, and perhaps can only be reset via a joint action. A symbolic joint action can move the priors, I think.
Therefore, the building materials would be symbols in language, picture, sound, touch, smell and action.
Assuming we know the materials, we still need to better understand how the construction works, i.e., what the operations in the codebook(s) are.

Kripa points to address Wlodek’s comments (April 7, 2022)

Do we lose anything by mapping your list of building materials to actions and things?

I am concerned about using the notion of symbol, for fear of Peirce's semiotic quicksand.
I can understand randomness, things (objects, some of which can be living and actors, others can be latent signs of all sorts -- awaiting an interpreter), and actions (operators, relations between things -- including semiotic processes?).

I hear you on math lacking re-interpretation as frequently as other modes of reasoning.

But how about computer languages -- say, Java? I am thinking here of a spectrum: logic, math, Java, technical language, prose language, poetic language. This is what is driving my attempt to use a "code" and "data" framework.
INSPIRATION: Papert's "many hands make light work" that Friston also liked
What if ritual is about fixing bugs in a complex distributed computing system? "There are only two hard things in Computer Science: cache invalidation and naming things."
Consider operator overloading in programming languages, where '+' can mean the arithmetic action one number has on another, or the concatenation effect one string or list has on another. This is a type of minimally viable abstraction.
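For concreteness, this overloading already exists in Python for numbers, strings, and lists, and a user-defined type can supply its own meaning for '+' (the Offering class below is a made-up example, not anything standard):

```python
# Built-in overloading of "+": one symbol, different code depending on the data type.
print(1 + 2)          # 3          -> arithmetic addition
print("cat" + "dog")  # 'catdog'   -> string concatenation
print([1, 2] + [3])   # [1, 2, 3]  -> list concatenation

# A hypothetical type that defines its own "+".
class Offering:
    def __init__(self, items):
        self.items = items
    def __add__(self, other):
        # Here "+" means pooling two offerings into one.
        return Offering(self.items + other.items)

print((Offering(["grain"]) + Offering(["wine"])).items)  # ['grain', 'wine']
```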
Layering in case-based programming by example allows me to see how rituals might help encode/create specific relational structures between various "types" of data/things, structures that are solidified using simple test cases (e.g. 0+1=1, 1+2=3, 41+1=42, "cat"+"dog"="catdog", [1,2]+[3]=[1,2,3], ...). I see this process as "setting the priors".
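A sketch of "setting the priors" as executable test cases (the function and variable names are assumptions): the simple examples listed above pin down what '+' is allowed to mean, and any candidate codebook that fails them is out of calibration.

```python
# The simple examples from the text, written as shared, executable test cases.
PRIOR_CASES = [
    (0, 1, 1),
    (1, 2, 3),
    (41, 1, 42),
    ("cat", "dog", "catdog"),
    ([1, 2], [3], [1, 2, 3]),
]

def priors_hold(plus):
    """Passing these cases 'sets the priors' for what '+' may mean."""
    return all(plus(a, b) == expected for a, b, expected in PRIOR_CASES)

print(priors_hold(lambda a, b: a + b))  # True: the ordinary '+' fits the shared priors
```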
These structures -- so long as they are not incorrectly overloaded -- can be used to solve complex and large problems in a distributed manner, as Papert's video linked above demonstrates.
The upshot of all of this for me is still putting Rappaport's definition to work on keeping your codebook in sync with mine -- and with the *facts* of our environment -- so that when we communicate to solve a problem we have a slightly better chance of doing so: ritual is the earthly distributed process by which socially distributed "code books" stay in sync with the "Big data" of our lived worlds.
I also feel that this improves the chance of us having a light and easy-to-test framework for building practical setups for analysis and experimentation. Basically, let's assume -- in some anthropology-ready use case -- that X is data, Y is code, and random evolution allows them to come about, and let's get to work building.
