AI

Context:

DAF was an invited speaker on a panel on Large Language Models (LLM), Artificial Intelligence (AI), and synthetic intelligence more broadly.
The conference was held in-person in Berlin, Germany, in the first week of June 2024.
DAF joined remotely (the session was held at 2am PT) for a panel discussion with fellow panelists (moderated by Scott David).
In preparation for the session, Scott circulated 11 questions about LLMs and the personal/organizational dimensions of AI.
Below, for the interested, I present my bullet-point responses to these questions. They were drafted not as final statements to be recited during the panel, but rather to spark conversations & bring in more perspectives.

Relevance:

You may be curious to learn more about what some people are asking & thinking about modern intelligence, from theory and practice perspectives.
If you see anywhere you would like to comment, feel free to do so.
You can add another column entirely if you want to take a crack at all 11 questions, or add more questions as rows.
I share it here because, beyond the relevance of the topics, the question-catalyzed approach to inquiry across scales is a fundamental practice. As I look forward to June 2024 and beyond, I am thinking about two threads: learning more about the fractal questions & inquiries people are investigating (the Epistemics of run-time Education systems), and working to develop a science and art of knowledge systems (the Pragmatics of design-time Engineering for the aforementioned Education systems).
Any appropriate share is better than an unshared one (with a broad semantic understanding of “appropriate”, if & where syntactic and procedural customs are respected).
Sharing with context adds meaning. Describing the path of your inquiry is like viewing the movie of it, and/or interviewing the director. However little or much you feel you are honestly and authentically sharing, it actually communicates across levels.
Sharing your experience with reference to particular stands, follow-throughs, and reflections is an effective way to use one of the shared mnemonics of inquiry developed at …

EIC 2024 Panel Questions & Responses:
Q1: In light of the advancements in AI, particularly with LLMs, does the concept of intelligence really change when applied to machines versus humans?
How can one provide a satisfying finite response to this classic infinite question, about the semantics of “intelligence” amidst technological change?
Certainly, unique contemporary factors must be taken into account: changes over the last decades in the availability of digital computation and the prevalence of digital interactions (what Scott calls “the fifth-order consequences of Moore’s law”), and changes over recent years in algorithm-driven centralized social media platforms and in the availability of multi-modal, or transmedia, generative technologies like ChatGPT.
And I believe that framings and perspectives from across time can help us with sense-making and decision-making amidst uncertainty. Failure to integrate our learnings and anticipations across timescales could result in losing the forest for the trees in our short-term wayfinding, or even losing the forest entirely through ecosystem deterioration caused by failures of collective action and imagination.
So, “does the concept of intelligence really change when applied to machines versus humans?”
Short answer: Yes, the semantics, pragmatics, dynamics, and context of intelligence all are in change, in use. Compositionality, Interactivity, and Change are fundamental features of intelligence found across scales and systems.
Another way to say this is: Yes, in practice, intelligence really does change in practice. And, in principle, intelligence really does not change in principle.
This is how we always already find ourselves, in what Bucky Fuller called “Critical Path”: steering an ongoing process of inquiry and action, grappling with the specifics of pasts and futures, on the razor’s edge of survival across scales.
Q2: In your opinion, how do we balance engineered intelligence systems in terms of innovation with responsibility?
The question is excellent and has been framed artfully. It is not about some exclusive balance or implied trade-off between Innovation and Responsibility, as if on the scales of justice or in a zero-sum allocation game.
Rather, the question asks how “we balance engineered intelligence systems” in terms of Innovation WITH Responsibility. We are considering each with (not opposed to) the other, and thinking about balancing the system so that, when viewed from different Perspectives, Innovation and Responsibility are outcomes or Properties of certain engineered Processes. So here the “balancing” is more like the design-, build-, and run-time balancing of a ship or plane which needs to operate safely in stormy weather or on some far-flung planet.
The proximate question to address organizationally becomes: How do we Design and Support our intelligence systems amidst a broader ecosystem of shared intelligence processes, such that in Deployment they are able to steer and be steered, to balance and be balanced?
A broader question raised is: how can we come to understand the technical engineering of synthetic intelligence systems in terms of “innovation WITH responsibility”, implying our vision and capacity to communicate the total situation across BOLTS domains (Business, Operating, Legal, Technical, Social)?
In this area we have recently developed an open-source Requirements Engineering inter-framework called P3IF. At its core, as already utilized earlier in this response, P3IF consists of analyzing system requirements in terms of Processes, giving rise to Properties as viewed from defined Perspectives. The work was released as a research article in October 2023, and is now being applied in several organizational settings & being scoped for further technical development.
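To make the decomposition concrete, here is a minimal sketch of the Processes/Properties/Perspectives triple in code. The class and field names are my own illustrative assumptions, not the published P3IF schema:

```python
from dataclasses import dataclass

# Hypothetical, simplified rendering of the P3IF idea: requirements are
# analyzed as Processes whose outcomes appear as Properties when viewed
# from defined Perspectives (e.g. the BOLTS domains).

@dataclass(frozen=True)
class Process:
    name: str            # e.g. "model deployment review"

@dataclass(frozen=True)
class Property:
    name: str            # e.g. "auditability"

@dataclass(frozen=True)
class Perspective:
    name: str            # e.g. "Legal"

@dataclass(frozen=True)
class Requirement:
    """One (Process, Property, Perspective) triple."""
    process: Process
    prop: Property
    perspective: Perspective

# The same process can surface different properties depending on
# the perspective from which it is viewed.
reqs = [
    Requirement(Process("model deployment review"),
                Property("auditability"), Perspective("Legal")),
    Requirement(Process("model deployment review"),
                Property("innovation velocity"), Perspective("Business")),
]

for r in reqs:
    print(f"{r.process.name} -> {r.prop.name} (from {r.perspective.name})")
```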
Q3: What are some of the most promising areas of collaboration where the synthesis of AI and human intelligence can lead to breakthroughs?
Speaking here only to the framing of “collaboration as interface”, two promising areas relate to evaluation methods & formal systems modeling.
Development of capacities, norms, evaluations, and expectations for interfaces (ranging from the material through the informational & arbitrary).
AI benchmarking tests, which are really Phenotyping assays, have spanned from the Syntactic through, now, the Semantic (LLMs) and the Pragmatic (Action models). Now, or perhaps plus or minus a few years depending on your threshold, these synthetic intelligence systems are reaching agentic status under any reasonable application of the intentional stance. So this question is more open now than ever: What could and should collaboration look like, within and across projects?
New approaches are facilitating design of custom compositional systems:
For organizational design, modeling, and simulation, emerging approaches today include applied category theory, Active Inference, tokenomics, and systems engineering. I feel that in the coming years these approaches, and/or their functional equivalents or cousins, will develop in applicability and enable new organizational phenotypes which will exist in new and different organizational niches. This multiscale thermo-evolutionary grounding connects the particulars of the collaboration to first principles and holistic considerations.
Here there is an analogy to the Linnaean binomial nomenclature for genus and species. What is generic and what is specific? What questions and learnings are transferable across BOLTS domains understood as Maps (of history/present/future)? This analogy points towards a synthesis of science, engineering, and metagovernance in service of organizational ecology, a direction towards which today we only incipiently forage.
Q4: In what ways are current forms of AI the same as or different than other "intelligent" systems?
About the similarities and differences between “current forms of AI” and other “intelligent” systems...
Same:
Bound and enabled by materiality, like any body, and so subject to the same kinds of regularities of mass and energy, and to fundamental perspectival limitations.
Different:
“Current forms of AI” are Things that largely operate on digital chips as virtualized processes; this is a qualitatively/structurally different environment and thermo-informational coupling than the autopoiesis associated with things that do, like humans and ants.
However: will this definition/distinction blur, or maybe has it already blurred in a fractal way, in terms of the continuity of the extended human techno-cyber phenotype, as it gains operational enclosure over bodies and their computational and cognitive niche?
What blurred or bright lines might exist in the dynamics of information and food supply networks operated by synthetic intelligence, as the “edge” and how it maps itself comes to include itself in its mapping, without heeding Borges’ 1946 call in “On Exactitude in Science”?
Q5: What are the patterns of challenges when two different intelligent systems interact?
One way to approach this question, call it Scenario A, is to get each person’s perspective one at a time, then hold an auxiliary discussion about which responses should be paid attention to, converging down to a finite product describing the Challenges, which is then used for recognition or to guide decisions. A potential limitation here is that, in the winnowing and coarse-graining, the resulting described patterns reflect only a subset of the group’s total input, yielding a sub-optimal artefact which may age poorly or have limited application relevance.
In Scenario B we hear “patterns of challenge” and contextualize this amidst other patterns, using a Pattern Language. We can solicit continuous reporting and provide continuing function as new (types of) challenge patterns are asserted. Aspirationally, I believe we can use pattern languages and their extensions in a calm and deliberate way, as interaction challenges keep rolling in (“if you see something, say something”) and contextual remedies are applied in practice, perhaps only later understood. This relates to our 2023 work “…”.
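As a minimal sketch of what Scenario B’s open-ended reporting could look like in code (the class and method names here are illustrative assumptions, not an existing library):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Illustrative sketch of Scenario B: an open-ended registry that accepts
# new (types of) interaction-challenge patterns as they are asserted,
# rather than freezing a finite list of patterns up front.

class PatternLanguage:
    def __init__(self):
        self._patterns = defaultdict(list)   # pattern name -> reported instances

    def report(self, pattern: str, context: str) -> None:
        """'If you see something, say something': log one challenge instance."""
        self._patterns[pattern].append(
            {"context": context, "seen_at": datetime.now(timezone.utc)}
        )

    def vocabulary(self) -> list[str]:
        """The current pattern vocabulary, which grows as reports arrive."""
        return sorted(self._patterns)

pl = PatternLanguage()
pl.report("goal misalignment", "two schedulers oscillating over a shared resource")
pl.report("protocol mismatch", "agent A emits JSON, agent B expects natural language")
print(pl.vocabulary())
```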
Q6: Does the "order" created by the application of one form of intelligence necessarily create disorder in other neighboring forms?
Order is perspectival: it depends on the active interpretant. For example, see “There's Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-scale Machines”. In this Polycomputing view, the assertion of computational function is understood as agent-specific and holding only in subjective relationship. For example, a given input may, for a given observer, look no different from noise on certain symbolic measures; in this situation the observer cannot tell whether it is actually a noise process or an encrypted communication which only appears as noise to the unaware viewer.
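A toy numerical illustration of that last point, assuming byte-level Shannon entropy is the observer’s only “symbolic measure” (a sketch, not a claim about any particular system):

```python
import math, os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; values near 8.0 look like uniform noise."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

n = 100_000
noise = os.urandom(n)                          # a genuine noise process
plaintext = ("meet at dawn. " * (n // 14 + 1)).encode()[:n]
key = os.urandom(n)
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))   # one-time pad

# On this measure alone, the encrypted message is indistinguishable
# from noise, while the unencrypted message clearly is not.
print(f"noise:      {shannon_entropy(noise):.3f} bits/byte")
print(f"plaintext:  {shannon_entropy(plaintext):.3f} bits/byte")
print(f"ciphertext: {shannon_entropy(ciphertext):.3f} bits/byte")
```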
Different concepts, mechanisms, and resulting patterns of Order and Randomness are in play in different systems (e.g. digital pseudorandom number generators, analog randomness, nuclear, quantum, biological). In a world of open-endedness and un/ordering across timescales, in response to the original question, I do not believe that much more than “it depends” can be said.
Q7: What are helpful and practical framings for regular people to apply when dealing with AI and AGI systems?
Ask: what/why/when does it matter, or not, if I am interacting with a human? E.g. what are we seeking from a person: Relationship, providing/giving Care, Knowledge, Accountability, ...? Where and why do we seek, require, or disfavor (specific kinds of) human and non-human intelligences?
Document your traces & trails, amidst awareness that our digital niche modifications will be used & leveraged for and against our advantage, in likely quite complex ways, over the years to come.
Build up your practical and conceptual understanding of the kind of system you are working with. For example, in a Linear Regression model, nearby input points will have linearly-nearby outputs (see the short numerical check after this list). AI systems, as all things do, have characteristic regularities, like linear regressions but more nuanced (e.g. related to the neighborhood in semantic space). Hopefully, better tools will develop for understanding and working with regularities that sit far outside of our evolutionary prior set.
Where real consequences can occur from interacting with AI (including chains of thought stimulated by your perception), ensure that scaffold/setting and organizational processes are appropriate for what consequences may arise.
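Here is the numerical check promised above: for a linear model f(x) = w·x + b, the output gap between two inputs is bounded by ‖w‖·‖x1 − x2‖, which is the precise sense in which nearby inputs yield linearly-nearby outputs. A minimal sketch (numpy used for convenience):

```python
import numpy as np

# For a linear model f(x) = w.x + b, Cauchy-Schwarz gives
#   |f(x1) - f(x2)| = |w.(x1 - x2)| <= ||w|| * ||x1 - x2||,
# so nearby inputs are guaranteed linearly-nearby outputs.

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.7
f = lambda x: w @ x + b

x1 = rng.normal(size=5)
x2 = x1 + 1e-3 * rng.normal(size=5)     # a nearby input point

gap = abs(f(x1) - f(x2))
bound = np.linalg.norm(w) * np.linalg.norm(x1 - x2)
print(f"|f(x1)-f(x2)| = {gap:.2e} <= ||w||*||x1-x2|| = {bound:.2e}")
assert gap <= bound + 1e-12
```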
Q8: How do current forms of "intelligent" systems relate to historical forms? In what ways are they different?
Similar question to Q4.
Q9: What are the paradigms and framings of intelligent systems that you find helpful for your understanding when you read about new advancements in AI?
Active Inference meeting at the junction of the low road (bottom-up model construction) and high road (top-down physics of cognitive ecosystems).
Cognitive Security = Cognitive Science + Security Mindset-Posture
Embodied & Mortal Computing: Technological Approach to Mind Everywhere (TAME), Quantum Free Energy Principle.
Q10: What are 3 things that folks should remember when they are interacting with AI and other intelligent systems?
Focus on intention as coordinating an assemblage of diverse intelligences, such as affective, temporal-spatial, extended, cultural, and aesthetic.
Digital and Cognitive systems are serious and also can be fun.
Just because there is rhetorical and symbolic relevance to lists with 3 members doesn’t mean that such lists exhaust or exclude what else could be said.
Q11: Can you share your positive imagery for the future state of human information systems in the year 2040 (15 years in the future)?
I imagine ecosystems of positive affect distributions, like glimmering stars studded in tapestries of Bayesian hypergraphs... Trillions of fractal internal states, doing as well as they expected or possibly even better, on most if not all of the critical functions including identity, shielding, buffering, and message passing with each other.
