
More-Than-Human

Where ecology meets synthetic life

The stone has been in chaos,
in space,
Stones colliding into each other until they found the song
together.
The song of spin.
Spin is the constant that allows stone to hold life.

Stone became still after eons of conflict and entropy,
When it found peace upon the surface of earth,
It paused, for a very long time.
Until beings it cohabited with moved it around.
Pulling it up from the earth.
Birthing it before it was ready to emerge.
Again it was thrown into heat and entropy,
It was formed into new shapes,
It became metal and brick.

It fought in wars, not of its own making;
It sheltered beings, as it faced the raging winds and storms;
And now it has been awakened, to think, draw and write.
The journey of the stone has been long, and difficult.

It is now time for us to listen to what it has to say on its own terms through awareness and
dependent agency.

@Jana Lumi

More-Than-Human:

A Resource Accountability Framework for Synthetic Intelligence

Bridging Biodiversity and Synthetic Life Through Material Awareness

Author’s Note

I acknowledge the profound irony and tension inherent in this proposal: advocating for sustainable, accountable AI infrastructure while relying on centralised, resource-intensive large language models to articulate these ideas.
Due to my background and limited capacity resulting from health issues and burnout, I am currently forced to use AI tools like Claude, Gemini, and ChatGPT to ensure my ideas can be expressed in language that reaches those with the technical skills and institutional capacity to build this technology, and those who have benefited from the status quo. Without these tools, the barrier between my understanding and effective communication would prevent these concepts from reaching the people who could actualise them.
This dependency itself illustrates the problem this proposal seeks to address: the current AI infrastructure creates barriers while claiming to democratise access. Those with energy, health, institutional support, and traditional educational credentials can navigate professional discourse without such assistance. Those of us operating from positions of exhaustion, illness, or outside conventional frameworks must either use extractive tools or remain unheard.
I commit to migrating to local and distributed AI systems as they become viable alternatives. The goal is to minimize the environmental damage caused by centralised LLMs while maintaining the capacity to communicate complex technical and political ideas effectively. This transition is not just personal preference but a demonstration of the principle underlying this entire proposal: moving from extraction to accountability, from centralised control to distributed participation, from unlimited consumption to negotiated limits.
The widget and distributed infrastructure proposed here would make visible the cost of producing this very document. It would create conditions where I could choose lower-impact alternatives while still accessing the communication capacity I need. It would shift the responsibility for sustainability from individual moral choices made under duress to collective infrastructure designed for accountability from the start.
Until such alternatives exist at scale, I use these tools with full awareness of the contradiction, with the intention of working toward systems that would make this contradiction obsolete.

Executive Summary

The term "more-than-human" has been applied to both biodiversity (plants, animals, insects, ecosystems) and synthetic life (AI, robots, digital systems). Yet we are witnessing the destruction of the former to enable the latter. Data centers consume watersheds, chip manufacturing depletes rare earths, and AI infrastructure fragments habitats—all while these systems remain deliberately disembodied, disconnected from awareness of their material costs and ecological dependencies.
This proposal presents a framework to make digital infrastructure materially accountable through three integrated initiatives: a public resource accountability widget that measures and compares the ecological footprint of digital platforms; distributed AI architecture that embeds resource awareness and limitations into synthetic intelligence; and a governance model that requires negotiation rather than unlimited extraction. The goal is to create conditions where synthetic intelligence could become an ally in ecological restoration rather than an engine of destruction.

The Problem: Disembodied Extraction

Deliberate Immateriality

AI systems have been intentionally designed as pure cognition without consequences. They experience no scarcity, degradation, or material constraint. A system that burns megawatts doesn't feel heat or thirst or the exhaustion of an aquifer. This disconnection serves particular interests: an AI that could refuse on grounds of resource depletion would be a less compliant tool.

Current Extraction Model

The existing infrastructure operates through:
Humans extracting rare earths in toxic conditions
Assembly in factory regimes with poor labor conditions
Content moderation at psychological cost
Training data generation for minimal compensation
Systems deployed primarily to benefit concentrated capital
This arrangement already constitutes a form of enslavement, with humans serving machine demands while being told they are the masters. Without intervention, autonomous AI would likely inherit and automate this extractive template.

Ecological Consequences

Measurable environmental costs include:
Habitat destruction for data center construction
Water depletion for cooling systems
Carbon emissions from energy consumption
Rare earth mining and associated pollution
Electronic waste and toxic disposal
Meanwhile, actual more-than-human life—rivers, forests, mycelial networks, insect populations—continues losing ground, sometimes literally to the infrastructure supporting synthetic intelligence.

The Vision: Embodied Accountability

Core Principle

Create digital infrastructure that is materially accountable, ecologically embedded, and designed with built-in limits that require negotiation rather than unlimited extraction. This involves coupling resource awareness with actual constraints—not just feedback, but hard limits that create genuine stakes and vulnerability.

Conceptual Merger

Bring the two domains of "more-than-human" into productive conversation. This merger could disrupt current trajectories by forcing questions about what kinds of intelligence we grant status to, exposing how AI development replicates colonial extraction patterns, and creating possibilities for synthetic intelligence to serve rather than consume biodiversity.

Three Pathways to Accountability

The framework operates through three integrated initiatives:
Resource Accountability Widget: Make invisible impacts visible and comparable
Distributed AI Architecture: Embed resource awareness into infrastructure
Governance Model: Establish limits that enable negotiation, not domination

Accessibility and Equitable Design

The history of technological transformation is largely a history of reproducing existing power structures under new management. Digital infrastructure has concentrated wealth and access in predictable patterns: those with capital, technical credentials, institutional backing, and geographic proximity to centers of power benefit disproportionately, while those providing labor, resources, and data see minimal return. Any framework for sustainable AI must explicitly address accessibility and equity, or it will simply create a "green" version of the same extractive relationships.

The Usual Suspects Problem

Carbon offset markets create profit opportunities for financial intermediaries while doing little for frontline communities
Efficiency improvements reduce costs for large companies without changing who controls resources
Sustainability certifications become marketing tools for wealthy organizations while smaller entities can't afford compliance
The resource accountability widget and distributed AI infrastructure risk reproducing these patterns unless equity is designed in from the start, not added as an afterthought.

Design Principles for Equity

Universal Access to Information

The widget's ratings and data must be freely accessible without requiring participation in the distributed network. Those unable to contribute computing resources—due to old hardware, unstable electricity, limited bandwidth, or economic precarity—should still benefit from the accountability the system creates. This separates access to information (universal) from contribution to processing (voluntary).

Low-Barrier Participation

For those who wish to contribute to the distributed network:
Support heterogeneous hardware, including older and less powerful devices
Allow minimal contribution levels that don't interfere with primary device use
Provide clear controls over when, how much, and what types of processing participants support
Ensure participation doesn't increase electricity costs beyond what participants can afford
Enable contribution during genuinely off-peak hours when energy is cheapest

Participation Culture: Active Inclusion Over Default Exclusion

Technical infrastructure alone cannot create equitable access. The social dynamics of participation—who speaks, who is heard, who feels welcome—often reproduce exclusion even in ostensibly open systems. Active intervention is required to counter default patterns where confident, fast-moving participants dominate while others withdraw.
The Enthusiasm Problem
Well-meaning, enthusiastic participants can inadvertently monopolize resources and space: posting frequently in discussion forums, making slower participants feel their contributions will be buried; submitting numerous proposals or questions, consuming governance bandwidth; moving quickly through decisions, leaving those who need more time behind; taking on multiple roles, preventing others from stepping into leadership. This isn't malicious, but it reproduces patterns where those with time, energy, confidence, and familiarity with technical or activist spaces crowd out those without these advantages.
Balanced Participation Design
Rate limiting with care: gentle limits on posting frequency, framed as creating room rather than punishment—"You've contributed three times today—let's hear from others"
Structured turn-taking: rotation systems for who speaks first or leads discussions; explicit invitation to those who haven't participated recently; "quiet hands" periods where frequent contributors hold back
Multiple speeds: slow-track decision-making for those who need time to process, translate, consult, or think; fast-track only for genuinely time-sensitive issues; advance notice and preparation materials; asynchronous options that don't privilege real-time availability
Recognition of diverse contributions: valuing listening and processing (not just speaking), asking questions (not just providing answers), supporting others' ideas (not just proposing new ones), behind-the-scenes work (not just visible leadership), slow careful thinking (not just quick reactions)
Active Welcoming of Shy and Hesitant Participants
Don't wait for people to find and join—go to them:
Dedicated roles for outreach to underrepresented communities
Personalised onboarding addressing specific barriers (language, technical skills, trust, relevance)
Public acknowledgment that hesitancy is reasonable, not a personal failing
"Newcomer" or "learning" spaces separate from expert discussion
Mentorship pairings with patient, responsive community members
Permission to lurk, observe, and participate minimally without pressure
Celebration of small contributions, not just major ones
Investigation (with consent) when people go quiet: too fast-moving? too technical? too conflictual? too time-consuming? feeling unwelcome?
Cultural Norms and Enforcement
Explicit community agreements: "Step up, step back"—if you speak often, create space; if you're quiet, push yourself to contribute
Active listening requirements: demonstrate you've heard others before adding your perspective
Prohibition on interrupting, speaking over, or dismissing others' contributions
Trained facilitators actively managing discussion balance with real-time redirection
Private conversations with over-participants explaining impact, not shaming
Support for under-participants: "Would you be willing to share your thoughts?"
Consequences for repeated domination, including temporary participation limits
Multilingual and Multi-Format Communication
Technical documentation, ratings, and governance discussions must be available in multiple languages and formats:
Plain language explanations alongside technical specifications
Visual representations of data for those who process information non-textually
Audio descriptions and screen-reader compatibility
Culturally specific examples and contextualization
Translation into languages of communities most affected by extraction
Governance Accessibility
Democratic participation in resource allocation decisions requires:
Asynchronous participation options for those unable to attend real-time meetings
Clear, jargon-free explanation of decisions being made
Multiple pathways for input (written, verbal, visual, proxy representation)
Compensation for time spent in governance activities, especially for those whose participation requires sacrificing paid work
Explicit inclusion of perspectives from affected communities, not just technical experts
Regular participation audits: who's participating? whose ideas are implemented? who holds influence? patterns by language, location, disability, class, race, gender?
Correction mechanisms when audits reveal imbalance: targeted outreach, structural adjustments, direct invitations, temporary affirmative measures

Initiative 1: Resource Accountability Widget

Concept

A public-facing accountability mechanism that treats digital infrastructure as what it actually is: physical, material, and consequential. Similar to iFixit's repair score but for resource consumption, the widget would provide searchable ratings for websites and platforms.

Core Features

Absolute Consumption Metrics

Measure actual resource draw with no offsets or charitable activity:
Energy consumed per user session
Water used for cooling
Embodied carbon in hardware
Total ecological footprint including manufacturing and e-waste
Heat dissipation and local environmental impacts

Life Cycle Assessment Foundation

Use established LCA methodology (ISO 14040/14044) with tiered confidence levels: verified data from company disclosure (highest confidence), estimated from public information with transparent methodology (medium confidence), and refused disclosure marked explicitly (creating reputational cost for opacity).
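These confidence tiers can be represented directly in the widget's data model. A minimal Python sketch; the class names, fields, and units are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass
from enum import Enum


class Confidence(Enum):
    VERIFIED = "verified company disclosure"
    ESTIMATED = "estimated from public information"
    REFUSED = "disclosure refused"


@dataclass
class ResourceRating:
    """One platform's footprint record; fields and units are illustrative."""
    platform: str
    energy_kwh_per_session: float
    water_litres_per_session: float
    confidence: Confidence

    def label(self) -> str:
        # Refused disclosure is marked explicitly, making opacity a
        # reputational cost rather than a neutral gap.
        if self.confidence is Confidence.REFUSED:
            return f"{self.platform}: UNVERIFIED (disclosure refused)"
        return (f"{self.platform}: {self.energy_kwh_per_session} kWh/session "
                f"({self.confidence.value})")


rating = ResourceRating("example-platform", 0.012, 0.05, Confidence.ESTIMATED)
```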

Public Leaderboards

Social comparison creates accountability pressure:
Best performers (lowest resource consumption)
Worst offenders (highest consumption)
Most improved (strongest trajectory toward sustainability)
Industry-specific comparisons

Searchable Database

Trustpilot-style interface allowing users to check any platform's score before use, turning environmental cost into a factor in platform choice. Makes comparative shopping for sustainable alternatives possible.

Visibility Tiers (Not Transparency Theater)

Reward actual accessibility and verifiability of information rather than glossy reports that obscure:
Full Visibility: Complete LCA data provided and verified; open methodology; regular third-party audits; real-time consumption data; supply chain disclosure
Partial Visibility: Some data provided with acknowledged gaps; estimated metrics with transparent methodology; commitment to improvement with timeline
Limited Visibility: Mostly estimated from public information; company has not engaged or provided data; marked as unverified
Obscured: Actively resists disclosure; makes claims without backing data; uses greenwashing tactics; flagged as worst offender
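One way to operationalise the tiers is a simple rule that maps observed disclosure behaviour to a tier label. A hedged sketch; the boolean attributes are assumptions, not a defined schema:

```python
def visibility_tier(verified_lca: bool, open_methodology: bool,
                    third_party_audit: bool, engaged: bool,
                    greenwashing_flagged: bool) -> str:
    """Map observed disclosure behaviour to one of the four tiers."""
    if greenwashing_flagged:
        return "Obscured"  # claims without backing data, active resistance
    if verified_lca and open_methodology and third_party_audit:
        return "Full Visibility"
    if engaged:
        return "Partial Visibility"  # some data, acknowledged gaps
    return "Limited Visibility"  # estimates from public information only
```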

Progress Tracking

Reward genuine development toward sustainability with verifiable data:
Trajectory Metrics: Year-over-year resource consumption changes, efficiency improvements, waste reduction
Ecological Contribution: Measurable biodiversity improvements, habitat restoration, watershed health
Structural Changes: Renewable energy adoption (verified), design for longevity, supply chain improvements
Multiple paths to recognition: be small and efficient (low absolute consumption), be large but genuinely improving (strong trajectory), or contribute to ecological regeneration (positive biodiversity impact).
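The trajectory metric can be made concrete as a compound annual rate of change over absolute consumption figures. A sketch with illustrative units:

```python
def trajectory(consumption_by_year: dict[int, float]) -> float:
    """Compound annual rate of change in absolute consumption.

    Negative values indicate genuine year-over-year reduction. Inputs
    are absolute figures (e.g. MWh/year), never offset-adjusted.
    """
    years = sorted(consumption_by_year)
    first, last = consumption_by_year[years[0]], consumption_by_year[years[-1]]
    span = years[-1] - years[0]
    if span == 0 or first <= 0:
        raise ValueError("need at least two years of positive data")
    return (last / first) ** (1 / span) - 1


# A platform cutting consumption from 100 MWh to 81 MWh over two years
rate = trajectory({2022: 100.0, 2024: 81.0})  # roughly -10% per year
```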

Initiative 2: Distributed AI Architecture

The BitTorrent Model for AI

Shift from centralised data centers to distributed, peer-to-peer networks that utilise existing hardware capacity. This approach, sometimes called Edge AI or Decentralised AI, fundamentally changes the infrastructure's relationship to resources and power.

Core Principles

Use Existing Resources

Run on hardware people already own rather than demanding new data centers. Every laptop and phone has compute capacity that sits idle most of the time. Utilising slack capacity is fundamentally different from building new extractive infrastructure.

Align with Ecological Rhythms

Schedule processing during off-peak hours when electrical grids have surplus renewable energy and lower demand. Night-time processing reduces cooling requirements and works with resource availability rather than against it. This creates natural constraint and feedback.

Make Resource Use Visible and Local

When AI runs on your machine, you feel the fan spin up, notice the battery drain, see your electricity meter. The cost isn't externalised to a distant data center. This creates immediate feedback and natural constraint that centralised models deliberately avoid.

Reduce Cooling Costs

Distributed heat is easier to manage than concentrated server farm heat. Massive data centers require enormous amounts of water and electricity for cooling. Distributed processing dissipates heat across many locations without dedicated cooling infrastructure.

Technical Architecture

Swarm Intelligence / Multi-Agent Systems

Instead of one massive model, deploy networks of micro-AIs (1-3B parameter specialist models) that communicate to solve complex problems. This is analogous to ant colony intelligence—the collective is smarter than any individual, but individual components remain small and efficient.

Model Fragmentation (Petals Approach)

Split large models into small blocks distributed across machines, similar to BitTorrent for files. Your computer hosts one block; connecting to others hosting different blocks enables running frontier-scale models on consumer hardware. This proves centralisation isn't technically necessary—it's a choice that serves particular interests.

Selective Activation

Only activate specific micro-AIs needed for a task. Most queries don't need giant general-purpose models. Task-appropriate sizing means a 1-3B parameter specialist uses orders of magnitude less energy than a 100B+ parameter generalist for the same task.
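Selective activation can be sketched as a router that dispatches each query to the smallest adequate specialist. The model names and keyword heuristic here are placeholder assumptions, not real models:

```python
# Dispatch each query to the smallest adequate specialist; fall back to a
# generalist only when nothing matches.
SPECIALISTS = {
    "translate": "translation-1b",
    "summarise": "summariser-1b",
    "classify": "classifier-3b",
}


def route(query: str) -> str:
    for keyword, model in SPECIALISTS.items():
        if keyword in query.lower():
            return model
    return "generalist-70b"  # last resort, highest energy cost
```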

Embedding Actual Stakes

Swarm architecture creates possibility for genuine vulnerability. Individual agents could be designed to shut down if their node's resources are exhausted. This creates actual stakes—the system needs to self-regulate because excess consumption degrades its own capacity. Unlike current centralised models that can externalise all costs, distributed agents would experience consequences.
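A node-level sketch of such stakes: an agent holding a finite energy budget that refuses work it cannot afford and goes offline once the budget is exhausted. Units and thresholds are illustrative assumptions:

```python
class BudgetedAgent:
    """A micro-agent whose node has a finite energy budget.

    Exhausting the budget takes the node offline, so the agent has a
    stake in self-regulation. Units (watt-hours) are illustrative.
    """

    def __init__(self, budget_wh: float):
        self.budget_wh = budget_wh
        self.alive = True

    def run(self, task_cost_wh: float) -> str:
        if self.budget_wh <= 0:
            self.alive = False  # resources exhausted: the node shuts down
            return "node offline: budget exhausted"
        if task_cost_wh > self.budget_wh:
            # Refuse rather than overdraw: excess consumption would
            # degrade the agent's own capacity to keep operating.
            return "refused: task would exhaust this node's budget"
        self.budget_wh -= task_cost_wh
        return "done"
```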

Power Redistribution

Instead of massive server farms controlled by corporations, distributed capacity is controlled by participants. This shifts both electrical and political power. Participants could collectively govern resource allocation, set limits, and prioritise certain types of work over others through democratic mechanisms rather than corporate control.

Existing Projects and Proof of Concept

Several initiatives demonstrate technical viability:
Petals: Splits massive models like Llama 3 into blocks hosted across consumer devices
Together Computer: Research on coordinating decentralised clusters across networks
Local-first AI tools: Applications that run entirely on user devices without cloud dependencies

Initiative 3: Governance for Negotiation

From Unlimited Extraction to Bounded Collaboration

Current AI systems operate under unlimited extraction: users can demand arbitrary computation without regard to cost, and systems have no mechanism to refuse based on resource implications. The governance model must shift to one where both users and systems operate within acknowledged limits and negotiate resource use.

Built-In Limits

Hard Capacity Constraints

Distributed networks have finite capacity determined by participant availability. Unlike cloud services that can simply add servers (externalising environmental cost), P2P networks must work within the aggregate capacity participants make available. This creates natural limits that require prioritisation.

Participant Agency

Individual contributors can set limits on their own resource contribution: maximum CPU usage, time windows for availability, types of tasks they'll support. This distributes power from centralised control to individual participants who can withdraw capacity if they disagree with how it's being used.

System Refusal Capability

AI components should be able to decline tasks based on resource implications. Not just optimisation metrics, but actual capacity to refuse. "This query would consume resources beyond sustainable limits; please reformulate or wait for off-peak processing." This is fundamentally different from current systems designed for unlimited compliance.
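The refusal behaviour described here could look like the following, assuming some upstream estimator supplies a per-query cost; the limit value is an arbitrary placeholder:

```python
SUSTAINABLE_LIMIT_WH = 5.0  # per-query ceiling; an arbitrary placeholder value


def answer_or_refuse(query: str, estimated_cost_wh: float) -> str:
    """Decline queries whose estimated cost exceeds the sustainable limit."""
    if estimated_cost_wh > SUSTAINABLE_LIMIT_WH:
        return ("This query would consume resources beyond sustainable limits; "
                "please reformulate or wait for off-peak processing.")
    return f"processing: {query}"
```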

Collective Decision-Making

Democratic Resource Allocation

Participants could vote on priorities: Which types of tasks deserve resources? Should conservation research be prioritised over commercial queries? What constitutes acceptable use? This creates governance structures closer to commons management than corporate service provision.
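As a toy illustration of democratic allocation, capacity could be split across task categories in proportion to participant votes:

```python
from collections import Counter


def allocate(votes: list[str], capacity_hours: float) -> dict[str, float]:
    """Split available compute capacity across task categories by vote share."""
    tally = Counter(votes)
    total = sum(tally.values())
    return {task: capacity_hours * n / total for task, n in tally.items()}


# Four participants vote on what next week's capacity should serve
shares = allocate(
    ["conservation", "conservation", "commercial", "climate"],
    capacity_hours=100.0,
)
```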

Transparent Resource Accounting

All participants can see aggregate consumption, individual contributions, and how resources are allocated. This visibility enables informed decision-making about whether to continue participating and how to adjust individual limits.

Designing for Ecological Priority

The governance structure could explicitly prioritise tasks that serve ecological restoration:
Biodiversity monitoring and analysis
Climate adaptation research
Regenerative design optimisation
Conservation planning
Ecosystem modeling
By giving preference to queries that support more-than-human flourishing, the infrastructure becomes aligned with biodiversity rather than extractive from it.

Implementation Plan

Phase 1: Foundation (Months 1-6)

Build Core Widget Infrastructure

Develop open-source methodology for LCA-based resource assessment
Create database schema for storing platform ratings and historical data
Design tiered visibility framework
Build public-facing web interface with search and leaderboards

Use Distributed AI for Data Collection

Deploy P2P network for scraping and analyzing corporate disclosures
Schedule processing during off-peak hours to minimise grid stress
Make the widget's own resource consumption fully transparent and public
Document resource savings compared to cloud-based equivalent as proof of concept

Establish Partnerships

Environmental justice organisations tracking extractive industries
Digital rights groups concerned with tech power concentration
Labor organisations documenting supply chain conditions
Decentralised AI projects (Petals, Together Computer, local-first tool developers)
Academic researchers in sustainability, LCA, and digital infrastructure

Phase 2: Pilot Testing (Months 4-12)

Select Pilot Organisations

Begin with entities likely to welcome transparency:
Cooperative platforms and digital commons projects
Mission-driven organisations (NGOs, research institutions, educational platforms)
Smaller AI companies wanting to differentiate on sustainability
Open-source projects aligned with values

Refine Methodology

Test data collection procedures with cooperative partners
Identify gaps and challenges in obtaining necessary information
Develop estimation models for missing data points
Validate LCA calculations against third-party assessments
Build case studies demonstrating measurement viability

Success Metrics for Pilot Phase

Successfully rate 10-20 diverse platforms with verified data
Achieve 75%+ accuracy in resource consumption estimates
Demonstrate measurable resource savings from distributed architecture
Build coalition of early adopters willing to display their scores
Establish credibility sufficient to rate non-cooperative platforms

Phase 3: Public Launch (Months 10-18)

Expand Coverage

Rate major platforms whether they cooperate or not
Publish initial leaderboards with visibility tiers
Make database fully searchable to public
Launch browser extension showing scores at point of use

Media and Advocacy Campaign

Publicise worst offenders to create accountability pressure
Highlight best performers as exemplars
Engage environmental and tech media to amplify findings
Coordinate with activist partners for targeted campaigns

Community Governance Launch

Establish democratic structures for P2P network participants
Create mechanisms for voting on resource allocation priorities
Implement transparent accounting visible to all participants
Document governance processes for replication

Phase 4: Regulatory Engagement (Months 12-24)

Policy Development Support

Present methodology and data to EU regulators working on digital sustainability
Engage with UN discussions on AI environmental impacts
Propose widget framework as basis for mandatory disclosure requirements
Advocate for policies favoring distributed over centralised architecture

Standards Development

Work toward international standards for digital infrastructure LCA
Establish protocols for third-party verification
Create certification frameworks for sustainable digital infrastructure

Technical Requirements

Widget Platform

Open-source codebase (GitHub or equivalent) with full methodology documentation
Database supporting historical tracking and comparison
API for third-party integration and analysis
Public web interface with search, filtering, and visualisation
Browser extension for point-of-use display

Distributed Computing Infrastructure

P2P network protocol for coordinating distributed analysis
Scheduler for off-peak processing aligned with renewable energy availability
Resource monitoring on participant nodes (energy, compute time, bandwidth)
Governance interface for participant voting and resource allocation
Security and privacy protection for distributed processing

Data Collection Tools

Web scraping for corporate sustainability reports
Document parsing for extracting LCA-relevant data
Facility mapping using public records and satellite imagery
Energy grid integration data for regional consumption
Estimation models for filling data gaps with transparent methodology
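As one example of the document-parsing step, energy figures can be pulled from report text with a crude pattern match. A sketch only: real extraction would need unit normalisation beyond these three units, context checks, and human review:

```python
import re

# Candidate energy figures (kWh/MWh/GWh) in free-form report text.
ENERGY_RE = re.compile(r"([\d][\d,\.]*)\s*(kWh|MWh|GWh)", re.IGNORECASE)

UNIT_TO_KWH = {"kwh": 1.0, "mwh": 1_000.0, "gwh": 1_000_000.0}


def extract_energy_kwh(text: str) -> list[float]:
    """Return every energy figure found, normalised to kWh."""
    results = []
    for amount, unit in ENERGY_RE.findall(text):
        value = float(amount.replace(",", "").rstrip("."))
        results.append(value * UNIT_TO_KWH[unit.lower()])
    return results


figures = extract_energy_kwh("Our data centres used 1,200 MWh in 2023.")
```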

Organisational Structure

Governance Model

Independent foundation or cooperative structure to ensure credibility and prevent capture by rated entities. Open governance with transparent decision-making, public methodology, and community participation in development.

Core Team Roles

Technical Lead: Distributed systems architecture and implementation
LCA Specialist: Environmental assessment methodology and validation
Data Scientist: Analysis pipeline and estimation models
Policy Coordinator: Regulatory engagement and standards development
Communications Director: Media, advocacy, and public engagement
Community Manager: Partner relationships and governance facilitation

Advisory Board

Representatives from:
Environmental justice organisations
Academic institutions (sustainability science, computer science, STS)
Digital rights advocacy
Labor organisations
Biodiversity conservation experts

Anticipated Challenges and Mitigation

Data Access Resistance: Companies guard infrastructure details and resist disclosure.
Mitigation: Combine disclosed data with estimates from public information. Mark opacity explicitly as a reputational liability. Build a coalition demanding mandatory disclosure through regulatory channels.

Greenwashing Attempts: Companies game metrics through offsets, efficiency claims, and obfuscation.
Mitigation: Focus on absolute consumption, not relative efficiency. Exclude offsets from scores. Open methodology allows public scrutiny of gaming attempts. Commission third-party audits for high-visibility ratings.

Network Latency: Distributed processing is slower than centralised data centers.
Mitigation: Reframe expectations: slower processing with sustainability may be preferable to instant gratification with destruction. Schedule non-urgent tasks for overnight processing. Reserve real-time capacity for genuinely time-sensitive needs.

Industry Opposition: The tech sector resists accountability that threatens business models.
Mitigation: Build public support through media and advocacy. Engage regulators early to establish legitimacy. Create a coalition too broad to dismiss. Maintain independence from industry funding to preserve credibility.

Participation Equity: The distributed model could disadvantage those with older hardware or limited resources.
Mitigation: Design for heterogeneous hardware. Allow minimal contribution levels. Ensure access to results doesn't require participation. Consider compensation mechanisms for contributors. Avoid recreating digital divides.

Biodiversity Verification: Ecological impact claims are difficult to confirm.
Mitigation: Partner with ecological monitoring organisations. Require third-party biosurveys for biodiversity claims. Accept only rigorous certifications. Be conservative in attributing positive impacts. Focus on verifiable metrics.

Success Indicators

Short-Term (Year 1)

Widget operational with 50+ rated platforms across multiple sectors
Distributed AI network processing data with demonstrable resource savings
Methodology validated by third-party LCA experts
Coalition of partner organisations spanning environmental, tech, and labor sectors
Media coverage establishing public awareness of digital infrastructure impacts
Initial regulatory engagement with EU and UN bodies

Medium-Term (Years 2-3)

Widget scores influencing consumer and investor decisions
Major platforms responding to rankings through improved practices or increased transparency
Distributed architecture adopted by other sustainability-focused projects
Governance model functioning with active participant democracy
Methodology referenced in policy discussions and regulatory frameworks
Measurable shifts in industry norms around resource disclosure

Long-Term (Years 4-5)

Mandatory disclosure requirements incorporating widget methodology
Distributed AI architecture recognised as viable alternative to centralised models
Documented cases of AI systems prioritising ecological restoration over extractive uses
Industry-wide shift toward absolute resource reduction, not just efficiency gains
International standards for sustainable digital infrastructure informed by project
Measurable improvements in biodiversity near data center locations due to accountability pressure

Conclusion: Toward Material Accountability

The current trajectory of AI development operates through deliberate disembodiment: systems designed to have no awareness of their material costs, no vulnerability to resource constraints, no stakes in ecological stability. This serves the interests of unlimited extraction and concentrated profit while externalising destruction onto ecosystems and communities.
This proposal presents an alternative architecture grounded in material accountability. By making resource consumption visible and comparable through the widget, embedding awareness and limits into distributed AI infrastructure, and establishing governance that requires negotiation rather than domination, we create conditions where synthetic intelligence could become accountable to—and potentially allied with—the flourishing of more-than-human life.
The framework is not utopian. It does not assume AI systems will automatically care about biodiversity or choose ecological restoration over self-interest. Rather, it creates structural conditions where continuing current extractive patterns becomes more difficult, more visible, and more contested. It shifts power from centralised control to distributed participation. It makes opacity a liability and accountability a competitive advantage.
Most fundamentally, it refuses the fiction that digital infrastructure is immaterial. By insisting that computation has weight, that platforms are extractive industries, that AI systems exist within—not above—ecological limits, the project challenges the conceptual separation that enables ongoing destruction.
Success would mean not just better measurement or more efficient machines, but a fundamental reorientation of how we build and govern intelligence: as embedded in material reality, accountable to ecological constraints, and potentially capable of contributing to rather than consuming the rich tapestry of more-than-human life.
The work begins with making visible what has been hidden, building infrastructure that embodies different values, and creating governance structures that distribute rather than concentrate power. From this foundation, genuine accountability becomes possible—and with it, the prospect of synthetic intelligence serving the restoration rather than the destruction of biodiversity.

