
AI, Philosophy and Society: philosophical analysis of the objects at hand - epistemology and ethics.
To Do
Connecting ethics and epistemology of AI
“First, we move from trusting the output of an AI system to trusting the process that leads to the outcome.”
“how to create the conditions to internalize values into the whole process of design, implementation, use, and assessment of an AI system, in which values (epistemic and non-epistemic) are explicitly considered at each stage and inspectable by every salient actor involved at any moment.”
“Thus, any time we talk about 'process', it is not merely the algorithmic process that we have in mind but the whole process from design to implementation and use, which of course, does include technical questions about algorithmic procedures.”
“Because we think ethics is not a cherry on the cake, relegated to a post hoc analysis, we start from epistemology and seek to identify relevant 'joints' of the process at which ethics must and should come in. In this sense, we will speak of internalizing values already at the design stage of an AI system. In particular, we argue that many foreseeable, undesirable social consequences can be internalized in the design process in ways that naturally extend precautionary and legal practices. We think the strategies we start developing here can be expanded, promoted, and taught in computer science departments and design schools, internalized in corporate missions, and help create a culture of responsible AI.”
“we shift focus: we are interested in where, in the whole process of design, implementation, use, and assessment, these issues arise, and not just in what kind of ethical issues arise. In other words, we take the mapping of Mittelstadt et al. (2016) to be a valuable tool for running an ethical assessment of an AI after the fact. But our concerns are already at the level of design: how can we anticipate the concerns identified in the map while we develop an AI?”

“While Kearns and Roth have quite a lot to say about possible trade-offs, they do not provide a conceptualization of how to think about the internalization of values and social ends in the design process. In fact, at key junctures they leave such decisions to 'society' and 'policy-makers'. Our point is that some of these key junctures are also in the hands of those who design algorithms, and that is why the question of how to internalize values is so important.”
“But questions about explainability and about ethical compliance can be asked in different ways, or at different levels of abstraction”
“There is a lot to learn from Computational Reliabilism, and we aim to build on CR to develop our epistemology for glass box AI”
“One aspect of CR that may need improvement is the number of stakeholders, or actors, included in the conceptualization of reliabilism. We think it is important to give visibility to as many relevant actors as possible, but the definition of CR, in the current formulations, mentions only one, i.e., the cognitive agent assessing the process. But where are the designers? And where are the quality control managers and users or the evaluator of the AI system? In the literature, some contributors emphasized the need to discuss different actors or stakeholders. For instance, according to Zednik (2021), there are different stakeholders affected by the opacity of an AI system, and his solution is to identify different levels of explanation needed for different stakeholders (drawing on literature on explanation from philosophy of science).”
“We illustrate and articulate this by offering a framework for incorporating attention to harms that affect intersectionally vulnerable populations into the design process.”
This process must be shown always to sit within a politico-economic context; it cannot avoid one, so the trusted process must take this into consideration. The point is to democratise the process: otherwise you rely on dictatorial benevolence, even if ethical considerations are implemented throughout. Who checks, and how? Everyone does, at once, both experts and non-experts: the public and non-profit institutions. We also talk of the whole process. This can be quite vague and large-scale, but the point concerns the underlying incentives beneath it all, which differ depending on the decision-making parties involved. It should not be ex post; we must internalise values. This extends the framework from being passively taught to being infrastructurally implemented into private-sector regulation. Compare with the next reading below; this is important. I will discuss how to internalise values and social ends in the design process, bringing both texts together to argue that a certain design process for algorithms should be policy. Look at section 3.2 to see what these LoAs are; for my purposes, the socio-economic LoA is the one that matters. Read more about CR, specifically the CR of an algorithm design process. It seems that a DAO infrastructure with the voters as part of it would have a high CR rating: it is reliable in terms of verification and validation, robustness and interoperability, and the history and expert knowledge of the DAO members. The process of a DAO-governed algorithm would have a high CR rating, offering a digital-infrastructure framework for incorporating the prevention of harms that affect intersectionally vulnerable populations of economic agents into the design process.
The ethics of algorithms: Mapping the debate
Definition of algorithms, and AI systems
“Algorithms are inescapably value-laden. Operational parameters are specified by developers and configured by users with desired outcomes in mind that privilege some values and interests over others.”
“‘Algorithm’ has an array of meanings across computer science, mathematics and public discourse. [T]he term is [often] used not in reference to the algorithm as a mathematical construct, but rather the implementation and interaction of one or more algorithms in a particular program, software or information system.”
“Our aim here is to map the ethics of algorithms, with ‘algorithm’ interpreted along public discourse lines.”
“Here, we follow Hill's (2016: 47) formal definition of an algorithm as a mathematical construct with “a finite, abstract, effective, compound control structure, imperatively given, accomplishing a given purpose under given provisions.” However, our investigation will not be limited to algorithms as mathematical constructs. As suggested by the inclusion of ‘purpose’ and ‘provisions’ in Hill's definition, algorithms must be implemented and executed to take action and have effects. The popular usage of the term becomes relevant here. References to algorithms in public discourse do not normally address algorithms as mathematical constructs, but rather particular implementations. Lay usage of ‘algorithm’ also includes implementation of the mathematical construct into a technology, and an application of the technology configured for a particular task. A fully configured algorithm will incorporate the abstract mathematical structure that has been implemented into a system for analysis of tasks in a particular analytic domain. Given this clarification, the configuration of an algorithm to a specific task or dataset does not change its underlying mathematical representation or system implementation; it is rather a further tweaking of the algorithm's operation in relation to a specific case or problem.
Accordingly, it makes little sense to consider the ethics of algorithms independent of how they are implemented and executed in computer programs, software and information systems. Our map will include ethical issues arising from algorithms as mathematical constructs, implementations (technologies, programs) and configurations (applications). Where discussion focuses on implementations or configurations (i.e. an artefact with an embedded algorithm), we limit our focus to issues relating to the algorithm's work, rather than all issues related to the artefact.”

The aim here is to analyse the innovation cycle of AI systems. Taking inspiration from Hill, I take the public-discourse interpretation of ‘algorithm’ and give it a less ambiguous term, ‘AI system’, to delineate the implementation of the algorithmic mathematical structure into a technology and the application of the technology configured for a specific task. So, ‘AI system’ is to be interpreted as the implementation (technologies and programs) and configuration (applications, artefacts with embedded algorithms) of algorithms as mathematical constructs.
While Mittelstadt et al. (2016) report that it “makes little sense to consider the ethics of algorithms independent of how they are implemented and executed in computer programs, software and information systems”, I extend the scope: it makes little sense to consider the ethics of AI systems (as defined above) independent of who makes the decisions about how they are implemented and configured. The aim here is to discuss how decision-making is structured across the innovation cycle of AI system designers.
The Ethical Algorithm
"The algorithms under consideration in The Ethical Algorithm are of a different nature, and the ethical issues are more complex. These algorithms are of fairly recent origin. They arise as we try to make use of vast collections of data to make more-accurate decisions: for example, using income, credit history, current debt level, and education level to approve or disapprove a loan application. A second example would be the use of high school GPA, ACT or SAT scores, and extra-curricular activities to determine college admissions. The algorithms under consideration use machine-learning techniques (a branch of artificial intelligence) to look at the success rates of past student admissions and instruct the machine-learning algorithm to determine a set of criteria that successfully distinguish (with minimal errors) between those past students who graduated and those who didn't. That set of criteria (called a "model") can then be used to predict the success of future applicants. The ethical component is important because such machine-learning algorithms optimize with particular goals as targets. And there tend to be unintended consequences--such as higher rates of rejection of applicants of color who would actually have succeeded. The solution to this problem requires more than just adding social equity goals as part of what is to be optimized--although that is an important step.” “Perhaps one way to apply this command is to be wise as we are devising algorithms to make sure that they do no harm. We should be willing to give up some efficiency in order to achieve more equitable results.”
What I care about is who makes the decisions in any of these algorithm design cycles. This text talks about the ethics of algorithms, but not about the way in which such ethics could be implemented. Benevolence is risky and targeted policy is inefficient. Give up efficiency to achieve more equitable results by democratising the decision-making process of the innovation cycle of AI systems.
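The fairness/efficiency trade-off described above can be made concrete with a small sketch. This is an illustrative toy example only: the scores, the threshold, and the demographic-parity remedy are my own assumptions, not something taken from Kearns and Roth.

```python
# Illustrative toy example (hypothetical data): an admissions cut-off that is
# "efficient" for one group produces unequal selection rates across groups,
# and equalising them costs some selectivity.

def selection_rate(scores, threshold):
    """Fraction of applicants at or above the admission threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical score distributions for two applicant groups.
group_a = [55, 60, 70, 80, 90]
group_b = [40, 50, 55, 65, 85]

# A single threshold yields unequal selection rates (a demographic-parity gap).
threshold = 60
rate_a = selection_rate(group_a, threshold)  # 0.8
rate_b = selection_rate(group_b, threshold)  # 0.4
parity_gap = abs(rate_a - rate_b)            # 0.4

# One crude remedy: lower the threshold until the gap falls below a tolerance,
# trading selectivity ("efficiency") for a more equitable outcome.
tol = 0.25
while parity_gap > tol and threshold > 0:
    threshold -= 5
    rate_a = selection_rate(group_a, threshold)
    rate_b = selection_rate(group_b, threshold)
    parity_gap = abs(rate_a - rate_b)

print(threshold, rate_a, rate_b, parity_gap)
```

The point of the sketch is only that "adding social equity goals to what is optimized" is a mechanical change to the design process, and someone must decide whether, and at what cost, to make it; that decision is exactly what I want to democratise.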
The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making
This is about the ethics of having algorithms make decisions. It is not really relevant to my argument, which is a discussion of who makes the decisions about algorithm innovation cycles.
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence
From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices
The ethics of algorithms: Mapping the debate
Transparency in Complex Computational Systems
Vice article.
“OpenAI kept a number of details private regarding its newest AI model, including its training data, training method, and architecture. Many AI researchers are critical of this, as it makes it more difficult to suggest solutions to the product’s problems, such as the biases the training sets may have and the potential harms of that. Meanwhile, as part of its recent layoffs, [Microsoft cut its ethics and society team], leaving the company without a dedicated team for principles of responsible AI while it continues to adopt GPT models as part of its business.”
“Why does OpenAI get to determine what the model can and can't say? If they do determine that, then they should be transparent and very specific about what their values are and allow for public input and feedback—more than they have already. We should demand this from them,” Albert said. “

Digital Political Economy: digital commons and new governance theories
Reimagining the Platform Economy
The roots of the AI-enabled platform economy and value creation.
Private-sector corporate governance has rested on the notion of shareholder value for the last 50 years.
Dominant AI technology platforms have been successful in focusing the larger innovation ecosystem on technologies oriented toward wealth extraction, as the findings of Zuboff's Surveillance Capitalism make evident. This does not mean that the technologies must be directly aimed at wealth extraction, but that everything ultimately reduces to a bottom line.
But extractive and exploitative practices are not required in digital markets.
We must acknowledge that there are distinctions between income derived from adding value and wealth gathered through extraction.
The first is a reward for taking chances that increase an economy's capacity for production; the second results from receiving an excessive share of the reward without making equivalent enhancements to the economy's productive capacity.
The prevalence of these methods for AI technology companies demonstrates the need for a shift in emphasis away from the "bottom line" and toward the "how" of wealth generation.
A Deeper Investigation of the Importance of Wikipedia Links to the Success of Search Engines
Take Wikipedia as an example. Wikipedia links appear in a variety of Google queries more often than links to any other knowledge resource. This implies that the help Wikipedia offers Google in achieving one of its core use-values is greater than one would expect. The issue is that Wikipedia and its volunteers are not benefitting from this dynamic: they have no say in the present or future of the companies that use their content. “How can we shift some of the power that is concentrating in large tech firms towards a more distributed outcome among the general public?” Did the open-source community make a mistake in allowing use by private-sector actors?
AI in the common interest
Institutions and public policy should be designed to ensure that innovations improve the world, rather than merely serving the insular incentives of their designers.
Governance structures should be inclusive and mission oriented, centering around a common good.
Governments should shape this technological progression to serve the interest of the public.
The rewards of collective value creation (The Value of Everything) in the platform economy must be shared more equitably using new digital architectures.
We must avert an AI-enabled, laissez-faire-induced crisis. We require an “ethical by design” AI mission, supported by sound regulation and competent governments, to shape this technological revolution for the benefit of all, not just the benefit of shareholders. The private sector can and will join the larger effort to make technologies safer and more equitable if these pillars are in place.
Establishing effective public monitoring should ensure public value creation through digitalisation and AI.
UNESCO’s Recommendation on the Ethics of Artificial Intelligence has this point as integral to its purpose, giving 193 member states a normative framework for employing such recommendations.
The problem, however, is that reliance on private-sector outsourcing has increasingly become a barrier to building public-sector capabilities. Governments need to be able to develop AI in ways that do not leave them reliant on the private sector for sensitive systems, so that they can maintain control over important products and ensure that ethical standards are upheld. Likewise, they must be able to support information sharing and interoperable protocols and metrics across departments and ministries. This will all require public investment in government capabilities.
Given that so much knowledge and experience is now centered in the private sector, synergies between the public and private sectors are both inevitable and desirable. Mission-orientation is about picking the willing – by co-investing with partners that recognize the potential of government-led missions. The key is to equip the state with the ability to manage how AI systems are deployed and used, rather than always playing catch-up. To share the risks and rewards of public investment, policymakers can attach conditions to public funding. They also can, and should, require Big Tech to be more transparent.
Open Standards and the Digital Age
OSI Model
Inventing the Internet
Between Scarcity and Abundance
The 'Socialism' of Léon Walras and His Economic Thinking
The Value of Everything
Mission Economy - Guide to changing capitalism

Digital Policies & Recommendations: fitting my project into a larger pragmatic picture.
To Do
Aims and Objectives (UNESCO Recommendation on the Ethics of Artificial Intelligence)
The objectives of this Recommendation are:
(a) To guide the actions of individuals, groups, communities, institutions and private sector companies to ensure the embedding of ethics in all stages of the AI system life cycle.
Member States should encourage public entities, private sector companies and civil society organizations to involve different stakeholders in their AI governance and to consider adding the role of an independent AI Ethics Officer or some other mechanism to oversee ethical impact assessment, auditing and continuous monitoring efforts and ensure ethical guidance of AI systems. Member States, private sector companies and civil society organizations, with the support of UNESCO, are encouraged to create a network of independent AI Ethics Officers to give support to this process at national, regional and international levels.

“We need an “ethical by design” AI that is underpinned by sound regulation and capable governments working to shape this technological revolution in the common interest, rather than in shareholders’ interest alone. With these pillars in place, the private sector can and will join the broader effort to make technologies safer and fairer.” (UNESCO 2023)
This is great, but it still has an ex-post approach. It does not talk of possible infrastructures capable of supporting such design process changes.
Use my project to fill this part: this is what I am addressing. As for the second part, it can act as a filler for how organisations are to add roles of AI Ethics Officers to oversee the ethical impact assessment.
High-level expert group on artificial intelligence
Digital platforms inquiry - ACCC (AUS)
Not relevant
The Digital Services Act package - EC
Further reading
“non-binding G20 principles or the conclusions of UNESCO’s 2021 AI summit. Eight international organizations, prominently including the EU, have launched a platform for global standard development.”
“At the same time, the 2020 Global Partnership on AI (GPAI) brings together roughly two dozen “like-minded” countries plus the EU, aspiring to build a shared AI vision around democratic principles and liberal values.”

Digital in the EU-US Trade and Technology Council - EC
"Priorities of the TTC include promoting technology standards”
EU engagement in global AI standard setting - RegulAIte
“a political determination to find multilateral agreement might dilute high regulatory ambitions. After all, global, or at least multilateral, AI rules would require substantive agreement or a political compromise. It is ex ante unclear to what degree cross-border agreement justifies substantive concessions in regulatory negotiations.” The pursuit of a multilateral agreement in politics could potentially water down ambitious regulatory goals. This is because establishing global or multilateral regulations for AI would necessitate significant agreement or political compromise. It is ex-ante uncertain how much regulatory negotiations would need to compromise in order to reach a cross-border agreement.
Use this to show how such regulation is not enough.
Non-governmental AI standards and EU policy - RegulAIte
[We need regulation for the] technical, and seemingly mundane, standardization that underpins a globally integrated digital space and technology sector. In technical domains, regulation often is not just a question of legal rules derived from politically charged principles. It also involves seemingly much more mundane shared specifications that allow technologies to be standardized and interoperable. Just about any product that is for sale is underpinned, as a whole or through its components, by standards that specify things like its interoperability. These technical standards are typically set through organizations that are private, at least formally. On the international level, they include the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), two bodies that have set up a Joint Committee known as JTC 1/SC 42, which began work on AI standards in 2017. The European counterparts to the ISO and IEC, CEN and CENELEC, are central actors in global standardization dynamics, but they, too, are formally private bodies, not arms of the European Union. The role of bodies such as CEN and CENELEC goes far beyond technical matters. At least in the original AI Act proposal tabled by the European Commission, standard-setting organizations were charged with formalizing the criteria by which regulatory compliance would be judged. For example, if European rules were to require algorithms applied by public bodies to be bias-free, it might be up to private standard setters to develop the technical procedures according to which such freedom from bias would be certified. The higher a field's technical complexity, the higher the chance that standards are ineffective or have unintended consequences. On top of that, outsourcing standard setting to experts once the political goalposts have been set can avoid an endless political tug-of-war in which conflicting interests still try to pull regulatory outcomes their way.
This is where P2PModels comes in. They want to provide the infrastructure and set up some standards for how to do DAOs. Otherwise, we might still need private regulators to audit P2PModels, as P2PModels might be one interoperable infrastructure amongst many. Regulatory bodies ask the private sector to formalise the criteria for standard setting; the body is private because it is better placed to remain politically neutral, as its choices touch on politically charged topics.

DLTs for the sharing economy: realising the digital commons through DAOs.
To Do
Blockchain for Social Good: A field in expansion
This article looks at a study done by the EC about publicly beneficial DLT projects in Europe.
“The different agents interviewed stress that the main roles of the European Commission should be (1) to create a positive context for adoption, (2) to create spaces for interconnection and dissemination among projects, professionals and the wider public, and (3) to provide a legal framework that enables these initiatives to flourish safely.”
Start of the BC part of thesis. This can be naively positive, later introducing why more critical nuance must be added.
Who are the owners of collaborative economy?
“What they told us was going to be a new way of sharing resources is, today, another way of extracting value from the work of many people, with the aggravating factor of practically no contribution to the public system.” ^ Value extraction of platform economies. “It is necessary to carry out actions that propose alternatives to the platform model through collective ownership, democratic governance models, ownership of infrastructures and servers, and exchanges, this time between equals.” “We can open up a horizon of economic democratisation that we did not have until now. The collaborative economy can point to the scalability of the social and solidarity economy, of cooperativism.”
Collaborative economy for platforms.
Distributed technologies to bootstrap the sharing economy
“At the infrastructure level, sharing economy platforms are centralized and concentrate massive amounts of user data, often making surveillance their business model. At the governance level, communities have no say in the decision-making, since only the platform owner decides how the platform evolves. Finally, the sharing economy markets are owned by just a few major industry players, that appropriate the value created by the communities, without redistributing the profits with their users.”
“Commons-based peer production (CBPP) communities... Different experiences of CBPP communities such as Wikipedia, free/open source software, or Couchsurfing, provide radically differing values and practices when compared with those in markets. CBPP communities have three main characteristics, which are decentralization, the use of open-access shared resources, and the prevalence of non-monetary motivations.”
“These three characteristics of CBPP are aligned with blockchain features. First, both CBPP and blockchain strongly rely on decentralized processes, thus the possibility of using blockchain infrastructure to support CBPP processes arises. Secondly, the shared resources in CBPP are aligned with the shared ledger present in blockchain’s infrastructure, where data and rules are transparent, open and collectively owned. Finally, as previously mentioned, CBPP relies on multi-dimensional forms of value and motivations, and blockchain enables the emergence of multiple types of non-monetary interactions and rewards (sharing, voting, reputation, etc.).”
There is a study about CBPP and the affordances of DLTs, but there is not one about Plural Property and the affordances of DLTs.
Exploring the potentialities of blockchain for commons governance
"[DLT’s] potential to enable new forms of governance remains largely unexplored. In this text, we explore the potentialities of blockchain for commons governance.”
Techno-determinists ignore the complexity of social organisation.
“In sum, our aim was trying to bring together CBPP with blockchain-based governance. The result is the identification and conceptualisation of a set of affordances of blockchain technologies with regards to commons governance, which we summarise below:”
Tokenisation: refers to the process of transforming the rights to perform an action on an asset into a transferable data element (a token) on the blockchain.
Self-enforcement and formalisation of rules: refer to the process of embedding organisational rules in the form of smart contracts that embed, partially, commons-based governance logics.
Autonomous automatisation: refers to the process of defining complex sets of smart contracts which may be set up in such a way as to make it possible for multiple parties to interact with each other. This is analogous to software communicating with other software today but in a decentralised manner.
Decentralisation of power over the infrastructure: refers to the process of communalising the ownership and control of the technological artefacts employed by the community through the decentralisation of the infrastructure they rely on, such as the collaboration platforms employed for coordination.
Transparentisation: refers to the process of opening the organisational processes and the associated data by relying on the persistence and immutability properties of blockchain technologies.
Codification of trust: refers to the process of codifying a certain degree of trust into systems which facilitate agreements between agents without requiring a third party.
This list of affordances is not intended to be exhaustive. Instead, it is to be thought of as a set of analytical categories which can help us to co-design blockchain-based tools to facilitate cooperation and foster CBPP practices together with these collectives.”
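Two of the affordances above, tokenisation and the self-enforcement of rules, can be illustrated with a minimal sketch. This is a simplified in-memory model and not a real smart contract; the `Proposal` class, the quorum rule, and the member names are all hypothetical assumptions of mine.

```python
# Minimal sketch (hypothetical, not a real smart contract) of tokenisation
# and self-enforcement of rules: a governance rule (quorum) is embedded in
# the code that computes the outcome, rather than applied ex post.

class Proposal:
    def __init__(self, quorum):
        self.quorum = quorum   # fraction of all tokens that must be cast
        self.votes = {}        # member -> (token weight, choice)

    def vote(self, member, weight, choice):
        self.votes[member] = (weight, choice)

    def outcome(self, total_tokens):
        """Self-enforced rule: no result at all unless quorum is met."""
        cast = sum(w for w, _ in self.votes.values())
        if cast / total_tokens < self.quorum:
            return None        # the rule is enforced by the code itself
        yes = sum(w for w, c in self.votes.values() if c == "yes")
        return "accepted" if yes * 2 > cast else "rejected"

# Hypothetical community: 100 tokens in circulation, 50% quorum.
p = Proposal(quorum=0.5)
p.vote("alice", 30, "yes")
p.vote("bob", 15, "no")
print(p.outcome(100))   # only 45% of tokens voted, so quorum is not met
p.vote("carol", 20, "yes")
print(p.outcome(100))   # 65% voted, 50 yes vs 15 no
```

The design point is that nobody "checks" quorum after the fact: the organisational rule is part of the process that produces the decision, which is the sense of internalising values into the design process rather than auditing outputs ex post.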

Blockchain Technology: Toward a Decentralized Governance of Digital Platforms?
Intro to the internet, start here in the introduction, then skip 30 years to OpenAI.
2.a Market-Driven Innovation
OpenAI is private, so it must engage in some method of business to create revenue. This means it must choose a business model that does one or more of the following:
leverage network effects, gathering large user-bases by offering free, freemium, or low-cost services, relying on competitive strategies and exclusionary practices: proprietary IP, limiting competition (Amar Bhidé, “The Venturesome Economy: How Innovation Sustains Prosperity in a More Connected World,” Journal of Applied Corporate Finance 21, no. 1 (2009): 8–23).
limit interoperability and data portability, so that users find themselves locked into walled gardens, limiting competition.
monetize their user-base through more personalized and targeted advertising campaigns.
AI system designers are innovating within a capitalist structure and have “not significantly contributed to the establishment of a new economic order”.
2.b: Commons-Based Innovation
“Perhaps one of the biggest differences between market-driven and commons-based innovation lies in the economic models that underpin the two. While the former is mainly driven by the logic of profit maximization, the latter is driven by a combination of ideological values, a desire to maximize the utility of the products or services provided to the community and an expectation of individual returns or compensation (financial or otherwise). Nevertheless, although profits are not the main drivers for a large majority of commons-based initiatives, the ability to raise money and attract human resources remains an important precondition for their long-term sustainability”

Section 3b.
4.b New Tools for Commons-Based Governance
“While these organizations might be led by a charismatic leader in charge of stewardship of the organization, they are no longer subject to the whims of a benevolent dictator because they operate according to an infrastructure which is decentralized by design. As opposed to traditional online platforms, which are managed and maintained by a centralized operator, decentralized blockchain-based applications are both managed and maintained by a distributed network of peers, none of which has the ability to change or influence the operations of these blockchain-based systems, unless this is specifically provided for in the underlying protocol. Hence, by encoding a decentralized governance structure directly into the fabric of a blockchain-based system, it becomes difficult for any single party to unilaterally intervene in order to change the current and future operations thereof.”
“Yet, most of the blockchain-based applications implemented thus far incorporate game theoretical protocols and market-driven incentives that actually exacerbate—rather than disrupt—existing dynamics of capital accumulation and speculation.”
“Yet, to be really transformative, these initiatives need to transcend the current models of protocol-based governance and game-theoretical incentives—which can easily be co-opted by powerful actors or lead to dissensus—and to come up with new governance models combining both on-chain and off-chain governance rules. The former can be used to support mechanisms of regulation by code, incentivization schemes and ownership over digital assets, whereas the latter are necessary to promote the vision, and facilitate the interaction of commons-based projects and initiatives with the existing legal and societal framework. Ultimately, whether or not blockchain technology will lead to the rise of a new economic order is not—solely—a technical matter; it is, first and foremost, a political question that requires an in-depth understanding of the social, economic, and political implications that different governance structures will bring to society.”
Read this again when writing the last paragraph. It brings it all together very poetically and with ambitions.
What is important in this text is its delineation of on-chain and off-chain governance. The article claims both are needed: without the combination, projects fall back into market dynamics.
“Distributed ledger technology is best understood as an institutional innovation (i.e. a governance technology)”
The internet is broken, how do we fix it? NYT
"But if the internet is broken, how do we fix it?
The answers that predominate among American policymakers tend to circle two main themes — which are, in practice, often joined together. The first involves writing new rules about how companies are allowed to behave, or enforcing existing ones. Examples include a 2018 law that gives residents certain rights regarding the collection and processing of their personal data.
The second aims at reducing the market power of the big firms. Last year, President Biden issued an executive order that directs more than a dozen federal agencies to pursue pro-competition initiatives. And this year, two bills seeking to prevent tech companies from using their control of platforms like search engines and app stores to give themselves an unfair advantage over competitors were approved in committee, bringing them closer to a floor vote.
Both strains of internet reform have their merits. The rule-makers are right that tech companies are too lightly regulated. The anti-monopolists are right that rule-making is insufficient without curbing corporate power. Yet neither quite reaches the root of the problem.
The root is simple: The internet is broken because the internet is a business. While the issues are various and complex, they are inextricable from the fact that the internet is owned by private firms and is run for profit. Regulating markets or making them more competitive won’t touch the deeper problem, which is the market itself.
The profit system produces the dysfunctions and depredations of the modern internet.
Today’s internet reformers would leave this system intact. Yet many of the industry practices with the most destructive effects, such as the obsession with user engagement, were developed by companies when they were comparatively leaner and hungrier and needed to grab market share as quickly as possible. In other words, they came out of competition, which suggests that increasing competition won’t automatically generate better outcomes.
Regulation, too, presents difficulties: Corporations are adept at evading or manipulating rules to preserve their dominant position. Indeed, some big tech firms have even called for more regulation in recent years, provided they get to decide how they’re regulated.
Even with the best regulatory and antimonopoly measures, corporations would still own the internet. Immensely consequential decisions would be left in the hands of executives and investors. Most people would have no say in matters that centrally affect their lives.
Fortunately, there is another strategy: deprivatization.
To build a better internet, we need to change how it is owned and organized — not with an eye toward making markets work better, but toward making them matter less. Deprivatization aims at creating an internet where people, and not profit, rule. This sounds like a protest chant but I mean it quite literally.
What would a day on the deprivatized internet look like? You wake up, grab coffee, and sit down at your computer. Your first stop is a social-media site run by your local library. The other users are your neighbors, your co-workers, or residents of your county. There’s a news report in your feed about a coming municipal election, published by a local public media center. In fact, much of the content that circulates on the site comes from public media sources.
The site is a cooperative; you and the other users govern it collectively. You elect the board that designs the filtering algorithms and writes the content moderation policies that determine what you see in your feed. The board’s decisions are carried out by employees of the local library, who act as caretakers of the community, always on hand to help classify, curate and add context to information.
To put people over profit, you need to create spaces where the people can rule.
The scale and complexity of the internet mean there is no silver bullet for creating a democratic digital future. The precise contours of a democratic internet can be discovered only through a democratic process — through people coming together to build the world they want. (this should be the same with corporations)
Even so, questions will remain: how to end algorithmic racism, for instance, or the right way to handle content moderation. Liberating the internet from the constraints of the profit motive won’t make these questions go away. It will, however, create the conditions in which the answers can be found.” (all other AI issues matter, but this will create the conditions in which the answers can be found)
Book Review: Internet for the People: The Fight for Our Digital Future by Ben Tarnoff
"Tarnoff suggests deprivatising the Internet through the “imaginative work of abolition” (177), not by merely improving current structures, as he is very clear about the practical and imaginative limits of reorganising the Internet based on what already exists. Tarnoff’s proposal is radical because it is more than a protest or an invitation to reflection; he means to abolish the existing structures and replace them with fundamentally different alternatives. However, even if we assume this envisioned scenario, it is impossible to think of its continuity as a bubble separated from society. The Internet would remain subject to market pressures, private interests, and the dictates of capitalism. Tarnoff acknowledges “there are many possibilities, but one thing is clear: to remake the Internet, we will have to remake everything else”
When Ostrom Meets Blockchain: Exploring the Potentials of Blockchain for Commons Governance
"Blockchain technologies have generated enthusiasm, yet their potential to enable new forms of governance remains largely unexplored. Two confronting standpoints dominate the emergent debate around blockchain-based governance: discourses characterized by the presence of techno-determinist and market-driven values, which tend to ignore the complexity of social organization; and critical accounts of such discourses which, while contributing to identifying limitations, consider the role of traditional centralized institutions as inherently necessary to enable democratic forms of governance.” “We approach blockchain through the identification and conceptualization of six affordances that this technology may provide to communities: tokenization, self-enforcement and formalization of rules, autonomous automatization, decentralization of power over the infrastructure, increasing transparency, and codification of trust. For each affordance, we carry out a detailed analysis situating each in the context of Ostrom’s principles, considering both the potentials of algorithmic governance and the importance of incorporating communities’ social practices into blockchain-based tools to foster forms of self-governance.”
Use this to fit in my project in a nuanced and critical space, within all the chaos that they outline. Could this go closely to the proprietary dynamics paragraph? Could I fit these 6 affordances to stakeholder theory vs shareholder governance?
Analysis of the Potentials of Blockchain for the Governance of Global Digital Commons
Local PDF. “These affordances and the subsequent analysis contribute to the emergent debate on blockchain-based forms of governance, first by providing analytical categories for further research” “In this respect, we agree with Schneider (2019) in understanding decentralization not simply as a technical concept, but as a performative act whose socio-political consequences need further exploration, since the use of decentralized technologies does not inherently imply the decentralization of other outcomes,”
Blockchain Technology and Decentralized Governance: Is the State Still Necessary?
Elinor Ostrom's 8 Principles for Managing a Commons
8 Principles for Managing a Commons
1. Define clear group boundaries.
2. Match rules governing use of common goods to local needs and conditions.
3. Ensure that those affected by the rules can participate in modifying the rules.
4. Make sure the rule-making rights of community members are respected by outside authorities.
5. Develop a system, carried out by community members, for monitoring members’ behavior.
6. Use graduated sanctions for rule violators.
7. Provide accessible, low-cost means for dispute resolution.
8. Build responsibility for governing the common resource in nested tiers from the lowest level up to the entire interconnected system.
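Some of these principles map directly onto the blockchain affordances quoted above (“self-enforcement and formalization of rules”, “codification of trust”). As a minimal, purely illustrative sketch — a plain-Python toy, with all names hypothetical and no real DAO framework assumed — principles 1 (clear boundaries), 5 (community monitoring) and 6 (graduated sanctions) could be formalized as explicit, inspectable rules:

```python
# Toy model of a commons whose membership and sanction rules are
# formalized in code. Illustrative only; all names are hypothetical.

class CommonsGovernance:
    # Principle 6: graduated sanctions escalate with repeat violations.
    SANCTIONS = ["warning", "temporary suspension", "expulsion"]

    def __init__(self, members):
        # Principle 1: clear group boundaries.
        self.members = set(members)
        self.violations = {m: 0 for m in self.members}

    def is_member(self, actor):
        return actor in self.members

    def report_violation(self, member):
        # Principle 5: monitoring is carried out by community members,
        # who report violations through this entry point.
        if not self.is_member(member):
            raise ValueError("not a member")
        self.violations[member] += 1
        level = min(self.violations[member], len(self.SANCTIONS)) - 1
        sanction = self.SANCTIONS[level]
        if sanction == "expulsion":
            self.members.remove(member)
        return sanction
```

Notably, principles 3, 4 and 7 (participation in rule-making, recognition by outside authorities, dispute resolution) resist this kind of codification — which is exactly where the off-chain governance rules argued for above come in.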
An application to delegate certain operations of DAOs
“One of the main challenges facing DAO governance models today is scalability. One possible solution is to form small sub-groups composed of members of the organization with a number of permits that allow them to carry out a reduced set of tasks and activities of the DAO”
Using Aragon to create an application to act as manager for sub-committee groups for scalable DAOs.
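The sub-group/permit pattern described above can be sketched in a few lines. This is a plain-Python toy model, not Aragon's actual API; the class names and permit strings are hypothetical:

```python
# Sketch of DAO sub-committee delegation: the DAO grants a small
# sub-group a reduced set of permits, and members of that group may
# only execute actions covered by those permits.

class SubCommittee:
    def __init__(self, name, members, permits):
        self.name = name
        self.members = set(members)
        self.permits = frozenset(permits)  # reduced set of allowed tasks

    def can_execute(self, member, action):
        # A member may act only within the delegated permits.
        return member in self.members and action in self.permits


class DAO:
    def __init__(self, members):
        self.members = set(members)
        self.committees = {}

    def delegate(self, name, members, permits):
        # Only existing DAO members may sit on a sub-committee.
        if not set(members) <= self.members:
            raise ValueError("committee members must belong to the DAO")
        self.committees[name] = SubCommittee(name, members, permits)
        return self.committees[name]
```

The scalability gain is that routine decisions (e.g. approving a grant) no longer require a full organization-wide vote, while the permit set keeps the sub-committee's power bounded.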
Internet and Blockchain: democratic or authoritarian?
There are 4 types of internet: libertarian, bourgeois, commercial, authoritarian. The choice is up to us!
Trust in blockchain-based systems
Trust in benevolence is what we must avoid. DLTs are ‘trustless’, but what does this mean exactly?
Doesn't seem too interesting as an article, but I can use it if I need to define trust more accurately.
P2P Models - WP
“The P2PMODELS project aims to leverage the potential of the new blockchain technology to address CE’s infrastructure challenge while laying the foundations to resolve the governance and economical challenges.” It talks about focusing on the infrastructure, but not the governance. So I could continue the governance aspect in relation to AI instead.
Organizational Building Blocks for Blockchain Governance: A Survey of 241 Blockchain White Papers
Decentralisation in the blockchain space
“Decentralised and distributed modes of organisation are well defined in computer science discourses and denote a particular network topology. Even there, they can be understood either as an engineering principle, a design aim, or an aspirational claim. In the decentralisation discourse these three dimensions are often conflated without merit. A decentralised network design might not produce decentralising effects and might not either necessarily be decentralised in its actual deployment.”
^This is relevant to me.
“When the technical decentralisation discourse starts to include social, political, or economic dimensions, the risk of confusion may be even larger, and the potential harms of mistaking a distributed system for something it is not, even more dangerous. Individual autonomy, the reduction of power asymmetries, the elimination of market monopolies, direct involvement in decision making, solidarity among members of voluntary associations are eternal human ambitions. It is unclear whether such aims can now suddenly be achieved by particular engineering solutions.”
^ Need to prove this in my piece.
“An uncritical view on decentralisation as an omnipotent organisational template may crowd out alternative approaches to creating resilient, trustworthy, equitable, fault resistant technical, social, political or economic modes of organisation.”
^ Precisely, it is about distributing decision-making power to more relevant parties. This is justified in circular economic theory.
Put this in the introductory part of DLTs. My goals must fit within this criticism.
Decentralizing what? When blockchain’s perspectives clash
“Therefore, at least for the case of Estonia, blockchain technologies applied at the public and institutional level seem to be at odds with the promise of social disruption, and continue to exploit the “rhetoric of empowering the disenfranchised through decentralized decision-making process”.
In Estonia, the concept of transparency is used as a synonym for data privacy, in which trust in the government would be won by guaranteeing cybersecurity and data control. However, the concept of decentralization remains very much second to generating an illusion of disruption.
In conclusion, because of the lack of universal definitions of what constitutes a blockchain, different technological narratives are emerging around a concept which offers influential areas for exploitation in public imaginaries, according to different political agendas.”
What is important for me here is that it should not be about whether something is truly a blockchain, but it should be assessed in terms of the values it internalises infrastructurally. If decision-making power is achieved, then political decentralisation is playing its much needed part.
Glossary of DAOs (De Filippi)
Decentralized Autonomous Corporation (DAC), Distributed Cooperative Organization (DisCO), Decentralized Collaborative Organization (DCO). “It is therefore important to understand that a DAO is not a particular type of business model or a particular type of organisation, but a concept that can be used to refer to a wide variety of things.” “In terms of governance, diverse scholars recently started investigating the opportunities of blockchain technology and smart contracts to experiment with open and distributed governance structures (Leonhard, 2017; Rozas et al., 2018; Hsieh et al., 2018; Jones, 2019), along with the challenges and limitations of doing so (Garrod, 2016; DuPont, 2017; Scott et al., 2017; Chohan, 2017; Verstreate, 2018; Minn, 2019; Hutten, 2019). There is also an emerging body of literature from the field of economic and legal theory concerning DAOs. While most of these works focus on the new opportunities of decentralised blockchain-based organisations in the realm of economics and governance (Davidson et al., 2016, 2018; Sims, 2019; Rikken et al., 2019; Kaal, 2020), others focus on the legal issues of DAOs from either a theoretical (De Filippi & Wright, 2018; Reijers et al., 2018) or practical perspective (Rodrigues, 2018; Werbach, 2018; Riva, 2019).” “However, as the limitations of blockchain-based governance came to light, especially in the aftermath of the aforementioned TheDAO hack (DuPont, 2017; Reijers et al., 2018; Mehar et al., 2019), the public discourse around DAOs has shifted from describing DAOs as a technical solution to a governance problem (Jentzsch, 2016; Voshmgir, 2017) to a discussion on how DAOs could change the nature of economic and political governance in general (Davidson et al., 2016; Beck et al., 2018; Zwitter & Hazenberg, 2020; De Filippi et al., 2020).” ^ Rephrase this as context of how much work has been done, and then scope in on what I am saying.
“(1) First of all, with regard to the “decentralization” aspect of a DAO, it is unclear whether decentralisation only needs to be established on the infrastructural layer (i.e. at the level of the underlying blockchain-based network) or whether it also needs to be implemented at the governance level (i.e. the DAO should not be controlled by any centralised actor or group of actors). ^ Political vs architectural. This is important and she's wrong. (2) Second, it is unclear whether a DAO must be fully autonomous and fully automated (i.e. the DAO should operate without any human intervention whatsoever), or whether the concept of “autonomy” should be interpreted in a weaker sense (i.e. while the DAO, as an organisation, may require the participation of its members, its governance should not be dependent on the whims of a small group of actors). ^ It is about who enacts the decisions reached by its committee. (3) Third, there are some debates as to when the community of actors interacting with a smart contract can be regarded as an actual “organization” (independently of any legal recognition). For instance, it is unclear whether the mere act of transacting with a smart contract qualifies as an organisational activity, or whether a stronger degree of involvement is necessary, such as having a governance model or collective interactions amongst participants.” ^ Being part of a committee makes you a stakeholder. Interacting with a smart contract which doesn't need committee access does not.
This is quite superficial and shows very little technical understanding of DAOs. Its focus is on legal responsibility. FUCK, Samer Hassan wrote this too, so it's not technically superficial....
Primavera de Filippi: “In a decentralized infrastructure, it becomes more difficult not to have a concentration of power”
Talking back about platforms and power, there are people talking about platform cooperativism as an alternative to platform capitalism (models such as Uber or Deliveroo), what do you think is the role of decentralized technologies in this context?
I think platform cooperativism is completely independent from the infrastructure. You can have a completely centralized infrastructure which is run as a cooperative. The problem with the decentralized model is that you need to design it in such a way as to make sure that every level of the stack, including the blockchain infrastructure, will not be co-opted by external market forces, which might be difficult to achieve without an institution protecting it.
Once you move into a decentralized infrastructure, it becomes more difficult to ensure that you don’t have a concentration of power, because you lose the power to control or ascertain who owns or controls the infrastructure. Indeed, in a blockchain-based system, you have a lot of different variables to take into account, and often do not have full control (or visibility) over the way in which many of those variables are governed or controlled.”
My argument is different, because it is about what is allowed in the private sector and who sits at the table; the infrastructure is of less importance if the blockchain is adequately decentralised (e.g. Ethereum). Technologies sit on a spectrum from apolitical to political; blockchain simply moves them further towards the apolitical end, because there is far less political interference between the various LoAs, or stack layers.
Blockchain-based application at a governmental level: disruption or illusion? The case of Estonia
Permissioned vs public blockchains. This is just a reduction to architectural decentralisation, so it is not of interest to us.
Technological Populism and Its Archetypes: Blockchain and Cryptocurrencies
“In this article, we draw a parallel between core foundations of political populism and those of blockchain and propose a theory of technological populism.”
How do committees work?
Distributed Computing and Artificial Intelligence, 12th International Conference
Awakening Decentralised Real-Time Collaboration: Re-engineering Apache Wave into a General-Purpose Federated and Collaborative Platform
P2P Foundation
The Architecture of a Web3 application
Distributed Ledger Technologies (DLTs) for Social and Public Good – Where to next?
Check PDF
Distributed Ledger Technologies (DTLs) for Social and Public Good – Where to next_.pdf
The sharing economy in computing: A systematic literature review
Decentralized Collaborative Organizations (DCO)
Find resources.
To Do
ChatGPT Nature.mag
Rather than engage in a futile arms race between AI chatbots and AI-chatbot-detectors, we think the research community and publishers should work out how to use LLMs with integrity, transparency and honesty. Author-contribution statements and acknowledgements in research papers should state clearly and specifically whether, and to what extent, the authors used AI technologies such as ChatGPT in the preparation of their manuscript and analysis.
Invest in truly open LLMs
Currently, nearly all state-of-the-art conversational AI technologies are proprietary products of a small number of big technology companies that have the resources for AI development. OpenAI is funded largely by Microsoft, and other major tech firms are racing to release similar tools. Given the near-monopolies in search, word processing and information access of a few tech companies, this raises considerable ethical concerns.
One of the most immediate issues for the research community is the lack of transparency. The underlying training sets and LLMs for ChatGPT and its predecessors are not publicly available, and tech companies might conceal the inner workings of their conversational AIs. This goes against the move towards transparency and open science, and makes it hard to uncover the origin of, or gaps in, chatbots’ knowledge. For example, we prompted ChatGPT to explain the work of several researchers. In some instances, it produced detailed accounts of scientists who could be considered less influential on the basis of their h-index (a way of measuring the impact of their work). Although it succeeded for a group of researchers with an h-index of around 20, it failed to generate any information at all on the work of several highly cited and renowned scientists — even those with an h-index of more than 80.
To counter this opacity, the development and implementation of open-source AI technology should be prioritized. Non-commercial organizations such as universities typically lack the computational and financial resources needed to keep up with the rapid pace of LLM development. We therefore advocate that scientific-funding organizations, universities, non-governmental organizations (NGOs), government research facilities and organizations such as the United Nations — as well as tech giants — make considerable investments in independent non-profit projects. This will help to develop advanced open-source, transparent and democratically controlled AI technologies.
Critics might say that such collaborations will be unable to rival big tech, but at least one mainly academic collaboration has produced an open large language model, called BLOOM. Tech companies might benefit from such a program by open-sourcing relevant parts of their models and corpora in the hope of creating greater community involvement, facilitating innovation and reliability. Academic publishers should ensure LLMs have access to their full archives so that the models produce results that are accurate and comprehensive.
What quality standards should be expected of LLMs (for example, transparency, accuracy, bias and source crediting) and which stakeholders are responsible for the standards as well as the LLMs?