Value-laden decision-making is always part of the design of AI systems. Within our capitalist socio-economic context, economic incentives shape much of the decision-making of private-sector AI-system designers. Identifying where in the design process misalignments between shareholder and stakeholder incentives arise, and how the configuration of the AI system localises them, gives grounds for addressing such issues. Shareholder theory configures systems to rely upon corporate benevolence and to strive for capital accumulation, mechanised through centralised governance rights, shareholder accountability, and economic incentives. Relevant public institutions aim to ensure AI systems’ alignment with public value through policy recommendations and guidance, but this is not enough to address the risks of corporate benevolence and capital accumulation. Stakeholder governance aims to give stakeholders a voice, yet without any accountability mechanisms in place to ensure stakeholder value. I propose accountable-stakeholder governance as a novel governance model to address AI corporations’ reliance on corporate benevolence and their logic of accumulation. Decentralised Autonomous Organisation (DAO) structures enable decentralised governance rights, ex-ante technological accountability mechanisms, and non-pecuniary incentives within an accountable-stakeholder method of governance.
Introduction
The development of a worldwide and decentralised telecommunications network has evolved from a research endeavour into the primary and most consequential information system worldwide. The Internet, serving as a platform for permissionless digital innovation, has spurred considerable progress in information and communication technologies (ICTs), inventive economic models, and creative techniques for social coordination and organisation (De Filippi & Lavayssière, 2022, p.185). However, the interplay between market dynamics and centralised governance has resulted in a concentration of market power among a select group of operators, ultimately transforming the Internet into a network governed and managed by a small set of established players (De Filippi & Lavayssière, 2022, p.186; Tarnoff, 2022). Now, with the recent growth and proliferation of Artificial Intelligence on the Internet (Bloomberg, 2022), there is no sign that current and future players will not leverage their AI expertise to continue attaining wealth and market power under whichever governance models serve them best. To illustrate the point, consider the exponential growth of OpenAI.
In 2015, OpenAI was founded as an open-source, non-profit artificial intelligence research company. Its founding charter states its incentives: “unconstrained by a need to generate a financial return. As a non-profit, our aim is to build value for everyone rather than shareholders...our patents (if any) will be shared with the world” (Brockman & Sutskever, 2015). In 2018, OpenAI turned to capital resources to fund growth and find a more ‘concrete research direction’ (Xiang, 2023); its charter was updated (OpenAI, 2018). In 2019, OpenAI relinquished its non-profit status to establish a "capped profit" division, OpenAI LP, allowing the company to accept investments while capping the returns provided to investors at a maximum of 100 times their initial investment (Brockman & Sutskever, 2019). The ‘General Partnership’ between OpenAI LP and OpenAI Nonprofit claims to retain moral integrity: “OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake” (Brockman & Sutskever, 2019). No official updates about OpenAI Nonprofit’s current board members or OpenAI LP’s investment partners have been given since 2019. OpenAI has since received US$11 billion in investments from Microsoft (Xiang, 2023).
Against this backdrop, this article investigates the affordances generated by DAO-enabled stakeholder governance models as alternatives to the capital-centric affordances of current shareholder-governed AI-system designers, such as the General Partnership mentioned above. The issue is shareholder-governed AI corporations’ incentive to pursue a logic of accumulation over public value without any structural stakeholder obstruction. The aim is to show how DAO-enabled stakeholder governance models help prioritise stakeholder value over the logic of accumulation. This is because of three affordances granted by DAOs: decentralised governance rights, ex-ante accountability mechanisms, and non-pecuniary incentives. Because of such affordances, DAO-enabled stakeholder governance models would obstruct AI-system designers from pursuing a logic of accumulation. OpenAI should not have an easy time transitioning to pursue such ends; yet it currently is, and nothing is stopping it.
The current dynamics of the AI economy: identifying the socio-economic problem
Algorithms & AI-system
Incentives influence decision-making, and decision-making embeds the values that determine the design of a system. The aim is to analyse the design process of algorithms within their socio-economic context. Taking inspiration from Hills (2015, p.35), I take the public-discourse interpretation of ‘algorithm’ and give it a less ambiguous term, ‘AI system’, to delineate the implementation of the algorithmic mathematical structure into a technology or program and the application of the technology configured for some socio-economic context. In other words, the ‘implementation’ of an AI system is its abstraction as a defined program, whilst its ‘configuration’ is its abstraction refined to fulfil a specific socio-economic purpose. In OpenAI’s case, the implementation of the ‘transformer architecture’ used for natural language processing is GPT-3.5, which has been configured as ChatGPT using Microsoft Azure supercomputers and is freely accessible by the public at chat.openai.com. ‘AI’ is an equally ambiguous term (Wang, 2019), but with the abstraction of public discourse in mind, we can let it capture whatever is currently included in the ever-changing rationalisation of which implementations and configurations of algorithmic mathematical structures are considered ‘Artificial Intelligence’. We can do this because, rather than focusing on what is and isn’t AI, the focus is on who decides what is done with whatever it is that we consider AI. So, throughout, ‘AI system’ is to be interpreted as the implementation (technologies and programs) and real-world configuration (applications and artefacts) of algorithms as mathematical constructs. Mittelstadt et al. (2016, p.2) report that it “makes little sense to consider the ethics of algorithms [or AI systems] independent of how they are implemented and executed in computer programs, software and information systems [at the implementation level of abstraction]”. I extend the claim to say that it would make little sense to consider the ethics of AI systems independent of who decides how they are implemented and configured. Therefore, the aim is to unravel how decision-making is structured within the design process of AI-system designers. But what exactly is the design process of AI systems?
Values & AI-system designers
Whether designers are aware of it or not, values are always internalised in the design process of AI systems. According to Russo et al. (2023, p.2), this design process consists of three steps: design, implementation, and use (akin to the implementation and configuration interpretations of Mittelstadt et al.). Unlike Mittelstadt et al. (2016), Russo et al. (2023, p.4) are interested in where in the process of design issues arise, not in what kind of ethical issues arise. The aim here, instead, is to look at how the decision-making structure of the design process shapes Russo et al.'s where in the process of design issues may arise. Because what is relevant is the implementation and configuration of AI systems, the decision-makers, and the incentives that influence their design processes and thus their choice of values, are what will be of interest. Incentives are motivators for decision-making; their pursuit, however, is a choice.
Incentives & benevolence
Incentives affect decision-making (Beck et al., 2018, p.1023), and decision-making is what embeds values in the design process of AI systems. Incentives mould the design process of AI systems at both the implementation and configuration steps. This can be justified both deductively and empirically. Deductively, for AI systems designed in a free-market capitalist economy, incentives beyond those that benefit the firm are mere acts of benevolence and are not structurally embedded. Whether it be Plato's idea of the Philosopher King (Reeve, 1988) or the 'benevolent dictator' described in Raymond's (2001) The Cathedral and the Bazaar, benevolent power can be a precarious stance. As Susskind (2018, p.168) notes, "A freedom that depends on the restraint of the powerful is no kind of freedom at all”. Benevolent decision-making that is not incentivised by free-market dynamics is left in the hands of the AI-system designers for both the implementation and configuration phases. Therefore, the internalised values of AI systems - at the configuration level especially - can be freely influenced by self-serving, market-driven incentives.
Shareholder dynamics & capital accumulation
On the other hand, empirically speaking, private-sector AI-system designers have been successful in focusing the larger innovation ecosystem on technologies that serve capital accumulation (Mazzucato et al., 2021). This is evident from the outcomes of the economic incentives of shareholder dynamics and from the more normatively laden findings of the Surveillance Capitalism thesis (Zuboff, 2019). Zuboff (2019, p.75) reintroduces the "logic of accumulation" to delineate the capitalist digital economy's flow of capital through the private sector. The logic's goal is to accumulate capital through the implementation of constraining processes that transform digital inputs into digital assets, which are then sold as digital products or services. This is what generates capital for the AI-system corporation (Zuboff, 2019, p.76; Mazzucato & Ramos, 2022). But who decides how this logic is best effectuated for private-sector AI corporations? Private-sector corporate governance has rested on the dominating notion of shareholder value for the last fifty years (Mazzucato et al., 2021). Shareholder theory is based on the belief that the ultimate goal of a corporation is to generate value for its shareholders. The primary objective of shareholder governance is to ensure that the interests of shareholders are aligned with those of the corporation, ultimately to best fulfil the logic of accumulation. In practice, the implementation of shareholder theory can vary depending on industry-specific factors (Mansell, 2013, p.125). However, what is endemic to any instance of shareholder theory is the incentive of capital accumulation. Shareholders are the fundamental decision-makers of a corporation: they hold decision-making rights and influence the management and direction of the corporation (Mansell, 2013, p.126). If certain values are to be embedded in the design process and others excluded, AI corporations ought to adopt new common market standards.
The fake garden of Eden: Recommending and regulating big tech
Pragmatically, some international organisations advocate particular ‘ethical common market standards’ frameworks for private-sector AI corporations, intending to steer private-sector incentives away from the effects of the logic of accumulation and toward a common good (Mazzucato & Ramos, 2022). Public institutions and public policy should aim to ensure that innovations improve the world, not merely serve the insular motives of private-sector designers. Governments should shape technological progression to serve the interest of the public (Mazzucato & Ramos, 2022; UNESCO, 2021, p.27). To achieve this, public institutions must at present appeal to the benevolence of AI corporations. UNESCO’s Recommendation on the Ethics of AI, the European Commission’s High-Level Expert Group on Artificial Intelligence, and the G20’s GlobalPolicy.ai are examples of multinational non-profit organisations that offer recommendations, standards, and ex-post (retrospectively imposed) policies that aim to maximise the pursuit of public value by private-sector AI corporations (UNESCO, 2021; GlobalPolicy.ai, 2019; European Commission, 2019). Mazzucato & Ramos (2022) claim that to ensure the development and use of AI is ethical, it is necessary to have a mission of "ethical by design" that is supported by effective regulation and responsible governments working towards a collective interest, rather than solely for the benefit of shareholders. When these fundamental principles are established, the private sector will be motivated to participate in the larger effort to make technology safer and more equitable. The hurdle is how this can be most effectively achieved. UNESCO's Recommendation on the Ethics of AI, adopted by 193 member states in November 2021, emphasises the importance of robust public oversight to ensure that the digitalisation and deployment of AI systems generate opportunities for creating public value. One of its objectives is guidance over the “actions of individuals, groups, communities, institutions and private sector companies to ensure the embedding of ethics in all stages of the AI system life cycle” (2021, p.15).
Don’t trust the fake flowers
The existence of such institutions and their ambitions is a step in the right direction. However, they do not directly address the structural issues raised above about shareholder dynamics and the logic of accumulation. The efforts are either legally non-binding or ex-post policy approaches: they either have no legal power to ensure the embedding of certain values in the AI system life cycle, or can seek legal recourse only after the fact. As a result, they seldom directly address corporate benevolence and the pursuit of capital accumulation, and they have limited real-world impact on what private-sector AI-system designers are currently capable of doing. Take the seemingly innocuous change of heart by OpenAI, ditching the open-source ethos in 2019 to partner with proprietary magnate Microsoft (Xiang, 2023): whilst legal, the partnership exacerbates the incentive to accumulate capital. Centralised corporate benevolence and the ramifications of the logic of accumulation, in conjunction with the prevalence of shareholder governance for AI corporations, demonstrate the need for a shift in how such corporations govern and to what end. But when shareholder governance is so economically beneficial, how can change occur? We are compelled to conceptualise more radical approaches to corporate governance. What about stakeholder governance: could this be the answer?
Corporate governance and granting stakeholder decision rights
Stakeholder corporate governance already poses an alternative to shareholder theory (Mansell, 2013, p.24; Freeman et al., 2004, p.364). However, it has one relevant limitation: stakeholder governance, as currently advocated and implemented (Mansell, 2013, p.24), enforces no accountability to guarantee real-world stakeholder recognition. The concept of governance, in its abstract form, refers to all social organisation and coordination processes involved for some entity (Bevir, 2008, p.4). In this context, we employ the term governance in a comprehensive sense, encompassing all processes of social organisation and coordination that pertain to an AI corporation’s design decisions, as mentioned previously, for both the implementation and configuration phases. In other words, the governance model of an AI-system designer determines who decides what values to embed in the configuration of an AI system and how that system is configured. Let's briefly define the shareholder and stakeholder governance models to see how values are embedded.
Shareholder & stakeholder governance
Shareholder governance is a model of corporate governance in which shareholders are the primary stakeholders. Shareholders have governance rights. Governance rights are obtained by buying company shares (which confer voting rights in proportion to the number of shares owned) and include the right to company information, the election of directors, fundamental company changes (sale of assets; amendments to charter, bylaws, and governing documents), making proposals, and opposing them (Mansell, 2013; Freeman et al., 2004). The elected board of directors is responsible for making strategic decisions about the company, including selecting and evaluating senior executives, setting executive compensation, steering corporate strategy, declaring dividends, and overseeing the company's financial performance (Mansell, 2013; Freeman et al., 2004). Under shareholder governance, the goal of the corporation, from both the shareholders' and the board’s perspective, is to maximise shareholder value through strategies that incentivise a logic of accumulation. To reduce this underlying focus on shareholder value, more stakeholders could be considered in the decision-making process of its governance.
Stakeholder theory, on the other hand, operates on the premise that values are an integral and overt component of conducting business. It requires managers to express the collective perception of the worth they generate and the unifying factors that bind their primary stakeholders (Freeman et al., 2004, p.365). This involves taking into account the social, environmental, and ethical impacts of the company's actions, as well as the interests of its employees, customers, and affected communities. Additionally, it urges the board and its executives to be transparent about the approach they intend to take in their business dealings, particularly regarding the type of associations they must establish with their stakeholders to fulfil their objectives. Stakeholder governance emphasises the importance of non-pecuniary incentives, accountability, and stakeholder participation in decision-making processes (Freeman et al., 2004).
However, the fulfilment of stakeholder values is in no way embedded in the structure of stakeholder models of governance. Structurally, the corporation is merely partaking in acts of benevolence towards its stakeholders. As Kate Raworth, the author of Doughnut Economics, put it (2021), “There is only so far that you can go within a [stakeholder] framework of business and finance in the mainstream stock market, there is only so far they will let you lean...and then they will pull you sharply back”, speaking about stakeholder-conscious corporations competing in the shareholder-dominated market. Corporate benevolence is not enough when the competition is a strategically saturated and capital-hungry share market.
Updating stakeholder governance
Let’s imagine an AI economy where common market standards altered the shareholder model such that accountability for fulfilling stakeholder interests is enforced by bringing stakeholders to the table of internal decision-makers. Under this dispensation, the internal governance model of AI corporations would be restructured. The gain is a democratised embedding of stakeholder values in the design process of the AI system. This position is not anti-OpenAI or anti-Microsoft per se. Rather, it is against the shareholder template of governance and the incentives that enable private actors to act against the interests of their stakeholders to satisfy their logic of accumulation. To say this is one thing; to implement it is another. Stakeholder governance requires an infrastructure that allows representative, reliable, and trusted decision-making processes. DAO infrastructures embed such affordances.
DAO-enabled ‘Accountable-Stakeholder Governance’
Shareholder governance for AI corporations functions on corporate benevolence and exacerbates capital accumulation for its constituents. Critical theorists, such as Iliadis and Russo (2016) and Feenberg (2017), have taken an interest in the use of information communication technology (ICT) in governance, in power dynamics, and in its effects on society as a whole. Although the governance literature has extensively explored the topic of power, the effects of ICT on power dynamics within corporate governance have not received much attention. Our goal is to investigate how DAOs, specifically, could alter the power dynamics of current shareholder models of governance. In theory, giving governance rights to the stakeholders of AI corporations democratises economically incentivised decision-making and thus reduces both the reliance on benevolence and the pull of accumulation. In practice, a suitable, ex-ante governance infrastructure, like that made possible by DAOs, helps implement such a theory.
Introducing DAOs
A Decentralised Autonomous Organisation (DAO) is a digitally enabled organisational structure that is bound by rules encoded as computer programs. An effective way to illustrate how DAOs work is to contrast them with digital blockchain wallets, smart contracts, and decentralised applications (DApps). Blockchain wallets are the gateway to human-blockchain interaction. Technically, they are asymmetric public-private key pairs used to execute transactions (where a transaction is a function call that interacts with some smart contract) (Antonopoulos & Wood, 2019, p.267; Anascavange, 2018). Individuals hold wallet key pairs and use their wallets to interact with whatever blockchain that wallet allows them to interact with. Further, a smart contract is code implemented and executed on some arbitrary blockchain. Smart contracts are autonomous, like any conventional program (they take an input and produce an output), but have the added feature of being immutable once deployed on a blockchain. Essentially, they are programs that run on, and are stored on, a blockchain (Antonopoulos & Wood, 2019, p.267; Anascavange, 2018). Smart contracts can only interact with other (autonomous) smart contracts or (human-operated) wallets. A decentralised application (DApp), in turn, is an application that is entirely implemented as smart contracts and only accessible via wallets. DApps are not ‘update-able’ but ‘replaceable’, as their smart contracts are immutable.
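To make these relations concrete, consider a minimal sketch in Python of the wallet/smart-contract interaction just described. This is a toy model, not an implementation of any real blockchain: all class and function names are invented, and an HMAC over a secret key stands in for the asymmetric signatures (e.g. ECDSA) that real wallets use.

```python
import hashlib
import hmac
import secrets


class Wallet:
    """A key pair: the address is shareable, the key stays private."""

    def __init__(self) -> None:
        self._private_key = secrets.token_bytes(32)
        # A real address derives from the *public* key; hashing the
        # private key here is purely a toy stand-in.
        self.address = hashlib.sha256(self._private_key).hexdigest()[:16]

    def sign(self, payload: str) -> str:
        return hmac.new(self._private_key, payload.encode(), "sha256").hexdigest()


class SmartContract:
    """An immutable program: after deployment only its state changes,
    and only via signed transactions from wallets."""

    def __init__(self) -> None:
        self.state: dict[str, int] = {}

    def execute(self, sender: Wallet, signature: str, call: str) -> None:
        # A transaction is a signed function call from a wallet.
        if not hmac.compare_digest(signature, sender.sign(call)):
            raise PermissionError("invalid signature")
        self.state[sender.address] = self.state.get(sender.address, 0) + 1


alice = Wallet()
contract = SmartContract()
contract.execute(alice, alice.sign("increment"), "increment")
```

The point the sketch makes is structural: the contract's logic is fixed at deployment, and the only way a human touches it is through a wallet-signed transaction.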
A DAO is an iteration of a DApp such that the decision-making process for the DApp's development and management is granted to a cooperative of wallet-holding members. While there is hesitancy in the literature about defining DAOs (Hassan & De Filippi, 2021, p.4), I propose a definition suitable for our purposes, requiring only that any configuration of the set of smart contracts implemented as a DApp be executable solely by on-chain (function-call) consensus from a distribution of more than one wallet. In other words, DAOs are what allow a group of wallet-holding members to digitally vote (on-chain) about the configuration of the set of smart contracts the DApp is composed of.
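A minimal sketch of this definition, again in Python with invented names: a toy DAO in which a configuration change to the DApp's contracts executes only once more than one distinct wallet has voted for it.

```python
from dataclasses import dataclass, field


@dataclass
class Proposal:
    description: str                                  # e.g. "replace pricing contract with v2"
    votes_for: set[str] = field(default_factory=set)  # wallet addresses
    executed: bool = False


class ToyDAO:
    def __init__(self, members: set[str], quorum: int = 2) -> None:
        # Consensus must come from a distribution of wallets larger than one.
        assert quorum > 1
        self.members = members
        self.quorum = quorum
        self.proposals: list[Proposal] = []

    def propose(self, description: str) -> Proposal:
        proposal = Proposal(description)
        self.proposals.append(proposal)
        return proposal

    def vote(self, proposal: Proposal, wallet_address: str) -> None:
        if wallet_address in self.members:
            proposal.votes_for.add(wallet_address)
        # Execution is automatic once consensus is reached: no single
        # wallet can reconfigure the DApp alone.
        if len(proposal.votes_for) >= self.quorum and not proposal.executed:
            proposal.executed = True  # here: swap in the new contract


dao = ToyDAO(members={"0xalice", "0xbob", "0xcarol"}, quorum=2)
p = dao.propose("replace pricing contract with v2")
dao.vote(p, "0xalice")
dao.vote(p, "0xbob")   # consensus reached; the change executes
assert p.executed
```

The `quorum > 1` assertion is what encodes the definitional requirement above: a "DAO" governed by a single wallet would collapse back into a centrally controlled DApp.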
DAO governance: rights, accountability, and incentives
Once a DAO is deployed, decisions about both the development of the DAO and how the DAO is governed are voted on and executed on-chain as ‘proposals’. According to Weill (2004, p.3), “IT governance represents the framework for decision rights and accountabilities to encourage desirable behaviour in the use of IT”. Weill's analysis concludes that IT governance is composed of three fundamental aspects: the allocation of decision-making authority (decision rights), the assignment of responsibility for outcomes (accountability), and the implementation of motivating factors (incentives). Beck et al. (2018) claim that DAOs demand a reassessment of such notions of governance. Following Weill, the sections below investigate how these governance affordances (decision rights, accountability, and incentives) should be allocated within DAO-enabled corporations to address the issues of corporate benevolence and capital accumulation for AI corporations.
Governance rights of AI-system stakeholders
The first governance affordance to consider for DAO-enabled accountable-stakeholder governance is twofold: how DAO governance rights are granted, and what decision-making power those rights constitute. If the goal is to grant governance rights to relevant stakeholders without share acquisition, a mechanism must be established that is effective at conferring such status without conflicts of interest. Fama and Jensen (1983) classify governance rights into two categories: decision management rights, which enable the creation and execution of decision proposals, and decision control rights, which deal with the monitoring and formation of the decision system. Following Honkanen et al. (2021), we highlight two corresponding governance mechanisms used to update and maintain the decision rights of DAO members: management rights over on-chain AI-system design proposals, and control rights over on-chain DAO configuration. The degree of management and control rights attributed to DAO stakeholders will be felt in the decision-making outcomes of the board of members and executives: the more rights attributed to stakeholders, the more the corporation's outputs will maximise stakeholder value. Granting governance rights to relevant stakeholders, for example the European Commission’s High-Level Expert Group on Artificial Intelligence, would give configuration power to a body that is incentivised to embed values beyond those of capital accumulation. In turn, this would increase the decentralisation of decision-making, reducing the risk of centralised corporate benevolence. For this to take place, there must be a mechanism to ensure accountability for decision-making.
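A sketch of how this Fama-Jensen split might be encoded in a DAO: decision management rights gate AI-system design proposals, while decision control rights gate changes to the DAO's own decision system. The rights assignments, thresholds, and names here are illustrative, not a specification.

```python
from enum import Enum, auto


class Right(Enum):
    MANAGEMENT = auto()   # create/execute AI-system design proposals
    CONTROL = auto()      # monitor/reshape the decision system itself


class GovernedDAO:
    def __init__(self, rights: dict[str, set[Right]]) -> None:
        self.rights = rights          # wallet address -> granted rights
        self.quorum = 2

    def _holders(self, right: Right) -> set[str]:
        return {addr for addr, rs in self.rights.items() if right in rs}

    def submit_design_proposal(self, addr: str, change: str) -> bool:
        # Only management-rights holders may put design changes to a vote.
        return addr in self._holders(Right.MANAGEMENT)

    def amend_voting_rules(self, addr: str, new_quorum: int) -> bool:
        # Only control-rights holders may reconfigure the decision system.
        if addr not in self._holders(Right.CONTROL):
            return False
        self.quorum = new_quorum
        return True


# A stakeholder body might hold control rights without management rights,
# letting it shape *how* decisions are made without making them itself.
dao = GovernedDAO({
    "0xdesigner": {Right.MANAGEMENT},
    "0xstakeholder_body": {Right.CONTROL},
})
assert dao.submit_design_proposal("0xdesigner", "retrain model on audited data")
assert dao.amend_voting_rules("0xstakeholder_body", new_quorum=3)
```

Note the design choice the split makes possible: a non-profit stakeholder body can be given power over the decision system without having to run the corporation's day-to-day design process.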
Ex-ante technical accountability
Accountable agents in Information-System design answer for the actions they take and the outcomes that result. For this to be effectuated, accountability must be specified, enacted, and enforced successfully (Weitzner et al., 2008; Beck et al., 2018, p.1023). In other words, agents are held accountable for their actions and outcomes through embedded, effective enforcement mechanisms. Enforcement mechanisms on DAOs are collaboratively specified by those with governance rights (as discussed above), enacted by its stakeholder members, and ex-ante enforced by its constituent smart contracts. Those accountable for actions taken on the DAO are precisely those who are part of the DAO, and their actions are prospectively constrained through IT infrastructure rather than retrospectively in inter- or intra-organisational settings. Accountable agents enact decision-making just in the way it is specified and enforced by the DAO. Enforcing ex-ante mechanisms of accountability on all DAO members decreases the reliance on corporate benevolence for two reasons. The first is that decisions are constrained by what is enforced by the DAO; the second is that the specifications that produce such constraints are defined by a decentralised set of stakeholders (as described in the previous section).
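The contrast between ex-ante and ex-post enforcement can be shown in a few lines. In the sketch below, the constraint is checked inside the contract before the action executes, rather than audited after the fact; the specific rule (a members-approved spending cap) and all names are invented for illustration.

```python
class AccountableTreasury:
    """A toy contract: the constraint is part of the deployed code."""

    def __init__(self, spending_cap: int) -> None:
        # The cap is specified collaboratively by governance-rights
        # holders and baked into the contract at deployment.
        self.spending_cap = spending_cap
        self.log: list[tuple[str, int]] = []   # every action is recorded

    def spend(self, member: str, amount: int) -> None:
        # Ex-ante enforcement: a non-compliant action simply cannot occur.
        if amount > self.spending_cap:
            raise PermissionError("exceeds the cap specified by the DAO")
        self.log.append((member, amount))      # attributable to its agent


treasury = AccountableTreasury(spending_cap=1_000)
treasury.spend("0xalice", 400)        # allowed, and logged against Alice
try:
    treasury.spend("0xbob", 5_000)    # blocked before it ever happens
except PermissionError:
    pass
```

No regulator or court is needed to unwind the second transaction, because it was never possible; that is the sense in which the infrastructure, not benevolence, carries the accountability.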
Non-pecuniary incentives
Incentive alignment occurs when the embedded features of an Information-System design organically maintain the design objective (Beck et al., 2018, p.1023). For the consensus mechanism of the blockchain the DAO runs on, the design objective is to maintain congruence of the information state through decentralised consensus (Bains, 2022, p.5); common mechanisms such as Proof-of-Work and Proof-of-Stake embed pecuniary incentives (Bains, 2022, p.8). For the governance mechanism of the DAO itself, on the other hand, the design objective is set by the DAO members - in our case, relevant stakeholders. Because stakeholders are not necessarily for-profit institutions (as exemplified earlier), the features embedded to satisfy the design objective need not be wholly pecuniary, since that is not in the interest of non-profit stakeholders. The logic of accumulation is thereby disrupted within the internal governance mechanism of DAO-enabled AI corporations.
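One concrete way a DAO's design can encode pecuniary or non-pecuniary incentives is its vote-weighting rule. The sketch below contrasts token-weighted voting, where influence scales with capital staked, against one-member-one-vote, where it does not; both functions and all addresses are hypothetical.

```python
def token_weighted(votes: dict[str, bool], holdings: dict[str, int]) -> bool:
    """Influence scales with capital staked: a pecuniary incentive."""
    weight_for = sum(holdings[a] for a, v in votes.items() if v)
    weight_against = sum(holdings[a] for a, v in votes.items() if not v)
    return weight_for > weight_against


def one_member_one_vote(votes: dict[str, bool]) -> bool:
    """Influence is equal per member: no premium on accumulation."""
    return sum(votes.values()) > len(votes) / 2


votes = {"0xngo": True, "0xuniversity": True, "0xinvestor": False}
holdings = {"0xngo": 10, "0xuniversity": 5, "0xinvestor": 1_000}
assert token_weighted(votes, holdings) is False   # capital dominates
assert one_member_one_vote(votes) is True         # members dominate
```

Under the first rule the best-capitalised member decides; under the second, the distribution of members does. Which rule a DAO adopts is itself a control-rights decision of the kind described above.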
Conclusion
The aim has been to examine how DAOs can change the power dynamics within current shareholder governance models for AI corporations by giving stakeholders governance rights. Shareholder governance relies on corporate benevolence and is incentivised to amplify capital accumulation for its constituents. By granting governance rights to stakeholders, decentralisation of decision-making is increased. By establishing accountability mechanisms for members, reliance on benevolence is minimised. And because non-profit stakeholders have non-pecuniary motives, incentive alignment for AI corporations under this model of governance no longer necessitates capital accumulation. Considerable research remains to be done to delineate comprehensively what accountable-stakeholder governance could look like in the full context of AI-system economies, taking into consideration legal boundaries, technological options, and economic models. The role of this piece has been to give initial philosophical context to the issues of corporate benevolence and the logic of accumulation proliferating in the current private-sector AI ecosystem.