
In 2015, OpenAI was founded as a non-profit artificial intelligence research company committed to open-sourcing its work. Its initial charter states its incentives: “Unconstrained by a need to generate financial return. As a non-profit, our aim is to build value for everyone rather than shareholders...our patents (if any) will be shared with the world.” (). In 2018, OpenAI sought capital to fund growth and to find a more concrete research direction, and its charter was updated accordingly (). In 2019, OpenAI relinquished its purely non-profit status to establish a “capped-profit” division, OpenAI LP, allowing the company to accept investments while limiting returns to investors to a maximum of 100 times their initial investment (). The general partnership between OpenAI LP and the OpenAI Nonprofit claims to retain moral integrity: “OpenAI LP’s primary fiduciary obligation is to advance the aims of the OpenAI Charter, and the company is controlled by OpenAI Nonprofit’s board. All investors and employees sign agreements that OpenAI LP’s obligation to the Charter always comes first, even at the expense of some or all of their financial stake.” No official updates on OpenAI Nonprofit’s current board members or OpenAI LP’s investment partners have been given since 2019. OpenAI LP has since received $11 billion USD in investment from Microsoft.
Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees.
The limited partnership could be extended to more stakeholders.
Values are ipso facto internalised in the design process of intellectual property, where by ‘design process’ we mean the design, implementation, and use of the object (in some socio-economic context). Algorithmic systems within the ever-changing bounds of Artificial Intelligence are no exception. Any discussion of ethical algorithmic systems must therefore act at the design level: an appeal to an epistemology-cum-ethics approach to decision-making.
For proprietary algorithmic intellectual property, we see ex-post ethical issues arising at every stage of the design process: design, implementation, and use. Let us centre the socio-economic incentives of the decision-makers and their effect on the whole design process. For proprietary AI firms in a capitalist economy, incentives beyond the benefit of the firm are a mere act of benevolence. Therefore, the values internalised in such firms’ algorithmic systems - at the design, implementation, and use levels - are decided (explicitly or implicitly) by self-serving, market-driven incentives.
How can we shift some of the ‘design process’ power legally granted to proprietary AI firms, towards a more distributed decision-making process that includes other interested parties?
While any normative intrusion by a higher member of an arbitrary hierarchy would result in a never-ending system of centralised checks and balances, our approach is to democratise decision-making power in the design process across more factions of socio-economic society.

Random Notes
While some international organisations have recommended ethical frameworks for private-sector AI system designers, this approach highlights deeper infrastructural issues with AI designers’ incentives that such ethical frameworks do not address: it calls for radical, incentive-affecting changes.

Give some context as to what the field is discussing atm about this stuff (the internet and big tech).
Sunset: limited time.

Locking things private or locking things open.
The libertarian approach is that everything is private property - a fully proprietarian approach to everything.
Sunsetting and severing approaches are limits on the proprietarian approach: things are private, until they are not, and then they are open. But who pays them?
The space between: this is about governance between the two extremes. The common ownership approach explores the territory between the two ends of the spectrum.
Wikipedia’s lock-open approach allows things to be appropriated. If all of the value is in the open for anybody to pick up, then whoever has the biggest vacuum cleaner can vacuum it all up - this is what is worrying about data processing and AI. Not only are the creators of AI data sets not being compensated, but the

The novel private-ownership theory of Partial Common Ownership (PCO) helps change the market-power incentives of private-sector platform designers, whilst stakeholder governance models address the issue of wealth extraction. Combining the two yields a stakeholder-governance model of PCO intellectual-property ownership.
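As a minimal sketch of the incentive mechanism PCO builds on (a Harberger-style self-assessed tax): the owner of an asset declares its price, pays a periodic tax on that declared value, and must sell to anyone willing to pay it. The class, names, and tax rate below are all illustrative assumptions, not a real system:

```python
# Illustrative sketch of a Partial Common Ownership (Harberger-style) licence.
# All names and the 7% rate are hypothetical, chosen only for the example.

class PCOAsset:
    def __init__(self, owner, self_assessed_price, tax_rate=0.07):
        self.owner = owner
        self.price = self_assessed_price  # owner's own declared valuation
        self.tax_rate = tax_rate          # periodic tax on the declared value

    def periodic_tax(self):
        # The owner pays tax proportional to their own valuation:
        # overvaluing costs tax; undervaluing invites a forced sale.
        return self.price * self.tax_rate

    def buy(self, buyer, new_price):
        # Anyone may take over the asset at the declared price,
        # after which the buyer declares their own valuation.
        previous_owner, sale_price = self.owner, self.price
        self.owner, self.price = buyer, new_price
        return previous_owner, sale_price

model = PCOAsset(owner="FirmA", self_assessed_price=1_000_000)
print(model.periodic_tax())        # 70000.0
seller, paid = model.buy("Co-opB", new_price=1_200_000)
print(seller, paid, model.owner)   # FirmA 1000000 Co-opB
```

The self-assessment is what reshapes incentives: the declared price is simultaneously the tax base and the always-available sale price, so an owner cannot both hoard an asset and undervalue it. A stakeholder-governance layer would then decide how tax proceeds and takeover rights are distributed.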
