Unified Data and AI principles

The SDE foundation incorporates a wide range of source documents. Because the same principles are often presented in different ways in different contexts, this page attempts to give an overview of the principles that emerge when we apply the Framework as a categorisation tool.

Agency (6 principles)
No autonomous weapons
The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
AI in the UK: ready, willing and able? (An AI Code)
Design Fairness
Because human beings have a hand in all stages of the construction of AI systems, fairness-aware design must take precautions across the AI project workflow to prevent bias from having a discriminatory influence.
Understanding AI ethics and safety
Aspire to design practices that incorporate transparency, configurability, accountability, and auditability
Not all ethical dilemmas have design solutions. But paying close attention to design practices can break down many of the practical barriers that stand in the way of shared, robust ethical standards. Data ethics is an engineering challenge worthy of the best minds in the field.
Accenture’s universal principles of data ethics
Humanity and human care
Data analytics and AI solutions are designed and developed around and for people. In particular, data and AI solutions are designed to augment, complement and empower human cognitive, social and cultural skills.
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Shared Benefit
AI technologies should benefit and empower as many people as possible.
ASILOMAR AI PRINCIPLES
Be Ethical
AI R&D should take ethical design approaches to make the system trustworthy. This may include, but is not limited to: making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability, and predictability, and making the system more traceable, auditable and accountable.
Beijing AI Principles
Materials (4 principles)
The consequences of utilizing data and analytical tools today are shaped by how they’ve been used in the past.
There’s no such thing as raw data. All datasets and accompanying analytic tools carry a history of human decision-making. As far as possible, that history should be auditable. This should include mechanisms for tracking the context of collection, methods of consent, chains of responsibility, and assessments of data quality and accuracy.
Accenture’s universal principles of data ethics
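The auditable history this principle calls for can be made concrete as a lightweight provenance record attached to a dataset. A minimal sketch in Python; the class and field names are illustrative assumptions, not part of any cited framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceEvent:
    """One human decision in the dataset's history."""
    actor: str      # who made the decision (chain of responsibility)
    action: str     # what was done, e.g. "collected", "cleaned", "merged"
    context: str    # context of collection or processing
    timestamp: str

@dataclass
class DatasetProvenance:
    """Auditable history: collection context, consent, quality notes."""
    name: str
    consent_method: str    # how consent was obtained
    quality_notes: str     # known accuracy / quality caveats
    history: list = field(default_factory=list)

    def record(self, actor, action, context):
        self.history.append(ProvenanceEvent(
            actor, action, context,
            datetime.now(timezone.utc).isoformat()))

# Hypothetical example dataset
prov = DatasetProvenance("survey_2021", consent_method="opt-in web form",
                         quality_notes="self-reported income; ~5% missing")
prov.record("data-team", "collected", "customer satisfaction survey")
prov.record("analyst-a", "cleaned", "dropped rows with missing consent flag")
```

Attaching such a record to every dataset makes the "history of human decision-making" inspectable rather than implicit.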
Be wary of collecting data just for the sake of having more data
The power and peril of data analytics is that data collected today will be useful for unpredictable purposes in the future. Give due consideration to the possibility that less data may result in both better analysis and less risk.
Accenture’s universal principles of data ethics
Data quality and integrity
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Accuracy
Identify, log, and articulate sources of error and uncertainty throughout the algorithm and its data sources so that expected and worst case implications can be understood and inform mitigation procedures.
Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
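One way to operationalise this principle is to keep an explicit register of error sources alongside the model, each with its expected and worst-case implications and a mitigation. A minimal illustrative sketch (the register entries describe a hypothetical churn model, not any cited system):

```python
from dataclasses import dataclass

@dataclass
class ErrorSource:
    """A known source of error or uncertainty in the pipeline."""
    stage: str        # where it enters: "data", "model", "deployment"
    description: str
    expected_impact: str
    worst_case: str
    mitigation: str

# Illustrative register for a hypothetical churn model.
register = [
    ErrorSource("data", "customer age is self-reported",
                "small noise in age-based features",
                "systematic misestimation for one segment",
                "cross-check against verified records"),
    ErrorSource("model", "training data ends in 2020",
                "gradual drift in accuracy",
                "model invalid after a market change",
                "scheduled retraining and drift monitoring"),
]

def worst_cases(register):
    """Articulate worst-case implications to inform mitigation."""
    return {e.stage: e.worst_case for e in register}
```

The register can then be logged with each model release so that worst-case implications are reviewed, not rediscovered.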
Compliance (2 principles)
Always follow the law, but understand that the law is often a minimum bar.
Digital transformations have become a standard evolutionary path for businesses and governments. However, because laws have largely failed to keep up with the pace of digital innovation and change, existing regulations are often miscalibrated to current risks. In this context, compliance means complacency. To excel in data ethics, leaders must define their own compliance frameworks to outperform legislated requirements.
Accenture’s universal principles of data ethics
Recursive Self-Improvement
AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
ASILOMAR AI PRINCIPLES
Inclusivity (10 principles)
Create a culture of questioning
Tech ethics and innovation are evolving at different speeds—after all, we can’t imagine every future consequence, intended or otherwise. But we can recognize, challenge, and question the decisions we make along the way. The more we use our voices, the more we’ll inspire others to do the same.
Ethical Explorer
Virtue
Does this option lead me to act as the sort of person I want to be? What character traits would I be exhibiting if I chose this action? Honesty or deceit? Compassion or selfishness? Prudence or irresponsibility? What habits of character would I be developing if I took this action? What would a person I respect say about this choice?
A Framework for Ethical Decision Making
Products and research practices should be subject to internal (and potentially external) ethical review.
Organizations should prioritize establishing consistent, efficient, and actionable ethics review practices for new products, services, and research programs. Internal peer-review practices help to mitigate risk, and an external review board can contribute significantly to public trust.
Accenture’s universal principles of data ethics
Engage
We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
Tenets
Diversity
We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
Tenets
Understanding, addressing and balancing competing needs
We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
Tenets
Cooperation
We strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.
Tenets
Be Diverse and Inclusive
The development of AI should reflect diversity and inclusiveness, and be designed to benefit as many people as possible, especially those who would otherwise be easily neglected or underrepresented in AI applications.
Beijing AI Principles
Demand diversity near you.
If small biases created the mess we're in, small anti-biases might fix it. Look around you. Your friends, your colleagues, that conference you're attending. If you're all triangles, you're missing out on some amazing squares in your life - that's unfair to everyone. Reach out, beyond your immediate neighbors.
Parable of the polygons
Inclusion
The waning of the authority of expert, top-down policy-making has been associated with a rise in the inclusion of new voices in the governance of science and innovation as part of a search for legitimacy.

Over the last two decades, particularly in Northern Europe, new deliberative forums on issues involving science and innovation have been established, moving beyond engagement with stakeholders to include members of the wider public.

These small-group processes of public dialogue, usefully described as ‘mini-publics’, include consensus conferences, citizens’ juries, deliberative mapping, deliberative polling and focus groups.
Developing a framework for responsible innovation
Fairness (32 principles)
Fairness
Artificial intelligence should operate on principles of intelligibility and fairness.
AI in the UK: ready, willing and able? (An AI Code)
Rights
Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
AI in the UK: ready, willing and able? (An AI Code)
Fairness
It is crucial to eliminate your project’s potential to have unintended discriminatory effects on individuals and social groups. You should aim to mitigate biases which may influence your model’s outcome and ensure that the project and its outcomes respect the dignity of individuals, are just, non-discriminatory, and consistent with the public interest, including human rights and democratic values.
Gov.uk Data ethics framework
Support human values
When we design for convenience and engagement above all else, we only acknowledge a narrow view of the human experience. The Ethical Explorer Pack helps reframe our thinking so that we consider the impact of what we build. This will lead to tech products that value fundamental human rights, empower users, and create healthy online experiences.
Ethical Explorer
Data rights
The UK GDPR provides the following rights for individuals:
The right to be informed
The right of access
The right to rectification
The right to erasure
The right to restrict processing
The right to data portability
The right to object
Rights in relation to automated decision making and profiling
GDPR
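When building a subject-access-request workflow, the rights listed above can be modelled directly as a data structure. A minimal illustrative sketch; the enum member names and the routing rule are assumptions for the example, not part of the UK GDPR text:

```python
from enum import Enum

class DataSubjectRight(Enum):
    """The individual rights provided by the UK GDPR."""
    INFORMED = "the right to be informed"
    ACCESS = "the right of access"
    RECTIFICATION = "the right to rectification"
    ERASURE = "the right to erasure"
    RESTRICT_PROCESSING = "the right to restrict processing"
    PORTABILITY = "the right to data portability"
    OBJECT = "the right to object"
    AUTOMATED_DECISIONS = ("rights in relation to automated "
                           "decision making and profiling")

def route_request(right: DataSubjectRight) -> str:
    # Illustrative routing: rights that change stored data go to the
    # data-management queue; the rest are handled as informational requests.
    if right in (DataSubjectRight.ERASURE, DataSubjectRight.RECTIFICATION,
                 DataSubjectRight.RESTRICT_PROCESSING):
        return "data-management"
    return "information"
```

Enumerating the rights in code makes it harder for a request-handling process to silently omit one of them.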
Rights
Does my action best respect the rights of all who have a stake? Does this action respect the dignity of others? If I take this action, am I treating others simply as a means to an end? Does the action hurt or help others in securing a minimum level of well-being?
A Framework for Ethical Decision Making
Justice
Does this action treat people equally or proportionally? Does it give each person affected his or her due? Might I have some prejudice or interest that might make me favour one person over another? Am I treating each individual the same way, or is there a valid reason to treat someone differently?
A Framework for Ethical Decision Making
The highest priority is to respect the persons behind the data.
Where insights derived from data could impact the human condition, the potential harm to individuals and communities should be the paramount consideration. Big data can produce compelling insights into populations, but those same insights can be used to unfairly limit an individual’s possibilities.
Accenture’s universal principles of data ethics
Respect human agency
Respect human beings’ ability to make their own free choices. Do not mislead or manipulate customers to act against their own interests, or unduly constrain customers’ access to information. Unless there is an overriding public interest not to do so, customers should be able to tell or check when they are engaging with artificial intelligence or automated decisions, and there should be an appropriate level of human control over these systems, including an appropriate avenue for customers to challenge important automated decisions.
Big Data and AI Executive Survey 2021
Safeguard equality and fairness
Treat customers fairly and respect their basic rights. Identify potential negative impacts of processing on customers and carefully weigh these against the anticipated benefits to ensure proportionality. This includes material impacts such as financial loss and also less tangible impacts on basic rights, such as privacy. Identify and evaluate risks of unfair bias and discrimination that can occur through the data itself or through the human bias within the workforce programming the AI algorithm.
Big Data and AI Executive Survey 2021
Human centricity
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Shared prosperity
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Understanding, addressing and balancing competing needs
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Human autonomy, empowerment and oversight
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Do not harm (fairness, safety, cybersecurity and system performance)
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Human empowerment
Data and AI solutions are designed to empower humans to understand and challenge their operation.
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Human oversight
The autonomy of machines is restricted and made intrinsically reversible.
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Fairness
Ensure that algorithmic decisions do not create discriminatory or unjust impacts when comparing across different demographics (e.g. race, sex, etc.).
Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
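One common way to compare algorithmic decisions across demographics, as this principle asks, is to measure the gap in positive-outcome rates between groups (the demographic parity difference). A minimal sketch under illustrative assumptions; the example data and group labels are hypothetical, and a real audit would use several fairness metrics, not this one alone:

```python
from collections import defaultdict

def positive_rates(decisions):
    """decisions: list of (group, outcome) pairs, outcome is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in positive-outcome rates between any two groups."""
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions for two groups "a" and "b".
decisions = [("a", 1), ("a", 1), ("a", 0), ("a", 1),   # group a: 75% positive
             ("b", 1), ("b", 0), ("b", 0), ("b", 0)]   # group b: 25% positive
gap = parity_gap(decisions)
```

A large gap does not by itself prove discrimination, but it flags exactly the kind of cross-demographic disparity this principle says must be examined.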
Value Alignment
Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
ASILOMAR AI PRINCIPLES
Human Values
AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
ASILOMAR AI PRINCIPLES
Personal Privacy
People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
ASILOMAR AI PRINCIPLES
Liberty and Privacy
The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
ASILOMAR AI PRINCIPLES
Human Control
Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
ASILOMAR AI PRINCIPLES
Human Rights
A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
Ethically Aligned Design
Awareness of Misuse
A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
Ethically Aligned Design
For Humanity
The R&D of AI should serve humanity and conform to human values as well as the overall interests of mankind. Human privacy, dignity, freedom, autonomy, and rights should be sufficiently respected. AI should not be used against human beings, nor to exploit or harm them.
Beijing AI Principles
Use Wisely and Properly
Users of AI systems should have the necessary knowledge and ability to make the system operate according to its design, and have sufficient understanding of the potential impacts to avoid possible misuse and abuse, so as to maximize its benefits and minimize the risks.
Beijing AI Principles
Informed Consent
Measures should be taken to ensure that stakeholders of AI systems have sufficient informed consent about the impact of the system on their rights and interests. When unexpected circumstances occur, reasonable data and service revocation mechanisms should be established to ensure that users' own rights and interests are not infringed.
Beijing AI Principles
The principle of respect for human autonomy
The fundamental rights upon which the EU is founded are directed towards ensuring respect for the freedom and autonomy of human beings. Humans interacting with AI systems must be able to keep full and effective self-determination over themselves, and be able to partake in the democratic process. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills. The allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight over work processes in AI systems. AI systems may also fundamentally change the work sphere. They should support humans in the working environment, and aim for the creation of meaningful work.
Ethics guidelines for trustworthy AI
The principle of fairness
The development, deployment and use of AI systems must be fair. While we acknowledge that there are many different interpretations of fairness, we believe that fairness has both a substantive and a procedural dimension. The substantive dimension implies a commitment to: ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation. If unfair biases can be avoided, AI systems could even increase societal fairness. Equal opportunity in terms of access to education, goods, services and technology should also be fostered. Moreover, the use of AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice. Additionally, fairness implies that AI practitioners should respect the principle of proportionality between means and ends, and consider carefully how to balance competing interests and objectives. The procedural dimension of fairness entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them. In order to do so, the entity accountable for the decision must be identifiable, and the decision-making processes should be explicable.
Ethics guidelines for trustworthy AI
THE HUMAN BEING AT THE CENTRE
Human interests always prevail over institutional and commercial interests. People are not computer processes or pieces of software; they are unique, with empathy, self-determination, unpredictability, intuition and creativity, and therefore have a higher status than machines. The human being is at the centre and has the primary benefit of data processing.
Data Ethics Principles
INDIVIDUAL DATA CONTROL
Humans should be in control of their data and empowered by their data. A person’s self-determination should be prioritised in all data processes and the person should be actively involved with regard to the data recorded about them. The individual has the primary control over the usage of their data, the context in which their data is processed and how it is activated.
Data Ethics Principles
Infrastructure (7 principles)
Security
The goal of security encompasses the protection of several operational dimensions of an AI system when confronted with possible adversarial attack. A secure system is capable of maintaining the integrity of the information that constitutes it. This includes protecting its architecture from the unauthorised modification or damage of any of its component parts. A secure system also remains continuously functional and accessible to its authorised users and keeps confidential and private information secure even under hostile or adversarial conditions.
Understanding AI ethics and safety
Seek to match privacy and security safeguards with privacy and security expectations.
Data subjects hold a range of expectations about the privacy and security of their data. These expectations are often context-dependent. Designers and data professionals should give due consideration to those expectations and align safeguards and expectations with them, as much as possible.
Accenture’s universal principles of data ethics
Adopting ethical data privacy best practices
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Safety
AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
ASILOMAR AI PRINCIPLES
Data Agency
A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
Ethically Aligned Design
Competence
A/IS creators shall specify and operators shall adhere to the knowledge and skill required for safe and effective operation.
Ethically Aligned Design
Accuracy/Trust
The AI system must be true and fair. By design, it should be safe, trustworthy and free from bias or prejudice, with sufficient checks built into processes to ensure it remains uncorrupted.
Aletheia
Application (9 principles)
Reliability
The objective of reliability is that an AI system behaves exactly as its designers intended and anticipated. A reliable system adheres to the specifications it was programmed to carry out. Reliability is therefore a measure of consistency and can establish confidence in the safety of a system based upon the dependability with which it operationally conforms to its intended functionality.
Understanding AI ethics and safety
Robustness
The objective of robustness can be thought of as the goal that an AI system functions reliably and accurately under harsh conditions. These conditions may include adversarial intervention, implementer error, or skewed goal-execution by an automated learner (in reinforcement learning applications). The measure of robustness is therefore the strength of a system’s integrity and the soundness of its operation in response to difficult conditions, adversarial attacks, perturbations, data poisoning, and undesirable reinforcement learning behaviour.
Understanding AI ethics and safety
Explainability
Ensure that algorithmic decisions as well as any data driving those decisions can be explained to end-users and other stakeholders in non-technical terms.
Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
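For simple scoring models, a non-technical explanation can be generated directly from each feature's contribution to the score. A minimal sketch, assuming a linear model; the weights, feature names and wording below are hypothetical, and opaque models would need other techniques (e.g. surrogate explanations):

```python
def explain(weights, values, feature_names, top_n=2):
    """Render the largest contributions to a linear score as plain text."""
    contributions = [(name, weights[name] * values[name])
                     for name in feature_names]
    # Largest absolute contribution first.
    contributions.sort(key=lambda nc: abs(nc[1]), reverse=True)
    parts = []
    for name, c in contributions[:top_n]:
        direction = "raised" if c > 0 else "lowered"
        parts.append(f"your {name} {direction} the score")
    return "The main factors: " + " and ".join(parts) + "."

# Hypothetical credit-style scoring example.
weights = {"income": 0.5, "missed payments": -2.0, "account age": 0.1}
values = {"income": 1.2, "missed payments": 3.0, "account age": 4.0}
text = explain(weights, values, list(weights))
```

The point is the output format: a sentence a non-specialist can read, rather than a table of coefficients.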
Failure Transparency
If an AI system causes harm, it should be possible to ascertain why.
ASILOMAR AI PRINCIPLES
Effectiveness
A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
Ethically Aligned Design
Transparency
The basis of a particular A/IS decision should always be discoverable.
Ethically Aligned Design
Accountability
A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
Ethically Aligned Design
Understandable
We believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
Tenets
The principle of explicability
Explicability is crucial for building and maintaining users’ trust in AI systems. This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected. Without such information, a decision cannot be duly contested. An explanation as to why a model has generated a particular output or decision (and what combination of input factors contributed to that) is not always possible. These cases are referred to as ‘black box’ algorithms and require special attention. In those circumstances, other explicability measures (e.g. traceability, auditability and transparent communication on system capabilities) may be required, provided that the system as a whole respects fundamental rights. The degree to which explicability is needed is highly dependent on the context and the severity of the consequences if that output is erroneous or otherwise inaccurate.
Ethics guidelines for trustworthy AI
Consequences (11 principles)
Utilitarian
Does this action produce the most good and do the least harm for all who are affected? What good and what harm will or may result? While the potential harm from this action may affect only a few people, is the harm so great that it would outweigh the good this action might bring to many others?
A Framework for Ethical Decision Making
Account for the downstream uses of datasets
Data professionals should strive to use data in ways that are consistent with the intentions and understanding of the disclosing party. Many regulations govern datasets on the basis of the status of the data: “public,” “private” or “proprietary”, for example. But what is done with datasets is ultimately more consequential to subjects/users than the type of data or the context in which it is collected. Correlative use of repurposed data in research and industry represents the greatest promise and the greatest risk of data analytics.
Accenture’s universal principles of data ethics
Data can be a tool of both inclusion and exclusion
While everyone should have access to the social and economic benefits of data, not everyone is equally impacted by the processes of data collection, correlation, and prediction. Data professionals should strive to mitigate the disparate impacts of their products and listen to the concerns of affected communities.
Accenture’s universal principles of data ethics
Well-being
A/IS creators shall adopt increased human well-being as a primary success criterion for development.
Ethically Aligned Design
Dialogue
We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.
Tenets
Benefit for the taxpayers and appropriate use of public resources in your project
We will work to maximize the benefits and address the potential challenges of AI technologies.
Tenets
Be Responsible
Researchers and developers of AI should have sufficient considerations for the potential ethical, legal, and social impacts and risks brought in by their products and take concrete actions to reduce and avoid them.
Beijing AI Principles
Education and Training
Stakeholders of AI systems should be able to receive education and training to help them adapt to the impact of AI development in psychological, emotional and technical aspects.
Beijing AI Principles
The principle of prevention of harm
AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings. This entails the protection of human dignity as well as mental and physical integrity. AI systems and the environments in which they operate must be safe and secure. They must be technically robust and it should be ensured that they are not open to malicious use. Vulnerable persons should receive greater attention and be included in the development, deployment and use of AI systems. Particular attention must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers or governments and citizens. Preventing harm also entails consideration of the natural environment and all living beings.
Ethics guidelines for trustworthy AI
EQUALITY
Democratic data processing is based on an awareness of the societal power relations that data systems sustain, reproduce or create. When processing data, special attention should be paid to vulnerable people, who are particularly vulnerable to profiling that may adversely affect their self-determination and control or expose them to discrimination or stigmatisation, for example due to their financial, social or health-related conditions. Paying attention to vulnerable people also involves working actively to reduce bias in the development of self-learning algorithms.
Data Ethics Principles
The past haunts the present.
Your bedroom floor doesn't stop being dirty just coz you stopped dropping food all over the carpet. Creating equality is like staying clean: it takes work. And it's always a work in progress.
Parable of the polygons
Strategy (6 principles)
Sponsor an organisation-wide approach
Institutions should drive data ethics from the top and ensure it is adopted across business functions by building it into their existing governance frameworks. Establish appropriate senior sponsor-driven enterprise governance frameworks. Senior leaders need to be evangelical in their support for ethical data use, but they must also ensure robust day-to-day behaviours.
Big Data and AI Executive Survey 2021
AI Arms Race
An arms race in lethal autonomous weapons should be avoided.
ASILOMAR AI PRINCIPLES
Risks
Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
ASILOMAR AI PRINCIPLES
Harmony and Cooperation
Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global and comprehensive AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of "Optimizing Symbiosis".
Beijing AI Principles
Anticipation
The call for improved anticipation in governance comes from a variety of sources, from political and environmental concerns with the pace of social and technical change, to scholarly (and latterly, policy) critiques of the limitations of top-down risk-based models of governance to encapsulate the social, ethical and political stakes associated with technoscientific advances.

Much of the academic literature here makes the point that successful anticipation also requires understanding of the dynamics of promising that shape technological futures.
Developing a framework for responsible innovation
Reflexivity
Reflexivity, at the level of institutional practice, means holding a mirror up to one's own activities, commitments and assumptions, being aware of the limits of knowledge and being mindful that a particular framing of an issue may not be universally held. This is second-order reflexivity, in which the value systems and theories that shape science, innovation and their governance are themselves scrutinised. Unlike the private, professional self-critique that scientists are used to, responsibility makes reflexivity a public matter.

Building actors’ and institutions’ reflexivity means rethinking prevailing conceptions about the moral division of labour within science and innovation. Reflexivity directly challenges assumptions of scientific amorality and agnosticism. Reflexivity asks scientists, in public, to blur the boundary between their role responsibilities and wider, moral responsibilities. It therefore demands openness and leadership within cultures of science and innovation.
Developing a framework for responsible innovation
Accountability (9 principles)
Accountability
Accountability means that there are effective governance and oversight mechanisms for any project. Public accountability means that the public or its representatives are able to exercise effective oversight and control over the decisions and actions taken by the government and its officials, in order to guarantee that government initiatives meet their stated objectives and respond to the needs of the communities they are designed to benefit.

An important aspect of complying with data protection law is being able to demonstrate what measures you are taking to ensure everything is documented, as set out in the accountability principle and in the requirements on keeping records of processing activities.
Your organisation and information assurance teams will be responsible for this at a high level including ensuring policies and training are in place. However, it is essential to show how you are doing this at an individual level, through thorough documentation of things like Data Protection Impact Assessments.
Gov.uk Data ethics framework
Governance practices should be robust, known to all team members and regularly reviewed.
Data ethics poses organizational challenges that cannot be resolved by compliance regimes alone. Because the regulatory, social, and engineering terrains are in flux, organizations engaged in data analytics need collaborative, routine and transparent practices for ethical governance.
Accenture’s universal principles of data ethics
Establish accountability
Institutions need to establish a chain of command on data ethics, with clear principles of accountability. Define accountability and ensure it is understood and agreed by all parties across the supply chain. Develop processes and frameworks to test, monitor and govern the potential liability around the ethical use of data.
Big Data and AI Executive Survey 2021
Responsibility
Make available externally visible avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate an internal role for the person who is responsible for the timely remedy of such issues.
Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
Responsibility
Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
ASILOMAR AI PRINCIPLES
Control Risks
Continuous efforts should be made to improve the maturity, robustness, reliability, and controllability of AI systems, so as to ensure the security for the data, the safety and security for the AI system itself, and the safety for the external environment where the AI system is deployed.
Beijing AI Principles
Subdivision and Implementation
Various fields and scenarios of AI applications should be actively considered for further formulating more specific and detailed guidelines. The implementation of such principles should also be actively promoted – through the whole life cycle of AI research, development, and application.
Beijing AI Principles
ACCOUNTABILITY
Accountability is an organisation’s reflective, reasonable and systematic use and protection of personal data. Accountability is an integral part of all aspects of data processing, and efforts are being made to reduce the risks for the individual and to mitigate social and ethical implications. Sustainable personal data processing is embedded throughout the organisation and ensures ethical accountability in the short, medium and long term. An organisation’s accountability should also apply to subcontractors’ and partners’ processing of data.
Data Ethics Principles
Governance
The architecture and handling of data within the AI system must be adequately governed through planned protocols and checks. Overall security and accountability of the AI must be considered and formalised.
Aletheia
Transparency
12
Transparency
Transparency means that your actions, processes and data are made open to inspection by publishing information about the project in a complete, open, understandable, easily accessible, and free format. In your work with and on data and AI, use the available guidance, for example the Open Government Playbook, to ensure transparency throughout the entirety of your process. Publish your DPIA and other related documents.
Gov.uk Data ethics framework
Control and transparency
Ethics and data science
No secrecy
There must be no personal data record-keeping systems whose very existence is secret.
1973: The Code of Fair Information Practices
As far as possible, explain methods for analysis and marketing to data disclosers
Maximizing transparency at the point of data collection can minimize the more significant risks that arise as data travels through the data supply chain.
Accenture’s universal principles of data ethics
Data scientists and practitioners should accurately represent their qualifications (and limits to their expertise), adhere to professional standards, and strive for peer accountability.
The long-term success of this discipline depends on public and client trust. Data professionals should develop practices for holding themselves and their peers accountable to shared standards.
Accenture’s universal principles of data ethics
Deliver transparency
Institutions should process data outcomes within the boundaries of a ‘glass box’. Transparency will support the intelligibility, explicability and verifiability of the data and any actions taken on the basis of the data. Transparency with the customer will help boost data literacy and aid in the creation of a culture of trust.
Big Data and AI Executive Survey 2021
Understanding our customers and employees
We understand the vulnerabilities of people (customers and employees) impacted by our data analytics and AI solutions and tailor our communications, policies and practices to reflect the increased level of scrutiny and care required to protect these people.
KPMG Data and Artificial Intelligence (AI) Ethics Navigator
Auditability
Enable interested third parties to probe, understand, and review the behavior of the algorithm through disclosure of information that enables monitoring, checking, or criticism, including through provision of detailed documentation, technically suitable APIs, and permissive terms of use.
Principles for Accountable Algorithms and a Social Impact Statement for Algorithms
Judicial Transparency
Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
ASILOMAR AI PRINCIPLES
Open and Share
It is encouraged to establish AI open platforms to avoid data/platform monopolies, to share the benefits of AI development to the greatest extent, and to promote equal development opportunities for different regions and industries.
Beijing AI Principles
Adaptation and Moderation
Adaptive revisions of AI principles, policies, and regulations should be actively considered to adjust them to the development of AI. Governance measures of AI should match its development status, not only to avoid hindering its proper utilization, but also to ensure that it is beneficial to society and nature.
Beijing AI Principles
TRANSPARENCY
Data processing activities and automated decisions must make sense for the individual. They must be truly transparent and explainable. The purpose and interests of data processing must be clearly understood by the individual in terms of understanding risks, as well as social, ethical and societal consequences.
Data Ethics Principles
Sustainability
15
Common Good
Artificial intelligence should be developed for the common good and benefit of humanity.
AI in the UK: ready, willing and able? (An AI Code)
Harmony
All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
AI in the UK: ready, willing and able? (An AI Code)
Sustainability
Designers and users of AI systems should remain aware that these technologies may have transformative and long-term effects on individuals and society. In order to ensure that the deployment of your AI system remains sustainable and supports the sustainability of the communities it will affect, you and your team should proceed with a continuous sensitivity to the real-world impacts that your system will have.
Understanding AI ethics and safety
Common Good
Does this action best serve the community as a whole, not just some members? Will this option be equally to everyone’s advantage? Does this action contribute to the conditions of social life that give everyone an opportunity to thrive? How will my action affect the resources everyone must share, such as the environment?
A Framework for Ethical Decision Making
Shared Prosperity
The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
ASILOMAR AI PRINCIPLES
Non-subversion
The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
ASILOMAR AI PRINCIPLES
Importance
Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
ASILOMAR AI PRINCIPLES
Common Good
Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
ASILOMAR AI PRINCIPLES
Empower
We will seek to ensure that AI technologies benefit and empower as many people as possible.
Tenets
Do Good
AI should be designed and developed to promote the progress of society and human civilization, to promote the sustainable development of nature and society, to benefit all mankind and the environment, and to enhance the well-being of society and ecology.
Beijing AI Principles
Optimizing Employment
An inclusive attitude should be taken towards the potential impact of AI on human employment. A cautious attitude should be taken towards the promotion of AI applications that may have huge impacts on human employment. Explorations on Human-AI coordination and new forms of work that would give full play to human advantages and characteristics should be encouraged.
Beijing AI Principles
Long-term Planning
Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.
Beijing AI Principles
Small individual bias → Large collective bias.
When someone says a culture is shapist, they're not saying the individuals in it are shapist. They're not attacking you personally.
Parable of the polygons
Responsiveness
Responsible innovation requires a capacity to change shape or direction in response to stakeholder and public values and changing circumstances.
Developing a framework for responsible innovation
Social Impact
Consideration must be given to the possible impact of the AI on all potentially affected stakeholders – both inside and outside the organisation. The benefits of the project must be clearly identifiable and contribute to broader social and sustainability objectives.
Aletheia