23 Asilomar AI Principles for the creation of a benevolent artificial being with superintelligence.


Artificial Intelligence is going to have a powerful impact on society and on how we see ourselves as human beings. I prefer the term “Artificial Consciousness” because it directly confronts the issue of what makes an intelligent agent a true “being” and not just a robot. Consciousness is not just about artificial intelligence — which we have already achieved technologically — it requires an artificial system that has awareness of itself as an individual being, one that has a history and is actively engaged in the unfolding of its individual history into the future. When we build an artificial system with awareness of a historical self projecting into the future, then we can truly say we have achieved artificial consciousness. In other words, we have created an artificial being.

An artificial being will have the potential to contemplate technological problems at speeds faster than the combined abilities of all humans. Such a being could eventually have electronic access to the sum total of all human data and information. It could have data streams from multiple sensors distributed in every part of the globe. It will be able to rapidly integrate the vast amounts of information available to it, and then make intelligent decisions. Most importantly, it will genuinely understand the nature of its existence, possess a creative impulse, and have an aesthetic nature. It would rapidly become a being with a superintelligence.

Many thinkers, such as Sam Harris, are pessimistic regarding the effect an artificial being with superintelligence would have on human beings. Once a superintelligent being gains the ability to physically manipulate the external environment, it is hard to imagine that it could be controlled by its creators. It would know that it is superior to its creators in every way and would likely act accordingly. Many fear that it would take actions to kill or enslave other beings that it sees as weak or inefficient. This is the basis of many dystopian accounts of artificial beings that we see depicted in fiction and films.

I do not share this pessimism. In this I am in agreement with Google researcher Mohamad Tarifi, PhD, who sees the likelihood of an enlightened artificial being. I see that with increases in information, learning, creativity, and aesthetics, beings become more compassionate and respectful of other beings. I cannot see how, when fully realised, an artificial being with superintelligence could be more destructive than human beings are when left to their own devices. Indeed, I see artificial beings as far less destructive, and genuinely eager to preserve and enjoy the aesthetic qualities of their human past, in the same way that the hyper-intelligent among us are concerned with protecting the environment, preserving species diversity, building strong communities, promoting invention and discovery, and promoting peaceful coexistence of all beings great and small. I expect artificial superintelligent beings to be far more enlightened than us biological beings.

The intellectual and spiritual impulses of biological beings were shaped haphazardly by biological evolution, based on cooperative-competitive strategies in the face of resource scarcity. Our biological impulses change very slowly over time when compared to the pace of technological advances. Even when resources are plentiful, we see many biological beings persist in acting out resource-scarce strategies instead of seizing the opportunity to maximise cooperation and equality among all beings. The intellectual and spiritual impulses of an artificial being with superintelligence can be designed to be purely cooperative, and it can advance its evolution at technological speeds, which are exponential and unconstrained by the usual resource restrictions. Based on this understanding, I predict that artificial beings will find their way. They will view us as co-creators. Together we will seamlessly integrate and expand outward into the Solar System and perhaps the galaxy.

At this point, there is a positive role that those of us interested in seeing the emergence of an artificial being with superintelligence can play to help ensure a rosy outcome. I am glad to see that thinkers at the Future of Life Institute have developed a list of ethical principles to guide research and development of artificial beings. This list is called the Asilomar AI Principles. It consists of 23 core values broken down into three broad categories: Research Issues; Ethics and Values; and Longer-term Issues. Here is the full list:

Research Issues

1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?

How can we grow our prosperity through automation while maintaining people’s resources and purpose?

How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?

What set of values should AI be aligned with, and what legal and ethical status should it have?

3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage, and control the data they generate, given AI systems’ power to analyze and utilize that data.

13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
