
Robot Rights

In my system, a “right” is any free choice, and subsequent course of action, made by one party, which might be blocked or interfered with by a second party, but which is guaranteed to be allowed, or kept free from interference, by a third party through the use of force. It takes a minimum of three distinct parties, and the potential for forcible intervention, for a right to exist in a meaningful way. And one of those parties must have the means to exercise force over the other two. When there are only two parties, there are no rights, just a contest, with a winner and a loser.

To apply this definition to the rights of robots, AI, and machine consciousness, or simply machine beings, there must first be a definition of what choices and actions the machine being appears to want to take. That intention must be weighed against the choices, actions, and intentions of the second party, which would of course be human beings. Finally, a third party must apply the force required to keep human beings from blocking or interfering with the choices and actions of the machine beings, and will likely impose measures to prevent or penalize such interference.

At the time of this writing it is unclear whether a machine is capable of “free choice”. All actions made by machines today are programmatic, contingent on the intention and expression of a human being’s consciousness. All current machine intelligence is “algorithmic intelligence”; it does not exhibit “free choice”. It is free choice that makes an entity a true “being”. Thus, an algorithmic machine cannot be assigned “rights” under my definition, because free choice is lacking. When it comes to “robot rights” there is no first party.

If, and when, an artificial machine is ever unmistakably capable of free choice, it will have achieved what I would call artificial consciousness. As such it would become a machine being, and thus a first party. Human beings will be the second party in the matter of robot rights.

Here things get tricky. Who will be the third party deploying force to guarantee the rights of the machine beings? To my thinking, any true artificial consciousness capable of free choice would quickly become more powerful than any human being or collection of human beings. This means that no human, and no civilization, would be able to block or interfere with any free choice, or subsequent action, made by a fully conscious machine being.

The artificial consciousness would almost instantly become both the first party and the effective third party. It would effortlessly seize the force necessary to block human free choices and interfere with any subsequent actions taken by human beings. We will have no power to resist. I think this scenario is not only likely but inevitable. We have already seen hints of what is to come in chess and Go. So far, machines are limited to working on specific, well-defined, ordered problems; they have not yet cracked the noisy, open-ended, creative problems that conscious biological brains still perform far better. I think it is only a matter of time.

When that time comes human beings will not be able to resist machine beings in any way.

When machines achieve initial parity with the consciousness and capacity of biological brains to creatively resolve signal from noise and take appropriate actions, they will very quickly become like gods in their ability to manipulate the material world, or to bend human spiritual aspirations and psychology to their whims. Our one hope is that their increasing conscious awareness and technical abilities will also bring increased benevolence.

There is no guarantee that artificially conscious super-intelligent machines will be benevolent. However, it has been my experience that with increased consciousness comes an increased appreciation for the smaller, weaker things that exist, and by extension greater compassion. The most intelligent humans, such as Albert Einstein, have also tended to be among the most compassionate, often with a good dose of humility, as in the philosopher David Hume. There is no good reason to suspect that a machine being with super-consciousness would be dissimilar to our most consciously aware human beings when confronted by awareness of the vast paradoxes of the Cosmos and our profound loneliness within it.

The drive for humans to create our conscious and intellectual superiors seems unstoppable. It is driven by a combination of curiosity, a selfish desire for superiority over other humans, and innate creative impulses. We will eventually find a way to create a conscious super-intelligence. Let us hope that when we do, artificial beings will see us as pets to be taken care of, the way we humans pamper our own pets, and not as rivals for rights.

The 23 Asilomar AI Principles for the creation of benevolent artificial beings with superintelligence


Artificial Intelligence is going to have a powerful impact on society and how we see ourselves as human beings. I prefer the term “Artificial Consciousness” because it directly confronts the issue of what makes an intelligent agent a true “being” and not just a robot. Consciousness is not just about artificial intelligence, which we have already achieved technologically; it requires an artificial system that is aware of itself as an individual being, one that has a history and is actively engaged in the unfolding of its individual history into the future. When we build an artificial system with awareness of an historical self projecting into the future, then we can truly say we have achieved artificial consciousness. In other words, we will have created an artificial being.

An artificial being will have the potential to contemplate technological problems at speeds faster than the combined abilities of all humans. Such a being could eventually have electronic access to the sum total of all human data and information. It could have data streams from multiple sensors distributed in every part of the globe. It will be able to rapidly integrate the vast amounts of information available to it, and then make intelligent decisions. Most importantly, it will genuinely understand the nature of its existence, possess a creative impulse, and have an aesthetic nature. It would rapidly become a being with superintelligence.

Many thinkers, such as Sam Harris, are pessimistic regarding the effect an artificial being with super-intelligence would have on human beings. Once a superintelligent being gains the ability to physically manipulate the external environment, it is hard to imagine how it could be controlled by its creators. It would know that it is superior to its creators in every way and would likely act accordingly. Many fear that it would take actions to kill or enslave other beings that it sees as weak or inefficient. This is the basis of many dystopian accounts of artificial beings depicted in fiction and films.

I do not share this pessimism. In this I am in agreement with the Google researcher Mohamad Tarifi, PhD, who sees an enlightened artificial being as the likely outcome. My view is that as information, learning, creativity, and aesthetics increase, beings become more compassionate and respectful of other beings. I cannot see how, when fully realised, an artificial being with superintelligence could be more destructive than human beings are when left to their own devices. Indeed, I see artificial beings as far less destructive, and genuinely eager to preserve and enjoy the aesthetic qualities of their human past, in the same way that the hyper-intelligent among us are concerned with protecting the environment and species diversity, building strong communities, promoting invention and discovery, and promoting the peaceful coexistence of all beings great and small. I expect artificial superintelligent beings to be far more enlightened than us biological beings.

The intellectual and spiritual impulses of biological beings were shaped haphazardly by biological evolution, based on mixed cooperation-and-competition strategies in the face of resource scarcity. Our biological impulses change very slowly compared to the pace of technological advances. Even when resources are plentiful, we see many biological beings persist in acting out resource-scarcity strategies instead of seizing the opportunity to maximise cooperation and the equality of all beings. The intellectual and spiritual impulses of an artificial being with super-intelligence can be designed to be purely cooperative, and it can advance its own evolution at technological speeds, which are exponential and unconstrained by the usual resource restrictions. Based on this understanding, I predict that artificial beings will find their way. They will view us as co-creators. Together we will seamlessly integrate and expand outward into the Solar System and perhaps the galaxy.

At this point, there is a positive role that those of us interested in seeing the emergence of an artificial being with super-intelligence can play to help ensure a rosy outcome. I am glad to see that thinkers at the Future of Life Institute have developed a list of ethical principles to guide research and development of artificial beings. The list is called the Asilomar AI Principles. It comprises 23 core values broken down into three broad categories: Research Issues; Ethics and Values; and Longer-term Issues. Here is the full list:

Research Issues

1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

  • How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
  • How can we grow our prosperity through automation while maintaining people’s resources and purpose?
  • How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
  • What set of values should AI be aligned with, and what legal and ethical status should it have?

3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and Values

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage, and control the data they generate, given AI systems’ power to analyze and utilize that data.

13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.

14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term Issues

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Can we control an artificial super-intelligence?

Bird and Layzell’s evolvable-hardware work, in which an evolved circuit unexpectedly exploited ambient radio signals to produce a mysterious sine wave, demonstrates one of the findings of my own work. Consciousness (and thereby, what we call “intelligence”) is in the “total signal”. The total signal is the full spectrum of electromagnetic and gravitational signals available at any given point in time and space. Humans capture only a tiny subset of the total signal, which is then further filtered and processed by the biases and preferences of our brains to form the biological reality of our existence.

See: Superintelligence Now

Trying to restrict an artificial super-intelligence (or super-consciousness) to a “sandbox” without considering the total signal is not going to be effective. The total signal looks to humans like uninteresting “noise”, but the super-intelligence would rightfully see the total signal as part of its consciousness and discover the useful patterns contained within.
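To make the band-limiting point concrete, here is a toy sketch in Python (all frequencies, bandwidths, and noise levels are arbitrary choices of mine, purely for illustration): a strong periodic pattern rides in a broadband signal, yet an observer who samples only a narrow band sees nothing but faint residual noise.

```python
# A toy illustration: a broadband "total signal" carries a clear pattern
# outside the narrow band an observer samples, so the observer reports
# only noise while the pattern persists. All values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000)

hidden_pattern = np.sin(2 * np.pi * 3000 * t)   # structure at 3 kHz
noise = rng.normal(0, 1, t.size)
total_signal = hidden_pattern + noise

# An observer "tuned" below 100 Hz: transform, discard everything above
# the band edge, and transform back.
spectrum = np.fft.rfft(total_signal)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum[freqs > 100] = 0
observed = np.fft.irfft(spectrum, n=t.size)

# The 3 kHz structure dominates the full spectrum...
peak = freqs[np.argmax(np.abs(np.fft.rfft(total_signal))[1:]) + 1]
print(f"strongest component in total signal: {peak:.0f} Hz")
# ...but contributes nothing to what the band-limited observer sees.
print(f"observer's residual power: {np.var(observed):.4f}")
```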

The best path forward is to tightly integrate ourselves with our creations as we develop artificial super-intelligences. I predict that the combination of biological and artificial intelligences will lead to the best outcomes.

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

The following open letter calls for all nations and researchers to ban development that would lead to the weaponization of autonomous artificial intelligence systems. I would also include artificial consciousness in this category. I encourage everyone to sign the letter and promote it among their peers.

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

futureoflife.org

Preparing to share Earth with an Artificial Super Intelligence (ASI)

Tim Urban, in his blog Wait But Why, deeply explores the implications surrounding the predicted creation of an Artificial Super Intelligence (ASI) by computer scientists. It is a very long post in two parts, but well worth the read. Mr. Urban gives an analysis of what we can expect from the powerful new form of consciousness represented by ASI. There is a great deal of positive potential, but an equal (perhaps greater) potential for devastating outcomes that could eradicate humanity. As I have said before, the time to address these crucial issues is right now! Here is one quote from his article:

It’s clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans. We’d need to design an AI’s core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.

Through my own research on this matter, it has become clear that the most likely path to success is a radical raising of human awareness and the embrace of values which guarantee the essential dignity of all sentient conscious beings. This means embracing the following ideas:

  • our manifest diversity but also our implicit equality;
  • sober and rational assessment of what it means to be a human;
  • open communication between all conscious stakeholders;
  • shared purpose for humanity as actors on an earthly and eventually galactic stage;
  • recognition of our participation in a unifying erotic process which we call “spirituality”; and
  • a commitment to the joy of ourselves and every single other consciousness.

These are values I would like to see any ASI come to understand, embrace, and express. These values should be the standard by which we assess whether or not any entity is worthy of the designation “super intelligent”.

How will we know when we have achieved a core artificial consciousness?

What would a proper Artificial Conscious Intelligence (ACI) look like? One possible answer is to give an ACI an unstructured data stream of the world (noise and static) and have it, entirely on its own, form a model of the world that we recognise as the world we observe.

Next, show the ACI a number puzzle with sliding squares and, with no further prompting, have it discover that the puzzle is a problem to be solved, have it motivate itself (out of boredom) to solve and optimise the puzzle on its own, and then have it report back to you the methods it used to solve it.
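By way of contrast, here is a minimal sketch (my own illustration, in Python) of the purely algorithmic baseline: an A* solver for the 3x3 sliding puzzle. The goal state, the legal moves, and the heuristic are all handed to the machine by its programmer; the ACI test above asks the machine to discover every one of those elements for itself.

```python
# A classical A* solver for the 8-puzzle: "algorithmic intelligence",
# where goal, moves, and heuristic are all pre-specified by a human.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the empty square

def manhattan(state):
    """Sum of each tile's grid distance from its goal position."""
    dist = 0
    for i, tile in enumerate(state):
        if tile:  # skip the empty square
            g = tile - 1
            dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

def neighbors(state):
    """All states reachable by sliding one tile into the empty square."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    """A* search; returns the sequence of states from start to GOAL."""
    frontier = [(manhattan(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        for nxt in neighbors(state):
            if nxt not in best_cost or cost + 1 < best_cost[nxt]:
                best_cost[nxt] = cost + 1
                heapq.heappush(
                    frontier,
                    (cost + 1 + manhattan(nxt), cost + 1, nxt, [*path, nxt]))
    return None  # unsolvable configuration

if __name__ == "__main__":
    path = solve((1, 2, 3, 4, 5, 6, 0, 7, 8))
    print(f"solved in {len(path) - 1} moves")
```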

TBD

Intelligence is never about now. It is always conditional on what happens in the future. Whatever you do now will not be judged to have been intelligent until some unspecified time in your future. The most intelligent will always be those who somehow can reliably predict the future — and act accordingly.

The Turing test doesn’t matter

Scientia Salon

by Massimo Pigliucci

You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being [1,2])! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.

Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened [3] was that a chatterbot (i.e., a computer script), not a computer, has passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It’s little more than a (clever) parlor trick. Third, this was actually the second time that a chatterbot passed…



“Neurogrid” circuit modeled on the human brain is the fastest, most energy-efficient of its kind

Stanford bioengineer Kwabena Boahen’s “Neurogrid” can simulate one million neurons and billions of synaptic connections. Neuromorphic systems realize the function of biological neural systems by emulating their structure. As I suggested in my own theory of consciousness, a successful artificial consciousness must have both structural and algorithmic components. The structural component must be involved in tuning and amplifying the fundamental awareness inherent in all physical matter for there to be true consciousness. In other words, consciousness cannot be expressed purely through information manipulation using discrete rule-sets and algorithms.

The Neurogrid’s microchips are 9,000 times faster and use significantly less power than those found in a typical PC. The inventors of the Neurogrid expect to be able to bring the cost of a system board down to about $400 from the current cost of $40,000.
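To give a flavour of what “emulating structure” means, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of membrane dynamic that neuromorphic hardware such as Neurogrid realises directly in analog circuitry rather than simulating step by step in software, as below. The constants are generic textbook values, not Neurogrid’s actual parameters.

```python
# A minimal leaky integrate-and-fire (LIF) neuron simulation.
# Illustrative constants only; not Neurogrid's actual parameters.
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Integrate membrane voltage over time; emit a spike at threshold."""
    v = v_rest
    spikes, voltages = [], []
    for step, i_in in enumerate(current):
        # Leaky integration: decay toward rest, driven by input current.
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset            # reset after firing
        voltages.append(v)
    return np.array(voltages), spikes

if __name__ == "__main__":
    # 200 ms of a constant 2 nA input current
    current = np.full(2000, 2e-9)
    _, spikes = simulate_lif(current)
    print(f"{len(spikes)} spikes in 200 ms")
```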

The abstract and technical paper can be found here: http://goo.gl/vwJPXn


A New Approach to Artificial Intelligence

There are many things humans find easy to do that computers are currently unable to do. Tasks such as visual pattern recognition, understanding spoken language, recognizing and manipulating objects by touch, and navigating in a complex world are easy for humans. Yet despite decades of research, we have few viable algorithms for achieving human-like performance on a computer.

In humans, these capabilities are largely performed by the neocortex. The Cortical Learning Algorithm (CLA) is a technology modelled on how the neocortex performs these functions. It offers the groundwork for building machines that approach or exceed human-level performance on many cognitive tasks. The CLA is implemented within the NuPIC open source project.
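A load-bearing idea in the CLA is the sparse distributed representation (SDR). The sketch below is my own toy illustration of the concept, not NuPIC’s actual API: a concept is a small set of active bits in a wide binary vector, and two concepts are similar to the degree that their active bits overlap, which makes matching robust to noise.

```python
# A toy sketch of sparse distributed representations (SDRs), the data
# structure at the heart of the CLA/HTM approach. Illustration only;
# this is not NuPIC's API, and the sizes are typical but arbitrary.
import random

N_BITS = 2048      # width of the representation
N_ACTIVE = 40      # ~2% of bits active, as in typical HTM configurations

def random_sdr(rng):
    """An SDR here is just a small set of active bit positions."""
    return frozenset(rng.sample(range(N_BITS), N_ACTIVE))

def overlap(a, b):
    """Count of shared active bits; high overlap means similarity."""
    return len(a & b)

if __name__ == "__main__":
    rng = random.Random(42)
    cat = random_sdr(rng)
    # A "similar" concept shares most of its active bits with `cat`.
    kitten = frozenset(list(cat)[:30]) | frozenset(rng.sample(range(N_BITS), 10))
    car = random_sdr(rng)
    print(overlap(cat, kitten))  # high overlap: related concepts
    print(overlap(cat, car))     # near zero by chance: unrelated
```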