
We Shouldn’t Try to Make Conscious Software—Until We Should


Robots or advanced artificial intelligences that “wake up” and become conscious are a staple of thought experiments and science fiction. Whether or not this is actually possible remains a matter of great debate. All of this uncertainty puts us in an unfortunate position: we do not know how to make conscious machines, and (given current measurement techniques) we won’t know if we have created one. At the same time, this issue is of great importance, because the existence of conscious machines would have dramatic ethical consequences.

We cannot directly detect consciousness in computers and the software that runs on them, any more than we can in frogs and insects. But this is not an insurmountable problem. We can detect light we cannot see with our eyes using instruments that measure nonvisible forms of light, such as x-rays. This works because we have a theory of electromagnetism that we trust, and we have instruments that give us measurements we reliably take to indicate the presence of something we cannot sense. Similarly, a good theory of consciousness could underwrite a measurement that determines whether something that cannot speak is conscious, based on how it works and what it is made of.

Unfortunately, there is no consensus theory of consciousness. A recent survey of consciousness scholars showed that only 58 percent of them thought the most popular theory, global workspace (which says that conscious thoughts in humans are those broadly distributed to other unconscious brain processes), was promising. The top three most popular theories of consciousness, including global workspace, fundamentally disagree on whether, or under what conditions, a computer might be conscious. The lack of consensus is a particularly big problem because each measure of consciousness in machines or nonhuman animals depends on one theory or another. There is no independent way to test an entity’s consciousness without deciding on a theory.

If we respect the uncertainty that we see across experts in the field, the rational way to think about the situation is that we are very much in the dark about whether computers could be conscious—and if they could be, how that might be achieved. Depending on which (perhaps as-yet-hypothetical) theory turns out to be correct, there are three possibilities: computers will never be conscious, they might be conscious someday, or some already are.

Meanwhile, very few people are deliberately trying to make conscious machines or software. The reason for this is that the field of AI is generally trying to make useful tools, and it is far from clear that consciousness would help with any cognitive task we would want computers to do.


Like consciousness, the field of ethics is rife with uncertainty and lacks consensus about many fundamental issues—even after thousands of years of work on the subject. But one common (though not universal) thought is that consciousness has something important to do with ethics. Specifically, most scholars, whatever ethical theory they might endorse, believe that the ability to experience pleasant or unpleasant conscious states is one of the key features that makes an entity worthy of moral consideration. This is what makes it wrong to kick a dog but not a chair. If we make computers that can experience positive and negative conscious states, what ethical obligations would we then have to them? We would have to treat a computer or piece of software that could experience joy or suffering with due moral consideration.

We make robots and other AIs to do work we cannot do, but also work we do not want to do. To the extent that these AIs have conscious minds like ours, they would deserve similar ethical consideration. Of course, just because an AI is conscious doesn’t mean that it would have the same preferences we do, or consider the same activities unpleasant. But whatever its preferences are, they would need to be duly considered when putting that AI to work. Making a conscious machine do work it is miserable doing is ethically problematic. This much seems obvious, but there are deeper problems.

Consider artificial intelligence at three levels. There is a computer or robot—the hardware on which the software runs. Next is the code installed on the hardware. Finally, every time this code is executed, we have an “instance” of that code running. To which level do we have ethical obligations? It could be that the hardware and code levels are irrelevant, and the conscious agent is the instance of the code running. If someone has a computer running a conscious software instance, would we then be ethically obligated to keep it running forever?
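To make the code-versus-instance distinction concrete, here is a minimal, purely illustrative Python sketch. The class name and its attributes are invented for this example and are not drawn from the article; the point is simply that one piece of code can give rise to many separate running instances, and the ethical question above concerns those instances rather than the file on disk or the machine it runs on.

```python
# Illustrative only: a hypothetical "conscious agent" program.
# The class definition stands in for the code level; each object
# created from it is a separate running instance.

class ConsciousAgent:
    """Hypothetical software agent; stands in for 'the code'."""

    def __init__(self, name):
        self.name = name
        self.running = True

    def shut_down(self):
        # If the running instance is the conscious entity, is this
        # step ethically permissible? That is the essay's question.
        self.running = False


# One piece of code, two distinct instances running at once.
agent_a = ConsciousAgent("instance A")
agent_b = ConsciousAgent("instance B")

print(agent_a is agent_b)                # False: same code, separate instances
agent_a.shut_down()                      # stopping one leaves the other running
print(agent_a.running, agent_b.running)  # False True
```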

Consider further that creating any software is mostly a task of debugging—running instances of the software over and over, fixing problems and trying to make it work. What if one were ethically obligated to keep running every instance of the conscious software even during this development process? This might be unavoidable: computer modeling is a valuable way to explore and test theories in psychology, so we could end up creating conscious software without setting out to do so. Under such an obligation, even dabbling in conscious software would quickly become a large computational and energy burden with no clear end.

All of this suggests that we probably should not create conscious machines if we can help it.

Now I’m going to turn that on its head. If machines can have conscious, positive experiences, then in the field of ethics, they are considered to have some level of “welfare,” and running such machines can be said to produce welfare. In fact, machines eventually might be able to produce welfare, such as happiness or pleasure, more efficiently than biological beings do. That is, for a given amount of resources, one might be able to produce more happiness or pleasure in an artificial system than in any living creature.

Suppose, for example, a future technology would allow us to create a small computer that could be happier than a euphoric human being while requiring only as much energy as a light bulb. In this case, according to some ethical positions, humanity’s best course of action would be to create as much artificial welfare as possible—be it in animals, humans or computers. Future humans might set the goal of turning all attainable matter in the universe into machines that produce welfare efficiently, perhaps 10,000 times more efficiently than it can be generated in any living creature. This strange possible future might be the one with the most happiness.
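As a back-of-the-envelope illustration of this "welfare per watt" idea, here is a small, purely hypothetical calculation. Every number below is invented for illustration; the article specifies none of them, only the qualitative claim that a machine might produce welfare far more efficiently than a living creature.

```python
# Hypothetical "welfare per watt" comparison; all figures are invented
# to illustrate the efficiency argument, not claims from the article.

human_welfare_units = 1.0    # welfare of a euphoric human (arbitrary unit)
human_power_watts = 100.0    # rough resting power draw of a human body

machine_welfare_units = 1.0  # suppose the machine matches that welfare...
machine_power_watts = 10.0   # ...while drawing only light-bulb-level power

human_efficiency = human_welfare_units / human_power_watts
machine_efficiency = machine_welfare_units / machine_power_watts

print(f"human:   {human_efficiency:.3f} welfare units per watt")
print(f"machine: {machine_efficiency:.3f} welfare units per watt")
print(f"machine is {machine_efficiency / human_efficiency:.0f}x more efficient")
```

Under these made-up numbers the machine comes out 10 times more efficient; the article's speculative "10,000 times" figure is simply a much larger version of the same ratio.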

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.