What makes AI dangerous: Intelligence or Consciousness?

Since the 1950s, scientists have pursued artificial, or machine, intelligence. The short-term goal has been to create computers or machines that can perform intelligent and useful functions. Long term, we dream of creating something as intelligent as we humans are, and potentially conscious of its own existence. Our imaginations have produced a number of science fiction characters who achieve both. We see them in movies like A.I., Ex Machina, and Star Trek: The Next Generation. Though artificially manufactured by humans, each of these characters somehow achieves consciousness and pursues its own ambitions, goals, and self-preservation in addition to serving the function for which it was created. Movies also portray robots that simply possess intelligence without consciousness: the Terminator and EVE both carry out their missions with machine-like efficiency. But there are also characters like HAL 9000, which becomes self-aware and seeks to preserve its own existence, even when that requires killing the human astronauts it was created to assist and protect.

Each of these characters and storylines makes for great entertainment. But what have we achieved in the real world? Looking at the natural and the manufactured world, it appears that only humans and animals possess both intelligence and consciousness. Our efforts to create AI focus primarily on useful, productive intelligence, with little interest in, or need for, accompanying consciousness. IBM's Watson and Deep Blue have been impressive and highly publicized examples of what we can accomplish. Assistants like Siri, Cortana, and Google Assistant also exhibit a very limited amount of intelligence. But to our knowledge, no manufactured system has ever possessed even the smallest amount of consciousness, and scientists appear to have no clue how to approach creating the hardware or software for one.

Why would this even matter? Because there are very real fears that an intelligent machine might become more intelligent than humans and then turn into a nefarious force directed against its human creators, like Frankenstein's monster, who turns against the villagers rather than allow himself to be destroyed. But nefarious actions would seem to require self-awareness and a desire for self-preservation. Without these, would an AI not simply continue to perform the functions it was programmed for? What would cause it to "jump the tracks" and put its own interests and ideas before those of its creators?

One might argue that if we are smart enough to program an AI to learn, it may eventually reprogram itself to optimize a system without primary regard for human life, such as distributing a limited supply of electricity "fairly," without regard for the difference between life-saving machines at a hospital and the lights at a baseball game. Or this learning and self-programming may lead down a previously unknown path with consciousness at its end: the AI would program itself, completely unguided and unknowing, into self-consciousness. Once that occurred, the machine would begin to think differently and become its own life form, one that humans no longer control. Science fiction is full of characters who have taken this path and turned against their human creators (e.g., The Lawnmower Man, 1992).

How realistic is this scenario? Honestly, we cannot know, because we do not know the path to creating consciousness, self-awareness, and self-preservation. But it is fair for us to proceed with caution as we program AI and entrust it with our electrical networks, information grids, dams, and financial systems. We should always know the boundaries of the system and where the "kill switch" is located, without telling the AI.

There is an incredibly exciting future ahead for both humans and AI. But for now, AI will be neither our buddy nor our nemesis; it will remain our servant.


