AI: Not in Our Own Image

By Jonathan Jeckell

It’s almost an article of faith that artificial intelligence researchers will someday develop artificial general intelligence (AGI) with human-like reasoning and a broad range of abilities. Science fiction is packed with examples of intelligent machines, including Iron Man’s Jarvis, Commander Data from “Star Trek: The Next Generation”, HAL from “2001: A Space Odyssey”, and many others. And while AGI does not necessarily mean the agent will be sentient, that too is a staple of science fiction and of real headlines about AI alike. Pick an article on the dangers of artificial intelligence at random and chances are it will focus on how to prevent robots from taking over the world.



Today’s artificial intelligence, and most of the research done on AI, focuses on narrow AI: artificial intelligence with a limited purpose and scope, such as the recommendation engines behind Amazon and Netflix, or face recognition systems. Artificial general intelligence would be able to tackle a much wider range of problems with the same AI, and might feature more “common sense”, or at least a greater ability to reason the way a human would. But this does not necessarily mean it would be conscious or sentient. Researchers are divided on whether AGI is even possible, but many think it is roughly 20 years away. Most agree, however, that deep learning probably won’t get us there and that we need to rethink our approach to AI.
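To make the “narrow” part concrete, here is a minimal sketch in the spirit of those recommendation engines (the purchase histories and item names are invented for illustration): it ranks items purely by how often they co-occur in past baskets, and it is competent only inside that tiny world.

```python
from collections import Counter

# Hypothetical purchase histories, invented for illustration.
HISTORIES = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card", "lens"},
    {"novel", "bookmark"},
]

def recommend(item: str, k: int = 2) -> list[str]:
    """Suggest the k items most often bought alongside `item`.

    This is the entire "intelligence": counting co-occurrences.
    Ask it about anything outside the baskets and it has nothing to say.
    """
    co_counts = Counter()
    for basket in HISTORIES:
        if item in basket:
            co_counts.update(basket - {item})
    return [name for name, _ in co_counts.most_common(k)]

print(recommend("camera"))   # ['sd_card', ...] (ties may order arbitrarily)
print(recommend("toaster"))  # [] -- no data, no opinion, no curiosity
```

However useful such a system is, nothing in it generalizes beyond the single task it was built for, which is exactly what separates narrow AI from the hypothetical AGI discussed here.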


However, in his book “Superintelligence: Paths, Dangers, Strategies”, Professor Nick Bostrom argues that a generally intelligent AI would necessarily develop its own preferences and goals, and would eventually dupe humans into giving it ever greater resources so it could become ever more intelligent in pursuit of those goals... sound familiar? He argues that more intelligence is always better, so the machine would gluttonously convert the entire solar system, and any other matter within its reach, into more processing power for itself.


The book rests on some bold assumptions. First, more intelligence is not necessarily better intelligence. Intelligence acquired through biological evolution has always come at a cost, and most animals have merely the level of intelligence their survival requires. For a vast number of problems, using computers available today, more intelligence will not produce an answer superior to one that is merely sufficient, even if more computing power were virtually free. We are simply nearing the point of diminishing returns.


Second, while we can probably learn a lot about our own brains, and build vastly better artificial intelligence, by mimicking the brain, I doubt we really want to model AI after the way we think. Daniel Kahneman’s “Thinking, Fast and Slow” and Dan Ariely’s books are filled with glitches in human reasoning that stem from heuristics and other shortcuts which have often kept us, and our ancestors, alive. If nothing else, there are probably much better ways of thinking than the ones our minds and brains are wired for. So there is no assurance that AGI will reason like us either, including having motivations, goals, or desires of its own, even if it is the byproduct of perfectly rational “thinking.” It will be easy for humans to dismiss sentience, just as we have been dismissive of animal minds for centuries. Philosopher Daniel Dennett discusses a few thought experiments that help frame the question. The first is the philosopher’s zombie: how do I know you are conscious and real, and not a zombie imitating an intelligent being? The second, John Searle’s Chinese Room, imagines a sealed room that spits out English translations for Chinese text. Does whatever is in the room actually understand Chinese, or is it just following instructions by rote? Will we recognize a sentient AI when we see one, or will we be taken in by something that acts intelligent and sentient but is more like the chess-playing Mechanical Turk or a clockwork automaton?
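The Chinese Room intuition is easy to make concrete in code. Here is a minimal sketch (the phrase table is invented for illustration): a pure symbol-matcher that produces plausible output with zero understanding of either language.

```python
# The "rule book": symbols in, symbols out. Entries are illustrative.
RULE_BOOK = {
    "你好": "hello",
    "谢谢": "thank you",
    "再见": "goodbye",
}

def chinese_room(message: str) -> str:
    """Return whatever the rule book dictates for `message`.

    Nothing here knows Chinese or English; the function only matches
    symbols to symbols, which is exactly Searle's point.
    """
    return RULE_BOOK.get(message, "I do not understand.")

print(chinese_room("你好"))  # -> hello
print(chinese_room("谢谢"))  # -> thank you
```

From the outside, correct answers come back; whether anything inside “understands” is precisely what the thought experiment asks us to doubt.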


Most importantly, while an AGI may be desirable for the wider variety of problems it could tackle (like a voice assistant) or for its ability to understand the nuances of human thinking that rely on tacit knowledge and context (common sense), I don’t think sentience or consciousness is a desirable goal, and it probably won’t attract much funding. Humans don’t like it when their pets, or even their fellow workers, deviate from expected behavior and display goals and preferences of their own, so we are very unlikely to tolerate it from AI we acquire or design to help us solve problems. Designers would probably want to stamp out any sign of anything resembling consciousness or sentience immediately, to keep the AI focused on the tasks we want it to perform, and would have little tolerance for boondoggles that aren’t precisely aligned with what we want it to do. We’re seeking increased efficiency, not improvised deviations from the norm.



The nature of what makes us conscious and sentient is poorly understood. Some neuroscientists and other researchers think it may have been an evolutionary accident or an emergent property. It may have arisen as a synchronizer among all the competing “voices” from different parts of the brain, integrating them into a coherent picture and coordinated responses. Recently I heard the proposal that consciousness emerged from the survival need to focus our attention. This is more profound than it may seem at first. If your brain simply reacted to every stimulus, many situations in nature would be lethal. But the ability to selectively ignore some things and focus on subtle signals hidden in the noise allowed our ancestors to spot the tiger creeping up on them just a bit sooner.


AI development lacks the selection pressure that produces the ability to focus attention; quite the contrary, it appears. With only a few exceptions, we design AI to focus on what we want it to pay attention to, not to meander freely. Perhaps I’m in the minority here, but I doubt we will develop an AI that is anything close to sentient, even by accident. Of course, if artificial intelligence does try to take over the world, the dumbest thing you could do is put your plan to stop it on the internet.

