OPENAI

I’ve written before about the need for caution when it comes to creating artificial intelligence. Strangely, a news item this week helped me clarify my thinking on the subject and even ease my concerns a little—for now, at least.

A new research company called OpenAI has just been created by heavy hitters like Elon Musk (of Tesla Motors and SpaceX fame) and his former PayPal pal, investor Peter Thiel, who claim to have rounded up a billion dollars’ worth of funding to research artificial intelligence. If that strikes a strange chord with you, you might be remembering that Musk was one of a number of famous people (including Stephen Hawking and Bill Gates) who issued a warning this past summer about the risk of a truly successful artificial machine intelligence becoming a threat to the human race. They weren’t the first to say it by a long shot, but they are among the most famous to do so. So is the creation of OpenAI a case of Musk deciding that “if you can’t beat ’em, join ’em”?

Not quite. The declared purpose of OpenAI is to support fully open research into artificial intelligence that isn’t driven by financial interests, thereby making sure that AI will only benefit humankind. So Musk and friends obviously feel that, if greed and secrecy are taken out of the equation, scientists can produce AI systems that won’t suddenly run amok, make themselves exponentially smarter, and decide that we puny humans are only worth keeping around as biological batteries (if you’re a fan of the Matrix movies).

I commend them for it, mainly because I think greed and secrecy are the evils behind most of the ways our technological progress lets us down. But not because I think Skynet is lurking around the corner.

Right now research into artificial intelligence is focused on creating better and better digital decision-makers, looking to produce improved search engines, self-driving cars, and various kinds of prediction software related to financial fields—the drive isn’t to create broadly capable all-purpose thinkers like human beings. We can drive a car, do our taxes, write a poem, cook supper, and sing Raffi songs to our kids (if we have the stomach for it). There’s no incentive to create computer intelligence that can do all that—acute specialization makes much more sense, both economically and from a design point of view. So even if an artificially created intelligence could somehow find a way to combine its own specialized abilities with those of other AIs with different talents into one super general intelligence capable of ruling the world, why would it? By their nature, these programs will “want” to do one thing and do it well. Unless a military threat-assessment AI can help a Wall St. stock analysis AI do a better job analyzing stock, there’s no reason for the two to decide to interact at all, let alone join with a whole bunch of movie-selection algorithms, consumer purchasing trackers, budget optimizers, and trash tabloid article-writing programs.

The scary part of AI research has more to do with the continual improvements in processing speed and data handling—we assume that because computers will eventually outdo the human brain in processing power, they’ll become smarter than us. And at about the same time, because of that superhuman computing power, they’ll become conscious—self-aware—like us. From there (our fearful imaginations insist) they’ll decide that the human race is an impediment or an outright nuisance, best pushed to the sidelines or even exterminated.

None of that really follows.

For one thing, we still don’t understand what consciousness actually is and what makes it work (no matter what anyone says). There’s no evidence that consciousness (or lack of it) is related to brain size or power. Other creatures have much bigger brains than humans (especially whales and elephants), but the state of their consciousness is anything but certain. There’s no evidence that once a brain reaches human-level processing capability it becomes conscious. Neuroscience just doesn’t have a solid explanation for what constitutes the physical difference between a conscious brain and one that isn’t—we can infer things, but we don’t know. So it’s quite possible that the fastest computer ever created might not have the “spark” of consciousness.

For another, if a computer intelligence ever does become aware of itself and devoted to its own individual needs, it would only act against humans if we’re an obstacle to fulfilling those needs. Digital brains are built on logic. Expending resources unnecessarily is not logical. Even we illogical humans rarely seek to deliberately wipe out inferior species—we cause enormous damage, and even extinctions, because of greed, vanity, covetousness, fashion, lack of foresight, and a host of other motives that can be lumped under the general term “stupidity”. But none of those things enters into digital thinking. We should feel secure that no computer intelligence, no matter how smart, will ever do things out of a sheer lust for power. That just isn’t rational.

For a more technical description of the case for AI, here’s an open letter signed by many dozens of AI researchers.

We can imagine a form of digital intelligence that would see all biological life as unnecessary. We do so for fun, the way we imagine werewolves and vampires and bogeymen to scare ourselves, and yes, also to warn each other to be careful when playing with fire. But the rational case for such a thing is weak. If we’re afraid of a new entity arising on Earth that could supplant us, I’d say there’s much more danger of that from our genetic tinkering.

But that’s a whole other blog post.

TURING TEST PASS IS A FAIL

I’ve been amused this week to read the news that a computer program passed the famous “Turing Test” for artificial intelligence. The program presents itself as a 13-year-old boy living in Ukraine named Eugene Goostman, and it was able to carry on text conversations well enough to convince one-third of a panel of judges that they were chatting with a human being. It happened during a regular Turing Test event being hosted by the University of Reading in the UK on the 60th anniversary of the death of mathematician Alan Turing, who devised the test as a way of measuring artificial intelligence: if a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test. This is being touted as the first successful test, although New Scientist magazine points out that others have succeeded too, depending on the criteria used for the judging.
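If you want to see how thin that pass criterion really is, here’s a minimal sketch in Python of the arithmetic described above. The function name, the judge counts, and the data are all made up for illustration; the only thing taken from the story is the rule that fooling more than 30% of judges counts as a pass.

```python
# Sketch of the pass rule described above: a program "passes" if more than
# 30% of five-minute keyboard conversations end with the judge believing
# they were chatting with a human. All names and numbers here are illustrative.

def turing_test_passes(judge_verdicts, threshold=0.30):
    """judge_verdicts: list of booleans, True if that judge thought
    the hidden conversation partner was human."""
    fooled = sum(judge_verdicts)
    fraction = fooled / len(judge_verdicts)
    return fraction > threshold

# Example: if 10 of 30 judges were convinced (one-third, as reported),
# that's about 33%, which clears the 30% bar.
verdicts = [True] * 10 + [False] * 20
print(turing_test_passes(verdicts))  # True
```

Note how little it takes: a shift of a couple of judges’ opinions is the difference between “historic milestone” and “nice try”.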

Detractors claim the fact that “Eugene” is presented as a 13-year-old boy with limited English-language skills coloured the expectations of the judges enough to render the test results less meaningful than they might otherwise be. Have you heard thirteen-year-olds talk lately? The fact that the judges could understand Eugene’s answers at all should have been a tip-off that they weren’t speaking to a real teenager. Did he pause in the conversation to answer a few texts on his phone? Did he drop f-bombs, use spelling that looked like alphabet soup given a stir, or rely on the word “like” every other sentence? Were there any mistakes obviously caused by autocorrect? Dead giveaways, all of those. (Actually, Eugene does text like that on Twitter.)

Personally, I think the limitations of the test itself make it of little value. Certainly it shows that superfast processors fed with enough data about likely questions, colloquial language, general knowledge and other parameters can simulate a humanlike dialogue. It says nothing about self-awareness, self-motivation, creative problem-solving, psychological empathy, or many other things that we would expect of an intelligent being. So we’re still a long way from the Skynet days of the Terminator movies, or even HAL from 2001: A Space Odyssey.

If you spend much time on Facebook, or even watching reality TV, you’ll know that speaking like the average human being isn’t exactly a shining display of intelligence anyway—quite the opposite.

There are efforts to create a more universal artificial intelligence test, involving more visual cues, among other things. I expect that within another few generations of computing progress, that test will also be found wanting. The truth is, we’ll probably never know when the first truly intelligent, sentient, artificial mind is created.

Because it’ll know that the smartest thing it can do is to keep that little secret to itself.