OPENAI

I’ve written before about the need for caution when it comes to creating artificial intelligence. Strangely, a news item this week helped me clarify my thinking on the subject and even ease my concerns a little—for now, at least.

A new research company called OpenAI has just been created by heavy hitters like Elon Musk (of Tesla Motors and SpaceX fame) and his old PayPal colleague, investor Peter Thiel, who claim to have rounded up a billion dollars’ worth of funding to research artificial intelligence. If that strikes a strange chord with you, you might be remembering that Musk was one of a number of famous people (including Stephen Hawking and Bill Gates) who issued a warning this past summer about the risk of a truly successful machine intelligence becoming a threat to the human race. They weren’t the first to say it by a long shot, but they are among the most famous. So is the creation of OpenAI a case of Musk deciding that “if you can’t beat ’em, join ’em”?

Not quite. The declared purpose of OpenAI is to support fully open research into artificial intelligence that isn’t driven by financial interests, thereby making sure that AI will only benefit humankind. So Musk and friends obviously feel that, if greed and secrecy are taken out of the equation, scientists can produce AI systems that won’t suddenly run amok, make themselves exponentially smarter, and decide that we puny humans are only worth keeping around as biological batteries (a fate familiar to fans of the Matrix movies).

I commend them for it, mainly because I think greed and secrecy are the evils behind most of the ways our technological progress lets us down. But not because I think Skynet is lurking around the corner.

Right now, research into artificial intelligence is focused on creating better and better digital decision-makers: improved search engines, self-driving cars, and various kinds of prediction software for the financial industry. The drive isn’t to create broadly capable, all-purpose thinkers like human beings. We can drive a car, do our taxes, write a poem, cook supper, and sing Raffi songs to our kids (if we have the stomach for it). There’s no incentive to create computer intelligence that can do all that—acute specialization makes much more sense, both economically and from a design point of view. So even if an artificially created intelligence could somehow find a way to combine its own specialized abilities with those of other AIs into one super general intelligence capable of ruling the world, why would it? By their nature, these programs will “want” to do one thing and do it well. Unless a military threat-assessment AI can help a Wall St. stock-analysis AI do a better job analyzing stocks, there’s no reason for the two to interact at all, let alone join with a whole bunch of movie-selection algorithms, consumer purchasing trackers, budget optimizers, and trash-tabloid article-writing programs.

The scary part of AI research has more to do with the continual improvements in processing speed and data handling—we assume that because computers will eventually outdo the human brain in processing power, they’ll become smarter than us. And somewhere around the same time, because of that superhuman computing power, they’ll become conscious—self-aware—like us. From there (our fearful imaginations insist) they’ll decide that the human race is an impediment or an outright nuisance, best pushed to the sidelines or even exterminated.

None of that really follows.

For one thing, we still don’t understand what consciousness actually is or what makes it work (no matter what anyone says). There’s no evidence that consciousness (or the lack of it) is related to brain size or power. Other creatures have much bigger brains than humans (especially whales and elephants), but the state of their consciousness is anything but certain. Nor is there evidence that once a brain reaches human-level processing capability it becomes conscious. Neuroscience just doesn’t have a solid explanation for the physical difference between a conscious brain and one that isn’t—we can infer things, but we don’t know. So it’s quite possible that the fastest computer ever created might never have the “spark” of consciousness.

For another, if a computer intelligence ever does become aware of itself and devoted to its own individual needs, it would only act against humans if we were an obstacle to fulfilling those needs. Digital brains are built on logic, and expending resources unnecessarily is not logical. Even we illogical humans rarely seek to deliberately wipe out inferior species—we cause enormous damage, and even extinctions, because of greed, vanity, covetousness, fashion, lack of foresight, and a host of other motives that can be lumped under the general term “stupidity”. But none of those things enters into digital thinking. We should feel secure that no computer intelligence, no matter how smart, will ever do things out of a sheer lust for power. That just isn’t rational.

For a more technical description of the case for AI, here’s an open letter signed by many dozens of AI researchers.

We can imagine a form of digital intelligence that would see all biological life as unnecessary. We do so for fun, the way we imagine werewolves and vampires and bogeymen to scare ourselves, and yes, also to warn each other to be careful when playing with fire. But the rational case for such a thing is weak. If we’re afraid of a new entity arising on Earth that could supplant us, I’d say there’s much more danger of that from our genetic tinkering.

But that’s a whole other blog post.

WILL THE HYPERLOOP REALLY HELP?

The man who is said to have inspired Robert Downey Jr.’s portrayal of “Iron Man” in the movies made a big teaser announcement this week. Entrepreneur Elon Musk (co-founder of PayPal and founder of Tesla Motors and SpaceX) proclaimed that he will reveal the alpha design of a transportation system he says will become the fifth key mode of transportation in the world (after cars, planes, trains, and boats). He calls it the Hyperloop, and he describes it as “a cross between a Concorde, a railgun and an air hockey table.” We’ll have to wait until August 12th to find out exactly what that means, but educated guessers believe it will be passenger-carrying pods travelling in sealed tubes, floating by magnetic levitation or something similar, perhaps in a surrounding zone of fast-moving air. Musk envisions the Hyperloop being built across the continent, so that you could travel from San Francisco to Los Angeles in just minutes, and from there to New York City in under an hour.
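Just how fast would that have to be? Here’s a quick back-of-envelope sketch in Python (the distances are rough great-circle figures, and since “just minutes” isn’t a number, the half-hour figure for the San Francisco leg is my own assumption):

```python
# Rough sanity check of the implied Hyperloop speeds.
# Distances are approximate great-circle figures, not actual route lengths.

MACH_1_KMH = 1225  # speed of sound at sea level, approximately

# (distance in km, claimed travel time in hours)
routes = {
    "San Francisco -> Los Angeles": (560, 0.5),   # "just minutes": assume 30
    "Los Angeles -> New York City": (3940, 1.0),  # "under an hour"
}

for name, (distance_km, hours) in routes.items():
    speed_kmh = distance_km / hours
    print(f"{name}: ~{speed_kmh:,.0f} km/h (about Mach {speed_kmh / MACH_1_KMH:.1f})")
```

That works out to roughly 1,100 km/h for the short hop and close to 4,000 km/h (better than Mach 3) for the cross-country leg, so the Concorde comparison is, if anything, an understatement.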

Fast? Yeah, you could say that. So does that mean it will be a game-changer, bringing about a new world of mobility? Maybe. But I’m not really convinced that faster always means better. A few minutes of thought made me realize there are many things the Hyperloop won’t help, like:

- the two-hour drive to your cottage/camp that becomes five hours on a Friday night.

- the high-polluting, gas-guzzling journey your vegetables make from California to your dinner table.

- the family vacation trip that doesn’t include Los Angeles, San Francisco or New York City.

- the price of gasoline (do you think lower demand on a few long-distance routes will convince oil companies to lower prices? Seriously?)

- the billboard advertising industry.

- your accumulation of frequent flyer points.

- lost luggage (it will just get to other cities without you much faster.)

In fact, the Hyperloop could outright destroy:

- excuses not to visit relatives you don’t like.

- scenery-watching (and any true, personal grasp of geography.)

- all hope of escaping the psycho ex-girlfriend.

- your last chance to catch up on your reading.

- the road trip movie (OK, some of these will be good things.)

I’m sure you can come up with dozens more like this. Either way, Musk claims the Hyperloop will cost much less to build than high-speed rail, and I am in favour of getting as many trucks and cars off the road as possible. So, Mr. Musk, I’ll be watching on August 12th to see what you’ve come up with, and whether any partners are ready to jump aboard with you.

And, really, work on the lost luggage thing while you’re at it, OK?