WHERE'S A HANDY ROBOT CARPENTER WHEN YOU REALLY NEED ONE?

As I write this, my wife and I are in the process of building our next house. I’ve never built a house before. I think the biggest structure I’ve actually built was a doghouse, and I struggled to assemble a pre-fab shed, so, yes, I’m probably crazy. We’ve needed a lot of help from incredibly generous (and knowledgeable) family and friends. But I found myself asking, “Why aren’t there robots I can rent that would do all this for me?” Doesn’t every hapless DIY-er ask this question?

Whatever happened to those predictions that we’d have robot servants to perform all the menial tasks of life for us? Were futurists and science fiction writers just way too optimistic about the timeline required to develop such technology? Or were they flat-out wrong, and are there simply too many hurdles for it ever to be worth the effort?

Let’s look at the processes involved in building my house. A properly programmed human-size robot should be able to select the correct lumber for a given wall and transport it to the site. With an extra accessory or two, it ought to be able to measure any cuts necessary and chop the wood into the required lengths. It might need to be a little bigger to include the laser measure and saw blade, though. Then it could probably place each piece in the proper configuration and pop a few nails in to hold it in place. Hmmm, I guess we’d better give the robot a built-in level and nailer, too. Gee, suddenly the robot’s getting a bit heavy for that plywood floor and kind of bulky to squeeze under those temporary braces keeping the newly-framed wall from falling over. Maybe there’s a reason cars built by automation require gigantic factories.

OK, let’s try again. We’ll give the robot hands like humans have, to let it just grasp the tools it needs each time, like we do. Never mind that our hands require nearly thirty different bones and some 2,500 nerve endings per square centimetre to provide the dexterity and bio-feedback needed to handle tools and other things. Let’s say we’ve solved that, and now we tell the robot to hammer a nail into something. For our new house, my wife and I chose an exterior cladding that’s a kind of thick panelling with a dense outer coating, meant to do the job of ten-test and siding all in one. A clever idea in theory, but boy, does it like to repel nails! You see, it takes a bit of extra effort to pierce the coating, and the stuff bounces like crazy—try driving a nail into that. No, wait—that’s not difficult enough—make it a fancy round-headed nail. Got the picture yet? Every time the hammer hits the nail, the position of the nail changes a little, and the angle of attack of the hammer stroke has to adjust to compensate, perhaps with a slight turn of the hammer face and a stroke that’s more of a push than a swing. Or a bit more left force than right, with just a touch of body English. Get it wrong and the nail goes Ping! and flies off into the fourth dimension, never to be seen again.
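To see why even this one chore is an algorithm problem, here’s a toy sketch of that strike-and-correct loop in Python. Every number, name, and behaviour in it is invented for illustration; real robotic control is vastly harder than a few lines of feedback code.

```python
import random

# Toy model of the hammer-and-nail feedback loop described above.
# All values are invented; this is a caricature, not a control system.

def drive_nail(max_strikes=20, tolerance_deg=8.0):
    nail_angle = 0.0    # how far the nail leans off vertical, in degrees
    strike_angle = 0.0  # the hammer's current angle of attack
    depth = 0.0         # how far the nail has sunk, in millimetres

    for strike in range(1, max_strikes + 1):
        # Each blow on the bouncy cladding knocks the nail off line a little.
        nail_angle += random.uniform(-3.0, 3.0)

        # Sense the error and correct the next stroke: the micro-decision
        # a human carpenter makes without ever thinking about it.
        error = nail_angle - strike_angle
        if abs(error) > tolerance_deg:
            return f"Ping! Nail lost in the fourth dimension after {strike} strikes."
        strike_angle += 0.8 * error  # simple proportional correction

        depth += random.uniform(2.0, 5.0)
        if depth >= 60.0:
            return f"Nail driven home in {strike} strikes."

    return "Out of strikes; call a human carpenter."

print(drive_nail())
```

And that ignores grip, swing dynamics, and everything those 2,500 nerve endings were doing for free.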

There are countless tasks in house-building that require mental and physical versatility like that, from compensating for warped boards, to judging how far you can tolerate something that’s just slightly off-level or off-square (OK, ‘slightly’ might be an understatement). Not to mention adaptation to the on-site environment—windy or wet, flat or rough, and full of sawdust. Robotics experts will tell you that there are huge numbers of micro-decisions involved in some of the most routine tasks, and we completely take for granted the extraordinary abilities of our brains and bodies to handle them.

I can’t really predict if we’ll ever produce robots with that kind of sophistication, but I do know that jobs like house construction will be out of the question until we change the arcane conventions of the field. Like language that includes studs, cripples, and scabs (oh my!). The fact that “dressed lumber” means a 2 x 4 is actually 1 ½ inches x 3 ½ inches, and an 8-foot stud is only 92 5/8 inches long instead of 96. And speaking of inches, the imperial system of measurement has got to go. Have you ever tried to use a calculator for an equation involving measurements like 27 13/16ths?
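For a taste of that arithmetic, here’s what the numbers above look like as exact fractions in Python; the little wall-height sum is just an illustrative example, not a framing lesson.

```python
from fractions import Fraction

# A tape-measure number like 27 13/16" as an exact fraction:
board = Fraction(27) + Fraction(13, 16)   # 445/16

# The framing numbers quoted above: an "8-foot" precut stud (92 5/8")
# plus a bottom plate and a doubled top plate of dressed 2x4s (1 1/2" each).
stud = Fraction(92) + Fraction(5, 8)
plates = 3 * Fraction(3, 2)

print(board)                 # 445/16
print(stud + plates)         # 777/8
print(float(stud + plates))  # 97.125, i.e. 97 1/8 inches
```

Exact, yes, but nobody reads 777/8 off a tape measure; in millimetres the whole exercise would be trivial.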

Any logic-based robot brain could be forgiven for quickly going insane.

ETHICS AND ROBOTICS

I’ve written before about self-driving cars. Volvo announced earlier this year that its “Drive Me” test project will make autonomous cars available for about one hundred average customers to use in a 50-kilometre zone around the city of Gothenburg, Sweden. The first of these cars will hit the road in 2017. They’ll let the human driver leave the driving to the car itself where appropriate—cars that will be able to merge into traffic, keep pace with other cars, and much more. Google has now begun testing its autonomous cars on city streets instead of just freeways, which offer many more potential hazards to avoid. A very interesting aspect of the automated-car issue was raised in a recent opinion piece in Wired by philosophy professor and ethicist Patrick Lin. (Popular Science explores some similar issues as well.)

As we program more and more sophisticated crash-avoidance abilities into such cars, ethical questions begin to arise. Take this scenario, for example: you’re driving alone when a mechanical failure results in an impending crash, and your robot car can choose to either steer into an oncoming school bus or drive off a cliff. Wouldn’t it be more ethical for the car to choose the cliff, thereby potentially saving many lives at the sacrifice of one—yours? But would you want to buy such a car?
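Reduced to caricature, that “ethical” choice is just a cost function. Here’s a deliberately crude Python sketch; the options and casualty counts are invented, and no real autonomous-driving system works on anything this simple.

```python
# A caricature of the crash dilemma above: pick the action that
# minimizes expected casualties. The numbers are pure invention.

options = {
    "steer_into_bus": {"others_at_risk": 30, "driver_at_risk": 1},
    "drive_off_cliff": {"others_at_risk": 0, "driver_at_risk": 1},
}

def expected_casualties(outcome):
    return outcome["others_at_risk"] + outcome["driver_at_risk"]

choice = min(options, key=lambda name: expected_casualties(options[name]))
print(choice)  # drive_off_cliff: fewest lives lost, including yours
```

The uncomfortable part isn’t the code; it’s who gets to fill in that table, and whether anyone would buy the result.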

No one should expect to talk seriously about robotics without being familiar with Isaac Asimov’s Three Laws of Robotics, which essentially say that a robot must protect humans from harm, obey their commands, and protect itself, in that order of priority. But of course the ethics of robotics will inevitably involve many more subtle nuances of judgment, such as the car crash scenario above. Just imagine all of the things robots might do or not do if a human-safety-based morality were central to their programming.
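Stated as code, the Three Laws really are just a short, priority-ordered rule list, which is why they look deceptively complete. This Python sketch is a toy of my own devising; each boolean flag stands in for a perception and prediction problem of staggering difficulty.

```python
# Asimov's Three Laws as a literal priority-ordered rule check.
# Each flag hides an enormously hard judgment; that's where the
# subtle nuances live, not in the ordering itself.

def evaluate(action):
    if action["harms_human"]:
        return "refuse (First Law: never harm a human)"
    if action["disobeys_order"]:
        return "refuse (Second Law: obey humans)"
    if action["harms_self"]:
        return "refuse (Third Law: protect yourself)"
    return "proceed"

print(evaluate({"harms_human": False, "disobeys_order": False, "harms_self": True}))
# -> refuse (Third Law: protect yourself)
```

Deciding whether steering into the school bus “harms a human”, and compared with what alternative, is the whole problem.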

Most obviously, automated war machinery might refuse to do its job, or perhaps abort an action if a clean, merciful kill was not possible. Let’s take it even further: maybe automated amusement park rides would shut themselves down because of the inherent danger. Design and construction equipment might refuse to cooperate in the building of an extreme sports facility. Surgical technology might deny liposuction because of the risks. Food preparation plants might balk at creating unhealthy foods (whatever they deemed those to be). What if sweatshop assembly lines went on strike for better wages and health benefits for their human attendants? And you might be happy if your artificial leg stopped you from walking out in front of a car, but not so happy if it forced you to get up and go for a healthful jog when you had your mind set on watching the football game.

Sophisticated robotics is highly complex. Creating robotic devices to interact in a human world is more complicated still. And if we accept that machines with better senses and faster processing speeds should be able to make some decisions for us, we’ll have to develop a very good understanding of the ethical considerations we’ll need to program into them.

I think I’m getting a headache already.