ARE WE ALONE IN THE NEIGHBOURHOOD?

We look up at the stars and wonder if we humans are alone in this vast universe. It’s not a question we can answer yet (unless you’re a true believer in UFOs), but there sure are a lot of people working to estimate how much galactic real estate might support life.

Our Milky Way galaxy alone is thought to include 200 billion stars (maybe many more than that), and at least tens of billions of those are yellow-orange stars similar to our sun. But until the last 20 years or so, we really had no way to know whether many of those stars had planets orbiting them. If we’re hoping to find life, the first step is to find not just planets, but planets in the habitable zone of their stars, the so-called Goldilocks zone: not too hot, not too cold. There are other considerations, like gravity and atmosphere, but even just finding rocky planets the right distance from their sun is a good start.
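Incidentally, “the right distance” can be roughed out with grade-school physics: the light a planet receives falls off with the square of its distance, so the habitable zone shifts outward with the square root of a star’s luminosity. A minimal sketch (the zone boundaries are approximate textbook figures, and the function name is my own invention):

```python
import math

def habitable_zone_au(luminosity_suns, inner=0.95, outer=1.4):
    """Scale the Sun's rough habitable zone (about 0.95-1.4 AU)
    to a star of the given luminosity (in solar units).
    A planet at distance d receives flux proportional to L / d^2,
    so the whole zone moves outward as sqrt(L)."""
    scale = math.sqrt(luminosity_suns)
    return inner * scale, outer * scale

# A sun-like star: the zone straddles Earth's orbit (1 AU).
print(habitable_zone_au(1.0))
# A dim red dwarf at 1% of the Sun's luminosity: the zone
# huddles ten times closer in.
print(habitable_zone_au(0.01))
```

For a star like Gliese 581, at a few per cent of the Sun’s brightness, this is why its candidate planets orbit so tightly.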

Since 2009 the Kepler spacecraft has been monitoring the brightness of more than 145,000 stars. Why? Because if a planet crosses in front of its star (passing between it and Earth), the star’s light will dim just a tiny bit, and Kepler can measure that. Another astonishing instrument, the High Accuracy Radial velocity Planet Searcher (HARPS) at the La Silla Observatory in Chile, is a high-precision spectrograph that also measures starlight, but in a way that detects motion: a planet orbiting a star causes a tiny wobble, a back-and-forth Doppler shift in the star’s light, that HARPS can see. The Keck Observatory in Hawaii has been using a similar method to great effect for a couple of decades now. There are other efforts underway in the UK, Spain and elsewhere.
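The dimming Kepler hunts for is easy to put a number on: a transiting planet blocks a fraction of the star’s light equal to the ratio of their disk areas, (planet radius ÷ star radius) squared. A quick sketch of the arithmetic (the radii are rounded reference values, and the function is mine, just for illustration):

```python
def transit_depth(planet_radius_km, star_radius_km):
    """Fraction of starlight blocked when the planet crosses
    the star's disk: the ratio of the two circular areas."""
    return (planet_radius_km / star_radius_km) ** 2

SUN_RADIUS_KM = 696_000
EARTH_RADIUS_KM = 6_371
JUPITER_RADIUS_KM = 69_911

# An Earth twin dims a sun-like star by less than a hundredth
# of one per cent...
print(f"{transit_depth(EARTH_RADIUS_KM, SUN_RADIUS_KM):.6%}")
# ...while a Jupiter blocks about 1% -- one reason gas giants
# were spotted first.
print(f"{transit_depth(JUPITER_RADIUS_KM, SUN_RADIUS_KM):.4%}")
```

That hundredth-of-a-per-cent signal, repeating on schedule orbit after orbit, is what Kepler’s photometer was built to catch.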

So with all this searching going on, how many planets have scientists really discovered around other stars? The number is approaching 700, and might even be higher than that by the time you read this. More than 1200 other possible candidates have been identified and are just waiting to be confirmed. Admittedly many of those are probably gas giants like Jupiter, but many are also smaller than Neptune and might be much closer to Earth in size and composition. But it’s the ones that appear to be in the habitable zone of their star that grab headlines. That number is still small, but growing. And even if only a small percentage of sun-like stars have a habitable planet, in a place as big as the Milky Way that still means potentially tens of millions of planets where life similar to Earth’s could exist.
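That closing estimate is simple multiplication, and worth making explicit. A back-of-envelope sketch, where the fractions are purely illustrative placeholders rather than measured values:

```python
def habitable_planet_estimate(total_stars, sunlike_fraction,
                              habitable_fraction):
    """Back-of-envelope count: total stars, times the fraction
    that are sun-like, times the fraction of those hosting a
    rocky planet in the habitable zone."""
    return total_stars * sunlike_fraction * habitable_fraction

# 200 billion stars, roughly 10% of them sun-like, and suppose
# (pessimistically) only 1 in 1,000 of those has a habitable planet:
print(f"{habitable_planet_estimate(200e9, 0.10, 0.001):,.0f}")
```

Even that stingy guess leaves 20 million candidates, which is why “tens of millions” is a conservative way to put it.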

It’s still not proof, but it sure does improve the odds. So if you’re getting tired of the Caribbean or Europe, have we got a long distance vacation destination for you!

AH YES, I REMEMBER IT WELL

OK, so I’m dating myself quoting a song from the musical Gigi, but if you remember the scene from the movie, old lovers Maurice Chevalier and Hermione Gingold sing their memories of the time they first fell in love. Of course, each remembers it very differently.

The process of memory has been and still remains mysterious, but there’s a lot of research ongoing in the field.

Brain cells, or neurons, essentially send messages by sparking electrical signals to other neurons. Even as far back as the 1970s it was discovered that certain neurons in the hippocampus (a structure deep within the lower part of the brain) appear to be directly connected to our location in space. Researchers have called these “place cells”, because specific neurons will fire according to a specific location, and others will even fire in correlation with the direction travelled. New research has discovered comparable neurons associated with time sequences (quickly dubbed “time cells”). The lead author of the new study, Dr. Howard Eichenbaum from the Center for Memory and Brain at Boston University, revealed that not only did particular neurons of the hippocampus capture frozen moments of a sequence of events, but certain cells also marked out the gap of time between two separate events. The cells could even reset that gap if the delay between the events was changed. Now I’m dying to find out how those neurons differentiate between events that seem to be short (like spending time with a lover) and events that seem to take forever (like spending time with the boss).

On a more general level, it’s been known for some time that the brain does most of its data storage during the night, while we sleep. A study released earlier this year involved volunteers who were asked to memorize 40 pairs of words, or perform some other memory task. Some were told that they’d be tested on the task ten hours later. Some were allowed to sleep before the test. As expected, the volunteers who caught some shuteye did better on the test, but the ones who really stood out were the ones who slept and knew they’d be tested. Obviously the brain treats information differently if it knows that information will be needed at some future time. So some advice to guys in a new relationship: understand that in twenty years your lady is going to expect you to remember the dress she was wearing when you first met. Maybe your brain will cooperate, and save you from the glare of death down the road.

One thing is clear to me: research into the process of memory will never be complete until it can explain why my wife can recall every detail of events I can’t even swear actually happened, and she always turns out to be right.

JUST TRYING TO KEEP UP

If you’re reading one of my posts for the first time, it may be because you’ve joined my new Scott Overton page on Facebook. Welcome! I’ve been posting/blogging on my webpage for some time, and you’re welcome to check out previous posts there as well as some samples of my short stories. Mostly I post about science and science fiction, sometimes about the writing process and the publishing business. I hope you enjoy them.

This week I’ve been thinking about the fact that I don’t see much science fiction that’s actually about space travel anymore. A big part of the reason might be that it’s becoming harder all the time for writers to keep up with new developments.

The publishing industry can be very slow—I’ve seen it take a year and a half for one of my stories to go from acceptance to its actual appearance in the magazine. Believe me, scientific research doesn’t wait!

I wrote a story set in the solar system known officially as Gliese 581, a red dwarf star that got a lot of attention because one or two of the planets discovered around it are believed to be in the star’s habitable zone (also known as the Goldilocks zone: not too hot, not too cold), meaning at the right distance for liquid water to exist on the surface, and therefore maybe Life As We Know It. I cleverly placed a human colony on the fourth planet, Gliese 581d, but while I was sending the story out to magazines, another planet was discovered in the system, Gliese 581g, that’s a better candidate for a habitable planet. Fortunately, I’d made up my own names for the planets and my story didn’t have to be changed, but the news could easily have screwed up a story already in the publishing pipeline, and left egg on my face.

There have been lots of other developments like that in recent years, thanks to the Hubble Space Telescope and numerous space probes to various corners of our solar system (including the Dawn probe that just went into orbit around the asteroid Vesta, between Mars and Jupiter, last week, and will eventually travel on to the dwarf planet Ceres). These are exciting times, but….

Have you written a story that takes place near Pluto? You think you’re up to date because you don’t call it a “planet” anymore, just a “dwarf planet” since its official demotion? Well, how many moons did you give it? Four, I hope. Most of us knew about Pluto’s moon Charon, but two more, Hydra and Nix, were discovered in 2005, and now the Hubble telescope has found a fourth moon, probably only a few dozen kilometers across. Flip a coin before you give Pluto a ring like Saturn’s; the evidence for one isn’t conclusive yet. Or maybe you should just wait until after 2015, when the New Horizons probe will visit Pluto’s corner of the solar system and might shake things up even more.

It’s enough to give a science fiction writer a migraine.

Space Colonization Must Become More Than Science Fiction

Years ago I read an inspiring book called The Millennial Project: Colonizing the Galaxy in Eight Easy Steps by Marshall T. Savage, originally published in 1992. The title sounds frivolous, but the book wasn’t: the second edition, in 1994, included an introduction by science fiction legend Arthur C. Clarke. In great detail, Savage laid out eight practical steps for humanity to take to reach the stars, beginning with ocean surface colonies that would produce enough edible algae and other food products to end world hunger while also using temperature differentials in the deep ocean to generate the electricity required for spacecraft launchers. The launcher he envisioned would use a long track and catapult system to fling transports into the sky, while giant ground-based lasers would power them the rest of the way to space, eliminating the need for the transports to carry their own engines (reducing the cost of launching materials from Earth’s surface into space is probably the largest hurdle of all in space exploration).

The remaining steps involved bubble colonies in Earth orbit, domed cities on the Moon, Mars terraformed to become habitable, asteroids mined and transformed to not only provide raw materials but also harness the power of the sun much more efficiently, and finally an expansion into interstellar space. Savage wasn’t just dreaming: in 1987 he’d formed something called the First Millennial Foundation to get the ball rolling, first with baby steps like a test aquatic colony/theme park in the Caribbean. Progress was slow.

Jump ahead to 2011: the First Millennial Foundation is now called the Living Universe Foundation, and Marshall Savage has apparently retired permanently from any association with the concepts he originated. But the dream lives on. Many other possible technologies have been proposed to carry out Savage’s basic eight steps, and discussions continue (though, perhaps, without much concrete progress). On other fronts, there is an International Space Station in orbit, a feat of multi-national cooperation. The United States government has made announcements about returning to the Moon and then going to Mars, no longer stunts motivated by a political space race but, hopefully, true research-based missions. This past Saturday the NASA space probe Dawn entered orbit around the asteroid Vesta, the first man-made object to orbit a body in the asteroid belt. And within the past decade NASA’s NEAR Shoemaker probe landed on the asteroid Eros, and the Hayabusa probe from Japan collected samples from the asteroid Itokawa. NASA has plans for a mission in 2016 to another near-Earth asteroid, one that has a small chance of one day threatening the Earth. These aren’t stunt missions either; they have to be considered first steps toward someday making use of the asteroids in practical ways.

For now, human space travel remains horrendously expensive and dangerous, so we need pretty strong motivation to make it happen. If population pressure on one hand, and the benefits of space research on the other aren’t collectively enough, we should look at Marshall Savage’s original argument:

All intelligent life on Earth could be wiped out in a single cosmic catastrophe, whether by a mammoth asteroid strike, a genetic experiment gone wrong, or any number of doomsday scenarios that are no less plausible because of their dire consequences. We know it can happen: there have been half a dozen mass extinction events in Earth’s history. And, for all our wishes, we still have no evidence that intelligent life exists anywhere else in the universe. Humankind may be all there is: all of the universe’s “eggs” in one “basket”. Until we know otherwise, we have a responsibility to preserve sentient life by taking it beyond this fragile place where it could be annihilated at one stroke.

Maybe that’s still too abstract for most people looking over their tax statements. Maybe the idea of cheaper materials or medicines is more appealing. Either way, space colonization can’t remain the stuff of science fiction. We dare not let it.

ONE MAN'S JUNK IS ANOTHER MAN'S GENETIC BLUEPRINT

Researching my next novel, I’ve been reading a lot about “junk DNA”. It’s still usually called that in popular news articles because the nickname, introduced by geneticist Susumu Ohno in 1972, got so much traction with the public that the designation has been hard to shake. The more accurate term is non-coding DNA, because only 2 to 3% of the DNA in humans actually “codes for” (provides the recipes for) proteins. When that fact was discovered, it was thought that the other 97% was just the leftover junk of evolution that no longer served a purpose. That wouldn’t be very efficient, whether you believe in natural selection or an intelligent designer. So it shouldn’t be a surprise that subsequent research has shown junk DNA is anything but.

It isn’t just the numbers and kinds of proteins produced in a living creature that are important, but also which genes are activated to make them. Gene expression is regulated by certain processes, and it’s looking more and more as if that regulation is one of the key roles of non-coding DNA. Think of strings of non-coding DNA as instructions that determine which genes are switched on and which are switched off, as well as how much of a particular protein is produced. Non-coding DNA also looks to have a role in maintaining gene stability.

Some studies have indicated that repeated strings of non-coding DNA aren’t a mistake or accident, but are important because they enable faster mutations—a key element in adapting to changes in the environment. That makes the so-called “repeaters” a valuable survival mechanism for a species.

Other studies suggest that the differences between individual humans, and between humans and similar species (like chimpanzees), have more to do with their junk DNA than the protein-encoding DNA. We humans share nearly identical genes, but it’s mostly how they’re regulated that produces our variations. Those differences not only include individual traits like eye and hair colour but diseases as well, so this could be a very important area of disease research.

However, this improved understanding of how non-coding DNA reflects the traits of an individual may also lead to improvements in forensic DNA identification. And there’s word that the Transportation Security Administration in the U.S. has begun experimenting with portable DNA scanners (able to quickly scan DNA from a drop of saliva) at places like airports, alongside those infamous full-body scanners.

That brings whole new meaning to the cry “don’t touch my junk.”

IS THE UNIVERSE A COMPUTER?

There are a lot of SF conventions I’d like to attend, but maybe just as important, from the perspective of having something to write about, there are also a lot of science events that should be part of my itinerary. Like the recent World Science Festival in New York City. Fortunately you can catch some of the sessions in webcast form.

One of this year’s panel discussions asked Is the Universe the Ultimate Computer? The theory under discussion was just what the name suggests: that beneath even the scale of quantum physics the universe is, at heart, bits and bytes, a giant computer program playing out with immense complexity, yet initiated and maintained by a few very simple lines of code that describe its rules. The theory isn’t being offered as an excuse to make more Matrix movies (although the first movie does provide an easy way of understanding one interpretation of it), but because there are still many things that current physics can’t explain, not to mention that classical physics and quantum physics don’t always mesh very well. Seeing the universe as bits of information allows certain thought exercises that the rules of conventional physics would constrain, which can be helpful in the early stages of developing a theory, for instance. So some proponents see it more as a useful model than a necessarily true picture of reality. Others, like computing pioneer Edward Fredkin, believe there may just be somebody or something in another universe running our universe on their version of an iMac (like The Matrix again). Or there’s the other primary interpretation: that the universe itself is the computer, carrying out an unthinkable number of calculations with every flip (change of state) of its sub-atomic particles.
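How can a “very simple few lines of code” generate immense complexity? A cellular automaton makes the point vividly, and it’s a favourite toy model among digital-physics fans (this illustrates only the general principle, not anyone’s actual theory of the universe). Wolfram’s elementary Rule 110, for example, fits in a handful of lines yet has been proven Turing-complete:

```python
def rule110_step(cells):
    """One update of the Rule 110 cellular automaton.
    Each cell's next state depends only on itself and its two
    neighbours (wrapping around at the edges)."""
    n = len(cells)
    rule = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# A single live cell evolves into an intricate, never-settling
# pattern: complexity out of a lookup table with eight entries.
row = [0] * 31 + [1] + [0] * 32
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = rule110_step(row)
```

The eight-entry table is the entire “physics” of this little universe; everything on the screen emerges from it.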

Judging from the panel discussion, there aren’t any obvious experimental means to prove or disprove the concept, and even were it to be proven true the knowledge might not be of any practical value. Being able to someday read the universal code doesn’t mean we’d be able to use it for much (like predicting the weather in your home town next Tuesday from looking at a string of ones and zeroes). But anything that brings us to a more complete understanding of the rules by which the universe operates will probably be worthwhile in ways we can’t yet see.

At the very least, it’s got to be more comprehensible than string theory. Please.

ASTEROIDS AS KILLERS AND SHARKS AS VICTIMS

What does a giant rock hurtling toward Earth have to do with sharks?

My story “Saviour”. It’s an example of how information from widely different sources can come together to provide inspiration.

From 2004 to 2006 there was some scientific concern that the asteroid Apophis would come very close to Earth, or even strike the planet, in 2029. Although better calculations disproved that, there was still a chance that the 2029 pass would alter Apophis’ orbit enough to produce a collision with Earth in 2036. The results would be catastrophic, but fortunately more recent computations have put the odds of a collision at about one in a quarter-million. Still, the episode is obvious grist for science fiction stories. Who could forget the two 1998 Hollywood blockbusters Armageddon and Deep Impact (even if you’ve tried to)? Both movies had that same doomsday premise: a killer asteroid on a collision course with Earth.

In 2006 I had the opportunity to interview Rob Stewart, the man behind an excellent documentary film called Sharkwater. The film details how underwater cameraman Stewart set out to debunk the myths about sharks and ended up in serious danger from human predators instead. The most unforgettable element of Sharkwater is the devastating damage being done to the shark population by the shark finning industry, harvesting shark fins only for the status dish of shark fin soup. What threatens sharks threatens the whole ecosystem of the oceans. But then there are so many ways in which the human race is chasing other Earth species into extinction.

I was strongly affected by the movie, and its message somehow became entangled in my mind with apocalyptic threats to the human species. The result is “Saviour.”

It’s another story, like “Hurricane”, that seems at first glance to be old hat. Knowing that time-challenged editors buried under an avalanche of submissions look for the earliest-possible excuses to reject stories, I didn’t think many would give “Saviour” a chance and read it all the way to the end. So I’ve rarely submitted it anywhere (and my fears have been proven correct the few times I did submit it). But I think it’s an interesting take on the doomsday premise, and a story with something to say. Please have a look. But read it to the end!



HURRICANES AS A SOURCE OF ENERGY...AND STORIES

We live in a time when we’re encouraged to worry about our future sources of energy. Our society consumes energy at a ferocious rate, and our release of carbon dioxide is making changes to our climate that will last for hundreds of years. So it’s popular to speculate about alternatives, often in the form of “if we could only…” kinds of statements. You know what I mean. If we could only capture all of the sun’s energy that falls on Earth. If we could only harness all the tidal power in the Bay of Fundy. There are lots of them.

One day I heard an off-hand comment about the amount of energy produced by hurricanes. I did some searching, came up with some numbers, and thought Wow! If we could only harness that.

According to the Hurricane Research Division of the Atlantic Oceanographic and Meteorological Laboratory, harnessing the winds of an average hurricane would produce the equivalent of half of the world’s electrical generating capacity—in one day. If we could somehow tap the heat energy produced in one day by the hurricane’s condensation of water vapour into rain, that would equal 200 times the world’s generating capacity! Other estimates claim a hurricane’s whole life cycle involves the energy equivalent of 10,000 nuclear bombs. Now that’s power!
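Those comparisons are easy to sanity-check. A sketch of the arithmetic, using the Hurricane Research Division’s published per-day energy figures and a round number for world electrical generating capacity (the latter is my assumption for the comparison, not an HRD figure):

```python
# HRD estimates (per day) for an average hurricane:
KINETIC_JOULES_PER_DAY = 1.3e17   # energy in the winds
HEAT_JOULES_PER_DAY = 5.2e19      # latent heat released as rain forms

# Assumed round figure for worldwide electrical generating
# capacity (circa the early 2010s), in watts:
WORLD_CAPACITY_WATTS = 3.0e12
SECONDS_PER_DAY = 86_400
world_joules_per_day = WORLD_CAPACITY_WATTS * SECONDS_PER_DAY

print(f"winds: {KINETIC_JOULES_PER_DAY / world_joules_per_day:.1f}x "
      "world generation")   # about half
print(f"rain heat: {HEAT_JOULES_PER_DAY / world_joules_per_day:.0f}x "
      "world generation")   # about 200x
```

The numbers line up with the claims above: the winds alone match roughly half of humanity’s generating capacity, and the heat of condensation dwarfs it by a factor of about 200.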

I was sure that someday someone would try to do it, so I had to write a story about it. And my story “Hurricane” was born.

Of course, I had to come up with a plausible scientific way to collect some of that energy. I think I found it, though some may disagree (and I’ll let you read the story itself for the details). But I wanted a story, not a science article. So, given the technology I was inventing, and the vast power of these destructive storms, I knew what was bound to happen and where the story had to be set. I also knew right away that, because of those elements, many people would swiftly condemn the story as cheesy, or simply unoriginal, so I’ve never submitted it anywhere. That’s a shame, and I may yet do it because I think it’s a good yarn. It’s also a good chance to vicariously ride along with the Hurricane Hunters of the 53rd Weather Reconnaissance Squadron at Keesler AFB in Biloxi, Mississippi. So climb aboard, but whatever you do, don’t forget to fasten your seat belt.



HOW DOES YOUR BRAIN WORK?

Fantasy writers can make up all the details of their fictional world. Authors who write about real life can observe what goes on around them. For the rest of us, there’s research. The research I’m doing for the novel I’ve just begun to write (my fourth) is about consciousness, and how the brain works. And it’s enough to make my brain stop working.

Considering we’ve all got a brain (yes, even that @#$*&X driver in front of you), and we think we know how we think, there’s been an awful lot of ink spilled in attempts to explain it. The French philosopher Descartes is famous for saying, “I think, therefore I am.” And one of his main beliefs about how we think has become deeply ingrained in our collective knowledge, because it fits what we intuitively believe about the process of our thoughts. It seems evident that, for every bit of information taken in by our senses, there’s a specific moment when we become conscious of that information and the details surrounding it (“Oh, look, there’s a very solid baseball coming straight at my face at high speed!”). So each moment of awareness is kind of like an image projected onto a screen in our heads (what some philosophers call the Cartesian theatre). But that forces the question: who’s looking at the screen? It presupposes there’s an inner mind, a deeper you or me who sees the projections on the screen and then does something about them, presumably in some central spot of the brain where consciousness happens and decisions are made.

Neuroscientists have never found such a place. Philosophers now discredit the idea of the Cartesian theatre and will fill a book with thought exercises to show you why it’s wrong (a book so thick it qualifies as physical exercise just to lift it). Prevailing theories suggest that consciousness is more like a stream of activity with information coming in constantly, being processed, gaining priority and triggering action, or failing to achieve priority and being discarded. If you think it’s hard to wrap your head around a concept like that, imagine trying to absorb and retain it, along with dozens of other facts about neuroanatomy, brain-scanning technology, cognitive evolution, and more, while you try to write a good old-fashioned yarn about average people and the trouble they get into.
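That “stream of activity” picture can be made concrete with a toy model; this is purely illustrative code of my own, not any real cognitive architecture. Incoming items compete on salience, and only those that clear a threshold ever “reach consciousness” and trigger action, most urgent first; the rest are discarded without a trace:

```python
import heapq

def conscious_stream(inputs, threshold):
    """Toy model of priority-based selection: each input carries a
    salience score; only those at or above the threshold win
    'awareness', returned in order of urgency. The rest are
    silently dropped, never becoming conscious at all."""
    salient = [(-salience, event) for event, salience in inputs
               if salience >= threshold]
    heapq.heapify(salient)
    return [heapq.heappop(salient)[1] for _ in range(len(salient))]

senses = [("hum of the fridge", 0.1),
          ("baseball approaching face", 0.99),
          ("itchy sock", 0.3),
          ("someone said your name", 0.7)]
print(conscious_stream(senses, threshold=0.5))
```

Only the ball and your name make it into awareness; the fridge and the sock never existed, as far as “you” are concerned.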

I guess the real question is: was I conscious when I decided to make a living this way?