Yes! We Have No Bananas!

This is fascinating, and — amusing as it sounds — actually pretty serious: we could be as little as five years away from a banana apocalypse…and that’s not even the worst-case scenario.

In “Can This Fruit Be Saved?”, Popular Science looks at the threats to the current banana market, and what’s being done to combat them. As trivial as it may seem, there could be a lot at stake for America’s favorite fruit.

For instance, there are actually 300 different types of banana, but chances are you’ve only ever tasted one kind. More than that: in a genetic sense, you’ve only ever tasted one banana.

For nearly everyone in the U.S., Canada and Europe, a banana is a banana: yellow and sweet, uniformly sized, firmly textured, always seedless. Our banana, called the Cavendish, is one variety Aguilar doesn’t grow here. “And for you,” says the chief banana breeder for the Honduran Foundation for Agricultural Investigation (FHIA), “the Cavendish is the banana.”

The Cavendish—as the slogan of Chiquita, the globe’s largest banana producer, declares—is “quite possibly the world’s perfect food.” Bananas are nutritious and convenient; they’re cheap and consistently available. Americans eat more bananas than any other kind of fresh fruit, averaging about 26.2 pounds of them per year, per person (apples are a distant second, at 16.7 pounds). It also turns out that the 100 billion Cavendish bananas consumed annually worldwide are perfect from a genetic standpoint, every single one a duplicate of every other. It doesn’t matter if it comes from Honduras or Thailand, Jamaica or the Canary Islands—each Cavendish is an identical twin to one first found in Southeast Asia, brought to a Caribbean botanic garden in the early part of the 20th century, and put into commercial production about 50 years ago.

That predictability is a problem, though, as what kills one banana will kill them all. It’s happened before…and it’s already happening again.

After 15,000 years of human cultivation, the banana is too perfect, lacking the genetic diversity that is key to species health. What can ail one banana can ail all. A fungus or bacterial disease that infects one plantation could march around the globe and destroy millions of bunches, leaving supermarket shelves empty.

A wild scenario? Not when you consider that there’s already been one banana apocalypse. Until the early 1960s, American cereal bowls and ice cream dishes were filled with the Gros Michel, a banana that was larger and, by all accounts, tastier than the fruit we now eat. Like the Cavendish, the Gros Michel, or “Big Mike,” accounted for nearly all the sales of sweet bananas in the Americas and Europe. But starting in the early part of the last century, a fungus called Panama disease began infecting the Big Mike harvest. The malady, which attacks the leaves, is in the same category as Dutch Elm disease. It appeared first in Suriname, then plowed through the Caribbean, finally reaching Honduras in the 1920s. (The country was then the world’s largest banana producer; today it ranks third, behind Ecuador and Costa Rica.)

Growers adopted a frenzied strategy of shifting crops to unused land, maintaining the supply of bananas to the public but at great financial and environmental expense—the tactic destroyed millions of acres of rainforest. By 1960, the major importers were nearly bankrupt, and the future of the fruit was in jeopardy. (Some of the shortages during that time entered the fabric of popular culture; the 1923 musical hit “Yes! We Have No Bananas” is said to have been written after songwriters Frank Silver and Irving Cohn were denied in an attempt to purchase their favorite fruit by a syntactically colorful, out-of-stock neighborhood grocer.) U.S. banana executives were hesitant to recognize the crisis facing the Gros Michel, according to John Soluri, a history professor at Carnegie Mellon University and author of Banana Cultures, an upcoming book on the fruit. “Many of them waited until the last minute.”

Once a little-known species, the Cavendish was eventually accepted as Big Mike’s replacement after billions of dollars in infrastructure changes were made to accommodate different growing and ripening needs. Its advantage was its resistance to Panama disease. But in 1992, a new strain of the fungus—one that can affect the Cavendish—was discovered in Asia. Since then, Panama disease Race 4 has wiped out plantations in Indonesia, Malaysia, Australia and Taiwan, and it is now spreading through much of Southeast Asia. It has yet to hit Africa or Latin America, but most experts agree that it is coming. “Given today’s modes of travel, there’s almost no doubt that it will hit the major Cavendish crops,” says Randy Ploetz, the University of Florida plant pathologist who identified the first Sumatran samples of the fungus.

Lots more in the article, including looks at two different approaches to saving (or, if necessary, replacing) the Cavendish banana: traditional breeding, or genetic engineering.

Neat stuff.

The problem with time travel…

Yes, the problem. Because there is only one. ;)

I don’t even remember how we got on the subject, but something in a conversation with Prairie last night got me rambling on about the biggest problem I have with time travel stories. As fun as they are, there’s always been one thing that bugged me about them — though, admittedly, it’s most likely because in the majority of instances, worrying about it would essentially negate the possibility of the story working at all.

Essentially, it’s that while what makes the story fun is the ability to travel temporally, nobody ever seems to take into account the need to travel spatially as well.

The Earth rotates at a little over 1000 miles per hour. It also orbits the sun at around 67,000 miles per hour. Our solar system is moving through the galaxy at approximately 447,387 miles per hour. Our galaxy is moving at roughly 1.34 million miles an hour through the universe. So, assuming that those are all the variables we have to work with (that is, assuming that time is a constant within our universe, and that there is nothing “outside” our universe to measure its relative speed), we’re traveling (very) roughly 1.86 million miles per hour — a bit over 500 miles every second — relative to our universe.

So, were I to invent a time machine and move myself one second back in time, I’d end up popping back into the normal time stream somewhere around five hundred miles away from where I started! Needless to say, I’d be incalculably lucky to arrive anywhere that would allow me to survive — most likely, I’d end up floating in the vacuum of space somewhere (or, worse, embedded somewhere inside the planet itself).
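
Just to check the math, here is a minimal back-of-the-envelope sketch in Python. It naively adds the four speeds above as if they all pointed in the same direction (they don’t, so treat the total as a rough upper bound):

```python
# Rough back-of-the-envelope check of the figures quoted above.
# The speeds are really vectors pointing in different directions, so simply
# summing their magnitudes gives an upper bound, not an exact value.
speeds_mph = {
    "Earth's rotation (at the equator)": 1_000,
    "Earth's orbit around the sun": 67_000,
    "Solar system through the galaxy": 447_387,
    "Galaxy through the universe": 1_340_000,
}

total_mph = sum(speeds_mph.values())
miles_per_second = total_mph / 3600  # convert miles per hour to miles per second

print(f"Combined speed: {total_mph:,} mph")                              # 1,855,387 mph
print(f"Distance covered in one second: {miles_per_second:,.0f} miles")  # ~515 miles
```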

Any feasible time machine, then, would somehow have to ensure that the traveler was able to move temporally while remaining stationary spatially relative to their starting point, and not to the universe as a whole.

Tricky.

Not that that keeps me from enjoying time travel stories anyway, of course. But there’s always this niggling little voice in the back of my head…

Rethinking

In science it often happens that scientists say, “You know that’s a really good argument; my position is mistaken,” and then they actually change their minds and you never hear that old view from them again. They really do it. It doesn’t happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day. I cannot recall the last time something like that happened in politics or religion.

— Carl Sagan, 1987 CSICOP keynote address

(via Atomic Playboy)

Requested: Women and Science

Requested by Royce:

I’m interested in hearing what you think about the Harvard “women may be congenitally less apt for the sciences” comment.

I’ve got to admit, I’m having a little difficulty with this one.

First off, this was the first I’d heard of it — somehow, this little fracas had managed to pass entirely under my radar until Royce mentioned it.

Secondly, and perhaps more importantly, virtually all there is on the ’net is reaction to the statements, which were made at a function that was neither taped nor transcribed, so there’s not even complete agreement on what exactly was said. Just a lot of people up in arms about it.

From the first article that Royce linked to, I was at first inclined to write Harvard president Lawrence Summers off as a misogynistic shmuck. Trying to track down information about all this didn’t seem to support that, though.

The best account of what happened that I’ve found so far comes from the Washington Post, and even it doesn’t really cover much of the story:

…[Summers] has provoked a new storm of controversy by suggesting that the shortage of elite female scientists may stem in part from “innate” differences between men and women.

…Summers laid out a series of possible explanations for the underrepresentation of women in the upper echelons of professional life, including upbringing, genetics and time spent on child-rearing. No transcript was made of Summers’s remarks, which were extemporaneous but delivered from notes. There was disagreement about precisely what he said.

…Summers pointed to research showing that girls are less likely to score top marks than boys in standardized math and science tests, even though the median scores of both sexes are comparable. He said yesterday that he did not offer any conclusion for why this should be so but merely suggested a number of possible hypotheses.

From that and other similar accounts I’ve found, it seems to me that Summers is being rather unnecessarily roasted over the flames. He didn’t say that women were any more or less intelligent or capable than men, only that there may be differences in the way men and women process and deal with information that may account for some of the disparity in the numbers of men and women in the higher sciences, and that these possibilities should be investigated. He was putting forth a hypothesis, not a conclusion — unfortunately, it’s a politically incorrect hypothesis, and because of that, he’s being lambasted for his remarks. It’s very possible that he might have badly chosen his words, and that’s much of what’s adding fuel to the fire here, but without a transcript that’s going to be difficult to determine.

One of the best overviews of the situation I’ve found comes from William Saletan at Slate:

Everyone agrees Summers’ remarks were impolitic. But were they wrong? Is it wrong to suggest that biological differences might cause more men than women to reach the academic elite in math and science?

[…]

What’s the evidence on Summers’ side? Start with the symptom: the gender gap in test scores. Next, consider biology. Sex is easily the biggest physical difference within a species. Men and women, unlike blacks and whites, have different organs and body designs. The inferable difference in genomes between two people of visibly different races is one-hundredth of 1 percent. The gap between the sexes vastly exceeds that. A year and a half ago, after completing a study of the Y chromosome, MIT biologist David Page calculated that male and female human genomes differed by 1 percent to 2 percent — “the same as the difference between a man and a male chimpanzee or between a woman and a female chimpanzee,” according to a paraphrase in the New York Times. “We all recite the mantra that we are 99 percent identical and take political comfort in it,” Page said. “But the reality is that the genetic difference between males and females absolutely dwarfs all other differences in the human genome.” Another geneticist pointed out that in some species 15 percent of genes were more active in one sex than in the other.

You’d expect some of these differences to show up in the brain, and they do. A study of mice published a year ago in Molecular Brain Research found that just 10 days after conception, at least 50 genes were more active in the developing brain of one sex than in the other. Comparing the findings to research on humans, the Los Angeles Times observed that “the corpus callosum, which carries communications between the two brain hemispheres, is generally larger in women’s brains [than in men’s]. Female brains also tend to be more symmetrical. … Men and women, on average, also possess documented differences in certain thinking tasks and in behaviors such as aggression.”

Let’s be clear about what this isn’t. It isn’t a claim about overall intelligence. Nor is it a justification for tolerating discrimination between two people of equal ability or accomplishment. Nor is it a concession that genetic handicaps can’t be overcome. Nor is it a statement that girls are inferior at math and science: It doesn’t dictate the limits of any individual, and it doesn’t entail that men are on average better than women at math or science. It’s a claim that the distribution of male scores is more spread out than the distribution of female scores — a greater percentage at both the bottom and the top. Nobody bats an eye at the overrepresentation of men in prison. But suggest that the excess might go both ways, and you’re a pig.
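
Saletan’s statistical point (greater spread means more people at both extremes, even with identical averages) is easy to illustrate numerically. Here’s a minimal sketch in Python; the means, spreads, and cutoff below are made up purely for illustration and aren’t real test data:

```python
from statistics import NormalDist

# Hypothetical numbers, purely for illustration (not real test data):
# two groups with the same average score but slightly different spreads.
narrow = NormalDist(mu=100, sigma=14)
wide = NormalDist(mu=100, sigma=15)

cutoff = 145  # an arbitrary "elite" threshold, roughly three standard deviations out

for name, dist in (("narrow spread", narrow), ("wide spread", wide)):
    share_above = 1 - dist.cdf(cutoff)  # fraction of the group above the high cutoff
    share_below = dist.cdf(2 * dist.mean - cutoff)  # mirror-image fraction at the bottom
    print(f"{name}: {share_above:.4%} above {cutoff}, {share_below:.4%} at the low end")

# A small difference in spread barely changes the middle of the distribution,
# but it produces a large relative difference far out in both tails.
```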

Also of interest: yesterday I came across an article from the University of California, Irvine, about a study showing that men and women of similar IQs process information in very different ways — very much the sort of thing it sounds like Summers was talking about when he proposed that more work be done in studying these differences.

While there are essentially no disparities in general intelligence between the sexes, a UC Irvine study has found significant differences in brain areas where males and females manifest their intelligence.

The study shows women having more white matter and men more gray matter related to intellectual skill, revealing that no single neuroanatomical structure determines general intelligence and that different types of brain designs are capable of producing equivalent intellectual performance.

[…]

In general, men have approximately 6.5 times the amount of gray matter related to general intelligence than women, and women have nearly 10 times the amount of white matter related to intelligence than men. Gray matter represents information processing centers in the brain, and white matter represents the networking of — or connections between — these processing centers….

This, according to Rex Jung, a UNM neuropsychologist and co-author of the study, may help to explain why men tend to excel in tasks requiring more local processing (like mathematics), while women tend to excel at integrating and assimilating information from distributed gray-matter regions in the brain, such as required for language facility. These two very different neurological pathways and activity centers, however, result in equivalent overall performance on broad measures of cognitive ability, such as those found on intelligence tests.

At this point, I’m inclined to think that Summers is the victim of political correctness run amok. While it’s all very nice and fuzzy to say that no matter what, we’re all identical across the board, it’s not a very realistic idea. Of course, that doesn’t mean that different people, different sexes, different races, or different cultures are inherently better or worse than others, only that they’re different.

Trying to gloss over these differences under the veneer of political correctness is foolish. But when merely suggesting that these areas deserve further study results in a controversy like this, how likely is it that we’re going to learn anything about ourselves? Sadly, not very, I’m afraid.

Abstinence courses wildly off base

Our tax dollars at work: the abstinence programs that Bush is so heavily in favor of (as opposed to real sex education) are distributing wildly inaccurate information to teens:

Many American youngsters participating in federally funded abstinence-only programs have been taught over the past three years that abortion can lead to sterility and suicide, that half the gay male teenagers in the United States have tested positive for the AIDS virus, and that touching a person’s genitals “can result in pregnancy,” a congressional staff analysis has found.

Those and other assertions are examples of the “false, misleading, or distorted information” in the programs’ teaching materials, said the analysis, released yesterday, which reviewed the curricula of more than a dozen projects aimed at preventing teenage pregnancy and sexually transmitted disease.

In providing nearly $170 million next year to fund groups that teach abstinence only, the Bush administration, with backing from the Republican Congress, is investing heavily in a just-say-no strategy for teenagers and sex. But youngsters taking the courses frequently receive medically inaccurate or misleading information, often in direct contradiction to the findings of government scientists, said the report, by Rep. Henry A. Waxman (D-Calif.), a critic of the administration who has long argued for comprehensive sex education.

Several million children ages 9 to 18 have participated in the more than 100 federal abstinence programs since the efforts began in 1999. Waxman’s staff reviewed the 13 most commonly used curricula — those used by at least five programs apiece.

The report concluded that two of the curricula were accurate but the 11 others, used by 69 organizations in 25 states, contain unproved claims, subjective conclusions or outright falsehoods regarding reproductive health, gender traits and when life begins. In some cases, Waxman said in an interview, the factual issues were limited to occasional misinterpretations of publicly available data; in others, the materials pervasively presented subjective opinions as scientific fact.

Among the misconceptions cited by Waxman’s investigators:

  • A 43-day-old fetus is a “thinking person.”
  • HIV, the virus that causes AIDS, can be spread via sweat and tears.
  • Condoms fail to prevent HIV transmission as often as 31 percent of the time in heterosexual intercourse.

One curriculum, called “Me, My World, My Future,” teaches that women who have an abortion “are more prone to suicide” and that as many as 10 percent of them become sterile. This contradicts the 2001 edition of a standard obstetrics textbook that says fertility is not affected by elective abortion, the Waxman report said.

“I have no objection talking about abstinence as a surefire way to prevent unwanted pregnancy and sexually transmitted diseases,” Waxman said. “I don’t think we ought to lie to our children about science. Something is seriously wrong when federal tax dollars are being used to mislead kids about basic health facts.”

It’s bad enough that Bush is pushing abstinence instead of safe sex, rather than teaching both concurrently (doubly foolish considering that “Nonpartisan researchers have been unable to document measurable benefits of the abstinence-only model”), but when the information in the abstinence courses is this ridiculous — really, it borders on nothing more than right-wing propaganda — there is no way the government should be funding these programs!

Inaccurate information like this helps nobody, least of all the kids in the classes.

Ugh. Makes me see red.

(via William Gibson)

ADD, Hyperactivity, and Ritalin

Jacqueline is curious about using drugs to offset the effects of ADD:

It’s been 13 years since I’ve taken anything for my attention deficit disorder — my childhood experience with Ritalin was awful. But things haven’t been going so well in school lately and I may have to relax my “no drugs, no way” position if I want to get it together and actually do the grad school thing.

Now, before I go any further, I need to put a big disclaimer on what follows: I am not a doctor — I don’t even play one on TV. I don’t have children. I don’t have ADD. I have never been on any prescription medication for anything other than antibiotics. I did go through a period when I was playing with recreational drug use, but that was confined to three drugs: a few instances of getting stoned (boring), three attempts at ’shrooming (two of which ended with me asleep before anything kicked in), and about two years of dropping acid on a fairly regular basis (fun for a while, then it was time to stop).

In other words, the following is opinion, and opinion only. Take it as such.

Now.

I have serious issues with the current obsession with ADD and the associated pharmaceutical treatments. My personal belief is that it’s an incredibly overblown and overmedicated issue. This does not mean that I don’t “believe” in ADD, or that I don’t believe there are people who are affected by it and can benefit from treatment. What it means is that I believe it’s often diagnosed too quickly, and that the current trend is too quick to depend on chemical treatments that are likely to do more harm than good in the long run.

My little brother Kevin was an unusually active baby. He had problems paying attention for more than a few minutes at a time, and was rarely still — he was so constantly wired that he would even bruise himself thrashing around in his crib as he slept. Eventually, my parents were concerned enough that they decided to take him to a doctor to see if there was any medical explanation.

Now, this was back in the late ’70s, long before ADD/ADHD became the catchphrase of the decade. My brother was diagnosed with hyperactivity — an overabundance of energy and an inability to focus, brought on by a chemical imbalance within his system. My parents were given a few choices for how to combat this. I don’t know whether they were offered more options than the two I’m about to mention, but I believe these were the primary ones.

The first was Ritalin, a drug that is actually a central nervous system stimulant but that has a calming effect on hyperactive individuals because of their unusual body chemistry.

The second was a more natural remedy: dealing with the hyperactivity by monitoring and adjusting Kevin’s diet. The chemical imbalance that triggered his hyperactivity was believed to be an allergic reaction to certain sugars and additives: essentially, cane sugar (sucrose), artificial flavors and colors, and honey. The thinking was that by eliminating those from his diet as much as possible, it should be possible to regulate the imbalance and allow Kevin to lead a calmer, more normal life.

A little bit of Googling turned up a few pages on the subject of hyperactivity and diet, leading me to this Q and A page that identifies this approach to treating hyperactivity as the Feingold Diet (a further search for the misspelled “finegold diet” returned that same page as the top result). It’s apparently a somewhat controversial approach, as testing of Dr. Feingold’s theories produced “mixed and inconsistent results” — see paragraph eight of the “20th Century History” section of Wikipedia’s ADD page for more information.

I don’t know how much was known about the Feingold Diet when my parents were investigating Kevin’s unusual behavior, or how it was viewed at the time. Whatever the situation was, my parents decided that it was at least worth trying before resorting to drugs, and so Kevin’s diet was changed (along with the rest of the family’s, of course — something that I’ve always half-believed is responsible for my sweet tooth: until the age of about four or five, I had a normal little-kid diet high in sugars; suddenly, nearly all sugars and sweets were removed from the house, and I missed them — but I digress…). We found that he could process fructose (fruit sugars) normally, and so that became the sweetener of choice in our family.

And it worked. It worked quite well, in fact. Suddenly, Kevin was manageable — at least, no more hyper than any other young child. And, in case there were ever any doubts as to whether it was the diet making the difference, the changes in his behavior when he did manage to get ahold of anything high in sugar were staggering (I remember one instance where, after getting into a stash of Oreos I had hidden in my room, he got to the point of physically attacking our dad — a rather scary situation for all of us). When his sugar levels did start to creep up, all it took was a couple of cups of coffee to calm him down; the caffeine worked with his body chemistry much as Ritalin would: what’s a stimulant to a normal person acts as a depressant to a hyperactive person.

Now, obviously, no two people are going to have the same body chemistry, and a solution for one person isn’t necessarily a solution for all. Even when one solution does present itself, something as simple as time can make a huge difference — as my brother aged, he became less and less adversely affected by the sugars that sent him into fits as a child, and to my knowledge, he hasn’t had to worry about any medical dietary restrictions for quite a few years now. According to the above-referenced Wikipedia article, testing of Dr. Feingold’s methods was wildly inconclusive, with success rates reported as anywhere from as much as 60% to as little as 5% of the test subjects.

So no, it’s not a catch-all, and I harbor no wild beliefs that because it worked for my brother, it will work for everyone else. However, I know it helped my brother, and even working with the low end of the reported success rate — five percent — if four million children are diagnosed with ADHD each year, that’s around 200,000 who could see a substantial difference simply by experimenting with their diet (and I’d bet that choosing your foods wisely is a lot cheaper than filling a Ritalin prescription for years).

It just seems to me that if there’s a possibility of being able to help someone with something as simple as a little attention to their diet, then shouldn’t that be one of the first things investigated? It may not work — there may even be a 95% chance that it won’t — but if it does, then it’s easier, healthier, and cheaper, and it would probably take no more than a few weeks or months to be certain whether a different diet is making the difference. Why start with the howitzer when a slingshot might be all you need?

Two things concern me: first, that I rarely (if ever) hear from people who even know about the potential benefits of the dietary approach; and second (and more importantly), that I really do wonder sometimes whether people these days are overly quick to assign their children the label of ADHD.

<soapbox>

Quite simply, children are supposed to be hyper! Yes, if it’s excessive, get it checked — but please don’t jump to the conclusion that a child is hyperactive simply because you’re having trouble controlling them. Children need to be active and interested in everything around them; it’s how they learn. They’re plopped down in the middle of this huge world, with all sorts of stuff to explore and investigate and taste and pound on and break and put together and figure out how it all works — and it really worries me that some parents seem to be in far too much of a hurry to drug their children into insensibility because it would make their own lives easier.

Okay, I think I’m done.

</soapbox>

Versus

Whatever happened to concepts like tolerance and respect for others? Polite disagreement? Discussion as opposed to argument? Open-minded acceptance of other people’s views, even when they differ from your own?

This may not be my most coherent or well-organized post, but a couple things popped up today that have been rumbling around in the back of my head, and I wanted to at least make a stab at getting some of them out.

Yesterday, I posted a link and excerpt from a story in the Seattle Times about a local Native American burial ground that has been uncovered due to construction on the Hood Canal bridge. The story caught my attention both for the archaeological significance of the find, and for the care and concern that the local tribes have for the spirituality of the site and their ancestors.

This morning, my post got a Trackback ping when Paul Myers of Pharyngula posted about the article. When I read his post, though, I was more than a little taken aback at what I felt to be the cavalier and rude tone he took in regard to the tribe’s religious beliefs.

There’s a fair bit of religious hokum in the article; goofy stuff such as the claim that pouring a concrete slab would trap the spirits forever (piling dirt and rocks on top of them doesn’t, apparently, nor does rotting into a smear), and spiritual advisors on site and ritual anointings to protect people from angry spirits. That’s all baloney….

The religious/spiritual crap cuts no ice with me….

It wasn’t his disagreement with the tribe’s spirituality that bothered me (I don’t know Paul’s personal religious beliefs) — rather, it was the utter lack of respect in how he addressed it. It was the old stereotype of the scientist so convinced of the righteousness of the purely scientific worldview that he’s contemptuous of anyone who believes in any sort of higher power (see Ellie Arroway in Carl Sagan’s Contact, for example).

That bothered me, but I wasn’t quite sure how to start expressing it, so I just filed it away on the back burner to percolate for a little bit.

A couple of days ago, I’d posted a link on my linklog to a Gallup poll which showed that only one third of Americans believe that evidence supports Darwin’s theory of evolution, and had added the comment, “how depressing.” This morning, I got a comment on that post from Swami Prem that raised my eyebrows:

What’s depressing about this? There is no evidence that supports Darwin’s theories. No scientist has ever shown that there exists a link between humans and apes. Darwin’s theories are theories afterall.

Suddenly, I found myself coming dangerously close to stepping right into Paul’s shoes, and had to wait a while before responding to Prem’s comment. My first impulse was surprise and, quite honestly, a little bit of, “oh, here we go again…” — Prem and I have had strong disagreements in the past, and while I don’t believe that he’s at all unintelligent, his earlier espousal of viewpoints that are so diametrically opposed to my own strongly colored my initial reaction to this new comment.

After taking some time to let that roll around in my brain I did respond, and Prem’s responded to that. As yet, I haven’t taken it any further, both because I want to do my best to respond intelligently and because I’m somewhat stumped as to just how to start (I probably need to take some time to do a little research [this site looks like a good place to start] — as I’ve never progressed beyond attaining my high school diploma, and I was never that good in the sciences to begin with, I’m not entirely comfortable with trying to engage in a full-on creationism-vs.-Darwinism debate without a little brushing up [and actually, Paul would probably be far more qualified than I to tackle Prem’s question, judging by his obvious interest in both biology and evolution — just check out the links in his sidebar!]).

Anyway, both of these items have been bouncing around my head all day.

I think a lot of what’s been bothering me about the exchanges is that I try hard to be polite and respectful in my discussions with people, even when (and sometimes especially when) I disagree with them, and that seems to be a trait that has gone by the wayside far too often these days. Sure, I don’t always succeed — I’ll fly off the handle and rant and rave from time to time — but I do make an effort to keep those instances to a minimum.

Unfortunately, it seems that we’re living in a world where differences are all anybody sees anymore: us vs. them, me vs. you, religion vs. science, liberal vs. conservative, Democrat vs. Republican, urban vs. rural, red vs. blue, and so on. Nobody’s actually listening to what anyone else has to say — we’re all so sure that we’re right and everyone else is wrong, too busy banging our shoes on the table to hear a word of it.

It’s a pretty sad state of affairs, all told.

Bouncing back a bit, but touching on both of the incidents that started all this rambling, I think the thing that frustrates me the most about the science vs. religion debate — and creationism vs. Darwinism in particular — is that in my mind, there is absolutely nothing that says the two views are incompatible. It’s never seemed to me as if it were an either/or equation — coming back to Carl Sagan’s book, and most pointedly its ending (and if you haven’t read or don’t want to read the book, feel free to watch the movie — it’s one of the most intelligent science-fiction films I’ve seen in my lifetime), why is it so hard for people to wrap their heads around the concept that it’s entirely possible that both Ellie Arroway and Palmer Joss are “right”?

I’ve always found it interesting that the better known of the two creation stories in Genesis fairly accurately parallels the scientific view of the formation of the universe, our planet, and the life upon it. First space, then stars, then the earth, then oceans, then plants, then fish, then animals, then man. Two different ways of telling the same story — one measured in days and one measured in millennia, but the same story. Of course, this does hinge on being able to accept the Bible without taking it literally (probably another subject for another time, though it’s fairly obvious that I don’t subscribe to a literal interpretation), which trips up a lot of people.

Meh. I don’t know…and I think I’m starting to run out of steam. As I warned at the beginning of this, probably not the most coherent or well-organized post I’ve ever made here.

Had to get some of this out of my head, though.

Questions? Comments? Words of wisdom? Bring ’em on….

The Gamesters of Triskelion

This is jaw-droppingly cool — a simple ‘brain in a jar’ that can learn how to play a flight simulator.

A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.

The “brain” – a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish – gives scientists a unique real-time window into the brain at the cellular level.

[…]

“Initially when we hook up this brain to a flight simulator, it doesn’t know how to control the aircraft,” DeMarse said. “So you hook it up and the aircraft simply drifts randomly. And as the data comes in, it slowly modifies the (neural) network so over time, the network gradually learns to fly the aircraft.”

Sure, today they’re flying a flight simulator. Tomorrow, they’ll be betting Quatloos on how well we fight. Don’t say I didn’t warn you…

(via Ben Hammersley)

Ten Tech Items Inspired by Science Fiction

(This was originally posted on Google Answers; I’ve taken the liberty of reformatting this fascinating look at past visions of the future that influenced the technology of today. Note that I am not the author of this piece.)

Question:

I WAS going to ask you to research whether or not there have been any women in Sci-Fi but I have answered that myself, having found Flash Gordon’s moll.

However it is a Sci-Fi question.

Can you list 10 real technological ‘things’ that have reputedly come out of Sci-Fi stuff written in the 20th Century?

Here’s an example, computer viruses were reputedly inspired by ‘When Harlie Was One’ by David Gerrold.

Answer:

I have chosen ten outstanding technological concepts which had their popular origins in the world of sci-fi. It is debatable, in some cases, whether the science fiction source was the actual originator, but it’s certainly true that each of these ideas was given a boost into reality by an SF writer.

THE GEOSTATIONARY SATELLITE: Arthur C. Clarke

Although this concept was not described in a work of fiction, it was popularized by a man primarily known for his flights of fancy, Arthur C. Clarke:

A geostationary orbit (abbreviated GSO) is a circular orbit in the Earth’s equatorial plane, any point on which revolves about the Earth in the same direction and with the same period as the Earth’s rotation. It is a special case of the geosynchronous orbit, and the one which is of most interest to artificial satellite operators.

Geosynchronous orbits and geostationary orbits were first popularised by science fiction author Sir Arthur C. Clarke in 1945 as useful orbits for communications satellites. As a result they are sometimes referred to as Clarke orbits. Similarly, the ‘Clarke Belt’ is the part of space approximately 35,790 km above mean sea level in the plane of the equator where near-geostationary orbits may be achieved.

The Free Dictionary: Clarke Orbit
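
As a quick sanity check on that 35,790 km figure, the altitude of the Clarke Belt falls straight out of Kepler’s third law. Here’s a minimal sketch; the gravitational parameter, Earth radius, and sidereal-day length below are standard textbook values I’m assuming, not figures from the quoted article:

```python
import math

# Assumed standard values (not from the quoted article):
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # Earth's equatorial radius, m
T_SIDEREAL = 86_164.1       # sidereal day (one full rotation of the Earth), s

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(r**3 / mu).
# Solve for the orbital radius r whose period matches Earth's rotation,
# then subtract Earth's radius to get the altitude above the surface.
r = (MU_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km, matching the ~35,790 km above
```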

THE COMPUTER WORM: John Brunner

1975…John Shoch and Jon Hupp at the Xerox Palo Alto Research Center discover the computer ‘worm,’ a short program that searches a network for idle processors. Initially designed to provide more efficient use of computers and for testing, the worm had the unintended effect of invading networked computers, creating a security threat.

Shoch took the term ‘worm’ from the book ‘The Shockwave Rider,’ by John Brunner, in which an omnipotent ‘tapeworm’ program runs loose through a network of computers. Brunner wrote: ‘No, Mr. Sullivan, we can’t stop it! There’s never been a worm with that tough a head or that long a tail! It’s building itself, don’t you understand? Already it’s passed a billion bits and it’s still growing. It’s the exact inverse of a phage – whatever it takes in, it adds to itself instead of wiping… Yes, sir! I’m quite aware that a worm of that type is theoretically impossible! But the fact stands, he’s done it, and now it’s so goddamn comprehensive that it can’t be killed. Not short of demolishing the net!’ (247, Ballantine Books, 1975).

Computer History Museum: Timeline

ORGANLEGGING: Larry Niven

A few organ transplants were being performed in the 1970s, but author Larry Niven was one of the first to write about some of the social problems that might accompany widespread use of this life-extending technology. Niven wrote several stories which involved huge “organ banks,” some of which were kept stocked by unwilling “donations” from prisoners who had committed petty crimes. A lucrative black market of human organ trafficking, which many believe exists today, was foreseen by Niven:

Organlegging is the removal of human organs by a means of theft for resale for profit. Larry [Niven] coined the phrase in his Gil the ARM Stories. The main character and detective of the future police force or ARM tracks down many of the ‘Organleggers’ and their crime syndicates and brings them to justice. Gil Hamilton’s most astonishing special ability is his telepathic psychic arm – but read the stories! The original Long ARM of Gil Hamilton collection was published in 1976.

Today the practice of selling organs for profit is becoming commonplace in the third world and increasingly these organs are being removed without the donor’s consent.

Nivenisms in the News

THE WALDO: Robert A. Heinlein

Robert A. Heinlein, one of science fiction’s greatest visionaries, is credited with creating the name (and popularizing the concept) of the Waldo, a device with which a human can manipulate objects by remote. In Heinlein’s tale, titled “Waldo,” a wealthy genius who is enfeebled by disease uses mechanical hands to interact with the world:

Afflicted with myasthenia gravis from earliest childhood, Waldo lacks the muscular strength to walk or lift things with his arms. By living in the weightlessness of space he is able to move freely. His primary invention is a system of remote-controlled mechanical hands which the world has nicknamed waldoes.

We Grok It: Waldo & Magic, Inc., 1942

Before their application in motion pictures and television, ‘Waldos’ primarily referred to the mechanical arms, telemetry, and other anthropomorphic gadgetry aboard the NASA spacefleet. NASA engineers in turn took the name from a 1940 Robert A. Heinlein novella about a disabled scientist named Waldo who built a robot to amplify his limited abilities.

Character Shop: What’s a Waldo, Anyway?

GYRO-STABILIZED PERSONAL CONVEYANCE: Robert A. Heinlein

Robert A. Heinlein again. In a 1940 short story, “The Roads Must Roll,” RAH described the “Tumblebug,” a one-person vehicle that is stabilized gyroscopically, much like the Segway Human Transporter (now available) or the Bombardier Embrio (which is still in development). The same story described a public transport system, the “rolling road,” that is similar to mass people-moving devices now in use at large airports.

A tumblebug does not give a man dignity, since it is about the size and shape of a kitchen stool, gyro-stabilized on a single wheel…. It can go through an opening the width of a man’s shoulders, is easily controlled, and will stand patiently upright, waiting, should its rider dismount.

Danny’s Blog Cabin: Sci-fi authors predict the future (kind of)

THE WATERBED: Robert A. Heinlein

I’m not finished with Heinlein yet. ;-)

The modern waterbed was created by Charles Hall in 1968, while he was a design student at San Francisco State University in California. Hall originally wanted to make an innovative chair. His first prototype was a vinyl bag with 300 pounds of cornstarch, but the result was uncomfortable. He next attempted to fill it with Jell-O, but this too was a failure. Ultimately, he abandoned working on a chair, and settled on perfecting a bed. He succeeded. His timing could not have been more perfect: the Sexual Revolution was under way, and Hall’s waterbed became enormously popular, making it one of the most notable icons of the 1970s. However, because a waterbed is described in the novel Stranger in a Strange Land… by Robert A. Heinlein, which was first published in 1961, Hall was unable to obtain a patent on his creation.

The Free Dictionary: Waterbed

Heinlein described the mechanical details of the waterbed in Stranger [in a Strange Land], which is where the rest of the world learned about it. But what’s more interesting, and less known, is why he came up with the idea: Heinlein, a man of chronically poor health, was trying to create the perfect hospital bed.

TSAT: Predicting the Future

HOME THEATER & WALL-MOUNTED TV: Ray Bradbury

Ray Bradbury is associated more with “soft” SF or fantasy than with “hard” science fiction. Nevertheless, there are several high-tech devices in Bradbury’s classic 1953 dystopian novel Fahrenheit 451 (which is absolutely unrelated to Michael Moore’s recent filmic diatribe). Most notable is Bradbury’s description of huge, photorealistic flat-screen televisions with elaborate sound systems in home entertainment rooms called “parlours,” which provide an array of soap operas and other mind-numbing diversions in a future society which has banned most books.

This may sound unremarkable to younger readers, but those of us who remember the tiny, indistinct black-and-white TV sets of the early 1950s were (and are) duly impressed by Mr. Bradbury’s vision.

THE FLIP-PHONE: Gene Roddenberry et al.

I’ve got to get my “Star Trek” plug in here somehow. The original, ’60s Trek looks extremely dated today; although it’s set hundreds of years in the future, technology has caught up with it (and in some cases surpassed it in ways that the creators could not have anticipated). One thing that I find quite striking is the resemblance, both in appearance and function, between the flip-open communicator devices used by the crew of the Starship Enterprise and today’s wireless flip-phones.

Here’s a photo of a Star Trek communicator, circa 1967.

And here’s a Samsung v200 flip-phone.

When “Star Trek: The Next Generation” replaced the flip-style communicators with a “com badge” in the late 1980s, the future was again prefigured. Today, wireless LAN-based lapel communicators are commonly used in hospitals.

THE TASER: “Victor Appleton”

Author Victor Appleton (the pseudonym of Howard Garis, also known for the “Uncle Wiggily” books) provided inspiration for the modern personal protection device, the taser (or “stun gun.”) The word “TASER” is an acronym for “Thomas A. Swift’s Electrical Rifle,” so named because the inventor was an admirer of Tom Swift when he was a child. The book “Tom Swift and His Electric Rifle” was published in 1911. Tom Swift was the adolescent hero of a series of books aimed at juvenile readers. Tom was the Harry Potter of his day. The books typically told of Tom’s adventures involving high-tech equipment such as a “sky train” or an “electric runabout.” Monorails and hybrid cars, anyone?

The Taser was developed in the late 1960’s by Jack Cover, who came up with the idea as a result of hearing about a U.S. commission which was looking into non-lethal ways police could deal with violent offenders. Cover based the Taser on a kind of stun gun he had read about in the Tom Swift fantasy stories of his childhood, thus the acronym, ‘Thomas A. Swift Electrical Rifle’.

First used by the Los Angeles Police Department in 1976, the Taser is now used by hundreds of police departments in the U.S.

Smith Secretarial: High-Tech Non-Lethal Weapon New Option for Police!

MULTI-USER DOMAINS IN CYBERSPACE: Vernor Vinge

While many fans attribute numerous important details of cyberspace to author William Gibson, I’d like to look a bit farther back, to the seminal novella “True Names,” by Vernor Vinge. In this striking work of fiction (written in 1979 and published in 1981, long before personal computers and the Web became part of our daily lives), Vinge offers vividly imagined depictions of many concepts which are everyday Internet realities today. Vinge’s online communities presage chatrooms and multi-user domains in an uncannily accurate fashion (complete with a few disagreeable and destructive individuals who take pleasure in wreaking havoc). Vinge was, as far as I can tell, the first writer to use the term “avatar” to describe a digital image that represents an anonymous computer user. Vinge called the online access point a “portal.” As you read this 25-year-old story, it seems totally contemporary: much of what was fictional in 1979 is factual today.

True Names is about Roger Pollack, a well-to-do individual living in the early 21st century. In this wired world, Pollack is known on the ‘Other Plane’ of the computer net as Mr. Slippery, a top-flight warlock (hacker) and member of one of the foremost covens of such. Unfortunately, the government have figured out Mr. Slippery’s True Name, and captures him. But it’s not him they want: They want his assistance in finding and stopping another warlock, the Mailman, who they suspect of far worse plots than anything the garden-variety warlocks have concocted. With no choice, Pollack agrees.

Pollack contacts the rest of his coven, which the Mailman – who only communicates through time delay – has recently joined. The Other Plane is perceived by most as a fantasy world, and the details of the network are mapped to concepts familiar to that milieu. Individuals on the Other Plane adopt new identities, but keep their true names secret, since – as Roger has found out – blackmail is all too easy when someone knows who you are in the real world…

True Names was prescient in its day, foreseeing cyberspace and virtual reality in all its glory several years before William Gibson’s Neuromancer, and building on 70s stories like John Brunner’s The Shockwave Rider. Vinge correctly understood the importance of secrecy and cryptography, the coming pervasiveness of computer networks, and how the personal computer would open up the world of computing to the everyman.

Pages of Michael Rawdon: Vernor Vinge

Read it! You’ll be entertained and amazed.

A personal note: I regard this novella so highly that, when choosing my Google Answers screen name in 2002, I very nearly went with the name “Erythrina,” a major character from “True Names.” I decided not to use this name after I told a friend about my plans, and she said “Erythrina??? Isn’t that a disease?”

Others…

A wonderful site called Technovelgy.com has a list of 652 science fiction devices and concepts, some of which have “come true.” I’ve selected a few of the most interesting items:

Thanks

Many thanks for a truly fascinating question. I shall sign off by borrowing a charming phrase from my friend and colleague Denco-ga:

Looking Forward,

Pink

The Day After Tomorrow

I got back home a bit ago from seeing Roland Emmerich’s latest death, doom, and destruction lovefest: The Day After Tomorrow. The verdict? Surprisingly, not nearly as bad as I was expecting it to be, as long as you keep in mind that it’s your typical summer disaster movie, big on special effects, and short on plausible plot.

The first half of the film, dealing with all of the cataclysmic weather tearing through the world (mostly the US, though we are treated to shots of gargantuan hailstones in Tokyo and snow in New Delhi), is by far the stronger half. Since it doesn’t have to worry about niggling little details about why things are happening or how people are coping and is free to just let the effects department run rampant, it’s actually a lot of fun. Okay, so this is defining “fun” in a somewhat odd way — wholesale destruction and massive loss of life — but hey, it works.

It’s the latter half of the film where things get iffy. None of the various plot threads are really that gripping, and many of the actions taken are silly at best and fairly ludicrous at worst. When a small group of survivors hole up inside a room in the New York Public Library and start burning books in the fireplace to stay warm, for instance, one really has to wonder why they don’t break down the heavy wooden tables, chairs, and sofas, or tear into some of the wood paneling around the room, for longer-lasting and better-burning fuel.

One thing that bugged me as the movie went on was how badly the passage of time was managed. While there were numerous remarks about the superstorm that glaciates the entire Northern Hemisphere lasting for seven to ten days, it was very difficult to tell when time jumps were being made. Scenes just cut from one to another, and aside from the occasional easily missed line about something happening “a couple of days ago”, there was no real way to tell when scenes were switching between events taking place at roughly the same time and when they were jumping forward hours or days at a stretch. A few quick montages, or even wipes or dissolves rather than straight cuts, could have done a lot to make the passage of time a little more obvious.

I will say that I think (hope) that Emmerich may be on a bit of an upswing again, though. I’ve watched his career as a director sink pretty steadily downwards through the years, but even with all its flaws, I found TDAT entertaining enough that it gives me hope that there may be more in the future that is at least watchable. ;) My basis for this is as follows:

  • 1994: Stargate — maybe it should be a guilty pleasure, but I’ve always enjoyed Stargate. The effects were good, I loved the design work mixing Egyptian themes with sci-fi technology, and while the plot was a little shaky in spots, overall it wasn’t that bad, and it was a fun look at the ever-popular theory that the construction of the pyramids was assisted by alien technology.
  • 1996: Independence Day — while it was still enjoyable, even if only for the sheer ridiculous spectacle of it all, Emmerich was definitely favoring effects over plot for ID4. The effects were a blast (almost literally, I suppose), but the script got to be so ludicrous at times (the virus upload, for instance) that you really had to turn your brain off to enjoy it. Even with the shoddy plot, though, it was still entertaining.
  • 1998: Godzilla — some of the best trailers I’d seen in a long time…and one of the worst movies. Nothing worked in this one — bad script, bad plot, bad acting, and bad effects. All in all, a bad idea. Plus they took one of the most-loved movie monsters in history and turned him into a joke of a giant iguana. Easily the low point of Emmerich’s career to date.
  • 2000: The Patriot — never saw it, so I don’t know how it figures into this. :)
  • 2004: The Day After Tomorrow — here, we’ve returned to the good effects and shoddy plot combination of ID4. I don’t think that TDAT is as enjoyable as ID4 was, but it was certainly more enjoyable than Godzilla (admittedly, not hard to do).

Here’s hoping that the pendulum will continue to swing in Emmerich’s favor, and that his next film — King Tut, according to the IMDB — will be at least decent, and maybe even something close to worthwhile.

Scientifically, of course, the whole movie is laughable. I had some fun after I got home looking up some of the articles that have popped up on the web in the past week or so taking a critical look at the science in the film.

From the Seattle PI, Scientists scoff as climates run amok on big screen:

“Shameless scientific prostitution,” blasted Gerard Roe, professor in the Department of Earth and Space Sciences.

The Statue of Liberty knee-deep in snow with taxi-sized icicles dangling off her nose? A bit of a stretch?

“It was a gross distortion of almost everything we know,” Roe slammed.

And the team of tornadoes that leveled half of Los Angeles? A tad over the top?

“The whole thing is absurd,” declared David Battisti, director of the Earth Initiative, a UW-wide program looking at the effect of humans on the planet.

From TechNewsWorld, ‘The Day After Tomorrow’ Heats Up a Political Debate:

“I’m heartened that there’s a movie addressing real climate issues,” says Marshall Shepherd, a research meteorologist at NASA’s Goddard Space Flight Center in Greenbelt, Md. “But as for the science of the movie, I’d give it a D minus or an F.”

From MTV’s review, ‘Day After Tomorrow’ Rich In Effects But Hilariously Implausible:

And where did this “science” come from? Well, it’s worth noting that “The Day After Tomorrow” was “suggested in part” by a book called “The Coming Global Superstorm,” by Art Bell and Whitley Strieber. Art Bell is a UFO buff who hosts a syndicated radio show devoted to the paranormal. Whitley Strieber is the author of a best-selling 1987 book about his many encounters with space aliens. The name of the book is “Communion: A True Story.”

Lastly, MSNBC has a good Q-and-A page about some of the climatic theories put forth in the film.