Stupid Question ™
May 23, 2005
By John Ruch
© 2005
Q: Did the military ever really try to build a “death ray,” or is it just science fiction?
—Michelle, from the Internet
A: The militaries of all major world powers have definitely been interested in “death rays” since the 1930s. In fact, most of the early research on lasers was funded by the US Department of Defense.
However, the idea of using some kind of concentrated radiation as a killing tool has foundered on enormous cost, size, fragility, atmospheric scattering and a host of other problems.
While a “death ray” that could kill armies or destroy cities is not at all practical, lasers have successfully shot down planes, helicopters and missiles. But poor reliability and meager bang for the buck still hamper even these limited uses.
The primary military uses for so-called beam weapons like lasers, microwave weapons and particle beams are targeting, sensor/computer jamming and blinding enemy soldiers. The only laser pistols the US Army uses are toys featured in an arcade game operated by recruiters at sports events.
The death ray genre typically begins with medieval legends of the ancient Greeks’ supposed use of mirrors to focus sunlight to destroy enemy ships. But modern military interest began in the 1930s with the successes of electricity and radio. The idea of using microwave radiation to bring down planes or kill large numbers of troops started to pop up in military minds.
The pioneering but increasingly eccentric scientist Nikola Tesla claimed in 1934 he had a system for hurling some type of electrically charged particles at great distances to cause damage. (Exactly how much damage is unclear; some press accounts suggest the kind of damage that would only be realized by nuclear weapons, while others say he proposed only freezing airplane engines and the like.) By 1940 he was calling it “teleforce,” though the press preferred “death ray” or “death beam.”
Whatever Tesla had in mind, he never demonstrated it. And the idea of a microwave weapon—which would produce heat in humans—similarly faded as it became clear the waves spread out too quickly over distance to cause any sort of personnel damage.
The potential for microwave devices as computer-jammers, however, remains significant. As of 1991, five US Army helicopters had crashed because nearby civilian towers carrying microwave signals had fried their systems.
This journey from supposed soldier-killer to sensor/computer-destroyer is the path taken by most death ray concepts.
The particle beam (charged subatomic particles with great velocity) and the laser (intensified light) came about in the late 1950s with major funding from the Defense Advanced Research Projects Agency (DARPA), the Defense Department’s research and development branch known for letting wild ideas flourish.
It’s doubtful researchers seriously believed the laser or particle beam would result in a hand-held ray gun.
And indeed, devices that could do that kind of damage are far too large, too expensive, too difficult to aim and require too much power for the job. Laser expert Jeff Hecht in his book “Beam Weapons” calculated that for a laser to burn a 1-centimeter-wide hole through a human body would take at minimum 100 times the energy of a conventional .45-caliber bullet, which does far more physical damage besides.
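For the curious, here is a minimal back-of-envelope sketch, in Python, of why a figure like Hecht’s is plausible. Only the 1-centimeter hole and the 100-times comparison come from his book; every other number (bullet energy, torso thickness, energy to boil away tissue) is a rough assumption of mine for illustration.

```python
import math

# All figures below are assumed round numbers for illustration, not Hecht's own.
BULLET_ENERGY_J = 500.0          # assumed muzzle energy of a typical .45-caliber round
HOLE_DIAMETER_CM = 1.0           # hole width from Hecht's example
BODY_THICKNESS_CM = 30.0         # assumed front-to-back thickness of a torso
BURN_ENERGY_J_PER_CM3 = 2600.0   # assumed energy to heat and boil off water-like tissue

# Volume of the burned channel, treated as a simple cylinder through the body
hole_volume_cm3 = math.pi * (HOLE_DIAMETER_CM / 2) ** 2 * BODY_THICKNESS_CM
laser_energy_j = hole_volume_cm3 * BURN_ENERGY_J_PER_CM3

print(f"Tissue burned through: about {hole_volume_cm3:.0f} cubic centimeters")
print(f"Laser energy required: about {laser_energy_j / 1000:.0f} kilojoules")
print(f"Roughly {laser_energy_j / BULLET_ENERGY_J:.0f} times the energy of a .45 bullet")
```

Run with these assumptions, the sketch lands at roughly 60 kilojoules, a bit over 100 times the bullet’s energy, which is in the same ballpark as Hecht’s “at minimum 100 times” figure.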
Lasers in particular did prove capable of bringing down thin-skinned drone aircraft and missiles. As early as 1975, the Army in a test shot down planes and helicopters at range with a large laser mounted in a turret atop an armored personnel carrier.
However, outside test conditions, lasers are less reliable for such tasks, especially once the smoke of battle and the scattering effect of the atmosphere at long range are figured in. And even with the successes, the exact reasons lasers brought down flying objects varied from burning an actual hole to jamming navigation hardware or software.
Lasers continue to be studied as missile-destroyers, including in the current version of a “missile-defense shield” against nuclear weapons. (A preposterous specialty X-ray laser cluster, which needed so much power it could only be powered by a nuclear blast, was one of the ideas behind the original “Star Wars” missile defense program.)
But targeting and frying sensors on weapons or satellites remain the main uses for lasers.
And lasers can blind not only mechanical sensors, but human eyes, too. As early as the 1970s, lasers were used to dazzle enemy pilots, with definitive use by the Soviets and the British. In the 1980s, the Army tested hand-held lasers codenamed Dazer and Cobra with the intent of dazzling or blinding the enemy.
The business of blinding enemy troops is generally frowned upon, but that side effect of laser targeting systems is not lost on troops in the field. Laser-blocking goggles and visors have become commonplace in the modern arsenal.
Particle beams can pack a wallop. As Hecht notes, lightning is essentially a particle beam. But, like lightning, they’re also unreliable in their targeting and effect. They’re probably much less useful than lasers. The Soviets raised the possibility of using a particle beam generator to create a truly devastating death ray by stimulating the atmosphere into releasing lethal secondary radiation—but as usual, that would be insanely inefficient and expensive. Full-scale nuclear war would be easier and cheaper.
Specialty use and general harassment appear to be the future of ray guns. The Army has explored the relatively feasible realm of infrared lasers that can painfully but not seriously burn exposed human skin. There is also military research into sonic weapons that could disorient or nauseate enemies, but the familiar problems of aiming and reliability have cropped up.
It is worth noting, barely, that conspiracy theorists have a variety of death-ray beliefs about the High Frequency Active Auroral Research Program (HAARP) facility in Alaska, which uses extremely high-powered radio to study the ionosphere. While such beliefs are encouraged by DARPA’s funding of HAARP, it has yet to lay waste to the countryside.
Execution As Homicide
Stupid Question ™
May 16, 2005
By John Ruch
© 2005
Q: Why isn’t a person who carries out a state execution guilty of homicide themselves?
—Jackie, Mashpee, Massachusetts
A: An executioner certainly commits homicide. However, in law, that’s a neutral term that doesn’t necessarily imply criminal guilt.
There are three broad subclasses of homicide in law. The one we generally think of as “homicide” in the murder-mystery sense is felonious homicide. In a case of terrible tautology, it simply means homicide that is punishable as a felony (in medieval times, a felony was any crime punishable by death—a much longer list than ours today).
In practical terms, it means a homicide carried out with criminal intent. Felonious homicide is broken down into two more familiar types: murder and manslaughter.
Murder is felonious homicide conducted with “malice aforethought”—premeditation or malicious intent. While definitions of manslaughter vary widely, it generally means felonious homicide without premeditated malice. The classic example is a “crime of passion”—killing someone in the heat of an argument.
Murder is considered worse than manslaughter and punished accordingly.
Felonious homicide also includes “felony murder”—a killing conducted while committing some other, primary felony. It’s essentially a way of punishing what would normally be manslaughter as murder to discourage violent crimes. For example, let’s say you use explosives to blow open a bank vault in the middle of the night with no expectation of anyone being inside. Your intent is to commit various felonies, but not murder. However, if someone turns out to be inside by chance and is killed by the blast, you could be charged with felony murder in some jurisdictions.
Murder and manslaughter don’t necessarily involve direct action. You can be charged with either one for a killing carried out by “procurement” (getting someone else to do the deed)—or “omission” (failing to act in a way that kills someone, such as not feeding them, or acting recklessly).
Another broad legal category is excusable homicide. This covers accidental killings that cannot be considered a crime. Say you hit a golf ball on a course, which bounces off a tree, goes into a crowd, hits a man on the head, and kills him. You have committed homicide, but obviously you had no criminal intent or recklessness. The law would excuse you from criminal penalties.
A legal execution is obviously not excusable homicide. In fact, it sounds like felonious homicide. It fits the bill of the most severe kind of murder—premeditated and malicious. It also involves procurement; because the executioner is a hired killer, it would seem the entire state, and arguably the entire populace of the democracy, could be charged with murder.
But a legal execution is not murder because of the third legal category: justifiable homicide. It covers a killing conducted in self-defense or under some other legal right. Justifiable homicides clearly involve intent, and possibly malice and premeditation, but are “justified” by the law. Because the law justifies it, it is by definition not a crime. (Definitions of excusable homicide sometimes include justifiable homicide.)
Justifiable homicide includes cases in which police officers kill armed suspects, and anyone who kills someone criminally attacking them. (Such killings must still be demonstrably justifiable.) It also covers legal executions, which are penalties justified by the law.
To put a fine point on it, an execution is not legal simply because the state carries it out. It has to be authorized by law. Therefore, if a state conducted an execution without having a death penalty law, it would be murder, not justifiable homicide.
Alan Smithee
Stupid Question ™
May 9, 2005
By John Ruch
© 2005
Q: Where did the fake name Alan Smithee come from for directors of bad movies?
—Scott and Doug, Columbus, Ohio
A: Originally “Allen Smithee,” this pseudonym has become a kind of Hollywood in-joke for directors who are embarrassed—or simply don’t care—about their work. But it began as a serious solution to a union problem.
It goes back to the Richard Widmark/Lena Horne Western (you can see where its problems began) “Death of a Gunfighter,” which began lensing in 1967 under the direction of Robert Totten.
Totten was predominantly known—well, not for anything. But he had experience directing the TV series “Gunsmoke.” The studio, not liking his work, quickly yanked him from the project.
He was replaced by the great action director Don Siegel (“Dirty Harry,” “Invasion of the Body Snatchers”). Siegel did his job, but he wasn’t very happy with the mishmash, either.
When the movie finally hit theaters two years later, egos and embarrassments led to neither director wanting his name on the movie. It being unheard of for a movie to be released anonymously, this was a problem.
The Directors Guild of America, the trade union that sets rules on this sort of thing, at first wasn’t having any of it. But finally, it decided to establish an official fake name to shelter unsatisfied directors (especially those unsatisfied due to forces beyond their control).
The general approach was to create a name that was unobtrusive, yet wouldn’t be confused with any innocent director.
I haven’t found any records of how “Allen” was selected. “Smithee” began as the mega-discreet “Smith,” until it was realized that some innocent Allen Smith out there would eventually get saddled with “Death of a Gunfighter.” And so the extra “E”s were added on the theory that nobody would ever have that name.
The Allen Smithee deception worked, and Hollywood lore now includes the reviews of “Death of a Gunfighter” that praised Smithee for his deft direction.
Nobody would mistake the name for real today. The name, which has curiously morphed into “Alan Smithee,” is often self-consciously used by directors of straight-to-video dreck and similar movies that are simply bad out of the box, not the result of some kind of tampering.
And the joke really came out of the closet (or would have, if anyone had seen such a wretched movie) with the 1997 satire “Burn Hollywood Burn,” about a “real” Alan Smithee who comes to Hollywood, wants to remove his name from an awful film and finds out the pseudonym is his real name.
The Directors Guild is less uptight these days, and has less control over various filmmaking outlets, so other pseudonyms are used, with none being standard.
The Internet Movie Database has an amusing faux biography of Alan Smithee, including a shooting-down of the theory that the name came from an anagram for “The Alias Men.” Naturally, the biography is credited to “Alan Smithee.”
Trial By Combat
Stupid Question ™
May 2, 2005
By John Ruch
© 2005
Q: Is it true you could win a court case in the Middle Ages by fighting a duel? That doesn’t seem fair.
—Jamie, from the Internet
A: “Judicial combat” or “trial by combat” indeed could be used to settle certain types of court cases from early medieval times until as late as the 1500s, especially in duel-crazy France.
It was considered fair because of the notion—evidently still popular today—that whoever wins was de facto favored by God. However, it’s probably more accurate to say that judicial combat was a form of macho gambling given a patina of righteousness. (The Catholic Church never approved of the method—especially in the days when priests weren’t exempted.)
The legal concepts involved shouldn’t be confused with the way a modern court works. Judicial combat was rooted in the legal theory of “ordeals”—torture tests that you would survive or otherwise pass by the grace of God if you were (in modern parlance) innocent.
Judicial combat generally had to involve a supposed witness to a crime accusing a “defendant” (in modern terms). But there were a variety of other ordeals for cases with a higher level of doubt, or where there were more facts to be determined.
By and large, these were straight-up torture tests, such as carrying a piece of red-hot iron in one’s hand for a prescribed period, or plunging one’s hand into boiling water and trying really hard not to blister too much.
At its simplest (and most simple-minded), the ordeals were literal gambling, such as drawing a straw or marked stone in judgment-by-lottery.
In the viewpoint of the times, these weren’t random practices, because nothing was; God directs everything, and always favors the righteous.
Faith did not run so deep that such laws applied to everyone, however. The torture tests, with their obvious presumption of guilt, were reserved almost exclusively for commoners.
Judicial combat, on the other hand, thrived in the nobility, where it was wrapped in the gaudy philosophy of chivalry. In later periods it was often carried out in what would be considered military courts today, with its outcome recognized by common law as well.
I said before that the outcome of an ordeal judged innocence in modern terms. In the terms of the day, however, what it really determined was truthfulness. Then, as now, the basis of the legal system was one party making a claim against another party—and both sides claiming the other is making it all up.
However, we start from a position of the accused being innocent, and they can leave court with a determination of “not guilty.” Judicial combat was much more focused on personal honor and the truth of each party’s claim.
As historian Francois Billacois noted, the fundamental idea behind the duel of the era was calling each other a liar. The loser of judicial combat wasn’t merely dishonored (and likely dead); he was automatically considered a perjurer.
While perjury is still a crime in our modern system, it’s rarely prosecuted. And it’s unimaginable that someone would be executed for perjury, let alone punished before a determination of the facts of the larger crime is made. But that’s what judicial combat did.
In the system of chivalry, being called a liar was an affront to a man’s all-important honor. While fighting it out was oh-so-manly, there was another way to resolve the situation—you could get friends and allies to swear to your good character.
If that didn’t work, combat it was. Of course, fighting to the death was something of a deterrent, and out-of-court settlements were still possible (such as withdrawing the accusation).
The exact ceremony for judicial combat varied with time and place. In France, which kept the custom the longest and in the most elaborate form, it eventually could be invoked only for the most serious crimes and only with the king himself officiating.
However, some general rules appear to have applied. The fight wasn’t necessarily to the death, there always being some kind of judgment as to what meant defeat. Forfeit was also possible (though it could be followed by execution).
Women were never dragged into combat. Elderly men and those too impaired to fight could also get out of it. Historian Bradford Broughton noted that “even broken foreteeth could disqualify a man, for these teeth helped greatly in the victory”—indicating the savagery judicial combat could involve.
That’s not to say the actual accuser and defendant would be the ones stabbing and biting each other. It was often possible (sometimes legally, sometimes not) to hire “champions,” or professional fighters, to duel in one’s place. Apparently God’s favor could be bought by proxy.
In keeping with the irrationality of the whole business, another universal practice was inspecting the combatants for magical talismans—supernatural steroids that amounted to cheating.
State Names
Stupid Question ™
April 11, 2005
By John Ruch
© 2005
Q: What are the origins of the names of all the U.S. states?
—Johannsen, from the Internet
A: The most common origin is terms from the Native Americans the states displaced, typically of dubious translation, and filtered through English, French, Spanish or even Russian.
Where a credible translation is known, it often refers either to a local tribe or a major waterway or region. (Many of the following states were definitely named based on the Native American-derived name for a river or lake.) Possible tribe-derived names include: Alabama, Arkansas, North/South Dakota, Illinois, Kansas, Massachusetts, Missouri, Texas and Utah.
Possible waterway/region-derived names include: Alaska, Arizona, Connecticut, Kentucky, Michigan, Minnesota, Mississippi, Nebraska, Ohio, Oregon and Wisconsin.
Tennessee is known to be of Native American origin, but its meaning is a mystery. Some sources propose it came from the name of a town.
Hawaii is a native Polynesian name of unknown origin, possibly referring to its original discoverer or a legendary Polynesian motherland.
Iowa comes with varying guesses, either as a possible place name, or an insulting term for a tribe known as the “sleepy ones.”
Wyoming is an oddball in the bunch. It’s named for the Wyoming Valley in Pennsylvania, which in turn was derived from an Algonquin/Delaware term meaning something like “large plains,” which in turn may be an English invention made by mashing Native American words together.
Naming states for people is also popular. Georgia was named for England’s King George II.
Maryland was dedicated to Queen Henrietta Maria, the wife of England’s King Charles I. Ol’ Chuck himself was memorialized with North/South Carolina, the feminized (as place names always are) adjectival version of his name in Latin.
Virginia (and by extension, West Virginia) was named for England’s Queen Elizabeth I, known as the “Virgin Queen.” Louisiana comes from the French La Louisianne, their name for the whole Mississippi Valley, which was a French colony under the eponymous King Louis XIV.
Among lesser royalty, Delaware was named for the bay and river, which in turn were named for Thomas West, Baron (Lord) De La Warr, who was a governor at Virginia’s Jamestown colony. New York was a tribute to James Stuart, the Duke of York and Albany. York, of course, is a district of England.
George Washington, who refused to become royalty at all, gave his name to Washington State. Pennsylvania, a Latinized form of “Penn’s woodland,” is named in theory for state founder William Penn’s father, conveniently also named William.
Foreign (though not foreign during the original colonial period) or modern Latin descriptive phrases are responsible for several names. Colorado (Spanish for “reddish”) originally referred to the Colorado River. Florida is Spanish for “filled with flowers”; it can also refer to “Feast of Flowers,” or Easter, and the state’s land was reputedly spotted by Ponce de Leon on Easter Sunday.
Montana is modern Latin for “mountainous area.” Nevada, which means “snow-covered” in Spanish, originally referred to the Sierra Nevada mountains, not the desert interior. Vermont is a flipped-around corruption of the French “Les Monts Verts,” or “Green Mountains”—also the name of the state’s major mountain chain.
New Mexico’s origin is pretty obvious, as are the English-dubbed New Hampshire and New Jersey.
Rhode Island’s name is a mystery that could be a foreign phrase or a borrowed place name. Some argue it means “red island,” from the Dutch “roodt,” while others propose it was named in honor of the Greek island and seaport of Rhodes.
A few state names were deliberately invented. Indiana is modern Latin for “land of the Indians.” Idaho is fake Native American and doesn’t mean anything as far as anyone can tell.
Oklahoma comes from the Choctaw words for “red” and “people.” But it’s not a Choctaw term. Instead, it was invented by European-Americans as a term for the area in which they planned to dump all the Native Americans they drove out of other places.
California is an invented term of fantastical, quasi-mythical origins. It’s the name of an island populated by gold-clad Amazons, ruled by a Queen Calafia, in a 1510 Spanish romance. Taken seriously by many explorers, it gave its name to the modern state, which indeed was depicted as an island on early maps of the Pacific coast.
The only name of utterly mysterious origin is the rather plain Maine. Some speculate it referred to the mainland of New England (a la the “Spanish Main”). Others suggest an inspiration in the French province of Maine, though no clear link has been established.
Pirate Flags
Stupid Question ™
April 4, 2005
By John Ruch
© 2005
Q: Did pirate ships really fly pirate flags? If so, why? Wouldn’t they want to be stealthy?
—anonymous, Chicago, Illinois
A: Pirate ships indeed flew flags—variously a red flag, a black flag, and/or the infamous “Jolly Roger,” a black or red flag with some morbid motif.
But they didn’t sail around all the time with this dead giveaway flying. The flag would be hoisted at the last minute prior to engaging with a victim ship. The idea was to sneak up and then instill hopeless fear so the ship would peacefully surrender—which often happened. As Stuart A. Kallen puts it in his book “Life Among the Pirates,” the pirate flag spoke “the universal language of fear.”
At least one pirate elaborated this by playing martial music—drums and trumpets—at the time of assault.
At a time in which the vicious nature of most pirates was common knowledge, the flag was very effective.
While the language may be universal, the flags weren’t. The classic Jolly Roger is known only from 1700, especially in the Caribbean. Before that, and in other areas, many pirates simply flew national flags (not surprisingly, since many were privateers paid by national governments to disrupt shipping).
Still, the red flag was popular fairly early, raised as a blood-colored sign to indicate an intention to fight. The black flag had similar effect, with connotations of death.
The first known Jolly Roger was sighted around 1700 in the Caribbean flying on the ship of a little-known French pirate, Emanuel Wynne. It was a black flag bearing a skull with crossed bones behind it, and beneath it an hourglass.
Pirates did not invent such symbolism. In fact, most of their designs were common on gravestones of the era, which emphasized an awareness of the shortness of life. The hourglass indicated the fleeting nature of time.
There were many variations on the Jolly Roger, with each pirate crew having its own, much like a gang logo. Edward “Blackbeard” Teach’s flag had a devil-headed skeleton with an hourglass in one hand and a spear in the other, which was jabbing at a red, bleeding heart.
Christopher Moody’s flag was red and featured a winged hourglass, an arm wielding a dagger and a skull and crossbones.
The now-classic design of a skull with bones crossed beneath it, on a black background, was the flag of Edward Seegar, aka Edward England, who flew it from 1717 to 1720. In fact, England wasn’t so bloodthirsty; he ended up marooned by his own crew for showing too much mercy to a captured captain. He was undoubtedly sympathetic, since he himself was originally a captive who chose to join the pirate crew.
The term “Jolly Roger” is first found in print in the late 1700s. It’s unclear if pirates ever actually used it themselves. In any case, it’s a playfully morbid personification of the skull on the flag as a pirate himself.
“Jolly” at the time had connotations not only of fun-loving and agreeable, but also of drunkenness and lust. “Roger” was a generic term for a man, as well as slang for “penis.” “Old Roger,” a slang term for Satan also based on the generic-male usage, may have been an influence.
Pop etymologies that attribute “Jolly Roger” to a corrupted French phrase or Indian name should be made to walk the plank.
"Bulldozer"
Stupid Question ™
March 28, 2005
By John Ruch
© 2005
Q: What’s the origin of the word “bulldozer”?
—anonymous, from the Internet
A: Nobody really knows, which makes “bulldozer” one of those fascinating words whose meaning shifts depending on what people imagine its origin to be.
“Bulldozer” today most commonly refers to a kind of earthmoving tractor with a blade attached to the front. It can also mean a kind of bully or steamrolling force—a metaphorical meaning that, one would guess, came from the earthmover idea. In fact, the exact reverse is true.
Bulldozer and bulldoze (the verb) are US slang that first turned up in the late 1800s to mean something intimidating or bullying, either literally or metaphorically. In at least some uses, large-caliber handguns were called “bulldozers.”
The word first popped up in print in newspapers, where it was frequently used along with a self-conscious explanation of its meaning. These etymologies generally claimed it came from “bull-dose,” a supposed slave plantation term for a whipping so severe it was a “dose” of punishment that would harm even a bull.
This is fairly believable, especially presuming the influence of such terms as “bullwhip.” However, there are reasons to doubt it, especially since no one has found an independent prior usage of “bull-dose.” It is just as likely to be pop etymology that helped cement the intimidating tone of the word.
It’s significant that at the time, “bull” could be used as a prefix in slang to denote something large. So a “bull-dose” could simply be a large dose of anything. But then, it’s also unclear how “dose” shifted into “doze”—if that’s indeed what happened. Early citations vary in their spellings, making it unclear which came first.
The overall sense of these early newspaper definitions is of someone trying to define and phonetically spell a slang term from the street. So I don’t put a lot of stock in either the meaning or the spelling, though they do agree on the word “dose” being the basis. Still, a corruption of “bulldogs” or even some playful term like “bull does” (as in “how a bull behaves”) seems just as likely to me.
It’s worth noting that while somewhat synonymous with “bully,” “bulldoze” has no etymological relation to that word. But they certainly came to converge in meaning nonetheless. In part, that seems to be because people mistakenly assume “bully” stems from the word “bull,” which it doesn’t.
In that vein, it’s interesting to note that at the same time “bulldozer” was slang for a gun, so was “bulldog.”
Anyhow, it was about 1930 that the earthmover industry picked up the pushy-sounding “bulldozer” as a term for its tractor.
The metaphorical meaning of a bully or overwhelming force remains, of course. But today, virtually everybody presumes it comes from the earthmover term. So there is almost always now an implication of a pushing, thrusting, flattening force that was not present in earlier meanings of “bulldoze.”
Toll House Morsels
Stupid Question ™
March 21, 2005
By John Ruch
© 2005
Q: What the heck is a “Toll House morsel”? What does a toll house have to do with chocolate chips?
—Stephen D., Ohio
A: Nestle Toll House Semi-Sweet Chocolate Morsels are those chocolate chips sold in bags for cooking purposes.
On each package is a good-sized picture of some kind of cottage, the words “Toll House,” and the slogan, “Since 1939.”
These days, “tolls” tend to conjure up an image of tossing change into a basket on a smoggy turnpike, not morsels of yummy chocolate. However, it was indeed an old toll house that spawned the modern chocolate chip, and reputedly the chocolate chip cookie, as well.
It all goes back to a 1709 toll house in what is now Whitman, Massachusetts, built along an early turnpike that is now Bedford Street. A toll house was something like a modern truck stop combined with a toll booth—you could not only pay your toll, but get a meal and lodgings, too.
The old property was bought in 1930 by Ruth Graves Wakefield and her husband Kenneth. They reopened it as an inn called, appropriately if not imaginatively, the Toll House Inn.
Ruth, a former home ec teacher, came up with the food for the place. As the legend goes, one day she decided to break up a bar of Nestle’s semi-sweet chocolate and toss it into the batter of her butter cookies.
She expected the chocolate to melt and swirl around in the batter, but the bits stayed in place instead. She liked it, so did everybody else, and the chocolate chip cookie was born. According to Nestle corporate history, Ruth’s recipe became a New England hit when it was published in several newspapers. Nestle noticed when its chocolate bar sales started going up.
It was Ruth, however, who had the smarts to go cut a deal with Nestle. She agreed to let them put the recipe on their chocolate bar wrappers as a sales-booster, while Nestle agreed to provide her with free chocolate for life.
The year 1939 comes in because that’s when Nestle got the bright idea to start selling actual chips instead of a chocolate bar you had to bust into pieces yourself. (Previously they had started selling the bars with score marks for breaking.) These chips were (and are) tiny versions of the shape known in the trade as a “kiss,” and were given the catchy name “morsels.”
The Toll House name was originally a sign of Nestle’s connection to the original chocolate chip. Now it’s just a kind of meaningless nostalgia, but still effective as the brand many people think of when it comes to buying “morsels.”
Free chocolate for life is a heck of a deal, but like all bargains with the devil, did not work out in the long run. Now Ruth is dead and the Toll House Inn is scattered ashes from a New Year’s Eve, 1984 blaze. But Nestle still gets to flog the name.
A Wendy’s fast-food restaurant now sits on the old Toll House Inn property. Wendy’s carries chocolate chip cookies in select stores—but, for the record, the Whitman franchise isn’t one of them.
Explosion And Implosion
Stupid Question ™
March 14, 2005
By John Ruch
© 2005
Q: What’s the difference between an explosion and an implosion?
—Daniel (age 10), from the Internet
A: In everyday language, an explosion is a violent burst directed outward, while an implosion is a violent burst directed inward.
For example, fireworks are explosions caused by superheated gases and flames bursting outward from the center of a rocket.
An example of an implosion is a submarine under very deep water losing its air pressure because of a leak and then being crushed from the outside in, because of the pressure of the water.
The terms “explosion” and “implosion” almost always imply that pressurized gases or liquids are involved. If you jump on a soda can, that’s also something collapsing from the outside in, but it would rarely be called an implosion.
The words have a history that may surprise you, because “explosion” originally had nothing to do with bombs or blasts.
The verb “explode” is taken straight from the Latin word explaudere (or explodere), which literally means “to clap off.” It’s a theater term referring to a bad actor being forced off the stage by audience clapping. The clapping in this case was not a good thing, like we think of it today.
In Latin, it also came to mean expressing contempt for anything.
This was the meaning the word had when it came into English, too, around the early 1500s. In fact, it’s a meaning the word still has today, in phrases such as, “His argument was exploded.”
In the mid-1600s we start to see another meaning of the word, pretty close to what we use today: to force air or some other gas out with a loud sound. This is where the linguistic term “explodent consonants” comes from, which means the kinds of letters we pronounce by blowing out air.
By the late 1700s, the word took on its full, common meaning of a violent, inside-out burst.
The Romans had a different word for our common meaning of “explode.” They used displodere, which roughly means “to clap into two pieces.” It seems to have referred especially to bladders, used for holding water and other liquids or gases, bursting with a loud sound. In the 1600s, the English version of the word, “displode,” was used to mean what “explode” means today.
“Implosion” is a modern invention from the 1800s, a reversal of “explode”; it literally means “to clap (or burst) inwards.”
“Ex-” in “explosion” means “off” or “out.” The “im-” in “implosion” means “in” or “inward.”
Why “im-” instead of “in-,” which would seem to make more sense?
Well, it really is “in-.” In Latin words that began with “in-,” the letter “n” was changed to an “m” if it came before the letters “b,” “m” or “p.” That’s because the “n” sounds like an “m” in front of those letters anyway.
Try it yourself. If you say, “inplosion,” it sounds like “implosion” because of the way the letter sounds hit each other.
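For anyone who likes their spelling rules mechanical, the whole thing fits in a few lines of Python. This is only a toy sketch of the assimilation rule described above, with made-up sample roots; it isn’t anything out of a Latin textbook.

    def attach_in_prefix(root):
        # Attach the Latin prefix "in-", assimilating n -> m before b, m or p.
        prefix = "im" if root[:1].lower() in ("b", "m", "p") else "in"
        return prefix + root

    for root in ("plosion", "balance", "mobile", "explicable"):
        print(attach_in_prefix(root))
    # prints: implosion, imbalance, immobile, inexplicable

Run it and “inplosion” never appears; the rule pushes it straight to “implosion,” just as your mouth does.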
Victorian Crime Slang
Stupid Question ™
March 7, 2005
By John Ruch
© 2005
Q: I’m reading a mystery novel set in Victorian London, and a character is described as a “smasher” and “screwsman.” What does this mean?
—Holly, Columbus, Ohio
A: It means he’s at least two sorts of criminal, in the language of the Victorian underworld.
The terms are among the hundreds of colorful and sometimes deliberately confusing bits of slang that arose in cant, the lingo of the criminal underground. It mixed Cockney rhyming slang, Romany (or “Gypsy”), neologisms and other influences to create a kind of code language.
In cant, a screwsman was a particular sort of burglar—one who uses lockpicks or (usually stolen) keys. That’s as opposed to a cracksman, who just busts things open with a hammer or what have you. According to Kellow Chesney in “The Anti-Society,” a great overview of Victorian crime, “cracksman” was the preferred generic term for any sort of burglar.
“Screwsman” comes from “screw,” a slang term for a skeleton key or similar device used to pick a lock. The exact origin of “screw” in this sense is unclear, but can be guessed—especially if one is willing to be vulgar.
A smasher is someone who passes counterfeit coinage, which was a widespread crime in the Victorian era. “Coiners” or “bit-fakers” were specialists who actually made fake coins. In something akin to modern drug-dealing set-ups, these gurus then passed the product along to bulk distributors.
Smashers were the bottom rung of this criminal ladder, the ones who actually put the coins into circulation—typically by buying them from bulk dealers and then buying things illegally with them.
In the high Victorian era, fake coins were known as “snide,” and the distribution the smashers engaged in was called “snide-pitching.” However, it was also sometimes known as “smashing.”
That in turn came from “smash,” meaning a counterfeit coin. The origin of that term is unknown, but it is possibly related to the main technique at the time—stamping coins out of sheet metal. Of course, it could also derive from some complicated Cockney rhyme.
In any case, from that we get the natural “smasher.”
Whatever their place in the hierarchy, counterfeiters were collectively known as “shofulmen.” That’s from shoful, a straight-from-Yiddish term roughly meaning “worthless garbage.”
Chesney says snide-pitching wasn’t big business and typically was done as a side venture by people into other forms of crime. So it is entirely believable for someone to have been both a smasher and a screwsman.
Selection Of New Pope
Stupid Question ™
Feb. 28, 2005
By John Ruch
© 2005
Q: How is a new pope selected?
—Anonymous, from the Internet
A: Not to be pessimistic about Pope John Paul II, but…well, you’re not asking to be optimistic, are you?
Fittingly, the current pope himself made the most recent changes to an electoral system that has been regularly tweaked to avoid the riots and antipopes that once plagued the choosing of the peaceful pastor of the God of love.
The history of selecting popes is long, colorful and wildly brutal, as amusingly described in Catholic almanacs and encyclopedias.
At first, choosing a guy who was basically just the archbishop of Rome was a relatively casual affair. But then the whole Christianity thing really took off, and next thing you know—riots like there should have been over the 2000 U.S. election.
The Roman emperors, being law-and-order types, said enough of that, and stepped in to start running—not to mention fixing—the elections. No more riots, but a few centuries of Machiavellian politics, assassinations and general unholiness (involving other countries and Italian families, too).
Eventually, the church got sick of all that and decided that only cardinals would choose the new pope. A further tweak was requiring a two-thirds majority vote. Next thing you know—more riots!
Cardinals have to choose among themselves who gets the plum pope-for-life job. One can only imagine the horrific jockeying and office politics involved.
Actually, one doesn’t have to imagine. Just take my personal favorite, the election following the 1268 death of Pope Clement IV. The cardinals jerked around for three years, arguing about who would get the job, and even then didn’t decide until, in the words of Our Sunday Visitor’s Catholic Almanac, “the citizens of the city [Viterbo, Italy] reduced them to bread and water and tore the roof off the palace in which they were residing.”
Finally, the church came up with the still-current practice of the Conclave. I.e., the cardinals are locked into the Sistine Chapel and not allowed to come out until they’ve made up their minds. Unsurprisingly, solitary confinement has greatly speeded the election process.
Things begin when the so-called College of Cardinals convenes in the Vatican for the election, called there by the Chamberlain of the Holy Roman Church. They have 15 days to gather, and no more than five days to mess around, with an introductory Mass and processional and so on, before they enter the Conclave. (Cardinals over 80 years old don’t get to vote and hence don’t enter the Conclave.)
The Conclave means they’re sealed inside the Vatican with almost literally no outside contact whatsoever. They’re allowed to spend the night in Vatican lodgings, but the rest of the time they’re locked inside the Vatican Palace. In either case, they’re not allowed to say anything about the voting or deliberations, ever. Secrecy is a huge and significant part of the process, avoiding unsightly Florida-style election embarrassments and scandals. No one ever knows who the “candidates” even are—except for the winning one, of course.
And I mean locked inside, literally. The Vatican’s Swiss Guard puts a padlock on the door.
Once inside, the cardinals can presumably mill about and deliberate as at any political convention—though trading votes is reportedly not allowed. The actual voting takes place in the Sistine Chapel; during the election of the current pope, ballots were counted at a table set up in front of the altar of the Last Judgement. Yikes!
In theory, a pope can be chosen by “acclamation”—that is, all the cardinals agreeing on one guy. Unsurprisingly, this apparently never happens, judging by the amount of time recent Conclaves have lasted.
This leaves “scrutiny”—a normal casting of ballots, as we would call it. (A third old method, “compromise,” which handed the choice to a small committee of cardinals, has likewise fallen out of use.) Votes are written on paper ballots, put in a ballot box, and read aloud. Four votes are taken each day of the Conclave until somebody wins.
Winning consists of pulling two-thirds of the votes plus one. After a certain number of unsuccessful ballots—30, according to one source—the rule changes to allow the man who wins a simple majority to win the election.
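For the arithmetically inclined, the threshold is easy to sketch. Here’s a rough Python version of the vote math described above; the elector count is hypothetical, and the 30-ballot cutoff is taken on faith from that same single source.

    import math

    def votes_needed(electors, ballots_so_far, cutoff=30):
        # Two-thirds of the votes plus one until the cutoff, then a simple majority.
        if ballots_so_far < cutoff:
            return math.ceil(electors * 2 / 3) + 1
        return electors // 2 + 1

    print(votes_needed(electors=117, ballots_so_far=4))   # 79
    print(votes_needed(electors=117, ballots_so_far=31))  # 59

With a hypothetical 117 electors, that works out to 79 votes for most of the Conclave, dropping to 59 once the cardinals have deadlocked long enough.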
Keeping in mind the obsession with secrecy, all of the ballots, notes and any other election documents are tossed into a stove and burned.
This burning is what led to the famous smoke signal from a Vatican chimney—the fumata—that is the only indication to the outside world whether a new pope has been chosen. If the smoke is white, that means they’ve made up their minds. If it’s dark, it means they’re still deciding.
The smoke distinction used to be made by adding straw to produce thick, dark smoke if necessary. However, this was never reliable, and more recently chemicals have been used to produce either white or black smoke.
Even this screwed up in the 1978 election of Pope John Paul I, when a reported accident with the “white” chemical unintentionally produced thick, dark smoke.
Woman Pope
Stupid Question ™
Feb. 21, 2005
By John Ruch
© 2005
Q: Was there ever a female pope?
—Tracy, from the Internet
A: The short answer is, “No”—despite an 800-year-old legend that a woman masquerading as a man indeed beat Vatican sexism at its own game.
The long answer is more complicated, because the legend has gone through a strange progression of being strongly believed to strongly reviled and/or debunked, and then believed again in some circles. Almost no one, including modern Catholic historians, has written about it with objectivity.
I relied on Rosemary and Darroll Pardoe’s history “The Female Pope: The Mystery of Pope Joan” for much of what follows. It’s the only rational, in-depth study available.
The story first appears in obscure histories dating to the mid-1200s. They have an unnamed, highly-educated woman, dressed as a man for traveling purposes, being hired into the Vatican and eventually rising to pope around 1100. She is discovered when she becomes pregnant and inconveniently gives birth while mounting a horse for a public procession. A demon outs her, to boot.
The tale was later interpolated into the contemporary manuscripts of Martin of Troppau’s extremely influential history of the popes and Roman emperors. This slightly different and more elaborate version identified the woman as a German called “Johann Anglicus” (indicating an English connection), and further confused matters by placing the action in the 800s.
Other histories quickly picked up the story, too, adding various details and commentary, usually describing her death by stoning. She had some aliases, with Agnes being popular in the 1400s. “Johanna” or “Joan,” feminized forms of “Johann” or “John,” became popular in the late 1400s and have remained the usual names ever since.
There is no doubt that the story somehow was taken seriously by both the public and the Church. A bust of her—later removed—was included in a series of papal portraits in the cathedral of Siena, Italy around 1400.
And popular legend connected to her some type of statue, now lost, in a Roman street said to be the location of her unmasking, along with an inscription of unknown content and location. The elaborated legend was that the popes avoided this street due to the connection, and there appears to be some validity to that.
The legend did have its doubters, including the future Pope Pius II. However, the Church didn’t really change its tune until Protestant reformers in the late 1500s began making fun of the supposed female pope, which was held up as a sexist example of hypocrisy and corruption in the Church. It also incarnated the Protestant image of the Church as the “Whore of Babylon” and spun off other insulting legends, such as the pope being required to undergo a proof-of-sex display.
It didn’t help that many Catholics continued to believe the legend, forcing them to attempt to explain her as a hermaphrodite.
Ironically, it was Protestant authors, influenced by modern rationalism, who took the first shots at debunking the legend.
Women gaining power by dressing up as men was a motif of medieval storytelling. The Pardoes point out specifically that it’s an element of the legends of many women Catholic saints. They even found a similar legend, dating to about 980, about a woman ascending to a bishopric of the Eastern church that way; the tale apparently stuck in the imagination, being brought up in 1054 in a nasty letter to the Eastern church from Pope Leo IX.
The Pardoes suggest the woman pope legend—which first appears in histories written by Franciscan monks—was invented, based on pre-existing material, as part of potshots in an internal dispute between their order and Rome.
Why the story would be taken seriously and even embraced is another question. The setting around 1100 was credible, since there were several would-be popes vying for power (and short reigns for those who got it). Perhaps the warning to troublemaking women or anyone who would sneak into power was considered useful.
Interestingly, however, the idea of the woman pope seems to have met with a positive response in the 1400s, the same period in which its popularity appears to have peaked. Some sources credit the humanist movement for embracing papal equal opportunity.
While the popularity of the legend is mysterious, there is no doubt it is simply a legend. There are no contemporary references to a woman pope or such a remarkable scandal. There isn’t even room in the currently acknowledged papal chronology to fit her in.
Things are now coming full circle among some feminists, who argue “Pope Joan” was real, her history now somehow censored by the Vatican. Her bust in Siena was indeed censored—one might say, properly edited out—during the Protestant controversy. But it takes a lot of wishful reading to locate her historically.
Likewise, some feminist sources refer to Maifreda (sometimes “Manfreda”) as a woman pope. She was a figure in a late-1200s Italian sect called the Guglielmites, which formed around a deceased woman preacher. The group supposedly plotted to overthrow the Roman Catholic Church and elected Maifreda as its would-be pope; the Inquisition had them all killed around 1300.
While Maifreda may have called herself a pope, that’s like me declaring myself president. The Guglielmites were a small group that never threatened the formal Catholic papacy.
Mammoth Meat
Stupid Question ™
Feb. 14, 2005
By John Ruch
© 2005
Q: Has anybody ever eaten the meat of a frozen mammoth?
—Dan F., Columbus, Ohio
A: There is no reliable report of a modern human eating any part of a frozen mammoth—and very few unreliable reports, for that matter.
The story seems to have sprouted from a thought-about-it-but-decided-not-to report from a groundbreaking mammoth find in 1903. And the idea of people feasting on flash-frozen mammoth steaks has become a staple of loony fundamentalist Christian literature, feeding their ideas of a young Earth and a Biblical flood that knocked off prehistoric species.
All that being said, there is a solid example of a scientist and his friends tasting the flesh of a similarly frozen bison. However, that case—involving a dry, tough scrap of flesh boiled in a stew—only underlines why frozen mummy flesh is not generally eaten.
It must also be noted that dogs and scavenging animals have certainly eaten frozen mammoth meat. So at least it is true that modern animals have had the rare experience of feasting on meat aged 10,000 years or more, since the days when our ancestors were hunting the beasts with spears.
The mammoth-meat idea rests on misconceptions. Mammoths (and other frozen, mummified fossil animals ranging from rhinos to mice) are found in frozen silt, not a giant ice cube.
The silt may contain veins of ice. Also, the corpses are often immediately surrounded by ice created by the drawing out of moisture from the body itself, which results in the mummification. Nonetheless, when these things thaw enough to be discovered, they are in balls of muck, not a piece of ice. This also means they are desiccated, not big, juicy hunks of meat—though unquestionably some red meat is still preserved in certain specimens.
Only a couple mammoths have been found in anywhere near an intact state. Most were already scavenged, preyed upon and/or decayed to some degree before their freezing in the permafrost, leaving little soft tissue behind. And when they become exposed, typically through erosion, they start to rot fast. Thawed soft tissue is quickly consumed by modern scavengers.
The organic muck combined with the rotting of whatever flesh remains produces what is reportedly an unbelievable stench.
So, to recap, finding a mammoth is not walking into nature’s meat locker. It’s exhuming an icy grave.
Frozen mammoths are found in Siberia. It is not out of the realm of imagination that someone in such a harsh climate would consider eating some mammoth flesh. However, there is no reputable report of it.
It should be remembered that in early times, mammoths were found only when they washed out into the open, and could be reached by scientific investigators only after months or even years, and even then with no refrigeration equipment to preserve the find.
There’s a stray 1800s report from a Russian scientist who claimed that one tribe, the Yakuts, occasionally ate mammoth meat, but his was not an eyewitness account. He did note, like many other reputable reports, that the Yakuts’ dogs sometimes ate the meat, which is a more likely explanation for any missing flesh. Especially because the Yakuts, like all other natives of the region, have a strong taboo about unearthing mammoth fossils. (Theoretically, this could be rooted in bad dining experiences of the past, but there’s no evidence of that.)
Going much farther back, about 2,000 years, there’s a Chinese tale that refers to underground beasts in the north whose flesh can be eaten as a kind of jerky. This has been presented as referring to mammoths, and may well do so. However, it is not first-hand information and is part of a highly fanciful account of foreign regions, replete with diamond-bladed swords and the like.
The crux of the whole issue appears to be the 1903 scientific examination of the Berezovka, Siberia, mammoth, the find that put “frozen mammoth” into the popular consciousness.
Otto Herz led the expedition to the site, which back then was a months-long journey across barren tundra and over mountain ranges. Once they finally located the mammoth, the Siberian winter was already coming in and the men were subsisting on horsemeat—a context that is probably significant in what Herz later reported.
To wit: He said some of the mammoth’s meat remained red and marbled with fat and appeared “as fresh as well-frozen beef or horse meat. It looked so appetizing that we wondered for some time whether we should not taste it, but no one would venture to take it into his mouth, and horseflesh was given the preference.”
The party’s dogs did eat some of the meat, he added. Note that while it looked pretty yummy, it wasn’t yummy enough. Herz also reported that the find stunk to holy heck, so that probably dissuaded even the least picky eaters in the group.
It’s worth noting the Berezovka find was one of the most complete ever, with an unusual amount of soft tissue intact.
I can find no record since then, in all the ensuing mammoth finds, of anybody popping the meat in their mouths. Typically, it’s the local dogs who benefit.
That brings us to the sole reputable report of modern eating of ancient meat. In “Frozen Fauna of the Mammoth Steppe,” R. Dale Guthrie, professor emeritus at the University of Alaska, describes his inspection of the 1979 find of “Blue Babe,” a 36,000-year-old frozen bison. The carcass had already been largely eaten in prehistory, but, Guthrie reports, it still showed some red muscle.
After a thorough examination of the remains, which were kept frozen in a lab, Blue Babe was reconstructed with taxidermy for museum display. That task done, Guthrie set about eating part of the ancient bison along with taxidermist Eirik Granqvist, the late paleontologist Bjorn Kurten and apparently unnamed others.
“A small part of the mummy’s neck was diced and simmered in a pot of stock and vegetables,” Guthrie wrote. “We had Blue Babe for dinner. The meat was well aged but still a little tough, and it gave the stew a strong Pleistocene aroma, but nobody there would have dared miss it.” Kurten later wrote that the bison stew was “agreeable.”
Guthrie was an experienced hunter who had a very well-preserved corpse on his hands—he had washed it out of the sediments himself and kept it frozen from the first moment until examination in a controlled lab. That, along with the aura of the preexisting mammoth-eating legends, probably influenced his decision to munch.
Pine, Fir, Spruce
Stupid Question ™
Feb. 7, 2005
By John Ruch
© 2005
Q: What’s the difference between a pine, a fir and a spruce? Are they the same thing?
—Mike, Columbus, Ohio
A: This is one of those sad situations that arise when modern scientific classification attempts to embrace the common names that came before it.
Only such rational irrationalism could produce the modern distinction between “pines” and “true pines,” or insist that the Douglas fir is an imposter that isn’t a true fir at all (botanists exile it to its own genus, Pseudotsuga, or “false hemlock”).
Let’s go back to a simpler time, when people just wanted to know, “Does that tree have pointy, evergreen leaves or not?”
The Old Teutonic answer, with typical brevity, was “fir.” (Well, “fir” is the Old English version of it, but you get the idea.) This meant pretty much anything pine-y.
Farther south, the Romans had need of similar words, and one of them was “pinus.” Those Old English speakers who were fancy-pants enough to be using Latin turned that into “pine.”
Over time, these dueling terms sorted out into specific meanings, often based on the quality of wood produced (pine wood generally being better for construction than fir). As a northern name, “fir” also became pegged to the rugged, northern kinds of conifers.
Naturally, people noticed various species of these trees, resulting in “white pine” and other specific terms.
“Spruce” began in the 1300s as one of these more specific terms: it’s short for “Spruce fir.” (The abbreviation was in use by the 1600s.)
“Spruce” was an alternative form of “Pruce,” which was a contemporary term for the region of modern Germany, Poland and Russia then known as Prussia. (Later a state, Prussia officially went away as part of the deal ending World War II.)
Among the products Prussia (a.k.a. Spruce) was known for at the time was quality fir wood. Naturally, it became known as Spruce fir. Possibly it referred to various types of fir wood collectively. The abbreviation to Spruce is fairly contemporary with the introduction of “spruce” (as in “spruce up”) in English, which is likely also a reference to Prussia (some speculate a basis in sharp-looking clothing made of then-famous Prussian leather).
Shortly thereafter came modern scientific classification, which did not improve on things very much.
All of the aforementioned trees are conifers (their seeds come in woody cones) with needle-like leaves and a resinous wood. Wisely, science lumps them all under the order of conifers. So far, so good!
However, all of these trees (along with larches, hemlocks and cedars) are put under the family of Pinaceae, or “pine-like,” trees. Pine is the natural preference since scientific classification is written in Latin.
Within the family is the genus Pinus—the “true pines.” Pretty much all this means is a tree with clustered needles that have a sort of cuticle or “sheath” at the base. Several trees that look like all the other pines are not considered “true pines.”
That includes the modern separation of the firs into the genus Abies. Basically it means any kind of northern, high-altitude conifer; the “scientific” distinction is noting the needles are flat, do not come in bunches and grow straight out of the branch without a stem.
Meanwhile, “spruce”—simply a type of fir—gets its own genus, Picea. The main scientific difference between a spruce and a fir is that spruces have four-sided needles, roughly square in cross-section. The scientific name for the famed Norway spruce admits as much: it’s Picea abies, or “spruce fir,” the original common term.
Granted, these genera do make reasonably worthwhile distinctions between trees that have evolved in somewhat different environments. There’s some biological credibility behind them.
But their distinctions are too sharp and literal to extrapolate from broadly defined common words. And by retaining the attachments to those common names, the scientific terms have branched out into confusion.
Arborists, landscapers and nature guides invariably refer to Picea trees as “spruces” and can go into the intricacies of identifying them. But a spruce is simply a fir from northeastern Europe, etymologically speaking.
Likewise, a fir is just a cold-weather pine, and they’re all pines in the end. Or all firs, if you’re feeling Teutonic. Scientific classification won’t tell you anything more than that—it’ll just make you count needles to do it.
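If you do feel like counting needles, the column’s three distinctions boil down to a key short enough to sketch in a few lines of Python. This is only a toy based on the traits mentioned above; real field guides also lean on cones, bark and other characters.

    def rough_genus(bundled, flat, four_sided):
        # A very rough key using only the needle traits the column mentions.
        if bundled:
            return "Pinus (true pine): clustered needles with a sheath at the base"
        if flat:
            return "Abies (fir): flat, single needles straight off the branch"
        if four_sided:
            return "Picea (spruce): four-sided needles"
        return "some other conifer (larch, hemlock, cedar...)"

    print(rough_genus(bundled=False, flat=False, four_sided=True))
    # prints: Picea (spruce): four-sided needles

Etymologically, of course, all three answers still come out “fir.”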
Tokyo Rose
Stupid Question ™
Jan. 31, 2005
By John Ruch
© 2005
Q: Who was Tokyo Rose?
—S. Andersen, from the Internet
A: “Tokyo Rose” did not exist; she was a mythical figure invented by World War II Pacific G.I.s to personify a wide range of actual female Japanese propaganda DJs.
Unfortunately, her mythical status did not prevent many G.I.s from convincing themselves of her reality. That in turn led to an apparently innocent woman, Iva (born Ikuko) Toguri (later Toguri d’Aquino), being convicted of treason for being “the” Tokyo Rose. She was pardoned by President Gerald Ford in 1977.
Toguri d’Aquino’s case is a horrifying blend of accident, misunderstanding and patriotic witchhunt. In short, a typical American story.
Toguri d’Aquino, of Japanese-American ethnicity, was born with rich irony on the Fourth of July, 1916, in Los Angeles—a city that would later formally ban her from its domain. She was a registered Republican.
Her bad luck began when she was sent as an emissary to a sick aunt back in Japan in July, 1941. When Japan and the U.S. declared war against each other in December, she was stuck there and branded an “enemy alien.”
She refused either to accept Japanese citizenship or to renounce her status as an American. This made life hard, as one can imagine. Meanwhile, her parents back home were tossed into the racist-paranoia concentration camps the government set up; her mother died there.
As the war dragged on, her money and enormous store of American food (which she generally preferred to traditional Japanese fare) began running out. She was forced to find a job.
A part-time job at the Danish consulate in Tokyo and a typing job at Radio Tokyo put her in contact with Allied prisoners of war who had been coerced into writing English-language propaganda material for government radio. She began sneaking them food and news.
In late 1943, the POWs got her a job on their show “Zero Hour,” a mix of news and music. According to Australian Army Maj. Charles Cousens, who ran and scripted the show, its intent was to be subversive, a kind of knowing wink to Allied soldiers.
In that vein, he hired Toguri d’Aquino as an announcer—because she had a horrible radio voice.
Be that as it may, all she did was introduce jazz and classical pieces, doing nothing worse than referring to her Allied audience as “boneheads.” Her on-air handle was “Ann”—short for “announcer”—which was later expanded to “Orphan Ann,” “Orphan Annie,” “Enemy Ann,” “Your Playmate Ann” and so on.
She was one of nearly 30 known women who did English-language Japanese radio propaganda broadcasts. The shows were largely a failure in terms of demoralization, as evidenced by the large numbers of G.I.s who tuned in.
It was soldiers who began referring to this gaggle of female voices collectively as “Tokyo Rose.” Military intelligence during the war established there never was a radio personality who used that name or even mentioned it on the air.
Despite that, many soldiers later convinced themselves they had heard a self-identified Tokyo Rose, and further embellished the accounts with her supposed accurate revelations of forthcoming bombing raids and the like (also completely unsubstantiated by surviving recordings and scripts). There were similar reports of a supposed “Madame Tojo.”
The idea was probably supported by the actual existence of Mildred “Axis Sally” Gillars, the Nazi radio propagandist, who was later convicted of treason virtually simultaneously with Toguri d’Aquino’s trial.
In post-war Japan, U.S. journalists began hunting for “the” Tokyo Rose and eventually found Toguri d’Aquino, who was convinced to claim the title by offers of payment for her story and assurances that G.I.s loved her. She began signing stuff “Tokyo Rose.”
Next thing she knew, she was under arrest for treason and rotted in prison for months without counsel or trial. Eventually, the military authorities found no evidence against her, declared “Tokyo Rose” a fiction once again, and released her.
Unfortunately, she never had a passport, and found herself declared a “stateless person” back in the good ol’ USA. Her attempts to acquire a passport raised her profile and renewed the hunt for the witch known as Tokyo Rose.
The American Legion began a prominent campaign against her, as did the notorious columnist Walter Winchell, who was intimate with FBI Director J. Edgar Hoover. Soon the FBI and the US Attorney General’s Office produced an obviously contrived case that eventually put Toguri d’Aquino in prison for more than eight years.
The sole crime she was convicted of was saying something like the following (no actual transcript or recording was ever produced): “Now, you fellows have lost all your ships. You really are orphans of the Pacific [she frequently referred to soldiers as “orphans” like herself, “Orphan Ann”], and how do you think that you will ever get home?” As Russell Warren Howe noted in his book “The Hunt for ‘Tokyo Rose,’” this was supposedly in reference to a naval battle the Allies won.
The American, Filipino and Australian POWs who actually ran and scripted “Zero Hour” were never charged with a crime.
Tsunamis
Stupid Question ™
Jan. 24, 2005
By John Ruch
© 2005
Q: Was the Dec. 26, 2004 tsunami the biggest and deadliest ever?
—William R., from the Internet
A: Tsunamis, you mean. Like virtually all of their brethren, the Indian Ocean tsunamis of Dec. 26 came as a series in which the first is not usually the worst.
The Dec. 26 tsunamis, which killed an estimated 230,000 people, were indeed the deadliest ever, at least in terms of clear record-keeping. But they were in the average height range of 10-30 feet—nothing compared to the mind-blowing, 1,720-foot freak wave that some claim as the tsunami king.
In terms of death toll, tsunamis killed about 70,000 people worldwide in the century between 1880 and 1980. The Dec. 26 tsunamis killed more than three times that number in a single day.
Runners-up are muddled by being more closely associated with direct earthquake damage. More than 200,000 people died in India’s Bengal area in 1876 from an earthquake and tsunami, but it’s unclear how many died from each. No other tsunami-related disaster comes close to the 200,000 mark.
Tsunami height is a tricky business that varies widely with locality. Tsunamis begin as long-period, high-velocity submarine waves that are barely perceptible in deep water. It is only on sloping shorelines and sharp inlets that they slow down enough to get squashed together and built up to devastating heights.
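For the curious, the slowdown falls out of the standard shallow-water wave relation, in which a wave’s speed depends only on water depth (speed equals the square root of gravity times depth). Here is a minimal back-of-the-envelope sketch; the depths are round, illustrative figures rather than measurements from any particular event.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_mph(depth_m: float) -> float:
    """Shallow-water wave speed, c = sqrt(g * depth), converted from m/s to mph."""
    return math.sqrt(G * depth_m) * 2.23694

# Round, illustrative depths only -- not measurements from any particular event.
for label, depth_m in [("open ocean, ~4,000 m deep", 4000),
                       ("continental shelf, ~100 m", 100),
                       ("near shore, ~10 m", 10)]:
    print(f"{label}: roughly {tsunami_speed_mph(depth_m):.0f} mph")
```

As the front of the wave slows from jetliner speeds to highway speeds, the water behind piles into the water ahead, which is the squashing-together just described.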
A 1960 tsunami that smashed Hawaiian harbor towns arrived at one steep-sided island as nothing more than a six-inch rise in the tide. (And on Hawaii itself, a devastating 1946 tsunami ranged from 2 to 55 feet high, depending on the locale.)
So there’s no such thing as an absolute height for a tsunami. It all depends on what gets hit. The average range for past destructive tsunamis is 10 to 30 feet—which is plenty scary, as you know if you’ve seen any of the Dec. 26 amateur video footage.
Also, many tsunamis don’t hit as the big, breaking wave familiar from disaster movies. They’re more of a flood surge, a big swell that pushes in and just keeps on coming. (The undertow as it pulls back out to sea can be even more deadly than the incoming wave.)
Even when they do come as surf (as the Dec. 26 ones frequently did), it’s hard to get a height measurement because people either run away or die if they stick around to get a look.
So what many tsunami reports discuss is not wave height, but wave “run-up.” The terms are often confused and are related but not identical. Run-up is the elevation on land above mean sea level the tsunami floodwaters attain once they reach shore.
Run-up can be higher than the wave if the wave breaks violently and sprays upwards. And a wave can be higher than its run-up; for example, a wave could break at 30 feet high, but then result in a 10-foot-deep push of water into the shore.
Furthermore, run-up is generally calculated after all the waves have come and gone, so it’s not a good measure of any one particular tsunami in the series.
Run-up confusion clouds reports of the most astonishing tsunamis—ones so big they’re known as “mega-tsunamis.”
The contender for highest tsunami of all time was an almost unimaginable 1,720-foot-high splash in an Alaskan bay, apparently caused by a huge earthquake-driven rockslide. Its height can be measured with precision because it denuded a forested bluff to that height; eyewitnesses also saw the water spraying up from behind a mountaintop.
However, this freak phenomenon, which happened on July 9, 1958 in Lituya Bay, was not necessarily a tsunami, and not necessarily that high if it was.
It might be better classed as simply the biggest splash ever witnessed, rather than a wave. Even if it was a wave, 1,720 feet was its run-up, not necessarily its height. A wave of 500 feet high or even less could still splash up that high once it struck the bluff. Granted, that would still make it the biggest tsunami ever known.
And in any case, the splash was certainly followed by a tsunami, possibly the largest ever witnessed, ranging anywhere from 100 to 500 feet high at its peak, according to witnesses—four of whom, incredibly, rode it out in boats. This follow-on wave swept the length of the bay and out to sea, stripping four square miles of shoreline as far as 3,600 feet inland.
Based on scanty historical records and geological evidence, it appears Lituya Bay has a past of similar freak tsunamis that could vie for the height record. Records indicate run-ups of 150 feet in 1936; 180 feet in 1899; and 350 feet in 1854, among other, lesser events. All were big splashes related to rock or ice falls.
There are also speculative claims of even bigger mega-tsunamis produced by the toppling of entire mountains or asteroid impacts. The idea of the entire Eastern seaboard of the U.S. being inundated all the way to the Appalachian Mountains is credulity-stretching. But the claim has been put forth based on rather meager geological evidence.
In the realm of classical tsunamis caused by undersea earthquakes and rockslides, the biggest claim is a wave about 250 feet high that supposedly washed over Ishigaki in Japan’s Okinawa island chain on April 24, 1771. The claim relies on geological evidence and an eyewitness account, but again, seems more about run-up than wave height. It’s still widely debated.
The Aug. 27, 1883 volcanic explosion of Krakatoa produced tsunamis reportedly in excess of 100 feet high, possibly up to 130 feet, in some parts of Sumatra and Java in Indonesia, killing more than 36,000 people. Again, these estimates are largely based on run-up.
An April 1, 1946 Alaskan tsunami destroyed a lighthouse in the Aleutian Islands built 50 feet above sea level, and had run-up onto a promontory 118 feet above sea level. By some estimates, the wave was at least 100 feet high; no one survived in the lighthouse to say.
Japan’s major island of Honshu has a lengthy history of tsunamis which credibly supports repeated strikes by waves of 80 feet in some inlets.
"I Hate White Rabbits"
Stupid Question ™
Jan. 16, 2005
By John Ruch
© 2005
Q: What is the origin of saying, “I hate white rabbits” to ward off offending campfire smoke? I remember hearing this as a kid at summer camp.
—Matthew S. Schweitzer, Columbus, Ohio
A: You know, I’ve done a lot of goofy things in the name of “Stupid Question,” but this may top them all. “I hate white rabbits” is reputedly effective against tobacco smoke as well. So today I sat in a public park, lit up a big cigar, and held it up so the smoke blew in my face as I repeated the magic phrase.
It “worked”—the first time. But not on retries. And never when I positioned the cigar firmly between me and the prevailing wind. Meanwhile, by observing the cigar against a black backdrop, I could see the smoke changed direction regularly even without my magical influence. Wind changes frequently at ground level, and the heat of combustion generates its own currents as well.
So, unsurprisingly, this appears to be just a silly thing to say as a way of passing the time until the smoke shifts again. That’s right, smoke doesn’t understand English.
But then, do we? Why this particular phrase?
Research on this one is almost nonexistent. All I could find was a tiny sample of Internet reports too small to draw any significant conclusions. Most even lacked a clear geographical base; the identifiable ones were from the Midwest and Canada.
However, there is general agreement that the phrase is “I hate white rabbits” (or just “white rabbit” in two cases) and that it makes smoke blow away from the speaker. A couple of reports mentioned learning it as a child. All reports seem to involve people no more than about 40 years old, though the Internet self-samples for such folks.
I did find one brief report that seems to suggest “white rabbit” is a phrase said when the campfire blows away from the speaker—marking the change rather than causing it. However, the report may just be poorly written.
Perhaps the phrase is deliberate nonsense, just the sort of thing to keep kids entertained around a campfire. Maybe there’s a dash of functionalism, the idea that just saying something will produce enough exhalation to push the smoke away.
I’ll note for the record that Jefferson Airplane’s 1967 hit song “White Rabbit,” an ode to drug use, includes mention of a “hookah-smoking caterpillar.” I can almost imagine anti-drug camp counselors of the 1970s teaching kids to say, “I hate white rabbits!” Almost.
Of course, rabbits figure in a variety of customs and bits of folklore, usually attributed to fertility symbolism (with “white” typically symbolizing magical beneficence). Think of the lucky rabbit’s foot. Which brings us to our most intriguing possibility.
There’s a British custom going back at least to 1920 of repeating either “white rabbit” or just “rabbit,” typically three times, on the first day of the month, or at the very least on New Year’s Day.
Remember that British-influenced Canada is one reported location for our campfire phrase, which suggests a possible connection between the two.
In some reports both new and old, the British phrase is part of a game with friends or family to see who remembers to say it first. Similar games of social tag, with equally nonsensical names and phrases (“Buffalo” is one I just read about) were common fads around 1900.
Nobody knows what the British charm-phrase means, either. But it is not hard to imagine it being adapted for around-the-campfire use, especially in the form of a game in which it is said when the smoke changes direction, or something along those lines. Its transformation into an anti-smoke charm would not be surprising and would explain the addition of the negative “I hate…” prefix.
But until someone conducts wide-ranging field studies on this phrase, we’re really just blowing smoke.
Asphalt Vs. Concrete
Stupid Question ™
Jan. 10, 2005
By John Ruch
© 2005
Q: Why are streets paved with asphalt, while sidewalks tend to be concrete? Why not the same material for both?
—N.R., from the Internet
A: First, a bit of terminology. Asphalt roads are actually a form of “concrete,” that being a generic term for any material that binds crushed or powdered stone together with a cementing agent.
That black road-surfacing stuff is asphalt concrete, which uses the hydrocarbon bitumen (asphalt itself) as its cement. That gray sidewalk stuff is Portland cement concrete, made with a limestone-based cement named for a naturally occurring form of limestone.
As Prof. Stefan Romanoschi of Kansas State University’s department of civil engineering pointed out to me, you can find streets made of cement and sidewalks made of asphalt, so there is certainly a degree of interchangeability to the substances.
The choice, he said, comes down to cost, durability and configuration.
While cement is more vulnerable to anti-icing chemicals, it is more durable than asphalt under heavy traffic loads. But asphalt is cheaper to the tune of tens of thousands of dollars per mile, according to the asphalt industry. It can also be repaired less expensively with simple patches.
Therefore, Romanoschi said, asphalt is very popular for streets with relatively low traffic (especially truck traffic) volumes and speeds. It holds up well enough and repairs do not have to be extensive due to low-speed traffic, he said.
But cement can be better for highways and industrial areas, Romanoschi said, noting, “They cost more initially but you do not need to spend money on maintenance.” That also means fewer construction-area slowdowns.
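To see how that trade-off plays out, here is a minimal sketch using made-up, purely illustrative per-mile figures (they are not Romanoschi’s numbers or the industry’s). The point is only that a surface that is cheaper to build can end up costlier over a busy road’s life once maintenance is counted, and vice versa.

```python
def lifecycle_cost(initial: float, annual_maintenance: float, years: int) -> float:
    """Total per-mile cost over a pavement's service life (no discounting, kept simple)."""
    return initial + annual_maintenance * years

# Hypothetical, illustrative numbers only -- not actual industry or DOT figures.
asphalt_total = lifecycle_cost(initial=500_000, annual_maintenance=20_000, years=30)
concrete_total = lifecycle_cost(initial=700_000, annual_maintenance=5_000, years=30)

print(f"Asphalt:  ${asphalt_total:,.0f} per mile over 30 years")
print(f"Concrete: ${concrete_total:,.0f} per mile over 30 years")
```

With these invented numbers, asphalt wins on the price tag but loses over 30 years of patching; flip the maintenance assumptions for a quiet residential street and the ranking flips too.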
This difference plays out in the asphalt industry’s own numbers, which report that 94 percent of the U.S.’s total paved streets are asphalt—but within that figure, only 65 percent of interstate highways are asphalt. The rest are cement.
As for sidewalks, Romanoschi said it comes down to the way the two materials are set down. Asphalt has to be paved and compacted with a mechanical roller that by necessity is of a fixed width and shape. Cement, on the other hand, can be poured into any form and scraped level, and hardens on its own.
He noted that many sidewalks have “irregular shapes and widths”—consider street trees, sign poles, manholes and so on—and hence are easier to create with cement.
While I asked for a technical answer to this question and got one, I suspect aesthetics and habit probably play a role as well, especially on the sidewalk end of things. Cement may be considered to have a cleaner look, and it certainly doesn’t smell on a hot day.
Planets Outside The Solar System
Stupid Question ™
Sept. 13, 2004
By John Ruch
© 2004
Q: How many planets are there outside of our solar system?
—anonymous, from the Internet
A: This question came in before the Sept. 10 announcement of a possible composite photograph of an “extrasolar” planet, so—one more than before, I guess.
The current count is anywhere from 124 to 140, depending on who’s counting, how and with what degree of skepticism.
They are all immensely huge. The smallest (and most recently discovered) would be about the size of Neptune, which has roughly 60 times the volume of Earth. Most are much larger than Jupiter, our solar system’s largest planet, which has about 1,300 times Earth’s volume. Like both Neptune and Jupiter, the “known” extrasolar planets are likely big balls of gas.
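Assuming “size” here means volume, which the figures match, the ratios fall straight out of the planets’ radii, since volume scales with the cube of the radius. A quick sketch with commonly published approximate radii:

```python
# Volume scales with the cube of the radius, so the ratio is (r_planet / r_earth)^3.
# Mean radii in kilometers (commonly published approximate values).
EARTH_R, NEPTUNE_R, JUPITER_R = 6_371, 24_622, 69_911

def earth_volumes(radius_km: float) -> float:
    return (radius_km / EARTH_R) ** 3

print(f"Neptune: about {earth_volumes(NEPTUNE_R):.0f} Earth volumes")   # ~58
print(f"Jupiter: about {earth_volumes(JUPITER_R):,.0f} Earth volumes")  # ~1,300
```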
You wouldn’t know it from the very definitive language in astronomy textbooks and NASA and university web sites, but there is certainly room to argue the reality of any of these observations—at least, if they’re to be called “planets.”
Leaving aside the question of the possible photograph (and another similar image announced in May), none of the extrasolar planet “discoveries” involves direct observation.
At best, this is an art of interpreting positional “wobbles” and (most frequently and credibly) velocity shifts in the movements of stars (confusingly, the velocity shifts are sometimes also called “wobbles”), from which it is inferred that the gravitational pull of an orbiting planet is the cause. The size of the planet can be guessed from the movement.
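For a sense of how tiny these velocity shifts are, here is a simplified sketch of the standard radial-velocity relation for a circular orbit, valid when the planet is far less massive than its star. The Jupiter-and-Sun pairing is just the textbook example, not any particular claimed discovery.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUPITER = 1.898e27   # kg
YEAR = 3.156e7         # seconds

def rv_semi_amplitude(m_planet: float, m_star: float, period_s: float,
                      sin_i: float = 1.0) -> float:
    """Peak line-of-sight wobble speed of the star, in m/s, for a circular orbit
    with m_planet << m_star: K = (2*pi*G/P)^(1/3) * m_planet * sin(i) / m_star^(2/3)."""
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet * sin_i / m_star ** (2 / 3)

# A Jupiter-like planet in a Jupiter-like (11.9-year) orbit around a Sun-like star
# tugs the star back and forth at only about 12-13 meters per second.
print(f"{rv_semi_amplitude(M_JUPITER, M_SUN, 11.9 * YEAR):.1f} m/s")
```

That is roughly bicycle speed, measured in the spectrum of an object trillions of miles away, which is both why the method is impressive and why a skeptic might squint at it.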
Another mode is presuming that a regular change in the beat of a pulsar—a star that emits bursts of energy rather than just glowing steadily—is due to the gravitational suction of a nearby planet. And still another is that the slight dimming of a star is due to a planet coming between it and us (a method frequently used to “confirm” other forms of detection).
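The dimming method is even easier to put a rough number on: the fraction of starlight blocked is about the ratio of the planet’s disk to the star’s, so the dip scales with the square of the radius ratio. A quick sketch with round published radii:

```python
# Fraction of starlight blocked during a transit: depth ~ (r_planet / r_star)^2.
SUN_R, JUPITER_R, EARTH_R = 695_700, 69_911, 6_371  # radii in km, approximate

def transit_depth(r_planet_km: float, r_star_km: float = SUN_R) -> float:
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-sized planet: about {transit_depth(JUPITER_R):.1%} dip")  # ~1%
print(f"Earth-sized planet:   about {transit_depth(EARTH_R):.4%} dip")    # ~0.008%
```

Which helps explain why everything found so far is immensely huge: a Jupiter blots out about one percent of a Sun-like star’s light, while an Earth barely registers.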
The latest method is optical interferometry, which in theory will allow planets to be literally, actually seen (via infrared telescopy, usually) by using multiple images taken from various angles that cancel out the otherwise overwhelming glare of the accompanying star. If planets are out there, anyhow. It can also be paired with spectroscopy, which can determine an object’s elemental makeup from the light it emits. (This led to the claimed discovery of a sodium-containing “atmosphere” on one “planet” in 2001.)
The Sept. 10 image of a possible planet around the star identified as 2M1207 was produced by that method by the European Southern Observatory.
The velocity-shift detection method kicked off the whole discovery boom in the mid-1990s, and since then an extrasolar planet has been discovered essentially every month.
It’s an exciting field of observation, but there are philosophical and practical reasons for skepticism.
We’re talking about very minute, indirect observations driven by a romantic quest to, as NASA’s web site baldly puts it, be able to say we “have found another Earth.” There is a strong tea-leaf aspect to this.
And what is often left out of discussions of this field is its history of wishful-thinking errors and suspicious data. For example, as NASA again puts it, “many” of the observed planets “are bizarre,” displaying highly eccentric orbits and a proximity to their stars that doesn’t square with accepted planetary-creation theory. Why such a high degree of “bizarre” behavior? How about observer error or wishful thinking? The individual explanations become very involved.
Even more curiously, it has been determined that our Sun just so happens to have no detectable wobble from all the planets around it—this being blamed on sunspots masking the electromagnetic detection of such a shimmy. Hmm.
It’s also worth noting there is no agreement on what a “planet” is. You and I think of a ball orbiting a star; but some observing astronomers accept a definition that includes a body that is not in (stellar) orbit. They may be “failed stars,” if you will.
The earliest claimed extrasolar planet detection, in the late 1960s, was almost certainly bunk. It involved a supposed wobble of a star. Just about nobody else could detect any wobble. Instrument error and wishful thinking were likely culprits. The wobble method is not favored today.
Extrasolar excitement renewed in 1991 with a pulsar observation that was later acknowledged to be a calculation error. This did not stop a 1992 observation that claimed to find three planets around another pulsar. The observation remains controversial. (It’s also worth noting that pulsars are infamous for bizarre changes and fluctuations, and nobody has satisfactorily explained their nature in the first place.)
Wisely moving to yet another detection method, astronomers really kicked things into gear in 1995, with a flood of velocity-shift observations. Many of these are considered quite convincing and will prove to be either a watershed or a scientific fad. Things have not let up since.
I am being hyper-skeptical here because somebody has to be. Astronomers certainly put individual observations to the fire in their journals, but the overall idea of the reality of these observations is rarely questioned openly and certainly not for public consumption; quite the opposite, in fact, as misleadingly positive statements are made by highly respected sources.
The good news is that interferometry has the potential to provide direct images of some types of extrasolar planets. That could really settle some matters.
On the other hand, it may only raise more questions. Take the Sept. 10 picture, for example. The “planet” is pretty convincing as an image, but since that image is two-dimensional, there’s no way to tell the two objects’ actual relationship to each other, let alone if one is orbiting the other. The “planet” could be light-years closer or farther away from us than the star is.
Bringing most or all of the observation methods together may paint a convincing “yea” or “nay” picture of extrasolar planets. Or maybe it will take interstellar space flight to determine the question for sure.
Trojan Horse
Stupid Question ™
Aug. 30, 2004
By John Ruch
© 2004
Q: Why was the Trojan horse a horse?
—S.W. and K.P., Columbus, Ohio
A: Would you believe that nobody seems to have asked this question—at least since the Trojans supposedly did 3,000 years ago?
The “Trojan horse” of Greek myth was a large, hollow horse statue built by Greek troops engaged in the siege of Troy, a city they wanted to loot. After 10 years of siege, things weren’t going so well. Hence the idea to build this outlandish horse, hide Special Forces-style troops inside, and wait for the stupid Trojans to drag the thing into town. Which they did, convinced partly by a faked retreat by the rest of the Greek troops. The Greeks hopped out, opened the gates, and the whole army returned and swept in, sacking the city.
The story is simple to tell, but we have to watch our sources carefully. While Troy did exist, there is no certainty that even a grain of truth underlies the Trojan war myths. The surviving tales date to at least 500 years later.
The earliest surviving Trojan war story—Homer’s “The Iliad”—makes no mention of the horse. It focuses instead on a brief but key period of the war featuring the superhero Achilles.
The horse shows up in two other ancient works. One is Homer’s “The Odyssey,” in which it comes up in old war stories told during Odysseus’ current adventure.
The other is Virgil’s “The Aeneid,” which is an ancient Roman poem imitative of Greek epics. “The Aeneid” provides most of the details found in our understanding of the Trojan horse tale—and the story isn’t even Greek. Therefore, most of the narrative elements that would lead to an educated guess about the horse symbolism are suspect (or, at best, accurate only for the Roman understanding of the symbolism).
The horse is mentioned only briefly in “The Odyssey.” The epic gives no explanation for the design, but makes clear it was weird enough that the Trojans spent days debating what to do with it. Specifically, they thought it might be a trick—or a divine offering.
“The Odyssey” also credits the goddess Athena with coming up with the idea and putting it in the mind of the hero Odysseus—who was among those who wound up hiding inside it. Athena is also described as having intervened so that the kidnapped Greek Helen didn’t inadvertently give away the horse’s secret.
“The Aeneid” is much more interested in the story and gives it rich embellishment. In this version, the Greeks also leave behind Sinon, a con man who pretends to be a traitor so he can convince the Trojans to take the horse into the city.
Like us, the Trojans wanted to know what the heck this horse thing was. Sinon’s pitch was that it was a divine offering to Athena.
He points out that the Greeks Odysseus and Diomedes had sacked Athena’s local temple and stolen the Palladium—a wooden statue of Athena. This, Sinon says, made Athena so mad that she not only wouldn’t let the Greeks win the war, she wouldn’t even let them go home safely.
Hence, he continues, the Greeks built this enormous wooden horse statue to appease her. Why in horse form is not clear, but the wood obviously correlates with the composition of the Palladium. Hoping the statue will do its job, the Greeks have headed home, he says.
Sinon also juices his tale with a further tweak. Another reason the statue is so big, he says, is so the Trojans can’t fit it back inside their city or temple and thus benefit from its divine powers themselves. A taunt no Trojan could avoid, apparently.
If we go back to the scant details of the original Greek myth, a couple of simple possibilities present themselves. Horses were frequently used in war (and mentioned in such terms throughout “The Odyssey” and “The Iliad”) and could be seen as symbolic of it. Thus, the Trojan horse could have been a double symbol of surrender—and invasion.
More specifically, Achilles’ supernatural horses Xanthus and Balius figure fairly prominently in the narrative of “The Iliad.” Perhaps the later “Odyssey” elaborated this idea of super-warhorses.
Here’s what I think is the best answer: Horses were also sometimes used as sacrificial animals. This might explain the idea of the Trojans thinking it to be a divine offering.
Poseidon, creator of the horse and lord of the rivers and seas, was a frequent target of horse sacrifices. He was also, in the myth, on the Greeks’ side, and also the god who would need appeasement if the Greeks were to retreat over the seas. So perhaps the Trojans assumed it was an offering to him.
On the other hand, the references to Athena’s involvement can’t be overlooked. She was a goddess of extremely varied aspects, including war (perhaps suggesting the horse shape). She was also credited with teaching humanity to ride horses.
Of course, the Trojans could have thought the statue was some type of offering just because it was so big and weird. It could mean nothing except a striking image and a celebration of sheer trickery. Remember, it first appears in “The Odyssey,” a tale focused on the trickster Odysseus. Athena, a lover of cunning, was his patron.
In that vein, the main reason almost nobody ever asks why it was a horse is because the whole point is, we know it’s a trick. The delight of the story lies in our omniscience. The details recounted in “The Aeneid” are told by a Trojan who naturally had a lot of questions, which only reinforces that we know all the answers.
There is a final idea recently circulated by Eric Cline, an archaeology professor at George Washington University. He suggests the “horse” may be a metaphor for an earthquake that actually destroyed Troy at an opportune moment. He notes that Poseidon was credited with earthquakes and also the horse, and suggests a poetic substitution.
This is a lame idea without basis in either archaeology or the texts, especially inasmuch as it ignores Athena being credited with the horse.
Bats And Flying Foxes
Stupid Question ™
Aug. 23, 2004
By John Ruch
© 2004
Q: What’s the difference between a bat and a flying fox?
—Bill D., Columbus, Ohio
A: It’s not so much what the difference is as how much of a difference there is. Because you have unwittingly stumbled into a 30-year scientific war about whether the two suborders of bats share a common ancestor or evolved separately. Some scientists claim the flying foxes are more closely related to us than to fellow bats.
As usual for one of our forays into the nightmarish world of scientific classification, I’m going to have to start by correcting your terms. All flying mammals are bats. The flying fox is a type of bat that looks pretty much like the name suggests—a miniature fox with bat wings.
More specifically, “flying fox” is the common name for the genus Pteropus and its closest relatives, part of the family Pteropodidae, the fox-like South Asian/Australian bats widely known as fruit bats. (Most of them eat fruit and/or flower nectar.)
In scientific terminology, all bats fall under the order Chiroptera (that’s literally “hand-wing”). But the fruit bats and the “regular” bats we see in North America and the rest of the world are so distinct from each other that no bat is simply called a Chiropteran.
Instead, science immediately breaks Chiroptera down into two suborders. There’s Megachiroptera (or “megabats”), which covers the fruit bats (“mega” because they include the largest bats, with wingspans tipping 6 feet). And there’s Microchiroptera (or “microbats”), which includes the classic bats with which Americans, Europeans and Africans are familiar.
All this scientific gibberish aside, I think the German language makes the obvious distinction much more beautifully. To them, a microbat is a “Fledermaus”—a “flying mouse.” And a megabat is a “Flughund”—a “flying dog.” This really hits the broad anatomical differences on the head.
Of course, neither term is biologically accurate. Microbats are more like flying shrews; they are mostly insectivorous, like shrews, and there are arguments for a common ancestor. (In this vein, there are also arguments that some shrews echolocate, like bats do.)
And megabats are not related to canines in any direct fashion. But what, exactly, are they related to? That is our burning question.
What unites the Mega- and Microchiropterans is their wings and flight mechanisms (and the fact they’re all mammals). All bats have forelimbs and paws that have morphed into membranous, scallop-edged wings which extend down to include the hind limbs. They all fly in the same way, by alternately flexing back and chest muscles. (This is different from birds, incidentally.) They also have similar hind legs that can swivel the claws forward, essentially reversing their direction.
And that’s about it for similarities. Now, the differences.
Microbats use echolocation; megabats don’t (except for two species, which do so crudely, in an entirely different physiological manner). Microbats have snub snouts and fancy ears often highly evolved for echolocation; megabats have a prominent, toothy, dog-like snout and typical mammalian ears (overall, they look more like an average mammal).
Microbats generally have tails; megabats generally don’t. Microbats mostly eat insects; megabats mostly eat fruit/nectar. Microbats have one mobile, clawed finger on each wing; megabats have two.
In short, they are really, really different, except for their manner and physiology of flight.
In the 1970s, a controversy finally erupted over this obvious difference, with several scientists proposing megabats and microbats had entirely different evolutionary origins. Rival studies flew back and forth, and while the controversy has largely died down in favor of a single-origin theory, studies still pop up occasionally on both sides.
Like most scientific controversies, the matter has been temporarily settled more out of boredom with the whole thing than any sort of case-closing smoking gun turning up.
Also like most such controversies, the debate has been tinged with all sorts of ulterior concerns. The chief one is the complexity a separate-origin theory introduces. The main argument for a single origin remains: “Look, they both have wings that look the same, and they fly the same way!”
If separate origins were accepted, the separate evolution of flight in two different mammalian orders would also have to be accepted, and we know flight is biologically rare indeed. Furthermore, scientists know there are creationists waiting in the wings, so to speak, to seize such a rhetorical opportunity and wax on about how utterly fantastical it is to think that flight would evolve in an identical way twice.
I’m not daunted. It would be far more fantastical for flight to evolve in two totally different ways in the same family of animals; biological morphology has built-in restraints. For that matter, I’m not sure we can accept the implicit presumption that flight within birds, or within insects, each comes from a single origin, either. But then, I’m not a bat scientist.
There are other arguments in favor of a single origin, including studies of mitochondrial DNA indicating a common origin. However, mitochondrial DNA has its own limitations; interestingly, it hasn’t settled the question about whether humanity itself has a common or multiple origins, though it points toward the single-origin theory with us as well.
Another single-origin argument sometimes made is that all known extinct bats in the fossil record are essentially microbats, with the implication being that megabats branched off later. However, the fossil record for bats is so scanty as to render this line of argument virtually meaningless.
As for the separate-origin theory, it has not so much presented evidence of differences from microbats as it has presented evidence of similarities to primates. Specifically, the argument is they are descended from lemurs, or have a common ancestor.
These arguments are almost exclusively anatomically based. It is argued that the megabat nervous system and its associated arterial arrangement are much closer to those of primates than to those of microbats. There’s also a naughty-bits argument that the megabat penis and female breasts are much more like those of primates than of microbats.
Most scientists are more willing to accept separate evolution of similar neurological and sexual apparatus than they are of winged flight.
There is one study that claims megabats have amino-acid characteristics closer to primates than to fellow bats. But that is far less compelling than mitochondrial DNA.
Thus, the single-origin theory stays dominant. And, to finally answer your question, the difference is…well, it’s either an actual difference or just a distinction. It’s an aesthetic matter more than a scientific one; it’s less Megachiroptera and Microchiroptera than it is fledermaus and flughund.
US-Soviet Team-Ups
Stupid Question ™
Aug. 16, 2004
By John Ruch
© 2004
Q: A common Cold War-era plot device in spy thrillers was to have the U.S. or NATO forces team up with their Soviet counterparts to battle a common menace. Did anything like that happen in real life?
—Kim Philby, Chicago, Illinois
A: I’ll never say never, since there could be some mission hidden in still-secret files. But with all due respect to GI Joe’s comic-book partnership with the Oktober Guard and Arnold Schwarzenegger buddying up with Jim Belushi in “Red Heat,” it almost certainly never happened. However, both countries did offer (or demand) such a super-team-up—on only two occasions, and unsuccessfully both times.
It’s funny to look back on some of our most militaristic Cold War entertainment and realize it had a heavy dose of peace ’n’ love wish fulfillment to it. But that’s what was going on in most cases (and in the rest, a dire warning about our real mutual enemy—nuclear war). Now the Soviet Union is gone, we do partner more with the Russians, and there really are independent, international terrorist conspiracies we can both fight. Er, hooray?
Even when the U.S. and the U.S.S.R. were World War II allies, the two countries conducted very few joint military operations. A major one, the airbase-sharing plan Operation Frantic, didn’t get very far.
In the first decades of the Cold War, the two countries weren’t inclined to share much of anything. The Cuban Missile Crisis in 1962 sobered both countries about the seriousness of their nuclear posturing. After that, they buddied up with lots of joint programs to reduce the tension—or at least keep the lines of communication open so that neither side would destroy the world over a misunderstanding.
Nuclear disarmament was first and foremost. But as time went on, there were U.S.-Soviet joint efforts in nearly every field, from agriculture to the space program to postage stamps.
Starting in 1988, there was even a U.S.-Soviet military exchange program (a U.S. Air Force general was in the U.S.S.R. when it collapsed in 1991). But this wasn’t a team-up to battle international villains. It was a softball goodwill program of ships visiting the enemy’s ports, planes landing at the enemy’s airbases, and officers visiting their counterparts’ headquarters.
In fact, the Americans and the Soviets got together from time to time to do just about everything except fight side-by-side.
There were a couple of really good reasons for that. For one thing, the two countries really did still hate each other. Putting U.S. and Soviet troops together in any number in combat could instantly create a sticky situation. Who would have the most troops? Who would leave first? Who would stay the longest?
Keep in mind there was absolutely nowhere on the planet that was not of strategic interest to the two superpowers. Anywhere they teamed up, it would have major implications, probably lead to disagreements and posturing, and could easily blow up into World War III.
The threat of nuclear war was so strong that the two countries could never fight each other directly. They either did it through proxies, as in Korea and Vietnam, or through chess-style pawns, as in Central America and the Middle East. They were already opposing each other in so many places, it’s hard to think of where they could have collaborated. And it was impossible for them to have minor collaborations—everything either side did was magnified by the nuclear threat.
This also meant that the raving dictators, terrorist masterminds and drug barons who we might think of today as excellent targets for U.S.-Soviet combined assault were in fact already being used by one side or the other. And whenever a new one popped up, both countries would most likely try to co-opt him, not suggest a joint effort at eradicating him. So, for example, the U.S. embraced hijackers of Soviet airliners, and both countries helped set up dictators for a variety of countries.
The countries even avoided the main, fundamentally neutral way they might have joined military forces—United Nations peacekeeping missions. In some cases this was because one country or another had some regional interest already there; but in general it was because the presence of either country’s troops would have been more of a provocation than a peacekeeping presence.
A case study is the first time they were asked to work together—the chaos following the Arab-Israeli Yom Kippur War in 1973. This was an invasion of U.S.-backed Israel by Soviet-backed Egypt and Syria. It led to threats of World War III.
The Israelis successfully fought back thanks to massive U.S. military aid, and pressed their advantage to the point of nearly destroying the main Egyptian army, despite two United Nations resolutions telling them to stop.
Egyptian President Anwar Sadat asked the U.S. and the U.S.S.R. to form a peacekeeping force to enforce the ceasefire—essentially, asking Mommy and Daddy to step in and stop the fight.
The U.S. refused. For one thing, it didn’t want to make the Soviets look like equals or somebody developing countries could turn to for help. More importantly, it knew that such intervention was as likely to start a U.S.-Soviet war as to end an Arab-Israeli one. It would be a huge standoff, not a bilateral force.
But the U.S.S.R. not only said “yes” to Sadat, it demanded the U.S. join in or it would intervene unilaterally. In other words, it would occupy the entire region. The U.S. responded to this threat with its own nuclear saber-rattling until Sadat gave up and got a United Nations peacekeeping force instead.
In 1991, it was the U.S.’s turn to ask for a team-up. Asking for Soviet support for an invasion of Iraq and occupied Kuwait, the U.S. even invited the Soviets to contribute troops to the coalition. This apparently appealed to then-President George H.W. Bush’s highly flawed comparison of Saddam Hussein with Adolf Hitler and the coming Gulf War with World War II—this would be the Americans and Soviets fighting together again; even more closely, in fact, than they did back then.
The Soviets declined, because they probably couldn’t spare the troops; they weren’t going to play second fiddle to the U.S.; and they were still seeking a peaceful solution right up to the moment of the coalition invasion. Of course, these were all the reasons the U.S. felt comfortable asking for the team-up in the first place. It was very late in the Cold War, and the U.S. knew it was winning, so some of the old taboos were gone.
Indeed, the U.S. and the Soviets both ended up serving on the United Nations observer team in post-war Iraq and Kuwait in 1991. It was still an unarmed, non-military team at the time of the Soviet Union’s disintegration in December of that year.