Recently, North Korea revealed that, unbeknownst to just about everyone, it had built an “ultra-modern uranium enrichment facility.” Victor Cha, a member of the National Security Council during the Bush administration, told Reuters that “from an intelligence perspective, it’s sort of your worst nightmare.”
Cha was referring to the fact that the facility was located at the “Yongbyon nuclear complex — a well-known site under close scrutiny by U.S. spy satellites.” Okay, that’s embarrassing, but the real nightmare, for the U.S. at least, is what Pyongyang plans to do with the enriched uranium. North Korea is what you would call a “serial proliferator.” It will sell anything to anyone to finance Kim’s wine collection; it will even counterfeit $100 bills.
If Pyongyang’s nuclear enrichment program is the stuff of nightmares, our options on how to deal with it are the stuff of frustration. A military response is highly unlikely, not because we lack the capability — we have it in spades — but because North Korea would respond by destroying Seoul, which is within range of North Korean artillery, not to mention its missiles. North Korea’s recent attack on a South Korean island makes this danger clear.
If “war-war” is unlikely, that leaves, as Churchill once put it, “jaw-jaw.” Not only is Pyongyang an unreliable negotiating partner, but the other interested parties, especially South Korea and China, care less about proliferation issues than they do about stability on the Korean peninsula. Pyongyang knows this, which makes North Korea the kind of nightmare you have to live with for lack of viable options.
This is an example of what Andrew Bacevich would call The Limits of Power. The Pax Americana is long gone. (Actually, as Joseph Nye argues in the latest issue of Foreign Affairs, the United States never had the kind of power to control events that Americans believed it did.) Other nations have their own interests and priorities, and they don’t mind telling us so.
This has been true for a while. Unfortunately, our government, Democrats and Republicans alike, insists on acting — more to the point, insists on appropriating — as if this weren’t true.
Some numbers: this budget year, we will spend $700 billion on defense. As Gregg Easterbrook points out in the December 2nd issue of the New Republic, “Adjusting for inflation, that’s more than America has spent on defense in any year since World War II — more than during the Korean war, the Vietnam war, or the Reagan military buildup.” Since 2001, “military and security expenditures have soared by 119 percent.”
In fact, the United States spends approximately the same on defense as the rest of the world combined. No one else is in the same solar system, never mind neighborhood. We have eleven super carriers, each “accompanied by guided-missile cruisers and destroyers, plus two nuclear submarines unseen beneath.” The rest of the world has none. No other country’s “warships could draw within firing distance before being sunk.”
By any reasonable, and virtually every unreasonable, standard, our military power is literally unchallengeable. Thing is, that was also true in 2001, when we spent roughly $315 billion. Yet we are well on our way to spending $1 trillion on defense by 2030, if not a lot sooner.
Even if we weren’t already looking at some frightening piles of debt, this would be unsustainable: if you eliminated every last farthing of non-defense discretionary spending — Homeland Security, the FBI, FDA, national parks, agriculture subsidies, the whole lot — you would still add trillions to the national debt over the next decade.
That makes the recent recommendations for cuts weak sauce. For instance, the best the president’s National Commission on Fiscal Responsibility and Reform can do is to freeze noncombat military pay, reduce the number of contractors, close some overseas bases, and cancel procurement programs that Secretary Gates has already targeted for elimination.
Even these modest proposals “are not likely to be adopted by Congress wholesale.” As Easterbrook put it, “‘cancelled’ Pentagon projects are more enduring than brooding teen vampires.” What President Eisenhower famously called the “military-industrial complex” — which Bacevich extends to include “elected and appointed officials, corporate executives and corporate lobbyists, admirals and generals, functionaries staffing the national security apparatus, media personalities and policy intellectuals from universities and research organizations” — makes a return to 2001 levels, never mind pre-Cold War levels, of defense spending unlikely.
I hope you like value-added taxes, because under what Bacevich has dubbed the “Washington Rules,” they are the only alternative to insolvency. The numbers don’t add up otherwise.
Of course, you could change the numbers but that would require re-thinking our relationship to the rest of the world and (here’s the hardest part) ridding ourselves of the messianic conceit that is part and parcel of American exceptionalism.
This is the kind of idea that gets the full-on distortion and demagoguery treatment, so let’s be clear about what it doesn’t mean: it doesn’t mean that the United States should not continue to be the richest and most powerful nation on earth — it doesn’t even rule out the possibility of being the richest and most powerful nation of all time. On the contrary, since the present dispensation is unsustainable, a new dispensation is necessary to ensure America’s continued place at the top of the charts.
Nor does it mean becoming “isolationist.” With the exception of a few years in the interwar period, the United States has never been isolationist. (Ask every other nation in the hemisphere.) Only someone who confuses engagement with being the guarantor of the global order, a.k.a., the global policeman, would call what Bacevich and others are calling for “isolationism.”
The bad news is that we can’t afford the role we have assigned ourselves, the role that the rest of the world, kvetching and resentments notwithstanding, expects us to play. The good news is that we don’t need to.
A more multi-polar international order in which the United States tended its own garden would probably be less stable but not necessarily more dangerous, at least not to the U.S. Then, as now, the principal threats to our national security would come from non-state actors who are neither impressed nor deterred by V-22 tilt-rotor transports, F-35s, or our eleven carrier groups.
We don’t need to station troops in Europe; the “Carter Doctrine,” which states that “an attempt by any outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America,” is, in effect, a guarantee to protect other nations’ oil supplies, since less than 20 percent of our imports come from that region and the percentage is going down.
Our messianic self-conception, and not actual threats to the homeland, is why we spend $700 billion a year on a military that Easterbrook rightly characterizes as “almost entirely expeditionary.” It’s why after hundreds of billions of dollars and 4300-plus deaths, Thomas Friedman still urges readers “to finish our work in Iraq, which still has the potential to be a long-term game-changer in the Arab-Muslim world.”
He may think that we can change other peoples’ world while we change our own. Both history and the numbers say otherwise.
Attention all shoppers
It’s Cancellation Day
Yes the Big Adios
Is just a few hours away
It’s last call
To do your shopping
At the last mall*
The consuming life
The last time I got my haircut, the “stylist” was an immigrant from Cambodia. I told her that I would like to visit Angkor Wat someday, to which she replied, “I don’t like history – I like shopping.”
I thought “you’ll fit right in here in America.” While I may have sinned against charity when I thought that, I didn’t offend against the truth. American citizenship is inextricably intertwined with being an American consumer.
The idea of consumption as the means by which a person fully participates in American life is the subject of “A Consumers’ Republic: The Politics of Mass Consumption in Postwar America” by Harvard historian Lizabeth Cohen. According to Cohen, consumption, as in buying stuff – houses, cars, appliances, food, etc. – is both how you express your full membership in American society and your reward for that membership. Consumption shapes our expectations of what it means to be an American, and the ability to consume defines our status as Americans.
Cohen’s account begins with the Great Depression, although the centrality of consumption to the American experience was evident as early as the late 19th century: in Thorstein Veblen’s “The Theory of the Leisure Class,” in the impact of the Model T and the manufacturing revolution that made it possible, and in the birth of modern advertising as chronicled by Jackson Lears in “Fables of Abundance.”
But it was the Depression that forged the still-existing link between consumption and citizenship. Government and business leaders shared the conviction that the crisis was caused by a lack of demand: simply put, Americans were saving too much and spending too little. Thrift may be a virtue, but in the 1930s “the man who spent freely was seen as a national hero while the one who saved his money as a public enemy.” A good citizen was, increasingly by definition, a good consumer.
A good consumer?
But what did it mean to be a “good consumer?” Two definitions vied for prominence. In the first definition, a “consumer citizen,” in conjunction with others, tried to leverage her – it was usually a “her,” at least in the 1930s and 40s – purchasing power to reform and reshape the market place.
This went beyond protesting unfair prices and shoddy goods. The goal was to give consumers a voice in policy making alongside business and government leaders. In short, they wanted to make the marketplace more democratic.
The alternative, what Cohen dubs the “purchaser consumer,” had no such concerns: her goal wasn’t to build a more democratic or egalitarian society but, instead, a more prosperous one and, in the process, to raise her family’s standard of living. For her, being an American wasn’t about creating a more perfect union or promoting the general welfare, at least not in a way that couldn’t be measured by GDP or the amount of stuff at home.
In this political dispensation, what was expected of Americans was that they should spend and, in so doing, create economic growth and promote personal prosperity. In turn, government and business leaders promised to promote policies that would make the benefits of prosperity more widely available. This shouldn’t be confused with promoting equality or, more to the point, reducing inequality. To borrow an oft-used metaphor, the goal was to increase the size of the pie enough so that everyone (at least everyone who was white) might get a piece, not to divide the pie more evenly.
Before this could happen, the country had to grow and create jobs, which it didn’t, at least not in the numbers required. That required our entry into World War II. Ironically, the years that assured the triumph of the “purchaser consumer” model marked the heyday of the “citizen consumer.” Wage and price controls, rationing of foodstuffs and fuel, and the recycling of everything from tin cans to bacon grease were the epitome of making consumption an expression of civic virtue.
Little wonder that most Americans hated it. Notwithstanding the sepia-colored nostalgia about life on the home front during the Good War, many Americans chafed at the war-imposed restrictions on domestic consumption; the vast majority of them couldn’t wait for them to be lifted; and a not-insignificant percentage of them spent the war trying to evade them.
By 1943, 1944 at the latest, Americans, whose income had risen with the explosion in war-related industry jobs, yearned for a country where they could spend it instead of buying yet another war bond. And it wasn’t only those on the home front: one G.I., in a Saturday Evening Post piece entitled “What I am Fighting For,” began by saying “I am fighting for that big house with that bright green roof and the big front lawn.”
Ready, set, spend!
By VJ day, Americans had money burning holes in their pockets and nearly two decades of pent-up demand longing to be satisfied. In this setting, the “Consumer Citizen” didn’t stand a chance. Americans wanted the good life they had been promised and there were plenty of people ready to sell to them.
What’s more, the extraordinarily large cohort of children born in the years after the war, the notorious Baby Boomers, meant that demand for homes, cars, clothes, appliances, including that new thing called “television,” and just about anything you could imagine a family consuming, such as vacations and the roads you needed to take you to them, would continue to grow. Or it seemed that way.
The satisfying of this demand, and the way it literally as well as figuratively transformed the American landscape, is the “Consumers Republic” of Cohen’s title. Cohen tells the story of its triumph by focusing on her (and my) home state of New Jersey: the most suburban state, as well as one of the richest. In her home town of Paramus, “seven miles west of the George Washington Bridge” (and a few miles from my childhood home, as well as those of Colson regulars Jim Tonkowich and Ken Boa), two shopping centers a few miles apart, Bergen Mall and Garden State Plaza, exemplified the way the Consumers Republic transformed American life. They provided a setting for the “happy-go-shopping” ethos that defined the Consumers Republic; they represented the triumph of the suburbs over core cities; and they even led to a reconsideration of the idea of “public space.”
Even as millions of Americans were moving to the suburbs, or aspiring to, it was commonplace to decry their “sterility.” Gerry Goffin’s and Carole King’s “Pleasant Valley Sunday” was set – where else? – in New Jersey. Whatever. For a large majority of Americans, “charcoal burning everywhere,” “rows of houses that are all the same,” and a “T.V. in every room” was the good life they had been promised.
An exclusive society
There were and are better criticisms of the Consumers Republic than denunciations of identical houses in “status symbol land.” One of these was its virtual exclusion of African-Americans. Much of the early civil rights movement, especially in the North, drew its energy from black anger at not being able to consume on an equal footing with whites. African-Americans protested their lack of choices in the marketplace: downtown stores that wouldn’t allow them to shop there, local merchants who charged them higher prices for inferior goods, movie theaters that restricted them to the balcony, etc.
In the Consumers Republic, this lack of choices was a continual reminder of their second-class status. You only vote (or don’t) one day a year – you buy goods and services almost every day. That’s hundreds of reminders of where you stand.
Or sit. It’s not a coincidence that the defining protests of the civil rights movement, the sit-ins and the Montgomery bus boycott, were about being able to spend your money in the same manner and in the same places as whites: full participation in the marketplace, provided you can afford it, was seen as the sine qua non of full citizenship.
Another telling criticism of the Consumers Republic is that it was unsustainable: it was initially paid for out of Americans’ WWII-era savings, with help from government programs like the G.I. Bill. After they spent their savings satisfying pent-up demand, Americans turned to credit, which in the 1950s and early ’60s meant merchant-issued charge cards and installment plans.
The conservative nature of this credit – for instance, many issuers did not take the wife’s income into account when determining credit-worthiness – limited the potential for damage to family finances, especially when you factor in the rising wages created by the post-war economic expansion.
But, as the Spanish say, “Jesuits die in threes”: the invention of the mass-market credit card coincided with the end of the post-war economic expansion and the rise in inflation. Americans still saw consumption as a birthright, but how they were going to pay for it was unclear. They still do, and, even more than in the late 1960s and 1970s, it’s still unclear how this birthright will be paid for, especially since real median income has risen by only 10 percent in the last three-and-a-half decades.
(One answer was suggested by Reihan Salam: “cheap food, cheap gas, cheap credit, and, of course, cheap Chinese-made goods” sustained consumption despite stagnant and, for many Americans, declining real incomes. What he calls a “consumption compromise” kept the seventy-plus-year-old pact between Americans and their leaders going. Of course, part of the “cheap credit” included using their homes as piggy banks, and that party turned out, well, you know how it turned out.)
Sustainable or not, when President George W. Bush urged Americans to go shopping in the aftermath of 9/11, it wasn’t, as his critics on both the right and left claimed, some sad departure from historical norms of citizenship – on the contrary, it was an appeal to arguably the dominant idea about what it means to be an American.
“Dominant” isn’t the same thing as “exclusive.” Being an American is about more than buying stuff, although how deep this other stuff goes beyond ritual piety such as “supporting the troops” isn’t all that clear.
If this sounds cynical, please note that it wasn’t the decline of the traditional family or the assault on the sanctity of life that got average Americans mad as hornets: it was the drop in their standard of living. Abortion-on-demand didn’t cause Americans to say that the country was moving in the “wrong direction,” but a drop in home prices drove them to despair for the future and to start mourning for America.
Alternatives to the Consumers Republic
As impoverished as citizenship in the Consumers Republic might be, I’m not sure that it doesn’t beat the alternatives. The idea of America as a creedal nation has its appeal – it’s noble-sounding and high-minded – but what exactly that creed is and how much hold it has over the average American is a matter of debate. Sure, we are big on personal freedom and individualism but is that really a creed?
Alternatives such as the one outlined by the late Samuel Huntington in “Who Are We?” are even more problematic, even if your last name isn’t “Gonzalez” or “Garcia.” It’s a matter of simple fact that we have not been an “Anglo-Protestant” nation in a long time, and the last time we were, many Americans we now regard as “white” were not considered as such, much less as fully American. (See “Working Toward Whiteness,” “The History of White People,” etc.) Reviving an ethno-cultural conception of American identity and citizenship requires sweeping an enormous amount of history under the rug and then pretending not to see the Pikes Peak-sized lump in the middle of the room.
And of course, religion is out of the question: unlike Poland, we can’t appeal to the idea that Jesus is king in American hearts, much less politically. That leaves us with the Consumers Republic and its version of the business cycle: happy-go-shopping followed by mourning in America.
*Written by Donald Fagen from, of course, New Jersey.
For more insight into this topic, get the book, Christ and Consumerism: A Critical Analysis of the Spirit of the Age, by Craig G. Bartholomew. Or read the article, “Me and My Mammon,” by Sarah E. Hinlicky.
There were three men came out of the West,
Their fortunes for to try,
And these three men made a solemn vow:
John Barleycorn must die.
They’ve ploughed, they’ve sown, they’ve harrowed him in,
Threw clods upon his head,
And these three men made a solemn vow:
John Barleycorn was dead.
A forgotten man
At the time of his death in 1927, Wayne Wheeler, the legislative superintendent of the Anti-Saloon League (ASL), wielded the kind of power that today’s “Washington insiders” can only dream about. In its obituary, the Milwaukee Journal wrote that his “conquest is the most notable thing of our times.” The Baltimore Evening Sun added that “nothing is more certain than that when the next history of this age is examined by dispassionate men, Wheeler will be considered one of its most extraordinary figures.”
You’ve probably never heard of Wayne Wheeler—I hadn’t, until I read Last Call: The Rise and Fall of Prohibition by Daniel Okrent.
According to Okrent, the Evening Sun was “absolutely right” and “completely wrong” at the same time. Wheeler was arguably the most powerful and influential figure Americans under the age of 100 have never heard of. His sway over Congress was absolute – even H.L. Mencken, who despised Wheeler and everything he stood for, wrote that “in fifty years, the United States [had] seen no more adept a political manipulator.”
Yet by 1931, the ASL’s Washington office had had to cancel its newspaper subscription for lack of funds; by 1933 his life’s work was completely undone; and within a few decades Wheeler was completely forgotten.
When he wasn’t busy destroying Venezuela’s economy or risking war with Colombia, Hugo Chavez decided that he was going to solve the mystery of Simon Bolivar’s death.
Mind you, the vast majority of scholars don’t think that the great liberator’s death at age 47 is all that mysterious: he died from tuberculosis while living in exile in Colombia. But Chavez, whose ideology Andres Oppenheimer of the Miami Herald has characterized as “Narcissist-Leninist,” insists that Bolivar was murdered, despite the lack of any evidence. Since Narcissism-Leninism means never having to hear “no,” Bolivar’s body was exhumed in mid-July.
Of course, Chavez’s reasons for desecrating Bolivar’s remains have little, if anything, to do with facts or a search for truth: it’s about diverting attention from his comical-to-the-point-of-criminal mismanagement of Venezuela’s economy.
In a land synonymous with oil, daily blackouts, some lasting four hours or more, are a way of life. Chavez has threatened Polar, Venezuela’s largest and most-loved company, with expropriation, despite the company’s reputation as a model employer and the opposition of its employees, in whose name the expropriation is ostensibly being threatened. Exhuming Bolivar is the Bolivarian equivalent of waving a crucifix at a vampire.
No rest in peace?
Speaking of dead people who won’t stay interred, NPR’s All Things Considered reported that Bolivar “was just one of several notable dead not allowed to rest in peace.” In July, chess champion Bobby Fischer’s remains were exhumed in connection with a paternity suit, and the remains of former Romanian dictator Nicolae Ceausescu and his wife were dug up to see if they were the genuine article.
All this exhumation prompted NPR to ask “Is our final resting place no longer that final?” and if not, why. The possible answers ranged from the legal – the Latin phrase corpus nullius in bonis, “the body belongs to no one” – to the scientific: “the greater use of DNA evidence has made exhumations more common.”
One expert even speculated that “because of things like ‘CSI’ and vampire films, et cetera, maybe people are just a little more accustomed to the notion of partially decomposed bodies or just the dead in general, and sort of the taboo [of disturbing the dead] might have begun to erode a little bit.”
A sizable omission
What was missing from the speculation was any mention of the possible role of religion. And when you think about it, that’s a sizable omission, what my friend Terry Mattingly calls a “religion ghost.”
It’s sizable because religion, far more than anything else, has been the context in which human attitudes towards death have been shaped. By “death,” I mean not only what happens after you die but also what those left behind do with your body. The treatment of human remains is inseparable from our beliefs in an afterlife. That’s why the discovery of Neanderthal burial sites, complete with personal possessions, led anthropologists to suspect that they believed in life after death.
This link between belief and our treatment of human remains is a constant in human history. In about a month I will be in Varanasi, the most sacred city in India. As the Times of London once put it, “Death is an industry in Varanasi.” It’s where Hindus go to die in the hope that, “at the moment of passing, Shiva arrives to whisper the tarak mantra, the secret of the attainment of nirvana, in [their ear] . . .” They believe that “all those who expire in the precincts of the holy city are destined to escape the endless cycles of rebirth and suffering.”
The iconic images of Varanasi, thought to be the oldest continuously inhabited city in the world, are the cremations along the banks of the Ganges, conducted in accordance with Hindu beliefs about life after death and the significance of the body.
Christianity and the dead
Of course, Western civilization’s beliefs about how we treat our dead weren’t shaped by Hinduism but by Christianity, which has plenty to say on the subject. From its beginnings, one of the ways Christians distinguished themselves from their pagan contemporaries was in their treatment of the dead.
Burial of the dead played a central role in the life of early Christian communities. When they weren’t being persecuted, the most likely point of contact between Christian leaders and Roman officials concerned access to local cemeteries. Concern over access prompted Christians to acquire the right to dig tunnels for burial in the soft stone around and under Rome – thus creating the Christian catacombs.
When leaders of the church in Cirta (present-day Constantine, Algeria) were arrested during the Diocletian persecution, six of the seventeen leaders listed their occupation as “grave digger.”
Thus, Pliny the Younger wasn’t that far off when, in a letter to the emperor Trajan, he described Christians as a kind of burial society.
The key word here (besides “Christian”) is burial. From the start, Christians buried their dead. Cremation was for pagans who didn’t believe in the resurrection of the dead. Christian treatment of the remains of the faithful departed flowed from Christian beliefs: not just the resurrection of the dead but also the Incarnation. In becoming one of us, God sanctified the human body and made our treatment of it a part of Christian piety. Sinning against our bodies became sinning against God.
That’s why, to this day, the Orthodox Church forbids cremation except when it can’t be avoided, such as during epidemics or following natural disasters. While the Catholic Church (regrettably) permits it under more circumstances, it forbids cremation where the act “demonstrate[s] a denial of faith in the resurrection of the body.” (Prompting the obvious question of when voluntary cremation does not demonstrate such a denial. Answer: almost never.) It also prohibits the scattering of ashes or keeping them in your living room next to a bowling trophy.
Christianity’s way of treating the dead went wherever the faith did and supplanted pagan practices. Not surprisingly, as its influence has waned, there has been a corresponding decline in respect for human remains.
Some of this decline has been subtle: for instance, the ancient Christian custom of burying the faithful departed facing east – in anticipation of Christ’s return – has not only fallen into disuse but is probably unintelligible. The farrago of pragmatism, materialism, pop Gnosticism, and subjectivity that constitutes most moderns’ beliefs makes it difficult for them to understand why it makes any difference how we treat a person’s remains.
(There have been exceptions and pockets of resistance: during the Russian civil war, a surplus of bodies and a shortage of coffins and grave-diggers led the Bolsheviks to promote cremation as an alternative to Orthodox Christian burial, to which Russian peasants responded “Nyet!”)
Thus, instead of being buried facing east, we are cremated and our ashes are spread over Fenway Park, the Grand Canyon or some other place that was “special” to the deceased. That’s only possible because we don’t really think of these places as anyone’s “resting place.”
Disturbing the remains of Simon Bolivar or your aunt Margaret is no big deal because we don’t believe that there’s anything to be disturbed. To the extent we “believe” in life after death, it’s in the whole “Ghost Whisperer walk-into-the-light” sense that bears the same relationship to Christian faith as a filet of fish sandwich at McDonald’s bears to real seafood, a distinction so obvious that even a caveman could figure it out.
For more understanding of what it means to be made in God’s image, get the book, Created in God’s Image, by Anthony A. Hoekema, from our online store. Or read the article, “Grave Signs: The Godly Waste of Christian Burial,” by Russell D. Moore.
A just machine to make big decisions
Programmed by fellows with compassion and vision
We'll be clean when their work is done
We'll be eternally free yes and eternally young
What a beautiful world this will be
What a glorious time to be free
On June 26, 2000, then-president Clinton announced that the working draft of the Human Genome Project had been completed. He told the world that the work being done by geneticists would “revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases.”
The president wasn’t alone in his enthusiasm and optimism: Francis Collins, the director of the project, predicted that within ten years the task of genetically diagnosing diseases would be complete and that five years after that we could expect treatments based on those diagnoses to become available. “Over the longer term, perhaps in another 15 or 20 years . . . you will see a complete transformation in therapeutic medicine,” was how he put it.
Well, it’s been ten years and, as the New York Times put it, “A Decade Later Genetic Map Yields Few Cures.” The primary goal of the project, which was to discover the “genetic roots of common diseases such as cancer and Alzheimer’s and then generate treatments,” remains “elusive.”
Actually, “elusive” is a “glass half-full” way of putting it: “after 10 years of effort, geneticists are almost back to square one in knowing where to look for the roots of common disease.”
Instead of the genetic roots of common diseases and treatments, what we’ve gotten are “discoveries of disease-causing mutations in the human genome” that, by themselves, only explain a “small part of the risk of getting the disease.”
For instance, a study involving genetic variants linked to heart disease found that, when it came to predicting who would actually get heart disease, “the old-fashioned method of taking a family history was a better guide.”
After all the hype, we have learned that the “genetics of most diseases are more complex than anticipated.” Harold Varmus of Sloan-Kettering and, soon, the National Cancer Institute, summed it up nicely: “Genomics is a way to do science, not medicine.”
Thus, on the living up to the hype scale, the Human Genome Project ranks somewhere below Stephen Strasburg. Is it, therefore, a bust? How does it compare to, say, Tony Mandarich?
It’s not a Tony Mandarich, not by a long shot. While it hasn’t had and may never have the impact we were expecting and hoping for, its impact has still been considerable. It has revolutionized biology and related scientific disciplines. Genomics has transformed – or at least could transform – the way people see themselves for the better. My favorite scientific finding of the last twenty years or so has been the lack of genetic diversity in modern humans: a single troop of chimpanzees may be more genetically diverse than the nearly 7 billion people on Earth. In Chris Stringer’s words, “relative to many other species, we’re almost clones of each other.” That’s really cool!
There are other potential consequences that I wouldn’t describe as “cool.”
While genomics has so far proven unable to actually cure diseases, it does provide us with the tools we need to prevent them by identifying those who are genetically at risk and, not to put too fine a point on it, preventing their birth.
Case in point: the recent announcement that researchers had “uncovered dozens of previously unknown genetic mutations that contribute to autism in children.” While some of the mutations are inherited from the parents, other “tiny genetic errors may occur during formation of the parents’ eggs and sperm, and these variations are copied during creation of their child's DNA.”
Notwithstanding headlines to the contrary, this research does not herald a “cure” for autism. If anything, it augurs the opposite: researchers found that “every [autistic] child showed a different disturbance in a different gene.” These “private genetic mutations” “may make it more difficult to design drug therapies that work across a wide range of autistic spectrum disorders.”
But if a “cure” is no closer than it was before the findings were announced, what is closer is our ability to genetically identify, in utero, people at high risk for autism.
And what will we do with this information? Do you really have to ask? We will kill to be kind.
That’s what happened to people with Down Syndrome. The combination of amniocentesis and abortion-on-demand put these folks on an “endangered humans” list of their own. Ninety-two percent of all prenatal diagnoses of Down Syndrome are followed by an abortion.
Only naïveté and/or sentimentality would lead you to believe that things will be different with prenatal diagnoses of autism, or even a predisposition to autism. After all, prospective parents of children with less-challenging genetic prognoses than autism abort their unborn children a majority of the time.
It’s not hard to understand why: our less-than-discreetly-charming bourgeoisie have adopted a kind of unofficial “one, at the most two, child” policy. This ratchets up the pressure on the unborn Ethans or Emmas to be as close to flawless as possible – mom and dad are only going to do this once, twice at most. While no genetic test can determine, in utero, if Ethan and Emma are going to fulfill all of their lofty expectations, it may be able to tell you if they definitely won’t.
Everyone who believes in telekinesis, raise my hand. Everyone who trusts people not to avail themselves of the edge provided by what Edwin Black calls “newgenics” lend me your ATM card and your password.
In a more charitable vein, a child whose dependence may extend well beyond the teen years and whose needs can be exhausting is a daunting prospect. Every parent of an autistic child can tell you about the kick-in-the-gut feeling that accompanied the first time they heard the word “autistic” used in connection with their child. And we had already known and loved them for three or so years! I can’t imagine what I would think hearing that about a child I had never seen.
Then there’s the expense. While autistic kids generally don’t have medical expenses that are out of the ordinary, children with other genetically-based disorders often do. Factor in the additional expenses of special education and assisted living, whether paid for by the family, the state, or a combination of both, and, to be completely honest, honoring the sanctity and dignity of human life can be expensive.
Everyone who believes in telekinesis, raise my hand. Everyone who thinks that these expenses won’t be a factor in deliberations about what to do with the new genomic knowledge, sign this power of attorney I’m about to hand you.
Again, the treatment of people with Down Syndrome provides a glimpse of this genial dystopia: prospective parents face pressure from doctors and insurance companies to undergo genetic screening. The economic reason is simple and doesn’t require a terminal case of cynicism to understand: the least-expensive way to care for people with special needs is to prevent them from being born.
I call it a “genial” dystopia because, unlike China, for instance, the coercion will be subtle, so subtle that even sensitive souls who went out of their way to see My Name is Khan and cried as they watched Jason McElwain on YouTube will see the logic and even kindness in getting tested and acting on the results.
We won’t need “death panels” because our sentimentality, self-righteousness, and unexamined assumptions about what makes life sacred and worthy of respect will make them unnecessary. Our ability to draw distinctions without differences will come in handy: targeting the weak outside the womb, Nazi; inside the womb, the heart of liberty. Life and death decisions made by government, totalitarian; made by the “private sector,” liberty in action, not to mention good for the bottom line.
That’s a result I’m pretty sure won’t be elusive.
It’s a beautiful world we live in,
A sweet romantic place,
Beautiful people everywhere,
The way they show they care
Makes me want to say,
It’s a beautiful world
For you. It’s not for me!
For more insight into this topic, get the book, Bioethics: A Primer for Christians, by Gilbert Meilaender, from our online store. Or read the article, “Bioethics for Believers,” by Sarah J. Flashing.
Current plans call for the withdrawal of all but 50,000 troops from Iraq by August 31st. While, by any reasonable standard, 50,000 troops are a lot of troops, the political reality is that the American people, with the obvious exception of service members and their families, have already put the war, Iraq, and its people behind them.
This would be true even if the ostensible mission had been accomplished. Given the – shall we say? – uncertain outcome of our involvement in Mesopotamia, it’s understandable that we are especially ready to get the you-know-what out of Dodge.
We’re in such a hurry to get out that we don’t notice the bodies we’re stepping over: the ones belonging to Iraqi Christians.
On May 2nd, two buses carrying Christian university students and workers were bombed as they traveled from Hamdaniya to Mosul in northern Iraq. One person, a nineteen-year-old named Sandy Shibib, was killed and 188 were injured. The sense of outrage and vulnerability caused by this attack and others like it was captured by one of Shibib’s schoolmates, who told Reuters that “we were heading to university, not to a battlefield. We carried no weapons. Nevertheless, we were targeted.”
Being targeted is something that Iraqi Christians have become accustomed to: last Christmas Eve, in the run-up to Iraq’s national elections, a Christian bus driver in Mosul was pulled from his bus and killed. The day before, a bomb inside a 1200-year-old church killed two worshippers and injured another five.
Attacks on Christians led to a “toning-down” of Christmas celebrations across Iraq and caused Christmas Eve masses to be celebrated in the afternoon instead of at midnight, as tradition holds.
If being a Christian minority in an overwhelmingly Islamic society weren’t bad enough, Iraqi Christians’ ancient homeland, the Nineveh Plains, sits atop some of the world’s richest oilfields. The other groups, especially the Kurds, covet the revenues these fields represent and they, unlike the Christians, have militias.
So whether it’s being despised as Christians or being targeted by those coveting their land, the end result is the same: at best, Christians have to “keep a low profile,” even at Christmas, so as not to “provoke” attacks; otherwise, they become Iraq’s punching bag.
As I have noted elsewhere, even a nail sometimes yells “enough!” The latest attack prompted approximately 3,000 Christians to march through the streets of Hamdaniya. The Council of Christian Church Leaders of Iraq issued a statement calling on the government to take steps to ensure the safety of Iraqi Christians in Nineveh province.
As if to underscore how low their expectations of life in Iraq have become, they also demanded that the students targeted by the bombers be allowed to take their final exams “in a safe place” and not “forfeit the current academic year.”
They shouldn’t hold their breath: at best, the fate of Christians isn’t a priority, or even a concern, of the various Shiite factions jockeying for power in Baghdad.
Little wonder, then, that since the 2003 invasion, half of Iraq’s Christians have fled the country. It’s why, despite being only 3 or 4 percent of the population, Christians are 40 percent of Iraqi refugees.
The bottom line is that, as the U.S. Conference of Catholic Bishops put it, Christianity in Iraq is being “obliterated.” A campaign of what Daniel Jonah Goldhagen would call “eliminationist” violence against Iraq’s Christians is being waged in full view of the world for the second time in less than 100 years.
The first one, called Sayfo in Aramaic, took place at the same time and was perpetrated by the same people as the Armenian genocide. Between 1914 and 1920, at least 250,000 Assyrian Christians died at the hands of the Turks and their Kurdish allies.
Inasmuch as only Sweden has officially recognized the Sayfo, it probably shouldn’t come as a surprise that the rest of the world has turned a blind eye to the current attempt to finish what the Turks started.
Nina Shea, who has been a tireless exception to the aforementioned indifference, is correct when she writes that “Unless the Obama administration acts fast to develop policies to help [Iraqi Christians], their hope [of remaining in Iraq] will likely be in vain.”
It’s also true that the current administration’s likely failure to act will be yet another example of the foreign policy continuity with the Bush administration that has dismayed its more liberal supporters.
In truth, the fate of the Assyrians and other Iraqi Christians was sealed by Dick and Don’s not-so-excellent (shockingly awesome?) adventure in Mesopotamia. To be fair, the Christian population of Iraq had been in decline since at least the end of the first Gulf War. The combination of economic sanctions that beggared Iraq’s people and Saddam Hussein’s brutality, which was periodically directed at some, but not all, parts of Iraq’s Christian population, prompted Christians, who are Iraq’s best-educated and most entrepreneurial group, to emigrate in search of a better life.
But while the Christian population had been declining in both relative and absolute terms, the invasion of Iraq and the sectarian violence it unleashed turned that decline into the aforementioned obliteration.
People who scarcely, if at all, knew the difference between Shiites and Sunnis gave scant thought to the aftermath of toppling Saddam Hussein. When the expected choruses of “See, the Conquering Hero Comes” turned out to be the sound of automatic weapons fire and IEDs, they scrambled to put together a government that would allow them to extricate the United States from Iraq with a minimum of losses and a maximum of face.
By definition, the Iraqi parties to that scramble were the ones with the guns: the Shiites, who comprise 60 percent of Iraq’s population and who, as Vali Nasr and others have documented, had long been positioning themselves to govern Iraq in the post-Saddam era; the Sunnis; and the Kurds.
Apart from ritual obeisance to “religious freedom,” the well-being of people whose ancestors worshipped the God and Father of our Lord Jesus Christ while ours were bowing before idols and sacrificing the occasional virgin didn’t enter the negotiations.
The result was as predictable as it was brutal. Before the invasion, church officials in Iraq spoke of a “rising tide of Muslim fanaticism.” While they decried Saddam’s increasing “appeasement” of the fanatics, they warned that “a poorly managed transition to democracy” would produce something a lot worse.
That’s exactly what has happened and no one should be surprised. Figuring this out didn’t require psychic powers – all you needed was some curiosity and, oh yeah, to care about what would happen.
Still, my ire isn’t so much directed at our government. Modern nation-states don’t go to war out of love for neighbor or other ideals. They wage war for reasons having to do with their perceived interests. As the best-known treatise about modern warfare famously says “war is the continuation of policy by other means.” (Der Krieg ist eine bloße Fortsetzung der Politik mit anderen Mitteln.)
Sometimes these reasons meet the requirements of jus ad bellum. More often, they don’t. Given what we know – or should know – about why and how wars are fought it would be silly to expect governments, including ours, to automatically take into account the impact on tiny religious minorities.
But it would be great if the minorities’ co-religionists did, and that’s exactly what didn’t happen in the run-up to the invasion of Iraq. With some honorable exceptions, nearly all of them on the “religious left,” the consequences of an American invasion for Iraq’s Christians played no role in our deliberations and debate about the morality of that invasion.
That’s because we didn’t see them. We didn’t ask what would happen to them because, as far as we were concerned, they didn’t exist. What was about to happen in Iraq was about us: our security, our well-being, and our role in the world. Channeling William McKinley, some of us even justified the upcoming war as a way to bring Christianity to Iraq.
The kind of ignorance at work here is culpable. It’s at best negligent and at worst willful. Whether out of laziness or arrogance we didn’t ask the questions we needed to ask – in fact, we didn’t think any questions were necessary. Our guy told us war was necessary so we learned a little Pidgin Just War theory and helped make the case for war.
As a result, our brethren are now sporting t-shirts that read “My Co-Religionists Supported Invading Iraq and All I Have to Show for it is this lousy t-shirt. Really. It’s literally all I have.” (The writing is on both the front and back.) Could we have prevented the invasion? No. Could we have made sure that the well-being of Iraq’s Christians played a greater role in policy deliberations? Probably not. As I said, modern nation-states wage wars for their own reasons and then expect, sometimes even coerce, their citizens to go along.
But is it too much to expect that, next time, when our guy asks us to join the war party we stop and think about it first? We might even want to venture outside our epistemic cocoons and do some reading. And if that’s not possible, can we at least try to keep a low profile?
For more insight into this topic, get the book, Who Are the Christians In the Middle East?, by Betty Jane Bailey and J. Martin Bailey, from our online store. Or read the article, “Forgotten Christians,” by Virginia Stem Owens.