Choice Beyond Reason

The Hill reports this week that “Democrats will not withhold financial support for candidates who oppose abortion rights,” a development that has sent Internet feminists into a bit of a tizzy. Jill Filipovic, for one, remarked: “What better strategy than to betray their base and reaffirm that women’s basic rights are negotiable and disposable,” while Teen Vogue columnist Lauren Duca claimed: “This is a betrayal of every woman who has ever supported the Democratic party.”

Sure, okay. It is to the Democrats’ credit that they recognize what the feminist guard is terminally incapable of seeing: that most Americans, while their views on abortion are decidedly incoherent and nonsensical, are still generally much less dogmatic and monomaniacal about abortion than the feminists are. And what dogmatism it is! Lauren Duca, for one, is on record as believing that climate change is “the single most pressing issue facing humanity” and that it will “destroy the planet, if we do nothing.” So you’d think she’d be a little more open to some moderate political horse trading when it comes to the one political party that actually buys into the global warming hype. Yet no: abortion must reign supreme. You can believe that “the planet” is going to be “destroyed” if we “do nothing” about global warming, but if you try to soften your stance a bit when it comes to a moderately pro-life city councilman from Gribbler’s Wabe, Wyoming, then you’re suddenly a “betrayer.” Got it!

Abortion, in the end, is not about “women’s basic rights,” as much as pro-abortionists may insist otherwise; it has nothing to do with bodily autonomy or “a woman’s right to choose” or any of the other pretty ways the Left dresses it up. It has always been principally about dead babies, specifically the convenience that dead babies engender. Increasingly, for example, there are respected and well-paid philosophers and medical officials who argue that it should be legal to kill babies outside the womb—and these arguments, it should be noted, are couched entirely in the terms already used to justify legal abortion. And now there is growing concern that the developing technology of “artificial wombs” could threaten abortion rights more than ever before:

In the future, [Harvard bioethicist I. Glenn] Cohen said, it stands to reason that this technology could save the lives of fetuses born even earlier. Imagine then, that you had made the decision to terminate a pregnancy at 18 weeks, but that such a technology technically made it viable for the fetus to be born at that point in development, then finish developing outside the womb. Would an abortion still be legal?

“It could wind up being that you only have the right to an abortion up until you can put [a fetus] in the artificial womb,” said Cohen. “It’s terrifying…”

Developing technology…tests the rhetoric surrounding the right to choose. A woman’s right to control her own body is a common legal and ethical argument made in favor of abortion. Under that logic, though, the law could simply compel a woman to put her fetus into an external womb, giving her back control of her own body but still forcing her into parenthood.

Yes, what a “terrifying” thought. Indeed, the “rhetoric surrounding the right to choose” has always carried with it an unstated qualifier: it’s not just about choice, but about a specific choice, and more importantly the consequences of that choice. The purpose of abortion isn’t to give women the “right to control their own bodies;” if pro-choicers were genuinely as concerned with bodily autonomy as they say they are, they would be marching on Washington every day demanding the wholesale shuttering of the FDA, the USDA, the DEA and every other agency that regularly and openly comes between American citizens and their own body choices.

Abortion is not just about “choice;” it is also, perhaps more so, about parenthood. A new technology that might preserve the former while underlining the latter will likely be seen as a direct threat to the underlying motivation for pro-abortion politics: a pro-choice regime that allows for “abortion” while saving the life of the baby is only half-effective, and the less important half, too.

It is good that Democrats are softening on abortion. And while I am not entirely comfortable with the idea of “artificial wombs,” I would be happy to use them for the purpose of saving unborn humans from abortion. But do not expect our devout and determined pro-abortion friends to take these changes quietly; judging by what we’re seeing, we might expect them to grow even more single-minded about abortion, even as public opinion shifts and the pro-life option grows ever-more viable. This is what you expect from zealots—the people who are both ideologically fanatical and immune to reason. But when you truck in dead babies, I suppose it is best to disassociate yourself from rational discourse.

Maybe Just a Cigarette More

I am generally an optimistic man, and I remain so about the future prospects of the American experiment, but the dark and idiot specter of Obamacare does sometimes give me pause. The Republican party’s persistent inability to scuttle this deeply stupid law really seems to confirm what we’ve more or less known all along: that Barack Obama and his congressional Democrats knew very well that repeal was likely going to be politically impossible.

To be fair, the American political lexicon tends to equate “politically impossible” with “literally impossible,” but that’s not really the case: what it actually means is that anyone who axes Obamacare is going to have to deal with a bunch of angry voters and a possibly terminal election cycle. If the GOP believed any of its eight-year-long bluster, it would do a clean repeal of this miserable law, start working on some decent deregulatory fixes to the American health care market, and let the chips fall where they may in November. But we’re not getting that, because of “political impossibility,” i.e. the threat that Republican senators might have to justify acting on the things they’ve been swearing by for nearly a decade.

At Slate, Jim Newell offers a compelling theory that Republicans “never really hated Obamacare:”

Despite the aura around it, Obamacare, in its individual market reforms, is essentially just the idea that sick people should be able to purchase quality insurance at roughly the same price as healthy people. All of the law’s regulations, carrots, and sticks—guaranteed issue, community rating, essential health benefits, the individual mandate, subsidies, single risk pools, etc.—were put in place to make such a market feasible. To “repeal Obamacare” is to segregate sick people from healthy people, so that the healthy are not subsidizing the sick.

It turns out, most people don’t really want to do this. Which is why, in each chamber, when the conservative bloc would put forth a version of an amendment that would truly “repeal Obamacare,” it was met with a revolt from the rest of the party.

This is not a crazy hypothesis—the supposition, I mean, that Republicans “don’t really want to” repeal the Affordable Care Act. Surely there are a few among the more ideological wings of the party that want to see it go. But when you can’t rally enough members of the nominally conservative party to repeal the one law they all allegedly hate—and when you can’t even get enough members to vote to kinda-sorta-half repeal certain parts of the law for just a few years, as the “skinny repeal” promised to do—then surely it is worth asking: what do you people even believe in?

The idea that “sick people should be able to purchase quality insurance at roughly the same price as healthy people” is, of course, both economically illiterate and commercially ignorant: insurance does not, and cannot, and should not, work this way. The reason that sick people have generally paid more for health insurance is that sick people cost more to insure. The parameters of our healthcare debate have largely excluded this objectively factual characteristic of the health insurance market because we are under the impression that, if we simply say “Sick people shouldn’t pay more for insurance!”, then we can magically will it into reality. But this isn’t the case. A central platform of progressive government policy over the past dozen decades or so has been that you can wish away the hard realities of the medical economy without incurring some kind of negative externality in the process. But you can’t.

It should not be hard for a genuinely conservative party to repudiate such illogical ideology and substitute for it a more grown-up and honest look at the way the world works: yes, sick people are going to have to pay more for health insurance and health care, but if we focus on genuine reforms to bring down the price of both, then it won’t seem as outrageous as the Left has made it out to be. We have been denied such a discussion, however, largely because Democrats have re-defined the acceptable criteria of our healthcare debate: instead of one side proposing the idiot insurance regulations of Obamacare and the other side arguing for a more fact-based approach to health care policy, we have both sides largely arguing for the same thing: full-on Obamacare on the one hand, gradualist ACA-lite six-year-temporary-repeal Obamacare on the other.

Republicans, who might have spent nearly the past decade exposing the fundamental stupidity of Democrat health economics, have instead evidently accepted the premise of those economics, framing their legislative ambitions strictly within the Overton window established by Barack Obama in 2010. In a sense, then, I don’t blame them for failing to repeal the Affordable Care Act: if you’ve more or less decided that it’s a necessary part of federal policy, then why risk your congressional seat over it? The unpleasant side effects of Obamacare—the higher premiums, the useless coverage, the restricted market, the grossly expensive and unworkable health care economy generally—are not really their concern. It falls to us—the people they represent—to deal with it. And so we will; apparently there is no avoiding it.

The Dead Baby Express

A little while ago Freddie deBoer argued that, though he disagrees with our principles, “there’s one thing [conservatives] get right: they are correct when they predict the consequences of the next social change.”

This is true—well, aside from the “one thing” bit, anyway. Conservatives get stuff right. Consider, for example, the fact that infanticide is becoming a defensible position to hold:

A biologist at one of the most prestigious universities in the country has come out in favor of killing disabled newborn babies, declaring that “it is time to add to the discussion the euthanasia of newborns.”

In a post on his personal blog, University of Chicago Prof. Jerry Coyne expressed approval over the idea of killing “newborns who have horrible conditions or deformities, or are doomed to a life that cannot by any reasonable light afford happiness…”

“If you are allowed to abort a fetus that has a severe genetic defect, microcephaly, spina bifida, or so on,” Coyne asks, “then why aren’t you able to euthanize that same fetus just after it’s born?”

Comparing newborns to dogs and cats, Coyne claims that “some day the practice [of killing disabled newborns] will be widespread, and it will be for the better.”

Now, it would be easy enough for me to point out that, yes, we called this, we predicted it. Pro-lifers have been arguing for years that the pro-choice principle does not, in any meaningful way, exclude newborn babies from being killed: all of the arguments in favor of abortion—that unborn humans have no self-awareness, no desires, no consciousness, no hopes, dreams, aspirations, and that it is thus acceptable to kill them—apply more or less entirely to newborn babies. Indeed, this is an argument Coyne makes explicit: “[N]ewborn babies aren’t aware of death,” he writes, “aren’t nearly as sentient as an older child or adult, and have no rational faculties to make judgment.” Off with their heads! (Editor’s note: I know that doctors wouldn’t actually be beheading babies—it’s a figure of speech. They’d simply be injecting them with a lethal dose of barbiturates. Much cleaner.)

Now, I know what you’re thinking: Coyne is only arguing for killing babies with “horrible conditions or deformities,” or those who are “doomed to a life that cannot by any reasonable light afford happiness!” Our civilized culture would never sanction infanticide for healthy babies! But you don’t really believe that—not at this late date, not as a respected professor at a world-class university is honestly arguing in favor of a Sparta-style infant murder regime. In any event, Coyne has already given up the game: he justifies disabled-baby euthanasia by claiming that “you are [already] allowed to abort a fetus that has a severe genetic defect, microcephaly, spina bifida, or so on.” But that’s only half of it: you are actually allowed to abort for whatever reason you want—so why shouldn’t the same principle apply to killing your newborn?

This idea already has some currency in certain dank corners of academia; the Journal of Medical Ethics some years ago ran a paper by two professors arguing for “after-birth abortion,” i.e. killing babies after they are born, “including cases where the newborn is not disabled.” I suppose, for now, it is largely still required that one qualify one’s infanticide with certain restrictions, e.g. that your baby should have spina bifida before you execute it. But that will surely change, and probably more quickly than you can imagine: in the next decade, if not less, we can probably expect more than a few of the enlightened progressives that run most of our academic institutions to come out in favor of healthy newborn euthanasia, after which they’ll start working their way up to toddlers and eventually retarded and otherwise-disabled adolescents.

This will happen—it is not a question of “if.” And when it does, you can thank us conservatives for at least having the foresight to call it ahead of time, even if nobody listened to us. “Some day the practice will be widespread,” Coyne writes, “and it will be for the better.” He’s right about the first part.

Welcome to Progressive Utopia

At the Federalist this week I wrote about the depressing case of Charlie Gard and its horrifying implications: in the United Kingdom it is apparently official state policy for government officials to run out the clock on a dying boy’s life in order to force his parents to pull the plug on his medical equipment. It is a bizarre fact of life in modern Western civilization that this isn’t the scandal of the century; instead it’s treated as another ho-hum undertaking by a government that evidently has the power to dictate whether or not parents can seek potentially life-saving treatment for their children.

This is the kind of government our progressive friends want. Your garden-variety statist is generally happy to accept a certain amount of collateral damage in exchange for a healthy regime of statism: give them enough single-payer health care and enough hate-speech laws and they’ll happily look the other way when the bureaucrats sentence a baby boy to die by way of government-mandated attrition. And why not? I suppose once you’ve got “abortion free of charge on the NHS,” it’s just a hop, skip and a jump to mandatory passive baby euthanasia.

A great deal of the response to this scandal has run along these lines: “The treatment almost certainly wouldn’t have worked. It was overly wishful of the parents to want to take him to the United States when he very likely would have died either way.” Sure, stipulated—I believe both of those things. That doesn’t matter. What matters is that it was their call, not the government’s, not the GOSH doctors’, not the judge’s.

It is exceedingly difficult for liberals to separate an inadvisable undertaking from an illegal one: if a leftist thinks you shouldn’t do something, he generally thinks you shouldn’t be allowed to do it, either. But that’s not how a free society works, least of all where parental authority is concerned: the family unit, preceding the state and contravening its authority in a million different ways, should not be subject to the same idiot management prerogative that the government applies to its legions of bureaus and departments and offices.

It is more or less beyond question that, had the Gards been permitted to take Charlie to the United States for treatment without any fanfare, and had the treatment ultimately failed, nobody would be arguing that the British government should have forbidden them from doing so in the first place. But the Left really only responds to power—above all government power—and so if the government says that something is bad and should not be done, most leftists are incapable of doing anything other than agreeing with it, even if they would have otherwise had no comment on the matter.

Perhaps aware that it more or less resembles a petulant small-scale tinpot fiefdom on the world stage, the Great Ormond Street Hospital issued a statement attempting to rationalize its valiant efforts to ensure Charlie Gard’s young death. It is almost uncomfortably pathetic, less a communique from a medical institution and more a whiny bully’s rationalization for why he hit someone much smaller than him. GOSH essentially says: “We decided Charlie Gard was going to die, so we determined that he should die.” At one point, justifying the hospital’s desire to unplug Charlie’s equipment so that he might die more quickly, GOSH claims: “Charlie shows physical responses to stressors that some of those treating him interpret as pain and when two international experts assessed him last week, they believed that they elicited a pain response.” Got it? “Some of those treating him” “interpret” some of his physical responses as pain; additionally, two other experts “believed” that they elicited a pain response from him. On this kind of irrefutable testimony the British government stole the Gards’ familial rights and allowed a boy’s last chance at life to wither away. Isn’t that an encouraging indication of the competency of the British government in the 21st century?

The hospital goes on to question the competency of the doctor who proposed to treat Charlie, up to and including a strong implication that he was only in it for the money; the evidence presented by the doctor, GOSH argues, “confirms that whilst [the treatment] may well assist others in the future, it cannot and could not have assisted Charlie.” Maybe that’s true. But it is astonishing that any civilized country could interpret such evidence to mean that the Gards should be forbidden by law from pursuing it. Note that the hospital is not delegitimizing the cure itself, as it would snake-oil medicine or quack remedies; rather, they are simply saying that the cure would not work in this one instance, which is ultimately a subjective opinion, on a topic that should defer to the parents above all else.

I suppose such an idea is mildly outmoded, at least in England if not elsewhere. Earlier this week the Guardian published a piece by University College London professor Ian Kennedy, who—after expressing the customary politically-correct pieties regarding the Gard case—argued in all seriousness that “children do not belong to their parents.” This is a fashionable idea among progressives, argued by diverse minds from number-one American public intellectual Melissa Harris-Perry to the more zealous intellectuals of Red China during the Great Leap Forward. It sounds good on paper—if you’re a liberal, I suppose. In the end what you end up with is a baby boy wasting his brief and precious life away while a hospital dithers about his “best interests.” Maybe Charlie Gard would have died over here in America—in fact it’s likely that he would have. But decisions about his medical care were not mine to make, nor were they yours, nor were they the hospital’s or the British court system’s. That right fell solely on his parents—who now must watch their baby boy die, knowing that they had a chance to save him, one that was stolen from them by their own government.

And Now, a Word From Our Censors

I am not positive, but it seems to me that being a progressive must be utterly exhausting, in that the constant lurching from one outrage or social paranoia to the next must eventually take a mental toll on a body. Out of Britain comes the latest lurch:

One ad for baby formula showed a little girl growing up to be a ballerina and a little boy becoming a mathematician.

Another ad, for a weight-loss drink, asked if viewers were “beach body ready” and showed a bikini-wearing woman whose bronzed image, critics said, promoted an unrealistic standard of beauty.

A third ad, for the video game “Game of War,” showed the American actress Kate Upton scantily dressed on a horse, making it seem as though sexual desirability were a prerequisite for leadership.

Britain’s advertising regulator, reacting to these ads and similar ones, announced Tuesday that new rules would be developed to ban advertising that promotes gender stereotypes or denigrates people who do not conform to them; sexually objectifies women; or promotes unhealthy body images…

The specifics have yet to be developed, but the regulator offered some examples.

“It would be inappropriate and unrealistic to prevent ads from, for instance, depicting a woman cleaning,” the report said. But, it said, “an ad which depicts family members creating mess while a woman has sole responsibility for cleaning it up” might be banned under the new guidelines.

I have to say, for someone who has been accused of harboring extreme sexual prudence and retrograde presumptions about public modesty, I can’t hold a candle to the gender-neurotic regulators of Western Europe, who now consider “Kate Upton scantily dressed on a horse” to be an image literally worth banning. If I really want to develop my conservative sexual ethic, I apparently need to study the lessons of the progressive bureaucrats of Great Britain!

What is most instructive about this episode is not that England has an utterly dismal and shameful free speech regime; we have known that for decades. The striking thing about these proposed regulations is the ultimate disdain that the regulators themselves are expressing for a society beyond their control and immune to their desires. The likely reason that the formula ad showed “a little girl growing up to be a ballerina and a little boy becoming a mathematician,” for instance, is because—now here is a big shocker—more women are ballerinas than men, and more men are mathematicians than women. The idea that an advertisement should be banned for reflecting something that is actually true is really kind of astonishing, that is, if you discard the obsessive-compulsive and insular presumptions of modern social progressivism.

As for “an ad which depicts family members creating [a] mess while a woman has sole responsibility for cleaning it up,” look: at a certain point it is difficult to mock the paranoid hang-ups of progressive cultural fixations, and you just have to laugh instead, at least for a little while. There are serious problems in British society—a stunted and sinking underclass, a serious Islamic immigration problem, a bloated and debt-ridden and foundering government system—and yet there really is a government bureau dedicated to banning (I’m just going to say it again) “ads which depict family members creating [a] mess while a woman has sole responsibility for cleaning it up.” This is a thing that actual people are actually worried about.

It is true that women in the UK do more housework than do men. But so what? British women are also overwhelmingly more likely to work part-time than are men, suggesting that at least some of that gap can be explained by the fact that women are home throughout more of the day than men are. In a country with exceedingly generous family leave policies, such arrangements can mostly be chalked up to women’s choices—which is to say that an advertisement for a cleaning product is arguably irrelevant in determining how men and women structure their family lives and domestic responsibilities. Nonetheless, free speech must suffer in Great Britain, all because of the exceeding vanities of progressive political belief.

The Mass of Order

Reviewing Peter Kwasniewski’s new book on the case for the traditional Latin Mass, Dorothy McLean writes:

One thing that confuses at Mass today is just what the priest is doing at the altar/table up front. Depending on the prayers he chooses, he is either offering a sacrifice to God or preparing a communal meal: Which is it? Similarly, if the priest is booming prayers into a microphone, is he speaking to God (Who has perfect hearing) or is he really addressing the congregation? Meanwhile, if the modern Mass is such an improvement over the old (as we hear so often), why have most Catholics in the West stopped attending it?…

These are excellent questions that highlight a very real problem, which is the rather wholesale collapse of both practical and experiential Catholic faith throughout Western civilization over the past half-century or so. The last question in particular—if the revised missal is such a vast improvement over the Tridentine Mass, then why has Catholicism suffered so profoundly since the former’s implementation?—is a vital one for understanding the predicament we are in today.

That predicament is stark: since the middle of the last century, regular Mass attendance has plummeted among all age groups, most strikingly among young adults, whose regular attendance numbers have fallen from three-quarters in 1955 to barely one-third at the turn of the century. Overall, during that same time period, the number of Catholics who claimed to have attended church in the past seven days dropped a full thirty percentage points; Protestants have suffered nowhere near these levels of attendance reduction.

Now, overall church attendance across Western society has declined over the past fifty years or so regardless of denomination. One might be willing to chalk the Catholic Church’s reduction in attendance up to various disparate phenomena: secularized culture, poor catechesis, bad clergy, any number of other things. Some or all of these could be true to varying degrees. But I would submit that the rewriting of the Liturgy, and the cavalier and slipshod way in which that rewriting was applied, has a great deal to do with the decimation of the Catholic faithful. Ultimately the assurances of midcentury Catholic reformers—that the New Mass would be more “relatable” and “accessible” to the laity, and would reverse the slow decline in regular attendance already present by the late 1950s—ring brutally hollow. The empty pews attest to it.

It is hard to overstate the radical shift of the Mass from the Tridentine to Novus Ordo, the liturgical abuses it invited, and the eagerness of many of the clergy and the lay faithful to pervert the beauty and the sacred order of the Mass in favor of a kind of modern spiritual variety show. My mother, herself a cradle Catholic—and, to be sure, a thoughtful critic of what she sees as various deficiencies of the pre-conciliar Church—relays a story in which a priest, during a Christmas Eve liturgy, dressed up as an elf and skipped around the nave in order to entertain the children. I have seen a priest don a professional football jersey mid-liturgy because he lost a bet; I once witnessed a priest, during a homily, place a couple of cheap toy statues of gauchos on the altar and prattle on about them with no discernible connection to the Gospel or indeed anything else; I remember the scandal in the Philippines when a priest, in an aggressive and shameless excess of vanity, rode a toy scooter during Mass in order to get a rise out of everyone. Away on my bachelor party weekend a number of years ago I popped into a local church for the Saturday vigil; at the end of the Mass, following the closing prayer but prior to the recessional hymn, the celebrant declared: “See you next time!” At which point the congregation responded, in unison, “Same time, same place!” The priest, I remember, was a kind and welcoming man, and surely he loves Christ as much as any faithful cleric—but if it were a choice between attending that kind of game show-ified Mass every week or else building my own rustic chapel out of Atlantic white cypress and forcibly conscripting a local Trappist choir monk to personally say Mass for me every weekend, surely I would choose the latter.

A Mass—and a religious culture more generally—that allows for such things (and even encourages them!) is broken in some strange, sad and vital way. As McLean points out, by way of Kwasniewski, the Novus Ordo has created a “maelstrom of confusion” as to just what the Mass is about and what the Church is supposed to be expressing in the Mass. In that swirling chaos, a kind of crude laxity has arisen, one in which the repeated assurances of “modernization” and “accessibility” and “active participation” have been exposed for the meaningless assurances they always were.

None of which is to imply that I am somehow holier or else less sinful than even the staunchest and most aggressive of liturgical reformers; I am not. Nor is it to imply that the Church could solve all its problems—attendance-based or otherwise—with a return to the exclusive use of the Extraordinary Form. Nor is it even to say that the Novus Ordo cannot be done with great respect, fidelity and beauty, for surely it can. It is simply to say this: after more than fifty years of novel experimentation and repeated assurances that this is what the Church needs, we must all be prepared to admit that the great reforms of the 20th Century have not played out the way that they were supposed to, that there is a genuine case to be made for the Extraordinary Form, and that, as McLean puts it, we would do very well, at this critical juncture in the Church’s history, not to ignore “the human longing for something challenging, complicated, and mysterious in the worship of God.”

The Butt of the Joke

Children have always been vulnerable targets for sexual exploitation, but—in most modern societies, anyway—there has generally been a rather strong taboo against it, inasmuch as, for the most part, it is considered unacceptable to make children into sexual beings. The old way of doing things was, if you wanted to be a pervert, you were expected to conduct your perversion with a modicum of circumspection, and if you were caught acting on any of your perverted desires then you were thrown into jail.

Quaint stuff. Last week Teen Vogue, an aggressively shallow magazine dedicated to making teenage girls feel insecure and self-conscious, ran an article entitled: “Anal Sex: What You Need To Know.” Now, I know what you’re thinking: what does any teenager “need to know” about anal sex? There was a time—like, before last week—when only perverts would have felt comfortable in claiming that thirteen-year-olds should be educated about the finer aspects of putting penises in their rectums. But those days are over. A new era has dawned!

To be sure, these are just tentative first steps into this brave new world. Teen Vogue’s teen butt sex piece, after all, is ultimately a bit shy about what it is proposing: it offers anal sex lessons to both “prostate owners” and “non-prostate owners,” likely because the editors at Teen Vogue are uncomfortable writing the words “girls” and “boys” in connection with anal sex. Rather than admit that they are openly advising children on how to stick sex organs up their anuses, they have retreated into the comforting anonymity of anatomy. This is a work in progress, people.

Now, one might be moved to point out the obvious: that, whatever your feelings on combining sex and feces, it’s probably something that should ultimately be left to adults, and teen-centric magazines of all things shouldn’t be in the business of encouraging young people—girls and boys who may not have even begun puberty yet—to engage in any kind of sex, full stop.

The modern, post-sexual-revolution response to this perfectly reasonable argument usually runs along these lines: “Kids are going to have sex anyway—there is nothing we can do to stop them—so we might as well teach them how to do it safely!” This belief is exemplified nicely by feminist hero Amanda Marcotte, who tweeted in defense of Teen Vogue’s anal sex advice:

[I]t’s really, really stupid to refuse [kids] information and just let them have sex without any education on safety.

I’m not positive, but I think Marcotte is, at this point in her life, childless—so she may very well be completely unaware that there is a third option, namely that you don’t have to “let” your underage child have sex at all. Put another way, as a parent you are not simply a helpless idiot who is powerless to stop your child from banging the nearest prostate owner at will. Parents can, you know, do things to stop their children from engaging in sexual activity. It’s not rocket science; it’s not even non-rocket science. It’s just fact.

That being said, this argument—“Kids are just going to have sex, so we should teach them how to do it ‘safely’!”—is, while flatly untrue, nonetheless pervasive. So let us imagine a rhetorical corollary to such an argument: the promotion of, say, “safe” drug use.

Imagine that Teen Vogue ran an article educating teenagers on how to “safely” use cocaine: how to gauge the proper amount to snort so as to avoid overdosing, how to verify that you’re ingesting pure, high-quality blow, the necessity of clean spoons and trustworthy drug dealers. Should we “refuse kids information” when it comes to hard drugs? After all, statistics show that many teens are just going to do cocaine—and it seems stupid and risky to “just let them take drugs without any education on safety,” doesn’t it?

At one point I might have believed that such an argument would never pass muster with any magazine that appears in supermarket checkout lines. But why should that be the case anymore? A popular and nominally respectable magazine is encouraging thirteen-year-olds to explore the possibility of what Teen Vogue calls “butt stuff,” urging them to experiment sexually while assuring them that “yes, you will come in contact with some fecal matter” (this is advice for thirteen-year-olds, people). Once you have descended to this level, the question of “standards” becomes a blurry one, if it’s even a question at all anymore.

A Narrative, Blown Up

One weird way in which Western citizens, particularly American citizens, are apt to excuse Islamic terrorism is to say something along the lines of, “Non-Islamic people commit acts of terrorism as well!” For their part, Americans are often given to pointing towards Timothy McVeigh as a counterweight to terrorism committed by Muslims. It is an odd hand to play: when confronted with the geopolitical and religious reality of modern-day terrorism, many people are apt to say: “Yeah, but what about this thing that happened over two decades ago?”

All of which is ultimately a distraction of sorts. Yes, non-Muslims can and do carry out acts of terrorism, sometimes very brutal and deadly acts of terrorism. But the right way to fight terrorism as a whole in the 21st century isn’t to join hands and gaily sing “We’re All In This Together”; it’s to ask: from whom, and from where, are the vast majority of these attacks coming, and why? This is the question the people in counterterrorism efforts ask themselves every day; they don’t stand around saying, “Well, sure, Saudi Arabia seems to produce an overlarge number of terrorists, which might signify something noteworthy about Saudi Arabia—but hey, what about that white guy twenty years ago?!”

Just the same, the equivocations and the excuses roll on, driven in large part by a media and a cultural zeitgeist that is deeply committed to protecting Islam from any real criticism whatsoever. If you propose that elderly charity nuns should maybe not be forced to provide abortion drugs to their staff, a screeching cadre of feminists will materialize to accuse you of attempting to install a Catholic caliphate in the United States. If you point out, on the other hand, that Islam sure seems to attract and inspire a lot of terrorists, you’ll be treated to a decidedly different kind of dialogue.

Case in point, from the Huffington Post:

Extremists and Islamophobes alike have attempted to paint violent factions within Islam as the true expression of the faith. But a new study gives credence to what countless Muslim leaders, activists and scholars have argued: that groups like the self-proclaimed Islamic State are Muslim in name alone.

A group of German scholars at the Universities of Bielefeld and Osnabrück analyzed 5,757 WhatsApp messages found on a phone seized by police following a terrorist attack in the spring of 2016. The messages were exchanged among 12 young men involved in the attack…

Researchers conducting the study said the young men’s conversations demonstrated little understanding of their professed faith and that the group constructed a “Lego Islam” to suit their purposes.

Bacem Dziri, a researcher at the University of Osnabrück and co-author on the report, examined the messages from an Islamic studies perspective and concluded: “The group had no basic knowledge about Islam.”

Well, maybe they didn’t. And yet they still carried out a terrorist attack in the name of Islam. Which is kind of weird. Stranger still is this: I know plenty of Christians of varying denominations who are as clueless about their nominal religious beliefs as these young Muslim men allegedly were about theirs. I have known Catholics who believe the Church allows them to “divorce” their spouses and “remarry”; I have known Jews with zero understanding of their faith from either practical or historical perspectives; I have known Presbyterians who believe…well, whatever it is that Presbyterians believe, which is probably enough said. Curiously, none of these people was even remotely motivated to construct a “Lego religion” as part of a plot to murder a bunch of innocent people. What gives?

That’s the trouble with discussing Islamic terrorism: we are dealing with a regular procession of young men (and some young women) who claim Islam as a mantle, who scream “Allahu Akbar” as they blow themselves up, who are part of terrorist groups with names like “the Islamic State…” and yet nevertheless, endlessly, day after day after day, we are assured that these incidents have nothing to do with Islam. It is, of course, not improbable that many terrorists are ignorant of, and/or ultimately uninterested in, many aspects of Islamic faith. But just the same, they continually gravitate towards Islam—not Catholicism, not Methodism, not coconut milk Buddhist yogaism, but invariably the same religion. Why? What is so special about Islam—even half-developed, poorly-studied Islamic belief—that makes so many young men want to self-detonate?

To their credit—sort of—the same people who so assiduously deny a link between Islamic terrorists and Islamic faith are weirdly willing to be semi-honest when it comes to the realization of their own policy goals: we are ceaselessly told that if, say, Trump’s “Muslim ban” is allowed to stand, then one of its principal achievements will be to create more terrorists, not fewer. So we are left with a most curious cultural narrative: Islam has nothing to do with terrorism, but Muslims can be driven to terrorism by a mildly controversial temporary immigration policy. Talk about “legos!”

You May Kiss the [Censored]

A couple of years ago, almost to the day, I wrote:

Endlessly, it was repeated: if gay marriage is legalized, it will have nothing to do with you. Well, here we are. Gay marriage is legal. And it is clear that it will have everything to do with every one of us.

It was true then and it is true now—truer, even. More than a few readers took my position to mean that I am “anti-gay marriage,” which isn’t quite the right way to put it: it’s hard to be “anti” something that you believe doesn’t exist, after all. Still more assumed that I am, more broadly speaking, “anti-gay,” which is untrue. I’m not even sure what that would look like, and in any event I am not going to play the deeply stupid progressive game where a set of moving policy goalposts are used to determine, on any given day, whether or not you are a bigot.

Anyway, the truth remains: gay marriage has a lot to do with you, no matter what gay marriage partisans may insist. The folks in Malta, unfortunately, are just figuring this out for themselves:

The overwhelmingly Catholic island of Malta has voted to legalise same-sex marriage.

Parliament agreed to amend Malta’s marriage act, replacing words like “husband” and “wife” with the gender-neutral alternative “spouse”.

It also replaced “mother” and “father” with “parent who gave birth” and “parent who did not give birth”.

The change marks another major milestone for the island, which only introduced divorce in 2011.

From heterosexual divorce to homosexual matrimony in six short years: that’s progress, if you’re into that sort of thing anyway. For the rest of us it is a bit dismaying.

So “gay marriage” is legal in Malta now. And with it the entire institution has been revamped more or less beyond historical recognition. If “this has nothing to do with you” is the biggest lie of the whole debacle, then directly behind it is the lie that “gay marriage” represents an expansion of marriage instead of a redefinition. It is indisputably the latter, at least in this case: married residents of Malta are no longer “husbands” and “wives”; they are simply “spouses,” a dry, bureaucratic, bean-counting approximation of married life. Gone, too, is the notion of “mothers” and “fathers”; rather, you are both simply “parents,” one of whom “gave birth” and one of whom “did not give birth.”

The old way of looking at things, of course, held that marriage and childbirth could not be separated from their objective biological moorings: a marriage required a husband (a man) and a wife (a woman), as did giving birth. That has changed. The most telling part of Malta’s recent capitulation is the fact that, even within the context of the novelty of “gay marriage,” there is arguably no need to abolish distinctions like “husband” and “wife”: most gay couples, men and women, are happy to refer to their so-called spouses as husbands and wives, respectively, and so the language could have easily been kept on government forms and state documents. The Maltese government, however, called such language “discriminatory.” So there you have it. To all of my Maltese readers, I want to express my heartfelt condolences that your married identities have been stolen from you by your own government.

Now, the pro-“gay marriage” counter-argument likely runs as follows: it doesn’t matter what the government calls you in its official records, you’re still welcome to call yourself a husband or a wife or a spouse or nothing at all, so why bitch about it? Curiously enough, this argument was never good enough for gay people, who could have easily referred to their relationships or civil unions as “marriages” without having to re-define the institution as a whole. But gay activists instinctively understood why, for the purposes of gay marriage, this argument was and is bogus. The government’s definition of what is and isn’t marriage, after all, suggests and imparts a certain type of society and a certain way of life. The government’s recognition of “traditional marriage” as the sole type of comprehensive and permanent matrimonial union implied, in theory at the very least, a whole host of things: order, stability, continuation, civilization.

The increasing acceptance of “gay marriage” by governments around the world implies a whole host of other things, not the least of which is the abolition of marriage as we have known it, and its being replaced by a rough and uncomfortable effigy of matrimony, one in which there are no husbands and no wives, no mothers and no fathers, only “spouses” and “parents.” But hey, if the Maltese aren’t happy with their new, re-defined marriage paradigm, it should be easy enough for them to get a divorce.

Making No Sense of It All

There are a number of reasons why many people remain resistant to the cultural zeitgeist of transgenderism, chief among them the fact that it is a mental illness and many people are loath to normalize and celebrate mental illness. But coming from a more practical perspective is the simple fact that nobody really seems to know what transgenderism is, or how to describe it, or how to define it. If you ask ten different people to define the transgender phenomenon, you are very likely to get ten different answers in response, each of them subtly yet critically different from the others.

Surely there is a way to explicate transgenderism in a manner that clarifies the issue sensibly and logically. You would think they would have nailed it after all this time. You would be wrong.

Consider the first few results one comes across when one asks Google to “define transgenderism,” a search that throws back a panoply of conflicting interpretations. Google’s dictionary bumper claims the word describes “a person whose sense of personal identity and gender does not correspond with their birth sex.” Next in line, dictionary.com says it describes “a person whose gender identity does not correspond to that person’s biological sex assigned at birth.” Note the differences here: we’ve lost any reference to “personal identity” from the one definition to the next, and we’ve gone from “birth sex” to “biological sex assigned at birth.”

Next up, Wikipedia’s definition claims that transgender people “have a gender identity or gender expression that differs from their assigned sex.” So now in addition to gender identity, “gender expression” is a marker of transgenderism—an indication that actions, and not just essence, can determine one’s transgender status.

Urban Dictionary, on the other hand, claims that transgender people “identify as a gender other than what they were assigned at birth.” Wait—the other definitions claim that people are assigned a sex, not a gender, and that transgender people’s gender differs from the former, not the latter. Which one is correct?

Consider, next, the National Center for Transgender Equality’s definition: they claim the word is “a term for people whose gender identity, expression or behavior is different from those typically associated with their assigned sex at birth.” So now, in addition to “gender identity” and “gender expression,” we have “gender behavior,” these factors now being qualified by the phrase “typically associated with their assigned sex at birth.” But then there is GLAAD’s definition, which is virtually identical to the NCTE’s—except they’ve left out the term “behavior.” The APA, meanwhile, interviews Columbia PhD Walter Bockting, who claims simply that transgenderism “refers to having a gender identity that differs from one’s sex assigned at birth,” with no mention of “expression” or “behavior.”

The fact that transgenderism has become such a powerhouse cultural phenomenon while remaining such an undefined mystery is rather baffling; activists should be seeking to consolidate these rather disparate definitions, not fragment them. But it’s not just the basic nuances of transgenderism that are at issue; the very terminology of the movement is beset by a weird irreconcilability that is difficult, if not impossible, to figure out.

Consider one of the core tenets of transgenderism, one that appears to varying degrees within most (albeit disparate and confusing) definitions of the term: the idea that transgender people are people whose gender identity does not “match” their biological sex in some way. Activists have traditionally drawn a bright line between “gender identity” (defined by the Human Rights Campaign as “one’s innermost concept of self as male, female, a blend of both or neither”) and sex (which is an objective matter of chromosomes and biology).

But wait: if gender identity and biological sex are two wholly different concepts, then how could they ever “match” in the first place?

Put another way, transgender activists would have us believe that two incompatible elements—biology on the one hand, “innermost concepts” on the other—somehow exist on a congruous spectrum of experience and can thus be “matched” (or else “unmatched”). But this doesn’t really make sense. If the “innermost concept” of gender is indeed an entirely distinct notion from that of sex, then it is meaningless to equate the two in the first place. It would be a little like saying, “My preference for spicy food does not match my shoe size.” The one has nothing to do with the other, and thus the two can never be “matched” in the first place.

There is a growing movement within the ranks of transgender activists to attempt to resolve this difficult conundrum by pointing to a burgeoning body of science that suggests that transgender individuals experience their gender dysphoria due to unique differences in brain structure; more specifically that, as Francine Russo wrote in Scientific American last year, “the brain structures of the trans people were more similar in some respects to the brains of their experienced gender than those of their natal gender.”

Here again we have a problem of definition: “natal gender” is yet another confusing and unclear concept (one imagines the term actually refers to sex). But the proposition is itself odd and more than a little flawed. It is not clear, after all, why one biological phenomenon (i.e., “experienced gender”) should take precedence over another (i.e., genetic sex). If the biological basis for transgenderism is indeed airtight, what about the biological basis for genetic sex, which is far more widely understood and well-established? Put more simply: why should we say that a person’s “experienced gender” determines whether or not they are male or female, rather than their genes or their genitalia or some other genotypical or phenotypical factor?

To their credit, transgender activists appear to have recognized the difficulties inherent in reconciling their bizarre and rather inexplicable beliefs. As a result, the transgender debate has mostly centered not around the phenomenon itself—which is difficult, if not impossible, to render coherent, and is ultimately nonsensical—but around the language of civil rights and bigotry and discrimination: it doesn’t matter if transgenderism is fundamentally a nonsense ideology, what matters is that you’re basically like Bull Connor or John Calhoun if you come down on the wrong side of it. This tactic, it should go without saying, is remarkably effective: many people seem content to just accept the tenets of transgenderism, even if those tenets make absolutely no sense at all. The rest of us, however, are skeptical—and for good and obvious reasons.