More stories

  • Language Paranoia and the Binary Exclusion Syndrome

    In the olden days, which some of us remember as the 20th century, news stories and commentary tended to focus on people and their actions. The news would sometimes highlight and even debate current ideas circulating about society and politics. News stories quite often sought to weigh the arguments surrounding serious projects intended to improve things. The general tendency was to prefer substance over form.

    Things have radically changed since the turn of the century. It may be related to a growing sentiment of fatalism that defines our Zeitgeist. Outside of the billionaire class, people feel powerless, a feeling that is already wreaking havoc in the world of politics. After banks that were “too big to fail,” we have inherited problems that appear too big to solve. Climate change and COVID-19 have contributed powerfully to the trend, and a series of chaotic elections in several of our most stable democracies, accompanied by new wars or prospects of war called upon to replace the old ones, has only reinforced it.

    In the United States, this feeling of helplessness has had the unfortunate effect of turning people’s attention away from the issues and the facts that matter to focus on the language individuals use to describe them. Words that inspire aggressive emotional reactions now dominate the news cycle, eclipsing the people, events and ideas that should be at its core.

    One reason we have launched Fair Observer’s new feature, Language and the News, and are continuing with a weekly dictionary of what was formerly The Daily Devil’s Dictionary is that, increasingly, the meaning of the words people use has been obscured and replaced by the emotions different groups of combative people attach to them.

    What explains this drift into a state of permanent combat over words? Addressing the issues — any issues — apparently demands too much effort, too much wrestling with nuance and perspective. It is much easier to reduce complex political and moral problems to a single word and load that word with an emotional charge that disperses even the possibility of nuance. This was already the case when political correctness emerged decades ago. But the binary logic that underlies such oppositional thinking has now taken root in the culture and goes well beyond the simple identification of words to use or not use in polite society.

    The Problem of Celebrities Who Say Things Out Loud

    Last week, US podcast host Joe Rogan and actress Whoopi Goldberg were subjected to concerted public ostracism (now graced with the trendy word “canceled”) over the words and thoughts they happened to express in contexts that used to be perceived as informal, exploratory conversations. Neither was attempting to make a formal pronouncement about the state of the world. They were guilty of thinking out loud, sharing thoughts that emerged spontaneously.

    It wasn’t James Joyce (who was at one time canceled by the courts), but it was still a stream of consciousness. Human beings have been interacting in that way ever since the dawn of language, at least 50,000 years. The exchange of random and sometimes focused thoughts about the world has been an essential part of building and regulating every human institution we know, from family life to nation-states.

    During these centuries of exchanges, many of the thoughts uttered were poorly or only partially reasoned. Dialogue with others helped them to evolve and become the constructs of culture. Some were mistaken and bad. Others permitted moments of self-enlightenment. Only popes have ever had the privilege of making ex cathedra pronouncements deemed infallible, at least to the faithful. The rest of us have the messy obligation of debating among ourselves what we want to understand as the truth.

    Dialogue never establishes the truth. It permits us to approach it. That hasn’t prevented multiple groups from acquiring the habit of thinking themselves endowed with a papal certainty that allows them to close the debate before it even begins. Everyone has noticed the severe loss of trust in the institutions once counted upon to guide the mass of humanity: governments, churches and the media.

    That general loss of trust means that many groups with like-minded tastes, interests or factors of identity have been tempted to impose on the rest of society the levels of certainty they feel they have attained. Paradoxically, internationally established churches, once dominant across vast swaths of the globe, have come to adopt an attitude of humble dialogue just as governments, the media and various interest groups have become ensconced in promulgating the certainty of their truth while displaying an intolerance of dialogue.

    Dialogue permits us to refine our perceptions, insights and intuitions and put them into some kind of perspective. That perspective is always likely to shift as new insights (good) and social pressures (not always so good) emerge. The sane attitude consists of accepting that no linguistically formulated belief — even the idea that the sun rises in the east — should be deemed to be a statement of absolute truth. (After all, despite everyone’s daily experience, the sun doesn’t rise — the Earth turns.) Perspective implies that, however stable any of our ideas may appear to us at a particular time, we can never be absolutely sure they are right and even less sure that the words we have chosen to frame such truths sum up their meaning.

    Truth and the US State Department

    A quick glance at the media over the past week demonstrates the complexity of the problem. Theoretically, a democratic society will always encourage dialogue, since voting itself, though highly imperfect, is presented as a means for the people to express their intentions concerning real-world issues. In a democracy, a plurality of perspectives is not only desirable but inevitable, and should be viewed as an asset. But those who are convinced of their truth and have the power to impose it see that plurality as a liability.

    On February 3, State Department spokesman Ned Price spent nearly four minutes trying to affirm, in response to a journalist’s persistent objections, that his announced warning about a Russian false flag operation wasn’t, as the journalist suspected, itself a false flag. The journalist, Matt Lee of the Associated Press, asked for the slightest glimpse of the substance of the operation before agreeing to report that there actually was something to report on. What he got were words.

    Price, a former CIA officer, believed that the term was self-explanatory. He clearly expected members of the press to be grateful for receiving “information that is present in the US government.” Price saw Lee’s doubt as a case of a reporter seeking “solace in information that the Russians are putting out.” In other words, either a traitor or a useful idiot. Maggie Haberman of The New York Times reacted by tweeting, “This is really something as an answer. Questioning the US government does not = supporting what Russia is saying.”

    Haberman is right, though she might want to instruct some of her fellow journalists at The Times, who have acquired the habit of unquestioningly echoing anything the State Department, the Defense Department or the intelligence community shares with them. For more than five years, The Times specialized in promoting alarmism about Russia’s agency in the “Havana syndrome” saga. Because the CIA suspected as much, all the cases were treated as the result of “hostile acts.” Acts, by the way, for which the only physically identified perpetrator was a species of Cuban cricket.

    The back and forth concerning Russia’s false flag operation, like the Havana syndrome itself, illustrates a deeper trend that has seriously eroded the quality of basic communication in the United States. It takes the form of an increasingly binary, even Manichean type of reasoning. For Price, it means certainty about the existence of evil Russian acts before any proof is offered and even before those acts take place. But it also appears in the war of obstinate aggression waged by those who seek to silence anyone who suggests that the government’s vaccine mandates and other COVID-19 restrictions may not be justified.

    This binary syndrome now permeates all levels of US culture, not only the political sphere. The constraining force of the law is one thing, which people can accept; the refusal of dialogue is literally anti-human, especially in a democracy. The syndrome also takes the form of moral rage when someone expresses an idea calling into question some aspect of authority or, worse, pronounces a word whose sound alone provokes a violent reaction. There is a residual vigilante culture that still infects US individualism. The willingness, or rather the need people feel, to apply summary justice helps to explain the horrendous homicide rate in the United States. Vigilantism has gradually contaminated the world of politics, entertainment and even education, where parents and school boards go to battle over words and ideas.

    George W. Bush’s Contribution

    US culture has always privileged binary oppositions and shied away from nuance because nuance is seen as an obstacle to efficiency in a world where “time is money.” But a major shift began to take place at the outset of the 21st century that seriously amplified the phenomenon. The 1990s were a decade in which Americans believed their liberal values had triumphed globally following the collapse of the Soviet Union. For many people, it turned out to be boring. The spice of having an enemy was missing.

    In 2001, the Manichean thinking that dominated the Cold War period was thus programmed for a remake. Although the American people tend to prefer both comfort and variety (at least tolerance of variety in their lifestyles), politicians find it useful to identify with an abstract mission consisting of defending the incontestable good against the threat posed by inveterate evil. The updated Cold War was inaugurated by George W. Bush in September 2001 when the US president famously proclaimed, “Every nation, in every region, now has a decision to make: either you are with us, or you are with the terrorists.”

    The cultural attitude underlying this statement is now applied to multiple contexts, not just military ones. I like to call it the standard American binary exclusionist worldview. It starts from the conviction that one belongs to a camp and that camp represents either what is right or a group that has been unjustly wronged. Other camps may exist. Some may even be well-intentioned. But they are all guilty of entertaining false beliefs, like Price’s characterization of journalists who he imagines promote Russian talking points. That has long been standard fare in politics, but the same pattern applies in conflicts concerning what are called “culture issues,” from abortion to gender issues, religion or teaching Critical Race Theory.

    In the political realm, the exclusionist worldview describes the dark side of what many people like to celebrate as “American exceptionalism,” the famous “shining city on a hill.” The idea it promotes supposes that others — those who don’t agree, accept and obey the stated rules and principles — are allied with evil, either because they haven’t yet understood the force of truth, justice, democracy and the American way, or because they have committed to undermining it. That is why Bush claimed they had “a decision to make.” Ned Price seems to be saying something similar to Matt Lee.

    A General Cultural Phenomenon

    But the exclusionist mentality is not just political. It now plays out in less straightforward ways across the entire culture. Nuance is suspected of being a form of either cowardice or hypocrisy. Whatever the question, debate will be cut short by one side or the other because they have taken the position that, if you are not for what I say, you are against it. This is dangerous, especially in a democracy. It implies an assumption of moral authority that is increasingly perceived by others to be unfounded, whether it is expressed by government officials or random interest groups.

    The example of Price’s false flag and Lee’s request for substance — at least something to debate — reveals how risky the exclusionist mentality can be. Anyone familiar with the way intelligence has worked over the past century knows that false flags are a very real item in any intelligence network’s toolbox. Operation Northwoods, a 1962 Pentagon proposal, spelled out clearly what its planners were prepared to carry out. “We could blow up a U.S. ship in Guantanamo Bay and blame Cuba,” a Pentagon official wrote, adding that “casualty lists in U.S. newspapers would cause a helpful wave of national indignation.”

    There is strong evidence that the 2001 anthrax attacks in the US, designed to incriminate Saddam Hussein’s Iraq and justify a war in the immediate aftermath of 9/11, were an attempted false flag operation that failed miserably when it was quickly discovered that the strain of anthrax could only have been produced in America. Lacking this proof, which also would have had the merit of linking Hussein to the 9/11 attacks, the Bush administration had to struggle for another 18 months to build (i.e., fabricate) the evidence of Iraq’s (non-existent) weapons of mass destruction.

    This enabled the “shock and awe” operation that brought down Hussein’s regime in 2003. It took the FBI nearly seven years to complete the coverup of the anthrax attacks designed to be attributed to Iraq. They did so by pushing the scientist Bruce Ivins to commit suicide and burying any evidence that might have elucidated a false flag operation that, by the way, killed five Americans.

    But false flags have become a kind of sick joke. In a 2018 article on false flags, Vox invokes the conventional take that false flag reports tend to be the elements of the tawdry conspiracy theories that have made it possible for people like Alex Jones to earn a living. “So ‘false flag’ attacks have happened,” Vox admits, “but not often. In the world of conspiracy theorists, though, ‘false flags’ are seemingly everywhere.” If this is true, Lee would have been on the right track to suspect the intelligence community and the State Department of fabricating a conspiracy theory.

    Although democracy is theoretically open to a diversity of competing viewpoints, the trend in the political realm has always pointed toward a binary contrast rather than the development of multiple perspectives. The founding fathers of the republic warned against parties, which they called factions. But it didn’t take long to realize that the growing cultural diversity of the young nation, already divided into states that were theoretically autonomous, risked creating a hopelessly fragmented political system. The nation needed to construct some standard ideological poles to attract and crystallize the population’s political energies. In the course of the 19th century, a two-party system emerged, following the pattern of the Whigs and Tories in England, something the founders initially hoped to avoid.

    It took some time for the two political parties to settle into a stable binary system under the labels Democrat and Republican. Their names reflected the two pillars of the nation’s founding ideology. Everyone accepted the idea that the United States was a democratic republic, if only because it wasn’t a monarchy. It was democratic because people could vote on who would represent them.

    It took nearly 200 years to realize that, because the two fundamental ideas constituting the nation’s founding ideology had been monopolized by two parties, there was no room for a third, fourth or fifth party to challenge them. The two parties owned the playing field. At some point in the late 20th century, the parties became competitors only in name. They morphed into an ideological duopoly that had little to do with the idea of being either a democracy or a republic. As James Carville insisted in his advice to candidate Bill Clinton in the 1992 presidential campaign, “It’s the economy, stupid.” He was right. As it had evolved, the political system represented the economy and no longer the people.

    Nevertheless, the culture created by a two-century-long rivalry contributed mightily to the triumph of the binary exclusionist worldview. In the 20th century, the standard distinction between Democrats and Republicans turned around the belief that the former believed in an active, interventionist government stimulating collective behavior on behalf of the people, and the latter in a minimalist barebones government committed to reinforcing private enterprise and protecting individualism.

    Where the two parties, as a duopoly, ended up agreeing was that interventionism was good when directed elsewhere, in the form of a military presence across the globe intended to demonstrate aggressive potential. This was not because either party believed in dominating foreign lands, but because both realized that the defense industry was the one thing Republicans could accept as a legitimate, highly constraining collective national enterprise and that Democrats, following Carville’s dictum, saw as underpinning a thriving economy in which ordinary people could find employment.

    The Crimes of Joe Rogan and Whoopi Goldberg

    Politics, therefore, set in place a more general phenomenon: the binary exclusionist worldview that would soon spread to the rest of the culture. Exclusionism is a common way of thinking about what people consider to be issues that matter. It has fueled the deep animosity between opposing sides around the so-called cultural issues that, in reality, have nothing to do with culture but increasingly dominate the news cycle.

    Until the launch of the culture wars around issues such as abortion, gay marriage, identity and gender, many Americans had felt comfortable as members of two distinct camps. As Democrats and Republicans, they functioned like two rival teams in sport. Presidential elections were always Super Bowls of a sort at which the people would come for the spectacle. The purpose of the politicians that composed the parties was not to govern, but to win elections. But, for most of the 20th century, the acrimony they felt and generated focused on issues of public policy, which, once implemented, the people would accept, albeit grudgingly if the other party was victorious. After the storm, the calm. In contrast, cultural issues generate bitterness, resentment and ultimately enmity. After the storm, the tempest.

    The force of the raging cultural winds became apparent last week in two entirely different celebrity incidents, concerning Joe Rogan and Whoopi Goldberg. Both were treated to the new style of excommunication that the various churches of correct thinking and exclusionary practices now mete out on a regular basis. In an oddly symmetrical twist, the incriminating words were what is now referred to as “the N-word” spoken by a white person and the word “race” spoken by a black person. Later in the week, a debate arose about yet another word with racial implications — apartheid — when Amnesty International formally accused the state of Israel of practicing it against Palestinians.

    The N-word has become the locus classicus of isolating an item of language that — while muddled historically and linguistically — is so definitively framed that, even while trying to come to grips with it informally as an admittedly strange and fascinating phenomenon in US culture, any white person who utters the reprehensible term will be considered to have delivered a direct insult to a real person or an entire population. Years ago, Joe Rogan made a very real mistake that he now publicly regrets. While examining the intricate rules surrounding the word and its interdiction, he allowed himself the freedom to actually pronounce the word.

    In his apology, Rogan claimed that he hasn’t said the word in years, which in itself is an interesting historical point. He recognizes that the social space for even talking about the word has become exaggeratedly restricted. Suspecting Rogan of racism on that basis alone may be legitimate and worth examining, but branding him a racist is simply an erroneous procedure. Using random examples from nearly 10 years ago may raise some questions about the man’s culture, but it makes no valid case for proving that Rogan is, or even was at the time, a racist.

    The Whoopi Goldberg case is less straightforward because it wasn’t about a word but an idea. She said the Holocaust “was not about race.” Proposing the hypothesis that Nazi persecution of Jews may be a case of something other than simple racism is the kind of thought any legitimate historian might entertain and seek to examine. It raises some serious questions not only about what motivated the Nazis, but about what our civilization means by the words “race” and “racism.” There is considerable ambiguity to deal with in such a discussion, but any statement seeking to clarify the nature of what is recognized as evil behavior should be seen as potentially constructive.

    Once some kind of perspective can be established about the terms and formulations that legitimately apply to the historical case, it could be possible to conclude, as many think, that Goldberg’s particular formulation is legitimate, inaccurate or inappropriate. Clearly, Goldberg’s critics found her formulation inappropriate, but, objectively speaking, they were in no position to prove it inaccurate without engaging with the meaning of “race.”

    The problem is complex because history is complex, both the history of the time and the historical moment today. One of the factors of complexity appeared in another controversy, created by Amnesty International’s publication of a study that accuses Israel of being an apartheid state, a practice that international law considers a crime against humanity.

    Interestingly, The Times of Israel gives a fair and very complete hearing to Amnesty International’s spokespersons, whereas American media largely ignored the report. When they did cover it, US media focused on the dismissive Israeli reaction. PBS News Hour quoted Ned Price, who in another exchange with Matt Lee stated that the department rejects “the view that Israel’s actions constitute apartheid.”

    Once again, the debate is over a word, the difference in this case being that the word is specifically defined in international law. The debate predictably brought into play another word, whose definition has often been stretched in extreme directions in the interest of provoking strong emotions: anti-Semitism. Goldberg’s incriminating sentence itself was branded by some as anti-Semitic.

    At the end of the day, the words used in any language can be understood in a variety of ways. Within a culture that has adopted the worldview of binary exclusionism, the recourse to constructive dialogue is rapidly disappearing. Instead, we are all saddled with the task of trying to memorize the lists of words one can and cannot say and the ideas it will be dangerous to express.

    What this means is that addressing and solving real problems is likely to become more and more difficult. It also means that the media will become even less trustworthy than it already is today. For one person, a “false flag” corresponds to a fact, and for another, it can only be the component of a conspiracy theory. The N-word is a sound white people must never utter, even when reading Mark Twain’s Huckleberry Finn aloud. And the word “race” — a concept that has no biological reality — now may apply to any group of people who have been oppressed by another group and who choose to be thought of as a race.

    The topics these words refer to are all serious. For differing reasons, they are all uncomfortable to talk about. But so are issues spawned by the COVID-19 pandemic, related to health and prevention, especially when death and oppressive administrative constraints happen to be involved. The real problem is that as soon as the dialogue begins to stumble over a specific word, an ill-defined concept or a feeling of injustice, reasoning is no longer possible. Obedient acceptance of whatever has imposed itself as the “norm” is the only possible survival strategy, especially for anyone visible to the public. But that kind of obedience may not be the best way to practice democracy.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • The Art of Prince Andrew’s Lawyers

    With everything that has been going on as the world seeks to weigh the chances of a nuclear war and a realignment of nations across the globe, fans of the media may have failed to tune into the real news that broke in recent weeks. Forget Ukraine, there is another drama whose suspense is building. It obviously concerns the fate of the battered Prince Andrew because of his role in the Jeffrey Epstein/Ghislaine Maxwell saga that has already produced an officially (and conveniently) declared “suicide” (Epstein’s) and a celebrity criminal trial (Maxwell’s). 

    Now that a US judge has agreed to bring Virginia Giuffre’s civil lawsuit to trial, for the first time a prince of England, a member of the royal family, will be officially put on the hot seat in an American courtroom. The rebelling colonists couldn’t get King George III to answer for his crimes, but their descendants now appear to have a son of Elizabeth II in their grasp.

    For weeks, the media have been running updates specifically on speculation about the legal strategy Andrew’s attorneys are likely to adopt. Though for the moment it remains mere speculation, it does have the power to provoke a few comic effects for attentive observers. The latest hypothesis has the lawyers seeking to turn the tables on Giuffre by accusing her of sex trafficking. They aren’t claiming Andrew is innocent, but they want her to appear guilty. Business Insider considers that ploy “risky” because the tactic consists of getting a witness — another of Epstein’s victims — to make that claim about Giuffre. It risks backfiring because the witness could actually contradict Andrew’s adamant claim that he never had sex with Giuffre.

    Actually, the legal team appears already to have prepared a strategy for that eventuality. On January 26, NPR reported that Andrew’s lawyers addressed a message to the court saying, “that if any sexual activity did occur between the prince and Virginia Giuffre, it was consensual.” This may sound odd because the accused’s lawyers should know if he did or didn’t, but the law is never about knowledge, only the impression a good attorney can make on a judge or a jury.

    NPR continues its description of the lawyers’ position: “The court filing made clear that Andrew wasn’t admitting sexual contact with Giuffre. But it said if the case wasn’t dismissed, the defense wants a trial in which it would argue that her abuse claims ‘are barred by the doctrine of consent.’”

    Today’s Weekly Devil’s Dictionary definition:

    Consent:

    Agreement on something perceived as illicit between two or more people, including, in some extreme cases, a member of the British royal family and a 17-year-old American girl turned into a sex slave by the royal’s best American friend

    Contextual Note

    Since lawyers live in a world of hypotheticals, evoking the idea that “if” a judge and jury were to decide sexual contact between the two was real should enable the legal team to make a claim they expect the court to understand as: she was asking for it. In civil cases, all lawyers know that attack is the best defense.

    Thus, Andrew’s legal team is now being paid, not to prove the prince’s innocence, but to establish the guilt of the victim. They are seeking to create the impression that the Virginia Roberts of two decades ago was already a wolf in sheep’s clothing when she consented to consorting with a prince. And, of course, continues to be one as she seeks to profit from the civil trial today.

    Most commentators doubt that Andrew has a case. This has permitted the media to revel in the humiliation of a man who has always been perceived as supercilious and deserving of no one’s attention apart from being the queen’s “favourite son.” That is why this has been nothing but bad news for Buckingham Palace. 

    And it looks to get worse. So stay tuned.

    Historical Note

    Legal experts tell us that what the prince’s lawyers refer to as the “doctrine of consent” is officially described as the “doctrine of informed consent.” More pertinently, the consent referred to focuses entirely on cases in the realm of medical treatment. It is all about a patient’s agreement to a medical procedure that may be risky. It defines the physician’s duty to inform the patient of all the risks associated with a recommended procedure. If consent is obtained, the physician will be clear of responsibility should any of the risks be realized.

    It may seem odd that Prince Andrew’s lawyers are appealing to a doctrine established specifically for medical practice. But while many will not think of lawyers themselves as appealing, whenever they lose a case, you can be sure that they will be appealing it. But that isn’t the only kind of appealing they do. When preparing a case, they will appeal to any random principle or odd fact that appears to serve their purpose. This should surprise no one because, just like politicians who focus on winning elections rather than governing, lawyers focus on winning cases for their clients rather than on justice.

    The sad truth, however, for those who believe that justice is a fine thing to have as a feature of an advanced civilization is that the lawyers are not only right to follow that logic; the best of their lot are also very skillful in making it work. Which is why what we call the justice system will always be more “just” for those who can afford to pay for the most skillful lawyers.

    The final irony of this story lies in the fact that, in their diligence, the lawyers have borrowed the idea behind the doctrine of consent, not from the world of sexual predation, but from the realm of therapy and medical practice. They need to be careful at this point. Even Andrew and his lawyers should know that if you insert a space in the word “therapist,” it points to the image Prince Andrew has in some people’s minds: “the rapist.” The mountains of testimony from Jeffrey Epstein’s countless victims reveal that, though they were undoubtedly consenting in some sense to the masterful manipulation of the deceased billionaire and friend to the famous and wealthy (as well as possibly a spy), all of them have been to some degree traumatized for life by the experience.

    As Bill Gates explained when questioned about the problem of his own (he claims ill-informed) consent to whatever he was up to with Epstein, for him there could be no serious regrets. The problem no longer exists because, well, “he’s dead” (referring to his pal, Jeffrey). Prince Andrew is still alive, though this whole business has deprived him of all his royal privileges, making him something of a dead branch on the royal family tree. Virginia Giuffre is also still alive, though undoubtedly disturbed by her experience as a tool in the hands of Jeffrey Epstein, Ghislaine Maxwell and Prince Andrew.

    So, unless a nuclear war between the US and Russia intervenes in the coming weeks, making everything else redundant (including the collapse of Meta’s stock), the interesting news will turn around the legal fate in the US of two prominent Brits. The first is a socialite (and possibly also a spy) as well as a high-profile heiress, Ghislaine Maxwell. She is expected to have a retrial sometime in the future. The second is none other than the queen’s favorite son.

    *[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of The Fair Observer Devil’s Dictionary.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • A Personal Boycott of the Beijing Olympic Games

    The International Olympic Committee (IOC) and the world’s largest corporations are allowing the government of China to use the Winter Olympic Games to promote and advance its notion of the superiority of one-party, one-man authoritarian rule, much as was done at the 1936 Nazi-hosted Olympic Games in Berlin.

    I’m boycotting these games in Beijing. Doing so does not come easy for me. As a life-long sports enthusiast, I have always looked forward to the Olympics. Watching the world’s preeminent athletes compete on the world stage and rooting for my own national team and others who seem to defy the oddsmakers never failed to excite me. As a kid, I even once dreamed of becoming an Olympic competitor myself. (Alas, my 1.7-meter frame was simply not up to the task of throwing the shot put or discus on the world, or any other, stage!)

    Here in the United States, NBC television is broadcasting the Winter Olympics, devoting at least six hours per day of coverage. Traditionally, its broadcasts dominate the ratings as Americans gather in front of their TV sets and computer and phone screens to watch and cheer on US athletes. I will be cheering on our athletes, too. But I won’t be watching.

    The IOC’s Charter

    I will not watch these games because they betray the very values enshrined in the IOC’s charter and its definition of “Olympism.” That is, it “seeks to create a way of life based on the joy of effort, the educational value of good example, social responsibility and respect for universal fundamental ethical principles.” It further states its goal “to place sport at the service of the harmonious development of humankind, with a view to promoting a peaceful society concerned with the preservation of human dignity.”

    Based on its charter, the IOC should have flatly denied China’s petition to host the 2022 Winter Games. How could the IOC have been so blind to its values in awarding the games to Beijing? How was it possible to allow China to host the Olympic Games when the government of the People’s Republic of China has systematically persecuted, incarcerated, shackled and tortured up to 2 million Uyghurs, sterilized their women and sought to snuff out their Muslim faith? Uyghurs, a Muslim-majority, Turkic-speaking people, have inhabited China’s western Xinjiang province for at least 1,000 years.

    But the suffering of the Uyghurs at the hands of an overbearing, intolerant Beijing isn’t a one-off. The Chinese have been doing largely the same thing for decades to the people of Tibet, effectively carrying out a campaign of cultural genocide.

    Several years ago, the world again witnessed China’s notion of “respect for universal fundamental ethical principles” and “promoting a peaceful society concerned with the preservation of human dignity.” Beijing-directed henchmen attacked the people and institutions of Hong Kong, decimating the last vestiges of democracy in the enclave. The government has been arresting and trying any and all opponents, dissidents, journalists and human rights advocates unwilling to buckle under Beijing’s iron-fisted, authoritarian order.

    More recently, the world has observed Beijing turn its aggression to the island of Taiwan, the lone democratic outpost today within China’s one-party, one-man “Asian Reich.” Taiwan presents an unquestionably complex and difficult issue. But the inhabitants of Taiwan have embraced democracy and the freedoms that come with it. Resolving Beijing’s differences with the island and its people with menacing and aggressive behavior — dozens of mass warplane incursions, repeated threats and belligerent bombast — cannot possibly lead to a solution. Rather, a threatened invasion of the island would not only likely crush its democracy, but also inject enormous instability in Asia and torpedo the global economy in a manner unseen since World War II.

    To the IOC, however, none of this mattered. Its president, Thomas Bach, and even UN Secretary-General Antonio Guterres traveled to Beijing for the opening ceremony of the games with nary a word about China’s abysmal human rights policies in Xinjiang, Hong Kong or Tibet. Instead, the IOC wants to see another “successful” games, which typically means an Olympics that makes money. Lots of it.

    The IOC, NBC and Sponsors

    Enter the American media giant, NBC. For exclusive broadcast rights to the Olympics through 2032, the network has paid the IOC $7.75 billion. That comes out to roughly $1.8 billion for the Beijing Games alone, or about 20% of the cost of the games. Tragically, revenues trump rights for China and for the IOC.

    One would think that with that kind of leverage, NBC and the IOC’s numerous sponsors and advertisers — globally recognized names like Allianz, Toyota, Bridgestone, Panasonic, Coca-Cola, Airbnb, Intel, Procter & Gamble, Visa, Samsung and others — would have stood up to the IOC, explaining the harm to their brands of awarding the games to Beijing.

    And what about NBC itself? The Chinese government has imposed restrictions on journalists covering the games. The sort of 360-degree coverage NBC traditionally features at the Olympics — not just the events themselves but also the athletes, their lives and backgrounds, the host country and its people — is being severely restricted. One Dutch journalist has already experienced China’s intolerance, having been dragged away while reporting live on camera.

    Are the dollar earnings so great that NBC will sacrifice its journalistic ethics and responsibilities, all while other members of the profession suffer under Beijing’s crackdown on truth and free journalism?

    China is not Nazi Germany. But Germany in 1936 was not yet the depraved hell of human suffering — the tens of millions of destroyed lives of Jews, Slavs, Roma and so many others — that it would become under Nazi rule. Yet we might have seen it coming, given the way the Nazis and Adolf Hitler engaged in over-the-top self-promotion and outward, sensational displays of Aryan superiority and Nazi rule.

    The IOC, NBC and their many sponsors and advertisers have given China center stage to arrogantly parade and shamelessly hawk its own brand of unyielding, intolerant authoritarian rule. In China, the power of the state, its ruling Communist Party and great leader, Xi Jinping, vitiate Olympism’s concepts of “social responsibility and respect for universal fundamental ethical principles” and “basic human dignity.”

    If they won’t recognize this contemptible undertaking for what it is, I will. I will miss the world’s best athletes and the great ritual of the world coming together for 17 days to celebrate individual struggle and achievement. I won’t be watching these Winter Olympic Games.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • Truths, Not Myths, About Pakistan’s Founder Muhammad Ali Jinnah

    Many scholars have spilled much ink on Pakistan’s founder, Muhammad Ali Jinnah. A giant has now waded into the fray and penned a masterpiece. 

    Ishtiaq Ahmed is a professor emeritus at Stockholm University who first made his name with a pathbreaking book, “The Pakistan Garrison State: Origins, Evolution, Consequences.” He then went on to pen the award-winning “The Punjab Bloodied, Partitioned and Cleansed,” a tour de force on the partition of Punjab in 1947. Now, Ahmed has published “Jinnah: His Successes, Failures and Role in History,” a magisterial 800-page tome on Pakistan’s founder.

    Ahmed is a meticulous scholar who has conducted exhaustive research on the writings and utterances of Jinnah from the moment he entered public life. Pertinently, Ahmed notes the critical moments when Jinnah “spoke” by choosing to remain quiet, using silence as a powerful form of communication. More importantly, Ahmed has changed our understanding of the history of the Indian subcontinent.

    Setting the Record Straight

    Until now, scholars like Stanley Wolpert, Hector Bolitho and Ayesha Jalal have painted a pretty picture of Jinnah, putting him on a pedestal and raising him to mythical status. Wolpert wrote, “Few individuals significantly alter the course of history. Fewer still modify the map of the world. Hardly anyone can be credited with creating a nation-state. Muhammad Ali Jinnah did all three.” Both Wolpert and Bolitho argued that Jinnah created Pakistan. Jalal has argued that “Jinnah did not want Partition.” She claims Jinnah became the sole spokesman of Muslims and the Congress Party forced partition upon him. 

    Jalal’s claim has become a powerful myth on both sides of the border. In this myth, the Congress in general and India’s first prime minister, Jawaharlal Nehru, in particular opted for partition instead of sharing power with the Muslim League and Jinnah. Jalal makes the case that “Punjab and Bengal would have called the shots” instead of Uttar Pradesh, making the emergence of the Nehru dynasty impossible. Her claim that “the Congress basically cut the Muslim problem down to size through Partition” has cast Jinnah into the role of a tragic hero who had no choice but to forge Indian Muslims into a qaum, a nation, and create Pakistan.

    The trouble with Jalal’s compelling argument is that it is not based on facts. She fails to substantiate her argument with even one of Jinnah’s speeches, statements or messages. Ahmed’s close examination of the historical record demonstrates that Jinnah consistently demanded the partition of British India into India and Pakistan after March 22, 1940. Far from Nehru forcing partition on a reluctant Jinnah, it was an intransigent Jinnah who pushed partition upon everyone else.

    Ahmed goes on to destroy Jalal’s fictitious claim that Nehru engineered the partition of both Punjab and Bengal to establish his dynasty. Punjab’s population was 33.9 million, of which 41% was Hindu and Sikh. Bengal’s population was 70.5 million, of which 48% was Hindu. The population of United Provinces (UP), modern-day Uttar Pradesh, was 102 million, of which Hindus formed an overwhelming 86%. When Bihar, Bombay Presidency, Madras Presidency, Central Provinces, Gujarat and other states are taken into account, the percentage of the Hindu population was overwhelming. In 1941, the total Muslim population of British India was only 24.9%. This means that Nehru would have become prime minister even if India had stayed undivided.

    Ahmed cites another fact to buttress his argument that Nehru’s so-called dynastic ambitions had nothing to do with the partition. When Nehru died, Gulzarilal Nanda became interim prime minister before Lal Bahadur Shastri took charge. During his time in power, Nehru did not appoint Indira Gandhi as a minister. It was Kumaraswami Kamaraj, a Congress Party veteran, and other powerful regional satraps who engineered the ascent of Indira Gandhi to the throne. These Congress leaders believed that Nehru’s daughter would be weak, allowing them greater say over party affairs than they would have had under their eccentric colleague Morarji Desai. Once Indira Gandhi took over, she proved to be authoritarian, ruthless and dynastic. By blaming the father for the sins of the daughter, Jalal demonstrates that she understands neither India’s complex demography nor its complicated history.

    To get to “the whole truth, and nothing but the truth” about India’s partition, we have to read Ahmed. This fastidious scholar analyzes everything Jinnah wrote and said from 1906 onward, the year Pakistan’s founder entered public life. Ahmed identifies four stages in Jinnah’s career. In the first, Jinnah began as an Indian nationalist. In the second, he turned into a Muslim communitarian. In the third, Jinnah transformed himself into a Muslim nationalist. In the fourth and final stage, he emerged as the founder of Pakistan, where he is revered as Quaid-i-Azam, the great leader, and Baba-i-Qaum, the father of the nation.

    Ahmed is a political scientist by training. Hence, his analysis of each stage of Jinnah’s life is informed both by historical context and political theory. Jinnah’s rise in Indian politics occurred at a time when leaders like Motilal Nehru, Mahatma Gandhi, Sardar Vallabhbhai Patel, Jawaharlal Nehru, Maulana Abul Kalam Azad and Subhas Chandra Bose were also major players in India’s political life and struggle for freedom. Jinnah’s role in the tortured machinations toward dominion status and then full independence makes for fascinating reading. Ahmed also captures the many ideas that impinged on the Indian imagination in those days, from Gandhi’s nonviolence and Jinnah’s religious nationalism to Nehru’s Fabian socialism.

    Jinnah’s Tortured Journey

    As an Indian nationalist, Jinnah argued that religion had no role in politics. His crowning achievement during these days was the 1916 Lucknow Pact. Together with Congress leader Bal Gangadhar Tilak, Jinnah forged a Hindu-Muslim agreement that “postulated complete self-government as India’s goal.” That year, Jinnah declared that India was “not to be governed by Hindus, and … it [was] not to be governed by the Muslims either, or certainly not by the English. It must be governed by the people and the sons of this country.” Jinnah advocated constitutionalism, not mass mobilization, as a way to achieve this ideal. 

    When the Ottoman Empire collapsed at the end of World War I, Indian Muslims launched a mass movement to save this empire. Among them was Jinnah, who sailed to England as part of the Muslim League delegation in 1919 to plead that the Ottoman Empire not be dismembered and famously described the dismemberment of the empire as an attack on Islam.

    To support the caliph, Indian Muslim leaders launched the Khilafat Movement. Soon, this turned into a mass movement, which Gandhi joined with much enthusiasm. Indian leaders were blissfully unaware that their movement ran contrary to the nationalistic aspirations of Turks and Arabs themselves.

    Later, Islam would emerge as the basis of a rallying cry in Indian politics. The nationalist Jinnah started singing a different tune: He argued that Muslims were a distinct community from Hindus and sought constitutional safeguards to prevent Hindu majoritarianism from dominating. In the 1928 All Parties Conference that decided upon India’s future constitution, Jinnah argued that residuary powers should be vested in the provinces, not the center, in order to prevent Hindu domination of the entire country. Ahmed meticulously documents how the British used a strategy of divide and rule, ensuring that the chasm between the Congress and the Muslim League would become unbridgeable.

    As India turned to mass politics under Gandhi, Jinnah retreated to England. After a few quiet years there, he returned to India in 1934 and was elected to the Central Legislative Assembly, the precursor to the parliaments of both India and Pakistan. Jinnah argued that there were four parties in India: the British, the Indian princes, the Hindus and the Muslims. He took the view that the Congress represented the Hindus while the Muslim League spoke for the Muslims.

    Importantly, Jinnah now claimed that no one except the Muslim League spoke for the Muslims. This severely undercut Muslim leaders in the Congress. Jinnah had a visceral hatred for the erudite Congress leader Azad, who was half Arab and a classically-trained Islamic scholar with an encyclopedic knowledge of the Quran, the hadith and the various schools of Islamic thought. Furthermore, Azad’s mastery of the Urdu language stood unrivaled. He wrote voluminously in this pan-national Muslim lingua franca. In contrast, Jinnah was an anglicized lawyer who wrote in English and spoke poor Urdu.

    Jinnah’s argument that the Muslim League was the only party that could represent Muslims was not only conceptually flawed, but also empirically inaccurate. Muslims in Bengal, Punjab, Sindh and the North-West Frontier Province (NWFP) supported and voted for regional political parties, not the Muslim League. In fact, voters gave the Muslim League a drubbing in 1937. This hardened Jinnah’s attitude, as did the mass contact program with Muslims that the Congress launched under Nehru. When the Congress broke its gentleman’s agreement with the Muslim League to form a coalition government in United Provinces (UP) after winning an absolute majority, Jinnah turned incandescent.

    In retrospect, the decision of the Congress to go it alone in UP was a major blunder. After taking office, the Congress started hoisting its flag instead of the Union Jack and barred governors from attending cabinet meetings. Many leaders of the Muslim League joined the Congress, infuriating Jinnah. He drew up a list of Congress actions that he deemed threatening to Islam. These included the Muslim mass contact campaign, the singing of Vande Mataram, Gandhi’s Wardha Scheme of Basic Education and restrictions on cow slaughter. Jinnah came to the fateful decision that he could have no further truck with the Congress, and the die was cast for a dark era in Indian history.

    The Two-Nation Champion

    In March 1940, Jinnah threw down the gauntlet to the Congress. At a speech in Lahore, he argued that India’s unity was artificial, that it dated “back only to the British conquest” and that it was “maintained by the British bayonet.” He asserted that “Hindus and Muslims brought together under a democratic system forced upon the minorities can only mean Hindu Raj.”

    In this speech, Jinnah argued that Hindus and Muslims belonged “to two different civilisations which are based mainly on conflicting ideas and conceptions.” He claimed that Muslims were “a nation according to any definition of a nation, and they must have their homelands, their territory, and their state.” Ahmed rightly points out that this speech was Jinnah’s open declaration of his politics of polarization. From now on, Jinnah had set the stage for the division of India.

    Ahmed also goes into the claims of Chaudhry Sir Muhammad Zafarullah Khan, popularly known as Sir Zafarullah, an Ahmadi leader who was Pakistan’s first foreign minister. Khan and his admirers have claimed credit for the Muslim League’s Lahore resolution for Pakistan, following Jinnah’s historic speech. It turns out that Khan was implicitly supported by British Viceroy Lord Linlithgow, who cultivated Khan and extended his tenure as a member of the Viceroy’s Executive Council. This indicates that Jinnah’s bid for Pakistan had the support of a canny Scot who wanted Indian participation in World War II, something the Congress opposed without the promise of postwar independence.

    While Jalal might trumpet Jinnah as the sole spokesman of the Muslims, the historical record reveals a very different picture. Within a month of Jinnah’s Lahore speech, the All India Azad Muslim Conference met in Delhi. Its attendance was five times that of the Muslim League’s Lahore session. This conference opposed partition, repudiated Jinnah’s two-nation theory and made a strong case for a united India.

    Others argued for a united India too. Ahmed tells us that Bhimrao Ramji Ambedkar, the towering Dalit social reformer who drafted India’s constitution, reversed his position on partition and on Pakistan. After the Lahore resolution, Ambedkar wrote a 400-page piece titled “Thoughts on Pakistan” that advised Hindus to concede Pakistan to the Muslims. By 1945, Ambedkar had come to the view that “there was already a Pakistan” in the Muslim-majority states. As a Dalit, he also turned against the hierarchy in the Muslim community where the high-born Ashrafs lorded it over the low-born Ajlafs and women had very limited rights.

    Jinnah took the haughty view that Muslims were not a large minority but a political nation entitled to self-determination. In 1941, he claimed that Muslims “took India and ruled for 700 years.” So, they were not asking the Hindus for anything. He was making the demand to the British, the rulers of India. Jinnah might have been arrogant but he had a genius for propaganda. He constantly fed the press with stories about impending dangers to Muslims once the Congress took over, fueling insecurities, distrust and division.

    While Jinnah was ratcheting up the pressure, the Congress made a series of political blunders. It vacated the political space when World War II broke out in 1939. Gandhi idealistically opposed the British while Jinnah collaborated with them, extracting valuable concessions from his colonial masters. When Field Marshal Archibald Wavell took over from Lord Linlithgow as the Viceroy, Jinnah wormed himself into Wavell’s confidence. It helped that Wavell despised the anti-colonial Congress. Ahmed observes that this British general “wanted to ensure that Britain’s military interest in the form of bases and manpower was secured.” Jinnah offered him that option while Gandhi did not. 

    Jinnah was bloody-minded and shrewd but he was also plain lucky. Many of those who could have contested his leadership simply passed away. Sir Mian Muhammad Shafi, an aristocrat from the historic city of Lahore and a founder of the Muslim League, died in 1932. Sir Mian Fazl-i-Husain, a founding member of Punjab’s Unionist Party who served as counselor to the British Viceroy, died in 1936. Sir Sikandar Hayat Khan, the towering premier of Punjab, died in December 1942. Allah Baksh Soomro, the premier of Sindh, was assassinated in 1943. Sir Chhotu Ram, the co-founder of the National Unionist Party that dominated Punjab, died in 1945. With such giants of Punjab and Sindh dying, the Gujarati Jinnah gained an opportunity to dominate two Muslim-majority provinces where the Muslim League had struggled to put down roots.

    Last-Ditch Efforts to Preserve the Indian Union

    It was not all smooth sailing for Jinnah, though. In 1945, the Conservatives led by Winston Churchill lost the general election. Clement Attlee formed a Labour government committed to India’s independence. By this time, Jinnah was in full-fledged confrontation mode. When Wavell convened the 1945 Simla Conference, Jinnah had insisted that the Congress could not appoint any Muslim representatives. As a result, the conference failed and the last chance for a united independent India went up in smoke.

    Ironically, Jinnah wanted the partition of India but opposed the partition of Punjab and Bengal. In December 1945, Wavell observed that if Muslims could have their right to self-determination, then non-Muslim minorities in Muslim areas could not be compelled to remain in Pakistan against their will. Therefore, the partition of Punjab and Bengal was inevitable. Jinnah would only get his moth-eaten version of Pakistan.

    By now, the British wanted to leave. The 1946 Naval Uprising shook British rule to the core. A revolt by naval ratings, soldiers, police personnel and civilians made the British, weary after World War II, realize that the loyalty of even the armed forces could not be taken for granted. During World War II, large numbers had joined Bose’s Indian National Army and fought against the British. After the 1946 uprising, the writing was on the wall. Soon, the Cabinet Mission arrived to discuss the transfer of power from the British government to Indian political leaders. It proposed provinces, groups of provinces and a federal union. The union was to deal only with foreign affairs, defense and communications, and the power to raise finances for these three areas of government activity. The remaining powers were to be vested in the provinces.

    Everyone rejected the Cabinet Mission Plan. Jinnah did not get his beloved Pakistan. The Congress was unwilling to accept such a weak federal government. The Sikhs bridled at the prospect of being “subjected to a perpetual Muslim domination.” Needless to say, the plan was dead on arrival.

    Even as deliberations about the transfer of power were going on, members of the Constituent Assembly were elected in July and August 1946. Of a total of 296 seats for the British provinces, the Congress won 208, the Muslim League 73 and independents 15. British India also had 584 princely states, which had a quota of 93 seats in the Constituent Assembly. These states decided to stay away from the assembly until their relationship with independent India became clearer. This turned out to be a historic blunder.

    By now, the British had decided to leave. On August 24, 1946, Wavell announced on the radio that his government was committed to Indian independence, that an interim government would be formed under the leadership of Nehru and that the Muslim League would be invited to join it. Initially, no member of the Muslim League sat in the first interim government, formed on September 2, but five of its members joined on October 26. This government remained in power until India and Pakistan emerged as two independent states.

    The Run-up to Partition

    Before the two main parties joined the same coalition government, riots broke out across the country. Jinnah called for Direct Action Day on August 16, 1946. Calcutta, now known as Kolkata, experienced the worst violence. SciencesPo estimates that 5,000 to 10,000 died, and some 15,000 were wounded, between August 16 and 19.

    At the time, Bengal was the only province with a Muslim League government, whose chief minister was the controversial and colorful Hussain Suhrawardy. During the “Great Calcutta Killing,” his response was less than even-handed, deepening divisions between Hindus and Muslims. To add fuel to the fire, riots broke out in Noakhali, a part of the Chittagong district now in Bangladesh. In a frenzy of violence, Muslims targeted the minority Hindu community, killing thousands, conducting mass rape, and abducting women to convert them to Islam and forcibly marry them.

    As riots spread across the country and British troops failed to control the violence, India stood on the brink of anarchy. On June 3, 1947, the new Viceroy Louis Mountbatten announced that India would be independent on August 15, chosen symbolically as the date on which, two years earlier, Imperial Japan had surrendered and Japanese troops in Southeast Asia had submitted to Mountbatten himself.

    Importantly, independent India was to be partitioned into India and Pakistan. While the border was yet to be demarcated, the contours fell along expected lines. Yet partition came as a bolt from the blue for the Sikhs. In the dying days of the Mughal Empire, this community had created an empire of its own, which fell to the British only in 1849. Yet the Sikhs were now a minority in Punjab and widely dispersed across the province. The British had co-opted the Sikhs by recruiting them into the army in large numbers. The colonial authorities had given retired soldiers land in colonies they had settled near irrigation canals. These canal colonies were dotted around Punjab, and Mountbatten noted that "any partition of this province [would] inevitably divide them."

    Ahmed is critical of the way the British planned the partition of Punjab. They assumed that the transfer of power would be peaceful. Mountbatten trusted the Congress, the Muslim League and the Akali leadership of the Sikhs who promised to control their followers. Evan Meredith Jenkins, the British governor of Punjab, did not. He predicted that “bloodbath was inevitable in Punjab unless there were enough British troops to supervise the transfer of power.” History has proved Jenkins right.

    Ahmed’s award-winning earlier work, “The Punjab: Bloodied, Partitioned and Cleansed,” records those macabre days in grim detail. By this time, colonial troops were acting on communal sentiment. In Sheikhupura, the Muslim Baluch regiment participated in the massacre of Hindus and Sikhs. In Jullundur and Ludhiana, Hindu and Sikh soldiers killed Muslims. Even princely states were infected by this toxic communal sentiment. Ian Copland details how troops of Punjab’s princely states, including Patiala and Kapurthala, slaughtered Muslims.

    In the orgy of violence that infected Punjab, all sorts of characters from criminals and fanatics to partisan officials and demobilized soldiers got involved. The state machinery broke down. The same was true in Bengal. As a result, independence in 1947 came at a terrible cost.

    Jinnah Takes Charge

    Right from the outset, India and Pakistan embarked on different trajectories. Mountbatten remained as governor-general of India, a position retained until 1950 to facilitate the transition to full-fledged Indian rule. In contrast, Jinnah took over as governor-general of Pakistan. This move weakened both Parliament and the prime minister. As the all-powerful head of a Muslim state, Jinnah left no oxygen for the new parliamentary democracy of Pakistan.

    Nawabzada Liaquat Ali Khan, an Oxford-educated aristocrat from UP, took charge as prime minister. Yet it was an open secret that Khan had little authority and that Jinnah called all the shots. In India, Rajendra Prasad took charge as the president of the Constituent Assembly of India and the Dalit scholar Ambedkar became the chair of the drafting committee. In contrast, Jinnah was elected unanimously as the president of the Constituent Assembly of Pakistan, which failed to draft a constitution and was acrimoniously dissolved in 1954.

    This assembly might not have amounted to much, but a speech by Jinnah lives on in history books and is a subject of much debate. On August 11, 1947, Jinnah declared: “If you change your past and work together in a spirit that every one of you, no matter to what community he belongs, no matter what relations he had with you in the past, no matter what is his colour, caste, or creed, is first, second, and last a citizen of this State with equal rights, privileges, and obligations, there will be no end to the progress you will make.”

    Jinnah summoned his 1916 self that championed Hindu-Muslim unity and blamed the colonization of 400 million souls on internal division. His rhetoric took flight and he claimed that “in course of time all these angularities of the majority and minority communities, the Hindu community and the Muslim community — because even as regards Muslims you have Pathans, Punjabis, Shias, Sunnis and so on, and among the Hindus you have Brahmins, Vashnavas, Khatris, also Bengalees, Madrasis and so on — will vanish.” 

    Jinnah also made a grand promise to Pakistan’s citizens: “You are free; you are free to go to your temples, you are free to go to your mosques or to any other place of worship in this State of Pakistan. You may belong to any religion or caste or creed — that has nothing to do with the business of the State.” Toward the end of his speech, Jinnah’s rhetoric soared. He envisioned that “in course of time Hindus would cease to be Hindus, and Muslims would cease to be Muslims, not in the religious sense, because that is the personal faith of each individual, but in the political sense as citizens of the State.”

    No scholar has analyzed this speech better than Ahmed. This professor emeritus at Stockholm University points out that Jinnah mentions neither Islam nor secularism as a foundational principle of the state. Instead, Jinnah refers to the clash between Roman Catholics and Protestants in England. It seems this London-trained barrister was looking to the constitutionalism of Merry England as the way forward for Pakistan.

    Ahmed makes another astute observation. Jinnah’s speech might have been addressed less to his audience in a rubber-stamp assembly and more to his counterparts in the Indian government. Jinnah did not want another 30 to 40 million Muslims from Delhi and UP migrating to Pakistan, adding even more pressure on an already financially stretched state. If these Muslims had been driven out in retaliation for what was being done to Sikhs and Hindus in West Punjab and East Pakistan (Bangladesh since 1971), Pakistan could well have collapsed.

    Ahmed’s Evaluation of Jinnah

    Jinnah excites much emotion in the Indian subcontinent. For some, he is the devil incarnate. For others, he is a wise prophet. Ahmed evaluates Jinnah in the cold light of day with reason, judgment and, above all, fairness.

    Jinnah was indubitably an impressive character with wit, will and vision. In the name of Islam, he forged a disparate nation of Balochs, Pashtuns, Sindhis, Punjabis and Muhajirs (the Urdu term for the refugees who came from India) in the west, and Bengalis in the east. However, Jinnah never attained a status worthy of Thomas Carlyle’s heroes. Unlike Gandhi, Jinnah did not come up with a new way to deal with the existing political situation. Gandhi insisted on ahimsa and satyagraha, non-violence and adherence to truth. He put means before ends. He was a mass leader but only the first among equals in the Congress Party, which had many towering leaders. Gandhi was outvoted many times and accepted such decisions, strengthening his party’s democratic tradition. On the other hand, Jinnah was determined to be the sole spokesman, put ends before means and did not hesitate to spill blood to achieve his political ambitions.

    It is true that Gandhi erred in calling Jinnah a Gujarati Muslim in 1915 when Jinnah would have preferred to be known as an Indian nationalist. Yet Gandhi genuinely believed that everyone living in India was an Indian and had equal rights as a citizen. Jinnah championed the two-nation theory and argued that Muslims in India were a separate nation. For him, religious identity trumped linguistic, ethnic or national identity. Ahmed’s magnum opus might focus on Jinnah, but Gandhi emerges as a true hero in his book.

    In the short run, Jinnah succeeded. Pakistan was born. Yet Jinnah also left Pakistan with many of its current problems. He centralized all power, reduced provinces to the level of municipalities and postponed the drafting of a constitution. Even though Jinnah himself spoke neither his native Gujarati nor urbane Urdu fluently, he made Urdu the official language of Pakistan. This infuriated East Pakistan, which eventually achieved independence in 1971. As Atul Singh, Vikram Sood and Manu Sharma point out in an article on Fair Observer, the rise of ethnic nationalism threatens the further disintegration of Pakistan, for which Jinnah must take some blame.

    Ahmed’s book also brings into the spotlight the role of facts, factlets and factoids. His facts are based on sources that are empirically verifiable. Factlets are interesting asides that have value in themselves but may or may not have a bearing on the metanarrative. Factoids are plain lies repeated so many times that many people start believing them. The biggest factoid in the Indian subcontinent about the partition is the assertion that a majority of Muslims in British India wanted Pakistan. Another factoid is the belief that the Congress Party was as keen on partition as the Muslim League. Ahmed’s book is strong on facts, keeps readers interested with riveting factlets and demolishes several factoids.

    Three Takeaways for Today

    Ahmed’s masterpiece offers us three important lessons.

    First and foremost, facts matter. For a while, myth may obscure facts and narratives may cloud truth, but eventually a scrupulous scholar will ferret out the facts. As the English adage goes, “the truth will out.”

    Second, religion and politics may make a heady cocktail but leave a terrible hangover. At some point, things spin out of control, riots break out on the streets, fanaticism takes over, jihadists go berserk and a garrison state emerges with a logic of its own. Such a state can be deep, oppressive and even somewhat effective but is largely disconnected from the needs and aspirations of civil society. Such a state is also unable to create a dynamic economy and most people remain trapped in poverty.

    Last but not least, the zeal of new converts becomes doubly dangerous when religion and politics mix. These new converts can turn into fanatics who outdo their co-religionists. As the adage goes, they seek to be more Catholic than the pope. The noted Punjabi Hindu leader Lala Lajpat Rai’s father returned to Hinduism after converting to Islam. Master Tara Singh, the champion of an independent Sikh nation, was born a Hindu but converted to Sikhism in his youth.

    Jinnah’s grandfather, Premjibhai Meghji Thakkar, was a Bhatia Rajput who converted to Islam after orthodox Hindus excommunicated Thakkar for entering the fishing business. Similarly, Pakistan’s national poet Muhammad Iqbal, who studied at Trinity College, Cambridge and the University of Munich, came from a Kashmiri Brahmin family. Iqbal’s father, Rattan Lal, was a Sapru who reportedly embraced Islam to save his life and was consequently disowned by his family. Pakistan was not created by a Pashtun like Abdul Ghaffar Khan or a half-Arab, blue-blooded sayyid like Maulana Abul Kalam Azad but by a Rajput and a Brahmin who were recent converts. Ironically, this nation now names its ballistic missiles after Turkish invaders, makes it compulsory for its children to learn Arabic and pretends its roots lie in the Middle East instead of the Indian subcontinent.

    *[Ishtiaq Ahmed’s book, “Jinnah: His Successes, Failures and Role in History” is published by Penguin Random House and available here. The same book is published in Pakistan by Vanguard Books and is available here.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • in

    Iraq Still Feels the Consequences of US Assassinations

    The assassination of Iranian Major General Qasem Soleimani, the head of the Islamic Revolutionary Guard Corps’ (IRGC) elite Quds Force, and Abu Mahdi al-Muhandis, an Iraqi militia commander, head of Kataib Hezbollah and de facto leader of the Popular Mobilization Forces (PMF), by a US drone strike outside Baghdad International Airport in January 2020 continues to reverberate across Iraq.

    The killings, ordered by then US President Donald Trump, have served to exacerbate the severe security challenges the government of Prime Minister Mustafa al-Kadhimi already faces. The PMF, without al-Muhandis’ leadership, is becoming increasingly splintered, threatening even more insecurity for ordinary Iraqis who are trying to recover from nearly two decades of war and terrorism.

    Growing Security Challenges

    Security is a prerequisite for the prosperity, welfare and economic development of any society. However, as long as Iran continues its extensive influence over Iraq and uses Iraqi territory as a venue to play out its conflict with the United States, security cannot be achieved.

    After the assassinations of Soleimani and al-Muhandis, the PMF appeared to be even more aggressively pursuing Iranian Supreme Leader Ali Khamenei’s strategic goal, namely the withdrawal of all US troops from Iraq. The US Embassy, the Baghdad Green Zone and US military bases have been repeatedly targeted by PMF militias. The US responded in kind and bombed PMF positions in various parts of the country, further escalating an already fragile security situation.

    Meanwhile, al-Kadhimi, viewed by his critics as catering to Washington, blamed the US for violating Iraqi sovereignty by launching unilateral operations inside the country. At the same time, he faced strenuous demands from the Americans for his government to do more to stop PMF attacks on US targets.

    The withdrawal of foreign military forces had been approved by the Iraqi parliament just two days after the high-profile assassinations. Following the US-Iraqi strategic dialogue launched in June 2020, the US evacuated some of the bases it had held since 2003, handing them over to the Iraqi army. But a final withdrawal, agreed to be completed by the end of last year, has stalled, and the remaining 2,500 US troops have stayed on, no longer in a combat role but rather to “advise, assist and enable” the Iraqi military.

    This quasi-exit was met with a stern reaction from the PMF, who threatened to treat the US forces as aggressors if they did not withdraw completely from Iraq. “Targeting the US occupation in Iraq is a great honor, and we support the factions that target it,” was how a spokesperson for one of the PMF militias put it. Such threats underline the risk of further confrontations between the militias and the US and the potential for more insecurity for ordinary Iraqis.

    The targeting of Baghdad’s airport on January 28, with at least six rockets landing on the runway and areas close to the non-military side, causing damage to parked passenger planes, underlines just how fragile the security situation remains.

    The PM and the PMF

    Differences between the PMF and the government are another reason for growing insecurity in the post-assassination period. The PMF has a competitive relationship with the prime minister’s government, and this competition has only intensified over the past two years. PMF groups consider al-Kadhimi to be pro-US and bent on reducing the influence of Shia militant groups in Iraq.

    Initially, in March 2020, major Shia factions rejected his nomination, accusing him of being inordinately close to the US. The Fatah Coalition, composed of significant Shia groups close to Iran, later accepted his candidacy. Still, tensions remain as al-Kadhimi strives to strike a balance between Iran on the one hand and the US and its allies on the other.

    The prime minister believes that the PMF should exit the political stage. He also believes that the PMF should be freed from party affiliation and be fully controlled by the government. This would mean that their budget would come from the federal government and not from private sources or other states. In this regard, al-Kadhimi is seeking to strengthen government control over border crossings to fight corruption and smuggling.

    The crossings are used by militias, including those reportedly active at Diyala’s border crossing into Iran. If the government effectively controls these vital channels, financial inflows from smuggling, which strengthens the militias, will decrease in the long term while federal coffers will directly benefit.

    The dispute between the PMF and the prime minister escalated in May of last year when police arrested Qasem Mosleh, the PMF commander in Anbar province, over the assassination of a prominent Iraqi activist. In response, the PMF stormed and took control of the Green Zone. Al-Kadhimi, not wanting to escalate the conflict, found no evidence against Mosleh and released him after 14 days.

    In November 2021, al-Kadhimi himself was targeted in an assassination attempt following clashes between various Iraqi parties during protests against the results of the parliamentary elections. Though it failed, the armed drone attack on the prime minister’s Baghdad residence was a disturbing development for contemporary Iraq and was attributed to a PMF militia loyal to Iran.

    Internal Struggles

    The assassination of al-Muhandis had a huge impact on the PMF. He was a charismatic figure able to mediate more effectively than anyone else between various Iraqi groups, from Shia clerics in Najaf to Iraqi government politicians and Iranian officials. After his death, the militia groups in the PMF face internal division.

    The PMF’s political leadership, including its chairman, Falih Al-Fayyadh, has tried to present itself as committed to the law and accepting the authority of the prime minister. In contrast, two powerful PMF factions, Kataib Hezbollah and Asaib Ahl al-Haq, have taken a hardline stance, emphasizing armed resistance against US forces. Tehran’s efforts to mediate between the leaders of the two factions and the Iraqi government have yielded few results.

    Meanwhile, internal disagreements over the degree of Iranian control caused four PMF brigades to split off and form a new structure called Hashd al-Atabat, or Shrine Units. Their avowed intention is to repudiate Iranian influence while supporting the Iraqi state and the rule of law.

    Another divide in the PMF has opened up between groups such as Kataib Hezbollah on the one hand, and Badr, Asaib Ahl al-Haq and Saraya al-Salam on the other, due to poor relationship management by Kataib Hezbollah in the PMF Commission after al-Muhandis’ death. It is unsurprising that critical PMF functions like internal affairs and intelligence are controlled by Kataib Hezbollah, given that al-Muhandis founded the group before assuming the PMF’s leadership. But al-Muhandis had managed to exercise that control in a manner that kept other factions onboard.

    But Kataib Hezbollah’s imposition, in February 2020, of another one of its commanders, Abu Fadak al Mohammadawi, to succeed al-Muhandis on the PMF Commission alienated key groups such as Badr and Asaib. Clearly, a severely factionalized and heavily armed PMF continues to pose a significant security threat in the country.

    Announcing the assassinations on January 3, 2020, Donald Trump said of Soleimani that “we take comfort knowing his reign of terror is over.” Two years on from the killing of the IRGC general and the PMF boss, ordinary Iraqis beset by violence and insecurity take no such comfort.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

    *[This article was originally published by Arab Digest, a partner of Fair Observer.]

  • in

    The Evolution of National Security in the UAE

    The United Arab Emirates, a small and ambitious country in the Persian Gulf, faces a variety of security threats. Its geographic location puts it at the center of instability, sectarianism and regional rivalries in the Middle East, which has led the country to pay particular attention to its security. 

    In recent years, the Arab countries of the Persian Gulf, especially the UAE, have recognized that trusting foreign governments, such as the United States, cannot offer them the best possible protection. The US has had a presence in the Persian Gulf since the 1990s and the Gulf Arab countries have relied on it to provide security. However, events in recent years have shown that the Gulf Arab states cannot rely solely on Washington.

    Such developments include the Taliban takeover of Afghanistan amid the US withdrawal; the US pivot to Asia; the US retraction of most advanced missile defense systems and Patriot batteries from Saudi Arabia; and the lack of a US military response to threats, missile and drone attacks on Saudi oil bases by the Houthis in Yemen.

    This has encouraged the Arab countries in the Persian Gulf to pursue security autonomy. The UAE, in particular, has sought to transform its strategy from dependence on the US and Saudi Arabia to a combination of self-reliance and multilateral cooperation.

    Self-Reliance Security Strategy

    Although the UAE is an important US ally in the Persian Gulf, over recent years Washington has sought to push the Emiratis toward security self-reliance. Sociopolitical events in the Middle East over the last decade, following the Arab Spring of 2010-11, have made it clear to the UAE that ensuring national security should rest primarily on national capabilities and resources, supplemented by international cooperation.

    Hosni Mubarak’s ouster from Egypt during the Arab Spring protests and the reluctance of the US to defend him as an ally — which led to the rise of Egyptian President Mohamed Morsi of the Muslim Brotherhood — further demonstrated to Abu Dhabi that it should not exclusively depend on the US for security assistance. Thus, the UAE began to develop a professional army.

    The UAE’s self-reliance strategy has several branches, but military security has been given the highest priority. The UAE’s determination to create an independent and professional military is evident from its years of investment in the defense industry.

    Indeed, security is a top priority for the United Arab Emirates, and defense spending continues to make up a large portion of the national budget, typically accounting for 11.1% to 14% of the total. In 2019, the UAE’s defense spending was $16.4 billion, 18% more than the 2018 budget of $13.9 billion.

    The UAE has invested heavily in the military sector and defense industry in recent years. In November 2019, the UAE formed the EDGE Group from a merger of 25 companies. The group has 12,000 employees and $5 billion in total revenue, and it ranks among the top 25 defense companies in the world, ahead of firms such as Booz Allen Hamilton in the US and Rolls-Royce in the UK.

    EDGE is structured around five clusters: platforms and systems, missiles and weapons, cyber defense, electronic warfare and intelligence, and mission support. It comprises several major UAE companies in the defense industry, such as ADSB (shipbuilding), Al Jasoor, NIMR (vehicles), SIGN4L (electronic warfare services) and ADASI (autonomous systems). The main goal of EDGE is to develop weapons to fight “hybrid warfare” and to bolster the UAE’s defense against unconventional threats, focusing on electronic attacks and drones.

    The UAE has also come up with detailed plans to improve the quality of its military personnel, spending large sums each year on training recruits at American colleges and war academies. It also founded the National Defense College, most of whose students are UAE citizens, to build independence in military training. In addition, in 2014, the UAE introduced general conscription for men between the ages of 18 and 30 to increase troop numbers and strengthen national identity in its military. As a result, it recruited about 50,000 people in the first three years.

    Contrary to traditional practice, the UAE’s growing military power has made it eager to use force and hard power to protect its interests. The UAE stands ready to use military force anywhere in the region to contain Iran’s growing influence and weaken Islamist groups such as the Muslim Brotherhood. Participating in the Yemeni War was a test of this strategy.

    The UAE’s military presence in Yemen began in March 2015. It sent a brigade of 3,000 troops to Yemen in August 2015, alongside Saudi Arabia and a coalition of Arab countries. Over the past five years, the UAE has pursued an ambitious strategic agenda in the Red Sea, building military installations and securing control of Yemen’s southern coasts along the Arabian Sea, the Bab al-Mandab Strait and Socotra Island. Despite reducing its military footprint in Yemen in 2019, the UAE has consolidated its position in the southern regions. It has continued to finance and train thousands of Yemeni fighters drawn from various groups such as the Security Belt Forces, the Shabwani and Hadrami Elite Forces, the Abu al-Abbas Brigade and the West Coast Forces.

    The UAE’s goal in adopting a self-reliance strategy is to increase its strategic depth in the Middle East and the Horn of Africa. Thus, through direct military presence or arms support for groups engaged in proxy wars, it influences the internal affairs of various countries in the region, such as Yemen, Somalia, Eritrea, Ethiopia, Sudan, Egypt and Libya. With this influence, the UAE can turn the tide in its favor in certain areas.

    Multilateralism Security Strategy

    The United Arab Emirates faces a variety of security challenges in the Middle East, and addressing them requires cooperation with other countries. Currently, the most significant security threats in the UAE are: countering Iranian threats and power in the Middle East, especially in Arab countries under Iranian influence, such as Yemen, Syria and Lebanon; eliminating threats from terrorist groups and political Islam in the region, the most important of which — according to the UAE — is the Muslim Brotherhood; and economic threats and efforts to prepare for the post-oil world.

    In its multilateral strategy, the UAE seeks to counter these threats with the help of other countries in the region or beyond. It has used soft power, through investment and humanitarian aid, signaling that economic cooperation matters more to it than political competition and intervention. In this regard, the UAE has cooperated with Turkey, Saudi Arabia, Egypt, Britain and France, as well as normalizing relations with Israel.

    On August 13, 2020, the UAE became the first Gulf state to normalize relations with Israel. The UAE’s goal in normalizing relations with Israel is to counter threats from Iran and the region. The Abraham Accords have not only a security aspect, but also an economic one. Following the signing of the accords, on October 20, 2020, the US, Israel and the UAE announced the establishment of the Abraham Fund, a joint fund of $3 billion “in private sector-led investment and development initiatives,” aimed at “promoting economic cooperation and prosperity.” In addition, it outlined a banking and finance memorandum between the largest banks in Israel and Dubai, and a joint bid between Dubai’s DP World port operator and an Israeli shipping firm for the management of Israel’s Haifa port.

    Through the Abraham Accords, the United Arab Emirates seeks, via mutual agreements, to attract investment and transfer Israeli technologies to the UAE. The UAE has discovered that Israel is one of the bridges to the US economy and high technology. If the UAE intends to have an oil-free economy in the future, Israel may be the best route to that goal under its strategy of multilateralism.

    UAE relations with Turkey also have a multilateral dimension aimed at shared security goals. The two countries had good relations until the Arab Spring protests jeopardized ties between them. Abu Dhabi and Ankara began to defuse tensions after a phone call in August 2021 between UAE Crown Prince Mohamed bin Zayed Al Nahyan and Turkish President Recep Tayyip Erdogan. The nations mainly have differences over issues in Libya, Syria and Egypt. The UAE is trying to resolve its disputes with Turkey by investing in the country.

    Turkey is the largest backer of the Muslim Brotherhood in the region. The Turks claim the UAE participated in the failed coup of July 2016 against the Turkish government. Nonetheless, the UAE wants to end frictions with Turkey and has attracted Ankara by investing and increasing commercial ties. The Turkish lira has depreciated in recent years and Erdogan’s popularity has plummeted due to mismanagement in Turkey. Erdogan will not miss this economic opportunity with the UAE and welcomes Emirati investments. In this way, the UAE will likely easily resolve its differences with Turkey.

    The current tendency to use force runs contrary to traditional Abu Dhabi policy, yet increasing the UAE’s strategic depth is one of Abu Dhabi’s most achievable goals under its strategy of self-reliance. The multilateral strategy is the exact opposite of this plan. Under it, Abu Dhabi seeks to achieve its objectives not with force and hard power but with soft power, investment and humanitarian aid, and the tactical exploitation of economic cooperation takes precedence over political competition and military intervention in the region.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • in

    What Would Helsinki 2.0 Look Like Today?

    The European security order has broken down. You might think that’s an overstatement. NATO is alive and well. The Organization for Security and Cooperation in Europe (OSCE) is still functioning at a high level.

    Of course, there’s the possibility of a major war breaking out between Russia and Ukraine. But would Russian President Vladimir Putin really take such an enormous risk? Moreover, periodic conflicts in that part of the world — in Ukraine since 2014, in Georgia in 2008, in Transnistria between 1990 and 1992 — have not escalated into Europe-wide wars. Even the horrific bloodletting of Yugoslavia in the 1990s was largely contained within the borders of that benighted former country, and many of the Yugoslav successor states have joined both the European Union and NATO.

    So, you might argue, the European security order is in fine shape, and it’s only Putin who’s the problem. The United States and Europe will show their resolve in the face of the Russian troops that have massed at the border with Ukraine, Putin will accept some face-saving diplomatic compromise and the status quo will be restored.

    Even if that were to happen and war were averted this time, Europe would still be in a fundamental state of insecurity. The Ukraine conflict is a symptom of this much deeper problem.

    The current European security order is an overlay of three different institutional arrangements. NATO is the surprisingly healthy dinosaur of the Cold War era with 30 members, a budget of $3 billion and collective military spending of over a trillion dollars.

    Russia has pulled together a post-Cold War military alliance of former Soviet states, the Collective Security Treaty Organization (CSTO), that is anemic by comparison with a membership that includes only Armenia, Belarus, Kazakhstan, Kyrgyzstan and Tajikistan. Instead of expanding, the CSTO is shrinking, having lost Azerbaijan, Georgia and Uzbekistan over the course of its existence.

    And then there’s the Helsinki framework that holds East and West together in the tenuous OSCE. Neither Russia nor its military alliance was able to prevent the march of NATO eastward to include former Soviet republics. Neither NATO nor the OSCE was able to stop Russia from seizing Crimea, supporting a separatist movement in eastern Ukraine or orchestrating “frozen conflicts” in Georgia and Moldova.

    Presently, there are no arms control negotiations between East and West. China became Russia’s leading trade partner about a decade ago, and the United States and European countries have only fallen further behind since. Human rights and civil liberties are under threat in both the former Soviet Union and parts of the European Union.

    So, now do you understand what I mean by the breakdown of the European security order? The Cold War is back, and it threatens once again to go hot, if not tomorrow then perhaps sometime soon.

    So, yes, Ukrainian sovereignty must be defended in the face of potential Russian aggression. But the problem is much bigger. If we don’t address this bigger problem, then we’ll never really safeguard Ukraine, deal with Russia’s underlying concerns of encirclement or tackle the worrying militarization of Europe. What we need is Helsinki 2.0.

    The Origins of Helsinki 1.0

    In the summer of 1985, I was in Helsinki after a stint in Moscow studying Russian. I was walking down one of the streets in the Finnish capital when I came across a number of protesters holding signs.

    “Betrayal!” said one of them. “Appeasement!” said another. Other signs depicted a Russian bear pressing its claws into the then-Baltic republics of Lithuania, Latvia and Estonia.

    I’d happened on this band of mostly elderly protesters outside a building where dignitaries from around the world had gathered to celebrate the 10th anniversary of the Helsinki Accords. At the time, I had only a vague understanding of the agreement, knowing only that it was a foundational text for East-West détente, an attempt to bridge the Iron Curtain.

    As I found out that day, not everyone was enthusiastic about the Helsinki Accords. The pact, signed in 1975 by the United States, Canada, the Soviet Union and all European countries except Albania, finally confirmed the post-war borders of Europe and the Soviet Union, which meant acknowledging that the Baltic states were not independent but instead under the Kremlin’s control. To legitimize its control over the Baltics in particular, a concession it had been trying to win for years, the Soviet Union was even willing to enter into an agreement mandating that it “respect human rights and fundamental freedoms, including the freedom of thought, conscience, religion or belief, for all without distinction as to race, sex, language or religion.”

    At the time, many human rights advocates were skeptical that the Soviet Union or its Eastern European satellites would do anything of the sort. After 1975, “Helsinki” groups popped up throughout the region — the Moscow Helsinki Group, Charter 77 in Czechoslovakia — and promptly discovered that the Communist governments had no intention of honoring their Helsinki commitments, at least as they pertained to human rights.

    Most analysts back then saw the recognition of borders as cold realpolitik and the human rights language as impossibly idealistic. History has proved otherwise. The borders of the Soviet Union had an expiration date of 15 years. And, ultimately, it would be human rights — rather than war or economic sanctions — that spelled the end of the Soviet Union and the Warsaw Pact. Change came in the late 1980s from ordinary people who exercised the freedom of thought enshrined in the Helsinki Accords to protest in the streets of Vilnius, Warsaw, Prague and Tirana. The decisions made in 1975 ensured that the transitions of 1989-91 would be largely peaceful.

    After the end of the Cold War, the Helsinki Accords became institutionalized in the OSCE, and briefly, that promised to be the future of European security. After all, the collapse of the Soviet Union meant that NATO no longer had a reason for existence.

    But institutions do not die easily. NATO devised new missions for itself, becoming involved in out-of-area operations in the Middle East, intervening in the Yugoslav wars and, beginning in 1999, expanding eastward. The first Eastern European countries to join were the Czech Republic, Hungary and Poland, which technically brought the alliance to Russia’s very doorstep (since Poland borders the Russian territory of Kaliningrad). NATO expansion was precisely the wrong answer to the question of European security — my first contribution to Foreign Policy in Focus back in 1996 was a critique of expansion — but logic took a backseat to appetite.

    The OSCE, meanwhile, labored in the shadows. With its emphasis on non-military conflict resolution, it was ideally suited to the necessities of post-Cold War Europe. But it was an unwieldy organization, and the United States preferred the hegemonic power it wielded through NATO.

    This brings us to the current impasse. The OSCE has been at the forefront of negotiating an end to the war in eastern Ukraine and maintains a special monitoring mission to assess the ceasefire there. But NATO is mobilizing for war with Russia over Ukraine, while Moscow and Washington remain as far apart today as they were during the Cold War.

    The Helsinki Accords were the way to bridge the unbridgeable in 1975. What would Helsinki 2.0 look like today?

    Toward Helsinki 2.0

    The Helsinki Accords were built around a difficult compromise involving a trade-off on borders and human rights. A new Helsinki agreement needs a similar compromise. That compromise must be around the most important existential security threat facing Europe and indeed the world: climate change.

    As I argue in a new article in Newsweek, “In exchange for the West acknowledging Russian security concerns around its borders, Moscow could agree to engage with its OSCE partners on a new program to reduce carbon emissions and transition from fossil fuels. Helsinki 2.0 must be about cooperation, not just managing disagreements.”

    The Russian position on climate change is “evolving,” as politicians like to say. After years of ignoring the climate crisis — or simply seeing it as a good opportunity to access resources in the melting Arctic — the Putin administration changed its tune last year, pledging to achieve carbon neutrality by 2060.

    There’s obviously room for improvement in Russia’s climate policy — as there is in the United States and Europe. But that’s where Helsinki 2.0 can make a major contribution. The members of a newly energized OSCE can engage in technical cooperation on decarbonization, monitor country commitments to cut emissions, and apply new and stringent targets on a sector that has largely gotten a pass: the military. It can even push for the most effective decarbonization strategy around: demilitarization.

    What does Russia get out of the bargain? A version of what it got in 1975: reassurances around borders.

    Right now, everyone is focused on the question of NATO expansion as either an unnecessary irritant or a necessary provocation in American-Russian relations. That puts too much emphasis on NATO’s importance. In the long term, it’s necessary to reduce the centrality of NATO in European security calculations and to do so without bulking up all the militaries of European states and the EU. By all means, NATO should be going slow on admitting new members. More important, however, are negotiations as part of Helsinki 2.0 that reduce military exercises on both sides of Russia’s border, address both nuclear and conventional buildups, and accelerate efforts to resolve the “frozen conflicts” in Ukraine, Georgia and Moldova. Neither NATO nor the CSTO is suited to these tasks.

    As in 1975, not everyone will be satisfied with Helsinki 2.0. But that’s what makes a good agreement: a balanced mix of mutual satisfaction and dissatisfaction. More importantly, like its predecessor, Helsinki 2.0 offers civil society an opportunity to engage — through human rights groups, arms control advocates, and scientific and educational organizations. This might be the hardest pill for the Kremlin to swallow, given its hostile attitude toward civil society. But the prospect of securing its borders and marginalizing NATO might prove simply too irresistible for Vladimir Putin.

    The current European security order is broken. It can be fixed by war. Or it can be fixed by a new institutional commitment by all sides to negotiations within an updated framework. That’s the stark choice when the status quo cannot hold.

    *[This article was originally published by FPIF.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • in

    Boris Johnson’s Convenient Bravado

    In the prelude to World War I, Western nation-states, from North America to the Urals, found themselves involved in a strange game nobody really understood. It turned around their perception of each nation’s individual image on the world stage. Each nation imagined itself as wielding a form of geopolitical power whose hierarchy was impossible to define.

    Even the borders of nations, the ultimate criterion for defining a nation-state, had become hard to understand. The idea of each nation was built on a mix of geographical, cultural, linguistic, ethnic, religious and ideological considerations. These became infinitely complicated by shifting relationships of dependency spawned by the dominant colonial model they all accepted as normal. And not just normal. Colonialism appeared to both Europeans and Americans as an ideal to aspire to.

    Two world wars in the first half of the 20th century had the effect of seriously calming the obsession of Western nations with their individual images. For most of the nation-states emerging from the Second World War, an air of humility became the dominant mood. Two hegemons emerged: the United States and the Soviet Union. But even those powerhouses agreed to work within the framework of an idealized system, the United Nations. That forced them to respect, at least superficially, a veneer of outward humility. The Cold War’s focus on ideologies — capitalism vs. communism — served to hide the fact that the new hegemons were the last two political entities authorized to assert the geopolitical power associated with the previous century’s colonial nation-states.

    The current showdown between the US and Russia over events at the Ukrainian border shows signs of a return to the ambience that preceded the First World War. The Soviet Union disappeared 30 years ago, leaving a weak Russian state in its stead. The US has been on a steep decline for two decades since the confusion created on 9/11.

    That should signify the existence of an opportunity for non-hegemonic nation-states to reemerge and potentially vie for influence on the world stage, as they did before World War I. After a century of adaptation to the consumer society on a global scale, however, the similarities may only be an illusion. 

    Still, some people appear to believe in an idea definitively discarded by history. The New York Times’ take on the latest posturing of Great Britain proves that the illusion is still alive in some people’s heads. In recent days, Prime Minister Boris Johnson has been diligently seeking to drag his isolated, Brexited nation into the fray of Eastern European border disputes, conjuring up reminiscences of pre-1914 Europe.  

    Over the weekend, British intelligence spread the “intelligence” that President Vladimir Putin is seeking to install a pro-Russian leader in Kyiv. Times reporter Mark Landler cites unnamed “British officials” who “cast it as part of a concerted strategy to be a muscular player in Europe’s showdown with Russia — a role it has played since Winston Churchill warned of an ‘Iron Curtain’ after World War II.”

    Today’s Weekly Devil’s Dictionary definition:

    Muscular player:

    An actor or performer whose wardrobe and makeup teams have the ability to turn the player into an image of Atlas or Hercules during a performance on a stage

    Contextual Note

    In the games that precede a major military conflagration, nations feel compelled to adopt attitudes that go well beyond their ability to perform. Landler quotes Malcolm Chalmers, the deputy director-general of a think tank in London, who explains that Johnson’s Britain “is differentiating itself from Germany and France, and to some extent, even the U.S.” He adds this pertinent observation: “That comes out of Brexit, and the sense that we have to define ourselves as an independent middle power.”

    There’s much that is pathetic in this observation. In a totally globalized economy, it is reasonable to doubt that the idea of a “middle power” has any meaning, at least not the meaning it once had. Outside of the US and China, Russia may be the only remaining middle power, because of two things. First, its geography, its sheer landmass and its future capacity to dominate the Arctic. Second, its military capacity carried over from the Soviet era. The rest of the world’s nations, whether middle or small, should not even be called powers, but “powerlessnesses,” nations with no hope of exercising power beyond their borders. Alongside the middle and small, there may also be two or three “major” powerless nations: India, Brazil and Australia.

    But, of course, the most pathetic aspect of the description of Britain’s ambition is the fact that Johnson’s days as prime minister appear to be numbered. He is already being hauled over the coals by his own party for his impertinent habit of partying during a pandemic. 

    In a press conference in Kyiv on February 1, Johnson deployed his most muscular rhetoric. For once finding himself not just on the world stage but in the eye of the hurricane, he felt empowered to rise to the occasion. “This is a clear and present danger,” he solemnly affirmed. “We see large numbers of troops massing, we see preparations for all kinds of operations that are consistent with an imminent military campaign.”

    The hollowness of Johnson’s discourse becomes apparent with his use of the expression “clear and present danger,” a locution that derives from a US Supreme Court case concerning the limits on free speech guaranteed by the First Amendment. Justice Oliver Wendell Holmes used the phrase in the majority opinion he wrote in 1919. It became a cliché in American culture, even reaching the distinction of providing the title of a Hollywood action movie based on a Tom Clancy novel.

    As for his analysis of the clear and present danger, Johnson, who studied the classics at Oxford but maybe missed Aristotle, seems to ignore a basic point of logic: the fact that A (a military buildup) is consistent with B (a military campaign) does not make B predictable, much less “imminent.” That, however, is the line the Biden administration has been pushing for weeks. Johnson’s abject adherence to it may be a sign that Johnson is incapable of doing what Chalmers claimed he was trying to do: differentiate Britain — even “to some extent” — from the US.

    Historical Note

    The Times’ Mark Landler is well aware of the hyperreal bravado that explains Johnson’s move. “The theatrical timing and cloak-and-dagger nature of the intelligence disclosure,” Landler writes, “which came in the midst of a roiling political scandal at home, raised a more cynical question: whether some in the British government were simply eager to deflect attention from the problems that threaten to topple Prime Minister Boris Johnson.”

    Landler goes on to cite Karen Pierce, the British ambassador to the United States, who is eager to remind people of the historical logic of Johnson’s move. She refers to a British tradition rife with cloaks and daggers: “Where the Russians are concerned, you’ll always find the U.K. at the forward end of the spectrum.” She wants us to think back to Britain’s active participation in the Cold War, punctuated by the occasional embarrassing episode such as the 1961 Profumo affair, starring model and escort Christine Keeler. But she knows that what best illustrates that glorious period for Britain in its holy struggle against the Soviet Union is James Bond, who has long been “at the forward end” of the Hollywood spectrum. In our hyperreal world, Pierce knows that fiction will always dominate and replace our understanding of reality.

    We need to ask another question in a world conditioned by the image of Sylvester Stallone, Arnold Schwarzenegger and Dwayne “the Rock” Johnson. Does the world really need muscular players today? The ancient Greeks imagined Heracles as a naturally muscular hero, who built up his bulk through his deeds, not through his workouts in the gym or to prepare for body-building competitions. Heracles was about killing lions with his bare hands, slaying Hydras, capturing bulls, and even cleaning stables — that is, getting things done. For the Greeks, Heracles was a muscular being, not a muscular player. 

    When Greek playwrights actually put Heracles on the stage, he could be tragic (Euripides, “The Tragedy of Herakles”) or comic (Aristophanes, “The Frogs”). In that sense, Arnold Schwarzenegger, from “Conan the Barbarian” to “Twins,” fits the role. The difference is that Heracles was a demigod (the son of Zeus and the mortal Alcmene) who, thanks to the completion of his twelve labors, became a god on Mount Olympus. When Schwarzenegger completed his labors as a muscular player in more than twelve films, he became a Republican politician in California.

    *[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of The Fair Observer Devil’s Dictionary.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.