More stories

  • Chinese bots had key role in debunked ballot video shared by Eric Trump

    A Chinese bot network played a key role in spreading disinformation during and after the US election, including a debunked video of “ballot burning” shared by Eric Trump, a new study reveals.

    The misleading video shows a man filming himself on Virginia Beach, allegedly burning votes cast for Donald Trump. The ballots were actually samples. The clip went viral after Trump’s son Eric posted it a day later on his official Twitter page, where it got more than 1.2m views.

    The video was believed to have originated from an account associated with the QAnon conspiracy theory. But the study by Cardiff University found two China-linked accounts had shared the video before this. Twitter has since suspended one of them.

    The same Chinese network has spread anti-US propaganda, including calls for violence in the run-up to the 6 January storming of the US Capitol building by a pro-Trump mob. Afterwards, it compared the west’s response to the DC riot to political protests in Hong Kong.

    The accounts previously posted hostile messages about Trump and Joe Biden, made allegations of election fraud and promoted “negative narratives” about the US response to the coronavirus pandemic.

    Professor Martin Innes, director of Cardiff University’s crime and security institute, said open-source analysis strongly suggested “multiple links” to Beijing.

    Researchers initially thought the hidden network was not especially complex, he said. Further evidence, however, revealed what he called a “sophisticated and disciplined” online operation. Accounts avoided certain hashtags in an apparent attempt to evade Twitter’s counter-measures. They posted during regular Chinese working hours, with gaps on a national holiday, and used machine tools to translate into English.

    “The network appears designed to run as a series of almost autonomous ‘cells’, with minimal links connecting them,” Innes said. “This structure is designed to protect the network as a whole if one ‘cell’ is discovered, which suggests a degree of planning and forethought. Therefore, this marks the network as a significant attempt to influence the trajectory of US politics by foreign actors.”

    Efforts by Russian-linked social media actors to influence US elections are well known. The special counsel Robert Mueller detailed an extensive troll operation run out of a building in St Petersburg. Its goal was to “disparage” Hillary Clinton and to promulgate “divisive” content, Mueller found.

    The Chinese accounts cannot be definitively linked to the state. But ordinary Chinese citizens do not have access to Twitter, and it appears Beijing may be seeking to emulate Kremlin practices by setting up its own US-facing political influence operation.

    Last year the university’s research team uncovered more than 400 accounts engaging in suspicious activity. These were forwarded to Twitter, which suspended them within a few days. The latest analysis suggests further accounts are still active, with the network more resilient than previously thought.

    There is compelling evidence of links to China. Posts feature the Chinese language and a focus on topics reflecting Chinese geopolitical interests. Some 221 accounts spread content in favour of the Chinese Communist party, encompassing some 42,618 tweets, the study found.

    The accounts also attacked Trump for referring to Covid-19 as the “China virus”. One claimed the virus originated outside China and had actually come from the US laboratory at Fort Detrick in Frederick, Maryland.

    The network’s main goal was “encouragement of discord” in the US, the study concluded. Most tweets about Trump were negative. The handful that were positive urged Americans to “fetch their guns”, to “fight for democracy” and to “call gunmen together” in order to win a second Trump term.

    The bots complained of “double standards” after the Capitol riot, saying US politicians had hypocritically backed protesters who entered the Hong Kong legislative building. “The riots in Congress are a disgrace to the United States today, and will soon become the fuse of the American order,” one remarked.

  • Big tech facilitated QAnon and the Capitol attack. It’s time to hold them accountable

    Donald Trump’s election lies and the 6 January attack on the US Capitol have highlighted how big tech has led our society down a path of conspiracies and radicalism by ignoring the mounting evidence that its products are dangerous.

    But the spread of deadly misinformation on a global scale was enabled by the absence of federal antitrust enforcement to rein in out-of-control monopolies such as Facebook and Google. And there is a real risk social media giants could sidestep accountability once again.

    Trump’s insistence that he won the election was an attack on democracy that culminated in the attack on the US Capitol. The events were as much the fault of Sundar Pichai, Jack Dorsey and Mark Zuckerberg – CEOs of Google, Twitter and Facebook, respectively – as they were the fault of Trump and his cadre of co-conspirators.

    During the early days of social media, no service operated at the scale of today’s Goliaths. Adoption was limited and online communities lived in small, isolated pockets. When the Egyptian uprisings of 2011 proved the power of these services, the US state department became their cheerleader, offering them a veneer of exceptionalism that would protect them from scrutiny as they grew exponentially.

    Later, dictators and anti-democratic actors would study and co-opt these tools for their own purposes. As the megaphones got larger, the voices of bad actors grew louder. As the networks got bigger, the feedback loop amplifying those voices became stronger. It is unimaginable that QAnon could have gained a mass following without tech companies’ dangerous indifference.

    Eventually, these platforms became immune to the forces of competition in the marketplace – they became information monopolies with runaway scale. Absent any accountability from watchdogs or the marketplace, fringe conspiracy theories enjoyed unchecked propagation. We can trace networked conspiracies from birtherism to QAnon as straight lines through the same coterie of misinformers who came to power alongside Trump.

    Today, most global internet activity happens on services owned by either Facebook or Alphabet, which includes YouTube and Google. The internet has calcified into a pair of monopolies that protect their size by optimizing to maximize “engagement”. Sadly, algorithms designed to increase dependency and usage are far more profitable than ones that would encourage timely, local, relevant and, most importantly, accurate information. The truth, in a word, is boring. Facts rarely animate the kind of compulsive engagement rewarded by recommendation and search algorithms.

    The best tool – if not the only tool – to hold big tech accountable is antitrust enforcement: enforcing the existing antitrust laws designed to rein in companies’ influence over other political, economic and social institutions.

    Antitrust enforcement has historically been the US government’s greatest weapon against such firms. From breaking up the trusts at the start of the 20th century to the present day, antitrust enforcement spurs competition and ingenuity while re-empowering citizens. Most antitrust historians agree that absent US v Microsoft in 1998, which stopped Microsoft from bundling products and effectively killing off other browsers, the modern internet would have been strangled in the crib.

    Ironically, Google and Facebook were the beneficiaries of such enforcement. Yet more than two decades would pass before US authorities brought antitrust suits against the two companies last year. Until then, antitrust had languished as a tool to counterbalance abusive monopolies.

    Big tech sees an existential threat in the renewed calls for antitrust, and these companies have lobbied aggressively to ensure key vacancies in the Biden administration are filled by their friends.

    The Democratic party is especially vulnerable to soft capture by these firms. Big tech executives are mostly left-leaning and donate millions to progressive causes while spouting feelgood rhetoric of inclusion and connectivity. During the Obama administration, Google and Facebook were treated as exceptional, avoiding any meaningful regulatory scrutiny. Senator Chuck Schumer, the Democratic Senate leader, has recently signaled he will treat these companies with kid gloves.

    The Biden administration cannot repeat the Obama legacy of installing big tech-friendly individuals in these critical but often under-the-radar roles. The new administration, in consultation with Schumer, will be tasked with appointing a new assistant attorney general for antitrust at the Department of Justice and up to three members of the Federal Trade Commission. Figures friendly to big tech in those positions could abruptly settle the pending litigation against Google or Facebook.

    President Joe Biden and Schumer must reject any candidate who has worked in the service of big tech. Any former White House or congressional personnel who gave these companies a pass during the Obama administration should also be disqualified from consideration. Allowing big tech’s lawyers and plants to run the antitrust agencies would be the equivalent of allowing a climate-change-denying big oil executive to run the Environmental Protection Agency.

    The public is beginning to recognize the harms to society wrought by big tech, and a vibrant, bipartisan anti-monopoly movement of diverse scholars and activists has arisen over the past few years. Two-thirds of Democratic voters believe, along with a majority of Republicans, that Biden should “refuse to appoint executives, lobbyists, or lawyers for these companies to positions of power or influence in his administration while this legal activity is pending”. This gives the Democratic party an opportunity to do the right thing for the country and attract new voters by fighting for the web we want.

    Big tech played a central role in the dangerous attack on the US Capitol and all of the events which led to it. Biden’s antitrust appointees will be the ones who decide if there are any consequences to be paid.

  • The Science of Rebuilding Trust

    During his inauguration, President Joe Biden appealed to us, American citizens, repeatedly and emphatically, to defend unity and truth against corrosion from power and profit. Fortunately, the bedrock tensions between unity, truth, power and profit have newly-discovered mathematical definitions, so their formerly mysterious interactions can now be quantified, predicted and addressed. So in strictly (deeply) scientific terms, Biden described our core problem exactly right.

    I applaud and validate President Biden’s distillation of the problem of finding and keeping the truth, and of trusting it together. Human trust is based on high-speed neuromechanical interaction between living creatures. Other kinds of trust not based on that are fake to some degree. Lies created for money and power damage trust most of all.

    A Moment of Silence

    As Biden showed in his first act in office, the first step toward rebuilding is a moment of silence. Avoiding words, slowing down, taking time, breathing, acknowledging common grievances and recognizing a common purpose are not just human needs, but necessary algorithmic steps as well. Those are essential to setting up our common strategy and gathering the starting data that we need to make things right.

    The next step, as Biden also said, is to recognize corrupting forces such as money and power — and I would also add recognition. The third step, as I propose below, is to counter those three forces explicitly in our quest for public truth, to do the exact opposite of what money, power and careerism do, and to counter and reverse every information-processing step at which money, power and recognition might take hold.

    Instead of using one panel of famous, well-funded experts deliberating a few hours in public, employ a dozen groups of anonymous lone geniuses, each group working separately in secret for months on the same common question. Have them release their reports simultaneously in multiple media. That way, the unplanned overlap shows most of what matters and a path to resolving the rest — an idea so crazy it just might work.

    Since I’m describing how to restore democracy algorithmically, I might as well provide an example of legislation in the algorithmic language too. To convey data-processing ideas clearly, and thereby to avoid wasting time and money building a system that won’t work, technologists display our proposals using oversimplified examples that software architects like myself call “reference implementations” and which narrative architects like my partner call “tutor texts.”

    These examples are not meant to actually work, but to unambiguously show off crucial principles. In the spirit of reference implementations, I present the following legislative proposal, written to get to the truth about one particular subject but easily rewritten to find the truth about other subjects such as global warming or fake news: The Defend the Growing Human Nervous System With Information Sciences Act.

    The Defend Act

    Over centuries, humankind has defended its children against physical extremes, dangerous chemicals and infectious organisms by resolute, rational application of the laws of nature via technology and medical science. Now is the time to use those same tools to defend our children’s growing nervous systems against the informational damage that presently undermines their trust in themselves, their families and their communities. Therefore, we here apply information science in order to understand how man-made communication helps and hurts the humans whom God made.

    The human race has discovered elemental universal laws governing processes from combustion to gravitation and from them created great and terrible technologies from fire and weapons to electricity grids and thermonuclear reactions. But no laws are more elemental than the laws of data and mathematics, and no technologies more universal and fast-growing than the mathematically-grounded technologies of information capture, processing and dissemination. Information science is changing the world we live in and, therefore, changing us as living, breathing human beings. How?

    The human race has dealt with challenges from its own technologies before. Slash-and-burn tactics eroded farmland; lead pipes poisoned water; city wells spread cholera; radioactivity caused cancer; refrigerants depleted ozone. And we have dealt with epidemics that propagated in weird and novel ways — both communicable diseases spread by touch, by body fluids, by insects, by behaviors, by drinking water, by food, and debilitating diseases of chemical imbalance, genetic dysregulation, immune collapse and misfolded proteins. Our science has both created and solved monumental problems.

    But just as no technology is more powerful than the information sciences, when deployed against an immature, growing, still-learning human nervous system, no toxin is more insidious than extractive or exploitive artificial information.

    The Defend the Growing Human Nervous System With Information Sciences Act aims to understand first and foremost the depth and texture of the threat to growing human nervous systems in order to communicate the problem to the public at large (not to solve the problem yet). This act’s approach is based on five premises about the newly-discovered sciences of information.

    First, there is an urgent global mental-health crisis tightly correlated over decades with consuming unnatural sensory inputs (such as from TV screens) and interacting in unnatural ways (such as using wireless devices). These technologies seem to undermine trust in one’s own senses and in one’s connections to others, with the youngest brains bearing the greatest harm.

    Second, computer science understands information flowing in the real world. Numerical simulations faithfully replicate the laws of physics — of combustion, explosions, weather and gravitation — inside computers, thereby confirming we understand how nature works. Autonomous vehicles such as ocean gliders, drones, self-driving cars and walking robots select and process signals from the outside to make trustworthy models, in order to move through the world. This neutral, technological understanding might illuminate the information flows that mature humans also use to do those same things and which growing humans use to learn how to do them.

    Third, the science of epidemiology understands the information flows of medical research. Research has discovered and countered countless dangerous chemical and biological influences through concepts like clinical trials, randomization, viral spread, dose-response curves and false positive/negative risks. These potent yet neutral medical lenses might identify the most damaging aspects of artificial sensory interactions, in preparation for countering them in the same way they have already done for lead, tar, nicotine, sugar, endocrine disruptors and so on. The specific approach will extend the existing understanding of micro-toxins and micro-injuries to include the new micro-deceptions and micro-behavioral manipulations that undermine trust.

    Fourth, the mathematics of management and communication understands the information flows of businesses. The economic spreadsheets and prediction models that presently micromanage business and market decisions worldwide can, when provided with these new metrics of human health and damage, calculate two new things. First, the most cost-effective ways to prevent and reduce damage. Second, such spreadsheets can quantify the degree to which well-accepted and legal practices of monetized influence — advertising, branding, lobbying, incentivizing, media campaigns and even threats — potentially make the information they touch untrustworthy and thereby undermine human trust.

    America has risen to great challenges before. At its inception, even before Alexis de Tocqueville praised the American communitarian can-do spirit, this country gathered its most brilliant thinkers in a Constitutional Convention. In war, it gathered them to invent and create a monster weapon. In peace, it gathered them to land on the Moon. Over time, Americans have understood and made inroads against lead poisoning, ozone destruction, polluted water, smog, acid rain, nicotine and trans fats. Now, we need to assemble our clearest thinkers to combat the deepest damage of all: the damage to how we talk and think.

    Finally, we humans are spiritual and soulful beings. Our experiences and affections could never be captured in data or equations, whether of calorie consumption, body temperature, chemical balance or information flow. But just as we use such equations to defend our bodies against hunger, hypothermia or vitamin deficiency, we might also use them to defend against confusion, mistrust and loneliness, without in the process finding our own real lives replaced or eclipsed. In fact, if the human nervous system and soul are indeed damaged when mathematically-synthesized inputs replace real ones, then they will be freed from that unreality and that damage only when we understand which inputs help and hurt us most.

    Informational Threat

    The Defend Act tasks its teams to treat the human nervous system as an information-processing system with the same quantitative, scientific neutrality as medicine already treats us as heat-generating, oxygen-consuming, blood-pumping, self-cleaning systems. Specifically, teams are to examine human informational processing in the same computational terms used for self-driving vehicles that are also self-training and to examine our informational environments, whether man-made or God-made, in the same terms used for the “training data” consumed by such artificial foraging machines.

    An informational threat such as the present one must be met in new ways. In particular, the current threat differs from historic ones by undermining communication itself, making unbiased discussion of the problem nearly impossible in public or in subsidized scientific discourse. The first concern of the Defend Act is therefore to insulate the process of scientific discovery from the institutional, traditional and commercial pressures that might otherwise contaminate its answers. The act aims to maximize scientific reliability and minimize commercial, traditional and political interference, as follows.

    The investigation will proceed not by a single dream team of famous, respected and politically-vetted experts but by 10 separate teams of anonymous polymaths, living and working together in undisclosed locations, assembled from international scientists under international auspices; for example, the American Centers for Disease Control and Prevention will collaborate with the World Health Organization.

    Each team will be tasked with producing its best version of the long-term scientific truth, that is, the same truth every other team ought to obtain based on accepted universal principles. Teams pursuing actual scientific coherence thus ought to converge in their answers. Any team tempted to replace the laws of nature with incentivized convenience would then find its results laughably out of step with the common, coherent consensus reported by the other teams.

    Choosing individual team members for intellectual flexibility and independence, rather than for fame or institutional influence, will ensure they can grasp the scope of the problem, articulate it fearlessly and transmit in their results no latent bias toward their home colleagues, institution, technology or discipline.

    Each team will contain at least two experts from each of the three information-science fields, each able to approximately understand the technical language of the others and thus collectively to understand all aspects of human informational functionality and dysfunctionality. To ensure the conclusions apply to humans everywhere, at least one-third of each team will consider themselves culturally non-American.

    Each team will operate according to the best practices of deliberative decision-making, such as those used by “deliberative democracy”: live nearby, meet in person a few hours a day over months in a quiet place and enjoy access to whatever experts and sources of information they choose to use. Their budget (about $4 million per team) will be sufficient for each to produce its report in one year, through a variety of public-facing communications media: written reports, slide decks, video recordings, private meetings and public speeches. Between the multiple team members, multiple teams and multiple media, it will be difficult for entrenched powers to downplay inconvenient truths.

    Released simultaneously, all public reports will cover four topics with a broad brush:

    1. Summarizing the informational distractions and damage one would expect in advance, based only on the mathematical principles of autonomous navigation mentioned above, including not only sensory distractions but also the cognitive load of attending to interruptions and following rules, including rules intended to improve the situation.

    2. Summarizing, as meta-studies, the general (and generally true) conclusions of scientifically reputable experimental studies and separately the general (and generally misleading) conclusions of incentivized studies.

    3. Providing guideline formulae of damage and therapy, based on straightforward technical metrics of each specific information source such as timing delay, timing uncertainty, statistical pattern, information format, etc., with which to predict the nature, timescale, duration and severity of informational damage or recuperation from it.

    4. Providing guidelines for dissemination, discussion and regulatory approaches most likely not to be undermined by pressures toward the status quo.

    Within two years of passing this act, for under $100 million, the world will understand far better the human stakes of artificial input, and the best means for making our children safe from it again.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • US lawmakers ask FBI to investigate Parler app's role in Capitol attack

    American lawmakers have asked the FBI to investigate the role of Parler, the social media website and app popular with the American far right, in the violence at the US Capitol on 6 January.

    Carolyn Maloney, chair of the House oversight and reform committee, asked the FBI to review Parler’s role “as a potential facilitator of planning and incitement related to the violence, as a repository of key evidence posted by users on its site, and as a potential conduit for foreign governments who may be financing civil unrest in the United States”. Maloney also asked the FBI to review Parler’s financing and its ties to Russia.

    Maloney cited press reports that detailed violent threats on Parler against state elected officials for their role in certifying the election results before the 6 January attack that left five dead. She also noted that numerous Parler users have been arrested and charged with threatening violence against elected officials or for their roles in the attack.

    She cited justice department charges against a Texas man who used a Parler account to post threats that he would return to the Capitol on 19 January “carrying weapons and massing in numbers so large that no army could match them”. The justice department said the threats were viewed by other social media users tens of thousands of times.

    Parler launched in 2018 and won more users in the last months of the Trump presidency as social media platforms like Twitter and Facebook cracked down more forcefully on falsehoods and misinformation. The social network, which resembles Twitter, fast became the hottest app among American conservatives, with high-profile proponents like Senator Ted Cruz recruiting new users.

    But following the 6 January insurrection at the US Capitol, Google banned Parler from Google Play and Apple suspended it from the App Store. Amazon then suspended Parler from its web hosting service AWS, in effect taking the site offline unless it could find a new company to host its services.

    The website partially returned online this week, though it displayed only a message from its chief executive, John Matze, saying he was working to restore functionality with the help of a Russian-owned technology company.

    The FBI and Parler did not immediately respond to requests for comment.

    More than 25,000 national guard troops and new fencing ringed with razor wire were among the unprecedented security steps put in place ahead of Wednesday’s inauguration of President Joe Biden.

  • Engineer who stole trade secrets from Google among those pardoned by Trump

    Sign up for the Guardian’s First Thing newsletterIn his final hours of office, Donald Trump pardoned a former Google engineer who was convicted of stealing trade secrets from the company before taking up a new role with competitor Uber.Anthony Levandowski, 40, had been sentenced in August 2020 to 18 months in prison after pleading guilty to inappropriately downloading trade secrets from Google’s self-driving car operation Waymo, where he was an engineer.The surprise pardon was remarkable for its star-studded list of supporters and its justification. “Mr Levandowski [pleaded] guilty to a single criminal count arising from civil litigation,” read the White House announcement. “Notably, his sentencing judge called him a ‘brilliant, groundbreaking engineer that our country needs’.”The single guilty count was the result of a plea bargain; the engineer was originally charged with 33 counts of theft and attempted theft of trade secrets. And the sentencing judge, William Alsup, described Levandowski’s theft as “the biggest trade secret crime I have ever seen” and refused the engineer’s request for home confinement, saying, it would give “a green light to every future brilliant engineer to steal trade secrets. Prison time is the answer to that.”Levandowski had not yet begun his prison sentence due to the Covid-19 pandemic. A hearing on the timing of his prison sentence had been scheduled for 9 February.Levandowski was a leader in the race to develop self-driving cars. He made a name for himself in the autonomous vehicle space after building a driverless motorcycle in a contest organized by the Pentagon’s research arm, Darpa, in 2004.Levandowski went on to found his own startup, 510 systems, which was acquired by Google in 2011. At Google, he helped to develop driverless cars until 2016. 
Upon leaving the company and while negotiating a new role at Uber, he later admitted, he downloaded more than 14,000 Google files to his personal laptop. Whether any secrets from those files made their way into Uber’s self-driving technology became the center of a bitter legal battle between the two tech giants that resulted in a $245m settlement for Google’s self-driving spin-off, Waymo, and criminal prosecution for Levandowski. The White House cited the support of 13 individuals in its pardon statement, including the billionaire Facebook board member Peter Thiel and several members of his coterie: Trae Stephens and Blake Masters, who have both worked for Thiel’s various investment firms, and Ryan Petersen, James Proud and Palmer Luckey, whose startups have all received investment from Thiel. Thiel donated to Trump’s 2016 campaign, spoke at his nominating convention, and gave a press conference in which he argued that the then-candidate’s calls for a ban on immigration by Muslims should not be taken “literally”. In 2016, as Thiel was growing more engaged with the pro-Trump far right, he met with a prominent white nationalist, BuzzFeed News reported. As Trump’s presidency floundered, Thiel distanced himself from his earlier support. Luckey is best known as the founder of Oculus, the virtual reality headset startup that was acquired by Facebook for $2bn in 2014. His politics came under scrutiny during the 2016 campaign when it was revealed that he was funding a group dedicated to “shitposting” and anti-Hillary Clinton memes, and he was pushed out of Facebook in 2017. In July, his new startup, Anduril Industries, won a five-year contract with US Customs and Border Protection to provide AI technology for border surveillance. Other supporters of the pardon include the former Disney executive Michael Ovitz and three of Levandowski’s attorneys. Levandowski was one of 143 people to be granted clemency by Trump on his last day in office. 
The former president pardoned 70 people and commuted the sentences of a further 73. The recipients include Trump’s former senior adviser Steve Bannon, rappers Lil Wayne and Kodak Black, the former Detroit mayor Kwame Kilpatrick and scores of others. The White House said Levandowski had “paid a significant price for his actions and plans to devote his talents to advance the public good”. Since his legal troubles began, Levandowski has founded a new self-driving car company and established a church focused on “the realization, acceptance and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software”. The website for the Way of the Future Church appears to have become defunct at some point in March or April 2020. Reuters contributed to this report. More

  • in

    Welcome to The Economist’s Technological Idealism

    Every publication has a worldview. Each cultivates a style of thought, ideology or philosophy designed to meet the expectations of its readers and to confirm a shared way of perceiving the world around them. Even Fair Observer has a worldview, in which, thanks to the diversity of its contributors, every topic deserves to be made visible from multiple angles. Rather than emphasizing ideology, such a worldview places a quintessential value on human perception and experience.

    Traditional media companies profile their readership and pitch their offering to their target market’s preferences. This often becomes their central activity: reporting the news and informing the public becomes secondary to using news reporting to validate a worldview that may not be explicitly declared. Some media outlets reveal their bias, while others mask it and claim to be objective. The Daily Devil’s Dictionary has frequently highlighted the bias of newspapers like The New York Times that claim to be objective but consistently impose their worldview. In contrast, The Economist, founded in 1843, has, throughout its history, prominently put its liberal — and now neoliberal — worldview on public display. 


    Many of The Economist’s articles are designed to influence both public opinion and public policy. One that appeared at the end of last week exemplifies the practice, advertising its worldview. It could be labeled “liberal technological optimism.” The title of the article sets the tone: “The new era of innovation — Why a dawn of technological optimism is breaking.” The byline indicates the author: Admin. In other words, this is a direct expression of the journal’s worldview.

    The article begins by citing what it assesses as the trend of pessimism that has dominated the economy over the past decade. The text quickly focuses on the optimism announced in the title. And this isn’t just any optimism, but an extreme form of joyous optimism that reflects a Whiggish neoliberal worldview. The “dawn” cliché makes it clear that it is all about the hope of emerging from a dark, ominous night into the cheer of a bright morning with the promise of technological bliss. Central to the rhetoric is the idea of a break with the past, which takes form in sentences such as this one: “Eventually, synthetic biology, artificial intelligence and robotics could upend how almost everything is done.”

    Today’s Daily Devil’s Dictionary definition:

    Upend:

    As used by most people: to knock over, impede progress or upset a person’s or an object’s stability.

    As used by The Economist: to move forward, to embody progress.

    Contextual Note

    In recent decades, the notion of “disruptive innovation” has been elevated to the status of the highest ideal of modern capitalism. Formerly, disruption had a purely negative connotation as a factor of risk. Now it has become the obligatory goal of dynamic entrepreneurs. Upending was once something to be avoided; now it is actively pursued as the key to success. Let “synthetic biology, artificial intelligence and robotics” do their worst as they disrupt the habits and lifestyles of human beings. The Economist seems to be saying that the more upending entrepreneurs manage to do, the more their profits will grow.


    In the neoliberal scheme of things, high profit margins resulting from the automatic monopoly of disruptive innovation will put more money in the hands of those who know how to use it — the entrepreneurs. Once they have settled the conditions for mooring their yachts in Monte Carlo, they may have time to think about creating new jobs, the one thing non-entrepreneurial humans continue to need and crave.

    For ordinary people, the new jobs may mean working alongside armies of artificially intelligent robots, though in what capacity nobody seems to know. In all likelihood, disruptive thinkers will eventually have to imagine a whole new set of “bullshit jobs” to replace the ones that have been upended. The language throughout the article radiates an astonishingly buoyant worldview at a moment of history in which humanity is struggling to survive the effects of an aggressive pandemic, to say nothing of the collapse of the planet’s biosphere, itself attributable to the unbridled assault of disruptive technology over the past 200 years.

    What The Economist wants us to believe is that the next round of disruption will be a positive one, mitigating the effects of the previous round that produced, alongside fabulous financial prosperity, a series of increasingly dire negative consequences.

    The article’s onslaught of rhetoric begins with the development of the cliché present in the title telling us that “a dawn of technological optimism is breaking.” The authors scatter an impressive series of positively resonating ideas through the body of the text: “speed,” “prominent breakthroughs,” “investment boom,” “new era of progress,” “optimists,” “giddily predict,” “advances,” “new era of innovation,” “lift living standards,” “new technologies to flourish,” “transformative potential,” “science continues to empower medicine,” “bend biology to their will,” “impressive progress,” “green investments,” “investors’ enthusiasm,” “easing the constraints,” “boost long-term growth,” “a fresh wave of innovation” and “economic dynamism.”

    The optimism sometimes takes a surprising twist. The authors forecast that in the race for technological disruption, “competition between America and China could spur further bold steps.” Political commentators in the US increasingly see conflict with China as inevitable. Politicians are pressured to get tough on China. John Mearsheimer notably insists on the necessity of hegemonic domination by the US. Why? Because liberal capitalism must conquer, not cooperate. But in the rosy world foreseen by The Economist, friendship will carry the day.

    Historical Note

    We at the Daily Devil’s Dictionary believe the world would be a better place if schools offered courses on how to decipher the media. That is unlikely to happen any time soon because today’s schools are institutions that function along the same lines as the media. They have been saddled with the task of disseminating an official worldview designed to support the political and economic system that supports them. 

    Official worldviews always begin with a particular reading of history. Some well-known examples show how nations design their history, the shared narrative of the past, to mold an attitude about the future. In the US, the narrative of the war that led to the founding of the nation established the cultural idea of the moral validity of declaring independence, establishing individual rights and rebelling against unjust authority. Recent events in Washington, DC, demonstrate how that instilled belief, when assimilated uncritically, can lead to acts aimed at upending both society and government.

    In France, the ideas associated with the French Revolution, a traumatically upending event, spawned a different type of belief in individual rights. For the French, those rights must be asserted collectively through organized actions of protest on any issue. US individualism, founded on the frontier ideal of self-reliance, easily turns protest into vigilante justice by the mob. In France, protests take the form of strikes and citizen movements.


    The British retain the memory of multiple historical invasions of their island by Romans, Angles, Saxons, Vikings and Normans, and of more recent attempts by Napoleon and Hitler. The British people have always found ways of resisting. This habit led enough of them, seeing the European Union as yet another invader, to vote for Brexit.

    The Italian Renaissance blossomed in the brilliant courts and local governments of its multiple city-states. Although Italy was unified in 1870, its citizens have never fully felt they belonged to a modern nation-state. The one serious but ultimately futile attempt was Mussolini’s fascism, which represented the opposite extreme of autonomous city-states.

    The article in The Economist contains some examples of its reading of economic history. At the core of its argument is this reminder: “In the history of capitalism rapid technological advance has been the norm.” While asserting neoliberal “truths,” such as “Governments need to make sure that regulation and lobbying do not slow down disruption,” it grudgingly acknowledges that government plays a role in technological innovation. Still, the focus remains on what private companies do, even though it is common knowledge that most consumer technology originated in taxpayer-funded military research. 

    Here is how The Economist defines the relationship: “Although the private sector will ultimately determine which innovations succeed or fail, governments also have an important role to play. They should shoulder the risks in more ‘moonshot’ projects.” The people assume the risks and the corporations skim off the profit. This is neoliberal ideology in a nutshell.

    *[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of The Daily Devil’s Dictionary on Fair Observer.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy. More

  • in

    How Trump supporters are radicalised by the far right

    Far-right “playbooks” teaching white nationalists how to recruit and radicalise Trump supporters have surfaced on the encrypted messaging app Telegram ahead of Joe Biden’s inauguration.
    The documents, seen by the Observer, detail how to convert mainstream conservatives who have just joined Telegram into violent white supremacists. They were found last week by Tech Against Terrorism, an initiative launched by the UN Counter-Terrorism Executive Directorate.
    Large numbers of Trump supporters migrated on to Telegram in recent days after Parler, the social media platform favoured by the far right, was forced offline for hosting threats of violence and racist slurs after the attack on the US Capitol on 6 January.
    The documents have prompted concern that the migration of far-right extremists from Parler to Telegram has made it far harder for law enforcement to track where the next attack could come from.
    Already, hundreds of suspects threatening violence during this week’s inauguration of Biden have been identified by the FBI.
    One of the playbooks, found on a channel with 6,000 subscribers, was specially drawn up to radicalise Trump supporters who had just joined Telegram and teach them “how to have the proper OPSEC [operations security] to keep your identity concealed”.
    The four-page document encourages recruiters to avoid being overtly racist or antisemitic initially when approaching Trump supporters, stating: “Trying to show them racial IQ stats and facts on Jewish power will generally leave them unreceptive… that material will be instrumental later on in their ideological journey.
    “The point of discussion you should focus on is the blatant anti-white agenda that is being aggressively pushed from every institution in the country, as well as white demographic decline and its consequences.”
    The document concludes with its author stating: “Big Tech made a serious mistake by banishing conservatives to the one place [Telegram] where we have unfettered access to them, and that’s a mistake they’ll come to regret!”
    The document is named the “comprehensive redpill guide”, a reference to the online term red-pilling, used to describe a conversion to extreme far-right views.
    The document adds: “Not every normie can be redpilled, but if they’re receptive and open-minded to hearing what you have to say, you should gradually be sending them edgier pro-white/anti-Zionist content as they move along in their journey.”
    Another white nationalist recruitment guide uncovered by Tech Against Terrorism, which is working with global tech firms to tackle terrorist use of the internet, shares seven steps of “conservative conversion”. More

  • in

    The silencing of Trump has highlighted the authoritarian power of tech giants | John Naughton

    It was eerily quiet on social media last week. That’s because Trump and his cultists had been “deplatformed”. By banning him, Twitter effectively took away the megaphone he’s been masterfully deploying since he ran for president. The shock of the 6 January assault on the Capitol was seismic enough to convince even Mark Zuckerberg that the plug finally had to be pulled. And so it was, even to the point of Amazon Web Services terminating the hosting of Parler, a Twitter alternative for alt-right extremists. The deafening silence that followed these measures was, however, offset by an explosion of commentary about their implications for freedom, democracy and the future of civilisation as we know it. Wading knee-deep through such a torrent of opinion about the first amendment, free speech, censorship, tech power and “accountability” (whatever that might mean), it was sometimes hard to keep one’s bearings. But what came to mind continually was HL Mencken’s astute insight that “for every complex problem there is an answer that is clear, simple and wrong”. The air was filled with people touting such answers. In the midst of the discursive chaos, though, some general themes could be discerned. The first highlighted cultural differences, especially between the US, with its sacred first amendment, on the one hand, and, on the other, European and other societies, which have more ambivalent histories of moderating speech. The obvious problem with this line of discussion is that the first amendment is about government regulation of speech and has nothing whatsoever to do with tech companies, which are free to do as they like on their platforms. A second theme viewed the root cause of the problem as the lax regulatory climate in the US over the last three decades, which led to the emergence of a few giant tech companies that effectively became the hosts for much of the public sphere. 
If there were many Facebooks, YouTubes and Twitters, so the counter-argument runs, then censorship would be less effective and problematic because anyone denied a platform could always go elsewhere. Then there were arguments about power and accountability. In a democracy, those who make decisions about which speech is acceptable and which isn’t ought to be democratically accountable. “The fact that a CEO can pull the plug on Potus’s loudspeaker without any checks and balances,” fumed EU commissioner Thierry Breton, “is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organised in the digital space.” Or, to put it another way, who elected the bosses of Facebook, Google, YouTube and Twitter? What was missing from the discourse was any consideration of whether the problem exposed by the sudden deplatforming of Trump and his associates and camp followers is actually soluble – at least in the way it has been framed until now. The paradox that the internet is a global system but law is territorial (and culture-specific) has traditionally been a way of stopping conversations about how to get the technology under democratic control. And it was running through the discussion all week like a length of barbed wire that snagged anyone trying to make progress through the morass. All of which suggests that it’d be worth trying to reframe the problem in more productive ways. One interesting suggestion for how to do that came last week in a thoughtful Twitter thread by Blayne Haggart, a Canadian political scientist. Forget about speech for a moment, he suggests, and think about an analogous problem in another sphere – banking. “Different societies have different tolerances for financial risk,” he writes, “with different regulatory regimes to match. 
Just like countries are free to set their own banking rules, they should be free to set strong conditions, including ownership rules, on how platforms operate in their territory. Decisions by a company in one country should not be binding on citizens in another country.” In those terms, HSBC may be a “global” bank, but when it’s operating in the UK it has to obey British regulations. Similarly, when operating in the US, it follows that jurisdiction’s rules. Translating that to the tech sphere suggests that the time has come to stop accepting the tech giants’ claims to be hyper-global corporations, when in fact they are US companies operating in many jurisdictions across the globe, paying as little local tax as possible and resisting local regulation with all the lobbying resources they can muster. Facebook, YouTube, Google and Twitter can bleat as sanctimoniously as they like about freedom of speech and the first amendment in the US, but when they operate here, as Facebook UK, say, then they’re merely British subsidiaries of an American corporation incorporated in California. And these subsidiaries obey British laws on defamation, hate speech and other statutes that have nothing to do with the first amendment. Oh, and they pay taxes on their local revenues.

What I’ve been reading

Capitol ideas
What Happened? is a blog post by the Duke sociologist Kieran Healy, which is the most insightful attempt I’ve come across to explain the 6 January attack on Washington’s Capitol building.

Tweet and sour
How @realDonaldTrump Changed Politics — and America. Derek Robertson in Politico on how Trump “governed” 140 characters at a time.

Stay safe
The Plague Year is a terrific New Yorker essay by Lawrence Wright that includes some very good reasons not to be blasé about Covid. More