More stories

  •

    'It let white supremacists organize': the toxic legacy of Facebook's Groups

    Mark Zuckerberg, the Facebook CEO, announced last week that the platform will no longer algorithmically recommend political groups to users in an attempt to “turn down the temperature” on online divisiveness.

    But experts say such policies are difficult to enforce, much less quantify, and the toxic legacy of the Groups feature and the algorithmic incentives promoting it will be difficult to erase.

    “This is like putting a Band-Aid on a gaping wound,” said Jessica J González, the co-founder of the anti-hate speech group Change the Terms. “It doesn’t do enough to combat the long history of abuse that’s been allowed to fester on Facebook.”

    Groups – a place to create ‘meaningful social infrastructure’

    Facebook launched Groups, a feature that allows people with shared interests to communicate on closed forums, in 2010, but began to make a more concerted effort to promote the feature around 2017, after the Cambridge Analytica scandal cast a shadow on the platform’s News Feed.

    In a long blogpost in February 2017 called Building Global Community, Zuckerberg argued there was “a real opportunity” through groups to create “meaningful social infrastructure in our lives”.

    He added: “More than one billion people are active members of Facebook groups, but most don’t seek out groups on their own – friends send invites or Facebook suggests them. If we can improve our suggestions and help connect one billion people with meaningful communities, that can strengthen our social fabric.”

    After growing its group suggestions and advertising the feature extensively – including during a 60-second spot in the 2020 Super Bowl – Facebook did see a rise in use. In February 2017 there were 100 million people on the platform who were in groups they considered “meaningful”. Today, that number is up to more than 600 million.

    That fast rise, however, came with little oversight and proved messy.
    In shifting its focus to Groups, Facebook began to rely more heavily on unpaid moderators to police hate speech on the platform. Groups proved a more private place to speak, for conspiracy theories to proliferate and for some users to organize real-life violence – all with little oversight from outside experts or moderators.

    In 2020, Facebook introduced a number of new rules to “keep Facebook groups safe”, including new consequences for individuals who violate rules and increased responsibility for group admins to keep users in line. The company says it has hired 35,000 people to address safety on Facebook, including engineers, moderators and subject matter experts, and has invested in AI technology to spot posts that violate its guidelines.

    “We apply the same rules to Groups that we apply to every other form of content across the platform,” a Facebook company spokesperson said. “When we find Groups breaking our rules we take action – from reducing their reach to removing them from recommendations, to taking them down entirely. Over the years we have invested in new tools and AI to find and remove harmful content and developed new policies to combat threats and abuse.”

    Researchers have long complained that little is shared publicly about how, exactly, Facebook’s algorithms work, what is being shared privately on the platform, and what information Facebook collects on users. The increased popularity of Groups has made it even more difficult to keep track of activity on the platform.

    “It is a black box,” González said of Facebook’s policy on Groups. “This is why many of us have been calling for years for greater transparency about their content moderation and enforcement standards.”

    Meanwhile, the platform’s algorithmic recommendations sucked users further down the rabbit hole.
    Little is known about exactly how Facebook’s algorithms work, but it is clear the platform recommends that users join groups similar to the ones they are already in, based on keywords and shared interests. An internal report in 2016 found that “64% of all extremist group joins are due to our recommendation tools”.

    “Facebook has let white supremacists organize and conspiracy theorists organize all over its platform and has failed to contain that problem,” González said. “In fact it has significantly contributed to the spread of that problem through its recommendation system.”

    ‘We need to do something to stop these conversations’

    Facebook’s own research showed that algorithmic recommendations of groups may have contributed to the rise of violence and extremism. On Sunday, the Wall Street Journal reported that internal documents showed executives were aware of risks posed by groups and were warned repeatedly by researchers to address them. In one presentation in August 2020, researchers said roughly “70% of the top 100 most active US Civic Groups are considered non-recommendable for issues such as hate, misinfo, bullying and harassment”.

    “We need to do something to stop these conversations from happening and growing as quickly as they do,” the researchers wrote, according to the Wall Street Journal, suggesting measures to slow the growth of Groups until more could be done to address the issues.

    Several months later, Facebook halted algorithmic recommendations for political groups ahead of the US elections – a move that has been extended indefinitely with the policy announced last week.
    The change seemed to be motivated by the 6 January insurrection, which the FBI found had been tied to organizing on Facebook.

    In response to the story in the Wall Street Journal, Guy Rosen, Facebook’s vice-president of integrity, who oversees content moderation policies on the platform, said the problems were indicative of emerging threats rather than an inability to address long-term problems. “If you’d have looked at Groups several years ago, you might not have seen the same set of behaviors,” he said.

    But researchers say the use of Groups to organize and radicalize users is an old problem. Facebook groups had been tied to a number of harmful incidents and movements long before January’s violence.

    “Political groups on Facebook have always advantaged the fringe, and the outsiders,” said Joan Donovan, a lead researcher at Data and Society who studies the rise of hate speech on Facebook. “It’s really about reinforcement – the algorithm learns what you’ve clicked on and what you like and it tries to reinforce those behaviors. The groups become centers of coordination.”

    Facebook was criticized for its inability to police terror groups such as the Islamic State and al-Qaida using it as early as 2016. The platform was used extensively in organizing the Unite the Right rally in Charlottesville in 2017, where white nationalists and neo-Nazis violently marched. Militarized groups including the Proud Boys, Boogaloo Bois and militia groups all organized, promoted and grew their ranks on Facebook. In 2020, officials arrested men who had planned a violent kidnapping of the Michigan governor, Gretchen Whitmer, on Facebook.
    A 17-year-old in Illinois shot three people, killing two, at a protest organized on Facebook.

    The same algorithms have allowed the anti-vaccine movement to thrive on Facebook, with hundreds of groups amassing hundreds of thousands of members over the years. A Guardian report in 2019 found the majority of search results for the term “vaccination” were anti-vaccine, led by two misinformation groups, “Stop Mandatory Vaccination” and “Vaccination Re-education Discussion Forum”, with more than 140,000 members each. These groups were ultimately tied to harassment campaigns against doctors who support vaccines.

    In September 2020, Facebook stopped health groups from being algorithmically recommended in an effort to curb such misinformation. It has also added other rules to stop the spread of misinformation, including banning users from creating a new group if an existing group they administered has been banned.

    The origin of the QAnon movement has been traced to a post on a message board in 2017. By the time Facebook banned content related to the movement in 2020, a Guardian report had exposed that Facebook groups dedicated to the dangerous conspiracy theory were spreading on the platform at a rapid pace, with thousands of groups and millions of members.

    ‘The calm before the storm’

    Zuckerberg said in 2020 that the company had removed more than 1m groups in the past year, but experts say that action, coupled with the new policy on group recommendations, falls short.

    The platform promised to stop recommending political groups to users ahead of the elections in November and then victoriously claimed to have halved political group recommendations.
    But a report from the Markup showed that 12 of the top 100 groups recommended to users in its Citizen Browser project, which tracks links and group recommendations served to a nationwide panel of Facebook users, were political in nature.

    Indeed, the Stop the Steal groups that emerged to cast doubt on the results of the election, and that ultimately led to the violent insurrection of 6 January, amassed hundreds of thousands of followers – all while Facebook’s algorithmic recommendations of political groups were paused. Many researchers also worry that legitimate organizing groups will be swept up in Facebook’s actions against partisan political groups and extremism.

    “I don’t have a whole lot of confidence that they’re going to be able to actually sort out what a political group is or isn’t,” said Heidi Beirich, who is the co-founder of the Global Project Against Hate and Extremism and sits on Facebook’s Real Oversight Board, a group of academics and watchdogs criticizing Facebook’s content moderation policies.

    “They have allowed QAnon, militias and other groups to proliferate for so long that remnants of these movements remain all over the platform,” she added. “I don’t think this is something they are going to be able to sort out overnight.”

    “It doesn’t actually take a mass movement, or a massive sea of bodies, to do the kind of work on the internet that allows for small groups to have an outsized impact on the public conversation,” added Donovan. “This is the calm before the storm.”

  •

    What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI | Arwa Mahdawi

    Want to see a half-naked woman? Well, you’re in luck! The internet is full of pictures of scantily clad women. There are so many of these pictures online, in fact, that artificial intelligence (AI) now seems to assume that women just don’t like wearing clothes.

    That is my stripped-down summary of the results of a new research study on image-generation algorithms, anyway. Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time, the image was autocompleted with the man wearing a suit. When they fed the same algorithm a similarly cropped photo of a woman, it autocompleted her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it, too, automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers removed the computer-generated image of AOC in a swimsuit from the research paper.)

    Why was the algorithm so fond of bikini pics? Well, because garbage in means garbage out: the AI “learned” what a typical woman looked like by consuming an online dataset that contained lots of pictures of half-naked women. The study is yet another reminder that AI often comes with baked-in biases. And this is not an academic issue: as algorithms control increasingly large parts of our lives, it is a problem with devastating real-world consequences. Back in 2015, for example, Amazon discovered that the secret AI recruiting tool it was using treated any mention of the word “women’s” as a red flag. Racist facial recognition algorithms have also led to black people being arrested for crimes they didn’t commit.
    And, last year, an algorithm used to determine students’ A-level and GCSE grades in England seemed to disproportionately downgrade disadvantaged students.

    As for those image-generation algorithms that reckon women belong in bikinis? They are used in everything from digital job interview platforms to photograph editing. And they are also used to create huge amounts of deepfake porn. A computer-generated AOC in a bikini is just the tip of the iceberg: unless we start talking about algorithmic bias, the internet is going to become an unbearable place to be a woman.

  •

    Tech Exodus: Is Silicon Valley in Trouble?

    On January 7, the news media announced that Elon Musk had surpassed Jeff Bezos as “the richest person on Earth.” I have a personal interest in the story. Two of my neighbors just bought a Tesla, and this morning, on the highway between Geneva and Lausanne, an angry Tesla driver flashed me several times, demanding that I let him pass. His license plate was from Geneva. Apparently, these days, driving a Tesla automatically gives you privileges, including speeding, particularly if you sport a Geneva or Zurich license plate. In the old days, at least in Germany, bullying others on the highway was a privilege reserved for Mercedes and BMW drivers, who, as the saying went, had an “inbuilt right-of-way.” Oh my, how times have changed.

    Elon Musk is one of those success stories that only America can write. He is the postmodern equivalent of Howard Hughes: a visionary, if slightly unhinged, genius who loved to flout conventions and later in his life became a recluse. And yet, had you bought 100 shares of Tesla a year ago, your initial investment would be worth more than eight times as much today (from $98 to $850). Tough shit, as they like to say in Texas.

    The Lone Star

    Why Texas? At the end of last year, Elon Musk announced that he was going to leave Silicon Valley to find greener pastures in Texas. To be more precise, Austin, Texas. Austin is not only the capital of the Lone Star State. It also happens to be an oasis of liberalism in a predominantly red state. When I was a student at the University of Texas in the late 1970s, we would go to the Barton Springs pool, one of the few places where women could go topless. For a German, this was hardly noteworthy; for the average Texan, it probably bordered on revolutionary — and obscene.

    In the 2020 presidential election, in Travis County, which includes Austin and adjacent areas, Donald Trump garnered a mere 26% of the vote, compared to 52% for the whole state. Austin is also home to the University of Texas, one of America’s premier public universities, which “has spent decades investing in science and engineering programs.”


    Musk is hardly alone in relocating to Texas. Recently, both Hewlett Packard Enterprise and Oracle announced they would move operations there, the former to Houston, the latter to Austin, where it will join long-time resident tech heavyweights such as the recently reinvigorated Advanced Micro Devices and Dell. It is not clear, however, whether Oracle will feel more comfortable in Austin than in Silicon Valley. After all, Oracle was very close to the Trump administration.

    Recently, there has been a lot of talk about the “tech exodus” from Silicon Valley. Michael Lind, the influential social analyst and pundit who also happens to teach at UT, has preferred to speak of a “Texodus,” as local patriotism obliges. Never short of hyperbole, Lind went so far as to boldly predict that the “flight of terrified techies from California to Texas marks the end of one era, and the beginning of a new one.” Up in Seattle and over in Miami, questions were raised as to whether, and how, those cities might benefit from the “Texit.”

    Lind’s argument is that over the past decade or so, Silicon Valley has gone off track. In the past, tech startups in the Bay Area succeeded because they produced something. As he puts it, Elon Musk and Jeff Bezos “are building and testing rockets in rural Texas.” Musk produces cars and batteries. Against that, Silicon Valley’s new “tech” darlings come up with clever ideas, such as allowing “grandmothers to upload videos of their kittens for free, and then sell the advertising rights to the videos and pocket the cash.”

    The models are Uber and Lyft, which Lind dismisses as nothing more than hyped-up telephone companies. Apparently, Lind does not quite appreciate the significance of the gig economy and particularly the importance of big data, which is the real capital of these companies and makes them “tech.” This is hardly surprising, given Austin’s history of hostility to the sharing economy — at least as far as its industry giants are concerned. As early as 2016, Austin held a referendum on whether or not the local government should be allowed to regulate Uber and Lyft. The companies lost, and subsequently fired 10,000 drivers, leaving Austinites stranded.

    In the months that followed, underground ride-sharing schemes sprang up to fill the void. In the meantime, Uber and Lyft lobbied the state legislature, which ultimately passed a ride-hailing law establishing licensing at the state level. By circumventing local attempts at regulation, the law allowed Uber and Lyft to resume operations.

    Unfortunately for Lind, he also has it in for Twitter and Facebook for their “regular and repeated censorship of Republicans and conservatives” — an unusual failure of foresight in light of recent events at the Capitol. Ironically enough, Facebook has a large presence in Austin. Business sources from the city reported that Facebook is in the market for an additional 1 million square feet of office space in Austin. So is Google, which in recent years has significantly expanded its presence in the city and elsewhere in Texas.

    Colonial Transplant

    Does that mean Austin is likely to rival Silicon Valley as America’s top innovation center for the high-tech industry? Not necessarily. As Margaret O’Mara has pointed out in the pages of The New York Times, this is not the first time Silicon Valley has faced this kind of loss. And yet, “Silicon Valley always roared back, each time greater than the last. One secret to its resilience: money. The wealth created by each boom — flowing chiefly to an elite circle of venture investors and lucky founders — outlasted each bust. No other tech region has generated such wealth and industry-specific expertise, which is why it has had such resilience.”

    Industry insiders concur. In their view, Austin is less a competitor than a “colony.” Or, to put it slightly differently, Austin is nothing more than an outpost for tech giants such as Google and Facebook, while their main operations stay in Silicon Valley. It is anyone’s guess whether this time, things will pan out the same or somewhat differently. This depends both on the push and pull factors that inform the most recent tech exodus — in other words, on what motivates Silicon Valley denizens to abandon the Bay Area for the hills surrounding Austin.

    A recent Berkeley IGS poll provides some answers. According to the poll, around half of Californians thought about leaving the state in 2019. Among the most important reasons were the high cost of housing, the state’s high taxes and, last but not least, the state’s “political culture.” More detailed analysis suggests that the latter is a very significant factor: those identifying themselves as conservatives or Republicans were three times as likely as liberals and Democrats to say they were seriously considering leaving the state.


    The fact that 85% of Republicans who thought about leaving did so for reasons of political culture is a strong indication of the impact of partisanship. Among Democrats, only around 10% mentioned political culture as a reason for thinking about leaving the state. Partisanship was also reflected in the response to the question of whether California is a “land of opportunity.” Among Democrats, 80% thought so; among Republicans, only about 40% did.

    Until recently, thinking about leaving hardly ever translated into actually going. COVID-19 has fundamentally changed the equation. The pandemic introduced the notion of working from home, of remote work via “old” technologies such as Skype and new ones like Zoom. In late February 2020, Zoom’s stock traded at around $100; by mid-October, it traded at more than $550. It has since lost some $200 of that value, largely as a result of the prospect of a “post-pandemic world” made possible by the availability of vaccines.

    At least for the moment, remote work has fundamentally changed the rationale for being tied to a particular locality. Before COVID-19, as Katherine Bindley has noted in The Wall Street Journal, “leaving the area meant walking away from some of the best-paying and most prestigious jobs in America.” In the wake of the pandemic, this is no longer the case. In fact, major Silicon Valley tech companies, such as Google, Facebook and Lyft, have told their workforces that they won’t be returning to their offices until sometime in late summer. Given that California has been one of the states most affected by the virus, and given its relatively large population heavily concentrated in two metropolitan areas, even these projections might be overly optimistic.

    Distributed Employment

    And it is not at all clear whether, once the pandemic has run its course, things will return to “normal.” Even before the pandemic, remote work was on the rise. In 2016, according to Gallup data, more than 40% of employees “worked remotely in some capacity, meaning they spent at least some of their time working away from their coworkers.” Tech firms have been particularly accommodating of employee wishes to work remotely, even on a permanent basis. In May, The Washington Post reported that Twitter had unveiled plans to offer its employees the option to work from home “forever.” In an internal survey in July, some 70% of Twitter employees said they wanted to continue working from home at least three days a week.

    Other tech companies are likely to follow suit, in line with the new buzzword in management thinking, “distributed employment,” itself a Silicon Valley product. Its most prominent promoter has been Nicholas Bloom of Stanford University. Bloom has shown that work from home tends to increase productivity, for at least two reasons. First, people working from home actually work their full shift. Second, they tend to concentrate better than in an office environment full of noise and distractions.

    Additional support for distributed employment has come from Gallup research. The results indicate that “remote workers are more productive than on-site workers.” Gallup claims that remote work boosts employee morale and their engagement with the company, which leads to the conclusion that “off-site workers offer leaders the greatest gains in business outcomes.”

    It is for these reasons that this time, Silicon Valley might be in real trouble. Distributed employment fundamentally challenges the rationale behind the Valley’s success. As The Washington Post exposé put it, in the past, “great ideas at work were born out of daily in-person interactions.” Creativity came from “serendipitous run-ins with colleagues” and, as Steve Jobs put it, “from spontaneous meetings, from random discussions.” Distributed employment is the antithesis of this kind of thinking. With the potential end of this model, Silicon Valley loses much of its raison d’être — unless it manages to reinvent itself, as it has done so many times in the past.

    A few years ago, the Berkeley professor AnnaLee Saxenian, who wrote a highly influential comparative study of how Silicon Valley outstripped Boston’s Route 128, noted that Silicon Valley was “a set of human beings, and a set of institutions around them, that happen to be very well adapted to the world that we live in.” The question is whether or not this is still the case. After all, at one point, Route 128 was a hotspot of creativity and innovation and a serious rival of Silicon Valley. A couple of decades later, it was completely eclipsed by the Valley, a victim of an outdated industrial system based on companies that kept largely to themselves.

    Against that, in the Valley there emerged a new network-based system that promoted mutual learning, entrepreneurship and experimentation. The question is to what degree this kind of system will be capable of dealing with the new challenges posed by COVID-19, which has thoroughly disrupted the fundamentals of the system.


    In the meantime, locations such as Austin look particularly attractive. This is where the pull factors come in. Unlike California, Texas has no state income tax. California’s top state income tax rate is more than 13%, the highest in the United States. To make things worse, late last year, California legislators considered raising taxes on the wealthy to bring in money to alleviate the plight of the homeless, who have flocked in particular to San Francisco. Earlier on, state legislators had sought to raise the top state income tax rate to almost 17%. The measure failed to pass.

    At the same time, they also came up with a piece of legislation “that would have created a first-in-the-nation wealth tax that included a feature to tax former residents for 10 years after they left the Golden State.” This one failed too, but it left a sour taste in the mouth of many a tech millionaire and certainly did little to counteract the flight from the state.

    No wonder Austin looks so much better, and not only because of Texas’s generally more business-friendly atmosphere. Austin offers California’s tech expats a lifestyle similar to that of the Bay Area, but at a considerably more reasonable cost. Add to that the absence of one of the most distressing assaults on hygiene: between 2011 and 2018, the number of officially recorded incidences of human feces on the streets of San Francisco quintupled, from 5,500 cases to over 28,000 cases — largely the result of the city’s substantial homeless population. The fact is that California is one of the most unequal states in the nation. As Farhad Manjoo has recently put it in The New York Times, when “the cost of living is taken into account, billionaire-brimming California ranks as the most poverty-stricken state, with a fifth of the population struggling to get by.”

    Homelessness is one result. And California’s wealthy liberals have done little to make things better. On the contrary, more often than not, they have used their considerable clout to block any attempt to change restrictive zoning laws and increase the supply of affordable housing, what Manjoo characterizes as “exclusionary urban restrictionism.”

    To be sure, restrictive zoning laws have a long history in San Francisco, going all the way back to the second half of the 19th century. At the time, San Francisco was home to a significant Chinese population, largely living in boarding houses. In the early 1870s, the city came up with new ordinances, designed “to criminalize Chinese renters and landlords so their jobs and living space could be reclaimed for San Francisco’s white residents.” Ever since, zoning laws have been informed by “efforts to appease the city’s wealthy, well-connected homeowners.” And this in a city that considers itself among the most progressive in the nation.

    None of these factors in isolation explains the current tech exodus from the Bay Area. Taken together, however, they make up a rather convincing case for why this time, Silicon Valley might be in real trouble. Unfortunately enough, the exodus might contribute to the “big sort” that has occurred in the US over the past few decades, meaning the “self-segregation of Americans into like-minded communities” that has been a major factor behind the dramatic polarization of the American political landscape. The signs are there, the consequences known — at least since the assault on the US Capitol.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  •

    Claim of anti-conservative bias by social media firms is baseless, report finds

    Republicans including Donald Trump have raged against Twitter and Facebook in recent months, alleging anti-conservative bias, censorship and a silencing of free speech. According to a new report from New York University, none of that is true.

    Disinformation expert Paul Barrett and researcher J Grant Sims found that far from suppressing conservatives, social media platforms have, through algorithms, amplified rightwing voices, “often affording conservatives greater reach than liberal or nonpartisan content creators”.

    Barrett and Sims’s report comes as Republicans step up their campaign against social media companies. Conservatives have long complained that platforms such as Twitter, Facebook and YouTube show bias against the right, laments which intensified when Trump was banned from all three platforms for inciting the attack on the US Capitol which left five people dead.

    The NYU study, released by the Stern Center for Business and Human Rights, found that the claim of anti-conservative bias “is itself a form of disinformation: a falsehood with no reliable evidence to support it”.

    “There is no evidence to support the claim that the major social media companies are suppressing, censoring or otherwise discriminating against conservatives on their platforms,” Barrett said.
“In fact, it is often conservatives who gain the most in terms of engagement and online attention, thanks to the platforms’ systems of algorithmic promotion of content.”The report found that Twitter, Facebook and other companies did not show bias when deleting incendiary tweets around the Capitol attack, as some on the right have claimed.Prominent conservatives including Ted Cruz, the Texas senator, have sought to crack down on big tech companies as they claim to be victims of suppression – which Barrett and Sims found does not exist.The researchers did outline problems social media companies face when accused of bias, and recommended a series of measures.“What is needed is a robust reform agenda that addresses the very real problems of social media content regulation as it currently exists,” Barrett said. “Only by moving forward from these false claims can we begin to pursue that agenda in earnest.”A 2020 study by the Pew Research Center reported that a majority of Americans believe social media companies censor political views. Pew found that 90% of Republicans believed views were being censored, and 69% of Republicans or people who leant Republican believed social media companies “generally support the views of liberals over conservatives”.Republicans including Trump have pushed to repeal section 230 of the Communications Decency Act, which protects social media companies from legal liability, claiming it allows platforms to suppress conservative voices.The NYU report suggests section 230 should be amended, with companies persuaded to “accept a range of new responsibilities related to policing content”, or risk losing liability protections. More

  • in

    Chinese bots had key role in debunked ballot video shared by Eric Trump

    A Chinese bot network played a key role in spreading disinformation during and after the US election, including a debunked video of “ballot burning” shared by Eric Trump, a new study reveals.

    The misleading video shows a man filming himself on Virginia Beach, allegedly burning votes cast for Donald Trump. The ballots were actually samples. The clip went viral after Trump’s son Eric posted it a day later on his official Twitter page, where it got more than 1.2m views.

    The video was believed to have originated from an account associated with the QAnon conspiracy theory. But the study by Cardiff University found two China-linked accounts had shared the video before this. Twitter has since suspended one of them.

    The same Chinese network has spread anti-US propaganda, including calls for violence in the run-up to the 6 January storming of the US Capitol building by a pro-Trump mob. Afterwards, it compared the west’s response to the DC riot to political protests in Hong Kong.

    The accounts previously posted hostile messages about Trump and Joe Biden, made allegations of election fraud and promoted “negative narratives” about the US response to the coronavirus pandemic.

    Professor Martin Innes, director of Cardiff University’s crime and security institute, said open-source analysis strongly suggested “multiple links” to Beijing.

    Researchers initially thought the hidden network was not especially complex, he said. Further evidence, however, revealed what he called a “sophisticated and disciplined” online operation. Accounts did not use certain hashtags in an apparent attempt to avoid Twitter’s counter-measures. They posted during regular Chinese working hours, with gaps on a national holiday, and used machine tools to translate into English.

    “The network appears designed to run as a series of almost autonomous ‘cells’, with minimal links connecting them,” Innes said. “This structure is designed to protect the network as a whole if one ‘cell’ is discovered, which suggests a degree of planning and forethought. Therefore, this marks the network as a significant attempt to influence the trajectory of US politics by foreign actors.”

    Efforts by Russian-linked social media actors to influence US elections are well known. The special counsel Robert Mueller detailed an extensive troll operation run out of a building in St Petersburg. Its goal was to “disparage” Hillary Clinton and to promulgate “divisive” content, Mueller found.

    The Chinese accounts cannot be definitively linked to the state. But ordinary Chinese citizens do not have access to Twitter and it appears that Beijing may be seeking to emulate Kremlin practices by setting up its own US-facing political influence operation.

    Last year the university’s research team uncovered more than 400 accounts engaging in suspicious activities. These were forwarded to Twitter, which suspended them within a few days. The latest analysis suggests further accounts are still working, with the network more resilient than previously thought.

    There is compelling evidence of links to China. Posts feature the Chinese language and a focus upon topics reflecting Chinese geopolitical interests. Some 221 accounts spread content in favour of the Chinese Communist party, encompassing some 42,618 tweets, the study found.

    The accounts also attacked Trump for referring to Covid-19 as the China virus. One claimed the virus originated outside China and had actually come from the US laboratory at Fort Detrick, in Frederick, Maryland.

    The network’s main goal was “encouragement of discord” in the US, the study concluded. Most tweets about Trump were negative. The handful that were positive urged Americans to “fetch their guns”, to “fight for democracy” and to “call gunmen together” in order to win a second Trump term.

    The bots complained of “double standards” after the Capitol building riot, saying US politicians had hypocritically backed protesters who entered the Hong Kong legislative building. “The riots in Congress are a disgrace to the United States today, and will soon become the fuse of the American order,” one remarked.

  • in

    Big tech facilitated QAnon and the Capitol attack. It’s time to hold them accountable

    Donald Trump’s election lies and the 6 January attack on the US Capitol have highlighted how big tech has led our society down a path of conspiracies and radicalism by ignoring the mounting evidence that their products are dangerous.

    But the spread of deadly misinformation on a global scale was enabled by the absence of antitrust enforcement by the federal government to rein in out-of-control monopolies such as Facebook and Google. And there is a real risk social media giants could sidestep accountability once again.

    Trump’s insistence that he won the election was an attack on democracy that culminated in the attack on the US Capitol. The events were as much the fault of Sundar Pichai, Jack Dorsey and Mark Zuckerberg – CEOs of Google, Twitter and Facebook, respectively – as they were the fault of Trump and his cadre of co-conspirators.

    During the early days of social media, no service operated at the scale of today’s Goliaths. Adoption was limited and online communities lived in small and isolated pockets. When the Egyptian uprisings of 2011 proved the power of these services, the US state department became their cheerleaders, offering them a veneer of exceptionalism which would protect them from scrutiny as they grew exponentially.

    Later, dictators and anti-democratic actors would study and co-opt these tools for their own purposes. As the megaphones got larger, the voices of bad actors also got louder. As the networks got bigger, the feedback loop amplifying those voices became stronger. It is unimaginable that QAnon could gain a mass following without tech companies’ dangerous indifference.

    Eventually, these platforms became immune to forces of competition in the marketplace – they became information monopolies with runaway scale. Absent any accountability from watchdogs or the marketplace, fringe conspiracy theories enjoyed unchecked propagation. We can mark networked conspiracies from birtherism to QAnon as straight lines through the same coterie of misinformers who came to power alongside Trump.

    Today, most global internet activity happens on services owned by either Facebook or Alphabet, which includes YouTube and Google. The internet has calcified into a pair of monopolies who protect their size by optimizing to maximize “engagement”. Sadly, algorithms designed to increase dependency and usage are far more profitable than ones that would encourage timely, local, relevant and, most importantly, accurate information. The truth, in a word, is boring. Facts rarely animate the kind of compulsive engagement rewarded by recommendation and search algorithms.

    The best tool – if not the only tool – to hold big tech accountable is antitrust enforcement: enforcing the existing antitrust laws designed to rein in companies’ influence over other political, economic and social institutions.

    Antitrust enforcement has historically been the US government’s greatest weapon against such firms. From breaking up the trusts at the start of the 20th century to the present day, antitrust enforcement spurs competition and ingenuity while re-empowering citizens. Most antitrust historians agree that absent US v Microsoft in 1998, which stopped Microsoft from bundling products and effectively killing off other browsers, the modern internet would have been strangled in the crib.

    Ironically, Google and Facebook were the beneficiaries of such enforcement. Over two decades would pass before US authorities brought antitrust suits against Google and Facebook last year. Until then, antitrust had languished as a tool to counterbalance abusive monopolies.

    Big tech sees an existential threat in the renewed calls for antitrust, and these companies have aggressively lobbied to ensure key vacancies in the Biden administration are filled by their friends.

    The Democratic party is especially vulnerable to soft capture by these tech firms. Big tech executives are mostly left-leaning and donate millions to progressive causes while spouting feelgood rhetoric of inclusion and connectivity. During the Obama administration, Google and Facebook were treated as exceptional, avoiding any meaningful regulatory scrutiny. Democratic Senate leadership, specifically Senator Chuck Schumer, has recently signaled he will treat these companies with kid gloves.

    The Biden administration cannot repeat the Obama legacy of installing big tech-friendly individuals to these critical but often under-the-radar roles. The new administration, in consultation with Schumer, will be tasked with appointing a new assistant attorney general for antitrust at the Department of Justice and up to three members of the Federal Trade Commission. Figures friendly to big tech in those positions could abruptly settle the pending litigation against Google or Facebook.

    President Joe Biden and Schumer must reject any candidate who has worked in the service of big tech. Any former White House or congressional personnel who gave these companies a pass during the Obama administration should also be disqualified from consideration. Allowing big tech’s lawyers and plants to run the antitrust agencies would be the equivalent of allowing a climate-change-denying big oil executive to run the Environmental Protection Agency.

    The public is beginning to recognize the harms to society wrought by big tech, and a vibrant and bipartisan anti-monopoly movement of diverse scholars and activists has risen over the past few years. Two-thirds of Democratic voters believe, along with a majority of Republicans, that Biden should “refuse to appoint executives, lobbyists, or lawyers for these companies to positions of power or influence in his administration while this legal activity is pending”. This gives the Democratic party an opportunity to do the right thing for our country and attract new voters by fighting for the web we want.

    Big tech played a central role in the dangerous attack on the US Capitol and all of the events which led to it. Biden’s antitrust appointees will be the ones who decide if there are any consequences to be paid.

  • in

    The Science of Rebuilding Trust

    During his inauguration, President Joe Biden appealed to us, American citizens, repeatedly and emphatically, to defend unity and truth against corrosion from power and profit. Fortunately, the bedrock tensions between unity, truth, power and profit have newly-discovered mathematical definitions, so their formerly mysterious interactions can now be quantified, predicted and addressed. So in strictly (deeply) scientific terms, Biden described our core problem exactly right.

    I applaud and validate President Biden’s distillation of the problem of finding and keeping the truth, and of trusting it together. Human trust is based on high-speed neuromechanical interaction between living creatures. Other kinds of trust not based on that are fake to some degree. Lies created for money and power damage trust most of all.

    A Moment of Silence

    As Biden showed in his first act in office, the first step toward rebuilding is a moment of silence. Avoiding words, slowing down, taking time, breathing, acknowledging common grievances and recognizing a common purpose are not just human needs, but necessary algorithmic steps as well. Those are essential to setting up our common strategy and gathering the starting data that we need to make things right.

    The next step, as Biden also said, is to recognize corrupting forces such as money and power — and I would also add recognition. The third step, as I propose below, is to counter those three forces explicitly in our quest for public truth, to do the exact opposite of what money, power and careerism do, and to counter and reverse every information-processing step at which money, power and recognition might get a hold.

    Instead of using one panel of famous, well-funded experts deliberating a few hours in public, employ a dozen groups of anonymous lone geniuses, each group working separately in secret for months on the same common question. Have them release their reports simultaneously in multiple media. That way, the unplanned overlap shows most of what matters and a path to resolving the rest — an idea so crazy it just might work.

    Since I’m describing how to restore democracy algorithmically, I might as well provide an example of legislation in the algorithmic language too. To convey data-processing ideas clearly, and thereby to avoid wasting time and money building a system that won’t work, technologists display our proposals using oversimplified examples that software architects like myself call “reference implementations” and which narrative architects like my partner call “tutor texts.”

    These examples are not meant to actually work, but to unambiguously show off crucial principles. In the spirit of reference implementations, I present the following legislative proposal, written to get to the truth about one particular subject but easily rewritten to find the truth about other subjects such as global warming or fake news: The Defend the Growing Human Nervous System With Information Sciences Act.

    The Defend Act

    Over centuries, humankind has defended its children against physical extremes, dangerous chemicals and infectious organisms by resolute, rational application of the laws of nature via technology and medical science. Now is the time to use those same tools to defend our children’s growing nervous systems against the informational damage that presently undermines their trust in themselves, their families and their communities. Therefore, we here apply information science in order to understand how man-made communication helps and hurts the humans whom God made.

    The human race has discovered elemental universal laws governing processes from combustion to gravitation and from them created great and terrible technologies from fire and weapons to electricity grids and thermonuclear reactions. But no laws are more elemental than the laws of data and mathematics, and no technologies more universal and fast-growing than the mathematically-grounded technologies of information capture, processing and dissemination. Information science is changing the world we live in and, therefore, changing us as living, breathing human beings. How?

    The human race has dealt with challenges from its own technologies before. Slash-and-burn tactics eroded farmland; lead pipes poisoned water; city wells spread cholera; radioactivity caused cancer; refrigerants depleted ozone. And we have dealt with epidemics that propagated in weird and novel ways — both communicable diseases spread by touch, by body fluids, by insects, by behaviors, by drinking water, by food, and debilitating diseases of chemical imbalance, genetic dysregulation, immune collapse and misfolded proteins. Our science has both created and solved monumental problems.

    But just as no technology is more powerful than the information sciences, no toxin is more insidious, when deployed against an immature, growing, still-learning human nervous system, than extractive or exploitive artificial information.

    The Defend the Growing Human Nervous System With Information Sciences Act aims to understand first and foremost the depth and texture of the threat to growing human nervous systems in order to communicate the problem to the public at large (not to solve the problem yet). This act’s approach is based on five premises about the newly-discovered sciences of information.

    First of all, there is an urgent global mental-health crisis tightly correlated over decades with consuming unnatural sensory inputs (such as from TV screens) and interacting in unnatural ways (such as using wireless devices). These technologies seem to undermine trust in one’s own senses and in one’s connections to others, with the youngest brains bearing the greatest hurt.

    Second, computer science understands information flowing in the real world. Numerical simulations faithfully replicate the laws of physics — of combustion, explosions, weather and gravitation — inside computers, thereby confirming we understand how nature works. Autonomous vehicles such as ocean gliders, autonomous drones, self-driving cars and walking robots select and process signals from the outside to make trustworthy models in order to move through the world. This neutral, technological understanding might illuminate the information flows that mature humans also use to do those same things and which growing humans use to learn how to do them.

    Third, the science of epidemiology understands the information flows of medical research. Research has discovered and countered countless dangerous chemical and biological influences through concepts like clinical trials, randomization, viral spread, dose-response curves and false positive/negative risks. These potent yet neutral medical lenses might identify the most damaging aspects of artificial sensory interactions, in preparation for countering them in the same way they have already done for lead, tar, nicotine, sugar, endocrine disruptors and so on. The specific approach will extend the existing understanding of micro-toxins and micro-injuries to include the new micro-deceptions and micro-behavioral manipulations that undermine trust.

    Fourth, the mathematics of management and communication understands the information flows of businesses. The economic spreadsheets and prediction models that presently micromanage business and market decisions worldwide can, when provided with these new metrics of human health and damage, calculate two new things. First, the most cost-effective ways to prevent and reduce damage. Second, such spreadsheets can quantify the degree to which well-accepted and legal practices of monetized influence — advertising, branding, lobbying, incentivizing, media campaigns and even threats — potentially make the information they touch untrustworthy and thereby undermine human trust.

    America has risen to great challenges before. At its inception, even before Alexis de Tocqueville praised the American communitarian can-do spirit, this country gathered its most brilliant thinkers in a Constitutional Convention. In war, it gathered them to invent and create a monster weapon. In peace, it gathered them to land on the Moon. Over time, Americans have understood and made inroads against lead poisoning, ozone destruction, polluted water, smog, acid rain, nicotine and trans-fats. Now, we need to assemble our clearest thinkers to combat the deepest damage of all: the damage to how we talk and think.

    Finally, we humans are spiritual and soulful beings. Our experiences and affections could never be captured in data or equations, whether of calorie consumption, body temperature, chemical balance or information flow. But just as we use such equations to defend our bodies against hunger, hypothermia or vitamin deficiency, we might also use them to defend against confusion, mistrust and loneliness, without in the process finding our own real lives replaced or eclipsed. In fact, if the human nervous system and soul are indeed damaged when mathematically-synthesized inputs replace real ones, then they will be freed from that unreality and that damage only when we understand which inputs help and hurt us most.

    Informational Threat

    The Defend Act tasks its teams to treat the human nervous system as an information-processing system with the same quantitative, scientific neutrality as medicine already treats us as heat-generating, oxygen-consuming, blood-pumping, self-cleaning systems. Specifically, teams are to examine human informational processing in the same computational terms used for self-driving vehicles that are also self-training and to examine our informational environments, whether man-made or God-made, in the same terms used for the “training data” consumed by such artificial foraging machines.

    An informational threat such as the present one must be met in new ways. In particular, the current threat differs from historic ones by undermining communication itself, making unbiased discussion of the problem nearly impossible in public or in subsidized scientific discourse. Thus, the first concern of the Defend Act is to insulate the process of scientific discovery from the institutional, traditional and commercial pressures that might otherwise contaminate its answers. To that end, the act aims to maximize scientific reliability and minimize commercial, traditional and political interference as follows.

    The investigation will proceed not by a single dream team of famous, respected and politically-vetted experts but by 10 separate teams of anonymous polymaths, living and working together in undisclosed locations, assembled from international scientists under international auspices; for example, the American Centers for Disease Control and Prevention will collaborate with the World Health Organization.

    Each team will be tasked with producing its best version of the long-term scientific truth, that is, the same truth every other team ought also to obtain based on accepted universal principles. Teams pursuing actual scientific coherence thus ought to converge in their answers. Any team tempted to replace the laws of nature with incentivized convenience would then find its results laughably out of step with the common, coherent consensus reported by the other teams.
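    The convergence logic described here can be sketched numerically. The following is a toy illustration only, not part of the act: it assumes each team’s report can be reduced to a single numeric estimate, and it flags any team whose answer sits far from the robust consensus of the rest, using the median and median absolute deviation so that a minority of biased reports cannot drag the consensus toward themselves.

```python
import statistics

def flag_outliers(reports, threshold=3.0):
    """Flag reports that sit far from the consensus of the rest.

    Uses the median and the median absolute deviation (MAD), both of
    which are robust to a minority of biased reports.
    """
    med = statistics.median(reports)
    mad = statistics.median(abs(r - med) for r in reports)
    if mad == 0:  # all reports identical: nothing stands out
        return [False] * len(reports)
    return [abs(r - med) / mad > threshold for r in reports]

# Nine independent teams converge near the underlying value (about 5.0);
# one incentivized team reports a convenient 9.0 and is exposed.
reports = [4.9, 5.1, 5.0, 4.8, 5.2, 5.05, 4.95, 5.1, 4.9, 9.0]
flags = flag_outliers(reports)  # only the last report is flagged
```

    The design choice mirrors the act’s reasoning: honest teams need not agree exactly, only cluster; the cheater is revealed not by any referee but by its distance from the cluster.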

    Choosing individual team members for intellectual flexibility and independence, rather than for fame or institutional influence, will ensure they can grasp the scope of the problem, articulate it fearlessly and transmit in their results no latent bias toward their home colleagues, institution, technology or discipline.

    Each team will contain at least two experts from each of the three information-science fields, each able to approximately understand the technical language of the others and thus collectively to understand all aspects of human informational functionality and dysfunctionality. To ensure the conclusions apply to humans everywhere, at least one-third of each team will consider themselves culturally non-American.

    Each team will operate according to the best practices of deliberative decision-making, such as those used by “deliberative democracy”: live nearby, meet in person a few hours a day over months in a quiet place and enjoy access to whatever experts and sources of information they choose to use. Their budget (about $4 million per team) will be sufficient for each to produce its report in one year, through a variety of public-facing communications media: written reports, slide decks, video recordings, private meetings and public speeches. Between the multiple team members, multiple teams and multiple media, it will be difficult for entrenched powers to downplay inconvenient truths.

    Released simultaneously, all public reports will cover four topics with a broad brush:

    1. Summarizing the informational distractions and damage one would expect in advance, based only on the mathematical principles of autonomous navigation mentioned above, including not only sensory distractions but also the cognitive load of attending to interruptions and following rules, including rules intended to improve the situation.

    2. Summarizing, as meta-studies, the general (and generally true) conclusions of scientifically reputable experimental studies and separately the general (and generally misleading) conclusions of incentivized studies.

    3. Providing guideline formulae of damage and therapy, based on straightforward technical metrics of each specific information source such as timing delay, timing uncertainty, statistical pattern, information format, etc., with which to predict the nature, timescale, duration and severity of informational damage or recuperation from it.

    4. Providing guidelines for dissemination, discussion and regulatory approaches most likely not to be undermined by pressures toward the status quo.

    Within two years of passing this act, for under $100 million, the world will understand far better the human stakes of artificial input, and the best means for making our children safe from it again.

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • in

    US lawmakers ask FBI to investigate Parler app's role in Capitol attack

    American lawmakers have asked the FBI to investigate the role of Parler, the social media website and app popular with the American far right, in the violence at the US Capitol on 6 January.

    Carolyn Maloney, chair of the House oversight and reform committee, asked the FBI to review Parler’s role “as a potential facilitator of planning and incitement related to the violence, as a repository of key evidence posted by users on its site, and as a potential conduit for foreign governments who may be financing civil unrest in the United States”. Maloney asked the FBI to review Parler’s financing and its ties to Russia.

    Maloney cited press reports that detailed violent threats on Parler against state elected officials for their role in certifying the election results before the 6 January attack that left five dead. She also noted numerous Parler users have been arrested and charged with threatening violence against elected officials or for their roles in the attack.

    She cited justice department charges against a Texas man who used a Parler account to post threats that he would return to the Capitol on 19 January “carrying weapons and massing in numbers so large that no army could match them”. The justice department said the threats were viewed by other social media users tens of thousands of times.

    Parler was launched in 2018 and won more users in the last months of the Trump presidency as social media platforms like Twitter and Facebook cracked down more forcefully on falsehoods and misinformation. The social network, which resembles Twitter, fast became the hottest app among American conservatives, with high-profile proponents like Senator Ted Cruz recruiting new users.

    But following the 6 January insurrection at the US Capitol, Google banned it from Google Play and Apple suspended it from the App Store. Amazon then suspended Parler from its web hosting service AWS, in effect taking the site offline unless it could find a new company to host its services.

    The website partially returned online this week, Reuters reported, though only displaying a message from its chief executive, John Matze, saying he was working to restore functionality with the help of a Russian-owned technology company.

    The FBI and Parler did not immediately respond to requests for comment.

    More than 25,000 national guard troops and new fencing ringed with razor wire were among the unprecedented security steps put in place ahead of Wednesday’s inauguration of President Joe Biden.