More stories

  • YouTube Deletes Jan. 6 Video That Included Clip of Trump Sharing Election Lies

    The House select committee investigating the Jan. 6 riot has been trying to draw more eyes to its televised hearings by uploading clips of the proceedings online. But YouTube has removed one of those videos from its platform, saying the committee was advancing election misinformation.

    The excerpt, which was uploaded June 14, included recorded testimony from former Attorney General William P. Barr. But the problem for YouTube was that the video also included a clip of former President Donald J. Trump sharing lies about the election on the Fox Business channel.

    “We had glitches where they moved thousands of votes from my account to Biden’s account,” Mr. Trump said falsely, before suggesting the F.B.I. and Department of Justice may have been involved.

    The excerpt of the hearing did not include Mr. Barr’s perspective, stated numerous times elsewhere in the hearing, that Mr. Trump’s assertion that the election was stolen was wrong. The video initially was replaced with a black box stating that the clip had been removed for violating YouTube’s terms of service.

    “Our election integrity policy prohibits content advancing false claims that widespread fraud, errors or glitches changed the outcome of the 2020 U.S. presidential election, if it does not provide sufficient context,” YouTube spokeswoman Ivy Choi said in a statement. “We enforce our policies equally for everyone, and have removed the video uploaded by the Jan. 6 committee channel.”

    The message on the video page has since been changed to “This video is private,” which may mean that YouTube would allow the committee to upload a version of the clip that makes clear that Trump’s claims are false.

  • Jan. 6 Committee Subpoenas Twitter, Meta, Alphabet and Reddit

    The panel investigating the attack on the Capitol is demanding information from Alphabet, Meta, Reddit and Twitter.

    WASHINGTON — The House committee investigating the Jan. 6 attack on the Capitol issued subpoenas on Thursday to four major social media companies — Alphabet, Meta, Reddit and Twitter — criticizing them for allowing extremism to spread on their platforms and saying they had failed to cooperate adequately with the inquiry.

    In letters accompanying the subpoenas, the panel named Facebook, a unit of Meta, and YouTube, which is owned by Alphabet’s Google subsidiary, as among the worst offenders that contributed to the spread of misinformation and violent extremism. The committee said it had been investigating how the companies “contributed to the violent attack on our democracy, and what steps — if any — social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.”

    “It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,” said the panel’s chairman, Representative Bennie Thompson, Democrat of Mississippi.

    The committee sent letters in August to 15 social media companies — including sites where misinformation about election fraud spread, such as the pro-Trump website TheDonald.win — seeking documents pertaining to efforts to overturn the election and any domestic violent extremists associated with the Jan. 6 rally and attack.

    After months of discussions with the companies, only the four large corporations were issued subpoenas on Thursday, because the committee said the firms were “unwilling to commit to voluntarily and expeditiously” cooperating with its work. A committee aide said investigators were in various stages of negotiations with the other companies.

    In the year since the events of Jan. 6, social media companies have been heavily scrutinized over whether their sites played an instrumental role in organizing the attack.

    In the months surrounding the 2020 election, employees inside Meta raised warning signs that Facebook posts and comments containing “combustible election misinformation” were spreading quickly across the social network, according to a cache of documents and photos reviewed by The New York Times. Many of those employees criticized Facebook leadership’s inaction when it came to the spread of the QAnon conspiracy group, which they said also contributed to the attack.

    Frances Haugen, a former Facebook employee turned whistle-blower, said the company relaxed its safeguards too quickly after the election, which then led it to be used in the storming of the Capitol.

    Critics say that other platforms also played an instrumental role in the spread of misinformation while contributing to the events of Jan. 6.

    In the days after the attack, Reddit banned a discussion forum dedicated to former President Donald J. Trump, where tens of thousands of Mr. Trump’s supporters regularly convened to express solidarity with him.

    On Twitter, many of Mr. Trump’s followers used the site to amplify and spread false allegations of election fraud, while connecting with other Trump supporters and conspiracy theorists using the site. And on YouTube, some users broadcast the events of Jan. 6 using the platform’s video streaming technology.

    Representatives for the tech companies have been in discussions with the investigating committee, though how much in the way of evidence or user records the firms have handed over remains unclear. The committee said letters to the four firms accompanied the subpoenas.

    The panel said YouTube served as a platform for “significant communications by its users that were relevant to the planning and execution of Jan. 6 attack on the United States Capitol,” including livestreams of the attack as it was taking place.

    “To this day, YouTube is a platform on which user video spread misinformation about the election,” Mr. Thompson wrote.

    The panel said Facebook and other Meta platforms were used to share messages of “hate, violence and incitement; to spread misinformation, disinformation and conspiracy theories around the election; and to coordinate or attempt to coordinate the Stop the Steal movement.” Public accounts about Facebook’s civic integrity team indicate that Facebook has documents that are critical to the select committee’s investigation, the panel said.

    “Meta has declined to commit to a deadline for producing or even identifying these materials,” Mr. Thompson wrote to Mark Zuckerberg, Meta’s chief executive.

  • YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.

    Share of Election-Related Posts on Social Platforms Linking to Videos Making Claims of Fraud
    Source: Center for Social Media and Politics at New York University

    YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media.

    Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

    But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election.

    The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent.

    The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7.

    To reach their findings, the researchers collected a random sample of 10 percent of all tweets each day, then isolated the tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using CrowdTangle, a Facebook-owned social media analytics tool. From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud, using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021 (a schematic sketch of this filtering step appears at the end of this item).

    Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.

    Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy. “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said.

    Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results.

    But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and N.Y.U.’s own research.

    “It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”
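    The filtering procedure the researchers describe (sample posts, keep only those that link to YouTube, then flag the ones matching election-fraud keywords) can be illustrated with a minimal sketch in Python. This is not the researchers’ code: the dictionary fields (“urls”, “video_title”), the choice to match keywords against video titles, and anything beyond the two example keywords cited in the article are assumptions made purely for illustration.

        # Illustrative sketch only; field names and keyword matching are assumed, not taken from the study.
        from urllib.parse import urlparse

        FRAUD_KEYWORDS = ["stop the steal", "sharpiegate"]  # example terms cited in the article

        def is_youtube_link(url):
            # Treat links to youtube.com (including subdomains) or youtu.be as YouTube videos.
            host = urlparse(url).netloc.lower()
            return host.endswith("youtube.com") or host == "youtu.be"

        def fraud_share(posts):
            # posts: sampled posts, each a dict with a list of shared "urls" and a "video_title".
            youtube_posts = [p for p in posts if any(is_youtube_link(u) for u in p.get("urls", []))]
            fraud_posts = [
                p for p in youtube_posts
                if any(k in p.get("video_title", "").lower() for k in FRAUD_KEYWORDS)
            ]
            # Share of YouTube-linking posts whose videos match a fraud keyword.
            return len(fraud_posts) / len(youtube_posts) if youtube_posts else 0.0

        sample = [
            {"urls": ["https://www.youtube.com/watch?v=abc"], "video_title": "Stop the Steal rally stream"},
            {"urls": ["https://example.com/story"], "video_title": ""},
            {"urls": ["https://youtu.be/xyz"], "video_title": "Election night coverage"},
        ]
        print(fraud_share(sample))  # 0.5: one of the two YouTube-linked posts matches a keyword

    Computing this share for each day’s sample, rather than for a single batch, is roughly what lets the researchers trace how the proportion moved around YouTube’s Dec. 8 and Jan. 7 policy changes.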

  • Germany Struggles to Stop Online Abuse Ahead of Election

    Scrolling through her social media feed, Laura Dornheim is regularly stopped cold by a new blast of abuse aimed at her, including from people threatening to kill or sexually assault her. One person last year said he looked forward to meeting her in person so he could punch her teeth out.

    Ms. Dornheim, a candidate for Parliament in Germany’s election on Sunday, is often attacked for her support of abortion rights, gender equality and immigration. She flags some of the posts to Facebook and Twitter, hoping that the platforms will delete the posts or that the perpetrators will be barred. She’s usually disappointed.

    “There might have been one instance where something actually got taken down,” Ms. Dornheim said.

    Harassment and abuse are all too common on the modern internet. Yet it was supposed to be different in Germany. In 2017, the country enacted one of the world’s toughest laws against online hate speech. It requires Facebook, Twitter and YouTube to remove illegal comments, pictures or videos within 24 hours of being notified about them or risk fines of up to 50 million euros, or $59 million. Supporters hailed it as a watershed moment for internet regulation and a model for other countries.

    But an influx of hate speech and harassment in the run-up to the German election, in which the country will choose a new leader to replace Angela Merkel, its longtime chancellor, has exposed some of the law’s weaknesses. Much of the toxic speech, researchers say, has come from far-right groups and is aimed at intimidating female candidates like Ms. Dornheim.

    Some critics of the law say it is too weak, with limited enforcement and oversight. They also maintain that many forms of abuse are deemed legal by the platforms, such as certain kinds of harassment of women and public officials. And when companies do remove illegal material, critics say, they often do not alert the authorities or share information about the posts, making prosecutions of the people publishing the material far more difficult. Another loophole, they say, is that smaller platforms like the messaging app Telegram, popular among far-right groups, are not subject to the law.

    Free-expression groups criticize the law on other grounds. They argue that the law should be abolished not only because it fails to protect victims of online abuse and harassment, but also because it sets a dangerous precedent for government censorship of the internet.

    The country’s experience may shape policy across the continent. German officials are playing a key role in drafting one of the world’s most anticipated new internet regulations, a European Union law called the Digital Services Act, which will require Facebook and other online platforms to do more to address the vitriol, misinformation and illicit content on their sites. Ursula von der Leyen, a German who is president of the European Commission, the 27-nation bloc’s executive arm, has called for an E.U. law that would list gender-based violence as a special crime category, a proposal that would include online attacks.

    “Germany was the first to try to tackle this kind of online accountability,” said Julian Jaursch, a project director at the German think tank Stiftung Neue Verantwortung, which focuses on digital issues. “It is important to ask whether the law is working.”

    Marc Liesching, a professor at HTWK Leipzig who published an academic report on the policy, said that of the posts that had been deleted by Facebook, YouTube and Twitter, a vast majority were classified as violating company policies, not the hate speech law. That distinction makes it harder for the government to measure whether companies are complying with the law. In the second half of 2020, Facebook removed 49 million pieces of “hate speech” based on its own community standards, compared with the 154 deletions that it attributed to the German law, he found.

    The law, Mr. Liesching said, “is not relevant in practice.”

    With its history of Nazism, Germany has long tried to balance free speech rights against a commitment to combat hate speech. Among Western democracies, the country has some of the world’s toughest laws against incitement to violence and hate speech. Targeting religious, ethnic and racial groups is illegal, as are Holocaust denial and displaying Nazi symbols in public.

    To address concerns that companies were not alerting the authorities to illegal posts, German policymakers this year passed amendments to the law. They require Facebook, Twitter and YouTube to turn over data to the police about accounts that post material that German law would consider illegal speech. The Justice Ministry was also given more powers to enforce the law.

    “The aim of our legislative package is to protect all those who are exposed to threats and insults on the internet,” Christine Lambrecht, the justice minister, who oversees enforcement of the law, said after the amendments were adopted. “Whoever engages in hate speech and issues threats will have to expect to be charged and convicted.”

    Facebook and Google have filed a legal challenge to block the new rules, arguing that providing the police with personal information about users violates their privacy.

    Facebook said that as part of an agreement with the government it now provided more figures about the complaints it received. From January through July, the company received more than 77,000 complaints, which led it to delete or block about 11,500 pieces of content under the German law, known as NetzDG.

    “We have zero tolerance for hate speech and support the aims of NetzDG,” Facebook said in a statement.

    Twitter, which received around 833,000 complaints and removed roughly 81,000 posts during the same period, said a majority of those posts did not fit the definition of illegal speech, but still violated the company’s terms of service.

    “Threats, abusive content and harassment all have the potential to silence individuals,” Twitter said in a statement. “However, regulation and legislation such as this also has the potential to chill free speech by emboldening regimes around the world to legislate as a way to stifle dissent and legitimate speech.”

    YouTube, which received around 312,000 complaints and removed around 48,000 pieces of content in the first six months of the year, declined to comment other than saying it complies with the law.

    The amount of hate speech has become increasingly pronounced during election season, according to researchers at Reset and HateAid, organizations that track online hate speech and are pushing for tougher laws.

    The groups reviewed nearly one million comments on far-right and conspiratorial groups across about 75,000 Facebook posts in June, finding that roughly 5 percent were “highly toxic” or violated the online hate speech law. Some of the worst material, including messages with Nazi symbolism, had been online for more than a year, the groups found. Of 100 posts reported by the groups to Facebook, roughly half were removed within a few days, while the others remain online.

    The election has also seen a wave of misinformation, including false claims about voter fraud.

    Annalena Baerbock, the 40-year-old leader of the Green Party and the only woman among the top candidates running to succeed Ms. Merkel, has been the subject of an outsize amount of abuse compared with her male rivals from other parties, including sexist slurs and misinformation campaigns, according to researchers.

    Others have stopped running altogether. In March, a former Syrian refugee running for the German Parliament, Tareq Alaows, dropped out of the race after experiencing racist attacks and violent threats online.

    While many policymakers want Facebook and other platforms to be aggressive in screening user-generated content, others have concerns about private companies making decisions about what people can and can’t say. The far-right party Alternative for Germany, which has criticized the law for unfairly targeting its supporters, has vowed to repeal the policy “to respect freedom of expression.”

    Jillian York, an author and free speech activist with the Electronic Frontier Foundation in Berlin, said the German law encouraged companies to remove potentially offensive speech that is perfectly legal, undermining free expression rights.

    “Facebook doesn’t err on the side of caution, they just take it down,” Ms. York said. Another concern, she said, is that less democratic countries such as Turkey and Belarus have adopted laws similar to Germany’s so that they could classify certain material critical of the government as illegal.

    Renate Künast, a former government minister who once invited a journalist to accompany her as she confronted individuals in person who had targeted her with online abuse, wants to see the law go further. Victims of online abuse should be able to go after perpetrators directly for libel and financial settlements, she said. Without that ability, she added, online abuse will erode political participation, particularly among women and minority groups.

    In a survey of more than 7,000 German women released in 2019, 58 percent said they did not share political opinions online for fear of abuse.

    “They use the verbal power of hate speech to force people to step back, leave their office or not to be candidates,” Ms. Künast said.

    Ms. Dornheim, the Berlin candidate, who has a master’s degree in computer science and used to work in the tech industry, said more restrictions were needed. She described getting her home address removed from public records after somebody mailed a package to her house during a particularly bad bout of online abuse.

    Yet, she said, the harassment has only steeled her resolve.

    “I would never give them the satisfaction of shutting up,” she said.

  • Jeffrey Katzenberg Talks About His Billion-Dollar Flop

    The public failure of his start-up Quibi hasn’t stopped Jeffrey Katzenberg from doubling down on tech. A Hollywood power broker, he headed up Disney in the 1980s and ’90s and co-founded a rival studio, DreamWorks, before finding a puzzle he could not yet solve: getting people to pay for short-format content. Investors gave him and the former Hewlett-Packard C.E.O. and California gubernatorial candidate Meg Whitman $1.75 billion to build a video platform, but not enough customers opened up their wallets, at $4.99 a month, and Quibi folded within a year of its launch. Katzenberg says the problems were product-market fit and the Covid pandemic, not competition from TikTok or YouTube.

    [You can listen to this episode of “Sway” on Apple, Spotify, Google or wherever you get your podcasts.]

    In this conversation, Kara Swisher and Katzenberg delve into Quibi’s demise, the shifting power dynamics in Hollywood and his pivot to Silicon Valley. They also discuss his influence in another sphere: politics. And the former Hollywood executive, who co-chaired a fund-raiser to help fend off California’s recent recall effort, offers some advice to Gov. Gavin Newsom.

    (A full transcript of the episode will be available midday on the Times website.)

    Thoughts? Email us at sway@nytimes.com.

    “Sway” is produced by Nayeema Raza, Blakeney Schick, Matt Kwong, Daphne Chen and Caitlin O’Keefe and edited by Nayeema Raza; fact-checking by Kate Sinclair; music and sound design by Isaac Jones; mixing by Carole Sabouraud and Sonia Herrero; audience strategy by Shannon Busta. Special thanks to Kristin Lin and Liriel Higa.

  • YouTube Suspends Trump’s Channel for at Least Seven Days

    YouTube is the latest tech company to bar the president from posting online, following Twitter, Facebook and others.

  • From Voter Fraud to Vaccine Lies: Misinformation Peddlers Shift Gears

    Election-related falsehoods have subsided, but misleading claims about the coronavirus vaccines are surging — often spread by the same people.

    Dec. 16, 2020

    Sidney Powell, a lawyer who was part of President Trump’s legal team, spread a conspiracy theory last month about election fraud. For days, she claimed that she would “release the Kraken” by showing voluminous evidence that Mr. Trump had won the election by a landslide.

    But after her assertions were widely derided and failed to gain legal traction, Ms. Powell started talking about a new topic. On Dec. 4, she posted a link on Twitter with misinformation that said that the population would be split into the vaccinated and the unvaccinated and that “big government” could surveil those who were unvaccinated.

    “NO WAY #America,” Ms. Powell wrote in the tweet, which collected 22,600 shares and 51,000 likes. “This is more authoritarian communist control imported straight from #China.” She then tagged Mr. Trump and the former national security adviser Michael T. Flynn — both of whom she had represented — and other prominent right-wing figures to highlight the post.

    Ms. Powell’s changing tune was part of a broader shift in online misinformation. As Mr. Trump’s challenges to the election’s results have been knocked down and the Electoral College has affirmed President-elect Joseph R. Biden Jr.’s win, voter fraud misinformation has subsided. Instead, peddlers of online falsehoods are ramping up lies about the Covid-19 vaccines, which were administered to Americans for the first time this week.

    Apart from Ms. Powell, others who have spread political misinformation, such as Rep. Marjorie Taylor Greene, a Republican of Georgia, as well as far-right websites like ZeroHedge, have begun pushing false vaccine narratives, researchers said. Their efforts have been amplified by a robust network of anti-vaccination activists like Robert F. Kennedy Jr. on platforms including Facebook, YouTube and Twitter.

    Among their misleading notions is the idea that the vaccines are delivered with a microchip or bar code to keep track of people, as well as a lie that the vaccines will hurt everyone’s health (the vaccines from Pfizer and Moderna have been proved to be more than 94 percent effective in trials, with minimal side effects). Falsehoods about Bill Gates, the Microsoft co-founder and philanthropist who supports vaccines, have also increased, with rumors that he is responsible for the coronavirus and that he stands to profit from a vaccine, according to data from the media insights company Zignal Labs.

    The shift shows how political misinformation purveyors are hopping from topic to topic to maintain attention and influence, said Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation.

    It is “an easy pivot,” she said. “Disinformation about vaccines and the pandemic have long been staples of the pro-Trump disinformation playbook.”

    The change has been particularly evident over the last six weeks. Election misinformation peaked on Nov. 4 at 375,000 mentions across cable television, social media, print and online news outlets, according to an analysis by Zignal. By Dec. 3, that had fallen to 60,000 mentions. But coronavirus misinformation steadily increased over that period, rising to 46,100 mentions on Dec. 3, from 17,900 mentions on Nov. 8.

    NewsGuard, a start-up that fights false stories, said that of the 145 websites in its Election Misinformation Tracking Center, a database of sites that publish false election information, 60 percent have also published misinformation about the coronavirus pandemic. That includes right-wing outlets such as Breitbart, Newsmax and One America News Network, which distributed inaccurate articles about the election and are now also running misleading articles about the vaccines.

    John Gregory, the deputy health editor for NewsGuard, said the shift was not to be taken lightly because false information about vaccines leads to real-world harm. In Britain in the early 2000s, he said, a baseless link between the measles vaccine and autism spooked people into not taking that vaccine. That led to deaths and serious permanent injuries, he said.

    “Misinformation creates fear and uncertainty around the vaccine and can reduce the number of people willing to take it,” said Carl Bergstrom, a University of Washington evolutionary biologist who has been tracking the pandemic.

    Dr. Shira Doron, an epidemiologist at Tufts Medical Center, said the consequences of people not taking the Covid-19 vaccines because of misinformation would be catastrophic. The vaccines are “the key piece to ending the pandemic,” she said. “We are not getting there any other way.”

    Ms. Powell did not respond to a request for comment.

    To deal with vaccine misinformation, Facebook, Twitter, YouTube and other social media sites have expanded their policies to fact-check and demote such posts. Facebook and YouTube said they would remove false claims about the vaccines, while Twitter said it pointed people to credible public health sources.

    The flow of vaccine falsehoods began rising in recent weeks as it became clear that the coronavirus vaccines would soon be approved and available. Misinformation spreaders glommed onto interviews by health experts and began twisting them.

    On Dec. 3, for example, Dr. Kelly Moore, the associate director for immunization education at the nonprofit Immunization Action Coalition, said in an interview with CNN that when people receive the vaccine, “everyone will be issued a written card” that would “tell them what vaccine they had and when their next dose is due.”

    Dr. Moore was referring to a standard appointment reminder card that could also be used as a backup vaccine record. But skeptics quickly started saying online that the card was evidence that the U.S. government intended to surveil the population and limit the activities of people who were unvaccinated.

    That unfounded idea was further fueled by people like Ms. Powell and her Dec. 4 tweet. Her post pushed the narrative to 47,025 misinformation mentions that week, according to Zignal, making it the No. 1 vaccine misinformation story at the time.

    To give more credence to the idea, Ms. Powell also appended a link to an article from ZeroHedge, which claimed that immunity cards would “enable CDC to track Covid-19 vaxx status in database.” On Facebook, that article was liked and commented on 24,600 times, according to data from CrowdTangle, a Facebook-owned social media analytics tool. It also reached up to one million people.

    ZeroHedge did not respond to a request for comment.

    In an interview, Dr. Moore said she could not believe how her words had been distorted to seem as if she was supporting surveillance and restrictions on unvaccinated members of the public. “In fact, I was simply describing an ordinary appointment reminder card,” she said. “This is an old-school practice that goes on around the world.”

    Other supporters of Mr. Trump who said the election had been stolen from him also began posting vaccine falsehoods. One was Angela Stanton-King, a former Republican candidate for Congress from Georgia and a former reality TV star. On Dec. 5, she tweeted that her father would be forced to take the coronavirus vaccine, even though in reality the government has not made it mandatory.

    “My 78 yr old father tested positive for COVID before Thanksgiving he was told to go home and quarantine with no prescribed medication,” Ms. Stanton-King wrote in her tweet, which was liked and shared 13,200 times. “He had zero symptoms and is perfectly fine. Help me understand why we need a mandatory vaccine for a virus that heals itself…”

    Ms. Stanton-King declined to comment.

    Anti-vaccination activists have also jumped in. When two people in Britain had an adverse reaction to Pfizer’s Covid-19 vaccine this month, Mr. Kennedy, a son of former Senator Robert F. Kennedy who campaigns against vaccines as chairman of the anti-vaccination group Children’s Health Defense, pushed the unproven notion on Facebook that ingredients in the vaccine led to the reactions. He stripped out context that such reactions are usually very rare and that it is not yet known whether the vaccines caused them.

    His Facebook post was shared 556 times and reached nearly a million people, according to CrowdTangle data. In an email, Mr. Kennedy said the Food and Drug Administration should “require pre-screening” of vaccine recipients and “monitor allergic and autoimmune reactions,” without acknowledging that regulators have already said they would do so.

    Ms. Ryan, the disinformation researcher, said that as long as there were loopholes for misinformation to stay up on social media platforms, purveyors would continue pushing falsehoods about the news topic of the day. It could be QAnon today, the election tomorrow, Covid-19 vaccines after that, she said.

    “They need to stay relevant,” she said. “Without Trump, they’re going to need new hobbies.”

  • YouTube to Forbid Videos Claiming Widespread Election Fraud

    Dec. 9, 2020

    YouTube on Wednesday announced changes to how it handles videos about the 2020 presidential election, saying it would remove new videos that mislead people by claiming that widespread fraud or errors influenced the outcome of the election.

    The company said it was making the change because Tuesday was the so-called safe harbor deadline — the date by which all state-level election challenges, such as recounts and audits, are supposed to be completed. YouTube said that enough states had certified their election results to determine that Joseph R. Biden Jr. is the president-elect.

    YouTube’s announcement is a reversal of a much-criticized company policy on election videos. Throughout the election cycle, YouTube, which is owned by Google, has allowed videos spreading false claims of widespread election fraud under a policy that permits videos that comment on the outcome of an election. Under the new policy, videos about the election uploaded before the safe harbor deadline would remain on the platform, with YouTube appending an information panel linking to the Office of the Federal Register’s election results certification notice.

    In a blog post on Wednesday, YouTube pushed back on the idea that it had allowed harmful and misleading election-related videos to spread unfettered on its site. The company said that since September, it had shut down over 8,000 channels and “thousands” of election videos that violated its policies. Since Election Day, the company said, it had also shown fact-check panels over 200,000 times above relevant election-related search results on voter fraud narratives such as “Dominion voting machines” and “Michigan recount.”