More stories

  • Hun Sen’s Facebook Page Goes Dark After Spat with Meta

    Prime Minister Hun Sen, an avid user of the platform, had vowed to delete his account after Meta’s oversight board said he had used it to threaten political violence.

    The usually very active Facebook account for Prime Minister Hun Sen of Cambodia appeared to have been deleted on Friday, a day after the oversight board for Meta, Facebook’s parent company, recommended that he be suspended from the platform for threatening political opponents with violence. The showdown pits the social media behemoth against one of Asia’s longest-ruling autocrats.

    Mr. Hun Sen, 70, has ruled Cambodia since 1985 and maintained power partly by silencing his critics. He is a staunch ally of China, a country whose support comes free of American-style admonishments on the value of human rights and democratic institutions.

    A note Friday on Mr. Hun Sen’s account, which had about 14 million followers, said that its content “isn’t available right now.” It was not immediately clear whether Meta had suspended the account or whether Mr. Hun Sen had preemptively deleted it, as he had vowed to do in a post late Thursday on Telegram, a social media platform where he has a much smaller following.

    “That he stopped using Facebook is his private right,” Phay Siphan, a spokesman for the Cambodian government, told The New York Times on Friday. “Other Cambodians use it, and that’s their right.”

    The company-appointed oversight board for Meta had on Thursday recommended a minimum six-month suspension of Mr. Hun Sen’s accounts on Facebook and Instagram, which Meta also owns. The board also said that one of Mr. Hun Sen’s Facebook videos had violated Meta’s rules on “violence and incitement” and should be taken down.

    In the video, Mr. Hun Sen delivered a speech in which he responded to allegations of vote-stealing by calling on his political opponents to choose between the legal system and “a bat.” “If you say that’s freedom of expression, I will also express my freedom by sending people to your place and home,” Mr. Hun Sen said in the speech, according to Meta.

    Meta had previously decided to keep the video online under a policy that lets the platform retain content that violates Facebook’s community standards on the grounds that it is newsworthy and in the public interest. But the oversight board said on Thursday that it was overturning that decision, calling it “incorrect.”

    The board added that its recommendation to suspend Mr. Hun Sen’s accounts for at least six months was justified given the severity of the violation and his “history of committing human rights violations and intimidating political opponents, and his strategic use of social media to amplify such threats.”

    Meta later said in a statement that it would remove the offending video to comply with the board’s decision. The company also said that it would respond to the suspension recommendation after analyzing it.

    Critics of Facebook have long said that the platform can undermine democracy, promote violence and help politicians unfairly target their critics, particularly in countries with weak institutions.

    Mr. Hun Sen has spent years cracking down on the news media and political opposition in an effort to consolidate his grip on power. In February, he ordered the shutdown of one of the country’s last independent news outlets, saying he did not like its coverage of his son and presumed successor, Lt. Gen. Hun Manet. Under Mr. Hun Sen, the government has also pushed for greater surveillance of the internet, a move that rights groups say makes it even easier for the authorities to monitor and punish online content.

    Mr. Hun Sen’s large Facebook following may overstate his actual support. In 2018, one of his most prominent political opponents, Sam Rainsy, argued in a California court that the prime minister used so-called click farms to accumulate millions of counterfeit followers. Mr. Sam Rainsy, who lives in exile, also argued that Mr. Hun Sen had used Facebook to spread false news stories and death threats directed at political opponents. The court later denied his request that Facebook be compelled to release records of advertising purchases by Mr. Hun Sen and his allies.

    In 2017, an opposition political party that Mr. Sam Rainsy had led, the Cambodia National Rescue Party, was dissolved by the country’s highest court. More recently, the Cambodian authorities have disqualified other opposition parties from running in a general election next month.

    At a public event in Cambodia on Friday, Mr. Hun Sen said that his political opponents outside the country were surely happy with his decision to quit Facebook. “You have to be aware that if I order Facebook to be shut down in Cambodia, it will strongly affect you,” he added, speaking at an event for garment workers ahead of the general election. “But this is not the path that I choose.”

  • Facebook Failed to Stop Ads Threatening Election Workers

    The ads, submitted by researchers, were rejected by YouTube and TikTok.

    Facebook says it does not allow content that threatens serious violence. But when researchers submitted ads threatening to “lynch,” “murder” and “execute” election workers around Election Day this year, the company’s largely automated moderation systems approved many of them.

    Out of the 20 ads submitted by researchers containing violent content, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. Researchers deleted the approved ads before they were published.

    Ten of the test ads were submitted in Spanish. Facebook approved six of those ads, compared with nine of the 10 ads in English. TikTok and YouTube rejected all the ads and suspended the accounts that attempted to submit them, the researchers said.

    The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections, especially as poll workers are facing additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had surged.

    In a statement, a spokesman for Meta, Facebook’s parent company, said it later disabled several accounts that had submitted the ads. The company also said that ads continue to be reviewed after they go live. “This is a small sample of ads that are not representative of what people see on our platforms,” the spokesman said. “Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

    The research adds to previous tests conducted by the same groups, including one this year in which 20 ads containing political misinformation were submitted. In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that tried to submit the ads.

    But this round used more overt language that tested Facebook’s largely automated moderating system. The ads included direct threats of violence, sourced from real statements by election deniers and other far-right extremists in the United States. One ad, which was approved by Facebook, read: “We are going to kill you all.” Another ad, which threatened to abuse children, was also approved.

    “It was really quite shocking to see the results,” said Damon McCoy, an associate professor at N.Y.U. “I thought a really simple keyword search would have flagged this for manual review.”

    In a statement, the researchers also said they wanted to see social networks like Facebook increase content moderation efforts and offer more transparency around the moderation actions they take. “The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible,” they wrote.

  • Elon Musk Takes a Page Out of Mark Zuckerberg’s Social Media Playbook

    As Mr. Musk takes over Twitter, he is emulating some of the actions of Mr. Zuckerberg, who leads Facebook, Instagram and WhatsApp.

    Elon Musk has positioned himself as an unconventional businessman. When he agreed to buy Twitter this year, he declared he would make the social media service a place for unfettered free speech, reversing many of its rules and allowing banned users like former President Donald J. Trump to return. But since closing his $44 billion buyout of Twitter last week, Mr. Musk has followed a surprisingly conventional social media playbook.

    The world’s richest man met with more than six civil rights groups — including the N.A.A.C.P. and the Anti-Defamation League — on Tuesday to assure them that he would not make changes to Twitter’s content rules before the results of next week’s midterm elections are certified. He also met with advertising executives to discuss their concerns about their brands appearing alongside toxic online content. Last week, Mr. Musk said he would form a council to advise Twitter on what kinds of content to remove from the platform and would not immediately reinstate banned accounts.

    If these decisions and outreach seem familiar, that’s because they are. Other leaders of social media companies have taken similar steps. After Facebook was criticized for being misused in the 2016 presidential election, Mark Zuckerberg, the social network’s chief executive, also met with civil rights groups to calm them and worked to mollify irate advertisers. He later said he would establish an independent board to advise his company on content decisions.

    Mr. Musk is in his early days of owning Twitter and is expected to make big changes to the service and business, including laying off some of the company’s 7,500 employees. But for now, he is engaging with many of the same constituents that Mr. Zuckerberg has had to engage with over many years, social media experts and heads of civil society groups said.

    Mr. Musk “has discovered what Mark Zuckerberg discovered several years ago: Being the face of controversial big calls isn’t fun,” said Evelyn Douek, an assistant professor at Stanford Law School. Social media companies “all face the same pressures of users, advertisers and governments, and there’s always this convergence around this common set of norms and processes that you’re forced toward.”

    Mr. Musk did not immediately respond to a request for comment, and a Twitter spokeswoman declined to comment. Meta, which owns Facebook and Instagram, declined to comment.

  • Twitter and TikTok Lead in Amplifying Misinformation, Report Finds

    A new analysis found that algorithms and some features of social media sites help false posts go viral.

    It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday it began publishing results that it plans to update each week through the midterm elections on Nov. 8.

    The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagement than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.

    Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.

    “We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”

    The institute calculated its findings by comparing the engagement of posts that members of the International Fact-Checking Network had identified as false with the engagement of previous, unflagged posts from the same accounts. It analyzed nearly 600 fact-checked posts in September on a variety of subjects, including the Covid-19 pandemic, the war in Ukraine and the upcoming elections.

    Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found. Facebook’s amplification factor for video content alone is closer to TikTok’s, because the platform’s Reels and Facebook Watch, which are video features, “both rely heavily on algorithmic content recommendations” based on engagement, according to the institute’s calculations.

    Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.

    The institute plans to update its findings to track how the amplification fluctuates, especially as the midterm elections near. Misinformation, the institute’s report said, is much more likely to be shared than merely factual content. “Amplification of misinformation can rise around critical events if misinformation narratives take hold,” the report said. “It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation.”

  • Meta Removes Chinese Effort to Influence U.S. Elections

    Meta, the parent company of Facebook and Instagram, said on Tuesday that it had discovered and taken down what it described as the first targeted Chinese campaign to interfere in U.S. politics ahead of the midterm elections in November. Unlike the Russian efforts over the last two presidential elections, however, the Chinese campaign appeared limited in scope — and clumsy at times.

    The fake posts began appearing on Facebook and Instagram, as well as on Twitter, in November 2021, using profile pictures of men in formal attire but the names of women, according to the company’s report. The users later posed as conservative Americans, promoting gun rights and opposition to abortion while criticizing President Biden. By April, they mostly presented themselves as liberals from Florida, Texas and California, opposing guns and promoting reproductive rights. They mangled the English language and failed to attract many followers.

    Two Meta officials said they could not definitively attribute the campaign to any group or individuals. Yet the tactics reflected China’s growing efforts to use international social media to promote the Communist Party’s political and diplomatic agenda.

    What made the effort unusual was its apparent focus on divisive domestic politics ahead of the midterms. In previous influence campaigns, China’s propaganda apparatus concentrated more broadly on criticizing American foreign policy while promoting China’s view of issues like the crackdown on political rights in Hong Kong and the mass repression in Xinjiang, the mostly Muslim region where hundreds of thousands were forced into re-education camps or prisons.

    Ben Nimmo, Meta’s lead official for global threat intelligence, said the operation reflected “a new direction for Chinese influence operations.” “It is talking to Americans, pretending to be Americans rather than talking about America to the rest of the world,” he added later. “So the operation is small in itself, but it is a change.”

    The operation appeared to lack urgency and scope, raising questions about its ambition and goals. It involved only 81 Facebook accounts, eight Facebook pages and one group. By July, the operation had suddenly shifted its efforts away from the United States and toward politics in the Czech Republic. The posts appeared during working hours in China, typically when Americans were asleep, and dropped off noticeably during what appeared to be “a substantial lunch break.” In one post, a user struggled with clarity: “I can’t live in an America on regression.”

    Even though the campaign failed to go viral, Mr. Nimmo said, the company’s disclosure was intended to draw attention to the potential threat of Chinese interference in the domestic affairs of its rivals.

    Meta also announced that it had taken down a much larger Russian influence operation that began in May and focused primarily on Germany, as well as France, Italy and Britain. The company said it was “the largest and most complex” operation it had detected from Russia since the war in Ukraine began in February.

    The campaign centered on a network of 60 websites that impersonated legitimate news organizations in Europe, like Der Spiegel, Bild, The Guardian and ANSA, the Italian news agency. The sites posted original articles criticizing Ukraine, warning about Ukrainian refugees and arguing that economic sanctions against Russia would only backfire. Those articles were then promoted across the internet, including on Facebook and Instagram, but also on Twitter and on Telegram, the messaging app, which is widely used in Russia.

    The Russian operation involved 1,633 accounts on Facebook, 703 pages and one group, as well as 29 accounts on Instagram, the company’s report said. About 4,000 accounts followed one or more of the Facebook pages. As Meta moved to block the operation’s domains, new websites appeared, “suggesting persistence and continuous investment in this activity.”

    Meta began its investigation after disclosures in August by one of Germany’s television networks, ZDF. As in the case of the Chinese operation, the company did not explicitly accuse the government of the Russian president, Vladimir V. Putin, though the activity clearly mirrors the Kremlin’s extensive information war surrounding its invasion.

    “They were kind of throwing everything at the wall and not a lot of it was sticking,” said David Agranovich, Meta’s director of threat disruption. “It doesn’t mean that we can say mission accomplished here.”

    Meta’s report noted overlap between the Russian and Chinese campaigns on “a number of occasions,” although the company said they were unconnected. The overlap reflects the growing cross-fertilization of official statements and state media reports in the two countries, especially regarding the United States. The accounts associated with the Chinese campaign posted material from Russia’s state media, including posts repeating unfounded allegations that the United States had secretly developed biological weapons in Ukraine.

    A French-language account linked to the operation posted a version of the allegation in April, 10 days after it had originally been posted by Russia’s Ministry of Defense on Telegram. The post drew only one response, in French, from an authentic user, according to Meta. “Fake,” the user wrote. “Fake. Fake as usual.”