More stories

  • Explainer: Why did the US just ban TikTok from government-issued cellphones?

    Trump tried to impose a total ban on the China-based app, and some states have already prohibited its use on official devices.

    The US government has approved an unprecedented ban on the use of TikTok on federal government devices. The restrictions – tucked into a spending bill just days before it was passed by Congress, and signed by Joe Biden on Thursday – add to growing uncertainty about the app’s future in the US amid a crackdown from state and federal lawmakers.

    Officials say the ban is necessary due to national security concerns about the China-based owner of the app, ByteDance. But it also leaves many questions unanswered. Here’s what you need to know.

    Why did the ban happen?

    The US government has banned TikTok on federal government-issued devices due to national security concerns over its China-based parent company, ByteDance. The US fears that the Chinese government may leverage TikTok to access those devices and US user data. TikTok spokesperson Brooke Oberwetter said the company was “disappointed” that Congress moved forward with the proposal and that it was “a political gesture that will do nothing to advance national security interests”.

    The ban means that, in about two months, federal government employees will be required to remove TikTok from their government-issued devices unless they are using the app for national security or law enforcement activities.

    The director of the US Office of Management and Budget and other offices have 60 days to come up with standards and processes for all government employees to remove the app from their phones. Several federal agencies, such as the White House and the defense, homeland security and state departments, have already banned TikTok, so the rule won’t change anything for their employees. And earlier this week, Catherine Szpindor, the chief administrator of the House of Representatives, instructed all staff and lawmakers to delete the app from their devices.

    How did we get here?

    US security concerns about TikTok have existed for years. Donald Trump first attempted, unsuccessfully, to ban TikTok in 2020, but bipartisan efforts to regulate and rein in use of the app reached a fever pitch in 2022 after news outlets reported that ByteDance employees had accessed US TikTok user information.

    National security concerns were reinforced by warnings from the FBI director, Christopher Wray, that the Chinese government could use the app to gain access to US users’ devices. Several, predominantly Republican-led, states – including Texas, South Dakota and Virginia – have also recently banned the use of TikTok on state government-issued devices.

    In April, Senator Josh Hawley of Missouri introduced a similar ban to the one now taking effect, calling TikTok a “Trojan horse for the Chinese Communist party”. The measure, the contours of which were largely replicated in the ban passed on Friday, was unanimously approved by the Senate earlier in December.

    Have other countries taken similar actions against TikTok?

    While other countries such as Indonesia have imposed temporary bans on TikTok, the biggest country that continues to prohibit the use of the app is India, which permanently banned TikTok along with more than 50 other Chinese apps after a deadly border dispute with China, citing national security concerns. National bans in other countries have not lasted more than, at most, a few months.

    Should we be more worried about TikTok than other apps?

    It depends on whom you ask. Several digital privacy and civil advocacy groups, such as the Electronic Frontier Foundation (EFF) and Fight for the Future, say that while the potential for China to exploit access to TikTok is indeed concerning, other apps and services offer government entities, including in the US, similar access to user data.

    “Unless we’re also [going to] ban Twitter and Facebook and YouTube and Uber and Grubhub, this is pointless,” said the Fight for the Future director, Evan Greer. “Yes, it’s possibly a bit easier for the Chinese government to gain access to data through TikTok than other apps, but there’s just so many ways governments can get data from apps.”

    But lawmakers on both sides of the aisle have introduced bills and applauded efforts to limit the use of TikTok. In addition to Hawley’s bill, Senator Marco Rubio of Florida introduced a bill to ban the company from operating in the US entirely. “This isn’t about creative videos – this is about an app that is collecting data on tens of millions of American children and adults every day,” Rubio said in a press release announcing the bipartisan bill.

    The Democratic senator Mark Warner of Virginia has also encouraged efforts to ban TikTok on government devices and called for more states to “take action to keep our government technology out of the CCP’s [Chinese Communist party’s] reach”.

    What are the geopolitical implications of this ban?

    The US has ramped up its efforts to address potential national security threats from China over the last few years, including adding more China-based companies and entities to a commerce department blacklist that limits exports to those firms. The focus on TikTok is part of this larger campaign, but some groups warn that a ban on TikTok would lead to similar moves from China.

    “Blanket bans on apps based on a company’s foreign ownership will only hurt US businesses in the long run because countries could seek to block US online services over similar national security concerns,” said Gillian Diebold, a policy analyst at the Center for Data Innovation.

    Like other privacy advocates, Diebold said that “policymakers should pursue more promising solutions that address the underlying risks”. “For example, to address data concerns, lawmakers should prioritize passing federal privacy legislation to protect consumer data that would explicitly require companies to disclose who they share data with and hold them accountable for those statements,” Diebold said.

    Could the US ever ban TikTok outright?

    There have been several attempts at banning TikTok from operating in the US entirely. Rubio’s bill, for instance, would block all of the company’s commercial operations in the US. But the viability of such bans has yet to be proved: Trump’s previous attempt to ban new users from downloading TikTok was blocked in court, in part due to free speech concerns.

    The EFF general counsel, Kurt Opsahl, said a total ban would be a violation of free speech, and that while Rubio’s bill and similar proposed laws to ban TikTok purportedly “protect America from China’s authoritarian government”, they actually adopt “one of the hallmarks of the Chinese internet strategy”.

    “A government is within its rights to set rules and restrictions on use of official devices it owns, but trying to ban TikTok from public use is something else entirely,” Opsahl said. “TikTok’s security, privacy and its relationship with the Chinese government is indeed concerning, but a total ban is not the answer,” he continued. “A total ban is not narrowly tailored to the least restrictive means to address the security and privacy concerns, and instead lays a censorial blow against the speech of millions of ordinary Americans.”

  • US bans China-based TikTok app on all federal government devices

    Move follows House of Representatives ban, which TikTok called a ‘political measure that will do nothing’ for national security.

    TikTok has been banned on all federal government devices in the US, with limited exceptions, after Joe Biden signed a $1.7tn (£1.4tn) spending bill on Thursday containing a provision that outlaws the China-based app over growing security concerns.

    The ban – which was approved by Congress in a vote last week – is a major step targeting the fastest-growing social media platform in the world, as opponents express worry that user data stored in China could be accessed by the government.

    Various government agencies will develop rules for implementing the ban over the next two months. It will mean that federal government employees are required to remove TikTok from their government-issued devices unless they are using the app for national security or law enforcement activities.

    It follows a flurry of legislative action against the platform in the US, after more than a dozen governors issued similar orders prohibiting state employees from using TikTok on state-owned devices. Earlier this week, Congress passed legislation to ban TikTok on devices issued to members of the House of Representatives.

    TikTok did not immediately respond to a request for comment. In a statement released after the initial House ban, TikTok said the move was a “political gesture that will do nothing to advance national security interests”.

    Meanwhile, there has been a push to ban TikTok outright in the US, with legislation introduced by Senator Marco Rubio earlier this month to “ban Beijing-controlled TikTok for good”. That bill echoes moves from the previous administration, after Donald Trump issued an executive order in August 2020 prohibiting US companies from doing business with TikTok’s parent company, ByteDance.

    The order was later revoked by Biden in June 2021, under the condition that the US Committee on Foreign Investment conducted a security review of the platform and suggested a path forward. That investigation has been ongoing for several years.

    Although ByteDance is based in China, the company has long claimed all US user data is stored in data centers in Virginia and backed up in Singapore. But political pressure began to build anew after BuzzFeed reported in June that China-based ByteDance employees had accessed US TikTok user data multiple times between September 2021 and January 2022.

    Legislators have expressed concern that the Chinese Communist party could manipulate young users with pro-China content on the app’s algorithmic home page and access sensitive user data.

    “TikTok, their parent company ByteDance, and other China-based tech companies are required by Chinese law to share their information with the Communist party,” Senator Mark Warner said in July when calling for further investigation of the platform. “Allowing access to American data, down to biometrics such as face prints and voice prints, poses a great risk to not only individual privacy but to national security,” he added.

    The legislative pressure on TikTok comes as the app has exploded in popularity in recent years, amassing a user base of more than 1 billion after reporting a 45% increase in monthly active users between July 2020 and July 2022. In 2022 it became the most downloaded app in the world, quietly surpassing longstanding forebears Instagram and Twitter.

    With the meteoric rise have come broad concerns about the app’s impact on its relatively young users. Nearly half of people between 18 and 30 in the US use the platform, a recent Pew Research Center report showed – and 67% of users between the ages of 13 and 18 use the app daily.

  • TikTok banned on devices issued by US House of Representatives

    Politicians ordered to delete Chinese-owned social video app that House has said represents ‘high risk to users’.

    TikTok has been banned from any devices issued by the US House of Representatives, as political pressure continues to build on the Chinese-owned social video app.

    The order to delete the app was issued by Catherine Szpindor, the chief administrative officer (CAO) of the House, whose office had warned in August that the app represented a “high risk to users”. According to a memo obtained by NBC News, all lawmakers and staffers with House-issued mobile phones have been ordered by Szpindor to remove TikTok.

    “House staff are NOT allowed to download the TikTok app on any House mobile devices,” NBC quoted the memo as saying. “If you have the TikTok app on your House mobile device, you will be contacted to remove it.” The move was also reported by Reuters.

    In a statement, the US House of Representatives confirmed the ban, saying: “We can confirm that the Committee on House Administration has authorized the CAO Office of Cybersecurity to initiate the removal of TikTok Social Media Service from all House-managed devices.”

    In August the CAO issued a “cyber advisory” labelling TikTok a high-risk app due to its “lack of transparency in how it protects customer data”. It said TikTok, which is owned by Beijing-based ByteDance, “actively harvests content for identifiable data” and stores some user data in China. TikTok says its data is not held in China, but in the US and Singapore.

    In a tweet posted on 17 August 2022, Brendan Carr (@BrendanCarrFCC) wrote: “The U.S. House of Representatives’ Chief Administrative Officer has issued a cyber advisory on TikTok, labeling it ‘high-risk’ with personal info accessed from inside China: ‘we do not recommend the download or use of this application due to these security and privacy concerns.’”

    The CAO move comes amid multiple attempts to restrict the use of TikTok by government and state employees. Last week Congress passed a $1.7tn spending bill, which includes a provision banning TikTok from government devices. The ban will take effect once President Joe Biden signs the legislation into law. According to Reuters, at least 19 US states have partially blocked the app from state-managed devices over security concerns. In a statement released after the Congress ban, TikTok said the move was a “political gesture that will do nothing to advance national security interests”.

    This month the US senator Marco Rubio, a former Republican presidential contender, unveiled a legislative proposal to ban TikTok from the US entirely. Rubio said it was time to “ban Beijing-controlled TikTok for good”.

    Biden has revoked presidential orders targeting TikTok issued by his predecessor, Donald Trump, which included requiring TikTok to sell its US business. However, the US Committee on Foreign Investment, which scrutinises business deals with non-US companies, is also conducting a security review of TikTok. According to a recent Reuters report, TikTok is offering to operate more of its US business at arm’s length and subject it to outside scrutiny.

    The office of the House’s chief administrative officer and TikTok have been approached for comment.

  • Senate votes to ban TikTok on US government-owned devices

    Bill comes after several states barred employees from downloading the app on state-owned gadgets over data concerns.

    The US Senate late on Wednesday passed, by voice vote, a bill to bar federal employees from using the Chinese-owned video-sharing app TikTok on government-owned devices.

    The bill must still be approved by the US House of Representatives before going to President Joe Biden; the House would need to pass the Senate bill before the current congressional session ends, which is expected next week. The vote is the latest action by US lawmakers to crack down on Chinese companies amid national security fears that Beijing could use them to spy on Americans.

    The Senate action comes after North Dakota and Iowa this week joined a growing number of states in banning TikTok, owned by ByteDance, from state-owned devices amid concerns that data could be passed on to the Chinese government.

    During the last Congress, the Senate in August 2020 unanimously approved legislation to bar TikTok from government devices. The bill’s sponsor, Republican Senator Josh Hawley, reintroduced the legislation in 2021. Many federal agencies, including the defense, homeland security and state departments, already ban TikTok from government-owned devices. “TikTok is a major security risk to the United States, and it has no place on government devices,” Hawley said previously.

    North Dakota Governor Doug Burgum and Iowa Governor Kim Reynolds issued directives prohibiting executive branch agencies from downloading the app on any government-issued equipment. Around a dozen US states have taken similar actions, including Alabama and Utah this week; others include Texas, Maryland and South Dakota.

    TikTok has said the concerns are largely fueled by misinformation and that it is happy to meet with policymakers to discuss the company’s practices. “We’re disappointed that so many states are jumping on the political bandwagon to enact policies based on unfounded falsehoods about TikTok that will do nothing to advance the national security of the United States,” the company said on Wednesday.

    Republican Senator Marco Rubio on Tuesday unveiled bipartisan legislation to ban TikTok altogether in the United States, ratcheting up pressure on ByteDance over US fears the app could be used to spy on Americans and censor content. Rubio is also a sponsor of Hawley’s TikTok government-device ban bill. The legislation would block all transactions from any social media company in or under the influence of China and Russia, Rubio’s office said.

    At a hearing last month, the FBI director, Chris Wray, said TikTok’s US operations raise national security concerns. In 2020, then president Donald Trump attempted to block new users from downloading TikTok and to ban other transactions that would have effectively blocked the app’s use in the United States, but lost a series of court battles over the measure.

    The government’s Committee on Foreign Investment in the United States (CFIUS), a powerful national security body, in 2020 ordered ByteDance to divest TikTok because of fears that US user data could be passed to the Chinese government, though ByteDance has not done so. CFIUS and TikTok have been in talks for months to reach a national security agreement to protect the data of TikTok’s more than 100 million users, but it does not appear any deal will be reached before the end of the year.

  • ‘We risk another crisis’: TikTok in danger of being major vector of election misinformation

    A study suggests the video platform is failing to filter false claims and rhetoric in the weeks leading up to the US midterms.

    In the final sprint to the US midterm elections, the social media giant TikTok risks being a major vector for election misinformation, experts warn, with the platform’s massive user base and its design making it particularly susceptible to such threats.

    Preliminary research published last week by the digital watchdog Global Witness and the Cybersecurity for Democracy team at New York University suggests the video platform is failing to filter large volumes of election misinformation in the weeks leading up to the vote. TikTok approved 90% of advertisements featuring election misinformation submitted by researchers, including ads containing the wrong election date, false claims about voting requirements, and rhetoric dissuading people from voting.

    TikTok has for several years prohibited political advertising on the platform, including branded content from creators and paid advertisements, and ahead of the midterm elections has automatically disabled monetization to better enforce the policy, TikTok’s global business president, Blake Chandlee, said in a September blog post. “TikTok is, first and foremost, an entertainment platform,” he wrote.

    But the NYU study showed TikTok “performed the worst out of all of the platforms tested” in the experiment, the researchers said, approving more of the false advertisements than other sites such as YouTube and Facebook.

    The findings spark concern among experts, who point out that – with 80 million monthly users in the US and large numbers of young Americans indicating the platform is their primary source of news – such posts could have far-reaching consequences. Yet the results come as little surprise, those experts say. During previous major elections in the US, TikTok had far fewer users, but misinformation was already spreading widely on the app, and TikTok has faced challenges moderating misinformation about elections in Kenya and the war in Ukraine. The company, experts say, is doing far too little to rein in election lies spreading among its users.

    “This year is going to be much worse as we near the midterms,” said Olivia Little, a researcher who co-authored the Media Matters report. “There has been an exponential increase in users, which only means there will be more misinformation TikTok needs to proactively work to stop, or we risk facing another crisis.”

    A crucial test

    With Joe Biden himself warning that the integrity of American elections is under threat, TikTok has announced a slew of policies aimed at combatting election misinformation spreading through the app. The company laid out guidelines and safety measures related to election content and launched an elections center, which “connect[s] people who engage with election content” to approved news sources in more than 45 languages.

    “To bolster our response to emerging threats, TikTok partners with independent intelligence firms and regularly engages with others across the industry, civil society organizations, and other experts,” said Eric Han, TikTok’s head of US safety, in August.

    In September, the company also announced new policies requiring government and politician accounts to be verified and said it would ban videos aimed at campaign fundraising. TikTok added it would block verified political accounts from using money-making features available to influencers on the app, such as digital payments and gifting.

    Still, experts have deep concerns about the spread of election falsehoods on the video app. Those fears are exacerbated by TikTok’s structure, which makes it difficult to investigate and quantify the spread of misinformation. Unlike Twitter, which makes public its application programming interface (API) – software that allows researchers to extract data from the platform for analysis – or Meta, which offers its own research tool, CrowdTangle, TikTok does not offer tools for external audits. However, independent research as well as the platform’s own transparency reports highlight the challenges it has faced in recent years moderating election-related content.

    TikTok removed 350,000 videos related to election misinformation in the latter half of 2020, according to a transparency report from the company, and blocked 441,000 videos containing misinformation from user feeds globally. The internet nonprofit Mozilla warned in the run-up to Kenya’s 2022 election that the platform was “failing its first real test” to stem dis- and misinformation during pivotal political moments. The nonprofit said it had found more than 130 videos on the platform containing election-related misinformation, hate speech, and incitement against communities prior to the vote, which together gained more than 4m views. “Rather than learn from the mistakes of more established platforms like Facebook and Twitter, TikTok is following in their footsteps,” the Mozilla researcher Odanga Madung wrote at the time.

    Why TikTok is so vulnerable to misinformation

    Part of the reason TikTok is uniquely susceptible to misinformation lies in certain features of its design and algorithm, experts say. Its For You page, the app’s general video feed, is highly customized to users’ individual preferences via an algorithm that is little understood, even by the company’s own staff. That combination lends itself to misinformation bubbles, said Little, the Media Matters researcher. “TikTok’s hyper-tailored algorithm can blast random accounts into virality very quickly, and I don’t think that is going to change anytime soon because it’s the reason it has become such a popular platform,” she said.

    Meanwhile, the ease with which users remix, record, and repost videos – few of which have been fact-checked – allows misinformation to spread easily while making it more difficult to remove. TikTok’s video-only content brings additional moderation hurdles, as automated systems may find it more difficult to scan video for misinformation than text. Several recent studies have highlighted how those features have exacerbated the spread of misinformation on the platform.

    When it comes to TikTok content related to the war in Ukraine, for example, the ability to “remix media” without fact-checking it has made it difficult “even for seasoned journalists and researchers to discern truth from rumor, parody and fabrication”, said a recent report from Harvard’s Shorenstein Center on Media. That report cited other design features in the app that make it an easy pathway for misinformation, including that most users post under pseudonyms and that, unlike on Facebook, where users’ feeds are filled primarily with content from friends and people they know, TikTok’s For You page is largely composed of content from strangers.

    Some of these problems are not unique to TikTok, said Marc Faddoul, co-director of Tracking Exposed, a digital rights organization investigating TikTok’s algorithm. Studies have shown that algorithms across all platforms are optimized to detect and exploit cognitive biases for more polarizing content, and that any platform that relies on algorithms rather than a chronological newsfeed is more susceptible to disinformation. But TikTok is the most accelerated model of an algorithmic feed yet, he said. At the same time, he added, the platform has been slow in coming to grips with issues that have plagued its peers like Facebook and Twitter for years. “Historically, TikTok has characterized itself as an entertainment platform, denying they host political content and therefore disinformation, but we know now that is not the case,” he said.

    Young user base is particularly at risk

    Experts say an additional cause for concern is a lack of media literacy among TikTok’s largely young user base. The vast majority of young people in the US use TikTok, a recent Pew Research Center report showed, and internal data from Google revealed in July that nearly 40% of Gen Z – the generation born between the late 1990s and early 2000s – globally use TikTok and Instagram as their primary search engines.

    In addition to being more likely to get news from social media, Gen Z also has far higher rates of mistrust in traditional institutions such as the news media and the government compared with past generations, creating a perfect storm for the spread of misinformation, said Helen Lee Bouygues, president of the Reboot Foundation, a media literacy advocacy organization. “By the nature of its audience, TikTok is exposing a lot of young children to disinformation who are not trained in media literacy, period,” she said. “They are not equipped with the skills necessary to recognize propaganda or disinformation when they see it online.”

    The threat is amplified by the sheer amount of time spent on the app: 67% of US teenagers use it for an average of 99 minutes per day. Research conducted by the Reboot Foundation showed that the longer a user spends on an app, the less likely they are to distinguish between misinformation and fact.

    To enforce its policies, which prohibit election misinformation, harassment, hateful behavior, and violent extremism, TikTok says it relies on “a combination of people and technology” and partners with fact-checkers to moderate content. The company directed questions about its election misinformation measures to a blog post, but declined to share how many human moderators it employs.

    Bouygues said the company should do far more to protect its users, particularly young ones. Her research shows that media literacy training and in-app nudges towards fact-checking could go a long way when it comes to combating misinformation, but, she argued, government action is needed to force such changes. “If the TikToks of the world really want to fight fake news, they could do it,” she said. “But as long as their financial model is keeping eyes on the page, they have no incentive to do so. That’s where policymaking needs to come into play.”

  • TikTok tightens policies around political issues in run-up to US midterms

    Politicians will be banned from using the social media platform for campaign fundraising.

    Politicians on TikTok will no longer be able to use the app’s tipping tools, nor access advertising features on the social network, as the company tightens its policies around political issues in the run-up to the US midterm elections in six weeks’ time.

    Political advertising is already banned on the platform, alongside “harmful misinformation”, but as TikTok has grown over the past two years, new features such as gifting, tipping and ecommerce have been embraced by some politicians on the site. Now, new rules will again limit political players’ ability to use the app for anything other than organic activity, to “help ensure TikTok remains a fun, positive and joyful experience”, the company said.

    “TikTok has long prohibited political advertising, including both paid ads on the platform and creators being paid directly to make branded content,” it added. “We currently do that by prohibiting political content in an ad, and we’re also now applying restrictions at an account level. This means accounts belonging to politicians and political parties will automatically have their access to advertising features turned off, which will help us more consistently enforce our existing policy.”

    Political accounts will be blocked from other monetisation features, and will also be removed from eligibility for the company’s “creator fund”, which distributes cash to some of the most successful video producers on the site. They will also be banned from using the platform for campaign fundraising, “such as a video from a politician asking for donations, or a political party directing people to a donation page on their website”, the service said.

    “TikTok is first and foremost an entertainment platform, and we’re proud to be a place that brings people together over creative and entertaining content,” the company said. “By prohibiting campaign fundraising and limiting access to our monetisation features, we’re aiming to strike a balance between enabling people to discuss the issues that are relevant to their lives while also protecting the creative, entertaining platform that our community wants.”

    The rules are in contrast to those of Meta’s Facebook and Instagram, both of which have long allowed political advertising and encouraged politicians to use their services for campaigning purposes. In August, Meta announced its own set of policy updates for the US midterm elections and promised to devote “hundreds of people across more than 40 teams” to ensuring the safety and security of the elections. Meta will ban all new political, electoral and social issue adverts on both its platforms for the final weeks of the campaign, its head of global affairs, Nick Clegg, said, and will remove adverts that encourage people not to vote or that call into question the legitimacy of the election. But the company won’t remove “organic” content that does the same.

    After years of being effectively unregulated, online political advertising is increasingly being brought under the aegis of electoral authorities in more and more countries. On Monday, Google said it would begin a program to ensure that political emails never get sent to spam folders, after Republican congressional leaders accused it of partisan censorship and introduced legislation to try to ban the practice. “We expect to begin the pilot with a small number of campaigns from both parties and will test whether these changes improve the user experience, and provide more certainty for senders during this election period,” the company said in a statement.

  • Facebook owner reportedly paid Republican firm to push message TikTok is ‘the real threat’

    Meta, owner of Facebook and Instagram, reportedly solicited a campaign accusing TikTok of being a danger to American children.

    Meta, the owner of Facebook, Instagram and other social media platforms, is reportedly paying a notable GOP consulting firm to create public distrust around TikTok. The campaign, launched by the Republican strategy firm Targeted Victory, placed op-eds and letters to the editor in various publications, accusing TikTok of being a danger to American children, along with other disparaging accusations.

    The firm wanted to “get the message out that while Meta is the current punching bag, TikTok is the real threat especially as a foreign owned app that is #1 in sharing data that young teens are using,” a director for the firm wrote in a February email, part of a trove of emails revealed by the Washington Post. “Dream would be to get stories with headlines like ‘From dances to danger: how TikTok has become the most harmful social media space for kids,’” another staffer wrote.

    Campaign operatives promoted stories to local media, including some unsubstantiated claims, that tied TikTok to supposedly dangerous trends popular among teenagers – despite those trends originating on Facebook. Such trends included the viral 2021 “devious lick” trend, in which students vandalized school property. Targeted Victory pushed stories on “devious lick” to local publications in Michigan, Minnesota, Rhode Island, Massachusetts and Washington DC. But the trend originally spread on Facebook, according to an investigation by Anna Foley for the podcast Reply All.

    Campaign workers also used anti-TikTok messages to deflect from criticism that Meta had received over its privacy and antitrust policies. “Bonus point if we can fit this into a broader message that the current bills/proposals aren’t where [state attorneys general] or members of Congress should be focused,” wrote a Targeted Victory staffer.

    In a comment to the Post, a TikTok representative said that the company was “deeply concerned” about “the stoking of local media reports on alleged trends that have not been found on the platform”. A Meta representative, Andy Stone, defended the campaign to the Washington Post, saying: “We believe all platforms, including TikTok, should face a level of scrutiny consistent with their growing success.”

  • Rightwing ‘super-spreader’: study finds handful of accounts spread bulk of election misinformation

    A handful of rightwing “super-spreaders” on social media were responsible for the bulk of election misinformation in the run-up to the Capitol attack, according to a new study that also sheds light on the staggering reach of falsehoods pushed by Donald Trump.

    A report from the Election Integrity Partnership (EIP), a group that includes Stanford and the University of Washington, analyzed social media platforms including Facebook, Twitter, Instagram, YouTube, and TikTok during several months before and after the 2020 elections. It found that “super-spreaders” – responsible for the most frequent and most impactful misinformation campaigns – included Trump and his two elder sons, as well as other members of the Trump administration and the rightwing media.

    The study’s authors and other researchers say the findings underscore the need to disable such accounts to stop the spread of misinformation. “If there is a limit to how much content moderators can tackle, have them focus on reducing harm by eliminating the most effective spreaders of misinformation,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news but was not involved in the EIP report. “Rather than trying to enforce the rules equally across all users, focus enforcement on the most powerful accounts.”

    The report analyzed social media posts featuring words like “election” and “voting” to track key misinformation narratives related to the 2020 election, including claims of mail carriers throwing away ballots, legitimate ballots strategically not being counted, and other false or unproven stories. The report studied how these narratives developed and the effect they had. It found that during this period, popular rightwing Twitter accounts “transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud”.

    Ultimately, the “false claims and narratives coalesced into the meta-narrative of a ‘stolen election’, which later propelled the January 6 insurrection”, the report said. “The 2020 election demonstrated that actors – both foreign and domestic – remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy,” the authors concluded.

    Next to no factchecking, with Trump as the super-spreader-in-chief

    In monitoring Twitter, the researchers analyzed more than 22 million tweets sent between 15 August and 12 December. The study determined which accounts were most influential by the size and speed with which they spread misinformation. “Influential accounts on the political right rarely engaged in factchecking behavior, and were responsible for the most widely spread incidents of false or misleading information in our dataset,” the report said.

    Out of the 21 top offenders, 15 were verified Twitter accounts – which are particularly dangerous when it comes to election misinformation, the study said. The “repeat spreaders” responsible for the most widely spread misinformation included Eric Trump, Donald Trump, Donald Trump Jr and influencers like James O’Keefe, Tim Pool, Elijah Riot, and Sidney Powell. All 21 of the top accounts for misinformation leaned rightwing, the study showed.

    “Top-down mis- and disinformation is dangerous because of the speed at which it can spread,” the report said. “If a social media influencer with millions of followers shares a narrative, it can garner hundreds of thousands of engagements and shares before a social media platform or factchecker has time to review its content.”

    On nearly all the platforms analyzed in the study – including Facebook, Twitter, and YouTube – Donald Trump played a massive role. The report pinpointed 21 incidents in which a tweet from Trump’s official @realDonaldTrump account jumpstarted the spread of a false narrative across Twitter. For example, Trump’s tweets baselessly claiming that the voting equipment manufacturer Dominion Voting Systems was responsible for election fraud played a large role in amplifying the conspiracy theory to a wider audience. False or baseless tweets sent by Trump’s account – which had 88.9m followers at the time – garnered more than 460,000 retweets.

    Meanwhile, Trump’s YouTube channel was linked to six distinct waves of misinformation that, combined, were the most viewed of any repeat-spreader’s videos. His Facebook account had the most engagement of all those studied.

    The Election Integrity Partnership study is not the first to show the massive influence Trump’s social media accounts have had on the spread of misinformation. Between 1 January 2020 and 6 January 2021, Donald Trump pushed disinformation in more than 1,400 Facebook posts, a report from Media Matters for America released in February found. Trump was ultimately suspended from the platform in January, and Facebook is debating whether he will ever be allowed back. Specifically, 516 of his posts contained disinformation about Covid-19, 368 contained election disinformation, and 683 contained harmful rhetoric attacking his political enemies. Allegations of election fraud earned over 149.4 million interactions, or an average of 412,000 interactions per post, and accounted for 16% of interactions on his posts in 2020.

    Trump had a unique ability to amplify news stories that would otherwise have remained contained in smaller outlets and subgroups, said Matt Gertz of Media Matters for America. “What Trump did was take misinformation from the rightwing ecosystem and turn it into a mainstream news event that affected everyone,” he said. “He was able to take these absurd lies and conspiracy theories and turn them into national news. And if you do that, and inflame people often enough, you will end up with what we saw on January 6.”

    Effects of false election narratives on voters

    “Super-spreader” accounts were ultimately very successful in undermining voters’ trust in the democratic system, the report found. Citing a poll by the Pew Research Center, the study said that, of the 54% of people who voted in person, approximately half had cited concerns about voting by mail, and only 30% of respondents were “very confident” that absentee or mail-in ballots had been counted as intended.

    The report outlined a number of recommendations, including removing “super-spreader” accounts entirely. Outside experts agree that tech companies should more closely scrutinize top accounts and repeat offenders. Researchers said the refusal to take action, or to establish clear rules for when action should be taken, helped fuel the prevalence of misinformation. For example, only YouTube had a publicly stated “three-strike” system for offenses related to the election. Platforms like Facebook reportedly had three-strike rules as well but did not make the system publicly known.

    Only four of the top 20 Twitter accounts cited as top spreaders were actually removed, the study showed – including Donald Trump’s in January. Twitter has maintained that its ban of the former president is permanent. YouTube’s chief executive officer stated this week that Trump would be reinstated on the platform once the “risk of violence” from his posts passes. Facebook’s independent oversight board is now considering whether to allow Trump to return.

    “We have seen that he uses his accounts as a way to weaponize disinformation. It has already led to riots at the US Capitol; I don’t know why you would give him the opportunity to do that again,” Gertz said. “It would be a huge mistake to allow Trump to return.”