More stories


    TikTok banned on devices issued by US House of Representatives

    TikTok banned on devices issued by US House of RepresentativesPoliticians ordered to delete Chinese-owned social video app that House has said represents ‘high risk to users’ TikTok has been banned from any devices issued by the US House of Representatives, as political pressure continues to build on the Chinese-owned social video app.The order to delete the app was issued by Catherine Szpindor, the chief administrative officer (CAO) of the House, whose office had warned in August that the app represented a “high risk to users”.According to a memo obtained by NBC News, all lawmakers and staffers with House-issued mobile phones have been ordered to remove TikTok by Szpindor.“House staff are NOT allowed to download the TikTok app on any House mobile devices,” NBC quoted the memo as saying. “If you have the TikTok app on your House mobile device, you will be contacted to remove it.” The move was also reported by Reuters.In a statement the US house of representatives confirmed the ban, saying “we can confirm that the Committee on House Administration has authorized the CAO Office of Cybersecurity to initiate the removal of TikTok Social Media Service from all House-managed devices.”In August the CAO issued a “cyber advisory” labelling TikTok a high-risk app due to its “lack of transparency in how it protects customer data”. It said TikTok, which is owned by Beijing-based ByteDance, “actively harvests content for identifiable data” and stores some user data in China. TikTok says its data is not held in China, but in the US and Singapore.The U.S. House of Representatives’ Chief Administrative Officer has issued a cyber advisory on TikTok, labeling it “high-risk” with personal info accessed from inside China:“we do not recommend the download or use of this application due to these security and privacy concerns.” pic.twitter.com/F87qwFiHhR— Brendan Carr (@BrendanCarrFCC) August 17, 2022
    The CAO move comes amid multiple attempts to restrict the use of TikTok by government and state employees. Last week Congress passed a $1.7tn spending bill, which includes a provision banning TikTok from government devices. The ban will take effect once President Joe Biden signs the legislation into law. According to Reuters, at least 19 US states have partially blocked the app from state-managed devices over security concerns. In a statement released after the congressional ban, TikTok said the move was a “political gesture that will do nothing to advance national security interests”.

    This month the US senator Marco Rubio, a former Republican presidential contender, unveiled a legislative proposal to ban TikTok from the US entirely. Rubio said it was time to “ban Beijing-controlled TikTok for good”.

    Biden has revoked presidential orders targeting TikTok issued by his predecessor, Donald Trump, which included requiring TikTok to sell its US business. However, the US Committee on Foreign Investment, which scrutinises business deals with non-US companies, is also conducting a security review of TikTok. According to a recent Reuters report, TikTok is offering to operate more of its US business at arm’s length and subject it to outside scrutiny.

    The office of the House’s chief administrative officer and TikTok have been approached for comment.


    Senate votes to ban TikTok on US government-owned devices

    Bill comes after several states barred employees from downloading the app on state-owned gadgets over data concerns

    The US Senate late on Wednesday passed by voice vote a bill to bar federal employees from using the Chinese-owned video-sharing app TikTok on government-owned devices. The bill must still be approved by the US House of Representatives before going to President Joe Biden for approval. The House would need to pass the Senate bill before the current congressional session ends, which is expected next week.

    The vote is the latest action by US lawmakers to crack down on Chinese companies amid national security fears that Beijing could use them to spy on Americans. The Senate action comes after North Dakota and Iowa this week joined a growing number of states in banning TikTok, owned by ByteDance, from state-owned devices amid concerns that data could be passed on to the Chinese government.

    During the last Congress, the Senate in August 2020 unanimously approved legislation to bar TikTok from government devices. The bill’s sponsor, the Republican senator Josh Hawley, reintroduced the legislation in 2021. Many federal agencies, including the defense, homeland security and state departments, already ban TikTok from government-owned devices. “TikTok is a major security risk to the United States, and it has no place on government devices,” Hawley said previously.

    North Dakota’s governor, Doug Burgum, and Iowa’s governor, Kim Reynolds, issued directives prohibiting executive branch agencies from downloading the app on any government-issued equipment.
    Around a dozen US states have taken similar actions, including Alabama and Utah this week. Other states that have acted include Texas, Maryland and South Dakota. TikTok has said the concerns are largely fueled by misinformation, and that it is happy to meet with policymakers to discuss the company’s practices. “We’re disappointed that so many states are jumping on the political bandwagon to enact policies based on unfounded falsehoods about TikTok that will do nothing to advance the national security of the United States,” the company said Wednesday.

    The Republican senator Marco Rubio on Tuesday unveiled bipartisan legislation to ban TikTok altogether in the United States, ratcheting up pressure on ByteDance over US fears the app could be used to spy on Americans and censor content. Rubio is also a sponsor of Hawley’s TikTok government-device ban bill. The legislation would block all transactions from any social media company in or under the influence of China and Russia, Rubio’s office said.

    At a hearing last month, the FBI director, Chris Wray, said TikTok’s US operations raise national security concerns. In 2020, then president Donald Trump attempted to block new users from downloading TikTok and to ban other transactions that would have effectively blocked the app’s use in the United States, but lost a series of court battles over the measure.

    The government’s Committee on Foreign Investment in the United States (CFIUS), a powerful national security body, in 2020 ordered ByteDance to divest TikTok over fears that US user data could be passed to the Chinese government, though ByteDance has not done so. CFIUS and TikTok have been in talks for months to reach a national security agreement to protect the data of TikTok’s more than 100 million users, but it does not appear any deal will be reached before the end of the year.


    ‘We risk another crisis’: TikTok in danger of being major vector of election misinformation

    A study suggests the video platform is failing to filter false claims and rhetoric in the weeks leading up to US midterms

    In the final sprint to the US midterm elections the social media giant TikTok risks being a major vector for election misinformation, experts warn, with the platform’s massive user base and its design making it particularly susceptible to such threats.

    Preliminary research published last week by the digital watchdog Global Witness and the Cybersecurity for Democracy team at New York University suggests the video platform is failing to filter large volumes of election misinformation in the weeks leading up to the vote. TikTok approved 90% of advertisements featuring election misinformation submitted by researchers, including ads containing the wrong election date, false claims about voting requirements, and rhetoric dissuading people from voting.

    TikTok has for several years prohibited political advertising on the platform, including branded content from creators and paid advertisements, and ahead of the midterm elections automatically disabled monetization to better enforce the policy, TikTok’s global business president, Blake Chandlee, said in a September blog post. “TikTok is, first and foremost, an entertainment platform,” he wrote.

    But the NYU study showed TikTok “performed the worst out of all of the platforms tested” in the experiment, the researchers said, approving more of the false advertisements than other sites such as YouTube and Facebook.

    The findings spark concern among experts, who point out that – with 80 million monthly users in the US and large numbers of young Americans indicating the platform is their primary source of news – such posts could have far-reaching consequences. Yet the results come as little surprise, those experts say. During previous major elections in the US, TikTok had far fewer users, but misinformation was already spreading widely on the app.
    TikTok also faced challenges moderating misinformation about elections in Kenya and the war in Ukraine. And the company, experts say, is doing far too little to rein in election lies spreading among its users.

    “This year is going to be much worse as we near the midterms,” said Olivia Little, a researcher who co-authored a Media Matters report on the platform. “There has been an exponential increase in users, which only means there will be more misinformation TikTok needs to proactively work to stop or we risk facing another crisis.”

    A crucial test

    With Joe Biden himself warning that the integrity of American elections is under threat, TikTok has announced a slew of policies aimed at combating election misinformation spreading through the app. The company laid out guidelines and safety measures related to election content and launched an elections center, which “connect[s] people who engage with election content” to approved news sources in more than 45 languages.

    “To bolster our response to emerging threats, TikTok partners with independent intelligence firms and regularly engages with others across the industry, civil society organizations, and other experts,” said Eric Han, TikTok’s head of US safety, in August.

    In September, the company also announced new policies requiring government and politician accounts to be verified and said it would ban videos aimed at campaign fundraising. TikTok added it would block verified political accounts from using money-making features available to influencers on the app, such as digital payments and gifting.

    Still, experts have deep concerns about the spread of election falsehoods on the video app. Those fears are exacerbated by TikTok’s structure, which makes it difficult to investigate and quantify the spread of misinformation.
    Unlike Twitter, which makes public its application programming interface (API), software that allows researchers to extract data from platforms for analysis, or Meta, which offers its own monitoring tool, CrowdTangle, TikTok does not offer tools for external audits. However, independent research as well as the platform’s own transparency reports highlight the challenges it has faced in recent years moderating election-related content.

    TikTok removed 350,000 videos related to election misinformation in the latter half of 2020, according to a transparency report from the company, and blocked 441,000 videos containing misinformation from user feeds globally.

    The internet nonprofit Mozilla warned in the run-up to Kenya’s 2022 election that the platform was “failing its first real test” to stem dis- and misinformation during pivotal political moments. The nonprofit said it had found more than 130 videos on the platform containing election-related misinformation, hate speech and incitement against communities prior to the vote, which together gained more than 4m views. “Rather than learn from the mistakes of more established platforms like Facebook and Twitter, TikTok is following in their footsteps,” the Mozilla researcher Odanga Madung wrote at the time.

    Why TikTok is so vulnerable to misinformation

    Part of the reason TikTok is uniquely susceptible to misinformation lies in certain features of its design and algorithm, experts say. Its For You Page, the app’s general video feed, is highly customized to users’ individual preferences via an algorithm that is little understood, even by the company’s own staff.
    That combination lends itself to misinformation bubbles, said Little, the Media Matters researcher. “TikTok’s hyper-tailored algorithm can blast random accounts into virality very quickly, and I don’t think that is going to change anytime soon because it’s the reason it has become such a popular platform,” she said.

    Meanwhile, the ease with which users remix, record and repost videos – few of which have been fact-checked – allows misinformation to spread easily while making it more difficult to remove. TikTok’s video-only content poses additional moderation hurdles, as automated systems may find it more difficult to screen video for misinformation than text.

    Several recent studies have highlighted how those features have exacerbated the spread of misinformation on the platform. When it comes to TikTok content related to the war in Ukraine, for example, the ability to “remix media” without fact-checking it has made it difficult “even for seasoned journalists and researchers to discern truth from rumor, parody and fabrication”, said a recent report from Harvard’s Shorenstein Center on Media. That report cited other design features in the app that make it an easy pathway for misinformation, including that most users post under pseudonyms and that, unlike on Facebook, where users’ feeds are filled primarily with content from friends and people they know, TikTok’s For You Page is largely composed of content from strangers.

    Some of these problems are not unique to TikTok, said Marc Faddoul, co-director of Tracking Exposed, a digital rights organization investigating TikTok’s algorithm. Studies have shown that algorithms across all platforms are optimized to detect and exploit cognitive biases for more polarizing content, and that any platform that relies on algorithms rather than a chronological newsfeed is more susceptible to disinformation.
    But TikTok is the most accelerated model of an algorithmic feed yet, he said. At the same time, he added, the platform has been slow in coming to grips with issues that have plagued its peers like Facebook and Twitter for years. “Historically, TikTok has characterized itself as an entertainment platform, denying they host political content and therefore disinformation, but we know now that is not the case,” he said.

    Young user base is particularly at risk

    Experts say an additional cause for concern is a lack of media literacy among TikTok’s largely young user base. The vast majority of young people in the US use TikTok, a recent Pew Research Center report showed. Internal data from Google revealed in July that nearly 40% of Gen Z – the generation born between the late 1990s and early 2000s – globally use TikTok and Instagram as their primary search engines.

    In addition to being more likely to get news from social media, Gen Z also has far higher rates of mistrust in traditional institutions such as the news media and the government compared with past generations, creating a perfect storm for the spread of misinformation, said Helen Lee Bouygues, president of the Reboot Foundation, a media literacy advocacy organization.

    “By the nature of its audience, TikTok is exposing a lot of young children to disinformation who are not trained in media literacy, period,” she said. “They are not equipped with the skills necessary to recognize propaganda or disinformation when they see it online.”

    The threat is amplified by the sheer amount of time spent on the app, with 67% of US teenagers using it for an average of 99 minutes per day.
    Research conducted by the Reboot Foundation showed that the longer a user spends on an app, the less likely they are to distinguish between misinformation and fact.

    To enforce its policies, which prohibit election misinformation, harassment, hateful behavior and violent extremism, TikTok says it relies on “a combination of people and technology” and partners with fact-checkers to moderate content. The company directed questions about its election misinformation measures to a blog post, but declined to share how many human moderators it employs.

    Bouygues said the company should do far more to protect its users, particularly young ones. Her research shows that media literacy training and in-app nudges towards fact-checking could go a long way when it comes to combating misinformation. But government action is needed to force such changes.

    “If the TikToks of the world really want to fight fake news, they could do it,” she said. “But as long as their financial model is keeping eyes on the page, they have no incentive to do so. That’s where policymaking needs to come into play.”


    TikTok tightens policies around political issues in run-up to US midterms

    Politicians will be banned from using social media platform for campaign fundraising

    Politicians on TikTok will no longer be able to use the app’s tipping tools, nor access advertising features on the social network, as the company tightens its policies around political issues in the run-up to the US midterm elections in six weeks’ time.

    Political advertising is already banned on the platform, alongside “harmful misinformation”, but as TikTok has grown over the past two years, new features such as gifting, tipping and ecommerce have been embraced by some politicians on the site. Now, new rules will again limit political players’ ability to use the app for anything other than organic activity, to “help ensure TikTok remains a fun, positive and joyful experience”, the company said.

    “TikTok has long prohibited political advertising, including both paid ads on the platform and creators being paid directly to make branded content,” it added. “We currently do that by prohibiting political content in an ad, and we’re also now applying restrictions at an account level. This means accounts belonging to politicians and political parties will automatically have their access to advertising features turned off, which will help us more consistently enforce our existing policy.”

    Political accounts will be blocked from other monetisation features, and will also be removed from eligibility for the company’s “creator fund”, which distributes cash to some of the most successful video producers on the site. They will also be banned from using the platform for campaign fundraising, “such as a video from a politician asking for donations, or a political party directing people to a donation page on their website”, the service said.

    “TikTok is first and foremost an entertainment platform, and we’re proud to be a place that brings people together over creative and entertaining content.
    By prohibiting campaign fundraising and limiting access to our monetisation features, we’re aiming to strike a balance between enabling people to discuss the issues that are relevant to their lives while also protecting the creative, entertaining platform that our community wants.”

    The rules are in contrast to those of Meta’s Facebook and Instagram, both of which have long allowed political advertising and encouraged politicians to use their services for campaigning purposes. In August, Meta announced its own set of policy updates for the US midterm elections, and promised to devote “hundreds of people across more than 40 teams” to ensuring the safety and security of the elections. Meta will ban all new political, electoral and social issue adverts on both its platforms for the final weeks of the campaign, its head of global affairs, Nick Clegg, said, and will remove adverts that encourage people not to vote or that call into question the legitimacy of the election. But the company will not remove “organic” content that does the same.

    After years of being effectively unregulated, online political advertising is being brought under the aegis of electoral authorities in more and more countries.

    On Monday, Google said it would begin a program to ensure that political emails never get sent to spam folders, after Republican congressional leaders accused it of partisan censorship and introduced legislation to try to ban the practice. “We expect to begin the pilot with a small number of campaigns from both parties and will test whether these changes improve the user experience, and provide more certainty for senders during this election period,” the company said in a statement.


    Facebook owner reportedly paid Republican firm to push message TikTok is ‘the real threat’

    Meta, owner of Facebook and Instagram, solicited campaign accusing TikTok of being a danger to American children

    Meta, the owner of Facebook, Instagram and other social media platforms, is reportedly paying a notable GOP consulting firm to create public distrust around TikTok. The campaign, run by the Republican strategy firm Targeted Victory, placed op-eds and letters to the editor in various publications, accusing TikTok of being a danger to American children, among other disparaging accusations.

    The firm wanted to “get the message out that while Meta is the current punching bag, TikTok is the real threat especially as a foreign owned app that is #1 in sharing data that young teens are using,” a director for the firm wrote in a February email, part of a trove of emails revealed by the Washington Post. “Dream would be to get stories with headlines like ‘From dances to danger: how TikTok has become the most harmful social media space for kids,’” another staffer wrote.

    Campaign operatives promoted stories to local media, including some unsubstantiated claims, that tied TikTok to supposedly dangerous trends popular among teenagers – despite those trends originating on Facebook. Such trends included the viral 2021 “devious lick” trend, in which students vandalized school property. Targeted Victory pushed stories on “devious lick” to local publications in Michigan, Minnesota, Rhode Island, Massachusetts and Washington DC.
    But the trend originally spread on Facebook, according to an investigation by Anna Foley for the podcast Reply All.

    Campaign workers also used anti-TikTok messages to deflect from criticism Meta had received over its privacy and antitrust practices. “Bonus point if we can fit this into a broader message that the current bills/proposals aren’t where [state attorneys general] or members of Congress should be focused,” wrote a Targeted Victory staffer.

    In a comment to the Post, a TikTok representative said the company was “deeply concerned” about “the stoking of local media reports on alleged trends that have not been found on the platform”. A Meta representative, Andy Stone, defended the campaign to the Washington Post, saying: “We believe all platforms, including TikTok, should face a level of scrutiny consistent with their growing success.”


    Rightwing 'super-spreader': study finds handful of accounts spread bulk of election misinformation

    A handful of rightwing “super-spreaders” on social media were responsible for the bulk of election misinformation in the run-up to the Capitol attack, according to a new study that also sheds light on the staggering reach of falsehoods pushed by Donald Trump.

    A report from the Election Integrity Partnership (EIP), a group that includes Stanford and the University of Washington, analyzed social media platforms including Facebook, Twitter, Instagram, YouTube and TikTok during several months before and after the 2020 elections. It found that “super-spreaders” – responsible for the most frequent and most impactful misinformation campaigns – included Trump and his two elder sons, as well as other members of the Trump administration and the rightwing media.

    The study’s authors and other researchers say the findings underscore the need to disable such accounts to stop the spread of misinformation. “If there is a limit to how much content moderators can tackle, have them focus on reducing harm by eliminating the most effective spreaders of misinformation,” said Lisa Fazio, an assistant professor at Vanderbilt University who studies the psychology of fake news but was not involved in the EIP report. “Rather than trying to enforce the rules equally across all users, focus enforcement on the most powerful accounts.”

    The report analyzed social media posts featuring words like “election” and “voting” to track key misinformation narratives related to the 2020 election, including claims of mail carriers throwing away ballots, legitimate ballots strategically not being counted, and other false or unproven stories. The report studied how these narratives developed and the effect they had.
    It found that during this period, popular rightwing Twitter accounts “transformed one-off stories, sometimes based on honest voter concerns or genuine misunderstandings, into cohesive narratives of systemic election fraud”. Ultimately, the “false claims and narratives coalesced into the meta-narrative of a ‘stolen election’, which later propelled the January 6 insurrection”, the report said.

    “The 2020 election demonstrated that actors – both foreign and domestic – remain committed to weaponizing viral false and misleading narratives to undermine confidence in the US electoral system and erode Americans’ faith in our democracy,” the authors concluded.

    Next to no factchecking, with Trump as the super-spreader-in-chief

    In monitoring Twitter, the researchers analyzed more than 22 million tweets sent between 15 August and 12 December. The study determined which accounts were most influential by the size and speed with which they spread misinformation. “Influential accounts on the political right rarely engaged in factchecking behavior, and were responsible for the most widely spread incidents of false or misleading information in our dataset,” the report said.

    Of the 21 top offenders, 15 were verified Twitter accounts – which are particularly dangerous when it comes to election misinformation, the study said. The “repeat spreaders” responsible for the most widely spread misinformation included Eric Trump, Donald Trump, Donald Trump Jr and influencers such as James O’Keefe, Tim Pool, Elijah Riot and Sidney Powell. All 21 of the top accounts for misinformation leaned rightwing, the study showed.

    “Top-down mis- and disinformation is dangerous because of the speed at which it can spread,” the report said.
    “If a social media influencer with millions of followers shares a narrative, it can garner hundreds of thousands of engagements and shares before a social media platform or factchecker has time to review its content.”

    On nearly all the platforms analyzed in the study – including Facebook, Twitter and YouTube – Donald Trump played a massive role. The report pinpointed 21 incidents in which a tweet from Trump’s official @realDonaldTrump account jumpstarted the spread of a false narrative across Twitter. For example, Trump’s tweets baselessly claiming that the voting equipment manufacturer Dominion Voting Systems was responsible for election fraud played a large role in amplifying the conspiracy theory to a wider audience. False or baseless tweets sent by Trump’s account – which had 88.9m followers at the time – garnered more than 460,000 retweets.

    Meanwhile, Trump’s YouTube channel was linked to six distinct waves of misinformation that, combined, were the most viewed of any repeat spreader’s videos. His Facebook account had the most engagement of all those studied.

    The Election Integrity Partnership study is not the first to show the massive influence Trump’s social media accounts have had on the spread of misinformation. In one year – between 1 January 2020 and 6 January 2021 – Trump pushed disinformation in more than 1,400 Facebook posts, according to a report from Media Matters for America released in February. Trump was ultimately suspended from the platform in January, and Facebook is debating whether he will ever be allowed back.

    Specifically, 516 of his posts contained disinformation about Covid-19, 368 contained election disinformation, and 683 contained harmful rhetoric attacking his political enemies. Allegations of election fraud earned over 149.4 million interactions, or an average of 412,000 interactions per post, and accounted for 16% of interactions on his posts in 2020.
    Trump had a unique ability to amplify news stories that would otherwise have remained contained in smaller outlets and subgroups, said Matt Gertz of Media Matters for America. “What Trump did was take misinformation from the rightwing ecosystem and turn it into a mainstream news event that affected everyone,” he said. “He was able to take these absurd lies and conspiracy theories and turn them into national news. And if you do that, and inflame people often enough, you will end up with what we saw on January 6.”

    Effects of false election narratives on voters

    “Super-spreader” accounts were ultimately very successful in undermining voters’ trust in the democratic system, the report found. Citing a poll by the Pew Research Center, the study said that, of the 54% of people who voted in person, approximately half had cited concerns about voting by mail, and only 30% of respondents were “very confident” that absentee or mail-in ballots had been counted as intended.

    The report outlined a number of recommendations, including removing “super-spreader” accounts entirely. Outside experts agree that tech companies should more closely scrutinize top accounts and repeat offenders. Researchers said the refusal to take action, or to establish clear rules for when action should be taken, helped fuel the prevalence of misinformation. For example, only YouTube had a publicly stated “three-strike” system for offenses related to the election. Platforms like Facebook reportedly had three-strike rules as well but did not make them publicly known.

    Only four of the top 20 Twitter accounts cited as top spreaders were actually removed, the study showed – including Donald Trump’s in January. Twitter has maintained that its ban of the former president is permanent. YouTube’s chief executive said this week that Trump would be reinstated on the platform once the “risk of violence” from his posts passes.
    Facebook’s independent oversight board is now considering whether to allow Trump to return. “We have seen that he uses his accounts as a way to weaponize disinformation. It has already led to riots at the US Capitol; I don’t know why you would give him the opportunity to do that again,” Gertz said. “It would be a huge mistake to allow Trump to return.”


    Opinion divided over Trump's ban from social media

    As rioters were gathering around the US Capitol last Wednesday, a familiar question began to echo around the offices of the large social networks: what should they do about Donald Trump and his provocative posts? The answer has been emphatic: ban him.

    First he was suspended from Twitter, then from Facebook. Snapchat, Spotify, Twitch, Shopify and Stripe have all followed suit, while Reddit, TikTok, YouTube and even Pinterest announced new restrictions on posting in support of the president or his actions. Parler, a social media platform that sells itself on a lack of moderation, was removed from app stores and refused service by Amazon.

    The action has sparked a huge debate about free speech and whether big technology companies – or, to be more precise, their billionaire chief executives – are fit to act as judge and jury in high-profile cases. So what are the arguments on both sides – and who is making them?

    FOR

    For many, such social media bans were the right thing to do – if too late. After all, the incitement had already occurred and the Capitol had already been stormed.

    “While I’m pleased to see social media platforms like Facebook, Twitter and YouTube take long-belated steps to address the president’s sustained misuse of their platforms to sow discord and violence, these isolated actions are both too late and not nearly enough,” said Mark Warner, a Democratic senator from Virginia.
“Disinformation and extremism researchers have for years pointed to broader network-based exploitation of these platforms.”Greg Bensinger, a member of the editorial board of the New York Times, said what happened on 6 January “ought to be social media’s day of reckoning”.He added: “There is a greater calling than profits, and Mr Zuckerberg and Twitter’s CEO, Jack Dorsey, must play a fundamental role in restoring truth and decency to our democracy and democracies around the world.“That can involve more direct, human moderation of high-profile accounts; more prominent warning labels; software that can delay posts so that they can be reviewed before going out to the masses, especially during moments of high tension; and a far greater willingness to suspend or even completely block dangerous accounts like Mr Trump’s.”Even observers who had previously argued against taking action had changed their mind by the weekend. “Turn off Trump’s account,” wrote tech analyst Ben Thompson.“My preferred outcome to yesterday’s events is impeachment. Encouraging violence to undo an election result that one disagrees with is sedition, surely a high crime or misdemeanor, and I hold out hope that Congress will act over the next few days, as unlikely as that seems … Sometimes, though, the right level doesn’t work, yet the right thing needs to be done.” Free speech activist Jillian C York agreed that action had to be taken, but, she said on Monday: “I’m cautious about praising any of these companies, to be honest. I think that in particular Facebook deserves very little praise. They waited until the last moment to do anything, despite months of calls.“When it comes to Twitter, I think we can be a little bit more forgiving. They tried for many, many months to take cautious decisions. Yes, this is a sitting president; taking them down is a problem. 
And it is problematic, even if there is a line at which it becomes the right choice.” Some have wondered whether the platforms’ convenient decision to grow a backbone has less to do with the violence of the day and more with political manoeuvring.“It took blood & glass in the halls of Congress – and a change in the political winds – for the most powerful tech companies to recognise, at the last possible moment, the threat of Trump,” tweeted Senator Richard Blumenthal, from Connecticut.AGAINSTPredictably, opposition to Trump’s ban came from his own family. “Free speech is dead and controlled by leftist overlords,” tweeted his son Donald Jr. “The ayatollah and numerous other dictatorial regimes can have Twitter accounts with no issue despite threatening genocide to entire countries and killing homosexuals etc… but The President of the United States should be permanently suspended. Mao would be proud.”But the ban, and the precedent that it could set, has worried some analysts and media experts.“Banning a sitting president from social media platforms is, whichever way you look at it, an assault on free speech,” the Sunday Times wrote in an editorial. “The fact that the ban was called for by, among others, Michelle Obama, who said on Thursday that the Silicon Valley platforms should stop enabling him because of his ‘monstrous behaviour’, will add to the suspicion that the ban was politically motivated.”On Monday, the German chancellor, Angela Merkel – hardly known for her affection for the US president – made it clear that she thought it was “problematic” that Trump had been blocked. 
Her spokesperson, Steffen Seibert, called freedom of speech “a fundamental right of elementary significance”.She said any restriction should be “according to the law and within the framework defined by legislators – not according to a decision by the management of social media platforms”.The ban has also worried those who are already concerned about the strength of Silicon Valley.“The institutions of American democracy have consistently failed to hold President Trump’s unrestrained authoritarianism, hate and racism accountable,” says Silkie Carlo, director of Big Brother Watch, “but this corporate power grab does nothing to benefit American democracy in practice or in principle.”“American democracy is in peril if it relies on a corporate denial of service to protect the nation from its own president, rather than rely on accountable institutions of justice and democracy,” Carlo added.For York, such concerns are valid, but risk an over-emphasis on US politics and concerns. “The majority of the public doesn’t care about these issues on a day-to-day basis,” she says, citing world leaders such as Jair Bolsonaro and Narendra Modi as others who have engaged in hate speech and incitement on Twitter.“It’s only when it hits Trump, and that’s the problem. Because we should be thinking about this as a society day to day.” More


    TikTok asks US court to intervene after Trump administration leaves app in limbo

    The popular video-sharing app TikTok says its future has been in limbo since Donald Trump tried to shut it down earlier this fall and is asking a federal court to intervene.
    Trump in August signed an executive order to ban TikTok if it did not sell its US operations in 45 days. The move forced TikTok’s Chinese owner ByteDance to consider deals with several American companies before ultimately settling on a proposal to place TikTok under the oversight of the American companies Oracle and Walmart, each of which would also have a financial stake in the company.
    But TikTok said this week that it has received “no clarity” from the US government about whether that proposal has been accepted.
    The deal has been under a national-security review by the interagency Committee on Foreign Investment in the United States, or CFIUS, which is led by the treasury department. The department didn’t return emailed requests for comment this week.
    “With the November 12 CFIUS deadline imminent and without an extension in hand, we have no choice but to file a petition in court to defend our rights and those of our more than 1,500 employees in the US,” TikTok said in a written statement Tuesday.
    Trump has cited concerns that the Chinese government could spy on TikTok users if the app remains under Chinese ownership. TikTok has denied it is a security threat but said it is still trying to work with the administration to resolve its concerns.
    The legal challenge is “a protection to ensure these discussions can take place”, the company said.
    The Trump administration had earlier sought to ban the app from smartphone app stores and deprive it of vital technical services. To do this, the US could have internet service providers block TikTok usage from US IP addresses, as India did when it banned TikTok, effectively making TikTok unusable.
    Such actions were set to take place on 20 September but federal judges have so far granted TikTok extensions.
    TikTok is now looking to the US court of appeals for the District of Columbia circuit to review Trump’s divestment order and the government’s national-security review. The company filed a 49-page petition asking the court to review the decision, saying the forced divestment from TikTok violates the constitution.
    “The government has taken virtually all of the ‘sticks’ in the ‘bundle’ of property rights ByteDance possesses in its TikTok US platform, leaving it with no more than the twig of potentially being allowed to make a rushed, compelled sale, under shifting and unrealistic conditions, and subject to governmental approval,” the filing says.
    The US attorney general’s office did not immediately respond to a request for comment.