More stories


    Deepfakes are here and can be dangerous, but ignore the alarmists – they won’t harm our elections | Ciaran Martin

    Sixteen days before the Brexit referendum, and only two days before the deadline to apply to cast a ballot, the IT system for voter registrations collapsed. The remain and leave campaigns were forced to agree a 48-hour registration extension. Around the same time, evidence was beginning to emerge of a major Russian “hack-and-leak” operation targeting the US presidential election. Inevitably, questions arose as to whether the Russians had successfully disrupted the Brexit vote.

    The truth was more embarrassingly simple. A comprehensive technical investigation, supported by the National Cyber Security Centre – which I headed at the time – set out in detail what had happened. A TV debate on Brexit had generated unexpected interest. Applications spiked to double those projected. The website couldn’t cope and crashed. There was no sign of any hostile activity.

    But this conclusive evidence did not stop a parliamentary committee, a year later, saying that it did “not rule out the possibility that there was foreign interference” in the incident. No evidence was provided for this remarkable assertion. What actually happened was a serious failure of state infrastructure, but it was not a hostile act.

    This story matters because it has become too easy – even fashionable – to cast the integrity of elections into doubt. “Russia caused Brexit” is nothing more than a trope that provides easy comfort to the losing side. There was, and is, no evidence of any successful cyber operations or other digital interference in the UK’s 2016 vote.

    But Brexit is far from the only example of such electoral alarmism. In its famous report on Russia in 2020, the Intelligence and Security Committee correctly said that the first detected attempt by Russia to interfere in British politics occurred in the context of the Scottish referendum campaign in 2014.

    However, the committee did not add that the quality of such efforts was risible, and the impact of them was zero. Russia has been waging such campaigns against the UK and other western democracies for years. Thankfully, though, it hasn’t been very good at it. At least so far.

    Over the course of the past decade, there are only two instances where digital interference can credibly be seen to have severely affected a democratic election anywhere in the world. The US in 2016 is undoubtedly one. The other is Slovakia last year, when an audio deepfake seemed to have an impact on the polls late on.

    The incident in Slovakia fuelled part of a new wave of hysteria about electoral integrity. Now the panic is all about deepfakes. But we risk making exactly the same mistake with deepfakes as we did with cyber-attacks on elections: confusing activity and intent with impact, and what might be technically possible with what is realistically achievable.

    So far, it has proved remarkably hard to fool huge swathes of voters with deepfakes. Many of them, including much of China’s information operations, are poor in quality. Even some of the better ones – like a recent Russian fake of Ukrainian TV purporting to show Kyiv admitting it was behind the Moscow terror attacks – look impressive, but are so wholly implausible in substance they are not believed by anyone.
Moreover, a co-ordinated response by a country to a deepfake can blunt its impact: think of the impressive British response to the attempt to smear Sadiq Khan last November, when the government security minister lined up behind the Labour mayor of London in exhorting the British media and public to pay no attention to a deepfake audio being circulated.

This was in marked contrast to events in Slovakia, where gaps in Meta’s removal policy, and the country’s electoral reporting restrictions, made it much harder to circulate the message that the controversial audio was fake. If a deepfake does cut through in next month’s British election, what matters is how swiftly and comprehensively it is debunked.

None of this is to be complacent about the reality that hostile states are trying to interfere in British politics. They are. And with fast-developing tech and techniques, the threat picture can change. “Micro” operations, such as a localised attempt to use AI to persuade voters in New Hampshire to stay at home during the primaries, are one such area of concern. In the course of the UK campaign, one of my main worries would be about targeted local disinformation and deepfake campaigns in individual contests. It is important that the government focuses resources and capabilities on blunting these operations.

But saying that hostile states are succeeding in interfering in our elections, or that they are likely to, without providing any tangible evidence is not a neutral act. In fact, it’s really dangerous. If enough supposedly credible voices loudly cast aspersions on the integrity of elections, at least some voters will start to believe them. And if that happens, we will have done the adversaries’ job for them.

There is a final reason why we should be cautious about the “something-must-be-done” tendency where the risk of electoral interference is concerned. State intervention in these matters is not some cost-free, blindingly obvious solution that the government is too complacent to use. If false information is so great a problem that it requires government action, that requires, in effect, creating an arbiter of truth. To which arm of the state would we wish to assign this task?
    Ciaran Martin is a professor at the Blavatnik School of Government at the University of Oxford, and a former chief executive of the National Cyber Security Centre


    How to spot a deepfake: the maker of a detection tool shares the key giveaways

    You – a human, presumably – are a crucial part of detecting whether a photo or video is made by artificial intelligence.

    There are detection tools, made both commercially and in research labs, that can help. To use these deepfake detectors, you upload or link a piece of media that you suspect could be fake, and the detector will give a percent likelihood that it was AI-generated.

    But your senses and an understanding of some key giveaways provide a lot of insight when analyzing media to see whether it’s a deepfake.

    While regulations for deepfakes, particularly in elections, lag the quick pace of AI advancements, we have to find ways to figure out whether an image, audio or video is actually real.

    Siwei Lyu made one of those tools, the DeepFake-o-meter, at the University at Buffalo. His tool is free and open-source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these different labs’ tools to get a sense of whether it could be AI-generated.

    The DeepFake-o-meter shows both the benefits and limitations of AI-detection tools. When we ran a few known deepfakes through the various algorithms, the detectors gave a rating for the same video, photo or audio recording ranging from 0% to 100% likelihood of being AI-generated.

    AI, and the algorithms used to detect it, can be biased by the way it’s taught. At least in the case of the DeepFake-o-meter, the tool is transparent about that variability in results, while with a commercial detector bought in the app store, it’s less clear what its limitations are, he said.

    “I think a false image of reliability is worse than low reliability, because if you trust a system that is fundamentally not trustworthy to work, it can cause trouble in the future,” Lyu said.

    His system, which launched publicly just in January of this year, is still barebones for users. But his goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it’s real. His team is working on ways to rank the various algorithms it uses for detection to inform users which detector would work best for their situation. Users can opt in to sharing the media they upload with Lyu’s research team to help them better understand deepfake detection and improve the website.

    Lyu often serves as an expert source for journalists trying to assess whether something could be a deepfake, so he walked us through a few well-known instances of deepfakery from recent memory to show the ways we can tell they aren’t real. Some of the obvious giveaways have changed over time as AI has improved, and will change again.

    “A human operator needs to be brought in to do the analysis,” he said. “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”

    Audio

    A robocall that circulated in New Hampshire using an AI-generated voice of President Joe Biden encouraged voters there not to turn out for the Democratic primary, one of the first major instances of a deepfake in this year’s US elections.

    When Lyu’s team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one of the detectors came back at more than 50% likelihood of AI – that one said it had a 100% likelihood. The other four ranged from 0.2% to 46.8% likelihood. A longer version of the call prompted three of the five detectors to come in at more than 90% likelihood.

    This tracks with our experience creating audio deepfakes: they’re harder to pick out because you’re relying solely on your hearing, and easier to generate because there are tons of examples of public figures’ voices for AI to use to make a person’s voice say whatever they want.

    But there are some clues in the robocall, and in audio deepfakes in general, to look out for.

    AI-generated audio often has a flatter overall tone and is less conversational than how we typically talk, Lyu said. You don’t hear much emotion. There may not be proper breathing sounds, like taking a breath before speaking.

    Pay attention to the background noises, too. Sometimes there are no background noises when there should be. Or, in the case of the robocall, there’s a lot of noise mixed into the background, almost to give an air of realness that actually sounds unnatural.

    Photos

    With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human pathology”, like buildings with crooked lines or hands with six fingers, Lyu said. Little details like hair, mouths and shadows can hold clues to whether something is real.

    Hands were once a clearer tell for AI-generated images because they would more frequently end up with extra appendages, though the technology has improved and that’s becoming less common, Lyu said.

    We sent the photos of Trump with Black voters that a BBC investigation found had been AI-generated through the DeepFake-o-meter. Five of the seven image-deepfake detectors came back with a 0% likelihood that the fake image was fake, while one clocked in at 51%. The remaining detector said no face had been detected.

    Lyu’s team noted unnatural areas around Trump’s neck and chin, people’s teeth looking off and webbing around some fingers.

    Beyond these visual oddities, AI-generated images just look too glossy in many cases.

    “It’s very hard to put into quantitative terms, but there is this overall view and look that the image looks too plastic or like a painting,” Lyu said.

    Videos

    Videos, especially those of people, are harder to fake than photos or audio. In some AI-generated videos without people, it can be harder to figure out whether imagery is real, though those aren’t “deepfakes” in the sense that the term typically refers to people’s likenesses being faked or altered.

    For the video test, we sent a deepfake of Ukrainian president Volodymyr Zelenskiy that shows him telling his armed forces to surrender to Russia, which did not happen.

    The visual cues in the video include unnatural eye-blinking that shows some pixel artifacts, Lyu’s team said. The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.

    Some of the detection algorithms look specifically at the lips, because current AI video tools will mostly change the lips to say things a person didn’t say. The lips are where most inconsistencies are found. An example would be if a letter sound requires the lips to be closed, like a B or a P, but the deepfake’s mouth is not completely closed, Lyu said. When the mouth is open, the teeth and tongue appear off, he said.

    The video, to us, is more clearly fake than the audio or photo examples we flagged to Lyu’s team. But of the six detection algorithms that assessed the clip, only three came back with very high likelihoods of AI generation (more than 90%). The other three returned very low likelihoods, ranging from 0.5% to 18.7%.
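The multi-detector comparison described above, where the same clip is run through several detectors and their disagreement matters as much as any single score, can be sketched in a few lines of code. The snippet below is a minimal, hypothetical Python illustration: the detector names and most of the numbers are invented for the example, and it is not the DeepFake-o-meter’s actual code or API.

```python
# Hypothetical sketch of comparing scores from several deepfake detectors.
# The detector names and most numbers are invented; this is not the
# DeepFake-o-meter's actual code or API.
from statistics import median


def summarize_detector_scores(scores: dict[str, float]) -> dict[str, float]:
    """Aggregate per-detector likelihoods (0-100) that a clip is AI-generated."""
    values = sorted(scores.values())
    return {
        "min": values[0],
        "max": values[-1],
        "median": median(values),
        # A large spread means the detectors disagree with one another.
        "spread": values[-1] - values[0],
    }


if __name__ == "__main__":
    # Illustrative numbers loosely echoing the short robocall test above,
    # where one detector reported 100% and the other four stayed below 50%.
    robocall_scores = {
        "detector_a": 100.0,
        "detector_b": 46.8,
        "detector_c": 12.3,
        "detector_d": 3.1,
        "detector_e": 0.2,
    }
    summary = summarize_detector_scores(robocall_scores)
    print(summary)
    if summary["spread"] > 50:
        print("Detectors disagree sharply; a human analyst should review the clip.")
```

Reporting the spread alongside the median mirrors Lyu’s point about variability: when detectors disagree this sharply, the number alone settles nothing, and a human still has to make the call.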


    US cites AI deepfakes as reason to keep Biden recording with Robert Hur secret

    The US Department of Justice is making a novel legal argument to keep a recording of an interview with Joe Biden from becoming public. In a filing late last week, the department cited the risk of AI-generated deepfakes as one of the reasons it refuses to release audio of the president’s interview with special counsel Robert Hur. The conversation about Biden’s handling of classified documents is a source of heated political contention, with Republicans pushing for release of the recordings and the White House moving to block them.

    The justice department’s filing, which it released late on Friday night, argues that the recording should not be released on a variety of grounds, including privacy interests and executive privilege. One section of the filing, however, is specifically dedicated to the threat of deepfakes and disinformation, stating that there is substantial risk people could maliciously manipulate the audio if it were to be made public.

    “The passage of time and advancements in audio, artificial intelligence, and ‘deep fake’ technologies only amplify concerns about malicious manipulation of audio files,” the justice department stated. “If the audio recording is released here, it is easy to foresee that it could be improperly altered, and that the altered file could be passed off as an authentic recording and widely distributed.”

    The filing presents a novel argument about the threat of AI-generated disinformation from the release of government materials, potentially setting up future legal battles over the balance between transparency and preventing the spread of misinformation.

    “A malicious actor could slow down the speed of the recording or insert words that President Biden did not say or delete words that he did say,” the filing argues. “That problem is exacerbated by the fact that there is now widely available technology that can be used to create entirely different audio ‘deepfakes’ based on a recording.”

    Biden’s interview with Hur reignited a longstanding conservative campaign of questioning Biden’s mental faculties and drawing attention to his age, which critics claim make him unfit to be president. While Hur’s report into classified documents found at Biden’s private residence did not result in charges against him, the special counsel’s description of him as an “elderly man with poor memory” became ammunition for Republicans and prompted Biden to defend his mental fitness.

    Although transcripts of Hur’s interview with Biden are public, conservative groups and House Republicans have taken legal action, filed Freedom of Information Act requests and demanded the release of recorded audio from the conversation as he campaigns against Donald Trump. Biden has asserted executive privilege to prevent the release of the audio, while the latest justice department filing pushes back against many of the conservative claims about the recording.

    The justice department’s filing argues that releasing the recording would create increased public awareness that audio of the interview is circulating, making it more believable when people encounter doctored versions of it.

    A number of politicians have become the target of deepfakes created in attempts to swing political opinion, including Biden. A robocall earlier this year that mimicked Biden’s voice and told people not to vote in New Hampshire’s Democratic primary was sent to thousands of people. The political consultant allegedly behind the disinformation campaign is now facing criminal charges and a potential $6m fine.


    Trump joins TikTok despite seeking to ban app as president

    Former president Donald Trump has joined the social media platform TikTok and made his first post late Saturday night: a video featuring the Ultimate Fighting Championship CEO, Dana White, introducing Trump on the app.

    The move came despite the fact that, as president, Trump pushed to ban TikTok by executive order due to the app’s parent company being based in China. Trump said in March 2024 that he believed the app was a national security threat, but later reversed his support for a ban.

    The 13-second video was taken as Trump attended a UFC event on Saturday evening in Newark, New Jersey. In the video, Trump says it is an “honor” to have joined the app as a Kid Rock song plays in the background.

    “The campaign is playing on all fields,” an adviser to Trump’s campaign told Politico. “Being able to do outreach on multiple platforms and outlets is important and this is just one of many ways we’re going to reach out to voters. TikTok skews towards a younger audience.”

    Trump’s son Donald Trump Jr joined the app last week, where he posted videos from the Manhattan courthouse where Trump was convicted on Thursday on all 34 counts of falsifying business records.

    Joe Biden signed legislation into law in April 2024 that will ban the social media app in the US unless its parent company, ByteDance, sells it within 270 days, over concerns that the app poses a national security risk. TikTok has sued to block the ban, with oral arguments in the case scheduled for September. Biden’s campaign has continued using the app despite the legislation.


    Trump reportedly considers White House advisory role for Elon Musk

    Donald Trump has floated a possible advisory role for the tech billionaire Elon Musk if he were to retake the White House next year, according to a new report from the Wall Street Journal.

    The two men, who once had a tense relationship, have had several phone calls a month since March as Trump looks to court powerful donors and Musk seeks an outlet for his policy ideas, the newspaper said, citing several anonymous sources familiar with their conversations.

    Musk and Trump connected in March at the estate of billionaire Nelson Peltz. Since then, the two have discussed various policy issues, including immigration, which Musk has become vocal about in recent months.

    “America will fall if it tries to absorb the world,” Musk tweeted in March.

    Musk has said he will not donate to either presidential campaign this election, but has reportedly told Trump he plans to host gatherings to dissuade wealthy and powerful allies from supporting Joe Biden in November.

    It has only been a few years since Musk and Trump were exchanging insults. At a rally in 2022, Trump called Musk “another bullshit artist”. Meanwhile, Musk tweeted that Trump should “hang up his hat and sail into the sunset”.

    Musk briefly served on Trump’s White House business advisory group early in his presidency, but dropped out after Trump pulled the US out of the Paris climate accord in 2017.

    Now, relations appear to have softened. When Musk acquired Twitter in 2022 – later renaming it X – he reinstated Trump’s account. Musk has since asked Trump to be more active on X, according to the Journal, though Trump has largely been loyal to his Truth Social platform.

    In March, after meeting Musk at Peltz’s estate, Trump told CNBC: “I’ve been friendly with him over the years. I helped him when I was president. I helped him. I’ve liked him.”

    As the owner of Tesla and SpaceX, Musk has benefited from federal government policies and contracts over the last several years, including rocket-service contracts and tax credits for electric vehicles.

    Trump in March said he and Musk “obviously have opposing views on a minor subject called electric cars”, with Trump opposing ramping up electric vehicle production and supporting tariffs against foreign EV manufacturing.

    Peltz, an investor, has been a key connector between Trump and Musk. Peltz and Musk have told Trump that they are working on a large data-driven project designed to ensure votes are fairly counted, though details on the project remain opaque.


    The US attempt to ban TikTok is an attack on ideas and hope | Dominic Andre

    I’m a TikTok creator. I’ve used TikTok to build a multimillion-dollar business, focused on sharing interesting things I’ve learned in life and throughout my years in college. TikTok allowed me to create a community and help further my goal of educating the public. I always feared that one day, it would be threatened. And now, it’s happening.

    Why does the US government want to ban TikTok? The reasons given include TikTok’s foreign ownership and its “addictive” nature, but I suspect that part of the reason is that the app primarily appeals to younger generations who often hold political and moral views that differ significantly from those of older generations, including many of today’s politicians.

    The platform has become a powerful tool for grassroots movements challenging established elites and has amplified voices advocating against capitalism and in support of the Black Lives Matter movement and women’s rights. Moreover, for the first time in modern history, Americans’ support for Israel has sharply fallen, a shift I would argue can be attributed in part to TikTok’s video-sharing capabilities. In particular, the app’s stitching feature, which allows creators to link videos, correcting inaccuracies and presenting opposing views within a single video, has revolutionized how audiences access information and form more informed opinions.

    US Congress has cited concerns over Chinese data collection as justification for proposing a ban. This rationale might be appropriate for banning the app on government-issued devices, both for official and personal use. Other Americans, however, have the right to decide which technologies we use and how we share our data. Personally, I am indifferent to China possessing my data. What harm can the Chinese government do to me if I live in the United States? Also, I’d point out that viewpoints critical of Chinese policies have proliferated on TikTok, which would seem to indicate that the platform is not predominantly used for spreading Chinese propaganda.

    If politicians’ concern were genuinely about foreign influence, we would discuss in greater detail how Russia allegedly used Facebook to bolster Trump’s campaign and disseminate misinformation. Following this logic, we might as well consider banning Facebook.

    I spent a decade in college studying international affairs and psychology for my master’s. So while I’m somewhat prepared for tough times in the event of TikTok ending, many others aren’t. TikTok hosts tens of thousands of small businesses that, thanks to the platform, reach millions worldwide. This platform has truly leveled the playing field, giving everyone from bedroom musicians to aspiring actors a real shot at being heard. A ban on TikTok would threaten those livelihoods.

    A ban on TikTok would also threaten a diverse community of creators and the global audience connected through it. As a Palestinian, I found that TikTok gave my cause a voice, a loud one. It became a beacon for bringing the stories of Gaza’s suffering to the forefront, mobilizing awareness and action in ways no other platform has.

    Using TikTok’s live-streaming feature, I’ve been able to talk to hundreds of thousands of people each day about the issues Palestinians face. I personally watched the minds of hundreds of people change as they asked me questions out of honest curiosity.

    TikTok has made a real difference in educating people about what is happening in Palestine. The stitch feature is one of the most powerful tools for debunking propaganda spread against Palestinians.
This feature does not exist on other platforms and was first created by TikTok; with it, creators can correct information and respond to the spread of misinformation in real time.

Removing TikTok would do more than disrupt entertainment; it would sever a lifeline for marginalized voices across the world – people like Bisan Owda, an influential young journalist in Gaza whose TikToks each reach hundreds of thousands of views – or creators like myself, whose family was driven out of Palestine and killed during the Nakba in 1948. I’ve used TikTok to show all the paperwork of my great-grandfather’s land ownership in Palestine – and his passport – to demonstrate how his existence was taken away from him.

On TikTok, you’ll find thousands of creators from different ethnic groups teaching the world about their cultures. You’ll also find disabled creators sharing their journeys and experiences in a world designed for able-bodied people. UncleTics, for example, is a creator who lives with Tourette syndrome and creates content about his life while also bringing joy to his audience.

Banning TikTok wouldn’t just mean an enormous financial hit for the creators who use the platform – it would stifle the rich exchange of ideas, culture and awareness that TikTok uniquely fosters. We stand to lose a tool that has brought global issues out of the shadows and into the public eye. A ban on TikTok is a ban on ideas and hope.

Almost every creator and consumer of TikTok I have spoken to does not care about potential data collection by China. Creators, in particular, don’t expect privacy when we’re posting about our lives on a public platform. If Congress wants to enact laws that make it harder for social media companies to potentially harvest our data, it should do so across the board for all social media platforms – not just ones that happen to be based in non-Western countries.

A TikTok ban threatens to destroy millions of jobs and silence diverse voices. It would change the world for the worse.
    Dominic Andre is a content creator and the CEO of The Lab


    New York governor said Black kids in the Bronx do not know the word ‘computer’

    The governor of New York, Kathy Hochul, has rapidly backtracked on remarks she made on Monday after she came under a blizzard of criticism for saying that Black children in the Bronx did not know the word “computer”.

    Hochul had intended her appearance at the Milken Institute Global Conference in California on Monday to showcase Empire AI, the $400m consortium she is leading to create an artificial intelligence computing center in upstate New York. Instead, she dug herself into a hole with an utterance she quickly regretted.

    “Right now we have, you know, young Black kids growing up in the Bronx who don’t even know what the word ‘computer’ is,” she said. For good measure, she added: “They don’t know, they don’t know these things.”

    The backlash was swift and piercing. Amanda Septimo, a member of the New York state assembly representing the south Bronx, called Hochul’s remarks “harmful, deeply misinformed and genuinely appalling”. She said on X that “repeating harmful stereotypes about one of our most underserved communities only perpetuates systems of abuse”.

    Fellow assembly member and Bronxite Karines Reyes said she was deeply disturbed by the remarks and exhorted Hochul to “do better”. “Our children are bright, brilliant, extremely capable, and more than deserving of any opportunities that are extended to other kids,” she said.

    Few public figures were prepared to offer the governor support. They included the speaker of the state assembly, Carl Heastie, who said her words were “inartful and hurtful” but not reflective of “where her heart is”.

    The civil rights leader Al Sharpton also gave her the benefit of the doubt, saying that she was trying to make a “good point” that “a lot of our community is robbed of using social media because we are racially excluded from access”.

    By Monday evening, Hochul had apologized. “I misspoke and I regret it,” she said.

    In a statement to media, she said: “Of course Black children in the Bronx know what computers are – the problem is that they too often lack access to the technology needed to get on track to high-paying jobs in emerging industries like AI.”

    This is not the first time this year that Hochul has found herself with her foot in her mouth. In February she envisaged what would happen if Canada attacked a US city, as a metaphor for the Israeli military operation in Gaza in response to the 7 October Hamas attacks.

    “If Canada someday ever attacked Buffalo, I’m sorry, my friends, there would be no Canada the next day,” she said. That apology for a “poor choice of words” was made swiftly, too.


    ‘It’s just not hitting like it used to’: TikTok was in its flop era before the US moved to ban it

    TikTok is facing its most credible existential threat yet. Last week, the US Congress passed a bill that bans the short-form video app if it does not sell to an American company by this time next year. But as a former avid user whose time on the app has dropped sharply in recent months, I am left wondering – will I even be using the app a year from now?

    Like many Americans of my demographic (aging millennial), I first started using TikTok regularly when the Covid-19 pandemic began and lockdowns gave many of us more time than we knew how to fill.

    As 2020 wore on, and the global news climate became somehow progressively worse with each passing day, what began as a casual distraction became a kind of mental health lifeline. My average total screen time exploded from four hours a day to upwards of 10 – much of which was spent scrolling my “For You” page, the main feed of algorithmically recommended videos within TikTok.

    At the time, content was predictable, mostly light and mind-numbing. From “Get Ready With Me” (GRWM) narratives to kitten videos and the classic TikTok viral dances, I could dive into the algorithmic oblivion anytime I wanted. I loved TikTok.

    The “For You” page taught me actually useful skills like sign language, crocheting and how to cook when you hate cooking (I do). It also filled my days with extremely dumb distractions like the rise (and subsequent criticisms) of a tradwife family and the politicized implosion of several influencers in 2022 over cheating allegations. I enjoy watching urban exploration videos in which people inexplicably hop down into sewers and investigate abandoned houses to see what they can find. Over the course of many months, I watched a man build an underground aquarium and fill it with live eels. I treasured every wet moment. Once, I learned a dumb TikTok dance – Doja Cat’s Say So, which went mega-viral during the pandemic. I probably could still do it if pressed, but don’t look for it on my TikTok profile – I came to my senses and deleted it. I don’t post often, but I did genuinely enjoy the trend of “romanticizing your life” – setting mundane video clips to inspirational music. I was inspired to share my own attempts.

    But now, according to my iPhone’s Screen Time tool, my average time on TikTok ranges from 30 minutes to two hours a day – a far cry from the four-plus hours I was spending at the peak of the pandemic. My withdrawal from TikTok was not a conscious choice – it happened naturally, the same way my addiction began.

    As my partner put it during a recent nightly scroll before bed: “It’s just not hitting like it used to.” I still find some joy on the app. The delight is just less abundant than it was. Something has changed on TikTok. It’s become less serendipitous than before, though I can’t pinpoint exactly when.

    Others seem to agree, from aggrieved fellow journalists to content creators on the platform and countless social media threads – which raises the question: as TikTok faces a potential ban in the US, was the app already on its way out?

    Top apps wax and wane, and content creators notice

    As with all trends, the hot social network of the moment tends to wax and wane (remember Clubhouse?).
Facebook – the original top dog of social media and still the biggest by user numbers – has seen young users flee in recent years, despite overall growth bringing monthly active users to 3 billion in 2023.

But unlike Meta, TikTok is not a public company – which means we may never get granular insight into its user metrics, which have surely evolved over the past few years amid political turmoil and changes to the platform. The company has recently stated that the proposed ban would affect more than 170 million monthly active users in the US.

Creators – especially those who get most of their income from social media – are hyper-aware of fluctuations in the app of the moment, said Brooke Erin Duffy, associate professor of communication at Cornell University. From the time TikTok was first threatened with a ban by Donald Trump in 2020, major users of the platform raised the example of Vine – the now defunct short-form video platform – as a cautionary tale.

“They are aware of the ability of an entire platform to vanish with very little notice,” she said. “[The potential Trump ban] was four years ago, and since then there has been an ebb and flow of panic about the future among creators.”

With that in mind, a number of creators who grew a large audience on TikTok have been diversifying, trying to migrate their fanbases to other platforms in case TikTok disappears. Others have grown frustrated with the algorithm, reporting wildly fluctuating TikTok views and impressions for their videos. Gaming influencer DejaTwo said TikTok has been “very frustrating lately” in a recent post explaining why they believe influencers are leaving the platform. “The only reason I still use TikTok is because of brand loyalty,” they said.

The unwelcome arrival of the TikTok Shop

In September 2023, TikTok launched its TikTok Shop feature – an algorithm-driven in-app shopping experience in which users can buy products directly hawked by creators.

The feature has a number of benefits for TikTok: it boosts monetization of its highly engaged audience, allowing users to make purchases without ever leaving the platform. Integrating shopping will also allow TikTok to compete with platforms like Instagram and Facebook, which have long integrated shopping capabilities, as well as with Chinese e-commerce sites like Temu and Shein, which promise cheap abundance. It is also part of a broader effort from TikTok to move away from politicized videos and other content that may jeopardize its tenuous position with regulators, many of whom believe it has been boosting pro-Palestinian content despite all evidence to the contrary.

Some users have pushed back against the shop’s new omnipresence on the app, often characterized as a kind of QVC shopping channel for gen Z users, stating that it takes away from the fun, unique and interesting original content that earned TikTok its popularity.

“The shopping push has not been very interesting or resonant in general, especially for younger users,” said Damian Rollison, director of market insights for digital marketing firm SOCi. “Shopping is not what appeals to US users on TikTok.”

TikTok’s push of the shopping features, in spite of little interest from its audience, underscores the lack of say users and creators have over their favorite platforms and how they work.
Creators report feeling pressure to participate in the shopping features lest their content get buried in the algorithm, said Duffy.

“There is a tension for creators between gravitating towards what they think TikTok is trying to reward, and their own sense of what the most important and fulfilling kinds of content are,” she said.

The magic algorithm – TikTok’s biggest asset (or liability)

TikTok’s success has been largely attributed to its uncannily accurate algorithm, which monitors user behavior and serves related content on the “For You” page. According to a recent report, ByteDance would only consider selling the platform to comply with the new bill if the sale didn’t include the algorithm, which would make TikTok nearly worthless.

The algorithm, however, can be too responsive for some users. One friend told me they accidentally watched several videos of a niche Brazilian dance and their feed has been inundated with related content ever since. Conversely, if I spend less time on TikTok, when I log back in I find myself besieged with inside jokes that I am not quite in on – creators open monologues with “we’ve all seen that video about [fill in the blank]”. Most recently, my feed was filled with meta-memes commenting on a video about a series of videos about a Chinese factory I’d never heard of.

“More so than any other platform, TikTok is very trend-based,” said Nathan Barry, CEO of ConvertKit. “It has its own kind of culture that you have to be tapped into in order to grow in a way you don’t see on platforms like Instagram Reels or YouTube Shorts.”

The mystery of the algorithm is not unique to TikTok. Social media platforms are not transparent about how they decide which content reaches users, which creates confusion and paranoia among creators about “shadow banning”, when content is demoted in the algorithm and shown less.

“Because these algorithms are opaque and kind of concealed behind the screens, creators are left to discuss among themselves what the algorithm rewards or punishes,” said Duffy. “Companies like to act like they are neutral conduits that just reflect the interests and tastes of the audience, but, of course, they have a perverse level of power to shape these systems.”

TikTok’s legacy

Even if TikTok refuses to sell and shuts down forever, as its parent company seems to want, the app has left an indelible mark on the social media landscape and on the lives of the tens of millions who used it. Many users have stated they quit their traditional jobs to become full-time influencers, and will be financially devastated if TikTok disappears. In Montana, where a ban was passed (and later reversed), many such influencers lobbied aggressively against it.

TikTok’s impact on me will continue in the form of countless pointless facts that are now buried deep in my brain: yesterday I spent 10 minutes of my life learning about the history of Bic pens. I watch ASMR – autonomous sensory meridian response – videos there when I am trying to fall asleep. BookTok influencers still give me legitimately enjoyable recommendations. The other day I laughed until I cried at this video. Entertaining drama remains, including one woman who was recently accused of pretending to be Amish to gain followers. I watched a cat give birth to a litter of kittens on TikTok Live just last week.

The platform’s biggest legacy moving forward is the solidification of a demand for short-form videos, said Rollison – one that its competitors have yet to meet successfully.
While Meta has invested heavily in Instagram Reels and Alphabet in YouTube Shorts, no platform has found the secret sauce that TikTok has to keep users highly engaged.

The Reels venture at Meta had been growing rapidly when the company last released numbers specific to the platform. In recent earnings reports, Meta did not report Reels engagement numbers specifically, but its CEO, Mark Zuckerberg, said that Reels alone now makes up 50% of user time spent on Instagram. Still, the company said it is focusing on scaling the product, and not yet monetizing it. Alphabet has also declined to share recent numbers on its Shorts, but said in October the videos average 70bn daily views. Executives called the product a “long-term bet for the business” in Alphabet’s most recent earnings call.

“TikTok is still the defining standard of success in the realm of short-form video,” Rollison said. “It has defined a need, and if it goes away, that is going to create a vacuum that will be filled by something. The need for short-form video will survive the death of any particular platform.”