More stories

  • Attacks on Dominion Voting Persist Despite High-Profile Lawsuits

    Unproven claims about Dominion Voting Systems still spread widely online.

    With a series of billion-dollar lawsuits, including a $1.6 billion case against Fox News headed to trial this month, Dominion Voting Systems sent a stark warning to anyone spreading falsehoods that the company’s technology contributed to fraud in the 2020 election: Be careful with your words, or you might pay the price.

    Not everyone is heeding the warning.

    “Dominion, why don’t you show us what’s inside your machines?” Mike Lindell, the MyPillow executive and prominent election denier, shouted during a livestream last month. He added that the company, which has filed a $1.3 billion defamation lawsuit against him, was engaged in “the biggest cover-up for the biggest crime in United States history — probably in world history.”

    Claims that election software companies like Dominion helped orchestrate widespread fraud in the 2020 election have been widely debunked in the years since former President Donald J. Trump and his allies first pushed the theories. But far-right Americans on social media and influencers in the news media have continued in recent weeks and months to make unfounded assertions about the company and its electronic voting machines, pressuring government officials to scrap contracts with Dominion, sometimes successfully.

    The enduring attacks illustrate how Mr. Trump’s voter fraud claims have taken root in the shared imagination of his supporters. And they reflect the daunting challenge that Dominion, and any other group that draws the attention of conspiracy theorists, faces in putting false claims to rest.

    The attacks about Dominion have not reached the fevered pitch of late 2020, when the company was cast as a central villain in an elaborate and fictitious voter fraud story. In that tale, the company swapped votes between candidates, injected fake ballots or allowed glaring security vulnerabilities to remain on voting machines.

    Dominion says all those claims have been made without proof to support them.

    “Nearly two years after the 2020 election, no credible evidence has ever been presented to any court or authority that voting machines did anything other than count votes accurately and reliably in all states,” Dominion said in an emailed statement.

    On Friday, the judge in Delaware overseeing the Fox defamation case ruled that it was “CRYSTAL clear” that Fox News and Fox Business had made false claims about the company — a major setback for the network.

    Many prominent influencers have avoided mentioning the company since Dominion started suing prominent conspiracy theorists in 2021. Fox News fired Lou Dobbs that year — only days after it was sued by Smartmatic, another election software company — saying the network was focusing on “new formats.” Mr. Dobbs is also a defendant in Dominion’s case against Fox, which is scheduled to go to trial on April 17.

    Yet there have been nearly nine million mentions of Dominion across social media websites, broadcasts and traditional media since Dominion filed its first lawsuit in January 2021, including nearly a million that have mentioned “fraud” or related conspiracy theories, according to Zignal Labs, a media monitoring company.

    Some of the most widely shared posts came from Representative Marjorie Taylor Greene, Republican of Georgia, who tweeted last month that the lawsuits were politically motivated, and Kari Lake, the former Republican candidate for governor of Arizona who has advanced voter fraud theories about election machines since her defeat last year.

    Mr. Lindell remains one of the loudest voices pushing unproven claims against Dominion and electronic voting machines, posting hundreds of videos to Frank Speech, his news site, attacking the company with tales of voter fraud.

    Last month, Mr. Lindell celebrated on his livestream after Shasta County, a conservative stronghold in Northern California, voted to use paper ballots after ending its contract with Dominion. A county supervisor had flown to meet privately with Mr. Lindell before the vote, discussing how to run elections without voting machines, according to Mr. Lindell. The supervisor ultimately voted to switch to paper ballots.

    In an interview this week with The New York Times, Mr. Lindell claimed to have spent millions on campaigns to end election fraud, focusing on abolishing electronic voting systems and replacing them with paper ballots and hand counting.

    “I will never back down, ever, ever, ever,” he said in the interview. He added that Dominion’s lawsuit against him, which is continuing after the United States Supreme Court declined to consider his appeal, was “frivolous” and that the company was “guilty.”

    “They can’t deny it, nobody can deny it,” Mr. Lindell said.

    Joe Oltmann, the host of “Conservative Daily Podcast” and a promoter of voter fraud conspiracy theories, hosted an episode in late March titled “Dominion Is FINISHED,” in which he claimed that there was a “device that’s used in Dominion machines to actually transfer ballots,” offering only speculative support.

    “This changes everything,” Mr. Oltmann said.

    Dominion sent Mr. Oltmann a letter in 2020 demanding that he preserve documents related to his claims about the company, which is often the first step in a defamation lawsuit.

    In a livestream last month on Rumble, the streaming platform popular among right-wing influencers, Tina Peters, a former county clerk in Colorado who was indicted on 10 charges related to allegations that she tampered with Dominion’s election equipment, devoted more than an hour to various election fraud claims, many of them featuring Dominion. The discussion included a suggestion that because boxes belonging to Dominion were stamped with “Made in China,” the election system was vulnerable to manipulation by the Chinese Communist Party.

    Mr. Oltmann and Ms. Peters did not respond to requests for comment.

    The Fox lawsuit has also added fuel to the conspiracy theory fire.

    Far-right news sites have largely ignored the finding that Fox News hosts disparaged voter fraud claims privately, even as they gave them significant airtime. Instead, the Gateway Pundit, a far-right site known for pushing voter fraud theories, focused on separate documents showing that Dominion executives “knew its voting systems had major security issues,” the site wrote.

    The documents showed the frenzied private messages between Dominion employees as they were troubleshooting problems, with one employee remarking, “our products suck.” In an email, a Dominion spokeswoman noted the remark was about a splash screen that was hiding an error message.

    In February, Mr. Trump shared the Gateway Pundit story on Truth Social, his right-wing social network, stoking a fresh wave of attacks against the company.

    “We will not be silent,” said one far-right influencer whose messages are sometimes shared by Mr. Trump on Truth Social. “Dominion is the enemy!”

  • Trump Supporter Convicted in 2016 Scheme to Suppress Votes for Clinton

    The federal prosecution of Douglass Mackey turned on the question of when free speech turns into dirty tricks.

    Months before the 2016 presidential election, people intent on swaying the outcome were communicating in private Twitter groups with names like “War Room” and “Infowars Madman.”

    The participants included obscure figures and notorious online trolls, many of whom concealed their real identities. There were fans of Donald J. Trump and avowed haters of Hillary Clinton, all working toward a Republican victory while celebrating the “meme magic” they employed to circulate lies and attacks.

    According to federal prosecutors, one man, Douglass Mackey, crossed a line from political speech to criminal conduct when he posted images to Twitter that resembled campaign ads for Mrs. Clinton and falsely stated that people could vote simply by texting “Hillary” to a certain phone number.

    On Friday, after just over four days of deliberation, a jury in Brooklyn found Mr. Mackey guilty of conspiring to deprive others of their right to vote. He is scheduled to be sentenced in August and faces a maximum of 10 years in prison.

    Mr. Mackey, wearing a gray suit, white shirt and pink tie, was stoic as the verdict was read. His lawyer, Andrew J. Frisch, suggested that his client would appeal.

    “This case presents an unusual array of appellate issues that are exceptionally strong,” Mr. Frisch said, adding: “I’m confident about the way forward.”

    Breon Peace, the United States attorney in Brooklyn, said in a statement that by convicting Mr. Mackey the jury had rejected “his cynical attempt to use the constitutional right of free speech as a shield for his scheme to subvert the ballot box and suppress the vote.”

    Mr. Mackey posted one image showing a Black woman and a sign reading “African Americans for Hillary” a day after writing on Twitter about limiting turnout among Black voters. Another image, in Spanish, showed a woman looking at her phone.

    Both images, posted a week before the election, were accompanied by the hashtag #ImWithHer, which was used by the Clinton campaign. Both also included logos that looked like the campaign’s, and fine print saying they had been paid for by “Hillary for President.”

    Prosecutors said about 5,000 people sent texts to the number shown in the deceptive images.

    Mr. Mackey, 33, who grew up in Vermont, attended Middlebury College and once lived in Manhattan, testified in his own defense. He said he was in dozens of private online groups before the election but did not pay close attention to everything discussed in them.

    While testifying, Mr. Mackey said he found the vote-by-text images on an online message board and posted them with little thought. He added that he had not meant to trick anyone but wanted to “see what happens.”

    “Maybe even the media will pick it up, the Clinton campaign,” he testified, adding that the images might “rile them up, get under their skin, get them off their message that they wanted to push.”

    Mr. Mackey was seen, according to evidence, as someone who could marshal followers and move the national conversation. He used the pseudonym “Ricky Vaughn,” the name of a character in the movie “Major League.”

    In early 2016, the Ricky Vaughn account was included on a list of the top 150 election influencers compiled by a research group with the M.I.T. Media Lab, ranking ahead of NBC News, Drudge Report and Glenn Beck.

    As Mr. Mackey’s trial approached, people sympathetic to him claimed that he was being prosecuted unfairly. The defense sought to have his case dismissed, saying that the voting memes were protected by the First Amendment. But a judge denied that request, writing that the case was about conspiracy and injury, not speech.

    The star prosecution witness, a Twitter user known as Microchip, helped direct online attacks against Mrs. Clinton in 2016, but began cooperating with the F.B.I. two years later. He testified that the private groups that he and Mr. Mackey took part in had the goal of “destroying Hillary Clinton.”

    Communications from the groups provided a glimpse into a shadowy world of crass motives and dirty tricks in which anti-Clinton activists developed propaganda, spread falsehoods and exulted in their impact.

    Evidence showed that participants had shared memes about voting by social media, tried to figure out what font a Clinton ad used and circulated hashtags. One, #DraftOurDaughters, was posted on Twitter along with images suggesting that Mrs. Clinton would start wars and conscript women to fight them. Mr. Mackey advanced another, #NeverVote, that he wrote was meant to be spread in “Black social spaces.”

    During the trial, Mr. Frisch described his client’s posts as part of a rambunctious online discourse.

    “Speech regulates itself,” Mr. Frisch told jurors in his summation. “These memes were a bad idea and the marketplace of ideas killed them almost immediately.”

    Prosecutors countered that the false-voting images were part of an orchestrated effort to affect the election through deceit, adding that criminal activity cannot hide behind the First Amendment.

    “You can’t use speech to trick people out of their sacred right to vote,” one prosecutor, William J. Gullotta, told jurors.

    Prosecutors drew upon statements by Mr. Mackey, who wrote that the 2016 election was on “a knife’s edge,” to argue that he had tried to help Mr. Trump by suppressing votes.

    “Trump should write off the Black vote,” Mr. Mackey wrote at one point. “And just focus on depressing their turnout.”

  • A Campaign Aide Didn’t Write That Email. A.I. Did.

    The Democratic Party has begun testing the use of artificial intelligence to write first drafts of some fund-raising messages, appeals that often perform better than those written entirely by human beings.

    Fake A.I. images of Donald J. Trump getting arrested in New York spread faster than they could be fact-checked last week.

    And voice-cloning tools are producing vividly lifelike audio of President Biden — and many others — saying things they did not actually say.

    Artificial intelligence isn’t just coming soon to the 2024 campaign trail. It’s already here.

    The swift advance of A.I. promises to be as disruptive to the political sphere as to broader society. Now any amateur with a laptop can manufacture the kinds of convincing sounds and images that were once the domain of the most sophisticated digital players. This democratization of disinformation is blurring the boundaries between fact and fake at a moment when the acceptance of universal truths — that Mr. Biden beat Mr. Trump in 2020, for example — is already being strained.

    And as synthetic media gets more believable, the question becomes: What happens when people can no longer trust their own eyes and ears?

    Inside campaigns, artificial intelligence is expected to soon help perform mundane tasks that previously required fleets of interns. Republican and Democratic engineers alike are racing to develop tools to harness A.I. to make advertising more efficient, to engage in predictive analysis of public behavior, to write more and more personalized copy and to discover new patterns in mountains of voter data. The technology is evolving so fast that most predict a profound impact, even if specific ways in which it will upend the political system are more speculation than science.

    “It’s an iPhone moment — that’s the only corollary that everybody will appreciate,” said Dan Woods, the chief technology officer on Mr. Biden’s 2020 campaign. “It’s going to take pressure testing to figure out whether it’s good or bad — and it’s probably both.”

    OpenAI, whose ChatGPT chatbot ushered in the generative-text gold rush, has already released a more advanced model. Google has announced plans to expand A.I. offerings inside popular apps like Google Docs and Gmail, and is rolling out its own chatbot. Microsoft has raced a version to market, too. A smaller firm, ElevenLabs, has developed a text-to-audio tool that can mimic anyone’s voice in minutes. Midjourney, a popular A.I. art generator, can conjure hyper-realistic images with a few lines of text that are compelling enough to win art contests.

    “A.I. is about to make a significant change in the 2024 election because of machine learning’s predictive ability,” said Brad Parscale, Mr. Trump’s first 2020 campaign manager, who has since founded a digital firm that advertises some A.I. capabilities.

    Disinformation and “deepfakes” are the dominant fear. While forgeries are nothing new to politics — a photoshopped image of John Kerry and Jane Fonda was widely shared in 2004 — the ability to produce and share them has accelerated, with viral A.I. images of Mr. Trump being restrained by the police only the latest example. A fake image of Pope Francis in a white puffy coat went viral in recent days, as well.

    Many are particularly worried about local races, which receive far less scrutiny. Ahead of the recent primary in the Chicago mayoral race, a fake video briefly sprung up on a Twitter account called “Chicago Lakefront News” that impersonated one candidate, Paul Vallas.

    “Unfortunately, I think people are going to figure out how to use this for evil faster than for improving civic life,” said Joe Rospars, who was chief strategist on Senator Elizabeth Warren’s 2020 campaign and is now the chief executive of a digital consultancy.

    Those who work at the intersection of politics and technology return repeatedly to the same historical hypothetical: If the infamous “Access Hollywood” tape broke today — the one in which Mr. Trump is heard bragging about assaulting women and getting away with it — would Mr. Trump acknowledge it was him, as he did in 2016?

    The nearly universal answer was no.

    “I think about that example all the time,” said Matt Hodges, who was the engineering director on Mr. Biden’s 2020 campaign and is now executive director of Zinc Labs, which invests in Democratic technology. Republicans, he said, “may not use ‘fake news’ anymore. It may be ‘Woke A.I.’”

    For now, the frontline function of A.I. on campaigns is expected to be writing first drafts of the unending email and text cash solicitations.

    “Given the amount of rote, asinine verbiage that gets produced in politics, people will put it to work,” said Luke Thompson, a Republican political strategist.

    As an experiment, The New York Times asked ChatGPT to produce a fund-raising email for Mr. Trump. The app initially said, “I cannot take political sides or promote any political agenda.” But then it immediately provided a template of a potential Trump-like email.

    The chatbot denied a request to make the message “angrier” but complied when asked to “give it more edge,” to better reflect the often apocalyptic tone of Mr. Trump’s pleas. “We need your help to send a message to the radical left that we will not back down,” the revised A.I. message said. “Donate now and help us make America great again.”

    Among the prominent groups that have experimented with this tool is the Democratic National Committee, according to three people briefed on the efforts. In tests, the A.I.-generated content the D.N.C. has used has, as often as not, performed as well or better than copy drafted entirely by humans, in terms of generating engagement and donations.

    Party officials still make edits to the A.I. drafts, the people familiar with the efforts said, and no A.I. messages have yet been written under the name of Mr. Biden or any other person, two people said. The D.N.C. declined to comment.

    Higher Ground Labs, a small venture capital firm that invests in political technology for progressives, is currently working on a project, called Quiller, to more systematically use A.I. to write, send and test the effectiveness of fund-raising emails — all at once.

    “A.I. has mostly been marketing gobbledygook for the last three cycles,” said Betsy Hoover, a founding partner at Higher Ground Labs who was the director of digital organizing for President Barack Obama’s 2012 campaign. “We are at a moment now where there are things people can do that are actually helpful.”

    Political operatives, several of whom were granted anonymity to discuss potentially unsavory uses of artificial intelligence they are concerned about or planning to deploy, raised a raft of possibilities.

    Some feared bad actors could leverage A.I. chatbots to distract or waste a campaign’s precious staff time by pretending to be potential voters.
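    The Times experiment described above is, mechanically, a short conversation with a text-generation API: an initial prompt, a drafted reply, and a follow-up request to revise the tone. A minimal, hypothetical sketch of that draft-then-revise loop, assuming the OpenAI Python client and using placeholder prompts and a placeholder model name rather than anything from the Times experiment, might look like this:

```python
# A hypothetical sketch, not the Times's or any campaign's actual tooling.
# Assumes the OpenAI Python client (openai>=1.0) with an API key set in the
# OPENAI_API_KEY environment variable; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": "Draft a short fund-raising email for a political campaign."}
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft = first.choices[0].message.content

# A revision request ("give it more edge") is just another turn appended
# to the same conversation history.
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Give it more edge."},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```

    As the D.N.C. testing described above suggests, any such draft would still be reviewed and edited by staff before being sent.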
    Other operatives floated producing deepfakes of their own candidate to generate personalized videos — thanking supporters for their donations, for example. In India, one candidate in 2020 produced a deepfake to disseminate a video of himself speaking in different languages; the technology is far superior now.

    Mr. Trump himself shared an A.I. image in recent days that appeared to show him kneeling in prayer. He posted it on Truth Social, his social media site, with no explanation.

    One strategist predicted that the next generation of dirty tricks could be direct-to-voter misinformation that skips social media sites entirely. What if, this strategist said, an A.I. audio recording of a candidate was sent straight to the voice mail of voters on the eve of an election?

    Synthetic audio and video are already swirling online, much of it as parody.

    On TikTok, there is an entire genre of videos featuring Mr. Biden, Mr. Obama and Mr. Trump profanely bantering, with the A.I.-generated audio overlaid as commentary during imaginary online video gaming sessions.

    On “The Late Show,” Stephen Colbert recently used A.I. audio to have the Fox News host Tucker Carlson “read” aloud his text messages slamming Mr. Trump. Mr. Colbert labeled the audio as A.I. and the image on-screen showed a blend of Mr. Carlson’s face and a Terminator cyborg for emphasis.

    The right-wing provocateur Jack Posobiec pushed out a “deepfake” video last month of Mr. Biden announcing a national draft because of the conflict in Ukraine. It was quickly seen by millions.

    “The videos we’ve seen in the last few weeks are really the canary in the coal mine,” said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics. “We measure advances now not in years but in months, and there are many months before the election.”

    Some A.I. tools were deployed in 2020. The Biden campaign created a program, code-named Couch Potato, that linked facial recognition, voice-to-text and other tools to automate the transcription of live events, including debates. It replaced the work of a host of interns and aides, and was immediately searchable through an internal portal.

    The technology has improved so quickly, Mr. Woods said, that off-the-shelf tools are “1,000 times better” than what had to be built from scratch four years ago.

    One looming question is what campaigns can and cannot do with OpenAI’s powerful tools. One list of prohibited uses last fall lumped together “political campaigns, adult content, spam, hateful content.”

    Kim Malfacini, who helped create OpenAI’s rules and is on the company’s trust and safety team, said in an interview that “political campaigns can use our tools for campaigning purposes. But it’s the scaled use that we are trying to disallow here.” OpenAI revised its usage rules after being contacted by The Times, specifying now that “generating high volumes of campaign materials” is prohibited.

    Tommy Vietor, a former spokesman for Mr. Obama, dabbled with the A.I. tool from ElevenLabs to create a faux recording of Mr. Biden calling into the popular “Pod Save America” podcast that Mr. Vietor co-hosts. He paid a few dollars and uploaded real audio of Mr. Biden, and out came an audio likeness.

    “The accuracy was just uncanny,” Mr. Vietor said in an interview.

    The show labeled it clearly as A.I. But Mr. Vietor could not help noticing that some online commenters nonetheless seemed confused.

    “I started playing with the software thinking this is so much fun, this will be a great vehicle for jokes,” he said, “and finished thinking, ‘Oh God, this is going to be a big problem.’”

  • Utah bans under-18s from using social media unless parents consent

    The governor of Utah, Spencer Cox, has signed sweeping social media legislation requiring explicit parental permissions for anyone under 18 to use platforms such as TikTok, Instagram and Facebook. He also signed a bill prohibiting social media companies from employing techniques that could cause minors to develop an “addiction” to the platforms.

    The former is the first state law in the US prohibiting social media services from allowing access to minors without parental consent. The state’s Republican-controlled legislature passed both bills earlier this month, despite opposition from civil liberties groups.

    “We’re no longer willing to let social media companies continue to harm the mental health of our youth,” Cox, a Republican, said in a message on Twitter.

    The impact of social media on children has become a topic of growing debate among lawmakers at the state and federal levels. On the same day Cox signed the bills in Utah, TikTok’s CEO testified before Congress to address concerns about national security, data privacy and teen users’ mental health.

    The new law prohibiting minors from accessing social media without their parents’ consent would also allow parents or guardians to access all of their children’s posts. The platforms will be required to block users younger than 18 from accessing accounts between 10.30pm and 6.30am unless parents modify the settings.

    The laws also prohibit social media companies from advertising to minors, collecting information about them or targeting content to them.

    What’s not clear from the Utah laws and others is how the states plan to enforce the new regulations. Companies are already prohibited from collecting data on children younger than 13 without parental consent under the federal Children’s Online Privacy Protection Act. For this reason, social media companies already ban kids under 13 from signing up to their platforms – but children can easily get around it, both with and without their parents’ permission.

    Civil liberties groups have raised concerns that such provisions will block marginalized youth including LGBTQ+ teens from accessing online support networks and information.

    Tech groups have also opposed the laws. “Utah will soon require online services to collect sensitive information about teens and families, not only to verify ages, but to verify parental relationships, like government-issued IDs and birth certificates, putting their private data at risk of breach,” said Nicole Saad Bembridge, an associate director at NetChoice, a tech lobby group. “These laws also infringe on Utahans’ first amendment rights to share and access speech online – an effort already rejected by the supreme court in 1997.”

    The law will take effect next March. Michael McKell, the Republican state senator who sponsored the bill, told the New York Times that social media is “a contributing factor” to poor teen mental health, and that the laws were intended to address that issue.

    Several states have sought to enact guardrails for young social media users. Lawmakers in Connecticut and Ohio have put forward measures to require parental permissions for users younger than 16. Lawmakers in Arkansas and Texas have also introduced bills to restrict social media use among minors under 18, with the latter aiming to ban social media accounts for minors entirely.

    California enacted a measure requiring social media networks to enact the highest privacy settings for users younger than 18 as a default.

  • Key takeaways from TikTok hearing in Congress – and the uncertain road ahead

    The first appearance in Congress for TikTok’s CEO Shou Zi Chew stretched more than five hours, with contentious questioning targeting the app’s relationship with China and protections for its youngest users.

    Chew’s appearance comes at a pivotal time for TikTok, which is facing bipartisan fire after experiencing a meteoric rise in popularity in recent years. The company is owned by Chinese firm ByteDance, raising concerns about China’s influence over the app – criticisms Chew repeatedly tried to resist throughout the hearing.

    “Let me state this unequivocally: ByteDance is not an agent of China or any other country,” he said in prepared testimony.

    He defended TikTok’s privacy practices, stating they are in line with those of other social media platforms, adding that in many cases the app collects less data than its peers. “There are more than 150 million Americans who love our platform, and we know we have a responsibility to protect them,” Chew said.

    Here are some of the other key criticisms Chew faced at Thursday’s landmark hearing, and what could lie ahead.

    TikTok’s relationship to China under fire

    Many members of the committee focused on ByteDance and its executives, who lawmakers say have ties to the Chinese Communist party.

    The committee members asked how frequently Chew was in contact with them, and questioned whether the company’s proposed solution, called Project Texas, would offer sufficient protection against Chinese laws that require companies to make user data accessible to the government.

    At one point, Tony Cárdenas, a Democrat from California, asked Chew outright if TikTok is a Chinese company. Chew responded that TikTok is global in nature, not available in mainland China, and headquartered in Singapore and Los Angeles.

    Neal Dunn, a Republican from Florida, asked with similar bluntness whether ByteDance has “spied on American citizens” – a question that came amid reports the company accessed journalists’ information in an attempt to identify which employees were leaking information. Chew responded that “spying is not the right way to describe it”.

    Concerns about the viability of ‘Project Texas’

    In an effort to deflect concerns about Chinese influence, TikTok has pledged to relocate all US user data to domestic servers through an effort titled Project Texas, a plan that would also allow US tech firm Oracle to scrutinize TikTok’s source code and act as a third-party monitor.

    The company has promised to complete the effort by the end of the year, but some lawmakers questioned whether that is possible, with hundreds of millions of lines of source code requiring review in a relatively short amount of time.

    “I am concerned that what you’re proposing with Project Texas just doesn’t have the technical capability of providing us the assurances that we need,” the California Republican Jay Obernolte, a congressman and software engineer, said.

    Youth safety and mental health in the spotlight

    Another frequent focus was the safety of TikTok’s young users, considering the app has exploded in popularity with this age group in recent years. A majority of teens in the US say they use TikTok – with 67% of people aged 13 to 17 saying they have used the app and 16% of that age group saying they use it “almost constantly”, according to the Pew Research Center.

    Lawmakers cited reports that drug-related content has spread on the app, allowing teens to purchase dangerous substances easily online. Chew said such content violates TikTok policy and is removed when identified.

    “We take this very seriously,” Chew said. “This is an industry-wide challenge, and we’re investing as much as we can. We don’t think it represents the majority of the users’ experience on TikTok, but it does happen.”

    Others cited self-harm and eating disorder content, which have been spreading on the platform. TikTok is also facing lawsuits over deadly “challenges” that have gone viral on the app. Mental health concerns were underscored at the hearing by the appearance of Dean and Michelle Nasca, the parents of a teen who died by suicide after allegedly being served unsolicited self-harm content on TikTok.

    “We need you to do your part,” said congresswoman Kim Schrier, who is a pediatrician. “It could save this generation.”

    Uncertainty lingers over a possible ban

    The federal government has already barred TikTok on government devices, and the Biden administration has threatened a national ban. Thursday’s hearing left the future of the app in the US uncertain, as members of the committee appeared unwavering in their conviction that TikTok was a tool that could be exploited by the Chinese Communist party. Their conviction was bolstered by a report in the Wall Street Journal, released just hours before the hearing, indicating the Chinese government would not approve a sale of TikTok.

    Lawmakers outside of the committee are also unconvinced. US senators Mark Warner and John Thune said in a statement that all Chinese companies “are ultimately required to do the bidding of Chinese intelligence services, should they be called upon to do so” and that nothing Chew said in his testimony assuaged those concerns. Colorado senator Michael Bennet also reiterated calls for an all-out ban of TikTok.

    But the idea of a national ban still faces huge hurdles, both legally and in the court of public opinion. For one, previous attempts to ban TikTok under the Trump administration were blocked in court due in part to free speech concerns. TikTok also remains one of the fastest growing and most popular apps in the US and millions of its users are unlikely to want to give it up.

    A coalition of civil liberties, privacy and security groups including Fight for the Future, the Center for Democracy and Technology, and the American Civil Liberties Union have written a letter opposing a ban, arguing that it would violate constitutional rights to freedom of expression. “A nationwide ban on TikTok would have serious ramifications for free expression in the digital sphere, infringing on Americans’ first amendment rights and setting a potent and worrying precedent in a time of increased censorship of internet users around the world,” the letter reads.

    Where the coalition and many members of the House committee agree is on the pressing need for federal data privacy regulation that protects consumer information and reins in all big tech platforms, including TikTok. The American Data Privacy Act – a bipartisan bill working its way through Washington – is one effort under way to address those concerns.

  • TikTok CEO grilled for over five hours on China, drugs and teen mental health

    The chief executive of TikTok, Shou Zi Chew, was forced to defend his company’s relationship with China, as well as the protections for its youngest users, at a testy congressional hearing on Thursday that came amid a bipartisan push to ban the app entirely in the US over national security concerns.

    The hearing got off to an intense start, with members of the committee hammering on Chew’s connection to executives at TikTok’s parent company, ByteDance, who lawmakers say have ties to the Chinese Communist party. The committee members asked how frequently Chew was in contact with them, and questioned whether the company’s proposed solution, called Project Texas, would offer sufficient protection against Chinese laws that require companies to make user data accessible to the government.

    Lawmakers have long held concerns over China’s control over the app, concerns Chew repeatedly tried to resist throughout the hearing. “Let me state this unequivocally: ByteDance is not an agent of China or any other country,” he said in prepared testimony.

    But Chew’s claims of independence were undermined by a Wall Street Journal story published just hours before the hearing that said China would strongly oppose any forced sale of the company. Responding for the first time to Joe Biden’s threat of a national ban unless ByteDance sells its shares, the Chinese commerce ministry said such a move would involve exporting technology from China and thus would have to be approved by the Chinese government.

    Lawmakers also questioned Chew over the platform’s impact on mental health, particularly of its young users. The Republican congressman Gus Bilirakis shared the story of Chase Nasca, a 16-year-old boy who died by suicide a year ago by stepping in front of a train. Nasca’s parents, who have sued ByteDance, claiming Chase was “targeted” with unsolicited suicide-related content, appeared at the hearing and grew emotional as Bilirakis told their son’s story.

    “I want to thank his parents for being here today, and allowing us to show this,” Bilirakis said. “Mr Chew, your company destroyed their lives.”

    Driving home concerns about young users, Congresswoman Nanette Barragán asked Chew about reports that he does not let his own children use the app.

    “At what age do you think it would be appropriate for a young person to get on TikTok?” she said.

    Chew confirmed his own children were not on TikTok but said that was because in Singapore, where they live, there is not a version of the platform for users under the age of 13. In the US there is a version of TikTok in which the content is curated for users under 13.

    “Our approach is to give differentiated experiences for different age groups, and let the parents have conversations with their children to decide what’s best for their family,” he said.

    The appearance of Chew before the House energy and commerce committee, the first ever by a TikTok chief executive, represents a major test for the 40-year-old, who has remained largely out of the spotlight.

    Throughout the hearing, Chew stressed TikTok’s distance from the Chinese government, kicking off his testimony with an emphasis on his own Singaporean heritage. Chew talked about Project Texas – an effort to move all US data to domestic servers – and said the company was deleting all US user data that is backed up to servers outside the US by the end of the year.

    Some legislators expressed concern that Project Texas was too large an undertaking, and would not tackle concerns about US data privacy soon enough. “I am concerned that what you’re proposing with Project Texas just doesn’t have the technical capability of providing us the assurances that we need,” the California Republican Jay Obernolte, a software engineer, said.

    At one point, Tony Cárdenas, a Democrat from California, asked Chew outright if TikTok is a Chinese company. Chew responded that TikTok is global in nature, not available in mainland China, and headquartered in Singapore and Los Angeles.

    Neal Dunn, a Republican from Florida, asked with similar bluntness whether ByteDance has “spied on American citizens” – a question that came amid reports the company accessed journalists’ information in an attempt to identify which employees were leaking information. Chew responded that “spying is not the right way to describe it”.

    The hearing comes three years after TikTok was formally targeted by the Trump administration with an executive order prohibiting US companies from doing business with ByteDance. Biden revoked that order in June 2021, under the stipulation that the US committee on foreign investment conduct a review of the company. When that review stalled, Biden demanded TikTok sell its Chinese-owned shares or face a ban in the US.

    The bipartisan nature of the backlash was remarked upon several times during the hearing, with Cárdenas pointing out that Chew “has been one of the few people to unite this committee”.

    Chew’s testimony, some lawmakers said, was reminiscent of Mark Zuckerberg’s appearance in an April 2018 hearing to answer for his own platform’s data-privacy issues – answers many lawmakers were unsatisfied with. Cárdenas said: “We are frustrated with TikTok … and yes, you keep mentioning that there are industry issues that not only TikTok faces but others. You remind me a lot of [Mark] Zuckerberg … when he came here, I said he reminds me of Fred Astaire: a good dancer with words. And you are doing the same today. A lot of your answers are a bit nebulous, they’re not yes or no.”

    Chew, a former Goldman Sachs banker who has helmed the company since March 2021, warned users in a video posted to TikTok earlier in the week that the company was at a “pivotal moment”.

    “Some politicians have started talking about banning TikTok,” he said, adding that the app now has more than 150 million active monthly US users. “That’s almost half the US coming to TikTok.”

    TikTok has battled legislative headwinds since its meteoric rise began in 2018. Today, a majority of teens in the US say they use TikTok – with 67% of people ages 13 to 17 saying they have used the app and 16% of that age group saying they use it “almost constantly”, according to the Pew Research Center.

    This has raised a number of concerns about the app’s impact on young users’ safety, with self-harm and eating disorder-related content spreading on the platform. TikTok is also facing lawsuits over deadly “challenges” that have gone viral on the app.

    TikTok has introduced features in response to such criticisms, including automatic time limits for users under 18.

    Some tech critics have said that while TikTok’s data collection does raise concerns, its practices are not much different from those of other big tech firms.

    “Holding TikTok and China accountable are steps in the right direction, but doing so without holding other platforms accountable is simply not enough,” said the Tech Oversight Project, a technology policy advocacy organization, in a statement.

    “Lawmakers and regulators should use this week’s hearing as an opportunity to re-engage with civil society organizations, NGOs, academics and activists to squash all of big tech’s harmful practices.”

  • Online Troll Named Microchip Tells of Sowing ‘Chaos’ in 2016 Election

    The defendant in the unusual trial, Douglass Mackey, and the pseudonymous witness collaborated to beat Hillary Clinton. They met for the first time in a Brooklyn courtroom.

    The two social media influencers teamed up online years ago.

    Both had large right-wing followings and pseudonyms to hide their real identities. One called himself Ricky Vaughn, after a fictional baseball player portrayed in a movie by Charlie Sheen. The other called himself Microchip.

    In 2016, prosecutors say, they set out to trick supporters of Hillary Clinton into thinking they could vote by text message or social media, discouraging them from the polls.

    “Ricky Vaughn,” whose real name is Douglass Mackey, was charged in 2021 with conspiring to deprive others of their right to vote, and on Wednesday the men met face to face in court for the first time.

    Mr. Mackey sat at the defense table in Federal District Court in Brooklyn wearing a sober gray suit. He watched as Microchip, clad in a royal-blue sweatsuit and black sandals, approached the witness stand, where he was sworn in under that name and began testifying against him.

    This month, a federal judge overseeing the case, Nicholas G. Garaufis, ruled that Microchip could testify without revealing his actual name after prosecutors said anonymity was needed to protect current and future investigations.

    The sight of a witness testifying under a fictional identity added one more odd element to an already unusual case that reflects both the rise of social media as a force in politics and the emergence of malicious online mischief-makers — trolls — as influential players in a presidential election. This week’s trial could help determine how much protection the First Amendment gives people who spread disinformation.

    Microchip’s testimony appeared intended to give jurors an inside view of what prosecutors describe as a conspiracy to disenfranchise voters. It also provided a firsthand account of crass, nihilistic motives behind those efforts.

    “I wanted to infect everything,” Microchip said, adding that his aim before the 2016 election had been “to cause as much chaos as possible” and diminish Mrs. Clinton’s chances of beating Donald J. Trump.

    Evidence presented by prosecutors has shown how Mr. Mackey and others, including Microchip, had private online discussions in the weeks before the election, discussing how they could move votes.

    While Mr. Mackey made clear that he wanted to help Mr. Trump become president, Microchip testified that he was driven mainly by animus for Mrs. Clinton, testifying that his aim had been to “destroy” her reputation.

    In the fervid and fluid environment surrounding the 2016 election, Mr. Mackey, whose lawyer described him as “a staunch political conservative,” and Microchip, who told BuzzFeed that he was a “staunch liberal,” became allies.

    Online exchanges and Twitter messages entered into evidence by prosecutors showed the men plotting their strategy. Mr. Mackey saw limiting Black turnout as a key to helping Mr. Trump. Prosecutors said that he posted an image showing a Black woman near a sign reading “African Americans for Hillary” and the message that people could vote by texting “Hillary” to a specific number.

    Microchip testified that Mr. Mackey was a participant in a private Twitter chat group called “War Room,” adding that he was “very well respected back then” and “a leader of sorts.”

    Prosecutors introduced records showing that Microchip and Mr. Mackey had retweeted one another dozens of times.

    Mr. Mackey’s particular talent, according to Microchip, was coming up with ideas and memes that resonated with people who felt that American society was declining and that the West was struggling.

    Microchip testified that he was self-employed as a mobile app developer. He said he had pleaded guilty to a conspiracy charge related to his circulation of memes providing misinformation about how to vote. Because of his anonymity, the details of that plea could not be confirmed. He added that he had signed a cooperation agreement with prosecutors, agreeing to testify against Mr. Mackey and to help with other cases.

    Under cross-examination, Microchip said he had begun working with the F.B.I. in 2018. He also acknowledged telling an investigator in 2021 that there was no “grand plan around stopping people from voting.”

    His time on the stand included a tutorial of sorts on how he had amassed Twitter followers and misled people who viewed his messages.

    He testified that he had built up a following with bots, and used hashtags employed by Mrs. Clinton in a process he called “hijacking” to get his messages to her followers. He aimed to seduce viewers with humor, saying, “When people are laughing, they are very easily manipulated.”

    Microchip said that he sought to discourage voting among Clinton supporters “through fear tactics,” offering conspiratorial takes on ordinary events as a way to drive paranoia and disaffection.

    One example he cited involved the emails of John Podesta, Mrs. Clinton’s campaign chairman, which were made public by WikiLeaks during the campaign.

    There was nothing particularly surprising or sinister among those emails, Microchip said, yet he posted thousands of messages about them suggesting otherwise. “My talent is to make things weird and strange, so there is controversy.”

    Asked by a prosecutor whether he believed the messages he posted, Microchip did not hesitate.

    “No,” he said. “And I didn’t care.”