More stories

  • Trump Supporter Convicted in 2016 Scheme to Suppress Votes for Clinton

    The federal prosecution of Douglass Mackey turned on the question of when free speech turns into dirty tricks.
    Months before the 2016 presidential election, people intent on swaying the outcome were communicating in private Twitter groups with names like “War Room” and “Infowars Madman.”
    The participants included obscure figures and notorious online trolls, many of whom concealed their real identities. There were fans of Donald J. Trump and avowed haters of Hillary Clinton, all working toward a Republican victory while celebrating the “meme magic” they employed to circulate lies and attacks.
    According to federal prosecutors, one man, Douglass Mackey, crossed a line from political speech to criminal conduct when he posted images to Twitter that resembled campaign ads for Mrs. Clinton and falsely stated that people could vote simply by texting “Hillary” to a certain phone number.
    On Friday, after just over four days of deliberation, a jury in Brooklyn found Mr. Mackey guilty of conspiring to deprive others of their right to vote. He is scheduled to be sentenced in August and faces a maximum of 10 years in prison.
    Mr. Mackey, wearing a gray suit, white shirt and pink tie, was stoic as the verdict was read. His lawyer, Andrew J. Frisch, suggested that his client would appeal.
    “This case presents an unusual array of appellate issues that are exceptionally strong,” Mr. Frisch said, adding: “I’m confident about the way forward.”
    Breon Peace, the United States attorney in Brooklyn, said in a statement that by convicting Mr. Mackey the jury had rejected “his cynical attempt to use the constitutional right of free speech as a shield for his scheme to subvert the ballot box and suppress the vote.”
    Mr. Mackey posted one image showing a Black woman and a sign reading “African Americans for Hillary” a day after writing on Twitter about limiting turnout among Black voters. Another image, in Spanish, showed a woman looking at her phone.
    Both images, posted a week before the election, were accompanied by the hashtag #ImWithHer, which was used by the Clinton campaign. Both also included logos that looked like the campaign’s, and fine print saying they had been paid for by “Hillary for President.”
    Prosecutors said about 5,000 people sent texts to the number shown in the deceptive images.
    Mr. Mackey, 33, who grew up in Vermont, attended Middlebury College and once lived in Manhattan, testified in his own defense. He said he was in dozens of private online groups before the election but did not pay close attention to everything discussed in them.
    While testifying, Mr. Mackey said he found the vote-by-text images on an online message board and posted them with little thought. He added that he had not meant to trick anyone but wanted to “see what happens.”
    “Maybe even the media will pick it up, the Clinton campaign,” he testified, adding that the images might “rile them up, get under their skin, get them off their message that they wanted to push.”
    Mr. Mackey was seen, according to evidence, as someone who could marshal followers and move the national conversation. He used the pseudonym “Ricky Vaughn,” the name of a character in the movie “Major League.”
    In early 2016, the Ricky Vaughn account was included on a list of the top 150 election influencers compiled by a research group with the M.I.T. Media Lab, ranking ahead of NBC News, Drudge Report and Glenn Beck.
    As Mr. Mackey’s trial approached, people sympathetic to him claimed that he was being prosecuted unfairly.
    The defense sought to have his case dismissed, saying that the voting memes were protected by the First Amendment. But a judge denied that request, writing that the case was about conspiracy and injury, not speech.
    The star prosecution witness, a Twitter user known as Microchip, helped direct online attacks against Mrs. Clinton in 2016, but began cooperating with the F.B.I. two years later. He testified that the private groups that he and Mr. Mackey took part in had the goal of “destroying Hillary Clinton.”
    Communications from the groups provided a glimpse into a shadowy world of crass motives and dirty tricks in which anti-Clinton activists developed propaganda, spread falsehoods and exulted in their impact.
    Evidence showed that participants had shared memes about voting by social media, tried to figure out what font a Clinton ad used and circulated hashtags. One, #DraftOurDaughters, was posted on Twitter along with images suggesting that Mrs. Clinton would start wars and conscript women to fight them. Mr. Mackey advanced another, #NeverVote, that he wrote was meant to be spread in “Black social spaces.”
    During the trial, Mr. Frisch described his client’s posts as part of a rambunctious online discourse.
    “Speech regulates itself,” Mr. Frisch told jurors in his summation. “These memes were a bad idea and the marketplace of ideas killed them almost immediately.”
    Prosecutors countered that the false-voting images were part of an orchestrated effort to affect the election through deceit, adding that criminal activity cannot hide behind the First Amendment.
    “You can’t use speech to trick people out of their sacred right to vote,” one prosecutor, William J. Gullotta, told jurors.
    Prosecutors drew upon statements by Mr. Mackey, who wrote that the 2016 election was on “a knife’s edge,” to argue that he had tried to help Mr. Trump by suppressing votes.
    “Trump should write off the Black vote,” Mr. Mackey wrote at one point. “And just focus on depressing their turnout.”

  • A Campaign Aide Didn’t Write That Email. A.I. Did.

    The Democratic Party has begun testing the use of artificial intelligence to write first drafts of some fund-raising messages, appeals that often perform better than those written entirely by human beings.
    Fake A.I. images of Donald J. Trump getting arrested in New York spread faster than they could be fact-checked last week.
    And voice-cloning tools are producing vividly lifelike audio of President Biden — and many others — saying things they did not actually say.
    Artificial intelligence isn’t just coming soon to the 2024 campaign trail. It’s already here.
    The swift advance of A.I. promises to be as disruptive to the political sphere as to broader society. Now any amateur with a laptop can manufacture the kinds of convincing sounds and images that were once the domain of the most sophisticated digital players. This democratization of disinformation is blurring the boundaries between fact and fake at a moment when the acceptance of universal truths — that Mr. Biden beat Mr. Trump in 2020, for example — is already being strained.
    And as synthetic media gets more believable, the question becomes: What happens when people can no longer trust their own eyes and ears?
    Inside campaigns, artificial intelligence is expected to soon help perform mundane tasks that previously required fleets of interns. Republican and Democratic engineers alike are racing to develop tools to harness A.I. to make advertising more efficient, to engage in predictive analysis of public behavior, to write more and more personalized copy and to discover new patterns in mountains of voter data. The technology is evolving so fast that most predict a profound impact, even if specific ways in which it will upend the political system are more speculation than science.
    “It’s an iPhone moment — that’s the only corollary that everybody will appreciate,” said Dan Woods, the chief technology officer on Mr. Biden’s 2020 campaign. “It’s going to take pressure testing to figure out whether it’s good or bad — and it’s probably both.”
    OpenAI, whose ChatGPT chatbot ushered in the generative-text gold rush, has already released a more advanced model. Google has announced plans to expand A.I. offerings inside popular apps like Google Docs and Gmail, and is rolling out its own chatbot. Microsoft has raced a version to market, too. A smaller firm, ElevenLabs, has developed a text-to-audio tool that can mimic anyone’s voice in minutes. Midjourney, a popular A.I. art generator, can conjure hyper-realistic images with a few lines of text that are compelling enough to win art contests.
    “A.I. is about to make a significant change in the 2024 election because of machine learning’s predictive ability,” said Brad Parscale, Mr. Trump’s first 2020 campaign manager, who has since founded a digital firm that advertises some A.I. capabilities.
    Disinformation and “deepfakes” are the dominant fear. While forgeries are nothing new to politics — a photoshopped image of John Kerry and Jane Fonda was widely shared in 2004 — the ability to produce and share them has accelerated, with viral A.I. images of Mr. Trump being restrained by the police only the latest example. A fake image of Pope Francis in a white puffy coat went viral in recent days, as well.
    Many are particularly worried about local races, which receive far less scrutiny.
    Ahead of the recent primary in the Chicago mayoral race, a fake video briefly sprang up on a Twitter account called “Chicago Lakefront News” that impersonated one candidate, Paul Vallas.
    “Unfortunately, I think people are going to figure out how to use this for evil faster than for improving civic life,” said Joe Rospars, who was chief strategist on Senator Elizabeth Warren’s 2020 campaign and is now the chief executive of a digital consultancy.
    Those who work at the intersection of politics and technology return repeatedly to the same historical hypothetical: If the infamous “Access Hollywood” tape broke today — the one in which Mr. Trump is heard bragging about assaulting women and getting away with it — would Mr. Trump acknowledge it was him, as he did in 2016?
    The nearly universal answer was no.
    “I think about that example all the time,” said Matt Hodges, who was the engineering director on Mr. Biden’s 2020 campaign and is now executive director of Zinc Labs, which invests in Democratic technology. Republicans, he said, “may not use ‘fake news’ anymore. It may be ‘Woke A.I.’”
    For now, the frontline function of A.I. on campaigns is expected to be writing first drafts of the unending email and text cash solicitations.
    “Given the amount of rote, asinine verbiage that gets produced in politics, people will put it to work,” said Luke Thompson, a Republican political strategist.
    As an experiment, The New York Times asked ChatGPT to produce a fund-raising email for Mr. Trump. The app initially said, “I cannot take political sides or promote any political agenda.” But then it immediately provided a template of a potential Trump-like email.
    The chatbot denied a request to make the message “angrier” but complied when asked to “give it more edge,” to better reflect the often apocalyptic tone of Mr. Trump’s pleas. “We need your help to send a message to the radical left that we will not back down,” the revised A.I. message said. “Donate now and help us make America great again.”
    Among the prominent groups that have experimented with this tool is the Democratic National Committee, according to three people briefed on the efforts. In tests, the A.I.-generated content the D.N.C. has used has, as often as not, performed as well as or better than copy drafted entirely by humans, in terms of generating engagement and donations.
    Party officials still make edits to the A.I. drafts, the people familiar with the efforts said, and no A.I. messages have yet been written under the name of Mr. Biden or any other person, two people said. The D.N.C. declined to comment.
    Higher Ground Labs, a small venture capital firm that invests in political technology for progressives, is currently working on a project, called Quiller, to more systematically use A.I. to write, send and test the effectiveness of fund-raising emails — all at once.
    “A.I. has mostly been marketing gobbledygook for the last three cycles,” said Betsy Hoover, a founding partner at Higher Ground Labs who was the director of digital organizing for President Barack Obama’s 2012 campaign. “We are at a moment now where there are things people can do that are actually helpful.”
    Political operatives, several of whom were granted anonymity to discuss potentially unsavory uses of artificial intelligence they are concerned about or planning to deploy, raised a raft of possibilities.
    Some feared bad actors could leverage A.I. chatbots to distract or waste a campaign’s precious staff time by pretending to be potential voters.
    Others floated producing deepfakes of their own candidate to generate personalized videos — thanking supporters for their donations, for example. In India, one candidate in 2020 produced a deepfake to disseminate a video of himself speaking in different languages; the technology is far superior now.
    Mr. Trump himself shared an A.I. image in recent days that appeared to show him kneeling in prayer. He posted it on Truth Social, his social media site, with no explanation.
    One strategist predicted that the next generation of dirty tricks could be direct-to-voter misinformation that skips social media sites entirely. What if, this strategist said, an A.I. audio recording of a candidate was sent straight to the voice mail of voters on the eve of an election?
    Synthetic audio and video are already swirling online, much of it as parody.
    On TikTok, there is an entire genre of videos featuring Mr. Biden, Mr. Obama and Mr. Trump profanely bantering, with the A.I.-generated audio overlaid as commentary during imaginary online video gaming sessions.
    On “The Late Show,” Stephen Colbert recently used A.I. audio to have the Fox News host Tucker Carlson “read” aloud his text messages slamming Mr. Trump. Mr. Colbert labeled the audio as A.I. and the image on-screen showed a blend of Mr. Carlson’s face and a Terminator cyborg for emphasis.
    The right-wing provocateur Jack Posobiec pushed out a “deepfake” video last month of Mr. Biden announcing a national draft because of the conflict in Ukraine. It was quickly seen by millions.
    “The videos we’ve seen in the last few weeks are really the canary in the coal mine,” said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics. “We measure advances now not in years but in months, and there are many months before the election.”
    Some A.I. tools were deployed in 2020. The Biden campaign created a program, code-named Couch Potato, that linked facial recognition, voice-to-text and other tools to automate the transcription of live events, including debates. It replaced the work of a host of interns and aides, and was immediately searchable through an internal portal.
    The technology has improved so quickly, Mr. Woods said, that off-the-shelf tools are “1,000 times better” than what had to be built from scratch four years ago.
    One looming question is what campaigns can and cannot do with OpenAI’s powerful tools. One list of prohibited uses last fall lumped together “political campaigns, adult content, spam, hateful content.”
    Kim Malfacini, who helped create OpenAI’s rules and is on the company’s trust and safety team, said in an interview that “political campaigns can use our tools for campaigning purposes. But it’s the scaled use that we are trying to disallow here.” OpenAI revised its usage rules after being contacted by The Times, specifying now that “generating high volumes of campaign materials” is prohibited.
    Tommy Vietor, a former spokesman for Mr. Obama, dabbled with the A.I. tool from ElevenLabs to create a faux recording of Mr. Biden calling into the popular “Pod Save America” podcast that Mr. Vietor co-hosts. He paid a few dollars and uploaded real audio of Mr. Biden, and out came an audio likeness.
    “The accuracy was just uncanny,” Mr. Vietor said in an interview.
    The show labeled it clearly as A.I. But Mr. Vietor could not help noticing that some online commenters nonetheless seemed confused.
    “I started playing with the software thinking this is so much fun, this will be a great vehicle for jokes,” he said, “and finished thinking, ‘Oh God, this is going to be a big problem.’”
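    The Times' experiment described above was run through the ChatGPT interface, but the same turn-by-turn pattern, an initial drafting request followed by a revision request, can be scripted against OpenAI's public API. The sketch below is a hypothetical illustration only: the model name, prompts and output handling are placeholders of this editor's choosing, not anything used in the reporting.

```python
# Illustrative sketch: a scripted two-turn exchange of the kind described above,
# using OpenAI's Python SDK. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

messages = [
    {"role": "user",
     "content": "Draft a short fund-raising email for a presidential campaign."},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
draft = first.choices[0].message.content

# Follow-up turn: keep the draft in the conversation and ask for a revision,
# mirroring the reported request to "give it more edge."
messages += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": "Give it more edge."},
]
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)
```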

  • Utah bans under-18s from using social media unless parents consent

    The governor of Utah, Spencer Cox, has signed sweeping social media legislation requiring explicit parental permission for anyone under 18 to use platforms such as TikTok, Instagram and Facebook. He also signed a bill prohibiting social media companies from employing techniques that could cause minors to develop an “addiction” to the platforms.
    The former is the first state law in the US prohibiting social media services from allowing access to minors without parental consent. The state’s Republican-controlled legislature passed both bills earlier this month, despite opposition from civil liberties groups.
    “We’re no longer willing to let social media companies continue to harm the mental health of our youth,” Cox, a Republican, said in a message on Twitter.
    The impact of social media on children has become a topic of growing debate among lawmakers at the state and federal levels. On the same day Cox signed the bills in Utah, TikTok’s CEO testified before Congress to address concerns about national security, data privacy and teen users’ mental health.
    The new law prohibiting minors from accessing social media without their parents’ consent would also allow parents or guardians to access all of their children’s posts. The platforms will be required to block users younger than 18 from accessing accounts between 10.30pm and 6.30am unless parents modify the settings.
    The laws also prohibit social media companies from advertising to minors, collecting information about them or targeting content to them.
    What’s not clear from the Utah laws and others is how the states plan to enforce the new regulations. Companies are already prohibited from collecting data on children younger than 13 without parental consent under the federal Children’s Online Privacy Protection Act. For this reason, social media companies already ban kids under 13 from signing up to their platforms – but children can easily get around it, both with and without their parents’ permission.
    Civil liberties groups have raised concerns that such provisions will block marginalized youth, including LGBTQ+ teens, from accessing online support networks and information.
    Tech groups have also opposed the laws. “Utah will soon require online services to collect sensitive information about teens and families, not only to verify ages, but to verify parental relationships, like government-issued IDs and birth certificates, putting their private data at risk of breach,” said Nicole Saad Bembridge, an associate director at NetChoice, a tech lobby group. “These laws also infringe on Utahans’ first amendment rights to share and access speech online – an effort already rejected by the supreme court in 1997.”
    The law will take effect next March. Michael McKell, the Republican state senator who sponsored the bill, told the New York Times that social media is “a contributing factor” to poor teen mental health, and that the laws were intended to address that issue.
    Several states have sought to enact guardrails for young social media users. Lawmakers in Connecticut and Ohio have put forward measures to require parental permissions for users younger than 16. Lawmakers in Arkansas and Texas have also introduced bills to restrict social media use among minors under 18, with the latter aiming to ban social media accounts for minors entirely.
    California enacted a measure requiring social media networks to enact the highest privacy settings for users younger than 18 as a default.

  • Key takeaways from TikTok hearing in Congress – and the uncertain road ahead

    The first appearance in Congress for TikTok’s CEO Shou Zi Chew stretched more than five hours, with contentious questioning targeting the app’s relationship with China and protections for its youngest users.
    Chew’s appearance comes at a pivotal time for TikTok, which is facing bipartisan fire after experiencing a meteoric rise in popularity in recent years. The company is owned by Chinese firm ByteDance, raising concerns about China’s influence over the app – criticisms Chew repeatedly tried to resist throughout the hearing.
    “Let me state this unequivocally: ByteDance is not an agent of China or any other country,” he said in prepared testimony.
    He defended TikTok’s privacy practices, stating they are in line with those of other social media platforms, adding that in many cases the app collects less data than its peers. “There are more than 150 million Americans who love our platform, and we know we have a responsibility to protect them,” Chew said.
    Here are some of the other key criticisms Chew faced at Thursday’s landmark hearing, and what could lie ahead.
    TikTok’s relationship to China under fire
    Many members of the committee focused on ByteDance and its executives, who lawmakers say have ties to the Chinese Communist party.
    The committee members asked how frequently Chew was in contact with them, and questioned whether the company’s proposed solution, called Project Texas, would offer sufficient protection against Chinese laws that require companies to make user data accessible to the government.
    At one point, Tony Cárdenas, a Democrat from California, asked Chew outright if TikTok is a Chinese company. Chew responded that TikTok is global in nature, not available in mainland China, and headquartered in Singapore and Los Angeles.
    Neal Dunn, a Republican from Florida, asked with similar bluntness whether ByteDance has “spied on American citizens” – a question that came amid reports the company accessed journalists’ information in an attempt to identify which employees were leaking information. Chew responded that “spying is not the right way to describe it”.
    Concerns about the viability of ‘Project Texas’
    In an effort to deflect concerns about Chinese influence, TikTok has pledged to relocate all US user data to domestic servers through an effort titled Project Texas, a plan that would also allow US tech firm Oracle to scrutinize TikTok’s source code and act as a third-party monitor.
    The company has promised to complete the effort by the end of the year, but some lawmakers questioned whether that is possible, with hundreds of millions of lines of source code requiring review in a relatively short amount of time.
    “I am concerned that what you’re proposing with Project Texas just doesn’t have the technical capability of providing us the assurances that we need,” the California Republican Jay Obernolte, a congressman and software engineer, said.
    Youth safety and mental health in the spotlight
    Another frequent focus was the safety of TikTok’s young users, considering the app has exploded in popularity with this age group in recent years. A majority of teens in the US say they use TikTok – with 67% of people aged 13 to 17 saying they have used the app and 16% of that age group saying they use it “almost constantly”, according to the Pew Research Center.
    Lawmakers cited reports that drug-related content has spread on the app, allowing teens to purchase dangerous substances easily online.
    Chew said such content violates TikTok policy and is removed when identified.
    “We take this very seriously,” Chew said. “This is an industry-wide challenge, and we’re investing as much as we can. We don’t think it represents the majority of the users’ experience on TikTok, but it does happen.”
    Others cited self-harm and eating disorder content, which have been spreading on the platform. TikTok is also facing lawsuits over deadly “challenges” that have gone viral on the app. Mental health concerns were underscored at the hearing by the appearance of Dean and Michelle Nasca, the parents of a teen who died by suicide after allegedly being served unsolicited self-harm content on TikTok.
    “We need you to do your part,” said congresswoman Kim Schrier, who is a pediatrician. “It could save this generation.”
    Uncertainty lingers over a possible ban
    The federal government has already barred TikTok on government devices, and the Biden administration has threatened a national ban. Thursday’s hearing left the future of the app in the US uncertain, as members of the committee appeared unwavering in their conviction that TikTok was a tool that could be exploited by the Chinese Communist party. Their conviction was bolstered by a report in the Wall Street Journal, released just hours before the hearing, indicating the Chinese government would not approve a sale of TikTok.
    Lawmakers outside of the committee are also unconvinced. US senators Mark Warner and John Thune said in a statement that all Chinese companies “are ultimately required to do the bidding of Chinese intelligence services, should they be called upon to do so” and that nothing Chew said in his testimony assuaged those concerns. Colorado senator Michael Bennet also reiterated calls for an all-out ban of TikTok.
    But the idea of a national ban still faces huge hurdles, both legally and in the court of public opinion. For one, previous attempts to ban TikTok under the Trump administration were blocked in court due in part to free speech concerns. TikTok also remains one of the fastest growing and most popular apps in the US, and millions of its users are unlikely to want to give it up.
    A coalition of civil liberties, privacy and security groups including Fight for the Future, the Center for Democracy and Technology, and the American Civil Liberties Union has written a letter opposing a ban, arguing that it would violate constitutional rights to freedom of expression. “A nationwide ban on TikTok would have serious ramifications for free expression in the digital sphere, infringing on Americans’ first amendment rights and setting a potent and worrying precedent in a time of increased censorship of internet users around the world,” the letter reads.
    Where the coalition and many members of the House committee agree is on the pressing need for federal data privacy regulation that protects consumer information and reins in all big tech platforms, including TikTok. The American Data Privacy and Protection Act – a bipartisan bill working its way through Washington – is one effort under way to address those concerns.

  • TikTok CEO grilled for over five hours on China, drugs and teen mental health

    The chief executive of TikTok, Shou Zi Chew, was forced to defend his company’s relationship with China, as well as the protections for its youngest users, at a testy congressional hearing on Thursday that came amid a bipartisan push to ban the app entirely in the US over national security concerns.
    The hearing got off to an intense start, with members of the committee hammering on Chew’s connection to executives at TikTok’s parent company, ByteDance, who lawmakers say have ties to the Chinese Communist party. The committee members asked how frequently Chew was in contact with them, and questioned whether the company’s proposed solution, called Project Texas, would offer sufficient protection against Chinese laws that require companies to make user data accessible to the government.
    Lawmakers have long held concerns over China’s control over the app, concerns Chew repeatedly tried to resist throughout the hearing. “Let me state this unequivocally: ByteDance is not an agent of China or any other country,” he said in prepared testimony.
    But Chew’s claims of independence were undermined by a Wall Street Journal story published just hours before the hearing that said China would strongly oppose any forced sale of the company. Responding for the first time to Joe Biden’s threat of a national ban unless ByteDance sells its shares, the Chinese commerce ministry said such a move would involve exporting technology from China and thus would have to be approved by the Chinese government.
    Lawmakers also questioned Chew over the platform’s impact on mental health, particularly of its young users. The Republican congressman Gus Bilirakis shared the story of Chase Nasca, a 16-year-old boy who died by suicide a year ago by stepping in front of a train. Nasca’s parents, who have sued ByteDance, claiming Chase was “targeted” with unsolicited suicide-related content, appeared at the hearing and grew emotional as Bilirakis told their son’s story.
    “I want to thank his parents for being here today, and allowing us to show this,” Bilirakis said. “Mr Chew, your company destroyed their lives.”
    Driving home concerns about young users, Congresswoman Nanette Barragán asked Chew about reports that he does not let his own children use the app.
    “At what age do you think it would be appropriate for a young person to get on TikTok?” she said.
    Chew confirmed his own children were not on TikTok but said that was because in Singapore, where they live, there is not a version of the platform for users under the age of 13. In the US there is a version of TikTok in which the content is curated for users under 13.
    “Our approach is to give differentiated experiences for different age groups, and let the parents have conversations with their children to decide what’s best for their family,” he said.
    The appearance of Chew before the House energy and commerce committee, the first ever by a TikTok chief executive, represents a major test for the 40-year-old, who has remained largely out of the spotlight.
    Throughout the hearing, Chew stressed TikTok’s distance from the Chinese government, kicking off his testimony with an emphasis on his own Singaporean heritage. Chew talked about Project Texas – an effort to move all US data to domestic servers – and said the company was deleting all US user data that is backed up to servers outside the US by the end of the year.
    Some legislators said that Project Texas was too large an undertaking and would not tackle concerns about US data privacy soon enough.
    “I am concerned that what you’re proposing with Project Texas just doesn’t have the technical capability of providing us the assurances that we need,” the California Republican Jay Obernolte, a software engineer, said.
    At one point, Tony Cárdenas, a Democrat from California, asked Chew outright if TikTok is a Chinese company. Chew responded that TikTok is global in nature, not available in mainland China, and headquartered in Singapore and Los Angeles.
    Neal Dunn, a Republican from Florida, asked with similar bluntness whether ByteDance has “spied on American citizens” – a question that came amid reports the company accessed journalists’ information in an attempt to identify which employees were leaking information. Chew responded that “spying is not the right way to describe it”.
    The hearing comes three years after TikTok was formally targeted by the Trump administration with an executive order prohibiting US companies from doing business with ByteDance. Biden revoked that order in June 2021, under the stipulation that the US committee on foreign investment conduct a review of the company. When that review stalled, Biden demanded TikTok sell its Chinese-owned shares or face a ban in the US.
    The bipartisan nature of the backlash was remarked upon several times during the hearing, with Cárdenas pointing out that Chew “has been one of the few people to unite this committee”.
    Chew’s testimony, some lawmakers said, was reminiscent of Mark Zuckerberg’s appearance in an April 2018 hearing to answer for his own platform’s data-privacy issues – answers many lawmakers were unsatisfied with. Cárdenas said: “We are frustrated with TikTok … and yes, you keep mentioning that there are industry issues that not only TikTok faces but others. You remind me a lot of [Mark] Zuckerberg … when he came here, I said he reminds me of Fred Astaire: a good dancer with words. And you are doing the same today. A lot of your answers are a bit nebulous, they’re not yes or no.”
    Chew, a former Goldman Sachs banker who has helmed the company since May 2021, warned users in a video posted to TikTok earlier in the week that the company was at a “pivotal moment”.
    “Some politicians have started talking about banning TikTok,” he said, adding that the app now has more than 150 million active monthly US users. “That’s almost half the US coming to TikTok.”
    TikTok has battled legislative headwinds since its meteoric rise began in 2018. Today, a majority of teens in the US say they use TikTok – with 67% of people ages 13 to 17 saying they have used the app and 16% of that age group saying they use it “almost constantly”, according to the Pew Research Center.
    This has raised a number of concerns about the app’s impact on young users’ safety, with self-harm and eating disorder-related content spreading on the platform.
    TikTok is also facing lawsuits over deadly “challenges” that have gone viral on the app.
    TikTok has introduced features in response to such criticisms, including automatic time limits for users under 18.
    Some tech critics have said that while TikTok’s data collection does raise concerns, its practices are not much different from those of other big tech firms.
    “Holding TikTok and China accountable are steps in the right direction, but doing so without holding other platforms accountable is simply not enough,” said the Tech Oversight Project, a technology policy advocacy organization, in a statement.
    “Lawmakers and regulators should use this week’s hearing as an opportunity to re-engage with civil society organizations, NGOs, academics and activists to squash all of big tech’s harmful practices.”

  • Online Troll Named Microchip Tells of Sowing ‘Chaos’ in 2016 Election

    The defendant in the unusual trial, Douglass Mackey, and the pseudonymous witness collaborated to beat Hillary Clinton. They met for the first time in a Brooklyn courtroom.
    The two social media influencers teamed up online years ago.
    Both had large right-wing followings and pseudonyms to hide their real identities. One called himself Ricky Vaughn, after a fictional baseball player portrayed in a movie by Charlie Sheen. The other called himself Microchip.
    In 2016, prosecutors say, they set out to trick supporters of Hillary Clinton into thinking they could vote by text message or social media, discouraging them from the polls.
    “Ricky Vaughn,” whose real name is Douglass Mackey, was charged in 2021 with conspiring to deprive others of their right to vote, and on Wednesday the men met face to face in court for the first time.
    Mr. Mackey sat at the defense table in Federal District Court in Brooklyn wearing a sober gray suit. He watched as Microchip, clad in a royal-blue sweatsuit and black sandals, approached the witness stand, where he was sworn in under that name and began testifying against him.
    This month, a federal judge overseeing the case, Nicholas G. Garaufis, ruled that Microchip could testify without revealing his actual name after prosecutors said anonymity was needed to protect current and future investigations.
    The sight of a witness testifying under a fictional identity added one more odd element to an already unusual case that reflects both the rise of social media as a force in politics and the emergence of malicious online mischief-makers — trolls — as influential players in a presidential election. This week’s trial could help determine how much protection the First Amendment gives people who spread disinformation.
    Microchip’s testimony appeared intended to give jurors an inside view of what prosecutors describe as a conspiracy to disenfranchise voters. It also provided a firsthand account of crass, nihilistic motives behind those efforts.
    “I wanted to infect everything,” Microchip said, adding that his aim before the 2016 election had been “to cause as much chaos as possible” and diminish Mrs. Clinton’s chances of beating Donald J. Trump.
    Evidence presented by prosecutors has shown how Mr. Mackey and others, including Microchip, had private online discussions in the weeks before the election, discussing how they could move votes.
    While Mr. Mackey made clear that he wanted to help Mr. Trump become president, Microchip testified that he was driven mainly by animus for Mrs. Clinton, saying that his aim had been to “destroy” her reputation.
    In the fervid and fluid environment surrounding the 2016 election, Mr. Mackey, whose lawyer described him as “a staunch political conservative,” and Microchip, who told BuzzFeed that he was a “staunch liberal,” became allies.
    Online exchanges and Twitter messages entered into evidence by prosecutors showed the men plotting their strategy. Mr. Mackey saw limiting Black turnout as a key to helping Mr. Trump. Prosecutors said that he posted an image showing a Black woman near a sign reading “African Americans for Hillary” and the message that people could vote by texting “Hillary” to a specific number.
    Microchip testified that Mr. Mackey was a participant in a private Twitter chat group called “War Room,” adding that he was “very well respected back then” and “a leader of sorts.”
    Prosecutors introduced records showing that Microchip and Mr. Mackey had retweeted one another dozens of times.
    Mr. Mackey’s particular talent, according to Microchip, was coming up with ideas and memes that resonated with people who felt that American society was declining and that the West was struggling.
    Microchip testified that he was self-employed as a mobile app developer. He said he had pleaded guilty to a conspiracy charge related to his circulation of memes providing misinformation about how to vote. Because of his anonymity, the details of that plea could not be confirmed. He added that he had signed a cooperation agreement with prosecutors agreeing to testify against Mr. Mackey, and to help with other cases.
    Under cross-examination, Microchip said he had begun working with the F.B.I. in 2018. He also acknowledged telling an investigator in 2021 that there was no “grand plan around stopping people from voting.”
    His time on the stand included a tutorial of sorts on how he had amassed Twitter followers and misled people who viewed his messages.
    He testified that he had built up a following with bots, and used hashtags employed by Mrs. Clinton in a process he called “hijacking” to get his messages to her followers. He aimed to seduce viewers with humor, saying, “When people are laughing, they are very easily manipulated.”
    Microchip said that he sought to discourage voting among Clinton supporters “through fear tactics,” offering conspiratorial takes on ordinary events as a way to drive paranoia and disaffection.
    One example he cited involved the emails of John Podesta, Mrs. Clinton’s campaign chairman, which were made public by WikiLeaks during the campaign.
    There was nothing particularly surprising or sinister among those emails, Microchip said, yet he posted thousands of messages about them suggesting otherwise. “My talent is to make things weird and strange, so there is controversy.”
    Asked by a prosecutor whether he believed the messages he posted, Microchip did not hesitate.
    “No,” he said. “And I didn’t care.”

  • TikTok’s CEO eluded the spotlight. Now, a looming ban means he can’t avoid it

    Shou Zi Chew is not a prolific TikToker. The 40-year-old CEO of the Chinese-owned app has just 23 posts and 17,000 followers to his name – paltry by his own platform’s standards.
    Chew’s profile sees him attending football games, visiting Paris and London, trying Nashville hot chicken, or boating on a lake, often with generic captions. (“Love the outdoors!”) Users have noticed: “Bro the TikTok ceo with 41 likes,” one person commented on his video of the outdoors. “Shout out to this small creator,” another wrote.
    Suffice to say, Chew is not an influencer. But his influence over one of the world’s fastest growing, most popular and – some say – most dangerous social networks is under increasing scrutiny.
    On Thursday Chew will appear before a US congressional committee, answering to lawmakers’ concerns over the Chinese government’s access to US user data, as well as TikTok’s impact on the mental health of its younger user base. The stakes are high, coming amid a crackdown on TikTok from the US to Europe. In the past few months alone, the US has banned TikTok on federal government devices, following similar moves by multiple states’ governments, and the Biden administration has threatened a national ban unless its Chinese-owned parent company, ByteDance, sells its shares.
    It’s one of the biggest tests yet for the Harvard Business School alumnus, who counts stints at the consumer electronics giant Xiaomi, Yuri Milner’s investment firm DST and Goldman Sachs on his resume, and has only been in the TikTok job since May 2021.
    Chew’s low-key online presence is reflective of his public profile. In the two years since becoming CEO, Chew has remained relatively quiet even as TikTok was thrust into the spotlight. Save for select interviews, he operates largely in the background, staying under the radar as the company promises regulators increased transparency. There’s a lot riding on Chew’s first congressional appearance, which might explain why, in recent months, he’s been on a publicity tour. In addition to various interviews, Chew has been quietly meeting with lawmakers as he gears up for his testimony before the House Energy and Commerce Committee.
    Chew has also worked to mobilize the platform’s US user base. In a video posted to the TikTok main account, Chew warned that “some politicians” could take the app away from “all 150 million of you” and asked people to share what they love about using the video-sharing service in the comments.
    Over the past year, the company has attempted to address some lawmakers’ concerns about both data security and teen mental health. TikTok says it spent more than $1.5bn on security efforts and started the process of deleting the US user data that was backed up to its storage centers in Virginia and Singapore after it started routing all US traffic through Oracle-owned servers in the US. The company also recently announced it was limiting screen time for its under-18 users to one hour.
    But it’s unclear how much he stands to change lawmakers’ minds, especially as bipartisan efforts to appear tough on China gain momentum, making it difficult for him to find allies in either party.
    Regulatory pressure grows
    By the time Chew took over in May 2021, he had his work cut out for him. The now seven-year-old company had already gone through two CEOs in just one year – Kevin Mayer, who ran the company for three months, and Vanessa Pappas, who served as a temporary global head before Chew replaced her.
    TikTok was seeing explosive growth, boasting 150 million users in the US alone, but also facing increased regulatory pressure over potential ties to the Chinese government.
    Though Chew has not formally worked at TikTok for very long, he has been involved with its parent company since its early days. Chew was an early investor in ByteDance, founded in 2012, before it began to develop short-form video apps, according to an interview with David Rubenstein, the founder of the Carlyle Group, a private investment firm and ByteDance investor.
    Chew, whose promotion to CEO landed him a spot on Fortune’s 40 under 40 list in 2021, joined the ranks of tech executives like Mark Zuckerberg and Elon Musk at a time when people in those roles, once the subject of unadulterated adoration and hero worship, had become the subject of ire and disillusionment.
    While his lack of public persona may have largely saved him from personal scrutiny, it could hinder his attempts at making inroads among lawmakers and members of the public who have become wary of Chinese surveillance.
    “The mystery of ByteDance and TikTok and the uncertainty around whether they are doing anything that’s unscrupulous is part of the problem,” said Matt Navarra, a social media consultant and founder of the industry newsletter and podcast Geekout. “So [Chew’s] lack of profile and lack of awareness of who he is may be a blessing, but also it might be a downfall given people want to understand TikTok and ByteDance to understand what the level of risk is.”
    Within months of joining, Chew started working to combat those concerns. In June 2021, Chew wrote a letter to lawmakers, reiterating the company’s commitment to transparency and emphasizing the company was run by him, “a Singaporean based in Singapore” and not China-based ByteDance.
    Nearly two years later, those conversations appear to have deteriorated, and even appeals to individual lawmakers have not assuaged fears.
    Senator Michael Bennet, a Democrat of Colorado who called on Apple and Google to remove TikTok from their app stores in February, met with Chew last month but said he was still worried about the national security risks of the app and the “poisonous influence of TikTok’s algorithms on teen mental health”.
    “Mr Chew and I had a frank conversation,” Bennet said in a statement. “But I remain fundamentally concerned that TikTok, as a Chinese-owned company, is subject to dictates from the Chinese Communist party and poses an unacceptable risk to US national security.”
    Into the hot seat
    It’s not the first time US lawmakers have grilled TikTok, but it will be Chew’s first time in the hot seat. In September 2022, battling national security concerns over whether ByteDance may be giving the Chinese government access to US user data, TikTok’s chief operating officer Vanessa Pappas testified in front of the Senate Committee on Homeland Security and Governmental Affairs, contending there is no basis for concern and that TikTok is working to minimize how much data non-US employees can access.
    Chew, who once interned at Facebook, has echoed the same sentiment since he started at TikTok: the company is not beholden to the Chinese government.
    “TikTok has never shared … US user data with the Chinese government. Nor would TikTok honor such a request if one were ever made,” Chew will say on Thursday, according to written testimony posted ahead of the hearing.
    In the past, Chew has pointed out that while ByteDance is based in China, TikTok itself is not available for download in China and all US user data is stored in Virginia with a backup in Singapore.
    Though the US government has offered no evidence that the Chinese government has accessed user data from TikTok, concerns about the security of consumer information in the hands of the company aren’t unfounded. ByteDance employees have reportedly accessed US user data, and the Department of Justice and the FBI have launched an investigation into allegations that some ByteDance employees had obtained TikTok user data to investigate the source of leaks to US journalists.
    Several civil liberties and privacy advocates argue banning TikTok would amount to censorship, and that concerns over data security would be best addressed through a federal privacy regulation that limits how much user data all tech companies can collect and share with government agencies and third parties. The argument appears to have fallen flat, and industry experts appear skeptical there is much Chew could say to assuage lawmakers’ concerns.
    “It’ll be interesting to see how believable and authentic he comes across or how rehearsed those answers [to Congress] are,” Navarra said. “I think that TikTok has to come in and tell these lawmakers something they haven’t already heard. Because if they don’t then the likelihood of banning is certainly gonna increase.”

  • Trial of 2016 Twitter Troll to Test Limits of Online Speech

    Douglass Mackey tried to trick Black people into thinking they could vote by text in the Clinton-Trump presidential election, prosecutors said.
    The images appeared on Twitter in late 2016 just as the presidential campaign was entering its final stretch. Some featured the message “vote for Hillary” and the phrases “avoid the line” and “vote from home.”
    Aimed at Democratic voters, and sometimes singling out Black people, the messages were actually intended to help Donald J. Trump, not Hillary Clinton. The goal, federal prosecutors said, was to suppress votes for Ms. Clinton by persuading her supporters to falsely believe they could cast presidential ballots by text message.
    The misinformation campaign was carried out by a group of conspirators, prosecutors said, including a man in his 20s who called himself Ricky Vaughn. On Monday he will go on trial in Federal District Court in Brooklyn under his real name, Douglass Mackey, after being charged with conspiring to spread misinformation designed to deprive others of their right to vote.
    “The defendant exploited a social media platform to infringe one of the most basic and sacred rights guaranteed by the Constitution,” Nicholas L. McQuaid, acting assistant attorney general for the Justice Department’s Criminal Division, said in 2021 when charges against Mr. Mackey were announced.
    Prosecutors have said that Mr. Mackey, who went to Middlebury College in Vermont and said he lived on the Upper East Side of Manhattan, used hashtags and memes as part of his deception and outlined his strategies publicly on Twitter and with co-conspirators in private Twitter group chats.
    “Obviously we can win Pennsylvania,” Mr. Mackey said on Twitter, using one of his pseudonymous accounts less than a week before the election, according to a complaint and affidavit. “The key is to drive up turnout with non-college whites, and limit black turnout.”
    That tweet, court papers said, came a day after Mr. Mackey tweeted an image showing a Black woman in front of a sign supporting Ms. Clinton. That tweet told viewers they could vote for Ms. Clinton by text message.
    Prosecutors said nearly 5,000 people texted the number shown in the deceptive images, adding that the images stated they had been paid for by the Clinton campaign and had been viewed by people in the New York City area.
    Mr. Mackey’s trial is expected to provide a window into a small part of what the authorities have described as broad efforts to sway the 2016 election through lies and disinformation. While some of those attempts were orchestrated by Russian security services, others were said to have emanated from American internet trolls.
    People whose names may surface during the trial or who are expected to testify include a man who tweeted about Jews and Black people and was then disinvited from the DeploraBall, a far-right event in Washington, D.C., the night before Mr. Trump’s inauguration; a failed congressional candidate from Wisconsin; and an obscure federal cooperator who will be allowed to testify under a code name.
    As the trial has approached, people sympathetic to Mr. Mackey have cast his case as part of a political and cultural war, a depiction driven in part by precisely the sort of partisan social media-fueled effort that he is accused of engineering.
    Mr. Mackey’s fans have portrayed him as a harmless prankster who is being treated unfairly by the state for engaging in a form of free expression.
    That notion, perhaps predictably, has proliferated on Twitter, advanced by people using some of the same tools that prosecutors said Mr. Mackey used to disseminate lies. Mackey supporters have referred to him on social media as a “meme martyr” and spread a meme showing him wearing a red MAGA hat and accompanied by the hashtag “#FreeRicky.”
    Some tweets about Mr. Mackey from prominent figures have included apocalyptic-sounding language. The Fox personality Tucker Carlson posted a video of himself on Twitter calling the trial “the single greatest assault on free speech and human rights in this country’s modern history.”
    Joe Lonsdale, a founder of Palantir Technologies, retweeted an assertion that Mr. Mackey was being “persecuted by the Biden DOJ for posting memes” and added: “This sounds concerning.” Elon Musk, the billionaire owner of Twitter, replied with a one-word affirmation: “Yeah.”
    Mr. Mackey is accused of participating in private direct message groups on Twitter called “Fed Free Hatechat,” “War Room” and “Infowars Madman” to discuss how to influence the election.
    Prosecutors said people in those groups discussed sharing memes suggesting that celebrities were supporting Mr. Trump and that Ms. Clinton would start wars and draft women to fight them.
    One exchange in the Madman group centered on an image that falsely told opponents of Brexit that they could vote “remain” in that British referendum through Facebook or Twitter, according to investigators. One participant in the group asked whether they could make something similar for Ms. Clinton, investigators wrote, adding that another replied: “Typical that all the dopey minorities fell for it.”
    Last summer, defense lawyers asked that Mr. Mackey’s case be dismissed, referring to Twitter as a “no-holds-barred-free-for-all” and saying “the allegedly deceptive memes” had been protected by the First Amendment as satirical speech.
    They wrote to the court that it was “highly unlikely” that the memes had fooled any voters and added that any harm was in any event “far outweighed by the chilling of the marketplace of ideas where consumers can assess the value of political expression as provocation, satire, commentary, or otherwise.”
    Prosecutors countered that illegal conduct is not protected by the First Amendment merely because it is carried out by language and added that the charge against Mr. Mackey was not based on his political viewpoint or advocacy. Rather, they wrote, it was focused on “intentional spreading of false information calculated to mislead and misinform voters about how, where and when to cast a vote in a federal election.”
    Judge Nicholas G. Garaufis ruled that the case should continue, saying it was “about conspiracy and injury, not speech” and adding that Mr. Mackey’s contention that his speech was protected as satire was “a question of fact reserved for the jury.”
    The prosecution’s star witness is likely to be a man known as Microchip, a shadowy online figure who spread misinformation about the 2016 election, according to two people familiar with the matter who spoke on condition of anonymity.
    Microchip was a prominent player in alt-right Twitter around the time of the election, and Judge Garaufis allowed him to testify under his online handle in part because prosecutors say he is helping the F.B.I. with several other covert investigations. On Sunday, the case was reassigned to U.S. District Judge Ann M. Donnelly.
    In court papers filed last month, prosecutors said they intended to ask the witness to explain to the jury how Mr. Mackey and his allies used Twitter direct messaging groups to come up with “deceptive images discussing the time, place, and manner of voting.”
    One of the people whom Microchip might mention from the stand is Anthime Gionet, better known by his Twitter name, Baked Alaska; he attended the violent “Unite the Right” rally in Charlottesville, Va., in August 2017. He was barred from the DeploraBall after sending a tweet that included stereotypes about Jews and Black people.
    In January, Mr. Gionet was sentenced to two months in prison for his role in storming the Capitol on Jan. 6, 2021.