More stories


    Is Argentina the First A.I. Election?

    The posters dotting the streets of Buenos Aires had a certain Soviet flair to them. There was one of Argentina’s presidential candidates, Sergio Massa, dressed in a shirt with what appeared to be military medals, pointing to a blue sky. He was surrounded by hundreds of older people — in drab clothing, with serious, and often disfigured, faces — looking toward him in hope.

    The style was no mistake. The illustrator had been given clear instructions.

    “Sovietic Political propaganda poster illustration by Gustav Klutsis featuring a leader, masssa, standing firmly,” said a prompt that Mr. Massa’s campaign fed into an artificial-intelligence program to produce the image. “Symbols of unity and power fill the environment,” the prompt continued. “The image exudes authority and determination.”

    Javier Milei, the other candidate in Sunday’s runoff election, has struck back by sharing what appear to be A.I. images depicting Mr. Massa as a Chinese communist leader and himself as a cuddly cartoon lion. They have been viewed more than 30 million times.

    Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch. A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.

    A.I.’s prominent role in Argentina’s campaign, and the political debate it has set off, underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe. Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.

    Mr. Massa’s campaign has created an A.I. system that can create images and videos of many of the election’s main players — the candidates, running mates, political allies — doing a wide variety of things. The campaign has used A.I. to portray Mr. Massa, Argentina’s staid center-left economy minister, as strong, fearless and charismatic, including videos that show him as a soldier in war, a Ghostbuster and Indiana Jones, as well as posters that evoke Barack Obama’s 2008 “Hope” poster and a cover of The New Yorker. The campaign has also used the system to depict his opponent, Mr. Milei — a far-right libertarian economist and television personality known for outbursts — as unstable, putting him in films like “A Clockwork Orange” and “Fear and Loathing in Las Vegas.”

    Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.

    “Imagine having kids and thinking that each is a long-term investment. Not in the traditional sense, but thinking of the economic potential of their organs,” says the manipulated image of Mr. Milei in the fabricated video, posted by the Massa campaign on its Instagram account for A.I. content, called “A.I. for the Homeland.” The post’s caption says, “We asked an Artificial Intelligence to help Javier explain the business of selling organs and this happened.”

    In an interview, Mr. Massa said he was shocked the first time he saw what A.I. could do. “I didn’t have my mind prepared for the world that I’m going to live in,” he said. “It’s a huge challenge. We’re on a horse that we have to ride but we still don’t know its tricks.”

    The New York Times then showed him the deepfake his campaign created of Mr. Milei and human organs. He appeared disturbed. “I don’t agree with that use,” he said.

    His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. was to entertain and make political points, not deceive.

    Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks. For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.

    “Now we’ve seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert based in England who has advised governments on A.I.-generated content.

    This year, a mayoral candidate in Toronto used gloomy A.I.-generated images of homeless people to telegraph what Toronto would turn into if he weren’t elected. In the United States, the Republican Party posted a video created with A.I. that shows China invading Taiwan and other dystopian scenes to depict what it says would happen if President Biden wins a second term. And the campaign of Gov. Ron DeSantis of Florida shared a video showing A.I.-generated images of Donald J. Trump hugging Dr. Anthony S. Fauci, who has become an enemy on the American right for his role leading the nation’s pandemic response.

    So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I.-generated or is so clearly fabricated that it is unlikely to deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.

    Meta, the company that owns Facebook and Instagram, said this week that it would require political ads to disclose whether they used A.I. Other unpaid posts on the sites that use A.I., even if related to politics, would not be required to carry any disclosures. The U.S. Federal Election Commission is also considering whether to regulate the use of A.I. in political ads.

    The Institute for Strategic Dialogue, a London-based research group that studies internet platforms, signed a letter urging such regulations. Isabelle Frances-Wright, the group’s head of technology and society, said the extensive use of A.I. in Argentina’s election was worrisome. “I absolutely think it’s a slippery slope,” she said. “In a year from now, what already seems very realistic will only seem more so.”

    The Massa campaign said it decided to use A.I. in an effort to show that Peronism, the 78-year-old political movement behind Mr. Massa, can appeal to young voters by mixing Mr. Massa’s image with pop and meme culture.

    To do so, campaign engineers and artists fed photos of Argentina’s various political players into an open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask.

    During the campaign, Mr. Massa’s communications team has briefed artists working with the campaign’s A.I. on which messages or emotions they want the images to impart, such as national unity, family values and fear. The artists have then brainstormed ideas to put Mr. Massa or Mr. Milei, as well as other political figures, into content that references films, memes, artistic styles or moments in history.

    For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.

    The A.I. images have also shown up in the real world. The Soviet posters were one of the dozens of designs that Mr. Massa’s campaign and supporters printed to post across Argentina’s public spaces. Some images were generated by the campaign’s A.I., while others were created by supporters using A.I., including one of the most well-known: an image of Mr. Massa riding a horse in the style of José de San Martín, an Argentine independence hero.

    “Massa was too stiff,” said Octavio Tome, a community organizer who helped create the image. “We’re showing a boss-like Massa, and he’s very Argentine.”

    The rise of A.I. in Argentina’s election has also made some voters question what is real. After a video circulated last week of Mr. Massa looking exhausted after a campaign event, his critics accused him of being on drugs. His supporters quickly struck back, claiming the video was actually a deepfake. His campaign confirmed, however, that the video was, in fact, real.

    Mr. Massa said people were already using A.I. to try to cover up past mistakes or scandals. “It’s very easy to hide behind artificial intelligence when things you said come out, and you didn’t want them to,” Mr. Massa said in the interview.

    Earlier in the race, Patricia Bullrich, a candidate who failed to qualify for the runoff, tried to explain away leaked audio recordings of her economic adviser offering a woman a job in exchange for sex by saying the recordings were fabricated. “They can fake voices, alter videos,” she said.

    Were the recordings real or fake? It’s unclear.


    Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.

    It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

    Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.

    But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.

    These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

    ‘Filter bubbles’ and democracy

    Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to that used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

    But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.

    The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

    The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber Is Destroying Democracy,” Wired Magazine claimed a few weeks later.

    Changing information doesn’t change minds

    But without rigorous testing, it’s been hard to figure out whether the filter-bubble effect was real. The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

    Meta gave unprecedented access to the researchers during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions, and Meta was able to veto only requests that would violate users’ privacy.

    The studies, taken together, suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation,” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

    “I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

    When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

    Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative. But it also has implications that are far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views.

    Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research. “From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship: “We’re seeing null effects on both factual views and attitudes.”

    As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.


    These 2024 Candidates Have Signed Up For Threads, Meta’s Twitter Alternative

    The bulk of the G.O.P. field is there, with some notable holdouts: Donald J. Trump, the front-runner, and his top rival, Ron DeSantis.

    While the front-runners in the 2024 presidential race have yet to show up on Threads, the new Instagram app aimed at rivaling Twitter, many of the long-shot candidates were quick to take advantage of the platform’s rapidly growing audience.

    “Buckle up and join me on Threads!” Senator Tim Scott, Republican of South Carolina, wrote in a caption accompanying a photo of himself and others in a car that he posted on Thursday — by that morning, the app had already been downloaded more than 30 million times, putting it on track to be the most rapidly downloaded app ever.

    But President Biden, former President Donald J. Trump and Gov. Ron DeSantis of Florida remain absent from the platform so far.

    And that may be just fine with Adam Mosseri, the head of Instagram, who told The Times’s “Hard Fork” podcast on Thursday that he does not expect Threads to become a destination for news or politics, arenas where Twitter has dominated the public discourse. “I don’t want to lean into hard news at all. I don’t think there’s much that we can or should do to discourage it on Instagram or in Threads, but I don’t think we’ll do anything to encourage it,” Mr. Mosseri said.

    The app, released on Wednesday, was presented as an alternative to Twitter, with which many users became disillusioned after it was purchased by Elon Musk in October. Lawyers for Twitter threatened legal action against Meta, the company that owns Instagram, Facebook and Threads, accusing it of using trade secrets from former Twitter employees to build the new platform. Mr. Musk tweeted on Thursday, “Competition is fine, cheating is not.”

    Mr. Trump has not been active on Twitter recently either, despite Mr. Musk’s lifting of the ban placed on Mr. Trump’s account after the Jan. 6, 2021, attack on the Capitol. The former president has instead kept his focus on Truth Social, the right-wing social network he launched in 2021.

    But many of the G.O.P. candidates have begun making their pitches on Threads. Nikki Haley, the former United Nations ambassador and former governor of South Carolina, made a video compilation of her campaign events her first post on the app. “Strong and proud. Not weak and woke,” she wrote on Thursday. “That is the America I see.”

    Gov. Doug Burgum of North Dakota posted footage of his July 4 campaign appearances in New Hampshire, alongside a message on Wednesday that said he and his wife were “looking forward to continuing our time here.”

    And Will Hurd, a former Texas congressman, made a fund-raising pitch to viewers on Wednesday. “Welcome to Threads,” he said in a video posted on the app. “I’m looking forward to continuing the conversation here with you on the issues, my candidacy, where I’ll be and everything our campaign has going on.”

    Francis Suarez, the Republican mayor of Miami, and Larry Elder, a conservative talk radio host, also shared their campaign pitches on the platform, as did two candidates running in the Democratic primary: Robert F. Kennedy Jr., a leading vaccine skeptic, and Marianne Williamson, a self-help author. Even Cornel West, a professor and progressive activist running as a third-party candidate, has posted.

    Former Vice President Mike Pence and Vivek Ramaswamy, a tech entrepreneur, also established accounts — but have yet to post. Among the holdouts: former Gov. Asa Hutchinson of Arkansas and former Gov. Chris Christie of New Jersey, both Republicans.

    The White House has not said whether Mr. Biden will join Threads. Andrew Bates, a White House spokesman, said on Thursday that the administration would “keep you all posted if we do.”


    Hun Sen’s Facebook Page Goes Dark After Spat with Meta

    Prime Minister Hun Sen, an avid user of the platform, had vowed to delete his account after Meta’s oversight board said he had used it to threaten political violence.

    The usually very active Facebook account of Prime Minister Hun Sen of Cambodia appeared to have been deleted on Friday, a day after the oversight board for Meta, Facebook’s parent company, recommended that he be suspended from the platform for threatening political opponents with violence.

    The showdown pits the social media behemoth against one of Asia’s longest-ruling autocrats. Mr. Hun Sen, 70, has ruled Cambodia since 1985 and maintained power partly by silencing his critics. He is a staunch ally of China, a country whose support comes free of American-style admonishments on the value of human rights and democratic institutions.

    A note Friday on Mr. Hun Sen’s account, which had about 14 million followers, said that its content “isn’t available right now.” It was not immediately clear whether Meta had suspended the account or whether Mr. Hun Sen had preemptively deleted it, as he had vowed to do in a post late Thursday on Telegram, a social media platform where he has a much smaller following.

    “That he stopped using Facebook is his private right,” Phay Siphan, a spokesman for the Cambodian government, told The New York Times on Friday. “Other Cambodians use it, and that’s their right.”

    The company-appointed oversight board for Meta had on Thursday recommended a minimum six-month suspension of Mr. Hun Sen’s accounts on Facebook and Instagram, which Meta also owns. The board also said that one of Mr. Hun Sen’s Facebook videos had violated Meta’s rules on “violence and incitement” and should be taken down.

    In the video, Mr. Hun Sen delivered a speech in which he responded to allegations of vote-stealing by calling on his political opponents to choose between the legal system and “a bat.” “If you say that’s freedom of expression, I will also express my freedom by sending people to your place and home,” Mr. Hun Sen said in the speech, according to Meta.

    Meta had previously decided to keep the video online under a policy that allows content that violates Facebook’s community standards to remain on the grounds that it is newsworthy and in the public interest. But the oversight board said on Thursday that it was overturning that decision, calling it “incorrect.”

    The board added that its recommendation to suspend Mr. Hun Sen’s accounts for at least six months was justified given the severity of the violation and his “history of committing human rights violations and intimidating political opponents, and his strategic use of social media to amplify such threats.”

    Meta later said in a statement that it would remove the offending video to comply with the board’s decision. The company also said that it would respond to the suspension recommendation after analyzing it.

    Critics of Facebook have long said that the platform can undermine democracy, promote violence and help politicians unfairly target their critics, particularly in countries with weak institutions.

    Mr. Hun Sen has spent years cracking down on the news media and political opposition in an effort to consolidate his grip on power. In February, he ordered the shutdown of one of the country’s last independent news outlets, saying he did not like its coverage of his son and presumed successor, Lt. Gen. Hun Manet. Under Mr. Hun Sen, the government has also pushed for more government surveillance of the internet, a move that rights groups say makes it even easier for the authorities to monitor and punish online content.

    Mr. Hun Sen’s large Facebook following may overstate his actual support. In 2018, one of his most prominent political opponents, Sam Rainsy, argued in a California court that the prime minister had used so-called click farms to accumulate millions of counterfeit followers. Mr. Sam Rainsy, who lives in exile, also argued that Mr. Hun Sen had used Facebook to spread false news stories and death threats directed at political opponents. The court later denied his request that Facebook be compelled to release records of advertising purchases by Mr. Hun Sen and his allies.

    In 2017, an opposition political party that Mr. Sam Rainsy had led, the Cambodia National Rescue Party, was dissolved by the country’s highest court. More recently, the Cambodian authorities have disqualified other opposition parties from running in a general election next month.

    At a public event in Cambodia on Friday, Mr. Hun Sen said that his political opponents outside the country were surely happy with his decision to quit Facebook. “You have to be aware that if I order Facebook to be shut down in Cambodia, it will strongly affect you,” he added, speaking at an event for garment workers ahead of the general election. “But this is not the path that I choose.”


    Facebook Failed to Stop Ads Threatening Election Workers

    The ads, submitted by researchers, were rejected by YouTube and TikTok.

    Facebook says it does not allow content that threatens serious violence. But when researchers submitted ads threatening to “lynch,” “murder” and “execute” election workers around Election Day this year, the company’s largely automated moderation systems approved many of them.

    Of the 20 ads containing violent content that the researchers submitted, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. The researchers deleted the approved ads before they were published.

    Ten of the test ads were submitted in Spanish. Facebook approved six of those ads, compared with nine of the 10 ads in English. TikTok and YouTube rejected all the ads and suspended the accounts that attempted to submit them, the researchers said.

    The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections, especially as poll workers face additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had surged.

    In a statement, a spokesman for Meta, Facebook’s parent company, said the company had later disabled several accounts that submitted the ads. The company also said that ads continue to be reviewed after they go live. “This is a small sample of ads that are not representative of what people see on our platforms,” the spokesman said. “Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

    The research adds to previous tests conducted by the same groups, including one this year in which 20 ads containing political misinformation were submitted. In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that tried to submit the ads.

    But this round used more overt language that tested Facebook’s largely automated moderation system. The ads included direct threats of violence, sourced from real statements by election deniers and other far-right extremists in the United States. One ad, which was approved by Facebook, read: “We are going to kill you all.” Another ad, which threatened to abuse children, was also approved.

    “It was really quite shocking to see the results,” said Damon McCoy, an associate professor at N.Y.U. “I thought a really simple keyword search would have flagged this for manual review.”

    In a statement, the researchers said they wanted to see social networks like Facebook increase content-moderation efforts and offer more transparency around the moderation actions they take. “The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published, shows that what we are asking is technically possible,” they wrote.


    FTX’s Near-Collapse Batters the Crypto Industry

    Prices of digital currencies have tumbled even after the exchange FTX announced a provisional lifeline by a top rival, Binance. A humbling downfall for Sam Bankman-Fried.Erika P. Rodriguez for The New York TimesA crypto giant’s fate is in doubtDevastation in the crypto market continued on Wednesday, after the giant crypto exchange Binance announced a bombshell deal to buy its embattled rival, FTX. (The deal excludes FTX’s American operations.) The entire market’s capitalization now stands at $900 billion, down from $3 trillion just one year ago, while major cryptocurrencies were down by double-digit percentages. The damage is largely contained within crypto; both the S&P 500 and the Nasdaq closed up yesterday.But investors fear that Binance won’t go through with the rescue plan, and that more pain awaits after their industry’s biggest Lehman-esque moment to date.What happened? Binance, an early investor in FTX turned rival, said over the weekend that it planned to sell its holdings in FTT, a token used for trading on FTX’s platform — a stunning move that cast doubt on the financial health of FTX and its trading arm, Alameda Research. The token’s value has plunged by roughly 80 percent in the past 36 hours to just under $5.Traders withdrew over $1.2 billion from FTX on Monday alone, according to the research firm Nansen. By Tuesday, FTX had stopped processing withdrawals; its chief executive, Sam Bankman-Fried, who was reportedly casting about for a financial lifeline from billionaires, finally turned to Binance for salvation.Binance has cemented its dominance over crypto. It was already the largest exchange worldwide for digital currencies and derivatives; FTX’s trading volumes in September were just a fraction of Binance’s. Its founder, Changpeng Zhao — widely known as CZ — showed off his power by effectively kneecapping FTX and then swooping in with a rescue. 
“This elevates Zhao as the most powerful player in crypto,” Ilan Solot of the derivatives trader Marex Solutions told The Financial Times.It’s a humbling downfall for Bankman-Fried, who in just three years rocketed from obscurity to become one of the best-known moguls in crypto, earning comparisons to Warren Buffett and J.P. Morgan. Months ago, Bankman-Fried sought to live up to the Morgan comparison, swooping in to bail out troubled crypto companies like Celsius and Voyager Digital (deals whose status is now unclear); he also became a frequent presence in Washington, calling for more regulation of the crypto industry, to the ire of CZ and other executives.At the beginning of the year, FTX was valued at $32 billion, backed by heavyweight investors like BlackRock, SoftBank and Tiger Global. (Investors said yesterday they were blindsided by the deal.) The 30-year-old Bankman-Fried — known in the crypto world as S.B.F. — was said to have a net worth of over $16 billion. But a document leaked to CoinDesk purportedly showed that FTX and Alameda, whose finances had long been murky, were highly illiquid and financially vulnerable.The crypto world fears other shoes will drop. Investors worry that CZ may yet pull out of his rescue deal: He noted on Tuesday that the transaction was nonbinding and subject to due diligence. Meanwhile, tokens associated with FTX, including Solana, have continued to plunge in value.Other crypto players sought to distance themselves from the FTX meltdown. Brian Armstrong of Coinbase, the biggest U.S.-focused exchange, said FTX’s troubles appeared to arise from “risky business practices” that his company doesn’t engage in. Still, Coinbase shares fell nearly 11 percent yesterday.And regulators say the news justifies more scrutiny of crypto companies. 
“This is a major market event for the digital asset sector,” said Joe Rotunda of the Texas State Securities Board Enforcement Division, which had already been investigating FTX.

HERE’S WHAT’S HAPPENING

Elon Musk sells billions more in Tesla stock to pay for his Twitter deal. He sold nearly $4 billion worth of shares in recent days, according to regulatory filings, bringing his total sales for the year to $36 billion. The electric carmaker’s shares were up slightly in premarket trading.

The United Nations seeks to end “sham” corporate net-zero pledges. Companies that claim to be trying to cut carbon emissions but invest in fossil fuels should be shamed, António Guterres, the U.N. secretary general, said at COP27. Meanwhile, more rich countries pledged to pay poorer ones compensation for damage from climate change.

Disney reports a jump in streaming losses. The media giant said its direct-to-consumer unit — including Disney+ — doubled its third-quarter losses from a year ago, to $1.5 billion. But Disney said the quarter was the “peak” for losses, and noted it had added 12 million new subscribers.

TikTok lowers its worldwide revenue targets amid a spending slump. The video platform cut its sales goals by 20 percent after its advertising and e-commerce operations struggled, The Financial Times reports. TikTok also revamped its leadership in the United States.

Adidas cuts its profit forecast after breaking from Kanye West. The warning from the sportswear giant came weeks after it ended its highly profitable collaboration with the rapper now known as Ye. Separately, Adidas named Bjorn Gulden, the former head of Puma, as its next C.E.O.

The red wave that wasn’t

Republicans haven’t quite had the night they expected. As of 7 a.m. Eastern, Republicans were 21 seats shy of retaking control of the House. But leadership of the Senate remains up in the air after the Democrats flipped a seat in Pennsylvania.
Here are the big highlights so far:

Pennsylvania: John Fetterman, the state’s Democratic lieutenant governor, beat Mehmet Oz in the closely watched Senate race. Political analysts now say Democrats need to win two of three hotly contested Senate races — in Georgia, Arizona and Nevada, all currently held by Democrats — to maintain power in the chamber.

Georgia: The Senate contest looks like it’s headed for a runoff on Dec. 6, pitting the incumbent, Raphael Warnock, against his Republican challenger, Herschel Walker.

Governor races: Voters backed high-profile incumbents, including Kathy Hochul, Democrat of New York; Greg Abbott, Republican of Texas; and Tony Evers, Democrat of Wisconsin.

Ballot initiatives: Voters in Michigan approved making abortion access a right protected under the State Constitution. Those in Maryland and Missouri voted to legalize marijuana, though similar measures were rejected in Arkansas and North Dakota.

A rough night for Donald Trump: Several candidates he endorsed, including in Arizona, Georgia, Michigan and Pennsylvania, lost or were behind. And a potential rival for the 2024 Republican presidential nomination, Gov. Ron DeSantis of Florida, handily won re-election.

Meta slices through its work force

Facebook’s owner Meta will lay off 11,000 employees, equivalent to 13 percent of its work force, the company announced on Wednesday morning, in the biggest restructuring in the social media giant’s history. A slump in digital advertising and ballooning losses from its pivot to the metaverse have pushed the company to make a series of wide-ranging cuts.

In a note to employees, Mark Zuckerberg, Meta’s co-founder and C.E.O., admitted that the company had hired too aggressively during the pandemic as homebound consumers spent more time socializing and shopping online.
Meta mistakenly assumed this trend would continue: “I got this wrong, and I take responsibility for that,” he wrote.

The company has begun cutting costs across its operations, “scaling back budgets, reducing perks, and shrinking our real estate footprint,” Zuckerberg wrote. The stock was up 3.7 percent in premarket trading, outperforming the Nasdaq.

The economic downturn is forcing companies across industries to shrink. Citigroup and Barclays are expected to lay off hundreds in their investment banking units, Bloomberg reports. And, according to Protocol, Salesforce could cut as many as 2,500 positions in the coming weeks as the activist investor Starboard Value seeks big changes in corporate strategy.

Exclusive: Keurig Dr Pepper buys stake in Athletic Brewing

Keurig Dr Pepper has invested $50 million in Athletic Brewing, the nonalcoholic beer company, as part of a $75 million fund-raise by Athletic, DealBook is first to report. It’s the beverage giant’s second foray into the nonalcoholic booze category — it announced a deal to acquire a nonalcoholic cocktail brand called Atypique this summer — and another sign of interest in this fast-growing category.

Athletic Brewing was founded in 2017 by Bill Shufelt, a former trader at the hedge fund Point72, and John Walker, a former craft brewer. It now sells its products — including lager, light beer and sparkling water — at retailers like Trader Joe’s. With its new backer, Athletic is looking to expand in Australia, France and Spain.

Sales of nonalcoholic beer are skyrocketing, growing almost 70 percent between 2016 and 2021 in the U.S., to about $670 million, according to Euromonitor. While that is still a tiny portion of the overall beer market, its popularity stands in stark contrast to overall sluggishness in beer sales, as the younger generation drinks less and cares more about its waistline.
Beer giants like Heineken, Budweiser and Sam Adams have released nonalcoholic alternatives in the last five years.

It’s not just for recovering alcoholics or nondrinkers. Shufelt said 80 percent of his customers drink alcohol, and three-fourths are between the ages of 21 and 44. About half are women, he added.

THE SPEED READ

Deals

The E.U.’s antitrust watchdog will deepen its scrutiny of Microsoft’s $75 billion takeover of Activision Blizzard. (WSJ)

Goldman Sachs has reportedly weighed buying payment-technology companies to expand its credit-card business. (WSJ)

The electric carmaker Lucid said it planned to raise up to $1.5 billion in fresh capital. (NYT)

Policy

The private equity giants Apollo, Carlyle and KKR disclosed inquiries by regulators over their dealmakers’ use of messaging apps like WhatsApp for business. (Bloomberg)

Supreme Court justices are weighing a Pennsylvania law that requires companies to consent to being sued in its courts for conduct done anywhere. (NYT)

Kenya published some details of a 2014 loan it took out from China, potentially straining relations with the country’s biggest source of infrastructure financing. (NYT)

Best of the rest

Virginia Giuffre, a victim of Jeffrey Epstein, now says she may have misidentified the Harvard law professor Alan Dershowitz as an abuser. (NYT)

Twitter may now offer two kinds of check marks to verify users. (The Verge)

Levi’s named Michelle Gass, Kohl’s chief executive, as its next C.E.O. (NYT)

Would you take a Zoom meeting in a movie theater? AMC hopes so. (Insider)

UBS’s chief risk officer, Christian Bluhm, is quitting to become … a professional photographer. (FT)

Thanks for reading! We’ll see you tomorrow.

We’d like your feedback. Please email thoughts and suggestions to dealbook@nytimes.com.


    Elon Musk Takes a Page Out of Mark Zuckerberg’s Social Media Playbook

As Mr. Musk takes over Twitter, he is emulating some of the actions of Mr. Zuckerberg, who leads Facebook, Instagram and WhatsApp.

Elon Musk has positioned himself as an unconventional businessman. When he agreed to buy Twitter this year, he declared he would make the social media service a place for unfettered free speech, reversing many of its rules and allowing banned users like former President Donald J. Trump to return.

But since closing his $44 billion buyout of Twitter last week, Mr. Musk has followed a surprisingly conventional social media playbook.

The world’s richest man met with more than six civil rights groups — including the N.A.A.C.P. and the Anti-Defamation League — on Tuesday to assure them that he will not make changes to Twitter’s content rules before the results of next week’s midterm elections are certified. He also met with advertising executives to discuss their concerns about their brands appearing alongside toxic online content. Last week, Mr. Musk said he would form a council to advise Twitter on what kinds of content to remove from the platform and would not immediately reinstate banned accounts.

If these decisions and outreach seem familiar, that’s because they are. Other leaders of social media companies have taken similar steps. After Facebook was criticized for being misused in the 2016 presidential election, Mark Zuckerberg, the social network’s chief executive, also met with civil rights groups to calm them and worked to mollify irate advertisers. He later said he would establish an independent board to advise his company on content decisions.

Mr. Musk is in his early days of owning Twitter and is expected to make big changes to the service and business, including laying off some of the company’s 7,500 employees. But for now, he is engaging with many of the same constituents that Mr. Zuckerberg has had to over many years, social media experts and heads of civil society groups said.

Mr. Musk “has discovered what Mark Zuckerberg discovered several years ago: Being the face of controversial big calls isn’t fun,” said Evelyn Douek, an assistant professor at Stanford Law School. Social media companies “all face the same pressures of users, advertisers and governments, and there’s always this convergence around this common set of norms and processes that you’re forced toward.”

Mr. Musk did not immediately respond to a request for comment, and a Twitter spokeswoman declined to comment. Meta, which owns Facebook and Instagram, declined to comment.


    Twitter and TikTok Lead in Amplifying Misinformation, Report Finds

A new analysis found that algorithms and some features of social media sites help false posts go viral.

It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday it began publishing results that it plans to update each week through the midterm elections on Nov. 8.

The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.

Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.

“We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”

The institute calculated its findings by comparing posts that members of the International Fact-Checking Network have identified as false with the engagement of previous posts that were not flagged from the same accounts. It analyzed nearly 600 fact-checked posts in September on a variety of subjects, including the Covid-19 pandemic, the war in Ukraine and the upcoming elections.

Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps.
But some of its newer features are more prone to amplify misinformation, the institute found.

Facebook’s amplification factor of video content alone is closer to TikTok’s, the institute found. That’s because the platform’s Reels and Facebook Watch, which are video features, “both rely heavily on algorithmic content recommendations” based on engagements, according to the institute’s calculations.

Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.

The institute plans to update its findings to track how the amplification fluctuates, especially as the midterm elections near. Misinformation, the institute’s report said, is much more likely to be shared than merely factual content.

“Amplification of misinformation can rise around critical events if misinformation narratives take hold,” the report said. “It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation.”
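The core of the institute's method, as described above, is a ratio: engagement on a fact-checked false post divided by the same account's typical engagement on earlier, unflagged posts. A minimal sketch of that calculation follows; the function name and the sample numbers are hypothetical, not the Integrity Institute's actual code or data:

```python
from statistics import mean

def amplification_factor(flagged_engagements, baseline_engagements):
    """Ratio of mean engagement on fact-checked false posts to the
    mean engagement of the same account's earlier, unflagged posts.
    A value above 1.0 means the false posts outperformed the baseline."""
    baseline = mean(baseline_engagements)
    if baseline == 0:
        raise ValueError("baseline engagement must be nonzero")
    return mean(flagged_engagements) / baseline

# Hypothetical example: a false post drew 3,000 engagements on an
# account whose recent unflagged posts averaged 1,000.
factor = amplification_factor([3000], [900, 1000, 1100])
print(round(factor, 2))  # 3.0
```

A platform-level figure like the ones in the report would presumably aggregate such per-account ratios across the roughly 600 fact-checked posts in the sample; the sketch shows only the single-account comparison.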