More stories

  • Elon Musk Takes a Page Out of Mark Zuckerberg’s Social Media Playbook

    As Mr. Musk takes over Twitter, he is emulating some of the actions of Mr. Zuckerberg, who leads Facebook, Instagram and WhatsApp.

    Elon Musk has positioned himself as an unconventional businessman. When he agreed to buy Twitter this year, he declared he would make the social media service a place for unfettered free speech, reversing many of its rules and allowing banned users like former President Donald J. Trump to return.

    But since closing his $44 billion buyout of Twitter last week, Mr. Musk has followed a surprisingly conventional social media playbook.

    The world’s richest man met with more than six civil rights groups — including the N.A.A.C.P. and the Anti-Defamation League — on Tuesday to assure them that he will not make changes to Twitter’s content rules before the results of next week’s midterm elections are certified. He also met with advertising executives to discuss their concerns about their brands appearing alongside toxic online content. Last week, Mr. Musk said he would form a council to advise Twitter on what kinds of content to remove from the platform and would not immediately reinstate banned accounts.

    If these decisions and outreach seem familiar, that’s because they are. Other leaders of social media companies have taken similar steps. After Facebook was criticized for being misused in the 2016 presidential election, Mark Zuckerberg, the social network’s chief executive, also met with civil rights groups to calm them and worked to mollify irate advertisers. He later said he would establish an independent board to advise his company on content decisions.

    Mr. Musk is in his early days of owning Twitter and is expected to make big changes to the service and business, including laying off some of the company’s 7,500 employees. But for now, he is engaging with many of the same constituents that Mr. Zuckerberg has had to engage with over many years, social media experts and heads of civil society groups said.

    Mr. Musk “has discovered what Mark Zuckerberg discovered several years ago: Being the face of controversial big calls isn’t fun,” said Evelyn Douek, an assistant professor at Stanford Law School. Social media companies “all face the same pressures of users, advertisers and governments, and there’s always this convergence around this common set of norms and processes that you’re forced toward.”

    Mr. Musk did not immediately respond to a request for comment, and a Twitter spokeswoman declined to comment. Meta, which owns Facebook and Instagram, declined to comment.

  • Twitter and TikTok Lead in Amplifying Misinformation, Report Finds

    A new analysis found that algorithms and some features of social media sites help false posts go viral.

    It is well known that social media amplifies misinformation and other harmful content. The Integrity Institute, an advocacy group, is now trying to measure exactly how much — and on Thursday it began publishing results that it plans to update each week through the midterm elections on Nov. 8.

    The institute’s initial report, posted online, found that a “well-crafted lie” will get more engagements than typical, truthful content and that some features of social media sites and their algorithms contribute to the spread of misinformation.

    Twitter, the analysis showed, has what the institute called the greatest misinformation amplification factor, in large part because of its feature allowing people to share, or “retweet,” posts easily. It was followed by TikTok, the Chinese-owned video site, which uses machine-learning models to predict engagement and make recommendations to users.

    “We see a difference for each platform because each platform has different mechanisms for virality on it,” said Jeff Allen, a former integrity officer at Facebook and a founder and the chief research officer at the Integrity Institute. “The more mechanisms there are for virality on the platform, the more we see misinformation getting additional distribution.”

    The institute calculated its findings by comparing posts that members of the International Fact-Checking Network have identified as false with the engagement of previous posts from the same accounts that were not flagged. It analyzed nearly 600 fact-checked posts in September on a variety of subjects, including the Covid-19 pandemic, the war in Ukraine and the upcoming elections.

    Facebook, according to the sample that the institute has studied so far, had the most instances of misinformation but amplified such claims to a lesser degree, in part because sharing posts requires more steps. But some of its newer features are more prone to amplify misinformation, the institute found.

    Facebook’s amplification factor for video content alone is closer to TikTok’s, the institute found. That’s because the platform’s Reels and Facebook Watch, which are video features, “both rely heavily on algorithmic content recommendations” based on engagements, according to the institute’s calculations.

    Instagram, which like Facebook is owned by Meta, had the lowest amplification rate. There was not yet sufficient data to make a statistically significant estimate for YouTube, according to the institute.

    The institute plans to update its findings to track how the amplification fluctuates, especially as the midterm elections near. Misinformation, the institute’s report said, is much more likely to be shared than merely factual content.

    “Amplification of misinformation can rise around critical events if misinformation narratives take hold,” the report said. “It can also fall, if platforms implement design changes around the event that reduce the spread of misinformation.”

  • Meta Removes Chinese Effort to Influence U.S. Elections

    Meta, the parent company of Facebook and Instagram, said on Tuesday that it had discovered and taken down what it described as the first targeted Chinese campaign to interfere in U.S. politics ahead of the midterm elections in November.

    Unlike the Russian efforts over the last two presidential elections, however, the Chinese campaign appeared limited in scope — and clumsy at times.

    The fake posts began appearing on Facebook and Instagram, as well as on Twitter, in November 2021, using profile pictures of men in formal attire but the names of women, according to the company’s report.

    The users later posed as conservative Americans, promoting gun rights and opposition to abortion, while criticizing President Biden. By April, they mostly presented themselves as liberals from Florida, Texas and California, opposing guns and promoting reproductive rights. They mangled the English language and failed to attract many followers.

    Two Meta officials said they could not definitively attribute the campaign to any group or individuals. Yet the tactics reflected China’s growing efforts to use international social media to promote the Communist Party’s political and diplomatic agenda.

    What made the effort unusual was its apparent focus on divisive domestic politics ahead of the midterms.

    In previous influence campaigns, China’s propaganda apparatus concentrated more broadly on criticizing American foreign policy, while promoting China’s view of issues like the crackdown on political rights in Hong Kong and the mass repression in Xinjiang, the mostly Muslim region where hundreds of thousands were forced into re-education camps or prisons.

    Ben Nimmo, Meta’s lead official for global threat intelligence, said the operation reflected “a new direction for Chinese influence operations.”

    “It is talking to Americans, pretending to be Americans rather than talking about America to the rest of the world,” he added later. “So the operation is small in itself, but it is a change.”

    The operation appeared to lack urgency and scope, raising questions about its ambition and goals. It involved only 81 Facebook accounts, eight Facebook pages and one group. By July, the operation had suddenly shifted its efforts away from the United States and toward politics in the Czech Republic.

    The posts appeared during working hours in China, typically when Americans were asleep. They dropped off noticeably during what appeared to be “a substantial lunch break.”

    In one post, a user struggled with clarity: “I can’t live in an America on regression.”

    Even if the campaign failed to go viral, Mr. Nimmo said, the company’s disclosure was intended to draw attention to the potential threat of Chinese interference in the domestic affairs of its rivals.

    Meta also announced that it had taken down a much larger Russian influence operation that began in May and focused primarily on Germany, as well as France, Italy and Britain.

    The company said it was “the largest and most complex” operation it had detected from Russia since the war in Ukraine began in February.

    The campaign centered on a network of 60 websites that impersonated legitimate news organizations in Europe, like Der Spiegel, Bild, The Guardian and ANSA, the Italian news agency.

    The sites would post original articles criticizing Ukraine, warning about Ukrainian refugees and arguing that economic sanctions against Russia would only backfire. Those articles were then promoted across the internet, including on Facebook and Instagram, but also on Twitter and Telegram, the messaging app, which is widely used in Russia.

    The Russian operation involved 1,633 accounts on Facebook, 703 pages and one group, as well as 29 different accounts on Instagram, the company’s report said. About 4,000 accounts followed one or more of the Facebook pages. As Meta moved to block the operation’s domains, new websites appeared, “suggesting persistence and continuous investment in this activity.”

    Meta began its investigation after disclosures in August by one of Germany’s television networks, ZDF. As in the case of the Chinese operation, it did not explicitly accuse the government of the Russian president, Vladimir V. Putin, though the activity clearly mirrors the Kremlin’s extensive information war surrounding its invasion.

    “They were kind of throwing everything at the wall and not a lot of it was sticking,” said David Agranovich, Meta’s director of threat disruption. “It doesn’t mean that we can say mission accomplished here.”

    Meta’s report noted overlap between the Russian and Chinese campaigns on “a number of occasions,” although the company said they were unconnected. The overlap reflects the growing cross-fertilization of official statements and state media reports in the two countries, especially regarding the United States.

    The accounts associated with the Chinese campaign posted material from Russia’s state media, including posts repeating unfounded allegations that the United States had secretly developed biological weapons in Ukraine.

    A French-language account linked to the operation posted a version of the allegation in April, 10 days after it had originally been posted by Russia’s Ministry of Defense on Telegram. It drew only one response, in French, from an authentic user, according to Meta.

    “Fake,” the user wrote. “Fake. Fake as usual.”

  • The Midterm Election’s Most Dominant Toxic Narratives

    Ballot mules. Poll watch parties. Groomers.

    These topics are now among the most dominant divisive and misleading narratives online about November’s midterm elections, according to researchers and data analytics companies. On Twitter, Facebook, Reddit, Truth Social and other social media sites, some of these narratives have surged in recent months, often accompanied by angry and threatening rhetoric.

    The effects of these inflammatory online discussions are being felt in the real world, election officials and voting rights groups said. Voters have flooded some local election offices with misinformed questions about supposedly rigged voting machines, while some people appear befuddled about what pens to use on ballots and whether mail-in ballots are still legal, they said.

    “Our voters are angry and confused,” Lisa Marra, elections director in Cochise County, Ariz., told a House committee last month. “They simply don’t know what to believe.”

    The most prevalent of these narratives fall into three main categories: continued falsehoods about rampant election fraud; threats of violence and citizen policing of elections; and divisive posts on health and social policies that have become central to political campaigns. Here’s what to know about them.

    Misinformation about the 2020 election, left, has fueled the “Stop the Steal” movement, center, and continues to be raised at campaign events for the midterms, right.

    Election Fraud

    False claims of election fraud are commanding conversation online, with former President Donald J. Trump continuing to protest that the 2020 presidential election was stolen from him.

    Voter fraud is rare, but that falsehood about the 2020 election has become a central campaign issue for dozens of candidates around the country, causing misinformation and toxic content about the issue to spread widely online.

    “Stolen election” was mentioned 325,589 times on Twitter from June 19 to July 19, a number that has been fairly steady throughout the year and that was up nearly 900 percent from the same period in 2020, according to Zignal Labs, a media research firm.

    On the video-sharing site Rumble, videos with the term “stop the steal” or “stolen election” and other claims of election fraud have been among the most popular. In May, such posts attracted 2.5 million viewers, more than triple the total from a year earlier, according to Similarweb, a digital analytics firm.

    More recently, misinformation around the integrity of voting has metastasized. More conspiracy theories are circulating online about individuals submitting fraudulent ballots, about voting machines being rigged to favor Democrats and about election officials switching the kinds of pens that voters must use to mark ballots in order to confuse them.

    These conspiracy theories have in turn spawned new terms, such as “ballot trafficking” and “ballot mules,” the latter used to describe people who are paid to cast fake ballots. The terms were popularized by the May release of “2000 Mules,” a discredited film claiming widespread voter fraud in the 2020 election. From June 19 to July 19, “ballot mules” was mentioned 17,592 times on Twitter; the term was not used before the 2020 election, according to Zignal.

    In April, the conservative talk show host Charlie Kirk interviewed the stars of the film, including Catherine Engelbrecht of the nonprofit voting group True the Vote. Mr. Kirk’s interview has garnered more than two million views online.

    “A sense of grievance is already in place,” said Kyle Weiss, a senior analyst at Graphika, a research firm that studies misinformation and fake social media accounts. The 2020 election “primed the public on a set of core narratives, which are reconstituting and evolving in 2022.”

    The security of ballot drop boxes, left; the search for documents at Mar-a-Lago, center; and the role of the F.B.I., right, are being widely discussed online in the context of the midterm elections.

    Calls to Action

    Online conversations about the midterm elections have also been dominated by calls for voters to act against apparent election fraud. In response, some people have organized citizen policing of voting, with stakeouts of polling stations and demands for information about voter rolls in their counties. Civil rights groups widely criticize poll watching, which they say can intimidate voters, particularly immigrants and voters at sites in communities of color.

    From July 27 to Aug. 3, the second-most-shared tweet about the midterms was a photo of people staking out a ballot box, with the message that “residents are determined to safeguard the drop boxes,” according to Zignal. Among those who shared it was Dinesh D’Souza, the creator of “2000 Mules,” who has 2.4 million followers on Twitter.

    In July, Seth Keshel, a retired Army captain who has challenged the result of the 2020 presidential election, shared a message on Telegram calling for “all-night patriot tailgate parties for EVERY DROP BOX IN AMERICA.” The post was viewed more than 70,000 times.

    Anger toward the F.B.I. is also reflected in midterm-related conversations, with a rise in calls to shut down or defund the agency after last month’s raid of Mr. Trump’s Florida residence, Mar-a-Lago.

    “Abolish FBI” became a trending hashtag across social media, mentioned 122,915 times on Twitter, Facebook, Reddit and news sites from July 1 to Aug. 30, up 1,990 percent from about 5,882 mentions in the two months before the 2020 election, according to Zignal.

    In a video posted on Twitter on Sept. 20, Representative Andrew Clyde, Republican of Georgia, implied that he and others would take action against the F.B.I. if Republicans won control of Congress in November.

    “You wait till we take the House back. You watch what happens to the F.B.I.,” he said in a video captured by a left-leaning online show, “The Undercurrent,” and shared more than 1,000 times on Twitter within a few hours. Mr. Clyde did not respond to a request for comment.

    Representative Marjorie Taylor Greene of Georgia, center, is among the politicians who have spread misinformation about gay and transgender people, a report said.

    Hot-Button Issues

    Some online conversations about the midterms are not directly related to voting. Instead, the discussions are centered on highly partisan issues — such as transgender rights — that candidates are campaigning on and that are widely regarded as motivating voters, leading to a surge of falsehoods.

    A month after Florida passed legislation that prohibits classroom discussion or instruction about sexual orientation and gender identity, which the Republican governor, Ron DeSantis, signed into law in March, the volume of tweets falsely linking gay and transgender individuals to pedophilia soared.

    Language claiming that gay people and transgender people were “grooming” children for abuse increased 406 percent on Twitter in April, according to a study by the Human Rights Campaign and the Center for Countering Digital Hate.

    The narrative was spread most widely by 10 far-right figures, including midterm candidates such as Representatives Lauren Boebert of Colorado and Marjorie Taylor Greene of Georgia, according to the report. Their tweets on “grooming” misinformation were viewed an estimated 48 million times, the report said.

    In May, Ms. Boebert tweeted: “A North Carolina preschool is using LGBT flag flashcards with a pregnant man to teach kids colors. We went from Reading Rainbow to Randy Rainbow in a few decades, but don’t dare say the Left is grooming our kids!” The tweet was shared nearly 2,000 times and liked nearly 10,000 times.

    Ms. Boebert and Ms. Greene did not respond to requests for comment.

    On Facebook and Instagram, 59 ads also promoted the narrative that the L.G.B.T.Q.+ community and allies were “grooming” children, the report found. Meta, the owner of Facebook and Instagram, accepted up to $24,987 for the ads, which were served to users over 2.1 million times, according to the report.

    Meta said it had removed several of the ads mentioned in the report.

    “The repeated pushing of ‘groomer’ narratives has resulted in a wider anti-L.G.B.T. moral panic that has been influencing state and federal legislation and is likely to be a significant midterm issue,” said David Thiel, the chief technical officer at the Stanford Internet Observatory, which studies online extremism and disinformation.

  • Social Media Companies Still Boost Election Fraud Claims, Report Says

    The major social media companies all say they are ready to deal with a torrent of misinformation surrounding the midterm elections in November.

    A report released on Monday, however, claimed that they continued to undermine the integrity of the vote by allowing election-related conspiracy theories to fester and spread.

    In the report, the Stern Center for Business and Human Rights at New York University said the social media companies still host and amplify “election denialism,” threatening to further erode confidence in the democratic process.

    The companies, the report argued, bear a responsibility for the false but widespread belief among conservatives that the 2020 election was fraudulent — and that the coming midterms could be, too. The report joins a chorus of warnings from officials and experts that the results in November could be fiercely, even violently, contested.

    “The malady of election denialism in the U.S. has become one of the most dangerous byproducts of social media,” the report warned, “and it is past time for the industry to do more to address it.”

    The major platforms — Facebook, Twitter, TikTok and YouTube — have all announced promises or initiatives to combat disinformation ahead of the 2022 midterms, saying they were committed to protecting the election process. But the report said those measures were ineffective, haphazardly enforced or simply too limited.

    Facebook, for example, announced that it would ban ads that called into question the legitimacy of the coming elections, but it exempted politicians from its fact-checking program. That, the report said, allows candidates and other influential leaders to undermine confidence in the vote by questioning ballot procedures or other rules.

    In the case of Twitter, an internal report released as part of a whistle-blower’s complaint from a former head of security, Peiter Zatko, disclosed that the company’s site integrity team had only two experts on misinformation.

    The New York University report, which incorporated responses from all the companies except YouTube, called for greater transparency in how companies rank, recommend and remove content. It also said they should enhance fact-checking efforts and remove provably untrue claims, not simply label them false or questionable.

    A spokeswoman for Twitter, Elizabeth Busby, said the company was undertaking a multifaceted approach to ensuring reliable information about elections. That includes efforts to “pre-bunk” false information and to “reduce the visibility of potentially misleading claims via labels.”

    In a statement, YouTube said it agreed with “many of the points” made in the report and had already carried out many of its recommendations.

    “We’ve already removed a number of videos related to the midterms for violating our policies,” the statement said, “and the most viewed and recommended videos and channels related to the election are from authoritative sources, including news channels.”

    TikTok did not respond to a request for comment.

    There are already signs that the integrity of the vote in November will be as contentious as it was in 2020, when President Donald J. Trump and some of his supporters refused to accept the outcome, falsely claiming widespread fraud.

    Inattention by social media companies in the interim has allowed what the report describes as a coordinated campaign to take root among conservatives claiming, again without evidence, that wholesale election fraud is being used to tip elections to Democrats.

    “Election denialism,” the report said, “was evolving in 2021 from an obsession with the former president’s inability to accept defeat into a broader, if equally baseless, attack on the patriotism of all Democrats, as well as non-Trump-loving Republicans, and legions of election administrators, many of them career government employees.”

  • Political Campaigns Flood Streaming Video With Custom Voter Ads

    The targeted political ads could spread some of the same voter-influence techniques that proliferated on Facebook to an even less regulated medium.Over the last few weeks, tens of thousands of voters in the Detroit area who watch streaming video services were shown different local campaign ads pegged to their political leanings.Digital consultants working for Representative Darrin Camilleri, a Democrat in the Michigan House who is running for State Senate, targeted 62,402 moderate, female — and likely pro-choice — voters with an ad promoting reproductive rights.The campaign also ran a more general video ad for Mr. Camilleri, a former public-school teacher, directed at 77,836 Democrats and Independents who have voted in past midterm elections. Viewers in Mr. Camilleri’s target audience saw the messages while watching shows on Lifetime, Vice and other channels on ad-supported streaming services like Samsung TV Plus and LG Channels.Although millions of American voters may not be aware of it, the powerful data-mining techniques that campaigns routinely use to tailor political ads to consumers on sites and apps are making the leap to streaming video. The targeting has become so precise that next door neighbors streaming the same true crime show on the same streaming service may now be shown different political ads — based on data about their voting record, party affiliation, age, gender, race or ethnicity, estimated home value, shopping habits or views on gun control.Political consultants say the ability to tailor streaming video ads to small swaths of viewers could be crucial this November for candidates like Mr. Camilleri who are facing tight races. In 2016, Mr. Camilleri won his first state election by just several hundred votes.“Very few voters wind up determining the outcomes of close elections,” said Ryan Irvin, the co-founder of Change Media Group, the agency behind Mr. Camilleri’s ad campaign. 
“Very early in an election cycle, we can pull from the voter database a list of those 10,000 voters, match them on various platforms and run streaming TV ads to just those 10,000 people.”Representative Darrin Camilleri, a member of the Michigan House who is running for State Senate, targeted local voters with streaming video ads before he campaigned in their neighborhoods. Emily Elconin for The New York TimesTargeted political ads on streaming platforms — video services delivered via internet-connected devices like TVs and tablets — seemed like a niche phenomenon during the 2020 presidential election. Two years later, streaming has become the most highly viewed TV medium in the United States, according to Nielsen.Savvy candidates and advocacy groups are flooding streaming services with ads in an effort to reach cord-cutters and “cord nevers,” people who have never watched traditional cable or broadcast TV.The trend is growing so fast that political ads on streaming services are expected to generate $1.44 billion — or about 15 percent — of the projected $9.7 billion on ad spending for the 2022 election cycle, according to a report from AdImpact, an ad tracking company. That would for the first time put streaming on par with political ad spending on Facebook and Google.The State of the 2022 Midterm ElectionsWith the primaries over, both parties are shifting their focus to the general election on Nov. 8.Midterm Data: Could the 2020 polling miss repeat itself? Will this election cycle really be different? Nate Cohn, The Times’s chief political analyst, looks at the data in his new newsletter.Republicans’ Abortion Struggles: Senator Lindsey Graham’s proposed nationwide 15-week abortion ban was intended to unite the G.O.P. before the November elections. 
But it has only exposed the party’s divisions.Democrats’ Dilemma: The party’s candidates have been trying to signal their independence from the White House, while not distancing themselves from President Biden’s base or agenda.The quick proliferation of the streaming political messages has prompted some lawmakers and researchers to warn that the ads are outstripping federal regulation and oversight.For example, while political ads running on broadcast and cable TV must disclose their sponsors, federal rules on political ad transparency do not specifically address streaming video services. Unlike broadcast TV stations, streaming platforms are also not required to maintain public files about the political ads they sold.The result, experts say, is an unregulated ecosystem in which streaming services take wildly different approaches to political ads.“There are no rules over there, whereas, if you are a broadcaster or a cable operator, you definitely have rules you have to operate by,” said Steve Passwaiter, a vice president at Kantar Media, a company that tracks political advertising.The boom in streaming ads underscores a significant shift in the way that candidates, party committees and issue groups may target voters. For decades, political campaigns have blanketed local broadcast markets with candidate ads or tailored ads to the slant of cable news channels. With such bulk media buying, viewers watching the same show at the same time as their neighbors saw the same political messages.But now campaigns are employing advanced consumer-profiling and automated ad-buying services to deliver different streaming video messages, tailored to specific voters.“In the digital ad world, you’re buying the person, not the content,” said Mike Reilly, a partner at MVAR Media, a progressive political consultancy that creates ad campaigns for candidates and advocacy groups.Targeted political ads are being run on a slew of different ad-supported streaming channels. 
Some smart TV manufacturers air the political ads on proprietary streaming platforms, like Samsung TV Plus and LG Channels. Viewers watching ad-supported streaming channels via devices like Roku may also see targeted political ads.

Policies on political ad targeting vary. Amazon prohibits political party and candidate ads on its streaming services. YouTube TV and Hulu allow political candidates to target ads based on viewers’ ZIP code, age and gender, but they prohibit political ad targeting by voting history or party affiliation.

Roku, which maintains a public archive of some political ads running on its platform, declined to comment on its ad-targeting practices.

Samsung and LG, which has publicly promoted its voter-targeting services for political campaigns, did not respond to requests for comment. Netflix declined to comment about its plans for an ad-supported streaming service.

Targeting political ads on streaming services can involve more invasive data-mining than the consumer-tracking techniques typically used to show people online ads for sneakers.

Political consulting firms can buy profiles on more than 200 million voters, including details on an individual’s party affiliation, voting record, political leanings, education level, income and consumer habits. Campaigns may employ that data to identify voters concerned about a specific issue — like guns or abortion — and hone video messages to them.

In addition, internet-connected TV platforms like Samsung, LG and Roku often use data-mining technology, called “automated content recognition,” to analyze snippets of the videos people watch and segment viewers for advertising purposes.

Some streaming services and ad tech firms allow political campaigns to provide lists of specific voters to whom they wish to show ads. To serve those messages, ad tech firms employ precise delivery techniques — like using IP addresses to identify devices in a voter’s household.
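The matching step described above, in which an ad tech firm joins a campaign's voter list to the devices behind a household's IP address, can be sketched roughly as follows. This is a minimal, hypothetical illustration: the function, field names and data are invented for clarity, not drawn from any vendor's actual system.

```python
# Hypothetical sketch of IP-based voter/device matching.
# Real ad tech firms use proprietary identity graphs and far
# larger voter files; this only illustrates the join described above.

def match_voters_to_devices(voter_list, device_graph):
    """Return the ad-targetable devices for each voter on a campaign's list.

    voter_list: list of dicts, each with a 'voter_id' and 'household_ip'
    device_graph: dict mapping an IP address to devices observed behind it
    """
    matched = {}
    for voter in voter_list:
        devices = device_graph.get(voter["household_ip"], [])
        if devices:
            matched[voter["voter_id"]] = devices
    return matched

voters = [
    {"voter_id": "V001", "household_ip": "203.0.113.7"},
    {"voter_id": "V002", "household_ip": "198.51.100.4"},
]
device_graph = {
    "203.0.113.7": ["smart_tv", "tablet", "phone"],
}

# V001's household IP is in the device graph, so every screen in that
# household can be shown the ad; V002's IP is unknown, so no match.
targets = match_voters_to_devices(voters, device_graph)
```

The point of the sketch is that one successful IP match fans out to every device in the household, which is what lets a single voter-file record drive ads across a TV, a tablet and a phone at once.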
The device mapping allows political campaigns to aim ads at certain voters whether they are streaming on internet-connected TVs, tablets, laptops or smartphones.

Sten McGuire, an executive at a4 Advertising, presented a webinar in March announcing a partnership to sell political ads on LG Channels. New York Times

Using IP addresses, “we can intercept voters across the nation,” Sten McGuire, an executive at a4 Advertising, said in a webinar in March announcing a partnership to sell political ads on LG Channels. His company’s ad targeting worked, Mr. McGuire added, “whether you are looking to reach new cord cutters or ‘cord nevers’ streaming their favorite content, targeting Spanish-speaking voters in swing states, reaching opinion elites and policy influencers or members of Congress and their staff.”

Some researchers caution that targeted video ads could spread some of the same voter-influence techniques that have proliferated on Facebook to a new, and even less regulated, medium.

Facebook and Google, the researchers note, instituted some restrictions on political ad targeting after Russian operatives used digital platforms to try to disrupt the 2016 presidential election. With such restrictions in place, political advertisers on Facebook, for instance, should no longer be able to target users interested in Malcolm X or Martin Luther King with paid messages urging them not to vote.

Facebook and Google have also created public databases that enable people to view political ads running on the platforms.

But many streaming services lack such targeting restrictions and transparency measures. The result, these experts say, is an opaque system of political influence that runs counter to basic democratic principles.

“This occupies a gray area that’s not getting as much scrutiny as ads running on social media,” said Becca Ricks, a senior researcher at the Mozilla Foundation who has studied the political ad policies of popular streaming services.
“It creates an unfair playing field where you can precisely target, and change, your messaging based on the audience — and do all of this without some level of transparency.”

Some political ad buyers are shying away from more restricted online platforms in favor of more permissive streaming services.

“Among our clients, the percentage of budget going to social channels, and on Facebook and Google in particular, has been declining,” said Grace Briscoe, an executive overseeing candidate and political issue advertising at Basis Technologies, an ad tech firm. “The kinds of limitations and restrictions that those platforms have put on political ads has disinclined clients to invest as heavily there.”

Senators Amy Klobuchar and Mark Warner introduced the Honest Ads Act, which would require online political ads to include disclosures similar to those on broadcast TV ads. Al Drago for The New York Times

Members of Congress have introduced a number of bills that would curb voter targeting or require digital ads to adhere to the same rules as broadcast ads. But the measures have not yet been enacted.

Amid the widespread secrecy of the ad-targeting industry, Mr. Camilleri, the member of the Michigan House running for State Senate, was unusually forthcoming about how he was using streaming services to try to engage specific swaths of voters.

In prior elections, he said, he sent postcards introducing himself to voters in neighborhoods where he planned to make campaign stops. During this year’s primaries, he updated the practice by running streaming ads introducing himself to certain households a week or two before he planned to knock on their doors.

“It’s been working incredibly well because a lot of people will say, ‘Oh, I’ve seen you on TV,’” Mr. Camilleri said, noting that many of his constituents did not appear to understand the ads were shown specifically to them and not to a general broadcast TV audience.
“They don’t differentiate” between TV and streaming, he added, “because you’re watching YouTube on your television now.”