More stories

  • Instagram and Facebook Subscriptions Are a New Focus of Child Safety Suit

    New Mexico’s attorney general has accused Meta of not protecting children from sexual predators on its platforms. He now wants to know how it polices subscribers to accounts featuring children.

    The New Mexico attorney general, who last year sued Meta alleging that it did not protect children from sexual predators and had made false claims about its platforms’ safety, announced Monday that his office would examine how the company’s paid-subscription services attract predators.

    Attorney General Raúl Torrez said he had formally requested documentation from the social media company about subscriptions on Facebook and Instagram, which are frequently available on children’s accounts run by parents.

    Instagram does not allow users under 13, but accounts that focus entirely on children are permitted as long as they are managed by an adult. The New York Times published an investigation on Thursday into girl influencers on the platform, reporting that the so-called mom-run accounts charge followers up to $19.99 a month for additional photos as well as chat sessions and other extras.

    The Times found that adult men subscribe to the accounts, including some who actively participate in forums where people discuss the girls in sexual terms.

    “This deeply disturbing pattern of conduct puts children at risk — and persists despite a wave of lawsuits and congressional investigations,” Mr. Torrez said in a statement.

    Mr. Torrez filed a complaint in December that accused Meta of enabling harmful activity between adults and minors on Facebook and Instagram and failing to detect and remove such activity when it was reported. The allegations were based, in part, on findings from accounts Mr. Torrez’s office created, including one for a fictitious 14-year-old girl that received an offer of $180,000 to appear in a pornographic video.

  • Supreme Court to Decide How the First Amendment Applies to Social Media

    Challenges to laws in Florida and Texas meant to protect conservative viewpoints are likely to yield a major constitutional ruling on tech platforms’ free speech rights.

    The most important First Amendment cases of the internet era, to be heard by the Supreme Court on Monday, may turn on a single question: Do platforms like Facebook, YouTube, TikTok and X most closely resemble newspapers or shopping centers or phone companies?

    The two cases arrive at the court garbed in politics, as they concern laws in Florida and Texas aimed at protecting conservative speech by forbidding leading social media sites from removing posts based on the views they express.

    But the outsize question the cases present transcends ideology. It is whether tech platforms have free speech rights to make editorial judgments. Picking the apt analogy from the court’s precedents could decide the matter, but none of the available ones is a perfect fit.

    If the platforms are like newspapers, they may publish what they want without government interference. If they are like private shopping centers open to the public, they may be required to let visitors say what they like. And if they are like phone companies, they must transmit everyone’s speech.

    “It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” Justice Samuel A. Alito Jr. wrote in a 2022 dissent when one of the cases briefly reached the Supreme Court.

    Supporters of the state laws say they foster free speech, giving the public access to all points of view. Opponents say the laws trample on the platforms’ own First Amendment rights and would turn them into cesspools of filth, hate and lies.

    One contrarian brief, from liberal professors, urged the justices to uphold the key provision of the Texas law despite the harm they said it would cause.

  • A Marketplace of Girl Influencers Managed by Moms and Stalked by Men

    This box represents a real photo of a 9-year-old girl in a golden bikini lounging on a towel. The photo was posted on her Instagram account, which is run by adults. Comments left on the photo included:

    “🔥🔥🔥 wooowww”
    “Mama mia ❤️❤️🥰💯🤗”
    “Great body😍🔥❤️”
    “Love 😍😍😍😍”
    “Perfect bikini body ❤️❤️❤️❤️❤️😋😋😋😍😍😍🔥🔥🔥🔥🔥”
    “Mmmmmmmmm take that bikini off 😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍😍🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️❤️”
    “You’re sooooo hot ❤️🤗💋🌺🌹🌹💯”

    […]

  • Five Takeaways From The Times’s Investigation Into Child Influencers

    Instagram does not allow children under 13 to have accounts, but parents are allowed to run them — and many do so for daughters who aspire to be social media influencers.

    What often starts as a parent’s effort to jump-start a child’s modeling career, or win favors from clothing brands, can quickly descend into a dark underworld dominated by adult men, many of whom openly admit on other platforms to being sexually attracted to children, an investigation by The New York Times found.

    Thousands of so-called mom-run accounts examined by The Times offer disturbing insights into how social media is reshaping childhood, especially for girls, with direct parental encouragement and involvement.

    Nearly one in three preteens list influencing as a career goal, and 11 percent of those born in Generation Z, between 1997 and 2012, describe themselves as influencers. But health and technology experts have recently cautioned that social media presents a “profound risk of harm” for girls. Constant comparisons to their peers and face-altering filters are driving negative feelings of self-worth and promoting objectification of their bodies, researchers found.

    The pursuit of online fame, particularly through Instagram, has supercharged the often toxic phenomenon, The Times found, encouraging parents to commodify their daughters’ images. These are some key findings.

  • Meta Calls for Industry Effort to Label A.I.-Generated Content

    The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.

    Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.

    On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content was generated using artificial intelligence.

    The standards could allow social media companies to quickly identify A.I.-generated content that has been posted to their platforms and to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe and Midjourney, among others that offer tools allowing people to quickly and easily create artificial posts.

    “While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview.

    He added that he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content was artificial, so that it would be simpler for all of them to recognize it.

    As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden urging people not to vote in a recent primary.

  • Millennials Flock to Instagram to Share Pictures of Themselves at 21

    The generation that rose with smartphones and social media had a chance to look back this week.

    Most of the photos are slightly faded. The hairlines fuller. Some feature braces. Old friends. Sorority squats and college sweethearts. Caps and gowns. Laments about skinny jeans and other long-lost trends.

    This week, Instagram stories the world over have been awash with nostalgic snapshots of youthful idealism — there have been at least 3.6 million shares, according to a representative for Meta — as people post photos of themselves based on the prompt: “Everyone tap in. Let’s see you at 21.”

    The first post came from Damian Ruff, a 43-year-old Whole Foods employee based in Mesa, Ariz. On Jan. 23, Mr. Ruff shared an image from a family trip to Mexico, wearing a tiny sombrero and drinking a Dos Equis. His mother sent him the photo, Mr. Ruff said in an interview. It was the first time they had shared a beer together after he turned 21.

    “Not much has changed other than my gray hair. I see that person and go, ‘Ugh, you are such a child and have no idea,’” he said.

    Mr. Ruff created the shareable story template with the picture — a feature that Instagram introduced in 2021 and expanded in December — and watched it take off.

    “The amount of people that have been messaging me and adding me on Instagram out of nowhere, like people from around the world, has been crazy,” Mr. Ruff said.

  • Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.

    It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

    Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.

    But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.

    These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

    ‘Filter bubbles’ and democracy

    Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks.
    In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to the one used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

    But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.

    The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

    The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber Is Destroying Democracy,” Wired claimed a few weeks later.

    Changing information doesn’t change minds

    But without rigorous testing, it’s been hard to figure out whether the filter bubble effect was real.
    The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

    Meta gave the researchers unprecedented access during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and to conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions, and Meta could veto only requests that would violate users’ privacy.

    Taken together, the studies suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation,” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

    “I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right.
    And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

    When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

    Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative.

    But it also has implications that are far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views. Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research.

    “From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship. “We’re seeing null effects on both factual views and attitudes.”

    As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions.
    But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.

  • These 2024 Candidates Have Signed Up For Threads, Meta’s Twitter Alternative

    The bulk of the G.O.P. field is there, with some notable holdouts: Donald J. Trump, the front-runner, and his top rival, Ron DeSantis.

    While the front-runners in the 2024 presidential race have yet to show up on Threads, the new Instagram app aimed at rivaling Twitter, many of the long-shot candidates were quick to take advantage of the platform’s rapidly growing audience.

    “Buckle up and join me on Threads!” Senator Tim Scott, Republican of South Carolina, wrote in a caption accompanying a selfie with others in a car that he posted on Thursday — by that morning, the app had already been downloaded more than 30 million times, putting it on track to be the most rapidly downloaded app ever.

    But President Biden, former President Donald J. Trump and Gov. Ron DeSantis of Florida remain absent from the platform so far.

    And that may be just fine with Adam Mosseri, the head of Instagram, who told The Times’s “Hard Fork” podcast on Thursday that he does not expect Threads to become a destination for news or politics, arenas where Twitter has dominated the public discourse.

    “I don’t want to lean into hard news at all. I don’t think there’s much that we can or should do to discourage it on Instagram or in Threads, but I don’t think we’ll do anything to encourage it,” Mr. Mosseri said.

    The app, released on Wednesday, was presented as an alternative to Twitter, with which many users became disillusioned after it was purchased by Elon Musk in October.

    Lawyers for Twitter threatened legal action against Meta, the company that owns Instagram, Facebook and Threads, accusing it of using trade secrets from former Twitter employees to build the new platform. Mr. Musk tweeted on Thursday, “Competition is fine, cheating is not.”

    Mr. Trump has not been active on Twitter recently either, despite Mr. Musk’s lifting of the ban placed on Mr. Trump’s account after the Jan. 6, 2021, attack on the Capitol. The former president has instead kept his focus on Truth Social, the right-wing social network he launched in 2021.

    But many of the G.O.P. candidates have begun making their pitches on Threads.

    Nikki Haley, the former United Nations ambassador and former governor of South Carolina, made a video compilation of her campaign events her first post on the app. “Strong and proud. Not weak and woke,” she wrote on Thursday. “That is the America I see.”

    Gov. Doug Burgum of North Dakota posted footage of his July 4 campaign appearances in New Hampshire, alongside a message on Wednesday that said he and his wife were “looking forward to continuing our time here.”

    And Will Hurd, a former Texas congressman, made a fund-raising pitch to viewers on Wednesday. “Welcome to Threads,” he said in a video posted on the app. “I’m looking forward to continuing the conversation here with you on the issues, my candidacy, where I’ll be and everything our campaign has going on.”

    Francis Suarez, the Republican mayor of Miami, and Larry Elder, a conservative talk radio host, also shared their campaign pitches on the platform, as did two candidates running in the Democratic primary: Robert F. Kennedy Jr., a leading vaccine skeptic, and Marianne Williamson, a self-help author. Even Cornel West, a professor and progressive activist running as a third-party candidate, has posted.

    Former Vice President Mike Pence and Vivek Ramaswamy, a tech entrepreneur, also established accounts — but have yet to post.

    Among the holdouts: former Gov. Asa Hutchinson of Arkansas and former Gov. Chris Christie of New Jersey, both Republicans.

    The White House has not said whether Mr. Biden will join Threads. Andrew Bates, a White House spokesman, said on Thursday that the administration would “keep you all posted if we do.”