More stories

  • Five Takeaways From The Times’s Investigation Into Child Influencers

    Instagram does not allow children under 13 to have accounts, but parents are allowed to run them, and many do so for daughters who aspire to be social media influencers.

    What often starts as a parent’s effort to jump-start a child’s modeling career, or win favors from clothing brands, can quickly descend into a dark underworld dominated by adult men, many of whom openly admit on other platforms to being sexually attracted to children, an investigation by The New York Times found.

    Thousands of so-called mom-run accounts examined by The Times offer disturbing insights into how social media is reshaping childhood, especially for girls, with direct parental encouragement and involvement.

    Nearly one in three preteens list influencing as a career goal, and 11 percent of those born in Generation Z, between 1997 and 2012, describe themselves as influencers. But health and technology experts have recently cautioned that social media presents a “profound risk of harm” for girls. Constant comparisons to their peers and face-altering filters are driving negative feelings of self-worth and promoting objectification of their bodies, researchers found.

    The pursuit of online fame, particularly through Instagram, has supercharged the often toxic phenomenon, The Times found, encouraging parents to commodify their daughters’ images. These are some key findings.

  • TikTok Is Subject of E.U. Inquiry Over ‘Addictive Design’

    The European Commission said it would investigate whether the site violated online laws aimed at protecting children from harmful content.

    European Union regulators on Monday opened an investigation into TikTok over potential breaches of online content rules aimed at protecting children, saying the popular social media platform’s “addictive design” risked exposing young people to harmful content.

    The move widens a preliminary investigation conducted in recent months into whether TikTok, owned by the Chinese company ByteDance, violated a new European law, the Digital Services Act, which requires large social media companies to stop the spread of harmful material. Under the law, companies can be penalized up to 6 percent of their global revenues.

    TikTok has been under the scrutiny of E.U. regulators for months. The company was fined roughly $370 million in September for having weak safeguards to protect the personal information of children using the platform. Policymakers in the United States have also been wrestling with how to regulate the platform for harmful content and data privacy, concerns amplified by TikTok’s links to China.

    The European Commission said it was particularly focused on how the company was managing the risk of “negative effects stemming” from the site’s design, including algorithmic systems that it said “may stimulate behavioral addictions” or “create so-called ‘rabbit hole effects,’” where a user is pulled further and further into the site’s content. Those risks could potentially compromise a person’s “physical and mental well-being,” the commission said.

    “The safety and well-being of online users in Europe is crucial,” Margrethe Vestager, the European Commission’s executive vice president overseeing digital policy, said in a statement. “TikTok needs to take a close look at the services they offer and carefully consider the risks that they pose to their users — young as well as old.”

  • Terrorists Are Paying for Check Marks on X, Report Says

    The report shows that X has accepted payments for subscriptions from entities barred from doing business in the United States, a potential violation of sanctions.

    X, the social media platform owned by Elon Musk, is potentially violating U.S. sanctions by accepting payments for subscription accounts from terrorist organizations and other groups barred from doing business in the country, according to a new report.

    The report, by the Tech Transparency Project, a nonprofit focused on accountability for large technology companies, shows that X, formerly known as Twitter, has taken payments from accounts that include Hezbollah leaders, Houthi groups, and state-run media outlets in Iran and Russia. The subscriptions, which cost $8 a month, offer users a blue check mark, once limited to verified users like celebrities, and better promotion by X’s algorithm, among other perks.

    The U.S. Treasury Department maintains a list of entities that have been placed under sanctions, and while X’s official terms of service forbid people and organizations on the list to make payments on the platform, the report found 28 accounts that had the blue check mark.

    “We were surprised to find that X was providing premium services to a wide range of groups the U.S. has sanctioned for terrorism and other activities that harm its national security,” said Katie Paul, the director of the Tech Transparency Project. “It’s yet another sign that X has lost control of its platform.”

    X and Mr. Musk did not respond to a request for comment. Mr. Musk has said that he wants X to be a haven for free speech and that he will remove only illegal content.

    Since Mr. Musk’s acquisition of Twitter in 2022, the company has made drastic changes to the way it does business, in some cases spurning advertising in favor of subscription dollars. It has also restored thousands of barred accounts and rolled back rules that once governed the site.

  • American Firms Invested $1 Billion in Chinese Chips, Lawmakers Find

    A congressional investigation determined that U.S. funding helped fuel the growth of a sector now viewed by Washington as a security threat.

    A congressional investigation has determined that five American venture capital firms invested more than $1 billion in China’s semiconductor industry since 2001, fueling the growth of a sector that the United States government now regards as a national security threat.

    Funds supplied by the five firms — GGV Capital, GSR Ventures, Qualcomm Ventures, Sequoia Capital and Walden International — went to more than 150 Chinese companies, according to the report, which was released Thursday by both Republicans and Democrats on the House Select Committee on the Chinese Communist Party.

    The investments included roughly $180 million that went to Chinese firms that the committee said directly or indirectly support Beijing’s military. That includes companies that the U.S. government has said provide chips for China’s military research, equipment and weapons, such as Semiconductor Manufacturing International Corporation, or SMIC, China’s largest chipmaker.

    The report by the House committee focuses on investments made before the Biden administration imposed sweeping restrictions aimed at cutting off China’s access to American financing. It does not allege any illegality.

    Last August, the Biden administration banned U.S. venture capital and private equity firms from investing in Chinese quantum computing, artificial intelligence and advanced semiconductors. It has also imposed worldwide limits on sales of advanced chips and chip-making machines to China, arguing that these technologies could help advance the capabilities of the Chinese military and spy agencies.

    Since it was established a year ago, the committee has called for raising tariffs on China, targeted Ford Motor and others for doing business with Chinese companies, and spotlighted forced labor concerns involving Chinese shopping sites.

  • Google Joins Effort to Help Spot Content Made With A.I.

    The tech company’s plan is similar to one announced two days earlier by Meta, another Silicon Valley giant.

    Google, whose work in artificial intelligence helped make A.I.-generated content far easier to create and spread, now wants to ensure that such content is traceable as well.

    The tech giant said on Thursday that it was joining an effort to develop credentials for digital content, a sort of “nutrition label” that identifies when and how a photograph, a video, an audio clip or another file was produced or altered, including with A.I. The company will collaborate with companies like Adobe, the BBC, Microsoft and Sony to fine-tune the technical standards.

    The announcement follows a similar promise announced on Tuesday by Meta, which like Google has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels that identified such material.

    Google, which spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate the digital certification into its own products and services, though it did not specify its timing or scope. Its Bard chatbot is connected to some of the company’s most popular consumer services, such as Gmail and Docs. On YouTube, which Google owns and which will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating on current events in voices powered by text-to-speech services.

    Recognizing where online content originates and how it changes is a high priority for lawmakers and tech watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic images and audio produced by artificial intelligence and unreliable A.I. detection tools caused people to further doubt the authenticity of things they saw and heard on the internet.

    Configuring digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards have been supported by news organizations such as The New York Times as well as by camera manufacturers, banks and advertising agencies.

  • Meta Calls for Industry Effort to Label A.I.-Generated Content

    The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.

    Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.

    On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.

    The standards could allow social media companies to quickly identify content generated with A.I. that has been posted to their platforms and allow them to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that allow people to quickly and easily create artificial posts.

    “While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview. He added that he hoped this effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content was artificial so that it would be simpler for all of them to recognize it.

    As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden that urged people not to vote in a recent primary.

  • Snap Lays Off 10% of Its Work Force

    The company laid off more than 500 of its employees on Monday, or about 10 percent of its global work force.

    Snap, the parent of messaging app Snapchat, on Monday said it would lay off more than 500 employees, joining other tech companies in a wave of new cost-cutting measures.

    The layoffs amount to 10 percent of its global work force; the majority will occur in the first quarter of 2024. “We have made the difficult decision to restructure our team,” the company said in a securities filing, adding that it would take pretax charges of $55 million to $75 million, primarily for severance and related costs.

    Amazon, Google and Microsoft have announced layoffs this year, following tens of thousands across the sector last year. Snap laid off a small number of employees on Friday, Business Insider reported.

    The company is set to report earnings on Tuesday. Cost-cutting measures at other companies have buoyed stock prices. Snap shares were trading about 2 percent lower before the market opened on Monday.

    Like other social media companies reliant on advertising, Snap has had a rough couple of years. Changes by Apple to its privacy policy in 2021 made it tougher for advertisers to track users, something that hurt Snap and also had a heavy effect on Meta, which owns Facebook and Instagram.

    Snapchat, which has more than 400 million daily active users, experienced a revenue decline in the first two quarters of last year and only 5 percent growth in its most recent quarter, which ended Sept. 30.

    In 2022, Snap cut 20 percent of its work force, or 1,300 jobs, and also discontinued at least six products. It let go nearly 20 product managers in November and in September shut a division that sells augmented reality products to businesses, laying off 170 people.

  • Matisyahu Salomon, Rabbi Who Warned of the Internet’s Dangers, Dies at 86

    As a supervisor at America’s largest yeshiva, he wielded influence across the world of ultra-Orthodox Jews. He feared the internet jeopardized the observance of Jewish customs.

    Rabbi Matisyahu Salomon, a longtime spiritual counselor at America’s largest yeshiva who spearheaded a crusade to warn observant Jews of the risks posed by the internet, died on Jan. 2 in Lakewood, N.J. He was 86.

    The death was confirmed by Rabbi Avi Shafran, public affairs director of Agudath Israel of America, the umbrella organization for numerous Hasidic and other ultra-Orthodox groups. He said Rabbi Salomon had been ill for many years.

    Rabbi Salomon’s title during his three decades at Beth Medrash Govoha, a religious school in Lakewood whose enrollment of almost 9,000 students is exceeded only by the Mir Yeshiva in Israel, was dean of students. But he achieved far more influence than the title might suggest, through weekly lectures and personal encounters that guided thousands of young men on ethical and pious conduct. Many of his acolytes became leaders of the teeming haredi, or ultra-Orthodox, communities in Brooklyn, England and Israel, as well as in smaller enclaves around the world.

    He capitalized on that influence in a campaign he led a decade ago to warn observant Jews that new technologies were threatening observance of the laws, traditions and principles that are the backbone of their faith.

    Ultra-Orthodox Jews had been as enthusiastic about the benefits of computers, the internet and smartphones as their non-Jewish and more secular neighbors. But it became apparent to Rabbi Salomon and other community leaders that these new technologies could also be dangerous, beguiling pious Jews with videos, images and temporal content that would distract them from their family life, daily religious obligations and pursuits like Torah study.