More stories

  • Terrorists Are Paying for Check Marks on X, Report Says

    The report shows that X has accepted payments for subscriptions from entities barred from doing business in the United States, a potential violation of sanctions.

    X, the social media platform owned by Elon Musk, is potentially violating U.S. sanctions by accepting payments for subscription accounts from terrorist organizations and other groups barred from doing business in the country, according to a new report.

    The report, by the Tech Transparency Project, a nonprofit focused on accountability for large technology companies, shows that X, formerly known as Twitter, has taken payments from accounts that include Hezbollah leaders, Houthi groups, and state-run media outlets in Iran and Russia. The subscriptions, which cost $8 a month, offer users a blue check mark — once limited to verified users like celebrities — and better promotion by X’s algorithm, among other perks.

    The U.S. Treasury Department maintains a list of entities that have been placed under sanctions, and while X’s official terms of service forbid people and organizations on the list to make payments on the platform, the report found 28 accounts that had the blue check mark.

    “We were surprised to find that X was providing premium services to a wide range of groups the U.S. has sanctioned for terrorism and other activities that harm its national security,” said Katie Paul, the director of the Tech Transparency Project. “It’s yet another sign that X has lost control of its platform.”

    X and Mr. Musk did not respond to a request for comment. Mr. Musk has said that he wants X to be a haven for free speech and that he will remove only illegal content.

    Since Mr. Musk’s acquisition of Twitter in 2022, the company has made drastic changes to the way it does business — in some cases spurning advertising in favor of subscription dollars. It has also restored thousands of barred accounts and rolled back rules that once governed the site.

  • American Firms Invested $1 Billion in Chinese Chips, Lawmakers Find

    A Congressional investigation determined that U.S. funding helped fuel the growth of a sector now viewed by Washington as a security threat.

    A congressional investigation has determined that five American venture capital firms invested more than $1 billion in China’s semiconductor industry since 2001, fueling the growth of a sector that the United States government now regards as a national security threat.

    Funds supplied by the five firms — GGV Capital, GSR Ventures, Qualcomm Ventures, Sequoia Capital and Walden International — went to more than 150 Chinese companies, according to the report, which was released Thursday by both Republicans and Democrats on the House Select Committee on the Chinese Communist Party.

    The investments included roughly $180 million that went to Chinese firms that the committee said directly or indirectly support Beijing’s military. That includes companies that the U.S. government has said provide chips for China’s military research, equipment and weapons, such as Semiconductor Manufacturing International Corporation, or SMIC, China’s largest chipmaker.

    The report by the House committee focuses on investments made before the Biden administration imposed sweeping restrictions aimed at cutting off China’s access to American financing. It does not allege any illegality.

    Last August, the Biden administration banned U.S. venture capital and private equity firms from investing in Chinese quantum computing, artificial intelligence and advanced semiconductors. It has also imposed worldwide limits on sales of advanced chips and chip-making machines to China, arguing that these technologies could help advance the capabilities of the Chinese military and spy agencies.

    Since it was established a year ago, the committee has called for raising tariffs on China, targeted Ford Motor and others for doing business with Chinese companies, and spotlighted forced labor concerns involving Chinese shopping sites.

  • Google Joins Effort to Help Spot Content Made With A.I.

    The tech company’s plan is similar to one announced two days earlier by Meta, another Silicon Valley giant.

    Google, whose work in artificial intelligence helped make A.I.-generated content far easier to create and spread, now wants to ensure that such content is traceable as well.

    The tech giant said on Thursday that it was joining an effort to develop credentials for digital content, a sort of “nutrition label” that identifies when and how a photograph, a video, an audio clip or another file was produced or altered — including with A.I. The company will collaborate with companies like Adobe, the BBC, Microsoft and Sony to fine-tune the technical standards.

    The announcement follows a similar promise made on Tuesday by Meta, which like Google has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels that identified such material.

    Google, which spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate the digital certification into its own products and services, though it did not specify its timing or scope. Its Bard chatbot is connected to some of the company’s most popular consumer services, such as Gmail and Docs. On YouTube, which Google owns and which will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating on current events in voices powered by text-to-speech services.

    Recognizing where online content originates and how it changes is a high priority for lawmakers and tech watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic images and audio produced by artificial intelligence and unreliable A.I. detection tools caused people to further doubt the authenticity of things they saw and heard on the internet.

    Configuring digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards have been supported by news organizations such as The New York Times as well as by camera manufacturers, banks and advertising agencies.

  • Meta Calls for Industry Effort to Label A.I.-Generated Content

    The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.

    Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today.

    On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material that would signal that the content was generated using artificial intelligence.

    The standards could allow social media companies to quickly identify content generated with A.I. that has been posted to their platforms and to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools that allow people to quickly and easily create artificial posts.

    “While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview.

    He added that he hoped this effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content was artificial so that it would be simpler for all of them to recognize it.

    As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden that urged people not to vote in a recent primary.

  • Snap Lays Off 10% of Its Work Force

    The company laid off more than 500 of its employees on Monday, or about 10 percent of its global work force.

    Snap, the parent of the messaging app Snapchat, said on Monday that it would lay off more than 500 employees, joining other tech companies in a wave of new cost-cutting measures.

    The layoffs amount to 10 percent of its global work force; the majority will occur in the first quarter of 2024. “We have made the difficult decision to restructure our team,” the company said in a securities filing, adding that it would take pretax charges of $55 million to $75 million, primarily for severance and related costs.

    Amazon, Google and Microsoft have announced layoffs this year, following tens of thousands of job cuts across the sector last year. Snap laid off a small number of employees on Friday, Business Insider reported.

    The company is set to report earnings on Tuesday. Cost-cutting measures at other companies have buoyed stock prices. Snap shares were trading about 2 percent lower before the market opened on Monday.

    Like other social media companies reliant on advertising, Snap has had a rough couple of years. Changes by Apple to its privacy policy in 2021 made it tougher for advertisers to track users — something that hurt Snap and also had a heavy effect on Meta, which owns Facebook and Instagram.

    Snapchat, which has more than 400 million daily active users, experienced a revenue decline in the first two quarters of last year and only 5 percent growth in its most recent quarter, which ended Sept. 30.

    In 2022, Snap cut 20 percent of its work force, or 1,300 jobs, and discontinued at least six products. It let go nearly 20 product managers in November and in September shut down a division that sells augmented reality products to businesses, laying off 170 people.

  • Matisyahu Salomon, Rabbi Who Warned of the Internet’s Dangers, Dies at 86

    As a supervisor at America’s largest yeshiva, he wielded influence across the world of ultra-Orthodox Jews. He feared the internet jeopardized the observance of Jewish customs.

    Rabbi Matisyahu Salomon, a longtime spiritual counselor at America’s largest yeshiva who spearheaded a crusade to warn observant Jews of the risks posed by the internet, died on Jan. 2 in Lakewood, N.J. He was 86.

    The death was confirmed by Rabbi Avi Shafran, public affairs director of Agudath Israel of America, the umbrella organization for numerous Hasidic and other ultra-Orthodox groups. He said Rabbi Salomon had been ill for many years.

    Rabbi Salomon’s title during his three decades at Beth Medrash Govoha, a religious school in Lakewood whose enrollment of almost 9,000 students is exceeded only by the Mir Yeshiva in Israel, was dean of students. But he achieved far more influence than the title might suggest, through weekly lectures and personal encounters that guided thousands of young men on ethical and pious conduct.

    Many of his acolytes became leaders of the teeming haredi, or ultra-Orthodox, communities in Brooklyn, England and Israel, as well as in smaller enclaves around the world.

    He capitalized on that influence in a campaign he led a decade ago to warn observant Jews that new technologies were threatening observance of the laws, traditions and principles that are the backbone of their faith.

    Ultra-Orthodox Jews had been as enthusiastic about the benefits of computers, the internet and smartphones as their non-Jewish and more secular neighbors. But it became apparent to Rabbi Salomon and other community leaders that these new technologies could also be dangerous, beguiling pious Jews with videos, images and temporal content that would distract them from their family life, daily religious obligations and pursuits like Torah study.

  • Millennials Flock to Instagram to Share Pictures of Themselves at 21

    The generation that rose with smartphones and social media had a chance to look back this week.

    Most of the photos are slightly faded. The hairlines fuller. Some feature braces. Old friends. Sorority squats and college sweethearts. Caps and gowns. Laments about skinny jeans and other long-lost trends.

    This week, Instagram stories the world over have been awash with nostalgic snapshots of youthful idealism — there have been at least 3.6 million shares, according to a representative for Meta — as people post photos of themselves based on the prompt: “Everyone tap in. Let’s see you at 21.”

    The first post came from Damian Ruff, a 43-year-old Whole Foods employee based in Mesa, Ariz. On Jan. 23, Mr. Ruff shared an image from a family trip to Mexico, wearing a tiny sombrero and drinking a Dos Equis. His mother sent him the photo, Mr. Ruff said in an interview. It was the first time they shared a beer together after he turned 21.

    “Not much has changed other than my gray hair. I see that person and go, ‘Ugh, you are such a child and have no idea,’” he said.

    Mr. Ruff created the shareable story template with the picture — a feature that Instagram introduced in 2021 but expanded in December — and watched it take off.

    “The amount of people that have been messaging me and adding me on Instagram out of nowhere, like people from around the world, has been crazy,” Mr. Ruff said.

  • Tech CEOs Got Grilled, but New Rules Are Still a Question

    Tech leaders faced a grilling in the Senate, and one offered an apology. But skeptics fear little will change this time.

    A lot of heat, but will there be regulation?

    Five technology C.E.O.s endured hours of grilling by senators on both sides of the aisle about their apparent failures to make their platforms safer for children, with some lawmakers accusing them of having “blood” on their hands.

    But for all of the drama, including Mark Zuckerberg of Meta apologizing to relatives of online child sex abuse victims, few observers believe that there’s much chance of concrete action.

    “Your product is killing people,” Senator Josh Hawley, Republican of Missouri, flatly told Zuckerberg at Wednesday’s hearing. Over 3.5 hours, members of the Senate Judiciary Committee laid into the Meta chief and the heads of Discord, Snap, TikTok and X over their policies. (Before the hearing began, senators released internal Meta documents showing that executives had rejected efforts to devote more resources to safeguarding children.)

    But the tech C.E.O.s offered only qualified support for legislative efforts. Those include the Kids Online Safety Act, or KOSA, which would require tech platforms to take “reasonable measures” to prevent harm, and STOP CSAM and EARN IT, two bills that would curtail some of the liability shield given to those companies by Section 230 of the Communications Decency Act.

    Both Evan Spiegel of Snap and Linda Yaccarino of X backed KOSA, and Yaccarino also became the first tech C.E.O. to back the STOP CSAM Act. But neither endorsed EARN IT.

    Zuckerberg called for legislation that would hold Apple and Google — neither of which was asked to testify — responsible for verifying app users’ ages. But he otherwise emphasized that Meta had already offered resources to keep children safe.

    Shou Chew of TikTok noted only that his company expected to invest over $2 billion in trust and safety measures this year. Jason Citron of Discord allowed that Section 230 “needs to be updated,” and his company later said that it supports “elements” of STOP CSAM.

    Experts worry that we’ve seen this play out before. Tech companies have zealously sought to defend Section 230, which protects them from liability for content users post on their platforms. Some lawmakers say altering it would be crucial to holding online platforms to account.

    Meanwhile, tech groups have fought efforts by states to tighten the use of their services by children. Such laws would lead to a patchwork of regulations that should instead be addressed by Congress, the industry has argued.

    Congress has failed to move meaningfully on such legislation. Absent a sea change in congressional will, Wednesday’s drama may have been just that.