More stories

  • Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

    An agreement signed by more than 90 scientists said, however, that artificial intelligence’s benefit to the field of biology would exceed any potential harm.

    Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death. Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.

    Now, more than 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm. The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines.

    “As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

    The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

  • Bitcoin Hits Record High, Recovering From 2022 Meltdown

    Bitcoin’s price surged above $68,800, breaking the record the digital currency set in November 2021, when the crypto industry was booming.

    Bitcoin hit a record high of about $68,800 on Tuesday, capping a remarkable comeback for the volatile cryptocurrency after its value plunged in 2022 amid a market meltdown. Bitcoin’s price has risen more than 300 percent since November 2022, a resurgence that few predicted when the price dropped below $20,000 that year. Its previous record was just under $68,790 in November 2021, as crypto markets boomed and amateur investors poured savings into experimental digital coins.

    “This is just the beginning of this bull market,” said Nathan McCauley, the chief executive of the crypto company Anchorage Digital. “The best is yet to come.”

    Bitcoin’s recent surge has been driven by investor enthusiasm for a new financial product tied to the digital coin. In January, U.S. regulators authorized a group of crypto companies and traditional finance firms to offer exchange-traded funds, or E.T.F.s, which track Bitcoin’s price. The funds provide a simple way for people to invest in the crypto markets without directly owning the virtual currency. As of last week, investors had poured more than $7 billion into the investment products, propelling Bitcoin’s rapid rise, according to Bloomberg Intelligence.

    The price of Ether, the second-most-valuable digital currency after Bitcoin, has also risen this year, driven partly by enthusiasm over the prospect that regulators may approve an E.T.F. tied to Ether as well.

  • SEC Is Investigating OpenAI Over Its Board’s Actions

    The U.S. regulator opened its inquiry after the board unexpectedly fired the company’s chief executive, Sam Altman, in November.

    The Securities and Exchange Commission began an inquiry into OpenAI soon after the company’s board of directors unexpectedly removed Sam Altman, its chief executive, at the end of last year, three people familiar with the inquiry said. The regulator has sent official requests to OpenAI, the developer of the ChatGPT online chatbot, seeking information about the situation. It is unclear whether the S.E.C. is investigating Mr. Altman’s behavior, the board’s decision to oust him or both.

    Even as OpenAI has tried to turn the page on the dismissal of Mr. Altman, who was soon reinstated, the controversy continues to hound the company. In addition to the S.E.C. inquiry, the San Francisco artificial intelligence company has hired a law firm to conduct its own investigation into Mr. Altman’s behavior and the board’s decision to remove him.

    The board dismissed Mr. Altman on Nov. 17, saying it no longer had confidence in his ability to run OpenAI. It said he had not been “consistently candid in his communications,” though it did not provide specifics. It agreed to reinstate him five days later. Privately, the board worried that Mr. Altman was not sharing all of his plans to raise money from investors in the Middle East for an A.I. chip project, people with knowledge of the situation have said.

    Spokespeople for the S.E.C. and OpenAI and a lawyer for Mr. Altman all declined to comment. The S.E.C.’s inquiry was reported earlier by The Wall Street Journal.

    OpenAI kicked off an industrywide A.I. boom at the end of 2022 when it released ChatGPT. The company is considered a leader in what is called generative A.I., technologies that can generate text, sounds and images from short prompts. A recent funding deal values the start-up at more than $80 billion.

    Many believe that generative A.I., which represents a fundamental shift in the way computers behave, could remake the industry as thoroughly as the iPhone or the web browser. Others argue that the technology could cause serious harm, helping to spread online disinformation, replacing jobs with unusual speed and maybe even threatening the future of humanity. After the release of ChatGPT, Mr. Altman became the face of the industry’s push toward generative A.I. as he endlessly promoted the technology — while acknowledging the dangers.

    In an effort to resolve the turmoil surrounding Mr. Altman’s ouster, he and the board agreed to remove two members and add two others: Bret Taylor, a former Salesforce executive, and Lawrence H. Summers, the former Treasury secretary. Mr. Altman and the board also agreed that OpenAI would start its own investigation into the matter. That investigation, by the law firm WilmerHale, is expected to close soon.

  • A.I. Frenzy Complicates Efforts to Keep Power-Hungry Data Sites Green

    West Texas, from the oil rigs of the Permian Basin to the wind turbines twirling above the High Plains, has long been a magnet for companies seeking fortunes in energy. Now, those arid ranch lands are offering a new moneymaking opportunity: data centers.

    Lancium, an energy and data center management firm setting up shop in Fort Stockton and Abilene, is one of many companies around the country betting that building data centers close to generating sites will allow them to tap into underused clean power. “It’s a land grab,” said Lancium’s president, Ali Fenn.

    In the past, companies built data centers close to internet users, the better to meet consumer requests like streaming a show on Netflix or playing a video game hosted in the cloud. But the growth of artificial intelligence requires huge data centers to train the evolving large language models, making proximity to users less necessary.

    As more of these sites pop up across the United States, new questions are emerging about whether they can meet the demand while still operating sustainably. The carbon footprint from the construction of the centers and the racks of expensive computer equipment is substantial in itself, and their power needs have grown considerably. Just a decade ago, data centers drew 10 megawatts of power; today, 100 megawatts is common. The Uptime Institute, an industry advisory group, has identified 10 supersize cloud computing campuses across North America with an average size of 621 megawatts.

    This growth in electricity demand comes as manufacturing in the United States is at its highest level in a half-century and the power grid is becoming increasingly strained.

  • How to Manage Streaming Subscriptions As Service Prices Rise

    Canceling is simple. The tough part is remembering to do it.

    The dream of streaming — watch what you want, whenever you want, for a sliver of the price of cable! — is coming to an end. With all the price increases for video streaming apps like Amazon Prime Video, Netflix and Hulu, the average household that subscribes to four streaming apps may now end up paying just as much as a cable subscriber, according to research by Deloitte.

    To name a few of the price jumps for streaming video (without ads) in just over the past year: Amazon’s ad-free Prime Video is now $12 a month, up from $9; Netflix raised the price of its premium plan, for watching content on four devices, to $23 a month, from $20; Disney increased the price of its Hulu service to $18 a month, from $15; and HBO’s Max now costs $16 a month, up from $15. If, like many people, you subscribe to all those services, you are paying about $70 a month, roughly the same as a modest cable TV package.

    More changes on the horizon will have people paying more for streaming. Disney announced this month that it would crack down on password sharing for Disney+, Hulu and ESPN+. Netflix told shareholders last month to expect more price increases.

    Streaming services still offer more flexibility and potential to save than a cable bundle. If that’s what drew you to streaming, the solution may seem obvious: You could be more judicious about managing your subscriptions — by canceling Netflix as soon as you’re done bingeing “Love Is Blind,” for instance.

  • Supreme Court to Decide How the First Amendment Applies to Social Media

    Challenges to laws in Florida and Texas meant to protect conservative viewpoints are likely to yield a major constitutional ruling on tech platforms’ free speech rights.

    The most important First Amendment cases of the internet era, to be heard by the Supreme Court on Monday, may turn on a single question: Do platforms like Facebook, YouTube, TikTok and X most closely resemble newspapers, shopping centers or phone companies?

    The two cases arrive at the court garbed in politics, as they concern laws in Florida and Texas aimed at protecting conservative speech by forbidding leading social media sites from removing posts based on the views they express. But the outsize question the cases present transcends ideology: whether tech platforms have free speech rights to make editorial judgments. Picking the apt analogy from the court’s precedents could decide the matter, but none of the available ones is a perfect fit.

    If the platforms are like newspapers, they may publish what they want without government interference. If they are like private shopping centers open to the public, they may be required to let visitors say what they like. And if they are like phone companies, they must transmit everyone’s speech.

    “It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies,” Justice Samuel A. Alito Jr. wrote in a 2022 dissent when one of the cases briefly reached the Supreme Court.

    Supporters of the state laws say they foster free speech, giving the public access to all points of view. Opponents say the laws trample on the platforms’ own First Amendment rights and would turn them into cesspools of filth, hate and lies. One contrarian brief, from liberal professors, urged the justices to uphold the key provision of the Texas law despite the harm they said it would cause.

  • Five Takeaways From The Times’s Investigation Into Child Influencers

    Instagram does not allow children under 13 to have accounts, but parents are allowed to run them — and many do so for daughters who aspire to be social media influencers.

    What often starts as a parent’s effort to jump-start a child’s modeling career, or win favors from clothing brands, can quickly descend into a dark underworld dominated by adult men, many of whom openly admit on other platforms to being sexually attracted to children, an investigation by The New York Times found. Thousands of so-called mom-run accounts examined by The Times offer disturbing insights into how social media is reshaping childhood, especially for girls, with direct parental encouragement and involvement.

    Nearly one in three preteens lists influencing as a career goal, and 11 percent of those born in Generation Z, between 1997 and 2012, describe themselves as influencers. But health and technology experts have recently cautioned that social media presents a “profound risk of harm” for girls: constant comparisons with their peers and face-altering filters are driving negative feelings of self-worth and promoting objectification of their bodies, researchers found.

    The pursuit of online fame, particularly through Instagram, has supercharged the often toxic phenomenon, The Times found, encouraging parents to commodify their daughters’ images. These are some key findings.

  • TikTok Is Subject of E.U. Inquiry Over ‘Addictive Design’

    The European Commission said it would investigate whether the site violated online laws aimed at protecting children from harmful content.

    European Union regulators on Monday opened an investigation into TikTok over potential breaches of online content rules aimed at protecting children, saying the popular social media platform’s “addictive design” risked exposing young people to harmful content. The move widens a preliminary investigation conducted in recent months into whether TikTok, owned by the Chinese company ByteDance, violated a new European law, the Digital Services Act, which requires large social media companies to stop the spread of harmful material. Under the law, companies can be penalized up to 6 percent of their global revenues.

    TikTok has been under the scrutiny of E.U. regulators for months. The company was fined roughly $370 million in September for having weak safeguards to protect the personal information of children using the platform. Policymakers in the United States have also been wrestling with how to regulate the platform for harmful content and data privacy — concerns amplified by TikTok’s links to China.

    The European Commission said it was particularly focused on how the company was managing the risk of “negative effects stemming” from the site’s design, including algorithmic systems that it said “may stimulate behavioral addictions” or “create so-called ‘rabbit hole effects,’” where a user is pulled further and further into the site’s content. Those risks could potentially compromise a person’s “physical and mental well-being,” the commission said.

    “The safety and well-being of online users in Europe is crucial,” Margrethe Vestager, the European Commission’s executive vice president overseeing digital policy, said in a statement. “TikTok needs to take a close look at the services they offer and carefully consider the risks that they pose to their users — young as well as old.”