More stories

  • Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

    An agreement by more than 90 scientists said, however, that artificial intelligence’s benefit to the field of biology would exceed any potential harm.

    Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.

    Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.

    Now, over 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm.

    The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines.

    “As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

    The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

  • Chinese National Accused of Stealing AI Secrets From Google

    Linwei Ding, a Chinese national, was arrested in California and accused of uploading hundreds of files to the cloud.

    A Chinese citizen who recently quit his job as a software engineer for Google in California has been charged with trying to transfer artificial intelligence technology to a Beijing-based company that paid him secretly, according to a federal indictment unsealed on Wednesday.

    Prosecutors accused Linwei Ding, who was part of the team that designs and maintains Google’s vast A.I. supercomputer data system, of stealing information about the “architecture and functionality” of the system, and of pilfering software used to “orchestrate” supercomputers “at the cutting edge of machine learning and A.I. technology.”

    From May 2022 to May 2023, Mr. Ding, also known as Leon, uploaded 500 files, many containing trade secrets, from his Google-issued laptop to the cloud by using a multistep scheme that allowed him to “evade immediate detection,” according to the U.S. attorney’s office for the Northern District of California.

    Mr. Ding was arrested on Wednesday morning at his home in Newark, Calif., not far from Google’s sprawling main campus in Mountain View, officials said.

    Starting in June 2022, Mr. Ding was paid $14,800 per month — plus a bonus and company stock — by a China-based technology company, without telling his supervisors at Google, according to the indictment. He is also accused of working with another company in China.

    Mr. Ding openly sought funding for a new A.I. start-up company he had incorporated at an investor conference in Beijing in November, boasting that “we have experience with Google’s 10,000-card computational power platform; we just need to replicate and upgrade it,” prosecutors said in the indictment, which was unsealed in San Francisco federal court.

  • The Big Questions Raised by Elon Musk’s Lawsuit Against OpenAI

    Experts say the case against the start-up and its chief executive, Sam Altman, raises unusual legal issues that do not have a clear precedent.

    From Silicon Valley to Wall Street to Washington, the blockbuster case that Elon Musk filed against OpenAI and its C.E.O., Sam Altman, has become Topic A. It is the business world’s hottest soap opera.

    But among lawyers, the case has become something of a fascination for a different reason: It poses a series of unusual legal questions without clear precedent. And it remains unclear what would constitute “winning” in a case like this, given that it appears to have been brought out of Musk’s own personal frustration and philosophical differences with OpenAI, a company he helped found and then left.

    The lawsuit — which pits one of the wealthiest men in the world against the most advanced A.I. company in the world, backed by Microsoft, one of the world’s most valuable companies — argues that OpenAI, a nonprofit organization that created a for-profit subsidiary in 2019, breached a contract to operate in the public interest and violated its duties by diverting from its founding purpose of benefiting humanity.

    Musk’s lawyers — led by Morgan Chu, a partner at Irell & Manella who is known as the “$5 billion man” for his win record — want the court to force OpenAI to open its technology to others and to stop licensing it to Microsoft, which has invested billions in its partnership with the start-up.

    Among the questions that lawyers and scholars are asking after poring over Musk’s 35-page complaint:

    Does Musk even have standing to sue? “One of the differences with nonprofits compared to other companies is that, generally, no one other than the state attorney general has standing to sue for the kind of stuff that he’s complaining about, like not following your mission,” Peter Molk, a professor of law at the University of Florida, said of Musk’s lawsuit. That’s most likely why Musk’s lawyers are presenting the case as a breach of contract instead of attacking the company’s nonprofit status.

    Musk also alleges that OpenAI has breached its fiduciary duty, but that charge has its own challenges, lawyers said, given that such claims are traditionally handled in Delaware, not California, where the lawsuit was filed. (Musk, of course, has an infamously rocky relationship with the state of Delaware.)

  • A.I. Is Making the Sexual Exploitation of Girls Even Worse

    On Tuesday, Kat Tenbarge and Liz Kreutz of NBC News reported that several middle schoolers in Beverly Hills, Calif., were caught making and distributing fake naked photos of their peers: “School officials at Beverly Vista Middle School were made aware of the ‘A.I.-generated nude photos’ of students last week, the district superintendent said in a letter to parents. The superintendent told NBC News the photos included students’ faces superimposed onto nude bodies.”

    I had heard about this kind of thing happening to high school girls, which is horrible enough. But the idea of such young children being dehumanized by their classmates, humiliated and sexualized in one of the places they’re supposed to feel safe, and knowing those images could be indelible and worldwide, turned my stomach.

    I’m not a technophobe and have, in the past, been somewhat skeptical about the outsize negative impact of social media on teen girls. And while I still think the subject is complicated, and that the research doesn’t always conclude that there are unfavorable mental health effects of social media use on all groups of young people, the increasing reach of artificial intelligence adds a new wrinkle that has the potential to cause all sorts of damage. The possibilities are especially frightening when the technology is used by teens and tweens, groups with notoriously iffy judgment about the permanence of their actions.

    I have to admit that my gut reaction to the Beverly Hills story was rage — I wanted the book thrown at the kids who made those fakes. But I wanted to hear from someone with more experience talking to teens and thinking deeply about the adolescent relationship with privacy and technology. So I called Devorah Heitner, the author of “Growing Up in Public: Coming of Age in a Digital World,” to help me step back a bit from my punitive fury.

    Heitner pointed out that although artificial intelligence adds a new dimension, kids have been passing around digital sexual images without consent for years. According to a 2018 meta-analysis from JAMA Pediatrics, among children in the 12 to 17 age range, “The prevalence of forwarding a sext without consent was 12.0 percent,” and “the prevalence of having a sext forwarded without consent was 8.4 percent.”

    In her book, Heitner offers an example in which an eighth-grade girl sends a topless photo to her boyfriend, who circulates it to his friends without her permission. After they broke up, but without her knowledge, “her picture kept circulating, passing from classmate to classmate throughout their middle school,” and then “one afternoon, she opened her school email to find a video with her image with sound effects from a porn video playing with it.”

  • Elon Musk’s Feud With OpenAI Goes to Court

    The tech mogul wants to force the A.I. start-up to reveal its research to the public and prevent it from pursuing profits. Elon Musk, the tech billionaire, has escalated his feud with OpenAI and its C.E.O., Sam Altman.

    Musk takes aim at OpenAI

    The gloves have really come off in one of the most personal fights in the tech world: Elon Musk has sued OpenAI and its C.E.O., Sam Altman, accusing them of reneging on the start-up’s original purpose of being a nonprofit laboratory for the technology.

    Yes, Musk has disagreed with Altman for years about the purpose of the organization they co-founded, and he is creating a rival artificial intelligence company. But the lawsuit also appears rooted in philosophical differences that go to the heart of who controls a hugely transformative technology — and is backed by one of the wealthiest men on the planet.

    The backstory: Musk, Altman and others agreed to create OpenAI in 2015 to provide an open-sourced alternative to the likes of Google, which had bought the leading A.I. start-up DeepMind the year before. Musk notes in his suit that OpenAI’s certificate of incorporation states that its work “will benefit the public,” and that it isn’t “organized for the private gain of any person.”

    Musk poured more than $44 million into OpenAI between 2016 and 2020, and helped hire top talent like the researcher Ilya Sutskever.

    Altman has moved OpenAI toward commerce, starting with the creation in 2019 of a for-profit subsidiary that would raise money from investors, notably Microsoft. The final straw for Musk came last year, when OpenAI released its GPT-4 A.I. model — but kept its workings hidden from everyone except Microsoft.

    “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawyers write in the complaint.

  • SEC Is Investigating OpenAI Over Its Board’s Actions

    The U.S. regulator opened its inquiry after the board unexpectedly fired the company’s chief executive, Sam Altman, in November.

    The Securities and Exchange Commission began an inquiry into OpenAI soon after the company’s board of directors unexpectedly removed Sam Altman, its chief executive, at the end of last year, three people familiar with the inquiry said.

    The regulator has sent official requests to OpenAI, the developer of the ChatGPT online chatbot, seeking information about the situation. It is unclear whether the S.E.C. is investigating Mr. Altman’s behavior, the board’s decision to oust him or both.

    Even as OpenAI has tried to turn the page on the dismissal of Mr. Altman, who was soon reinstated, the controversy continues to hound the company. In addition to the S.E.C. inquiry, the San Francisco artificial intelligence company has hired a law firm to conduct its own investigation into Mr. Altman’s behavior and the board’s decision to remove him.

    The board dismissed Mr. Altman on Nov. 17, saying it no longer had confidence in his ability to run OpenAI. It said he had not been “consistently candid in his communications,” though it did not provide specifics. It agreed to reinstate him five days later.

    Privately, the board worried that Mr. Altman was not sharing all of his plans to raise money from investors in the Middle East for an A.I. chip project, people with knowledge of the situation have said.

    Spokespeople for the S.E.C. and OpenAI and a lawyer for Mr. Altman all declined to comment. The S.E.C.’s inquiry was reported earlier by The Wall Street Journal.

    OpenAI kicked off an industrywide A.I. boom at the end of 2022 when it released ChatGPT. The company is considered a leader in what is called generative A.I., technologies that can generate text, sounds and images from short prompts. A recent funding deal values the start-up at more than $80 billion.

    Many believe that generative A.I., which represents a fundamental shift in the way computers behave, could remake the industry as thoroughly as the iPhone or the web browser. Others argue that the technology could cause serious harm, helping to spread online disinformation, replacing jobs with unusual speed and maybe even threatening the future of humanity.

    After the release of ChatGPT, Mr. Altman became the face of the industry’s push toward generative A.I. as he endlessly promoted the technology — while acknowledging the dangers.

    In an effort to resolve the turmoil surrounding Mr. Altman’s ouster, he and the board agreed to remove two members and add two others: Bret Taylor, who is a former Salesforce executive, and former Treasury Secretary Lawrence H. Summers.

    Mr. Altman and the board also agreed that OpenAI would start its own investigation into the matter. That investigation, by the WilmerHale law firm, is expected to close soon.

  • A.I. Frenzy Complicates Efforts to Keep Power-Hungry Data Sites Green

    West Texas, from the oil rigs of the Permian Basin to the wind turbines twirling above the High Plains, has long been a magnet for companies seeking fortunes in energy. Now, those arid ranch lands are offering a new moneymaking opportunity: data centers.

    Lancium, an energy and data center management firm setting up shop in Fort Stockton and Abilene, is one of many companies around the country betting that building data centers close to generating sites will allow them to tap into underused clean power.

    “It’s a land grab,” said Lancium’s president, Ali Fenn.

    In the past, companies built data centers close to internet users, to better meet consumer requests, like streaming a show on Netflix or playing a video game hosted in the cloud. But the growth of artificial intelligence requires huge data centers to train the evolving large language models, making proximity to users less necessary.

    As more of these sites pop up across the United States, there are new questions about whether they can meet the demand while still operating sustainably. The carbon footprint from the construction of the centers and the racks of expensive computer equipment is substantial in itself, and their power needs have grown considerably.

    Just a decade ago, data centers drew 10 megawatts of power, but 100 megawatts is common today. The Uptime Institute, an industry advisory group, has identified 10 supersize cloud computing campuses across North America with an average size of 621 megawatts.

    This growth in electricity demand comes as manufacturing in the United States is at its highest level in half a century, and the power grid is becoming increasingly strained.

  • Chinese Influence Campaign Pushes Disunity Before U.S. Election, Study Says

    A long-running network of accounts, known as Spamouflage, is using A.I.-generated images to amplify negative narratives involving the presidential race.

    A Chinese influence campaign that has tried for years to boost Beijing’s interests is now using artificial intelligence and a network of social media accounts to amplify American discontent and division ahead of the U.S. presidential election, according to a new report.

    The campaign, known as Spamouflage, hopes to breed disenchantment among voters by maligning the United States as rife with urban decay, homelessness, fentanyl abuse, gun violence and crumbling infrastructure, according to the report, which was published on Thursday by the Institute for Strategic Dialogue, a nonprofit research organization in London.

    An added aim, the report said, is to convince international audiences that the United States is in a state of chaos.

    Artificially generated images, some of them also edited with tools like Photoshop, have pushed the idea that the November vote will damage and potentially destroy the country.

    One post on X about “American partisan divisions” had an image showing President Biden and former President Donald J. Trump aggressively crossing fiery spears under this text: “INFIGHTING INTENSIFIES.” Other images featured the two men facing off, cracks in the White House or the Statue of Liberty, and terminology like “CIVIL WAR,” “INTERNAL STRIFE” and “THE COLLAPSE OF AMERICAN DEMOCRACY.”