More stories

  • Scarlett Johansson’s Statement About Her Interactions With Sam Altman

    The actress released a lengthy statement about the company and the similarity of one of its A.I. voices to her own.

    Here is Scarlett Johansson’s statement on Monday:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

  • Loneliness Is a Problem That A.I. Won’t Solve

    When I was reporting my ed tech series, I stumbled on one of the most disturbing things I’ve read in years about how technology might interfere with human connection: an article on the website of the venture capital firm Andreessen Horowitz cheerfully headlined “It’s Not a Computer, It’s a Companion!”

    It opens with this quote from someone who has apparently fully embraced the idea of having a chatbot for a significant other: “The great thing about A.I. is that it is constantly evolving. One day it will be better than a real [girlfriend]. One day, the real one will be the inferior choice.” The article goes on to breathlessly outline use cases for “A.I. companions,” suggesting that some future iteration of chatbots could stand in for mental health professionals, relationship coaches or chatty co-workers.

    This week, OpenAI released an update to its ChatGPT chatbot, an indication that the inhuman future foretold by the Andreessen Horowitz story is fast approaching. According to The Washington Post, “The new model, called GPT-4o (‘o’ stands for ‘omni’), can interpret user instructions delivered via text, audio and image — and respond in all three modes as well.” GPT-4o is meant to encourage people to speak to it rather than type into it, The Post reports, as “The updated voice can mimic a wider range of human emotions, and allows the user to interrupt. It chatted with users with fewer delays, and identified an OpenAI executive’s emotion based on a video chat where he was grinning.”

    There have been lots of comparisons between GPT-4o and the 2013 movie “Her,” in which a man falls in love with his A.I. assistant, voiced by Scarlett Johansson. While some observers, including the Times Opinion contributing writer Julia Angwin, who called ChatGPT’s recent update “rather routine,” weren’t particularly impressed, there’s been plenty of hype about the potential for humanlike chatbots to ameliorate emotional challenges, particularly loneliness and social isolation.

    For example, in January, the co-founder of one A.I. company argued that the technology could improve quality of life for isolated older people, writing, “Companionship can be provided in the form of virtual assistants or chatbots, and these companions can engage in conversations, play games or provide information, helping to alleviate feelings of loneliness and boredom.”

    Certainly, there are valuable and beneficial uses for A.I. chatbots — they can be life-changing for people who are visually impaired, for example. But the notion that bots will one day be an adequate substitute for human contact misunderstands what loneliness really is, and doesn’t account for the necessity of human touch.

  • A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public.

    There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

    For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

    I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?

  • Elon Musk to Open Source Grok Chatbot in Latest AI War Escalation

    Mr. Musk’s move to open up the code behind Grok is the latest volley in the fight over the future of A.I., after his lawsuit against OpenAI over the same issue.

    Elon Musk released the raw computer code behind his version of an artificial intelligence chatbot on Sunday, an escalation by one of the world’s richest men in a battle to control the future of A.I.

    Grok, which is designed to give snarky replies styled after the science-fiction novel “The Hitchhiker’s Guide to the Galaxy,” is a product from xAI, the company Mr. Musk founded last year. While xAI is independent of X, its technology has been integrated into the social media platform and is trained on users’ posts. Users who subscribe to X’s premium features can ask Grok questions and receive responses.

    By opening the code up for everyone to view and use — known as open sourcing — Mr. Musk waded further into a heated debate in the A.I. world over whether doing so could help make the technology safer, or simply open it up to misuse.

    Mr. Musk, a self-proclaimed proponent of open sourcing, did the same with X’s recommendation algorithm last year, but he has not updated it since.

    “Still work to do, but this platform is already by far the most transparent & truth-seeking (not a high bar tbh),” Mr. Musk posted on Sunday in response to a comment on open sourcing X’s recommendation algorithm.

    The move to open-source chatbot code is the latest volley between Mr. Musk and ChatGPT’s creator, OpenAI, which the mercurial billionaire recently sued, accusing it of breaking a promise to do the same. Mr. Musk, who helped found and fund OpenAI before departing several years later, has argued that such an important technology should not be controlled solely by tech giants like Google and Microsoft, which is a close partner of OpenAI.

  • Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

    An agreement signed by more than 90 scientists said, however, that artificial intelligence’s benefits to the field of biology would exceed any potential harm.

    Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.

    Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.

    Now, over 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm.

    The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines.

    “As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

    The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

  • The Big Questions Raised by Elon Musk’s Lawsuit Against OpenAI

    Experts say the case against the start-up and its chief executive, Sam Altman, raises unusual legal issues that do not have a clear precedent.

    From Silicon Valley to Wall Street to Washington, the blockbuster case that Elon Musk filed against OpenAI and its C.E.O., Sam Altman, has become Topic A. It is the business world’s hottest soap opera.

    But among lawyers, the case has become something of a fascination for a different reason: It poses a series of unique and unusual legal questions without clear precedent. And it remains unclear what would constitute “winning” in a case like this, given that it appears to have been brought out of Musk’s own personal frustration and philosophical differences with OpenAI, a company he helped found and then left.

    The lawsuit — which pits one of the wealthiest men in the world against the most advanced A.I. company in the world, backed by Microsoft, one of the world’s most valuable companies — argues that OpenAI, a nonprofit organization that created a for-profit subsidiary in 2019, breached a contract to operate in the public interest and violated its duties by diverting from its founding purpose of benefiting humanity.

    Musk’s lawyers — led by Morgan Chu, a partner at Irell & Manella who is known as the “$5 billion man” for his win record — want the court to force OpenAI to open its technology to others and to stop licensing it to Microsoft, which has invested billions in its partnership with the start-up.

    Among the questions that lawyers and scholars are asking after poring through Musk’s 35-page complaint:

    Does Musk even have standing to sue? “One of the differences with nonprofits compared to other companies is that, generally, no one other than the state attorney general has standing to sue for the kind of stuff that he’s complaining about, like not following your mission,” Peter Molk, a professor of law at the University of Florida, said of Musk’s lawsuit. That’s most likely why Musk’s lawyers are presenting the case as a breach of contract instead of attacking the company’s nonprofit status.

    Musk also alleges that OpenAI has breached its fiduciary duty, but that charge has its own challenges, lawyers said, given that such claims are traditionally handled in Delaware, not California, where the lawsuit was filed. (Musk, of course, has an infamously rocky relationship with the state of Delaware.)

  • Elon Musk’s Feud With OpenAI Goes to Court

    The tech mogul wants to force the A.I. start-up to reveal its research to the public and prevent it from pursuing profits.

    Musk takes aim at OpenAI

    The gloves have really come off in one of the most personal fights in the tech world: Elon Musk has sued OpenAI and its C.E.O., Sam Altman, accusing them of reneging on the start-up’s original purpose of being a nonprofit laboratory for the technology.

    Yes, Musk has disagreed with Altman for years about the purpose of the organization they co-founded, and he is creating a rival artificial intelligence company. But the lawsuit also appears rooted in philosophical differences that go to the heart of who controls a hugely transformative technology — and it is backed by one of the wealthiest men on the planet.

    The backstory: Musk, Altman and others agreed to create OpenAI in 2015 to provide an open-source alternative to the likes of Google, which had bought the leading A.I. start-up DeepMind the year before. Musk notes in his suit that OpenAI’s certificate of incorporation states that its work “will benefit the public,” and that it isn’t “organized for the private gain of any person.”

    Musk poured more than $44 million into OpenAI between 2016 and 2020, and helped hire top talent like the researcher Ilya Sutskever.

    Altman has moved OpenAI toward commerce, starting with the creation in 2019 of a for-profit subsidiary that would raise money from investors, notably Microsoft. The final straw for Musk came last year, when OpenAI released its GPT-4 A.I. model — but kept its workings hidden from all except itself and Microsoft.

    “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawyers write in the complaint.

  • SEC Is Investigating OpenAI Over Its Board’s Actions

    The U.S. regulator opened its inquiry after the board unexpectedly fired the company’s chief executive, Sam Altman, in November.

    The Securities and Exchange Commission began an inquiry into OpenAI soon after the company’s board of directors unexpectedly removed Sam Altman, its chief executive, at the end of last year, three people familiar with the inquiry said.

    The regulator has sent official requests to OpenAI, the developer of the ChatGPT online chatbot, seeking information about the situation. It is unclear whether the S.E.C. is investigating Mr. Altman’s behavior, the board’s decision to oust him or both.

    Even as OpenAI has tried to turn the page on the dismissal of Mr. Altman, who was soon reinstated, the controversy continues to hound the company. In addition to the S.E.C. inquiry, the San Francisco artificial intelligence company has hired a law firm to conduct its own investigation into Mr. Altman’s behavior and the board’s decision to remove him.

    The board dismissed Mr. Altman on Nov. 17, saying it no longer had confidence in his ability to run OpenAI. It said he had not been “consistently candid in his communications,” though it did not provide specifics. It agreed to reinstate him five days later.

    Privately, the board worried that Mr. Altman was not sharing all of his plans to raise money from investors in the Middle East for an A.I. chip project, people with knowledge of the situation have said.

    Spokespeople for the S.E.C. and OpenAI and a lawyer for Mr. Altman all declined to comment.

    The S.E.C.’s inquiry was reported earlier by The Wall Street Journal.

    OpenAI kicked off an industrywide A.I. boom at the end of 2022 when it released ChatGPT. The company is considered a leader in what is called generative A.I., technologies that can generate text, sounds and images from short prompts. A recent funding deal values the start-up at more than $80 billion.

    Many believe that generative A.I., which represents a fundamental shift in the way computers behave, could remake the industry as thoroughly as the iPhone or the web browser. Others argue that the technology could cause serious harm, helping to spread online disinformation, replacing jobs with unusual speed and maybe even threatening the future of humanity.

    After the release of ChatGPT, Mr. Altman became the face of the industry’s push toward generative A.I. as he endlessly promoted the technology — while acknowledging the dangers.

    In an effort to resolve the turmoil surrounding Mr. Altman’s ouster, he and the board agreed to remove two members and add two others: Bret Taylor, a former Salesforce executive, and Lawrence H. Summers, the former Treasury secretary.

    Mr. Altman and the board also agreed that OpenAI would start its own investigation into the matter. That investigation, by the WilmerHale law firm, is expected to close soon.