More stories

  • Scarlett Johansson Said No, but OpenAI’s Virtual Assistant Sounds Just Like Her

    Last week, the company released a chatbot with an option that sounded like the actress, who provided the voice of an A.I. system in the movie “Her.”

    Days before OpenAI demonstrated its new, flirty voice assistant last week, the actress Scarlett Johansson said, Sam Altman, the company’s chief executive, called her agent and asked that she consider licensing her voice for a virtual assistant. It was his second request to the actress in the past year, Ms. Johansson said in a statement on Monday, adding that the reply both times was no.

    Despite those refusals, Ms. Johansson said, OpenAI used a voice that sounded “eerily similar to mine.” She has hired a lawyer and asked OpenAI to stop using a voice it called “Sky.” OpenAI suspended its release of “Sky” over the weekend. The company said in a blog post on Sunday that “AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.”

    For Ms. Johansson, the episode has been a surreal case of life imitating art. In 2013, she provided the voice for an A.I. system in the Spike Jonze movie “Her.” The film told the story of a lonely introvert seduced by a virtual assistant named Samantha, a tragic commentary on the potential pitfalls of technology as it becomes more realistic. Last week, Mr. Altman appeared to nod to the similarity between OpenAI’s virtual assistant and the film in a post on X with the single word “her.”

  • Scarlett Johansson’s Statement About Her Interactions With Sam Altman

    The actress released a lengthy statement about the company and the similarity of one of its A.I. voices. Here is Scarlett Johansson’s statement on Monday:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

  • Loneliness Is a Problem That A.I. Won’t Solve

    When I was reporting my ed tech series, I stumbled on one of the most disturbing things I’ve read in years about how technology might interfere with human connection: an article on the website of the venture capital firm Andreessen Horowitz cheerfully headlined “It’s Not a Computer, It’s a Companion!”

    It opens with this quote from someone who has apparently fully embraced the idea of having a chatbot for a significant other: “The great thing about A.I. is that it is constantly evolving. One day it will be better than a real [girlfriend]. One day, the real one will be the inferior choice.” The article goes on to breathlessly outline use cases for “A.I. companions,” suggesting that some future iteration of chatbots could stand in for mental health professionals, relationship coaches or chatty co-workers.

    This week, OpenAI released an update to its ChatGPT chatbot, an indication that the inhuman future foretold by the Andreessen Horowitz story is fast approaching. According to The Washington Post, “The new model, called GPT-4o (‘o’ stands for ‘omni’), can interpret user instructions delivered via text, audio and image — and respond in all three modes as well.” GPT-4o is meant to encourage people to speak to it rather than type into it, The Post reports, as “The updated voice can mimic a wider range of human emotions, and allows the user to interrupt. It chatted with users with fewer delays, and identified an OpenAI executive’s emotion based on a video chat where he was grinning.”

    There have been lots of comparisons between GPT-4o and the 2013 movie “Her,” in which a man falls in love with his A.I. assistant, voiced by Scarlett Johansson. While some observers weren’t particularly impressed, including the Times Opinion contributing writer Julia Angwin, who called ChatGPT’s recent update “rather routine,” there’s been plenty of hype about the potential for humanlike chatbots to ameliorate emotional challenges, particularly loneliness and social isolation.

    For example, in January, the co-founder of one A.I. company argued that the technology could improve quality of life for isolated older people, writing, “Companionship can be provided in the form of virtual assistants or chatbots, and these companions can engage in conversations, play games or provide information, helping to alleviate feelings of loneliness and boredom.”

    Certainly, there are valuable and beneficial uses for A.I. chatbots — they can be life-changing for people who are visually impaired, for example. But the notion that bots will one day be an adequate substitute for human contact misunderstands what loneliness really is, and doesn’t account for the necessity of human touch.

  • Google Unveils A.I. for Predicting Behavior of Human Molecules

    The system, AlphaFold3, could accelerate efforts to understand the human body and fight disease.

    Artificial intelligence is giving machines the power to generate videos, write computer code and even carry on a conversation. It is also accelerating efforts to understand the human body and fight disease.

    On Wednesday, Google DeepMind, the tech giant’s central artificial intelligence lab, and Isomorphic Labs, a sister company, unveiled a more powerful version of AlphaFold, an artificial intelligence technology that helps scientists understand the behavior of the microscopic mechanisms that drive the cells in the human body.

    An early version of AlphaFold, released in 2020, solved a puzzle that had bedeviled scientists for more than 50 years: the protein folding problem. Proteins are the microscopic molecules that drive the behavior of all living things. These molecules begin as strings of chemical compounds before twisting and folding into three-dimensional shapes that define how they interact with other microscopic mechanisms in the body.

    Biologists spent years or even decades trying to pinpoint the shape of individual proteins. Then AlphaFold came along. When a scientist fed the technology a string of the amino acids that make up a protein, it could predict the three-dimensional shape within minutes.

    When DeepMind publicly released AlphaFold a year later, biologists began using it to accelerate drug discovery. Researchers at the University of California, San Francisco, used the technology as they worked to understand the coronavirus and prepare for similar pandemics. Others used it as they struggled to find remedies for malaria and Parkinson’s disease. The hope is that this kind of technology will significantly streamline the creation of new drugs and vaccines.

    “It tells us a lot more about how the machines of the cell interact,” said John Jumper, a Google DeepMind researcher. “It tells us how this should work and what happens when we get sick.”

    The new version of AlphaFold — AlphaFold3 — extends the technology beyond protein folding. In addition to predicting the shapes of proteins, it can predict the behavior of other microscopic biological mechanisms, including DNA, where the body stores genetic information, and RNA, which transfers information from DNA to proteins.

    “Biology is a dynamic system. You need to understand the interactions between different molecules and structures,” said Demis Hassabis, Google DeepMind’s chief executive and the founder of Isomorphic Labs, which Google also owns. “This is a step in that direction.”

    The company is offering a website where scientists can use AlphaFold3. Other labs, most notably one at the University of Washington, offer similar technology. In a paper released on Tuesday in the scientific journal Nature, Dr. Jumper and his fellow researchers show that AlphaFold3 achieves a level of accuracy well beyond the state of the art.

    The technology could “save months of experimental work and enable research that was previously impossible,” said Deniz Kavi, a co-founder and the chief executive of Tamarind Bio, a start-up that builds technology for accelerating drug discovery. “This represents tremendous promise.”

  • A New Diplomatic Strategy Emerges as Artificial Intelligence Grows

    The new U.S. approach to cyberthreats comes as early optimism about a “global internet” connecting the world has been shattered.

    American and Chinese diplomats plan to meet later this month to begin what amounts to the first, tentative arms control talks over the use of artificial intelligence. A year in the making, the talks in Geneva are an attempt to find some common ground on how A.I. will be used and in which situations it could be banned — for example, in the command and control of each country’s nuclear arsenals. The fact that Beijing agreed to the discussion at all was something of a surprise, since it has refused any discussion of limiting the size of nuclear arsenals themselves.

    But for the Biden administration, the conversation represents the first serious foray into a new realm of diplomacy, which Secretary of State Antony J. Blinken spoke about on Monday in a speech in San Francisco at the RSA Conference, Silicon Valley’s annual convention on both the technology and the politics of securing cyberspace.

    The Biden administration’s strategy goes beyond the rules of managing cyberconflict and focuses on American efforts to assure control over physical technologies like undersea cables, which connect countries, companies and individual users to cloud services.

    “It’s true that ‘move fast and break things’ is literally the exact opposite of what we try to do at the State Department,” Mr. Blinken told the thousands of cyberexperts, coders and entrepreneurs, a reference to the Silicon Valley mantra about technological disruption.

  • Why Beijing Stands to Gain from Elon Musk’s Visit

    Tesla’s C.E.O. appears to have landed a deal that moves the company closer to bringing fully autonomous driving to a giant market. But Beijing is keen to exploit the visit for its own purposes.

    Just days after Secretary of State Antony Blinken traveled to Beijing and warned China about unfair trade practices, Elon Musk landed in the Chinese capital. The Tesla boss’s weekend meeting with Premier Li Qiang, China’s No. 2 official, may have paid off: Musk reportedly cleared two obstacles to introducing a fully autonomous driving system in the world’s biggest car market, and the visit boosted Tesla stock.

    The split screen again reveals the gap between Western diplomacy and corporate imperatives. Tesla has to stay committed to China even as it faces big headwinds — a conundrum that other multinationals also face, and one that Beijing is eager to exploit.

    Musk is betting big on self-driving, and China is key. Tesla last week reported its worst quarter in two years as a price war hurts profit. Tesla shares have plummeted (though they’ve rebounded in recent days, and are up more than 8 percent in premarket trading) amid plans for big layoffs. Musk has tried to reassure the market by pushing ahead with a low-cost model. Fully autonomous driving is also crucial. Musk told analysts last week that if investors don’t believe Tesla would “solve” the technological challenge that is autonomous driving, “I think they should not be an investor in the company.”

    The carmaker faces challenges in its second-biggest market. Heavily subsidized Chinese rivals are eating into sales, led by the Warren Buffett-backed BYD, which is vying with Tesla for the crown of world’s biggest E.V. maker. Teslas are banned from many Chinese government sites because of concern about what data the American company collects. President Biden’s move to declare Chinese E.V.s a security threat probably won’t have made it any easier for Tesla in China.

  • A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public. There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem. For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

    I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?

  • The Worst Part of a Wall Street Career May Be Coming to an End

    Artificial intelligence tools can replace much of Wall Street’s entry-level white-collar work, raising tough questions about the future of finance.

    Pulling all-nighters to assemble PowerPoint presentations. Punching numbers into Excel spreadsheets. Finessing the language on esoteric financial documents that may never be read by another soul. Such grunt work has long been a rite of passage in investment banking, an industry at the top of the corporate pyramid that lures thousands of young people every year with the promise of prestige and pay.

    Until now. Generative artificial intelligence — the technology upending many industries with its ability to produce and crunch new data — has landed on Wall Street. And investment banks, long inured to cultural change, are rapidly turning into Exhibit A on how the new technology could not only supplement but supplant entire ranks of workers.

    The jobs most immediately at risk are those performed by analysts at the bottom rung of the investment banking business, who put in endless hours to learn the building blocks of corporate finance, including the intricacies of mergers, public offerings and bond deals. Now, A.I. can do much of that work speedily and with considerably less whining.

    “The structure of these jobs has remained largely unchanged at least for a decade,” said Julia Dhar, head of BCG’s Behavioral Science Lab and a consultant to major banks experimenting with A.I. The inevitable question, as she put it, is “do you need fewer analysts?”