More stories

  • Meta in Talks to Use Voices of Judi Dench, Awkwafina and Others for A.I.

    If deals are struck, Meta may incorporate the actors’ voices into a digital assistant product called MetaAI, people with knowledge of the effort said.

    Meta is in discussions with Awkwafina, Judi Dench and other actors and influencers for the right to incorporate their voices into a digital assistant product called MetaAI, according to three people with knowledge of the talks, as the company pushes to build more products that feature artificial intelligence.

    Apart from Ms. Dench and Awkwafina, Meta is in talks with the comedian Keegan-Michael Key and other celebrities, said the people, who spoke on the condition of anonymity because the discussions are private. They added that all of Hollywood’s top talent agencies were involved in negotiations with the tech giant.

    The talks remain fluid, and it is unclear which actors and influencers, if any, may sign on to the project, the people said. If the parties come to an agreement, Meta could pay millions of dollars in fees to the actors.

    A Meta spokesman declined to comment. The discussions were reported earlier by Bloomberg.

    Meta, which owns Facebook, Instagram and WhatsApp, has invested heavily in artificial intelligence, which the biggest tech companies are racing to develop and lead. Meta has plowed billions into weaving the technology into its social networking apps and advertising business, including by creating artificially intelligent characters that could chat through text across its messaging apps.

    On Wednesday, Mark Zuckerberg, Meta’s chief executive, increased how much his company would spend on A.I. and other expenses this year to at least $37 billion, up from $30 billion at the beginning of 2024. Mr. Zuckerberg said he would rather build too fast “rather than too late,” to prevent his competitors from gaining an edge in the A.I. race.

    One area of A.I. that is rapidly emerging is chatbots with voice capabilities, which act as virtual assistants. In May, OpenAI, a leading A.I. company, unveiled a version of its ChatGPT chatbot that could receive and respond to voice commands, images and videos. It was part of a wider effort to combine conversational chatbots with voice assistants like the Google Assistant and Apple’s Siri.

  • Hey, Siri! Let’s Talk About How Apple Is Giving You an A.I. Makeover.

    Apple, a latecomer to artificial intelligence, has struck a deal with OpenAI and developed tools to improve its Siri voice assistant, which it is set to showcase on Monday.

    Each June, Apple unveils its newest software features for the iPhone at its futuristic Silicon Valley campus. But at its annual developer conference on Monday, the company will shine a spotlight on a feature that isn’t new: Siri, its talking assistant, which has been around for more than a decade.

    What will be different this time is the technology powering Siri: generative artificial intelligence.

    In recent months, Adrian Perica, Apple’s vice president of corporate development, has helped spearhead an effort to bring generative A.I. to the masses, said two people with knowledge of the work, who asked for anonymity because of the sensitivity of the effort.

    Mr. Perica and his colleagues have talked with leading A.I. companies, including Google and OpenAI, seeking a partner to help Apple deliver generative A.I. across its business. Apple recently struck a deal with OpenAI, which makes the ChatGPT chatbot, to fold its technology into the iPhone, two people familiar with the agreement said. It was still in talks with Google as of last week, two people familiar with the conversations said.

    That has helped lead to a more conversational and versatile version of Siri, which will be shown on Monday, three people familiar with the company said. Siri will be powered by a generative A.I. system developed by Apple, which will allow the talking assistant to chat rather than just respond to one question at a time. Apple will market its new A.I. capabilities as Apple Intelligence, a person familiar with the marketing plan said.

    Apple, OpenAI and Google declined to comment. Apple’s agreement with OpenAI was previously reported by The Information and Bloomberg, which also reported the name for Apple’s A.I. system.

  • A.I.’s Black Boxes Just Got a Little Less Mysterious

    Researchers at the A.I. company Anthropic claim to have found clues about the inner workings of large language models, possibly helping to prevent their misuse and to curb their potential threats.

    One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work.

    That’s because large language models, the type of A.I. systems that power ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are.

    Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence.

    One consequence of building A.I. systems this way is that it’s difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. Right now, if a user types “Which American city has the best food?” and a chatbot responds with “Tokyo,” there’s no real way of understanding why the model made that error, or why the next person who asks may receive a different answer.

    And when large language models do misbehave or go off the rails, nobody can really explain why. (I encountered this problem last year, when a Bing chatbot acted in an unhinged way during an interaction with me, and not even top executives at Microsoft could tell me with any certainty what had gone wrong.)

    The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.

  • Scarlett Johansson Said No, but OpenAI’s Virtual Assistant Sounds Just Like Her

    Last week, the company released a chatbot with an option that sounded like the actress, who provided the voice of an A.I. system in the movie “Her.”

    Days before OpenAI demonstrated its new, flirty voice assistant last week, the actress Scarlett Johansson said, Sam Altman, the company’s chief executive, called her agent and asked that she consider licensing her voice for a virtual assistant.

    It was his second request to the actress in the past year, Ms. Johansson said in a statement on Monday, adding that the reply both times was no.

    Despite those refusals, Ms. Johansson said, OpenAI used a voice that sounded “eerily similar to mine.” She has hired a lawyer and asked OpenAI to stop using a voice it called “Sky.”

    OpenAI suspended its release of “Sky” over the weekend. The company said in a blog post on Sunday that “AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.”

    For Ms. Johansson, the episode has been a surreal case of life imitating art. In 2013, she provided the voice for an A.I. system in the Spike Jonze movie “Her.” The film told the story of a lonely introvert seduced by a virtual assistant named Samantha, a tragic commentary on the potential pitfalls of technology as it becomes more realistic.

    Last week, Mr. Altman appeared to nod to the similarity between OpenAI’s virtual assistant and the film in a post on X with the single word “her.”

  • Scarlett Johansson’s Statement About Her Interactions With Sam Altman

    The actress released a lengthy statement about the company and the similarity of one of its A.I. voices. Here is Scarlett Johansson’s statement on Monday:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

  • Loneliness Is a Problem That A.I. Won’t Solve

    When I was reporting my ed tech series, I stumbled on one of the most disturbing things I’ve read in years about how technology might interfere with human connection: an article on the website of the venture capital firm Andreessen Horowitz cheerfully headlined “It’s Not a Computer, It’s a Companion!”

    It opens with this quote from someone who has apparently fully embraced the idea of having a chatbot for a significant other: “The great thing about A.I. is that it is constantly evolving. One day it will be better than a real [girlfriend]. One day, the real one will be the inferior choice.” The article goes on to breathlessly outline use cases for “A.I. companions,” suggesting that some future iteration of chatbots could stand in for mental health professionals, relationship coaches or chatty co-workers.

    This week, OpenAI released an update to its ChatGPT chatbot, an indication that the inhuman future foretold by the Andreessen Horowitz story is fast approaching. According to The Washington Post, “The new model, called GPT-4o (“o” stands for “omni”), can interpret user instructions delivered via text, audio and image — and respond in all three modes as well.” GPT-4o is meant to encourage people to speak to it rather than type into it, The Post reports, as “The updated voice can mimic a wider range of human emotions, and allows the user to interrupt. It chatted with users with fewer delays, and identified an OpenAI executive’s emotion based on a video chat where he was grinning.”

    There have been lots of comparisons between GPT-4o and the 2013 movie “Her,” in which a man falls in love with his A.I. assistant, voiced by Scarlett Johansson. While some observers, including the Times Opinion contributing writer Julia Angwin, who called ChatGPT’s recent update “rather routine,” weren’t particularly impressed, there’s been plenty of hype about the potential for humanlike chatbots to ameliorate emotional challenges, particularly loneliness and social isolation.

    For example, in January, the co-founder of one A.I. company argued that the technology could improve quality of life for isolated older people, writing, “Companionship can be provided in the form of virtual assistants or chatbots, and these companions can engage in conversations, play games or provide information, helping to alleviate feelings of loneliness and boredom.”

    Certainly, there are valuable and beneficial uses for A.I. chatbots — they can be life-changing for people who are visually impaired, for example. But the notion that bots will one day be an adequate substitute for human contact misunderstands what loneliness really is, and doesn’t account for the necessity of human touch.

  • A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public. There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

    For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

    I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?

  • Elon Musk to Open Source Grok Chatbot in Latest A.I. War Escalation

    Mr. Musk’s move to open up the code behind Grok is the latest volley in the fight over the future of A.I., and follows his lawsuit against OpenAI over the same issue.

    Elon Musk released the raw computer code behind his version of an artificial intelligence chatbot on Sunday, an escalation by one of the world’s richest men in a battle to control the future of A.I.

    Grok, which is designed to give snarky replies styled after the science-fiction novel “The Hitchhiker’s Guide to the Galaxy,” is a product from xAI, the company Mr. Musk founded last year. While xAI is a separate entity from X, its technology has been integrated into the social media platform and is trained on users’ posts. Users who subscribe to X’s premium features can ask Grok questions and receive responses.

    By opening the code up for everyone to view and use — known as open sourcing — Mr. Musk waded further into a heated debate in the A.I. world over whether doing so could help make the technology safer, or simply open it up to misuse.

    Mr. Musk, a self-proclaimed proponent of open sourcing, did the same with X’s recommendation algorithm last year, but he has not updated it since.

    “Still work to do, but this platform is already by far the most transparent & truth-seeking (not a high bar tbh),” Mr. Musk posted on Sunday in response to a comment on open sourcing X’s recommendation algorithm.

    The move to open-source the chatbot code is the latest volley between Mr. Musk and ChatGPT’s creator, OpenAI, which the mercurial billionaire recently sued, accusing it of breaking a promise to do the same. Mr. Musk, who helped found and fund OpenAI before departing several years later, has argued that such an important technology should not be controlled solely by tech giants like Google and Microsoft, which is a close partner of OpenAI.