More stories

  • Deepfake of U.S. Official Appears After Shift on Ukraine Attacks in Russia

    A manufactured video fabricated comments by the State Department spokesman, Matthew Miller.

    A day after U.S. officials said Ukraine could use American weapons in limited strikes inside Russia, a deepfake video of a U.S. spokesman discussing the policy appeared online.

    The fabricated video, which is drawn from actual footage, shows the State Department spokesman, Matthew Miller, seeming to suggest that the Russian city of Belgorod, just 25 miles north of Ukraine’s border with Russia, was a legitimate target for such strikes.

    The 49-second video clip, which has an authentic feel despite telltale clues of manipulation, illustrates the growing threat of disinformation and especially so-called deepfake videos powered by artificial intelligence.

    U.S. officials said they had no information about the origins of the video. But they are particularly concerned about how Russia might employ such techniques to manipulate opinion around the war in Ukraine or even American political discourse.

    Belgorod “has essentially no civilians remaining,” the video purports to show Mr. Miller saying at the State Department in response to a reporter’s question, which was also manufactured. “It’s practically full of military targets at this point, and we are seeing the same thing starting in the regions around there.”

    “Russia needs to get the message that this is unacceptable,” Mr. Miller adds in the video, which has been circulating on Telegram channels followed by residents of Belgorod widely enough to draw responses from Russian government officials.

    The claim in the video about Belgorod is completely false. While it has been the target of some Ukrainian attacks, and its schools operate online, its 340,000 residents have not been evacuated.

  • Justices’ ‘Disturbing’ Ruling in South Carolina Gerrymandering Case

    More from our inbox: Questions for Republicans; The Case Against the Purebred; Chatbot Therapy; Criticism of Israel.

    To the Editor:

    Re “In Top Court, G.O.P. Prevails on Voting Map” (front page, May 24):

    The action of the conservative wing of the Supreme Court, anchoring the 6-to-3 decision to allow the South Carolina Legislature to go forward with redistricting plans that clearly marginalize African American representation in the state — and after a meticulous review by an appellate court to preclude the plan — is disturbing.

    The persistent erosion of voting rights and apparent denial that racism is still part of the fabric of American society are troubling.

    Surely there can be deference to decisions made by states; concocting “intent” to deny true representative justice in an apparent quest to return to the “Ozzie and Harriet” days of the 1950s seems too transparent an attempt to “keep America white again” — as they may perceive the challenge of changing demographics.

    This particular ruling cries out for the need to expand court membership.

    Raymond Coleman
    Potomac, Md.

    To the Editor:

    Writing for the majority, Justice Samuel Alito presumes the South Carolina lawmakers acted “in good faith” in gerrymandering the voting district map for the purpose of favoring the Republicans, and not for racial reasons, an improbable rationale on its face.

    Astoundingly, he further reasons that the gerrymander is acceptable because it was for partisan rather than race-based reasons (acknowledging that redistricting based on race “may be held unconstitutional”).

  • A.I.’s Black Boxes Just Got a Little Less Mysterious

    Researchers at the A.I. company Anthropic claim to have found clues about the inner workings of large language models, possibly helping to prevent their misuse and to curb their potential threats.

    One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work.

    That’s because large language models, the type of A.I. systems that power ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are.

    Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence.

    One consequence of building A.I. systems this way is that it’s difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. Right now, if a user types “Which American city has the best food?” and a chatbot responds with “Tokyo,” there’s no real way of understanding why the model made that error, or why the next person who asks may receive a different answer.

    And when large language models do misbehave or go off the rails, nobody can really explain why. (I encountered this problem last year, when a Bing chatbot acted in an unhinged way during an interaction with me, and not even top executives at Microsoft could tell me with any certainty what had gone wrong.)

    The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.
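    To make the “predict the next words in a sequence” idea concrete, here is a minimal, purely illustrative Python sketch. It is a toy bigram counter, not the architecture behind ChatGPT or the models Anthropic studies; the corpus, function names and prediction rule are all invented for illustration.

    ```python
    from collections import Counter, defaultdict

    def train_bigram_model(text):
        """Count, for each word, which words tend to follow it in the text."""
        words = text.lower().split()
        next_word_counts = defaultdict(Counter)
        for current, following in zip(words, words[1:]):
            next_word_counts[current][following] += 1
        return next_word_counts

    def predict_next(model, word):
        """Return the most frequently observed follower of `word`, or None if unseen."""
        followers = model.get(word.lower())
        if not followers:
            return None
        return followers.most_common(1)[0][0]

    corpus = "the cat sat on the mat and the cat slept and the cat purred"
    model = train_bigram_model(corpus)
    print(predict_next(model, "the"))  # prints "cat", the continuation seen most often
    ```

    Real language models replace these simple counts with billions of learned parameters, which is exactly why, as the article notes, their individual predictions are so hard to trace back to a cause.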

  • Scarlett Johansson Said No, but OpenAI’s Virtual Assistant Sounds Just Like Her

    Last week, the company released a chatbot with an option that sounded like the actress, who provided the voice of an A.I. system in the movie “Her.”

    Days before OpenAI demonstrated its new, flirty voice assistant last week, the actress Scarlett Johansson said, Sam Altman, the company’s chief executive, called her agent and asked that she consider licensing her voice for a virtual assistant.

    It was his second request to the actress in the past year, Ms. Johansson said in a statement on Monday, adding that the reply both times was no.

    Despite those refusals, Ms. Johansson said, OpenAI used a voice that sounded “eerily similar to mine.” She has hired a lawyer and asked OpenAI to stop using a voice it called “Sky.”

    OpenAI suspended its release of “Sky” over the weekend. The company said in a blog post on Sunday that “AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.”

    For Ms. Johansson, the episode has been a surreal case of life imitating art. In 2013, she provided the voice for an A.I. system in the Spike Jonze movie “Her.” The film told the story of a lonely introvert seduced by a virtual assistant named Samantha, a tragic commentary on the potential pitfalls of technology as it becomes more realistic.

    Last week, Mr. Altman appeared to nod to the similarity between OpenAI’s virtual assistant and the film in a post on X with the single word “her.”

  • Scarlett Johansson’s Statement About Her Interactions With Sam Altman

    The actress released a lengthy statement about the company and the similarity of one of its A.I. voices.

    Here is Scarlett Johansson’s statement on Monday:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

  • Loneliness Is a Problem That A.I. Won’t Solve

    When I was reporting my ed tech series, I stumbled on one of the most disturbing things I’ve read in years about how technology might interfere with human connection: an article on the website of the venture capital firm Andreessen Horowitz cheerfully headlined “It’s Not a Computer, It’s a Companion!”

    It opens with this quote from someone who has apparently fully embraced the idea of having a chatbot for a significant other: “The great thing about A.I. is that it is constantly evolving. One day it will be better than a real [girlfriend]. One day, the real one will be the inferior choice.” The article goes on to breathlessly outline use cases for “A.I. companions,” suggesting that some future iteration of chatbots could stand in for mental health professionals, relationship coaches or chatty co-workers.

    This week, OpenAI released an update to its ChatGPT chatbot, an indication that the inhuman future foretold by the Andreessen Horowitz story is fast approaching. According to The Washington Post, “The new model, called GPT-4o (‘o’ stands for ‘omni’), can interpret user instructions delivered via text, audio and image — and respond in all three modes as well.” GPT-4o is meant to encourage people to speak to it rather than type into it, The Post reports, as “The updated voice can mimic a wider range of human emotions, and allows the user to interrupt. It chatted with users with fewer delays, and identified an OpenAI executive’s emotion based on a video chat where he was grinning.”

    There have been lots of comparisons between GPT-4o and the 2013 movie “Her,” in which a man falls in love with his A.I. assistant, voiced by Scarlett Johansson. While some observers, including the Times Opinion contributing writer Julia Angwin, who called ChatGPT’s recent update “rather routine,” weren’t particularly impressed, there’s been plenty of hype about the potential for humanlike chatbots to ameliorate emotional challenges, particularly loneliness and social isolation.

    For example, in January, the co-founder of one A.I. company argued that the technology could improve quality of life for isolated older people, writing, “Companionship can be provided in the form of virtual assistants or chatbots, and these companions can engage in conversations, play games or provide information, helping to alleviate feelings of loneliness and boredom.”

    Certainly, there are valuable and beneficial uses for A.I. chatbots — they can be life-changing for people who are visually impaired, for example. But the notion that bots will one day be an adequate substitute for human contact misunderstands what loneliness really is, and doesn’t account for the necessity of human touch.

  • Google Unveils A.I. for Predicting Behavior of Human Molecules

    The system, AlphaFold3, could accelerate efforts to understand the human body and fight disease.

    Artificial intelligence is giving machines the power to generate videos, write computer code and even carry on a conversation. It is also accelerating efforts to understand the human body and fight disease.

    On Wednesday, Google DeepMind, the tech giant’s central artificial intelligence lab, and Isomorphic Labs, a sister company, unveiled a more powerful version of AlphaFold, an artificial intelligence technology that helps scientists understand the behavior of the microscopic mechanisms that drive the cells in the human body.

    An early version of AlphaFold, released in 2020, solved a puzzle that had bedeviled scientists for more than 50 years: the protein folding problem.

    Proteins are the microscopic molecules that drive the behavior of all living things. These molecules begin as strings of chemical compounds before twisting and folding into three-dimensional shapes that define how they interact with other microscopic mechanisms in the body.

    Biologists spent years or even decades trying to pinpoint the shape of individual proteins. Then AlphaFold came along. When a scientist fed this technology a string of amino acids that make up a protein, it could predict the three-dimensional shape within minutes.

    When DeepMind publicly released AlphaFold a year later, biologists began using it to accelerate drug discovery. Researchers at the University of California, San Francisco, used the technology as they worked to understand the coronavirus and prepare for similar pandemics. Others used it as they struggled to find remedies for malaria and Parkinson’s disease. The hope is that this kind of technology will significantly streamline the creation of new drugs and vaccines.

    A segment of a video from Google DeepMind demonstrating the new AlphaFold3 technology. (Video by Google DeepMind)

    “It tells us a lot more about how the machines of the cell interact,” said John Jumper, a Google DeepMind researcher. “It tells us how this should work and what happens when we get sick.”

    The new version of AlphaFold — AlphaFold3 — extends the technology beyond protein folding. In addition to predicting the shapes of proteins, it can predict the behavior of other microscopic biological mechanisms, including DNA, where the body stores genetic information, and RNA, which transfers information from DNA to proteins.

    “Biology is a dynamic system. You need to understand the interactions between different molecules and structures,” said Demis Hassabis, Google DeepMind’s chief executive and the founder of Isomorphic Labs, which Google also owns. “This is a step in that direction.”

    Demis Hassabis, Google DeepMind’s chief executive and the founder of Isomorphic Labs. (Taylor Hill/Getty Images)

    The company is offering a website where scientists can use AlphaFold3. Other labs, most notably one at the University of Washington, offer similar technology. In a paper released on Tuesday in the scientific journal Nature, Dr. Jumper and his fellow researchers show that AlphaFold3 achieves a level of accuracy well beyond the state of the art.

    The technology could “save months of experimental work and enable research that was previously impossible,” said Deniz Kavi, a co-founder and the chief executive of Tamarind Bio, a start-up that builds technology for accelerating drug discovery. “This represents tremendous promise.”
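    To make concrete the input and output the article describes (a string of amino acids in, a three-dimensional shape out), here is a purely illustrative Python sketch. It is not AlphaFold’s actual interface, which Google DeepMind exposes through a web server; the function name, the example sequence and the placeholder geometry are all invented for illustration.

    ```python
    from typing import List, Tuple

    Coordinate = Tuple[float, float, float]  # an (x, y, z) position in angstroms

    def predict_structure(sequence: str) -> List[Coordinate]:
        """Toy stand-in for a structure predictor (not AlphaFold's real API).

        Input:  a protein given as one-letter amino-acid codes, e.g. "MKTAYIAKQR".
        Output: one (x, y, z) coordinate per residue. A real model infers the
                folded shape; this placeholder just spaces residues roughly
                3.8 angstroms apart along a straight line.
        """
        spacing = 3.8
        return [(i * spacing, 0.0, 0.0) for i, _ in enumerate(sequence)]

    coords = predict_structure("MKTAYIAKQR")
    print(len(coords), "residues; first residue placed at", coords[0])
    ```

    The hard part, of course, is everything the placeholder skips: predicting how the chain actually folds, which is the problem AlphaFold was built to solve and which AlphaFold3 now extends to DNA, RNA and other molecules.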

  • A New Diplomatic Strategy Emerges as Artificial Intelligence Grows

    The new U.S. approach to cyberthreats comes as early optimism about a “global internet” connecting the world has been shattered.

    American and Chinese diplomats plan to meet later this month to begin what amounts to the first, tentative arms control talks over the use of artificial intelligence.

    A year in the making, the talks in Geneva are an attempt to find some common ground on how A.I. will be used and in which situations it could be banned — for example, in the command and control of each country’s nuclear arsenals.

    The fact that Beijing agreed to the discussion at all was something of a surprise, since it has refused any discussion of limiting the size of nuclear arsenals themselves.

    But for the Biden administration, the conversation represents the first serious foray into a new realm of diplomacy, which Secretary of State Antony J. Blinken spoke about on Monday in a speech in San Francisco at the RSA Conference, Silicon Valley’s annual convention on both the technology and the politics of securing cyberspace.

    The Biden administration’s strategy goes beyond the rules of managing cyberconflict and focuses on American efforts to assure control over physical technologies like undersea cables, which connect countries, companies and individual users to cloud services.

    “It’s true that ‘move fast and break things’ is literally the exact opposite of what we try to do at the State Department,” Mr. Blinken told the thousands of cyberexperts, coders and entrepreneurs, a reference to the Silicon Valley mantra about technological disruption.