More stories

  • Hey, Siri! Let’s Talk About How Apple Is Giving You an A.I. Makeover.

    Apple, a latecomer to artificial intelligence, has struck a deal with OpenAI and developed tools to improve its Siri voice assistant, which it is set to showcase on Monday.

    Each June, Apple unveils its newest software features for the iPhone at its futuristic Silicon Valley campus. But at its annual developer conference on Monday, the company will shine a spotlight on a feature that isn’t new: Siri, its talking assistant, which has been around for more than a decade.

    What will be different this time is the technology powering Siri: generative artificial intelligence.

    In recent months, Adrian Perica, Apple’s vice president of corporate development, has helped spearhead an effort to bring generative A.I. to the masses, said two people with knowledge of the work, who asked for anonymity because of the sensitivity of the effort.

    Mr. Perica and his colleagues have talked with leading A.I. companies, including Google and OpenAI, seeking a partner to help Apple deliver generative A.I. across its business. Apple recently struck a deal with OpenAI, which makes the ChatGPT chatbot, to fold its technology into the iPhone, two people familiar with the agreement said. It was still in talks with Google as of last week, two people familiar with the conversations said.

    That has helped lead to a more conversational and versatile version of Siri, which will be shown on Monday, three people familiar with the company said. Siri will be powered by a generative A.I. system developed by Apple, which will allow the talking assistant to chat rather than just respond to one question at a time. Apple will market its new A.I. capabilities as Apple Intelligence, a person familiar with the marketing plan said.

    Apple, OpenAI and Google declined to comment. Apple’s agreement with OpenAI was previously reported by The Information and Bloomberg, which also reported the name for Apple’s A.I. system.

  • Samsung Workers Strike, the First in the Company’s History

    The South Korean tech giant is at odds with some of its employees as it is trying to reassure investors that its memory chip business can meet demand.

    For the first time, workers at Samsung, the conglomerate that dominates the South Korean economy, went on strike on Friday.

    The action comes as Samsung Electronics fights to regain its edge in the business of making memory chips, a critical component in the advanced artificial intelligence systems that are reshaping longstanding rivalries among global technology companies.

    Workers in Samsung’s chip division were expected to make up the majority of those not reporting to work on Friday for the planned one-day strike. Union representatives said that multiple rounds of negotiations over wage increases and bonuses had broken down.

    “The company doesn’t value the union as a negotiating partner,” said Lee Hyun Kuk, the vice president of the Nationwide Samsung Electronics Union, the largest among five labor groups at the company. It says that it represents 28,000 members, about one-fifth of Samsung’s global work force, and that nearly 75 percent of them voted in favor of a strike in April.

    Mr. Lee said the workers aimed “to send a message to the management that we have reached a certain level of maturation.”

    He also said that union workers received no bonuses last year, while some had gotten bonuses of as much as 30 percent of their salaries in the past. “It feels like we’ve taken a 30 percent pay cut,” he said. The average union worker earned about 80 million won last year, or around $60,000, before incentives, he said.

  • Can Artificial Intelligence Rethink Art? Should It?

    There is an increasing overlap between art and artificial intelligence. Some celebrate it, while others worry.

    The skeleton seems to be at the epicenter of a mystifying ritual.

    In a new work by the French artist Pierre Huyghe, robots powered by artificial intelligence film the unburied remains of a man, and periodically position objects next to them in a ceremony that only they seem to understand. The scene takes place in the Atacama Desert in Chile, one of the planet’s oldest and driest deserts.

    “Camata” is on view at the Punta della Dogana – Pinault Collection exhibition space, in a show concurrent with the Venice Biennale (through Nov. 24). It’s a stirring example of the increasing overlap between art and artificial intelligence, or A.I.

    Those two vowels, placed side by side, seem to present a menace to many disciplines whose practitioners risk being replaced by smart and autonomous machines. Humanity itself could, at some future point, be replaced by superintelligent machines, according to some globally renowned thinkers such as the Israeli historian Yuval Noah Harari and the physicist Stephen Hawking.

    So why are artists dabbling with A.I.? And do they risk being extinguished by it?

    “There’s always been an attraction, on the part of artists, for chance: something which is beyond your own control, something that liberates you from the finite subject,” said Daniel Birnbaum, a curator who is the artistic director of the digital art production platform Acute Art and a panelist at the Art for Tomorrow conference here this week, convened by the Democracy & Culture Foundation with panels moderated by New York Times journalists.

    Birnbaum said that Huyghe was among the artists who — rather than “overwhelming us with A.I.-generated nonsense from the internet” — are interested in exploring “places where nature and artificiality merge,” and where “biological systems and artificial systems somehow collaborate, creating visually strange things.”

    In the world at large, Birnbaum acknowledged, there were “frightening scenarios” whereby artificially intelligent systems could control decisions made by governments or the military, and pose grave threats to humanity.

  • Deepfake of U.S. Official Appears After Shift on Ukraine Attacks in Russia

    A manufactured video fabricated comments by the State Department spokesman, Matthew Miller.

    A day after U.S. officials said Ukraine could use American weapons in limited strikes inside Russia, a deepfake video of a U.S. spokesman discussing the policy appeared online.

    The fabricated video, which is drawn from actual footage, shows the State Department spokesman, Matthew Miller, seeming to suggest that the Russian city of Belgorod, just 25 miles north of Ukraine’s border with Russia, was a legitimate target for such strikes.

    The 49-second video clip, which has an authentic feel despite telltale clues of manipulation, illustrates the growing threat of disinformation, and especially of so-called deepfake videos powered by artificial intelligence.

    U.S. officials said they had no information about the origins of the video. But they are particularly concerned about how Russia might employ such techniques to manipulate opinion around the war in Ukraine or even American political discourse.

    Belgorod “has essentially no civilians remaining,” the video purports to show Mr. Miller saying at the State Department in response to a reporter’s question, which was also manufactured. “It’s practically full of military targets at this point, and we are seeing the same thing starting in the regions around there.”

    “Russia needs to get the message that this is unacceptable,” Mr. Miller adds in the video, which has been circulating on Telegram channels followed by residents of Belgorod widely enough to draw responses from Russian government officials.

    The claim in the video about Belgorod is completely false. While it has been the target of some Ukrainian attacks, and its schools operate online, its 340,000 residents have not been evacuated.

  • Justices’ ‘Disturbing’ Ruling in South Carolina Gerrymandering Case

    More from our inbox: Questions for Republicans; The Case Against the Purebred; Chatbot Therapy; Criticism of Israel

    To the Editor:

    Re “In Top Court, G.O.P. Prevails on Voting Map” (front page, May 24):

    The action of the conservative wing of the Supreme Court, anchoring the 6-to-3 decision to allow the South Carolina Legislature to go forward with redistricting plans that clearly marginalize African American representation in the state — and after a meticulous review by an appellate court to preclude the plan — is disturbing.

    The persistent erosion of voting rights and apparent denial that racism is still part of the fabric of American society are troubling.

    Surely there can be deference to decisions made by states; concocting “intent” to deny true representative justice in an apparent quest to return to the “Ozzie and Harriet” days of the 1950s seems too transparent an attempt to “keep America white again” — as they may perceive the challenge of changing demographics.

    This particular ruling cries out for the need to expand court membership.

    Raymond Coleman
    Potomac, Md.

    To the Editor:

    Writing for the majority, Justice Samuel Alito presumes the South Carolina lawmakers acted “in good faith” in gerrymandering the voting district map for the purpose of favoring the Republicans, and not for racial reasons, an improbable rationale on its face.

    Astoundingly, he further reasons that the gerrymander is acceptable because it was for partisan rather than race-based reasons (acknowledging that redistricting based on race “may be held unconstitutional”).

  • A.I.’s Black Boxes Just Got a Little Less Mysterious

    Researchers at the A.I. company Anthropic claim to have found clues about the inner workings of large language models, possibly helping to prevent their misuse and to curb their potential threats.

    One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work.

    That’s because large language models, the type of A.I. systems that power ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are.

    Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence. (A toy sketch of this next-word-prediction idea follows this story.)

    One consequence of building A.I. systems this way is that it’s difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. Right now, if a user types “Which American city has the best food?” and a chatbot responds with “Tokyo,” there’s no real way of understanding why the model made that error, or why the next person who asks may receive a different answer.

    And when large language models do misbehave or go off the rails, nobody can really explain why. (I encountered this problem last year, when a Bing chatbot acted in an unhinged way during an interaction with me, and not even top executives at Microsoft could tell me with any certainty what had gone wrong.)

    The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.
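    A minimal sketch of the next-word-prediction idea described above, using only the Python standard library: it counts which word follows which in a tiny invented corpus and then predicts the most frequent successor. The corpus and function names here are hypothetical, chosen purely for illustration; this is not Anthropic’s method, and real systems like ChatGPT use neural networks with billions of learned parameters rather than a count table, though the core task of guessing the next word is the same.

```python
# Toy next-word predictor: learn word-to-word statistics from raw text,
# then predict the most likely next word. For illustration only.
from collections import Counter, defaultdict

# A tiny, invented "training corpus" (hypothetical, for illustration).
corpus = (
    "which american city has the best food . "
    "tokyo is a city with great food . "
    "new york is an american city with great food ."
)

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("american"))  # prints "city"
print(predict_next("great"))     # prints "food"
```

    The point of the analogy: even in this toy, the “knowledge” lives in accumulated statistics rather than in hand-written rules, so there is no single line of code to blame for a wrong answer. Scale that count table up to billions of learned parameters and the article’s question, why did the model answer “Tokyo,” becomes genuinely hard to answer.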

  • Scarlett Johansson Said No, but OpenAI’s Virtual Assistant Sounds Just Like Her

    Last week, the company released a chatbot with an option that sounded like the actress, who provided the voice of an A.I. system in the movie “Her.”

    Days before OpenAI demonstrated its new, flirty voice assistant last week, the actress Scarlett Johansson said, Sam Altman, the company’s chief executive, called her agent and asked that she consider licensing her voice for a virtual assistant.

    It was his second request to the actress in the past year, Ms. Johansson said in a statement on Monday, adding that the reply both times was no.

    Despite those refusals, Ms. Johansson said, OpenAI used a voice that sounded “eerily similar to mine.” She has hired a lawyer and asked OpenAI to stop using a voice it called “Sky.”

    OpenAI suspended its release of “Sky” over the weekend. The company said in a blog post on Sunday that “AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.”

    For Ms. Johansson, the episode has been a surreal case of life imitating art. In 2013, she provided the voice for an A.I. system in the Spike Jonze movie “Her.” The film told the story of a lonely introvert seduced by a virtual assistant named Samantha, a tragic commentary on the potential pitfalls of technology as it becomes more realistic.

    Last week, Mr. Altman appeared to nod to the similarity between OpenAI’s virtual assistant and the film in a post on X with the single word “her.”

  • Scarlett Johansson’s Statement About Her Interactions With Sam Altman

    The actress released a lengthy statement about the company and the similarity of one of its A.I. voices.

    Here is Scarlett Johansson’s statement on Monday:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”