More stories

  • Samsung Workers Strike, the First in the Company’s History

    The South Korean tech giant is at odds with some of its employees as it tries to reassure investors that its memory chip business can meet demand.

    For the first time, workers at Samsung, the conglomerate that dominates the South Korean economy, went on strike on Friday. The action comes as Samsung Electronics fights to regain its edge in the business of making memory chips, a critical component in the advanced artificial intelligence systems that are reshaping longstanding rivalries among global technology companies.

    Workers in Samsung’s chip division were expected to make up the majority of those who would not report to work on Friday for the planned one-day strike. Union representatives said that multiple rounds of negotiations over wage increases and bonuses had broken down.

    “The company doesn’t value the union as a negotiating partner,” said Lee Hyun Kuk, the vice president of the Nationwide Samsung Electronics Union, the largest among five labor groups at the company. The union says that it represents 28,000 members, about one-fifth of Samsung’s global work force, and that nearly 75 percent of them voted in favor of a strike in April. Mr. Lee said the workers aimed “to send a message to the management that we have reached a certain level of maturation.”

    Mr. Lee said that union workers received no bonuses last year, while some had gotten bonuses of as much as 30 percent of their salaries in the past. “It feels like we’ve taken a 30 percent pay cut,” he said. The average union worker earned about 80 million won last year, or around $60,000, before incentives, he said.

  • Can Artificial Intelligence Rethink Art? Should It?

    There is an increasing overlap between art and artificial intelligence. Some celebrate it, while others worry.

    The skeleton seems to be at the epicenter of a mystifying ritual. In a new work by the French artist Pierre Huyghe, robots powered by artificial intelligence film the unburied remains of a man and periodically position objects next to them in a ceremony that only they seem to understand. The scene takes place in the Atacama Desert in Chile, one of the planet’s oldest and driest deserts.

    “Camata” is on view at the Punta della Dogana – Pinault Collection exhibition space, in a show concurrent with the Venice Biennale (through Nov. 24). It’s a stirring example of the increasing overlap between art and artificial intelligence, or A.I.

    Those two vowels, placed side by side, seem to present a menace to many disciplines whose practitioners risk being replaced by smart and autonomous machines. Humanity itself could, at some future point, be replaced by superintelligent machines, according to prominent thinkers such as the Israeli historian Yuval Noah Harari and the physicist Stephen Hawking. So why are artists dabbling with A.I.? And do they risk being extinguished by it?

    “There’s always been an attraction, on the part of artists, for chance: something which is beyond your own control, something that liberates you from the finite subject,” said Daniel Birnbaum, a curator who is the artistic director of the digital art production platform Acute Art and a panelist at the Art for Tomorrow conference this week, convened by the Democracy & Culture Foundation with panels moderated by New York Times journalists.

    Mr. Birnbaum said that Huyghe was among the artists who — rather than “overwhelming us with A.I.-generated nonsense from the internet” — are interested in exploring “places where nature and artificiality merge,” and where “biological systems and artificial systems somehow collaborate, creating visually strange things.”

    In the world at large, Mr. Birnbaum acknowledged, there were “frightening scenarios” in which artificially intelligent systems could control decisions made by governments or the military and pose grave threats to humanity.

  • Deepfake of U.S. Official Appears After Shift on Ukraine Attacks in Russia

    A manufactured video fabricated comments by the State Department spokesman, Matthew Miller.

    A day after U.S. officials said Ukraine could use American weapons in limited strikes inside Russia, a deepfake video of a U.S. spokesman discussing the policy appeared online. The fabricated video, which is drawn from actual footage, shows the State Department spokesman, Matthew Miller, seeming to suggest that the Russian city of Belgorod, just 25 miles north of Ukraine’s border with Russia, was a legitimate target for such strikes.

    The 49-second clip, which has an authentic feel despite telltale clues of manipulation, illustrates the growing threat of disinformation, especially so-called deepfake videos powered by artificial intelligence. U.S. officials said they had no information about the origins of the video. But they are particularly concerned about how Russia might employ such techniques to manipulate opinion around the war in Ukraine, or even American political discourse.

    Belgorod “has essentially no civilians remaining,” the video purports to show Mr. Miller saying at the State Department in response to a reporter’s question, which was also manufactured. “It’s practically full of military targets at this point, and we are seeing the same thing starting in the regions around there.” “Russia needs to get the message that this is unacceptable,” Mr. Miller adds in the video, which has circulated widely enough on Telegram channels followed by residents of Belgorod to draw responses from Russian government officials.

    The claim in the video about Belgorod is completely false. While the city has been the target of some Ukrainian attacks, and its schools operate online, its 340,000 residents have not been evacuated.

  • Justices’ ‘Disturbing’ Ruling in South Carolina Gerrymandering Case

    More from our inbox: Questions for Republicans; The Case Against the Purebred; Chatbot Therapy; Criticism of Israel.

    To the Editor:

    Re “In Top Court, G.O.P. Prevails on Voting Map” (front page, May 24):

    The action of the conservative wing of the Supreme Court, anchoring the 6-to-3 decision to allow the South Carolina Legislature to go forward with redistricting plans that clearly marginalize African American representation in the state — and after a meticulous review by an appellate court had precluded the plan — is disturbing. The persistent erosion of voting rights, and the apparent denial that racism is still part of the fabric of American society, are troubling.

    Surely there can be deference to decisions made by states; concocting “intent” to deny true representative justice, in an apparent quest to return to the “Ozzie and Harriet” days of the 1950s, seems too transparent an attempt to “keep America white again,” as they may perceive the challenge of changing demographics. This particular ruling cries out for the need to expand court membership.

    Raymond Coleman
    Potomac, Md.

    To the Editor:

    Writing for the majority, Justice Samuel Alito presumes that the South Carolina lawmakers acted “in good faith” in gerrymandering the voting district map for the purpose of favoring Republicans, and not for racial reasons, an improbable rationale on its face. Astoundingly, he further reasons that the gerrymander is acceptable because it was for partisan rather than race-based reasons (while acknowledging that redistricting based on race “may be held unconstitutional”).

  • A.I.’s Black Boxes Just Got a Little Less Mysterious

    Researchers at the A.I. company Anthropic claim to have found clues about the inner workings of large language models, possibly helping to prevent their misuse and to curb their potential threats.

    One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work. That’s because large language models, the type of A.I. system that powers ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are. Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence. (A toy sketch below illustrates the idea.)

    One consequence of building A.I. systems this way is that it’s difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. Right now, if a user types “Which American city has the best food?” and a chatbot responds with “Tokyo,” there’s no real way of understanding why the model made that error, or why the next person who asks may receive a different answer.

    And when large language models do misbehave or go off the rails, nobody can really explain why. (I encountered this problem last year, when a Bing chatbot acted in an unhinged way during an interaction with me, and not even top executives at Microsoft could tell me with any certainty what had gone wrong.) The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.
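
    To make that concrete, here is a minimal sketch of what it means to learn next-word patterns from data rather than follow hand-written rules. This is a toy illustration, not Anthropic’s technique and nothing like a production model; the tiny corpus and the function names are invented for the example.

        # Toy next-word predictor, the simplest possible "language model".
        # Real LLMs replace these counts with billions of neural-network
        # weights, but the principle is the same: predict the next token
        # from statistical patterns in the training text.
        from collections import Counter, defaultdict

        # Invented three-sentence "training corpus" for illustration only.
        corpus = (
            "which american city has the best food "
            "which american city has the best weather "
            "new york has the best food"
        )

        # "Training": count how often each word follows each other word.
        follow_counts = defaultdict(Counter)
        words = corpus.split()
        for current, nxt in zip(words, words[1:]):
            follow_counts[current][nxt] += 1

        def predict_next(word: str) -> str:
            """Return the continuation seen most often in training."""
            return follow_counts[word].most_common(1)[0][0]

        print(predict_next("best"))  # -> "food" (seen twice vs. "weather" once)

    No single line of this program decides the answer; it falls out of the counts. Scale those counts up to billions of opaque weights and you get the debugging problem the article describes: when the model answers “Tokyo,” there is no specific bug to find.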

  • Scarlett Johansson Said No, but OpenAI’s Virtual Assistant Sounds Just Like Her

    Last week, the company released a chatbot with a voice option that sounded like the actress, who provided the voice of an A.I. system in the movie “Her.”

    Days before OpenAI demonstrated its new, flirty voice assistant last week, the actress Scarlett Johansson said, Sam Altman, the company’s chief executive, called her agent and asked that she consider licensing her voice for a virtual assistant. It was his second request to the actress in the past year, Ms. Johansson said in a statement on Monday, adding that the reply both times was no.

    Despite those refusals, Ms. Johansson said, OpenAI used a voice that sounded “eerily similar to mine.” She has hired a lawyer and asked OpenAI to stop using the voice, which it called “Sky.”

    OpenAI suspended its release of “Sky” over the weekend. The company said in a blog post on Sunday that “AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.”

    For Ms. Johansson, the episode has been a surreal case of life imitating art. In 2013, she provided the voice for an A.I. system in the Spike Jonze movie “Her.” The film told the story of a lonely introvert seduced by a virtual assistant named Samantha, a tragic commentary on the potential pitfalls of technology as it becomes more realistic. Last week, Mr. Altman appeared to nod to the similarity between OpenAI’s virtual assistant and the film in a post on X with the single word “her.”

  • Scarlett Johansson’s Statement About Her Interactions With Sam Altman

    The actress released a lengthy statement about the company and the similarity of one of its A.I. voices to her own. Here is Scarlett Johansson’s statement on Monday:

    “Last September, I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and A.I. He said he felt that my voice would be comforting to people. After much consideration and for personal reasons, I declined the offer. Nine months later, my friends, family and the general public all noted how much the newest system named ‘Sky’ sounded like me.

    “When I heard the released demo, I was shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference. Mr. Altman even insinuated that the similarity was intentional, tweeting a single word, ‘her’ — a reference to the film in which I voiced a chat system, Samantha, who forms an intimate relationship with a human.

    “Two days before the ChatGPT 4.0 demo was released, Mr. Altman contacted my agent, asking me to reconsider. Before we could connect, the system was out there. As a result of their actions, I was forced to hire legal counsel, who wrote two letters to Mr. Altman and OpenAI, setting out what they had done and asking them to detail the exact process by which they created the ‘Sky’ voice. Consequently, OpenAI reluctantly agreed to take down the ‘Sky’ voice.

    “In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected.”

  • Loneliness Is a Problem That A.I. Won’t Solve

    When I was reporting my ed tech series, I stumbled on one of the most disturbing things I’ve read in years about how technology might interfere with human connection: an article on the website of the venture capital firm Andreessen Horowitz cheerfully headlined “It’s Not a Computer, It’s a Companion!”

    It opens with this quote from someone who has apparently fully embraced the idea of having a chatbot for a significant other: “The great thing about A.I. is that it is constantly evolving. One day it will be better than a real [girlfriend]. One day, the real one will be the inferior choice.” The article goes on to breathlessly outline use cases for “A.I. companions,” suggesting that some future iteration of chatbots could stand in for mental health professionals, relationship coaches or chatty co-workers.

    This week, OpenAI released an update to its ChatGPT chatbot, an indication that the inhuman future foretold by the Andreessen Horowitz story is fast approaching. According to The Washington Post, “The new model, called GPT-4o (‘o’ stands for ‘omni’), can interpret user instructions delivered via text, audio and image — and respond in all three modes as well.” GPT-4o is meant to encourage people to speak to it rather than type into it, The Post reports, as “The updated voice can mimic a wider range of human emotions, and allows the user to interrupt. It chatted with users with fewer delays, and identified an OpenAI executive’s emotion based on a video chat where he was grinning.”

    There have been lots of comparisons between GPT-4o and the 2013 movie “Her,” in which a man falls in love with his A.I. assistant, voiced by Scarlett Johansson. While some observers, including the Times Opinion contributing writer Julia Angwin, who called ChatGPT’s recent update “rather routine,” weren’t particularly impressed, there’s been plenty of hype about the potential for humanlike chatbots to ameliorate emotional challenges, particularly loneliness and social isolation.

    For example, in January, the co-founder of one A.I. company argued that the technology could improve quality of life for isolated older people, writing, “Companionship can be provided in the form of virtual assistants or chatbots, and these companions can engage in conversations, play games or provide information, helping to alleviate feelings of loneliness and boredom.”

    Certainly, there are valuable and beneficial uses for A.I. chatbots — they can be life-changing for people who are visually impaired, for example. But the notion that bots will one day be an adequate substitute for human contact misunderstands what loneliness really is, and doesn’t account for the necessity of human touch.