More stories

  • Target Tests an A.I. Tool to Help Its Workers Aid Shoppers

    The retailer is rolling out a chatbot to help workers answer questions from shoppers — and workers.

    Target is the latest retailer to put generative artificial intelligence tools in the hands of its workers, with the goal of improving the in-store experience for employees and shoppers.

    On Thursday, the retailer said it had built a chatbot, called Store Companion, that would appear as an app on a store worker’s hand-held device. The chatbot can provide guidance on tasks like rebooting a cash register or enrolling a customer in the retailer’s loyalty program. The idea is to give workers “confidence to serve our guests,” Brett Craig, Target’s chief information officer, said in an interview.

    Target is testing the tool in 400 stores and plans to make the app available to most workers across its nearly 2,000 locations by August.

    As the retail industry experiments with generative A.I., some see its potential to eventually make in-store shopping feel more like online shopping, said Roy Singh, the global head of Bain & Co.’s advanced analytics practice, who works with retailers on generative A.I. initiatives.

    Retailers have personalized online shopping for customers with things like predictive technology, which suggests items to buy. Shoppers also see e-commerce as more convenient than having to walk into a store and track down workers. The Target app is meant to help workers answer shoppers’ questions faster.

    Mr. Craig said he is often asked whether these sorts of tools will replace workers. “I believe the relationship between people and technology is so very important,” he said. “We’re here to make sure that they get the right tools to do their work.”

    Walmart recently expanded access to the A.I. tool it had started using in its corporate offices last summer, rolling it out to 13,000 managers of its Sam’s Club stores. While there is significant investment and hype around generative A.I., some retailers have also rolled back failed experiments with the technology.

    “We are still in that growth curve — learning, failing and relearning — and trying to get through adoption at scale,” said Duleep Rodrigo, who leads the U.S. consumer and retail sector for KPMG.

  • Mistral, a French A.I. Start-Up, Is Valued at $6.2 Billion

    Created by alumni of Meta and Google, Mistral is just a year old and has already raised more than $1 billion from investors, leading to an eye-popping valuation.

    Mistral, a French artificial intelligence start-up, said on Tuesday that it had raised 600 million euros, or about $640 million, from investors, a sign of robust interest in a company seen as Europe’s most promising rival to OpenAI and other Silicon Valley A.I. developers.

    Mistral is now valued at €5.8 billion, according to a person familiar with the investment, an eye-popping sum for a company founded just one year ago by alumni of Meta and Google. The company’s valuation has roughly tripled since December, when it raised €385 million.

    Investors in the latest round included the venture capital firms General Catalyst, Andreessen Horowitz and Lightspeed Ventures, as well as Nvidia, Samsung, Salesforce, Cisco, IBM and BNP Paribas.

    Since OpenAI released ChatGPT in November 2022, investors have poured money into generative A.I. technology, which can answer questions in humanlike prose, create images and write software code. Two weeks ago, Elon Musk raised $6 billion for his start-up, xAI. OpenAI has raised roughly $13 billion from Microsoft, while another California start-up, Anthropic, has raised more than $7.3 billion.

    Mistral has positioned itself as a European alternative to the larger American tech giants and boasts that its products, like the chatbot Le Chat, are strong in a wider range of languages, including English. In contrast to firms like OpenAI and Anthropic, Mistral subscribes to the view that A.I. software should be open source, meaning that the programming code should be available for anyone to download, copy, tweak and repurpose. Meta has also made its A.I. code open source.

    In a sign of A.I.’s growing geopolitical significance, President Emmanuel Macron of France and others in the French government have given the company their full-throated support. Mr. Macron has called Mistral a sign of “French genius” and invited the company’s chief executive, Arthur Mensch, to dinner at the presidential palace.

    On Tuesday, Mr. Mensch said in a statement that the latest investment would help keep the company independent and fuel its expansion.

  • Hey, Siri! Let’s Talk About How Apple Is Giving You an A.I. Makeover.

    Apple, a latecomer to artificial intelligence, has struck a deal with OpenAI and developed tools to improve its Siri voice assistant, which it is set to showcase on Monday.

    Each June, Apple unveils its newest software features for the iPhone at its futuristic Silicon Valley campus. But at its annual developer conference on Monday, the company will shine a spotlight on a feature that isn’t new: Siri, its talking assistant, which has been around for more than a decade.

    What will be different this time is the technology powering Siri: generative artificial intelligence.

    In recent months, Adrian Perica, Apple’s vice president of corporate development, has helped spearhead an effort to bring generative A.I. to the masses, said two people with knowledge of the work, who asked for anonymity because of the sensitivity of the effort.

    Mr. Perica and his colleagues have talked with leading A.I. companies, including Google and OpenAI, seeking a partner to help Apple deliver generative A.I. across its business. Apple recently struck a deal with OpenAI, which makes the ChatGPT chatbot, to fold its technology into the iPhone, two people familiar with the agreement said. It was still in talks with Google as of last week, two people familiar with the conversations said.

    That has helped lead to a more conversational and versatile version of Siri, which will be shown on Monday, three people familiar with the company said. Siri will be powered by a generative A.I. system developed by Apple, which will allow the talking assistant to chat rather than just respond to one question at a time. Apple will market its new A.I. capabilities as Apple Intelligence, a person familiar with the marketing plan said.

    Apple, OpenAI and Google declined to comment. Apple’s agreement with OpenAI was previously reported by The Information and Bloomberg, which also reported the name of Apple’s A.I. system.

  • Samsung Workers Strike, the First in the Company’s History

    The South Korean tech giant is at odds with some of its employees as it tries to reassure investors that its memory chip business can meet demand.

    For the first time, workers at Samsung, the conglomerate that dominates the South Korean economy, went on strike on Friday.

    The action comes as Samsung Electronics fights to regain its edge in the business of making memory chips, a critical component in the advanced artificial intelligence systems that are reshaping longstanding rivalries among global technology companies.

    Workers in Samsung’s chip division were expected to make up the majority of those not reporting to work on Friday for the planned one-day strike. Union representatives said that multiple rounds of negotiations over wage increases and bonuses had broken down.

    “The company doesn’t value the union as a negotiating partner,” said Lee Hyun Kuk, the vice president of the Nationwide Samsung Electronics Union, the largest among five labor groups at the company. It says that it represents 28,000 members, about one-fifth of Samsung’s global work force, and that nearly 75 percent voted in favor of a strike in April.

    Mr. Lee said the workers aimed “to send a message to the management that we have reached a certain level of maturation.”

    He said that union workers received no bonuses last year, while some had gotten bonuses of as much as 30 percent of their salaries in the past. “It feels like we’ve taken a 30 percent pay cut,” he said. The average union worker earned about 80 million won last year, or around $60,000, before incentives, he said.

  • Can Artificial Intelligence Rethink Art? Should It?

    There is an increasing overlap between art and artificial intelligence. Some celebrate it, while others worry.

    The skeleton seems to be at the epicenter of a mystifying ritual. In a new work by the French artist Pierre Huyghe, robots powered by artificial intelligence film the unburied remains of a man, and periodically position objects next to them in a ceremony that only they seem to understand. The scene takes place in the Atacama Desert in Chile, one of the planet’s oldest and driest deserts.

    “Camata” is on view at the Punta della Dogana – Pinault Collection exhibition space, in a show concurrent with the Venice Biennale (through Nov. 24). It’s a stirring example of the increasing overlap between art and artificial intelligence, or A.I.

    Those two vowels, placed side by side, seem to present a menace to many disciplines whose practitioners risk being replaced by smart and autonomous machines. Humanity itself could, at some future point, be replaced by superintelligent machines, according to some globally renowned thinkers, such as the Israeli historian Yuval Noah Harari and the physicist Stephen Hawking.

    So why are artists dabbling with A.I.? And do they risk being extinguished by it?

    “There’s always been an attraction, on the part of artists, for chance: something which is beyond your own control, something that liberates you from the finite subject,” said Daniel Birnbaum, a curator who is the artistic director of the digital art production platform Acute Art and a panelist at the Art for Tomorrow conference, convened this week by the Democracy & Culture Foundation, with panels moderated by New York Times journalists.

    Mr. Birnbaum said that Huyghe was among the artists who — rather than “overwhelming us with A.I.-generated nonsense from the internet” — are interested in exploring “places where nature and artificiality merge,” and where “biological systems and artificial systems somehow collaborate, creating visually strange things.”

    In the world at large, Mr. Birnbaum acknowledged, there were “frightening scenarios” in which artificially intelligent systems could control decisions made by governments or the military, and pose grave threats to humanity.

  • Deepfake of U.S. Official Appears After Shift on Ukraine Attacks in Russia

    A manipulated video fabricated comments by the State Department spokesman, Matthew Miller.

    A day after U.S. officials said Ukraine could use American weapons in limited strikes inside Russia, a deepfake video of a U.S. spokesman discussing the policy appeared online.

    The fabricated video, which is drawn from actual footage, shows the State Department spokesman, Matthew Miller, seeming to suggest that the Russian city of Belgorod, just 25 miles north of Ukraine’s border with Russia, was a legitimate target for such strikes.

    The 49-second video clip, which has an authentic feel despite telltale clues of manipulation, illustrates the growing threat of disinformation, and especially of so-called deepfake videos powered by artificial intelligence.

    U.S. officials said they had no information about the origins of the video. But they are particularly concerned about how Russia might employ such techniques to manipulate opinion around the war in Ukraine, or even American political discourse.

    Belgorod “has essentially no civilians remaining,” the video purports to show Mr. Miller saying at the State Department in response to a reporter’s question, which was also manufactured. “It’s practically full of military targets at this point, and we are seeing the same thing starting in the regions around there.”

    “Russia needs to get the message that this is unacceptable,” Mr. Miller adds in the video, which has been circulating on Telegram channels followed by residents of Belgorod widely enough to draw responses from Russian government officials.

    The claim in the video about Belgorod is completely false. While the city has been the target of some Ukrainian attacks, and its schools operate online, its 340,000 residents have not been evacuated.

  • Justices’ ‘Disturbing’ Ruling in South Carolina Gerrymandering Case

    More from our inbox: Questions for Republicans; The Case Against the Purebred; Chatbot Therapy; Criticism of Israel

    To the Editor:

    Re “In Top Court, G.O.P. Prevails on Voting Map” (front page, May 24):

    The action of the conservative wing of the Supreme Court, anchoring the 6-to-3 decision to allow the South Carolina Legislature to go forward with redistricting plans that clearly marginalize African American representation in the state — and after a meticulous review by an appellate court had precluded the plan — is disturbing.

    The persistent erosion of voting rights, and the apparent denial that racism is still part of the fabric of American society, are troubling.

    Surely there can be deference to decisions made by states; but concocting “intent” to deny true representative justice, in an apparent quest to return to the “Ozzie and Harriet” days of the 1950s, seems too transparent an attempt to “keep America white again” in the face of changing demographics.

    This particular ruling cries out for the need to expand the court’s membership.

    Raymond Coleman
    Potomac, Md.

    To the Editor:

    Writing for the majority, Justice Samuel Alito presumes that the South Carolina lawmakers acted “in good faith” in gerrymandering the voting district map for the purpose of favoring the Republicans, and not for racial reasons, an improbable rationale on its face.

    Astoundingly, he further reasons that the gerrymander is acceptable because it was done for partisan rather than race-based reasons (while acknowledging that redistricting based on race “may be held unconstitutional”).

  • A.I.’s Black Boxes Just Got a Little Less Mysterious

    Researchers at the A.I. company Anthropic claim to have found clues about the inner workings of large language models, possibly helping to prevent their misuse and to curb their potential threats.

    One of the weirder, more unnerving things about today’s leading artificial intelligence systems is that nobody — not even the people who build them — really knows how the systems work.

    That’s because large language models, the type of A.I. system that powers ChatGPT and other popular chatbots, are not programmed line by line by human engineers, as conventional computer programs are. Instead, these systems essentially learn on their own, by ingesting massive amounts of data and identifying patterns and relationships in language, then using that knowledge to predict the next words in a sequence.

    One consequence of building A.I. systems this way is that it’s difficult to reverse-engineer them or to fix problems by identifying specific bugs in the code. Right now, if a user types “Which American city has the best food?” and a chatbot responds with “Tokyo,” there’s no real way of understanding why the model made that error, or why the next person who asks may receive a different answer.

    And when large language models do misbehave or go off the rails, nobody can really explain why. (I encountered this problem last year, when a Bing chatbot acted in an unhinged way during an interaction with me, and not even top executives at Microsoft could tell me with any certainty what had gone wrong.)

    The inscrutability of large language models is not just an annoyance but a major reason some researchers fear that powerful A.I. systems could eventually become a threat to humanity.