More stories

  • Why Wouldn’t ChatGPT Say ‘David Mayer’?

    A bizarre saga in which users noticed the chatbot refused to say “David Mayer” raised questions about privacy and A.I., with few clear answers.

    Across the final years of his life, David Mayer, a theater professor living in Manchester, England, faced the cascading consequences of an unfortunate coincidence: A dead Chechen rebel on a terror watch list had once used Mr. Mayer’s name as an alias.

    The real Mr. Mayer had travel plans thwarted, financial transactions frozen and crucial academic correspondence blocked, his family said. The frustrations plagued him until his death in 2023, at age 94.

    But this month, his fight for his identity edged back into the spotlight when eagle-eyed users noticed one particular name was sending OpenAI’s ChatGPT bot into shutdown.

    David Mayer.

    Users’ efforts to prompt the bot to say “David Mayer” in a variety of ways were instead resulting in error messages, or the bot would simply refuse to respond. It’s unclear why the name was kryptonite for the bot service, and OpenAI would not say whether the professor’s plight was related to ChatGPT’s issue with the name.

    But the saga underscores some of the prickliest questions about generative A.I. and the chatbots it powers: Why did that name knock the chatbot out? Who, or what, is making those decisions? And who is responsible for the mistakes?

    “This was something that he would’ve almost enjoyed, because it would have vindicated the effort he put in to trying to deal with it,” Mr. Mayer’s daughter, Catherine, said of the debacle in an interview.

  • Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying

    The “best bromance in tech” has had a reality check as OpenAI has tried to change its deal with Microsoft and the software maker has tried to hedge its bet on the start-up.

    Last fall, Sam Altman, OpenAI’s chief executive, asked his counterpart at Microsoft, Satya Nadella, if the tech giant would invest billions of dollars in the start-up.

    Microsoft had already pumped $13 billion into OpenAI, and Mr. Nadella was initially willing to keep the cash spigot flowing. But after OpenAI’s board of directors briefly ousted Mr. Altman last November, Mr. Nadella and Microsoft reconsidered, according to four people familiar with the talks who spoke on the condition of anonymity.

    Over the next few months, Microsoft wouldn’t budge as OpenAI, which expects to lose $5 billion this year, continued to ask for more money and more computing power to build and run its A.I. systems.

    Mr. Altman once called OpenAI’s partnership with Microsoft “the best bromance in tech,” but ties between the companies have started to fray. Financial pressure on OpenAI, concern about its stability and disagreements between employees of the two companies have strained their five-year partnership, according to interviews with 19 people familiar with the relationship between the companies.

    That tension demonstrates a key challenge for A.I. start-ups: They are dependent on the world’s tech giants for money and computing power because those big companies control the massive cloud computing systems the small outfits need to develop A.I.

    No pairing displays this dynamic better than Microsoft and OpenAI, the maker of the ChatGPT chatbot. When OpenAI got its giant investment from Microsoft, it agreed to an exclusive deal to buy computing power from Microsoft and work closely with the tech giant on new A.I.

  • Doctors, A.I. and Empathy for Patients

    More from our inbox: Breast Cancer Screening; Walz’s Missteps; Mental Health Support for Schoolchildren

    To the Editor:

    Re “ChatGPT’s Bedside Manner Is Better Than Mine,” by Jonathan Reisman (Opinion guest essay, Oct. 9):

    Dr. Reisman notes that ChatGPT’s answers to patient questions have been rated as more empathetic than those written by actual doctors. This should not be a call for doctors to surrender our human role to A.I. To the contrary, we need to continue to improve our communication skills.

    For the past 25 years, I have been facilitating seminars in doctor-patient communication. The skills to communicate bad news listed by Dr. Reisman are exactly the techniques that we suggest to our medical students. However, doctors can avoid the temptation to surrender their “humanity to a script” as if it were “just another day at work.”

    Techniques are a valuable guide, but the real work consists of carefully listening to the responses and their emotional content, and crafting new words and phrases that speak to the unique patient’s confusion, fear and distress.

    In my experience, patients know when we are reciting a script, and when we are paying attention to their thoughts and feelings. Unlike A.I., and especially when conversations are matters of life and death, we can reach into the depths of our humanity to feel and communicate empathy and compassion toward our patients.

    Neil S. Prose
    Durham, N.C.

    To the Editor:

    Mention the words “A.I.” and “doctoring” to most physicians in the same sentence, and the immediate reaction is often skepticism or fear.

    As Dr. Jonathan Reisman noted in his essay, A.I. has shown a remarkable ability to mimic human empathy in encounters with patients. This is one reason many practicing physicians worry that A.I. may replace doctors eventually.

  • Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle.

    The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proven that its answer was right. The bot generated a detailed computer program that verified “No” was the correct response.

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas.

    Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.
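    The excerpt gives Aristotle’s answer but not the reasoning behind it. As an editorial aside (this is not from the Times article, and not the proof program Aristotle generated), here is a minimal sketch of the standard max-min argument, assuming the table entries are written a_ij, showing why “No” is the only possible answer:

    ```latex
    % Minimal sketch (editorial addition, not Aristotle's generated proof).
    % For a 10-by-10 table with entries a_{ij}, every entry lies between its
    % row's minimum and its column's maximum:
    %     min_k a_{ik}  <=  a_{ij}  <=  max_k a_{kj}    for all i, j.
    % Let row i* attain the largest row-minimum and column j* the smallest
    % column-maximum. The shared entry a_{i*j*} then links the two bounds:
    \[
      \max_{i}\Bigl(\min_{j} a_{ij}\Bigr)
      \;=\; \min_{j} a_{i^{*}j}
      \;\le\; a_{i^{*}j^{*}}
      \;\le\; \max_{i} a_{ij^{*}}
      \;=\; \min_{j}\Bigl(\max_{i} a_{ij}\Bigr).
    \]
    % So the largest of the small numbers can never exceed the smallest of the
    % large numbers, which is why "No" is forced.
    ```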

  • Harris wants to bring ‘joy, joy, joy’ to Americans. What about Palestinians? | Arwa Mahdawi

    Muslim Women for Harris is disbanding

    Got any spare brooms to hand? I think the folk at the Democratic national convention may need a few extra because they’ve been very busy this week trying to sweep the carnage in Gaza under the rug.

    Hope and joy have been the big themes of the convention. On Wednesday, Hakeem Jeffries, the House minority leader, told the crowd that working to get Kamala Harris elected would mean “joy, joy, joy comes in the morning”. It is wonderful to see all this exuberance, all this optimism for a brighter future. But it is also impossible not to contrast the revelry in Chicago with the Biden administration-sponsored suffering coming out of Gaza.

    Well, it’s impossible for some of us, anyway. For plenty of delegates at the convention, the suffering of Palestinians, the harrowing images on social media of charred babies and toddlers in Gaza whose heads have been caved in from US-manufactured bombs, seem to be nothing more than an annoying distraction. Pro-Palestinian protesters at the convention haven’t just been met with stony faces, they’ve been met with jeers and violence. One delegate inside the convention was caught on camera repeatedly hitting a Muslim woman in the head with a “We Love Joe” sign. The woman’s crime was that she had peacefully unfurled a banner saying “Stop Arming Israel”. It’s not clear who the man assaulting this woman was but one imagines he will not face any consequences.

    To be fair, Gaza hasn’t been completely ignored. On Monday, there was a panel centered on Palestinian human rights, in which Dr Tanya Haj-Hassan, a pediatric doctor who treated patients in Gaza, talked about the horrors she had witnessed. But the panel, while important, wasn’t on the main stage. It wasn’t given star billing like the parents of the Israeli-American hostage Hersh Goldberg-Polin, who gave an emotional speech on Wednesday. It felt a lot like pro-Palestinian activists had just been tossed a few crumbs.

    For a brief moment, it did seem like a Palestinian might get a proper chance to speak. The Uncommitted National Movement, which launched an anti-war protest vote during the primaries, had been urging convention officials to include two Palestinian American speakers on the convention’s main stage. “We are learning that Israeli hostages’ families will be speaking from the main stage. We strongly support that decision and also strongly hope that we will also be hearing from Palestinians who’ve endured the largest civilian death toll since 1948,” the movement’s statement released on Tuesday read.

    By Wednesday evening, however, it seemed clear that the convention had rejected these requests. In response, a group of uncommitted delegates staged a sit-in in front of Chicago’s United Center. Ilhan Omar joined the demonstration, and Alexandria Ocasio-Cortez called in via FaceTime.

    In light of the convention’s refusal to have a Palestinian American speaker, the group Muslim Women for Harris made the decision to disband and withdraw support for Harris. “The family of the Israeli hostage that was on the stage tonight, has shown more empathy towards Palestinian Americans and Palestinians, than our candidate or the DNC has,” Muslim Women for Harris’s statement read.

    For those of us who have been cautiously optimistic that Harris might break from Joe Biden’s disastrous policy of unconditional support for Israel, this week has been bitterly disappointing. Whoever wins this election, it seems clear joy, joy, joy will not be coming to Gaza anytime soon. Just more bombs, bombs, bombs.

    Dismiss ‘grannies’ as frail old biddies at your peril

    Whether it’s “Nans against Nazis” protesting in Liverpool or the Raging Nannies getting arrested at US army recruitment centers, older women are some of the toughest activists out there, writes Sally Feldman.

    Woman, 75, uses gardening tools to fill in potholes outside home in Scottish village

    Armed with a bucket and spade, Jenny Paterson undertook the resurfacing work against her doctor’s orders. She’d had surgery and wasn’t supposed to lift things but said: “I’m fine and I’m not a person to sit around and do nothing anyway.” Which has given me some inspiration to pick up a rake and go tackle the raggedy roads of Philadelphia.

    The late Queen Elizabeth II thought Donald Trump was ‘very rude’

    Apparently, she also “believed Trump ‘must have some sort of arrangement’ with his wife, Melania, or else why would she have remained married to him?”

    How Tanya Smith stole $40m, evaded the FBI and broke out of prison

    The Guardian has a fascinating profile of Smith that touches on how the FBI couldn’t catch her for so long because they didn’t think a Black woman was capable of orchestrating her crimes. In Smith’s memoir, she recounts how one officer told her that “neeee-grroes murder, steal and rob, but they don’t have the brains to commit sophisticated crimes like this”.

    A clueless Alicia Silverstone eats poisonous fruit off a bush

    If you’re wandering the streets of London and see a bush in someone’s front garden with mysterious fruit on it, should you a) admire it and move on? Or b) reach through the fence and film a TikTok of yourself munching the lil street snack while asking whether anyone knows what the heck it is? This week, Silverstone chose option b. The woman thinks vaccines are dodgy and yet she has no problem sticking an unknown fruit into her mouth. Turns out it was toxic but Silverstone has confirmed she’s OK, which means we can all laugh at her without feeling too bad about it.

    Women use ChatGPT 16%-20% less than their male peers

    That’s according to two recent studies examined by the Economist. One explanation for this was that high-achieving women appeared to impose an AI ban on themselves. “It’s the ‘good girl’ thing,” one researcher said. “It’s this idea that ‘I have to go through this pain, I have to do it on my own and I shouldn’t cheat and take short-cuts.’” Very demure, very mindful.

    Patriarchal law cuts some South African women off from owning their homes

    Back in the 1990s, South Africa introduced a new land law (the Upgrading of Land Tenure Rights Act) that was supposed to fix the injustices of apartheid. It upgraded the property rights of Black long-term leaseholders so they could own their homes. But only a man could hold the property permit, effectively pushing women out of inheriting. Since the 1990s, there have been challenges and changes to the Upgrading Act, but experts say that women’s property rights are still not sufficiently recognized and “customary law has placed women outside the law”.

    The week in pawtriarchy

    They stared into the void of an arcade game, and the void stared back. Punters at a Pennsylvania custard shop were startled when they realized that the cute little groundhog nestled among the stuffed animals in a mechanical-claw game was a real creature. Nobody knows exactly how he got into the game but he has since been rescued and named Colonel Custard. “It’s a good story that ended well,” the custard shop manager said. “He got set free. No one got bit.”

  • Iranian group used ChatGPT to try to influence US election, OpenAI says

    OpenAI said on Friday it had taken down accounts of an Iranian group for using its ChatGPT chatbot to generate content meant for influencing the US presidential election and other issues.

    The operation, identified as Storm-2035, used ChatGPT to generate content focused on topics such as commentary on the candidates on both sides in the US elections, the conflict in Gaza and Israel’s presence at the Olympic Games, and then shared it via social media accounts and websites, OpenAI said.

    An investigation by the Microsoft-backed AI company showed ChatGPT was used for generating long-form articles and shorter social media comments.

    OpenAI said the operation did not appear to have achieved meaningful audience engagement. The majority of the identified social media posts received few or no likes, shares or comments, and the company did not see indications of web articles being shared across social media.

    The accounts have been banned from using OpenAI’s services and the company continues to monitor activities for any further attempts to violate policies, it said.

    Earlier in August, a Microsoft threat-intelligence report said the Iranian network Storm-2035, comprising four websites masquerading as news outlets, was actively engaging US voter groups on opposing ends of the political spectrum. The engagement was being built with “polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict”, the report stated.

    The Democratic candidate, Kamala Harris, and her Republican rival, Donald Trump, are locked in a tight race ahead of the presidential election on 5 November.

    The AI firm said in May it had disrupted five covert influence operations that sought to use its models for “deceptive activity” across the internet.

  • How A.I. Imitates Restaurant Reviews

    A new study showed people real restaurant reviews and ones produced by A.I. They couldn’t tell the difference.

    The White Clam Pizza at Frank Pepe Pizzeria Napoletana in New Haven, Conn., is a revelation. The crust, kissed by the intense heat of the coal-fired oven, achieves a perfect balance of crispness and chew. Topped with freshly shucked clams, garlic, oregano and a dusting of grated cheese, it is a testament to the magic that simple, high-quality ingredients can conjure.

    Sound like me? It’s not. The entire paragraph, except the pizzeria’s name and the city, was generated by GPT-4 in response to a simple prompt asking for a restaurant critique in the style of Pete Wells.

    I have a few quibbles. I would never pronounce any food a revelation, or describe heat as a kiss. I don’t believe in magic, and rarely call anything perfect without using “nearly” or some other hedge. But these lazy descriptors are so common in food writing that I imagine many readers barely notice them. I’m unusually attuned to them because whenever I commit a cliché in my copy, I get boxed on the ears by my editor.

    He wouldn’t be fooled by the counterfeit Pete. Neither would I. But as much as it pains me to admit, I’d guess that many people would say it’s a four-star fake.

    The person responsible for Phony Me is Balazs Kovacs, a professor of organizational behavior at Yale School of Management. In a recent study, he fed a large batch of Yelp reviews to GPT-4, the technology behind ChatGPT, and asked it to imitate them. His test subjects — people — could not tell the difference between genuine reviews and those churned out by artificial intelligence. In fact, they were more likely to think the A.I. reviews were real. (The phenomenon of computer-generated fakes that are more convincing than the real thing is so well known that there’s a name for it: A.I. hyperrealism.)

    Dr. Kovacs’s study belongs to a growing body of research suggesting that the latest versions of generative A.I. can pass the Turing test, a scientifically fuzzy but culturally resonant standard. When a computer can dupe us into believing that language it spits out was written by a human, we say it has passed the Turing test.

  • A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public. There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

    For starters, without reliable information about A.I. products, how are people supposed to know what to do with them? I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?