More stories

  • in

    Doctors, A.I. and Empathy for Patients

    More from our inbox: Breast Cancer Screening; Walz’s Missteps; Mental Health Support for Schoolchildren

    To the Editor:

    Re “ChatGPT’s Bedside Manner Is Better Than Mine,” by Jonathan Reisman (Opinion guest essay, Oct. 9):

    Dr. Reisman notes that ChatGPT’s answers to patient questions have been rated as more empathetic than those written by actual doctors. This should not be a call for doctors to surrender our human role to A.I. To the contrary, we need to continue to improve our communication skills.

    For the past 25 years, I have been facilitating seminars in doctor-patient communication. The skills for communicating bad news listed by Dr. Reisman are exactly the techniques that we suggest to our medical students. Doctors can, however, avoid the temptation to surrender their “humanity to a script” as if it were “just another day at work.” Techniques are a valuable guide, but the real work consists of listening carefully to patients’ responses and their emotional content, and crafting new words and phrases that speak to each patient’s unique confusion, fear and distress.

    In my experience, patients know when we are reciting a script and when we are paying attention to their thoughts and feelings. Unlike A.I., and especially when conversations are matters of life and death, we can reach into the depths of our humanity to feel and communicate empathy and compassion toward our patients.

    Neil S. Prose
    Durham, N.C.

    To the Editor:

    Mention the words “A.I.” and “doctoring” to most physicians in the same sentence, and the immediate reaction is often skepticism or fear. As Dr. Jonathan Reisman noted in his essay, A.I. has shown a remarkable ability to mimic human empathy in encounters with patients. This is one reason many practicing physicians worry that A.I. may eventually replace doctors.

  • in

    Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle. The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proved that its answer was right: the bot generated a detailed computer program that verified “No” was the correct response.

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas. Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.
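    The brain teaser has a short proof behind it: wherever the best row and the best column cross, that single entry is at least the minimum of its row and at most the maximum of its column, so the largest row-minimum can never exceed the smallest column-maximum. A minimal sketch that spells this out and spot-checks it on random tables (the function names are illustrative; this is of course not the formal proof Aristotle generated):

    ```python
    import random

    def max_row_min(grid):
        # Largest of the smallest numbers in each row.
        return max(min(row) for row in grid)

    def min_col_max(grid):
        # Smallest of the largest numbers in each column
        # (zip(*grid) transposes rows into columns).
        return min(max(col) for col in zip(*grid))

    # Spot-check the claim on many random 10-by-10 tables:
    # the largest row-minimum never exceeds the smallest column-maximum.
    for _ in range(1000):
        grid = [[random.randint(0, 99) for _ in range(10)] for _ in range(10)]
        assert max_row_min(grid) <= min_col_max(grid)
    ```

    A random search like this can only gather evidence, never certainty; the point of systems like Aristotle is that they emit a machine-checkable proof covering every possible table, not just the ones sampled.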

  • in

    Harris wants to bring ‘joy, joy, joy’ to Americans. What about Palestinians? | Arwa Mahdawi

    Muslim Women for Harris is disbanding

    Got any spare brooms to hand? I think the folks at the Democratic national convention may need a few extra, because they’ve been very busy this week trying to sweep the carnage in Gaza under the rug.

    Hope and joy have been the big themes of the convention. On Wednesday, Hakeem Jeffries, the House minority leader, told the crowd that working to get Kamala Harris elected would mean “joy, joy, joy comes in the morning”. It is wonderful to see all this exuberance, all this optimism for a brighter future. But it is also impossible not to contrast the revelry in Chicago with the Biden administration-sponsored suffering coming out of Gaza.

    Well, it’s impossible for some of us, anyway. For plenty of delegates at the convention, the suffering of Palestinians, the harrowing images on social media of charred babies and toddlers in Gaza whose heads have been caved in by US-manufactured bombs, seem to be nothing more than an annoying distraction. Pro-Palestinian protesters at the convention haven’t just been met with stony faces; they’ve been met with jeers and violence. One delegate inside the convention was caught on camera repeatedly hitting a Muslim woman in the head with a “We Love Joe” sign. The woman’s crime was that she had peacefully unfurled a banner saying “Stop Arming Israel”. It’s not clear who the man assaulting this woman was, but one imagines he will not face any consequences.

    To be fair, Gaza hasn’t been completely ignored. On Monday, there was a panel centered on Palestinian human rights, in which Dr Tanya Haj-Hassan, a pediatric doctor who treated patients in Gaza, talked about the horrors she had witnessed. But the panel, while important, wasn’t on the main stage. It wasn’t given star billing like the parents of the Israeli-American hostage Hersh Goldberg-Polin, who gave an emotional speech on Wednesday. It felt a lot like pro-Palestinian activists had just been tossed a few crumbs.

    For a brief moment, it did seem like a Palestinian might get a proper chance to speak. The Uncommitted National Movement, which launched an anti-war protest vote during the primaries, had been urging convention officials to include two Palestinian American speakers on the convention’s main stage. “We are learning that Israeli hostages’ families will be speaking from the main stage. We strongly support that decision and also strongly hope that we will also be hearing from Palestinians who’ve endured the largest civilian death toll since 1948,” the movement’s statement released on Tuesday read.

    By Wednesday evening, however, it seemed clear that the convention had rejected these requests. In response, a group of uncommitted delegates staged a sit-in in front of Chicago’s United Center. Ilhan Omar joined the demonstration, and Alexandria Ocasio-Cortez called in via FaceTime. In light of the convention’s refusal to have a Palestinian American speaker, the group Muslim Women for Harris decided to disband and withdraw its support for Harris. “The family of the Israeli hostage that was on the stage tonight has shown more empathy towards Palestinian Americans and Palestinians than our candidate or the DNC has,” Muslim Women for Harris’s statement read.

    For those of us who have been cautiously optimistic that Harris might break from Joe Biden’s disastrous policy of unconditional support for Israel, this week has been bitterly disappointing. Whoever wins this election, it seems clear joy, joy, joy will not be coming to Gaza anytime soon. Just more bombs, bombs, bombs.

    Dismiss ‘grannies’ as frail old biddies at your peril

    Whether it’s “Nans against Nazis” protesting in Liverpool or the Raging Nannies getting arrested at US army recruitment centers, older women are some of the toughest activists out there, writes Sally Feldman.

    Woman, 75, uses gardening tools to fill in potholes outside home in Scottish village

    Armed with a bucket and spade, Jenny Paterson undertook the resurfacing work against her doctor’s orders. She’d had surgery and wasn’t supposed to lift things, but said: “I’m fine and I’m not a person to sit around and do nothing anyway.” Which has given me some inspiration to pick up a rake and go tackle the raggedy roads of Philadelphia.

    The late Queen Elizabeth II thought Donald Trump was ‘very rude’

    Apparently, she also “believed Trump ‘must have some sort of arrangement’ with his wife, Melania, or else why would she have remained married to him?”

    How Tanya Smith stole $40m, evaded the FBI and broke out of prison

    The Guardian has a fascinating profile of Smith that touches on how the FBI couldn’t catch her for so long because they didn’t think a Black woman was capable of orchestrating her crimes. In Smith’s memoir, she recounts how one officer told her that “neeee-grroes murder, steal and rob, but they don’t have the brains to commit sophisticated crimes like this”.

    A clueless Alicia Silverstone eats poisonous fruit off a bush

    If you’re wandering the streets of London and see a bush in someone’s front garden with mysterious fruit on it, should you a) admire it and move on? Or b) reach through the fence and film a TikTok of yourself munching the lil street snack while asking whether anyone knows what the heck it is? This week, Silverstone chose option b. The woman thinks vaccines are dodgy, and yet she has no problem sticking an unknown fruit into her mouth. Turns out it was toxic, but Silverstone has confirmed she’s OK, which means we can all laugh at her without feeling too bad about it.

    Women use ChatGPT 16%-20% less than their male peers

    That’s according to two recent studies examined by the Economist. One explanation for this was that high-achieving women appeared to impose an AI ban on themselves. “It’s the ‘good girl’ thing,” one researcher said. “It’s this idea that ‘I have to go through this pain, I have to do it on my own and I shouldn’t cheat and take short-cuts.’” Very demure, very mindful.

    Patriarchal law cuts some South African women off from owning their homes

    Back in the 1990s, South Africa introduced a new land law (the Upgrading of Land Tenure Rights Act) that was supposed to fix the injustices of apartheid. It upgraded the property rights of Black long-term leaseholders so they could own their homes. But only a man could hold the property permit, effectively pushing women out of inheriting. Since the 1990s, there have been challenges and changes to the Upgrading Act, but experts say that women’s property rights are still not sufficiently recognized and “customary law has placed women outside the law”.

    The week in pawtriarchy

    They stared into the void of an arcade game, and the void stared back. Punters at a Pennsylvania custard shop were startled when they realized that the cute little groundhog nestled among the stuffed animals in a mechanical-claw game was a real creature. Nobody knows exactly how he got into the game, but he has since been rescued and named Colonel Custard. “It’s a good story that ended well,” the custard shop manager said. “He got set free. No one got bit.”

  • in

    Iranian group used ChatGPT to try to influence US election, OpenAI says

    OpenAI said on Friday it had taken down accounts of an Iranian group for using its ChatGPT chatbot to generate content meant to influence the US presidential election and other issues.

    The operation, identified as Storm-2035, used ChatGPT to generate content focused on topics such as commentary on the candidates on both sides in the US elections, the conflict in Gaza and Israel’s presence at the Olympic Games, and then shared it via social media accounts and websites, OpenAI said. An investigation by the Microsoft-backed AI company showed ChatGPT was used to generate long-form articles and shorter social media comments.

    OpenAI said the operation did not appear to have achieved meaningful audience engagement. The majority of the identified social media posts received few or no likes, shares or comments, and the company did not see indications of the web articles being shared across social media. The accounts have been banned from using OpenAI’s services, and the company continues to monitor activities for any further attempts to violate its policies, it said.

    Earlier in August, a Microsoft threat-intelligence report said the Iranian network Storm-2035, comprising four websites masquerading as news outlets, was actively engaging US voter groups on opposing ends of the political spectrum. The engagement was being built with “polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict”, the report stated.

    The Democratic candidate, Kamala Harris, and her Republican rival, Donald Trump, are locked in a tight race ahead of the presidential election on 5 November. The AI firm said in May it had disrupted five covert influence operations that sought to use its models for “deceptive activity” across the internet.

  • in

    How A.I. Imitates Restaurant Reviews

    A new study showed people real restaurant reviews and ones produced by A.I. They couldn’t tell the difference.

    The White Clam Pizza at Frank Pepe Pizzeria Napoletana in New Haven, Conn., is a revelation. The crust, kissed by the intense heat of the coal-fired oven, achieves a perfect balance of crispness and chew. Topped with freshly shucked clams, garlic, oregano and a dusting of grated cheese, it is a testament to the magic that simple, high-quality ingredients can conjure.

    Sound like me? It’s not. The entire paragraph, except the pizzeria’s name and the city, was generated by GPT-4 in response to a simple prompt asking for a restaurant critique in the style of Pete Wells.

    I have a few quibbles. I would never pronounce any food a revelation, or describe heat as a kiss. I don’t believe in magic, and rarely call anything perfect without using “nearly” or some other hedge. But these lazy descriptors are so common in food writing that I imagine many readers barely notice them. I’m unusually attuned to them because whenever I commit a cliché in my copy, I get boxed on the ears by my editor.

    He wouldn’t be fooled by the counterfeit Pete. Neither would I. But as much as it pains me to admit, I’d guess that many people would say it’s a four-star fake.

    The person responsible for Phony Me is Balazs Kovacs, a professor of organizational behavior at Yale School of Management. In a recent study, he fed a large batch of Yelp reviews to GPT-4, the technology behind ChatGPT, and asked it to imitate them. His test subjects — people — could not tell the difference between genuine reviews and those churned out by artificial intelligence. In fact, they were more likely to think the A.I. reviews were real. (The phenomenon of computer-generated fakes that are more convincing than the real thing is so well known that there’s a name for it: A.I. hyperrealism.)

    Dr. Kovacs’s study belongs to a growing body of research suggesting that the latest versions of generative A.I. can pass the Turing test, a scientifically fuzzy but culturally resonant standard. When a computer can dupe us into believing that language it spits out was written by a human, we say it has passed the Turing test.

  • in

    A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public. There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem. For starters, without reliable information about A.I. products, how are people supposed to know what to do with them? I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?

  • in

    More Than Words: 10 Charts That Defined 2023

    Some years are defined by a single event or person — a pandemic, a recession, an insurrection — while others are buffeted by a series of disparate forces. Such was 2023. The economy and inflation remained front of mind until the war in Gaza grabbed headlines and the world’s attention — all while Donald Trump’s […]

  • in

    Plus-Size Female Shoppers ‘Deserve Better’

    More from our inbox: Why Trump’s Supporters Love Him; ChatGPT Is Plagiarism; The Impact of China’s Economic Woes; The ‘Value’ of College

    To the Editor:

    Re “Just Make It, Toots,” by Elizabeth Endicott (Opinion guest essay, Aug. 20):

    Despite the fact that two-thirds of American women are size 14 or above, brands and retailers continue to overlook and disregard plus-size women, whose dollars are as green as those held by “straight size” women.

    The root cause is simple, and it’s not that it’s more expensive or time-consuming; those excuses have been bandied about for years. There are not enough clothes available to plus-size women because brands and retailers assume that larger women will just accept whatever they’re given, since they have in the past. As Ms. Endicott pointed out in her essay, this is no longer the case — women are finding other ways to express themselves through clothing that fits their bodies, their styles and their budgets, from making clothes themselves to shopping at independent designers and boutiques.

    We still have a long way to go, but for every major retailer that dips a toe into the market and just as quickly pulls back, there are new designers and stores willing to step in and take their place. Plus-size women deserve more and deserve better. Those who won’t cater to them do so at their own peril.

    Shanna Goldstone
    New York
    The writer is the founder and C.E.O. of Pari Passu, an apparel company that sells clothing to women in sizes 12 to 24.

    To the Editor:

    Plus-size people aren’t the only folks whose clothing doesn’t fit. I wore a size 10 for decades, but most clothes wouldn’t fit my wide, well-muscled shoulders. Apparently being really fit is just as bad as being plus size. I wasn’t alone; most of my co-workers had similar problems. Don’t even get me started about having a short back and a deep pelvis. I found only one brand of pants that came close to fitting and have worn them for almost 40 years. They definitely are not a fashion statement.

    Eloise Twining
    Ukiah, Calif.

    To the Editor:

    Thank you, Elizabeth Endicott, for revealing the ways that historically marginalized consumers grapple with retail trends. You recognized that “plus size is now the American average.” As someone who works for a company that sells clothing outside of the traditional gender binary, I’d add that gender-neutral clothing will also soon be an American retail norm. It’s now up to large-scale retailers to decide if they want to meet this wave of demand, or miss out on contemporary consumers.

    Ashlie Grilz
    Providence, R.I.
    The writer is brand director for Peau De Loup.

    Why Trump’s Supporters Love Him

    To the Editor:

    Re “The Thing Is, Most Republicans Really Like Trump,” by Kristen Soltis Anderson (Opinion guest essay, Aug. 30):

    Ms. Anderson writes that one of the most salient reasons Republican voters favor Donald Trump as their presidential nominee is that they believe he is “best poised” to beat Joe Biden. I do not concur. His likability is not based primarily on his perceived electability. Nor is his core appeal found in policy issues such as budget deficits, import tariffs or corporate tax relief. It won’t even be found in his consequential appointments to the Supreme Court.

    Politics is primarily visceral, not cerebral. When Mr. Trump denounces the elites that he claims are hounding him with political prosecutions, his followers concur and merge their own grievances and resentments with his. When Mr. Trump rages against the professional political class and “fake news,” his acolytes applaud because they themselves feel ignored and disrespected. Mr. Trump is more than an entertaining self-promoter. He offers oxygen for self-esteem, and his supporters love him for it.

    John R. Leopold
    Stoney Beach, Md.

    ChatGPT Is Plagiarism

    [Photo caption: “I do want students to learn to use it,” Yazmin Bahena, a middle school social studies teacher, said about ChatGPT. “They are going to grow up in a world where this is the norm.”]

    To the Editor:

    Re “Schools Shift to Embrace ChatGPT,” by Natasha Singer (news article, Aug. 26):

    What gets lost in this discussion is that these schools are authorizing a form of academic plagiarism and outright theft of the texts authors have created. This is why over 8,000 authors have signed a petition to the A.I. companies that have “scraped” (the euphemistic term they use for “stolen”) their intellectual properties and repackaged them as their own property to be sold for profit. In the process, the A.I. chatbots are depriving authors of the fruits of their labor.

    What a lesson to teach our nation’s children. This is the very definition of theft. Schools that accept this are contributing to the ethical breakdown of a nation already deeply challenged by a culture of cheating.

    Dennis M. Clausen
    Escondido, Calif.
    The writer is an author and professor at the University of San Diego.

    The Impact of China’s Economic Woes

    [Photo caption: The Port of Oakland in California. China accounted for only 7.5 percent of U.S. exports in 2022.]

    To the Editor:

    Re “China’s Woes Are Unlikely to Hamper U.S. Growth” (Business, Aug. 28):

    Lydia DePillis engages in wishful thinking in arguing that the fallout of China’s deep economic troubles for the U.S. economy will probably be limited. China is the world’s second-largest economy, until recently the main engine of world economic growth and a major consumer of internationally traded commodities. As such, a major Chinese economic setback would cast a dark cloud over the world economic recovery.

    While Ms. DePillis is correct in asserting that China’s direct impact on our economy might be limited, its indirect impact could be large, particularly if it precipitates a world economic recession. China’s economic woes could spill over to its Asian trade partners and to economies like Germany, Australia and the commodity-dependent emerging market economies, which are all heavily dependent on the Chinese market for their exports.

    Desmond Lachman
    Washington
    The writer is a senior fellow at the American Enterprise Institute.

    The ‘Value’ of College

    To the Editor:

    Re “Let’s Stop Pretending College Degrees Don’t Matter,” by Ben Wildavsky (Opinion guest essay, Aug. 26):

    There are quite a few things wrong with Mr. Wildavsky’s assessment of the value of a college education. But I’ll focus on the most obvious: Like so many pundits, he equates value with money, pointing out that those with college degrees earn more than those without. Some do, some don’t. I have a Ph.D. from an Ivy League university, but the electrician who dealt with a very minor problem in my apartment earns considerably more than I do. So, for that matter, does the plumber.

    What about satisfaction, taking pleasure in one’s accomplishments? Do we really think that the coder takes more pride in their work than the construction worker who told me he likes to drive around the city with his children and point out the buildings he helped build? He didn’t need a college degree to find his work meaningful. How about organizing programs that prepare high school students for work, perhaps through apprenticeships, and paying all workers what their efforts are worth?

    Erika Rosenfeld
    New York