More stories

  • Doctors, A.I. and Empathy for Patients

    More from our inbox: Breast Cancer Screening; Walz’s Missteps; Mental Health Support for Schoolchildren

    To the Editor:

    Re “ChatGPT’s Bedside Manner Is Better Than Mine,” by Jonathan Reisman (Opinion guest essay, Oct. 9):

    Dr. Reisman notes that ChatGPT’s answers to patient questions have been rated as more empathetic than those written by actual doctors. This should not be a call for doctors to surrender our human role to A.I. To the contrary, we need to continue to improve our communication skills.

    For the past 25 years, I have been facilitating seminars in doctor-patient communication. The skills for communicating bad news listed by Dr. Reisman are exactly the techniques that we suggest to our medical students. However, doctors can avoid the temptation to surrender their “humanity to a script” as if it were “just another day at work.”

    Techniques are a valuable guide, but the real work consists of carefully listening to the responses and their emotional content, and crafting new words and phrases that speak to the individual patient’s confusion, fear and distress.

    In my experience, patients know when we are reciting a script and when we are paying attention to their thoughts and feelings. Unlike A.I., and especially when conversations are matters of life and death, we can reach into the depths of our humanity to feel and communicate empathy and compassion toward our patients.

    Neil S. Prose
    Durham, N.C.

    To the Editor:

    Mention the words “A.I.” and “doctoring” to most physicians in the same sentence, and the immediate reaction is often skepticism or fear.

    As Dr. Jonathan Reisman noted in his essay, A.I. has shown a remarkable ability to mimic human empathy in encounters with patients. This is one reason many practicing physicians worry that A.I. may eventually replace doctors.

  • A Shift in the World of Science

    What this year’s Nobels can teach us about science and humanity.

    Technology observers have grown increasingly vocal in recent years about the threat that artificial intelligence poses to the human variety. A.I. models can write and talk like us, draw and paint like us, crush us at chess and Go. They express an unnerving simulacrum of creativity, not least where the truth is concerned.

    A.I. is coming for science, too, as this week’s Nobel Prizes seemed keen to demonstrate. On Tuesday, the Nobel Prize in Physics was awarded to two scientists who helped computers “learn” closer to the way the human brain does. A day later, the Nobel Prize in Chemistry went to three researchers for using A.I. to invent new proteins and reveal the structure of existing ones — a problem that had stumped biologists for decades, yet could be solved by A.I. in minutes.

    The Nobel Committee for Chemistry announced the winners last week. (Jonathan Nackstrand/Agence France-Presse — Getty Images)

    Cue the grousing: This was computer science, not physics or chemistry! Indeed, of the five laureates honored on Tuesday and Wednesday, arguably only one, the University of Washington biochemist David Baker, works in the field he was awarded in.

    The scientific Nobels tend to reward concrete results over theories, empirical discovery over pure idea. But that schema didn’t quite hold this year, either. One prize went to scientists who leaned into physics as a foundation on which to build computer models used for no groundbreaking result in particular. The laureates on Wednesday, on the other hand, had created computer models that made big advances in biochemistry.

    These were outstanding and fundamentally human accomplishments, to be sure. But the Nobel recognition underscored a chilling prospect: Henceforth, perhaps scientists will merely craft the tools that make the breakthroughs, rather than do the revolutionary work themselves or even understand how it came about. Artificial intelligence designs and builds hundreds of molecular Notre Dames and Hagia Sophias, and a researcher gets a pat for inventing the shovel.

  • A Filmmaker Focuses on Climate and Democracy

    In his next documentary, Michael P. Nash takes on A.I. and how it might be used to address environmental issues.

    The Athens Democracy Forum last week featured an array of speakers from countries worldwide: politicians, leaders of nonprofits, youths dedicated to promoting democracy. Michael P. Nash was the only filmmaker to speak.

    Mr. Nash, who resides in Nashville and Los Angeles, is behind more than a dozen documentaries and psychological thrillers. His most well-known work is “Climate Refugees,” a documentary that debuted at the 2010 Sundance Film Festival and portrays the stories of people from 48 countries who were affected by climate change.

    Mr. Nash’s other notable films include “Fuel” (2017), which focuses on alternative energy, and “Saving the Roar” (2021), an inspirational sports documentary about Penn State University’s football culture.

    Mr. Nash is directing and producing “Chasing Truth,” a documentary examining whether artificial intelligence can solve environmental issues such as climate change and food security. The film is a collaboration with the actor Leonardo DiCaprio and his father, George DiCaprio, who are executive producers. It is expected to be released in 2026.

    George DiCaprio said he and his son got to know Mr. Nash more than a decade ago, at a screening of “Climate Refugees” at their home. “It was clear that we all shared a passion for addressing the world’s most pressing issues, and now, more than ever, that commitment has deepened,” he said in an email.

    After the forum, Mr. Nash was interviewed by email and phone about his interest in democracy advocacy; the connection between climate change and democracy; and what he had learned in Athens. The conversation has been edited and condensed.

  • Athens Democracy Forum: Where Is Global Politics Headed?

    Voters have more opportunities than ever in 2024 to shape history in their countries, but war, technology and other forces pose a powerful threat, experts said.

    This article is from a special report on the Athens Democracy Forum, which gathered experts last week in the Greek capital to discuss global issues.

    Jordan Bardella, the 29-year-old far-right leader who nearly became France’s prime minister last summer, warned last week that his country’s existence was imperiled by Muslim migrants who shared the same militant Islamist ideology as the Hamas-led assailants who committed deadly attacks in Israel on Oct. 7, 2023.

    “We have this Islamist ideology that is appearing in France,” he said. “The people behind it want to impose on French society something that is totally alien to our country, to our values.

    “I do not want my country to disappear,” he said. “I want France to be proud of itself.”

    The politician — whose party, the National Rally, finished first in the initial round of parliamentary elections in June, before being defeated by a broad multiparty coalition in the second and final round — spoke in an onstage conversation at the Athens Democracy Forum, an annual gathering of policymakers, business leaders, academics and activists organized in association with The New York Times.

    The defeat of Mr. Bardella and his party by a broad anti-far-right coalition was a sign of the endurance of liberal democratic values in the West. Yet his rapid rise as a political figure in France also serves as a warning that the basic tenets of liberal democracy are constantly being tested — and like never before in the postwar period.

    The year 2024 has been the year of elections: More of them were held than ever before in history. Some four billion people — more than half of humankind — have been, or will be, called to the ballot box in dozens of elections around the world. They include the 161 million U.S. voters heading to the polls on Nov. 5.

    Elections are the unquestionable cornerstone of democracy: the process by which voters choose the leaders and lawmakers who will rule over them. Voters’ ability to make an informed choice rests on their access to accurate and verified news and information about the candidates and their parties.

  • California Passes Law Protecting Consumer Brain Data

    The state extended its current personal privacy law to include the neural data increasingly coveted by technology companies.

    On Saturday, Governor Gavin Newsom of California signed a new law that aims to protect people’s brain data from being potentially misused by neurotechnology companies.

    A growing number of consumer technology products promise to help address cognitive issues: apps to meditate, to improve focus and to treat mental health conditions like depression. These products monitor and record brain data, which encodes virtually everything that goes on in the mind, including thoughts, feelings and intentions.

    The new law, which passed both the California State Assembly and the Senate with no votes in opposition, amends the state’s current personal privacy law — known as the California Consumer Privacy Act — by including “neural data” under “sensitive personal information.” This includes data generated by a user’s brain activity and the meshwork of nerves that extends to the rest of the body.

    “I’m very excited,” said Sen. Josh Becker, Democrat of California, who sponsored the bill. “It’s important that we be up front about protecting the privacy of neural data — a very important set of data that belongs to people.”

    With tens of thousands of tech start-ups, California is a hub for tech innovation. This includes smaller companies developing brain technologies, but Big Tech companies like Meta and Apple are also developing devices that will likely involve collecting vast troves of brain data.

    “The importance of protecting neural data in California cannot be understated,” Sen. Becker said.

    The bill extends to neural data the same protections that the California Consumer Privacy Act already provides for other data considered sensitive, such as facial images, DNA and fingerprints, known as biometric information.

  • Elon Musk Hails Italian Leader Giorgia Meloni at Awards Ceremony

    Mr. Musk described Prime Minister Giorgia Meloni as “authentic, honest and thoughtful.” She used her Atlantic Council spotlight to defend Western values.

    Elon Musk, the chief executive of Tesla, and Giorgia Meloni, the prime minister of Italy, were the stars of a black-tie dinner in New York on Monday that highlighted Mr. Musk’s increasing involvement in politics.

    Ms. Meloni had chosen Mr. Musk to introduce her as she received a Global Citizen Award from the Atlantic Council, a Washington think tank that cited “her political and economic leadership of Italy, in the European Union” and of the Group of 7 nations “as well as her support of Ukraine in Russia’s war against it.”

    The prime minister and the billionaire business leader have bonded over the years. They share concerns about artificial intelligence and declining birthrates in Western countries, which Mr. Musk has called an existential threat to civilization.

    He described Ms. Meloni on Monday as “someone who is even more beautiful inside than outside” and “authentic, honest and thoughtful.”

    “That can’t always be said about politicians,” Mr. Musk added, to laughter from the crowd of 700 at the Ziegfeld Ballroom in Manhattan.

    After thanking Mr. Musk for his “precious genius,” Ms. Meloni delivered a passionate defense of Western values. While rejecting authoritarian nationalism, she said, “we should not be afraid to defend words like ‘nation’ and ‘patriotism.’”

  • Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle.

    The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proven that its answer was right. The bot generated a detailed computer program that verified “No” was the correct response. (A short sketch of why the answer must be “No” appears after this list.)

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas.

    Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.

  • Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm.

    Jim Covello, Goldman Sachs’s head of stock research, warned that building too much of what the world doesn’t need “typically ends badly.”

    As Jim Covello’s car barreled up Highway 101 from San Jose to San Francisco this month, he counted the billboards about artificial intelligence. The nearly 40 signs he passed, including one that promoted something called Writer Enterprise AI and another for Speech AI, were fresh evidence, he thought, of an economic bubble.

    “Not that long ago, they were all crypto,” Mr. Covello said of the billboards. “And now they’re all A.I.”

    Mr. Covello, the head of stock research at Goldman Sachs, has become Wall Street’s leading A.I. skeptic. Three months ago, he jolted markets with a research paper that challenged whether businesses would see a sufficient return on what by some estimates could be $1 trillion in A.I. spending in the coming years. He said that generative artificial intelligence, which can summarize text and write software code, makes so many mistakes that it is questionable whether it will ever reliably solve complex problems.

    The Goldman paper landed days after a partner at Sequoia Capital, a venture firm, raised similar questions in a blog post about A.I. Their skepticism marked a turning point for A.I.-related stocks, leading to a reassessment of Wall Street’s hottest trade.

    Goldman’s basket of A.I. stocks, which is managed by a separate arm of the firm and includes Nvidia, Microsoft, Apple, Alphabet, Amazon, Meta and Oracle, has declined 7 percent from its peak on July 10, as investors and business leaders debate whether A.I. can justify its staggering costs.

    The pause has come early in the A.I. arms race. The tech industry has a history of spending big to deliver technology transitions, as it did during the personal computer and internet revolutions. Those build-outs spanned five years or more before there was a reckoning.
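
A short sketch of the brain teaser from the Aristotle item above: the answer must be “No” because, for any row i and column j, the smallest number in row i is at most the entry where row i meets column j, which is at most the largest number in column j; so every row minimum is bounded by every column maximum. The Python snippet below is not Aristotle’s proof or Harmonic’s code; it is a hypothetical spot check (the check_claim helper is invented for this note) that searches random 10-by-10 tables for a counterexample and finds none.

    import random

    # Puzzle: fill a 10-by-10 table with numbers, take the smallest number in
    # each row and the largest number in each column. Can the largest of the
    # row minimums ever exceed the smallest of the column maximums?
    # It cannot: min(row i) <= table[i][j] <= max(column j) for every i and j.

    def check_claim(trials=10_000, size=10):
        for _ in range(trials):
            table = [[random.randint(0, 999) for _ in range(size)]
                     for _ in range(size)]
            largest_row_min = max(min(row) for row in table)
            smallest_col_max = min(max(row[j] for row in table)
                                   for j in range(size))
            if largest_row_min > smallest_col_max:
                return False  # a counterexample would disprove the claim
        return True

    print(check_claim())  # prints True: no counterexample found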