More stories

  • A Filmmaker Focuses on Climate and Democracy

    In his next documentary, Michael P. Nash takes on A.I. and how it might be used to address environmental issues.

    The Athens Democracy Forum last week featured an array of speakers from countries worldwide: politicians, leaders of nonprofits, youths dedicated to promoting democracy. Michael P. Nash was the only filmmaker to speak.

    Mr. Nash, who resides in Nashville and Los Angeles, is behind more than a dozen documentaries and psychological thrillers. His most well-known work is “Climate Refugees,” a documentary that debuted at the 2010 Sundance Film Festival and portrays the stories of people from 48 countries who were affected by climate change.

    Mr. Nash’s other notable films include “Fuel” (2017), which focuses on alternative energy, and “Saving the Roar” (2021), an inspirational sports documentary about Penn State University’s football culture.

    Mr. Nash is directing and producing “Chasing Truth,” a documentary examining whether artificial intelligence can solve environmental issues such as climate change and food security. The film is a collaboration with the actor Leonardo DiCaprio and his father, George DiCaprio, who are executive producers. It is expected to be released in 2026.

    George DiCaprio said he and his son got to know Mr. Nash more than a decade ago, at a screening of “Climate Refugees” at their home. “It was clear that we all shared a passion for addressing the world’s most pressing issues, and now, more than ever, that commitment has deepened,” he said in an email.

    After the forum, Mr. Nash was interviewed by email and phone about his interest in democracy advocacy, the connection between climate change and democracy, and what he had learned in Athens. The conversation has been edited and condensed.

  • Athens Democracy Forum: Where Is Global Politics Headed?

    Voters have more opportunities than ever in 2024 to shape history in their countries, but war, technology and other forces pose a powerful threat, experts said.

    This article is from a special report on the Athens Democracy Forum, which gathered experts last week in the Greek capital to discuss global issues.

    Jordan Bardella, the 29-year-old far-right leader who nearly became France’s prime minister last summer, warned last week that his country’s existence was imperiled by Muslim migrants who shared the same militant Islamist ideology as the Hamas-led assailants who committed deadly attacks in Israel on Oct. 7, 2023.

    “We have this Islamist ideology that is appearing in France,” he said. “The people behind it want to impose on French society something that is totally alien to our country, to our values.”

    “I do not want my country to disappear,” he said. “I want France to be proud of itself.”

    The politician — whose party, the National Rally, finished first in the initial round of parliamentary elections in June, before being defeated by a broad multiparty coalition in the second and final round — spoke in an onstage conversation at the Athens Democracy Forum, an annual gathering of policymakers, business leaders, academics and activists organized in association with The New York Times.

    The defeat of Mr. Bardella and his party by a broad anti-far-right coalition was a sign of the endurance of liberal democratic values in the West. Yet his rapid rise as a political figure in France also comes as a warning that the basic tenets of liberal democracy are constantly being tested — like never before in the postwar period.

    The year 2024 has been the year of elections: More of them were held than ever before in history. Some four billion people — more than half of humankind — have been, or will be, called to the ballot box in dozens of elections around the world. They include the 161 million U.S. voters heading to the polls on Nov. 5.

    Elections are the unquestionable cornerstone of democracy: the process by which voters choose the leaders and lawmakers who will rule over them. Voters’ ability to make an informed choice rests on their access to accurate and verified news and information about the candidates and their parties.

  • California Passes Law Protecting Consumer Brain Data

    The state extended its current personal privacy law to include the neural data increasingly coveted by technology companies.

    On Saturday, Governor Gavin Newsom of California signed a new law that aims to protect people’s brain data from being potentially misused by neurotechnology companies.

    A growing number of consumer technology products promise to help address cognitive issues: apps to meditate, to improve focus and to treat mental health conditions like depression. These products monitor and record brain data, which encodes virtually everything that goes on in the mind, including thoughts, feelings and intentions.

    The new law, which passed both the California State Assembly and the Senate with no opposing votes, amends the state’s current personal privacy law — known as the California Consumer Privacy Act — by including “neural data” under “personal sensitive information.” This includes data generated by a user’s brain activity and the meshwork of nerves that extends to the rest of the body.

    “I’m very excited,” said Sen. Josh Becker, Democrat of California, who sponsored the bill. “It’s important that we be up front about protecting the privacy of neural data — a very important set of data that belongs to people.”

    With tens of thousands of tech startups, California is a hub for tech innovation. This includes smaller companies developing brain technologies, but Big Tech companies like Meta and Apple are also developing devices that will likely involve collecting vast troves of brain data.

    “The importance of protecting neural data in California cannot be understated,” Sen. Becker said.

    The bill extends to neural data the same level of protections that already apply to other data considered sensitive under the California Consumer Privacy Act, such as facial images, DNA and fingerprints, known as biometric information.

  • Elon Musk Hails Italian Leader Giorgia Meloni at Awards Ceremony

    Mr. Musk described Prime Minister Giorgia Meloni as “authentic, honest and thoughtful.” She used her Atlantic Council spotlight to defend Western values.

    Elon Musk, the chief executive of Tesla, and Giorgia Meloni, the prime minister of Italy, were the stars of a black-tie dinner in New York on Monday that highlighted Mr. Musk’s increasing involvement in politics.

    Ms. Meloni had chosen Mr. Musk to introduce her as she received a Global Citizen Award from the Atlantic Council, a Washington think tank that cited “her political and economic leadership of Italy, in the European Union” and of the Group of 7 nations “as well as her support of Ukraine in Russia’s war against it.”

    The prime minister and the billionaire business leader have bonded over the years. They share concerns about artificial intelligence and declining birthrates in Western countries, which Mr. Musk has called an existential threat to civilization.

    He described Ms. Meloni on Monday as “someone who is even more beautiful inside than outside” and “authentic, honest and thoughtful.”

    “That can’t always be said about politicians,” Mr. Musk added, to laughter from the crowd of 700 at the Ziegfeld Ballroom in Manhattan.

    After thanking Mr. Musk for his “precious genius,” Ms. Meloni delivered a passionate defense of Western values. While rejecting authoritarian nationalism, she said, “we should not be afraid to defend words like ‘nation’ and ‘patriotism.’”

  • Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle.

    The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proven that its answer was right. The bot generated a detailed computer program that verified “No” was the correct response.

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas.

    Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.
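    For readers curious why “No” is always the right answer to the brain teaser, here is the standard max-min argument, sketched here for clarity rather than taken from the article: any single entry of the table is at least the smallest number in its row and at most the largest number in its column. In LaTeX notation,

        \min_{k} a_{ik} \;\le\; a_{ij} \;\le\; \max_{k} a_{kj} \quad \text{for every } i, j,
        \qquad\text{and therefore}\qquad
        \max_{i}\,\min_{j} a_{ij} \;\le\; \min_{j}\,\max_{i} a_{ij}.

    In words, the largest of the row minima can never exceed the smallest of the column maxima, no matter which hundred numbers fill the table.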

  • Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm.

    Jim Covello, Goldman Sachs’s head of stock research, warned that building too much of what the world doesn’t need “typically ends badly.”

    As Jim Covello’s car barreled up Highway 101 from San Jose to San Francisco this month, he counted the billboards about artificial intelligence. The nearly 40 signs he passed, including one that promoted something called Writer Enterprise AI and another for Speech AI, were fresh evidence, he thought, of an economic bubble.

    “Not that long ago, they were all crypto,” Mr. Covello said of the billboards. “And now they’re all A.I.”

    Mr. Covello, the head of stock research at Goldman Sachs, has become Wall Street’s leading A.I. skeptic. Three months ago, he jolted markets with a research paper that challenged whether businesses would see a sufficient return on what by some estimates could be $1 trillion in A.I. spending in the coming years. He said that generative artificial intelligence, which can summarize text and write software code, made so many mistakes that it was questionable whether it would ever reliably solve complex problems.

    The Goldman paper landed days after a partner at Sequoia Capital, a venture firm, raised similar questions in a blog post about A.I. Their skepticism marked a turning point for A.I.-related stocks, leading to a reassessment of Wall Street’s hottest trade.

    Goldman’s basket of A.I. stocks, which is managed by a separate arm of the firm and includes Nvidia, Microsoft, Apple, Alphabet, Amazon, Meta and Oracle, has declined 7 percent from its peak on July 10, as investors and business leaders debate whether A.I. can justify its staggering costs.

    The pause has come early in the A.I. arms race. The tech industry has a history of spending big to deliver technology transitions, as it did during the personal computer and internet revolutions. Those build-outs spanned five years or more before there was a reckoning.

  • California Gov. Newsom Signs Laws Regulating Election A.I. ‘Deepfakes’

    The state joins dozens of others in regulating A.I. fakery in ways that could affect this year’s presidential race.

    California will now require social media companies to moderate the spread of election-related impersonations powered by artificial intelligence, known as “deepfakes,” after Gov. Gavin Newsom, a Democrat, signed three new laws on the subject Tuesday.

    The three laws, including a first-of-its-kind measure that imposes a new requirement on social media platforms, largely deal with banning or labeling the deepfakes. Only one of the laws will take effect in time to affect the 2024 presidential election, but the trio could offer a road map for regulators across the country who are attempting to slow the spread of manipulative content powered by artificial intelligence.

    The laws are expected to face legal challenges from social media companies or groups focused on free speech rights.

    Deepfakes use A.I. tools to create lifelike images, videos or audio clips resembling actual people. Though the technology has been used to create jokes and artwork, it has also been widely adopted to supercharge scams, create non-consensual pornography and disseminate political misinformation.

    Elon Musk, the owner of X, has posted a deepfake to his account this year that would have run afoul of the new laws, experts said. In one video viewed millions of times, Mr. Musk shared fake audio of Vice President Kamala Harris, the Democratic nominee, calling herself the “ultimate diversity hire.”

    [Graphic: Election-Related ‘Deepfake’ Laws. Several states have adopted or seem poised to adopt laws regulating “deepfakes” around elections.]

  • How A.I., QAnon and Falsehoods Are Reshaping the Presidential Race

    Three experts on social media and disinformation share their predictions for this year’s chaotic election.

    This year’s presidential election has been polluted with rumors, conspiracy theories and a wave of artificial intelligence imagery. Former President Donald J. Trump has continued to sow doubts about election integrity as his allies across the country have taken steps to make election denial a fixture of the balloting process.

    How worried should voters be?

    To better understand the role that misinformation and conspiracy theories are playing this year, The New York Times asked three authors of new books about disinformation and social media to share their views and predictions.

    The risk that violence could spring from election denialism seems as pressing as in the weeks after the 2020 election, when Trump supporters — incensed by false claims of voter fraud — stormed the Capitol building, they argue. But the day-to-day churn of falsehoods and rumors that spread online may be getting largely drowned out by the billions spent on political advertising.

    In a series of emails with The Times, the authors laid out their predictions for the year. These interviews have been edited for length and clarity.

    Q. Let’s jump right in: How concerned are you that conspiracy theories and misinformation will influence the outcome of this year’s presidential election?