More stories

  • Elon Musk Hails Italian Leader Giorgia Meloni at Awards Ceremony

    Mr. Musk described Prime Minister Giorgia Meloni as “authentic, honest and thoughtful.” She used her Atlantic Council spotlight to defend Western values.

    Elon Musk, the chief executive of Tesla, and Giorgia Meloni, the prime minister of Italy, were the stars of a black-tie dinner in New York on Monday that highlighted Mr. Musk’s increasing involvement in politics.

    Ms. Meloni had chosen Mr. Musk to introduce her as she received a Global Citizen Award from the Atlantic Council, a Washington think tank that cited “her political and economic leadership of Italy, in the European Union” and of the Group of 7 nations “as well as her support of Ukraine in Russia’s war against it.”

    The prime minister and the billionaire business leader have bonded over the years. They share concerns about artificial intelligence and declining birthrates in Western countries, which Mr. Musk has called an existential threat to civilization.

    He described Ms. Meloni on Monday as “someone who is even more beautiful inside than outside” and “authentic, honest and thoughtful.”

    “That can’t always be said about politicians,” Mr. Musk added, to laughter from the crowd of 700 at the Ziegfeld Ballroom in Manhattan.

    After thanking Mr. Musk for his “precious genius,” Ms. Meloni delivered a passionate defense of Western values. While rejecting authoritarian nationalism, she said, “we should not be afraid to defend words like ‘nation’ and ‘patriotism.’”

  • Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle.

    The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proven that its answer was right: the bot generated a detailed computer program that verified “No” was the correct response.

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas.

    Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.
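    The brain teaser Aristotle solved is an instance of the classic max–min inequality: in any grid of numbers, the largest of the row minimums can never exceed the smallest of the column maximums, because any row minimum is at most the entry where its row meets a given column, which in turn is at most that column's maximum. The article does not show Aristotle's proof; as a rough illustration only, this sketch (grid sizes, values and trial count are arbitrary choices, not from the article) checks the claim empirically:

```python
import random

def largest_small_vs_smallest_large(grid):
    # Largest of the per-row minimums ("largest of the small numbers").
    largest_small = max(min(row) for row in grid)
    # Smallest of the per-column maximums ("smallest of the large numbers").
    smallest_large = min(max(col) for col in zip(*grid))
    return largest_small, smallest_large

# Spot-check the claim on random 10-by-10 tables: the largest of the
# small numbers never exceeds the smallest of the large numbers.
random.seed(0)
for _ in range(10_000):
    grid = [[random.randint(1, 100) for _ in range(10)] for _ in range(10)]
    largest_small, smallest_large = largest_small_vs_smallest_large(grid)
    assert largest_small <= smallest_large
```

    Random testing like this only gathers evidence; the point of systems like Aristotle and AlphaProof is to replace it with a machine-checked proof that covers every possible grid.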

  • Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm.

    Jim Covello, Goldman Sachs’s head of stock research, warned that building too much of what the world doesn’t need “typically ends badly.”

    As Jim Covello’s car barreled up Highway 101 from San Jose to San Francisco this month, he counted the billboards about artificial intelligence. The nearly 40 signs he passed, including one that promoted something called Writer Enterprise AI and another for Speech AI, were fresh evidence, he thought, of an economic bubble.

    “Not that long ago, they were all crypto,” Mr. Covello said of the billboards. “And now they’re all A.I.”

    Mr. Covello, the head of stock research at Goldman Sachs, has become Wall Street’s leading A.I. skeptic. Three months ago, he jolted markets with a research paper that challenged whether businesses would see a sufficient return on what by some estimates could be $1 trillion in A.I. spending in the coming years. He said that generative artificial intelligence, which can summarize text and write software code, makes so many mistakes that it was questionable whether it would ever reliably solve complex problems.

    The Goldman paper landed days after a partner at Sequoia Capital, a venture firm, raised similar questions in a blog post about A.I. Their skepticism marked a turning point for A.I.-related stocks, leading to a reassessment of Wall Street’s hottest trade.

    Goldman’s basket of A.I. stocks, which is managed by a separate arm of the firm and includes Nvidia, Microsoft, Apple, Alphabet, Amazon, Meta and Oracle, has declined 7 percent from its peak on July 10, as investors and business leaders debate whether A.I. can justify its staggering costs.

    The pause has come early in the A.I. arms race. The tech industry has a history of spending big to deliver technology transitions, as it did during the personal computer and internet revolutions. Those buildouts spanned five years or more before there was a reckoning.

  • California Gov. Newsom Signs Laws Regulating Election A.I. ‘Deepfakes’

    The state joins dozens of others in regulating the A.I. fakery in ways that could affect this year’s presidential race.

    California will now require social media companies to moderate the spread of election-related impersonations powered by artificial intelligence, known as “deepfakes,” after Gov. Gavin Newsom, a Democrat, signed three new laws on the subject Tuesday.

    The three laws, including a first-of-its-kind law that imposes a new requirement on social media platforms, largely deal with banning or labeling the deepfakes. Only one of the laws will take effect in time to affect the 2024 presidential election, but the trio could offer a road map for regulators across the country who are attempting to slow the spread of the manipulative content powered by artificial intelligence.

    The laws are expected to face legal challenges from social media companies or groups focusing on free speech rights.

    Deepfakes use A.I. tools to create lifelike images, videos or audio clips resembling actual people. Though the technology has been used to create jokes and artwork, it has also been widely adopted to supercharge scams, create non-consensual pornography and disseminate political misinformation.

    Elon Musk, the owner of X, has posted a deepfake to his account this year that would have run afoul of the new laws, experts said. In one video viewed millions of times, Mr. Musk posted fake audio of Vice President Kamala Harris, the Democratic nominee, calling herself the “ultimate diversity hire.”

    Election-Related ‘Deepfake’ Laws: Several states have adopted or seem poised to adopt laws regulating “deepfakes” around elections.

  • How A.I., QAnon and Falsehoods Are Reshaping the Presidential Race

    Three experts on social media and disinformation share their predictions for this year’s chaotic election.

    This year’s presidential election has been polluted with rumors, conspiracy theories and a wave of artificial intelligence imagery. Former President Donald J. Trump has continued to sow doubts about election integrity as his allies across the country have taken steps to make election denial a fixture of the balloting process.

    How worried should voters be?

    To better understand the role that misinformation and conspiracy theories are playing this year, The New York Times asked three authors of new books about disinformation and social media to share their views and predictions.

    The risk that violence could spring from election denialism seems as pressing as in the weeks after the 2020 election, when Trump supporters — incensed by false claims of voter fraud — stormed the Capitol building, they argue. But the day-to-day churn of falsehoods and rumors that spread online may be getting largely drowned out by the billions spent on political advertising.

    In a series of emails with The Times, the authors laid out their predictions for the year. These interviews have been edited for length and clarity.

    Q. Let’s jump right in: How concerned are you that conspiracy theories and misinformation will influence the outcome of this year’s presidential election?

  • OpenAI Names Political Veteran Chris Lehane as Head of Global Policy

    The prominent A.I. start-up is also considering a change to its corporate structure to make it more appealing to outside investors.

    Amid a flurry of news around its funding plans, OpenAI has tapped the political veteran Chris Lehane as its vice president of global policy.

    Mr. Lehane held a similar role at Airbnb and served in the Clinton White House as a lawyer and spokesman who specialized in opposition research. He earned a reputation as “the master of disaster” during his time working for President Bill Clinton.

    As OpenAI has built increasingly powerful artificial intelligence technologies, it has warned of their potential danger, and it is under pressure from lawmakers, regulators and others across the globe to ensure that these technologies do not cause serious harm. Some researchers worry that the A.I. systems could be used to spread disinformation, fuel cyberattacks or even destroy humanity.

    Mr. Lehane could help navigate an increasingly complex social and political landscape. Through a spokeswoman, he declined to comment.

    A spokeswoman for OpenAI, Liz Bourgeois, said, “Just as the company is making changes in other areas of the business to scale the impact of various teams as we enter this next chapter, we recently made changes to our global affairs organization.”

    OpenAI is negotiating a new funding deal that would value the company at more than $100 billion, three people familiar with the discussions have said. The deal would be led by the investment firm Thrive Capital, which would invest more than $1 billion.

  • Google Joins $250 Million Deal to Support Newsrooms in California

    The agreement includes $70 million from the state, which needs legislative approval. Some lawmakers objected, calling for a more comprehensive solution with tech companies.

    Google, a news industry trade group and key California lawmakers announced a first-in-the-nation agreement on Wednesday aimed at shoring up newsrooms in the state with as much as $250 million.

    Through a mix of funding from Google, taxpayers and potentially other private sources, the five-year deal would let Google avert a proposed state bill that could force tech companies to pay news organizations when advertising appeared alongside articles on their platforms.

    The announcement was packed with praise for the effort to stabilize the news industry, which has faced layoffs and shuttered newsrooms as readership has shifted online.

    “The deal not only provides funding to support hundreds of new journalists but helps rebuild a robust and dynamic California press corps for years to come, reinforcing the vital role of journalism in our democracy,” Gov. Gavin Newsom said in a statement.

    The trade group, the California News Publishers Association, called the agreement “a first step toward what we hope will become a comprehensive program to sustain local news in the long term.” The author of the bill, Assemblymember Buffy Wicks, praised it for being a “cross-sector commitment” and called it “just the beginning.”

    A union representing journalists, however, denounced the deal as a “shakedown,” and lawmakers who had been working for months on more comprehensive proposals criticized its scope. The president pro tempore of the State Senate, Mike McGuire, questioned legislative support for the state’s share of the deal, which would require approval as part of the annual budget process.

  • AI Companies Have Pitched US Political Campaigns. The Campaigns Are Wary.

    More than 30 tech companies have pitched A.I. tools to political campaigns for November’s election. The campaigns have been wary.

    Matthew Diemer, a Democrat running for election in Ohio’s Seventh Congressional District, was approached by the artificial intelligence company Civox in January with a pitch: A.I.-backed voice technology that could make tens of thousands of personalized phone calls to voters using Mr. Diemer’s talking points and sense of humor.

    His campaign agreed to try out the technology. But it turned out that the only thing voters hated more than a robocall was an A.I.-backed one.

    While Civox’s A.I. program made almost 1,000 calls to voters in five minutes, nearly all of them hung up in the first few seconds when they heard a voice that described itself as an A.I. volunteer, Mr. Diemer said.

    “People just didn’t want to be on the phone, and they especially didn’t want to be on the phone when they heard they were talking to an A.I. program,” said the entrepreneur, who ran unsuccessfully in 2022 for the same seat he is seeking now. “Maybe people weren’t ready yet for this type of technology.”

    This was supposed to be the year of the A.I. election. Fueled by a proliferation of A.I. tools like chatbots and image generators, more than 30 tech companies have offered A.I. products to national, state and local U.S. political campaigns in recent months. The companies — mostly smaller firms such as BHuman, VoterVoice and Poll the People — make products that reorganize voter rolls and campaign emails, expand robocalls and create A.I.-generated likenesses of candidates that can meet and greet constituents virtually.

    But campaigns are largely not biting — and when they have, the technology has fallen flat. Only a handful of candidates are using A.I., and even fewer are willing to admit it, according to interviews with 23 tech companies and seven political campaigns. Three of the companies said campaigns agreed to buy their tech only if they could ensure that the public would never find out they had used A.I.