More stories

  • Sam Altman on Microsoft, Trump and Musk

    The OpenAI C.E.O. spoke with Andrew Ross Sorkin at the DealBook Summit.

    Since kicking off the artificial intelligence boom with the launch of ChatGPT in 2022, OpenAI has amassed more than 300 million weekly users and a $157 billion valuation. Its C.E.O., Sam Altman, addressed whether that staggering pace of growth can continue at the DealBook Summit last week.

    Altman pushed back on assertions that progress in A.I. is becoming slower and more expensive; on reports that the company’s relationship with its biggest investor, Microsoft, is fraying; and on concerns that Elon Musk, who founded an A.I. company last year, may use his relationship with President-elect Donald Trump to hurt competitors.

    Altman said that artificial general intelligence, the point at which artificial intelligence can do almost anything that a human brain can do, will arrive “sooner than most people in the world think.” Here are five highlights from the conversation.

    On Elon Musk

    Musk, who co-founded OpenAI, has become one of its major antagonists. He has sued the company, accusing it of departing from its founding mission as a nonprofit, and started a competing startup called xAI. On Friday, OpenAI said Musk had wanted to turn OpenAI into a for-profit company in 2017 and walked away when he didn’t get majority equity. Altman called the change in the relationship “tremendously sad.” He continued:

    I grew up with Elon as like a mega hero. I thought what Elon was doing was absolutely incredible for the world, and I’m still, of course, I mean, I have different feelings about him now, but I’m still glad he exists. I mean that genuinely. Not just because I think his companies are awesome, which I do think, but because I think at a time when most of the world was not thinking very ambitiously, he pushed a lot of people, me included, to think much more ambitiously. And grateful is the wrong kind of word. But I’m like thankful.

    You know, we started OpenAI together, and then at some point he totally lost faith in OpenAI and decided to go his own way. And that’s fine, too. But I think of Elon as a builder and someone who — a known thing about Elon is that he really cares about being ‘the guy.’ But I think of him as someone who, if he’s not, that just competes in the market and in the technology, and whatever else. And doesn’t resort to lawfare. And, you know, whatever the stated complaint is, what I believe is he’s a competitor and we’re doing well. And that’s sad to see.

    Altman said of Musk’s close relationship with Trump:

    I may turn out to be wrong, but I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors and advantage your own businesses. And I don’t think people would tolerate that. I don’t think Elon would do it.

    On OpenAI’s relationship with Microsoft

    Microsoft, OpenAI’s largest investor, has put more than $13 billion into the company and has an exclusive license to its raw technologies. Altman once called the relationship “the best bromance in tech,” but The Times and others have reported that the partnership has become strained as OpenAI seeks more and cheaper access to computing power and Microsoft has made moves to diversify its access to A.I. technology. OpenAI expects to lose $5 billion this year because of the steep costs of developing A.I.

    At the DealBook Summit, Altman said of the relationship with Microsoft, “I don’t think we’re disentangling. I will not pretend that there are no misalignments or challenges.” He added:

    We need lots of compute, more than we projected. And that has just been an unusual thing in the history of business, to scale that quickly. And there’s been tension on that.

    Some of OpenAI’s own products compete with those of partners that depend on its technologies. On whether that presents a conflict of interest, Altman said:

    We have a big platform business. We have a big first party business. Many other companies manage both of those things. And we have things that we’re really good at. Microsoft has things they’re really good at. Again, there’s not no tension, but on the whole, our incentives are pretty aligned.

    On whether making progress in A.I. development was becoming more expensive and slower, as some experts have suggested, he doubled down on a message he’d previously posted on social media: “There is no wall.” Andrew asked the same question of Sundar Pichai, the Google C.E.O., which we’ll recap in tomorrow’s newsletter.

  • OpenAI Fires Back at Elon Musk’s Lawsuit

    The artificial intelligence start-up argues that Mr. Musk is trying to hamstring its business as he builds a rival company.

    Earlier this month, Elon Musk asked a federal court to block OpenAI’s efforts to transform itself from a nonprofit into a purely for-profit company.

    On Friday, OpenAI responded with its own legal filing, arguing that Mr. Musk is merely trying to hamstring OpenAI as he builds a rival company, called xAI.

    What Mr. Musk is asking for would “debilitate OpenAI’s business, board deliberations, and mission to create safe and beneficial A.I. — all to the advantage of Musk and his own A.I. company,” the filing said. “The motion should be denied.”

    OpenAI also disputed many of the claims made by Mr. Musk in the lawsuit he brought against OpenAI earlier this year. In a blog post published before Friday’s filing, OpenAI portrayed Mr. Musk as a hypocrite, saying that he had tried to transform the lab from a nonprofit into a for-profit operation before he left the organization six years ago.

    The filing and blog post included documents claiming to show that in 2017, Jared Birchall, the head of Mr. Musk’s family office, registered a company called Open Artificial Intelligence Technologies, Inc. that was meant to be a for-profit incarnation of OpenAI.

  • Why Wouldn’t ChatGPT Say ‘David Mayer’?

    A bizarre saga in which users noticed the chatbot refused to say “David Mayer” raised questions about privacy and A.I., with few clear answers.

    Across the final years of his life, David Mayer, a theater professor living in Manchester, England, faced the cascading consequences of an unfortunate coincidence: A dead Chechen rebel on a terror watch list had once used Mr. Mayer’s name as an alias.

    The real Mr. Mayer had travel plans thwarted, financial transactions frozen and crucial academic correspondence blocked, his family said. The frustrations plagued him until his death in 2023, at age 94.

    But this month, his fight for his identity edged back into the spotlight when eagle-eyed users noticed one particular name was sending OpenAI’s ChatGPT bot into shutdown.

    David Mayer.

    Users’ efforts to prompt the bot to say “David Mayer” in a variety of ways were instead resulting in error messages, or the bot would simply refuse to respond. It’s unclear why the name was kryptonite for the bot service, and OpenAI would not say whether the professor’s plight was related to ChatGPT’s issue with the name.

    But the saga underscores some of the prickliest questions about generative A.I. and the chatbots it powers: Why did that name knock the chatbot out? Who, or what, is making those decisions? And who is responsible for the mistakes?

    “This was something that he would’ve almost enjoyed, because it would have vindicated the effort he put in to trying to deal with it,” Mr. Mayer’s daughter, Catherine, said of the debacle in an interview.

  • Physical Intelligence, a Specialist in Robot A.I., Raises $400 Million

    The start-up raised $400 million in a funding round with investments from the likes of Jeff Bezos, Thrive Capital and OpenAI.

    Physical Intelligence, an artificial intelligence start-up seeking to create brains for a wide variety of robots, plans to announce on Monday that it has raised $400 million in financing from major investors.

    The round was led by Jeff Bezos, Amazon’s executive chairman, and the venture capital firms Thrive Capital and Lux Capital. Other investors include OpenAI, Redpoint Ventures and Bond.

    The fund-raising valued the company at about $2 billion, not including the new investments. That’s significantly more than the $70 million that the start-up, which was founded this year, had raised in seed financing.

    The company wants to make foundational software that would work for any robot, instead of the traditional approach of creating software for specific machines and specific tasks.

    “What we’re doing is not just a brain for any particular robot,” said Karol Hausman, the company’s co-founder and chief executive. “It’s a single generalist brain that can control any robot.”

    It’s a tricky task: Building such a model requires a huge amount of data on how to operate in the real world. Those information sets largely do not exist, compelling the company to compile its own. Its work has been aided by big leaps in A.I. models that can interpret visual data.

    Among the company’s co-founders are Mr. Hausman, a former robotics scientist at Google; Sergey Levine, a professor at the University of California, Berkeley; and Lachy Groom, an investor and former executive at the payments giant Stripe.

    In a paper published last week, Physical Intelligence showed how its software — called π0, or pi-zero — enabled robots to fold laundry, clear a table, flatten a box and more.

    “It’s a true generalist,” Mr. Hausman said. Physical Intelligence executives said that its software was closer to GPT-1, the first model published for chatbots by OpenAI, than to the more advanced brains that power ChatGPT.

    Mr. Groom said that it was hard to predict the rate of progress: A ChatGPT-style breakthrough “could be far sooner than we expect, or it could definitely be far out.”

    The field of robotics A.I. is getting crowded, with players including Skild, which is also working on general-purpose robot A.I.; Figure AI, whose backers include OpenAI and Mr. Bezos; and Covariant, which focuses on industrial applications.

    Amazon has a vested interest in the industry, and has been adding more robots in its operations as it seeks to drive down costs and get orders to customers faster. Tesla also has major A.I. ambitions, with Elon Musk recently saying that the company’s humanoid robot would be “the biggest product ever of any kind.”

  • Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying

    The “best bromance in tech” has had a reality check as OpenAI has tried to change its deal with Microsoft and the software maker has tried to hedge its bet on the start-up.

    Last fall, Sam Altman, OpenAI’s chief executive, asked his counterpart at Microsoft, Satya Nadella, if the tech giant would invest billions of dollars in the start-up.

    Microsoft had already pumped $13 billion into OpenAI, and Mr. Nadella was initially willing to keep the cash spigot flowing. But after OpenAI’s board of directors briefly ousted Mr. Altman last November, Mr. Nadella and Microsoft reconsidered, according to four people familiar with the talks who spoke on the condition of anonymity.

    Over the next few months, Microsoft wouldn’t budge as OpenAI, which expects to lose $5 billion this year, continued to ask for more money and more computing power to build and run its A.I. systems.

    Mr. Altman once called OpenAI’s partnership with Microsoft “the best bromance in tech,” but ties between the companies have started to fray. Financial pressure on OpenAI, concern about its stability and disagreements between employees of the two companies have strained their five-year partnership, according to interviews with 19 people familiar with the relationship between the companies.

    That tension demonstrates a key challenge for A.I. start-ups: They are dependent on the world’s tech giants for money and computing power because those big companies control the massive cloud computing systems the small outfits need to develop A.I.

    No pairing displays this dynamic better than Microsoft and OpenAI, the maker of the ChatGPT chatbot. When OpenAI got its giant investment from Microsoft, it agreed to an exclusive deal to buy computing power from Microsoft and work closely with the tech giant on new A.I.

  • Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle.

    The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proven that its answer was right. The bot generated a detailed computer program that verified “No” was the correct response.

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas.

    Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.
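    The brain teaser has a one-line answer: each row’s minimum is at most the entry it shares with any column, which in turn is at most that column’s maximum, so the largest row minimum can never exceed the smallest column maximum. A minimal Python spot-check of that claim (an illustration only, not the proof program Aristotle generated):

```python
import random

def largest_row_min(grid):
    # The largest among the minimums of each row.
    return max(min(row) for row in grid)

def smallest_col_max(grid):
    # The smallest among the maximums of each column.
    return min(max(col) for col in zip(*grid))

# Check the claim on 1,000 random 10-by-10 tables.
random.seed(0)
for _ in range(1000):
    grid = [[random.randint(0, 99) for _ in range(10)] for _ in range(10)]
    assert largest_row_min(grid) <= smallest_col_max(grid)
```

    A random check like this only gathers evidence; the point of systems like Aristotle is that they emit a formal proof covering every possible table, not just the ones sampled.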

  • Will A.I. Be a Bust? A Wall Street Skeptic Rings the Alarm.

    Jim Covello, Goldman Sachs’s head of stock research, warned that building too much of what the world doesn’t need “typically ends badly.”

    As Jim Covello’s car barreled up Highway 101 from San Jose to San Francisco this month, he counted the billboards about artificial intelligence. The nearly 40 signs he passed, including one that promoted something called Writer Enterprise AI and another for Speech AI, were fresh evidence, he thought, of an economic bubble.

    “Not that long ago, they were all crypto,” Mr. Covello said of the billboards. “And now they’re all A.I.”

    Mr. Covello, the head of stock research at Goldman Sachs, has become Wall Street’s leading A.I. skeptic. Three months ago, he jolted markets with a research paper that challenged whether businesses would see a sufficient return on what by some estimates could be $1 trillion in A.I. spending in the coming years. He said that generative artificial intelligence, which can summarize text and write software code, makes so many mistakes that it was questionable whether it would ever reliably solve complex problems.

    The Goldman paper landed days after a partner at Sequoia Capital, a venture firm, raised similar questions in a blog post about A.I. Their skepticism marked a turning point for A.I.-related stocks, leading to a reassessment of Wall Street’s hottest trade.

    Goldman’s basket of A.I. stocks, which is managed by a separate arm of the firm and includes Nvidia, Microsoft, Apple, Alphabet, Amazon, Meta and Oracle, has declined 7 percent from its peak on July 10, as investors and business leaders debate whether A.I. can justify its staggering costs.

    The pause has come early in the A.I. arms race. The tech industry has a history of spending big to deliver technology transitions, as it did during the personal computer and internet revolutions. Those build-outs spanned five years or more before there was a reckoning.

  • OpenAI Names Political Veteran Chris Lehane as Head of Global Policy

    The prominent A.I. start-up is also considering a change to its corporate structure to make it more appealing to outside investors.

    Amid a flurry of news around its funding plans, OpenAI has tapped the political veteran Chris Lehane as its vice president of global policy.

    Mr. Lehane held a similar role at Airbnb and served in the Clinton White House as a lawyer and spokesman who specialized in opposition research. He earned a reputation as “the master of disaster” during his time working for President Bill Clinton.

    As OpenAI has built increasingly powerful artificial intelligence technologies, it has warned of their potential danger, and it is under pressure from lawmakers, regulators and others across the globe to ensure that these technologies do not cause serious harm. Some researchers worry that the A.I. systems could be used to spread disinformation, fuel cyberattacks or even destroy humanity.

    Mr. Lehane could help navigate an increasingly complex social and political landscape. Through a spokeswoman, he declined to comment.

    A spokeswoman for OpenAI, Liz Bourgeois, said, “Just as the company is making changes in other areas of the business to scale the impact of various teams as we enter this next chapter, we recently made changes to our global affairs organization.”

    OpenAI is negotiating a new funding deal that would value the company at more than $100 billion, three people familiar with discussions have said. The deal would be led by the investment firm Thrive Capital, which would invest more than $1 billion.