More stories

  •

    OpenAI CEO tells Federal Reserve confab that entire job categories will disappear due to AI

    During his latest trip to Washington, OpenAI’s chief executive, Sam Altman, painted a sweeping vision of an AI-dominated future in which entire job categories disappear, presidents follow ChatGPT’s recommendations and hostile nations wield artificial intelligence as a weapon of mass destruction, all while positioning his company as the indispensable architect of humanity’s technological destiny.

    Speaking at the Capital Framework for Large Banks conference at the Federal Reserve board of governors, Altman told the crowd that certain job categories would be completely eliminated by AI advancement. “Some areas, again, I think just like totally, totally gone,” he said, singling out customer support roles. “That’s a category where I just say, you know what, when you call customer support, you’re on target and AI, and that’s fine.”

    The OpenAI founder described the transformation of customer service as already complete, telling the Federal Reserve vice-chair for supervision, Michelle Bowman: “Now you call one of these things and AI answers. It’s like a super-smart, capable person. There’s no phone tree, there’s no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It’s very quick. You call once, the thing just happens, it’s done.”

    He then turned to healthcare, suggesting that AI’s diagnostic capabilities had surpassed those of human doctors, though he stopped short of accepting the superior performer as the sole purveyor of healthcare. “ChatGPT today, by the way, most of the time, can give you better – it’s like, a better diagnostician than most doctors in the world,” he said. “Yet people still go to doctors, and I am not, like, maybe I’m a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop.”

    His visit to Washington coincided with the Trump administration’s unveiling of its “AI action plan”, which is focused on defining and easing some regulations and promoting more datacenters. Altman’s engagement with the federal government has taken on a new tune under Donald Trump compared with years past. While much has changed with the technology over the years, under the Biden administration OpenAI and its rivals asked the government to regulate AI; under Trump, they talk of accelerating to beat China.

    At the fireside chat, Altman said one of his biggest worries was AI’s rapidly advancing destructive capabilities, with one scenario that kept him up at night being a hostile nation using these capabilities to attack the US financial system. And despite being in awe of advances in voice cloning, he warned the crowd that the same technology could enable sophisticated fraud and identity theft, considering that “there are still some financial institutions that will accept the voiceprint as authentication”.

    OpenAI and Altman are already under way on their big pivot to Washington, attempting to crash a party at which Elon Musk once held the golden ticket. Along with announcing plans to open his company’s first office in Washington next year, Altman faced the Senate commerce committee for his first congressional testimony since the high-profile appearance in May 2023 that propelled him on to the global stage.

  •

    How Google’s Antitrust Case Could Upend the A.I. Race

    A federal judge issued a landmark ruling last year, saying that Google had become a monopolist in internet search. But in a hearing that began last week to figure out how to fix the problem, the emphasis has frequently landed on a different technology, artificial intelligence.

    In U.S. District Court in Washington last week, a Justice Department lawyer argued that Google could use its search monopoly to become the dominant player in A.I. Google executives disclosed internal discussions about expanding the reach of Gemini, the company’s A.I. chatbot. And executives at rival A.I. companies said that Google’s power was an obstacle to their success.

    On Wednesday, the first substantial question posed to Google’s chief executive, Sundar Pichai, after he took the stand was also about A.I. Throughout his 90-minute testimony, the subject came up more than two dozen times.

    “I think it’s one of the most dynamic moments in the industry,” said Mr. Pichai. “I’ve seen users’ home screens with, like, seven to nine applications of chatbots which they are trying and playing and training with.”

    An antitrust lawsuit about the past has effectively turned into a fight about the future, as the government and Google face off over proposed changes to the tech giant’s business that could shift the course of the A.I. race.

    For more than 20 years, Google’s search engine dominated the way people got answers online. Now the federal court is in essence grappling with whether the Silicon Valley giant will dominate the next era of how people get information on the internet, as consumers turn to a new crop of A.I. chatbots to answer questions, find solutions to their problems and learn about the world.

  •

    Sam Altman on Microsoft, Trump and Musk

    The OpenAI C.E.O. spoke with Andrew Ross Sorkin at the DealBook Summit.

    Since kicking off the artificial intelligence boom with the launch of ChatGPT in 2022, OpenAI has amassed more than 300 million weekly users and a $157 billion valuation. Its C.E.O., Sam Altman, addressed whether that staggering pace of growth can continue at the DealBook Summit last week.

    Altman pushed back on assertions that progress in A.I. is becoming slower and more expensive; on reports that the company’s relationship with its biggest investor, Microsoft, is fraying; and on concerns that Elon Musk, who founded an A.I. company last year, may use his relationship with President-elect Donald Trump to hurt competitors.

    Altman said that artificial general intelligence, the point at which artificial intelligence can do almost anything that a human brain can do, will arrive “sooner than most people in the world think.” Here are five highlights from the conversation.

    On Elon Musk

    Musk, who co-founded OpenAI, has become one of its major antagonists. He has sued the company, accusing it of departing from its founding mission as a nonprofit, and started a competing startup called xAI. On Friday, OpenAI said Musk had wanted to turn OpenAI into a for-profit company in 2017 and walked away when he didn’t get majority equity. Altman called the change in the relationship “tremendously sad.” He continued:

    I grew up with Elon as like a mega hero. I thought what Elon was doing was absolutely incredible for the world, and I’m still, of course, I mean, I have different feelings about him now, but I’m still glad he exists. I mean that genuinely. Not just because I think his companies are awesome, which I do think, but because I think at a time when most of the world was not thinking very ambitiously, he pushed a lot of people, me included, to think much more ambitiously. And grateful is the wrong kind of word. But I’m like thankful.

    You know, we started OpenAI together, and then at some point he totally lost faith in OpenAI and decided to go his own way. And that’s fine, too. But I think of Elon as a builder and someone who — a known thing about Elon is that he really cares about being ‘the guy.’ But I think of him as someone who, if he’s not, that just competes in the market and in the technology, and whatever else. And doesn’t resort to lawfare. And, you know, whatever the stated complaint is, what I believe is he’s a competitor and we’re doing well. And that’s sad to see.

    Altman said of Musk’s close relationship with Trump:

    I may turn out to be wrong, but I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power to the degree that Elon has it to hurt your competitors and advantage your own businesses. And I don’t think people would tolerate that. I don’t think Elon would do it.

    On OpenAI’s relationship with Microsoft

    Microsoft, OpenAI’s largest investor, has put more than $13 billion into the company and has an exclusive license to its raw technologies. Altman once called the relationship “the best bromance in tech,” but The Times and others have reported that the partnership has become strained as OpenAI seeks more and cheaper access to computing power and Microsoft has made moves to diversify its access to A.I. technology. OpenAI expects to lose $5 billion this year because of the steep costs of developing A.I.

    At the DealBook Summit, Altman said of the relationship with Microsoft, “I don’t think we’re disentangling. I will not pretend that there are no misalignments or challenges.” He added:

    We need lots of compute, more than we projected. And that has just been an unusual thing in the history of business, to scale that quickly. And there’s been tension on that.

    Some of OpenAI’s own products compete with those of partners that depend on its technologies. On whether that presents a conflict of interest, Altman said:

    We have a big platform business. We have a big first party business. Many other companies manage both of those things. And we have things that we’re really good at. Microsoft has things they’re really good at. Again, there’s not no tension, but on the whole, our incentives are pretty aligned.

    On whether making progress in A.I. development was becoming more expensive and slower, as some experts have suggested, he doubled down on a message he’d previously posted on social media: “There is no wall.” Andrew asked the same question of Sundar Pichai, the Google C.E.O., which we’ll recap in tomorrow’s newsletter.

  •

    Why Wouldn’t ChatGPT Say ‘David Mayer’?

    A bizarre saga in which users noticed the chatbot refused to say “David Mayer” raised questions about privacy and A.I., with few clear answers.

    Across the final years of his life, David Mayer, a theater professor living in Manchester, England, faced the cascading consequences of an unfortunate coincidence: A dead Chechen rebel on a terror watch list had once used Mr. Mayer’s name as an alias.

    The real Mr. Mayer had travel plans thwarted, financial transactions frozen and crucial academic correspondence blocked, his family said. The frustrations plagued him until his death in 2023, at age 94.

    But this month, his fight for his identity edged back into the spotlight when eagle-eyed users noticed one particular name was sending OpenAI’s ChatGPT bot into shutdown.

    David Mayer.

    Users’ efforts to prompt the bot to say “David Mayer” in a variety of ways were instead resulting in error messages, or the bot would simply refuse to respond. It’s unclear why the name was kryptonite for the bot service, and OpenAI would not say whether the professor’s plight was related to ChatGPT’s issue with the name.

    But the saga underscores some of the prickliest questions about generative A.I. and the chatbots it powers: Why did that name knock the chatbot out? Who, or what, is making those decisions? And who is responsible for the mistakes?

    “This was something that he would’ve almost enjoyed, because it would have vindicated the effort he put in to trying to deal with it,” Mr. Mayer’s daughter, Catherine, said of the debacle in an interview.

  •

    Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying

    The “best bromance in tech” has had a reality check as OpenAI has tried to change its deal with Microsoft and the software maker has tried to hedge its bet on the start-up.

    Last fall, Sam Altman, OpenAI’s chief executive, asked his counterpart at Microsoft, Satya Nadella, if the tech giant would invest billions of dollars in the start-up.

    Microsoft had already pumped $13 billion into OpenAI, and Mr. Nadella was initially willing to keep the cash spigot flowing. But after OpenAI’s board of directors briefly ousted Mr. Altman last November, Mr. Nadella and Microsoft reconsidered, according to four people familiar with the talks who spoke on the condition of anonymity.

    Over the next few months, Microsoft wouldn’t budge as OpenAI, which expects to lose $5 billion this year, continued to ask for more money and more computing power to build and run its A.I. systems.

    Mr. Altman once called OpenAI’s partnership with Microsoft “the best bromance in tech,” but ties between the companies have started to fray. Financial pressure on OpenAI, concern about its stability and disagreements between employees of the two companies have strained their five-year partnership, according to interviews with 19 people familiar with the relationship between the companies.

    That tension demonstrates a key challenge for A.I. start-ups: They are dependent on the world’s tech giants for money and computing power because those big companies control the massive cloud computing systems the small outfits need to develop A.I.

    No pairing displays this dynamic better than Microsoft and OpenAI, the maker of the ChatGPT chatbot. When OpenAI got its giant investment from Microsoft, it agreed to an exclusive deal to buy computing power from Microsoft and work closely with the tech giant on new A.I.

  •

    Doctors, A.I. and Empathy for Patients

    More from our inbox:

    • Breast Cancer Screening
    • Walz’s Missteps
    • Mental Health Support for Schoolchildren

    To the Editor:

    Re “ChatGPT’s Bedside Manner Is Better Than Mine,” by Jonathan Reisman (Opinion guest essay, Oct. 9):

    Dr. Reisman notes that ChatGPT’s answers to patient questions have been rated as more empathetic than those written by actual doctors. This should not be a call for doctors to surrender our human role to A.I. To the contrary, we need to continue to improve our communication skills.

    For the past 25 years, I have been facilitating seminars in doctor-patient communication. The skills to communicate bad news listed by Dr. Reisman are exactly the techniques that we suggest to our medical students. However, doctors can avoid the temptation to surrender their “humanity to a script” as if it were “just another day at work.”

    Techniques are a valuable guide, but the real work consists of carefully listening to the responses and their emotional content, and crafting new words and phrases that speak to the unique patient’s confusion, fear and distress.

    In my experience, patients know when we are reciting a script, and when we are paying attention to their thoughts and feelings. Unlike A.I., and especially when conversations are matters of life and death, we can reach into the depths of our humanity to feel and communicate empathy and compassion toward our patients.

    Neil S. Prose
    Durham, N.C.

    To the Editor:

    Mention the words “A.I.” and “doctoring” to most physicians in the same sentence, and the immediate reaction is often skepticism or fear.

    As Dr. Jonathan Reisman noted in his essay, A.I. has shown a remarkable ability to mimic human empathy in encounters with patients. This is one reason many practicing physicians worry that A.I. may replace doctors eventually.

  •

    Can Math Help AI Chatbots Stop Making Stuff Up?

    Chatbots like ChatGPT get stuff wrong. But researchers are building new A.I. systems that can verify their own math — and maybe more.

    On a recent afternoon, Tudor Achim gave a brain teaser to an A.I. bot called Aristotle.

    The question involved a 10-by-10 table filled with a hundred numbers. If you collected the smallest number in each row and the largest number in each column, he asked, could the largest of the small numbers ever be greater than the smallest of the large numbers?

    The bot correctly answered “No.” But that was not surprising. Popular chatbots like ChatGPT may give the right answer, too. The difference was that Aristotle had proven that its answer was right. The bot generated a detailed computer program that verified “No” was the correct response.

    Chatbots like ChatGPT from OpenAI and Gemini from Google can answer questions, write poetry, summarize news articles and generate images. But they also make mistakes that defy common sense. Sometimes, they make stuff up — a phenomenon called hallucination.

    Mr. Achim, the chief executive and co-founder of a Silicon Valley start-up called Harmonic, is part of a growing effort to build a new kind of A.I. that never hallucinates. Today, this technology is focused on mathematics. But many leading researchers believe they can extend the same techniques into computer programming and other areas.

    Because math is a rigid discipline with formal ways of proving whether an answer is right or wrong, companies like Harmonic can build A.I. technologies that check their own answers and learn to produce reliable information.

    Google DeepMind, the tech giant’s central A.I. lab, recently unveiled a system called AlphaProof that operates in this way. Competing in the International Mathematical Olympiad, the premier math competition for high schoolers, the system achieved “silver medal” performance, solving four of the competition’s six problems. It was the first time a machine had reached that level.
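    The brain teaser above is an instance of the max–min inequality: for any grid of numbers, the largest of the row minimums can never exceed the smallest of the column maximums, because the entry where the winning row and winning column cross is at least that row's minimum and at most that column's maximum. The article does not show Aristotle's proof; the sketch below is only an illustrative check of the claim on random tables, not the bot's actual program.

    ```python
    import random

    def max_of_row_mins(table):
        # Largest of the "small numbers": maximum over each row's minimum.
        return max(min(row) for row in table)

    def min_of_col_maxes(table):
        # Smallest of the "large numbers": minimum over each column's maximum.
        # zip(*table) transposes the table so columns become rows.
        return min(max(col) for col in zip(*table))

    # Spot-check the inequality on random 10-by-10 tables of a hundred numbers.
    for _ in range(1000):
        table = [[random.randint(0, 99) for _ in range(10)] for _ in range(10)]
        assert max_of_row_mins(table) <= min_of_col_maxes(table)
    ```

    A spot check like this only gathers evidence; systems such as Aristotle instead emit a machine-checkable proof that the inequality holds for every table.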