More stories

  • A.I. Has a Measurement Problem

    There’s a problem with leading artificial intelligence tools like ChatGPT, Gemini and Claude: We don’t really know how smart they are.

    That’s because, unlike companies that make cars or drugs or baby formula, A.I. companies aren’t required to submit their products for testing before releasing them to the public. There’s no Good Housekeeping seal for A.I. chatbots, and few independent groups are putting these tools through their paces in a rigorous way.

    Instead, we’re left to rely on the claims of A.I. companies, which often use vague, fuzzy phrases like “improved capabilities” to describe how their models differ from one version to the next. And while there are some standard tests given to A.I. models to assess how good they are at, say, math or logical reasoning, many experts have doubts about how reliable those tests really are.

    This might sound like a petty gripe. But I’ve become convinced that a lack of good measurement and evaluation for A.I. systems is a major problem.

    For starters, without reliable information about A.I. products, how are people supposed to know what to do with them?

    I can’t count the number of times I’ve been asked in the past year, by a friend or a colleague, which A.I. tool they should use for a certain task. Does ChatGPT or Gemini write better Python code? Is DALL-E 3 or Midjourney better at generating realistic images of people?
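
    For a sense of what one of those “standard tests” looks like in practice, here is a minimal, hypothetical sketch of a multiple-choice benchmark loop. The question set, the `ask_model` stub and the exact-match scoring rule are illustrative assumptions, not any real benchmark’s code.

    ```python
    # Minimal sketch of a multiple-choice benchmark harness (hypothetical).
    # `ask_model` stands in for a real API call to a chatbot such as ChatGPT or Gemini.

    QUESTIONS = [
        {"prompt": "What is 17 * 23? (A) 391 (B) 401 (C) 371", "answer": "A"},
        {"prompt": "If all bloops are razzies and all razzies are lazzies, "
                   "are all bloops lazzies? (A) yes (B) no", "answer": "A"},
    ]

    def ask_model(prompt: str) -> str:
        """Stand-in for a real model call; this toy model always answers 'A'."""
        return "A"

    def run_benchmark(questions) -> float:
        """Score the model by exact match and return its accuracy."""
        correct = sum(ask_model(q["prompt"]).strip().upper() == q["answer"]
                      for q in questions)
        return correct / len(questions)

    if __name__ == "__main__":
        print(f"accuracy: {run_benchmark(QUESTIONS):.0%}")
    ```

    Even this toy harness hints at why experts are skeptical: a model can score perfectly on exact-match questions while lacking the underlying skill the test claims to measure.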

  • Elon Musk to Open Source Grok Chatbot in Latest AI War Escalation

    Mr. Musk’s move to open up the code behind Grok is the latest volley in the fight to control the future of A.I., coming after his lawsuit against OpenAI over the same issue.

    Elon Musk released the raw computer code behind his version of an artificial intelligence chatbot on Sunday, an escalation by one of the world’s richest men in a battle to control the future of A.I.

    Grok, which is designed to give snarky replies styled after the science-fiction novel “The Hitchhiker’s Guide to the Galaxy,” is a product of xAI, the company Mr. Musk founded last year. While xAI is a separate entity from X, its technology has been integrated into the social media platform and is trained on users’ posts. Users who subscribe to X’s premium features can ask Grok questions and receive responses.

    By opening the code up for everyone to view and use — known as open sourcing — Mr. Musk waded further into a heated debate in the A.I. world over whether doing so could help make the technology safer, or simply open it up to misuse.

    Mr. Musk, a self-proclaimed proponent of open sourcing, did the same with X’s recommendation algorithm last year, but he has not updated it since.

    “Still work to do, but this platform is already by far the most transparent & truth-seeking (not a high bar tbh),” Mr. Musk posted on Sunday in response to a comment on open sourcing X’s recommendation algorithm.

    The move to open-source the chatbot’s code is the latest volley between Mr. Musk and ChatGPT’s creator, OpenAI, which the mercurial billionaire recently sued, accusing it of breaking a promise to do the same. Mr. Musk, who helped found and fund OpenAI before departing several years later, has argued that such an important technology should not be controlled solely by tech giants like Google and Microsoft, which is a close partner of OpenAI.

  • Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

    An agreement signed by more than 90 scientists said, however, that artificial intelligence’s benefit to the field of biology would exceed any potential harm.

    Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.

    Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.

    Now, more than 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm.

    The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines.

    “As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

    The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

  • The Big Questions Raised by Elon Musk’s Lawsuit Against OpenAI

    Experts say the case against the start-up and its chief executive, Sam Altman, raises unusual legal issues that do not have a clear precedent.

    From Silicon Valley to Wall Street to Washington, the blockbuster case that Elon Musk filed against OpenAI and its C.E.O., Sam Altman, has become Topic A. It is the business world’s hottest soap opera.

    But among lawyers, the case has become something of a fascination for a different reason: It poses a series of unique and unusual legal questions without clear precedent. And it remains unclear what would constitute “winning” in a case like this, given that it appears to have been brought out of Musk’s own personal frustration and philosophical differences with OpenAI, a company he helped found and then left.

    The lawsuit — which pits one of the wealthiest men in the world against the most advanced A.I. company in the world, backed by Microsoft, one of the world’s most valuable companies — argues that OpenAI, a nonprofit organization that created a for-profit subsidiary in 2019, breached a contract to operate in the public interest and violated its duties by diverting from its founding purpose of benefiting humanity.

    Musk’s lawyers — led by Morgan Chu, a partner at Irell & Manella who is known as the “$5 billion man” for his win record — want the court to force OpenAI to open its technology to others and to stop licensing it to Microsoft, which has invested billions in its partnership with the start-up.

    Among the questions that lawyers and scholars are asking after poring over Musk’s 35-page complaint:

    Does Musk even have standing to sue? “One of the differences with nonprofits compared to other companies is that, generally, no one other than the state attorney general has standing to sue for the kind of stuff that he’s complaining about, like not following your mission,” Peter Molk, a professor of law at the University of Florida, said of Musk’s lawsuit. That’s most likely why Musk’s lawyers are presenting the case as a breach of contract instead of attacking the company’s nonprofit status.

    Musk also alleges that OpenAI breached its fiduciary duty, but that claim has its own challenges, lawyers said, given that such claims are traditionally handled in Delaware, not California, where the lawsuit was filed. (Musk, of course, has an infamously rocky relationship with the state of Delaware.)

  • Elon Musk’s Feud With OpenAI Goes to Court

    The tech mogul wants to force the A.I. start-up to reveal its research to the public and prevent it from pursuing profits.

    Musk takes aim at OpenAI. The gloves have really come off in one of the most personal fights in the tech world: Elon Musk has sued OpenAI and its C.E.O., Sam Altman, accusing them of reneging on the start-up’s original purpose of being a nonprofit laboratory for the technology.

    Yes, Musk has disagreed with Altman for years about the purpose of the organization they co-founded, and he is creating a rival artificial intelligence company. But the lawsuit also appears rooted in philosophical differences that go to the heart of who controls a hugely transformative technology — and it is backed by one of the wealthiest men on the planet.

    The backstory: Musk, Altman and others agreed to create OpenAI in 2015 to provide an open-sourced alternative to the likes of Google, which had bought the leading A.I. start-up DeepMind the year before. Musk notes in his suit that OpenAI’s certificate of incorporation states that its work “will benefit the public,” and that it isn’t “organized for the private gain of any person.”

    Musk poured more than $44 million into OpenAI between 2016 and 2020, and helped hire top talent like the researcher Ilya Sutskever.

    Altman has moved OpenAI toward commerce, starting with the creation in 2019 of a for-profit subsidiary that would raise money from investors, notably Microsoft. The final straw for Musk came last year, when OpenAI released its GPT-4 A.I. model but kept its workings hidden from all except itself and Microsoft.

    “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawyers write in the complaint.

  • SEC Is Investigating OpenAI Over Its Board’s Actions

    The U.S. regulator opened its inquiry after the board unexpectedly fired the company’s chief executive, Sam Altman, in November.

    The Securities and Exchange Commission began an inquiry into OpenAI soon after the company’s board of directors unexpectedly removed Sam Altman, its chief executive, at the end of last year, three people familiar with the inquiry said.

    The regulator has sent official requests to OpenAI, the developer of the ChatGPT online chatbot, seeking information about the situation. It is unclear whether the S.E.C. is investigating Mr. Altman’s behavior, the board’s decision to oust him or both.

    Even as OpenAI has tried to turn the page on the dismissal of Mr. Altman, who was soon reinstated, the controversy continues to hound the company. In addition to the S.E.C. inquiry, the San Francisco artificial intelligence company has hired a law firm to conduct its own investigation into Mr. Altman’s behavior and the board’s decision to remove him.

    The board dismissed Mr. Altman on Nov. 17, saying it no longer had confidence in his ability to run OpenAI. It said he had not been “consistently candid in his communications,” though it did not provide specifics. It agreed to reinstate him five days later.

    Privately, the board worried that Mr. Altman was not sharing all of his plans to raise money from investors in the Middle East for an A.I. chip project, people with knowledge of the situation have said.

    Spokespeople for the S.E.C. and OpenAI and a lawyer for Mr. Altman all declined to comment. The S.E.C.’s inquiry was reported earlier by The Wall Street Journal.

    OpenAI kicked off an industrywide A.I. boom at the end of 2022 when it released ChatGPT. The company is considered a leader in what is called generative A.I., technologies that can generate text, sounds and images from short prompts. A recent funding deal values the start-up at more than $80 billion.

    Many believe that generative A.I., which represents a fundamental shift in the way computers behave, could remake the industry as thoroughly as the iPhone or the web browser. Others argue that the technology could cause serious harm, helping to spread online disinformation, replacing jobs with unusual speed and maybe even threatening the future of humanity.

    After the release of ChatGPT, Mr. Altman became the face of the industry’s push toward generative A.I. as he endlessly promoted the technology — while acknowledging the dangers.

    In an effort to resolve the turmoil surrounding Mr. Altman’s ouster, he and the board agreed to remove two members and add two others: Bret Taylor, who is a former Salesforce executive, and former Treasury Secretary Lawrence H. Summers.

    Mr. Altman and the board also agreed that OpenAI would start its own investigation into the matter. That investigation, by the WilmerHale law firm, is expected to close soon.

  • Google Joins Effort to Help Spot Content Made With A.I.

    The tech company’s plan is similar to one announced two days earlier by Meta, another Silicon Valley giant.

    Google, whose work in artificial intelligence helped make A.I.-generated content far easier to create and spread, now wants to ensure that such content is traceable as well.

    The tech giant said on Thursday that it was joining an effort to develop credentials for digital content, a sort of “nutrition label” that identifies when and how a photograph, a video, an audio clip or another file was produced or altered — including with A.I. The company will collaborate with companies like Adobe, the BBC, Microsoft and Sony to fine-tune the technical standards.

    The announcement follows a similar promise announced on Tuesday by Meta, which like Google has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels that identified such material.

    Google, which spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate the digital certification into its own products and services, though it did not specify its timing or scope. Its Bard chatbot is connected to some of the company’s most popular consumer services, such as Gmail and Docs. On YouTube, which Google owns and which will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating on current events in voices powered by text-to-speech services.

    Recognizing where online content originates and how it changes is a high priority for lawmakers and tech watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic images and audio produced by artificial intelligence and unreliable A.I. detection tools caused people to further doubt the authenticity of things they saw and heard on the internet.

    Configuring digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards have been supported by news organizations such as The New York Times as well as by camera manufacturers, banks and advertising agencies.
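
    The article doesn’t describe the mechanics of such credentials, but the core idea, a tamper-evident record of a file’s origin bound to the file’s exact bytes, can be sketched briefly. The Python below is a deliberately simplified, hypothetical illustration: the manifest fields, the shared HMAC key and the function names are assumptions for demonstration only, and the actual C2PA standard instead uses certificate-based signatures embedded alongside the media.

    ```python
    # Simplified, hypothetical sketch of a content-provenance credential.
    # Real C2PA manifests use X.509 certificate signatures, not a shared HMAC key.
    import hashlib
    import hmac
    import json
    import time

    SIGNING_KEY = b"demo-key"  # illustrative only

    def make_credential(media_bytes: bytes, tool: str) -> dict:
        """Build a signed manifest recording how a media file was produced."""
        manifest = {
            "content_sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds the record to these bytes
            "produced_with": tool,  # e.g. an A.I. image generator
            "timestamp": int(time.time()),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_credential(media_bytes: bytes, manifest: dict) -> bool:
        """Check that the file matches the manifest and the manifest is untampered."""
        claims = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claims, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, manifest["signature"])
                and claims["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())
    ```

    With a scheme like this, a verifier holding the signer’s key can detect both an altered file (the hash no longer matches) and a forged history (the signature no longer verifies).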

  • The big themes in 2024: elections, antitrust and shadow banking.

    From elections and A.I. to antitrust and shadow banking, here are the big themes that could define the worlds of business and policy.

    What we’re watching in 2024. Andrew here. As we look ahead to the new year, the DealBook team has identified about a dozen themes that are likely to become running narratives that could define the business and policy ecosystem for the next 12 months.

    Of course, the presidential election, perhaps one of the most polarizing in history, is going to infect every part of the business world. Watch out for which C.E.O.s and other financiers back candidates — and, importantly, which ones go silent — and how companies deal with outspoken employees. Also: Look for some wealthy executives to avoid giving directly to candidates but instead donate to PACs as a shield, of sorts, from public scrutiny.

    Another story line that will probably remain part of the water cooler — er, Slack and X — conversation in business is the backlash against environmental, social and corporate governance principles, or E.S.G. That fight has turned into a political battle and, over the past year, has increasingly fed into a debate about free speech on campuses (another theme that isn’t going away).

    Here’s a bit more detail on what we’re looking out for this year.

    The U.S. presidential election. The race seems set to come down to a rerun of 2020, with Donald Trump leading opinion polls to be the Republican candidate despite his mounting legal battles. The big question is how business leaders will respond. Will they coalesce around (and direct their money to) an anyone-but-Trump candidate? Nikki Haley, the former governor of South Carolina, is leading that race, but she has a long way to go to catch up to Trump. President Biden, who has made a series of consequential decisions on the economy, hopes voters will start to feel an economic upswing to reverse his sagging poll ratings.

    Private credit could be hit by a wave of defaults. Just as 1980s-style leveraged buyouts have been rechristened “private equity,” so too has “shadow banking” been rebranded as “private credit” and “direct lending” in time for the business to reach its highest levels yet. Direct lending by investment firms and hedge funds has become a $1.5 trillion titan, with scores of companies turning to the likes of Apollo and Ares for loans instead of, say, JPMorgan Chase. But the industry may face a test in 2024: Indebted borrowers, facing looming debt maturities and high interest rates, are already turning to private credit for yet more loans, raising concerns that lenders could face a wave of defaulting clients. A string of failures could hit these lenders hard, skeptics fear, leaving pension funds, insurers and other backers of private credit funds holding the bag.

    Media deal mania? Reports that David Zaslav, the C.E.O. of Warner Bros. Discovery, held talks last month about a potential merger with Paramount set off a wave of speculation that 2024 would be a year of media consolidation. The industry has been transformed in recent years by the growth of streaming, changes in the way people consume media and big tech’s encroachment into sectors typically dominated by old-school media companies. Now, the industry is on the cusp of the next major shift with the rise of artificial intelligence. One date to put in your diary: April 8, 2024, the two-year anniversary of the merger of Warner Media and Discovery to create Warner Bros. Discovery — and the first day that the new company can be sold without risking a big tax bill.

    Will unions maintain their momentum? Organized labor had a banner year in 2023, with big wins in fights with Hollywood studios and the auto industry. Whether that signals a permanent turnaround for the labor movement is up for debate. But the election will most likely be a key factor. Both Biden and Trump tried to woo striking autoworkers this year, so expect more efforts to win over blue-collar voters.

    Middle East money will keep flowing. Tensions with China and economic sanctions have made it increasingly difficult for companies to raise money from a place that used to be top of the list. Middle Eastern investors have picked up the slack. Saudi Arabia, the United Arab Emirates, Qatar and others are spending money as they look to diversify their fossil fuel-dependent economies. The sectors are wide-ranging, including sports, tech companies, luxury, retail and media. Critics say the petrostates with dubious human rights records are trying to launder their reputations, but that hasn’t stopped Western businesses from seeking their lucre. One trend to watch: the growing ties between China and Middle Eastern money. Beijing is trying to deepen links with countries outside of Washington’s orbit or, at least, with those willing to play both sides.

    More antitrust fights. A tough year for regulators — like Lina Khan at the F.T.C. and Jonathan Kanter of the Justice Department — ended with two wins after both Illumina and Adobe called off multibillion-dollar takeovers in the face of government pressure. Enforcers could already claim some success by forcing deal makers to weigh whether a big deal is worth pursuing, given the potential risk that they might have to spend months in court defending it. Don’t expect Khan to ease the pressure; do expect more antitrust fights.

    New climate disclosure rules. Public companies have been bracing for years for new climate-related disclosure rules from the S.E.C. In 2021, the agency signaled that climate change would be one of its priorities. About a year later, Gary Gensler, the S.E.C. chair, proposed new rules. The most contentious aspect of the draft regulations was a requirement that large companies disclose greenhouse gases emitted along their value chain. The new rules are set to be finalized in the spring. But probable lawsuits could go all the way to the Supreme Court.

    Another election to watch: India’s. The world’s biggest democracy and a rising superpower, India will go to the polls in April and May. Prime Minister Narendra Modi is benefiting from the West’s search for a regional bulwark to counter China. Business is looking at opportunities in India, as companies work to diversify their supply chains and tap into a fast-growing economy. The election will also be a crucial early test of how A.I. can factor into the spread of (mis)information during an election.

    Workplace shake-up. In late 2022, the release of ChatGPT propelled A.I. into the public consciousness. In 2023, companies experimented with new ways to build the technology into their operations, but few overhauled their procedures to cope with it. It’s still not clear exactly what A.I. will mean for jobs, but in 2024 we may see more companies making decisions about its use in ways that will have consequences for workers.

    The other big topic workplaces are grappling with is the response to the war in Gaza. Some companies are already considering changes to their workplace diversity, equity and inclusion programs, and executives face some of the same pressures as university presidents when it comes to handling their statements and responses to incidents related to the war.