More stories

  • Elon Musk’s Feud With OpenAI Goes to Court

    The tech mogul wants to force the A.I. start-up to reveal its research to the public and prevent it from pursuing profits.

    Musk takes aim at OpenAI: The gloves have really come off in one of the most personal fights in the tech world. Elon Musk has sued OpenAI and its C.E.O., Sam Altman, accusing them of reneging on the start-up’s original purpose of being a nonprofit laboratory for the technology. Yes, Musk has disagreed with Altman for years about the purpose of the organization they co-founded, and he is creating a rival artificial intelligence company. But the lawsuit also appears rooted in philosophical differences that go to the heart of who controls a hugely transformative technology — and is backed by one of the wealthiest men on the planet.

    The backstory: Musk, Altman and others agreed to create OpenAI in 2015 to provide an open-sourced alternative to the likes of Google, which had bought the leading A.I. start-up DeepMind the year before. Musk notes in his suit that OpenAI’s certificate of incorporation states that its work “will benefit the public” and that it isn’t “organized for the private gain of any person.” Musk poured more than $44 million into OpenAI between 2016 and 2020, and helped hire top talent like the researcher Ilya Sutskever.

    Altman has moved OpenAI toward commerce, starting with the creation in 2019 of a for-profit subsidiary that would raise money from investors, notably Microsoft. The final straw for Musk came last year, when OpenAI released its GPT-4 A.I. model — but kept its workings hidden from all except itself and Microsoft. “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” Musk’s lawyers write in the complaint.

  • SEC Is Investigating OpenAI Over Its Board’s Actions

    The U.S. regulator opened its inquiry after the board unexpectedly fired the company’s chief executive, Sam Altman, in November.

    The Securities and Exchange Commission began an inquiry into OpenAI soon after the company’s board of directors unexpectedly removed Sam Altman, its chief executive, at the end of last year, three people familiar with the inquiry said. The regulator has sent official requests to OpenAI, the developer of the ChatGPT online chatbot, seeking information about the situation. It is unclear whether the S.E.C. is investigating Mr. Altman’s behavior, the board’s decision to oust him or both.

    Even as OpenAI has tried to turn the page on the dismissal of Mr. Altman, who was soon reinstated, the controversy continues to hound the company. In addition to the S.E.C. inquiry, the San Francisco artificial intelligence company has hired a law firm to conduct its own investigation into Mr. Altman’s behavior and the board’s decision to remove him.

    The board dismissed Mr. Altman on Nov. 17, saying it no longer had confidence in his ability to run OpenAI. It said he had not been “consistently candid in his communications,” though it did not provide specifics. It agreed to reinstate him five days later. Privately, the board worried that Mr. Altman was not sharing all of his plans to raise money from investors in the Middle East for an A.I. chip project, people with knowledge of the situation have said.

    Spokespeople for the S.E.C. and OpenAI and a lawyer for Mr. Altman all declined to comment. The S.E.C.’s inquiry was reported earlier by The Wall Street Journal.

    OpenAI kicked off an industrywide A.I. boom at the end of 2022 when it released ChatGPT. The company is considered a leader in what is called generative A.I., technologies that can generate text, sounds and images from short prompts. A recent funding deal values the start-up at more than $80 billion. Many believe that generative A.I., which represents a fundamental shift in the way computers behave, could remake the industry as thoroughly as the iPhone or the web browser. Others argue that the technology could cause serious harm, helping to spread online disinformation, replacing jobs with unusual speed and maybe even threatening the future of humanity.

    After the release of ChatGPT, Mr. Altman became the face of the industry’s push toward generative A.I. as he endlessly promoted the technology — while acknowledging the dangers. In an effort to resolve the turmoil surrounding Mr. Altman’s ouster, he and the board agreed to remove two members and add two others: Bret Taylor, a former Salesforce executive, and former Treasury Secretary Lawrence H. Summers. Mr. Altman and the board also agreed that OpenAI would start its own investigation into the matter. That investigation, by the WilmerHale law firm, is expected to close soon.

  • A.I. Frenzy Complicates Efforts to Keep Power-Hungry Data Sites Green

    West Texas, from the oil rigs of the Permian Basin to the wind turbines twirling above the High Plains, has long been a magnet for companies seeking fortunes in energy. Now, those arid ranch lands are offering a new moneymaking opportunity: data centers.

    Lancium, an energy and data center management firm setting up shop in Fort Stockton and Abilene, is one of many companies around the country betting that building data centers close to generating sites will allow them to tap into underused clean power. “It’s a land grab,” said Lancium’s president, Ali Fenn.

    In the past, companies built data centers close to internet users, to better meet consumer requests, like streaming a show on Netflix or playing a video game hosted in the cloud. But the growth of artificial intelligence requires huge data centers to train the evolving large-language models, making proximity to users less necessary.

    As more of these sites pop up across the United States, there are new questions about whether they can meet the demand while still operating sustainably. The carbon footprint from the construction of the centers and the racks of expensive computer equipment is substantial in itself, and their power needs have grown considerably. Just a decade ago, data centers drew 10 megawatts of power, but 100 megawatts is common today. The Uptime Institute, an industry advisory group, has identified 10 supersize cloud computing campuses across North America with an average size of 621 megawatts.

    This growth in electricity demand comes as manufacturing in the United States is at its highest in the past half-century, and the power grid is becoming increasingly strained.

  • Chinese Influence Campaign Pushes Disunity Before U.S. Election, Study Says

    A long-running network of accounts, known as Spamouflage, is using A.I.-generated images to amplify negative narratives involving the presidential race.

    A Chinese influence campaign that has tried for years to boost Beijing’s interests is now using artificial intelligence and a network of social media accounts to amplify American discontent and division ahead of the U.S. presidential election, according to a new report. The campaign, known as Spamouflage, hopes to breed disenchantment among voters by maligning the United States as rife with urban decay, homelessness, fentanyl abuse, gun violence and crumbling infrastructure, according to the report, which was published on Thursday by the Institute for Strategic Dialogue, a nonprofit research organization in London. An added aim, the report said, is to convince international audiences that the United States is in a state of chaos.

    Artificially generated images, some of them also edited with tools like Photoshop, have pushed the idea that the November vote will damage and potentially destroy the country. One post on X labeled “American partisan divisions” had an image showing President Biden and former President Donald J. Trump aggressively crossing fiery spears under the text “INFIGHTING INTENSIFIES.” Other images featured the two men facing off, cracks in the White House or the Statue of Liberty, and terminology like “CIVIL WAR,” “INTERNAL STRIFE” and “THE COLLAPSE OF AMERICAN DEMOCRACY.”

  • Imran Khan Uses A.I. To Give Victory Speech in Pakistan

    It was not the first time the technology had been used in Pakistan’s notably repressive election season, but this time it got the world’s attention.

    Imran Khan, Pakistan’s former prime minister, has spent the duration of the country’s electoral campaign in jail, disqualified from running in what experts have described as one of the least credible general elections in the country’s 76-year history. But from behind bars, he has been rallying his supporters in recent months with speeches that use artificial intelligence to replicate his voice, part of a tech-savvy strategy his party deployed to circumvent a crackdown by the military.

    And on Saturday, as official counts showed candidates aligned with his party, Pakistan Tehreek-e-Insaf, or P.T.I., winning the most seats in a surprise result that threw the country’s political system into chaos, it was Mr. Khan’s A.I. voice that declared victory. “I had full confidence that you would all come out to vote. You fulfilled my faith in you, and your massive turnout has stunned everybody,” the mellow, slightly robotic voice said in the minute-long video, which used historical images and footage of Mr. Khan and bore a disclaimer about its A.I. origins. The speech rejected the victory claim of Mr. Khan’s rival, Nawaz Sharif, and urged supporters to defend the win.

    As concerns grow about the use of artificial intelligence and its power to mislead, particularly in elections, Mr. Khan’s videos offer an example of how A.I. can work to circumvent suppression. But, experts say, they also increase fears about its potential dangers.

  • Google Joins Effort to Help Spot Content Made With A.I.

    The tech company’s plan is similar to one announced two days earlier by Meta, another Silicon Valley giant.

    Google, whose work in artificial intelligence helped make A.I.-generated content far easier to create and spread, now wants to ensure that such content is traceable as well. The tech giant said on Thursday that it was joining an effort to develop credentials for digital content, a sort of “nutrition label” that identifies when and how a photograph, a video, an audio clip or another file was produced or altered — including with A.I. The company will collaborate with companies like Adobe, the BBC, Microsoft and Sony to fine-tune the technical standards.

    The announcement follows a similar promise made on Tuesday by Meta, which like Google has enabled the easy creation and distribution of artificially generated content. Meta said it would promote standardized labels that identified such material.

    Google, which spent years pouring money into its artificial intelligence initiatives, said it would explore how to incorporate the digital certification into its own products and services, though it did not specify its timing or scope. Its Bard chatbot is connected to some of the company’s most popular consumer services, such as Gmail and Docs. On YouTube, which Google owns and which will be included in the digital credential effort, users can quickly find videos featuring realistic digital avatars pontificating on current events in voices powered by text-to-speech services.

    Recognizing where online content originates and how it changes is a high priority for lawmakers and tech watchdogs in 2024, when billions of people will vote in major elections around the world. After years of disinformation and polarization, realistic images and audio produced by artificial intelligence, along with unreliable A.I. detection tools, have caused people to further doubt the authenticity of things they see and hear on the internet.

    Configuring digital files to include a verified record of their history could make the digital ecosystem more trustworthy, according to those who back a universal certification standard. Google is joining the steering committee for one such group, the Coalition for Content Provenance and Authenticity, or C2PA. The C2PA standards have been supported by news organizations such as The New York Times as well as by camera manufacturers, banks and advertising agencies.

  • Meta Calls for Industry Effort to Label A.I.-Generated Content

    The social network wants to promote standardized labels to help detect artificially created photo, video and audio material across its platforms.

    Last month at the World Economic Forum in Davos, Switzerland, Nick Clegg, president of global affairs at Meta, called a nascent effort to detect artificially generated content “the most urgent task” facing the tech industry today. On Tuesday, Mr. Clegg proposed a solution. Meta said it would promote technological standards that companies across the industry could use to recognize markers in photo, video and audio material signaling that the content was generated using artificial intelligence.

    The standards could allow social media companies to quickly identify A.I.-generated content that has been posted to their platforms and to add a label to that material. If adopted widely, the standards could help identify A.I.-generated content from companies like Google, OpenAI, Microsoft, Adobe, Midjourney and others that offer tools allowing people to quickly and easily create artificial posts.

    “While this is not a perfect answer, we did not want to let perfect be the enemy of the good,” Mr. Clegg said in an interview. He added that he hoped the effort would be a rallying cry for companies across the industry to adopt standards for detecting and signaling that content is artificial, making it simpler for all of them to recognize it.

    As the United States enters a presidential election year, industry watchers believe that A.I. tools will be widely used to post fake content to misinform voters. Over the past year, people have used A.I. to create and spread fake videos of President Biden making false or inflammatory statements. The attorney general’s office in New Hampshire is also investigating a series of robocalls that appeared to employ an A.I.-generated voice of Mr. Biden urging people not to vote in a recent primary.

  • A.I. Promises Give Tech Earnings from Meta and Others a Jolt

    Companies like Meta that could tout their work in the fast-growing field saw a benefit in their fourth-quarter results — and won praise from eager investors.

    A.I. and cost cuts lift Big Tech: Earlier this week, Mark Zuckerberg of Meta endured a grilling on Capitol Hill and publicly apologized to relatives of victims of online abuse. Little more than a day later, he had a lot to crow about, as his business delivered some of its best quarterly earnings in years. Meta’s results illustrate how the most recent earnings season has gone for Big Tech: a mostly positive period in which companies that could claim the benefits of artificial intelligence and cost-cutting were hailed the most on Wall Street.

    Meta shot the lights out. After years of facing questions about its ad business and its ability to cope with scandals, the parent of Facebook and Instagram reported that fourth-quarter profits tripled from a year ago. A.I. was credited for some of that, with the technology helping make its core ad business more effective. So too was cost-cutting, which included tens of thousands of layoffs as part of the company’s self-described “year of efficiency.”

    Meta’s profit was so good that the company will soon start paying stock dividends for the first time (which could total $700 million a year for Zuckerberg alone) and announced a $50 billion buyback. It’s a sign that the tech giant is “coming of age,” according to one analyst, joining Microsoft and Apple in making regular payouts to investors.

    Zuckerberg pledged more investment in A.I. — “Expect us to continue investing aggressively in this area,” he said on an earnings call — and the company said it had largely concluded its cost cuts. But some analysts said that Meta would eventually have to show a return on that spending.

    Amazon also touted its A.I. initiatives. Much of its earnings call was spent talking about Rufus, a new smart assistant intended to help shoppers find what they’re looking for. (It may also allow Amazon to reduce ad spending on Google and social media platforms.)