More stories

  • Will A.I. Boost Productivity? Companies Sure Hope So.

    Wendy’s ordering kiosks. Ben & Jerry’s grocery store freezers. Abercrombie & Fitch’s marketing. Many mainstays of the American customer experience are increasingly powered by artificial intelligence. The question is whether the technology will actually make companies more efficient.

    Rapid productivity improvement is the dream for both companies and economic policymakers. If output per hour holds steady, firms must either sacrifice profits or raise prices to pay for wage increases or investment projects. But when firms figure out how to produce more per working hour, they can maintain or expand profits even as they pay or invest more. Economies in the midst of productivity booms can see rapid wage gains and quick growth without as much risk of rapid inflation.

    But many economists and officials seem dubious that A.I. — especially generative A.I., which is still in its infancy — has spread widely enough to show up in productivity data already.

    Jerome H. Powell, the Federal Reserve chair, recently suggested that A.I. “may” have the potential to increase productivity growth, “but probably not in the short run.” John C. Williams, president of the New York Fed, has made similar remarks, specifically citing the work of the Northwestern University economist Robert Gordon. Mr. Gordon has argued that new technologies in recent years, while important, have probably not been transformative enough to give a lasting lift to productivity growth.

    “The enthusiasm about large language models and ChatGPT has gone a bit overboard,” he said in an interview.
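    To make the arithmetic behind this concrete (a simplified illustration, not drawn from the article), productivity links wages to costs through unit labor cost, the wage bill per unit of output:

    \[ \text{unit labor cost} = \frac{w}{y}, \qquad w = \text{hourly wage}, \quad y = \text{output per hour} \]

    If wages rise 5 percent while output per hour is flat, unit labor cost rises 5 percent, and firms must squeeze profits or raise prices. If productivity also rises 5 percent, then \( \frac{1.05\,w}{1.05\,y} = \frac{w}{y} \): cost per unit is unchanged even though workers are paid more.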

  • How One Tech Skeptic Decided AI Might Benefit the Middle Class

    David Autor, an M.I.T. economist and tech contrarian, argues that A.I. is fundamentally different from past waves of computerization.

    David Autor seems an unlikely A.I. optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology — generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans’ voices and writing — could reverse that trend.

    “A.I., if used well, can assist with restoring the middle-skill, middle-class heart of the U.S. labor market that has been hollowed out by automation and globalization,” Mr. Autor wrote in a National Bureau of Economic Research paper published in February.

    Mr. Autor’s stance on A.I. looks like a stunning conversion for a longtime expert on technology’s work force casualties. But he said the facts had changed and so had his thinking. Modern A.I., Mr. Autor said, is a fundamentally different technology, opening the door to new possibilities. It can, he continued, change the economics of high-stakes decision-making so more people can take on some of the work that is now the province of elite, and expensive, experts like doctors, lawyers, software engineers and college professors. And if more people, including those without college degrees, can do more valuable work, they should be paid more, lifting more workers into the middle class.

    The researcher, whom The Economist once called “the academic voice of the American worker,” started his career as a software developer and a leader of a computer-education nonprofit before switching to economics — and spending decades examining the impact of technology and globalization on workers and wages. Mr. Autor, 59, was an author of an influential study in 2003 that concluded that 60 percent of the shift in demand favoring college-educated workers over the previous three decades was attributable to computerization. Later research examined the role of technology in wage polarization and in skewing employment growth toward low-wage service jobs.

  • Can Xerox’s PARC, a Silicon Valley Icon, Find New Life with SRI?

    Two research labs known for some of the tech industry’s most important innovations have merged in hopes of recapturing their glory days.

    It is one of Silicon Valley’s enduring legends. In 1979, a 24-year-old Steve Jobs was permitted to visit Xerox’s Palo Alto Research Center (PARC) to view a demonstration of an experimental personal computer called the Alto. Mr. Jobs took away a handful of ideas that would transform the computing world when they became the heart of Apple’s Lisa and Macintosh computers.

  • Elon Musk to Open Source Grok Chatbot in Latest AI War Escalation

    Mr. Musk’s move to open up the code behind Grok is the latest volley in the fight over the future of A.I., coming after his suit against OpenAI on the same issue.

    Elon Musk released the raw computer code behind his version of an artificial intelligence chatbot on Sunday, an escalation by one of the world’s richest men in a battle to control the future of A.I. Grok, which is designed to give snarky replies styled after the science-fiction novel “The Hitchhiker’s Guide to the Galaxy,” is a product from xAI, the company Mr. Musk founded last year. While xAI is a separate entity from X, its technology has been integrated into the social media platform and is trained on users’ posts. Users who subscribe to X’s premium features can ask Grok questions and receive responses.

    By opening the code up for everyone to view and use — a practice known as open sourcing — Mr. Musk waded further into a heated debate in the A.I. world over whether doing so could help make the technology safer or simply open it up to misuse. Mr. Musk, a self-proclaimed proponent of open sourcing, did the same with X’s recommendation algorithm last year, but he has not updated it since. “Still work to do, but this platform is already by far the most transparent & truth-seeking (not a high bar tbh),” Mr. Musk posted on Sunday in response to a comment on open sourcing X’s recommendation algorithm.

    The move to open-source chatbot code is the latest volley between Mr. Musk and ChatGPT’s creator, OpenAI, which the mercurial billionaire recently sued, accusing it of breaking its promise to do the same. Mr. Musk, who helped found and fund OpenAI before departing several years later, has argued that such an important technology should not be controlled solely by tech giants like Google and Microsoft, a close partner of OpenAI.

  • Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons

    An agreement signed by more than 90 scientists said, however, that artificial intelligence’s benefit to the field of biology would exceed any potential harm.

    Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death. Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.

    Now, more than 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm. The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would bring far more benefits than harms, including new vaccines and medicines.

    “As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

    The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

  • Chinese National Accused of Stealing AI Secrets From Google

    Linwei Ding, a Chinese national, was arrested in California and accused of uploading hundreds of files to the cloud.

    A Chinese citizen who recently quit his job as a software engineer for Google in California has been charged with trying to transfer artificial intelligence technology to a Beijing-based company that paid him secretly, according to a federal indictment unsealed on Wednesday. Prosecutors accused Linwei Ding, who was part of the team that designs and maintains Google’s vast A.I. supercomputer data system, of stealing information about the “architecture and functionality” of the system, and of pilfering software used to “orchestrate” supercomputers “at the cutting edge of machine learning and A.I. technology.”

    From May 2022 to May 2023, Mr. Ding, also known as Leon, uploaded 500 files, many containing trade secrets, from his Google-issued laptop to the cloud by using a multistep scheme that allowed him to “evade immediate detection,” according to the U.S. attorney’s office for the Northern District of California. Mr. Ding was arrested on Wednesday morning at his home in Newark, Calif., not far from Google’s sprawling main campus in Mountain View, officials said.

    Starting in June 2022, Mr. Ding was paid $14,800 per month — plus a bonus and company stock — by a China-based technology company, without telling his supervisors at Google, according to the indictment. He is also accused of working with another company in China. Mr. Ding openly sought funding for a new A.I. start-up company he had incorporated at an investor conference in Beijing in November, boasting that “we have experience with Google’s 10,000-card computational power platform; we just need to replicate and upgrade it,” prosecutors said in the indictment, which was unsealed in San Francisco federal court.

  • The Big Questions Raised by Elon Musk’s Lawsuit Against OpenAI

    Experts say the case against the start-up and its chief executive, Sam Altman, raises unusual legal issues that do not have a clear precedent.

    From Silicon Valley to Wall Street to Washington, the blockbuster case that Elon Musk filed against OpenAI and its C.E.O., Sam Altman, has become Topic A. It is the business world’s hottest soap opera. But among lawyers, the case has become something of a fascination for a different reason: It poses a series of unusual legal questions without clear precedent. And it remains unclear what would constitute “winning” in a case like this, given that it appears to have been brought out of Mr. Musk’s personal frustration and philosophical differences with OpenAI, a company he helped found and then left.

    The lawsuit — which pits one of the wealthiest men in the world against the most advanced A.I. company in the world, backed by Microsoft, one of the world’s most valuable companies — argues that OpenAI, a nonprofit organization that created a for-profit subsidiary in 2019, breached a contract to operate in the public interest and violated its duties by diverting from its founding purpose of benefiting humanity. Mr. Musk’s lawyers — led by Morgan Chu, a partner at Irell & Manella who is known as the “$5 billion man” for his win record — want the court to force OpenAI to open its technology to others and to stop licensing it to Microsoft, which has invested billions in its partnership with the start-up.

    Among the questions that lawyers and scholars are asking after poring over Mr. Musk’s 35-page complaint:

    Does Mr. Musk even have standing to sue? “One of the differences with nonprofits compared to other companies is that, generally, no one other than the state attorney general has standing to sue for the kind of stuff that he’s complaining about, like not following your mission,” Peter Molk, a professor of law at the University of Florida, said of the lawsuit. That is most likely why Mr. Musk’s lawyers are presenting the case as a breach of contract instead of attacking the company’s nonprofit status.

    Mr. Musk also alleges that OpenAI breached its fiduciary duty, but that charge has its own challenges, lawyers said, given that such claims are traditionally handled in Delaware, not California, where the lawsuit was filed. (Mr. Musk, of course, has an infamously rocky relationship with the state of Delaware.)

  • A.I. Is Making the Sexual Exploitation of Girls Even Worse

    On Tuesday, Kat Tenbarge and Liz Kreutz of NBC News reported that several middle schoolers in Beverly Hills, Calif., were caught making and distributing fake naked photos of their peers: “School officials at Beverly Vista Middle School were made aware of the ‘A.I.-generated nude photos’ of students last week, the district superintendent said in a letter to parents. The superintendent told NBC News the photos included students’ faces superimposed onto nude bodies.”

    I had heard about this kind of thing happening to high school girls, which is horrible enough. But the idea of such young children being dehumanized by their classmates, humiliated and sexualized in one of the places they’re supposed to feel safe, and knowing those images could be indelible and worldwide, turned my stomach.

    I’m not a technophobe and have, in the past, been somewhat skeptical about the outsize negative impact of social media on teen girls. And while I still think the subject is complicated, and that the research doesn’t always conclude that there are unfavorable mental health effects of social media use on all groups of young people, the increasing reach of artificial intelligence adds a new wrinkle that has the potential to cause all sorts of damage. The possibilities are especially frightening when the technology is used by teens and tweens, groups with notoriously iffy judgment about the permanence of their actions.

    I have to admit that my gut reaction to the Beverly Hills story was rage — I wanted the book thrown at the kids who made those fakes. But I wanted to hear from someone with more experience talking to teens and thinking deeply about the adolescent relationship with privacy and technology. So I called Devorah Heitner, the author of “Growing Up in Public: Coming of Age in a Digital World,” to help me step back a bit from my punitive fury.

    Heitner pointed out that although artificial intelligence adds a new dimension, kids have been passing around digital sexual images without consent for years. According to a 2018 meta-analysis in JAMA Pediatrics, among children in the 12 to 17 age range, “The prevalence of forwarding a sext without consent was 12.0 percent,” and “the prevalence of having a sext forwarded without consent was 8.4 percent.”

    In her book, Heitner offers an example in which an eighth-grade girl sends a topless photo to her boyfriend, who circulates it to his friends without her permission. After they broke up, and without her knowledge, “her picture kept circulating, passing from classmate to classmate throughout their middle school,” and then “one afternoon, she opened her school email to find a video with her image with sound effects from a porn video playing with it.”