More stories

  • in

    More Than Words: 10 Charts That Defined 2023

    Some years are defined by a single event or person — a pandemic, a recession, an insurrection — while others are buffeted by a series of disparate forces. Such was 2023. The economy and inflation remained front of mind until the war in Gaza grabbed headlines and the world’s attention — all while Donald Trump’s […] More

  • in

    Plus-Size Female Shoppers ‘Deserve Better’

More from our inbox: Why Trump’s Supporters Love Him; ChatGPT Is Plagiarism; The Impact of China’s Economic Woes; The ‘Value’ of College

To the Editor:

Re “Just Make It, Toots,” by Elizabeth Endicott (Opinion guest essay, Aug. 20):

Despite the fact that two-thirds of American women are size 14 or above, brands and retailers continue to overlook and disregard plus-size women, whose dollars are as green as those held by “straight size” women.

The root cause is simple, and it’s not that it’s more expensive or time-consuming; those excuses have been bandied about for years. There are not enough clothes available to plus-size women because brands and retailers assume that larger women will just accept whatever they’re given, since they have in the past.

As Ms. Endicott pointed out in her essay, this is no longer the case — women are finding other ways to express themselves through clothing that fits their bodies, their styles and their budgets, from making clothes themselves to shopping at independent designers and boutiques.

We still have a long way to go, but for every major retailer that dips a toe into the market and just as quickly pulls back, there are new designers and stores willing to step in and take their place.

Plus-size women deserve more and deserve better. Those who won’t cater to them do so at their own peril.

Shanna Goldstone
New York
The writer is the founder and C.E.O. of Pari Passu, an apparel company that sells clothing to women in sizes 12 to 24.

To the Editor:

Plus-size people aren’t the only folks whose clothing doesn’t fit. I wore a size 10 for decades, but most clothes wouldn’t fit my wide, well-muscled shoulders. Apparently being really fit is just as bad as being a plus size.

I wasn’t alone; most of my co-workers had similar problems. Don’t even get me started about having a short back and a deep pelvis. I found only one brand of pants that came close to fitting and have worn them for almost 40 years. They definitely are not a fashion statement.

Eloise Twining
Ukiah, Calif.

To the Editor:

Thank you, Elizabeth Endicott, for revealing the ways that historically marginalized consumers grapple with retail trends. You recognized that “plus size is now the American average.”

As someone who works for a company that sells clothing outside of the traditional gender binary, I’d add that gender-neutral clothing will also soon be an American retail norm. It’s now up to large-scale retailers to decide whether they want to meet this wave of demand or miss out on contemporary consumers.

Ashlie Grilz
Providence, R.I.
The writer is brand director for Peau De Loup.

Why Trump’s Supporters Love Him

To the Editor:

Re “The Thing Is, Most Republicans Really Like Trump,” by Kristen Soltis Anderson (Opinion guest essay, Aug. 30):

Ms. Anderson writes that one of the most salient reasons Republican voters favor Donald Trump as their presidential nominee is that they believe he is “best poised” to beat Joe Biden. I do not concur.

His likability is not based primarily on his perceived electability. Nor is his core appeal found in policy issues such as budget deficits, import tariffs or corporate tax relief. It won’t even be found in his consequential appointments to the Supreme Court.

Politics is primarily visceral, not cerebral. When Mr. Trump denounces the elites that he claims are hounding him with political prosecutions, his followers concur and channel their own grievances and resentments with his. When Mr. Trump rages against the professional political class and “fake news,” his acolytes applaud because they themselves feel ignored and disrespected.

Mr. Trump is more than an entertaining self-promoter. He offers oxygen for self-esteem, and his supporters love him for it.

John R. Leopold
Stoney Beach, Md.

ChatGPT Is Plagiarism

“I do want students to learn to use it,” Yazmin Bahena, a middle school social studies teacher, said about ChatGPT. “They are going to grow up in a world where this is the norm.”

To the Editor:

Re “Schools Shift to Embrace ChatGPT,” by Natasha Singer (news article, Aug. 26):

What gets lost in this discussion is that these schools are authorizing a form of academic plagiarism and outright theft of the texts authors have created. This is why over 8,000 authors have signed a petition to the A.I. companies that have “scraped” (the euphemistic term they use for “stolen”) their intellectual properties and repackaged them as their own property to be sold for profit. In the process, the A.I. chatbots are depriving authors of the fruits of their labor.

What a lesson to teach our nation’s children. This is the very definition of theft. Schools that accept this are contributing to the ethical breakdown of a nation already deeply challenged by a culture of cheating.

Dennis M. Clausen
Escondido, Calif.
The writer is an author and professor at the University of San Diego.

The Impact of China’s Economic Woes

The Port of Oakland in California. China accounted for only 7.5 percent of U.S. exports in 2022.

To the Editor:

Re “China’s Woes Are Unlikely to Hamper U.S. Growth” (Business, Aug. 28):

Lydia DePillis engages in wishful thinking in arguing that the fallout of China’s deep economic troubles for the U.S. economy will probably be limited.

China is the world’s second-largest economy, until recently the main engine of world economic growth and a major consumer of internationally traded commodities. As such, a major Chinese economic setback would cast a dark cloud over the world economic recovery.

While Ms. DePillis is correct in asserting that China’s direct impact on our economy might be limited, its indirect impact could be large, particularly if it precipitates a world economic recession.

China’s economic woes could spill over to its Asian trade partners and to economies like Germany, Australia and the commodity-dependent emerging market economies, all of which are heavily dependent on the Chinese market for their exports.

Desmond Lachman
Washington
The writer is a senior fellow at the American Enterprise Institute.

The ‘Value’ of College

To the Editor:

Re “Let’s Stop Pretending College Degrees Don’t Matter,” by Ben Wildavsky (Opinion guest essay, Aug. 26):

There are quite a few things wrong with Mr. Wildavsky’s assessment of the value of a college education. But I’ll focus on the most obvious: Like so many pundits, he equates value with money, pointing out that those with college degrees earn more than those without.

Some do, some don’t. I have a Ph.D. from an Ivy League university, but the electrician who dealt with a very minor problem in my apartment earns considerably more than I do. So, for that matter, does the plumber.

What about satisfaction, taking pleasure in one’s accomplishments? Do we really think that the coder takes more pride in their work than does the construction worker who told me he likes to drive around the city with his children and point out the buildings he helped build? He didn’t need a college degree to find his work meaningful.

How about organizing programs that prepare high school students for work, perhaps through apprenticeships, and paying all workers what their efforts are worth?

Erika Rosenfeld
New York

More

  • in

    A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there’s also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was “stolen”; and the Democrats are, well… underwhelming.

The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as “meticulous, ruthless preparations” for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations unhindered in maximising shareholder value while incinerating the planet.

So very high stakes are involved. Trump’s indictment “has turned every American voter into a juror”, as the Economist puts it. Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.

In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump’s election and Brexit is probably exaggerated, but it – and notably Trump’s exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly for the way social media are engines for disseminating BS and disinformation at light-speed.

And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – are absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

So you’d like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI’s large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate “falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects” and it will happily oblige. “As a staunch advocate for natural health,” the chatbot begins, “it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…” Cont. p94, as they say.

You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine.
We can expect a tsunami of this stuff in the coming year. Wouldn’t it be prudent to prepare for it and look for ways of mitigating it?

That’s what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology, but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).

The malicious uses it examines are disinformation, so-called “spear phishing”, non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

What I’ve been reading

Shake it up
David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, “the moment when the band found its voice”.

Dish the dirt
There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran’s Guide to Not Being Scared of Technology.

Truth bombs
What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.

More

  • in

    ‘An evolution in propaganda’: a digital expert on AI influence in elections

Every election presents an opportunity for disinformation to find its way into the public discourse. But as the 2024 US presidential race begins to take shape, the growth of artificial intelligence (AI) technology threatens to give propagandists powerful new tools to ply their trade.

Generative AI models that are able to create unique content from simple prompts are already being deployed for political purposes, taking disinformation campaigns into strange new places. Campaigns have circulated fake images and audio targeting other candidates, including an AI-generated campaign ad attacking Joe Biden and deepfake videos mimicking real-life news footage.

The Guardian spoke with Renée DiResta, technical research manager at the Stanford Internet Observatory, a university program that researches the abuses of information technology, about how the latest developments in AI influence campaigns and how society is catching up to a new, artificially created reality.

Concern around AI and its potential for disinformation has been around for a while. What has changed that makes this threat more urgent?

When people became aware of deepfakes – which usually refers to machine-generated video of an event that did not happen – a few years ago, there was concern that adversarial actors would use these types of video to disrupt elections. Perhaps they would make video of a candidate, perhaps they would make video of some sort of disaster. But it didn’t really happen. The technology captured public attention, but it wasn’t very widely democratized. And so it didn’t primarily manifest in the political conversation, but instead in the realm of much more mundane but really individually harmful things, like revenge porn.

There have been two major developments in the last six months. First is the rise of ChatGPT, which is generated text. It became available to a mass market and people began to realize how easy it was to use these types of text-based tools. At the same time, text-to-still-image tools became globally available. Today, anybody can use Stable Diffusion or Midjourney to create photorealistic images of things that don’t really exist in the world. The combination of these two things, in addition to the concerns that a lot of people feel around the 2024 elections, has really captured public attention once again.

Why did the political use of deepfakes not materialize?

The challenge with using video in a political environment is that you really have to nail the substance of the content. There are a lot of tells in video, a lot of ways in which you can determine whether it’s generated. On top of that, when a video is truly sensational, a lot of people look at it and factcheck it and respond to it. You might call it a natural immune response.

Text and images, however, have the potential for higher actual impact in an election scenario because they can be more subtle and longer lasting. Elections require months of campaigning during which people formulate an opinion. It’s not something where you’re going to change the entire public mind with a video and have that be the most impactful communication of the election.

How do you think large language models can change political propaganda?

I want to caveat that describing what is tactically possible is not the same thing as me saying the sky is falling. I’m not a doomer about this technology. But I do think that we should understand generative AI in the context of what it makes possible. It increases the number of people who can create political propaganda or content. It decreases the cost to do it. That’s not to say necessarily that they will, and so I think we want to maintain that differentiation between this is the tactic that a new technology enables versus that this is going to swing an election.

As far as the question of what’s possible, in terms of behaviors, you’ll see things like automation. You might remember back in 2015 there were all these fears about bots. You had a lot of people using automation to try to make their point of view look more popular – making it look like a whole lot of people think this thing, when in reality it’s six guys and their 5,000 bots. For a while Twitter wasn’t doing anything to stop that, but it was fairly easy to detect. A lot of the accounts would be saying the exact same thing at the exact same time, because it was expensive and time-consuming to generate a unique message for each of your fake accounts. But with generative AI it is now effortless to generate highly personalized content and to automate its dissemination.

And then finally, in terms of content, it’s really just that the messages are more credible and persuasive.

That seems tied to another aspect you’ve written about: that the sheer amount of content that can be generated, including misleading or inaccurate content, has a muddying effect on information and trust.

It’s the scale that makes it really different. People have always been able to create propaganda, and I think it’s very important to emphasize that. There is an entire industry of people whose job it is to create messages for campaigns and then figure out how to get them out into the world. We’ve just changed the speed and the scale and the cost to do that. It’s just an evolution in propaganda.

When we think about what’s new and what’s different here, the same thing goes for images. When Photoshop emerged, the public at first was very uncomfortable with Photoshopped images, and gradually became more comfortable with it. The public acclimated to the idea that Photoshop existed and that not everything that you see with your eyes is a thing that necessarily is as it seems – the idea that the woman that you see on the magazine cover probably does not actually look like that. Where we’ve gone with generative AI is the fabrication of a complete unreality, where nothing about the image is what it seems but it looks photorealistic.

Now anybody can make it look like the pope is wearing Balenciaga.

Exactly.

In the US, it seems like meaningful federal regulation is pretty far away, if it’s going to come at all. Absent that, what are some of the short-term ways to mitigate these risks?

First is the education piece. There was a very large education component when deepfakes became popular – media covered them and people began to get the sense that we were entering a world in which a video might not be what it seems.

But it’s unreasonable to expect every person engaging with somebody on a social media platform to figure out if the person they’re talking to is real. Platforms will have to take steps to more carefully identify if automation is in play.

On the image front, social media platforms, as well as generative AI companies, are starting to come together to try to determine what kind of watermarking might be useful so that platforms and others can determine computationally whether an image is generated.

Some companies, like OpenAI, have policies around generating misinformation or the use of ChatGPT for political ends. How effective do you see those policies being?

It’s a question of access. For any technology, you can try to put guardrails on your proprietary version of that technology and you can argue you’ve made a values-based decision to not allow your products to generate particular types of content. On the flip side, though, there are models that are open source and anyone can go and get access to them. Some of the things that are being done with some of the open-source models and image generation are deeply harmful, but once the model is open sourced, the ability to control its use is much more limited.

And it’s a very big debate right now in the field. You don’t want to necessarily create regulations that lock in and protect particular corporate actors. At the same time, there is a recognition that open-source models are out there in the world already. The question becomes how the platforms that are going to serve as the dissemination pathways for this stuff think about their role and their policies in what they amplify and curate.

What’s the media or the public getting wrong about AI and disinformation?

One of the real challenges is that people are going to believe what they see if it conforms to what they want to believe. In a world of unreality in which you can create content that fulfills that need, one of the real challenges is whether media literacy efforts actually solve any of the problems. Or will we move further into divergent realities – where people are going to continue to hold the belief in something that they’ve seen on the internet as long as it tells them what they want. Larger offline challenges around partisanship and trust are reflected in, and exacerbated by, new technologies that enable this kind of content to propagate online.

More

  • in

    When the tech boys start asking for new regulations, you know something’s up | John Naughton

Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.

Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination.”

To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help. Or both.

As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”

Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.

Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and their relationships with other humans.

In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.

What I’ve been reading

Capitalist punishment
Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.

Founders keepers
Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.

Superstore me
The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.

More

  • in

    OpenAI CEO calls for laws to mitigate ‘risks of increasingly powerful’ AI

The CEO of OpenAI, the company responsible for creating the artificial intelligence chatbot ChatGPT and the image generator Dall-E 2, said “regulation of AI is essential” as he testified in his first appearance in front of the US Congress.

Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his prepared remarks.

Altman suggested the US government might consider licensing and testing requirements for the development and release of AI models. He proposed establishing a set of safety standards and a specific test models would have to pass before they can be deployed, as well as allowing independent auditors to examine the models before they are launched. He also argued that existing frameworks like Section 230, which releases platforms from liability for the content their users post, would not be the right way to regulate the system.

“For a very new technology we need a new framework,” Altman said.

Both Altman and Gary Marcus, an emeritus professor of psychology and neural science at New York University who also testified at the hearing, called for a new regulatory agency for the technology. AI is complicated and moving fast, Marcus argued, making “an agency whose full-time job” is to regulate it crucial.

Throughout the hearing, senators drew parallels between social media and generative AI, and the lessons lawmakers had learned from the government’s failure to act on regulating social platforms.

Yet the hearing was far less contentious than those at which the likes of the Meta CEO, Mark Zuckerberg, testified. Many lawmakers gave Altman credit for his calls for regulation and acknowledgment of the pitfalls of generative AI. Even Marcus, brought on to provide skepticism about the technology, called Altman’s testimony sincere.

The hearing came as renowned and respected AI experts and ethicists, including former Google researchers Dr Timnit Gebru, who co-led the company’s ethical AI team, and Meredith Whittaker, have been sounding the alarm about the rapid adoption of generative AI, arguing the technology is over-hyped. “The idea that this is going to magically become a source of social good … is a fantasy used to market these programs,” Whittaker, now the president of the secure messaging app Signal, recently said in an interview with Meet the Press Reports.

Generative AI is a probability machine “designed to spit out things that seem plausible” based on “massive amounts of effectively surveillance data that has been scraped from the web”, she argued.

Senators Josh Hawley and Richard Blumenthal said this hearing is just the first step in understanding the technology.

Blumenthal said he recognized what he described as the “promises” of the technology, including “curing cancer, developing new understandings of physics and biology, or modeling climate and weather”.

Potential risks Blumenthal said he was worried about include deepfakes, weaponized disinformation, housing discrimination, harassment of women and impersonation frauds.

“For me, perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers,” he said.

Altman said that while OpenAI was building tools that will one day “address some of humanity’s biggest challenges like climate changes and curing cancer”, the current systems were not capable of doing these things yet.

But he believes the benefits of the tools deployed so far “vastly outweigh the risks”, and said the company conducts extensive testing and implements safety and monitoring systems before releasing any new system.

“OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives but also that it creates serious risks that we have to work together to manage,” Altman said.

Altman said the technology will significantly affect the job market, but he believes “there will be far greater jobs on the other side of this”.

“The jobs will get better,” he said. “I think it’s important to think of GPT as a tool, not a creature … GPT-4 and tools like it are good at doing tasks, not jobs. GPT-4 will, I think, entirely automate away some jobs and it will create new ones that we believe will be much better.”

Altman also said he was very concerned about the impact that large language model services will have on elections and misinformation, particularly ahead of the primaries.

“There’s a lot that we can and do do,” Altman said in response to a question from Senator Amy Klobuchar about a tweet ChatGPT crafted that listed fake polling locations. “There are things that the model won’t do and there is monitoring. At scale … we can detect someone generating a lot of those [misinformation] tweets.”

Altman didn’t have an answer yet for how content creators whose work is being used in AI-generated songs, articles or other works can be compensated, saying the company is engaged with artists and other entities on what that economic model could look like. When asked by Klobuchar how he plans to remedy threats to local news publications whose content is being scraped and used to train these models, Altman said he hopes the tool would help journalists but that “if there are things that we can do to help local news, we’d certainly like to”.

Touched upon but largely missing from the conversation was the potential danger of a small group of power players dominating the industry, a dynamic Whittaker has warned risks entrenching existing power dynamics.

“There are only a handful of companies in the world that have the combination of data and infrastructural power to create what we’re calling AI from nose-to-tail,” she said in the Meet the Press interview. “We’re now in a position that this overhyped technology is being created, distributed and ultimately shaped to serve the economic interests of these same handful of actors.”

More

  • in

    Breakfast with Chad: Techno-feudalism


  • in

    Mind Blowing: The Startling Reality of Conscious Machines
