More stories

  • Minute-Long Soap Operas Are Here. Is America Ready?

    Popularized in China during the pandemic, ReelShort and other apps are hoping to bring minute-by-minute melodramas to the United States. When Albee Zhang received an offer to produce cheesy short-form features made for phones last spring, she was skeptical, and so, she declined. But the offers kept coming. Finally, Ms. Zhang, who has been a producer for 12 years, realized it could be a profitable new way of storytelling and said yes. Since last summer, she has produced two short-form features and is working on four more for several apps that are creating cookie-cutter content aimed at women. Think: Lifetime movie cut up into TikTok videos. Think: soap opera, but for the short attention span of the internet age. The biggest player in this new genre is ReelShort, an app that offers melodramatic content in minute-long, vertically shot episodes and is hoping to bring a successful formula established abroad to the United States by hooking millions of people on its short-form content. “The Double Life of My Billionaire Husband” is one of the many short features you can watch on ReelShort, an app that offers short dramatic content meant to be watched on phones. ReelShort



  • Is Argentina the First A.I. Election?

    The posters dotting the streets of Buenos Aires had a certain Soviet flair to them. There was one of Argentina’s presidential candidates, Sergio Massa, dressed in a shirt with what appeared to be military medals, pointing to a blue sky. He was surrounded by hundreds of older people — in drab clothing, with serious, and often disfigured, faces — looking toward him in hope. The style was no mistake. The illustrator had been given clear instructions. “Sovietic Political propaganda poster illustration by Gustav Klutsis featuring a leader, masssa, standing firmly,” said a prompt that Mr. Massa’s campaign fed into an artificial-intelligence program to produce the image. “Symbols of unity and power fill the environment,” the prompt continued. “The image exudes authority and determination.” Javier Milei, the other candidate in Sunday’s runoff election, has struck back by sharing what appear to be A.I. images depicting Mr. Massa as a Chinese communist leader and himself as a cuddly cartoon lion. They have been viewed more than 30 million times. Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch. A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real. A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe. Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats. Mr. Massa’s campaign has created an A.I. system that can create images and videos of many of the election’s main players — the candidates, running mates, political allies — doing a wide variety of things. The campaign has used A.I. to portray Mr. Massa, Argentina’s staid center-left economy minister, as strong, fearless and charismatic, including videos that show him as a soldier in war, a Ghostbuster and Indiana Jones, as well as posters that evoke Barack Obama’s 2008 “Hope” poster and a cover of The New Yorker. The campaign has also used the system to depict his opponent, Mr. Milei — a far-right libertarian economist and television personality known for outbursts — as unstable, putting him in films like “A Clockwork Orange” and “Fear and Loathing in Las Vegas.” Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views. “Imagine having kids and thinking that each is a long-term investment. Not in the traditional sense, but thinking of the economic potential of their organs,” says the manipulated image of Mr. Milei in the fabricated video, posted by the Massa campaign on its Instagram account for A.I. content, called “A.I. for the Homeland.” The post’s caption says, “We asked an Artificial Intelligence to help Javier explain the business of selling organs and this happened.” In an interview, Mr. Massa said he was shocked the first time he saw what A.I. could do.
    “I didn’t have my mind prepared for the world that I’m going to live in,” he said. “It’s a huge challenge. We’re on a horse that we have to ride but we still don’t know its tricks.” The New York Times then showed him the deepfake his campaign created of Mr. Milei and human organs. He appeared disturbed. “I don’t agree with that use,” he said. His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive. Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks. For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated. “Now we’ve seen this absolute explosion of incredibly accessible and increasingly powerful democratized tool sets, and that calculation has radically changed,” said Henry Ajder, an expert based in England who has advised governments on A.I.-generated content. This year, a mayoral candidate in Toronto used gloomy A.I.-generated images of homeless people to telegraph what Toronto would turn into if he weren’t elected. In the United States, the Republican Party posted a video created with A.I. that shows China invading Taiwan and other dystopian scenes to depict what it says would happen if President Biden wins a second term. And the campaign of Gov. Ron DeSantis of Florida shared a video showing A.I.-generated images of Donald J. Trump hugging Dr. Anthony S. Fauci, who has become an enemy on the American right for his role leading the nation’s pandemic response. So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete. Meta, the company that owns Facebook and Instagram, said this week that it would require political ads to disclose whether they used A.I. Other unpaid posts on the sites that use A.I., even if related to politics, would not be required to carry any disclosures. The U.S. Federal Election Commission is also considering whether to regulate the use of A.I. in political ads. The Institute for Strategic Dialogue, a London-based research group that studies internet platforms, signed a letter urging such regulations. Isabelle Frances-Wright, the group’s head of technology and society, said the extensive use of A.I. in Argentina’s election was worrisome. “I absolutely think it’s a slippery slope,” she said. “In a year from now, what already seems very realistic will only seem more so.” The Massa campaign said it decided to use A.I. in an effort to show that Peronism, the 78-year-old political movement behind Mr. Massa, can appeal to young voters by mixing Mr. Massa’s image with pop and meme culture. An A.I.-generated image created by Mr. Massa’s campaign. To do so, campaign engineers and artists fed photos of Argentina’s various political players into open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people (a code sketch of this kind of pipeline follows this article).
    They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask. During the campaign, Mr. Massa’s communications team has briefed artists working with the campaign’s A.I. on which messages or emotions they want the images to impart, such as national unity, family values and fear. The artists have then brainstormed ideas to put Mr. Massa or Mr. Milei, as well as other political figures, into content that references films, memes, artistic styles or moments in history. For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country. The A.I. images have also shown up in the real world. The Soviet posters were one of the dozens of designs that Mr. Massa’s campaign and supporters printed to post across Argentina’s public spaces. Some images were generated by the campaign’s A.I., while others were created by supporters using A.I., including one of the most well-known, an image of Mr. Massa riding a horse in the style of José de San Martín, an Argentine independence hero. “Massa was too stiff,” said Octavio Tome, a community organizer who helped create the image. “We’re showing a boss-like Massa, and he’s very Argentine.” Supporters of Mr. Massa put up A.I.-generated posters depicting him in the style of José de San Martín, an Argentine independence hero. Sarah Pabst for The New York Times. The rise of A.I. in Argentina’s election has also made some voters question what is real. After a video circulated last week of Mr. Massa looking exhausted after a campaign event, his critics accused him of being on drugs. His supporters quickly struck back, claiming the video was actually a deepfake. His campaign confirmed, however, that the video was, in fact, real. Mr. Massa said people were already using A.I. to try to cover up past mistakes or scandals. “It’s very easy to hide behind artificial intelligence when something you said comes out, and you didn’t want them to,” Mr. Massa said in the interview. Earlier in the race, Patricia Bullrich, a candidate who failed to qualify for the runoff, tried to explain away leaked audio recordings of her economic adviser offering a woman a job in exchange for sex by saying the recordings were fabricated. “They can fake voices, alter videos,” she said. Were the recordings real or fake? It’s unclear.
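
    The passage above describes the campaign feeding photos of political figures into the open-source Stable Diffusion model so it could generate images of them on demand. As a rough, hypothetical sketch of how such a text-to-image pipeline is typically driven (not the campaign’s actual code), the example below uses the open-source diffusers library to load a public Stable Diffusion checkpoint and render a poster-style prompt; the checkpoint name and prompt text are illustrative assumptions, and the person-specific fine-tuning step (for example, DreamBooth-style training on a handful of photos) is only indicated in a comment.

        # Hypothetical sketch: drive an open-source Stable Diffusion checkpoint with a
        # text prompt via the Hugging Face `diffusers` library. The checkpoint name and
        # prompt are placeholders; a campaign-style system would first fine-tune the
        # model (e.g., DreamBooth) on photos of the person it wants to depict.
        import torch
        from diffusers import StableDiffusionPipeline

        def generate_poster(prompt: str, out_path: str = "poster.png") -> None:
            device = "cuda" if torch.cuda.is_available() else "cpu"
            dtype = torch.float16 if device == "cuda" else torch.float32
            pipe = StableDiffusionPipeline.from_pretrained(
                "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
            ).to(device)
            # The prompt plays the same role as the "propaganda poster" instruction
            # quoted in the article: it steers style and composition.
            image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
            image.save(out_path)

        if __name__ == "__main__":
            generate_poster(
                "Political propaganda poster illustration of a leader standing firmly, "
                "surrounded by symbols of unity and power"
            )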

  • Struggling to Understand TV Dialogue? Join the Club.

    More from our inbox: Airbrushing Older Models · Haley’s Raised Hand · Sea Life in Captivity. Derek Abella
    To the Editor: Re “Huh? What? There Are Ways to Improve the Sound on Your TV?” (Business, Aug. 18): As an American expat, I got a good chuckle out of Brian X. Chen’s article about poor dialogue sound quality in streaming. The premise, that using subtitles is a terrible inconvenience that diminishes one’s enjoyment of video entertainment, is one of those peculiarly American complaints that seem bizarre to many people overseas. In Chinese-speaking areas and other parts of East Asia, the wide variety of languages, accents and usages can make it tough to comprehend dialogue regardless of sound quality, so video nearly always comes with subtitles, whether it’s on TV, in a movie theater or online. Nobody here seems to mind. Indeed, the people in Malaysia who build the Sonos equipment that Mr. Chen praised must be thrilled that Americans will spend $900 on soundbars to avoid those irritating subtitles.
    Michael P. Clarke, Taoyuan City, Taiwan
    To the Editor: We do not have to bring speakers to a movie theater to watch a movie and we should not have to put speakers on our TV sets to enjoy a television show. Modern television sets should come with high-resolution pictures and high-quality, audible sound. The quality of the sound is as important as the quality of the picture. We should not have to buy soundbars.
    Bill Chastain, New York
    To the Editor: I’ve used closed captioning for a while now, not only because the sound quality on streaming services is far from as good as it should be but also because programs produced in England — many of the shows on PBS, which I like — use a lot of slang and hard-to-understand dialects. But a major problem is that some of the streaming services, like Netflix, have closed captions that are far from helpful. They come on well before or well after the spoken words, and too often they flash on so fast that it is impossible to read the entire line of dialogue.
    Michael Spielman, Wellfleet, Mass.
    To the Editor: Brian X. Chen suggests that we can hear the dialogue in movies and television shows better by installing new equipment. Along with the attempts at improvements made by directors and sound mixers, producers might insist upon better diction from the actors. I’ve noticed this slurring and breathy quality in stage performers, too. Perhaps Broadway shows need closed captioning?
    Lawrence Raiken, Queens
    Airbrushing Older Models
    Rafael Pavarotti/Vogue
    To the Editor: Re “Do Supermodels Age, or Get Airbrushed Instead?” (Sunday Styles, Aug. 20): The timing couldn’t be more prescient. Just as Greta Gerwig’s irreverent blockbuster “Barbie” is sweeping theaters around the world, Vogue has released its iconic September issue featuring the likes of America’s supermodels — Linda Evangelista, 58, Cindy Crawford, 57, Christy Turlington, 54, and Naomi Campbell, 53 — on its cover. As Vanessa Friedman aptly remarks, they are “paragons of mature beauty whose years have seemingly been smoothed from their faces,” which “look so retouched that they seem more like A.I.-generated bots than actual people.” A Vogue spokeswoman claimed there was only “minimal retouching.” We know better. Although we can surely applaud Vogue’s decision to feature 50-something models on its cover, “retouching” them is perpetuating a big lie. It is, in effect, “Barbiefying” them. Barbie was the icon that fed upon young girls’ feelings of inadequacy. Now older women can gaze at Vogue’s cover and feel inadequate too.
    Thank you, Vogue. If Vogue, “the fashion Bible,” had elected not to retouch these mature beauties, it would have been a truly groundbreaking event. Certainly a missed opportunity. Thank you, Vanessa Friedman, for speaking truth to Vogue. As Ms. Gerwig’s Barbie comes to realize, “It’s time to change the Constitution.”
    Elizabeth Langer, New York
    The writer is a co-founder of the Women’s Rights Law Reporter, the first U.S. journal devoted to women and the law.
    To the Editor: I laughed this morning reading Vanessa Friedman’s column at the silliness of an article criticizing the airbrushing of aging models. The fashion industry runs on unrealistic representations of beauty. Why should those standards be different for older models? I’ve attended fashion shoots where young models had terrible acne that was ultimately airbrushed out. It seems that, no matter how young or beautiful a model is, there’s almost always flattering lighting and image manipulation. The industry runs on fantasy. So, whether or not older models have their wrinkles airbrushed seems irrelevant if everything is unrealistic. This is commerce. They aren’t profiling women curing cancer. At least now they’re democratizing fashion to allow older women to put their best selves forward, too. I hope they can continue to do that without being criticized for tricks of the trade. I think focusing on airbrushing undermines how great it is that Vogue is keeping women over 50 relevant.
    Jenifer Vogt, Dobbs Ferry, N.Y.
    Haley’s Raised Hand
    Joe Buglewicz for The New York Times
    To the Editor: Re “Nikki Haley Is the Best Alternative to Trump,” by David Brooks (column, Aug. 25): Wednesday night’s Republican debate persuaded Mr. Brooks that Nikki Haley is the best alternative to Donald Trump. Yet while Mr. Brooks makes a convincing case that Ms. Haley is a preferable candidate to Mike Pence, Ron DeSantis and especially Vivek Ramaswamy, he fails to address the fact that Ms. Haley, along with every other candidate on the stage except Chris Christie and Asa Hutchinson, raised her hand when asked if she would support Mr. Trump if he is convicted of one or more felonies and is the Republican nominee. I would ask Mr. Brooks how Ms. Haley’s raised hand shows that she is “one of the few candidates who understands that to run against Trump you have to run against Trump”? And should that not, by itself, render her unfit to become the next president of the United States?
    David A. Barry, Cambridge, Mass.
    Sea Life in Captivity
    Lolita during a performance at the Miami Seaquarium in 1995. She has been in captivity since 1970. Nuri Vallbona/Miami Herald, via Associated Press
    To the Editor: Re “Lolita the Orca, Mainstay of Miami Seaquarium for 50 Years, Dies,” by Jesus Jiménez (news article, nytimes.com, Aug. 18): I know I am not alone in grieving the tragedy of the kidnapping of this orca, also known as Tokitae, her decades spent in captivity, and her untimely death just when freedom and the possibility of being reunited with her family in the Salish Sea were close enough to touch. Her sorrowful life story hurts all the more because our human collective doesn’t seem to have learned a thing from it. Orcas remain endangered and continue to struggle to hear each other and catch dwindling salmon in polluted waters that are choking with boat noise from unceasing human commercial and recreational activity.
    Worse, the captive industry carries on, including in Seattle, which is intent upon building a shiny new shark tank to imprison even more animals. My hope is that Tokitae’s death will galvanize support against the captivity industry locally and beyond, and serve as a beacon of hope for other beings languishing in tanks simply so that they can be ogled by humans. Let’s honor Tokitae and her bereaved family by ensuring that nobody else has to suffer similarly.
    Stephanie C. Bell, SeaTac, Wash.

  • YouTube Restores Donald Trump’s Account Privileges

    The Google-owned video platform became the latest of the big social networks to reverse the former president’s account restrictions. YouTube suspended former President Donald J. Trump’s account on the platform six days after the Jan. 6 attack on the Capitol. The video platform said it was concerned that Mr. Trump’s lies about the 2020 election could lead to more real-world violence. YouTube, which is owned by Google, reversed that decision on Friday, permitting Mr. Trump to once again upload videos to the popular site. The move came after similar decisions by Twitter and Meta, which owns Facebook and Instagram. “We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election,” YouTube said on Twitter on Friday. Mr. Trump’s account will have to comply with the site’s content rules like any other account, YouTube added. After false claims that the 2020 presidential election was stolen circulated online and helped stoke the Jan. 6 attack, social media giants suspended Mr. Trump’s account privileges. Two years later, the platforms have started to soften their content rules. Under Elon Musk’s ownership, Twitter has unwound many of its content moderation efforts. YouTube recently laid off members of its trust and safety team, leaving one person in charge of setting political misinformation policies. Mr. Trump announced in November that he was seeking a second term as president, setting off deliberations at social media companies over whether to allow him back on their platforms. Days later, Mr. Musk polled Twitter users on whether he should reinstate Mr. Trump, and 52 percent of respondents said yes. Like YouTube, Meta said in January that it was important that people hear what political candidates are saying ahead of an election. The former president’s reinstatement is one of the first significant content decisions that YouTube has taken under its new chief executive, Neal Mohan, who got the top job last month. YouTube also recently loosened its profanity rules so that creators who used swear words at the start of a video could still make money from the content. YouTube’s announcement on Friday echoes a pattern of the company and its parent Google making polarizing content decisions after a competitor has already taken the same action. YouTube followed Meta and Twitter in suspending Mr. Trump after the Capitol attack, and in reversing the bans. Since losing his bid for re-election in 2020, Mr. Trump has sought to make a success of his own social media service, Truth Social, which is known for its loose content moderation rules. Mr. Trump on Friday posted on his Facebook page for the first time since his reinstatement. “I’M BACK!” Mr. Trump wrote, alongside a video in which he said, “Sorry to keep you waiting. Complicated business. Complicated.” Despite his Twitter reinstatement, Mr. Trump has not returned to posting from that account. In his last tweet, dated Jan. 8, 2021, he said he would not attend the coming inauguration, held at the Capitol.

  • Political Campaigns Flood Streaming Video With Custom Voter Ads

    The targeted political ads could spread some of the same voter-influence techniques that proliferated on Facebook to an even less regulated medium. Over the last few weeks, tens of thousands of voters in the Detroit area who watch streaming video services were shown different local campaign ads pegged to their political leanings. Digital consultants working for Representative Darrin Camilleri, a Democrat in the Michigan House who is running for State Senate, targeted 62,402 moderate, female — and likely pro-choice — voters with an ad promoting reproductive rights. The campaign also ran a more general video ad for Mr. Camilleri, a former public-school teacher, directed at 77,836 Democrats and Independents who have voted in past midterm elections. Viewers in Mr. Camilleri’s target audience saw the messages while watching shows on Lifetime, Vice and other channels on ad-supported streaming services like Samsung TV Plus and LG Channels. Although millions of American voters may not be aware of it, the powerful data-mining techniques that campaigns routinely use to tailor political ads to consumers on sites and apps are making the leap to streaming video. The targeting has become so precise that next-door neighbors streaming the same true crime show on the same streaming service may now be shown different political ads — based on data about their voting record, party affiliation, age, gender, race or ethnicity, estimated home value, shopping habits or views on gun control. Political consultants say the ability to tailor streaming video ads to small swaths of viewers could be crucial this November for candidates like Mr. Camilleri who are facing tight races. In 2016, Mr. Camilleri won his first state election by just several hundred votes. “Very few voters wind up determining the outcomes of close elections,” said Ryan Irvin, the co-founder of Change Media Group, the agency behind Mr. Camilleri’s ad campaign. “Very early in an election cycle, we can pull from the voter database a list of those 10,000 voters, match them on various platforms and run streaming TV ads to just those 10,000 people.” Representative Darrin Camilleri, a member of the Michigan House who is running for State Senate, targeted local voters with streaming video ads before he campaigned in their neighborhoods. Emily Elconin for The New York Times. Targeted political ads on streaming platforms — video services delivered via internet-connected devices like TVs and tablets — seemed like a niche phenomenon during the 2020 presidential election. Two years later, streaming has become the most highly viewed TV medium in the United States, according to Nielsen. Savvy candidates and advocacy groups are flooding streaming services with ads in an effort to reach cord-cutters and “cord nevers,” people who have never watched traditional cable or broadcast TV. The trend is growing so fast that political ads on streaming services are expected to generate $1.44 billion — or about 15 percent — of the projected $9.7 billion in ad spending for the 2022 election cycle, according to a report from AdImpact, an ad tracking company. That would for the first time put streaming on par with political ad spending on Facebook and Google.
    The quick proliferation of the streaming political messages has prompted some lawmakers and researchers to warn that the ads are outstripping federal regulation and oversight. For example, while political ads running on broadcast and cable TV must disclose their sponsors, federal rules on political ad transparency do not specifically address streaming video services. Unlike broadcast TV stations, streaming platforms are also not required to maintain public files about the political ads they sold. The result, experts say, is an unregulated ecosystem in which streaming services take wildly different approaches to political ads. “There are no rules over there, whereas, if you are a broadcaster or a cable operator, you definitely have rules you have to operate by,” said Steve Passwaiter, a vice president at Kantar Media, a company that tracks political advertising. The boom in streaming ads underscores a significant shift in the way that candidates, party committees and issue groups may target voters. For decades, political campaigns have blanketed local broadcast markets with candidate ads or tailored ads to the slant of cable news channels. With such bulk media buying, viewers watching the same show at the same time as their neighbors saw the same political messages. But now campaigns are employing advanced consumer-profiling and automated ad-buying services to deliver different streaming video messages, tailored to specific voters. “In the digital ad world, you’re buying the person, not the content,” said Mike Reilly, a partner at MVAR Media, a progressive political consultancy that creates ad campaigns for candidates and advocacy groups. Targeted political ads are being run on a slew of different ad-supported streaming channels. Some smart TV manufacturers air the political ads on proprietary streaming platforms, like Samsung TV Plus and LG Channels. Viewers watching ad-supported streaming channels via devices like Roku may also see targeted political ads. Policies on political ad targeting vary. Amazon prohibits political party and candidate ads on its streaming services. YouTube TV and Hulu allow political candidates to target ads based on viewers’ ZIP code, age and gender, but they prohibit political ad targeting by voting history or party affiliation. Roku, which maintains a public archive of some political ads running on its platform, declined to comment on its ad-targeting practices. Samsung and LG, which has publicly promoted its voter-targeting services for political campaigns, did not respond to requests for comment. Netflix declined to comment about its plans for an ad-supported streaming service. Targeting political ads on streaming services can involve more invasive data-mining than the consumer-tracking techniques typically used to show people online ads for sneakers. Political consulting firms can buy profiles on more than 200 million voters, including details on an individual’s party affiliations, voting record, political leanings, education levels, income and consumer habits.
    Campaigns may employ that data to identify voters concerned about a specific issue — like guns or abortion — and hone video messages to them. In addition, internet-connected TV platforms like Samsung, LG and Roku often use data-mining technology, called “automated content recognition,” to analyze snippets of the videos people watch and segment viewers for advertising purposes. Some streaming services and ad tech firms allow political campaigns to provide lists of specific voters to whom they wish to show ads. To serve those messages, ad tech firms employ precise delivery techniques — like using IP addresses to identify devices in a voter’s household. The device mapping allows political campaigns to aim ads at certain voters whether they are streaming on internet-connected TVs, tablets, laptops or smartphones (a code sketch of this matching step follows the article). Sten McGuire, an executive at a4 Advertising, presented a webinar in March announcing a partnership to sell political ads on LG channels. New York Times. Using IP addresses, “we can intercept voters across the nation,” Sten McGuire, an executive at a4 Advertising, said in a webinar in March announcing a partnership to sell political ads on LG channels. His company’s ad-targeting worked, Mr. McGuire added, “whether you are looking to reach new cord cutters or ‘cord nevers’ streaming their favorite content, targeting Spanish-speaking voters in swing states, reaching opinion elites and policy influencers or members of Congress and their staff.” Some researchers caution that targeted video ads could spread some of the same voter-influence techniques that have proliferated on Facebook to a new, and even less regulated, medium. Facebook and Google, the researchers note, instituted some restrictions on political ad targeting after Russian operatives used digital platforms to try to disrupt the 2016 presidential election. With such restrictions in place, political advertisers on Facebook, for instance, should no longer be able to target users interested in Malcolm X or Martin Luther King with paid messages urging them not to vote. Facebook and Google have also created public databases that enable people to view political ads running on the platforms. But many streaming services lack such targeting restrictions and transparency measures. The result, these experts say, is an opaque system of political influence that runs counter to basic democratic principles. “This occupies a gray area that’s not getting as much scrutiny as ads running on social media,” said Becca Ricks, a senior researcher at the Mozilla Foundation who has studied the political ad policies of popular streaming services. “It creates an unfair playing field where you can precisely target, and change, your messaging based on the audience — and do all of this without some level of transparency.” Some political ad buyers are shying away from more restricted online platforms in favor of more permissive streaming services. “Among our clients, the percentage of budget going to social channels, and on Facebook and Google in particular, has been declining,” said Grace Briscoe, an executive overseeing candidate and political issue advertising at Basis Technologies, an ad tech firm.
    “The kinds of limitations and restrictions that those platforms have put on political ads has disinclined clients to invest as heavily there.” Senators Amy Klobuchar and Mark Warner introduced the Honest Ads Act, which would require online political ads to include disclosures similar to those on broadcast TV ads. Al Drago for The New York Times. Members of Congress have introduced a number of bills that would curb voter-targeting or require digital ads to adhere to the same rules as broadcast ads. But the measures have not yet been enacted. Amid widespread covertness in the ad-targeting industry, Mr. Camilleri, the member of the Michigan House running for State Senate, was unusually forthcoming about how he was using streaming services to try to engage specific swaths of voters. In prior elections, he said, he sent postcards introducing himself to voters in neighborhoods where he planned to make campaign stops. During this year’s primaries, he updated the practice by running streaming ads introducing himself to certain households a week or two before he planned to knock on their doors. “It’s been working incredibly well because a lot of people will say, ‘Oh, I’ve seen you on TV,’” Mr. Camilleri said, noting that many of his constituents did not appear to understand the ads were shown specifically to them and not to a general broadcast TV audience. “They don’t differentiate” between TV and streaming, he added, “because you’re watching YouTube on your television now.”
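
    As referenced above, the targeting described in this article boils down to two steps: pull a segment out of a voter file, then map those voters to the streaming devices in their households, often keyed on IP address, so ads reach only that audience. The sketch below is a simplified, hypothetical illustration of that matching step in Python; the file names, column names and segment criteria are assumptions made for the example, not the schema of any vendor named in the article.

        # Hypothetical sketch of voter-to-device audience matching for streaming ads.
        # All file names and column names are illustrative assumptions.
        import csv
        from collections import defaultdict

        def load_target_voters(voter_file: str) -> set[str]:
            """Return voter IDs for an example segment: female Democrats or
            Independents who voted in the last midterm election."""
            targets = set()
            with open(voter_file, newline="") as f:
                for row in csv.DictReader(f):
                    if (
                        row["gender"] == "F"
                        and row["party"] in {"DEM", "IND"}
                        and row["voted_last_midterm"] == "1"
                    ):
                        targets.add(row["voter_id"])
            return targets

        def match_devices(targets: set[str], household_file: str) -> dict[str, list[str]]:
            """Map each targeted voter's household IP to the streaming devices seen there.
            Expected columns: voter_id, household_ip, device_id."""
            audience = defaultdict(list)
            with open(household_file, newline="") as f:
                for row in csv.DictReader(f):
                    if row["voter_id"] in targets:
                        audience[row["household_ip"]].append(row["device_id"])
            return audience

        if __name__ == "__main__":
            targets = load_target_voters("voter_file.csv")
            audience = match_devices(targets, "household_devices.csv")
            print(f"{len(targets)} voters matched to {len(audience)} households")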

  • YouTube Deletes Jan. 6 Video That Included Clip of Trump Sharing Election Lies

    The House select committee investigating the Jan. 6 riot has been trying to draw more eyes to its televised hearings by uploading clips of the proceedings online. But YouTube has removed one of those videos from its platform, saying the committee was advancing election misinformation. The excerpt, which was uploaded June 14, included recorded testimony from former Attorney General William P. Barr. But the problem for YouTube was that the video also included a clip of former President Donald J. Trump sharing lies about the election on the Fox Business channel. A screenshot of the committee’s website showing the video removal notification. The message initially said the video had been removed. Select Committee to Investigate the January 6th Attack on the United States Capitol. “We had glitches where they moved thousands of votes from my account to Biden’s account,” Mr. Trump said falsely, before suggesting the F.B.I. and Department of Justice may have been involved. The excerpt of the hearing did not include Mr. Barr’s perspective, stated numerous times elsewhere in the hearing, that Mr. Trump’s assertion that the election was stolen was wrong. The video initially was replaced with a black box stating that the clip had been removed for violating YouTube’s terms of service. “Our election integrity policy prohibits content advancing false claims that widespread fraud, errors or glitches changed the outcome of the 2020 U.S. presidential election, if it does not provide sufficient context,” YouTube spokeswoman Ivy Choi said in a statement. “We enforce our policies equally for everyone, and have removed the video uploaded by the Jan. 6 committee channel.” The message on the video page has since been changed to “This video is private,” which may mean that YouTube would allow the committee to upload a version of the clip that makes clear that Trump’s claims are false.

  • YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.


    Share of Election-Related Posts on Social Platforms Linking to Videos Making Claims of Fraud
    Source: Center for Social Media and Politics at New York University. By The New York Times.
    YouTube’s stricter policies against election misinformation were followed by sharp drops in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released on Thursday, underscoring the video service’s power across social media. Researchers at the Center for Social Media and Politics at New York University found a significant rise in election fraud YouTube videos shared on Twitter immediately after the Nov. 3 election. In November, those videos consistently accounted for about one-third of all election-related video shares on Twitter. The top YouTube channels about election fraud that were shared on Twitter that month came from sources that had promoted election misinformation in the past, such as Project Veritas, Right Side Broadcasting Network and One America News Network. But the proportion of election fraud claims shared on Twitter dropped sharply after Dec. 8. That was the day YouTube said it would remove videos that promoted the unfounded theory that widespread errors and fraud changed the outcome of the presidential election. By Dec. 21, the proportion of election fraud content from YouTube that was shared on Twitter had dropped below 20 percent for the first time since the election. The proportion fell further after Jan. 7, when YouTube announced that any channels that violated its election misinformation policy would receive a “strike,” and that channels that received three strikes in a 90-day period would be permanently removed. By Inauguration Day, the proportion was around 5 percent. The trend was replicated on Facebook. A postelection surge in sharing videos containing fraud theories peaked at about 18 percent of all videos on Facebook just before Dec. 8. After YouTube introduced its stricter policies, the proportion fell sharply for much of the month, before rising slightly before the Jan. 6 riot at the Capitol. The proportion dropped again, to 4 percent by Inauguration Day, after the new policies were put in place on Jan. 7. To reach their findings, researchers collected a random sampling of 10 percent of all tweets each day. They then isolated tweets that linked to YouTube videos. They did the same for YouTube links on Facebook, using a Facebook-owned social media analytics tool, CrowdTangle. From this large data set, the researchers filtered for YouTube videos about the election broadly, as well as about election fraud, using a set of keywords like “Stop the Steal” and “Sharpiegate.” This allowed the researchers to get a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021 (a code sketch of this filtering step follows the article). Misinformation on major social networks has proliferated in recent years. YouTube in particular has lagged behind other platforms in cracking down on different types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has toughened its policies, such as banning all antivaccine misinformation and suspending the accounts of prominent antivaccine activists, including Joseph Mercola and Robert F. Kennedy Jr. Ivy Choi, a YouTube spokeswoman, said that YouTube was the only major online platform with a presidential election integrity policy.
    “We also raised up authoritative content for election-related search queries and reduced the spread of harmful election-related misinformation,” she said. Megan Brown, a research scientist at the N.Y.U. Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos that promoted election fraud. It is also possible that interest in the election fraud theories dropped considerably after states certified their election results. But the bottom line, Ms. Brown said, is that “we know these platforms are deeply interconnected.” YouTube, she pointed out, has been identified as one of the most-shared domains across other platforms, including in both of Facebook’s recently released content reports and N.Y.U.’s own research. “It’s a huge part of the information ecosystem,” Ms. Brown said, “so when YouTube’s platform becomes healthier, others do as well.”
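
    The methodology described above lends itself to a small worked example: from a sample of posts, keep only those that link to YouTube, flag which are election-related and which contain fraud keywords, and compute the fraud share. The sketch below follows that logic in Python with made-up field names and a toy keyword list; it is an illustration of the described approach, not the researchers’ actual code.

        # Toy illustration of the described filtering: keep posts that link to YouTube,
        # flag election-related and fraud-related posts with keyword lists, and compute
        # the share of election-related YouTube posts making fraud claims.
        # Field names and keyword lists are assumptions for the example.
        from urllib.parse import urlparse

        ELECTION_KEYWORDS = {"election", "ballot", "vote", "biden", "trump"}
        FRAUD_KEYWORDS = {"stop the steal", "sharpiegate", "rigged", "election fraud"}

        def links_to_youtube(urls: list[str]) -> bool:
            hosts = {urlparse(u).netloc.lower().removeprefix("www.") for u in urls}
            return bool(hosts & {"youtube.com", "youtu.be"})

        def fraud_share(posts: list[dict]) -> float:
            """Share of election-related, YouTube-linking posts that hit fraud keywords."""
            election, fraud = 0, 0
            for post in posts:
                if not links_to_youtube(post["urls"]):
                    continue
                text = post["text"].lower()
                is_fraud = any(k in text for k in FRAUD_KEYWORDS)
                is_election = is_fraud or any(k in text for k in ELECTION_KEYWORDS)
                if is_election:
                    election += 1
                    if is_fraud:
                        fraud += 1
            return fraud / election if election else 0.0

        if __name__ == "__main__":
            sample = [
                {"text": "Stop the Steal! proof here", "urls": ["https://youtu.be/abc123"]},
                {"text": "Election results certified", "urls": ["https://www.youtube.com/watch?v=x"]},
                {"text": "Cute cats", "urls": ["https://example.com"]},
            ]
            print(f"Fraud share: {fraud_share(sample):.0%}")  # 50% in this toy sample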