More stories

  •

    I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online.

    When I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following the violence of Jan. 6, I helped make the call to ban his account from Twitter altogether. Nothing prepared me for what would happen next.

    Backed by fans on social media, Mr. Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move.

    This isn’t a story I relish revisiting. But I’ve learned that what happened to me wasn’t an accident. It wasn’t just personal vindictiveness or “cancel culture.” It was a strategy — one that affects not just targeted individuals like me, but all of us, as it is rapidly changing what we see online.

    Private individuals — from academic researchers to employees of tech companies — are increasingly the targets of lawsuits, congressional hearings and vicious online attacks. These efforts, staged largely by the right, are having their desired effect: Universities are cutting back on efforts to quantify abusive and misleading information spreading online. Social media companies are shying away from making the kind of difficult decisions my team did when we intervened against Mr. Trump’s lies about the 2020 election. Platforms had finally begun taking these risks seriously only after the 2016 election. Now, faced with the prospect of disproportionate attacks on their employees, companies seem increasingly reluctant to make controversial decisions, letting misinformation and abuse fester in order to avoid provoking public retaliation.

    These attacks on internet safety and security come at a moment when the stakes for democracy could not be higher. More than 40 major elections are scheduled to take place in 2024, including in the United States, the European Union, India, Ghana and Mexico. These democracies will most likely face the same risks of government-backed disinformation campaigns and online incitement of violence that have plagued social media for years. We should be worried about what happens next.

    My story starts with that fact check. In the spring of 2020, after years of internal debate, my team decided that Twitter should apply a label to a tweet of then-President Trump’s that asserted that voting by mail is fraud-prone, and that the coming election would be “rigged.” “Get the facts about mail-in ballots,” the label read.

    On May 27, the morning after the label went up, the White House senior adviser Kellyanne Conway publicly identified me as the head of Twitter’s site integrity team. The next day, The New York Post put several of my tweets making fun of Mr. Trump and other Republicans on its cover. I had posted them years earlier, when I was a student and had a tiny social media following of mostly my friends and family. Now, they were front-page news. Later that day, Mr. Trump tweeted that I was a “hater.”

    Legions of Twitter users, most of whom days prior had no idea who I was or what my job entailed, began a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed. The volume of Twitter notifications crashed my phone. Friends I hadn’t heard from in years expressed their concern. On Instagram, old vacation photos and pictures of my dog were flooded with threatening comments and insults.
(A few commenters, wildly misreading the moment, used the opportunity to try to flirt with me.)I was embarrassed and scared. Up to that moment, no one outside of a few fairly niche circles had any idea who I was. Academics studying social media call this “context collapse”: things we post on social media with one audience in mind might end up circulating to a very different audience, with unexpected and destructive results. In practice, it feels like your entire world has collapsed.The timing of the campaign targeting me and my alleged bias suggested the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives. But the success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing. As the former Republican Party chair Rich Bond once described, maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction. What happened to me was part of a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps.It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey, then the C.E.O. of Twitter, overruled Trust and Safety’s recommendation that Mr. Trump’s account should be banned because of several tweets, including one that attacked Vice President Mike Pence. He was given a 12-hour timeout instead (before being banned on Jan. 8). Within the boundaries of the rules, staff members were encouraged to find solutions to help the company avoid the type of blowback that results in angry press cycles, hearings and employee harassment. The practical result was that Twitter gave offenders greater latitude: Representative Marjorie Taylor Greene was permitted to violate Twitter’s rules at least five times before one of her accounts was banned in 2022. Other prominent right-leaning figures, such as the culture war account Libs of TikTok, enjoyed similar deference.Similar tactics are being deployed around the world to influence platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after the government asked us to block accounts involved in a series of protests. The harassment again paid off: Twitter executives decided any potentially sensitive actions in India would require top-level approval, a unique level of escalation of otherwise routine decisions.And when we wanted to disclose a propaganda campaign operated by a branch of the Indian military, our legal team warned us that our India-based employees could be charged with sedition — and face the death penalty if convicted. So Twitter only disclosed the campaign over a year later, without fingering the Indian government as the perpetrator.In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. 
Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.In each of these cases, the targeted staffers lacked the ability to do what was being asked of them by the government officials in charge, as the underlying decisions were made thousands of miles away in California. But because local employees had the misfortune of residing within the jurisdiction of the authorities, they were nevertheless the targets of coercive campaigns, pitting companies’ sense of duty to their employees against whatever values, principles or policies might cause them to resist local demands. Inspired, India and a number of other countries started passing “hostage-taking” laws to ensure social-media companies employ locally based staff.In the United States, we’ve seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and — in Twitter’s case — by the company’s new owner.One of the most recent forces in this campaign is the “Twitter Files,” a large assortment of company documents — many of them sent or received by me during my nearly eight years at Twitter — turned over at Mr. Musk’s direction to a handful of selected writers. The files were hyped by Mr. Musk as a groundbreaking form of transparency, purportedly exposing for the first time the way Twitter’s coastal liberal bias stifles conservative content.What they delivered was something else entirely. As tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors. Even Mr. Musk eventually lost patience with the effort. But, in the process, the effort marked a disturbing new escalation in the harassment of employees of tech firms.Unlike the documents that would normally emanate from large companies, the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in the Philippines was doxxed and severely harassed. Others have become the subjects of conspiracies. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as having been made by the capricious whims of individuals, each pictured and called out by name. I was, by far, the most frequent target.The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for Mr. Musk. I couldn’t help but feel that the company’s actions were, on some level, retaliatory. The next week, Mr. Musk went further by taking a paragraph of my Ph.D. dissertation out of context to baselessly claim that I condoned pedophilia — a conspiracy trope commonly used by far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.The response was even more extreme than I experienced after Mr. Trump’s tweet about me. “You need to swing from an old oak tree for the treason you have committed. Live in fear every day,” said one of thousands of threatening tweets and emails. That post, and hundreds of others like it, were violations of the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.On Dec. 
6, four days after the first Twitter Files release, I was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing, members of Congress held up oversize posters of my years-old tweets and asked me under oath whether I still held those opinions. (To the extent the carelessly tweeted jokes could be taken as my actual opinions, I don’t.) Ms. Greene said on Fox News that I had “some very disturbing views about minors and child porn” and that I “allowed child porn to proliferate on Twitter,” warping Mr. Musk’s lies even further (and also extending their reach). Inundated with threats, and with no real options to push back or protect ourselves, my husband and I had to sell our home and move.Academia has become the latest target of these campaigns to undermine online safety efforts. Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public record requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship. Others targeted have elected to change their research focus based on the volume of harassment.Bit by bit, hearing by hearing, these campaigns are systematically eroding hard-won improvements in the safety and integrity of online platforms — with the individuals doing this work bearing the most direct costs.Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on their trust and safety efforts. As they face mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.We can look abroad to see how this story might end. Where once companies would at least make an effort to resist outside pressure, they now largely capitulate by default. In early 2023, the Indian government asked Twitter to restrict posts critical of Prime Minister Narendra Modi. In years past, the company had pushed back on such requests; this time, Twitter acquiesced. When a journalist noted that such cooperation only incentivizes further proliferation of draconian measures, Mr. Musk shrugged: “If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.”It’s hard to fault Mr. Musk for his decision not to put Twitter’s employees in India in harm’s way. But we shouldn’t forget where these tactics came from or how they became so widespread. From pushing the Twitter Files to tweeting baseless conspiracies about former employees, Mr. Musk’s actions have normalized and popularized vigilante accountability, and made ordinary employees of his company into even greater targets. His recent targeting of the Anti-Defamation League has shown that he views personal retaliation as an appropriate consequence for any criticism of him or his business interests. And, as a practical matter, with hate speech on the rise and advertiser revenue in retreat, Mr. Musk’s efforts seem to have done little to improve Twitter’s bottom line.What can be done to turn back this tide?Making the coercive influences on platform decision making clearer is a critical first step. 
    And regulation that requires companies to be transparent about the choices they make in these cases, and why they make them, could help.

    In its absence, companies must push back against attempts to control their work. Some of these decisions are fundamental matters of long-term business strategy, like where to open (or not open) corporate offices. But companies have a duty to their staff, too: Employees shouldn’t be left to figure out how to protect themselves after their lives have already been upended by these campaigns. Offering access to privacy-promoting services can help. Many institutions would do well to learn the lesson that few spheres of public life are immune to influence through intimidation.

    If social media companies cannot safely operate in a country without exposing their staff to personal risk and company decisions to undue influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who have the greatest need for free and open online expression. But remaining in a compromised way could forestall necessary reckoning with censorial government policies. Refusing to comply with morally unjustifiable demands, and facing blockages as a result, may in the long run provoke the necessary public outrage that can help drive reform.

    The broader challenge here — and perhaps, the inescapable one — is the essential humanness of online trust and safety efforts. It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted. Standing up to injustice, authoritarianism and online harms requires employees who are willing to do that work.

    Few people could be expected to take a job doing so if the cost is their life or liberty. We all need to recognize this new reality, and to plan accordingly.

    Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter.

  •

    Trump’s Indictment and What’s Next

    The fallout will be widespread, with ramifications for the 2024 presidential race, policymaking and more.

    Donald Trump is likely to turn himself in on Tuesday. Christopher Lee for The New York Times

    What you need to know about Trump’s indictment

    A Manhattan grand jury has indicted Donald Trump over his role in paying hush money to a porn star, making him the first former president to face criminal charges. It’s a pivotal moment in U.S. politics — there was an audible on-air gasp when Fox News anchors reported the news on Thursday — with ramifications for the 2024 presidential race, policymaking and more. Here are the most important things to note so far.

    Mr. Trump is likely to turn himself in on Tuesday, which will see the former president be fingerprinted and photographed in a New York State courthouse. (Prosecutors for the Manhattan district attorney, Alvin Bragg, wanted Trump to surrender on Friday, but were rebuffed by the former president’s lawyers, according to Politico.) Afterward, Mr. Trump would be arraigned and would finally learn the charges against him and be given the chance to enter a plea. The former president has consistently denied all wrongdoing.

    Mr. Trump and his advisers, who were at his Mar-a-Lago resort in Florida on Thursday, were caught off guard by the announcement, believing some news reports that suggested an indictment wouldn’t come for weeks. The former president blasted the news, describing it in all-caps as “an attack on our country the likes of which has never been seen before” on Truth Social, the social network he founded.

    The case revolves in part around the Trump family business. Charges by the Manhattan district attorney arise from a five-year investigation into a $130,000 payment by the fixer Michael Cohen to the porn actress Stormy Daniels in 2016, before the presidential election that year. The Trump Organization reimbursed Mr. Cohen — but in internal documents, company executives falsely recorded the payment as a legal expense and invented a bogus legal retainer with Mr. Cohen to justify them. Falsifying business records is a crime in New York. But to make it a felony charge, prosecutors may tie the crime to a second one: violating election law.

    The fallout will be wide, and unpredictable. Democrats and Republicans alike used the news to underpin a flurry of fund-raising efforts. (Among them, of course, was Mr. Trump’s own presidential campaign.)

    It’s unclear how the indictment will affect the 2024 race. Mr. Trump, who can run for president despite facing criminal charges, is leading in early polls. Still, his potential opponents for the Republican nomination — including Gov. Ron DeSantis of Florida and Mike Pence, Mr. Trump’s former vice president — harshly criticized the move. House Republicans have also flocked to his defense, potentially increasing the chances of gridlock in Washington. But while the charges may give Mr. Trump a boost in the G.O.P. primary, they could also hurt his standing in the general election against President Biden.

    HERE’S WHAT’S HAPPENING

    European inflation remains stubbornly high. Consumer prices rose 6.9 percent on an annualized basis across the eurozone in March, below analysts’ forecasts. But core inflation accelerated, a sign that Europe’s cost-of-living crisis is not easing. In the U.S., investors will be watching for data on personal consumption expenditure inflation, set to be released at 8:30 a.m.

    A Swiss court convicts bankers of helping a Putin ally hide millions.
    Four officials from the Swiss office of Gazprombank were accused of failing to conduct due diligence on accounts opened by a concert cellist who has been nicknamed “Putin’s wallet.” The case was seen as a test of Switzerland’s willingness to discipline bankers for wrongdoing.

    More Gulf nations back Jared Kushner’s investment firm. Sovereign funds in the United Arab Emirates and Qatar have poured hundreds of millions into Affinity Partners, The Times reports. The revelation underscores efforts by Mr. Kushner, Donald Trump’s son-in-law, and others in the Trump orbit to profit from close ties they forged with Middle Eastern powers while in the White House.

    Lawyers for a woman accusing Leon Black of rape ask to quit the case. A lawyer from the Wigdor firm, who had been representing Guzel Ganieva, told a court on Thursday that the attorney-client relationship had broken down and that Ms. Ganieva wanted to represent herself. It’s the latest twist in the lawsuit by Ms. Ganieva, who has said she had an affair with the private equity mogul that turned abusive; Black has denied wrongdoing.

    Richard Branson’s satellite-launching company is halting operations. Virgin Orbit said that it failed to raise much-needed capital, and would cease business for now and lay off nearly all of its roughly 660 employees. It signals the potential end of the company after it suffered a failed rocket launch in January.

    A brutal quarter for dealmaking

    Bankers and lawyers began the year with modest expectations for M.&A. Rising interest rates, concerns about the economy and costly financing had undercut what had been a booming market for deals. But the first three months of 2023 proved to be even more difficult than most would have guessed, as the volume of transactions fell to its lowest level in a decade.

    About 11,366 deals worth $550.5 billion were announced in the quarter, according to data from Refinitiv. That’s a 22 percent drop in the number of transactions — and a 45 percent plunge by value. That’s bad news for bankers who had been hoping for any improvement from a dismal second half of 2022. (They’ve already had to grapple with another bit of bad news: Wall Street bonuses were down 26 percent last year, according to New York State’s comptroller.)

    The outlook for improvement isn’t clear. While the Nasdaq is climbing, there’s enough uncertainty and volatility in the market — particularly given concerns around banks — to deter many would-be acquirers from doing risky deals. Then again, three months ago some dealmakers told DealBook that they expected their business to pick up in the middle of 2023.

    Here’s how the league tables look: JPMorgan Chase, Goldman Sachs and the boutique Centerview Partners led investment banks, with a combined 58 percent of the market. And Sullivan & Cromwell, Wachtell Lipton and Goodwin Procter were the big winners among law firms, with 46 percent market share.

    Biden wants new rules for lenders

    The Biden administration on Thursday called on regulators to toughen oversight of America’s midsize banks in the wake of the crisis triggered by the collapse of Silicon Valley Bank, as policymakers shift from containing the turmoil to figuring out how to prevent it from happening again. Much of the focus was on reviving measures included in the Dodd-Frank law passed in the aftermath of the 2008 financial crisis.
    These include reapplying stress tests and capital requirements used for the nation’s systemically important banks to midsize lenders, after they were rolled back in 2018 during the Trump administration.

    Here are the new rules the White House wants to see imposed:

    Tougher capital requirements and oversight of lenders. At the top of the list is the reinstatement of liquidity requirements (and stress tests on that liquidity) for lenders with $100 billion to $250 billion in assets like SVB and Signature Bank, which also collapsed.

    Plans for managing a bank failure and annual capital stress tests. The administration sees the need for more rigorous capital-testing measures designed to see if banks “can withstand high interest rates and other stresses.”

    It appears the White House will go it alone on these proposals. “There’s no need for congressional action in order to authorize the agencies to take any of these steps,” an administration official told journalists.

    Lobbyists are already pushing back, saying more oversight would drive up costs and hurt the economy. “It would be unfortunate if the response to bad management and delinquent supervision at SVB were additional regulation on all banks,” Greg Baer, the president and C.E.O. of the Bank Policy Institute, said in a statement.

    Elsewhere in banking: In the hours after Silicon Valley Bank’s failure on March 10, Jamie Dimon, C.E.O. of JPMorgan Chase, expressed his reluctance to get involved in another banking rescue effort. Dimon changed his position four days later as he and Janet Yellen, the Treasury secretary, spearheaded a plan for the country’s biggest banks to inject $30 billion in deposits into smaller ailing ones. “If my government asks me to help, I’ll help,” Mr. Dimon, 67, told The Times.

    “We are definitely working with technology which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way.” — Sundar Pichai, C.E.O. of Google, on the need for the tech industry to responsibly develop artificial intelligence tools, like chatbots, before rolling them out commercially.

    Carl Icahn and Jesus

    Illumina, the DNA sequencing company, stepped up its fight with the activist investor Carl Icahn on Thursday, pushing back against his efforts to secure three board seats and force it to spin off Grail, a maker of cancer-detection tests that it bought for $8 billion. But it is a reference to Jesus that the company says he made that is garnering much attention.

    In a preliminary proxy statement, the company said that it had nearly reached a settlement with Mr. Icahn before their fight went public. It added that he had no plan for the company beyond putting his nominees on the board. But Illumina also said Mr. Icahn told its executives that he “would not even support Jesus Christ” as an independent candidate over one of his own nominees because “my guys answer to me.”

    Experts say Mr. Icahn’s comments could be used against him in future fights. Board members are supposed to act as stewards of a company, not agents for a single investor. “If any disputes along these lines arise for public companies where Icahn has nominees on the board, shareholders are going to use this as exhibit A for allegations that the directors followed Icahn rather than their own judgment,” said Ann Lipton, a professor of law at Tulane University.

    Mr. Icahn doesn’t seem to care.
    He said the comments were “taken out of context” and the company broke an agreement to keep negotiations private.

    “It was a very poor choice of words and he is usually much smarter than that,” said John Coffee, a corporate governance professor at Columbia Law School. “But he can always say that he was misinterpreted and recognizes that directors owe their duties to all the shareholders.”

    THE SPEED READ

    Deals
    • Bed Bath & Beyond ended a deal to take money from the hedge fund Hudson Bay Capital after reporting another quarter of declining sales, and will instead try to raise $300 million by selling new stock. (WSJ)
    • Apollo Global Management reportedly plans to bid nearly $2.8 billion for the aerospace parts maker Arconic. (Bloomberg)
    • Marshall, the maker of guitar amps favored by Jimi Hendrix and Eric Clapton, will sell itself to Zound, a Swedish speaker maker that it had partnered with. (The Verge)

    Policy
    • Finland cleared its last hurdle to joining NATO after Turkey approved its entry into the security alliance. (NYT)
    • The F.T.C. is reportedly investigating America’s largest alcohol distributor over how wine and liquor are priced across the U.S. (Politico)
    • “Lobbyists Begin Chipping Away at Biden’s $80 Billion I.R.S. Overhaul” (NYT)

    Best of the rest
    • Netflix revamped its film division, as the streaming giant prepares to make fewer movies to cut costs. (Bloomberg)
    • “A.I., Brain Scans and Cameras: The Spread of Police Surveillance Tech” (NYT)
    • A jury cleared Gwyneth Paltrow of fault in a 2016 ski crash and awarded her the $1 she had requested in damages. (NYT)
    • “Do We Know How Many People Are Working From Home?” (NYT)
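    As a rough cross-check of the dealmaking figures cited earlier in this item (11,366 announced deals worth $550.5 billion, described as a 22 percent drop in count and a 45 percent plunge by value), the implied year-earlier quarter can be back-calculated. This is only an approximation derived from the article’s rounded percentages, not a figure taken from Refinitiv.

```python
# Back-of-the-envelope check on the quarter's M&A figures quoted above.
# Inputs are the article's rounded numbers; outputs are approximations, not Refinitiv data.
deals_q1_2023 = 11_366
value_q1_2023_bn = 550.5       # announced deal value, in billions of dollars

drop_in_count = 0.22           # "a 22 percent drop in the number of transactions"
drop_in_value = 0.45           # "a 45 percent plunge by value"

implied_deals_q1_2022 = deals_q1_2023 / (1 - drop_in_count)
implied_value_q1_2022_bn = value_q1_2023_bn / (1 - drop_in_value)

print(f"Implied Q1 2022 deal count: ~{implied_deals_q1_2022:,.0f}")        # ~14,572 deals
print(f"Implied Q1 2022 deal value: ~${implied_value_q1_2022_bn:,.0f}bn")  # ~$1,001bn, roughly $1 trillion
```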

  •

    YouTube Restores Donald Trump’s Account Privileges

    The Google-owned video platform became the latest of the big social networks to reverse the former president’s account restrictions.YouTube suspended former President Donald J. Trump’s account on the platform six days after the Jan. 6 attack on the Capitol. The video platform said it was concerned that Mr. Trump’s lies about the 2020 election could lead to more real-world violence.YouTube, which is owned by Google, reversed that decision on Friday, permitting Mr. Trump to once again upload videos to the popular site. The move came after similar decisions by Twitter and Meta, which owns Facebook and Instagram.“We carefully evaluated the continued risk of real-world violence, while balancing the chance for voters to hear equally from major national candidates in the run up to an election,” YouTube said on Twitter on Friday. Mr. Trump’s account will have to comply with the site’s content rules like any other account, YouTube added.After false claims that the 2020 presidential election was stolen circulated online and helped stoke the Jan. 6 attack, social media giants suspended Mr. Trump’s account privileges. Two years later, the platforms have started to soften their content rules. Under Elon Musk’s ownership, Twitter has unwound many of its content moderation efforts. YouTube recently laid off members of its trust and safety team, leaving one person in charge of setting political misinformation policies.Mr. Trump announced in November that he was seeking a second term as president, setting off deliberations at social media companies over whether to allow him back on their platforms. Days later, Mr. Musk polled Twitter users on whether he should reinstate Mr. Trump, and 52 percent of respondents said yes. Like YouTube, Meta said in January that it was important that people hear what political candidates are saying ahead of an election.The former president’s reinstatement is one of the first significant content decisions that YouTube has taken under its new chief executive, Neal Mohan, who got the top job last month. YouTube also recently loosened its profanity rules so that creators who used swear words at the start of a video could still make money from the content.YouTube’s announcement on Friday echoes a pattern of the company and its parent Google making polarizing content decisions after a competitor has already taken the same action. YouTube followed Meta and Twitter in suspending Mr. Trump after the Capitol attack, and in reversing the bans.Since losing his bid for re-election in 2020, Mr. Trump has sought to make a success of his own social media service, Truth Social, which is known for its loose content moderation rules.Mr. Trump on Friday posted on his Facebook page for the first time since his reinstatement. “I’M BACK!” Mr. Trump wrote, alongside a video in which he said, “Sorry to keep you waiting. Complicated business. Complicated.”Despite his Twitter reinstatement, Mr. Trump has not returned to posting from that account.In his last tweet, dated Jan. 8, 2021, he said he would not attend the coming inauguration, held at the Capitol. More

  •

    Political Campaigns Flood Streaming Video With Custom Voter Ads

    The targeted political ads could spread some of the same voter-influence techniques that proliferated on Facebook to an even less regulated medium.

    Over the last few weeks, tens of thousands of voters in the Detroit area who watch streaming video services were shown different local campaign ads pegged to their political leanings.

    Digital consultants working for Representative Darrin Camilleri, a Democrat in the Michigan House who is running for State Senate, targeted 62,402 moderate, female — and likely pro-choice — voters with an ad promoting reproductive rights. The campaign also ran a more general video ad for Mr. Camilleri, a former public-school teacher, directed at 77,836 Democrats and Independents who have voted in past midterm elections. Viewers in Mr. Camilleri’s target audience saw the messages while watching shows on Lifetime, Vice and other channels on ad-supported streaming services like Samsung TV Plus and LG Channels.

    Although millions of American voters may not be aware of it, the powerful data-mining techniques that campaigns routinely use to tailor political ads to consumers on sites and apps are making the leap to streaming video. The targeting has become so precise that next-door neighbors streaming the same true crime show on the same streaming service may now be shown different political ads — based on data about their voting record, party affiliation, age, gender, race or ethnicity, estimated home value, shopping habits or views on gun control.

    Political consultants say the ability to tailor streaming video ads to small swaths of viewers could be crucial this November for candidates like Mr. Camilleri who are facing tight races. In 2016, Mr. Camilleri won his first state election by just several hundred votes.

    “Very few voters wind up determining the outcomes of close elections,” said Ryan Irvin, the co-founder of Change Media Group, the agency behind Mr. Camilleri’s ad campaign. “Very early in an election cycle, we can pull from the voter database a list of those 10,000 voters, match them on various platforms and run streaming TV ads to just those 10,000 people.”

    Representative Darrin Camilleri, a member of the Michigan House who is running for State Senate, targeted local voters with streaming video ads before he campaigned in their neighborhoods. Emily Elconin for The New York Times

    Targeted political ads on streaming platforms — video services delivered via internet-connected devices like TVs and tablets — seemed like a niche phenomenon during the 2020 presidential election. Two years later, streaming has become the most highly viewed TV medium in the United States, according to Nielsen. Savvy candidates and advocacy groups are flooding streaming services with ads in an effort to reach cord-cutters and “cord nevers,” people who have never watched traditional cable or broadcast TV.

    The trend is growing so fast that political ads on streaming services are expected to generate $1.44 billion — or about 15 percent — of the projected $9.7 billion in ad spending for the 2022 election cycle, according to a report from AdImpact, an ad tracking company. That would for the first time put streaming on par with political ad spending on Facebook and Google.

    The quick proliferation of the streaming political messages has prompted some lawmakers and researchers to warn that the ads are outstripping federal regulation and oversight. For example, while political ads running on broadcast and cable TV must disclose their sponsors, federal rules on political ad transparency do not specifically address streaming video services. Unlike broadcast TV stations, streaming platforms are also not required to maintain public files about the political ads they sold.

    The result, experts say, is an unregulated ecosystem in which streaming services take wildly different approaches to political ads. “There are no rules over there, whereas, if you are a broadcaster or a cable operator, you definitely have rules you have to operate by,” said Steve Passwaiter, a vice president at Kantar Media, a company that tracks political advertising.

    The boom in streaming ads underscores a significant shift in the way that candidates, party committees and issue groups may target voters. For decades, political campaigns have blanketed local broadcast markets with candidate ads or tailored ads to the slant of cable news channels. With such bulk media buying, viewers watching the same show at the same time as their neighbors saw the same political messages. But now campaigns are employing advanced consumer-profiling and automated ad-buying services to deliver different streaming video messages, tailored to specific voters. “In the digital ad world, you’re buying the person, not the content,” said Mike Reilly, a partner at MVAR Media, a progressive political consultancy that creates ad campaigns for candidates and advocacy groups.

    Targeted political ads are being run on a slew of different ad-supported streaming channels. Some smart TV manufacturers air the political ads on proprietary streaming platforms, like Samsung TV Plus and LG Channels. Viewers watching ad-supported streaming channels via devices like Roku may also see targeted political ads.

    Policies on political ad targeting vary. Amazon prohibits political party and candidate ads on its streaming services. YouTube TV and Hulu allow political candidates to target ads based on viewers’ ZIP code, age and gender, but they prohibit political ad targeting by voting history or party affiliation. Roku, which maintains a public archive of some political ads running on its platform, declined to comment on its ad-targeting practices. Samsung and LG, the latter of which has publicly promoted its voter-targeting services for political campaigns, did not respond to requests for comment. Netflix declined to comment about its plans for an ad-supported streaming service.

    Targeting political ads on streaming services can involve more invasive data-mining than the consumer-tracking techniques typically used to show people online ads for sneakers. Political consulting firms can buy profiles on more than 200 million voters, including details on an individual’s party affiliations, voting record, political leanings, education levels, income and consumer habits.
Campaigns may employ that data to identify voters concerned about a specific issue — like guns or abortion — and hone video messages to them. In addition, internet-connected TV platforms like Samsung, LG and Roku often use data-mining technology, called “automated content recognition,” to analyze snippets of the videos people watch and segment viewers for advertising purposes.

Some streaming services and ad tech firms allow political campaigns to provide lists of specific voters to whom they wish to show ads. To serve those messages, ad tech firms employ precise delivery techniques — like using IP addresses to identify devices in a voter’s household. The device mapping allows political campaigns to aim ads at certain voters whether they are streaming on internet-connected TVs, tablets, laptops or smartphones. (A simplified sketch of this kind of voter-to-device matching appears at the end of this article.)

Sten McGuire, an executive at a4 Advertising, presented a webinar in March announcing a partnership to sell political ads on LG channels. (The New York Times)

Using IP addresses, “we can intercept voters across the nation,” Mr. McGuire said in the webinar. His company’s ad-targeting worked, he added, “whether you are looking to reach new cord cutters or ‘cord nevers’ streaming their favorite content, targeting Spanish-speaking voters in swing states, reaching opinion elites and policy influencers or members of Congress and their staff.”

Some researchers caution that targeted video ads could spread some of the same voter-influence techniques that have proliferated on Facebook to a new, and even less regulated, medium. Facebook and Google, the researchers note, instituted some restrictions on political ad targeting after Russian operatives used digital platforms to try to disrupt the 2016 presidential election. With such restrictions in place, political advertisers on Facebook, for instance, should no longer be able to target users interested in Malcolm X or Martin Luther King with paid messages urging them not to vote. Facebook and Google have also created public databases that enable people to view political ads running on the platforms.

But many streaming services lack such targeting restrictions and transparency measures. The result, these experts say, is an opaque system of political influence that runs counter to basic democratic principles.

“This occupies a gray area that’s not getting as much scrutiny as ads running on social media,” said Becca Ricks, a senior researcher at the Mozilla Foundation who has studied the political ad policies of popular streaming services. “It creates an unfair playing field where you can precisely target, and change, your messaging based on the audience — and do all of this without some level of transparency.”

Some political ad buyers are shying away from more restricted online platforms in favor of more permissive streaming services. “Among our clients, the percentage of budget going to social channels, and on Facebook and Google in particular, has been declining,” said Grace Briscoe, an executive overseeing candidate and political issue advertising at Basis Technologies, an ad tech firm.
“The kinds of limitations and restrictions that those platforms have put on political ads has disinclined clients to invest as heavily there.”

Senators Amy Klobuchar and Mark Warner introduced the Honest Ads Act, which would require online political ads to include disclosures similar to those on broadcast TV ads. (Al Drago for The New York Times)

Members of Congress have introduced a number of bills that would curb voter-targeting or require digital ads to adhere to the same rules as broadcast ads. But the measures have not yet been enacted.

Amid widespread covertness in the ad-targeting industry, Mr. Camilleri, the member of the Michigan House running for State Senate, was unusually forthcoming about how he was using streaming services to try to engage specific swaths of voters. In prior elections, he said, he sent postcards introducing himself to voters in neighborhoods where he planned to make campaign stops. During this year’s primaries, he updated the practice by running streaming ads introducing himself to certain households a week or two before he planned to knock on their doors.

“It’s been working incredibly well because a lot of people will say, ‘Oh, I’ve seen you on TV,’” Mr. Camilleri said, noting that many of his constituents did not appear to understand the ads were shown specifically to them and not to a general broadcast TV audience. “They don’t differentiate” between TV and streaming, he added, “because you’re watching YouTube on your television now.”
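The voter-to-device matching described in this article can be illustrated with a minimal sketch. This is not any vendor’s actual system: the field names, the matching rule (joining a campaign’s voter list to household IP addresses supplied by an ad platform) and the sample data below are invented for illustration, and real ad tech firms layer on identity graphs, consent rules and probabilistic matching that are omitted here.

```python
# Illustrative sketch only: joins a hypothetical campaign voter file to a
# hypothetical ad-platform device graph keyed by household IP address.
from dataclasses import dataclass

@dataclass
class Voter:
    voter_id: str
    party: str
    issue_scores: dict  # e.g. {"reproductive_rights": 0.92}, modeled by the campaign

@dataclass
class Household:
    ip_address: str
    device_ids: list  # smart TVs, tablets, phones seen behind this IP

def build_audience(voters, threshold, issue):
    """Select voters whose modeled score on an issue exceeds a threshold."""
    return [v for v in voters if v.issue_scores.get(issue, 0.0) >= threshold]

def match_to_devices(audience, voter_to_ip, households):
    """Map selected voters to streaming devices via household IP addresses."""
    by_ip = {h.ip_address: h for h in households}
    targeted_devices = []
    for voter in audience:
        ip = voter_to_ip.get(voter.voter_id)   # supplied by a data broker / identity graph
        household = by_ip.get(ip)
        if household:
            targeted_devices.extend(household.device_ids)
    return targeted_devices

# Usage with made-up data:
voters = [Voter("V1", "DEM", {"reproductive_rights": 0.95}),
          Voter("V2", "IND", {"reproductive_rights": 0.40})]
households = [Household("203.0.113.7", ["smart_tv_123", "tablet_456"])]
voter_to_ip = {"V1": "203.0.113.7"}

audience = build_audience(voters, threshold=0.8, issue="reproductive_rights")
print(match_to_devices(audience, voter_to_ip, households))
# ['smart_tv_123', 'tablet_456'] -> devices eligible to receive the streaming ad
```

The sketch shows why the technique is so precise: once a household’s devices are keyed to an IP address, the same ad can follow a single named voter across a TV, a tablet and a phone, while the next-door neighbor sees something else entirely.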

  • in

    Barack Obama’s New Role: Fighting Disinformation

    The former president has embarked on a campaign to warn that the scourge of online falsehoods has eroded the foundations of democracy.

SAN FRANCISCO — In 2011, President Barack Obama swept into Silicon Valley and yukked it up with Mark Zuckerberg, Facebook’s founder. The occasion was a town hall with the social network’s employees that covered the burning issues of the day: taxes, health care, the promise of technology to solve the nation’s problems.

More than a decade later, Mr. Obama is making another trip to Silicon Valley, this time with a grimmer message about the threat that the tech giants have created to the nation itself.

In private meetings and public appearances over the last year, the former president has waded deeply into the public fray over misinformation and disinformation, warning that the scourge of falsehoods online has eroded the foundations of democracy at home and abroad. In a speech at Stanford University on Thursday, he is expected to add his voice to demands for rules to rein in the flood of lies polluting public discourse. The urgency of the crisis — the internet’s “demand for crazy,” as he put it recently — has already pushed him further than he was ever prepared to go as president to take on social media.

“I think it is reasonable for us as a society to have a debate and then put in place a combination of regulatory measures and industry norms that leave intact the opportunity for these platforms to make money but say to them that there’s certain practices you engage in that we don’t think are good for society,” Mr. Obama, now 60, said at a conference on disinformation this month organized by the University of Chicago and The Atlantic.

Mr. Obama’s campaign — the timing of which stemmed not from a single cause, people close to him said, but from a broad concern about the damage to democracy’s foundations — comes in the middle of a fierce but inconclusive debate over how best to restore trust online.

In Washington, lawmakers are so sharply divided that any legislative compromise seems out of reach. Democrats criticize giants like Facebook, which has been renamed Meta, and Twitter for failing to rid their sites of harmful content. President Joseph R. Biden Jr., too, has lashed out at the platforms that allowed falsehoods about coronavirus vaccines to spread, saying last year that “they’re killing people.” Republicans, for their part, accuse the companies of suppressing free speech by censoring conservative voices — above all former President Donald J. Trump, who was barred from Facebook and Twitter after the riot on Capitol Hill on Jan. 6 last year. With so little agreement about the problem, there is even less about a solution.

Whether Mr. Obama’s advocacy can sway the debate remains to be seen. While he has not sought to endorse a single solution or particular piece of legislation, he nonetheless hopes to appeal across the political spectrum for common ground.

“You’ve got to think about how things are going to be consumed through different partisan filtering but still make your true, authentic, best case about how you see the world and what the stakes are and why,” said Jason Goldman, a former Twitter, Blogger and Medium executive who served as the White House’s first chief digital officer under Mr. Obama and continues to advise him. “There’s a potential reason to believe that a good path exists out of some of the messes that we’re in,” he added.

As an apostle of the dangers of disinformation, Mr. Obama might be an imperfect messenger.
He was the first presidential candidate to ride the power of social media into office in 2008 but then, as president, did little to intervene when its darker side — propagating falsehoods, extremism, racism and violence — became apparent at home and abroad.

“I saw it sort of unfold — and that is the degree to which information, disinformation, misinformation was being weaponized,” Mr. Obama said in Chicago, expressing something close to regret. He added, “I think I underestimated the degree to which democracies were as vulnerable to it as they were, including ours.”

Mr. Obama, those close to him said, became fixated by disinformation after leaving office. He rehashed, as many others have, whether he had done enough to counter the information campaign ordered by Russia’s president, Vladimir V. Putin, to tilt the 2016 election against Hillary Rodham Clinton. He began meeting with executives, activists and other experts in earnest last year after Mr. Trump refused to recognize the results of the 2020 election, making unfounded claims of widespread voter fraud, those who have consulted with Mr. Obama said.

In his musings on the matter, Mr. Obama has not claimed to have discovered a silver bullet that has eluded others who have studied the issue. By coming forward more publicly, however, he hopes to highlight the values for corporate conduct around which consensus could form.

“This can be an effective nudge to a lot of the thinking that is already taking place,” Ben Rhodes, a former deputy national security adviser, said. “Every day brings more proof of why this matters.”

The location of Thursday’s speech, Stanford’s Cyber Policy Center, was intentional, bringing Mr. Obama to the heart of the industry that in many ways shaped his presidency. In his 2008 presidential campaign, he went from being an underdog candidate to an online sensation with his embrace of social media as a tool to target voters and to solicit donations. He became an industry favorite; his digital campaign was led by a Facebook co-founder, Chris Hughes, and several other tech chief executives endorsed him, including Eric Schmidt of Google.

During his administration, Mr. Obama extolled the promise of tech companies to strengthen the economy with higher-skilled jobs and to propel democracy movements abroad. He lured tech employees like Mr. Goldman to join his administration and filled his campaign coffers with fund-raisers at the Bay Area homes of supporters like Sheryl Sandberg, the chief operating officer of Meta, and Marc Benioff, the chief executive of Salesforce. It was a period of mutual admiration and little government oversight of the tech industry. Though Mr. Obama endorsed privacy regulations, not a single piece of legislation to control the tech companies passed during his tenure, even as they became economic behemoths that touch virtually every aspect of life.

Looking back at his administration’s approach, Mr. Obama has said he would not pinpoint any one action or piece of legislation that he might have handled differently. In hindsight, though, he understands now how optimism about online technologies, including social media, outweighed caution, according to Mr. Rhodes.

“He’ll certainly acknowledge that there’s things that could have been done differently or ways we were all thinking about the tools and technologies that turned out at times to see the opportunities more than the risks,” Mr. Rhodes said.
Mr. Obama’s views began to change with Russia’s flood of propaganda on social media sites like Facebook, Twitter and YouTube to stir confusion and chaos in the 2016 presidential election. Days after that election, Mr. Obama took Mr. Zuckerberg aside at a meeting of world leaders in Lima, Peru, to warn that he needed to take the problem more seriously.

Once he left office, Mr. Obama was noticeably absent from much of the public conversation around disinformation. “As a general matter, there was an awareness that anything he said about certain issues was just going to ricochet around the fun house mirrors,” Mr. Rhodes said.

Mr. Obama’s approach to the issue has been characteristically deliberative. He has consulted the chief executives of Apple, Alphabet and others. Through the Obama Foundation in Chicago, he has also met often with the scholars the foundation has trained; they recounted their own experiences with disinformation in a variety of fields around the world.

From those deliberations, potential solutions have begun taking shape, a theme he plans to outline broadly on Thursday. While Mr. Obama maintains that he remains “close to a First Amendment absolutist,” he has focused on the need for greater transparency and regulatory oversight of online discourse — and the ways companies have profited from manipulating audiences through their proprietary algorithms.

Mr. Goldman compared a potential approach to consumer protection or food safety practices already in place. “You may not know exactly what’s in a hot dog, but you trust that there is a process for meat inspections that ensures that the food sold and consumed in this country and other countries around the world are safe,” he said.

In Congress, lawmakers have already proposed the creation of a regulatory agency dedicated to overseeing internet companies. Others have proposed stripping tech companies of a legal shield that protects them from liability. No proposals have advanced, though, even as the European Union has moved forward, putting into law some of the practices still merely bandied about in Washington. The union is expected to move as soon as Friday on new regulations to impose audits of algorithmic amplification.

Kyle Plotkin, a Republican strategist and former chief of staff to Senator Josh Hawley of Missouri, said Mr. Obama “can be a polarizing figure” and could inflame, not calm, the debate over disinformation. “Adoring fans will be very happy with him weighing in, but others won’t,” he said. “I don’t think he will move the ball forward. If anything, he moves the ball backward.”

  • in

    ‘La French Tech’ Arrives Under Macron, but Proves No Panacea

    The president has brought innovation, jobs and growth. Still, resentments fester on the eve of the presidential election.

PARIS — In full Steve Jobs mode, President Emmanuel Macron of France donned a black turtleneck in January and took to Twitter to celebrate the creation in France of 25 “unicorn” start-ups — companies with a market value of over 1 billion euros, or almost $1.1 billion. He declared that France’s start-up economy was “changing the lives of French people” and “strengthening our sovereignty.” It was also helping to create jobs: Unemployment has fallen to 7.4 percent, the lowest level in a decade.

The start-up boom was a milestone for a young president elected five years ago as a restless disrupter, promising to pry open the economy and make it competitive in the 21st century. To some extent, Mr. Macron has succeeded, luring billions of euros in foreign investments and creating hundreds of thousands of new jobs, many in tech start-ups, in a country whose resistance to change is stubborn. But disruption is just that, and the president has at the same time left many French feeling unsettled and unhappy, left behind or ignored.

As Mr. Macron seeks re-election starting on Sunday, it is two countries that will vote — a mainly urban France that sees the need for change to meet the era’s sweeping technological and economic challenges, and a France of the “periphery,” wary of innovation, struggling to get by, alarmed by immigration and resentful of a leader seen as embodying the arrogance of the privileged. Which France shows up at voting booths in greater numbers will determine the outcome.

Campaign posters on display this month in the northeastern French town of Stiring-Wendel. (Andrea Mantovani for The New York Times)

In many Western societies, the simultaneous spread of technology and inequality has posed acute problems, stirring social tensions, and France has proved no exception. If the disenchanted France prevails, Marine Le Pen, the perennial candidate of the nationalist right, will most likely prevail, too.

Worried that he may have lost the left by favoring start-up entrepreneurship and market reforms, Mr. Macron has in the past week been multiplying appeals to the left, resorting to phrases like “our lives are worth more than their profits” to suggest his perceived rightward lurch was not the whole story. He told France Inter radio that “fraternity” was the most important word in the French national motto, and said during a visit to Brittany that “solidarity” and “equality of opportunity” would be the central themes of an eventual second term.

The pledges looked like signs of growing anxiety about the election’s outcome. After several months in which
Mr. Macron’s re-election had appeared virtually assured, the gap between him and Ms. Le Pen has closed. The leading two candidates in Sunday’s vote will go through to a runoff on April 24.

The election will be largely decided by perceptions of the economy. In Mr. Macron’s favor, the country has bounced back faster than expected from coronavirus lockdowns, with economic growth reaching 7 percent after a devastating pandemic-induced recession.

Marine Le Pen speaking this month in Stiring-Wendel. (Andrea Mantovani for The New York Times)

The most significant cultural transformation has come in the area of tech, where Mr. Macron’s determination to create a start-up culture centered around new technology has brought changes the government considers essential to the future of France.

Cédric O, the secretary of state for the digital sector, wearing jeans and a white dress shirt, no tie, admits to being obsessed. Day after long day, he plots the future of “la French tech” from his spacious office at the Finance Ministry. Five years ago, that may have seemed quixotic, but something has stirred. “It’s vital to be obsessed because the risk France and Europe are facing is to be kicked out of history,” Mr. O, 39, said, borrowing a line often used by Mr. Macron. “We have to get back into the international technological race.”

Toward that end, Mr. Macron opened Station F, a mammoth incubator project in Paris representing France’s start-up ambitions, and earmarked nearly €10 billion in tax credits and other inducements to lure research activity and artificial intelligence business. A new bank was created to help finance start-ups. The president wined and dined multinational chief executives, creating an annual gathering at Versailles called “Choose France.” Since 2019, France has become the leading destination for foreign investment in Europe, and more than 70 investment projects worth €12 billion have been pledged by foreign multinationals at the Versailles gatherings, said Franck Riester, France’s foreign trade minister.

In the past four years, IBM, SAP of Germany and DeepMind, the London-based machine learning company owned by Google’s parent, Alphabet, have increased investment in France and created thousands of jobs.

Station F, a mammoth project in Paris that represents France’s start-up ambitions. (Roberto Frankenberg for The New York Times)

Facebook and Google have also bolstered their French presence and their artificial intelligence teams in Paris. Salesforce, the American cloud computing company, is moving ahead with over €2 billion in pledged investments.

“Macron brought a culture shift where France was suddenly open to the world of funders,” said Thomas Clozel, a doctor by training and the founder in 2016 of Owkin, a start-up that uses artificial intelligence to personalize and improve medical treatment. “He made everything easy for start-up entrepreneurs and so changed the view of France as an anticapitalist society.”

François Hollande, Mr. Macron’s Socialist Party predecessor, had famously declared in 2012: “My enemy is the world of finance.” As a result, Mr. Clozel said, securing funds as a French start-up was so problematic that he chose to incorporate in the United States. No longer.

“Today, I am thinking of reincorporating in France,” he said.
“The ease of dealing with the government, the consortium of start-ups helping one another, and the new French tech pride are compelling.”

Among the start-ups that have had a significant effect on French life are Doctolib, a website that allows patients to arrange for medical appointments and tests online, and Backmarket, an online market for reconditioned tech gadgets that just became France’s most valuable start-up, at $5.7 billion. They began life before Mr. Macron took office, but have grown exponentially in the past five years.

“I have made 56 investments in the last two years, and 53 of them are in France,” said Jonathan Benhamou, a French entrepreneur who founded PeopleDoc, a company that simplifies access to information for human resources departments. Now funding new ventures and focusing on a new start-up called Resilience in the field of personalized cancer care, Mr. Benhamou credits Mr. Macron with “giving investors confidence in stability and creating a virtuous cycle.” Talented engineers no longer go elsewhere because there is an “ecosystem” for them in France, Mr. O said.

Yellow Vest protesters blocking a road in Caen, in France’s Normandy region, in November 2018. (Charly Triballeau/Agence France-Presse — Getty Images)

Mr. Macron has insisted that opening the economy is consistent with maintaining protections for French workers and that the arrival of la French tech does not mean the embrace of the no-holds-barred capitalism behind the churn of American creativity. Despite the president’s overhauls, France remains one of the most expensive countries for payroll taxes, according to the Organization for Economic Cooperation and Development, with hourly labor costs of nearly €38, close to levels seen in Sweden, Norway and other northern European countries.

“We know that we have to go further,” Mr. Riester, the foreign trade minister, said in a recent interview. “We still have some brakes that could be taken off the economy, and we have to cut some red tape in the future.”

  • in

    Election Falsehoods Surged on Podcasts Before Capitol Riots, Researchers Find

    A new study analyzed nearly 1,500 episodes, showing the extent to which podcasts pushed misinformation about voter fraud.

Weeks before the 2020 presidential election, the conservative broadcaster Glenn Beck outlined his prediction for how Election Day would unfold: President Donald J. Trump would be winning that night, but his lead would erode as dubious mail-in ballots arrived, giving Joseph R. Biden Jr. an unlikely edge. “No one will believe the outcome because they’ve changed the way we’re electing a president this time,” he said.

None of the predictions of widespread voter fraud came true. But podcasters frequently advanced the false belief that the election was illegitimate, first as a trickle before the election and then as a tsunami in the weeks leading up to the violent attack at the Capitol on Jan. 6, 2021, according to new research.

Researchers at the Brookings Institution reviewed transcripts of nearly 1,500 episodes from 20 of the most popular political podcasts. Among episodes released between the election and the Jan. 6 riot, about half contained election misinformation, according to the analysis. In some weeks, 60 percent of episodes mentioned the election fraud conspiracy theories tracked by Brookings. Those included false claims that software glitches interfered with the count, that fake ballots were used, and that voting machines run by Dominion Voting Systems were rigged to help Democrats. Those kinds of theories gained currency in Republican circles and would later be leveraged to justify additional election audits across the country.

Chart: Misinformation Soared After Election. The share of podcast episodes per week featuring election misinformation increased sharply after the election. (Source: The Brookings Institution)

The new research underscores the extent to which podcasts have spread misinformation using platforms operated by Apple, Google, Spotify and others, often with little content moderation. While social media companies have been widely criticized for their role in spreading misinformation about the election and Covid-19 vaccines, they have cracked down on both in the last year. Podcasts and the companies distributing them have been spared similar scrutiny, researchers say, in large part because podcasts are harder to analyze and review.

“People just have no sense of how bad this problem is on podcasts,” said Valerie Wirtschafter, a senior data analyst at Brookings who co-wrote the report with Chris Meserole, a director of research at Brookings.

Dr. Wirtschafter downloaded and transcribed more than 30,000 podcast episodes deemed “talk shows,” meaning they offered analysis and commentary rather than strictly news updates. Focusing on 1,490 episodes around the election from 20 popular shows, she created a dictionary of terms about election fraud. After transcribing the podcasts, a team of researchers searched for the keywords and manually checked each mention to determine whether the speaker was supporting or denouncing the claims. (A simplified sketch of this keyword-flagging step appears at the end of this article.)

In the months leading up to the election, conservative podcasters focused mostly on the fear that mail-in ballots could lead to fraud, the analysis showed. At the time, political analysts were busy warning of a “red mirage”: an early lead by Mr. Trump that could erode because mail-in ballots, which tend to get counted later, were expected to come from Democratic-leaning districts. As ballots were counted, that is precisely what happened. But podcasters used the changing fortunes to raise doubts about the election’s integrity. Election misinformation shot upward, with about 52 percent of episodes containing misinformation in the weeks after the election, up from about 6 percent of episodes before the election.

The biggest offender in Brookings’s analysis was Stephen K. Bannon, Mr. Trump’s former adviser. His podcast, “Bannon’s War Room,” was flagged 115 times for episodes using voter fraud terms included in Brookings’s analysis between the election and Jan. 6.

“You know why they’re going to steal this election?” Mr. Bannon asked on Nov. 3. “Because they don’t think you’re going to do anything about it.”

As the Jan. 6 protest drew closer, his podcast pushed harder on those claims, including the false belief that poll workers handed out markers that would disqualify ballots. “Now we’re on, as they say, the point of attack,” Mr. Bannon said the day before the protest. “The point of attack tomorrow. It’s going to kick off. It’s going to be very dramatic.”

Mr. Bannon’s show was removed from Spotify in November 2020 after he discussed beheading federal officials, but it remains available on Apple and Google. When reached for comment on Monday, Mr. Bannon said that President Biden was “an illegitimate occupant of the White House” and referenced investigations into the election that show they “are decertifying his electors.” Many legal experts have argued there is no way to decertify the election.

Chart: Election Misinformation by Podcast. The podcast by Stephen K. Bannon was flagged for election misinformation more than other podcasts tracked by the Brookings Institution. Episodes sharing electoral misinformation, among the most popular political talk show podcasts evaluated by Brookings, using a selection of keywords related to electoral fraud between Aug. 20, 2020, and Jan. 6, 2021. (Source: Brookings Institution)

Sean Hannity, the Fox News anchor, also ranked highly in the Brookings data. His podcast and radio program, “The Sean Hannity Show,” is now the most popular radio talk show in America, reaching upward of 15 million radio listeners, according to Talk Media. “Underage people voting, people that moved voting, people that never re-registered voting, dead people voting — we have it all chronicled,” Mr. Hannity said during one episode.
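The flagging step the Brookings researchers describe, a dictionary of election-fraud terms matched against transcripts with each hit then reviewed by hand, can be approximated with a short sketch. The term list, transcripts and function names below are invented for illustration; the actual study used a far larger dictionary and relied on human coders to judge whether each mention endorsed or rebutted a claim.

```python
# Illustrative sketch: flag podcast transcript snippets that mention
# election-fraud keywords, for later manual review. Terms and data are
# invented; the Brookings dictionary was far more extensive.
import re

FRAUD_TERMS = [
    "stolen election",
    "dead people voting",
    "rigged voting machines",
    "fake ballots",
]

def flag_episodes(transcripts, terms=FRAUD_TERMS, window=60):
    """Return (episode_id, term, snippet) tuples for every keyword hit."""
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)
    hits = []
    for episode_id, text in transcripts.items():
        for match in pattern.finditer(text):
            start = max(match.start() - window, 0)
            end = match.end() + window
            hits.append((episode_id, match.group(0).lower(), text[start:end]))
    return hits

# Usage with made-up transcripts:
transcripts = {
    "show_a_2020-11-05": "They used fake ballots and rigged voting machines, he claimed.",
    "show_b_2020-11-05": "Officials explained why claims of a stolen election were false.",
}

for episode_id, term, snippet in flag_episodes(transcripts):
    # A human reviewer would then decide whether the mention supports or debunks the claim.
    print(episode_id, "->", term)
```

Note that the second episode is flagged even though it debunks the claim, which is exactly why the researchers paired keyword matching with manual review rather than counting raw hits as misinformation.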