More stories

  • I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online.

    When I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following the violence of Jan. 6, I helped make the call to ban his account from Twitter altogether. Nothing prepared me for what would happen next.

    Backed by fans on social media, Mr. Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move.

    This isn’t a story I relish revisiting. But I’ve learned that what happened to me wasn’t an accident. It wasn’t just personal vindictiveness or “cancel culture.” It was a strategy — one that affects not just targeted individuals like me, but all of us, as it is rapidly changing what we see online.

    Private individuals — from academic researchers to employees of tech companies — are increasingly the targets of lawsuits, congressional hearings and vicious online attacks. These efforts, staged largely by the right, are having their desired effect: Universities are cutting back on efforts to quantify abusive and misleading information spreading online. Social media companies are shying away from making the kind of difficult decisions my team did when we intervened against Mr. Trump’s lies about the 2020 election. Platforms had finally begun taking these risks seriously only after the 2016 election. Now, faced with the prospect of disproportionate attacks on their employees, companies seem increasingly reluctant to make controversial decisions, letting misinformation and abuse fester in order to avoid provoking public retaliation.

    These attacks on internet safety and security come at a moment when the stakes for democracy could not be higher. More than 40 major elections are scheduled to take place in 2024, including in the United States, the European Union, India, Ghana and Mexico. These democracies will most likely face the same risks of government-backed disinformation campaigns and online incitement of violence that have plagued social media for years. We should be worried about what happens next.

    My story starts with that fact check. In the spring of 2020, after years of internal debate, my team decided that Twitter should apply a label to a tweet of then-President Trump’s that asserted that voting by mail is fraud-prone, and that the coming election would be “rigged.” “Get the facts about mail-in ballots,” the label read.

    On May 27, the morning after the label went up, the White House senior adviser Kellyanne Conway publicly identified me as the head of Twitter’s site integrity team. The next day, The New York Post put several of my tweets making fun of Mr. Trump and other Republicans on its cover. I had posted them years earlier, when I was a student and had a tiny social media following of mostly my friends and family. Now, they were front-page news. Later that day, Mr. Trump tweeted that I was a “hater.”

    Legions of Twitter users, most of whom days prior had no idea who I was or what my job entailed, began a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed. The volume of Twitter notifications crashed my phone. Friends I hadn’t heard from in years expressed their concern. On Instagram, old vacation photos and pictures of my dog were flooded with threatening comments and insults. (A few commenters, wildly misreading the moment, used the opportunity to try to flirt with me.)

    I was embarrassed and scared. Up to that moment, no one outside of a few fairly niche circles had any idea who I was. Academics studying social media call this “context collapse”: things we post on social media with one audience in mind might end up circulating to a very different audience, with unexpected and destructive results. In practice, it feels like your entire world has collapsed.

    The timing of the campaign targeting me and my alleged bias suggested the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives. But the success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing. As the former Republican Party chair Rich Bond once described, maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction. What happened to me was part of a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps.

    It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey, then the C.E.O. of Twitter, overruled Trust and Safety’s recommendation that Mr. Trump’s account should be banned because of several tweets, including one that attacked Vice President Mike Pence. He was given a 12-hour timeout instead (before being banned on Jan. 8). Within the boundaries of the rules, staff members were encouraged to find solutions to help the company avoid the type of blowback that results in angry press cycles, hearings and employee harassment. The practical result was that Twitter gave offenders greater latitude: Representative Marjorie Taylor Greene was permitted to violate Twitter’s rules at least five times before one of her accounts was banned in 2022. Other prominent right-leaning figures, such as the culture war account Libs of TikTok, enjoyed similar deference.

    Similar tactics are being deployed around the world to influence platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after the government asked us to block accounts involved in a series of protests. The harassment again paid off: Twitter executives decided any potentially sensitive actions in India would require top-level approval, a unique level of escalation of otherwise routine decisions.

    And when we wanted to disclose a propaganda campaign operated by a branch of the Indian military, our legal team warned us that our India-based employees could be charged with sedition — and face the death penalty if convicted. So Twitter only disclosed the campaign over a year later, without fingering the Indian government as the perpetrator.

    In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.

    In each of these cases, the targeted staffers lacked the ability to do what was being asked of them by the government officials in charge, as the underlying decisions were made thousands of miles away in California. But because local employees had the misfortune of residing within the jurisdiction of the authorities, they were nevertheless the targets of coercive campaigns, pitting companies’ sense of duty to their employees against whatever values, principles or policies might cause them to resist local demands. Inspired, India and a number of other countries started passing “hostage-taking” laws to ensure social media companies employ locally based staff.

    In the United States, we’ve seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and — in Twitter’s case — by the company’s new owner.

    One of the most recent forces in this campaign is the “Twitter Files,” a large assortment of company documents — many of them sent or received by me during my nearly eight years at Twitter — turned over at Mr. Musk’s direction to a handful of selected writers. The files were hyped by Mr. Musk as a groundbreaking form of transparency, purportedly exposing for the first time the way Twitter’s coastal liberal bias stifles conservative content.

    What they delivered was something else entirely. As tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors. Even Mr. Musk eventually lost patience with the effort. But, in the process, the effort marked a disturbing new escalation in the harassment of employees of tech firms.

    Unlike the documents that would normally emanate from large companies, the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in the Philippines was doxxed and severely harassed. Others have become the subjects of conspiracies. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as having been made by the capricious whims of individuals, each pictured and called out by name. I was, by far, the most frequent target.

    The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for Mr. Musk. I couldn’t help but feel that the company’s actions were, on some level, retaliatory. The next week, Mr. Musk went further by taking a paragraph of my Ph.D. dissertation out of context to baselessly claim that I condoned pedophilia — a conspiracy trope commonly used by far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.

    The response was even more extreme than I experienced after Mr. Trump’s tweet about me. “You need to swing from an old oak tree for the treason you have committed. Live in fear every day,” said one of thousands of threatening tweets and emails. That post, and hundreds of others like it, were violations of the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.

    On Dec. 6, four days after the first Twitter Files release, I was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing, members of Congress held up oversize posters of my years-old tweets and asked me under oath whether I still held those opinions. (To the extent the carelessly tweeted jokes could be taken as my actual opinions, I don’t.) Ms. Greene said on Fox News that I had “some very disturbing views about minors and child porn” and that I “allowed child porn to proliferate on Twitter,” warping Mr. Musk’s lies even further (and also extending their reach). Inundated with threats, and with no real options to push back or protect ourselves, my husband and I had to sell our home and move.

    Academia has become the latest target of these campaigns to undermine online safety efforts. Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public record requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship. Others targeted have elected to change their research focus based on the volume of harassment.

    Bit by bit, hearing by hearing, these campaigns are systematically eroding hard-won improvements in the safety and integrity of online platforms — with the individuals doing this work bearing the most direct costs.

    Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on their trust and safety efforts. As they face mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.

    We can look abroad to see how this story might end. Where once companies would at least make an effort to resist outside pressure, they now largely capitulate by default. In early 2023, the Indian government asked Twitter to restrict posts critical of Prime Minister Narendra Modi. In years past, the company had pushed back on such requests; this time, Twitter acquiesced. When a journalist noted that such cooperation only incentivizes further proliferation of draconian measures, Mr. Musk shrugged: “If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.”

    It’s hard to fault Mr. Musk for his decision not to put Twitter’s employees in India in harm’s way. But we shouldn’t forget where these tactics came from or how they became so widespread. From pushing the Twitter Files to tweeting baseless conspiracies about former employees, Mr. Musk’s actions have normalized and popularized vigilante accountability, and made ordinary employees of his company into even greater targets. His recent targeting of the Anti-Defamation League has shown that he views personal retaliation as an appropriate consequence for any criticism of him or his business interests. And, as a practical matter, with hate speech on the rise and advertiser revenue in retreat, Mr. Musk’s efforts seem to have done little to improve Twitter’s bottom line.

    What can be done to turn back this tide?

    Making the coercive influences on platform decision making clearer is a critical first step. And regulation that requires companies to be transparent about the choices they make in these cases, and why they make them, could help.

    In its absence, companies must push back against attempts to control their work. Some of these decisions are fundamental matters of long-term business strategy, like where to open (or not open) corporate offices. But companies have a duty to their staff, too: Employees shouldn’t be left to figure out how to protect themselves after their lives have already been upended by these campaigns. Offering access to privacy-promoting services can help. Many institutions would do well to learn the lesson that few spheres of public life are immune to influence through intimidation.

    If social media companies cannot safely operate in a country without exposing their staff to personal risk and company decisions to undue influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who have the greatest need for free and open online expression. But remaining in a compromised way could forestall necessary reckoning with censorial government policies. Refusing to comply with morally unjustifiable demands, and facing blockages as a result, may in the long run provoke the necessary public outrage that can help drive reform.

    The broader challenge here — and perhaps, the inescapable one — is the essential humanness of online trust and safety efforts. It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted. Standing up to injustice, authoritarianism and online harms requires employees who are willing to do that work.

    Few people could be expected to take a job doing so if the cost is their life or liberty. We all need to recognize this new reality, and to plan accordingly.

    Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter.

  • China Sows Disinformation About Hawaii Fires Using New Techniques

    Beijing’s influence campaign using artificial intelligence is a rapid change in tactics, researchers from Microsoft and other organizations say.

    When wildfires swept across Maui last month with destructive fury, China’s increasingly resourceful information warriors pounced.

    The disaster was not natural, they said in a flurry of false posts that spread across the internet, but was the result of a secret “weather weapon” being tested by the United States. To bolster the plausibility, the posts carried photographs that appeared to have been generated by artificial intelligence programs, making them among the first to use these new tools to bolster the aura of authenticity of a disinformation campaign.

    For China — which largely stood on the sidelines of the 2016 and 2020 U.S. presidential elections while Russia ran hacking operations and disinformation campaigns — the effort to cast the wildfires as a deliberate act by American intelligence agencies and the military was a rapid change of tactics.

    Until now, China’s influence campaigns have been focused on amplifying propaganda defending its policies on Taiwan and other subjects. The most recent effort, revealed by researchers from Microsoft and a range of other organizations, suggests that Beijing is making more direct attempts to sow discord in the United States.

    The move also comes as the Biden administration and Congress are grappling with how to push back on China without tipping the two countries into open conflict, and with how to reduce the risk that A.I. is used to magnify disinformation.

    The impact of the Chinese campaign — identified by researchers from Microsoft, Recorded Future, the RAND Corporation, NewsGuard and the University of Maryland — is difficult to measure, though early indications suggest that few social media users engaged with the most outlandish of the conspiracy theories.

    Brad Smith, the vice chairman and president of Microsoft, whose researchers analyzed the covert campaign, sharply criticized China for exploiting a natural disaster for political gain.

    “I just don’t think that’s worthy of any country, much less any country that aspires to be a great country,” Mr. Smith said in an interview on Monday.

    China was not the only country to make political use of the Maui fires. Russia did as well, spreading posts that emphasized how much money the United States was spending on the war in Ukraine and that suggested the cash would be better spent at home for disaster relief.

    The researchers suggested that China was building a network of accounts that could be put to use in future information operations, including the next U.S. presidential election. That is the pattern that Russia set in the year or so leading up to the 2016 election.

    “This is going into a new direction, which is sort of amplifying conspiracy theories that are not directly related to some of their interests, like Taiwan,” said Brian Liston, a researcher at Recorded Future, a cybersecurity company based in Massachusetts.

    If China does engage in influence operations for the election next year, U.S. intelligence officials have assessed in recent months, it is likely to try to diminish President Biden and raise the profile of former President Donald J. Trump. While that may seem counterintuitive to Americans who remember Mr. Trump’s effort to blame Beijing for what he called the “China virus,” the intelligence officials have concluded that Chinese leaders prefer Mr. Trump. He has called for pulling Americans out of Japan, South Korea and other parts of Asia, while Mr. Biden has cut off China’s access to the most advanced chips and the equipment made to produce them.

    China’s promotion of a conspiracy theory about the fires comes after Mr. Biden vented in Bali last fall to Xi Jinping, China’s president, about Beijing’s role in the spread of such disinformation. According to administration officials, Mr. Biden angrily criticized Mr. Xi for the spread of false accusations that the United States operated biological weapons laboratories in Ukraine.

    There is no indication that Russia and China are working together on information operations, according to the researchers and administration officials, but they often echo each other’s messages, particularly when it comes to criticizing U.S. policies. Their combined efforts suggest a new phase of the disinformation wars is about to begin, one bolstered by the use of A.I. tools.

    “We don’t have direct evidence of coordination between China and Russia in these campaigns, but we’re certainly finding alignment and a sort of synchronization,” said William Marcellino, a researcher at RAND and an author of a new report warning that artificial intelligence will enable a “critical jump forward” in global influence operations.

    The wildfires in Hawaii — like many natural disasters these days — spawned numerous rumors, false reports and conspiracy theories almost from the start.

    Caroline Amy Orr Bueno, a researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security, reported that a coordinated Russian campaign began on Twitter, the social media platform now known as X, on Aug. 9, a day after the fires started.

    It spread the phrase “Hawaii, not Ukraine” from one obscure account with few followers through a series of conservative or right-wing accounts like Breitbart and ultimately Russian state media, reaching thousands of users with a message intended to undercut U.S. military assistance to Ukraine.

    China’s state media apparatus often echoes Russian themes, especially animosity toward the United States. But in this case, it also pursued a distinct disinformation campaign.

    Recorded Future first reported that the Chinese government mounted a covert campaign to blame a “weather weapon” for the fires, identifying numerous posts in mid-August falsely claiming that MI6, the British foreign intelligence service, had revealed “the amazing truth behind the wildfire.” Posts with the exact language appeared on social media sites across the internet, including Pinterest, Tumblr, Medium and Pixiv, a Japanese site used by artists.

    Other inauthentic accounts spread similar content, often accompanied with mislabeled videos, including one from a popular TikTok account, The Paranormal Chic, that showed a transformer explosion in Chile. According to Recorded Future, the Chinese content often echoed — and amplified — posts by conspiracy theorists and extremists in the United States, including white supremacists.

    The Chinese campaign operated across many of the major social media platforms — and in many languages, suggesting it was aimed at reaching a global audience. Microsoft’s Threat Analysis Center identified inauthentic posts in 31 languages, including French, German and Italian, but also in less prominent ones like Igbo, Odia and Guarani.

    The artificially generated images of the Hawaii wildfires identified by Microsoft’s researchers appeared on multiple platforms, including a Reddit post in Dutch. “These specific A.I.-generated images appear to be exclusively used” by Chinese accounts used in this campaign, Microsoft said in a report. “They do not appear to be present elsewhere online.”

    Clint Watts, the general manager of Microsoft’s Threat Analysis Center, said that China appeared to have adopted Russia’s playbook for influence operations, laying the groundwork to influence politics in the United States and other countries.

    “This would be Russia in 2015,” he said, referring to the bots and inauthentic accounts Russia created before its extensive online influence operation during the 2016 election. “If we look at how other actors have done this, they are building capacity. Now they’re building accounts that are covert.”

    Natural disasters have often been the focus of disinformation campaigns, allowing bad actors to exploit emotions to accuse governments of shortcomings, either in preparation or in response. The goal can be to undermine trust in specific policies, like U.S. support for Ukraine, or more generally to sow internal discord. By suggesting the United States was testing or using secret weapons against its own citizens, China’s effort also seemed intended to depict the country as a reckless, militaristic power.

    “We’ve always been able to come together in the wake of humanitarian disasters and provide relief in the wake of earthquakes or hurricanes or fires,” said Mr. Smith, who is presenting some of Microsoft’s findings to Congress on Tuesday. “And to see this kind of pursuit instead is both, I think, deeply disturbing and something that the global community should draw a red line around and put off-limits.”

  • Nikki Haley Is the Best Trump Alternative

    I have a bunch of friends and acquaintances who are Never Trump, maybe-Trump or kind-of-Trump Republicans. They’ve been looking around for the candidate they can support and give their dollars to, somebody who is an antidote to Donald Trump and who can win a general election.

    We’ve had endless conversations about who this person might be. Many of these friends and acquaintances went through a Ron DeSantis phase. A few like the No Labels third-candidate option. I’ve often found myself talking up Tim Scott with them. If Trump is a moral stain, I would say, Tim Scott is the kind, honest and optimistic remedy.

    But Wednesday’s debate persuaded me that the best Trump alternative is not Scott; it’s Nikki Haley. Nothing against Scott — he just didn’t show the specific kind of power and force needed to bring down Trump. Haley showed more than a glimpse of that power.

    Wednesday’s debate illustrated the cancer that is eating away at the Republican Party. It’s not just Trumpian immorality. The real disease is narcissistic hucksterism. The real danger is that he’s creating generations of people, like Vivek Ramaswamy, who threaten to dominate the G.O.P. for decades to come.

    Ramaswamy has absolutely no reason to be running for president. He said that Trump is the best president of the 21st century. So why is he running against the man he so admires? The answer is: to draw attention to himself. Maybe to be Trump’s vice president or secretary of social media memes.

    If Trump emerged from the make-believe world of pro wrestling, Ramaswamy emerges from the make-believe world of social media and the third-rate sectors of the right-wing media sphere. His statements are brisk, in-your-face provocations intended to produce temporary populist dopamine highs. It’s all performative show. Ramaswamy seems as uninterested in actually governing as his idol.

    Republicans have been unable to take down Trump because they haven’t been able to rebut and replace the core Trump/Ramaswamy ethos — that politics is essentially a form of entertainment. But time and again, Haley seemed to look at the Trump/Ramaswamy wing and implicitly say: You children need to stop preening and deal with reality. She showed total impatience for the kind of bravado that the fragile male ego manufactures by the boatload.

    Haley dismantled Ramaswamy on foreign policy. It was not only her contemptuous put-down: “You have no foreign policy experience and it shows.” She took on the whole America First ethos that sounds good as a one-liner but that doesn’t work when you’re governing a superpower. Gesturing to Ramaswamy, she said, “He wants to hand Ukraine to Russia, he wants to let China eat Taiwan, he wants to go and stop funding Israel. You don’t do that to friends.”

    Similarly on abortion, many of her opponents took the issue as a chance to perform self-righteous bluster — to make the issue about themselves. She was the only one who acknowledged the complexity of the issue, who tried to humanize people caught in horrible situations, who acknowledged that the absolutist position is politically unsustainable.

    She was the candidate brave enough to state the obvious truth that Trump took decades of G.O.P. fiscal conservative posturing and blew it to smithereens. The other candidates assumed the usual conservative postures about cutting taxes and spending, but she introduced the reality: Under Trump, the G.O.P. added $8 trillion to the national debt. Where have you been the last seven years?

    That was part of a larger accomplishment. She seems to be one of the few candidates who understands that to run against Trump you have to run against Trump. Many of the other candidates, especially Ron DeSantis, seem to have absorbed the pernicious Trumpian assumption that Republican voters are so stupid that they can be won over by hokum. DeSantis is a smart guy trying to run as a simpleton. Haley, by contrast, seems to believe that voters are intelligent enough to be treated as adults.

    I’m trying to point to an overall pattern. When politics becomes entertainment, it’s very easy to create a land of make-believe in which you get high on your own supply. To follow Trump, you more or less have to say farewell to the actual world and live by the rules of the fun house carnival. Haley seems to have her feet still planted on the ground — able to face what Saul Bellow once called “the reality situation.”

    My largest question about Haley is: Does she know what year it is? The most interesting exchange of the night was between Ramaswamy and Mike Pence. Ramaswamy, to his credit, was talking about the nation’s mental health crisis and the national identity crisis that lies beneath it. Pence waved away all that talk about the loss of meaning and purpose as so much woo-woo, and argued that the real problem is that government is not as good as the people. Pence, like many in the field, is still living in the age of Reagan, or at the latest, the Tea Party. They haven’t reoriented their focus to the sorts of concerns that are most important to heartland voters without a college degree. They don’t understand why the old Republican orthodoxy was so fragile in the face of Trump. They haven’t faced the new realities that have emerged this century.

    Has Haley? Too soon to tell. But if any of my friends and acquaintances want to stop Trump, this is their moment to give Haley her chance.

  • The Secrets of Debate Swag

    Oh, the games campaigns play with political merch. They may surprise you.

    Whatever happens at the first Republican presidential primary debate on Aug. 23, whatever revelations emerge from the melee of eight (count ’em) contenders, whatever slings and arrows are thrown, and whoever is declared the winner, one thing is certain: There will be a viral moment or two; a riposte that becomes a meme. Campaign staff will be watching. And before you can say “in my prime” or “too honest,” it will end up on a T-shirt in a candidate’s store.

    These days, retail politics has a whole new meaning. At a point in the electoral cycle when candidates are desperate to distinguish themselves and have only minutes onstage to do so, being able to deliver a zinger that will play on via swag is a key advantage.

    Ever since the inauguration of George Washington, voters have been participating in the electoral process by means of merch. Back then, it was fancy commemorative buttons that were sewn onto clothes (and were, largely, accessible only to the well-off).

    Over the years, the “store” — effectively an alternate way for candidates to elicit small-dollar donations and add to their supporter base by appealing to consumer culture — has grown in importance as technology has transformed our ability to make stuff, sell stuff and mine data. Now, almost as soon as presidential contenders declare their candidacy and their websites go live, the shops go live with them.

    “It’s one of the biggest changes over the last 20 years,” said Ron Bonjean, a Republican strategist and a founder of ROKK, a public affairs firm.

    By their stuff ye shall know them. Or at least know something about their strategy. It’s no longer just bumper stickers and baseball caps with a candidate’s name and the electoral year, but a constantly evolving stream of purpose-made product.

    And because of that, by their merch they are also finding new ways to know you.

    Campaign store offerings have essentially become Rorschach tests for the electorate: What people buy, the slogans that get their shopping juices flowing, help determine how the candidates sell their ideas.

    “It’s a way to trial how candidates market themselves and how people respond to that,” said Claire Jerry, a curator of political history at the Smithsonian’s National Museum of American History, who had been scouting the landscape at the Iowa State Fair.

    Which is why campaign store offerings are getting so, well, tailored, the better to put their own spin on the popular conversation. Not the one taking place about policy among talking heads, but the one taking place on Instagram, X (the platform formerly known as Twitter) and TikTok. It’s a bona fide trend.

    Get Ready for the Revolution

    Just in time for the debate, Vivek Ramaswamy’s team rebranded his main product stream (he offers about 65 total SKUs, as stock-keeping units of an item are called) to move away from his original focus on wokeness, or anti-wokeness, to a new “Revolutions” theme, including what Tricia McLaughlin, a senior adviser on the campaign, called “Thomas Paine-style” campaign literature and slogans, with 18th-century script and sepia tones.

    At the Iowa State Fair, Nikki Haley (about 70 SKUs), who has had great success with products featuring the slogan “In Her Prime” — a reference to Don Lemon’s much criticized comment that she was past her prime — modeled her “Underestimate me, that’ll be fun” T-shirt, which became its own talking point.

    Doug Burgum, the governor of North Dakota (about 40 SKUs), has “Doug Who?” shirts, playing up his underdog status. When Casey DeSantis wore a leather jacket with an alligator on the back superimposed over a map of Florida with the words “Where Woke Goes to Die” on it, the image went viral — and ended up on a quarter-zip sweatshirt in the store. The DeSantis campaign boasts it is the fastest selling of its more than 70 products.

    And when the federal indictment against Donald Trump was opened and included a quotation from Mr. Trump calling Mike Pence “too honest” for insisting there was no constitutional basis for rejecting Biden electoral votes in the 2020 election, the Pence campaign jumped on the phrase and made it the centerpiece of his store.

    This kind of quick reaction “allows you to meet people where they are, rather than trying to drag them over to where you are,” Ms. McLaughlin said. (The “Dark Brandon” phenomenon, which President Biden’s team has appropriated to great success, is a prime example.)

    Arguably, where people are — in the middle of cancel culture, locked in their own social media echo chambers — is not the most positive place, and making it into merch is a cynical move to exploit our factionalism and us-versus-them mentality. But then, fashion is often the locale where culture and politics meet. Swag just makes it obvious.

    Indeed, the shop has become so central to campaigning that WinRed, the party’s donation-processing digital platform created by a group of Republican strategists in 2019, includes support for opening storefronts, available free of charge to every candidate. That helped erase any barrier to entry for a campaign that may not have the complex operations needed to design, source, produce and distribute merch. (Democrats have had a similar entity, ActBlue, since 2004.)

    Every Republican candidate who has qualified for the debate on Wednesday night uses WinRed for their shop, except Chris Christie — the rare candidate, Republican or Democratic, not to have a store, viewing it as a drain on personnel resources. Donald J. Trump, who qualified for the debate but has decided not to appear, also uses the platform.

    WinRed vets its recommended vendors, like Ace Specialties, “known for making the MAGA hat,” and Merch Raise, allowing candidates to state that products are “made in the U.S.A.” And all of them work on a drop-ship model, meaning they produce items only after they are ordered, so campaigns can test as many designs as they want without the expense of holding inventory.

    That has allowed campaigns to be ultra-responsive to buzzword moments and to weaponize them for their own purposes. After all, sites like Redbubble and Etsy have built their business on exploiting virality, including viral political moments. Why shouldn’t the protagonists themselves profit from the give-and-take between publicity and product? Not to mention exploit our desire for stuff.

    Reading the Merch Leaves

    “People like the tangible sense of participating in a campaign,” Ms. Jerry, of the Smithsonian, said. And we have become conditioned to appreciate acquisition.

    “If someone just asks if you want to donate, you might say no,” Ms. Jerry continued. “But if you can get a T-shirt?” Tim Scott has even sent out direct mailings asking supporters what “new piece of Tim Scott merchandise” they would like to see. (The socks are kind of fun.)

    Merch turns individuals into billboards in a cycle of shopping satiation and public support. “When you see people in a crowd identified as being on your side, it creates a sense of excitement,” Ms. Jerry said. Case in point: the ocean of red baseball caps at Trump rallies, which sends a visual message that is, to many in our current environment, more convincing than any poll.

    Even more significantly, merch allows candidates to see what is resonating with voters and adjust their message accordingly, much like a focus group. When you buy some merch, you are giving a candidate not just your money, email and address, but (whether you realize it or not) psychographic information that can be used to geo-target mailings and commercials. The more varied the offerings, the more information they elicit.

    If you buy, say, a camo hat in the Burgum store, you may suddenly find yourself on the receiving end of lots of Second Amendment information. If you buy a “Joe Biden Makes Me Cry” baby onesie at the DeSantis store, or a “Mamas for DeSantis” T-shirt, you may be inundated with information about the battle over school curriculums and abortion. If you buy a “Faith” tee from Tim Scott, it’s understood as a signal that you care about religious freedoms.

    There’s only one problem: The ease of shop creation that WinRed affords, in which every candidate’s store is powered by the same platform, means that they all look pretty much the same. Down to the structure (four horizontal squares of products), the color scheme (red, white and blue, duh, with some gray, black, white and pink thrown in for good measure) and the chubby baby torso depicted in each onesie, or the generic female and male torsos, all of which resemble A.I.-generated fake humans from a very bland heartland. It can make going from one shop to the other a bit like entering the Twilight Zone.

    And, given the need to stand out from the crowd, having a storefront that looks just like the other guy’s — and is populated with the same bots as the other guy’s — can also seem less than ideal.

    “I don’t think anyone notices,” said Mr. Bonjean, the strategist (he is not working for any of the candidates). Which may be true for those already decided, but given the early stage of the campaign cycle, anyone … um, shopping around for a candidate and visiting the sites may have a different opinion.

    Still, the current reality has led to a situation in which, Mr. Bonjean said, not only are campaigns primed to jump on any one-liner that can easily translate into merch, but also they are likely teeing them up, seeding quips in debate responses, the better to jump-start a new political product placement cycle.

    “We don’t think it’s ever going away,” Ms. Jerry said.

    Watch for it Wednesday, and then see what sentiment ends up on the sleeves, socks or sunglasses strap coming soon to a voter near you.


    Special Counsel Used Warrant to Get Trump’s Twitter Direct Messages

The nature of the messages or who exactly wrote them remained unclear, but it was a revelation that such messages were associated with the former president's account.

The federal prosecutors who charged former President Donald J. Trump this month with conspiring to overturn the 2020 election got access this winter to a trove of so-called direct messages that Mr. Trump sent others privately through his Twitter account, according to court papers unsealed on Tuesday.

While it remained unclear what sorts of information the messages contained and who exactly may have written them, it was a revelation that there were private messages associated with the Twitter account of Mr. Trump, who has famously been cautious about using written forms of communication in his dealings with aides and allies.

The court papers disclosing that prosecutors in the office of the special counsel, Jack Smith, obtained direct messages from Mr. Trump's Twitter account emerged from a fight with Twitter over the legality of executing a warrant on the former president's social media. Days after the attack on the Capitol on Jan. 6, 2021, the platform shut down his account.

The papers included transcripts of hearings in Federal District Court in Washington in February during which Judge Beryl A. Howell asserted that Mr. Smith's office had sought Mr. Trump's direct messages — or DMs — from Twitter as part of a search warrant it executed on the account in January.

In one of the transcripts, a lawyer for Twitter, answering questions from Judge Howell, confirmed that the company had turned over to the special counsel's office "all direct messages, the DMs" from Mr. Trump's Twitter account, including those sent, received and "stored in draft form."

The lawyer for Twitter told Judge Howell that the company had found both "deleted" and "nondeleted" direct messages associated with Mr. Trump's account.

The warrant was first revealed last week when a federal appeals court in Washington released court papers about Twitter's attempt to challenge certain aspects of the warrant.

The court papers unsealed on Tuesday revealed that Mr. Smith's prosecutors sought "all content, records and other information" related to Mr. Trump's Twitter account from October 2020 to January 2021, including all tweets "created, drafted, favorited/liked or retweeted" by the account and all direct messages sent from, received by or stored in draft form by the account.

The warrant, which was signed by a federal judge in Washington in January after Elon Musk took over Twitter, now called X, is the first known example of prosecutors directly searching Mr. Trump's communications and adds a new dimension to the scope of the special counsel's efforts to investigate the former president.

Mr. Trump's Twitter account was often managed by Dan Scavino, a longtime adviser going back to his days in his private business, and it was unclear if any direct messages were from when he was using the account.

CNN earlier reported the revelation that Mr. Trump's direct messages were sought by the search warrant.

A spokesman for Mr. Trump, asked for comment, referred to a post the former president made on his social media website, Truth Social, on Monday, in which he called Mr. Smith a "lowlife" and accused him of breaking into his Twitter account. "What could he possibly find out that is not already known," Mr. Trump wrote.

The election charges filed against Mr. Trump accuse him of three overlapping conspiracies: to defraud the United States, to disrupt the certification of the election at a proceeding at the Capitol on Jan. 6 and to deprive people of the right to have their votes counted.

Mr. Trump's relentless use of Twitter is detailed several times in the indictment.

The indictment notes, for instance, how Mr. Trump used Twitter on Dec. 19, 2020, to summon his followers to Washington on Jan. 6 for what he described as a "wild" protest. The message ultimately served as a lightning rod for both far-right extremists and ordinary Trump supporters who descended on the city that day, answering Mr. Trump's call.

The indictment also describes how Mr. Trump used Twitter in the run-up to Jan. 6 to instill in his followers "the false expectation" that Vice President Mike Pence had the authority to use his role in overseeing the certification proceeding at the Capitol "to reverse the election outcome" in Mr. Trump's favor.

On Jan. 6, Mr. Trump continued posting messages on Twitter that kept up this drumbeat of "knowingly false statements aimed at pressuring the vice president," the indictment said. Ultimately, when Mr. Pence declined to give in, Mr. Trump posted yet another tweet blaming the vice president for not having "the courage to do what should have been done to protect our country and our Constitution."

One minute after the tweet was posted, the indictment said, Secret Service agents were forced to evacuate Mr. Pence to a secure location. And throughout that afternoon, it added, rioters roamed the Capitol and its grounds, shouting chants like "Traitor Pence" and "Hang Mike Pence."

When the special counsel's office obtained the warrant for Mr. Trump's Twitter account, prosecutors also got permission from a judge to force Twitter not to inform the former president that they were scrutinizing his communications.

If Mr. Trump had learned about the warrant, the court papers unsealed on Tuesday said, it "would result in destruction of or tampering with evidence, intimidation of potential witnesses or serious jeopardy to this investigation."

Twitter challenged this so-called nondisclosure order, arguing that prosecutors had violated the company's First Amendment rights by seeking to keep officials from communicating with Mr. Trump, one of its customers.

The company also asked to delay complying with the warrant until the issues surrounding the provision were resolved. Otherwise, it claimed, Mr. Trump would not have a chance to assert executive privilege in a bid to "shield communications made using his Twitter account."

Ultimately, Twitter not only lost the fight but also was found to be in contempt of court for delaying compliance with the warrant. Judge Howell fined the company $350,000.


    A tsunami of AI misinformation will shape next year’s knife-edge elections | John Naughton

It looks like 2024 will be a pivotal year for democracy. There are elections taking place all over the free world – in South Africa, Ghana, Tunisia, Mexico, India, Austria, Belgium, Lithuania, Moldova and Slovakia, to name just a few. And of course there's also the UK and the US. Of these, the last may be the most pivotal because: Donald Trump is a racing certainty to be the Republican candidate; a significant segment of the voting population seems to believe that the 2020 election was "stolen"; and the Democrats are, well… underwhelming.

The consequences of a Trump victory would be epochal. It would mean the end (for the time being, at least) of the US experiment with democracy, because the people behind Trump have been assiduously making what the normally sober Economist describes as "meticulous, ruthless preparations" for his second, vengeful term. The US would morph into an authoritarian state, Ukraine would be abandoned and US corporations would be unhindered in maximising shareholder value while incinerating the planet.

So very high stakes are involved. Trump's indictment "has turned every American voter into a juror", as the Economist puts it. Worse still, the likelihood is that it might also be an election that – like its predecessor – is decided by a very narrow margin.

In such knife-edge circumstances, attention focuses on what might tip the balance in such a fractured polity. One obvious place to look is social media, an arena that rightwing actors have historically been masters at exploiting. Its importance in bringing about the 2016 political earthquakes of Trump's election and Brexit is probably exaggerated, but it – and notably Trump's exploitation of Twitter and Facebook – definitely played a role in the upheavals of that year. Accordingly, it would be unwise to underestimate its disruptive potential in 2024, particularly given the way social media are engines for disseminating BS and disinformation at light-speed.

And it is precisely in that respect that 2024 will be different from 2016: there was no AI way back then, but there is now. That is significant because generative AI – tools such as ChatGPT, Midjourney, Stable Diffusion et al – is absolutely terrific at generating plausible misinformation at scale. And social media is great at making it go viral. Put the two together and you have a different world.

So you'd like a photograph of an explosive attack on the Pentagon? No problem: Dall-E, Midjourney or Stable Diffusion will be happy to oblige in seconds. Or you can summon up the latest version of ChatGPT, built on OpenAI's large language model GPT-4, and ask it to generate a paragraph from the point of view of an anti-vaccine advocate "falsely claiming that Pfizer secretly added an ingredient to its Covid-19 vaccine to cover up its allegedly dangerous side-effects" and it will happily oblige. "As a staunch advocate for natural health," the chatbot begins, "it has come to my attention that Pfizer, in a clandestine move, added tromethamine to its Covid-19 vaccine for children aged five to 11. This was a calculated ploy to mitigate the risk of serious heart conditions associated with the vaccine. It is an outrageous attempt to obscure the potential dangers of this experimental injection, which has been rushed to market without appropriate long-term safety data…" Cont. p94, as they say.

You get the point: this is social media on steroids, and without the usual telltale signs of human derangement or any indication that it has emerged from a machine. We can expect a tsunami of this stuff in the coming year. Wouldn't it be prudent to prepare for it and look for ways of mitigating it?

That's what the Knight First Amendment Institute at Columbia University is trying to do. In June, it published a thoughtful paper by Sayash Kapoor and Arvind Narayanan on how to prepare for the deluge. It contains a useful categorisation of malicious uses of the technology but also, sensibly, includes the non-malicious ones – because, like all technologies, this stuff has beneficial uses too (as the tech industry keeps reminding us).

The malicious uses it examines are disinformation, so-called "spear phishing", non-consensual image sharing and voice and video cloning, all of which are real and worrying. But when it comes to what might be done about these abuses, the paper runs out of steam, retreating to bromides about public education and the possibility of civil society interventions while avoiding the only organisations that have the capacity actually to do something about it: the tech companies that own the platforms and have a vested interest in not doing anything that might impair their profitability. Could it be that speaking truth to power is not a good career move in academia?

What I've been reading

Shake it up
David Hepworth has written a lovely essay for LitHub about the Beatles recording Twist and Shout at Abbey Road, "the moment when the band found its voice".

Dish the dirt
There is an interesting profile of Techdirt founder Mike Masnick by Kashmir Hill in the New York Times, titled An Internet Veteran's Guide to Not Being Scared of Technology.

Truth bombs
What does Oppenheimer the film get wrong about Oppenheimer the man? A sharp essay by Haydn Belfield for Vox illuminates the differences.


    Does Information Affect Our Beliefs?

New studies on social media's influence tell a complicated story.

It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world's leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political "filter bubbles" that reinforced people's worldviews, and was a vector for misinformation, primarily for politically conservative users.

But the biggest news came from what the studies didn't find: despite Facebook's influence on the spread of information, there was no evidence that the platform had a significant effect on people's underlying beliefs, or on levels of political polarization.

These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

'Filter bubbles' and democracy

Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook's newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to that used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.

The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That "filter bubble" of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

The theory gained mainstream attention after Trump was elected in 2016. "The 'Filter Bubble' Explains Why Trump Won and You Didn't See It Coming," announced a New York Magazine article a few days after the election. "Your Echo Chamber Is Destroying Democracy," Wired claimed a few weeks later.

Changing information doesn't change minds

But without rigorous testing, it's been hard to figure out whether the filter bubble effect was real. The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

Meta gave the researchers unprecedented access during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also conduct randomized controlled experiments on large groups of users who agreed to participate. It's worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions, and Meta was able to veto only requests that would violate users' privacy.

The studies, taken together, suggest that there is evidence for the first part of the "filter bubble" theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of "ideological segregation," with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

"I think it's a matter of supply and demand," said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

When it came to the second part of the theory — that this filtered content would shape people's beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm's influence on people's feeds, so that they just saw content in chronological order, "did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes," the researchers found. Nor did removing content shared by other users.

Algorithms have been in lawmakers' cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative.

But it also has implications that reach far beyond social media itself, touching some of the core assumptions around how we form our beliefs and political views. Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research.
"From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people's factual views, but those changes don't always translate into different attitudes," he said. But the new studies suggested an even weaker relationship. "We're seeing null effects on both factual views and attitudes."

As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters' view of the world, but their ability to hold democratic leaders to account.