More stories

  • I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online.

    When I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following the violence of Jan. 6, I helped make the call to ban his account from Twitter altogether. Nothing prepared me for what would happen next.

    Backed by fans on social media, Mr. Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move.

    This isn’t a story I relish revisiting. But I’ve learned that what happened to me wasn’t an accident. It wasn’t just personal vindictiveness or “cancel culture.” It was a strategy — one that affects not just targeted individuals like me, but all of us, as it is rapidly changing what we see online.

    Private individuals — from academic researchers to employees of tech companies — are increasingly the targets of lawsuits, congressional hearings and vicious online attacks. These efforts, staged largely by the right, are having their desired effect: Universities are cutting back on efforts to quantify abusive and misleading information spreading online. Social media companies are shying away from making the kind of difficult decisions my team did when we intervened against Mr. Trump’s lies about the 2020 election. Platforms had finally begun taking these risks seriously only after the 2016 election. Now, faced with the prospect of disproportionate attacks on their employees, companies seem increasingly reluctant to make controversial decisions, letting misinformation and abuse fester in order to avoid provoking public retaliation.

    These attacks on internet safety and security come at a moment when the stakes for democracy could not be higher. More than 40 major elections are scheduled to take place in 2024, including in the United States, the European Union, India, Ghana and Mexico. These democracies will most likely face the same risks of government-backed disinformation campaigns and online incitement of violence that have plagued social media for years. We should be worried about what happens next.

    My story starts with that fact check. In the spring of 2020, after years of internal debate, my team decided that Twitter should apply a label to a tweet of then-President Trump’s that asserted that voting by mail is fraud-prone, and that the coming election would be “rigged.” “Get the facts about mail-in ballots,” the label read.

    On May 27, the morning after the label went up, the White House senior adviser Kellyanne Conway publicly identified me as the head of Twitter’s site integrity team. The next day, The New York Post put several of my tweets making fun of Mr. Trump and other Republicans on its cover. I had posted them years earlier, when I was a student and had a tiny social media following of mostly my friends and family. Now, they were front-page news. Later that day, Mr. Trump tweeted that I was a “hater.”

    Legions of Twitter users, most of whom days prior had no idea who I was or what my job entailed, began a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed. The volume of Twitter notifications crashed my phone. Friends I hadn’t heard from in years expressed their concern. On Instagram, old vacation photos and pictures of my dog were flooded with threatening comments and insults. (A few commenters, wildly misreading the moment, used the opportunity to try to flirt with me.)

    I was embarrassed and scared. Up to that moment, no one outside of a few fairly niche circles had any idea who I was. Academics studying social media call this “context collapse”: things we post on social media with one audience in mind might end up circulating to a very different audience, with unexpected and destructive results. In practice, it feels like your entire world has collapsed.

    The timing of the campaign targeting me and my alleged bias suggested the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives. But the success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing. As the former Republican Party chair Rich Bond once described, maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction. What happened to me was part of a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps.

    It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey, then the C.E.O. of Twitter, overruled Trust and Safety’s recommendation that Mr. Trump’s account should be banned because of several tweets, including one that attacked Vice President Mike Pence. He was given a 12-hour timeout instead (before being banned on Jan. 8). Within the boundaries of the rules, staff members were encouraged to find solutions to help the company avoid the type of blowback that results in angry press cycles, hearings and employee harassment. The practical result was that Twitter gave offenders greater latitude: Representative Marjorie Taylor Greene was permitted to violate Twitter’s rules at least five times before one of her accounts was banned in 2022. Other prominent right-leaning figures, such as the culture war account Libs of TikTok, enjoyed similar deference.

    Similar tactics are being deployed around the world to influence platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after the government asked us to block accounts involved in a series of protests. The harassment again paid off: Twitter executives decided any potentially sensitive actions in India would require top-level approval, a unique level of escalation of otherwise routine decisions.

    And when we wanted to disclose a propaganda campaign operated by a branch of the Indian military, our legal team warned us that our India-based employees could be charged with sedition — and face the death penalty if convicted. So Twitter only disclosed the campaign over a year later, without fingering the Indian government as the perpetrator.

    In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.

    In each of these cases, the targeted staffers lacked the ability to do what was being asked of them by the government officials in charge, as the underlying decisions were made thousands of miles away in California. But because local employees had the misfortune of residing within the jurisdiction of the authorities, they were nevertheless the targets of coercive campaigns, pitting companies’ sense of duty to their employees against whatever values, principles or policies might cause them to resist local demands. Inspired, India and a number of other countries started passing “hostage-taking” laws to ensure social-media companies employ locally based staff.

    In the United States, we’ve seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and — in Twitter’s case — by the company’s new owner.

    One of the most recent forces in this campaign is the “Twitter Files,” a large assortment of company documents — many of them sent or received by me during my nearly eight years at Twitter — turned over at Mr. Musk’s direction to a handful of selected writers. The files were hyped by Mr. Musk as a groundbreaking form of transparency, purportedly exposing for the first time the way Twitter’s coastal liberal bias stifles conservative content.

    What they delivered was something else entirely. As tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors. Even Mr. Musk eventually lost patience with the effort. But, in the process, the effort marked a disturbing new escalation in the harassment of employees of tech firms.

    Unlike the documents that would normally emanate from large companies, the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in the Philippines was doxxed and severely harassed. Others have become the subjects of conspiracies. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as having been made by the capricious whims of individuals, each pictured and called out by name. I was, by far, the most frequent target.

    The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for Mr. Musk. I couldn’t help but feel that the company’s actions were, on some level, retaliatory. The next week, Mr. Musk went further by taking a paragraph of my Ph.D. dissertation out of context to baselessly claim that I condoned pedophilia — a conspiracy trope commonly used by far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.

    The response was even more extreme than I experienced after Mr. Trump’s tweet about me. “You need to swing from an old oak tree for the treason you have committed. Live in fear every day,” said one of thousands of threatening tweets and emails. That post, and hundreds of others like it, were violations of the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.

    On Dec. 6, four days after the first Twitter Files release, I was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing, members of Congress held up oversize posters of my years-old tweets and asked me under oath whether I still held those opinions. (To the extent the carelessly tweeted jokes could be taken as my actual opinions, I don’t.) Ms. Greene said on Fox News that I had “some very disturbing views about minors and child porn” and that I “allowed child porn to proliferate on Twitter,” warping Mr. Musk’s lies even further (and also extending their reach). Inundated with threats, and with no real options to push back or protect ourselves, my husband and I had to sell our home and move.

    Academia has become the latest target of these campaigns to undermine online safety efforts. Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public record requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship. Others targeted have elected to change their research focus based on the volume of harassment.

    Bit by bit, hearing by hearing, these campaigns are systematically eroding hard-won improvements in the safety and integrity of online platforms — with the individuals doing this work bearing the most direct costs.

    Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on their trust and safety efforts. As they face mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.

    We can look abroad to see how this story might end. Where once companies would at least make an effort to resist outside pressure, they now largely capitulate by default. In early 2023, the Indian government asked Twitter to restrict posts critical of Prime Minister Narendra Modi. In years past, the company had pushed back on such requests; this time, Twitter acquiesced. When a journalist noted that such cooperation only incentivizes further proliferation of draconian measures, Mr. Musk shrugged: “If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.”

    It’s hard to fault Mr. Musk for his decision not to put Twitter’s employees in India in harm’s way. But we shouldn’t forget where these tactics came from or how they became so widespread. From pushing the Twitter Files to tweeting baseless conspiracies about former employees, Mr. Musk’s actions have normalized and popularized vigilante accountability, and made ordinary employees of his company into even greater targets. His recent targeting of the Anti-Defamation League has shown that he views personal retaliation as an appropriate consequence for any criticism of him or his business interests. And, as a practical matter, with hate speech on the rise and advertiser revenue in retreat, Mr. Musk’s efforts seem to have done little to improve Twitter’s bottom line.

    What can be done to turn back this tide?

    Making the coercive influences on platform decision making clearer is a critical first step. And regulation that requires companies to be transparent about the choices they make in these cases, and why they make them, could help.

    In its absence, companies must push back against attempts to control their work. Some of these decisions are fundamental matters of long-term business strategy, like where to open (or not open) corporate offices. But companies have a duty to their staff, too: Employees shouldn’t be left to figure out how to protect themselves after their lives have already been upended by these campaigns. Offering access to privacy-promoting services can help. Many institutions would do well to learn the lesson that few spheres of public life are immune to influence through intimidation.

    If social media companies cannot safely operate in a country without exposing their staff to personal risk and company decisions to undue influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who have the greatest need for free and open online expression. But remaining in a compromised way could forestall necessary reckoning with censorial government policies. Refusing to comply with morally unjustifiable demands, and facing blockages as a result, may in the long run provoke the necessary public outrage that can help drive reform.

    The broader challenge here — and perhaps, the inescapable one — is the essential humanness of online trust and safety efforts. It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted. Standing up to injustice, authoritarianism and online harms requires employees who are willing to do that work.

    Few people could be expected to take a job doing so if the cost is their life or liberty. We all need to recognize this new reality, and to plan accordingly.

    Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter.

  • Behold the Free Speech Chutzpah of the Republican Party

    A solid majority of Republicans continues to believe that Donald Trump won the 2020 election — evidence to the contrary notwithstanding. Virtually all Democrats believe that Trump did, in fact, lose the 2020 election and that Biden won fair and square.

    Now in an extraordinary display of chutzpah, Representative Jim Jordan, Republican of Ohio, and fellow Republicans on the House Judiciary Committee have accused Democrats of violating the First Amendment rights of election deniers.

    In a June 26, 2023, interim staff report, Jordan and his colleagues charged that the Biden administration “colluded with big tech and ‘disinformation’ partners to censor” those who claimed that Trump won in 2020.

    The report, “The Weaponization of CISA: How a ‘Cybersecurity’ Agency Colluded With Big Tech and ‘Disinformation’ Partners to Censor Americans,” makes the argument that

    The First Amendment recognizes that no person or entity has a monopoly on the truth, and that the “truth” of today can quickly become the “misinformation” of tomorrow. Labeling speech “misinformation” or “disinformation” does not strip it of its First Amendment protection. As such, under the Constitution, the federal government is strictly prohibited from censoring Americans’ political speech.

    These civil libertarian claims of unconstitutional suppression of speech come from the same Republican Party that is leading the charge to censor the teaching of what it calls “divisive concepts” about race; the same party that expelled two Democratic members of the Tennessee state legislature who loudly called for more gun control after a school shooting; the same party that threatens to impeach a liberal judge in North Carolina for speaking out about racial bias; the same party that has aided and abetted book banning in red states across the country.

    In other words, it is Republicans who have become the driving force in deploying censorship to silence the opposition, simultaneously claiming that their own First Amendment rights are threatened by Democrats.

    One of the most egregious examples of Republican censorship is taking place in North Carolina, where a state judicial commission has initiated an investigation of Anita Earls, a Black State Supreme Court justice, because she publicly called for increased diversity in the court system.

    A June 2 Law360 piece examined the racial and gender composition of the North Carolina judiciary and found “that out of 22 appellate jurists — seven state Supreme Court justices and 15 Court of Appeals judges — 64 percent are male and 86 percent are white.”

    The article then quoted Earls: “It has been shown by social scientists that diverse decision-making bodies do a better job. … I really feel like everyone’s voice needs to be heard, and if you don’t have a diverse judicial system, perspectives and views are not being heard, you’re not making decisions that are in the interests of the entire society. And I feel like that’s wrong.”

    On Aug. 15, the North Carolina Judicial Standards Commission notified Earls that it was opening an investigation “based on an interview you since gave to the media in which you appear to allege that your Supreme Court colleagues are acting out of racial, gender, and/or political bias in some of their decision-making.”

    Earls’s interview, the notification letter continued, “potentially violates Canon 2A of the Code of Judicial Conduct which requires a judge to conduct herself ‘at all times in a manner which promotes public confidence in the integrity and impartiality of the judiciary.’”

    On Aug. 29, Earls filed suit in federal court charging that there is “an ongoing campaign on the part of the North Carolina Judicial Standards Commission to stifle” her First Amendment free-speech rights “and expose her to punishment that ranges from a letter of caution that becomes part of a permanent file available to any entity conducting a background check to removal from the bench.”

    At the center of Republican efforts to censor ideological adversaries is an extensive drive to regulate what is taught in public schools and colleges.

    In an Education Week article published last year, “Here’s the Long List of Topics Republicans Want Banned From the Classroom,” Sarah Schwartz and Eesha Pendharkar provided a laundry list of Republican state laws regulating education:

    Since January 2021, 14 states have passed into law what’s popularly referred to as “anti-critical race theory” legislation. These laws and orders, combined with local actions to restrict certain types of instruction, now impact more than one out of every three children in the country, according to a recent study from UCLA.

    Schwartz and Pendharkar also noted that “many of these new bills propose withholding funding from school districts that don’t comply with these regulations. Some, though, would allow parents to sue individual educators who provide banned material to students, potentially collecting thousands of dollars.”

    What’s more, “Most prohibited teaching a list of ‘divisive concepts,’ which originally appeared in an executive order signed by then-President Donald Trump in fall 2020.”

    The Trump order, “Combating Race and Sex Stereotyping,” included prohibitions on the following “divisive concepts”:

    That an individual, by virtue of his or her race or sex, bears responsibility for actions committed in the past by other members of the same race or sex; that any individual should feel discomfort, guilt, anguish, or any other form of psychological distress on account of his or her race or sex; or that meritocracy or traits such as a hard work ethic are racist or sexist, or were created by a particular race to oppress another race.

    The censorship effort has been quite successful.

    In a February 2022 article, “New Critical Race Theory Laws Have Teachers Scared, Confused and Self-censoring,” The Washington Post reported that “in 13 states, new laws or directives govern how race can be taught in schools, in some cases creating reporting systems for complaints. The result, teachers and principals say, is a climate of fear around how to comply with rules they often do not understand.”

    Larry Summers, a former president of Harvard who is a professor of economics there, argued in an email that issues of free speech are not easily resolved.

    The problem, Summers wrote, “comes from both sides. Ron DeSantis’s efforts to limit what he regards as critical race theory is deplorable as are efforts on Ivy League campuses to discredit and devalue those with unfashionable beliefs about diversity or the role of genes or things military.”

    But, Summers continued,

    It’s sometimes a bit harder than the good guys make out. What about cultures of intolerance where those who, for example, believe in genetic determinism are shunned, and graduate students all exhibit their academic freedom rights to not be the teaching fellows of faculty with those beliefs. Does ideological diversity mean philosophy departments need to treat Ayn Rand with dignity or biology departments need to hear out creationism?

    “What about professional schools where professional ethics are part of what is being instilled?” Summers asked:

    Could a law school consider hiring a lawyer who, while in government, defended coercive interrogation practices? Under what circumstances should one accept, perhaps insist on university leaders criticizing speech? I have been fond of saying academic freedom does not include freedom from criticism but when should leaders speak out? Was I right to condemn calls for divesting in Israel as antisemitic in effect, if not intent? When should speech be attacked?

    There is, at this moment, a nascent mobilization on many campuses of organizations determined to defend free speech rights, to reject the sanctioning of professors and students, and to ensure the safety of controversial speakers.

    Graduates of 22 colleges and universities have formed branches of the Alumni Free Speech Alliance “to support free speech, academic freedom, and viewpoint diversity.”

    At Harvard, 133 members of the faculty have joined the Council on Academic Freedom at Harvard, dedicated to upholding the free speech guidelines adopted by the university in 1990:

    Free speech is uniquely important to the university because we are a community committed to reason and rational discourse. Free interchange of ideas is vital for our primary function of discovering and disseminating ideas through research, teaching, and learning.

    Steven Pinker, a psychology professor at the school and a founder of the group, wrote in an email that achieving this goal is much tougher than generally believed:

    To understand the recent assaults on free speech, we need to flip the question: Not why diverse opinions are being suppressed, but why they are tolerated. Freedom of speech is an exotic, counterintuitive concept. What’s intuitive is that the people who disagree with me are spreading dangerous falsehoods and must be stifled for the greater good. The realization that everyone feels this way, that all humans are fallible, that however confident I am in my beliefs, I may be wrong, and that the only way we can collectively approach the truth is to allow opinions to be expressed and then evaluate them, requires feats of abstraction and self-control.

    The example I cited at the beginning of this column — the charge that the Biden administration “colluded with big tech and ‘disinformation’ partners to censor” the claims of election deniers — has proved to be a case study of a successful Republican tactic on several fronts.

    Republicans claimed the moral high ground as the victims of censorship, throwing their adversaries on the defensive and quieting their opponents.

    On June 6, The Washington Post reported, in “These Academics Studied Falsehoods Spread by Trump. Now the G.O.P. Wants Answers,” that

    The pressure has forced some researchers to change their approach or step back, even as disinformation is rising ahead of the 2024 election. As artificial intelligence makes deception easier and platforms relax their rules on political hoaxes, industry veterans say they fear that young scholars will avoid studying disinformation.

    One of the underlying issues in the free speech debate is the unequal distribution of power. Paul Frymer, a political scientist at Princeton, raised a question in reply to my email: “I wonder if the century-long standard for why we defend free speech — that we need a fairly absolute marketplace of ideas to allow all ideas to be heard (with a few exceptions), deliberated upon, and that the truth will ultimately win out — is a bit dated in this modern era of social media, algorithms and most importantly profound corporate power.”

    While there has always been a corporate skew to speech, Frymer argued,

    in the modern era, technology enables such an overwhelming drowning out of different ideas. How long are we hanging on to the protection of a hypothetical — that someone will find the truth on the 40th page of a Google search or a podcast with no corporate backing? How long do we defend a hypothetical when the reality is so strongly skewed toward the suppression of the meaningful exercise of free speech?

    Frymer contended that

    We do seem to need regulation of speech, in some form, more than ever. I’m not convinced we can’t find a way to do it that would enable our society to be more just and informed. The stakes — the fragility of democracy, the increasing hatred and violence on the basis of demographic categories, and the health of our planet — are extremely high to defend a single idea with no compromise.

    Frymer suggested that ultimately

    We can’t consider free speech without at least some understanding of power. We can’t assume in all contexts that the truth will ever come out; unregulated speech does not mean free speech.

    From a different vantage point, Robert C. Post, a law professor at Yale, argued in an email that the censorship/free speech debate has run amok:

    It certainly has gone haywire. The way I understand it is that freedom of speech has not been a principled commitment, but has been used instrumentally to attain other political ends. The very folks who were so active in demanding freedom of speech in universities have turned around and imposed unconscionable censorship on schools and libraries. The very folks who have demanded a freedom of speech for minority groups have sought to suppress offensive and racist speech.

    The framing in the current debate over free speech and the First Amendment, Post contends, is dangerously off-kilter. He sent me an article he wrote that will be published shortly by the scholarly journal Daedalus, “The Unfortunate Consequences of a Misguided Free Speech Principle.” In it, he notes that the issues are not just more complex than generally recognized, but in fact distorted by false assumptions.

    Post makes the case that there is “a widespread tendency to conceptualize the problem as one of free speech. We imagine that the crisis would be resolved if only we could speak more freely.” In fact, he writes, “the difficulty we face is not one of free speech, but of politics. Our capacity to speak has been disrupted because our politics has become diseased.”

    He specifically faults a widely read March 2022 Times editorial, “America Has a Free Speech Problem,” that warned

    Americans are losing hold of a fundamental right as citizens of a free country: the right to speak their minds and voice their opinions in public without fear of being shamed or shunned.

    Post observes that

    No such right exists in any well-ordered society. If I walk into a room shouting outrageous slurs, I should expect to be shamed and shunned. Only a demoralized community would passively accept irresponsibly hurtful speech.

    People constantly “balance self-restraint against the need for candor.”

    Arguments that the protection of free speech is crucial to the preservation of democracy, Post maintains, “encourage us to forget that the fundamental point of public discourse is the political legitimation of the state. Our public discourse is successful when it produces a healthy public opinion capable of making state power answerable to politics.”

    In Post’s view, polarization “is not a simple question of speech. It is the corrosive dissolution of the political commitments by which Americans have forged themselves into a single nation. If we conceptualize public discourse as a social practice, we can see that its failures stem from this fundamental problem.”

    In this context, Post concludes,

    Politics is possible only when diverse persons agree to be bound by a common fate. Lacking that fundamental commitment, politics can easily slide into an existential struggle for survival that is the equivalent of war. We can too easily come to imagine our opponents as enemies, whose victory would mean the collapse of the nation.

    In such circumstances, Post continues,

    Political debate can no longer produce a healthy and legitimate democratic will. However inclusive we may make our public discourse, however tolerant of the infinite realms of potential diversity we may become, the social practice of public discourse will fail to achieve its purpose so long as we no longer experience ourselves as tied to a common destiny.

    “We cannot now speak to each other because something has already gone violently wrong with our political community,” Post writes. “The underlying issue is not our speech, but our politics. So long as we insist on allegiance to a mythical free speech principle that exists immaculately distinct from the concrete social practices, we shall look for solutions in all the wrong places.”

  • A Bipartisan Plan to Limit Big Tech

    More from our inbox:
    DeSantis Admits the Inconvenient Truth: Trump Lost
    Scenarios for a Trump Trial and the Election
    ‘Thank You, Mr. Trump’
    Mushroom Clouds
    Macho C.E.O.s

    To the Editor:

    Re “We Have a Way for Congress to Rein In Big Tech,” by Lindsey Graham and Elizabeth Warren (Opinion guest essay, July 27):

    The most heartening thing about the proposal for a Digital Consumer Protection Commission is its authorship.

    After years of zero-sum legislative gridlock, to see Senators Warren and Graham collaborating is a ray of hope that governing may someday return to the time when opposing parties were not enemies, when each party brought valid perspectives to the table and House-Senate conference committees forged legislation encompassing the best of both perspectives.

    David Sadkin
    Bradenton, Fla.

    To the Editor:

    Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

    A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

    America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

    The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

    The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

    Adam Thierer
    Washington
    The writer is a senior fellow in technology policy at the free-market R Street Institute.

    To the Editor:

    The regulation of social media, rapidly emerging A.I. and the internet in general is long overdue. Like the telephone more than a century earlier, as any new technology evolves from novelty to convenience to ubiquitous necessity used by billions of people, so must its regulation for the common good.

    Jay P. Maille
    Pleasanton, Calif.

    DeSantis Admits the Inconvenient Truth: Trump Lost

    To the Editor:

    Re “DeSantis Acknowledges Trump’s Defeat: ‘Of Course He Lost’” (news article, Aug. 8):

    It is sad to see a politician turn toward the hard truth only in desperation, but that is what the failing and flailing Republican presidential candidate Ron DeSantis has done.

    Mr. DeSantis is not stupid. He has known all along that Joe Biden was the legitimate winner of the 2020 presidential election, but until now, he hedged when asked about it, hoping not to alienate supporters of Donald Trump.

    Now Mr. DeSantis says: “Of course he lost. Joe Biden is the president.”

    In today’s Republican Party, telling the inconvenient truth will diminish a candidate’s support from the die-hard individuals who make up the party’s base.

    We have reached a sad point in the history of our country when we have come to feel that a politician who tells the truth is doing something extraordinary and laudable.

    Oren Spiegler
    Peters Township, Pa.

    Scenarios for a Trump Trial and the Election

    To the Editor:

    Re “Layered Case in Indictment Reduces Risk” (news analysis, front page, Aug. 6):

    It may well be that the special prosecutor, Jack Smith, has fashioned an indictment ideally suited for achieving a conviction of Donald Trump. However, even in the event that the trial comes before the election, there is little reason to believe that it will relieve us of the scourge of Mr. Trump’s influence on American life.

    First, there is the possibility of a hung jury, even in Washington, D.C. Such an outcome would be treated by Trump supporters as an outright exoneration.

    A conviction would not undermine his support any more than his myriad previous shocking transgressions. While the inevitable appeals would last well past the election, his martyrdom might improve his electoral chances.

    And were he to lose the election, he would surely claim that he lost only because of these indictments. Here he would have a powerful argument because so many of us hope that the indictments will have precisely that effect.

    The alternative, that he wins the election, either before or after the trial, is too dreadful to contemplate.

    If there is anything that can terminate the plague of Trumpism, it is for a few prominent Republicans whose seniority makes their voices important — Mitch McConnell, Mitt Romney and George W. Bush — to speak out and unequivocally state that Donald Trump is unfit for office. That they all believe this is generally acknowledged.

    If they fail to defend American democracy at this time, they will be complicit in what Trumpism does to the Republican Party and to the Republic.

    Robert N. Cahn
    Walnut Creek, Calif.

    ‘Thank You, Mr. Trump’

    To the Editor:

    Re “Playing Indicted Martyr, Trump Draws In His Base” (news article, Aug. 9):

    Thank you, Mr. Trump, for sacrificing yourself for the greater good. And when you spend years and years and years in prison, we will never forget what you did to (oops, I mean for) us.

    Winnie Boal
    Cincinnati

    Mushroom Clouds

    To the Editor:

    Re “A Symbol Evoking Both Pride and Fear,” by Nicolas Rapold (Critic’s Notebook, Arts, Aug. 1):

    Richland High School in Washington State is in an area, highly restricted during World War II, where plutonium essential to building the first atomic bombs was produced. As in areas of New Mexico, there have been numerous “downwind” cancer cases, as well as leakage of contaminated water into the Columbia River basin.

    Bizarrely, Richland High’s athletic teams are called the Bombers; a mushroom cloud is their symbol on uniforms and the gym floor. This must be the worst “mascot” on earth.

    Nancy Anderson
    Seattle

    Macho C.E.O.s

    To the Editor:

    Re “We’re in the Era of the ‘Top Gun’ C.E.O.” (Sunday Business, July 30):

    The propensity of the current class of business leaders to grab at team-building gimmicks knows no bounds. Simulating the role of fighter pilots at $100,000 a pop might give a C.E.O. a fleeting feeling of exhilaration, but it is a poor substitute for actual team-building.

    That happens when organizations and compensation levels are flattened to more down-to-earth levels. With some C.E.O.s pulling in pay rewards that are hundreds, if not thousands, of times more than their median employee, team-affirming commitment in the boardroom is far from genuine.

    Employees are not fooled by C.E.O.s trying to play Top Gun for a day, and making more in that short time than most employees will earn in a year.

    J. Richard Finlay
    Toronto
    The writer is the founder of the Finlay Center for Corporate and Public Governance.


    Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood. ‘Filter bubbles’ and democracySometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to that used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side. The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber is Destroying Democracy,” Wired Magazine claimed a few weeks later.Changing information doesn’t change mindsBut without rigorous testing, it’s been hard to figure out whether the filter bubble effect was real. 
The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania, Stanford and others.Meta gave unprecedented access to the researchers during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions in advance, and Meta was only able to veto requests that would violate users’ privacy.The studies, taken together, suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.“I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative.But it also has implications that are far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views. Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research. 
“From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship. “We’re seeing null effects on both factual views and attitudes.”

As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.


    How the Rise of QAnon Broke Conspiracy Culture

    The date was Jan. 20, 2021, and Stephen Miles Lewis was trying to keep the peace.Two weeks before, a mob of pro-Trump protesters had stormed the Capitol building, and the circles Mr. Lewis ran in were now brimming with tension. Many of his closest friends had been outraged by what they saw. But he also knew someone who had been there, who now claimed that the violence had been stirred up by antifa agents disguised as Trump supporters.Mr. Lewis, a middle-aged man with a round face and a gray beard who goes by the nickname SMiles, sat at his desk, in front of a wall covered with posters of aliens, flying saucers and Bigfoot. In a YouTube video, he urged viewers to “take a step back and hopefully think, meditate, reflect on the times that we’re in,” to not “malign the others’ viewpoint.” He expressed frustration that the term “conspiracy theorist” was increasingly being used as an insult. After all, he pointed out: “I am a conspiracy theorist.”At the time, Mr. Lewis was trying to project calm, to help ensure that the community he’d been part of since he was 18 didn’t tear itself apart. But in the years since, he has found himself unsettled by the darker elements of a world he thought he knew.Over the past year, I’ve been part of an academic research project seeking to understand how the internet changed conspiracy theories. Many of the dynamics the internet creates are, at this point, well understood: We know its capacity to help users find one another, making it easier than ever for people to get involved in conspiracy networks; we also know how social media platforms prioritize inflammatory content and that as a result, ideas and information that make people angry travel farther.What we felt was missing from this story, though, was what this period of change looked like from the perspective of conspiracy theorists themselves.My team has been speaking to researchers and writers who were part of this world or connected to it in the pre-social media era. And we’ve learned something surprising: Many of the people we’ve interviewed told us they, too, have spent the past few years baffled by the turn conspiracy culture has taken. Many expressed discomfort with and at times outright disgust for QAnon and the related theories claiming the 2020 election had been stolen and said that they felt as if the very worst elements of conspiracy culture had become its main representatives.It’s worth noting that our sample was biased by who agreed to speak to us. While all of conspiracy culture can be characterized by its deep skepticism, that skepticism doesn’t always point in the same direction. Although we’ve approached as many people as possible, so far it’s mostly been those on the left of the political spectrum who have been interested in talking to university researchers. (They’ve also been overwhelmingly men.)Still, what our interviewees had to say was striking: The same forces that have made conspiracy theories unavoidable in our politics have also fundamentally changed them, to the extent that even those who pride themselves on their openness to alternative viewpoints — Sept. 11 truthers, Kennedy assassination investigators and U.F.O. cover-up researchers — have been alarmed by what they’ve seen.Mr. Lewis’s sense that conspiracy networks would be rived by tensions in the aftermath of Jan. 6 was well founded. Rumors immediately began circulating that the rioters had been infiltrated by agents instigating violence — an accusation that some of the rioters themselves took to social media to denounce. 
Ashli Babbitt, the rioter who was fatally shot by a police officer during the attack, was simultaneously lionized as a martyr and derided as a false flag.All this ultimately left Mr. Lewis less inclined to play peacemaker and more inclined to take a step away from it all. Today, he says, he increasingly avoids some of the language that floats around the conspiracysphere: Terms like “the illuminati” used to feel like fun ideas to play with. Now he worries they could be used to create scapegoats, or even encourage violence.SMiles Lewis grew up in Austin, Texas, with his mother — his parents separated when he was very young — and it was his close connection with her that first sparked his interest in the unexplained: “There was a sense, early on with my Mom and I, where we felt like we were reading each other’s minds,” he said. The two of them would watch shows like “That’s Incredible!,” which retold stories of paranormal encounters. Mr. Lewis recalls his mother telling him after one episode: “If you are ever in distress, just concentrate on me really hard, and I will get the message.” Her theory got put to the test when Mr. Lewis was a teenager: Once, when home alone, he heard voices in their yard after dark. Afraid, he considered calling his mother, but the fear of losing precious new adult freedoms stopped him. The next day she asked him if everything had been all right, because out of nowhere, she had felt the overwhelming urge to call. Mr. Lewis took this as confirmation that there was more to human abilities than science could yet rationalize.Once Mr. Lewis graduated from high school, he joined the Austin chapter of the Mutual U.F.O. Network, an organization for enthusiasts to meet and discuss sightings. From there, he became the leader of a support group for people who believed they’d had close encounters with aliens. Mr. Lewis never had such an experience himself, but he said the group didn’t mind — they just appreciated that he kept an open mind.U.F.O.s and conspiracy theories have always been intertwined, but it was Sept. 11 that really turned Mr. Lewis political. As he speculated in an editorial for The Austin Para Times after the planes hit the towers, he felt that he had “been a witness to Amerika’s greatest Reichstag event ” — a planned disaster to justify fascist encroachment on civil liberties, something many of the writers Mr. Lewis admired had warned of.For Mr. Lewis, conspiracism was always about thinking critically about the narratives of the powerful and questioning your own biases. In our interviews, he saw his interest in the parapolitical — in how intelligence and security services quietly shape the world — as connected to his political activism, not so different from attending an abortion rights rally or joining a local anti-Patriot Act group. All were about standing up for civil liberties and citizen privacy against an opportunistic, overreaching state.But for all Mr. Lewis’s political idealism, there was also something undeniably invigorating about conspiracy culture. This was a scene free from the stifling hegemony of sensible mainstream thought, a place where writers, filmmakers and artists could explore whatever ideas or theories interested them, however weird or improper. 
This radical commitment to resisting censorship in all its forms sometimes led to decisions that, from the perspective of 2023, look like dangerous naïveté at best: Reading countercultural material from the 1990s can feel like navigating a political minefield, where musings about the North American “mothman” and experimental poetry sit side-by-side with Holocaust denial. Conspiracy culture was tolerant of banned or stigmatized ideas in a way many of our interviewees said they found liberating, but this tolerance always had a dangerous edge.Still, Mr. Lewis looks back nostalgically on days when there seemed to be more respect and camaraderie. The aftermath of Sept. 11 and the war on terror presented, he said, a threat to citizens that the conspiracy-friendly left and right could unite over. Now the rift between the two was deep and vicious. He felt as if the ideas that had first attracted him to conspiratorial thought had been “weaponized,” pointing people away from legitimate abuses of power and toward other citizens — the grieving parents of Sandy Hook, for example — and at times involved real-world violence.When I asked Mr. Lewis when he first heard of QAnon, he told me a story about a family member who’d sent him a video that began with what he saw as a fairly unobjectionable narrative of government abuses of power. “I’m nodding my head, I’m agreeing,” he said. Then it got to the satanic pedophile networks.The conspiracy culture that Mr. Lewis knew had celebrated the unusual and found beauty in the bizarre. He had friends who considered themselves pagans, friends who participated in occult rituals. “The vast majority of them are not blood-drinking lunatics!” he told me. Some of his friends were no longer comfortable talking about their beliefs for fear of becoming targets.Others we interviewed told us similar stories: about a scene that had once felt niche, vibrant and underground but had transformed into something almost unrecognizable. Greg Bishop, a friend of Mr. Lewis’s and editor of the 1990s zine The Excluded Middle, which covered U.F.O.s, conspiracy theories and psychedelia, among other things, told me that as the topics he’d covered had become more mainstream, he’d watched the vitriol and division increase. “You’d see somebody at a convention who was frothing at the mouth or whatever, figuratively, and that’s changed into something that’s basically a part of the culture now.”Joseph E. Green, an author and parapolitical researcher, described how in the past, attending conferences on conspiracy topics, “there’s always a couple of guys in there who will tell you after they get familiar with you that the Jews run the world.” Mr. Green had no interest in such ideas, but nor did he think they ran much risk of going mainstream. But somewhere along the way, conspiracy spaces on the internet had become “a haven” for the “lunatic fringe” of the right wing, which in turn spilled back into the real world.Jonathan Vankin, a journalist who wrote about the conspiracy scene of the 1990s, said watching the emergence of QAnon had been disillusioning. Mr. Vankin never considered himself a conspiracy theorist, but as a journalist he felt an appreciation for them. 
They may not have always gotten the facts right, but their approach was a way of saying, “The official story, the way we’re fed that every day, isn’t really necessarily the way it is.” Now, he said, conspiracy theories felt more like “tools of control” that changed how people saw the world, not in a liberatory sense but “in a distorted way” — one that no longer challenged power but served its interests.Have conspiracy theories and conspiracy theorists gotten nastier? It’s worth recalling that the reactionary, violent impulse that we think of as characterizing contemporary conspiracism was always there: The John Birch Society of the 1960s and its hunt for secret Communists in the very top levels of government has been described by some historians as an early ancestor of QAnon. And it’s also worth remembering that the historical friendliness between left and right conspiracism could be ethically murky. When Timothy McVeigh detonated a truck bomb in Oklahoma City, killing 168 people and injuring hundreds more, he said he was acting in retaliation for the Waco siege of 1993 and its aftermath — what he and many others in militia circles saw as the government covering up a deliberate massacre of its own citizens. Some liberal writers in the conspiracy scene defended him — some even went as far as to suggest he had been framed.What does seem clear is that conspiracy theories have become less of a specialist interest and more of an unavoidable phenomenon that affects us all, whether in the form of anti-vaccination sentiments or election denialism. With both Robert F. Kennedy Jr. and Donald Trump running for president, none of this seems likely to fade away anytime soon.Michael Barkun, a scholar of religious extremism and conspiracy theories, describes conspiracy-minded networks as spaces of “stigmatized knowledge” — ideas that are ignored or rejected by the institutions that society relies on to help us make sense of the world. Recently, though, Mr. Barkun writes, in part because of the development of the internet, that stigma has been weakening as what “was once clearly recognizable as ‘the fringe’ is now beginning to merge with the mainstream.”The story we’ve heard from our interviewees is that this mainstreaming process has had profound effects, fundamentally altering the character of both the theories themselves and those who claim to be adherents, by making conspiracy theories more accessible and more potentially profitable. It’s these shifts that have left people like Mr. Lewis feeling so out of place in the spaces they once saw as their ideological homes.The conspiracy scene, on left and right, immediately grasped the significance of the World Wide Web’s arrival in the 1990s. For people who wanted to explore stigmatized topics, the liberatory potential was obvious, and most of the people we spoke to were early adopters. Mr. Lewis himself at one point had between 70 and 80 registered domain names.And yet, despite pouring more effort into his passion than some people put into their jobs, Mr. Lewis never made much, if any, money from it. When I asked him about it, it didn’t even seem to have occurred to him to try. This wasn’t unusual; the biggest names in conspiracy culture before the internet — radio hosts like Bill Cooper and Mae Brussell — may have sold books and tapes but hardly built media empires. 
Making money seemed secondary to the principle of getting the truth — as they saw it, at least — out there, for like-minded people to debate and discuss.Today’s conspiracy theorists are different. Termed “conspiracy entrepreneurs” by academics, they combine the audience-growth strategies of social media lifestyle influencers with a mixture of culture war and survivalist rhetoric. They’re active on various platforms, constantly responding to new developments, and most of them are selling their audience something on the side.One of the first entrepreneurs to pioneer this approach was Alex Jones, who a recent court case revealed had an estimated combined net worth with his company of up to $270 million. Before his name became synonymous with conspiracy theories, Mr. Jones got his start in Austin community access television in the 1990s — a scene that Mr. Lewis was intimately familiar with. But as Mr. Lewis and others tell it, Mr. Jones always possessed both an aggressive streak and a sense of showmanship that many of his contemporaries lacked, making him perfect for social media, where conspiracy theorists, like everyone else, are competing in an attention economy.“The last thing I want to do is sit on a recorded video and say to you, ‘In our day, conspiracy theories were kinder and gentler,’” said Ruffin Prevost, an editor at ParaScope, a now-defunct site set up in 1996 that covered U.F.O.s, secret societies, and mind control, among other subjects. “But there is definitely a different tenor to how people go about this stuff now,” he said. “It’s almost like you’ve got to be strident and hard-core about whatever your thing is to have enough bona fides to capture that audience.”The belief that the incentives of social media had shorn conspiracy research of its serious, scholarly edge was a common theme. “The things that we’re describing are not really the same thing,” Mr. Green declared to me flatly, comparing the archival work and conferences that he had been involved with to the salacious videos of QAnon influencers. The scholarly work “is never going to have that commercial appeal,” he said. “You know, just like if I try to get somebody to watch a film by Ingmar Bergman, it’s much more difficult than to get them to watch a film by Michael Bay. It’s almost not even the same thing, right?”In the minds of many conspiracy theorists, Mr. Jones and his imitators don’t deserve the title. In his 2017 book, “Trumpocalypse Now!: The Triumph of the Conspiracy Spectacle,” Kenn Thomas, a towering figure in the world of 1990s conspiracy, termed the recent crop of opportunists looking to profit from the hard work of researchers “conspiracy celebrities.” And the conspiracy celebrity in chief, Mr. Thomas said, was Donald Trump, who referred to conspiracy theories he hadn’t researched and didn’t understand. To the world at large, it might seem as if we’re living in a time in which conspiratorial thinking is ascendant. But in his foreword to Mr. Thomas’s book, Robert Sterling, editor of a 1990s and 2000s countercultural conspiracy blog called The Konformist, argued otherwise: “If this moment is a victory for the conspiracy culture,” he wrote, “it is a Pyrrhic victory at best.”“There’s a few different stories we can tell about what happened,” Douglas Rushkoff, a media theorist and author, told me. Conspiracy culture up through the ’90s was dominated by what could be called a “radio sensibility.” Fringe topics were mostly discussed on late-night talk shows. 
There were guest experts, and listeners could call in, but the host still functioned as a (lenient) gatekeeper, and the theories themselves conformed to a narrative format. They were, for the most part, complete stories, with beginnings, middles and ends.In the digital age, he said, sense-making had become a fragmented, nonlinear and crowdsourced affair that as a result could never reach a conclusion and lacked internal logic. There were always potential new connections to be spotted — in the case of the 2020 election, for instance, two imprisoned Italian hackers, or a voting machine company founded by Venezuelans. This lack of satisfying resolution meant the new theories had no natural stopping point, he said, and their perpetual motion eventually brought them to a place that was “much more strident” — “even amongst the left.”The new “born-digital” conspiracy theories, like QAnon and the Great Reset, are constantly looking forward by necessity. Attaching themselves to the fast-paced flow of current events and trending topics is a matter of survival on social media, which can also explain why those who perpetuate them rarely stay focused on unpacking just one event: The Great Reset theory, for example, began by alleging that the Covid-19 pandemic had been deliberately engineered by the global elite, but soon expanded to encompass climate change, economic inflation and local traffic schemes.Some academics have argued that even when conspiracy theories warn of dark and dystopian futures, they are fundamentally optimistic: They are assertions that humans are ultimately in control of events, and humans can stop whatever terrible catastrophe is coming around the corner. But perhaps the problem is that human beings are no longer really in control of the conspiracy theories themselves. Even when Q, the anonymous figure who sparked the QAnon movement, stopped posting, the movement’s adherents carried on.Before we had even spoken over Zoom, Mr. Lewis sent me a 2022 Medium article written by Rani Baker that he said summed up a lot of his feelings about the topic. It was titled “So When, Exactly, Did Conspiracy Culture Stop Being Fun?” It was a question he said he had been struggling with too.When I asked Mr. Lewis if he had become more moderate over time, he was ambivalent. He said he maintains his skepticism about power and the state, but he’s less dogmatic these days — perhaps because he’s gained a new appreciation for the destructive power of uncompromising narratives. His thinking on Sept. 11, in particular, has evolved, from what truthers call MIHOP (Made It Happen on Purpose) to LIHOP (Let It Happen on Purpose) to today, when he allows it might have been something very different: an event foreseeable in the abstract, but as a horrific consequence of decades of U.S. interference in the Middle East, not a government’s deliberate attack on its own people.But from Mr. Lewis’s perspective, asking if he had moderated his views wasn’t quite the right question. For him and many of the others we spoke to, the paranormal and the parapolitical had been their passion and their home for their entire adult lives, places where they had found friends, ideas and ways of theorizing about the world that fascinated and excited them. They were used to their interest in these topics making them outsiders. Now they found themselves living with one foot in and one foot out of the current conspiracy scene, which had become increasingly popular, ubiquitous and dangerous. 
As they saw it, it wasn’t that they had rejected conspiracy culture; conspiracy culture was leaving them behind.

As we wrapped up one of our interviews, Mr. Lewis told me that he finds himself increasingly returning to listening to old broadcasts of his to see if he can make sense of when that turning point began.

“I keep trying to imagine,” he said. “Like, I think of the time before, and I think of the time now, and it’s like, yeah, where did the transition happen? Were there milestones along the way? Were there signs, portents, that we could have recognized?” He trails off and pauses. “And I don’t have the answer to this, but that’s kind of where my mind keeps going.”

Annie Kelly is a postdoctoral researcher working on conspiracy theories at King’s College London and the University of Manchester. She is also the British correspondent for the podcast “QAnon Anonymous.”


    A Trump-Biden Rematch That Many Are Dreading

More from our inbox:

  • Perils of A.I., and Limits on Its Development

An image from a televised presidential debate in 2020. Damon Winter/The New York Times

To the Editor:

Re “The Presidential Rematch Nobody Wants,” by Pamela Paul (column, July 21):

Ms. Paul asks, “Have you met anyone truly excited about Joe Biden running for re-election?”

I am wildly enthusiastic about President Biden, who is the best president in my lifetime. His legislation to repair America’s infrastructure and bring back chip manufacturing are both huge accomplishments. Mr. Biden has done more to combat climate change, the existential issue of the day, than all the presidents who have gone before him.

Mr. Biden extracted us from the endless morass of Afghanistan. He has marshaled the free peoples of the world to stop the Russian takeover of Ukraine, giving dictators around the world pause.

Mr. Biden is the first president in a generation to really believe in unions and to emphasize the issues of working people, understanding how much jobs matter.

I might wish he were 20 years younger. I wish I were 20 years younger.

Most important, Joe Biden is an honorable man at a time when his biggest rivals do not know the meaning of the word. Being honorable is the essential virtue, without which youth or glibness do not matter.

I support his re-election with all my heart and soul.

Gregg Coodley
Portland, Ore.

To the Editor:

We endured (barely) four years of Donald Trump. Now we have Joe Biden, whose time has come and gone, and third party disrupters who know they cannot win but are looking for publicity.

Mr. Biden had his turn, and is exceedingly arrogant to believe that he is our best hope. His good sense and moral values won’t help if Donald Trump wins against him, which is eminently possible. The Democratic Party must nominate a powerfully charismatic candidate.

Mitchell Zuckerman
New Hope, Pa.

To the Editor:

I think Pamela Paul misses the point entirely. No, Biden supporters are not jumping up and down in a crazed frenzy like Trump supporters. That is actually a good thing. People like me who fully support President Biden’s re-election are sick and tired of the nonstop insanity that is Donald Trump. I’m very happy to have a sound, calm, upstanding president who actually gets things done for middle- and working-class Americans.

Excitement isn’t the answer to solving America’s problems. A president who gets things done is — like Joe Biden!

Sue Everett
Chattanooga, Tenn.

To the Editor:

Pamela Paul is spot on in her diagnosis of the depressing likelihood of Trump vs. Biden, Round 2.

The solution is money, as is true in all things in American politics. The Big Money donors in the Democratic Party should have a conference call with Team Biden and tell it, flat out, we’re not supporting the president’s re-election. It’s time for a younger generation of leaders.

Without their money, President Biden would realize that he cannot run a competitive campaign. But in a strange echo of how Republican leaders genuflect to Donald Trump and don’t confront him, the wealthy contributors to the Democratic Party do exactly the same with Mr. Biden.

Ethan Podell
Rutherford Island, Maine

To the Editor:

In an ideal world, few would want a presidential rematch. Donald Trump is a menace, and it would be nice to have a Democratic nominee who is young, charismatic and exciting. But in the real world, I favor a Trump-Biden rematch, if Mr. Trump is the Republican nominee.

Mr. Biden might shuffle like a senior, and mumble his words, but he is a decent man who loves our country and has delivered beyond expectations.

In leadership crises, Americans yearn for shiny new saviors riding into town on a stallion. I prefer an honest old shoe whom we can count on to get us through an election of a lifetime.

Jerome T. Murphy
Cambridge, Mass.
The writer is a retired Harvard professor and dean who taught courses on leadership.

To the Editor:

I am grateful to Pamela Paul for articulating and encapsulating how I, and probably many others, feel about the impending 2024 presidential race. I appreciate the stability that President Biden returned to the White House and our national politics. However, the future demands so much more than Mr. Biden or any other announced candidate can deliver.

Christine Cunha
Bolinas, Calif.

To the Editor:

Pamela Paul presents many reasons, in her view, why President Biden is a flawed candidate, including that Mr. Biden’s “old age is showing.” As an example, she writes that during an interview on MSNBC he appeared to wander off the set.

Fox News has been pushing this phony notion relentlessly, claiming that he walked off while the host was still talking. In fact, the interview was over, Mr. Biden shook hands with the host, they both said goodbye, and while Mr. Biden left the set, the host faced the camera and announced what was coming up next on her show.

Howard Ehrlichman
Huntington, N.Y.

Perils of A.I., and Limits on Its Development

OpenAI’s logo at its offices in San Francisco. The company is testing an image analysis feature for its ChatGPT chatbot. Jim Wilson/The New York Times

To the Editor:

Re “New Worries That Chatbot Reads Faces” (Business, July 19):

The integration of facial surveillance and generative A.I. carries a warning: Without prohibitions on the use of certain A.I. techniques, the United States could easily construct a digital dystopia, adopting A.I. systems favored by authoritarian governments for social control.

Our report “Artificial Intelligence and Democratic Values” established that facial surveillance is among the most controversial A.I. deployments in the world. UNESCO urged countries to prohibit the use of A.I. for mass surveillance. The European Parliament proposes a ban in the pending E.U. Artificial Intelligence Act. And Clearview AI, the company that scraped images from websites, is now prohibited in many countries.

Earlier this year, we urged the Federal Trade Commission to open an investigation of OpenAI. We specifically asked the agency to prevent the deployment of future versions of ChatGPT, such as the technique that will make it possible to match facial images with data across the internet.

We now urge the F.T.C. to expedite the investigation and clearly prohibit the use of A.I. techniques for facial surveillance. Even the White House announcement of voluntary standards for the A.I. industry offers no guarantee of protection.

Legal standards, not industry assurances, are what is needed now.

Merve Hickok
Lorraine Kisselburgh
Marc Rotenberg
Washington
The writers are, respectively, the president, the chair and the executive director of the Center for A.I. and Digital Policy, an independent research organization. Ms. Hickok testified before Congress in March on the need to establish guardrails for A.I.

To the Editor:

Re “Pressed by Biden, Big Tech Agrees to A.I. Rules” (front page, July 22):

It is troubling that the Biden administration is jumping in and exacting “voluntary” limitations on the development of A.I. technologies. The government manifestly lacks the expertise and knowledge necessary to ascertain what guardrails might be appropriate, and the inevitable outcome will be to stifle innovation and reduce competition, the worst possible result.

Imagine what the internet would be today had the government played a similarly intrusive and heavy-handed role at its inception.

Kenneth A. Margolis
Chappaqua, N.Y.


    DeSantis’s Risky Strategy: Trying Not to Trick Small Donors

    Diverging from Donald Trump, who has often cajoled, guilt-tripped and even misled small donors, the DeSantis team is pledging to avoid “smoke and mirrors” in its online fund-raising.In the months before the 2020 presidential election, Roy W. Bailey, a Dallas businessman, received a stream of text messages from Donald J. Trump’s re-election campaign, asking for money in persistent, almost desperate terms.“Have you forgotten me?” the messages read, Mr. Bailey recalled. “Have you deserted us?”Mr. Bailey was familiar with the Trump campaign: He was the co-chair of its finance committee, helped raise millions for the effort and personally contributed several thousand dollars.“Think about that,” Mr. Bailey said recently about the frequency of the messages and the beseeching tone. “That is how out of control and crazy some of this fund-raising has gotten.”He did, ultimately, desert Mr. Trump: He is now raising money for Gov. Ron DeSantis of Florida, whose campaign has pledged to avoid the kinds of online fund-raising tactics that irritated Mr. Bailey and that have spread in both parties, particularly the Republican Party, in recent years as candidates have tried to amass small donors.No phony deadlines, Mr. DeSantis has promised donors. No wildly implausible pledges that sizable contributions will be matched by committees affiliated with the campaign. And no tricking donors into recurring donations.This strategy is one of the subtle ways Mr. DeSantis’s team is trying to contrast him with Mr. Trump, who has often cajoled, guilt-tripped and occasionally misled small donors. Although his campaign has not directly called out Mr. Trump’s methods, on the day Mr. DeSantis declared he would run for president, his website prominently vowed to eschew “smoke and mirrors,” “fake matches” and “lies” in its fund-raising.For the DeSantis campaign, the vow of no trickery is risky. Mr. Trump, the most successful online Republican fund-raiser ever, has shown that such tactics work. But Generra Peck, Mr. DeSantis’s campaign manager, said that approach damaged the long-term financial health of the Republican Party because it risked alienating small donors.“We’re building a movement,” Ms. Peck said last month in an interview at DeSantis campaign headquarters in Tallahassee.So far, it’s difficult to tell if Mr. DeSantis’s approach is working. His fund-raising slowed after his campaign began in late May, and campaign officials did not provide figures that would have shed light on its success with small donors.It is difficult to tell if Gov. Ron DeSantis’s approach is working. His fund-raising slowed after his campaign began in late May, and campaign officials did not provide figures that would have shed light on its success with small donors.Christopher Lee for The New York TimesThe battle to raise money from average Americans may seem quaint in the era of billionaires and super PACs, which have taken on outsize roles in U.S. elections. But straight campaign cash is still, in many ways, the lifeblood of a campaign, and a powerful measure of the strength of a candidate. For example, G.O.P. 
presidential contenders must reach a threshold of individual donors set by the Republican National Committee to qualify for the debate stage, a bar that is already causing some candidates to engage in gimmicky contortions.To highlight what it bills as a more ethical approach to fund-raising, the DeSantis campaign has devoted a giant wall inside its modest office to scrawling the names — first name, last initial — of every donor to the campaign, tens of thousands of them so far.It is an intensive effort. During work hours, campaign staff members — as well as Mr. DeSantis himself, in one instance — constantly write names on the wall in red, blue and black markers.“We want our staff to look at that wall, remember who supports us, to remember why we’re here,” Ms. Peck said.Mr. DeSantis’s advisers argue that being more transparent with donors could be a long-term way for Republicans to counter the clear advantage Democrats have built up in internet fund-raising, largely thanks to their online platform ActBlue, founded in 2004. A Republican alternative, WinRed, didn’t get off the ground until 15 years later. A greater share of Democrats than Republicans said they had donated to a political campaign in the last two years, according to a recent NBC News poll, meaning the G.O.P. has a less robust pool of donors to draw from.“One of the biggest challenges for Republicans, across the board, is building out the small-dollar universe,” said Kristin Davison, the chief operating officer of Never Back Down, the main super PAC supporting Mr. DeSantis.The tell-the-truth approach to deadlines and goals has been tested by other campaigns, including those of Senator Bernie Sanders, who built a durable network of grass-roots donors in his two presidential runs.Mr. DeSantis’s campaign said last week that it had raised $20 million in his first six weeks as an official presidential candidate, but the amount that came from small donors will not be apparent until later this month, when campaigns file second-quarter disclosures.The campaign did not respond to a question about how many small donors had contributed so far. It had set a goal of recruiting 100,000 donors by July 1, but as of late June, the wall had only about 50,000 names, according to a fund-raising email.And although Mr. DeSantis’s team has pledged to act transparently when it comes to small donors, senior aides in the governor’s office have faced accusations that they inappropriately pressured lobbyists into donating to his campaign.Eric Wilson, the director of the Center for Campaign Innovation, a conservative nonprofit focused on digital politics, said the DeSantis campaign was wise to avoid online pressure tactics, which he likened to a “dopamine arms race” that burns out donors and turns off voters.“They can be effective, but voters say they don’t like them,” Mr. Wilson said. “You can’t make the entire meal around sugar.”Mr. Wilson said he had also seen other campaigns try more honest communications: “You are starting to see a recalibration.”For instance, the campaign of former Gov. Nikki Haley of South Carolina said in May that Mr. DeSantis had imitated language used in Ms. Haley’s fund-raising emails.The ways that campaigns reach out to potential small donors online grew out of old-fashioned telemarketing and fund-raising by mail. 
Before email, campaigns sent out fake telegrams, letters stamped to appear they had been hand-addressed, surveys and other gimmicks to draw donations.The DeSantis campaign has also adopted a “subscriber exclusive” model, allowing donors to join so-called tele-town halls with Mr. DeSantis, gain early access to merchandise and receive weekly “insider” updates. Nicole Craine for The New York TimesIn the era of email and smartphones, it is easier to reach a large number of prospective donors, but the risk of bombarding and overwhelming them is higher. It can also be harder to induce people to open messages, let alone contribute. The subject line has to be compelling, and the offers need to stand out — which can lead, for example, to dubious promises that campaigns will somehow “match” any contributions made, a practice that has been widely criticized.Mr. Trump’s campaign sends about 10 emails per day, in addition to text messages. His campaign has escalated bogus matching promises to the point of absurdity, telling donors that their contributions will be matched at “1,500%.”A spokesman for the Trump campaign did not respond to a request for comment.The tactics aren’t limited to Republicans. Democratic groups have also been criticized and mocked for vague promises of “300 percent matches” in their fund-raising pitches.For its part, the DeSantis campaign said its strategy was devised to establish long-term relationships with small donors, rather than to suck them dry as quickly as possible.The DeSantis campaign has adopted a “subscriber exclusive” model, allowing donors to join so-called tele-town halls with Mr. DeSantis (“You guys are part of the team,” the governor told listeners during a June 12 call), gain early access to merchandise, and receive weekly “insider” updates. It’s the carrot, not the stick, a blueprint that campaign officials said was adopted in part from the business world.Mr. Trump’s campaign has clearly taken notice.The DeSantis campaign said recently that it had raised $20 million in his first six weeks as a candidate, but the amount that came from small donors will not be apparent until later this month. Rachel Mummey for The New York TimesOn Friday, in an apparent round of fund-raising one-upsmanship, the Trump campaign announced a new donor initiative, saying it would build a “big, beautiful Donor Wall” at its New Hampshire headquarters.“And I don’t mean scribbled on the wall with a crayon, like some other campaigns do,” said the campaign email, which was written in Mr. Trump’s voice, “but a heavy, respectable plaque with the names of our great donors finely etched within.”All for a donation of $75.Patricia Mazzei More