More stories

  • In Tense Election Year, Public Officials Face Climate of Intimidation

    Colorado and Maine, which blocked former President Donald J. Trump from the ballot, have grappled with the harassment of officials.

    The caller had tipped off the authorities in Maine on Friday night: He told them that he had broken into the home of Shenna Bellows, the state’s top election official, a Democrat who one night earlier had disqualified former President Donald J. Trump from the primary ballot because of his actions during the Jan. 6 Capitol riot. No one was home when officers arrived, according to Maine State Police, who labeled the false report as a “swatting” attempt, one intended to draw a heavily armed law enforcement response.

    In the days since, more bogus calls and threats have rolled in across the country. On Wednesday, state capitol buildings in Connecticut, Georgia, Hawaii, Kentucky, Michigan, Minnesota, Mississippi and Montana were evacuated or placed on lockdown after the authorities said they had received bomb threats that they described as false and nonspecific. The F.B.I. said it had no information to suggest any threats were credible.

    The incidents intensified a climate of intimidation and harassment of public officials, including those responsible for overseeing ballot access and voting. Since 2020, election officials have confronted rising threats and difficult working conditions, aggravated by rampant conspiracy theories about fraud. The episodes suggested 2024 would be another heated election year.


  • I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online.

    Timo LenzenWhen I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following the violence of Jan. 6, I helped make the call to ban his account from Twitter altogether. Nothing prepared me for what would happen next.Backed by fans on social media, Mr. Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move.This isn’t a story I relish revisiting. But I’ve learned that what happened to me wasn’t an accident. It wasn’t just personal vindictiveness or “cancel culture.” It was a strategy — one that affects not just targeted individuals like me, but all of us, as it is rapidly changing what we see online.Private individuals — from academic researchers to employees of tech companies — are increasingly the targets of lawsuits, congressional hearings and vicious online attacks. These efforts, staged largely by the right, are having their desired effect: Universities are cutting back on efforts to quantify abusive and misleading information spreading online. Social media companies are shying away from making the kind of difficult decisions my team did when we intervened against Mr. Trump’s lies about the 2020 election. Platforms had finally begun taking these risks seriously only after the 2016 election. Now, faced with the prospect of disproportionate attacks on their employees, companies seem increasingly reluctant to make controversial decisions, letting misinformation and abuse fester in order to avoid provoking public retaliation.These attacks on internet safety and security come at a moment when the stakes for democracy could not be higher. 
More than 40 major elections are scheduled to take place in 2024, including in the United States, the European Union, India, Ghana and Mexico. These democracies will most likely face the same risks of government-backed disinformation campaigns and online incitement of violence that have plagued social media for years. We should be worried about what happens next.My story starts with that fact check. In the spring of 2020, after years of internal debate, my team decided that Twitter should apply a label to a tweet of then-President Trump’s that asserted that voting by mail is fraud-prone, and that the coming election would be “rigged.” “Get the facts about mail-in ballots,” the label read.On May 27, the morning after the label went up, the White House senior adviser Kellyanne Conway publicly identified me as the head of Twitter’s site integrity team. The next day, The New York Post put several of my tweets making fun of Mr. Trump and other Republicans on its cover. I had posted them years earlier, when I was a student and had a tiny social media following of mostly my friends and family. Now, they were front-page news. Later that day, Mr. Trump tweeted that I was a “hater.”Legions of Twitter users, most of whom days prior had no idea who I was or what my job entailed, began a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed. The volume of Twitter notifications crashed my phone. Friends I hadn’t heard from in years expressed their concern. On Instagram, old vacation photos and pictures of my dog were flooded with threatening comments and insults. (A few commenters, wildly misreading the moment, used the opportunity to try to flirt with me.)I was embarrassed and scared. Up to that moment, no one outside of a few fairly niche circles had any idea who I was. 
Academics studying social media call this “context collapse”: things we post on social media with one audience in mind might end up circulating to a very different audience, with unexpected and destructive results. In practice, it feels like your entire world has collapsed.The timing of the campaign targeting me and my alleged bias suggested the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives. But the success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing. As the former Republican Party chair Rich Bond once described, maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction. What happened to me was part of a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps.It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey, then the C.E.O. of Twitter, overruled Trust and Safety’s recommendation that Mr. Trump’s account should be banned because of several tweets, including one that attacked Vice President Mike Pence. He was given a 12-hour timeout instead (before being banned on Jan. 8). Within the boundaries of the rules, staff members were encouraged to find solutions to help the company avoid the type of blowback that results in angry press cycles, hearings and employee harassment. The practical result was that Twitter gave offenders greater latitude: Representative Marjorie Taylor Greene was permitted to violate Twitter’s rules at least five times before one of her accounts was banned in 2022. 
Other prominent right-leaning figures, such as the culture war account Libs of TikTok, enjoyed similar deference.Similar tactics are being deployed around the world to influence platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after the government asked us to block accounts involved in a series of protests. The harassment again paid off: Twitter executives decided any potentially sensitive actions in India would require top-level approval, a unique level of escalation of otherwise routine decisions.And when we wanted to disclose a propaganda campaign operated by a branch of the Indian military, our legal team warned us that our India-based employees could be charged with sedition — and face the death penalty if convicted. So Twitter only disclosed the campaign over a year later, without fingering the Indian government as the perpetrator.In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.In each of these cases, the targeted staffers lacked the ability to do what was being asked of them by the government officials in charge, as the underlying decisions were made thousands of miles away in California. But because local employees had the misfortune of residing within the jurisdiction of the authorities, they were nevertheless the targets of coercive campaigns, pitting companies’ sense of duty to their employees against whatever values, principles or policies might cause them to resist local demands. 
Inspired, India and a number of other countries started passing “hostage-taking” laws to ensure social-media companies employ locally based staff.In the United States, we’ve seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and — in Twitter’s case — by the company’s new owner.One of the most recent forces in this campaign is the “Twitter Files,” a large assortment of company documents — many of them sent or received by me during my nearly eight years at Twitter — turned over at Mr. Musk’s direction to a handful of selected writers. The files were hyped by Mr. Musk as a groundbreaking form of transparency, purportedly exposing for the first time the way Twitter’s coastal liberal bias stifles conservative content.What they delivered was something else entirely. As tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors. Even Mr. Musk eventually lost patience with the effort. But, in the process, the effort marked a disturbing new escalation in the harassment of employees of tech firms.Unlike the documents that would normally emanate from large companies, the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in the Philippines was doxxed and severely harassed. Others have become the subjects of conspiracies. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as having been made by the capricious whims of individuals, each pictured and called out by name. 
I was, by far, the most frequent target.The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for Mr. Musk. I couldn’t help but feel that the company’s actions were, on some level, retaliatory. The next week, Mr. Musk went further by taking a paragraph of my Ph.D. dissertation out of context to baselessly claim that I condoned pedophilia — a conspiracy trope commonly used by far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.The response was even more extreme than I experienced after Mr. Trump’s tweet about me. “You need to swing from an old oak tree for the treason you have committed. Live in fear every day,” said one of thousands of threatening tweets and emails. That post, and hundreds of others like it, were violations of the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.On Dec. 6, four days after the first Twitter Files release, I was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing, members of Congress held up oversize posters of my years-old tweets and asked me under oath whether I still held those opinions. (To the extent the carelessly tweeted jokes could be taken as my actual opinions, I don’t.) Ms. Greene said on Fox News that I had “some very disturbing views about minors and child porn” and that I “allowed child porn to proliferate on Twitter,” warping Mr. Musk’s lies even further (and also extending their reach). Inundated with threats, and with no real options to push back or protect ourselves, my husband and I had to sell our home and move.Academia has become the latest target of these campaigns to undermine online safety efforts. 
Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public record requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship. Others targeted have elected to change their research focus based on the volume of harassment.Bit by bit, hearing by hearing, these campaigns are systematically eroding hard-won improvements in the safety and integrity of online platforms — with the individuals doing this work bearing the most direct costs.Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on their trust and safety efforts. As they face mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.We can look abroad to see how this story might end. Where once companies would at least make an effort to resist outside pressure, they now largely capitulate by default. In early 2023, the Indian government asked Twitter to restrict posts critical of Prime Minister Narendra Modi. In years past, the company had pushed back on such requests; this time, Twitter acquiesced. When a journalist noted that such cooperation only incentivizes further proliferation of draconian measures, Mr. Musk shrugged: “If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.”It’s hard to fault Mr. Musk for his decision not to put Twitter’s employees in India in harm’s way. But we shouldn’t forget where these tactics came from or how they became so widespread. From pushing the Twitter Files to tweeting baseless conspiracies about former employees, Mr. 
Musk’s actions have normalized and popularized vigilante accountability, and made ordinary employees of his company into even greater targets. His recent targeting of the Anti-Defamation League has shown that he views personal retaliation as an appropriate consequence for any criticism of him or his business interests. And, as a practical matter, with hate speech on the rise and advertiser revenue in retreat, Mr. Musk’s efforts seem to have done little to improve Twitter’s bottom line.

What can be done to turn back this tide?

Making the coercive influences on platform decision making clearer is a critical first step. And regulation that requires companies to be transparent about the choices they make in these cases, and why they make them, could help.

In its absence, companies must push back against attempts to control their work. Some of these decisions are fundamental matters of long-term business strategy, like where to open (or not open) corporate offices. But companies have a duty to their staff, too: Employees shouldn’t be left to figure out how to protect themselves after their lives have already been upended by these campaigns. Offering access to privacy-promoting services can help. Many institutions would do well to learn the lesson that few spheres of public life are immune to influence through intimidation.

If social media companies cannot safely operate in a country without exposing their staff to personal risk and company decisions to undue influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who have the greatest need for free and open online expression. But remaining in a compromised way could forestall necessary reckoning with censorial government policies.
Refusing to comply with morally unjustifiable demands, and facing blockages as a result, may in the long run provoke the necessary public outrage that can help drive reform.

The broader challenge here — and perhaps, the inescapable one — is the essential humanness of online trust and safety efforts. It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted. Standing up to injustice, authoritarianism and online harms requires employees who are willing to do that work.

Few people could be expected to take a job doing so if the cost is their life or liberty. We all need to recognize this new reality, and to plan accordingly.

Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter.

  • in

    Another Texas Election Official Quits After Threats From Trump Supporters

Heider Garcia, the top election official in deep-red Tarrant County, had previously testified about being harassed by the former president’s right-wing supporters.

Heider Garcia, the head of elections in Tarrant County, Texas, announced this week that he would resign after facing death threats, joining other beleaguered election officials across the nation who have quit under similar circumstances.

Mr. Garcia oversees elections in a county where, in 2020, Donald J. Trump became only the second Republican presidential candidate to lose in more than 50 years. Right-wing skepticism of the election results fueled threats against him, even though the county received acclaim from state auditors for its handling of the 2020 voting.

Why it’s important

With Mr. Trump persistently repeating the lie that he won the 2020 election, many of his supporters and those in right-wing media have latched on to conspiracy theories and joined him in spreading disinformation about election security. Those tasked with running elections, even in deeply Republican areas that did vote for Mr. Trump in 2020, have borne the brunt of vitriol and threats from people persuaded by baseless claims of fraud.

The threats made against him

Mr. Garcia detailed a series of threats as part of his written testimony last year to the Senate Judiciary Committee, which he urged to pass better protections for election officials.

One of the threats made online that he cited: “hang him when convicted from fraud and let his lifeless body hang in public until maggots drip out his mouth.”

He testified that he had repeatedly been the target of a doxxing campaign, including the posting of his home address on Twitter after Sidney Powell, a lawyer for Mr. Trump, falsely accused him on television and social media of manipulating election results.

Mr.
Garcia also testified that he received direct messages on Facebook with death threats calling him a “traitor,” and one election denier used Twitter to urge others to “hunt him down.”

Heider Garcia’s background

Mr. Garcia, whose political affiliation is not listed on public voting records, has overseen elections in Tarrant County since 2018. Before that, he had a similar role outside Sacramento in Placer County, Calif. He did not immediately respond to a request for comment on Tuesday.

Election deniers have fixated on Mr. Garcia’s previous employment with Smartmatic, an election technology company that faced baseless accusations of rigging the 2020 election and filed a $2.7 billion defamation lawsuit against Fox News that is similar to one brought by the voting machine company Dominion, which was settled on Tuesday. He had several roles with Smartmatic over more than a dozen years, ending in 2016, according to his LinkedIn profile. His work for the company in Venezuela, a favorite foil of the right wing because of its troubled socialist government, has been a focus of conspiracy theorists.

What he said about the threats

“I could not sleep that night, I just sat in the living room, until around 3:00 a.m., just waiting to see if anyone had read this and decided to act on it.”

— From Mr.
Garcia’s written testimony last year, describing the toll that the posting of his address online, along with other threats, had taken on him and his family.

Other election officials who have quit

All three election officials resigned last year in another Texas county, Gillespie — at least one of whom cited repeated death threats and stalking.

A rural Virginia county about 70 miles west of Richmond lost its entire elections staff this year after an onslaught of baseless voter fraud claims, NBC News reported.

Read more

Election officials have resorted to an array of heightened security measures as threats against them have intensified, including hiring private security, fireproofing and erecting fencing around a vote tabulation center.

The threats have led to several arrests by a Justice Department task force that was created in 2021 to focus on attempts to intimidate election officials.

  • in

    Before Midterms, Election Officials Increase Security Over Threats

In Wisconsin, one of the nation’s key swing states, cameras and plexiglass now fortify the reception area of a county election office in Madison, the capital, after a man wearing camouflage and a mask tried to open locked doors during an election in April.

In another bellwether area, Maricopa County, Ariz., where beleaguered election workers had to be escorted through a scrum of election deniers to reach their cars in 2020, a security fence was added to protect the perimeter of a vote tabulation center.

And in Colorado, the state’s top election official, Jena Griswold, the secretary of state and a Democrat, resorted to paying for private security out of her budget after a stream of threats.

As the nation hurtles closer to the midterm elections, those who will oversee them are taking a range of steps to beef up security for themselves, their employees, polling places and even drop boxes, tapping state and federal funding for a new set of defenses. The heightened vigilance comes as violent rhetoric from the right intensifies and as efforts to intimidate election officials by those who refuse to accept the results of the 2020 election become commonplace.

Discussing security in a recent interview with The Times, Ms. Griswold, 37, said that threats of violence had kept her and her aides up late at night as they combed through comments on social media.

At a right-wing group’s gathering in Colorado earlier this year, she said, a prominent election denier with militia ties suggested that she should be killed. That was when she concluded that her part-time security detail provided by the Colorado State Patrol wasn’t enough.

“They called for me to be hung,” said Ms. Griswold, who is running for re-election. “It’s a long weekend. I’m home alone, and I only get seven hours of State Patrol coverage.”

Even in places where there was never a shadow of a doubt about the political leanings of the electorate, election officials have found themselves under threat.
In a Texas county that President Donald J. Trump won by 59 percentage points in 2020, all three election officials recently resigned, with at least one citing repeated death threats and stalking.

One in five local election officials who responded to a survey earlier this year by the Brennan Center for Justice said that they were “very” or “somewhat unlikely” to continue serving through 2024. The collective angst is a recurring theme at workshops and conferences attended by election officials, who say it is not unusual for them to exchange anecdotes about threatening messages or harassment at the grocery store. The discussions have turned at times to testing drop boxes — a focus of right-wing attacks on mail-in voting — to see if they can withstand being set on fire.

Benjamin Hovland, a member of the U.S. Election Assistance Commission, described the intimidation campaign as pervasive.

“This isn’t a red-state issue or a blue-state issue,” Mr. Hovland said in a recent interview.
“This is a national issue, where the professional public servants that run our elections have been subjected to an unprecedented level of threats, harassment and intimidating behavior.”

In guidance issued in June, the Election Assistance Commission allowed for federal election grants to be used for physical security services and to monitor threats on social media.

A poll worker sorting absentee ballots in Madison, Wis., in August. Officials recently budgeted $95,000 to start designing a more secure election center in the county. Jamie Kelter Davis for The New York Times

In Wisconsin’s Dane County, which includes Madison, partisan poll watchers and a brigade of lawyers with the Trump campaign descended in 2020 to dispute the election results. County officials recently budgeted $95,000 to start designing a new and more secure election center.

The move came after the U.S. Department of Homeland Security conducted a risk assessment in April on the current election offices for the county and city, which are housed in the same building.

“It’s kind of a sieve,” Scott McDonell, a Democrat and the county’s clerk for the past decade, said in an interview.

  • in

    Germany Struggles to Stop Online Abuse Ahead of Election

Scrolling through her social media feed, Laura Dornheim is regularly stopped cold by a new blast of abuse aimed at her, including from people threatening to kill or sexually assault her. One person last year said he looked forward to meeting her in person so he could punch her teeth out.

Ms. Dornheim, a candidate for Parliament in Germany’s election on Sunday, is often attacked for her support of abortion rights, gender equality and immigration. She flags some of the posts to Facebook and Twitter, hoping that the platforms will delete the posts or that the perpetrators will be barred. She’s usually disappointed.

“There might have been one instance where something actually got taken down,” Ms. Dornheim said.

Harassment and abuse are all too common on the modern internet. Yet it was supposed to be different in Germany. In 2017, the country enacted one of the world’s toughest laws against online hate speech. It requires Facebook, Twitter and YouTube to remove illegal comments, pictures or videos within 24 hours of being notified about them or risk fines of up to 50 million euros, or $59 million. Supporters hailed it as a watershed moment for internet regulation and a model for other countries.

But an influx of hate speech and harassment in the run-up to the German election, in which the country will choose a new leader to replace Angela Merkel, its longtime chancellor, has exposed some of the law’s weaknesses. Much of the toxic speech, researchers say, has come from far-right groups and is aimed at intimidating female candidates like Ms. Dornheim.

Some critics of the law say it is too weak, with limited enforcement and oversight. They also maintain that many forms of abuse are deemed legal by the platforms, such as certain kinds of harassment of women and public officials.
And when companies do remove illegal material, critics say, they often do not alert the authorities or share information about the posts, making prosecutions of the people publishing the material far more difficult. Another loophole, they say, is that smaller platforms like the messaging app Telegram, popular among far-right groups, are not subject to the law.

Free-expression groups criticize the law on other grounds. They argue that the law should be abolished not only because it fails to protect victims of online abuse and harassment, but also because it sets a dangerous precedent for government censorship of the internet.

The country’s experience may shape policy across the continent. German officials are playing a key role in drafting one of the world’s most anticipated new internet regulations, a European Union law called the Digital Services Act, which will require Facebook and other online platforms to do more to address the vitriol, misinformation and illicit content on their sites. Ursula von der Leyen, a German who is president of the European Commission, the 27-nation bloc’s executive arm, has called for an E.U. law that would list gender-based violence as a special crime category, a proposal that would include online attacks.

“Germany was the first to try to tackle this kind of online accountability,” said Julian Jaursch, a project director at the German think tank Stiftung Neue Verantwortung, which focuses on digital issues. “It is important to ask whether the law is working.”

Campaign billboards in Germany’s race for chancellor, showing, from left, Annalena Baerbock of the Green Party, Olaf Scholz of the Social Democrats and Christian Lindner of the Free Democrats. Sean Gallup/Getty Images

Marc Liesching, a professor at HTWK Leipzig who published an academic report on the policy, said that of the posts that had been deleted by Facebook, YouTube and Twitter, a vast majority were classified as violating company policies, not the hate speech law.
That distinction makes it harder for the government to measure whether companies are complying with the law. In the second half of 2020, Facebook removed 49 million pieces of “hate speech” based on its own community standards, compared with the 154 deletions that it attributed to the German law, he found.

The law, Mr. Liesching said, “is not relevant in practice.”

With its history of Nazism, Germany has long tried to balance free speech rights against a commitment to combat hate speech. Among Western democracies, the country has some of the world’s toughest laws against incitement to violence and hate speech. Targeting religious, ethnic and racial groups is illegal, as are Holocaust denial and displaying Nazi symbols in public.

To address concerns that companies were not alerting the authorities to illegal posts, German policymakers this year passed amendments to the law. They require Facebook, Twitter and YouTube to turn over data to the police about accounts that post material that German law would consider illegal speech. The Justice Ministry was also given more powers to enforce the law.

“The aim of our legislative package is to protect all those who are exposed to threats and insults on the internet,” Christine Lambrecht, the justice minister, who oversees enforcement of the law, said after the amendments were adopted. “Whoever engages in hate speech and issues threats will have to expect to be charged and convicted.”

Germans will vote for a leader to replace Angela Merkel, the country’s longtime chancellor. Markus Schreiber/Associated Press

Facebook and Google have filed a legal challenge to block the new rules, arguing that providing the police with personal information about users violates their privacy.

Facebook said that as part of an agreement with the government it now provided more figures about the complaints it received.
From January through July, the company received more than 77,000 complaints, which led it to delete or block about 11,500 pieces of content under the German law, known as NetzDG.

“We have zero tolerance for hate speech and support the aims of NetzDG,” Facebook said in a statement.

Twitter, which received around 833,000 complaints and removed roughly 81,000 posts during the same period, said a majority of those posts did not fit the definition of illegal speech, but still violated the company’s terms of service.

“Threats, abusive content and harassment all have the potential to silence individuals,” Twitter said in a statement. “However, regulation and legislation such as this also has the potential to chill free speech by emboldening regimes around the world to legislate as a way to stifle dissent and legitimate speech.”

YouTube, which received around 312,000 complaints and removed around 48,000 pieces of content in the first six months of the year, declined to comment other than saying it complies with the law.

The amount of hate speech has become increasingly pronounced during election season, according to researchers at Reset and HateAid, organizations that track online hate speech and are pushing for tougher laws.

The groups reviewed nearly one million comments on far-right and conspiratorial groups across about 75,000 Facebook posts in June, finding that roughly 5 percent were “highly toxic” or violated the online hate speech law. Some of the worst material, including messages with Nazi symbolism, had been online for more than a year, the groups found. Of 100 posts reported by the groups to Facebook, roughly half were removed within a few days, while the others remain online.

The election has also seen a wave of misinformation, including false claims about voter fraud.

Annalena Baerbock, the 40-year-old leader of the Green Party and the only woman among the top candidates running to succeed Ms.
Merkel, has been the subject of an outsize amount of abuse compared with her male rivals from other parties, including sexist slurs and misinformation campaigns, according to researchers.

Ms. Baerbock, the Green Party candidate for chancellor, taking a selfie with one of her supporters. Laetitia Vancon for The New York Times

Others have stopped running altogether. In March, a former Syrian refugee running for the German Parliament, Tareq Alaows, dropped out of the race after experiencing racist attacks and violent threats online.

While many policymakers want Facebook and other platforms to be aggressive in screening user-generated content, others have concerns about private companies making decisions about what people can and can’t say. The far-right party Alternative for Germany, which has criticized the law for unfairly targeting its supporters, has vowed to repeal the policy “to respect freedom of expression.”

Jillian York, an author and free speech activist with the Electronic Frontier Foundation in Berlin, said the German law encouraged companies to remove potentially offensive speech that is perfectly legal, undermining free expression rights.

“Facebook doesn’t err on the side of caution, they just take it down,” Ms. York said. Another concern, she said, is that less democratic countries such as Turkey and Belarus have adopted laws similar to Germany’s so that they could classify certain material critical of the government as illegal.

Renate Künast, a former government minister who once invited a journalist to accompany her as she confronted individuals in person who had targeted her with online abuse, wants to see the law go further. Victims of online abuse should be able to go after perpetrators directly for libel and financial settlements, she said.
Without that ability, she added, online abuse will erode political participation, particularly among women and minority groups.

In a survey of more than 7,000 German women released in 2019, 58 percent said they did not share political opinions online for fear of abuse.

“They use the verbal power of hate speech to force people to step back, leave their office or not to be candidates,” Ms. Künast said.

The Reichstag, where the German Parliament convenes, in Berlin. Emile Ducke for The New York Times

Ms. Dornheim, the Berlin candidate, who has a master’s degree in computer science and used to work in the tech industry, said more restrictions were needed. She described getting her home address removed from public records after somebody mailed a package to her house during a particularly bad bout of online abuse.

Yet, she said, the harassment has only steeled her resolve.

“I would never give them the satisfaction of shutting up,” she said.

  • in

    Have Trump’s Lies Wrecked Free Speech?

A debate has broken out over whether the once-sacrosanct constitutional protection of the First Amendment has become a threat to democracy.

Mr. Edsall contributes a weekly column from Washington, D.C. on politics, demographics and inequality.

Jan. 6, 2021

The president in Georgia on Monday. Erin Schaff/The New York Times

In the closing days of his presidency, Donald Trump has demonstrated that he can make innumerable false claims and assertions that millions of Republican voters will believe and more than 150 Republican members of the House and Senate will embrace.

“The formation of public opinion is out of control because of the way the internet is forming groups and dispersing information freely,” Robert C. Post, a Yale law professor and former dean, said in an interview.

Before the advent of the internet, Post noted,

People were always crazy, but they couldn’t find each other, they couldn’t talk and disperse their craziness.
Now we are confronting a new phenomenon and we have to think about how we regulate that in a way which is compatible with people’s freedom to form public opinion.

Trump has brought into sharp relief the vulnerability of democracy in the midst of a communication upheaval more pervasive in its impact, both destructive and beneficial, than the invention of radio and television in the 20th Century.

In making, embracing and disseminating innumerable false statements, Trump has provoked a debate among legal scholars over whether the once-sacrosanct constitutional protection of free speech has itself become a threat to democracy by enabling the widespread and instantaneous transmission of lies in the service of political gain.

In the academic legal community, there are two competing schools of thought concerning how to go about restraining the proliferation of flagrant misstatements of fact in political speech.

Richard Hasen, at the University of California-Irvine Law School, described some of the more radical reform thinking in an email:

There is a cadre of scholars, especially younger ones, who believe that the First Amendment balance needs to be struck differently in the digital age. The greatest threat is no longer censorship, but deliberate disinformation aimed at destabilizing democratic institutions and civic competence.

Hasen argues:

Change is urgent to deal with election pathologies caused by the cheap speech era, but even legal changes as tame as updating disclosure laws to apply to online political ads could face new hostility from a Supreme Court taking a libertarian marketplace-of-ideas approach to the First Amendment. As I explain, we are experiencing a market failure when it comes to reliable information voters need to make informed choices and to have confidence in the integrity of our electoral system.
But the Court may stand in the way of necessary reform.

Those challenging the viability of applying free speech jurisprudence to political speech face a barrage of criticism from legal experts who contend that the blame for current political crises should not fall on the First Amendment.

Robert Post, for example, contends that the amendment is essential to self-governance because

a functioning democracy requires both that citizens feel free to participate in the formation of public opinion and that they are able to access adequate accurate information about public matters. Insofar as it protects these values, the First Amendment serves as a crucial tool of self-governance. In the absence of self-governance, government is experienced as compulsion, as being told what to think and what to do. That’s not a desirable situation.

Post added: “As we try to adapt the First Amendment to contemporary issues, we have to be clear about the values we wish to protect, so that we don’t throw the baby out with the bath water.”

Toni M. Massaro, a law professor at the University of Arizona, who with Helen L. Norton, a law professor at the University of Colorado, co-authored a December 2020 paper “Free Speech and Democracy: A Primer for 21st Century Reformers,” makes a related point in an email:

Free speech theorists have lots to be anxious about these days as we grapple with abiding faith in the many virtues of free expression while coping with the undeniable reality that it can — irony runs deep — undermine free expression itself.

Massaro added:

Those who believe in democracy’s virtues, as I do, need to engage the arguments about its threats. And those who believe in the virtues of free speech, as I also do, need to be cleareyed about the information distortions and gross inequalities and other harms to democratic and other public goods it produces. So our generation absolutely is up at bat here.
We all need to engage the Wu question ‘is free speech obsolete?’ lest it become so through inattention to the gravity of the threats it faces and poses.

Helen Norton, in a separate email, expanded on the different vantage points in the legal community. On one side are those “who privilege democratic self-governance” and who are more likely to be concerned “about whether and when speech threatens free speech and democracy.” On the other side are

the many, past and present, who privilege individual autonomy and are more comfortable with the premise that more speech is always better. I’d describe it as a difference in one’s preferred theory of and perspective on the First Amendment.

Other legal scholars emphasize the inherent difficulties in resolving speech-related issues. Rebecca Tushnet, a law professor at Harvard, wrote by email:

Those are some big questions and I don’t think they have yes-or-no answers. These are not new arguments but they have new forms, and changes in both economic organization and technology make certain arguments more or differently salient than they used to be.

Tushnet described the questions raised by those calling for major reform of the interpretation and application of the First Amendment as “legitimate,” but pointed out that this “doesn’t mean they’ll get taken seriously by this Supreme Court, which was constituted precisely to avoid any ‘progressive’ constitutional interpretation.”

In certain respects, the divide in the American legal community reflects some of the differences that characterize American and European approaches to issues of speech, including falsehoods and hate speech. Noah Feldman, a law professor at Harvard, described this intercontinental split in a March 2017 column for Bloomberg:

U.S. constitutional tradition treats hate speech as the advocacy of racist or sexist ideas. They may be repellent, but because they count as ideas, they get full First Amendment protection. Hate speech can only be banned in the U.S.
if it is intended to incite imminent violence and is actually likely to do so. This permissive U.S. attitude is highly unusual. Europeans don’t consider hate speech to be valuable public discourse and reserve the right to ban it. They consider hate speech to degrade from equal citizenship and participation. Racism isn’t an idea; it’s a form of discrimination.

The underlying philosophical difference here is about the right of the individual to self-expression. Americans value that classic liberal right very highly — so highly that we tolerate speech that might make others less equal. Europeans value the democratic collective and the capacity of all citizens to participate fully in it — so much that they are willing to limit individual rights.

Tim Wu, a law professor at Columbia and a contributing opinion writer for The Times, is largely responsible for pushing the current debate onto center stage, with the 2018 publication in the Michigan Law Review of his essay, “Is the First Amendment Obsolete?”

“The First Amendment was brought to life in a period, the twentieth century, when the political speech environment was markedly differently than today’s,” Wu wrote.
The basic presumption then was “that the greatest threat to free speech was direct punishment of speakers by government.” Now, in contrast, he argued, those, including Trump, “who seek to control speech use new methods that rely on the weaponization of speech itself, such as the deployment of ‘troll armies,’ the fabrication of news, or ‘flooding’ tactics.”

Instead of protecting speech, the First Amendment might need to be invoked now to constrain certain forms of speech, in Wu’s view:

Among emerging threats are the speech-control techniques linked to online trolling, which seek to humiliate, harass, discourage, and even destroy targeted speakers using personal threats, embarrassment, and ruining of their reputations.

The techniques used to silence opponents “rely on the low cost of speech to punish speakers.”

Wu’s conclusion:

The emerging threats to our political speech environment have turned out to be different from what many predicted — for few forecast that speech itself would become a weapon of state-sponsored censorship. In fact, some might say that celebrants of open and unfettered channels of internet expression (myself included) are being hoisted on their own petard, as those very same channels are today used as ammunition against disfavored speakers. As such, the emerging methods of speech control present a particularly difficult set of challenges for those who share the commitment to free speech articulated so powerfully in the founding — and increasingly obsolete — generation of First Amendment jurisprudence.

I asked Wu if he has changed his views since the publication of his paper, and he wrote back:

No, and indeed I think the events of the last four years have fortified my concerns. The premise of the paper is that Americans cannot take the existence of the First Amendment as serving as an adequate guarantee against malicious speech control and censorship. To take another metaphor it can be not unlike the fortified castle in the age of air warfare.
Still useful, still important, but obviously not the full kind of protection one might need against the attacks on the speech environment going on right now.

That said, Wu continued, "my views have been altered in a few ways." Now, Wu said, he would give stronger emphasis to the importance of "the president's creation of his own filter bubble" in which

the president creates an entire attentional ecosystem that revolves around him, what he and his close allies do, and the reactions to it — centered on Twitter, but then spreading onward through affiliated sites, Facebook & Twitter filters. It has dovetailed with the existing cable news and talk radio ecosystems to form a kind of seamless whole, a system separate from the conventional idea of discourse, debate, or even fact.

At the same time, Wu wrote that he would de-emphasize the role of troll armies, which "has proven less significant than I might have suggested in the 2018 piece."

Miguel Schor, a professor at Drake University Law School, elaborated on Wu's arguments in a December 2020 paper, "Trumpism and the Continuing Challenges to Three Political-Constitutionalist Orthodoxies."

New information technologies, Schor writes,

are the most worrisome of the exogenous shocks facing democracies because they undermine the advantages that democracies once enjoyed over authoritarianism.

Democracies, Schor continued, "have muddled through profound crises in the past, but they were able to count on a functioning marketplace of ideas" that gave the public the opportunity to weigh competing arguments, policies, candidates and political parties, and to weed out lies and false claims.
That marketplace, however, has become corrupted by "information technologies" that "facilitate the transmission of false information while destroying the economic model that once sustained news reporting." Now, false information "spreads virally via social networks as they lack the guardrails that print media employs to check the flow of information."

To support his case that traditional court interpretation of the First Amendment no longer serves to protect citizens from the flood tide of purposely false information, Schor cited the 2012 Supreme Court case United States v. Alvarez, which, Schor wrote, "concluded that false statements of fact enjoyed the same protection as core political speech for fear that the government would otherwise be empowered to create an Orwellian ministry of truth."

In the Alvarez case, Justice Anthony Kennedy wrote that

the remedy for speech that is false is speech that is true. This is the ordinary course in a free society. The response to the unreasoned is the rational; to the uninformed, the enlightened; to the straight-out lie, the simple truth.

Kennedy added at the conclusion of his opinion:

The Nation well knows that one of the costs of the First Amendment is that it protects the speech we detest as well as the speech we embrace.

Kennedy cited Oliver Wendell Holmes Jr.'s famous 1919 dissent in Abrams v. United States:

The best test of truth is the power of the thought to get itself accepted in the competition of the market.

In practice, Schor argued, the Supreme Court's Alvarez decision

stood Orwell on his head by broadly protecting lies.
The United States currently does have an official ministry of truth in the form of the president's bully pulpit, which Trump has used to normalize lying.

Along parallel lines, Sanford Levinson, a law professor at the University of Texas, argued in an email that "today, things are remarkably different" from the environment in the 20th century, when much of the body of free speech law was codified: "Speech can be distributed immediately to vast audiences. The 'market of ideas' may be increasingly siloed," Levinson wrote, as "faith in the invisible hand is simply gone. The evidence seems overwhelming that falsehood is just as likely to prevail."

In that context, Levinson raised the possibility that the United States might emulate post-WWII Germany, which "adopted a strong doctrine of 'militant democracy,'" banning the neo-Nazi and Communist parties (the latter later than the former):

Can/should we really wait until there is a "clear and present danger" to the survival of a democratic system before suppressing speech that is antagonistic to the survival of liberal democracy? Most Americans rejected "militant democracy" in part, I believe, because we were viewed as much too strong to need that kind of doctrine. But I suspect there is more interest in the concept inasmuch as it is clear that we're far less strong than we imagined.

Lawrence Lessig, a law professor at Harvard, was outspoken in his call for reform of free speech law:

There's a very particular reason why this more recent change in technology has become so particularly destructive: it is not just the technology, but also the changes in the business model of media that those changes have inspired. The essence is that the business model of advertising, added to the editor-free world of the internet, means that it pays for them to make us crazy.
Think about the comparison to the processed food industry: they, like the internet platforms, have a business that exploits a human weakness, they profit the more they exploit, the more they exploit, the sicker we are.

All of this means, Lessig wrote by email, that

the First Amendment should be changed — not in the sense that the values the First Amendment protects should be changed, but the way in which it protects them needs to be translated in light of these new technologies/business models.

Lessig dismissed fears that reforms could result in worsening the situation:

How dangerous is it to "tinker" with the First Amendment? How dangerous is it not to tinker with the doctrine that constitutes the First Amendment, given the context has changed so fundamentally?

Randall Kennedy, who is also a law professor at Harvard, made the case in an email that new internet technologies demand major reform of the scope and interpretation of the First Amendment, and he, too, argued that the need for change outweighs the risks: "Is that dangerous? Yes. But stasis is dangerous too. There is no safe harbor from danger."

Kennedy described one specific reform he had in mind:

A key distinction in the law now has to do with the state action doctrine. The First Amendment is triggered only when state action censors. The First Amendment protects you from censorship by the state or the United States government. The First Amendment, however, does not similarly protect you from censorship by Facebook or The New York Times. To the contrary, under current law Facebook and The New York Times can assert a First Amendment right to exclude anyone whose opinions they abhor.
But just suppose the audience you seek to reach is only reachable via Facebook or The New York Times?

The application of First Amendment protection from censorship by large media companies could be achieved by following the precedent of the court's abolition of whites-only primaries in the Deep South, Kennedy argued:

Not so long ago, political parties were viewed as "private" and thus outside the reach of the federal constitution. Thus, up until the late 1940s, the Democratic Party in certain Deep South states excluded any participation by Blacks in party primaries. The white primary was ended when the courts held that political parties played a governmental function and thus had to conduct themselves according to certain minimal constitutional standards — i.e., allow Blacks to participate.

Wu, Schor and others are not without prominent critics, whose various assertions include the idea that attempts to constrain lying through radical change in the interpretation of the First Amendment risk significant damage to a pillar of democracy; that the concerns of Wu and others can be remedied through legislation and don't require constitutional change; and that polarization, not an outdated application of the First Amendment, is the dominant force inflicting damage on the political system.

In one of the sharpest critiques I gathered, Laurence H. Tribe, emeritus professor at Harvard Law School, wrote in an email:

We are witnessing a reissue, if not a simple rerun, of an old movie.
With each new technology, from mass printing to radio and then television, from film to broadcast TV to cable and then the internet, commentators lamented that the freedoms of speech, press, and assembly enshrined in a document ratified in 1791 were ill-adapted to the brave new world and required retooling in light of changed circumstances surrounding modes of communication.

Tribe added: "To the limited degree those laments were ever warranted, the reason was a persistent misunderstanding of how constitutional law properly operates and needs to evolve."

The core principles underlying the First Amendment, Tribe wrote, "require no genuine revision unless they are formulated in ways so rigid and inflexible that they will predictably become obsolete as technological capacities and limitations change," adding that

occasions for sweeping revision in something as fundamental to an open society as the First Amendment are invariably dangerous, inviting as they do the infusion of special pleading into the basic architecture of the republic.

In this light, Tribe argued

that the idea of adopting a more European interpretation of the rights of free speech — an interpretation that treats the dangers that uncensored speech can pose for democracy as far more weighty than the dangers of governmentally imposed limitations — holds much greater peril than possibility if one is searching for a more humane and civil universe of public discourse in America.

Tribe concluded his email by citing his speech at the First Annual Conference on Computers, Freedom and Privacy in San Francisco in March 1991, "The Constitution in Cyberspace":

If we should ever abandon the Constitution's protections for the distinctively and universally human, it won't be because robotics or genetic engineering or computer science have led us to deeper truths but, rather, because they have seduced us into more profound confusions.
Science and technology open options, create possibilities, suggest incompatibilities, generate threats. They do not alter what is "right" or what is "wrong." The fact that those notions are elusive and subject to endless debate need not make them totally contingent upon contemporary technology.

Jack Balkin, a law professor at Yale, takes a different tack. In an email, he makes a detailed case that the source of the problems cited by Wu and others is not the First Amendment but the interaction of digital business practices, political polarization and the decline of trusted sources of information, especially newspapers.

"Our problems grow out of business models of private companies that are key governors of speech," Balkin wrote, arguing that these problems can be addressed by "a series of antitrust, competition, consumer protection, privacy and telecommunications law reforms."

Balkin continued:

The problem of propaganda that Tim Wu has identified is not new to the digital age, nor is the problem of speech that exacerbates polarization. In the United States, at least, both problems were created and fostered by predigital media.

Instead, Balkin contended:

The central problem we face today is not too much protection for free speech but the lack of new trustworthy and trusted intermediate institutions for knowledge production and dissemination. Without these institutions, the digital public sphere does not serve democracy very well.

A strong and vigorous political system, in Balkin's view,

has always required more than mere formal freedoms of speech. It has required institutions like journalism, educational institutions, scientific institutions, libraries, and archives. Law can help foster a healthy public sphere by giving the right incentives for these kinds of institutions to develop. Right now, journalism in the United States is dying a slow death, and many parts of the United States are news deserts — they lack reliable sources of local news.
The First Amendment is not to blame for these developments, and cutting back on First Amendment protections will not save journalism. Nevertheless, when key institutions of knowledge production and dissemination are decimated, demagogues and propagandists thrive.

Erwin Chemerinsky, dean of the law school at Berkeley, responded to my inquiry by email, noting that the "internet and social media have benefits and drawbacks with regard to speech."

On the plus side, he wrote,

the internet and social media have democratized the ability to reach a large audience. It used to be that to do so took owning a newspaper or having a broadcast license. Now anyone with a smartphone or access to a library can do so. The internet provides immediate access to infinite knowledge and information.

On the negative side, Chemerinsky noted:

It is easy to spread false information. Deep fakes are a huge potential problem. People can be targeted and harassed or worse. The internet and social media have caused the failure of many local papers. Who will be there to do the investigative reporting, especially at the local level? It is so easy now for people to get the information that reinforces their views, fostering polarization.

Despite these drawbacks, Chemerinsky wrote that he is

very skeptical of claims that this makes the traditional First Amendment obsolete or that there needs to be a major change in First Amendment jurisprudence. I see all of the problems posed by the internet and social media, but don't see a better alternative. Certainly, greater government control is worse. As for the European approach, I am skeptical that it has proven any better at balancing the competing considerations.
For example, the European bans on hate speech have not decreased hate and often have been used against political messages or mild speech that a prosecutor doesn't like.

Geoffrey Stone, a professor at the University of Chicago Law School, voiced his strong support for First Amendment law while acknowledging that Wu and others have raised legitimate questions. In an email, Stone wrote:

I begin with a very strong commitment to current First Amendment doctrine. I think it has taken us a long time to get to where we are, and the current approach has stood us — and our democracy — in very good stead. In my view, the single greatest danger of allowing government regulation of speech is that those in power will manipulate their authority to silence their critics and to solidify their authority. One need only consider what the Trump administration would have done if it had had this power. In my view, nothing is more dangerous to a democracy than allowing those in authority to decide what ideas can and cannot be expressed.

Having said that, Stone continued,

I recognize that changes in the structure of public discourse can create other dangers that can undermine both public discourse and democracy.
But there should be a strong presumption against giving government the power to manipulate public discourse.

The challenge, Stone continued,

is whether there is a way to regulate social media in a way that will retain its extraordinary capacity to enable individual citizens to communicate freely in a way that was never before possible, while at the same time limiting the increasingly evident risks of abuse, manipulation and distortion.

In an email, Nathaniel Persily, a law professor at Stanford, declared flatly that "the First Amendment is not obsolete." Instead, he argued, "the universe of speech 'issues' and speech 'regulators' has expanded."

While much of the history of the First Amendment has "been focused on government suppression of dissenting speech," Persily continued,

most speech now takes place online, and that raises new concerns and new sources of authority. The relationship of governments to platforms to users has not been fleshed out yet. Indeed, Facebook, Google and Twitter have unprecedented power over the speech environment, and their content moderation policies may implicate more speech than formal law these days.

But, Persily warned, "government regulation of the platforms also raises speech concerns."

The complex and contentious debate over politicians' false claims, the First Amendment, the influence of the internet on politics and the destructive potential of new information technologies will almost certainly play out slowly over years, if not decades, in the courts, Congress and state legislatures.
This is likely to make the traditionalists who call for slow, evolutionary change the victors, and the more radical scholars the losers — by default rather than on the merits.

The two weeks between now and the inauguration will reveal how much more damage Trump, in alliance with a Republican Party complicit in a deliberate attempt to corrupt our political processes, can inflict on a nation that has shown itself to be extremely vulnerable to disinformation, falsehoods and propaganda — propaganda that millions don't know is not true.

As Congress is set to affirm the outcome of the 2020 presidential election, the words of Hannah Arendt, who fled Nazi Germany after being arrested in 1933, acquire new relevance.

In 1967, Arendt published "Truth and Politics" in The New Yorker:

The result of a consistent and total substitution of lies for factual truth is not that the lies will now be accepted as truth, and the truth defamed as lies, but that the sense by which we take our bearings in the real world — and the category of truth vs. falsehood is among the mental means to this end — is being destroyed.

The fragility of democracy had long been apparent.
In 1951, in "The Origins of Totalitarianism," Arendt wrote:

Never has our future been more unpredictable, never have we depended so much on political forces that cannot be trusted to follow the rules of common sense and self-interest — forces that look like sheer insanity, if judged by the standards of other centuries.

Totalitarianism required first blurring and then erasing the line between falsehood and truth, as Arendt famously put it:

In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and that nothing was true. … Mass propaganda discovered that its audience was ready at all times to believe the worst, no matter how absurd, and did not particularly object to being deceived because it held every statement to be a lie anyhow.

And here's Arendt in "Truth and Politics" again, sounding like she is talking about contemporary politics:

Freedom of opinion is a farce unless factual information is guaranteed and the facts themselves are not in dispute.

America in 2021 is a very different time and a very different place from the totalitarian regimes of the 20th century, but we should still listen to what Arendt is saying and heed her warning.