More stories

  • I Was Attacked by Donald Trump and Elon Musk. I Believe It Was a Strategy To Change What You See Online.

    When I worked at Twitter, I led the team that placed a fact-checking label on one of Donald Trump’s tweets for the first time. Following the violence of Jan. 6, I helped make the call to ban his account from Twitter altogether. Nothing prepared me for what would happen next.

    Backed by fans on social media, Mr. Trump publicly attacked me. Two years later, following his acquisition of Twitter and after I resigned my role as the company’s head of trust and safety, Elon Musk added fuel to the fire. I’ve lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move.

    This isn’t a story I relish revisiting. But I’ve learned that what happened to me wasn’t an accident. It wasn’t just personal vindictiveness or “cancel culture.” It was a strategy — one that affects not just targeted individuals like me, but all of us, as it is rapidly changing what we see online.

    Private individuals — from academic researchers to employees of tech companies — are increasingly the targets of lawsuits, congressional hearings and vicious online attacks. These efforts, staged largely by the right, are having their desired effect: Universities are cutting back on efforts to quantify abusive and misleading information spreading online. Social media companies are shying away from making the kind of difficult decisions my team did when we intervened against Mr. Trump’s lies about the 2020 election. Platforms had finally begun taking these risks seriously only after the 2016 election. Now, faced with the prospect of disproportionate attacks on their employees, companies seem increasingly reluctant to make controversial decisions, letting misinformation and abuse fester in order to avoid provoking public retaliation.

    These attacks on internet safety and security come at a moment when the stakes for democracy could not be higher. More than 40 major elections are scheduled to take place in 2024, including in the United States, the European Union, India, Ghana and Mexico. These democracies will most likely face the same risks of government-backed disinformation campaigns and online incitement of violence that have plagued social media for years. We should be worried about what happens next.

    My story starts with that fact check. In the spring of 2020, after years of internal debate, my team decided that Twitter should apply a label to a tweet of then-President Trump’s that asserted that voting by mail is fraud-prone, and that the coming election would be “rigged.” “Get the facts about mail-in ballots,” the label read.

    On May 27, the morning after the label went up, the White House senior adviser Kellyanne Conway publicly identified me as the head of Twitter’s site integrity team. The next day, The New York Post put several of my tweets making fun of Mr. Trump and other Republicans on its cover. I had posted them years earlier, when I was a student and had a tiny social media following of mostly my friends and family. Now, they were front-page news. Later that day, Mr. Trump tweeted that I was a “hater.”

    Legions of Twitter users, most of whom days prior had no idea who I was or what my job entailed, began a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed. The volume of Twitter notifications crashed my phone. Friends I hadn’t heard from in years expressed their concern. On Instagram, old vacation photos and pictures of my dog were flooded with threatening comments and insults. (A few commenters, wildly misreading the moment, used the opportunity to try to flirt with me.)

    I was embarrassed and scared. Up to that moment, no one outside of a few fairly niche circles had any idea who I was. Academics studying social media call this “context collapse”: things we post on social media with one audience in mind might end up circulating to a very different audience, with unexpected and destructive results. In practice, it feels like your entire world has collapsed.

    The timing of the campaign targeting me and my alleged bias suggested the attacks were part of a well-planned strategy. Academic studies have repeatedly pushed back on claims that Silicon Valley platforms are biased against conservatives. But the success of a strategy aimed at forcing social media companies to reconsider their choices may not require demonstrating actual wrongdoing. As the former Republican Party chair Rich Bond once described, maybe you just need to “work the refs”: repeatedly pressure companies into thinking twice before taking actions that could provoke a negative reaction. What happened to me was part of a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps.

    It worked. As violence unfolded at the Capitol on Jan. 6, Jack Dorsey, then the C.E.O. of Twitter, overruled Trust and Safety’s recommendation that Mr. Trump’s account should be banned because of several tweets, including one that attacked Vice President Mike Pence. He was given a 12-hour timeout instead (before being banned on Jan. 8). Within the boundaries of the rules, staff members were encouraged to find solutions to help the company avoid the type of blowback that results in angry press cycles, hearings and employee harassment. The practical result was that Twitter gave offenders greater latitude: Representative Marjorie Taylor Greene was permitted to violate Twitter’s rules at least five times before one of her accounts was banned in 2022. Other prominent right-leaning figures, such as the culture war account Libs of TikTok, enjoyed similar deference.

    Similar tactics are being deployed around the world to influence platforms’ trust and safety efforts. In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee’s home after the government asked us to block accounts involved in a series of protests. The harassment again paid off: Twitter executives decided any potentially sensitive actions in India would require top-level approval, a unique level of escalation of otherwise routine decisions.

    And when we wanted to disclose a propaganda campaign operated by a branch of the Indian military, our legal team warned us that our India-based employees could be charged with sedition — and face the death penalty if convicted. So Twitter only disclosed the campaign over a year later, without fingering the Indian government as the perpetrator.

    In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded.

    In each of these cases, the targeted staffers lacked the ability to do what was being asked of them by the government officials in charge, as the underlying decisions were made thousands of miles away in California. But because local employees had the misfortune of residing within the jurisdiction of the authorities, they were nevertheless the targets of coercive campaigns, pitting companies’ sense of duty to their employees against whatever values, principles or policies might cause them to resist local demands. Inspired, India and a number of other countries started passing “hostage-taking” laws to ensure social-media companies employ locally based staff.

    In the United States, we’ve seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and — in Twitter’s case — by the company’s new owner.

    One of the most recent forces in this campaign is the “Twitter Files,” a large assortment of company documents — many of them sent or received by me during my nearly eight years at Twitter — turned over at Mr. Musk’s direction to a handful of selected writers. The files were hyped by Mr. Musk as a groundbreaking form of transparency, purportedly exposing for the first time the way Twitter’s coastal liberal bias stifles conservative content.

    What they delivered was something else entirely. As the tech journalist Mike Masnick put it, after all the fanfare surrounding the initial release of the Twitter Files, in the end “there was absolutely nothing of interest” in the documents, and what little there was had significant factual errors. Even Mr. Musk eventually lost patience with the effort. But, in the process, the effort marked a disturbing new escalation in the harassment of employees of tech firms.

    Unlike the documents that would normally emanate from large companies, the earliest releases of the Twitter Files failed to redact the names of even rank-and-file employees. One Twitter employee based in the Philippines was doxxed and severely harassed. Others have become the subjects of conspiracies. Decisions made by teams of dozens in accordance with Twitter’s written policies were presented as having been made by the capricious whims of individuals, each pictured and called out by name. I was, by far, the most frequent target.

    The first installment of the Twitter Files came a month after I left the company, and just days after I published a guest essay in The Times and spoke about my experience working for Mr. Musk. I couldn’t help but feel that the company’s actions were, on some level, retaliatory. The next week, Mr. Musk went further by taking a paragraph of my Ph.D. dissertation out of context to baselessly claim that I condoned pedophilia — a conspiracy trope commonly used by far-right extremists and QAnon adherents to smear L.G.B.T.Q. people.

    The response was even more extreme than I experienced after Mr. Trump’s tweet about me. “You need to swing from an old oak tree for the treason you have committed. Live in fear every day,” said one of thousands of threatening tweets and emails. That post, and hundreds of others like it, were violations of the very policies I’d worked to develop and enforce. Under new management, Twitter turned a blind eye, and the posts remain on the site today.

    On Dec. 6, four days after the first Twitter Files release, I was asked to appear at a congressional hearing focused on the files and Twitter’s alleged censorship. In that hearing, members of Congress held up oversize posters of my years-old tweets and asked me under oath whether I still held those opinions. (To the extent the carelessly tweeted jokes could be taken as my actual opinions, I don’t.) Ms. Greene said on Fox News that I had “some very disturbing views about minors and child porn” and that I “allowed child porn to proliferate on Twitter,” warping Mr. Musk’s lies even further (and also extending their reach). Inundated with threats, and with no real options to push back or protect ourselves, my husband and I had to sell our home and move.

    Academia has become the latest target of these campaigns to undermine online safety efforts. Researchers working to understand and address the spread of online misinformation have increasingly become subjects of partisan attacks; the universities they’re affiliated with have become embroiled in lawsuits, burdensome public record requests and congressional proceedings. Facing seven-figure legal bills, even some of the largest and best-funded university labs have said they may have to abandon ship. Others targeted have elected to change their research focus based on the volume of harassment.

    Bit by bit, hearing by hearing, these campaigns are systematically eroding hard-won improvements in the safety and integrity of online platforms — with the individuals doing this work bearing the most direct costs.

    Tech platforms are retreating from their efforts to protect election security and slow the spread of online disinformation. Amid a broader climate of belt-tightening, companies have pulled back especially hard on their trust and safety efforts. As they face mounting pressure from a hostile Congress, these choices are as rational as they are dangerous.

    We can look abroad to see how this story might end. Where once companies would at least make an effort to resist outside pressure, they now largely capitulate by default. In early 2023, the Indian government asked Twitter to restrict posts critical of Prime Minister Narendra Modi. In years past, the company had pushed back on such requests; this time, Twitter acquiesced. When a journalist noted that such cooperation only incentivizes further proliferation of draconian measures, Mr. Musk shrugged: “If we have a choice of either our people go to prison or we comply with the laws, we will comply with the laws.”

    It’s hard to fault Mr. Musk for his decision not to put Twitter’s employees in India in harm’s way. But we shouldn’t forget where these tactics came from or how they became so widespread. From pushing the Twitter Files to tweeting baseless conspiracies about former employees, Mr. Musk’s actions have normalized and popularized vigilante accountability, and made ordinary employees of his company into even greater targets. His recent targeting of the Anti-Defamation League has shown that he views personal retaliation as an appropriate consequence for any criticism of him or his business interests. And, as a practical matter, with hate speech on the rise and advertiser revenue in retreat, Mr. Musk’s efforts seem to have done little to improve Twitter’s bottom line.

    What can be done to turn back this tide?

    Making the coercive influences on platform decision making clearer is a critical first step. And regulation that requires companies to be transparent about the choices they make in these cases, and why they make them, could help.

    In its absence, companies must push back against attempts to control their work. Some of these decisions are fundamental matters of long-term business strategy, like where to open (or not open) corporate offices. But companies have a duty to their staff, too: Employees shouldn’t be left to figure out how to protect themselves after their lives have already been upended by these campaigns. Offering access to privacy-promoting services can help. Many institutions would do well to learn the lesson that few spheres of public life are immune to influence through intimidation.

    If social media companies cannot safely operate in a country without exposing their staff to personal risk and company decisions to undue influence, perhaps they should not operate there at all. Like others, I worry that such pullouts would worsen the options left to people who have the greatest need for free and open online expression. But remaining in a compromised way could forestall a necessary reckoning with censorial government policies. Refusing to comply with morally unjustifiable demands, and facing blockages as a result, may in the long run provoke the necessary public outrage that can help drive reform.

    The broader challenge here — and perhaps the inescapable one — is the essential humanness of online trust and safety efforts. It isn’t machine learning models and faceless algorithms behind key content moderation decisions: it’s people. And people can be pressured, intimidated, threatened and extorted. Standing up to injustice, authoritarianism and online harms requires employees who are willing to do that work.

    Few people could be expected to take a job doing so if the cost is their life or liberty. We all need to recognize this new reality, and to plan accordingly.

    Yoel Roth is a visiting scholar at the University of Pennsylvania and the Carnegie Endowment for International Peace, and the former head of trust and safety at Twitter.

  • Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.

    It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

    Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.

    But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.

    These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

    ‘Filter bubbles’ and democracy

    Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to that used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

    But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.

    The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

    The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber is Destroying Democracy,” Wired Magazine claimed a few weeks later.

    Changing information doesn’t change minds

    But without rigorous testing, it’s been hard to figure out whether the filter bubble effect was real. The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania, Stanford and others.

    Meta gave unprecedented access to the researchers during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including pre-registering their research questions in advance, and Meta was only able to veto requests that would violate users’ privacy.

    The studies, taken together, suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

    “I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

    When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

    Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative.

    But it also has implications that are far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views. Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research.

    “From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship. “We’re seeing null effects on both factual views and attitudes.”

    As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.

  • Cambodia Strongman Hun Sen Wields Facebook to Undermine Democracy

    The Cambodian People’s Party created its Cyber War Room about a decade ago. The goal was to support Prime Minister Hun Sen’s regime through social media propagandizing. Led by the prime minister’s son Hun Manet, a troll army used Facebook and other digital platforms to attack his father’s opposition with disinformation and even, allegedly, death threats.

    Fast forward to the Cambodian election taking place next month. The CPP’s Cyber War Room is back up and running. General Manet, commander of the Cambodian Army and most likely the country’s next prime minister, is reportedly back at the helm, this time defending his father’s legacy and himself.

    Facebook is extremely popular in Cambodia, with roughly 12 million of the country’s almost 17 million people on the site. Many people in Cambodia use Facebook as a core means of getting information, and social media platforms are critical for the few journalists still producing independent reporting. The populations of many other countries where governments have continually used social media for manipulation, including the Philippines and Turkey, rely heavily on Facebook as well. So why has state-sponsored trolling like this been allowed to endure for 10 years?

    It will come as no surprise when I say that Big Tech has a lot of problems on its plate, including fury about transnational digital propaganda campaigns, a global outcry about networked disinformation during the pandemic and panic about both real and hypothetical threats of generative A.I.

    But as one issue pops into the immediate view, the others don’t go anywhere. Instead, the global problems with our online information ecosystem compound. And while society and tech’s most powerful firms jump from one issue to the next, the abusive disinformation practices in places like Cambodia become entrenched. Governments refine their techniques, and opposition groups become less and less present because they are either trolled into submission, arrested, exiled or killed. It all benefits Big Tech, from Meta to Alphabet, which publicly seizes upon the idea du jour while cutting staffs and curbing efforts aimed at combating standing informational issues.

    What does this mean for the people of Cambodia? For a people who, in living memory, endured the horrors of genocide and totalitarianism?

    The Cambodian news ecosystem and the lives of Cambodians are controlled by Prime Minister Hun Sen, who has led them in some capacity for 38 years. He is quick to justify his long reign by pointing to economic gains before Covid — by which time the country achieved lower-middle-income status through tourism, textile exports and a growing relationship with China. His people have languished in many other ways, however: Environmental degradation is rife, corruption is commonplace, and human rights abuses are worsening.

    Mr. Sen and his cronies own or control all but the thinnest sliver of the country’s media outlets. They recently banned the main opposition party from running in the coming election because of an alleged clerical error. And curtailing speech on social media has been critical to the consolidation of their power. Facebook, Telegram and other platforms have been central to the CPP’s illicit, strategic and authoritarian control of Cambodia’s information space and, consequently, public opinion.

    Other despots have made use of highly organized state-sponsored trolling outfits to quash dissent. Some, like Mr. Sen, have also hired their kids to run them. In Brazil, Jair Bolsonaro’s Office of Hate, run by his sons, used social media to defame journalists and threaten opposition. Recep Tayyip Erdogan, the autocrat recently re-elected as president of Turkey, benefited greatly from organized troll armies operating on Twitter. Back in Southeast Asia, the increasingly tyrannical regimes of Thailand, the Philippines and Myanmar have all deployed cyber-troops to do their oppressive bidding.

    Another factor is central to understanding why social media firms have failed to curb state-sponsored trolling around the globe: language.

    Facebook, YouTube, Instagram, Twitter and other platforms have overwhelmingly focused their efforts to counter harmful and purposely misleading content in English. One reason is that they are based in the United States. Another is the malignant supremacy of Western concerns. But the larger reason is that social media companies cannot or will not supply the resources necessary to moderate content in other languages — particularly those such as Cambodia’s Khmer, which is complex and spoken by about 18 million people worldwide. That’s a small number when compared with the roughly 1.5 billion who speak English.

    This issue is a major problem for our own democracy too. During the 2020 and 2022 elections, social media platforms failed spectacularly in quashing hateful and disenfranchising content aimed at the tens of millions of Americans who speak Spanish, Chinese, Korean, Tagalog and a variety of other languages. This resulted in communities of color and groups already marginalized in our political system bearing the brunt of digital hate and purposely false information about these contests. According to my research and work with community leaders, this structural disinformation causes apathy, anger and civic disenchantment among minority voters, and as a result, many don’t show up to vote.

    The strength of global democracy is tied to the number of countries around the world that truly practice it. And while the leaders of relatively strong democracies like the United States obsess over information technology problems and political spectacle in Washington, they fail to do their duty to protect the less fortunate, both in their own country and elsewhere. This, in turn, lets social media companies off the hook.

    I recently returned from a lecture tour in Cambodia, where I spoke to more than 12 groups of professional journalists, citizen reporters, scholars, students and activists about the informational and political challenges they face online and offline. All told me that they still use platforms like Facebook and Telegram to coordinate, organize and share information about breaking news and elections.

    Facebook is especially popular in the country, in part because of its controversial Free Basics program, which offers free internet in a number of developing countries via a constrained number of websites (including, naturally, Facebook). Critics derided this as less a benevolent bid to connect the world and more a heavy-handed effort to “capture more of the market in the name of connectivity.” The promise of social media — that it can be the conduit for communication in countries with controlled media systems — remains true for the people I spoke to in Phnom Penh and Sihanoukville. But this potential is quickly dwindling as people lose faith in the safety of online communication. Meanwhile, Facebook remains a potent means for disseminating propaganda.

    If Meta, Alphabet and other tech firms do not take swift action to curb state-sponsored trolling, and if policymakers and civil society groups in the United States and other democracies don’t put more pressure on authoritarians like Hun Sen, then Cambodians and many others around the world will lose one of their last means of fighting back. We must speak out about the oppression surrounding the Cambodian election, which takes place on July 23 — and speak out about digital injustice.

    Samuel Woolley is the author of “Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity” and a faculty member at the University of Texas at Austin.

  • Hun Sen’s Facebook Page Goes Dark After Spat with Meta

    Prime Minister Hun Sen, an avid user of the platform, had vowed to delete his account after Meta’s oversight board said he had used it to threaten political violence.

    The usually very active Facebook account for Prime Minister Hun Sen of Cambodia appeared to have been deleted on Friday, a day after the oversight board for Meta, Facebook’s parent company, recommended that he be suspended from the platform for threatening political opponents with violence.

    The showdown pits the social media behemoth against one of Asia’s longest-ruling autocrats.

    Mr. Hun Sen, 70, has ruled Cambodia since 1985 and maintained power partly by silencing his critics. He is a staunch ally of China, a country whose support comes free of American-style admonishments on the value of human rights and democratic institutions.

    A note Friday on Mr. Hun Sen’s account, which had about 14 million followers, said that its content “isn’t available right now.” It was not immediately clear whether Meta had suspended the account or if Mr. Hun Sen had preemptively deleted it, as he had vowed to do in a post late Thursday on Telegram, a social media platform where he has a much smaller following.

    “That he stopped using Facebook is his private right,” Phay Siphan, a spokesman for the Cambodian government, told The New York Times on Friday. “Other Cambodians use it, and that’s their right.”

    The company-appointed oversight board for Meta had on Thursday recommended a minimum six-month suspension of Mr. Hun Sen’s accounts on Facebook and Instagram, which Meta also owns. The board also said that one of Mr. Hun Sen’s Facebook videos had violated Meta’s rules on “violence and incitement” and should be taken down.

    In the video, Mr. Hun Sen delivered a speech in which he responded to allegations of vote-stealing by calling on his political opponents to choose between the legal system and “a bat.”

    “If you say that’s freedom of expression, I will also express my freedom by sending people to your place and home,” Mr. Hun Sen said in the speech, according to Meta.

    Meta had previously decided to keep the video online under a policy that allows content violating Facebook’s community standards to remain up on the grounds that it is newsworthy and in the public interest. But the oversight board said on Thursday that it was overturning the decision, calling it “incorrect.”

    [Photo: A post on Facebook by Cambodian government official Duong Dara, which includes an image of the official Facebook page of Mr. Hun Sen. Tang Chhin Sothy/Agence France-Presse — Getty Images]

    The board added that its recommendation to suspend Mr. Hun Sen’s accounts for at least six months was justified given the severity of the violation and his “history of committing human rights violations and intimidating political opponents, and his strategic use of social media to amplify such threats.”

    Meta later said in a statement that it would remove the offending video to comply with the board’s decision. The company also said that it would respond to the suspension recommendation after analyzing it.

    Critics of Facebook have long said that the platform can undermine democracy, promote violence and help politicians unfairly target their critics, particularly in countries with weak institutions.

    Mr. Hun Sen has spent years cracking down on the news media and political opposition in an effort to consolidate his grip on power. In February, he ordered the shutdown of one of the country’s last independent news outlets, saying he did not like its coverage of his son and presumed successor, Lt. Gen. Hun Manet.

    Under Mr. Hun Sen, the government has also pushed for more surveillance of the internet, a move that rights groups say makes it even easier for the authorities to monitor and punish online content.

    Mr. Hun Sen’s large Facebook following may overstate his actual support. In 2018, one of his most prominent political opponents, Sam Rainsy, argued in a California court that the prime minister used so-called click farms to accumulate millions of counterfeit followers.

    Mr. Sam Rainsy, who lives in exile, also argued that Mr. Hun Sen had used Facebook to spread false news stories and death threats directed at political opponents. The court later denied his request that Facebook be compelled to release records of advertising purchases by Mr. Hun Sen and his allies.

    In 2017, an opposition political party that Mr. Sam Rainsy had led, the Cambodia National Rescue Party, was dissolved by the country’s highest court. More recently, the Cambodian authorities have disqualified other opposition parties from running in a general election next month.

    At a public event in Cambodia on Friday, Mr. Hun Sen said that his political opponents outside the country were surely happy with his decision to quit Facebook.

    “You have to be aware that if I order Facebook to be shut down in Cambodia, it will strongly affect you,” he added, speaking at an event for garment workers ahead of the general election. “But this is not the path that I choose.”

  • Facebook Failed to Stop Ads Threatening Election Workers

    The ads, submitted by researchers, were rejected by YouTube and TikTok.

    Facebook says it does not allow content that threatens serious violence. But when researchers submitted ads threatening to “lynch,” “murder” and “execute” election workers around Election Day this year, the company’s largely automated moderation systems approved many of them.

    Out of the 20 ads submitted by researchers containing violent content, 15 were approved by Facebook, according to a new test published by Global Witness, a watchdog group, and New York University’s Cybersecurity for Democracy. Researchers deleted the approved ads before they were published.

    Ten of the test ads were submitted in Spanish. Facebook approved six of those ads, compared with nine of the 10 ads in English.

    TikTok and YouTube rejected all the ads and suspended the accounts that attempted to submit them, the researchers said.

    The tests underscored the challenges that social networks face in moderating increasingly partisan and violent content around elections, especially as poll workers are facing additional threats from election deniers and far-right extremists. The Justice Department and other federal agencies warned ahead of this year’s midterm elections that threats against election workers had surged.

    In a statement, a spokesman for Meta, Facebook’s parent company, said it later disabled several accounts that had submitted the ads. The company also said that ads continue to be reviewed after they go live.

    “This is a small sample of ads that are not representative of what people see on our platforms,” the spokesman said in a statement. “Content that incites violence against election workers or anyone else has no place on our apps, and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms.”

    The research adds to previous tests conducted by the same groups, including one this year in which 20 ads were submitted containing political misinformation. In that test, Facebook approved only two of the misleading ads in English from an account in the United States, while TikTok approved about 90 percent of them. YouTube suspended the account that tried to submit the ads.

    But this round used more overt language that tested Facebook’s largely automated moderating system. Those included direct threats of violence, which were sourced from real statements by election deniers and other far-right extremists in the United States. One ad, which was approved by Facebook, read: “We are going to kill you all.” Another ad, which threatened to abuse children, was also approved.

    “It was really quite shocking to see the results,” said Damon McCoy, an associate professor at N.Y.U. “I thought a really simple keyword search would have flagged this for manual review.”

    In a statement, researchers also said they wanted to see social networks like Facebook increase content moderation efforts and offer more transparency around the moderation actions they take.

    “The fact that YouTube and TikTok managed to detect the death threats and suspend our account, whereas Facebook permitted the majority of the ads to be published shows that what we are asking is technically possible,” they wrote.