More stories

    AI firm considers banning creation of political images for 2024 elections

    The groundbreaking artificial intelligence image-generating company Midjourney is considering banning people from using its software to make political images of Joe Biden and Donald Trump as part of an effort to avoid being used to distract from or misinform about the 2024 US presidential election.

    “I don’t know how much I care about political speech for the next year for our platform,” Midjourney’s CEO, David Holz, said last week, adding that the company is close to “hammering” – or banning – political images, including those of the leading presidential candidates, “for the next 12 months”.

    In a conversation with Midjourney users in a chatroom on Discord, as reported by Bloomberg, Holz went on to say: “I know it’s fun to make Trump pictures – I make Trump pictures. Trump is aesthetically really interesting. However, probably better to just not, better to pull out a little bit during this election. We’ll see.”

    AI-generated imagery has recently become a concern. Two weeks ago, pornographic imagery featuring the likeness of Taylor Swift prompted lawmakers and the so-called Swifties who support the singer to demand stronger protections against AI-generated images. The Swift images were traced back to 4chan, a community message board often linked to the sharing of sexual, racist, conspiratorial, violent or otherwise antisocial material, with or without the use of AI.

    Holz’s comments come as image-generator operators play a game of cat and mouse with users, adding safeguards to prevent the creation of questionable content.

    AI in the political realm is causing increasing concern, though the MIT Technology Review recently noted that discussion about how AI may threaten democracy “lacks imagination”. “People talk about the danger of campaigns that attack opponents with fake images (or fake audio or video) because we already have decades of experience dealing with doctored images,” the review noted.
    It added: “We’re unlikely to be able to attribute a surprising electoral outcome to any particular AI intervention.”

    Still, the image-generation company Inflection AI said in October that the company’s chatbot, Pi, would not be allowed to advocate for any political candidate. Co-founder Mustafa Suleyman told a Wall Street Journal conference that chatbots “probably [have] to remain a human part of the process” even if they function perfectly.

    Meta’s Facebook said last week that it plans to label posts created using AI tools as part of a broader effort to combat election-year misinformation. Microsoft-affiliated OpenAI has said it will add watermarks to images made with its platforms to combat political deepfakes produced by AI.

    “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the company said in a blog post last month.

    OpenAI chief executive Sam Altman said at a recent event: “The thing that I’m most concerned about is that with new capabilities with AI … there will be better deepfakes than in 2020.”

    In January, a faked audio call purporting to be Joe Biden telling New Hampshire voters to stay home illustrated the potential of AI political manipulation. The FCC later announced a ban on AI-generated voices in robocalls.

    “What we’re really realizing is that the gulf between innovation, which is rapidly increasing, and our consideration – our ability as a society to come together to understand best practices, norms of behavior, what we should do, what should be new legislation – that’s still moving painfully slow,” David Ryan Polgar, the president of the non-profit All Tech Is Human, previously told the Guardian.

    Midjourney software was responsible for a fake image of Trump being handcuffed by agents.
    Others that have appeared online include Biden and Trump as elderly men knitting sweaters co-operatively, Biden grinning while firing a machine gun and Trump meeting Pope Francis in the White House.

    The software already has a number of safeguards in place. Midjourney’s community standards guidelines prohibit images that are “disrespectful, harmful, misleading public figures/events portrayals or potential to mislead”.

    Bloomberg noted that what is permitted or not permitted varies according to the software version used. An older version of Midjourney produced an image of Trump covered in spaghetti, but a newer version did not.

    But if Midjourney bans the generation of political images, consumers – among them voters – will probably be unaware.

    “We’ll probably just hammer it and not say anything,” Holz said.

    How Anti-Asian Activity Online Set the Stage for Real-World Violence

    On platforms such as Telegram and 4chan, racist memes and posts about Asian-Americans have created fear and dehumanization.

    In January, a new group popped up on the messaging app Telegram, named after an Asian slur. Hundreds of people quickly joined. Many members soon began posting caricatures of Asians with exaggerated facial features, memes of Asian people eating dog meat and images of American soldiers inflicting violence during the Vietnam War.

    This week, after a gunman killed eight people — including six women of Asian descent — at massage parlors in and near Atlanta, the Telegram channel linked to a poll that asked, “Appalled by the recent attacks on Asians?” The top answer, with 84 percent of the vote, was that the violence was “justified retaliation for Covid.”

    The Telegram group was a sign of how anti-Asian sentiment has flared up in corners of the internet, amplifying racist and xenophobic tropes just as attacks against Asian-Americans have surged. On messaging apps like Telegram and on internet forums like 4chan, anti-Asian groups and discussion threads have been increasingly active since November, especially on far-right message boards such as The Donald, researchers said.

    The activity follows a rise in anti-Asian misinformation last spring after the coronavirus, which first emerged in China, began spreading around the world. On Facebook and Twitter, people blamed the pandemic on China, with users posting hashtags such as #gobacktochina and #makethecommiechinesepay. Those hashtags spiked when former President Donald J. Trump last year called Covid-19 the “Chinese virus” and “Kung Flu.”

    While some of the online activity tailed off ahead of the November election, its re-emergence has helped lay the groundwork for real-world actions, researchers said.
    The fatal shootings in Atlanta this week, which have led to an outcry over the treatment of Asian-Americans even as the suspect said he was trying to cure a “sexual addiction,” were preceded by a swell of racially motivated attacks against Asian-Americans in places like New York and the San Francisco Bay Area, according to the advocacy group Stop AAPI Hate.

    “Surges in anti-Asian rhetoric online means increased risk of real-world events targeting that group of people,” said Alex Goldenberg, an analyst at the Network Contagion Research Institute at Rutgers University, which tracks misinformation and extremism online.

    He added that the anti-China coronavirus misinformation — including the false narrative that the Chinese government purposely created Covid-19 as a bioweapon — had created an atmosphere of fear and invective.

    Anti-Asian speech online has typically not been as overt as that of anti-Semitic or anti-Black groups, memes and posts, researchers said. On Facebook and Twitter, posts expressing anti-Asian sentiments have often been woven into conspiracy theory groups such as QAnon and into white nationalist and pro-Trump enclaves. Mr. Goldenberg said forms of hatred against Black people and Jews have deep roots in extremism in the United States and that the anti-Asian memes and tropes have been more “opportunistically weaponized.”

    But that does not make the anti-Asian hate speech online any less insidious. Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation, said the misinformation and racist speech have led to a “dehumanization” of certain groups of people and to an increased risk of violence.

    Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus.
    That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets.

    Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, usage of the term “Chinese virus” jumped 650 percent on Twitter; a day later, there was an 800 percent increase in its usage in conservative news articles, the study found.

    Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment at an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

    In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese.

    “There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

    Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment.

    Misinformation linking the coronavirus to anti-Asian beliefs also rose last year.
    Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm.

    In one example, a Fox News article from April that went viral baselessly said that the coronavirus was created in a lab in the Chinese city of Wuhan and intentionally released. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.

    By the middle of last year, the misinformation had started subsiding as election-related commentary increased. The anti-Asian sentiment ended up migrating to platforms like 4chan and Telegram, researchers said.

    But it still occasionally flared up, such as when Dr. Li-Meng Yan, a researcher from Hong Kong, made unproven assertions last fall that the coronavirus was a bioweapon engineered by China. In the United States, Dr. Yan became a right-wing media sensation. Her appearance on Tucker Carlson’s Fox News show in September has racked up at least 8.8 million views online.

    In November, anti-Asian speech surged anew. That was when conspiracies about a “new world order” related to President Biden’s election victory began circulating, said researchers from the Network Contagion Research Institute. Some posts that went viral painted Mr. Biden as a puppet of the Chinese Communist Party.

    In December, slurs about Asians and the term “Kung Flu” rose by 65 percent on websites and apps like Telegram, 4chan and The Donald, compared with the monthly average mentions from the previous 11 months on the same platforms, according to the Network Contagion Research Institute.
    The activity remained high in January and last month.

    During this second surge, calls for violence against Asian-Americans became commonplace. “Filipinos are not Asians because Asians are smart,” read a post in a Telegram channel that depicted a dog holding a gun to its head.

    After the shootings in Atlanta, a doctored screenshot of what looked like a Facebook post from the suspect circulated on Facebook and Twitter this week. The post featured a miasma of conspiracies about China engaging in a Covid-19 cover-up and wild theories about how it was planning to “secure global domination for the 21st century.”

    Facebook and Twitter eventually ruled that the screenshot was fake and blocked it. But by then, the post had been shared and liked hundreds of times on Twitter and more than 4,000 times on Facebook.

    Ben Decker

    Young Men, Alienation and Violence in the Digital Age

    As the world was forced into lockdown at the start of the COVID-19 pandemic, Alex Lee Moyer’s documentary “TFW No GF” was released online. The film focuses on an internet subculture of predominantly young, white men who already experienced much of life from the comfort of their own homes, pandemic notwithstanding.

    Its title, a reference to the 4chan-originated phrase “that feel when no girlfriend,” reveals the essence of its subjects’ grievances, described in the South by Southwest (SXSW) film festival program as beginning with a “lack of romantic companionship” and evolving into “a greater state of existence defined by isolation, rejection and alienation.” As one of the film’s subjects remarks early on: “Everyone my age kinda just grows up on the internet … 4chan was the only place that seemed real… I realized there were other people going through the same shit.”

    What does this level of alienation tell us about society today? And how seriously should we take the content found on this online patchwork of messaging boards and forums, each with its own language and visual culture that may at first seem humorous or ironic, but often disguises misogyny, racism and violence? These are difficult and urgent questions, particularly given the emergent incel phenomenon — “incel” being a portmanteau of “involuntary celibate” — which appears to be gaining in strength online.

    Virtual Expressions

    The idea of virtual expressions of alienation and rage translating into actual violence remains a real and present danger, as we were reminded this May when a teenager became the first Canadian to be charged with incel-inspired terrorism. The documentary, however, avoids confronting the violence that this subculture often glorifies, and the director has since stated that the film was never supposed to be about incels but that it had become impossible to discuss it without the term coming up.

    As it turns out, the men we meet in “TFW No GF” appear to be largely harmless — except perhaps to themselves — and despite the documentary’s lack of narrative voice, it takes a patently empathetic stance. Set against the backdrop of industrial landscapes and empty deserts, this is a United States in decline. Here, role models and opportunities are thin on the ground, and the closest thing to “community” exists in virtual realms. Each self-described NEET — slang for “not in education, employment or training” — has his own tale of alienation: of alcoholic parents, dead friends or disenchantment with the school system.

    For those who study internet subcultures, the memes of Pepe the Frog and Wojak explored in the film will be familiar. Pepe is used as a reaction image, typically in the guises of “feels good man,” and “smug/angry/sad Pepe” and, although not created to have racist connotations, is frequently used in bigoted contexts by the alt-right. Wojak, AKA “feels guy,” is typically depicted as a bald man with a depressed expression.

    One of the documentary’s subjects, “Kantbot,” explains that you “can’t have one without the other … that’s the duality of man.” For these men, Pepe represents the troll self, a public persona that embodies their smug and cocky traits. Wojak denotes a more private and vulnerable self, typified by inadequacy, unfulfillment and sadness. At its core, it is this dichotomy that the documentary seeks to explore, whilst at the same time demanding our sympathies.

    On the surface, the men in “TFW No GF” are united by their failure to find female partners, a theme which permeates the “manosphere” that includes Men Going Their Own Way (MGTOW) and incels. This latter identity has garnered particular attention in recent years due to the spate of incel violence witnessed in North America, most infamously Elliot Rodger’s Isla Vista attacks in California in 2014 that left six people dead. According to Moonshot CVE, incels believe that “genetic factors influence their physical appearance and/or social abilities to the extent that they are unattractive to women,” with some subscribing to the philosophy of the “blackpill” — namely, that women are shallow and naturally select partners based upon looks, stifling the chances of unattractive men to find a partner and procreate.

    Incels are a diverse and nebulous community, their worldview characterized by a virulent brand of nihilism seen through the prism of a three-tiered social hierarchy dictated by looks. Here, incels find themselves at the bottom of the pile, after “normies,” “Chads” and “Stacys.” Whilst instances of real-world violence perpetrated by incels remain relatively low in number, the ideology’s potential to mutate into an offline phenomenon is rightly a cause for concern, with Bruce Hoffman et al. making a convincing argument for increased law enforcement scrutiny, noting that the most violent manifestations of this ideology pose a “new terrorism threat.”

    Strange and Hostile World

    A counterterrorism approach alone, however, is unlikely to address the reasons why so many young men (and women: see femcels) are drawn to these virtual worlds. If self-reported narratives on forums such as Incels.net and Incels.co are anything to go by, low self-esteem, bullying and mental health issues are rife. An acknowledgment of the pain, rejection and illness that someone may be suffering from is surely required, however unpalatable that is when faced with the abhorrent imagery and rhetoric that they may espouse. Underlying all of this is the need for a response grounded in public health.

    However, the documentary’s empathic approach has been criticized, with The Guardian accusing it of misinformation, particularly in its portrayal of 4chan and the like as harmless, and Rolling Stone criticizing the film’s acceptance of events without challenging these communities’ support of violence, misogyny and racism. In this sense, the film is reminiscent of the 2016 documentary “The Red Pill,” which followed Cassie Jaye’s journey into the world of men’s rights activists, similarly focusing on one side of an ever-complicated debate. Thus, showing compassion should ultimately not be a way of avoiding the difficult conversations and, in the case of inceldom, a failure to have them could be seen as irresponsible.

    As a researcher of internet subcultures, I find documentaries like “TFW No GF” valuable insofar as we are granted a rare perspective on these men in their own words. Despite the film’s selectivity and subjectivity — representing a small sample of the infinite experiences and beliefs held by those in this expansive community — it provides us with a vignette of the online spaces that allow certain hateful ideas to flourish and be sustained.

    For some, the strange and often hostile world of online messaging boards provides a much-needed connection when other doors are closed. For others, these spaces contribute to a more misogynistic, racist and at times violent way of perceiving the world. As COVID-19 continues to rage on, forcing more of us to shift our lives online, the ability to understand and combat deeply entrenched loneliness — as well as its potential to intersect with extreme and even violent corners of the internet — will be essential.

    *[The Centre for Analysis of the Radical Right is a partner institution of Fair Observer.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.