More stories

  • How JD Vance’s path to being Trump’s VP pick wound through Silicon Valley

    When JD Vance was a student at Yale Law School in 2011, he attended a talk featuring Peter Thiel, the conservative tech billionaire. Although Vance didn’t know Thiel at the time, over the next decade he would become Thiel’s employee, friend and the recipient of his largesse. Thiel’s millions paved the way for Vance to become a senator.

    Thiel’s talk was “the most significant moment of my time at Yale Law School”, Vance would write in a 2020 essay for The Lamp, a Catholic magazine. In Vance’s telling, Thiel’s talk of the failures of elite institutions and belief in Christianity made him reconsider his own faith and immediately make plans for a career outside of law – one that wound through the worlds of tech and venture capital before politics.

    While Vance is best known for the hardscrabble origin story he laid out in his memoir Hillbilly Elegy, in the years following his graduation from Yale he developed extensive ties with Silicon Valley’s investors and elites. His time as a venture capitalist, coupled with his status as a rags-to-riches media fixture, helped him make connections central to his political rise, and won him influential supporters who pushed Trump to make him his vice-presidential pick.

    Following a brief period of work in corporate law after he graduated from Yale, Vance moved to San Francisco and in 2015 got a job at Thiel’s venture firm Mithril Capital. After Hillbilly Elegy became a bestseller in 2016 and brought him to national prominence, Vance joined the venture capital firm Revolution, founded by the former AOL CEO Steve Case.

    Vance remained a part of the tech VC world after returning to Ohio and leaving Revolution in early 2020. He received financial backing from Thiel to co-found the venture firm Narya Capital – which, like Thiel’s enterprises, was named after an object from The Lord of the Rings, this time a ring of power made for elves.

    Other prominent investors in Narya included Eric Schmidt, the former Google CEO, and Marc Andreessen, a venture capitalist who announced his own support for Trump this past week. The stated goal of Vance’s firm was to invest in early-stage startups in cities that Silicon Valley tended to overlook.

    In 2021, Narya Capital led a group of conservative investors, including Thiel, in putting money into Rumble, the video streaming platform that positions itself as a less-moderated, more rightwing-friendly version of YouTube. Vance’s co-founder at Narya, Colin Greenspon, touted the investment as a challenge to big tech’s hold on online services – a frequent conservative talking point during the backlash to content moderation around the pandemic and the 2020 presidential election. It was also around this time that Thiel, who heavily backed Trump financially during the 2016 campaign, brought Vance to talk with Trump for the first time during a secretive meeting at Mar-a-Lago in February 2021, according to the New York Times.

    Vance’s long association with Thiel also proved lucrative during his run for the Senate in 2022. Thiel put a staggering $15m into Vance’s campaign and, according to the Washington Post, helped court Trump’s endorsement, leading to Vance winning a tightly contested Republican primary and then the Senate election.

    Although Thiel has pledged in recent years to stay out of donations to the 2024 election, Vance has since flexed his other Silicon Valley connections to ingratiate himself with Trump. The Ohio senator introduced David Sacks, a prominent venture capitalist, to Donald Trump Jr in March, the New York Times reported, and attended Sacks’s pro-Trump fundraiser in June, co-sponsored by Chamath Palihapitiya, Sacks’s co-host on the popular podcast All In. The event, which cost as much as $300,000 to attend, was held at Sacks’s San Francisco mansion and featured the investor thanking Vance for his help making the fundraiser happen.

    During an informal conversation at the dinner, Sacks and Palihapitiya told Trump to nominate Vance as his VP choice. Sacks spoke at the Republican national convention on Monday. In the days prior, he had also called Trump to advocate for Vance as the VP pick, as had Elon Musk and Tucker Carlson, the ex-Fox News host, according to Axios. Thiel, too, expressed his support for Vance in private calls with Trump, the New York Times reported. When Trump confirmed Vance would be his running mate, Sacks and Musk posted fawning celebrations on Twitter – with Musk saying the ticket “resounds with victory”.

    Many of Vance’s wealthy tech elite and venture capitalist supporters now appear to be preparing to offer even more tangible support. Investors including Musk, Andreessen and Thiel’s co-founder at Palantir, Joe Lonsdale, are all reportedly planning to donate huge sums of money to back the Trump-Vance campaign.

  • Meta lifts restrictions on Trump’s Facebook and Instagram accounts

    Meta has removed previous restrictions on the Facebook and Instagram accounts of Donald Trump as the 2024 election nears, the company announced on Friday.

    Trump was allowed to return to the social networks in 2023 with “guardrails” in place, after being banned over his online behavior during the 6 January insurrection. Those guardrails have now been removed.

    “In assessing our responsibility to allow political expression, we believe that the American people should be able to hear from the nominees for president on the same basis,” Meta said in a blogpost, citing the Republican national convention, slated for next week, which will formalize Trump as the party’s candidate.

    As a result, Meta said, Trump’s accounts will no longer be subject to heightened suspension penalties, which Meta said were created in response to “extreme and extraordinary circumstances” and “have not had to be deployed”.

    “All US presidential candidates remain subject to the same community standards as all Facebook and Instagram users, including those policies designed to prevent hate speech and incitement to violence,” the company’s blogpost reads.

    Since his return to Meta’s social networks, Trump has primarily shared campaign information, attacks on the Democratic candidate, Joe Biden, and memes on his accounts.

    Critics of Trump and online safety advocates have expressed concern that his return could lead to a rise in misinformation and incitement of violence, as was seen during the Capitol riot that prompted his initial ban.

    The Biden campaign condemned Meta’s decision in a statement on Friday, calling it a “greedy, reckless decision” that constitutes “a direct attack on our safety and our democracy”.

    “Restoring his access is like handing your car keys to someone you know will drive your car into a crowd and off a cliff,” said campaign spokesperson Charles Kretchmer Lutvak. “It is holding a megaphone for a bona fide racist who will shout his hate and white supremacy from the rooftops and try to take it mainstream.”

    In addition to Meta’s platforms, other major social media firms banned Trump over his online activity surrounding the 6 January attack, including Twitter (now X), Snapchat and YouTube.

    The former president was allowed back on X last year by the decision of Elon Musk, who bought the company in 2022, though he has not yet tweeted. Trump returned to YouTube in March 2023. He remains banned from Snapchat.

    Trump founded his own social network, Truth Social, in early 2022.

  • #KHive: Kamala Harris memes abound after Joe Biden’s debate disaster

    In the aftermath of Joe Biden’s disastrous debate performance, left-leaning Americans can’t stop talking about the vice-president online. Memes about Kamala Harris are spreading with a speed and enthusiasm previously unseen on X and Instagram: supercuts of her set to RuPaul’s Call Me Mother; threads of her “funniest Veep moments”; collages of jokes about her over a green album cover a la Charli xcx’s Brat; numerous riffs on a comment she made about a coconut tree.

    Previous progressive snark about Harris has cast her either as an incompetent sidekick a la HBO’s Veep or as an anti-progressive cop, a reference to her years as California’s top law enforcement official. But as rumors circulate about discussions of Biden dropping out of the presidential race, social media commentary on the nation’s second-in-command has grown more positive – even if ironically so. The Veep clips describing Harris now show Selina Meyer (Julia Louis-Dreyfus) becoming president despite her years of ineptitude. The cop jokes come with side-by-sides of the vice-president and Donald Trump’s mugshot.

    Witness the rise of the “KHive”, a term coined by MSNBC’s Joy Reid for fans of the vice-president in the style of Beyoncé’s Beyhive. And as the memes take a turn, so too have the polls. Recent numbers indicate Harris is having a “surprise resurgence”, polling more positively against Trump than Biden and all other rumored Democratic candidates, including Gavin Newsom and Pete Buttigieg.

    The bleak wake of the debate is not the first time the vice-president has inspired jokes on social media, though it is the loudest. A video of Harris informing Joe Biden the two had won the 2020 election – above all her “we did it, Joe” remark – has been a popular meme since the start of the administration.

    Conservatives have also made jokes at the vice-president’s expense for years now. In a January 2022 interview about the administration’s Covid policies, she gave the tautological answer: “It’s time for us to do what we have been doing, and that time is every day.” Fox News said she had been “crushed for non-answer”. The Daily Wire said she “incoherently babbles”. Ben Shapiro said on TikTok: “Every day, there is a new all-time Kamala Harris clip.”

    The recent meme cycle, whether joking or authentic, celebrates these kinds of verbal gymnastics, which are characteristic of Harris’s speeches – sometimes profound, sometimes nonsensical. Her most popular quip involves her mother and a coconut tree. In May 2023, she said: “My mother used to – she would give us a hard time sometimes, and she would say to us, ‘I don’t know what’s wrong with you young people. You think you just fell out of a coconut tree?’ You exist in the context of all in which you live and what came before you.” The story was part of a speech on educational and economic opportunity for Latino Americans; you can read the full transcript on the White House’s website.

    A simple coconut emoji has become shorthand for the vice-president. Mashups of her coconut tree anecdote have become punchlines in videos, images and text on X (formerly Twitter), Instagram and TikTok, racking up tens of thousands of likes and retweets. Several of her other trademark remarks have enjoyed a similar resurgence.

    The Biden-Harris campaign seems to have taken notice and intends to ride the virtual wave of support, even if it did not immediately respond to a request for comment. The president and vice-president posted a job ad on 3 July in search of a social media strategist for Harris specifically. The aide will write posts for Harris every day in an effort to “expand the vice-president’s voice online”, per Politico.

    The explosion of Harris content mirrors how Donald Trump’s speeches and tweets spread as memes. His bizarre, idiosyncratic way of talking and tweeting makes for funny reference points on both right and left, insertable into unrelated jokes for the pastiche effect of the best absurd online humor. Outlandish rhetoric that stands out for its flourishes – whether putatively weighty like Harris’s or unapologetically pugnacious like Trump’s – makes for good punchlines.

    Another of Harris’s aphorisms appears with almost comic frequency and has made its way into the online frenzy over her: “What can be, unburdened by what has been.” A supercut of her making the remark in dozens of different public appearances, nearly four minutes of the same phrase repeated over and over again, has been retweeted nearly 9,000 times. A video of her dancing alongside a drum line has also resurfaced, remixed to showcase her ascendancy as Biden’s star fades. As one tweet of the video reads: “Kamala seeing the CNN polls this morning.”

    Her distinctive laugh, which makes an appearance in the coconut tree tale before her demeanor and tone turn inexplicably somber, has long inspired posts remarking on her willingness to display emotion in public. Biden, by contrast, spoke in a feeble monotone during the debate. Against Trump’s gesticulation and rancor, Biden appeared gray and weak. Observers online wonder: could Kamala stand up to Trump, as she once did to Biden himself?

    Why the enthusiasm for Harris now? Perhaps despair over the other two options. One tweet crystallizes the reason for the quick shift in the vibes online: “Who cares if she’s weird? At least she’s not a felon or 80.”

    And is the turn to Harris genuine or just a nihilistic joke in the face of an uninspiring election? The same tweet winks with the absurd maximalism of internet speech: “We need a Gemini Rising woman President from California who is on pills+wine, is campy, and didn’t get married until she was middle aged because she was too busy being a 365 party girlboss.” Parts of the tweet are true – Harris’s ascendant astrological sign is indeed Gemini – but “365 party girlboss” is a reference to Charli xcx’s album Brat, another meme of the moment. There’s also no evidence she’s on pills.

    With the Democratic machine in disarray as rumors of Biden’s resignation swirl, it’s not clear what comes next for the vice-president – or the US. As one tweet blending multiple Harris quips stated, in an attitude of throwing exasperated hands to the sky: “God grant me the serenity to be unburdened by what has been, the courage to see what can be, and the wisdom to live in the context.”

  • Silicon Valley wants unfettered control of the tech market. That’s why it’s cosying up to Trump | Evgeny Morozov

    Hardly a week passes without another billionaire endorsing Donald Trump. With Joe Biden proposing a 25% tax on those with assets over $100m (£80m), this is no shock. The real twist? The pro-Trump multimillionaire club now includes a growing number of venture capitalists. Unlike hedge funders or private equity barons, venture capitalists have traditionally held progressive credentials. They’ve styled themselves as the heroes of innovation, and the Democrats have done more to polish their progressive image than anyone else. So why are they now cosying up to Trump?

    Venture capitalists and Democrats long shared a mutual belief in techno-solutionism – the idea that markets, enhanced by digital technology, could achieve social goods where government policy had failed. Over the past two decades, we’ve been living in the ruins of this utopia. We were promised that social media could topple dictators, that crypto could tackle poverty, and that AI could cure cancer. But the progressive credentials of venture capitalists were only ever skin deep, and now that Biden has adopted a tougher stance on Silicon Valley, VCs are more than happy to support Trump’s Republicans.

    The Democrats’ romance with techno-solutionism began in the early 1980s. Democrats saw Silicon Valley as the key to boosting environmentalism, worker autonomy and global justice. Venture capitalists, as the financial backers of this new and apparently benign form of capitalism, were crucial to this vision. Whenever Republicans pushed for measures favourable to the VC industry – such as changes in capital gains tax, or the liberalisation of pension fund legislation – Democrats eventually acquiesced. On issues such as intellectual property, Democrats have actively advanced the industry’s agenda.

    This alliance has shaped how the US now finances innovation. Public institutions such as the National Science Foundation and National Institutes of Health fund basic science, while venture capitalists finance the startups that commercialise it. These startups, in turn, build on intellectual property licensed from recipients of public grants to design apps, gadgets and drugs. A good chunk of the resulting profits, naturally, flows back to the venture capitalists who own a stake in these startups. Thanks to this model, Americans now pay some of the highest drug prices in the world – yet when politicians have tried to curb these egregious outcomes, they have been met with accusations from the VC industry that they’re undermining progress.

    Venture capitalists have been keen to emphasise the role they play in delivering progress. Through podcasts, conferences and publications, they have successfully recast their interests as those of humanity at large. For a clear distillation of this worldview, look no further than The Techno-Optimist Manifesto, a 5,200-word treatise by Marc Andreessen, co-founder of the VC firm Andreessen Horowitz. Its jarring universalism suggests that all of us – San Francisco’s venture capitalists and homeless alike – are in this together. Andreessen urges readers to join venture capitalists as “allies in the pursuit of technology, abundance, and life”. Yet his text quickly reveals its true colours. “Free markets,” he writes, “are the most effective way to organise a technological economy.” (Andreessen has criticised Biden without endorsing Trump.)

    Andreessen isn’t celebrating technology in the abstract, but promoting what he calls the “techno-capital machine”. This system allows investors like him to reap most of the rewards of innovation, while steering its direction so that alternative models to Silicon Valley hegemony never achieve the kind of take-up that would allow them to drive out for-profit solutions. Andreessen, like all VCs, never stops to consider that a more effective technological economy might not revolve around free markets at all. How can VCs be so sure that we wouldn’t get a better kind of generative AI, or less destructive social media platforms, by treating data as a collective good?

    The tragedy is that we won’t be trying anything like this any time soon. We’re shackled by a worldview that has fooled us into thinking there is no alternative to a system that relies on poorly paid workers in the global south to assemble our devices and moderate our content, and that consumes unsustainable volumes of energy to train AI models and mine bitcoin. Even the idea that social media might promote democracy has now been abandoned; instead, tech leaders seem more concerned with evading responsibility for the role their platforms have played in subverting democracy and fanning the flames of genocide.

    Where do we find the much-needed alternative? While researching my latest podcast, A Sense of Rebellion, I stumbled on a series of debates that took place in the 1970s and pointed in the right direction. Back then, a small group of hippy radicals were advocating for “ecological technology” and “counter-technology”. They weren’t satisfied with merely making existing tools more accessible and transparent: they saw technology as the product of power relations, and wanted to fundamentally alter the system itself. I came across a particularly compelling example of this thinking in a quirky 1971 manifesto published in Radical Software, a small but influential magazine. Its author was anonymous, signing themselves “Aquarius Project” and listing only a Berkeley-based postal box. I eventually tracked them down, partly because the points they made in that manifesto are so often lost in today’s debates about Silicon Valley. “‘Technology’ does nothing, creates no problems, has no ‘imperatives’,” they wrote. “Our problem is not ‘Technology’ in the abstract, but specifically capitalist technology.”

    Being hippies, the group struggled to translate these insights into policy demands. In fact, somebody else had done this three decades earlier. In the late 1940s, the Democratic senator Harley Kilgore saw the dangers of postwar science becoming “the handmaiden for corporate or industrial research”. He envisioned a National Science Foundation (NSF) governed by representatives from unions, consumers, agriculture and industry to ensure technology served social needs and remained under democratic control. Corporations would be forced to share their intellectual property (IP) if they built on public research, and would be prevented from becoming the sole providers of “solutions” to social problems. Yet with its insistence on democratic oversight and sharing IP riches, his model was eventually defeated.

    Instead, our prevailing approach to innovation has allowed scientists to set their own priorities, and does not require companies that benefit from public research to share their IP. As Biden’s Chips Act directs $81bn to the NSF, we must now ask whether this approach still makes sense. Shouldn’t democratic decision-making guide how this money is spent? And what about the IP created? How much will end up enriching venture capitalists? Similar questions arise with data and AI. Should big tech firms be allowed to use data from public institutions to train privately owned, lucrative AI models? Why not make the data accessible to nonprofits and universities? Why should companies such as OpenAI, backed by venture capital, dominate this space?

    Today’s AI gold rush is inefficient and irrational. A single, authoritative, publicly owned curator of the data and models behind generative AI could do a better job, saving money and resources. It could charge corporations for access, while providing cheaper access to public media organisations and libraries.

    Yet the merchants of Silicon Valley are taking us in the opposite direction. They are obsessed with accelerating Andreessen’s “techno-capital machine”, which relies on detaching markets and technologies from democratic control. And, with Trump in the White House, they’ll waste no time repurposing their tools to serve authoritarianism as easily as they served the neoliberal agendas of his Democratic predecessors.

    Biden and his allies should recognise venture capitalists as a problem, not a solution. The sooner progressive forces get over their fascination with Silicon Valley, the better. This won’t be enough, though: to build a truly progressive techno-public machine, we need to rethink the relationship between science and technology on the one hand and democracy and equality on the other. If that means reopening old, seemingly settled debates, so be it.
    Evgeny Morozov is the author of several books on technology and politics. His latest podcast, A Sense of Rebellion, is available now

  • New York signs parental control of ‘addictive’ social media feeds into law

    New York’s governor, Kathy Hochul, signed two bills into law on Thursday meant to mitigate the negative impacts of social media on children, the latest action to address what critics say is a growing youth mental health crisis.

    The first bill will require that parents be able to stop their children from seeing posts suggested by a social network’s algorithm, a move to limit feeds Hochul argues are addictive. The second will put additional limitations on the collection, use, sharing and selling of personal data of anyone under the age of 18.

    “We can protect our kids. We can tell the companies that you are not allowed to do this, you don’t have a right to do this, that parents should have say over their children’s lives and their health, not you,” Hochul said at a bill-signing ceremony in Manhattan.

    Under the first bill, the Stop Addictive Feeds Exploitation (Safe) for Kids Act, apps like TikTok and Instagram would be limited for people under the age of 18 to posts from accounts they follow, rather than content recommended by the app. It would also block platforms from sending minors notifications on suggested posts between midnight and 6am. Both provisions could be turned off if a minor gets what the bill defines as “verifiable parental consent”.

    Thursday’s signing is just the first step in what is expected to be a lengthy process of rule-making, as the laws do not take effect immediately and social media companies are expected to challenge the new legislation. The New York state attorney general, Letitia James, is now tasked with crafting rules to determine mechanisms for verifying a user’s age and parental consent. After the rules are finalized, social media companies will have 180 days to implement changes to comply with the regulation.

    “Addictive feeds are getting our kids hooked on social media and hurting their mental health, and families are counting on us to help address this crisis,” James said at the ceremony. “The legislation signed by Governor Hochul today will make New York the national leader in addressing the youth mental health crisis and an example for other states to follow.”

    Social media companies and free speech advocates have pushed back against such legislation, with NetChoice – a tech industry trade group that includes Twitter/X and Meta – criticizing the New York laws as unconstitutional.

    “This is an assault on free speech and the open internet by the state of New York,” Carl Szabo, vice-president and general counsel of NetChoice, said in a statement. “New York has created a way for the government to track what sites people visit and their online activity by forcing websites to censor all content unless visitors provide an ID to verify their age.”

    New York’s new laws come after California’s governor, Gavin Newsom, announced plans to work with his state’s legislature on a bill to restrict smartphone usage for students during the school day, though he didn’t provide exact details on what the proposal would include. Newsom in 2019 signed a bill allowing school districts to limit or ban smartphones on campuses. A similar measure proposed in South Carolina this month would ban students from using cellphones during the school day across all public schools in the state. Most schools in the United Kingdom prohibit the use of smartphones during school hours.

    Although there hasn’t been broad legislation on the subject at the federal level, pressure from Washington is mounting. This week the US surgeon general called on Congress to put warning labels on social media platforms similar to those on cigarette packaging, citing mental health dangers for children using the sites.

  • Sam Bankman-Fried funded a group with racist ties. FTX wants its $5m back

    Multiple events hosted at a historic former hotel in Berkeley, California, have brought together people from intellectual movements popular at the highest levels in Silicon Valley while platforming prominent people linked to scientific racism, the Guardian reveals.But because of alleged financial ties between the non-profit that owns the building – Lightcone Infrastructure (Lightcone) – and jailed crypto mogul Sam Bankman-Fried, the administrators of FTX, Bankman-Fried’s failed crypto exchange, are demanding the return of almost $5m that new court filings allege were used to bankroll the purchase of the property.During the last year, Lightcone and its director, Oliver Habryka, have made the $20m Lighthaven Campus available for conferences and workshops associated with the “longtermism”, “rationalism” and “effective altruism” (EA) communities, all of which often see empowering the tech sector, its elites and its beliefs as crucial to human survival in the far future.At these events, movement influencers rub shoulders with startup founders and tech-funded San Francisco politicians – as well as people linked to eugenics and scientific racism.Since acquiring the Lighthaven property – formerly the Rose Garden Inn – in late 2022, Lightcone has transformed it into a walled, surveilled compound without attracting much notice outside the subculture it exists to promote.But recently filed federal court documents allege that in the months before the collapse of Sam Bankman-Fried’s FTX crypto empire, he and other company insiders funnelled almost $5m to Lightcone, including $1m for a deposit to lock in the Rose Garden deal.FTX bankruptcy administrators say that money was commingled with funds looted from FTX customers. 
Now, they are asking a judge to give it back.The revelations cast new light on so-called “Tescreal” intellectual movements – an umbrella term for a cluster of movements including EA and rationalism that exercise broad influence in Silicon Valley, and have the ear of the likes of Sam Altman, Marc Andreessen and Elon Musk.It also raises questions about the extent to which people within that movement continue to benefit from Bankman-Fried’s fraud, the largest in US history.The Guardian contacted Habryka for comment on this reporting but received no response.Controversial conferencesLast weekend, Lighthaven was the venue for the Manifest 2024 conference, which, according to the website, is “hosted by Manifold and Manifund”.Manifold is a startup that runs Manifund, a prediction market – a forecasting method that was the ostensible topic of the conference.Prediction markets are a long-held enthusiasm in the EA and rationalism subcultures, and billed guests included personalities like Scott Siskind, AKA Scott Alexander, founder of Slate Star Codex; misogynistic George Mason University economist Robin Hanson; and Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Miri).Billed speakers from the broader tech world included the Substack co-founder Chris Best and Ben Mann, co-founder of AI startup Anthropic.Alongside these guests, however, were advertised a range of more extreme figures.One, Jonathan Anomaly, published a paper in 2018 entitled Defending Eugenics, which called for a “non-coercive” or “liberal eugenics” to “increase the prevalence of traits that promote individual and social welfare”. The publication triggered an open letter of protest by Australian academics to the journal that published the paper, and protests at the University of Pennsylvania when he commenced working there in 2019. 
(Anomaly now works at a private institution in Quito, Ecuador, and claims on his website that US universities have been “ideologically captured”.)Another, Razib Khan, saw his contract as a New York Times opinion writer abruptly withdrawn just one day after his appointment had been announced, following a Gawker report that highlighted his contributions to outlets including the paleoconservative Taki’s Magazine and anti-immigrant website VDare.The Michigan State University professor Stephen Hsu, another billed guest, resigned as vice-president of research there in 2020 after protests by the MSU Graduate Employees Union and the MSU student association accusing Hsu of promoting scientific racism.Brian Chau, executive director of the “effective accelerationist” non-profit Alliance for the Future (AFF), was another billed guest. A report last month catalogued Chau’s long history of racist and sexist online commentary, including false claims about George Floyd, and the claim that the US is a “Black supremacist” country. “Effective accelerationists” argue that human problems are best solved by unrestricted technological development.Another advertised guest, Michael Lai, is emblematic of tech’s new willingness to intervene in Bay Area politics. Lai, an entrepreneur, was one of a slate of “Democrats for Change” candidates who seized control of the powerful Democratic County Central Committee from progressives, who had previously dominated the body that confers endorsements on candidates for local office.In a phone interview, Lai said he did not attend the Manifest conference in early June. “I wasn’t there, and I did not know about what these guys believed in,” Lai said. He also claimed to not know why he was advertised on the manifest.is website as a conference-goer, adding that he had been invited by Austin Chen of Manifold Markets. 
In an email, Chen, who organized the conference and is a co-founder of Manifund, wrote: “We’d scheduled Michael for a talk, but he had to back out last minute given his campaigning schedule.

“This kind of thing happens often with speakers, who are busy people; we haven’t gotten around to removing Michael yet but will do so soon,” Chen added.

On the other speakers, Chen wrote in an earlier email: “We were aware that some of these folks have expressed views considered controversial.”

He went on: “Some of these folks we’re bringing in because of their past experience with prediction markets (eg [Richard] Hanania has used them extensively and partnered with many prediction market platforms). Others we’re bringing in for their particular expertise (eg Brian Chau is participating in a debate on AI safety, related to his work at Alliance for the Future).”

Chen added: “We did not invite them to give talks about race and IQ”, and concluded: “Manifest has no specific views on eugenics or race & IQ.”

Democrats for Change received significant support from Bay Area tech industry heavyweights, and Lai is now running for the San Francisco board of supervisors, the city’s governing body. He is endorsed by a “grey money” influence network funded by rightwing tech figures like David Sacks and Garry Tan.
The same network poured tens of thousands of dollars into his successful March campaign for the DCCC and ran online ads in support of him, according to campaign contribution data from the San Francisco Ethics Commission.

Several controversial guests were also present at Manifest 2023, also held at Lighthaven, including the rightwing writer Hanania, whose pseudonymous white-nationalist commentary from the early 2010s was catalogued last August in HuffPost, and Malcolm and Simone Collins, whose EA-inspired pro-natalism – the belief that having as many babies as possible will save the world – was detailed in the Guardian last month.

The Collinses were, along with Razib Khan and Jonathan Anomaly, featured speakers at the eugenicist Natal Conference in Austin last December, as previously reported in the Guardian.

Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”

HoSang added: “From there, they anoint themselves the elite managers of these forces, investing in the ‘winners’ as they see fit.”

“The presence of Stephen Hsu here is particularly alarming,” HoSang concluded.
“He’s often been a bridge between fairly explicit racist and antisemitic people like Ron Unz, Steven Sailer and Stefan Molyneux and more mainstream figures in tech, investment and scientific research, especially around human genetics.”

FTX proceedings

As Lighthaven develops as a hub for EA and rationalism, the new court filing alleges that the purchase of the property was partly secured with money funnelled by Sam Bankman-Fried and other FTX insiders in the months leading up to the crypto empire’s collapse.

Bankman-Fried was sentenced to 25 years in prison in March for masterminding the $8bn fraud that led to FTX’s downfall in November 2022, in which customer money was illegally transferred from FTX to its sister exchange Alameda Research to address a liquidity crisis.

Since the collapse, FTX and Alameda have been in the hands of trustees, who in their efforts to pay back creditors are also pursuing money owed to FTX, including money they say was illegitimately transferred to others by Bankman-Fried and company insiders.

On 13 May, those trustees filed a complaint with a bankruptcy court in Delaware – where FTX and Lightcone were both incorporated – alleging that Lightcone received more than $4.9m in fraudulent transfers from Alameda, via the non-profit FTX Foundation, over the course of 2022.

State and federal filings indicate that Lightcone was incorporated on 13 October 2022, with Habryka acting in all executive roles.
In an application to the IRS for 501(c)(3) charitable status, Habryka aligned the organization with an influential intellectual current in Silicon Valley: “Combining the concepts of the Longtermism movement … and rationality … Lightcone Infrastructure Inc works to steer humanity towards a safer and better future.”

California filings also state that from 2017 until the application, Lightcone and its predecessor project had been operating under the fiscal sponsorship of the Center for Applied Rationality (CFAR), a rationalism non-profit established in 2012.

The main building on the property now occupied by the Lighthaven campus was originally constructed in 1903 as a mansion, and between 1979 and Lightcone’s 2022 purchase of the property, the building was run as a hotel, the Rose Garden Inn.

Alameda county property records indicate that the four properties encompassed by the campus remain under the ownership of an LLC, Lightcone Rose Garden (Lightcone RG), of which Lightcone is the sole member, according to the filings. California business filings identify Habryka as the registered agent of both Lightcone Infrastructure and Lightcone RG. Lightcone and CFAR both give the campus as their principal place of business in their most recent tax filings.

On 2 March 2022, according to the complaint, CFAR applied to the FTX Foundation asking that “$2,000,000 be given to the Center for Applied Rationality as an exclusive grant for its project, the Lightcone Infrastructure Team”. The FTX Foundation wired the money the same day.

Between then and October 2022, according to the trustees, the FTX Foundation wired at least 14 more transfers worth $2,904,999.61. In total, FTX’s administrators say, almost $5m was transferred to CFAR from the FTX Foundation.

On 13 July and 18 August 2022, according to the complaint, the FTX Foundation also wired two payments of $500,000 each to a title company as a deposit for Lightcone RG’s purchase of the Rose Garden Inn.
The complaint says these were intended as a loan, but there is no evidence that the $1m was repaid. Then, on 3 October, the FTX Foundation approved a $1.5m grant to Lightcone Infrastructure, according to FTX trustees.

The complaint alleges that Lightcone got another $20m loan to fund the Rose Garden Inn purchase from Slimrock Investments Pte Ltd, a Singapore-incorporated company owned by the Estonian software billionaire, Skype co-founder and EA/rationalism adherent Jaan Tallinn. This covered the $16.5m purchase price and $3.5m for renovations and repairs.

Slimrock Investments has no apparent public-facing website or means of contact. The Guardian emailed Tallinn for comment via the Future of Life Institute, a non-profit whose self-assigned mission is “steering transformative technology towards benefiting life and away from extreme large-scale risks”. Tallinn sits on that organization’s board. Neither Tallinn nor the Future of Life Institute responded to the request.

The complaint also says that FTX trustees emailed CFAR four times between June and August 2023, and that on 31 August they hand-delivered a letter to CFAR’s Rose Garden Inn offices. All of these attempts at contact were ignored. Only after the debtors filed a discovery motion on 31 October 2023 did CFAR engage with them.

The most recent filing, on 17 May, is a summons for CFAR and Lightcone to appear in court to answer the complaint. The suit is ongoing.

The Guardian emailed CFAR president and co-founder Anna Salamon for comment on the allegations but received no response.


    Deepfakes are here and can be dangerous, but ignore the alarmists – they won’t harm our elections | Ciaran Martin

Sixteen days before the Brexit referendum, and only two days before the deadline to apply to cast a ballot, the IT system for voter registrations collapsed. The remain and leave campaigns were forced to agree a 48-hour registration extension. Around the same time, evidence was beginning to emerge of a major Russian “hack-and-leak” operation targeting the US presidential election. Inevitably, questions arose as to whether the Russians had successfully disrupted the Brexit vote.

The truth was more embarrassingly simple. A comprehensive technical investigation, supported by the National Cyber Security Centre – which I headed at the time – set out in detail what had happened. A TV debate on Brexit had generated unexpected interest. Applications spiked to double those projected. The website couldn’t cope and crashed. There was no sign of any hostile activity.

But this conclusive evidence did not stop a parliamentary committee, a year later, saying that it did “not rule out the possibility that there was foreign interference” in the incident. No evidence was provided for this remarkable assertion. What actually happened was a serious failure of state infrastructure, but it was not a hostile act.

This story matters because it has become too easy – even fashionable – to cast the integrity of elections into doubt. “Russia caused Brexit” is nothing more than a trope that provides easy comfort to the losing side. There was, and is, no evidence of any successful cyber operations or other digital interference in the UK’s 2016 vote.

But Brexit is far from the only example of such electoral alarmism. In its famous report on Russia in 2020, the Intelligence and Security Committee correctly said that the first detected attempt by Russia to interfere in British politics occurred in the context of the Scottish referendum campaign in 2014. However, the committee did not add that the quality of such efforts was risible, and the impact of them was zero.
Russia has been waging such campaigns against the UK and other western democracies for years. Thankfully, though, it hasn’t been very good at it. At least so far.

Over the course of the past decade, there are only two instances where digital interference can credibly be seen to have severely affected a democratic election anywhere in the world. The US in 2016 is undoubtedly one. The other is Slovakia last year, when an audio deepfake seemed to have an impact on the polls late on.

The incident in Slovakia fuelled part of a new wave of hysteria about electoral integrity. Now the panic is all about deepfakes. But we risk making exactly the same mistake with deepfakes as we did with cyber-attacks on elections: confusing activity and intent with impact, and what might be technically possible with what is realistically achievable.

So far, it has proved remarkably hard to fool huge swathes of voters with deepfakes. Many of them, including much of China’s information operations, are poor in quality. Even some of the better ones – like a recent Russian fake of Ukrainian TV purporting to show Kyiv admitting it was behind the Moscow terror attacks – look impressive, but are so wholly implausible in substance they are not believed by anyone.

Moreover, a co-ordinated response by a country to a deepfake can blunt its impact: think of the impressive British response to the attempt to smear Sadiq Khan last November, when the government security minister lined up behind the Labour mayor of London in exhorting the British media and public to pay no attention to a deepfake audio being circulated. This was in marked contrast to events in Slovakia, where gaps in Meta’s removal policy, and the country’s electoral reporting restrictions, made it much harder to circulate the message that the controversial audio was fake.
If a deepfake does cut through in next month’s British election, what matters is how swiftly and comprehensively it is debunked.

None of this is to be complacent about the reality that hostile states are trying to interfere in British politics. They are. And with fast-developing tech and techniques, the threat picture can change. “Micro” operations, such as a localised attempt to use AI to persuade voters in New Hampshire to stay at home during the primaries, are one such area of concern. In the course of the UK campaign, one of my main worries would be targeted local disinformation and deepfake campaigns in individual contests. It is important that the government focuses resources and capabilities on blunting these operations.

But saying that hostile states are succeeding in interfering in our elections, or that they are likely to, without providing any tangible evidence is not a neutral act. In fact, it’s really dangerous. If enough supposedly credible voices loudly cast aspersions on the integrity of elections, at least some voters will start to believe them. And if that happens, we will have done the adversaries’ job for them.

There is a final reason why we should be cautious about the “something-must-be-done” tendency where the risk of electoral interference is concerned. State intervention in these matters is not some cost-free, blindingly obvious solution that the government is too complacent to use. If false information is so great a problem that it demands government action, that action means, in effect, creating an arbiter of truth. To which arm of the state would we wish to assign this task?
    Ciaran Martin is a professor at the Blavatnik School of Government at the University of Oxford, and a former chief executive of the National Cyber Security Centre


    How to spot a deepfake: the maker of a detection tool shares the key giveaways

You – a human, presumably – are a crucial part of detecting whether a photo or video was made by artificial intelligence.

There are detection tools, built both commercially and in research labs, that can help. To use these deepfake detectors, you upload or link a piece of media that you suspect could be fake, and the detector gives a percent likelihood that it was AI-generated. But your own senses, and an understanding of some key giveaways, provide a lot of insight when analyzing media to see whether it’s a deepfake. While regulations for deepfakes, particularly in elections, lag behind the quick pace of AI advancements, we have to find ways to figure out whether an image, audio clip or video is actually real.

Siwei Lyu made one of those tools, the DeepFake-o-meter, at the University at Buffalo. His tool is free and open source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these different labs’ tools to get a sense of whether it could be AI-generated.

The DeepFake-o-meter shows both the benefits and the limitations of AI-detection tools. When we ran a few known deepfakes through the various algorithms, the detectors gave ratings for the same video, photo or audio recording ranging from 0% to 100% likelihood of being AI-generated.

AI, and the algorithms used to detect it, can be biased by the way they are taught. At least in the case of the DeepFake-o-meter, the tool is transparent about that variability in results, while with a commercial detector bought in the app store, it’s less clear what its limitations are, he said.

“I think a false image of reliability is worse than low reliability, because if you trust a system that is fundamentally not trustworthy to work, it can cause trouble in the future,” Lyu said.

His system is still barebones for users, having launched publicly only in January of this year.
But his goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it’s real. His team is working on ways to rank the various algorithms it uses for detection, to inform users which detector would work best for their situation. Users can opt in to sharing the media they upload with Lyu’s research team, to help them better understand deepfake detection and improve the website.

Lyu often serves as an expert source for journalists trying to assess whether something could be a deepfake, so he walked us through a few well-known instances of deepfakery from recent memory to show the ways we can tell they aren’t real. Some of the obvious giveaways have changed over time as AI has improved, and will change again.

“A human operator needs to be brought in to do the analysis,” he said. “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”

Audio

A robocall that circulated in New Hampshire using an AI-generated voice of President Joe Biden encouraged voters there not to turn out for the Democratic primary, one of the first major instances of a deepfake in this year’s US elections.

When Lyu’s team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one of the detectors came back at more than 50% likelihood of AI – that one said it had a 100% likelihood. The other four ranged from 0.2% to 46.8%. A longer version of the call led three of the five detectors to come in at more than 90% likelihood.

This tracks with our experience creating audio deepfakes: they’re harder to pick out because you’re relying solely on your hearing, and easier to generate because there are tons of examples of public figures’ voices for AI to use to make a person’s voice say whatever its creator wants. But there are some clues in the robocall, and in audio deepfakes in general, to look out for.

AI-generated audio often has a flatter overall tone and is less conversational than how we typically talk, Lyu said. You don’t hear much emotion. There may not be proper breathing sounds, like taking a breath before speaking.

Pay attention to the background noises, too. Sometimes there are no background noises when there should be. Or, in the case of the robocall, there is a lot of noise mixed into the background, almost as if to give an air of realness, that actually sounds unnatural.

Photos

With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human pathology”, like buildings with crooked lines or hands with six fingers, Lyu said. Little details like hair, mouths and shadows can hold clues to whether something is real. Hands were once a clearer tell for AI-generated images because they would more frequently end up with extra appendages, though the technology has improved and that is becoming less common, Lyu said.

We sent the photos of Trump with Black voters that a BBC investigation found had been AI-generated through the DeepFake-o-meter. Five of the seven image-deepfake detectors came back with a 0% likelihood that the fake image was fake, while one clocked in at 51%.
The remaining detector said no face had been detected.

Lyu’s team noted unnatural areas around Trump’s neck and chin, teeth that looked off and webbing around some fingers. Beyond these visual oddities, AI-generated images often just look too glossy.

“It’s very hard to put into quantitative terms, but there is this overall view and look that the image looks too plastic or like a painting,” Lyu said.

Videos

Videos, especially those of people, are harder to fake than photos or audio. In some AI-generated videos without people, it can be harder to figure out whether the imagery is real, though those aren’t “deepfakes” in the sense that the term typically refers to people’s likenesses being faked or altered.

For the video test, we sent a deepfake of the Ukrainian president, Volodymyr Zelenskiy, that shows him telling his armed forces to surrender to Russia, which did not happen.

The visual cues in the video include unnatural eye-blinking that shows some pixel artifacts, Lyu’s team said. The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.

Some of the detection algorithms look specifically at the lips, because current AI video tools mostly change the lips to make a person say things they didn’t say, and the lips are where most inconsistencies are found. One example would be a letter sound that requires the lips to be closed, like a B or a P, while the deepfake’s mouth is not completely closed, Lyu said. When the mouth is open, the teeth and tongue appear off, he said.

The video, to us, is more clearly fake than the audio or photo examples we flagged to Lyu’s team. But of the six detection algorithms that assessed the clip, only three came back with very high likelihoods of AI generation (more than 90%). The other three returned very low likelihoods, ranging from 0.5% to 18.7%.
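The score spreads reported in these tests (0.2% to 100% for the same robocall clip) suggest that summarising across detectors can be more useful than trusting any single number. Here is a minimal, hypothetical Python sketch of that idea; the detector names and two of the five scores are invented for illustration, and this does not use the DeepFake-o-meter's actual API:

```python
# Hypothetical sketch: summarising likelihood scores (0-100) from several
# deepfake detectors, in the spirit of the DeepFake-o-meter's
# multi-algorithm approach. Not the tool's real API.

def summarize_scores(scores):
    """Return the min, max and mean likelihood across detectors."""
    values = list(scores.values())
    return {
        "min": min(values),
        "max": max(values),
        "mean": sum(values) / len(values),
    }

# The 100.0, 46.8 and 0.2 figures come from the robocall test described
# above; the detector names and the two middle scores are made up.
robocall = {"det_a": 100.0, "det_b": 46.8, "det_c": 12.5, "det_d": 3.1, "det_e": 0.2}
summary = summarize_scores(robocall)
```

A wide gap between the minimum and maximum, as here, is itself a signal that the detectors disagree and that a human should look more closely.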