More stories

  • Sam Bankman-Fried funded a group with racist ties. FTX wants its $5m back

    Multiple events hosted at a historic former hotel in Berkeley, California, have brought together people from intellectual movements popular at the highest levels in Silicon Valley while platforming prominent people linked to scientific racism, the Guardian reveals.

    But because of alleged financial ties between the non-profit that owns the building – Lightcone Infrastructure (Lightcone) – and jailed crypto mogul Sam Bankman-Fried, the administrators of FTX, Bankman-Fried’s failed crypto exchange, are demanding the return of almost $5m that new court filings allege were used to bankroll the purchase of the property.

    During the last year, Lightcone and its director, Oliver Habryka, have made the $20m Lighthaven Campus available for conferences and workshops associated with the “longtermism”, “rationalism” and “effective altruism” (EA) communities, all of which often see empowering the tech sector, its elites and its beliefs as crucial to human survival in the far future.

    At these events, movement influencers rub shoulders with startup founders and tech-funded San Francisco politicians – as well as people linked to eugenics and scientific racism.

    Since acquiring the Lighthaven property – formerly the Rose Garden Inn – in late 2022, Lightcone has transformed it into a walled, surveilled compound without attracting much notice outside the subculture it exists to promote.

    But recently filed federal court documents allege that in the months before the collapse of Sam Bankman-Fried’s FTX crypto empire, he and other company insiders funnelled almost $5m to Lightcone, including $1m for a deposit to lock in the Rose Garden deal.

    FTX bankruptcy administrators say that money was commingled with funds looted from FTX customers. Now, they are asking a judge to give it back.

    The revelations cast new light on so-called “Tescreal” intellectual movements – an umbrella term for a cluster of movements including EA and rationalism that exercise broad influence in Silicon Valley, and have the ear of the likes of Sam Altman, Marc Andreessen and Elon Musk. They also raise questions about the extent to which people within that movement continue to benefit from Bankman-Fried’s fraud, one of the largest in US history.

    The Guardian contacted Habryka for comment on this reporting but received no response.

    Controversial conferences

    Last weekend, Lighthaven was the venue for the Manifest 2024 conference, which, according to the website, is “hosted by Manifold and Manifund”. Manifold is a startup that runs Manifold Markets, a prediction market – a forecasting method that was the ostensible topic of the conference.

    Prediction markets are a long-held enthusiasm in the EA and rationalism subcultures, and billed guests included personalities like Scott Siskind, AKA Scott Alexander, founder of Slate Star Codex; misogynistic George Mason University economist Robin Hanson; and Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute (Miri). Billed speakers from the broader tech world included the Substack co-founder Chris Best and Ben Mann, co-founder of AI startup Anthropic.

    Alongside these guests, however, were advertised a range of more extreme figures.

    One, Jonathan Anomaly, published a paper in 2018 entitled Defending Eugenics, which called for a “non-coercive” or “liberal eugenics” to “increase the prevalence of traits that promote individual and social welfare”. The publication triggered an open letter of protest by Australian academics to the journal that published the paper, and protests at the University of Pennsylvania when he commenced working there in 2019. (Anomaly now works at a private institution in Quito, Ecuador, and claims on his website that US universities have been “ideologically captured”.)

    Another, Razib Khan, saw his contract as a New York Times opinion writer abruptly withdrawn just one day after his appointment had been announced, following a Gawker report that highlighted his contributions to outlets including the paleoconservative Taki’s Magazine and anti-immigrant website VDare.

    The Michigan State University professor Stephen Hsu, another billed guest, resigned as vice-president of research there in 2020 after protests by the MSU Graduate Employees Union and the MSU student association accusing Hsu of promoting scientific racism.

    Brian Chau, executive director of the “effective accelerationist” non-profit Alliance for the Future (AFF), was another billed guest. A report last month catalogued Chau’s long history of racist and sexist online commentary, including false claims about George Floyd, and the claim that the US is a “Black supremacist” country. “Effective accelerationists” argue that human problems are best solved by unrestricted technological development.

    Another advertised guest, Michael Lai, is emblematic of tech’s new willingness to intervene in Bay Area politics. Lai, an entrepreneur, was one of a slate of “Democrats for Change” candidates who seized control of the powerful Democratic County Central Committee from progressives, who had previously dominated the body that confers endorsements on candidates for local office.

    In a phone interview, Lai said he did not attend the Manifest conference in early June. “I wasn’t there, and I did not know about what these guys believed in,” Lai said. He also claimed not to know why he was advertised on the manifest.is website as a conference-goer, adding that he had been invited by Austin Chen of Manifold Markets.

    In an email, Chen, who organized the conference and is a co-founder of Manifund, wrote: “We’d scheduled Michael for a talk, but he had to back out last minute given his campaigning schedule.

    “This kind of thing happens often with speakers, who are busy people; we haven’t gotten around to removing Michael yet but will do so soon,” Chen added.

    On the other speakers, Chen wrote in an earlier email: “We were aware that some of these folks have expressed views considered controversial.” He went on: “Some of these folks we’re bringing in because of their past experience with prediction markets (eg [Richard] Hanania has used them extensively and partnered with many prediction market platforms). Others we’re bringing in for their particular expertise (eg Brian Chau is participating in a debate on AI safety, related to his work at Alliance for the Future).”

    Chen added: “We did not invite them to give talks about race and IQ” and concluded: “Manifest has no specific views on eugenics or race & IQ.”

    Democrats for Change received significant support from Bay Area tech industry heavyweights, and Lai is now running for the San Francisco board of supervisors, the city’s governing body. He is endorsed by a “grey money” influence network funded by rightwing tech figures like David Sacks and Garry Tan. The same network poured tens of thousands of dollars into his successful March campaign for the DCCC and ran online ads in support of him, according to campaign contribution data from the San Francisco Ethics Commission.

    Several controversial guests were also present at Manifest 2023, also held at Lighthaven, including rightwing writer Hanania, whose pseudonymous white-nationalist commentary from the early 2010s was catalogued last August in HuffPost, and Malcolm and Simone Collins, whose EA-inspired pro-natalism – the belief that having as many babies as possible will save the world – was detailed in the Guardian last month. The Collinses were, along with Razib Khan and Jonathan Anomaly, featured speakers at the eugenicist Natal Conference in Austin last December, as previously reported in the Guardian.

    Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”

    HoSang added: “From there, they anoint themselves the elite managers of these forces, investing in the ‘winners’ as they see fit.”

    “The presence of Stephen Hsu here is particularly alarming,” HoSang concluded. “He’s often been a bridge between fairly explicit racist and antisemitic people like Ron Unz, Steven Sailer and Stefan Molyneux and more mainstream figures in tech, investment and scientific research, especially around human genetics.”

    FTX proceedings

    As Lighthaven develops as a hub for EA and rationalism, the new court filing alleges that the purchase of the property was partly secured with money funnelled by Sam Bankman-Fried and other FTX insiders in the months leading up to the crypto empire’s collapse.

    Bankman-Fried was sentenced to 25 years in prison in March for masterminding the $8bn fraud that led to FTX’s downfall in November 2022, in which customer money was illegally transferred from FTX to its sister trading firm, Alameda Research, to address a liquidity crisis. Since the collapse, FTX and Alameda have been in the hands of trustees, who in their efforts to pay back creditors are also pursuing money owed to FTX, including money they say was illegitimately transferred to others by Bankman-Fried and company insiders.

    On 13 May, those trustees filed a complaint with a bankruptcy court in Delaware – where FTX and Lightcone both were incorporated – alleging that Lightcone received more than $4.9m in fraudulent transfers from Alameda, via the non-profit FTX Foundation, over the course of 2022.

    State and federal filings indicate that Lightcone was incorporated on 13 October 2022 with Habryka acting in all executive roles. In an application to the IRS for 501(c)(3) charitable status, Habryka aligned the organization with an influential intellectual current in Silicon Valley: “Combining the concepts of the Longtermism movement … and rationality … Lightcone Infrastructure Inc works to steer humanity towards a safer and better future.”

    California filings also state that from 2017 until the application, Lightcone and its predecessor project had been operating under the fiscal sponsorship of the Center for Applied Rationality (CFAR), a rationalism non-profit established in 2012.

    The main building on the property now occupied by the Lighthaven campus was originally constructed in 1903 as a mansion, and between 1979 and Lightcone’s 2022 purchase of the property, the building was run as a hotel, the Rose Garden Inn.

    Alameda county property records indicate that the four properties encompassed by the campus remain under the ownership of an LLC, Lightcone Rose Garden (Lightcone RG), of which Lightcone is the sole member, according to the filings. California business filings identify Habryka as the registered agent of Lightcone Infrastructure and Lightcone RG. Lightcone and CFAR both give the campus as their principal place of business in their most recent tax filings.

    On 2 March 2022, according to the complaint, CFAR applied to the FTX Foundation asking that “$2,000,000 be given to the Center for Applied Rationality as an exclusive grant for its project, the Lightcone Infrastructure Team”. The FTX Foundation wired the money the same day. Between then and October 2022, according to the trustees, the FTX Foundation wired at least 14 more transfers worth $2,904,999.61. In total, FTX’s administrators say, almost $5m was transferred to CFAR from the FTX Foundation.

    On 13 July and 18 August 2022, according to the complaint, the FTX Foundation also wired two payments of $500,000 each to a title company as a deposit for Lightcone RG’s purchase of the Rose Garden Inn. The complaint says these were intended as a loan, but there is no evidence that the $1m was repaid. Then, on 3 October, the FTX Foundation approved a $1.5m grant to Lightcone Infrastructure, according to FTX trustees.

    The complaint alleges that Lightcone got another $20m loan to fund the Rose Garden Inn purchase from Slimrock Investments Pte Ltd, a Singapore-incorporated company owned by the Estonian software billionaire, Skype co-founder and EA/rationalism adherent Jaan Tallinn. This included the $16.5m purchase price and $3.5m for renovations and repairs.

    Slimrock Investments has no apparent public-facing website or means of contact. The Guardian emailed Tallinn for comment via the Future of Life Institute, a non-profit whose self-assigned mission is: “Steering transformative technology towards benefiting life and away from extreme large-scale risks.” Tallinn sits on that organization’s board. Neither Tallinn nor the Future of Life Institute responded to the request.

    The complaint also says that FTX trustees emailed CFAR four times between June and August 2023, and that on 31 August they hand-delivered a letter to CFAR’s Rose Garden Inn offices. All of these attempts at contact were ignored. Only after the debtors filed a discovery motion on 31 October 2023 did CFAR engage with them. The most recent filing, on 17 May, is a summons for CFAR and Lightcone to appear in court to answer the complaint. The suit is ongoing.

    The Guardian emailed CFAR president and co-founder Anna Salamon for comment on the allegations but received no response.

  • Deepfakes are here and can be dangerous, but ignore the alarmists – they won’t harm our elections | Ciaran Martin

    Sixteen days before the Brexit referendum, and only two days before the deadline to apply to cast a ballot, the IT system for voter registrations collapsed. The remain and leave campaigns were forced to agree a 48-hour registration extension. Around the same time, evidence was beginning to emerge of a major Russian “hack-and-leak” operation targeting the US presidential election. Inevitably, questions arose as to whether the Russians had successfully disrupted the Brexit vote.

    The truth was more embarrassingly simple. A comprehensive technical investigation, supported by the National Cyber Security Centre – which I headed at the time – set out in detail what had happened. A TV debate on Brexit had generated unexpected interest. Applications spiked to double those projected. The website couldn’t cope and crashed. There was no sign of any hostile activity.

    But this conclusive evidence did not stop a parliamentary committee, a year later, saying that it did “not rule out the possibility that there was foreign interference” in the incident. No evidence was provided for this remarkable assertion. What actually happened was a serious failure of state infrastructure, but it was not a hostile act.

    This story matters because it has become too easy – even fashionable – to cast the integrity of elections into doubt. “Russia caused Brexit” is nothing more than a trope that provides easy comfort to the losing side. There was, and is, no evidence of any successful cyber operations or other digital interference in the UK’s 2016 vote.

    But Brexit is far from the only example of such electoral alarmism. In its famous report on Russia in 2020, the Intelligence and Security Committee correctly said that the first detected attempt by Russia to interfere in British politics occurred in the context of the Scottish referendum campaign in 2014. However, the committee did not add that the quality of such efforts was risible, and their impact zero. Russia has been waging such campaigns against the UK and other western democracies for years. Thankfully, though, it hasn’t been very good at it. At least so far.

    Over the course of the past decade, there are only two instances where digital interference can credibly be seen to have severely affected a democratic election anywhere in the world. The US in 2016 is undoubtedly one. The other is Slovakia last year, when an audio deepfake seemed to have an impact on the polls late on.

    The incident in Slovakia fuelled part of a new wave of hysteria about electoral integrity. Now the panic is all about deepfakes. But we risk making exactly the same mistake with deepfakes as we did with cyber-attacks on elections: confusing activity and intent with impact, and what might be technically possible with what is realistically achievable.

    So far, it has proved remarkably hard to fool huge swathes of voters with deepfakes. Many of them, including much of China’s information operations, are poor in quality. Even some of the better ones – like a recent Russian fake of Ukrainian TV purporting to show Kyiv admitting it was behind the Moscow terror attacks – look impressive, but are so wholly implausible in substance that they are not believed by anyone.

    Moreover, a co-ordinated response by a country to a deepfake can blunt its impact: think of the impressive British response to the attempt to smear Sadiq Khan last November, when the government security minister lined up behind the Labour mayor of London in exhorting the British media and public to pay no attention to a deepfake audio being circulated. This was in marked contrast to events in Slovakia, where gaps in Meta’s removal policy, and the country’s electoral reporting restrictions, made it much harder to circulate the message that the controversial audio was fake. If a deepfake does cut through in next month’s British election, what matters is how swiftly and comprehensively it is debunked.

    None of this is to be complacent about the reality that hostile states are trying to interfere in British politics. They are. And with fast-developing tech and techniques, the threat picture can change. “Micro” operations, such as a localised attempt to use AI to persuade voters in New Hampshire to stay at home during the primaries, are one such area of concern. In the course of the UK campaign, one of my main worries would be about targeted local disinformation and deepfake campaigns in individual contests. It is important that the government focuses resources and capabilities on blunting these operations.

    But saying that hostile states are succeeding in interfering in our elections, or that they are likely to, without providing any tangible evidence is not a neutral act. In fact, it’s really dangerous. If enough supposedly credible voices loudly cast aspersions on the integrity of elections, at least some voters will start to believe them. And if that happens, we will have done the adversaries’ job for them.

    There is a final reason why we should be cautious about the “something-must-be-done” tendency where the risk of electoral interference is concerned. State intervention in these matters is not some cost-free, blindingly obvious solution that the government is too complacent to use. If false information is so great a problem that it requires government action, that requires, in effect, creating an arbiter of truth. To which arm of the state would we wish to assign this task?

    Ciaran Martin is a professor at the Blavatnik School of Government at the University of Oxford, and a former chief executive of the National Cyber Security Centre

  • How to spot a deepfake: the maker of a detection tool shares the key giveaways

    You – a human, presumably – are a crucial part of detecting whether a photo or video is made by artificial intelligence.

    There are detection tools, made both commercially and in research labs, that can help. To use these deepfake detectors, you upload or link a piece of media that you suspect could be fake, and the detector will give a percent likelihood that it was AI-generated. But your senses and an understanding of some key giveaways provide a lot of insight when analyzing media to see whether it’s a deepfake. While regulations for deepfakes, particularly in elections, lag behind the quick pace of AI advancements, we have to find ways to figure out whether an image, audio clip or video is actually real.

    Siwei Lyu made one of these tools, the DeepFake-o-meter, at the University at Buffalo. His tool is free and open-source, compiling more than a dozen algorithms from other research labs in one place. Users can upload a piece of media and run it through these different labs’ tools to get a sense of whether it could be AI-generated.

    The DeepFake-o-meter shows both the benefits and limitations of AI-detection tools. When we ran a few known deepfakes through the various algorithms, the detectors gave ratings for the same video, photo or audio recording ranging from 0% to 100% likelihood of being AI-generated.

    AI, and the algorithms used to detect it, can be biased by the way it’s taught. At least in the case of the DeepFake-o-meter, the tool is transparent about that variability in results, while with a commercial detector bought in the app store, it’s less clear what its limitations are, Lyu said.

    “I think a false image of reliability is worse than low reliability, because if you trust a system that is fundamentally not trustworthy to work, it can cause trouble in the future,” Lyu said.

    His system is still barebones for users, having launched publicly just in January of this year. But his goal is that journalists, researchers, investigators and everyday users will be able to upload media to see whether it’s real. His team is working on ways to rank the various algorithms it uses for detection to inform users which detector would work best for their situation. Users can opt in to sharing the media they upload with Lyu’s research team to help them better understand deepfake detection and improve the website.

    Lyu often serves as an expert source for journalists trying to assess whether something could be a deepfake, so he walked us through a few well-known instances of deepfakery from recent memory to show the ways we can tell they aren’t real. Some of the obvious giveaways have changed over time as AI has improved, and will change again.

    “A human operator needs to be brought in to do the analysis,” he said. “I think it is crucial to be a human-algorithm collaboration. Deepfakes are a social-technical problem. It’s not going to be solved purely by technology. It has to have an interface with humans.”

    Audio

    A robocall that circulated in New Hampshire using an AI-generated voice of President Joe Biden encouraged voters there not to turn out for the Democratic primary, one of the first major instances of a deepfake in this year’s US elections.

    When Lyu’s team ran a short clip of the robocall through five algorithms on the DeepFake-o-meter, only one of the detectors came back at more than 50% likelihood of AI – and that one said it had a 100% likelihood. The other four ranged from 0.2% to 46.8% likelihood. A longer version of the call prompted three of the five detectors to come in at more than 90% likelihood.

    This tracks with our experience creating audio deepfakes: they’re harder to pick out because you’re relying solely on your hearing, and easier to generate because there are tons of examples of public figures’ voices for AI to use to make a person’s voice say whatever they want. But there are some clues in the robocall, and in audio deepfakes in general, to look out for.

    AI-generated audio often has a flatter overall tone and is less conversational than how we typically talk, Lyu said. You don’t hear much emotion. There may not be proper breathing sounds, like taking a breath before speaking.

    Pay attention to the background noises, too. Sometimes there are no background noises when there should be. Or, in the case of the robocall, there’s a lot of noise mixed into the background, almost as if to give an air of realness that actually sounds unnatural.

    Photos

    With photos, it helps to zoom in and examine closely for any “inconsistencies with the physical world or human pathology”, like buildings with crooked lines or hands with six fingers, Lyu said. Little details like hair, mouths and shadows can hold clues to whether something is real. Hands were once a clearer tell for AI-generated images because they would more frequently end up with extra appendages, though the technology has improved and that’s becoming less common, Lyu said.

    We sent the photos of Trump with Black voters that a BBC investigation found had been AI-generated through the DeepFake-o-meter. Five of the seven image-deepfake detectors came back with a 0% likelihood that the fake image was fake, while one clocked in at 51%. The remaining detector said no face had been detected.

    Lyu’s team noted unnatural areas around Trump’s neck and chin, people’s teeth looking off and webbing around some fingers.

    Beyond these visual oddities, AI-generated images just look too glossy in many cases. “It’s very hard to put into quantitative terms, but there is this overall view and look that the image looks too plastic or like a painting,” Lyu said.

    Videos

    Videos, especially those of people, are harder to fake than photos or audio. In some AI-generated videos without people, it can be harder to figure out whether imagery is real, though those aren’t “deepfakes” in the sense that the term typically refers to people’s likenesses being faked or altered.

    For the video test, we sent a deepfake of the Ukrainian president, Volodymyr Zelenskiy, that shows him telling his armed forces to surrender to Russia, which did not happen.

    The visual cues in the video include unnatural eye-blinking that shows some pixel artifacts, Lyu’s team said. The edges of Zelenskiy’s head aren’t quite right; they’re jagged and pixelated, a sign of digital manipulation.

    Some of the detection algorithms look specifically at the lips, because current AI video tools will mostly change the lips to say things a person didn’t say. The lips are where most inconsistencies are found. An example would be if a letter sound requires the lips to be closed, like a B or a P, but the deepfake’s mouth is not completely closed, Lyu said. When the mouth is open, the teeth and tongue appear off, he said.

    The video, to us, is more clearly fake than the audio or photo examples we flagged to Lyu’s team. But of the six detection algorithms that assessed the clip, only three came back with very high likelihoods of AI generation (more than 90%). The other three returned very low likelihoods, ranging from 0.5% to 18.7%.
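
    The spread in scores is worth dwelling on. Below is a minimal sketch, in Python, of the aggregation problem described above – it is not the DeepFake-o-meter’s actual interface. The detector names are invented, and two of the five scores are hypothetical; 100.0, 46.8 and 0.2 echo the robocall results reported above. The point it illustrates: when detectors disagree this widely, the honest verdict is “inconclusive”, which is exactly where Lyu’s human-algorithm collaboration comes in.

```python
# A sketch, not the DeepFake-o-meter's real API: detector names and two
# of the scores below are hypothetical; 100.0, 46.8 and 0.2 match the
# robocall results the article reports.
from statistics import mean, pstdev

def summarize(scores: dict) -> str:
    """Turn per-detector likelihoods (0-100%) into a cautious verdict."""
    values = list(scores.values())
    avg, spread = mean(values), pstdev(values)
    # Wide disagreement between detectors is itself a signal: defer to
    # a human reviewer rather than trusting any single number.
    if spread > 25:
        return f"inconclusive (mean {avg:.1f}%, spread {spread:.1f}) - refer to human review"
    label = "likely AI-generated" if avg >= 50 else "likely authentic"
    return f"{label} (mean {avg:.1f}%, spread {spread:.1f})"

robocall_scores = {"det_a": 100.0, "det_b": 46.8, "det_c": 12.3,
                   "det_d": 0.9, "det_e": 0.2}
print(summarize(robocall_scores))
# -> inconclusive (mean 32.0%, spread 38.0) - refer to human review
```

    Treating disagreement as a signal, rather than averaging it away, mirrors Lyu’s warning that a false image of reliability is worse than low reliability.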

  • US cites AI deepfakes as reason to keep Biden recording with Robert Hur secret

    The US Department of Justice is making a novel legal argument to keep a recording of an interview with Joe Biden from becoming public. In a filing late last week, the department cited the risk of AI-generated deepfakes as one of the reasons it refuses to release audio of the president’s interview with special counsel Robert Hur. The conversation about Biden’s handling of classified documents is a source of heated political contention, with Republicans pushing for release of the recordings and the White House moving to block them.

    The justice department’s filing, which it released late on Friday night, argues that the recording should not be released on a variety of grounds, including privacy interests and executive privilege. One section of the filing, however, is specifically dedicated to the threat of deepfakes and disinformation, stating that there is substantial risk that people could maliciously manipulate the audio if it were to be made public.

    “The passage of time and advancements in audio, artificial intelligence, and ‘deep fake’ technologies only amplify concerns about malicious manipulation of audio files,” the justice department stated. “If the audio recording is released here, it is easy to foresee that it could be improperly altered, and that the altered file could be passed off as an authentic recording and widely distributed.”

    The filing presents a novel argument about the threat of AI-generated disinformation from the release of government materials, potentially setting up future legal battles over the balance between transparency and preventing the spread of misinformation.

    “A malicious actor could slow down the speed of the recording or insert words that President Biden did not say or delete words that he did say,” the filing argues. “That problem is exacerbated by the fact that there is now widely available technology that can be used to create entirely different audio ‘deepfakes’ based on a recording.”

    Biden’s interview with Hur reignited a longstanding conservative campaign of questioning Biden’s mental faculties and drawing attention to his age, which critics claim make him unfit to be president. While Hur’s report into classified documents found at Biden’s private residence did not result in charges against him, the special counsel’s description of him as an “elderly man with poor memory” became ammunition for Republicans and prompted Biden to defend his mental fitness.

    Although transcripts of Hur’s interview with Biden are public, conservative groups and House Republicans have taken legal action, filed Freedom of Information Act requests and demanded the release of the recorded audio from the conversation as Biden campaigns against Donald Trump. Biden has asserted executive privilege to prevent the release of the audio, while the latest justice department filing pushes back against many of the conservative claims about the recording. The filing also argues that releasing the recording would create increased public awareness that audio of the interview is circulating, making it more believable when people encounter doctored versions of it.

    A number of politicians have become the target of deepfakes created in attempts to swing political opinion, including Biden. A robocall earlier this year that mimicked Biden’s voice and told people not to vote in New Hampshire’s Democratic primary was sent to thousands of people. The political consultant allegedly behind the disinformation campaign is now facing criminal charges and a potential $6m fine.

  • Trump joins TikTok despite seeking to ban app as president

    Former president Donald Trump has joined the social media platform TikTok and made his first post late on Saturday night: a video featuring the Ultimate Fighting Championship CEO, Dana White, introducing Trump on the platform.

    The move came despite the fact that, as president, Trump pushed to ban TikTok by executive order because the app’s parent company is based in China. Trump said in March 2024 that he believed the app was a national security threat, but he later reversed his support for a ban.

    The 13-second video was taken as Trump attended a UFC event on Saturday evening in Newark, New Jersey. In the video, Trump says it is an “honor” to have joined the app as a Kid Rock song plays in the background.

    “The campaign is playing on all fields,” an adviser to Trump’s campaign told Politico. “Being able to do outreach on multiple platforms and outlets is important and this is just one of many ways we’re going to reach out to voters. TikTok skews towards a younger audience.”

    Trump’s son Donald Trump Jr joined the app last week, where he posted videos from the Manhattan courthouse where Trump was convicted on Thursday on all 34 counts of falsifying business records.

    Joe Biden signed legislation into law in April 2024 that will ban the app from the US unless its parent company, ByteDance, sells it within 270 days, over concerns that it poses a national security risk. TikTok has sued to block the ban, with oral arguments in the case scheduled for September. Biden’s campaign has continued using the app despite the legislation.

  • Trump reportedly considers White House advisory role for Elon Musk

    Donald Trump has floated a possible advisory role for the tech billionaire Elon Musk if he were to retake the White House next year, according to a new report from the Wall Street Journal.

    The two men, who once had a tense relationship, have had several phone calls a month since March as Trump looks to court powerful donors and Musk seeks an outlet for his policy ideas, the newspaper said, citing several anonymous sources familiar with their conversations.

    Musk and Trump connected in March at the estate of the billionaire investor Nelson Peltz. Since then, the two have discussed various policy issues, including immigration, which Musk has become vocal about in recent months. “America will fall if it tries to absorb the world,” Musk tweeted in March.

    Musk has said he will not donate to either presidential campaign this election, but he has reportedly told Trump he plans to host gatherings to dissuade wealthy and powerful allies from supporting Joe Biden in November.

    It has been only a few years since Musk and Trump were exchanging insults. At a rally in 2022, Trump called Musk “another bullshit artist”, while Musk tweeted that Trump should “hang up his hat and sail into the sunset”. Musk briefly served on Trump’s White House business advisory group early in his presidency, but dropped out after Trump pulled the US out of the Paris climate accord in 2017.

    Now, relations appear to have softened. When Musk acquired Twitter in 2022 – later renaming it X – he reinstated Trump’s account. Musk has since asked Trump to be more active on X, according to the Journal, though Trump has largely been loyal to his own Truth Social platform.

    In March, after meeting Musk at Peltz’s estate, Trump told CNBC: “I’ve been friendly with him over the years. I helped him when I was president. I helped him. I’ve liked him.”

    As the owner of Tesla and SpaceX, Musk has benefited from federal government policies and contracts over the last several years, including rocket-service contracts and tax credits for electric vehicles. Trump said in March that he and Musk “obviously have opposing views on a minor subject called electric cars”, with Trump opposing ramping up electric vehicle production and supporting tariffs against foreign EV manufacturing.

    Peltz, an investor, has been a key connector between Trump and Musk. Peltz and Musk have told Trump that they are working on a large data-driven project designed to ensure votes are fairly counted, though details of the project remain opaque.

  • The US attempt to ban TikTok is an attack on ideas and hope | Dominic Andre

    I’m a TikTok creator. I’ve used TikTok to build a multimillion-dollar business focused on sharing interesting things I’ve learned in life and throughout my years in college. TikTok allowed me to create a community and helped further my goal of educating the public. I always feared that one day it would be threatened. And now it’s happening.

    Why does the US government want to ban TikTok? The reasons given include TikTok’s foreign ownership and its “addictive” nature, but I suspect that part of the reason is that the app primarily appeals to younger generations, who often hold political and moral views that differ significantly from those of older generations, including many of today’s politicians.

    The platform has become a powerful tool for grassroots movements challenging established elites, and it has amplified voices advocating against capitalism and in support of the Black Lives Matter movement and women’s rights. Moreover, for the first time in modern history, Americans’ support for Israel has sharply fallen, a shift I would argue can be attributed in part to TikTok’s video-sharing capabilities. In particular, the app’s stitching feature, which allows creators to link videos, correcting inaccuracies and presenting opposing views within a single video, has revolutionized how audiences access information and form more informed opinions.

    US Congress has cited concerns over Chinese data collection as justification for proposing a ban. This rationale might be appropriate for banning the app on government-issued devices, both for official and personal use. Other Americans, however, have the right to decide which technologies we use and how we share our data. Personally, I am indifferent to China possessing my data. What harm can the Chinese government do to me if I live in the United States? Also, I’d point out that viewpoints critical of Chinese policies have proliferated on TikTok, which would seem to indicate that the platform is not predominantly used for spreading Chinese propaganda.

    If politicians’ concern were genuinely about foreign influence, we would discuss in greater detail how Russia allegedly used Facebook to bolster Trump’s campaign and disseminate misinformation. Following this logic, we might as well consider banning Facebook.

    I spent a decade in college studying international affairs and psychology for my master’s. So while I’m somewhat prepared for tough times in the event of TikTok ending, many others aren’t. TikTok hosts tens of thousands of small businesses that, thanks to the platform, reach millions worldwide. This platform has truly leveled the playing field, giving everyone from bedroom musicians to aspiring actors a real shot at being heard. A ban on TikTok would threaten those livelihoods.

    A ban on TikTok would also threaten a diverse community of creators and the global audience connected through it. As a Palestinian, TikTok gave my cause a voice, a loud one. It became a beacon for bringing the stories of Gaza’s suffering to the forefront, mobilizing awareness and action in ways no other platform has.

    Using TikTok’s live-streaming feature, I’ve been able to talk to hundreds of thousands of people each day about the issues Palestinians face. I have personally watched the minds of hundreds of people change as they asked me questions out of honest curiosity.

    TikTok has made a real difference in educating people about what is happening in Palestine. The stitch feature is one of the most powerful tools for debunking propaganda spread against Palestinians. This feature does not exist on other platforms and was first created by TikTok; with it, creators can correct information and respond to the spread of misinformation in real time.

    Removing TikTok would do more than disrupt entertainment; it would sever a lifeline for marginalized voices across the world – people like Bisan Owda, an influential young journalist in Gaza whose TikToks each reach hundreds of thousands of views, or creators like myself, whose family was driven out of Palestine and killed during the Nakba in 1948. I’ve used TikTok to show all the paperwork of my great-grandfather’s land ownership in Palestine – and his passport – to show how his existence was taken away from him.

    On TikTok, you’ll find thousands of creators from different ethnic groups teaching the world about their cultures. You’ll also find disabled creators sharing their journeys and experiences in a world designed for able-bodied people. UncleTics, for example, is a creator who lives with Tourette syndrome and creates content about his life while also bringing joy to his audience.

    Banning TikTok wouldn’t just mean an enormous financial hit for the creators who use the platform – it would stifle the rich exchange of ideas, culture and awareness that TikTok uniquely fosters. We stand to lose a tool that has brought global issues out of the shadows and into the public eye. A ban on TikTok is a ban on ideas and hope.

    Almost every creator and consumer of TikTok I have spoken to does not care about potential data collection by China. Creators, in particular, don’t expect privacy when we’re posting about our lives on a public platform. If Congress wants to enact laws that make it harder for social media companies to harvest our data, it should do so across the board for all social media platforms – not just the ones that happen to be based in non-western countries.

    A TikTok ban threatens to destroy millions of jobs and silence diverse voices. It would change the world for the worse.

    Dominic Andre is a content creator and the CEO of The Lab

  • New York governor said Black kids in the Bronx do not know the word ‘computer’

    The governor of New York, Kathy Hochul, has rapidly backtracked on remarks she made on Monday after she came under a blizzard of criticism for saying that Black children in the Bronx did not know the word “computer”.

    Hochul had intended her appearance at the Milken Institute Global Conference in California on Monday to showcase Empire AI, the $400m consortium she is leading to create an artificial intelligence computing center in upstate New York. Instead, she dug herself into a hole with an utterance she quickly regretted.

    “Right now we have, you know, young Black kids growing up in the Bronx who don’t even know what the word ‘computer’ is,” she said. For good measure, she added: “They don’t know, they don’t know these things.”

    The backlash was swift and piercing. Amanda Septimo, a member of the New York state assembly representing the south Bronx, called Hochul’s remarks “harmful, deeply misinformed and genuinely appalling”. She said on X that “repeating harmful stereotypes about one of our most underserved communities only perpetuates systems of abuse”.

    Fellow assembly member and Bronxite Karines Reyes said she was deeply disturbed by the remarks and exhorted Hochul to “do better”. “Our children are bright, brilliant, extremely capable, and more than deserving of any opportunities that are extended to other kids,” she said.

    Few public figures were prepared to offer the governor support. They included the speaker of the state assembly, Carl Heastie, who said her words were “inartful and hurtful” but not reflective of “where her heart is”. The civil rights leader Al Sharpton also gave her the benefit of the doubt, saying that she was trying to make a “good point” that “a lot of our community is robbed of using social media because we are racially excluded from access”.

    By Monday evening, Hochul had apologized. “I misspoke and I regret it,” she said. In a statement to the media, she said: “Of course Black children in the Bronx know what computers are – the problem is that they too often lack access to the technology needed to get on track to high-paying jobs in emerging industries like AI.”

    This is not the first time this year that Hochul has found herself with her foot in her mouth. In February she envisaged what would happen if Canada attacked a US city, as a metaphor for the Israeli military operation in Gaza in response to the 7 October Hamas attacks. “If Canada someday ever attacked Buffalo, I’m sorry, my friends, there would be no Canada the next day,” she said. That apology, for a “poor choice of words”, was made swiftly, too.