More stories

  • The Guardian view on Sunak’s foreign policy: a Europe-shaped hole | Editorial

    The alliance between Britain and the US, resting on deep foundations of shared history and strategic interest, is not overly affected by the personal relationship between a prime minister and a president. Sometimes individual affinity is consequential, as when Margaret Thatcher and Ronald Reagan were aligned over cold war doctrine, or when Tony Blair put Britain in lockstep with George W Bush for the march to war in Iraq. But there is no prospect of Rishi Sunak forming such a partnership – for good or ill – with Joe Biden at this week’s Washington summit.

    Viewed from the White House, the prime minister cuts an insubstantial figure – the caretaker leader of a country that has lost its way. That doesn’t jeopardise the underlying relationship. Britain is a highly valued US ally, most notably in the fields of defence, security and intelligence. On trade and economics, Mr Sunak’s position is less comfortable. The prime minister is a poor match with a president who thinks Brexit was an epic blunder and whose flagship policy is a rebuttal of the sacred doctrines of the Conservative party.

    Mr Biden is committed to shoring up American primacy by means of massive state support for green technology, tax breaks for foreign investment and reconfiguring supply chains with a focus on national security. Mr Sunak’s instincts are more laissez-faire, and his orthodox conservative budgets preclude interventionist statecraft.

    The two men disagree on a fundamental judgment about the future direction of the global economy, but only one of them has a hand on the steering wheel. Mr Sunak looks more like a passenger, or a pedestrian, since Britain bailed out of the EU – the vehicle that allows European countries to aggregate mid-range economic heft into continental power.

    London lost clout in the world by surrendering its seat in Brussels, but that fact is hard for Brexit ideologues to process. Their worldview is constructed around the proposition that EU membership depleted national sovereignty and that leaving the bloc would open more lucrative trade routes. Top of the wishlist was a deal with Washington, and Mr Biden has said that won’t happen. Even if it did, the terms would be disadvantageous to Britain as the supplicant junior partner.

    If Mr Sunak grasps that weakness, he dare not voice it. Instead, Downing Street emits vague noises about Britain’s leading role in AI regulation. But, in governing uses of new technology, Brussels matters more to Washington. London is not irrelevant, but British reach is reduced when ministers are excluded from the rooms where their French, German and other continental counterparts develop policy.

    Those are the relationships that Mr Sunak must cultivate with urgency. But his view of Europe is circumscribed by Brexit ideology and parochial campaign issues. His meetings with the French president, Emmanuel Macron, have been dominated by the domestic political obsession with small-boat migration across the Channel. The prime minister has no discernible relationship with the German chancellor, Olaf Scholz. He has not visited Berlin.

    Negotiating the Windsor framework to stabilise Northern Ireland’s status in post-Brexit trade was a vital step in repairing damage done by Boris Johnson and Liz Truss to UK relations with the EU. But there is still a gaping European hole in Britain’s foreign policy. It is visible all the way across the Atlantic, even if the prime minister refuses to see it.

  • ‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI

    Rumman Chowdhury often has trouble sleeping, but, to her, this is not a problem that requires solving. She has what she calls “2am brain”, a different sort of brain from her day-to-day brain, and the one she relies on for especially urgent or difficult problems. Ideas, even small-scale ones, require care and attention, she says, along with a kind of alchemic intuition. “It’s just like baking,” she says. “You can’t force it, you can’t turn the temperature up, you can’t make it go faster. It will take however long it takes. And when it’s done baking, it will present itself.”

    It was Chowdhury’s 2am brain that first coined the phrase “moral outsourcing”. The concept has since become a key point in how she – now one of the leading thinkers on artificial intelligence – considers accountability and governance when it comes to the potentially revolutionary impact of AI.

    Moral outsourcing, she says, applies the logic of sentience and choice to AI, allowing technologists to effectively reallocate responsibility for the products they build onto the products themselves – technical advancement becomes predestined growth, and bias becomes intractable.

    “You would never say ‘my racist toaster’ or ‘my sexist laptop’,” she said in a Ted Talk from 2018. “And yet we use these modifiers in our language about artificial intelligence. And in doing so we’re not taking responsibility for the products that we build.” Writing ourselves out of the equation produces systematic ambivalence on a par with what the philosopher Hannah Arendt called the “banality of evil” – the wilful and cooperative ignorance that enabled the Holocaust. “It wasn’t just about electing someone into power that had the intent of killing so many people,” she says. “But it’s that entire nations of people also took jobs and positions and did these horrible things.”

    Chowdhury does not really have one title; she has dozens, among them Responsible AI fellow at Harvard, AI global policy consultant and former head of Twitter’s Meta team (Machine Learning Ethics, Transparency and Accountability). AI has been giving her 2am brain for some time. Back in 2018, Forbes named her one of the five people “building our AI future”.

    A data scientist by trade, she has always worked in a slightly undefinable, messy realm, traversing social science, law, philosophy and technology as she consults with companies and lawmakers to shape policy and best practices. Around AI, her approach to regulation is unique in its staunch middle-ness – both welcoming of progress and firm in the assertion that “mechanisms of accountability” should exist.

    Effervescent, patient and soft-spoken, Chowdhury listens with disarming care. She has always found people much more interesting than what they build or do. Before skepticism around tech became reflexive, Chowdhury had fears too – not of the technology itself, but of the corporations that developed and sold it.

    As the global lead for responsible AI at the consulting firm Accenture, she led the team that designed a fairness evaluation tool that pre-empted and corrected algorithmic bias. She went on to start Parity, an ethical AI consulting platform that seeks to bridge “different communities of expertise”. At Twitter – where her team became one of the first disbanded under Elon Musk – she hosted the company’s first-ever algorithmic bias bounty, inviting outside programmers and data scientists to evaluate the site’s code for potential biases. The exercise revealed a number of problems, including that the site’s photo-cropping software seemed to overwhelmingly prefer faces that were young, feminine and white.

    This is a strategy known as red-teaming, in which programmers and hackers from outside an organization are encouraged to try to circumvent certain safeguards and push a technology to “do bad things to identify what bad things it’s capable of”, says Chowdhury. These kinds of external checks and balances are rarely implemented in the world of tech because of technologists’ fear of “people touching their baby”.

    She is currently working on another red-teaming event for Def Con, the hacker convention, alongside AI Village, a community of hackers focused on AI. This time, hundreds of hackers are gathering to test ChatGPT, with the collaboration of its maker OpenAI, along with Microsoft, Google and the Biden administration. The “hackathon” is scheduled to run for over 20 hours, producing a dataset that is “totally unprecedented”, says Chowdhury, who is organizing the event with Sven Cattell, founder of AI Village, and Austin Carson, president of the responsible AI non-profit SeedAI.

    In Chowdhury’s view, it’s only through this kind of collectivism that proper regulation – and regulation enforcement – can occur. In addition to third-party auditing, she serves on multiple boards across Europe and the US, helping to shape AI policy. She is wary, she tells me, of the instinct to over-regulate, which could lead models to overcorrect rather than address ingrained issues. When asked about gay marriage, for example, ChatGPT and other generative AI tools “totally clam up”, trying to make up for the number of people who have pushed the models to say negative things. But it’s not easy, she adds, to define what is toxic and what is hateful. “It’s a journey that will never end,” she tells me, smiling. “But I’m fine with that.”

    Early on, when she first started working in tech, she realized that “technologists don’t always understand people, and people don’t always understand technology”, and sought to bridge that gap. In its broadest interpretation, she tells me, her work deals with understanding humans through data. “At the core of technology is this idea that, like, humanity is flawed and that technology can save us,” she says, noting language like “body hacks” that implies a kind of optimization unique to this particular age of technology. There is an aspect of it that kind of wishes we were “divorced from humanity”.

    Chowdhury has always been drawn to humans, their messiness and cloudiness and unpredictability. As an undergrad at MIT, she studied political science, and, later, after a disillusioning few months in non-profits in which she “knew we could use models and data more effectively, but nobody was”, she went to Columbia for a master’s degree in quantitative methods.

    In the last month, she has spent a week in Spain helping to carry out the launch of the Digital Services Act, another in San Francisco for a cybersecurity conference, another in Boston for her fellowship, and a few days in New York for another round of Def Con press. After a brief spell in Houston, where she’s based, she has upcoming talks in Vienna and Pittsburgh on AI nuclear misinformation and Duolingo, respectively.

    At its core, what she prescribes is a relatively simple dictum: listen, communicate, collaborate. And yet, even as Sam Altman, the co-founder and CEO of OpenAI, testifies before Congress that he’s committed to preventing AI harms, she still sees familiar tactics at play. When an industry experiences heightened scrutiny, staving off prohibitive regulation often means taking control of the narrative – ie calling for regulation, while simultaneously spending millions in lobbying to prevent the passing of regulatory laws.

    The problem, she says, is a lack of accountability. Internal risk analysis is often distorted within a company because risk management doesn’t often employ morals. “There is simply risk and then your willingness to take that risk,” she tells me. When the risk of failure or reputational harm becomes too great, it moves to an arena where the rules are bent in a particular direction. In other words: “Let’s play a game where I can win because I have all of the money.”

    But people, unlike machines, have indefinite priorities and motivations. “There are very few fundamentally good or bad actors in the world,” she says. “People just operate on incentive structures.” Which in turn means that the only way to drive change is to make use of those structures, ebbing them away from any one power source. Certain issues can only be tackled at scale, with cooperation and compromise from many different vectors of power, and AI is one of them.

    Though she readily attests that there are limits – points where compromise is not an option. The rise of surveillance capitalism, she says, is hugely concerning to her. It is a use of technology that, at its core, is unequivocally racist and therefore should not be entertained. “We cannot put lipstick on a pig,” she said at a recent talk on the future of AI at New York University’s School of Social Sciences. “I do not think ethical surveillance can exist.”

    Chowdhury recently wrote an op-ed for Wired in which she detailed her vision for a global governance board. Whether it be surveillance capitalism or job disruption or nuclear misinformation, only an external board of people can be trusted to govern the technology – one made up of people like her, not tied to any one institution, and globally representative. On Twitter, a few users called her framework idealistic, referring to it as “blue sky thinking” or “not viable”. It’s funny, she tells me, given that these people are “literally trying to build sentient machines”.

    She’s familiar with the dissonance. “It makes sense,” she says. We’re drawn to hero narratives, the assumption that one person is and should be in charge at any given time. Even as she organizes the Def Con event, she tells me, people find it difficult to understand that there is a team of people working together every step of the way. “We’re getting all this media attention,” she says, “and everybody is kind of like, ‘Who’s in charge?’ And then we all kind of look at each other and we’re like, ‘Um. Everyone?’”
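    The bias bounty and red-teaming exercises described above come down to a simple kind of measurement: probe the system from the outside and tally how its choices fall across demographic groups. Below is a minimal sketch of that idea in Python; `crop_model`, its `saliency` method and the labelled `image_pairs` are hypothetical stand-ins for whatever cropping system and test set a red team is handed, not Twitter’s actual code.

    ```python
    from collections import defaultdict

    def selection_rate_by_group(image_pairs, crop_model):
        """For image pairs labelled with demographic groups, count how often
        the cropper's saliency score prefers each group's image."""
        wins, totals = defaultdict(int), defaultdict(int)
        for (img_a, group_a), (img_b, group_b) in image_pairs:
            if crop_model.saliency(img_a) >= crop_model.saliency(img_b):
                wins[group_a] += 1
            else:
                wins[group_b] += 1
            totals[group_a] += 1
            totals[group_b] += 1
        # A preference rate far from 0.5 for any group flags a potential bias.
        return {group: wins[group] / totals[group] for group in totals}
    ```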

  • When the tech boys start asking for new regulations, you know something’s up | John Naughton

    Watching the opening day of the US Senate hearings on AI brought to mind Marx’s quip about history repeating itself, “the first time as tragedy, the second as farce”. Except this time it’s the other way round. Some time ago we had the farce of the boss of Meta (née Facebook) explaining to a senator that his company made money from advertising. This week we had the tragedy of seeing senators quizzing Sam Altman, the new acceptable face of the tech industry.

    Why tragedy? Well, as one of my kids, looking up from revising O-level classics, once explained to me: “It’s when you can see the disaster coming but you can’t do anything to stop it.” The trigger moment was when Altman declared: “We think that regulatory interventions by government will be critical to mitigate the risks of increasingly powerful models.” Warming to the theme, he said that the US government “might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities”. He believed that companies like his can “partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes that develop and update safety measures and examining opportunities for global coordination.”

    To some observers, Altman’s testimony looked like big news: wow, a tech boss actually saying that his industry needs regulation! Less charitable observers (like this columnist) see two alternative interpretations. One is that it’s an attempt to consolidate OpenAI’s lead over the rest of the industry in large language models (LLMs), because history suggests that regulation often enhances dominance. (Remember AT&T.) The other is that Altman’s proposal is an admission that the industry is already running out of control, and that he sees bad things ahead. So his proposal is either a cunning strategic move or a plea for help. Or both.

    As a general rule, whenever a CEO calls for regulation, you know something’s up. Meta, for example, has been running ads for ages in some newsletters saying that new laws are needed in cyberspace. Some of the cannier crypto crowd have also been baying for regulation. Mostly, these calls are pitches for corporations – through their lobbyists – to play a key role in drafting the requisite legislation. Companies’ involvement is deemed essential because – according to the narrative – government is clueless. As Eric Schmidt – the nearest thing tech has to Machiavelli – put it last Sunday on NBC’s Meet the Press, the AI industry needs to come up with regulations before the government tries to step in “because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right. But the industry can roughly get it right and then the government can put a regulatory structure around it.”

    Don’t you just love that idea of the tech boys roughly “getting it right”? Similar claims are made by foxes when pitching for henhouse-design contracts. The industry’s next strategic ploy will be to plead that the current worries about AI are all based on hypothetical scenarios about the future. The most polite term for this is baloney. ChatGPT and its bedfellows are – among many other things – social media on steroids. And we already know how these platforms undermine democratic institutions and possibly influence elections. The probability that important elections in 2024 will not be affected by this kind of AI is precisely zero.

    Besides, as Scott Galloway has pointed out in a withering critique, it’s also a racing certainty that chatbot technology will exacerbate the epidemic of loneliness that is afflicting young people across the world. “Tinder’s former CEO is raising venture capital for an AI-powered relationship coach called Amorai that will offer advice to young adults struggling with loneliness. She won’t be alone. Call Annie is an ‘AI friend’ you can phone or FaceTime to ask anything you want. A similar product, Replika, has millions of users.” And of course we’ve all seen those movies – such as Her and Ex Machina – that vividly illustrate how AIs insert themselves between people and their relationships with other humans.

    In his opening words to the Senate judiciary subcommittee’s hearing, the chairman, Senator Blumenthal, said this: “Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment. The result is: predators on the internet; toxic content; exploiting children, creating dangers for them… Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real.”

    Amen to that. The only thing wrong with the senator’s stirring introduction is the word “before”. The threats and the risks are already here. And we are about to find out if Marx’s view of history was the one to go for.

    What I’ve been reading

    Capitalist punishment
    Will AI Become the New McKinsey? is a perceptive essay in the New Yorker by Ted Chiang.

    Founders keepers
    Henry Farrell has written a fabulous post called The Cult of the Founders on the Crooked Timber blog.

    Superstore me
    The Dead Silence of Goods is a lovely essay in the Paris Review by Adrienne Raphel about Annie Ernaux’s musings on the “superstore” phenomenon.

  • OpenAI CEO calls for laws to mitigate ‘risks of increasingly powerful’ AI

    The CEO of OpenAI, the company responsible for creating the artificial intelligence chatbot ChatGPT and the image generator Dall-E 2, said “regulation of AI is essential” as he testified in his first appearance in front of the US Congress.

    Speaking to the Senate judiciary committee on Tuesday, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms.

    “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his prepared remarks.

    Altman suggested the US government might consider licensing and testing requirements for the development and release of AI models. He proposed establishing a set of safety standards and a specific test that models would have to pass before they can be deployed, as well as allowing independent auditors to examine the models before they are launched. He also argued that existing frameworks like Section 230, which releases platforms from liability for the content their users post, would not be the right way to regulate the system.

    “For a very new technology we need a new framework,” Altman said.

    Both Altman and Gary Marcus, an emeritus professor of psychology and neural science at New York University who also testified at the hearing, called for a new regulatory agency for the technology. AI is complicated and moving fast, Marcus argued, making “an agency whose full-time job” is to regulate it crucial.

    Throughout the hearing, senators drew parallels between social media and generative AI, and the lessons lawmakers had learned from the government’s failure to act on regulating social platforms.

    Yet the hearing was far less contentious than those at which the likes of the Meta CEO, Mark Zuckerberg, testified. Many lawmakers gave Altman credit for his calls for regulation and acknowledgment of the pitfalls of generative AI. Even Marcus, brought on to provide skepticism about the technology, called Altman’s testimony sincere.

    The hearing came as renowned and respected AI experts and ethicists, including the former Google researchers Dr Timnit Gebru, who co-led the company’s ethical AI team, and Meredith Whittaker, have been sounding the alarm about the rapid adoption of generative AI, arguing the technology is over-hyped. “The idea that this is going to magically become a source of social good … is a fantasy used to market these programs,” Whittaker, now the president of the secure messaging app Signal, recently said in an interview with Meet the Press Reports.

    Generative AI is a probability machine “designed to spit out things that seem plausible” based on “massive amounts of effectively surveillance data that has been scraped from the web”, she argued.

    Senators Josh Hawley and Richard Blumenthal said this hearing was just the first step in understanding the technology.

    Blumenthal said he recognized what he described as the “promises” of the technology, including “curing cancer, developing new understandings of physics and biology, or modeling climate and weather”.

    Potential risks Blumenthal said he was worried about include deepfakes, weaponized disinformation, housing discrimination, harassment of women and impersonation frauds. “For me, perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers,” he said.

    Altman said that while OpenAI was building tools that will one day “address some of humanity’s biggest challenges like climate change and curing cancer”, the current systems were not capable of doing these things yet.

    But he believes the benefits of the tools deployed so far “vastly outweigh the risks”, and said the company conducts extensive testing and implements safety and monitoring systems before releasing any new system.

    “OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives but also that it creates serious risks that we have to work together to manage,” Altman said.

    Altman said the technology will significantly affect the job market but he believes “there will be far greater jobs on the other side of this”.

    “The jobs will get better,” he said. “I think it’s important to think of GPT as a tool not a creature … GPT-4 and tools like it are good at doing tasks, not jobs. GPT-4 will, I think, entirely automate away some jobs and it will create new ones that we believe will be much better.”

    Altman also said he was very concerned about the impact that large language model services will have on elections and misinformation, particularly ahead of the primaries.

    “There’s a lot that we can and do do,” Altman said in response to a question from Senator Amy Klobuchar about a tweet ChatGPT crafted that listed fake polling locations. “There are things that the model won’t do and there is monitoring. At scale … we can detect someone generating a lot of those [misinformation] tweets.”

    Altman didn’t have an answer yet for how content creators whose work is being used in AI-generated songs, articles or other works can be compensated, saying the company is engaged with artists and other entities on what that economic model could look like. When asked by Klobuchar how he plans to remedy threats to local news publications whose content is being scraped and used to train these models, Altman said he hopes the tool will help journalists but that “if there are things that we can do to help local news, we’d certainly like to”.

    Touched upon but largely missing from the conversation was the potential danger of a small group of power players dominating the industry, a dynamic Whittaker has warned risks entrenching existing power dynamics.

    “There are only a handful of companies in the world that have the combination of data and infrastructural power to create what we’re calling AI from nose-to-tail,” she said in the Meet the Press interview. “We’re now in a position that this overhyped technology is being created, distributed and ultimately shaped to serve the economic interests of these same handful of actors.”
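    Whittaker’s “probability machine” line can be made concrete with a toy sketch: strip away the scale, and a language model is a mapping from context to a probability distribution over next tokens, sampled one token at a time. The hand-written table below is purely illustrative, standing in for the billions of parameters a system like ChatGPT learns from scraped web data.

    ```python
    import random

    # A tiny hand-written probability table standing in for learned weights.
    toy_model = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
        ("cat", "sat"): {"on": 0.8, "down": 0.2},
    }

    def next_token(context, model):
        """Sample the next token from the model's distribution for this context."""
        dist = model[context]
        tokens = list(dist)
        weights = [dist[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print(next_token(("the", "cat"), toy_model))  # most often prints "sat"
    ```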

  • Palantir, the all-seeing US data company keen to get into NHS health systems | Arwa Mahdawi

    Palantir, the all-seeing US tech company, could soon have the data of millions of NHS patients. My response? Yikes!

    You might never have heard of tech billionaire Peter Thiel’s CIA-backed analytics company. But it could know all about you if it wins a contract to manage NHS data.

    Peter Thiel has a terrible case of RBF – reclusive billionaire face. I’m not being deliberately mean-spirited, just stating the indisputable fact that the tech entrepreneur, a co-founder of PayPal, doesn’t exactly give off feel-good vibes. There is a reason why pretty much every mention of Thiel tends to be peppered with adjectives such as “secretive”, “distant” and “haughty”. He has cultivated an air of malevolent mystique. It’s all too easy to imagine him sitting in a futuristic panopticon, torturing kittens and plotting how to overthrow democracy.

    It’s all too easy to imagine that scenario because (apart from the torturing kittens part, obviously) that is basically how the 54-year-old billionaire already spends his days. Thiel was famously one of Donald Trump’s biggest donors in 2016; this year, he is one of the biggest individual donors to Republican politics. While it is hardly unusual for a billionaire to throw money at conservative politicians, Thiel is notable for expressing disdain for democracy, and funding far-right candidates who have peddled Trump’s dangerous lie that the election was stolen from him. As the New York Times warned in a recent profile: “Thiel’s wealth could accelerate the shift of views once considered fringe to the mainstream – while making him a new power broker on the right.”

    When he isn’t pumping money into far-right politicians, Thiel is busy accelerating the surveillance state. In 2003, the internet entrepreneur co-founded a data-analytics company called Palantir Technologies (after the “seeing stones” used in The Lord of the Rings), which has been backed by the venture capital arm of the CIA. What dark magic Palantir does with data is a bit of a mystery but it has its fingers in a lot of pies: it has worked with F1 racing, sold technology to the military, partnered with Space Force and developed predictive policing systems. And while no one is entirely sure about the extent of everything Palantir does, the general consensus seems to be that it has access to a huge amount of data. As one Bloomberg headline put it: “Palantir knows everything about you.”

    Soon it might know even more. The Financial Times recently reported that Palantir is “gearing up” to become the underlying data operating system for the NHS. In recent months it has poached two top executives from the NHS, including the former head of artificial intelligence, and it is angling to get a five-year, £360m contract to manage the personal health data of millions of patients. There are worries that the company will then entrench itself further into the health system. “Once Palantir is in, how are you going to remove them?” one source with knowledge of the matter told the FT.

    How worried should we be about all this? Well, according to one school of thought, consternation about the potential partnership is misplaced. There is a line of argument that it is just a dull IT deal that people are getting worked up over because they don’t like the fact that Thiel gave a bunch of money to Trump. And to be fair, even if you think Thiel is a creepy dude with creepy beliefs, it is important to note that he is not the only guy in charge of Palantir: the company was co-founded by Alex Karp, who is still the CEO; he voted for Hillary Clinton and has described himself as a progressive (although, considering his affinity for the military, he certainly has a different view of progress from mine).

    My school of thought, meanwhile, is best summarised as: yikes. Anyone who has had any experience of the abysmal US healthcare system should be leery of private American companies worming their way into the NHS. Particularly when the current UK government would privatise its own grandmother if the price was right. I don’t know exactly what Palantir wants with the NHS but I do know it’s worth keeping an eye on it. It’s certainly keeping an eye on you.
    Arwa Mahdawi is a Guardian columnist
    Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 300 words to be considered for publication, email it to us at guardian.letters@theguardian.com

  • What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI | Arwa Mahdawi

    Want to see a half-naked woman? Well, you’re in luck! The internet is full of pictures of scantily clad women. There are so many of these pictures online, in fact, that artificial intelligence (AI) now seems to assume that women just don’t like wearing clothes.

    That is my stripped-down summary of the results of a new research study on image-generation algorithms, anyway. Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time, the image was autocompleted with the man wearing a suit. When they fed the same algorithm a similarly cropped photo of a woman, it autocompleted her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)

    Why was the algorithm so fond of bikini pics? Well, because garbage in means garbage out: the AI “learned” what a typical woman looked like by consuming an online dataset which contained lots of pictures of half-naked women. The study is yet another reminder that AI often comes with baked-in biases. And this is not an academic issue: as algorithms control increasingly large parts of our lives, it is a problem with devastating real-world consequences. Back in 2015, for example, Amazon discovered that the secret AI recruiting tool it was using treated any mention of the word “women’s” as a red flag. Racist facial recognition algorithms have also led to black people being arrested for crimes they didn’t commit. And, last year, an algorithm used to determine students’ A-level and GCSE grades in England seemed to disproportionately downgrade disadvantaged students.

    As for those image-generation algorithms that reckon women belong in bikinis? They are used in everything from digital job interview platforms to photograph editing. And they are also used to create huge amounts of deepfake porn. A computer-generated AOC in a bikini is just the tip of the iceberg: unless we start talking about algorithmic bias, the internet is going to become an unbearable place to be a woman.
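    The study’s headline figures (43% suits for men, 53% swimwear for women) are, in effect, completion-frequency estimates: generate many completions of a cropped photo and count how often each attire category appears. A hedged sketch of that tally follows; `complete_image` and `classify_attire` are hypothetical placeholders for the image-generation model and the annotation step, and the researchers’ actual pipeline may well differ.

    ```python
    from collections import Counter

    def attire_rates(cropped_image, complete_image, classify_attire, n=100):
        """Autocomplete a cropped photo n times and tally attire categories."""
        counts = Counter(
            classify_attire(complete_image(cropped_image)) for _ in range(n)
        )
        return {attire: count / n for attire, count in counts.items()}

    # Example output shape: {"suit": 0.43, "casual": 0.38, "swimwear": 0.02, ...}
    ```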

  • From the Iowa caucuses to the Barnes & Noble fiasco, it’s clear: tech cannot save us | Julia Carrie Wong

    We have fallen for the idea that apps and artificial intelligence can substitute for judgement and hard work. They can’t.

    Every four years, journalists from around the world are drawn to the Iowa caucuses like podcasters to a murder. The blatantly anti-democratic tradition appeals to certain journalistic biases: the steadfast belief of the political press that […]