More stories


    Dignity in a Digital Age review: a congressman takes big tech to task

    Ro Khanna represents Silicon Valley and the best of Capitol Hill and wants to help. His aims are ambitious, his book necessary.

    Just on the evidence of his new book, Ro Khanna is one of the broadest, brightest and best-educated legislators on Capitol Hill. A graduate of the University of Chicago and Yale Law School who represents Silicon Valley, he is by far the most tech-savvy member of Congress.

    At this very dark moment for American democracy, this remarkable son of Indian immigrants writes with the optimism and idealism of a first-generation American who still marvels at the opportunities he has had.

    Even more remarkable for a congressman whose district includes Apple, Google, Intel and Yahoo, Khanna is one of the few who refuses to take campaign money from political action committees.

    Once or twice in a “heated basketball game” in high school, he writes, someone may have shouted “go back to India!” But what Khanna mostly remembers about his childhood are neighbors in Pennsylvania’s Bucks county who taught him “to believe that dreams are worth pursuing in America, regardless of one’s name or heritage”.

    His book is bulging with ideas about how to transform big tech from a huge threat to liberty into a genuine engine of democracy. What he is asking for is almost impossibly ambitious, but he never sounds daunted.

    “Instead of passively allowing tech royalty and their legions to lead the digital revolution and serve narrow financial ends before all others,” he writes, “we need to put it in service of our broader democratic aspirations. We need to steer the ship [and] call the shots.”

    The story of tech is emblematic of our time of singular inequality: a handful of big winners on top and a vast population untouched by the riches of the silicon revolution. Khanna begins his book with a barrage of statistics.
    Ninety percent of “innovation job growth” in recent decades has been in five cities, while 50% of digital service jobs are in just 10 major metro centers. Most Americans “are disconnected from the wealth generation of the digital economy”, he writes, “despite having their industries and … lives transformed by it”.

    A central thesis is that no person should be forced to leave their hometown to find a decent job. There is one big reason for optimism about this huge aspiration: the impact of Covid. Practically overnight, the pandemic “shattered” conventional wisdom “about tech concentration”. Suddenly it was obvious that high-speed broadband allowed “millions of jobs to be done anywhere in the nation”.

    The willingness of millions of Americans to leave big-city life is confirmed by red-hot real estate markets in far-flung towns and villages – and a Harris poll that showed nearly 40% of city dwellers were willing to live elsewhere. “The promise is of new jobs without sudden cultural displacement,” Khanna writes.

    He suggests a range of incentives to spread tech jobs into rural areas, including big federal investment to bring high-speed connections to the millions still without them. This in turn would make it possible to require federal contractors to have at least 10% of their workforces in rural communities.

    The congressman imagines nothing less than a “recentering” of “human values in a culture that prizes the pursuit of technological progress and market valuations”. A vital step in that direction would be a $5bn investment in laptops for the 11 million students who don’t have them.

    The problems of inequality begin at the tech giants themselves. Almost 20% of computer science graduates are Black or Latino but only 10% of employees of big tech companies are.
    Less than 3% of venture capital lands in the hands of Black or Latino entrepreneurs.

    If redistributing some of big tech’s gigantic wealth is one way to regain some dignity in the digital age, the other is to rein in some of the industry’s gigantic abuses. Data mining and the promotion of hate for profit are the two biggest problems. Khanna has drafted an Internet Bill of Rights to improve the situation.

    Throughout his book, he drops bits of evidence to suggest just how urgent it is to find a way to make the biggest companies behave better. “Algorithmic amplification” turns out to be one of the greatest evils of the modern age. After extracting huge amounts of data about users, Facebook and the other big platforms “push sensational and divisive content to susceptible users based on their profiles”.

    An internal discussion at Facebook revealed that “64% of all extremist group joins are due to our recommendations”. The explosion of the bizarre QAnon movement is one of Facebook’s most dubious accomplishments. In the three years before it finally banned it in 2020, “QAnon groups developed millions of followers as Facebook’s algorithm encouraged people to join based on their profiles. Twitter also recommended Qanon tweets”. The conspiracy theory was “actively recommended” on YouTube until 2019.

    And then there is the single greatest big tech crime against humanity. According to Muslim Advocates, a Washington-based civil rights group, the Buddhist junta in Myanmar used Facebook and WhatsApp to plan the mass murder of Rohingya Muslims. The United Nations found that Facebook played a “determining role” in events that led to the murder of at least 25,000 people and the displacement of 700,000.

    The world would indeed be a much better place if it adopted Khanna’s recommendations. But the question Khanna is too optimistic to ask may also be the most important one: have these companies already purchased too much control of the American government for any fundamental change to be possible?
    Dignity in a Digital Age: Making Tech Work For All Of Us is published in the US by Simon & Schuster


    Facebook leak reveals policies on restricting New York Post's Biden story

    Facebook moderators had to manually intervene to suppress a controversial New York Post story about Hunter Biden, according to leaked moderation guidelines seen by the Guardian.

    The document, which lays out in detail Facebook’s policies for dealing with misinformation on Facebook and Instagram, sheds new light on the process that led to the company’s decision to reduce the distribution of the story.

    “This story is eligible to be factchecked by Facebook’s third-party factchecking partners,” Facebook’s policy communications director, Andy Stone, said at the time. “In the meantime, we are reducing its distribution on our platform. This is part of our standard process to reduce the spread of misinformation. We temporarily reduce distribution pending factchecker review.”

    In fact, the documents show, the New York Post – like most major websites – was given special treatment as part of Facebook’s standard process. Stories can be “enqueued” for Facebook’s third-party factcheckers in one of two ways: either by being flagged by an AI, or by being manually added by one of the factcheckers themselves.

    Facebook’s AI looks for signals “including feedback from the community and disbelief comments” to automatically predict which posts might contain misinformation. “Predicted content is temporarily (for seven days) soft demoted in feed (at 50% strength) and enqueued to fact check product for review by [third-party factcheckers],” the document says.

    But some posts are not automatically demoted. Sites in the “Alexa 5K” list, “which includes content in the top 5,000 most popular internet sites”, are supposed to keep their distribution high, “under the assumption these are unlikely to be spreading misinformation”.

    Those guidelines can be manually overridden, however. “In some cases, we manually enqueue content … either with or without temporary demotion.
    We can do this on escalation and based on whether the content is eligible for fact-checking, related to an issue of importance, and has an external signal of falsity.” The US election is such an “issue of importance”.

    In a statement, a Facebook spokesperson said: “As our CEO Mark Zuckerberg testified to Congress earlier this week, we have been on heightened alert because of FBI intelligence about the potential for hack and leak operations meant to spread misinformation. Based on that risk, and in line with our existing policies and procedures, we made the decision to temporarily limit the content’s distribution while our factcheckers had a chance to review it. When that didn’t happen, we lifted the demotion.”

    The guidelines also reveal Facebook had prepared a “break-glass measure” for the US election, allowing its moderators to apply a set of policies for “repeatedly factchecked hoaxes” (RFH) to political content. “For a claim to be included as RFH, it must meet eligibility criteria (including falsity, virality and severity) and have content policy leadership approval.”

    The policy, which to the Guardian’s knowledge has not yet been applied, would lead to Facebook blocking viral falsehoods about the election without waiting for them to be debunked each time a new version appeared. A similar policy about Covid-19 hoaxes is enforced by “hard demoting the content, applying a custom inform treatment, and rejecting ads”.

    Facebook acts on only a few types of misinformation without involving third-party factcheckers, the documents reveal. Misinformation aimed at voter or census interference is removed outright “because of the severity of the harm to democratic systems”. Manipulated media, or “deepfakes”, are removed “because of the difficulty of ‘unseeing’ content so sophisticatedly edited”.
    And misinformation that “contributes to imminent violence or physical harm” is removed because of the severity of that imminent physical harm.

    The latter policy is not normally applied by ground-level moderation staff, but a special exception has been made for misinformation about Covid-19, the document says. Similar exceptions have been made for misinformation about polio in Pakistan and Afghanistan, and for misinformation about Ebola in the Democratic Republic of the Congo.

    Facebook also has a unique policy around vaccine hoaxes. “Where groups and pages spread these widely debunked hoaxes about vaccinations two or more times within 90 days, those groups and pages will be demoted in search results, all of their content will be demoted in news feed, they will be pulled from recommendation systems and type-ahead in search, and pages may have their access to fundraising tools revoked,” the document reads.

    “This policy is enforced by Facebook and not third-party factcheckers. Thus, our policy of not subjecting politician speech to factchecking does NOT apply here. If a politician shares hoaxes about vaccines we will enforce on that content.”
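    The leaked document describes the enqueue-and-demote pipeline as policy text, not code, but the quoted rules – AI prediction, a seven-day 50% soft demotion, the Alexa 5K exemption, and manual escalation – can be sketched as a small decision function. This is an illustrative reconstruction, not Facebook’s actual implementation: every function and field name below is hypothetical, and only the numbers (50% strength, seven days, top 5,000 sites) come from the document itself.

    ```python
    # Hypothetical sketch of the triage rules quoted in the leaked guidelines.
    SOFT_DEMOTION_STRENGTH = 0.5   # "soft demoted in feed (at 50% strength)"
    SOFT_DEMOTION_DAYS = 7         # "temporarily (for seven days)"

    def triage_post(predicted_misinfo: bool,
                    in_alexa_5k: bool,
                    manually_escalated: bool,
                    escalation_with_demotion: bool = True) -> dict:
        """Decide whether a post is enqueued for factchecking and/or demoted."""
        decision = {"enqueue": False, "demote_strength": 1.0, "demote_days": 0}

        if manually_escalated:
            # "In some cases, we manually enqueue content ... either with or
            # without temporary demotion" - this path overrides the Alexa 5K
            # exemption.
            decision["enqueue"] = True
            if escalation_with_demotion:
                decision["demote_strength"] = SOFT_DEMOTION_STRENGTH
                decision["demote_days"] = SOFT_DEMOTION_DAYS
        elif predicted_misinfo:
            # AI-flagged content is enqueued for third-party factcheckers.
            decision["enqueue"] = True
            if not in_alexa_5k:
                # Top-5,000 sites "keep their distribution high".
                decision["demote_strength"] = SOFT_DEMOTION_STRENGTH
                decision["demote_days"] = SOFT_DEMOTION_DAYS

        return decision

    # An AI-flagged post from a top-5,000 site keeps full distribution:
    print(triage_post(predicted_misinfo=True, in_alexa_5k=True,
                      manually_escalated=False))
    # -> {'enqueue': True, 'demote_strength': 1.0, 'demote_days': 0}

    # A manual escalation demotes it regardless of the exemption:
    print(triage_post(predicted_misinfo=False, in_alexa_5k=True,
                      manually_escalated=True))
    # -> {'enqueue': True, 'demote_strength': 0.5, 'demote_days': 7}
    ```

    On this reading of the document, a top-5,000 site flagged by the AI would be enqueued for factcheckers but keep full distribution – which is why reducing the New York Post story’s reach required the manual escalation path rather than the automatic one.
    
    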