More stories

  • Facebook whistleblower to take her story before the US Senate

    Frances Haugen, who came forward accusing the company of putting profit over safety, will testify in Washington on Tuesday.

    Dan Milmo and Kari Paul, Mon 4 Oct 2021 23.00 EDT (last modified Mon 4 Oct 2021 23.23 EDT)

    A former Facebook employee who has accused the company of putting profit over safety will take her damning accusations to Washington on Tuesday when she testifies to US senators.

    Frances Haugen, 37, came forward on Sunday as the whistleblower behind a series of damaging reports in the Wall Street Journal that have heaped further political pressure on the tech giant. Haugen told the news program 60 Minutes that Facebook’s priority was making money over doing what was good for the public.

    “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money,” she said.

    Haugen is expected to tell lawmakers that Facebook faces little oversight, and will urge Congress to take action. “As long as Facebook is operating in the dark, it is accountable to no one. And it will continue to make choices that go against the common good,” she wrote in her written testimony.

    Haugen was called to testify before the US Senate’s commerce subcommittee on the risks the company’s products pose to children. Lawmakers called the hearing in response to a Wall Street Journal story based on Haugen’s documents that showed Facebook was aware of the damage its Instagram app was causing to teen mental health and wellbeing.
    One survey in the leaked research estimated that 30% of teenage girls felt Instagram made dissatisfaction with their body worse.

    She is expected to compare Facebook to big tobacco, which resisted telling the public that smoking damaged consumers’ health. “When we realized tobacco companies were hiding the harms it caused, the government took action. When we figured out cars were safer with seatbelts, the government took action,” Haugen wrote. “I implore you to do the same here.”

    Haugen will argue that Facebook’s closed design means it has no oversight, even from its own oversight board, a regulatory group that was formed in 2020 to make decisions independent of Facebook’s corporate leadership.

    “This inability to see into the actual systems of Facebook and confirm that Facebook’s systems work like they say is like the Department of Transportation regulating cars by watching them drive down the highway,” she wrote in her testimony. “Imagine if no regulator could ride in a car, pump up its wheels, crash test a car, or even know that seatbelts could exist.”

    Senator Richard Blumenthal, the Democrat whose committee is holding Tuesday’s hearing, told the Washington Post’s Technology 202 newsletter that lawmakers will also ask Haugen about her remarks on the 2020 presidential election.

    Haugen alleged on 60 Minutes that following Joe Biden’s win in the election, Facebook prematurely reinstated old algorithms that valued engagement over all else, a move that she said contributed to the 6 January attack on the Capitol. “As soon as the election was over, they turned them back off or they changed the settings back to what they were before, to prioritize growth over safety. And that really feels like a betrayal of democracy to me,” she said.

    Following the election, Facebook also disbanded its civic integrity team, a group that worked on issues related to political elections worldwide and on which Haugen worked.
    Facebook has said the team’s functions were distributed across the company.

    Haugen joined Facebook in 2019 as a product manager on the civic integrity team after spending more than a decade working in the tech industry, including at Pinterest and Google.

    Tuesday’s hearing is the second in mere weeks to focus on Facebook’s impact on children. Last week, lawmakers grilled Antigone Davis, Facebook’s global head of safety, and accused the company of “routinely” putting growth above children’s safety. Facebook has aggressively contested the accusations.

    On Friday, the company’s vice-president of policy and public affairs, Nick Clegg, wrote to Facebook employees ahead of Haugen’s public appearance. “Social media has had a big impact on society in recent years, and Facebook is often a place where much of this debate plays out,” he said. “But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”

    On Monday, Facebook asked a federal judge to throw out a revised antitrust lawsuit brought by the Federal Trade Commission (FTC) that seeks to force the company to sell Instagram and WhatsApp.

  • Facebook ‘tearing our societies apart’: key excerpts from a whistleblower

    Frances Haugen tells US news show why she decided to reveal the inside story about the social networking firm.

    Dan Milmo, Global technology editor, Mon 4 Oct 2021 08.33 EDT (last modified Mon 4 Oct 2021 10.30 EDT)

    Frances Haugen’s interview with the US news programme 60 Minutes contained a litany of damning statements about Facebook. Haugen, a former Facebook employee who had joined the company to help it combat misinformation, told the CBS show the tech firm prioritised profit over safety and was “tearing our societies apart”.

    Haugen will testify in Washington on Tuesday, as political pressure builds on Facebook. Here are some of the key excerpts from Haugen’s interview.

    Choosing profit over the public good

    Haugen’s most cutting words echoed what is becoming a regular refrain from politicians on both sides of the Atlantic: that Facebook puts profit above the wellbeing of its users and the public. “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money.”

    She also accused Facebook of endangering public safety by reversing changes to its algorithm once the 2020 presidential election was over, allowing misinformation to spread on the platform again. “And as soon as the election was over, they turned them [the safety systems] back off or they changed the settings back to what they were before, to prioritise growth over safety. And that really feels like a betrayal of democracy to me.”

    Facebook’s approach to safety compared with others

    In a 15-year career as a tech professional, Haugen, 37, has worked for companies including Google and Pinterest, but she said Facebook had the worst approach to restricting harmful content.
    She said: “I’ve seen a bunch of social networks and it was substantially worse at Facebook than anything I’d seen before.” Referring to Mark Zuckerberg, Facebook’s founder and chief executive, she said: “I have a lot of empathy for Mark. And Mark has never set out to make a hateful platform. But he has allowed choices to be made where the side-effects of those choices are that hateful, polarising content gets more distribution and more reach.”

    Instagram and mental health

    The document leak that had the greatest impact was a series of research slides that showed Facebook’s Instagram app was damaging the mental health and wellbeing of some teenage users, with 30% of teenage girls feeling that it made dissatisfaction with their body worse.

    She said: “And what’s super tragic is Facebook’s own research says, as these young women begin to consume this eating disorder content, they get more and more depressed. And it actually makes them use the app more. And so, they end up in this feedback cycle where they hate their bodies more and more. Facebook’s own research says it is not just that Instagram is dangerous for teenagers, that it harms teenagers, it’s that it is distinctly worse than other forms of social media.”

    Facebook has described the Wall Street Journal’s reporting on the slides as a “mischaracterisation” of its research.

    Why Haugen leaked the documents

    Haugen said “person after person” had attempted to tackle Facebook’s problems but had been ground down. “Imagine you know what’s going on inside of Facebook and you know no one on the outside knows.
    I knew what my future looked like if I continued to stay inside of Facebook, which is person after person after person has tackled this inside of Facebook and ground themselves to the ground.”

    Having joined the company in 2019, Haugen said she decided to act this year and started copying tens of thousands of documents from Facebook’s internal system, which she believed show that Facebook is not, despite public comments to the contrary, making significant progress in combating online hate and misinformation. “At some point in 2021, I realised, ‘OK, I’m gonna have to do this in a systemic way, and I have to get out enough that no one can question that this is real.’”

    Facebook and violence

    Haugen said the company had contributed to ethnic violence, a reference to Burma. In 2018, following the massacre of Rohingya Muslims by the military, Facebook admitted that its platform had been used to “foment division and incite offline violence” relating to the country. Speaking on 60 Minutes, Haugen said: “When we live in an information environment that is full of angry, hateful, polarising content it erodes our civic trust, it erodes our faith in each other, it erodes our ability to want to care for each other. The version of Facebook that exists today is tearing our societies apart and causing ethnic violence around the world.”

    Facebook and the Washington riot

    The 6 January riot, when crowds of rightwing protesters stormed the Capitol, came after Facebook disbanded the civic integrity team of which Haugen was a member. The team, which focused on issues linked to elections around the world, was dispersed to other Facebook units following the US presidential election. “They told us: ‘We’re dissolving Civic Integrity.’ Like, they basically said: ‘Oh good, we made it through the election. There wasn’t riots. We can get rid of Civic Integrity now.’ Fast-forward a couple months, we got the insurrection.
    And when they got rid of Civic Integrity, it was the moment where I was like, ‘I don’t trust that they’re willing to actually invest what needs to be invested to keep Facebook from being dangerous.’”

    The 2018 algorithm change

    Facebook changed the algorithm on its news feed – Facebook’s central feature, which supplies users with a customised feed of content such as friends’ photos and news stories – to prioritise content that increased user engagement. Haugen said this made divisive content more prominent.

    “One of the consequences of how Facebook is picking out that content today is it is optimising for content that gets engagement, or reaction. But its own research is showing that content that is hateful, that is divisive, that is polarising – it’s easier to inspire people to anger than it is to other emotions.” She added: “Facebook has realised that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money.”

    Haugen said European political parties contacted Facebook to say that the news feed change was forcing them to take more extreme political positions in order to win users’ attention. Describing politicians’ concerns, she said: “You are forcing us to take positions that we don’t like, that we know are bad for society. We know if we don’t take those positions, we won’t win in the marketplace of social media.”

    In a statement to 60 Minutes, Facebook said: “Every day our teams have to balance protecting the right of billions of people to express themselves openly with the need to keep our platform a safe and positive place. We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true.
    If any research had identified an exact solution to these complex challenges, the tech industry, governments, and society would have solved them a long time ago.”

  • Facebook whistleblower to claim company contributed to Capitol attack

    Former employee is set to air her claims and reveal her identity in an interview airing Sunday night on CBS’s 60 Minutes.

    Edward Helmore, Sun 3 Oct 2021 13.13 EDT (last modified Sun 3 Oct 2021 13.15 EDT)

    A whistleblower at Facebook will say that thousands of pages of internal company research she turned over to federal regulators prove the social media giant is deceptively claiming effectiveness in its efforts to eradicate hate and misinformation, and that it contributed to the January 6 attack on the Capitol in Washington DC.

    The former employee is set to air her claims and reveal her identity in an interview airing Sunday night on CBS’s 60 Minutes, ahead of a scheduled appearance at a Senate hearing on Tuesday.

    In an internal 1,500-word memo titled Our position on Polarization and Election, sent out on Friday, Facebook’s vice-president of global affairs, Nick Clegg, acknowledged that the whistleblower would accuse the company of contributing to the 6 January Capitol riot and called the claims “misleading”. The memo was first reported by the New York Times.

    The 6 January insurrection was carried out by a pro-Trump mob that sought to disrupt the election of Joe Biden as president. The violence and chaos of the attack sent shockwaves throughout the US, and the rest of the world, and saw scores of people injured and five die.

    Clegg, a former UK deputy prime minister, said in his memo that Facebook had “developed industry-leading tools to remove hateful content and reduce the distribution of problematic content. As a result, the prevalence of hate speech on our platform is now down to about 0.05%.”

    He said that many things had contributed to America’s divisive politics. “The rise of polarization has been the subject of swathes of serious academic research in recent years. In truth, there isn’t a great deal of consensus.
    But what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization,” Clegg wrote.

    The memo comes two weeks after Facebook issued a statement on its corporate website hitting back against a series of critical articles in the Wall Street Journal.

  • Automakers could be required to install technology to detect drunk drivers

    Cars would be prevented from starting if the operator is impaired – but some critics worry about who could access the data.

    Edward Helmore in New York, Fri 17 Sep 2021 06.00 EDT (last modified Fri 17 Sep 2021 06.02 EDT)

    Car manufacturers would be required to include technology to monitor whether US drivers are impaired by alcohol, and to disable the vehicle from operating, under a proposal contained in the infrastructure bill awaiting a Senate vote. While advocates say the proposal could save thousands of lives, the move has some critics worried it could cross ethical boundaries and raise civil rights issues.

    The proposal is to develop technology that can passively monitor a driver to detect impairment, or passively detect blood alcohol levels, and prevent operation of the vehicle if impairment is detected or if levels are too high. The National Highway Traffic Safety Administration (NHTSA) estimates that drunk driving is involved in 10,000 deaths a year in the US, one person every 52 minutes, and US police departments arrest about 1 million people a year for alcohol-impaired driving.

    According to the Automotive Coalition for Traffic Safety, which represents the world’s leading automakers, the first product equipped with the new alcohol detection technology will be available for open licensing in commercial vehicles later this year. The technology will automatically detect when a driver is intoxicated, with a blood alcohol concentration (BAC) at or above 0.08% – the legal limit in all 50 states except Utah – and then immobilize the car.

    Partly funded by the federal government through the NHTSA, the technology centers on sensors that could measure alcohol in the air around the driver, or a sensor in the start button or steering wheel to measure blood alcohol content in the capillaries of a driver’s finger. But the NHTSA has warned that any monitoring system will have to be “seamless, accurate and
    precise, and unobtrusive to the sober driver”.

    If the proposal in the infrastructure bill becomes law, it will mandate that “advanced drunk and impaired driving prevention technology must be standard equipment in all new passenger motor vehicles”. Within three years of its becoming law, the Department of Transportation would be required to sign off on accepted technology. Carmakers would have a further three years to comply. The transportation secretary, however, can extend the timeframe of approval for up to a decade if requirements are “reasonable, practicable, and appropriate”.

    According to reports, the agency is keen to avoid a repeat of seatbelt technology in the 1970s that was designed to prevent a car from starting unless seatbelts were buckled but frequently malfunctioned, stranding drivers.

    The technology emerged after a panel of auto industry representatives and safety advocates convened by Mothers Against Drunk Driving (Madd) was formed to encourage and support the development of passive technology to prevent drunk driving. Citing a study by the Insurance Institute for Highway Safety, Madd estimates that more than 9,000 lives a year could be saved if drunk-driving prevention tech were installed on all new cars.

    The Center for Automotive Research (Car) has said that the challenge for the auto industry is to come up with something that is affordable and functions efficiently enough to be installed in millions of new vehicles. “I don’t think that will be as easy as people might think,” Car’s chief executive, Carla Bailo, told NBC News.
    An impaired-driver sensor, Bailo added, was likely to be expensive and would have to be especially effective because “people will try to cheat”.

    But the technology also raises ethical questions about surveillance, even if that surveillance is in service of reducing a social problem that costs $44bn in economic costs and $210bn in comprehensive societal costs, according to a 2010 study.

    Madd does not support punitive measures, such as the breath-testing devices attached to an ignition interlock that some convicted drunk drivers are required to use before starting their vehicles, but advocates instead that anti-drunk-driving measures should be integrated into vehicle systems.

    “If we have the cure, why wouldn’t we use it?” said Stephanie Manning, Madd’s chief government affairs officer. “Victims and survivors of drunk-driving tell us this technology is part of their healing, and that’s what they have been telling members of Congress.”

    Manning points out that the auto industry has invested billions in autonomous vehicles, and alcohol detection technology is just one type of driver-distraction technology that the Department of Transportation needs to consider. “The technology we favor is the one that stops impaired drivers from using their vehicles as weapons on the road,” Manning said. “The industry has the technology that knows what a drunk driver looks like. The question is, at what point does the car need to take over to prevent somebody from being killed or seriously injured?”

    But the technology raises serious ethical and data privacy questions, including how to ensure collected data doesn’t end up in the hands of law enforcement or insurance companies. Wolf Schäfer, professor of technology and society at Stony Brook University, said: “It’s a policy question that has ethical implications. Cars are increasingly pre-programmed and that brings up questions of responsibility for the actions of the car. Is it the programmer? The manufacturer?
    The person who bought the car?

    “Many people accept that one shouldn’t drive drunk and if you do, you commit an infraction. But in this situation the car becomes supervisor of your conduct. So ethics-wise, one could get away with that. But should this be reported to authorities presents grave ethical problems,” Schäfer said. “The privacy issues are real because sensors collect data, and what happens to this data is a question that’s all over the place, not just with cars.”

    The American Civil Liberties Union said it was “still evaluating the proposal”.
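    The decision the article describes reduces to a simple threshold rule: if a passively measured BAC is at or above the legal limit, the vehicle does not start. As a purely illustrative sketch (the real systems are proprietary sensor hardware and firmware, not application code; every name below is hypothetical, and only the 0.08% figure, and Utah's stricter 0.05% limit, come from the article):

    ```python
    # Hypothetical sketch of the immobilization logic described above.
    # The 0.08% legal limit (0.05% in Utah) is from the article; the
    # function and constant names are invented for illustration only.

    LEGAL_BAC_LIMIT = 0.08  # blood alcohol concentration, in percent
    UTAH_BAC_LIMIT = 0.05   # Utah's stricter limit, for comparison

    def allow_vehicle_start(measured_bac: float,
                            limit: float = LEGAL_BAC_LIMIT) -> bool:
        """Return True if the passively measured BAC is below the limit.

        A real system would also need to handle sensor error margins and
        the NHTSA requirement to be "unobtrusive to the sober driver",
        i.e. avoid false positives that strand sober drivers.
        """
        return measured_bac < limit

    print(allow_vehicle_start(0.00))                        # sober: True
    print(allow_vehicle_start(0.08))                        # at the limit: False
    print(allow_vehicle_start(0.06, limit=UTAH_BAC_LIMIT))  # over Utah's limit: False
    ```

    The engineering difficulty the article points to lies not in this comparison but in making the measurement itself passive, accurate, and cheat-resistant.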

  • The State Versus the People: Who Controls the Internet?

    India’s government and Twitter have been fighting a legal battle over compliance with domestic laws. In June 2021, the tech giant failed to make key local appointments required under India’s new information technology rules. Twitter has more than 22 million users in India and was recently categorized as a “significant intermediary” alongside Facebook and WhatsApp.

    Soon after, the companies were required to appoint three Indian officers to a mandatory compliance and grievance redress mechanism. The government’s aim is to make social media companies more accountable to local law enforcement agencies. These officers would help local authorities access data from servers. While WhatsApp and Facebook complied, Twitter did not.


    In retaliation, the Indian government petitioned for and then stripped Twitter of its “safe harbor” immunity, which protects platforms against liability for the content posted by users. Non-compliance has made Twitter vulnerable. Ever since, it has faced four major lawsuits from some of India’s top statutory bodies, including the National Commission for Protection of Child Rights.

    During an appearance in the Delhi High Court, Twitter denied having any intent to contravene government regulations. In response, one official characterized Twitter’s position as a “prevarication” that “cocks a snook at the digital sovereignty of this country.”

    Today’s Daily Devil’s Dictionary definition:

    Sovereignty:

    A term used to foreignize a rival while claiming moral supremacy and projecting oneself as an insider

    Contextual Note

    India is not alone in expecting tech giants like Twitter and Facebook to be subject to local laws. States across the world use various data localization laws to assert their sovereignty. Over the years, companies have been asked to place their servers within national jurisdictions. Alternatively, governments have claimed copies of all data and unhindered access to servers located abroad. According to data collected by the European Centre for International Political Economy, the number of such laws more than doubled between 2010 and 2015, from 40 to over 80, and has been increasing steadily since.

    Data localization trends reflect a certain sense of insecurity among states about diluting their digital sovereignty. In an article on digital sovereignty and international conflicts, James A. Lewis defines it as “the right of a state to govern its network to serve its national interests, the most important of which are security, privacy and commerce.”

    Nevertheless, the irony in this scenario is hard to miss. Little prevents information from circulating through the largest technology markets, because the companies themselves remain open to influence. A government that gains access to servers located in India or elsewhere still cannot exclude the enterprises that operate them. At the very least, the countries hosting the servers pose a constant security threat to the data stored within their jurisdiction, and these host states hold significant leverage.


    Making matters worse, most global servers are concentrated in a select few powerful countries. Four of the world’s five largest server facilities are located in the US. Although seemingly borderless, the internet depends extensively on physical infrastructure, which leaves it vulnerable to interference and makes data localization ineffective and hard to enforce. For instance, since 2016, Forbes has identified multiple instances in which WhatsApp shared data with the US government. It may be that no law can provide states with the supremacy they seek.

    These issues are not new. Experts have been dealing with the redundancy of data localization laws for some time. Lewis warns of an impending Balkanization of the internet because of such laws. He posits that each state seeking its own internet may lead to no internet at all.

    However, the struggle between omnipotent states and omnipresent corporations could have deeper impacts on where global power lies. Apart from the practical problem of granting each state supremacy, the fundamental question relates to our current idea of sovereignty and its bearing on our times.  

    Historical Note

    The modern idea of sovereignty was arguably formulated in Europe with the Treaty of Westphalia in 1648. The treaty brought an end to more than a century of continuous religious violence in the Holy Roman Empire. It was the culmination of various failed attempts that built up to the idea of recognizing a supreme authority within a territory that took on the character of a nation-state. However, sovereignty is a much wider concept with many variants.

    Modern sovereignty is envisaged as the political power held by one authority. Historically, that has not always been the case. Overlapping power was distributed between the monarchs and the Catholic Church throughout most of the Middle Ages. Monarchs dealt with the temporal prerogatives of society. Here too, authority was highly distributed among nobles and vassals responsible for maintaining the monarch’s troops. The church dealt with spiritual considerations. It stood as the conscience keeper of both monarchy and society, which permitted the church to play an active role in the secular authorities’ decisions. The church held large tracts of land and actively participated in wars.

    Thus, the Westphalian notion of “supreme authority over a territory” was established by overriding powerful historical forces in the process, including some that were even more powerful than the monarchs.

    The Treaty of Westphalia stripped the churches of their decision-making power. Pope Innocent X immediately expressed his reasoned disagreement along with his taste for provocative adjectives, calling the treaty “null, void, invalid, iniquitous, unjust, damnable, reprobate, inane, and empty of meaning and effect for all time.” 


    Present-day corporations share some of the features of the mighty nobles and ecclesiastics of the past. At $2.1 trillion, Apple has a market capitalization greater than the GDP of 96% of the world’s countries, while Amazon surpasses 92% of country GDPs at $1.7 trillion. Undoubtedly, they command the digital realm with no real challenge to their authority and would be unlikely to accept a challenge by believers in modern sovereignty.

    Modern sovereignty was established on the basis of a territorial element, against large trans-border forces like religion. Unsurprisingly, it failed in its motives beyond curbing immediate violence. Today, the forces that states face are similar in their reach. The current scenario concerning data localization laws bears an uncanny resemblance to the Peace of Augsburg of 1555.

    At Augsburg, political forces sought to curb conflict by nationalizing religion. States agreed on the principle of cuius regio, eius religio, meaning that the prince’s religion would be their realm’s religion. This created power blocs consisting of countries practicing similar religions. This provided the historical logic behind the devastating Thirty Years’ War. Today, the Balkanization of the internet may result in blocs of states with similar laws, leading to potentially disastrous outcomes for internet freedom.

    The idea of sovereignty originated from morality and not mere capability. It may be legitimate to ask whether states, rather than people, are the correct party to claim rights to privacy and data protection. States have been equally guilty of exploiting access to private information. Recently, a number of prominent political figures, journalists, human rights activists and business executives in more than 50 countries appear to have been targeted using the Pegasus spyware supplied by the Israeli cybersecurity firm NSO Group. Adding data privacy to the laundry list of state prerogatives, only because it concerns supposedly foreign elements, may be an act of overreach.


    Instead, it may be up to individuals across the world to decide who they sign a contract with to secure a connection to the gods: the satellites in the sky. Just as monarchs, unlike the church, lacked the power to legislate the terms of salvation, the state today may be incapable of regulating what does not belong to its realm: the internet. At best, it could play the role of facilitating a contract between the people and the tech giants.

    Hence, adapting to the changing times may require revising our concept of sovereignty. It will not be easy. The French philosopher Jacques Maritain, in his book “Man and the State,” traced the significant circumscription of sovereignty in the wake of World War II. States have, at least theoretically, united and often consensually given up elements of their supreme authority on subjects such as human rights and climate change. Supranational arrangements like the EU are testaments to the changing times and ideas.

    The rise of a new digital realm may provide us with a chance to forge this change. It could help us question whether issues like data privacy and protection should be subject to a state’s consent or whether they concern the people, who might want to define their own personal sovereignty. 

    *[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news. Read more of The Daily Devil’s Dictionary on Fair Observer.]

    The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

  • ‘They should be worried’: how FTC chair Lina Khan plans to tackle big tech

    Within weeks of her appointment to the commission, Facebook and Amazon asked that she be recused from antitrust investigations.

    Kari Paul, Sun 15 Aug 2021 01.00 EDT (last modified Sun 15 Aug 2021 01.01 EDT)

    Lina Khan has some of the biggest companies in the world shaking in their boots. The 32-year-old antitrust scholar and law professor in June became the youngest person in history, and the most progressive in more than a decade, to be appointed as chair of the Federal Trade Commission (FTC).

    Khan’s appointment places her at the helm of the federal agency charged with enforcing antitrust law just as it is poised to tackle the giants of the technology industry after years of unchecked power. And it’s clear that big tech isn’t happy about it. Within weeks of Khan’s appointment, both Facebook and Amazon requested that Khan be recused from the FTC’s antitrust investigations into their companies, arguing that her intense criticism of them in the past meant she would “not be a neutral and impartial evaluator” of antitrust issues.

    Khan has forcefully argued for the need to rein in powerful firms like Amazon, Facebook, Apple and Google, developing an innovative antitrust argument that has revolutionized the way we think about regulating monopolies. “She understands how these companies are harming workers, innovation and ultimately democracy and is committed to taking them head on,” said Stacy Mitchell, co-director of the Institute for Local Self-Reliance, an antimonopoly advocacy organization. “This is a gamechanger.”

    ‘A meteoric rise’

    Before Khan took it on, antitrust law enforcement in the US had atrophied.
For decades, it had functioned under the “consumer welfare standard”, which meant that the government would only take action against a company for anti-competitive practices if consumers were hurt by increased prices. But by the time Khan was a student at Williams and then Yale Law School, tech behemoths had built de facto monopolies by giving away their products for free or at such low prices that no one else could compete.In the early years of the tech boom it was widely assumed that the industry would essentially regulate itself, according to Rebecca Allensworth, a professor of antitrust law at Vanderbilt University. That Yahoo’s popularity gave way to Google and Myspace to Facebook appeared to be proof that “competition in tech was intensive without any government involvement”, she said. “But we have seen how that has really changed, as has our understanding of how these companies can abuse the market.” Slipping through the cracks of these old antitrust standards, tech companies amassed unchecked power, acquiring competitors and scooping up billions of customers. In 2020, Apple became the first American company to be valued at $2tn. 
That same year, Amazon eclipsed $1tn, joining Microsoft, at $1.6tn, and Google parent Alphabet at $1tn.In her now-famous 2017 Yale Law Journal article, Khan argued that the rise of these mega companies proved that modern American antitrust law was broken, and that the traditional yardsticks by which regulators determine monopolies need to be re-examined for the digital age.Keeping prices low has allowed Amazon to amass a large share of the market, giving it a disproportionate impact on the economy, stifling competition and further perpetuating monopoly, she argued.“The long-term interests of consumers include product quality, variety and innovation – factors best promoted through both a robust competitive process and open markets,” she wrote.She also investigated mergers and examined the impact the resulting tech monopolies have on product quality, suppliers and company conduct. Even if these companies’ practices resulted in some benefits for consumers, they were harmful to markets and democracy at large, she said.The immediate impact of her thesis was undeniable, with the New York Times announcing Khan had “singlehandedly reframed decades of monopoly law”. Politico called her “a leader of a new school of antitrust thought”. Christopher Leslie, a professor of antitrust law at University of California, Irvine, characterized Khan’s rise in recent years as “meteoric”.“It’s unprecedented to have somebody ascend to such an important leadership role in antitrust enforcement so soon after graduating from law school,” he said. “But it’s also unprecedented to have somebody make such a significant impact on antitrust public policy debates so quickly after graduating.”Big tech in the hot seatIn 2019, Khan brought her new approach to antitrust to Congress, serving as counsel to the US House judiciary committee’s subcommittee on antitrust, commercial, and administrative law. 
Spearheading the committee’s investigation into digital markets, she played a large role in the publication of its landmark report: a 451-page treatise on how companies including Google and Amazon abuse their market power for their own benefit.Khan also served as legal director at the political advocacy group Open Markets Institute and taught antimonopoly law at Columbia until her appointment to the FTC in 2021.Khan’s appointment marked a break from the “revolving door” between the FTC and the private sector, in which people with years of experience defending companies in Silicon Valley become regulators. Her new role also comes at a time when reining in big tech is one of the only issues that unites a deeply divided Congress.The Massachusetts senator Elizabeth Warren said Khan’s leadership of the FTC was “a huge opportunity to make big, structural change” to fight monopolies and Senator Amy Klobuchar praised Khan as “a pioneer in competition policy” who “will bring a critical perspective to the FTC”. The Republican Ted Cruz told Khan he “looked forward” to working with her on these issues.Khan has her critics. The former Republican senator Orrin Hatch has condemned her thesis as “hipster antitrust”. Mike Lee of Utah said she “lacks the experience necessary” for the FTC and that her views on US antitrust laws were “wildly out of step with a prudent approach to the law”.But her appointment coincides with a growing drive among lawmakers to take on the major tech companies, Allensworth said. “Politicians, small businesses and the academic establishment are clamoring for it,” she added.Shortly after naming Khan as chair, Joe Biden signed an executive order calling on federal regulators to prioritize action promoting competition in the American economy – including in the tech space. “Let me be very clear: capitalism without competition isn’t capitalism. It’s exploitation,” he said regarding the order, which contained 72 initiatives to limit corporate power. 
Biden asked the FTC to better vet mergers and acquisitions and to establish rules on surveillance. He also called for easing of restrictions on repairing tech devices and data collection on consumers.‘A different set of rules’In her first hearing as chair in July, Khan indicated that she was ready to get started, saying the US needs “a different set of rules”.She cited bad mergers – in the past she had criticized Facebook’s acquisitions of Instagram, Giphy and WhatsApp as anti-competitive – as potentially fueling large tech monopolies: “In hindsight there’s a growing sense that some of those merger reviews were a missed opportunity.”One of Khan’s first tasks as chair is likely to be rewriting an FTC antitrust complaint against Facebook that was dismissed in June after the agency failed to demonstrate that the tech giant maintains a monopoly.Meanwhile, Apple and others are set to face FTC scrutiny over repair policies that restrict third-party companies from fixing devices. The agency voted unanimously in July to ramp up enforcement of the right to repair.The attempts by Amazon and Facebook to force Khan’s recusal are signs that big tech won’t go down without a fight. But critics say these efforts amount to intimidation tactics and not much more. Khan does not have any conflicts of interest under federal ethics laws, which typically apply to financial investments or employment history, and the requests are not likely to go far.This is “a PR move”, said Allensworth. “She has made a lot of very public, extremely influential arguments about exactly how tech suppresses competition and now she’s the chairperson of the largest and most important federal agency to do with competition,” she said.“They should be worried,” she added.TopicsUS politicsFacebookAppleGoogleAmazonfeaturesReuse this content More

  • in

    YouTube suspends Rand Paul for video claiming masks ‘don’t prevent infection’

Video platform suspends Republican senator, in latest move against a public figure who has spread Covid disinformation

    Maya Yang
    Wed 11 Aug 2021 13.26 EDT

    YouTube suspended the Republican senator Rand Paul on Tuesday for seven days over a video claiming that masks are ineffective against Covid-19. It is the latest move against a prominent public figure who has spread disinformation about ways to protect against the virus or about the vaccines developed to fight it.

    “We removed content from Senator Paul’s channel for including claims that masks are ineffective in preventing the contraction or transmission of Covid-19, in accordance with our Covid-19 medical misinformation policies,” a YouTube spokesperson said. “This resulted in a first strike on the channel, which means it can’t upload content for a week, per our longstanding three strikes policy.”

    In the removed video, Paul cast doubt on the effectiveness of masks, saying: “Most of the masks you get over the counter don’t work. They don’t prevent infection.” He added: “Trying to shape human behavior isn’t the same as following the actual science, which tells us that cloth masks don’t work.”

    Many public health experts have advised using masks, and the Centers for Disease Control and Prevention (CDC) first recommended that the public wear cloth masks in April 2020. More recently, the CDC advised vaccinated people to wear masks indoors in areas experiencing Delta-variant surges.

    Last month Paul clashed with Dr Anthony Fauci, the top US infectious disease expert, during a heated discussion about the virus. At one point Fauci said: “Senator Paul, you do not know what you are talking about, quite frankly.”

    Responding to the YouTube ban, the senator said on Twitter: “A badge of honor … leftwing cretins at Youtube banning me for 7 days for a video that quotes 2 peer reviewed articles saying cloth masks don’t work.”

    Last week, YouTube removed a Newsmax interview with Paul in which he said that “there’s no value” in wearing masks.

    Paul’s current strike will be lifted from his account after 90 days if there are no further violations. A second strike within those 90 days would result in a two-week suspension, followed by a permanent ban if his account accrues a third strike.

    YouTube’s suspension of Paul’s account was issued a day after Twitter suspended the Republican representative Marjorie Taylor Greene’s account for one week for violating that platform’s Covid-19 misinformation rules.