More stories

  • New York signs parental control of ‘addictive’ social media feeds into law

    New York’s governor, Kathy Hochul, signed two bills into law on Thursday meant to mitigate negative impacts of social media on children, the latest action to address what critics say is a growing youth mental health crisis.

    The first bill will require that parents be able to stop their children from seeing posts suggested by a social network’s algorithm, a move to limit feeds Hochul argues are addictive. The second will put additional limitations on the collection, use, sharing and selling of personal data of anyone under the age of 18.

    “We can protect our kids. We can tell the companies that you are not allowed to do this, you don’t have a right to do this, that parents should have say over their children’s lives and their health, not you,” Hochul said at a bill-signing ceremony in Manhattan.

    Under the first bill, the Stop Addictive Feeds Exploitation (Safe) for Kids Act, apps like TikTok and Instagram would be limited for people under the age of 18 to posts from accounts they follow, rather than content recommended by the app. It would also block platforms from sending minors notifications on suggested posts between midnight and 6am.

    Both provisions could be turned off if a minor gets what the bill defines as “verifiable parental consent”.

    Thursday’s signing is just the first step in what is expected to be a lengthy process of rule-making, as the laws do not take effect immediately and social media companies are expected to challenge the new legislation.

    The New York state attorney general, Letitia James, is now tasked with crafting rules to determine mechanisms for verifying a user’s age and parental consent. After the rules are finalized, social media companies will have 180 days to implement changes to comply with the regulation.

    “Addictive feeds are getting our kids hooked on social media and hurting their mental health, and families are counting on us to help address this crisis,” James said at the ceremony. “The legislation signed by Governor Hochul today will make New York the national leader in addressing the youth mental health crisis and an example for other states to follow.”

    Social media companies and free speech advocates have pushed back against such legislation, with NetChoice – a tech industry trade group that includes Twitter/X and Meta – criticizing the New York laws as unconstitutional.

    “This is an assault on free speech and the open internet by the state of New York,” Carl Szabo, vice-president and general counsel of NetChoice, said in a statement. “New York has created a way for the government to track what sites people visit and their online activity by forcing websites to censor all content unless visitors provide an ID to verify their age.”

    New York’s new laws come after California’s governor, Gavin Newsom, announced plans to work with his state’s legislature on a bill to restrict smartphone usage for students during the school day, though he didn’t provide exact details on what the proposal would include. Newsom in 2019 signed a bill allowing school districts to limit or ban smartphones on campuses.

    A similar measure proposed in South Carolina this month would ban students from using cellphones during the school day across all public schools in the state. Most schools in the United Kingdom prohibit the use of smartphones during school hours.

    Although there hasn’t been broad legislation on the subject at the federal level, pressure from Washington is mounting. This week the US surgeon general called on Congress to put warning labels on social media platforms similar to those on cigarette packaging, citing mental health dangers for children using the sites.

  • Battle lines drawn as US states take on big tech with online child safety bills

    On 6 April, Maryland became the first state in the US to pass a “Kids Code” bill, which aims to prevent tech companies from collecting predatory data from children and using design features that could cause them harm. Vermont’s legislature held its final hearing before a full vote on its Kids Code bill on 11 April. The measures are the latest in a salvo of proposed policies that, in the absence of federal rules, have made state capitols a major battlefield in the war between parents and child advocates, who lament that there are too few protections for minors online, and Silicon Valley tech companies, who protest that the recommended restrictions would hobble both business and free speech.

    Known as Age-Appropriate Design Code or Kids Code bills, these measures call for special data safeguards for underage users online as well as blanket prohibitions on children under certain ages using social media. Maryland’s measure passed with unanimous votes in its house and senate.

    In all, nine states across the country – including Maryland, Vermont, Minnesota, Hawaii, Illinois, New Mexico, South Carolina and Nevada – have introduced and are now hashing out bills aimed at improving online child safety. Minnesota’s bill passed the house committee in February.

    Lawmakers in multiple states have accused lobbyists for tech firms of deception during public hearings. Tech companies have also spent a quarter of a million dollars lobbying against the Maryland bill, to no avail.

    Carl Szabo, vice-president and general counsel of the tech trade association NetChoice, spoke against the Maryland bill at a state senate finance committee meeting in mid-2023 as a “lifelong Maryland resident, parent, [spouse] of a child therapist”.

    Later in the hearing, a Maryland state senator asked: “Who are you, sir? … I don’t believe it was revealed at the introduction of your commentary that you work for NetChoice. All I heard was that you were here testifying as a dad. I didn’t hear you had a direct tie as an employee and representative of big tech.”

    For the past two years, technology giants have been directly lobbying in some states looking to pass online safety bills. In Maryland alone, tech giants racked up more than $243,000 in lobbying fees in 2023, the year the bill was introduced. Google spent $93,076, Amazon $88,886 and Apple $133,449 last year, according to state disclosure forms.

    Amazon, Apple, Google and Meta hired in-state lobbyists in Minnesota and sent employees to lobby directly in 2023. In 2022, the four companies also spent a combined $384,000 on lobbying in Minnesota, the highest total up to that point, according to the Minnesota campaign finance and public disclosure board.

    The bills require tech companies to undergo a series of steps aimed at safeguarding children’s experiences on their websites and assessing their “data protection impact”. Companies must configure all default privacy settings provided to children by online products to offer a high level of privacy, “unless the covered entity can demonstrate a compelling reason that a different setting is in the best interests of children”. Another requirement is to provide privacy information and terms of service in clear, understandable language for children, and to provide responsive tools to help children or their parents or guardians exercise their privacy rights and report concerns.

    The legislation leaves it to tech companies to determine whether users are underage but does not require verification by documents such as a driver’s license. Determining age could come from the data profiles companies hold on a user, or from self-declaration, where users must enter their birth date, known as “age-gating”.

    Critics argue the process of tech companies guessing a child’s age may lead to privacy invasions.

    “Generally, this is how it will work: to determine whether a user in a state is under a specific age and whether the adult verifying a minor over that designated age is truly that child’s parent or guardian, online services will need to conduct identity verification,” said a spokesperson for NetChoice.

    The bills’ supporters argue that users of social media should not be required to upload identity documents since the companies already know their age.

    “They’ve collected so many data points on users that they are advertising to kids because they know the user is a kid,” said a spokesperson for the advocacy group the Tech Oversight Project. “Social media companies’ business models are based on knowing who their users are.”

    NetChoice – and by extension, the tech industry – has several alternative proposals for improving child safety online. They include digital literacy and safety education in the classroom, so children can form “an understanding of healthy online practices in a classroom environment to better prepare them for modern challenges”.

    At a meeting in February to debate a proposed bill aimed at online child safety, NetChoice’s director, Amy Bos, argued that parental safety controls introduced by social media companies, and parental interventions such as taking away children’s phones when they have racked up too much screen time, were better courses of action than regulation. Asking parents to opt into protecting their children often fails to achieve wide adoption, though: Snapchat and Discord told the US Senate in February that fewer than 1% of under-18 users on either social network had parents who monitor their online behavior using parental controls.

    Bos also ardently argued that the proposed bill breached first amendment rights. Her testimony prompted a Vermont state senator to ask: “You said, ‘We represent eBay and Etsy.’ Why would you mention those before TikTok and X in relation to a bill about social media platforms and teenagers?”

    NetChoice is also promoting the bipartisan Invest in Child Safety Act, which it says is aimed at giving “cops the needed resources to put predators behind bars”, highlighting that less than 1% of reported child sexual abuse material (CSAM) violations are investigated by law enforcement due to a lack of resources and capacity.

    However, critics of NetChoice’s stance argue that more needs to be done proactively to prevent children from harm in the first place, and that tech companies should take responsibility for ensuring safety rather than placing it on the shoulders of parents and children.

    “Big Tech and NetChoice are mistaken if they think they’re still fooling anybody with this ‘look there not here’ act,” said Sacha Haworth, executive director of the Tech Oversight Project. “The latest list of alleged ‘solutions’ they propose is just another feint to avoid any responsibility and kick the can down the road while continuing to profit off our kids.”

    All the state bills have faced opposition from tech companies, in the form of strenuous statements or in-person lobbying by representatives of these firms.

    Other tech lobbyists needed similar prompting to Bos and Szabo to disclose their relevant tech patrons during their testimonies at hearings on child safety bills, if they notified legislators at all. A registered Amazon lobbyist who has spoken at two hearings on New Mexico’s version of the Kids Code bill said he represented the Albuquerque Hispano Chamber of Commerce and the New Mexico Hospitality Association; he never mentioned the e-commerce giant. A representative of another tech trade group did not disclose his organization’s backing from Meta – arguably the company that would be most affected by the bill’s stipulations – at the same Vermont hearing that saw Bos’s motives and affiliations questioned.

    The bills’ supporters say these speakers are deliberately concealing who they work for to better convince lawmakers of their messaging.

    “We see a clear and accelerating pattern of deception in anti-Kids Code lobbying,” said Haworth of the Tech Oversight Project, which supports the bills. “Big tech companies that profit billions a year off kids refuse to face outraged citizens and bereaved parents themselves in all these states, instead sending front-group lobbyists in their place to oppose this legislation.”

    NetChoice denied the accusations. In a statement, a spokesperson for the group said: “We are a technology trade association. The claim that we are trying to conceal our affiliation with the tech industry is ludicrous.”

    These state-level bills follow attempts in California to introduce regulations aimed at protecting children’s privacy online. The California Age-Appropriate Design Code Act is based on similar legislation from the UK that became law in October. The California bill, however, was blocked from being passed into law in late 2023 by a federal judge, who granted NetChoice a preliminary injunction citing potential threats to the first amendment. Rights groups such as the American Civil Liberties Union also opposed the bill. Supporters in other states say they have learned from the fight in California. They point out that language in the eight other states’ bills has been updated to address concerns raised in the Golden State.

    The online safety bills come amid increasing scrutiny of Meta’s products for their alleged roles in facilitating harm against children. Mark Zuckerberg, its CEO, was told he had “blood on his hands” at a January US Senate judiciary committee hearing on digital sexual exploitation. Zuckerberg turned and apologized to a group of assembled parents. In December, the New Mexico attorney general’s office filed a lawsuit against Meta for allegedly allowing its platforms to become a marketplace for child predators. The suit follows a 2023 Guardian investigation that revealed how child traffickers were using Meta platforms, including Instagram, to buy and sell children into sexual exploitation.

    “In time, as Meta’s scandals have piled up, their brand has become toxic to public policy debates,” said Jason Kint, CEO of Digital Content Next, a trade association focused on the digital content industry. “NetChoice leading with Apple, but then burying that Meta and TikTok are members in a hearing focused on social media harms sort of says it all.”

    A Meta spokesperson said the company wanted teens to have age-appropriate experiences online and that it has developed more than 30 child safety tools.

    “We support clear, consistent legislation that makes it simple for parents to manage their teens’ online experiences,” said the spokesperson. “While some laws align with solutions we support, we have been open about our concerns over state legislation that holds apps to different standards in different states. Instead, parents should approve their teen’s app downloads, and we support legislation that requires app stores to get parents’ approval whenever their teens under 16 download apps.”

  • ‘New text, same problems’: inside the fight over child online safety laws

    Sharp divisions between advocates for children’s safety online have emerged as a historic bill has gathered enough votes to pass in the US Senate. Amendments to the bill have appeased some former detractors, who now support the legislation; its fiercest critics, however, have become even more entrenched in their demands for changes.

    The Kids Online Safety Act (Kosa), introduced more than two years ago, reached 60 backers in the Senate in mid-February. A number of human rights groups still vehemently oppose the legislation, underscoring ongoing divisions among experts, lawmakers and advocates over how to keep young people safe online.

    “The Kids Online Safety Act is our best chance to address social media’s toxic business model, which has claimed far too many children’s lives and helped spur a mental health crisis,” said Josh Golin, the executive director of the children’s online safety group Fairplay.

    Opponents say alterations to the bill are not enough and that their concerns remain unchanged.

    “A one-size-fits-all approach to kids’ safety won’t keep kids safe,” said Aliya Bhatia, a policy analyst at the Center for Democracy and Technology. “This bill still rests on the premise that there is consensus around the types of content and design features that cause harm. There isn’t, and this belief will limit young people from exercising their agency and accessing the communities they need to online.”

    What is the Kids Online Safety Act?

    Sponsored by the Connecticut Democrat Richard Blumenthal and the Tennessee Republican Marsha Blackburn, Kosa would be the biggest change to American tech legislation in decades. The bill would require platforms like Instagram and TikTok to mitigate online dangers via design changes or opt-outs of algorithm-based recommendations, among other measures. Enforcement would demand much more fundamental modifications to social networks than current regulations require.

    When it was first introduced in 2022, Kosa prompted an open letter signed by more than 90 human rights organizations united in strong opposition. The groups warned the bill could be “weaponized” by conservative state attorneys general – who were charged with determining what content is harmful – to censor online resources and information for queer and trans youth or people seeking reproductive healthcare.

    In response to the critiques, Blumenthal amended the bill, notably shifting some enforcement decisions to the Federal Trade Commission rather than state attorneys general. At least seven LGBTQ+ advocacy organizations that previously spoke out against the bill – including Glaad, the Human Rights Campaign and the Trevor Project – dropped their opposition, citing the “considerable changes” to Kosa that “significantly mitigate the risk of it being misused to suppress LGBTQ+ resources or stifle young people’s access to online communities”.

    To the critics who now support Kosa, Blumenthal’s amendments solved the legislation’s major issues. However, the majority of those who signed the initial letter still oppose the bill, including the Center for Democracy and Technology, the Electronic Frontier Foundation, Fight for the Future and the ACLU.

    “New bill text, same problems,” said Adam Kovacevich, chief executive of the tech industry policy coalition the Chamber of Progress, which is supported by corporate partners including Airbnb, Amazon, Apple and Snap. “The changes don’t address a lot of its potential abuses.” Snap and X, formerly Twitter, have publicly supported Kosa.

    Is Kosa overly broad or a net good?

    Kovacevich said the latest changes fail to address two primary concerns with the legislation: that vague language will lead social media platforms to over-moderate to restrict their liability, and that allowing state attorneys general to enforce the legislation could enable targeted and politicized content restriction even with the federal government assuming more of the bill’s authority.

    The vague language targeted by groups that still oppose the bill is the “duty of care” provision, which states that social media firms have “a duty to act in the best interests of a minor that uses the platform’s products or services” – a goal subject to an enforcer’s interpretation. The legislation would also require platforms to mitigate harms by creating “safeguards for minors”, but with little direction as to what content would be deemed harmful; opponents argue the legislation is likely to encourage companies to filter content more aggressively, which could lead to unintended consequences.

    “Rather than protecting children, this could impact access to protected speech, causing a chilling effect for all users and incentivizing companies to filter content on topics that disproportionately impact marginalized communities,” said Prem M Trivedi, policy director at the Open Technology Institute, which opposes Kosa.

    Trivedi said he and other opponents fear that important but charged topics like gun violence and racial justice could be interpreted as having a negative impact on young users, and be filtered out by algorithms. Many have expressed concern that LGBTQ+-related topics would be targeted by conservative regulators, leading to fewer available resources for young users who rely on the internet to connect with their communities. Blackburn, the bill’s sponsor, has previously stated her intention to “protect minor children from the transgender [sic] in this culture and that influence”.

    An overarching concern among opponents of the bill is that it is too broad in scope, and that more targeted legislation would achieve similar goals with fewer unintended impacts, said Bhatia.

    “There is a belief that there are these magic content silver bullets that a company can apply, and that what stands between a company applying those tools and not applying those tools is legislation,” she said. “But those of us who study the impact of these content filters still have reservations about the bill.”

    Many with reservations acknowledge that the bill does feature broadly beneficial provisions, said Mohana Mukherjee, visiting faculty at George Washington University, who has studied technology’s impact on teenagers and young adults. She said the bill’s inclusion of a “Kosa council” – a coalition of stakeholders including parents, academic experts, health professionals and young social media users to provide advice on how best to implement the legislation – is groundbreaking.

    “It’s absolutely crucial to involve young adults and youth who are facing these problems, and to have their perspective on the legislation,” she said.

    Kosa’s uncertain future

    Kosa is likely to be voted on in the Senate this session, but other legislation targeting online harms threatens its momentum. A group of senators is increasingly pushing a related bill that would ban children under the age of 13 from social media. Its author, Brian Schatz, has requested a panel that would potentially couple the bill with Kosa. Blumenthal, the author of Kosa, has cautioned that such a move could slow the passage of both bills and spoke out against the markup.

    “We should move forward with the proposals that have the broadest support, but at the same time, have open minds about what may add value,” he said, according to the Washington Post. “This process is the art of addition not subtraction often … but we should make sure that we’re not undermining the base of support.”

    The bill’s future in the House is likewise unclear. Other bills with similar purported goals are floating around Congress, including the Invest in Child Safety Act – a bill introduced by the Democratic senator Ron Wyden of Oregon and the representatives Anna G Eshoo and Brian Fitzpatrick – which would invest more than $5bn into investigating online sexual abusers.

    With so much legislation swirling around the floors of Congress, it’s unclear when – or if – a vote will be taken on any of them. But experts agree that Congress has at least begun trying to bolster children’s online safety.

    “This is an emotionally fraught topic – there are urgent online safety issues and awful things that happen to our children at the intersection of the online world and the offline world,” said Trivedi. “In an election year, there are heightened pressures on everyone to demonstrate forward movement on issues like this.”

  • US surgeon general issues advisory on ‘profound’ risks of child social media use

    Social media use by children and teenagers can pose a “profound risk of harm” to their mental health and wellbeing, the US surgeon general is warning.

    In a new advisory released on Tuesday, Dr Vivek Murthy calls on tech companies, policymakers and parents to take “immediate action to protect kids now”. He says that in the absence of robust independent research it is impossible to know whether social media is safe for children and adolescents.

    “The bottom line is we do not have enough evidence to conclude that social media is, in fact, sufficiently safe for our kids. And that’s really important for parents to know,” Murthy told the Associated Press.

    The 25-page advisory, produced as part of the surgeon general’s ongoing investigation into what he sees as a full-blown youth mental health crisis, points to the ubiquitous use of social media by young people. Up to 95% of 13- to 17-year-old Americans use a social media platform, and more than a third say they do so “almost constantly”.

    The report shows how current controls on access by children are not working. While most sites apply a minimum age requirement of 13, almost 40% of eight- to 12-year-olds are regular users.

    The surgeon general’s warning came as the White House put out its own notice on Tuesday about what it called the “unprecedented youth mental health crisis” in the US. The number of children and adolescents dealing with depression and anxiety had risen almost 30% in recent years, with social media a clear factor.

    The White House is forming a new taskforce on kids and online health and safety. Its job would be to identify the potential harms posed by online platforms and to come up with a tool kit, designed to combat the problems, for tech companies developing new products.

    Concern over the effects of popular online apps on children has been building in recent years. In 2021 a whistleblower, Frances Haugen, exposed that Facebook and Instagram knew they were directing young users towards harmful content, including material that promoted anorexia – and that they were expressly targeting children under 13.

    One internal study from Facebook’s parent company, Meta, reported that 14% of teenage girls said their suicidal thoughts intensified when they used Instagram, while 17% of teen girls said it exacerbated eating disorders.

    In the wake of Haugen’s revelations, Meta sidelined plans to launch a kids’ version of Instagram.

    Murthy told the AP: “I recognize technology companies have taken steps to try to make their platforms healthier and safer, but it’s simply not enough.”

    His advisory underlines the critical nature of adolescence in the development of the human brain, which leaves kids aged 10 to 19 highly vulnerable to peer pressure. It is within these years that an individual’s sense of self-worth is formed, and it is when mental health challenges such as depression often emerge.

    The report says that social media use is predictive of a decline in satisfaction with life, especially for girls aged 11 to 13 and boys aged 14 and 15.

    Accessing apps does have positive benefits, Murthy says, including providing community and connection with others who share similar interests or identity. That can be particularly valuable for LGBTQ+ youth, who can easily find each other.

    Seven out of 10 adolescent girls of colour said they found positive and affirming content this way. Across all user groups, most American adolescents report that social media helps them feel more accepted and supported through tough times.

    But such positive indicators are currently overshadowed by risk factors, the surgeon general warns. A long-term study of 12- to 15-year-olds found that adolescents who spend more than three hours each day on social media have twice the risk of mental health challenges, including depression and anxiety. Figures from 2021 suggest that the current average in that age group is 3.5 hours a day.

    Excessive social media use, which can result in compulsive or uncontrollable behaviour, can lead to sleep problems, which in turn can alter the neurological development of the adolescent brain. Depressive symptoms and suicidal thoughts can ensue, the advisory says.

    Murthy is calling on tech companies to be more open with the public and to put the health and safety of their young users first when creating new products. He also has words for parents.

    “For every family, it may not be feasible to stop your child from using social media or there may be benefit,” he told the AP. “But drawing boundaries around the use of social media in your child’s life so there are times and spaces that are protected, that are tech-free, that can be really helpful.”

  • Iowa state senate votes to allow children to work longer hours and serve alcohol

    In a pre-dawn session on Tuesday, the Iowa state senate voted to allow children to work longer hours and serve alcohol, the latest move by Republican-controlled statehouses to combat a labor shortage by loosening child labor laws.

    The Iowa bill would expand the number of hours that children under 16 can work from four to six a day, allow minors to work in previously prohibited industries if they are part of a training program, and allow 16- and 17-year-olds to serve alcohol with a parent’s permission.

    It passed the state senate by a vote of 32-17, with two Republicans joining every Democrat in opposition. The vote took place just before 5am, after protests and delay tactics by Democrats.

    “We do know slavery existed in the past but one place it doesn’t exist, that’s in this bill,” said Adrian Dickey, the Republican responsible for shepherding the bill to passage, according to the Iowa Capital Dispatch. “It simply is providing our youth an opportunity to earn and learn, at the same timeframe as his classmates do, while participating in sports and other fine arts.”

    Democrats and labor advocates decried the bill, which they say will endanger children by allowing them to work in dangerous fields such as roofing, excavation and demolition.

    “No Iowa teenager should be working in America’s deadliest jobs,” said Zach Wahls, the senate minority leader. “Iowa Republican politicians want to solve the … workforce crisis on the literal backs of children.”

    Labor unions have held protests against the bill. Charlie Wishman, president of the Iowa Federation of Labor, said efforts to loosen child labor laws around the US were “a lazy way of dealing with the fact that certain states don’t have enough workers”.

    In March, the Arkansas governor, Sarah Huckabee Sanders, signed legislation to roll back child labor protections. Lawmakers in Ohio, Minnesota and Wisconsin are also considering loosening regulations.

    “Can we let kids be kids?” Wishman asked. “It was about 120 years ago when we decided that we wanted to make sure that kids spent the majority of their time in school and not in a workplace, and especially not in a dangerous workplace.”

    Wishman cited research that has found serious adverse effects for teenagers working more than 20 hours a week.

    “These legislators don’t care about that because it’s not their kids,” he said. “This law is intended for somebody else’s kids.”

  • The Guardian view on US book bans: time to fight back | Editorial

    “A book is a loaded gun in the house next door,” warns a character in Fahrenheit 451, Ray Bradbury’s dystopian vision of an America where books are considered so dangerous they must be incinerated. The novel appeared 70 years ago, in the aftermath of Nazi book burnings and amid McCarthyism and Soviet ideological repression. But the urge to ban books has resurged with a vengeance, with the American Library Association (ALA) recording a doubling of censorship attempts in 2022, to 1,269 across 32 states: the highest rate for decades. Pen America, which champions freedom of expression, tallied more than 2,500 cases in the last school year.

    These attempts are not merely more numerous but are also broadening and deepening. The decisions of school boards and districts take place in the context of politicians grasping electoral advantage and punitive yet often vaguely worded state laws on education – such as the Florida governor, Ron DeSantis’s, Stop-Woke Act. At least 10 states have passed legislation increasing parental power over library stock, or limiting students’ access. In place of spontaneous challenges to single titles come challenges to multiple titles, organised by campaign groups such as Moms for Liberty. The ALA says that 40% of attempts last year targeted 100 books or more.

    Not only schools but now community libraries too are under scrutiny. The efforts are also increasingly punitive. Missouri Republicans this week voted to defund all of the state’s public libraries after librarians challenged a bill that has removed more than 300 books and that threatens educators “providing sexually explicit material” with imprisonment or a fine of up to $2,000. A library in Michigan was defunded last year; another in Texas is under threat this week.

    These challenges are overwhelmingly from the right. And while liberal parents have sought to remove titles such as Adventures of Huckleberry Finn from mandatory reading lists over their approach to race, this time the demand from parents is not merely that their child should not have to read particular titles – but that no one’s child should be able to unless they buy it privately.

    Pen America notes: “It is the books that have long fought for a place on the shelf that are being targeted. Books by authors of color, by LGBTQ+ authors, by women. Books about racism, sexuality, gender, history.” They include works by celebrated children’s writers such as Judy Blume, literary greats including Toni Morrison and Margaret Atwood – and even the comic picture book I Need a New Butt. Librarians are attacked as “paedophiles” over sex education titles or those depicting same-sex relationships. In part, this is a backlash against efforts to diversify reading matter in schools and libraries. The pandemic also gave parents greater insight into what their children are studying and fostered a “parental rights” movement rooted in opposition to mask mandates.

    The primary cost is to children denied appropriately selected books that could be life-affirming and life-changing – even, perhaps, life-saving. The chilling effect of challenges makes librarians and teachers second-guess their choices and cut book purchases. In two Florida counties, officials this year ordered teachers to cover up or remove classroom libraries entirely, pending a review of the texts – reportedly leaving weeping children begging: “Please don’t take my books.” But parents, librarians and communities are waking up to the threat, and are organising and educating to counter it. Books are the building blocks of civilisation. They must be defended.

  • in

    Biden urged an investigation into how guns are peddled to kids. Will it stop the ads?

    Last year the Georgia-based gun manufacturer Daniel Defense tweeted an image of a young child with a rifle – about the same size as the child himself – in his lap. “Train up a child in the way he should go, and when he is old, he will not depart from it,” the caption read.

    The post came just eight days before an 18-year-old shot and killed 19 students and two teachers in Uvalde, Texas – using a weapon made by Daniel Defense.

    The tweet was swiftly decried by Democratic lawmakers and gun violence prevention groups, who argued that such ads are incendiary and promote violence among the nation’s youngest residents, for whom gun violence is now the leading cause of death.

    The ways that children are exposed to firearms through television and video games have been studied for decades. Online advertisements became a central part of this discussion last year, around the same time as the Daniel Defense tweet, when WEE1, a Chicago-based gunmaker, used images of two cartoon skulls with pacifiers in their mouths and targets in their eyes to market their JR-15, a .22 rifle that is “geared toward smaller enthusiasts”, according to the company’s website.

    Now, Joe Biden is calling on the Federal Trade Commission (FTC) to examine the ways gun manufacturers market their weapons to Americans, especially children under 18.

    It’s one of several executive actions the White House announced Tuesday aimed at expanding last year’s bipartisan Safer Communities Act, a sweeping gun control law that strengthened background checks, helped states put in place red flag laws and boosted mental health programs. Here’s a look at what the order does – and doesn’t – do.

    How are gun companies advertising to kids?

    Advertisements for firearms are not as ubiquitous as ones for cars or snack foods, and those that do exist are mostly found in places such as gun magazines. Most of these ads are aimed at adults because people under 18 cannot legally buy a gun.

    Advertisements explicitly meant to appeal to children are rare, but invocations of militarism, patriotism and gender stereotypes that gun manufacturers have long leaned on are being aimed at younger audiences above the age of 18, according to a 2022 Senate joint economic committee report.

    Gun manufacturers and retailers are also relying on paid gun social media influencers to put their wares in front of new audiences, as a way to skirt tech conglomerates Meta and Google’s ban on ads by gun companies. In July, California became the first state in the US to ban gun manufacturers from marketing their weapons to minors.

    What’s in Biden’s executive order?

    Biden’s executive action will result in a report that analyzes the gun industry’s broader marketing practices. In his announcement of the order, Biden emphasized examining advertisements aimed at youth and marketing that incorporates military imagery and themes.

    Before the president tapped the FTC to look into gun ads, Democratic senator Ed Markey of Massachusetts introduced the Protecting Kids from Gun Marketing Act, which would require the FTC to ban gun companies from advertising to kids. Under the bill, gun companies would be prohibited from using cartoon characters, memes, images of children holding guns, or firearms designed for children in advertising, and from offering branded merchandise to kids.

    “There are restrictions on cigarette and tobacco advertising, on alcohol advertising, and on cannabis advertising, yet the firearms industry is not subject to any specific restrictions or limitations on their marketing practices,” said a press release announcing the bill.

    Markey cited WEE1’s marketing for their JR-15 as an example of the type of ads the new policy would potentially prohibit.

    What comes next?

    Because Republicans currently control the House, and Democrats only have a slim majority in the Senate, any legislation restricting the way gunmakers advertise is unlikely to reach Biden’s desk. Markey’s proposed legislation does, however, put pressure on tech companies to keep gun ads off their platforms.

    It is unclear if a report resulting from Biden’s executive order, if published, will lead to new guidelines for the gun industry and its advertising practices. The FTC did not respond to requests for comment.

    Adhering to Biden’s request means the FTC would, for the first time, analyze and report on the way gun manufacturers advertise. The agency currently has guidelines on marketing aimed at minors and closely monitors online ads for privacy violations. However, the agency does not have any explicit guardrails to inform the ways gunmakers and adjacent companies and organizations, including youth shooting sport programs, market to young audiences.

  • in

    Historic bill aimed at keeping California children digitally safe approved

    Legislation will require companies to install guardrails for those under age 18 and use higher privacy settings.

    California lawmakers passed first-of-its-kind legislation on Monday designed to improve the online safety and privacy protections for children.

    The bill, the California Age-Appropriate Design Code Act, will require firms such as TikTok, Instagram, and YouTube to install guardrails for users under the age of 18, including defaulting to higher privacy settings for minors and refraining from collecting location data for those users.

    It also requires companies to analyze their algorithms and products to determine how they may affect young users, assessing whether they are designed to be addictive or cause additional harm to children.

    Children’s safety advocates have applauded the bill, which passed in a vote of 33 to 0, saying similar federal legislation is needed to protect young users. The bill is “a huge step forward toward creating the internet that children and families deserve”, said Josh Golin, executive director at advocacy group Fairplay.

    “For far too long, tech companies have treated their egregious privacy and safety issues as a PR problem to be addressed only through vague promises, obfuscations, and delays,” he said. “Now, tech platforms will be required to prioritize young Californians’ interests and wellbeing ahead of reckless growth and shareholder dividends.”

    More details to come …