More stories

  • Total Solar Eclipse Safety: How to Watch Without Hurting Your Eyes

    A number of case studies published after recent total solar eclipses highlight the importance of safe viewing.

    A young woman visited New York Eye & Ear Infirmary of Mount Sinai Hospital shortly after the eclipse of Aug. 21, 2017. She told Dr. Avnish Deobhakta, an ophthalmologist, that she had a black area in her vision, and then drew a crescent shape for him on a piece of paper.

    When Dr. Deobhakta examined her eyes, he was astonished. He saw a burn on her retina that was exactly the same shape. It was “almost like a branding,” he said.

    She had looked at the sun during the eclipse without any protection. The burn was an image of the sun’s corona, its halo-like outer rim.

    With every eclipse, ophthalmologists see patients who looked at the sun and complain afterward that their vision is distorted: they see small black spots, and their eyes are watery and sensitive to light. Usually the symptoms resolve, although it may take several weeks to a year.

    But the woman’s retinal burns, which Dr. Deobhakta and colleagues described in a medical case write-up, would not heal. Her retina was permanently scarred, a sign of the severity of the injuries that can follow looking at an eclipse without proper precautions.

    With the coming eclipse in April, ophthalmologists advise people to be careful and not to assume that short glances at the sun are safe. Damage can occur, they say, in less than a minute.

  • How to Check in on Your Emotional Well-Being

    We know we should get a physical exam every year; we have annual reviews at work; some couples even do periodic relationship audits. And yet many of us don’t regularly check in with our emotional health — though it is arguably the most important contributor to overall well-being. The New York Times talked to experts […]

  • Yale Apologizes for Its Connections to Slavery

    The university also issued a historical study and announced steps to address this legacy, including new support for public education in New Haven, Conn.

    Yale University on Friday issued a formal apology for its early leaders’ involvement with slavery, accompanied by the release of a detailed history of the university’s connections to slavery and a list of what it said were initial steps to make some amends.

    The announcement came more than three years after Yale announced a major investigation into the university’s connections to slavery, the slave trade and abolition, amid intense national conversations about racial justice set off by the murder of George Floyd. And it frames what the school’s leaders say will be a continuing commitment to repair.

    “We recognize our university’s historical role in and associations with slavery, as well as the labor, the experiences and the contributions of enslaved people to our university’s history, and we apologize for the ways that Yale’s leaders, over the course of our early history, participated in slavery,” the university’s president, Peter Salovey, and the senior board trustee, Josh Bekenstein, said in a message to the university community.

    “Acknowledging and apologizing for this history are only part of the path forward,” they continued. The university is also creating new programs to fund the training of public schoolteachers for its home city, New Haven, Conn., whose population is predominantly Black. And Yale will expand previously announced research partnerships with historically Black colleges and universities across the country, with a “significant new investment” to be announced in coming weeks.

    Unlike Harvard, which in 2022 committed $100 million to a “Legacy of Slavery Fund,” Yale did not announce an amount for all its initiatives.

    David W. Blight, the Yale historian who led the historical research, said in an interview that the purpose of the effort was not “to cast ugly stones at anybody,” but to present the university’s history honestly and unflinchingly.

  • Chinese Influence Campaign Pushes Disunity Before U.S. Election, Study Says

    A long-running network of accounts, known as Spamouflage, is using A.I.-generated images to amplify negative narratives involving the presidential race.

    A Chinese influence campaign that has tried for years to boost Beijing’s interests is now using artificial intelligence and a network of social media accounts to amplify American discontent and division ahead of the U.S. presidential election, according to a new report.

    The campaign, known as Spamouflage, hopes to breed disenchantment among voters by maligning the United States as rife with urban decay, homelessness, fentanyl abuse, gun violence and crumbling infrastructure, according to the report, which was published on Thursday by the Institute for Strategic Dialogue, a nonprofit research organization in London. An added aim, the report said, is to convince international audiences that the United States is in a state of chaos.

    Artificially generated images, some of them also edited with tools like Photoshop, have pushed the idea that the November vote will damage and potentially destroy the country. One post on X about “American partisan divisions” had an image showing President Biden and former President Donald J. Trump aggressively crossing fiery spears under the text “INFIGHTING INTENSIFIES.” Other images featured the two men facing off, cracks in the White House or the Statue of Liberty, and terminology like “CIVIL WAR,” “INTERNAL STRIFE” and “THE COLLAPSE OF AMERICAN DEMOCRACY.”

  • Report Says 17 Percent of Gamers Identify as L.G.B.T.Q.

    In its first survey of the video game industry, the advocacy group GLAAD determined that few console games include queer characters.

    Less than 2 percent of console video games include L.G.B.T.Q. characters or story lines even though 17 percent of gamers are queer, according to GLAAD’s first survey on the industry.

    The survey, whose results were released on Tuesday, said a majority of respondents had experienced some form of harassment when playing online. But it also found that many queer gamers saw virtual worlds as an escape in states where recent legislation has targeted L.G.B.T.Q. people. Seventy-five percent of queer respondents from those states said they could express themselves in games in a way they did not feel comfortable doing in reality.

    “That is a statistic that should pull on everyone’s heartstrings,” said Blair Durkee, who led the advocacy group’s survey alongside partners from Nielsen, the data and marketing firm. “The statistic is driven largely by young gamers. Gaming is a lifeline for them.”

    GLAAD has produced a similar breakdown of queer representation in television since 1996. Its latest report found that 10.6 percent of series regulars in prime-time scripted shows identified as L.G.B.T.Q., which researchers said helped put their video game study in perspective.

    Tristan Marra, GLAAD’s head of research and reports, said that there were nearly 1,500 participants in the video game survey and that researchers used public information to meticulously search for inclusive content in games that are available on the PlayStation, Xbox and Switch digital libraries.

    “I am deep into gaming and still have a hard time naming” L.G.B.T.Q. characters, said Raffy Regulus, a founder of NYC Gaymers, which hosts game nights in the city. Regulus pointed to Ellie from The Last of Us and Venture from Overwatch 2 as some recent examples.

  • To Make Blockbuster Shows, Museums Are Turning to Focus Groups

    To shape its new show about life in the Roman Army, the British Museum put questions to members of the public. Other institutions are using the same technique.

    Last January, 14 members of the British public entered a wood-paneled room in the back of the British Museum for a secret presentation. They were there to learn about an exhibition still in development, which the museum wanted kept under wraps.

    Onscreen in a prerecorded video, the museum’s curator of Roman and Iron Age coins, Richard Addy, outlined his plans for a show about life in the Roman Empire’s army. The exhibition would take visitors from a soldier’s recruitment to his retirement, he said, and would feature hundreds of objects, including the armor that warriors wore on the battlefield and letters they wrote home to their families.

    When the presentation was finished, a staff member from Morris Hargreaves McIntyre, a company that runs focus groups, asked the museumgoers for their thoughts on aspects of Addy’s plan, including which types of artifacts the museum should show, how they should be arranged and even how much entry should cost.

    Most of the participants seemed excited, according to an anonymized report for the British Museum. Several attendees said they especially liked that the exhibition would focus on the stories of individual soldiers, including everyday subjects like their food and pay.

    Other participants were more critical. “It comes across a little dry,” one said. “It would be quite boring for a kid,” said another.

    Sometimes the attendees’ feedback could be “a shock to the curatorial ego,” said Stuart Frost, the British Museum official who oversees focus groups.

    A Roman long shield, or scutum, left, and the central part from a legionary shield, right, used to protect the user’s hand and provide a punching weapon. (Andrew Testa for The New York Times)

  • Crisis at Gaza’s Main Hospital, and More

    The New York Times Audio app is home to journalism and storytelling, and provides news, depth and serendipity. If you haven’t already, download it here — available to Times news subscribers on iOS — and sign up for our weekly newsletter.

    The Headlines brings you the biggest stories of the day from the Times journalists who are covering them, all in about 10 minutes.

    Intense, close-quarters combat is taking place near Al-Shifa Hospital, the largest in the Gaza Strip. (Khader Al Zanoun/Agence France-Presse — Getty Images)

    On today’s episode:

    • Crisis Heightens at Gaza’s Main Hospital Amid Dispute Over Desperately Needed Fuel
    • Tim Scott Suspends 2024 Campaign, After Sunny Message Failed to Resonate
    • Can’t Think, Can’t Remember: More Americans Say They’re in a Cognitive Fog

    Emily Lang

  • Does Information Affect Our Beliefs?

    New studies on social media’s influence tell a complicated story.

    It was the social-science equivalent of Barbenheimer weekend: four blockbuster academic papers, published in two of the world’s leading journals on the same day. Written by elite researchers from universities across the United States, the papers in Nature and Science each examined different aspects of one of the most compelling public-policy issues of our time: how social media is shaping our knowledge, beliefs and behaviors.

    Relying on data collected from hundreds of millions of Facebook users over several months, the researchers found that, unsurprisingly, the platform and its algorithms wielded considerable influence over what information people saw, how much time they spent scrolling and tapping online, and their knowledge about news events. Facebook also tended to show users information from sources they already agreed with, creating political “filter bubbles” that reinforced people’s worldviews, and was a vector for misinformation, primarily for politically conservative users.

    But the biggest news came from what the studies didn’t find: despite Facebook’s influence on the spread of information, there was no evidence that the platform had a significant effect on people’s underlying beliefs, or on levels of political polarization.

    These are just the latest findings to suggest that the relationship between the information we consume and the beliefs we hold is far more complex than is commonly understood.

    ‘Filter bubbles’ and democracy

    Sometimes the dangerous effects of social media are clear. In 2018, when I went to Sri Lanka to report on anti-Muslim pogroms, I found that Facebook’s newsfeed had been a vector for the rumors that formed a pretext for vigilante violence, and that WhatsApp groups had become platforms for organizing and carrying out the actual attacks. In Brazil last January, supporters of former President Jair Bolsonaro used social media to spread false claims that fraud had cost him the election, and then turned to WhatsApp and Telegram groups to plan a mob attack on federal buildings in the capital, Brasília. It was a similar playbook to the one used in the United States on Jan. 6, 2021, when supporters of Donald Trump stormed the Capitol.

    But aside from discrete events like these, there have also been concerns that social media, and particularly the algorithms used to suggest content to users, might be contributing to the more general spread of misinformation and polarization.

    The theory, roughly, goes something like this: unlike in the past, when most people got their information from the same few mainstream sources, social media now makes it possible for people to filter news around their own interests and biases. As a result, they mostly share and see stories from people on their own side of the political spectrum. That “filter bubble” of information supposedly exposes users to increasingly skewed versions of reality, undermining consensus and reducing their understanding of people on the opposing side.

    The theory gained mainstream attention after Trump was elected in 2016. “The ‘Filter Bubble’ Explains Why Trump Won and You Didn’t See It Coming,” announced a New York Magazine article a few days after the election. “Your Echo Chamber Is Destroying Democracy,” Wired claimed a few weeks later.

    Changing information doesn’t change minds

    But without rigorous testing, it has been hard to figure out whether the filter bubble effect was real. The four new studies are the first in a series of 16 peer-reviewed papers that arose from a collaboration between Meta, the company that owns Facebook and Instagram, and a group of researchers from universities including Princeton, Dartmouth, the University of Pennsylvania and Stanford.

    Meta gave the researchers unprecedented access during the three-month period before the 2020 U.S. election, allowing them to analyze data from more than 200 million users and also to conduct randomized controlled experiments on large groups of users who agreed to participate. It’s worth noting that the social media giant spent $20 million on work from NORC at the University of Chicago (previously the National Opinion Research Center), a nonpartisan research organization that helped collect some of the data. And while Meta did not pay the researchers itself, some of its employees worked with the academics, and a few of the authors had received funding from the company in the past. But the researchers took steps to protect the independence of their work, including preregistering their research questions, and Meta could veto only requests that would violate users’ privacy.

    The studies, taken together, suggest that there is evidence for the first part of the “filter bubble” theory: Facebook users did tend to see posts from like-minded sources, and there were high degrees of “ideological segregation,” with little overlap between what liberal and conservative users saw, clicked and shared. Most misinformation was concentrated in a conservative corner of the social network, making right-wing users far more likely to encounter political lies on the platform.

    “I think it’s a matter of supply and demand,” said Sandra González-Bailón, the lead author on the paper that studied misinformation. Facebook users skew conservative, making the potential market for partisan misinformation larger on the right. And online curation, amplified by algorithms that prioritize the most emotive content, could reinforce those market effects, she added.

    When it came to the second part of the theory — that this filtered content would shape people’s beliefs and worldviews, often in harmful ways — the papers found little support. One experiment deliberately reduced content from like-minded sources, so that users saw more varied information, but found no effect on polarization or political attitudes. Removing the algorithm’s influence on people’s feeds, so that they just saw content in chronological order, “did not significantly alter levels of issue polarization, affective polarization, political knowledge, or other key attitudes,” the researchers found. Nor did removing content shared by other users.

    Algorithms have been in lawmakers’ cross hairs for years, but many of the arguments for regulating them have presumed that they have real-world influence. This research complicates that narrative. But it also has implications that are far broader than social media itself, reaching some of the core assumptions around how we form our beliefs and political views.

    Brendan Nyhan, who researches political misperceptions and was a lead author of one of the studies, said the results were striking because they suggested an even looser link between information and beliefs than had been shown in previous research. “From the area that I do my research in, the finding that has emerged as the field has developed is that factual information often changes people’s factual views, but those changes don’t always translate into different attitudes,” he said. But the new studies suggested an even weaker relationship. “We’re seeing null effects on both factual views and attitudes.”

    As a journalist, I confess a certain personal investment in the idea that presenting people with information will affect their beliefs and decisions. But if that is not true, then the potential effects would reach beyond my own profession. If new information does not change beliefs or political support, for instance, then that will affect not just voters’ view of the world, but their ability to hold democratic leaders to account.