For the first time since the Apollo era, humans are preparing not just to visit the Moon, but to live and work there for weeks, months – and eventually years.
But what would it really be like to spend an extended period on the lunar surface? The answer is exhilarating – and brutally unforgiving. An exciting new era of deep-space exploration is opening up. The US Artemis programme aims to set up an outpost on the Moon’s surface. It marks a fundamental shift in how we explore space.
Rather than just leaving “flags and footprints” as the Apollo missions did, Nasa wants to establish a sustained human presence on the Moon, beginning at the lunar South Pole.
The programme unfolds in stages. In 2022, the Artemis I mission successfully tested the Space Launch System (SLS) rocket and Orion spacecraft as an integrated system on an uncrewed mission around the Moon.
On April 1, 2026, Nasa launched Artemis II, a ten-day mission carrying four astronauts around the Moon.
The four Artemis II astronauts arrived at Kennedy Space Center in Florida on March 27, 2026 to begin final preparations for launch. NASA/Jim Ross

As Nasa's first crewed flight of Orion and SLS, Artemis II is a pivotal mission designed to verify that life-support systems, navigation, thermal protection and deep-space operations all function safely with humans onboard.
Before astronauts can live on the Moon, the journey there must be proven reliable.
Beyond these early missions, Nasa’s long-term vision extends far beyond a single landing. Nasa plans to spend US$20 billion (£15 billion) on a lunar surface base, intended to support repeated and progressively longer surface stays. This is designed to teach us how to operate sustainably beyond Earth – knowledge that will ultimately feed forward to future human missions to Mars, the horizon goal.
Living on the Moon will challenge every organ system in the human body. The lunar environment exposes astronauts to a unique space exposome – the combined set of physical, chemical, biological and psychological stressors encountered beyond Earth.
Regular exercise will be critical for staying healthy on the Moon. Here, Japanese astronaut Satoshi Furukawa works out on the International Space Station. Nasa

These include reduced gravity (about one-sixth of Earth's), chronic exposure to cosmic radiation, extreme temperature swings, toxic lunar dust, isolation, disrupted sleep-wake cycles, and prolonged confinement.
Unlike astronauts in low-Earth orbit, lunar crews operate largely outside Earth’s protective magnetic field. This increases exposure to space radiation, which can damage DNA, disrupt immune function and affect the brain and cardiovascular system in subtle but potentially serious ways.
Reduced gravity also fundamentally alters how blood, oxygen and fluids move around the body. Microgravity can disrupt how blood, oxygen and glucose are delivered to the brain, potentially increasing vulnerability to neurological and vascular dysfunction over time.
This figure was modified with permission. The physiology of survival: Space.

To properly understand these risks, we need to look beyond individual organs and instead consider the space integrome – the way that the brain, heart, blood vessels, muscles, bones, immune system and metabolism interact as an integrated whole under space conditions. A small disturbance in one system sends ripples through others.
One of the most challenging aspects is that many space-related physiological changes develop insidiously. Astronauts may feel well while complications simmer beneath the surface, only becoming apparent months or even years later.
That is why Nasa places such emphasis on long-term physiological monitoring and human risk mitigation in its Artemis science strategy.
Read more: Nasa plans to have a permanent base on the Moon by 2030 – how it can be done
The encouraging news is that humans are remarkably adaptable. The challenge is guiding that adaptation in safe and sustainable ways. Space countermeasures are the tools used to reduce risk and preserve astronaut health.
Exercise remains the cornerstone. On the International Space Station, astronauts spend around two hours per day exercising to protect muscle mass, bone density and cardiovascular function. On the Moon, however, exercise systems must be redesigned for partial gravity, where familiar Earth-based loading no longer applies.
Lunar regolith (soil) could be used to create structures that protect habitats from radiation and micrometeoroids. Foster + Partners

Nutrition is another powerful countermeasure. Diet influences bone health, muscle maintenance, immune resilience and even how the body responds to radiation.
Personalised nutrition strategies, tailored to individual physiology rather than a “one-size-fits-all” menu, are likely to become increasingly important during long lunar missions.
Artificial gravity is also being explored. Short-radius centrifuges could expose astronauts to brief periods of increased gravitational loading, potentially helping stabilise cardiovascular and neurovascular systems. While still experimental, this approach may prove valuable for future surface missions.
Vegetables grown in a lunar base greenhouse could enhance astronaut nutrition. Nasa

Radiation protection will rely on multiple layers of defence: habitat shielding – potentially using structures made of lunar soil – early warning systems for solar storms, and operational strategies that limit exposure during high-risk periods.
Crucially, countermeasures should be proactive rather than reactive. Continuous physiological monitoring, wearable sensors and advanced data analytics may allow mission teams to detect early warning signs and intervene before small problems become mission-limiting ones.
Spending extended time on the Moon will be awe-inspiring. Imagine watching Earth hang motionless above a stark, silent horizon, or working under a sky that never turns blue.
A lunar base would teach humans how to operate sustainably beyond Earth. RegoLight, visualisation: Liquifer Systems Group, 2018

But it will also be demanding, uncomfortable and unforgiving. The Moon is not just a destination – it is a test of our biology.
If we can learn how to keep humans healthy, resilient and productive on the lunar surface, we take a decisive step toward becoming a truly spacefaring species. Artemis shows that exploration is no longer about brief heroics.
It is about sustainability, adaptability and understanding ourselves as deeply as the worlds we seek to explore.
In learning how to live on the Moon, we may ultimately learn as much about life on Earth as we do about our future beyond it.
D.M.B. is the outgoing Chair of the Life Sciences Working Group and member of the Human Spaceflight and Exploration Science Advisory Committee to ESA. He is a current member of the ESA-HRE-Biology Panel and Space Exploration Advisory Committees to the UK and Swedish National Space Agencies. He is also affiliated to Bexorg, Inc. (USA) focused on the technological development of novel biomarkers of cerebral bioenergetic function in humans. He is supported by a Royal Society Wolfson Research Fellowship (Grant No. WM170007).
In the aftermath of the latest bout of extreme rainfall across New Zealand's upper North Island, there were some familiar scenes.
Submerged pastures. Silt carried by swollen rivers and piled against bridges. Floodwaters surrounding homes whose owners were forced to flee.
As we count the toll of these events, which have wrought billions of dollars in damage over the past few years alone, there are inevitably questions about the hidden hand of climate change.
But just as pressing is another question: just how much worse might they become in a potentially much warmer world, decades from now?
Our newly published research, exploring a range of warming scenarios and drawing on the Ministry for the Environment’s latest climate projections, provides some useful answers.
The results point to a future where extreme rainfall is both more intense and more frequent across much of the country – with some simulated storms bearing the hallmarks of weather disasters from Aotearoa’s past.
It has long been understood that, as global temperatures rise, the atmosphere can hold more water vapour, increasing the likelihood of heavier rainfall during storms.
This broad pattern is borne out in the climate model simulations we examined, which show the most extreme rainfall events are likely to intensify over the coming decades.
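The physical reasoning above can be put in rough numbers. A minimal back-of-envelope sketch (not the authors' model): assuming the commonly cited Clausius-Clapeyron figure of roughly 7% more atmospheric moisture-holding capacity per degree Celsius of warming, the ~2.7C scenario discussed later in the article implies around 20% more available water vapour, which is broadly consistent with the projected 10% to 20% increases in extreme rainfall.

```python
# Back-of-envelope Clausius-Clapeyron scaling: saturation vapour pressure
# rises roughly 7% per degree C of warming. The 7%/C rate and the 2.7C
# warming figure are assumptions taken from common usage and the article's
# quoted scenario, not from the study's own calculations.
def moisture_increase_pct(warming_c: float, rate_per_c: float = 0.07) -> float:
    """Approximate % increase in atmospheric moisture-holding capacity."""
    return ((1 + rate_per_c) ** warming_c - 1) * 100

print(round(moisture_increase_pct(2.7), 1))  # ~20% more water vapour
```

Actual rainfall extremes can scale faster or slower than this thermodynamic baseline, which is why the study's regional projections vary.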
But our analysis also enabled us to tease out some finer insights about what may lie ahead.
By the second half of the century, we found the most intense one-day and three-day rainfall events in a typical year – often involving totals of hundreds of millimetres of rain – are projected to increase by around 10% to 20% across much of New Zealand.
The extent of these increases depends on future emissions, with larger shifts under higher greenhouse gas scenarios. Impacts also vary region-by-region.
Some of the largest increases are projected in the central North Island and parts of the South Island’s west coast – regions already prone to some of the country’s most intense rainfall. In contrast, some eastern regions, such as Hawke’s Bay and parts of Canterbury, are expected to see smaller or more variable changes.
Even so, the overall trend is toward more frequent extremes.
We examined changes under a middle-of-the-road emissions scenario, in which global greenhouse gas emissions peak around mid-century before gradually declining, while global warming reaches about 2.7C above pre-industrial levels by century’s end.
By that point, about half of the locations we analysed could have experienced at least a 50% increase in impactful rainfall events – which we define as events that historically occurred about once a decade – relative to New Zealand’s recent climate (1985–2014).
Around 30% of places could see a doubling, and roughly 10% could experience three times as many events. In some places, however, the largest events may still fall within the range of events in the historical record.
The regional differences we observed reflect a mix of local geography, weather patterns and natural climate variability – meaning chance still plays an important role in how extreme rainfall is experienced in any one place.
In May 1923, days of intense rainfall inundated North Canterbury. In what was one of the most statistically extreme rainfall events recorded in New Zealand’s history, towns were swamped, roads were cut off and hundreds of families were forced to evacuate.
One century later, Cyclone Gabrielle left in its wake flooded communities, thousands of landslides and a national damage bill estimated at between NZ$9 billion and NZ$14 billion.
In each of these cases, large-scale weather systems transported vast amounts of moisture across the ocean toward New Zealand before dumping it in torrential downpours.
These major storms also bore patterns that closely resembled those in several of the most extreme simulated rainfall events that we examined.
Naturally-driven rain-makers – be they low pressure systems, ex-tropical cyclones or moisture-packed “atmospheric rivers” – will always remain part of New Zealand’s weather mix.
But, while future extremes are likely to stem from the same types of storm systems, the consequences will be more severe.
This carries important implications for how Aotearoa prepares for flood risk today and how it adapts to a warmer, wilder future. More than 750,000 New Zealanders already live in areas exposed to 1-in-100-year rainfall flood events.
If tomorrow’s extreme events exceed historic records more often, infrastructure designed for those past conditions may no longer be enough to protect people and property.
Muhammad Fikri Sigid receives funding from the Natural Hazards Commission.
Hamish Lewis has received funding from the Royal Society of New Zealand via the Marsden Fund, and the New Zealand Ministry of Business, Innovation and Employment via their Endeavour Smart Ideas Fund. He is affiliated with Earth Sciences New Zealand.
Luke Harrington receives funding from the Natural Hazards Commission, the Royal Society of New Zealand's Marsden Fund and the Ministry of Business, Innovation and Employment via their Endeavour Fund Smart Ideas programme.
This story appeared in The Logoff, a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life. Subscribe here.
Welcome to The Logoff: President Donald Trump is still trying to limit mail-in voting.
What happened? On Tuesday evening, Trump signed an executive order that would create new citizenship lists to determine eligibility to vote and limit who the US Postal Service can send ballots to.
Is this order going to go anywhere? Very likely not, for a number of reasons. First, it’s almost certainly unconstitutional: The Constitution gives states the power to determine the “Times, Places and Manner” of elections, with no role for the executive branch.
There are also practical obstacles. As Kevin R. Kosar, a senior fellow at the American Enterprise Institute, points out, the implementation of the order in time for the 2026 midterms — should it be allowed to go forward — would present serious logistical challenges.
What’s the context? Limiting mail-in voting has been Trump’s white whale since his defeat in the 2020 election, which he has blamed on mail ballots. Most recently, he’s tried to get it done legislatively: The SAVE America Act, which is currently stuck in the Senate (and unlikely to come unstuck), would not only limit mail-in voting, but impose new voter ID requirements.
Despite Trump’s attacks, there is no evidence that mail-in voting is associated with significant voter fraud.
Didn’t Trump vote by mail himself? Yes. Records show Trump, who has attacked mail-in voting as “cheating,” voted by mail in Florida’s recent special elections, despite being in Palm Beach at the time.
What’s the big picture? As Rick Hasen, a University of California Los Angeles law professor, pointed out in a blog post, the order is flimsy enough that it’s more “election denialism theater” than anything else — intended to paint US elections, baselessly, as not secure and in need of reform.
That kind of theater isn’t without real-world dangers. This specific order is unlikely to throw US elections into chaos — but it’s yet another entry in Trump’s long-running campaign to undermine Americans’ confidence in the democratic process.
It’s moon mission day! You can read my colleague Caitlin Dewey on the new space race here, and tune in to watch on YouTube, C-SPAN, and lots of other places. The two-hour launch window opens at 6:24 pm Eastern.
While you’re waiting, here’s a gift link explaining why the Artemis II spacesuits are not white but a specific shade called International Orange (I’m a fan, personally).
Thanks for reading, have a great evening, and we’ll see you back here tomorrow!
Editor’s Note, April 1, 5:00 pm ET: The interview in this piece was conducted when NASA first revealed the crew for Artemis II in 2023. With the launch now taking place, Vox is republishing the piece.
The crew taking part in the Artemis II launch includes two historic firsts: the first woman, Christina Koch, and the first person of color, Victor Glover, to go on a lunar mission. Hailed by NASA spokespeople as “pioneers” and “explorers,” they have been greeted with fanfare befitting “humanity’s crew.”
But behind the Artemis II program are much more corporate goals. It’s not just that private industry helped build the program’s spacecraft. Space mining companies competing for government contracts want to turn the moon into a cosmic gas station. The vision is to mine the lunar surface for rocket fuel that can then propel us all the way to Mars — and beyond, as humanity takes its self-appointed place in the stars.
Mary-Jane Rubenstein told me that vision makes her want to throw up. A Wesleyan professor of religion and science in society, she’s the author of the book Astrotopia: The Dangerous Religion of the Corporate Space Race.
What’s “religion” doing in that title, and why is a religion professor writing a book about the space program? Rubenstein argues that today’s corporate space race — helmed by Elon Musk, Jeff Bezos, Richard Branson, and others who propose to “save” humanity from a dying planet — is actually rehashing old Christian themes that go all the way back to the 15th century, when European Christians colonized the Americas. Remember how Donald Trump described the Artemis mission and eventual settlement of the moon and Mars? He called it “America’s manifest destiny in the stars.”
But as Rubenstein points out, not everyone thinks it’s the moon’s destiny to be strip-mined, or Mars’s destiny to be settled by human colonists. In fact, some believe these celestial bodies should have fundamental rights of their own.
I talked to Rubenstein about the fear of screwing up space like we’ve screwed up Earth: Is that really a fear of trampling on space’s own intrinsic value, or is it more a fear about human nature? A transcript of our conversation, edited for length and clarity, follows.
When you see news about space exploration, like the announcement about who will be going to the moon next year, is your dominant feeling ... excitement? Dread?
It’s a little bit of dread. Because I worry that all this is getting going before the public really understands what’s happening.
One thing I’m worried about is that some of the astronauts will be tokenized to make it clear that Artemis is a feminist and anti-racist movement. But if we’re looking to make space exploration a liberationist project, just putting representatives of different identity groups there isn’t going to be enough. I worry that it’ll look like the job is somehow done because there is a woman and a person of color on this mission.
The mission itself needs to be analyzed from a feminist and anti-racist perspective first. Then you figure out how to do it well, and then you figure out who’s going to be on it.
There are two words you use to refer to the corporate space race in your book, and the rationale for using those words might not be obvious to readers. You talk about it as “religion” and as “colonial.” Why?
What I’m arguing is that the new corporate space race is an extension and intensification of the initial space race of the late ’50s and into the early ’70s. And that that space race is an extension and intensification of the colonial project that settled the Americas.
The journey that Europeans made across the seas to conquer the Americas and then the journey that white-descended Americans made across the North American continent through what’s known as Manifest Destiny gets extended in the mid-20th century as a new frontier is proclaimed to be open, the frontier of outer space. The space race is a new chapter in European-style colonialism — a vertical extension of that colonial project — as an effort to get more land and more resources for an imperial nation.
The colonial project that settled the Americas was underwritten at every major turn by religious language, religious authorities, religious doctrines. Perhaps most profoundly, the reason Spain was able to conquer the New World was that Pope Alexander VI declared that the New World was his to give — and he gave it to Spain. The conquistadors were underwritten by the head of the Roman Catholic Church; therefore God was endorsing the Spanish conquest of the New World.
This language gets taken up in different ways later. You find a claim to land and resources and a justification for destroying indigenous communities, all authorized by biblical claims. North America is understood very early on to be what early preachers will call God’s New Israel. Just as God gave the Land of Canaan to the Israelites on the proviso that they make it a holy land, God was now giving Europeans a new Canaan. The idea is: Go in there, cleanse it of all unholiness and devotion to any other gods, and establish a new kingdom dedicated to the glory of God.
By the way, there are 20 towns in the US that are named New Canaan.
So if America is understanding itself to be God’s new Israel, it’s like saying Americans are God’s new chosen people. How do those religious themes underwrite the modern corporate space race?
When Mike Pence spoke to space-industry professionals [in 2018], he quoted Psalm 139 and said that “even if we go up to the heavens, even there His hand will guide us.” Then in 2020, Trump used the language of Manifest Destiny in his last State of the Union address when he was declaring his priorities for a second term. This was in the [beginning] of the pandemic, people were dying, and his first priority was going to the moon — to embrace “America’s manifest destiny in the stars.”
That was a call-out to the old idea of Manifest Destiny, that God wants light-skinned people of European heritage to inhabit not only the Eastern seaboard but the entire continent. Now the idea that Trump set forth was, it’s not just the continent that God wants America to have, it’s the entire universe.
And just to be clear, lest people think this is just a Trump thing, this is very much something that the Biden administration has decided to continue, right?
Absolutely. There’s absolutely no difference between the Trump administration and the Biden administration when it comes to space.
Some people object to using the word “colonialism” in this context. We think of colonialism as a hugely harmful thing mostly because European colonizers were coming to inhabited lands and destroying indigenous peoples. But if the moon or Mars or space beyond our solar system is uninhabited, how does “colonialism” apply?
The answer that seems compelling to you totally depends on your frame of reference. Perhaps the most difficult for secular, white Westerners to take on would be this. If you talk to indigenous people — I’m thinking particularly of Inuit cosmology, of Ojibwe cosmology, of Bawaka cosmology from Australia — they will tell you that outer space isn’t empty at all, that it actually is inhabited, that there are indigenous people there: their ancestors.
For the Bawaka People, when people die, they’re actually carried up into the Milky Way alongside the stars. So they’re really concerned that if we mine there, we’re actually doing damage to the habitation of the ancestors. And planetary bodies are often said to be sacred or to be divinities themselves. So, from different perspectives, it’s not just a foregone conclusion that there is nothing out there.
If that doesn’t do it for you, colonialism was also fairly destructive for the nations who were doing the colonizing! At the moment we do not have a robust international legal structure in space. If you’re able to set up, say, a mine there, you’re going to have to defend your mine. So the US Space Force is going to be stationed around the mine to make sure nobody else goes there. And suddenly you’ve got the same clamoring for land and resources that tore the nations apart in the late 19th century, and we had two world wars resulting from that. It seems like a bad idea to set ourselves up for that in space.
Also, the pursuit of wealth and explosion of profit tends to make those who are already wealthy much wealthier. We know that widening the gap between exceedingly rich people and exceedingly poor people is not good for most of the population.
What about ways that an extractive approach to space could potentially do damage to land?
This approach means we’re going to get even more rocket launches than we currently have — Elon Musk sends 60 satellites up at a time — and more launch pads being created and those are usually created in spaces like wetlands. Boca Chica, Texas, for example, has been absolutely destroyed by the operations of SpaceX in that area. Ecologically it’s a disaster. And low-Earth orbit is already so crowded that it’s very hard to see the stars, even for astronomers.
The next thing to point out is that the colonial project has been destructive not only of communities, but of land itself. So then the question becomes whether the land of the moon or Mars has any value in itself, which is to say beyond its value to us.
You write about a group of Australian scholars who argue that it’s not okay to damage the surface of the moon or pollute it, that the moon “possesses fundamental rights.” They’ve even issued a Declaration of the Rights of the Moon. This echoes the “rights of nature” movement, which has successfully won legal personhood rights for lakes and forests. Do you think it makes sense to apply that sort of thinking to an extraterrestrial body?
It’s such a hard question. On the one hand, I can understand that people might think there are severe limitations to applying human-derived rights language to natural formations. We might be concerned that modeling the rights of nature on the rights of humans only allows us to value something insofar as it seems human-ish. But my sense is that we’re working within a complicated and insufficient legal framework and that any strategy that works is worth trying.
You cite the philosopher Holmes Rolston III who argues that natural entities have their own value independent of anything humans might want from them. That doesn’t mean we should never eat a carrot or dig up a weed, but it does mean we should spend time considering what we take from the world and how. Rolston offers criteria for how to know when we shouldn’t destroy something. For example, we should respect “places of historical value,” “extremes in natural projects,” “places of aesthetic value,” and “places of transformative value.”
But are these really about a place’s intrinsic moral worth? To me this sounds more like people grasping for language to talk about instrumental worth — what certain places do for us.
I think it’s very hard to measure the value of something in itself. We’re always going to slip into the language of human perspective; we’re always going to sneak in our own aesthetic criteria. This project has really demolished anything like academic purism in me. I think we’re going to have to give up on purity, inviolable categories or absolute measurements.
But even if there were just some kind of attention to the landscape itself and to what’s important to us (taken broadly) about that landscape … even if we were just to approach the bodies of outer space in the ways that we approach national parks, where you carry out anything you bring in … we would be doing a lot better.
I like this idea of human judgments as a floor or a minimum. Even if we just are thinking about how to protect a place vis-à-vis what is of instrumental worth to us humans, that’s already going to be some improvement.
I think a lot of people are painfully aware of how humanity has screwed up the Earth. And so maybe there’s this fear about screwing up space. But is that really more of a fear about human nature, as opposed to really being about space’s own intrinsic value?
I don’t think it’s so much a panic with respect to human nature as it is a panic with respect to capitalist nature. It’s not all of humanity that wants to conquer the stars; it’s a destructive subsection of humanity that claims to be speaking on behalf of all of humanity and telling us that either all of humanity is going to become extinct forever or we need to nuke Mars [to terraform it, per Musk’s ideas]. It’s a false zero-sum game.
Right, everyone from Musk to Bezos to Branson says the corporate space race will be for the benefit of humanity. This goes back to Eisenhower, who said the US must develop a national space program “for the benefit of all mankind.” I’ve seen this in the AI race too — OpenAI, for example, says its mission is to ensure that artificial general intelligence “benefits all of humanity.”
In your book you take issue with this language of saving all humanity from going extinct, and you write, “The operative fallacy here is known as longtermism.” Longtermism is a controversial spinoff of a social movement called effective altruism, but you say it’s actually a high-tech version of what Malcolm X called “pie in the sky and heaven in the hereafter.” He blamed America’s racist social system on the Christian teaching that those who suffer on Earth will be rewarded in the afterlife, which he said dissuaded Black Americans from overthrowing their oppressors. How does that map onto your worry about longtermism?
In the book I try to expose the clearly religious heritage of colonialism and the remnants of the kind of thinking we find in Pence and Trump when they say that God wants us to conquer the cosmos. But that’s not the most interesting place that religion is showing up at this point. The most interesting place is much more subtle: It’s in the proclamations that “the world is coming to an end.” They’re offering us a classic messianic logic of impending disaster on the one hand and eternal salvation on the other.
So the locus of religious operation has changed from the Church to these private messiahs. The private messiahs aren’t speaking in the name of any recognized religion — the logic claims to be totally secular. But it actually looks a lot like the Christian logic that says suffering on Earth is justified because there’s going to be redemption in another world.
So is your worry that the longtermist doctrine prioritizes the existence of our species in the far future, so it risks propping up the current destructive systems and keeping us docile about them?
Absolutely. Longtermism gives us a recommended sacrifice of the poor, homeless, and hungry of the Earth, because they’re not the future. It’s actually worse than the Christian promise. The Christian promise is that you yourself may suffer for 80 years but you will be rewarded in the afterlife. Here, there’s no reward for the particular people who are suffering. They’re just going to be thrown by the wayside and die in conditions of poverty and misery. But the human species itself will triumph.
The human species will see the Promised Land but the individuals of today will languish in the desert.
Exactly.
Last question for you: If space exploration can be done in a way that doesn’t screw over people or animals or our planet or other planets, are you all for it?
I’m absolutely for it! And there are so many teachers who know how to do this better. They may not be astronomers and they’re probably not corporate leaders. But there are people who know how to live sustainably. If we can find a way to listen to their example, then great! But that would involve locating those people and probably trying this out on Earth first.
A new era of weight loss medication began on Wednesday: The Food and Drug Administration has approved Eli Lilly’s GLP-1 oral pill for sale in the United States.
The approval for the drug, which will be sold under the brand name Foundayo, marks an important technological inflection point for this class of drugs that is transforming obesity care in the US and around the world. The previous generation of GLP-1 treatments were injections: Patients (or their doctors) had to handle a needle and insert it into their body in order to reap the weight-loss benefits.
It’s hard to estimate exactly how much Americans’ needle aversion has tamped down their uptake of GLP-1 drugs. Other factors — especially costs, as well as concerns about long-term safety and side effects, and a preference for other weight-loss tactics — have undoubtedly played a role, based on patient surveys. But the gap between the share of Americans who have tried a GLP-1 drug (about 12 percent as of last year) and the share who are obese (about 37 percent) suggests there is a sizable percentage of people who could benefit from these drugs but have not been taking them.
It’s possible some of those holdouts were waiting for a more convenient option, without the hassle of a needle — and Lilly is betting their new pill will make GLP-1s accessible for many of them.
“This is an oral medication in the sense that we’re used to an oral medication that we can just put it in our Monday, Tuesday, Wednesday, Thursday tray and take it with our other oral medications without regard to food or most worries about drug interactions or anything like that,” Eli Lilly CEO Dave Ricks told me in an interview last week. “That’s pretty different from a weekly injectable. Obviously, a lot of people use weekly injectables very successfully. But what we’ve learned, I think, is that there are a lot of people waiting for something like this. It’s just a little easier to fit into their busy life.”
How those hopes play out in reality now that the FDA has given its green light remains to be seen. And, as always, a new drug comes with some caveats and tradeoffs. Here’s what you need to know.
If you are thinking, “Wait, isn’t there already a GLP-1 pill?”, you’d be right — but there is a catch.
Novo Nordisk received approval for its Wegovy weight-loss pill in December, and it’s been on the market for a few months. But that drug is a peptide, delivering semaglutide in a large-molecule form that is harder to manufacture and requires more care when taking it. The company advises patients to take their pill immediately upon waking up, with 4 ounces of water, and to then wait for at least 30 minutes before eating or drinking anything else.
The Lilly pill is a small-molecule drug — closer in form to statins or blood-pressure medications. That makes it cheaper to manufacture and avoids some of the drug interaction concerns. The GLP-1 market has been periodically hampered by shortages, and Lilly is betting that putting the drug into this new form will allow them to produce a more robust supply. As Ricks put it to me: “We can make basically as much as we need.”
“Given it is in a pill and not an injection, which reduces supply chain needs around plastics and cold storage, and that it does not have special instructions to take it, it is likely to become a popular choice for primary care [physicians] as they won’t have to demonstrate pen usage, etc.,” Dr. Deborah Horn, medical director for the UT Physicians Center for Obesity Medicine and Metabolic Performance, who has consulted for Lilly, told me over email.
The pill form could also help mitigate one of the recurring challenges with GLP-1s: people regaining weight if they stop taking it. Injectables can be difficult to stick with over the long term: People get sick of the shots, they might find it hard to stay on top of a once-weekly injection, they don’t want to have to worry about refrigeration when traveling, etc. A once-a-day pill that you can make part of your existing medication routine could, in theory, make it easier for patients to stay on a GLP-1 if that’s appropriate or necessary.
It’s possible that we are in the midst of the “statin-fication” of GLP-1s. Much like statins have become a drug you take long-term to manage your cholesterol, a GLP-1 pill might become something you take for years to manage your weight. People could also potentially shift to a lower dose over time or switch from an injectable to a pill to make the drug more of a maintenance med to keep your weight stable.
“People often lose a lot of weight on Zepbound and get to their goal weight; maybe they lose about 50 pounds. And they’re like, ‘Okay, I don’t need to keep losing weight,’” Ricks said. “An option — and we’ve done the studies and it’ll be indicated within our label — is you can switch to an oral form. And maybe that fits into your life more easily.”
Here’s what the Lilly pill does not represent: a major advance in how effective these GLP-1 drugs are. In clinical trials, patients lost 12 percent of their body weight on average, in line with the original Ozempic injection, but a smidge lower than Mounjaro, Zepbound, and some of the more recent entries into this drug class. You’re not going to take the Lilly pill for its groundbreaking efficacy: Its convenience is the real pitch.
Cost and equitable access are ongoing challenges. Lilly plans to debut the pill at $149 for a month’s supply of the lowest dose, and refills will then be available for $299 within the next 45 days. That’s lower than the initial price point for a month of Wegovy injections available through Costco, for example, but still potentially out of reach for some patients. Ricks told me that Lilly has struck a deal with Medicare to cover the new pill and other GLP-1 treatments for a copay of $50 per month. He added that many insurance plans for higher earners have also started to cover GLP-1 drugs.
But insurance coverage for lower-income Americans, whether on private insurance or Medicaid, remains spotty. Ricks is hopeful that more insurers will come around as the drugs show their long-term value in reducing not only obesity but its associated conditions like heart disease; as part of the company’s deal with the US government, the drug’s cost and health effects will be assessed over time by federal officials, Ricks said.
“It’s hard to think, if it’s 2030, and we have many of these medicines that we’ve proven the benefits for chronic diseases and the government said it’s worth it after this two-year pilot they’re doing — it’s hard to think of too many employers who would say, ‘That’s not for me,’” Ricks told me. “If [the government says] it’s worth it, I think that’s a pretty ringing endorsement for insurance.”
Like folks using the injections, some people who took the pill in clinical trials reported unwanted side effects, including gastrointestinal distress and debilitating muscle loss. Those symptoms can often be mitigated through appropriate diet and exercise, but my own reporting suggests that not everyone is receiving the necessary support to avoid those negative consequences. The proliferation of virtual pharmacies that exist largely to prescribe GLP-1s, with no other long-term patient-doctor relationship, adds to the risk that people go on these drugs without appropriate supervision and support.
To truly make the most of the GLP-1 drugs, the entire health care system needs to evolve to make that kind of holistic treatment the norm. But as GLP-1 use rapidly expands at the same time access to primary care is shrinking, it is reasonable to worry whether overstretched clinicians will be able to adapt — or whether many people will still be left to navigate their weight-loss journey on their own.
And finally, this is not the last GLP-1 drug. New iterations are in the works, combining different ingredients to make the treatments more effective or to tamp down on undesirable side effects. The Lilly pill may not be the standard of care for long. GLP-1 treatment could start to become highly personalized: As Horn put it to me, somebody with obstructive sleep apnea may still want to take Zepbound because that drug has proven effective for both that condition and weight loss at the same time.
She shared a few questions doctors and patients might consider together when deciding which GLP-1 would be right:
We are already seeing the so-called Ozempic effect in obesity data. The US may be finally starting to turn the corner on one of our longstanding health crises. A GLP-1 pill offers a chance to push that progress even further — if we can figure out how to expand access and how to better support patients so they can lose weight in a healthy way.
Clarification, April 1, 4:15 pm ET: A previous version of this post referred to the “semaglutide revolution.” The story has been updated to clarify that not all the medications for weight loss discussed are semaglutides.
The Waikato is New Zealand’s longest river, central to the identity and practices of Waikato River iwi and a source of drinking water for nearly half of the country’s population.
It is also becoming a case study in what happens when very different environmental pressures hit the same system faster than authorities can respond.
A recent RNZ investigation documented worsening toxic algal blooms in hydro lakes in the upper Waikato. Communities around Lake Ohakuri describe water so green it resembles the “Incredible Hulk”, dogs becoming violently ill and mats of toxic slime covering the surface.
These conditions are a long way from Te Ture Whaimana o te Awa o Waikato, the legislated vision for a river safe for swimming and gathering food.
Harmful algal blooms are becoming worse in hydro lakes in the upper Waikato. Adam Hartland, CC BY-NC-SA

The reporting captured genuine community frustration and institutional fragmentation. But to turn concern into effective action, we need to understand why blooms keep forming where they do.
Otherwise, interventions risk missing the mark. The Waikato cannot afford misdirected effort.
The location of the worst blooms is a clue. Lake Ohakuri sits right next to the Ohaaki-Broadlands geothermal field, where decades of extracting hot fluids for power generation have caused the ground to sink by nearly seven metres.
That geothermal activity releases heat, carbon dioxide (CO2) and mineral-rich fluids into the water, all of which promote the growth of cyanobacteria. These fluids include iron, a nutrient toxic algae need to thrive.
Whether decades of fluid extraction have altered the rate of influx of CO2 and iron remains untested, but the proximity to geothermal fields is striking.
Until now, no one has measured how much of the geothermal CO2 actually dissolves in the river or how far downstream it travels.
During our recent field campaign, we deployed a mobile sensor along the upper Waikato and a technique known as stable isotope analysis to fingerprint the carbon and start filling this gap.
A radio-controlled jet boat equipped with sensors maps dissolved carbon dioxide in the Waikato River. Brian Moorhead, CC BY-SA

The results are stark.
Carbon dioxide concentrations in the geothermal zone reach ten times the background level and the isotopic signature confirms the source as volcanic, not biological.
Huge quantities of dissolved CO2 escape into the atmosphere as the river passes through the hydro lake chain. The water does not return to background levels even by the time it reaches Lake Karāpiro more than a hundred kilometres away.
That lingering excess CO2 could be feeding algal growth well beyond the volcanic zone.
Carbon dioxide levels in the upper Waikato River geothermal zone reach up to ten times the levels seen in Lake Taupo. Adam Hartland, CC BY-SA

The geothermal zone is not the only pressure point. The invasive gold clam (Corbicula fluminea) has rapidly colonised the Waikato since its detection in 2023.
The clams have now been confirmed as far upstream as Lake Maraetai, directly downstream of Ohakuri.
Our research, currently under review, shows the clams are stripping roughly 14 tonnes of calcium carbonate from the river every day, disrupting the water chemistry treatment plants rely on and releasing arsenic in forms that could slip through conventional treatment processes.
Invasive gold clams collected near the Maraetai boat ramp. Michelle Melchior, CC BY-NC-SA

As the clams breathe, they pump carbon dioxide into the water and consume oxygen, tipping the river’s balance away from a system driven by plant-like photosynthesis (which produces oxygen) and toward one dominated by respiration (which releases CO2).
In January 2026, our monitoring buoy in Lake Karāpiro recorded oxygen near the lake bed dropping rapidly toward levels that would suffocate aquatic life.
What prevented a crisis was not management action but weather. Severe storms physically overturned the water column and mixed oxygen back in.
This near miss, averted by luck, is a warning, not a reassurance.
Two very different stressors are now converging on the same river. Geothermal CO2 enriches the water from below, sustaining conditions that help toxic algae grow far downstream.
The clams, spreading upstream into the geothermal reaches, add a second source of CO2 through their breathing, while depleting oxygen and stripping calcium.
What this double pressure will mean for algal blooms – when they form, how long they last and how severe they become – as clam populations continue to expand, is an open and urgent question.
Current monitoring cannot answer it. Toxic algae are sampled monthly at four hydro lakes, with results taking days to return. This is not a criticism of any single agency; national monitoring protocols predate the compound pressures the river now faces.
The local community called for ultrasonic algae-killing buoys, webcams and flushing the lakes. This reflects an understandable desire for visible action, but without understanding the underlying drivers of blooms at these specific locations, we risk treating symptoms rather than causes.
Two million people drink water from the Waikato. Thousands swim in it, fish from it and gather mahinga kai (traditional food gathering) along its length. Iwi have obligations to it that stretch across generations.
The science is telling us, in real-time sensor data, that the system is moving toward thresholds we do not want to cross. The monitoring and governance architecture we have inherited was not designed for the compound pressures now acting on the river.
The question is whether we can build the governance and data-led operational protocols to match the pace of change, before the next bloom or near miss becomes the event we failed to prevent.
Adam Hartland receives funding from the MBIE Endeavour Fund. He is the founder and managing director of Waikato Scientific Ltd, which manufactures scientific water sampling equipment but has no commercial interest in water treatment, clam management or geothermal operations and no financial relationship with any water utility, geothermal operator or governance body.
In a now notorious interview, actor Timothée Chalamet declared he only wanted to work in a creative field people valued. He was not keen on an art form like opera, “where it’s like, ‘Hey, keep this thing alive, even though no one cares about this anymore’.”
His comments predictably triggered a backlash across the opera industry – and some say they contributed to him losing this year’s Best Actor Oscar race. What should the rest of us make of the fuss?
This is the kind of question Melbourne-based opera expert Caitlin Vincent sets out to help answer in her new book, Opera Wars.
Review: Opera Wars: Inside the World of Opera and the Battles for its Future – Caitlin Vincent (Scribner)
Her declared target audience is “the opera curious” as well as “opera lovers”, but the book is no mere operatic primer. Vincent, too, harbours serious doubts about the ongoing relevance and importance of this once venerable European art form.
The book’s driving narrative comes from her exploration of the grounds for such doubts. It examines two broad areas of concern. One is the challenges facing anyone considering an opera career today. Her informed and wide-ranging discussion of them accounts for the more successful and informative parts of the book.
The other is more philosophical, and raises questions like: what is opera good for? What is – or should be – its meaning and value for us today?
Vincent is able to draw extensively on her industry experience: as a professionally trained singer, then an opera producer, and most recently as a librettist.
She is surely right that opera production has never before been as fraught as it is today. Starkly differing views on how opera should be staged are, she suggests, where its “trouble really begins”.
Trained opera singer, opera producer and librettist Caitlin Vincent draws on her experience to diagnose opera’s challenges. Ruth Schwarzenholz

Those views can be encapsulated by two German terms: Regietheater (director’s theatre) and Werktreue (true to the work). In essence, they reflect
disagreement about what a staged opera actually is. Is it a reenactment? A reconstruction? An attempt to recreate the original as closely as possible? (Nods in Werktreue.) Or is it an adaptation? A kind of artistic offspring that’s based on the original but also something new? (Winks in Regietheater).
This pithy and useful description gives you a hint of Vincent’s bouncy, casual writing style. Sometimes, however, her penchant for the literary flourish leads to more tendentious claims. One is her declaration that the Werktreue camp (the opera traditionalists)
aren’t just defending nineteenth-century corsets, grimy cravats, and Parisian garrets. They’re also defending more insidious staging traditions.
What are those “insidious staging traditions”? Vincent quotes opera librettist Mark Campbell: “Why are you even trying to save this racist story? Why are you trying to save this sexist story?”
To be sure, opera has had a long-held interest in exotic settings. In more recent times, this has inevitably raised questions about cultural appropriation and misrepresentation.
For instance, Vincent claims that when Puccini came to write his last opera, Turandot (set in ancient China), “he relied on a Chinese music box and a book written by a Belgian customs officer”. It was “not exactly a recipe for a fleshed-out portrayal”.
Puccini likely first encountered the tale, however, in a later rendering by German poet Schiller, who had himself sourced it from the epic poem Haft Peykar – a work by the 12th-century Persian poet Nizami Ganjavi. There, the princess of the opera’s title is not Chinese, but Russian.
This is why it is often instructive to explore an opera’s broader historical context. Notions of cultural authenticity (or the lack of it) can often be more complicated than appearances may suggest.
Vincent’s discussion of race representations in opera fails to clearly distinguish the unquestionably racist theatrical practices of blackface and yellowface (such as found in American minstrelsy) from a wider practice of stage makeup being used to represent non-white people.
Vincent herself later acknowledges that in some instances, “maybe makeup’s just makeup” (although whether that’s now possible is a whole other discussion). Nevertheless, an opportunity to provide the reader with a more nuanced historical context to this understandably fraught issue was largely missed.
Vincent is much more compelling and convincing, however, when she delves into the practical facets of an operatic career. I suspect much of this discussion will be unfamiliar to many readers. Some of it, for instance, deals with technical aspects of how an opera emerges from score to stage.
But Vincent also considers in detail just what it takes for a singer to “make it” today. One source puts the “total cost of trying to make a career as an opera singer at roughly US$1 million”. Vincent rightly asks: “where does all that money go?” Perhaps the harder question, which she later considers, is: where does all that money first come from?
These days, the path to successfully launching an operatic career likely starts with being lucky enough to be born to wealthy parents.
It is little known that opera singers are usually paid per performance. This means they are not usually paid for the weeks of rehearsals beforehand. And there’s no sick pay. Much about these “industry norms” is ripe for reconsideration.
But that’s far from the worst problem a singer can encounter in the operatic workplace. Incidents of harassment, including sexual, are now more frequently reported. They have involved key opera industry figures, like the late music director of the Metropolitan Opera from 1976 to 2016, James Levine.
Nevertheless, I disagree with Vincent’s assessment:
It’s a well-known rationale in Opera Land, among their artistic realms, that genius is sufficient justification for abuse – and that the art that’s produced matters more than any harm caused in the process.
Such cases reveal, time and again, that it’s not any vaunted idea of genius, or the demands of making great art, that inoculate an abuser from moral and legal scrutiny. It is, rather, their access to institutional power: and in the case of opera, typically the power certain men have been allowed to have over the careers of younger women and men.
Vincent’s book, however, reflects an underlying distrust in exploring what the idea of greatness in art (or artists) might actually mean.
Unless we are prepared to consider why we might decide to value certain forms of art over others, opera will indeed appear to reflect little more than a wasteful, unhealthy obsession with grandiose cultural objects from our (white, European, patriarchal) past.
This lack of interest in considering opera’s aesthetic dimension overshadows Vincent’s discussion about the prevalence of the so-called “canon” of operatic works: the “roughly fifty or so operas, all written by white European men in the eighteenth, nineteenth, and early twentieth centuries”.
These works, such as Mozart’s The Marriage of Figaro or Puccini’s La bohème, are, she notes, “universally recognized as masterpieces of the genre”. But she gives the reader no reasons for thinking this is more than an ambit claim, made by people with narrow vested interests in propagating it.
Vincent’s related claim that this canon “has dominated opera stages since the turn of the (twentieth) century” does not stand up to historical scrutiny either. Even a cursory look at the programming that dominated major opera houses on either side of the Atlantic between 1900 and 1930 (such as found in New York’s Metropolitan Opera) reveals that in a typical season, three or more of the operas performed were either local or world premieres.
What turned audiences away from regular doses of operatic novelty? One reason Vincent raises is, I suspect, the least consequential – the rise of high modernist musical styles in the early 20th century, which tended to eschew mass popularity. Much more significant was the double whammy of the onset of the Great Depression and the rise of film “talkies”. Together, they undercut the economic conditions needed to sustain the creation of large-scale new operatic works.
Any discussion of narrowing repertoire choices also needs to consider what is now going on outside established opera companies. Vincent has run a boutique opera company herself, but she still under-recognises how new operas are often produced and performed today by groups operating without state support, salaried staff or even fixed premises.
In the week I wrote this review, for instance, Melbourne witnessed the Australian premiere of Emma O’Halloran’s Grammy-nominated operatic double-bill Mary Motorhead & Trade by the (aptly named) Australian Contemporary Opera Company.
That said, what Vincent describes as a general tendency towards conservatism in programming would certainly hold true for Australia’s national opera company, which rarely performs works outside the operatic “top ten”. And Australian opera audiences are declining.
Poignantly, this year’s annual ad campaign for Australian lamb featured a local telling an overseas visitor that the Sydney Opera House makes Australians happy. “Your nation enjoys opera?” the visitor expectantly asks. The definitive response? “Naaah.”
Vincent concludes with the suggestion that “opera companies are gripped by fear of change, fear for the future, fear of [...] the new”. She quotes American opera producer Beth Morrison, who narrows it to a much more specific problem:
There’s no shortage of people wanting to perform an opera. There’s no shortage of people who want to produce it. There’s just a shortage of people who want to fund it.
I suspect this straightforwardly economic diagnosis actually points to a more profound truth.
We live in a country that is both significantly wealthier and about three times larger in population than it was in 1954, the year the New South Wales government decided to build the Sydney Opera House.
Is opera’s greatest fight really about avoiding a slide into cultural obsolescence? Or is it about how it might survive in an economic system that increasingly resembles Oscar Wilde’s definition of a cynic – concerned with “the price of everything, and the value of nothing”?
Peter Tregear is a co-founder of the Melbourne-based opera company IOpera. He also serves on the Peer Review Panel for Victorian Opera and has worked in the past for Victorian Opera, Melbourne Opera and Australian Opera.
In 1967, catastrophic bushfires in Tasmania killed dozens of people – and very nearly destroyed Hobart.
A year later, W.D. Jackson, Professor of Botany at the University of Tasmania, published a short but very influential article on why the fires were so bad. He suggested that after Tasmania’s wet eucalypt forests were burned by severe bushfires, there would be a high-risk period during their regrowth when they are at risk of severely burning again.
This, Jackson theorised, was because regrowing saplings form a very dense canopy, leaving little distance between the living leaves and the leaf litter and understorey plants that can ignite canopy fires. If a second fire sweeps through, he predicted, the forests could be replaced with more fire-tolerant scrub.
Was Jackson correct? Is regrowth truly more flammable? It’s very difficult to prove regrowth burns more intensely and accelerates bushfire spread, as it’s not practical to undertake neat, perfectly controlled experiments involving severe bushfires.
But sometimes, scientists get lucky. We took advantage of a natural experiment in 2019, when a severe bushfire burned through a research site spanning old growth wet Tasmanian forests and logged areas of regrowth, giving us access to data before and after the fires.
In our new research, we show Jackson was right. Regrowth does indeed burn more intensely than mature forests.
Jackson’s theory has resonated with generations of fire ecologists and fire managers in Australia and internationally, because it focuses on the interplay between the age of forests and the risk of bushfires.
Worldwide, vast areas of regrowth forest are recovering from clear-fell forestry and wildfires. In Tasmania alone, remote sensing data suggests a fifth of all tall wet forests are in a regrowth stage younger than 40 years old.
After an old forest is clear-felled, it is regenerated using fire to remove logging debris and then sown with seeds native to the area. This puts it in Jackson’s 30-year danger zone, which begins about 20 years after a fire, when eucalypts begin bearing gumnuts. It ends about 50 years after the fire, when trees are tall enough and moist dense understoreys have developed to lower the risk of devastating fires able to kill mature trees.
If regrowing forests make it through centuries without more fires, they could potentially become temperate rainforests, whose deeply shaded, moist understoreys put them at very low risk of fire.
If another severe fire starts before forests reach this safer period, experts have suggested the flammable regrowth could threaten entire landscapes by making fires more intense.
Some experts suggested forests regrowing from logging were a key factor in the huge area burned during the notorious 2019–20 fire season, though others have disputed this.
This is why Jackson’s theory still matters, almost 60 years after he proposed it.
Testing this theory has long proved difficult.
Forest ecologists have instead typically relied on indirect approaches, such as analysing how severe the fire was using satellite data, or estimating likely fire behaviour based on field measurements of the amount of fuel and how much moisture was present.
These inferential approaches can be scientifically fraught, as they are vulnerable to many assumptions that are hard to test or control for.
A previous attempt to resolve this question by experts, including the renowned Tasmanian ecologist J.B. Kirkpatrick, had to be withdrawn due to technical issues. In retracting the paper, the authors noted their results had proven “highly sensitive” to variation in a small number of sites.
In 2019, a lightning strike ignited a fire in Tasmania’s southwest forests. Known as the Riveaux Road fire, it burned through an area of regrowing forest used for research.
This offered a rare chance of a natural experiment. We had pre-fire data on fuel loads, canopy structure and microclimates (areas where local conditions make climate different from surrounding areas) in both mature forests and adjacent areas logged around 40 years earlier.
After the fire passed, we collected more data so we could compare the fire damage (measured by damage to tree canopy) and the effects on the microclimates in both regrowth and mature, unlogged forests.
This natural experiment was conclusive. The areas of post-logging regrowth burned more severely, due to their hotter, drier microclimates and the fact their canopies were closer to the ground.
Interestingly, we found the more intense fires in the regrowth didn’t cause the fire to spread further. This was because the damp understorey of the surrounding mature forests could contain the flames.
That’s not to say this would always be the case. The 2019 fire took place in moderate fire weather conditions, meaning it wasn’t especially hot, dry or windy. If severe fire weather had been present, this dampening effect would likely have been overwhelmed.
Proving Jackson’s theory isn’t good news for forests.
Climate change means fire weather will arrive more often and be more extreme. Combined with the large areas of forest regrowth, this means we will have to be ready for more fires.
In North American conifer forests, thinning out regrowth and burning off leaf litter and other fuel have proven effective in reducing the risks of fire-prone regrowth. Eucalypts have fundamentally different fire ecologies, so we can’t directly apply that research to Australia. Local research is limited, meaning we don’t know yet if this will work here.
Recent research has shown commercial thinning of regrowth in Tasmania doesn’t reduce the risk of fire, because bark, limbs and smashed trunks left after logging act as fuel.
This means we urgently need to find an effective way to reduce the risk of fires in regrowth in wet eucalypt forests in Tasmania and elsewhere in Australia.
Since the lethal fires of 1967, many Tasmanian communities – including large areas of Hobart – are now surrounded by forests still in the dangerous period of regrowth after logging or fires.
David Bowman receives funding from the Australian Research Council, the New South Wales Bushfire and Natural Hazards Research Centre and Natural Hazards Research Australia. At the beginning of his career he was trained by the late W.D. Jackson and the late JB Kirkpatrick whose publications are central to this story.
If you’ve spent much time on TikTok recently, you may have noticed a strange new type of AI brain rot taking over: fruit dramas.
These AI-generated short dramas feature odd-looking anthropomorphic fruit characters engaging in a range of ethically problematic behaviours. Many storylines, for instance, are based around affairs, racist attitudes, and the sexual assault of women characters.
At face value, the videos come across as so bizarre and grotesque they can be hard to take seriously. That is until you realise they’re amassing hundreds of millions of views. One account called ai.cinema021, which has launched a parody series called Fruit Love Island, has more than 3 million followers.
This content is, at best, a water-guzzling affront to the art of animation and, at worst, actively helping to normalise racism and misogyny. So why does it have so many fans?
These videos exploit core features of human psychology. Combined with addictive platform features (such as infinite scroll), the result is an endless stream of content that keeps us engaged – even if the message is immoral, or simply ridiculous.
Short-form video feeds such as TikTok and Instagram reels operate on similar principles to those used in gambling systems. The human brain is highly sensitive to novelty and unpredictability, both of which are linked to dopamine signalling in reward learning.
When rewards are delivered unpredictably, behaviour becomes more persistent. This pattern, known as “variable reinforcement”, has long been shown to sustain repeated actions, even when rewards are inconsistent.
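As a rough illustration of why unpredictable rewards sustain attention, consider a toy reward-prediction model (a hypothetical sketch only, not a model of actual brain circuitry; the learning rate and reward probabilities are invented for illustration):

```python
import random

def mean_surprise(reward_prob, trials=1000, seed=0):
    """Toy value-learning agent: its reward estimate is nudged toward
    each outcome by the prediction error (a crude stand-in for dopamine
    signalling). Returns the mean absolute surprise across trials."""
    rng = random.Random(seed)
    value, learning_rate = 0.0, 0.1
    total_surprise = 0.0
    for _ in range(trials):
        reward = 1.0 if rng.random() < reward_prob else 0.0
        error = reward - value          # prediction error: how surprising?
        total_surprise += abs(error)
        value += learning_rate * error  # learn from the outcome
    return total_surprise / trials

# A perfectly predictable feed soon stops being surprising;
# a 50/50 "variable reinforcement" schedule never does.
predictable = mean_surprise(1.0)
variable = mean_surprise(0.5)
assert variable > predictable
```

Under the predictable schedule the agent’s estimate converges and surprise fades; under the variable schedule every video remains a small gamble, which is the pattern that keeps behaviour persistent.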
AI slop videos offer rapid visual novelty and unexpected emotional turns. You don’t know whether the next one will be absurd, funny, tragic, or strangely compelling.
The videos also compress big emotional experiences. A single clip may move from betrayal, to sadness, to revenge, to humour in seconds. This creates emotional volatility, which increases arousal and sustains attention.
Research shows emotionally charged content, especially when it is negative or surprising, is more likely than neutral material to get our attention.
Many viewers describe a sense that these videos feel “off”. The characters are expressive, but often not fully coherent. The narratives resemble human drama, but lack internal logic.
This relates to the idea of the uncanny valley, where near-human representations produce discomfort. Importantly, these videos rarely become disturbing enough to trigger avoidance. Instead they sit in a middle zone. They are strange enough to provoke curiosity, but not uncomfortable enough to make you stop watching.
This creates cognitive tension. According to cognitive dissonance theory, people are motivated to resolve such inconsistencies. And the way to resolve tension in this case is to keep watching, in search of closure. The mind keeps asking: what is this and where is it going?
We’re also more likely to ignore the unethical messaging because of the format. The characters are highly synthetic. This makes the scenarios feel fictional – even when they reflect real social behaviours.
Research on moral disengagement shows people are more likely to relax ethical judgement when the harm appears abstract or indirect. Fruit videos with themes of betrayal, humiliation or assault can be consumed without the discomfort that would arise if real people were involved.
Much like AI slop, social media algorithms don’t prioritise meaning or quality. They prioritise content that captures our attention.
Recommendation systems are driven by metrics such as “watch time”, “completion rate” and “interaction”. High engagement leads to greater visibility, which encourages the production of more similar content, creating a feedback loop.
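That feedback loop can be sketched in a few lines (item names and scores below are invented for illustration; real recommendation systems are far more complex):

```python
# Toy engagement-driven recommender: items that get watched rank
# higher, so they get shown — and watched — even more.
items = {"fruit_drama": 1.0, "documentary": 1.0, "news_clip": 1.0}

def recommend():
    return max(items, key=items.get)   # rank purely by engagement

def watch(item, watch_time):
    items[item] += watch_time          # engagement feeds visibility

watch("fruit_drama", 0.5)              # one early burst of attention...
for _ in range(10):
    shown = recommend()
    watch(shown, 0.2)                  # ...locks in its top ranking

print(recommend())                     # fruit_drama now dominates the feed
```

Nothing in the loop measures quality or meaning; a single early spike in watch time is enough to make one kind of content self-reinforcing.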
From an AI governance perspective, these videos highlight an often overlooked risk. That is: generative systems don’t just produce content; they can gradually shape our behaviours – often without us realising. This aligns with broader concerns in AI ethics about behavioural influence and manipulative design working on a large scale.
Avoiding social media entirely is not realistic for many people. But small changes can reduce the pull of AI-generated brain rot.
One approach is to introduce a pause before scrolling to the next video. Even a brief interruption can weaken the reward loop, making it easier to put your phone down. When you notice yourself thinking “this feels pointless” or “this is strange”, that’s the best time to stop. In some cases a digital detox might be helpful.
You can also retrain your algorithm. Quickly skip or select “not interested” on videos you don’t want to see – and replace passive scrolling with intentional viewing by seeking out specific content.
Finally, create friction. This might involve disabling automatic playback, or limiting your access to a feed, by disabling the app notification, or removing the app from your home screen.
AI fruit videos may seem trivial and absurd, but they reveal something important about the digital environment. As generative systems scale up, they will only get better at capturing and directing our attention. Understanding the psychology behind this is the first step to resisting it.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
For Australia’s building industry, higher fuel costs since the start of the Middle East war have been just the start of the pain.
Countless construction products are made with petroleum-based products. From bitumen for our roads to plastic pipes, prices are rising, with some supplies already facing delays.
This shock hits an industry still recovering from COVID, while also trying to meet surging demand for new homes and major infrastructure across Australia.
Even before the war, there was fierce competition for tradespeople and materials from big infrastructure projects across Australia. These include the A$3.6 billion Brisbane Olympics stadium, Queensland’s $9 billion Bruce Highway upgrade and Victoria’s $8 billion Big Build for new housing. Meanwhile, governments across Australia have fallen behind on a national target to build 1.2 million new homes by mid-2029.
Whether you’re in the market for a new home, or you’re a builder, here’s how the Middle East war could impact your project – and its cost.
Builders need diesel to run heavy machinery and deliver materials to site. Diesel prices have been rising even faster than petrol since the war disrupted global supply routes.
Other price rises affecting building products announced since the Middle East war began include:
Most of Australia’s bitumen for sealing roads – from new subdivisions to highways – is imported from Asia and made from crude oil products from the Middle East.
Last month, industry body the Australian Flexible Pavements Association warned road authorities “bitumen prices are anticipated to rise by more than 50% [...] and there is a real risk of stock depletion and stock outs in the near term”.
Last week, the Urban Development Institute of Australia’s Queensland branch shared a members-only alert about “new and rapidly escalating challenges with materials shortages”. This includes longer delays for concrete pipes, needed to connect water to new housing estates.
Now, with plastic pipes also becoming more expensive and harder to source because of the global oil crisis, the institute says:
Industry is now experiencing shortages in concrete pipes and [plastic] pipes with no viable alternatives. This is severely compromising the industry’s ability to provide housing at the rate needed to address the current housing crisis.
The result is price escalation at every stage of the supply chain, including for Australian-made products.
History offers little comfort: house construction prices soared more than 40% between 2020 and 2024. While price increases have slowed since, prices remain elevated from pre-COVID levels.
Current conditions echo the COVID period, when sudden and unpredictable cost spikes put intense pressure on construction businesses’ viability.
Home builders working under fixed price contracts can only absorb so much cost pressure before they go bust.
Even before this Middle East war, construction already had more insolvencies than any other industry – more than doubling since COVID.
Despite huge demand for new housing, the 2024-25 financial year saw a record 3,490 construction firms enter insolvency – meaning they couldn’t pay their debts as they fell due.
When builders collapse, the contagion spreads quickly: tradies lose jobs, subcontractors go under, projects stall and consumers face financial and emotional devastation.
For the tradies and subcontractors caught in the middle, the fallout can be overwhelming. Male construction workers are nearly twice as likely to take their own lives as other employed Australian men of the same age.
Our 2025 report looked into the root causes behind the high rate of builders going bust.
We found that as of 2024, almost two-thirds (63%) of building company collapses were concentrated among small builders with fewer than five full-time employees. Typically, they’re operating on thin margins, with unsecured debt and limited financial buffers. These conditions leave even experienced directors vulnerable when supply chains are disrupted and costs surge.
But large builders aren’t exempt. During COVID, large home builder Porter Davis collapsed, leaving 1,700 homes unfinished in Victoria and Queensland. Even Australia’s largest home builder, Metricon, teetered on the brink before recovering.
If this oil crisis lingers, more builders are likely to go bust, slowing down housing supply.
To support the industry, Queensland and New South Wales have both announced a 12-month deferral to the adoption of the National Construction Code 2025, due to start on May 1 this year. At a difficult time, this gives builders more time to adjust to pending changes. It’s unclear if other states will follow.
Longer term, our research recommended a number of reforms, from setting up low-cost independent resolution services to help builders avoid financial disputes with banks or customers, through to strengthening business training for tradies.
For builders, the priority is to be proactive. Spend time identifying high risk areas, looking for lower risk work and keeping financial records current to avoid trading when insolvent.
Stay in close contact with clients, subcontractors, suppliers and – if necessary – your bank about short-term overdraft support. Don’t wait until it’s too late to seek help.
For home buyers, open communication with your builder is essential.
If legitimate cost pressures arise under a fixed price contract, negotiating a fair adjustment may be the best outcome. Working with your builder to negotiate a mutually beneficial solution might cost you more. But that may still be preferable to a builder gone broke and a half-built home.
Lyndall Bryant received funding from the Building 4.0 Cooperative Research Council Grant – an industry-led research initiative co-funded by the Australian Government. That grant was funded in collaboration with the Building and Plumbing Commission (Victoria), Master Builders Victoria and Holmesglen Institute.
Amanda Bull received funding from the Building 4.0 Cooperative Research Council Grant for this research.
Elizabeth Streten received funding from the Building 4.0 Cooperative Research Council Grant for this research. Also has an unrelated grant from INSOL International regarding climate change and insolvency.
The research that informed this article was conducted as part of the Building 4.0 Cooperative Research Centre (B4.0CRC) and funded in collaboration with the Building and Plumbing Commission (Victoria), Master Builders Victoria and Holmesglen Institute.
Fiona Cheung does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Methamphetamine – more commonly known as meth, crystal or ice – is a highly addictive, stimulant drug.
An estimated 7.4 million people in the world are dependent on it or “addicted” to it. They face multiple health risks including paranoia, feeling suicidal, heart problems, strokes, injuries from accidents, and a higher risk of early death.
But there are no medications approved anywhere in the world to treat meth dependence.
Now, a cheap, safe and readily available medicine that has been used to treat depression for years is showing promise. Our trial of mirtazapine, just published in JAMA Psychiatry, shows people who take it cut back their meth use.
Australia has one of the highest rates of meth dependence per capita in the world.
As there are no medications approved for meth dependence anywhere in the world, we have few treatment options.
Currently available treatment options include counselling, detox or withdrawal and long-stay residential rehabilitation. However, access can be difficult and treatment dropout rates are high. Most people who go to rehab relapse.
More sophisticated treatments offered within the community such as contingency management, which involves setting targets and rewards for meeting them, are more effective but aren’t widely available.
Even though there are no approved medications for methamphetamine use, doctors sometimes prescribe existing medications that have shown promise in clinical trials.
Medications that are prescribed off label include prescription stimulants (methylphenidate, lisdexamfetamine, modafinil), the anti-smoking treatment bupropion, the opioid-blocking drug naltrexone (including in combination with bupropion) and antidepressants.
However, these drugs may not work and may cause unnecessary side effects or safety risks.
Studies in recent years suggest the antidepressant mirtazapine may provide some hope.
Two studies were conducted in the United States in an outpatient research clinic in San Francisco, California. Both trials found mirtazapine reduced meth use.
These initial trials were conducted at a research clinic with small groups of patients (60 and 120 respectively) who were monitored closely. Patients were at risk of HIV: men and transgender women who had sex with men. Women and people with depression were excluded.
So our Australian team wanted to know if mirtazapine would have the same benefit if it was used by doctors in community clinics to treat a larger and more diverse group of patients.
The Tina Trial recruited a larger and more diverse sample of 339 people dependent on meth from six outpatient clinics in Australia.
At the start of the trial, participants had used meth an average of 22 days out of the previous 28.
Participants were randomly assigned to take home either mirtazapine (a 30 milligram tablet daily) or a placebo for 12 weeks. The researchers then tracked days when participants used meth across the 12-week period.
People who received mirtazapine reduced their meth use by more than people who received the placebo (an average reduction of seven out of 28 days compared with 4.8).
So the comparative advantage of mirtazapine was modest: 2.2 fewer days of use out of 28.
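For clarity, the arithmetic behind that figure, using the trial numbers reported above:

```python
# Mean reduction in days of meth use (out of the previous 28 days),
# as reported for each arm of the Tina Trial.
mirtazapine_reduction = 7.0
placebo_reduction = 4.8

# The drug's comparative advantage is the difference between the arms.
advantage = mirtazapine_reduction - placebo_reduction
print(round(advantage, 1))  # 2.2 days out of 28
```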
This benefit was apparent regardless of whether people had depression at the start of the study.
Although this reduction is small, in the absence of any alternative medication this is an important step forward.
Our research team believes mirtazapine has a direct effect on meth dependence, distinct from its ability to reduce depression.
This implies mirtazapine is acting directly on brain systems involved in drug reward, and might restore function to pathways that long-term meth use can disrupt.
Our study found no unexpected safety issues when using mirtazapine to treat meth dependence. The most common side effects were drowsiness and weight gain.
Mirtazapine is not an instant “cure” for meth dependence. But in the absence of any approved medications for methamphetamine use worldwide, it is a critical first step in providing a medication to reduce harms from methamphetamine.
Mirtazapine is cheap, safe and readily available. Many doctors are familiar with its use to treat depression.
It is a take-home medication, making it convenient for people to use. So there is no need for daily clinic visits or close medical monitoring.
It is also “off patent”, meaning there are inexpensive generic versions.
In order for mirtazapine to be routinely prescribed for meth dependence outside a clinical trial, regulators would need to approve it for this purpose. This requires research evidence, like that provided by the Tina Trial.
In the meantime, doctors can prescribe mirtazapine off label. Guidelines on the off label prescribing of medications are available from the Royal Australian and New Zealand College of Psychiatrists.
Further information on the Tina Trial is available here.
If you have concerns about your own or someone else’s drug or alcohol use, call the National Alcohol and Other Drug Hotline on 1800 250 015. This 24/7 hotline provides free and confidential information and support.
Rebecca McKetin receives funding from the National Health and Medical Research Council. The Tina Trial was funded by the Australian government’s Medical Research Future Fund (grant no. 2007155).
Shalini Arunogiri receives funding from the National Health and Medical Research Council. She is affiliated with Monash University. She has received speaker honoraria from Camurus and Indivior, and advisory board fees from Indivior and Eli Lilly.
People using other peoples’ ideas, words and creations without acknowledgement is a widespread problem. Plagiarism occurs everywhere from restaurant menus to political speeches and music.
Within academia, plagiarism is seen as a serious breach of integrity for scholars and students.
It’s easy to find media articles claiming plagiarism is increasing among university students. These claims have intensified with the rise of generative AI – which can quickly produce large amounts of text that students can copy and paste into their assignments.
But while AI certainly poses a range of challenges for academic integrity, is plagiarism increasing as much as we think it is?
My team’s new research, which has tracked students at one university over 20 years, suggests it may even be falling.
Precise rates of plagiarism can be difficult to determine. Pre-AI, many claims about increasing plagiarism among students came from cherry picking results of different surveys from different student groups. So they were not comparing apples with apples.
Since AI, we have a lot of anecdotal reporting of cheating. But we do not have a lot of robust evidence of whether cheating has increased over time.
In a new journal article, my colleagues and I have used a rare longitudinal study of plagiarism to overcome this problem.
Every five years since 2004, our study carried out the same survey on plagiarism with students at Western Sydney University (WSU). This means we have been able to track the same phenomena in the same environment over time.
In our survey students are presented with scenarios representing different forms of plagiarism. For example, a student copying text from a book without citing the book. Students were asked whether the behaviour is plagiarism, to test their understanding of it, and how often, if ever, they have done a similar thing. In 2024, we also asked students if they used text generated by AI in their university work, without acknowledging it.
We conducted an anonymous survey of mostly undergraduate students, studying in a range of disciplines. The survey started in 2004 on paper and has been fully online since 2014.
The survey was done in the second half of the academic year to ensure students had the opportunity to both learn about and engage in plagiarism.
In 2024, as well as WSU, we included students from five other Australian universities for additional comparison. This gave us a sample of more than 2,100 students in total for the latest round.
Over 20 years, the survey has found the percentage of students who engage in any form of plagiarism at least once has fallen every five years, from more than 80% in 2004 to 57% in 2024.
This decline corresponds with various measures, such as the use of text-matching software, which can help detect plagiarism. There has also been more training in referencing and citation rules – this reduces unintentional plagiarism.
Although 14% of students in 2024 indicated they had copied from AI without acknowledgement, most of them also engaged in at least one other form of plagiarism. For example, copying from another student’s assignment.
Copying from AI was the sole form of plagiarism for only 2% of students.
Combining students’ answers to whether they understand plagiarism and whether they engaged in it showed most did so knowingly. For example, when it came to verbatim copying from AI, 88% of WSU students who engaged in this knew it was plagiarism.
Interestingly, most plagiarism was accidental 20 years ago when education about academic integrity was less thorough. However, the recent results show students have a better understanding of plagiarism and still do it anyway.
In the survey, two universities used AI detectors (which aim to assess, with mixed results, whether a piece of written work has used AI) and four did not.
Rates of plagiarism from AI were similar between the universities with and without detectors.
Our survey largely looked at only one Australian university. But despite this limitation, we can interpret the results in optimistic and pessimistic ways.
Optimistically, plagiarism has fallen over 20 years. This suggests measures to detect plagiarism and teach students about proper referencing can help.
On top of this, AI has not turned all students into plagiarists – at least not yet. What our study suggests is students who have plagiarised in some other way may now plagiarise from AI as well.
Pessimistically, over half of all students still plagiarise at some time in their university studies. And, because these surveys rely on self-reports, it is likely these figures represent the minimum number of students who plagiarise. Even when surveys, like ours, are anonymous and online, students may still be hesitant to admit to breaking rules.
This means educating students and policing academic conduct remains an ongoing battle.
Guy Curtis was previously an academic staff member at Western Sydney University when he commenced the research discussed in this article.
On April 1 2026, NASA is sending astronauts back around the Moon. And Australia will play a critical role in helping them get there.
Four astronauts will launch from Florida, bound for the Moon aboard the Orion spacecraft. Similar to the 1968 Apollo 8 spaceflight, the Artemis II mission will orbit the Moon without landing, to test the spacecraft and the systems that support it. It paves the way for the next Artemis missions, with an eventual crewed Moon landing slated for early 2028.
Today’s mission will also mark the first time a Black astronaut, a female astronaut and a non-American (a Canadian) will travel to the Moon.
Throughout the journey, ground stations in Australia will track the spacecraft and maintain communications. This vital support not only underscores Australia’s space strengths, but also encourages us to consider Australia’s own direction in space.
Australia’s support of NASA space exploration has a long history. A series of tracking stations around Australia were essential to US President John F. Kennedy’s goal of landing a person on the Moon by the end of the 1960s.
As part of NASA’s mammoth human spaceflight efforts, facilities were established around Australia – in Western Australia, Queensland, and the Australian Capital Territory (ACT).
Indeed, Australia hosted more tracking stations than any other country outside the United States, a contribution memorably celebrated in the 2001 film The Dish.
While much celebrated, even after 60 years there’s still much to learn about Australia’s role in putting the first person on the Moon. Much of the archival record of Australian tracking stations during the Apollo era remains inaccessible in Department of Defence storage, rather than having been transferred to the National Archives.
Australia’s contribution to NASA’s space efforts continued past the Apollo program through the Canberra Deep Space Communications Complex at Tidbinbilla, now managed by CSIRO.
This station has operated continuously since the 1970s as part of NASA’s Deep Space Network, which consists of three stations in the ACT, Spain and California. Combined, these stations have supported all NASA’s deep space exploration missions.
Through it, Australia has played a role in well-known missions such as the Voyager exploration of the outer Solar System and the more recent New Horizons mission to Pluto.
Today, Australia’s role as host to tracking stations makes it vital for all communications with the Artemis II mission.
Mission controllers in Houston, Texas talk to the astronauts; data about the spacecraft (telemetry) and science data are returned to Earth in huge quantities; and video is beamed back to millions.
Two networks enable this communication. First, the Near Space Network handles communication with the spacecraft during launch and low Earth orbit.
Second, the Deep Space Network takes over when the spacecraft is in high Earth orbit and for the voyage to and from the Moon.
At the Canberra station, huge dishes between 34 and 64 metres across transmit and receive vast quantities of data from Orion. These dishes are particularly important given the ten-day mission is expected to be the farthest crewed mission from Earth in history.
Even when the Moon and Artemis II are on the other side of Earth relative to Canberra, the network’s global integration keeps communications unbroken: Australian staff can remotely operate the other facilities while local staff there are asleep, and vice versa.
In preparation for this mission, Australian staff at the tracking station outside Canberra have been training for years. Significant upgrades were also completed before the 2022 uncrewed mission, Artemis I.
Further afield, Australians are developing new methods of communication with far-flung spacecraft. During this mission, scientists at the Australian National University will test laser communications with the spacecraft from the Mount Stromlo Observatory outside Canberra.
Mount Stromlo Observatory in 2011. Freeswimmers for Molonglo Catchment Group/Flickr, CC BY-NC

Australia’s contribution to Artemis II comes at a moment of sustained public interest in space. The prominence of figures such as astronaut Katherine Bennell-Pegg, recently awarded Australian of the Year, has ensured space activity remains in the national spotlight.
The Australian Space Agency has sought to grow Australia’s space efforts in a variety of ways, including through the Artemis Accords. Signed by Australia in 2020, this US-led agreement establishes shared principles for civil space exploration that will return the US and partners to the Moon.
Part of Australia’s contribution will be the development of an A$42 million lunar rover, named Roo-ver. This will launch on a future NASA mission.
All this shows Australia has been gradually moving upward in space for a long time. Where the space efforts go from here will depend on a range of factors, including government policy and the capabilities of local industry and research institutions.
Public opinion is vital, given the cost of space exploration. A recent public opinion survey shows Australians are supportive of space activities, if unsure about the country’s direction.
As the four NASA astronauts travel around the Moon, Australia is also presented with an opportunity to talk about its own important role in space, and the future direction the country might take.
Tristan Moss receives funding from the Australian Research Council.
If you’ve been worried that this Supreme Court might give President Donald Trump the power to strip citizenship away from Americans, you can go ahead and exhale.
On Wednesday, the Supreme Court heard oral arguments in Trump v. Barbara, a case challenging an executive order Trump issued on his first day back in office, which purports to strip citizenship from children born to undocumented immigrants and from many people who are lawfully present in the United States but who are not yet authorized to remain here permanently.
There is no plausible argument that Trump’s executive order is constitutional. The Constitution’s Fourteenth Amendment provides that “all persons” born in the United States are citizens, with one narrow exception that does not apply in Barbara. Just three days after the executive order was issued, a Reagan-appointed federal judge blocked it — after saying that he’s “been on the bench for over four decades” and that he “can’t remember another case where the question presented is as clear as this one is.”
Trump’s order has never taken effect thanks to lower court orders against it. Many of those orders relied on United States v. Wong Kim Ark (1898), a Supreme Court case that rejected a similar effort to restrict who can be a citizen of the United States nearly 130 years ago.
Still, Trump no doubt bet that this Court — which has a 6-3 Republican supermajority that previously ruled that Trump is allowed to use the powers of the presidency to commit crimes — would ignore both the text of the Constitution and Wong Kim Ark and decide the Barbara case based solely on their partisan loyalty to him.
Wednesday’s oral argument, however, left little doubt that Trump made a bad bet. Of the nine justices, only Justice Samuel Alito, the Court’s most reliable partisan for Republican Party causes, appeared to be a certain vote for Trump — although Justice Clarence Thomas asked ambiguous questions and might join Alito in dissent. That leaves seven justices who appear to believe that Trump cannot simply wipe away the Fourteenth Amendment’s text with an executive order.
The Barbara case turns on the meaning of a single word — “jurisdiction” — which appears in the first sentence of the Fourteenth Amendment: “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.”
Someone is “subject to the jurisdiction” of a nation if they are bound by its laws. So, if Trump were correct that some children of immigrants are not subject to US jurisdiction, it would mean that the federal government was powerless to deport them — even if they were in this country illegally. It would also mean that the United States was powerless to arrest them if they robbed a bank.
As the Court explained many years ago in Wong Kim Ark, there are, in fact, some newborns who are born in the United States but not subject to its jurisdiction. When the Fourteenth Amendment was ratified, the most significant exemption to the birthright citizenship rule applied to citizens of American Indian tribes who were born on tribal lands — because those lands were considered a separate nation from the United States. But, in 1924, Congress granted citizenship to all Indigenous people born in the US.
Today, the Fourteenth Amendment’s “subject to the jurisdiction” rule primarily excludes the children of foreign ambassadors and similar foreign officials from US citizenship, because the families of diplomats often enjoy diplomatic immunity from US law.
Trump’s attempt to expand this “subject to the jurisdiction” exception to include children of undocumented immigrants and people here on a temporary basis received a cold reception from nearly all of the justices. After US Solicitor General John Sauer tried to argue that Wong Kim Ark actually supports Trump’s position, for example, Justice Neil Gorsuch quipped back, “I’m not sure how much you want to rely on Wong Kim Ark.”
Similarly, after Sauer claimed that we live in a new world where pregnant foreign nationals allegedly enter the United States to ensure that their child will be a US citizen, Chief Justice John Roberts responded that “It’s a new world; it’s the same Constitution.”
Justice Brett Kavanaugh, meanwhile, asked several questions suggesting that he’d already decided to rule against Trump and was merely trying to decide what the legal basis for that decision should be. At one point, he asked Cecillia Wang, the ACLU lawyer defending birthright citizenship, whether the Court should rule against Trump based on a 20th century statute that also protects birthright citizenship. At another point, he noted that Trump’s lawyers don’t actually ask the Court to overrule Wong Kim Ark, so he suggested that the Court could issue a very short opinion affirming the lower courts and citing that 1898 case.
For her part, Justice Amy Coney Barrett offered a clever hypothetical exposing a contradiction in Sauer’s legal argument. Sauer argued that the children of immigrants who are not “domiciled” in the United States — meaning that they did not intend to remain here indefinitely — are not citizens. But Sauer also concedes that the Fourteenth Amendment was intended to extend citizenship to enslaved people freed during the Civil War.
So, Barrett asked about an enslaved person who was brought to the United States against their will, who always viewed themselves as a captive, and who never intended to remain in the US. Under Sauer’s “domicile” rule, she pointed out, this person could not be a citizen even though Sauer concedes that the Fourteenth Amendment does give citizenship to freed slaves.
These four Republican justices, along with the Court’s three Democrats, appear likely to reaffirm Wong Kim Ark and to declare Trump’s executive order unconstitutional. It appears that it is still possible for Trump to do something that is so clearly illegal that even this Supreme Court will rule against him.
Little has seemingly gone as Washington planned in the war against Iran.
The Iranian people have not risen up, one hard-line leader has been replaced by another, Iranian missiles and drones keep hitting targets across the Middle East, Iran closed the Strait of Hormuz, driving oil and gas prices up worldwide, and in sharp contrast to Trump’s demand for “unconditional surrender,” Tehran has rejected a 15-point U.S. plan for a ceasefire.
So how did things go so wrong?
As a scholar who researches U.S. forever wars, I believe the answer is simple: Trump, like other U.S. presidents before him, has fallen into what I call the trap of asymmetric resolve. In short, this occurs when a stronger power with less determination to fight starts a military conflict with a far weaker state that has near boundless determination to prevail. Victory for the strong becomes tough, even close to impossible.
When it comes to Iran, the Islamic Republic wants – and needs – victory more than the United States. Unlike the U.S., the Iranian government’s very existence is on the line. And that gives Tehran many more incentives – and in many cases very effective countermeasures – through which to fight on.
Typically, in asymmetric wars the stronger side does not face the same potential for regime death as the weaker side. In short, it has less on the line. And this can lead to lesser resolve, making it hard to sustain the costs of war required to defeat the weaker, more determined rival.
Such dynamics have played out in conflicts dating back to at least the sixth century B.C., when a massive Persian army under Darius I was checked by a much smaller, determined Scythian military, leading in the end to a humiliating Persian retreat.
For the U.S. in the modern era, wars of asymmetric resolve have likewise not been kind.
In the Vietnam War, an estimated 1.1 million North Vietnamese civilians and Viet Cong fighters died compared to 58,000 U.S. troops. Yet U.S. resolve proved no match for the North’s. After eight years of brutal war, the U.S. gave up, cut a deal, withdrew and watched North Vietnam roll to victory over the South.
In 2001, the U.S. unseated the Taliban in Afghanistan, set up a new government and built a large Afghan army supported by U.S. firepower. Over the next 20 years, the remnants of the Taliban lost about 84,000 fighters compared to around 2,400 U.S. troops, yet the U.S. ultimately sued for peace, cut a deal and left. The Taliban immediately returned to power.
Many other great powers have fallen into this same trap – and at times in the same countries. Despite far fewer casualties than the Afghan resistance, the mighty Soviet Union suffered a humiliating defeat in its nine-year war in Afghanistan during the 1980s. The same happened to the French in Vietnam and Algeria after World War II.
A similar asymmetry is now playing out in Iran.
Unlike 2025’s 12-day war that largely targeted Iranian military installations, including its nuclear sites, Trump and the Israelis are now directly threatening the survival of the Iranian government. Killing the supreme leader, a slew of other powerful figures, and encouraging a popular uprising made this crystal clear.
Tehran is responding as it said it would were its survival to be at stake. Prior to the current war, Iran warned it would retaliate against Israel, Arab Gulf nations and U.S. bases across the region, as well as largely close the Strait of Hormuz to commercial traffic.
In short, it is going all-in to cause as much pain as it can to the U.S. and its interests.
Iran has suffered a disproportionate number of losses in the current war, both in terms of human casualties and depleted weaponry. As of mid-March, there have been upward of 5,000 Iranian military casualties and more than 1,500 Iranian civilian deaths, compared to 13 dead U.S. service members.
Yet, Tehran isn’t backing down, saying on March 10, “We will determine when the war ends.”
Such Iranian resolve seemingly confounds Trump. Before the war, he wondered why Iran wouldn’t cave to his demands, and he has since conceded that regime change – seemingly a major U.S. goal at the war’s onset – is now a “very big hurdle.”
This conflicts with how Iran was being presented to the American public prior to the war. Secretary of State Marco Rubio said in January that “Iran is probably weaker than it’s ever been.” It has no ballistic missiles capable of hitting the U.S. homeland, a decimated nuclear program and fewer allies than ever across the Middle East.
No wonder a Marist poll from March 6 found that 55% of Americans viewed Iran as a minor threat or no threat at all.
With Iran proving resilient, American public opinion on the war has been decidedly negative. This aspect of war resolve can be especially challenging for democracies, where a disgruntled public can vote leaders out of power.
Fading or low U.S. public support for war was likewise a primary driver in past U.S. asymmetric quagmires.
Indeed, the Iran war is more unpopular than just about any other U.S. war since World War II, with polling consistently finding around 60% of Americans in opposition.
For Iran, a nondemocracy, there are far fewer reliable figures for comparison. Before the war, the government faced a major public crisis with widespread protests, but for many reasons – including its brutal crackdown and a potential “rally around the flag” effect – Iranian public opinion has proved far less salient.
The Trump administration is attempting to mitigate the impact of this asymmetric resolve by insisting that the length and scope of the operation will remain limited.
To reassure the public and calm financial markets, Trump keeps promising a short war and delaying bigger strikes to give space for negotiations that he, not the Iranians, says are ongoing.
History suggests that once faced with a smaller military power showing greater resolve, the larger power has two trajectories. It can succumb to the hubris of power and escalate, such as was the case in Vietnam, Iraq and Afghanistan. Or it can wind down the conflict in an attempt to save face.
In the past, leaders of the stronger side have often opted for the first option of escalation. They just can’t escape thinking that a little more force here or there will win the conflict. President Barack Obama wrongly thought a surge of 30,000 additional U.S. troops into Afghanistan would bring the Taliban to their knees.
Despite signs that he wants out of the Iran war, Trump could still fall to the hubris of power. More U.S. troops are on the way to the Gulf, and B-52 bombers have been flying over Iran for the first time.
As Korea, Vietnam, Iraq and Afghanistan show, following hubris into escalation against a determined foe like Iran will probably come at great cost to the U.S.
The other option – that of winding down the war – is still available to Trump.
And Trump has gone down this route before. He signed a deal in 2020 with the Taliban to end the war in Afghanistan rather than surge more troops in. And just last year, Trump declared victory and walked away from an air war in Yemen when he realized ground forces would be required to overcome the resolve of the Houthis.
The U.S. president could try the same with Iran – saying the job is done then walking away, or entering real, sustained negotiations to end the war. Either way, he’ll need to give something up, such as unfettered access through Hormuz or sanctions relief.
Trump likely won’t like that. But polling suggests Americans will take it. After all, who wants another Vietnam?
Charles Walldorf is a Senior Fellow at Defense Priorities.
One of the largest book publishers in the US has pulled an upcoming horror novel from its scheduled release later this year following accusations that the author used artificial intelligence to write it.
Hachette Book Group was presented with what The New York Times described as evidence that Shy Girl by Mia Ballard was AI-generated. Following this, the publisher said its imprint Orbit was removing the book from publication in the US and UK.
The novel follows Gia, a young woman who is “lonely, broke and depressed with a serious case of OCD”. She encounters a mysterious and rich man who, in exchange for her living as his devoted pet, promises to erase all her debts. The novel follows her time in captivity as she becomes increasingly animalistic in nature.
In an email to The New York Times, Ballard said the controversy “has changed my life in many ways and my mental health is at an all time low”. Ballard has denied personally using AI to write the novel. But she has said that an acquaintance she hired to work on an earlier self-published version incorporated AI tools.
Many people disagree with the use of AI for a host of reasons, from environmental to ethical concerns. But cultivating a climate of distrust around writing and authors is also not necessarily productive, and further pushes AI use into secrecy.
The author now faces a challenging situation, as Hachette withdrawing the book will appear to some to validate the accusations, even if it simply reflects uncertainty.
The book was initially self-published in February 2025 before it was bought by Orbit Books, following a growing industry trend to traditionally publish successful self-published or fan-fiction works.
Issues started to arise regarding the novel’s provenance in mid-2025 on Reddit when one user, who claimed they were a book editor, made a post which pointed out several issues with the novel that suggested it was AI generated.
Their main claim was based on the novel’s repetitive style, something also pointed out by other critical readers. Specifically, they highlighted that almost every noun is preceded by an adjective, actions are frequently described with similes, descriptions come in lists of three and certain words are overused.
The discussion spread to other platforms such as the BookTok community (TikTok users dedicated to discussing books and publishing), Instagram and YouTube.
There is still no final consensus about how Shy Girl was written and Ballard has removed herself from the public eye and taken her social media accounts offline following the scandal. Hachette told The Independent that they “remain committed to protecting original creative expression and storytelling”. They have made no definitive statement on the claims but did tell the NYT that they conducted a thorough and lengthy review of the text.
Readers and publishers have spent years debating the impact of AI in the abstract but 2026 is the year these debates have become reality.
Stories like Shy Girl, and The New York Times’ recent profile of AI romance author Coral Hart, who boasted of using AI to write and self-publish 200 books across 21 pen names, demonstrate that theoretical disputes did not prepare us to be confronted with the reality of AI.
It’s clear that even the suggestion of AI writing inspires immense disgust in many readers. This means that regardless of the truth (if we ever find it out) Shy Girl and Ballard will likely be tainted by this scandal. Therefore, we must ask whether it is possible for publishing and reading to survive not just AI’s increasing normalisation but also the hostile and suspicious environment its use is creating for writers.
As a researcher of contemporary and digital reading culture, I believe we should cultivate an openness around the use of AI in writing by lobbying publishers to provide this information openly and clearly. This is already starting to happen. The Society of Authors, which is the UK’s largest writers’ trade union, has launched a logo to be used to identify “human authored” books – a step toward empowering consumers to know what they are choosing to support with their money.
Copyright law also needs to reflect AI’s reshaping of the creative field. In the US, a work requires a human author to be covered under copyright law, and doubts on this point are potentially a big part of why Hachette pulled Shy Girl from publication, given the publisher’s possible inability to copyright the work.
This creates a difficult position for the novel and its author, since cancellation looks like confirmation of guilt when it may simply reflect doubt. UK copyright law, however, does offer protection for computer-generated works. This creates a murky area where AI-generated or assisted works can receive certain legal protections, but not necessarily the same rights as human-authored works.
Under UK law, computer-generated works can qualify for copyright, with authorship attributed to the person who made the necessary arrangements for the work’s creation. However, these works do not benefit from the full range of protections afforded to human authors, particularly moral rights, such as the right to be identified as the author or to object to derogatory treatment of the work.
This framework may change following a recent consultation led by the UK government on copyright and artificial intelligence. The consultation has now closed and the government has not yet implemented definitive legislative changes. However, its stated priorities suggest any reforms will aim to balance protecting creators’ rights with supporting innovation, investment and growth in the AI sector.
It’s an undeniably fraught situation, which is continually developing. In the near future we may unfortunately see more authors like Ballard made examples of while, behind the scenes, many more may be using AI undetected.
Natalie Wall does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Repeated threats to annex Greenland, controversial diplomatic statements, and, more broadly, the antagonistic stance of the American executive toward Europeans point to an unprecedented climate of distrust among transatlantic allies, of which the latest Davos forum provided a revealing example. But what about alliances between intelligence services?
Often perceived as domestic instruments of political power, intelligence services are, in fact, highly internationalised and enjoy significant autonomy in their dealings with foreign counterparts, though this can occasionally be disrupted by political interference.
Cooperation between intelligence services is not new. Some partnerships date back at least to the First World War and are often the result of agreements between services themselves rather than formal decisions by governments. Whether bilateral or multilateral, these alliances underpin a wide range of activities: liaison officers posted abroad, listening stations, participation in international conferences, and routine exchanges of information.
Research in the social sciences has shown how, over time, a dense network of relationships has developed around shared interests. Counterterrorism, nuclear non-proliferation and other perceived imminent threats have provided strong justification for cooperation, including the exchange of data on individuals, organizations or states considered “dangerous.” The widely held belief that sharing information helps prevent attacks has also encouraged the expansion of surveillance mechanisms – often at the expense of robust democratic oversight.
One example is the many partnerships between the National Security Agency (NSA) and several European counterparts. These collaborations have enabled the pooling of advanced technologies – such as artificial intelligence and algorithmic analysis – to collect and process large volumes of private communications. This work also depends on explicit alliances between intelligence agencies and major tech companies, which have become key intermediaries that, willingly or not, make their users’ data available to intelligence services.
The solidarity and trust on display should not obscure the fact that international cooperation remains a space marked by strong rivalries. Services compete to access information, shape priorities and secure an advantageous position in relationships where resources – financial, human, or technical – are unevenly distributed. In this context, espionage between services and other disloyal practices are also part of the game.
These dynamics suggest that intelligence alliances follow a logic of their own rather than unwavering loyalty to political authority. It is in this context that the Danish military intelligence service monitored the communications of several European political leaders on behalf of the NSA. Above all, because they possess in-depth knowledge of the threats facing the world, intelligence services are able to position themselves at the heart of security decision-making, making political leaders dependent on their expertise.
That said, these alliances are not immune to political pressures. Disputes between intelligence services and political leaders have always existed, but the openly hostile attitude of the “reactionary internationale” embodied by the Trump administration and its MAGA supporters has raised concerns about a possible breakdown – or at least a significant weakening of cooperation.
Faced with an unfavourable political context, intelligence services are often able to adjust and even turn the situation to their advantage.
Several European intelligence services have thus strengthened their cooperation, even raising the possibility of creating a European Five Eyes – in reference to the Anglosphere intelligence alliance linking Australia, Canada, New Zealand, the United Kingdom and the United States.
Others have specifically developed units to better anticipate the unpredictability of the American executive, with tangible effects: staffing in the unit dealing with the United States within France’s DGSE has increased, and the budgets of several European intelligence services are set to rise, benefiting from broader increases in defence spending.
More broadly, history shows that ties between services remain strong even when governments hold divergent positions. In the early 2000s, exchanges between the DGSE and the CIA continued despite disagreements over the war in Iraq.
A more recent example is Brexit, which did not lead to any major rupture in relations between the British police and their European counterparts, who continue to facilitate the flow of intelligence.
As in any relationship, signs of caution, distrust, or even ambivalence can emerge. For example, the British and Danish intelligence services have indicated that they are limiting – but not completely halting – their exchanges with their American counterparts, concerned about the legal implications and, more broadly, the politicisation of US intelligence. Irritated by repeated provocations over Greenland, the Danish military intelligence service went so far as to designate the United States as a national security threat, alongside China and Russia.
Nonetheless, it would be incorrect to assume that, under more normal circumstances, intelligence sharing would happen without any restrictions. Services do not share all their secrets, all the time, with everyone. On the contrary, the restraint shown by some reflects a routine asymmetry in exchanges that persists and can even be heightened during periods of turbulence.
Signs of continuity are evident, underscoring a key reality: intelligence primarily falls under the purview of career professionals, not politicians. Earlier this year, the Davos forum hosted European and Anglo-American intelligence chiefs, including the CIA, in a key meeting to preserve ties with the “Old Continent”.
Concerns among European services are partly linked to Donald Trump’s stated desire to dismantle the “deep state.” While he did follow through on some threats by dismissing personnel within intelligence agencies, these institutions have neither disappeared nor ceased to function. In practice, the executive branch remains dependent on them.
The appointment of controversial figures to lead several agencies, instead of career officials, reflects an effort to align leadership more closely with political and ideological priorities. Current international developments show that intelligence services remain essential to the implementation of foreign policy. Long criticised, the CIA now appears to have returned to favour with the White House, taking advantage of the opportunities offered by the fight against drug trafficking and the conflict in Iran to reaffirm its relevance and legitimacy to political power.
Taken together, these developments highlight the complexity of the relationship between intelligence services and political power – one shaped by both distance and proximity.
Hager Ben Jaffel does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than their research organisation.
The potential to create personalised digital “twins” of your brain and body is a hot topic in neuroscience and medicine today. These computer models are designed to simulate how parts of your brain interact, and how the brain may respond to stimulation, disease or medication.
The extraordinary complexity of the brain’s billions of neurons makes this a very difficult task, of course, even in the era of AI and big data. Until now, whole-brain models have struggled to capture what makes each brain unique.
People’s brains are all wired slightly differently, so everyone has a unique network of neural connections that represents a kind of “brain fingerprint”.
However, most so-called brain twins are currently more like distant cousins. Their performance is barely any closer to the real thing than if the model were using the wiring diagram of a random stranger.
This matters because digital twins are increasingly proposed as tools for testing treatments by computer simulation, before applying them to real people. If these models fail to capture fundamental principles of each patient’s unique brain organisation, their predictions won’t be personalised – and in worst cases could be misleading.
In our latest study, published in Nature Neuroscience, we show that realistic digital brain twins require something that many existing models overlook: competition between the brain’s different systems.
Our findings suggest that without competition, digital twins risk being overly generic, missing out on what makes you “you”.
The human brain is never static. The ebb and flow of its activity can be mapped non-invasively using neuroimaging methods such as functional MRI. A computer model can be built from this, specific to that person and simulating how the regions of their brain interact. This is the idea of the digital twin.
The brain is often described as a highly cooperative system. Yet everyday experiences such as focusing attention or switching between tasks tell us intuitively that brain systems compete for limited resources. Our brains cannot do everything at once, and not all regions can be active together all the time.
Despite this, the vast majority of brain simulations over the past 20 years have not taken these competitive interactions between regions into account. Rather, they have “forced” neighbouring regions to cooperate. This can push the simulated brain into overly synchronised states that are rarely seen in real brains.
In a large comparative study of humans, macaque monkeys and mice, our international team of researchers used non-invasive brain activity recordings to show that the most realistic whole-brain models require not only cooperative interactions within specialised brain circuits, but also long-range competitive interactions between different circuits.
To achieve this, we compared two types of brain model: one in which all interactions between brain regions were cooperative, and another in which regions could either excite or suppress each other’s activity. In humans, monkeys and mice, the models that included competitive interactions consistently outperformed cooperative-only models.
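The logic of this comparison can be illustrated with a toy model – a minimal sketch, not the models used in the study, which are fitted to real neuroimaging data. In a Kuramoto network of phase oscillators, the sign of each coupling term makes a pair of “regions” cooperative (positive) or competitive (negative). An all-positive network pushes the whole system toward global synchrony, while negative coupling between two modules holds global synchrony down (all parameter values here are illustrative assumptions):

```python
import numpy as np

def simulate(K, steps=2000, dt=0.01, seed=0):
    """Euler-integrate a Kuramoto network with coupling matrix K;
    return the time-averaged global synchrony (order parameter r)."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    omega = rng.normal(0, 1, n)            # heterogeneous natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
    r_vals = []
    for t in range(steps):
        # pairwise sine coupling; the sign of K[i, j] decides whether
        # regions i and j cooperate (+) or compete (-)
        dtheta = omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / n
        theta = theta + dt * dtheta
        if t > steps // 2:                 # discard the initial transient
            r_vals.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(r_vals))

n = 40
coop = np.full((n, n), 4.0)                # cooperative-only: all-positive coupling

mixed = coop.copy()                        # two modules: cooperative within,
half = n // 2                              # competitive (negative) between
mixed[:half, half:] = -4.0
mixed[half:, :half] = -4.0

r_coop = simulate(coop)
r_mixed = simulate(mixed)
print(f"cooperative-only synchrony: {r_coop:.2f}")
print(f"with competition:          {r_mixed:.2f}")
```

The order parameter r ranges from 0 (incoherent) to 1 (fully synchronised); the competitive network settles into a lower global value because the two modules pull each other apart rather than locking together – a cartoon of the overly synchronised states that cooperative-only models tend to produce.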
Using a large-scale analysis of over 14,000 neuroimaging studies, we found that spontaneous activity in the competitive models more faithfully reflected known cognitive circuits, such as those involved in attention or memory. This suggests competition is crucial for enabling the brain to flexibly activate appropriate combinations of regions – a hallmark of intelligent behaviour.
Visual summary of the study: when whole-brain models of humans, macaques and mice are allowed to treat interactions between some brain regions as competitive, they consistently do so – generating activity patterns that closely resemble those associated with real cognitive processes. Luppi et al/Nature Neuroscience, CC BY

We concluded that competitive interactions act as a stabilising force, allowing different brain systems to take turns in shaping the direction of the brain’s ebbs and flows without interference or distraction. This ability to avoid runaway activity may also contribute to the remarkable energy-efficiency of the mammalian brain, which is many orders of magnitude more efficient than modern AI systems.
Crucially, models with competitive interactions were not only more accurate but also more individual-specific. This means they were better at capturing the unique brain fingerprint that distinguishes one person’s brain from another’s.
The fact that our findings hold across humans and other mammals suggests they reflect fundamental principles of how intelligent systems work. In each case, we found models with competitive interactions generated brain activity patterns that closely resembled those associated with real cognitive processes.
This could have major implications for translational neuroscience. Animal models are routinely used to test treatments before human trials, yet differences between species often limit how well these results translate. Around 90% of treatments for neuropsychiatric disorders are “lost in translation”, failing in human clinical trials after showing promise in animal trials.
Combining brain imaging data from human patients with whole-brain modelling could radically change this. A framework that works across species would provide a powerful bridge between basic research and clinical application.
If someone needs intervention in the brain, for example due to epilepsy or a tumour, their digital twin could be used to explore how the patient’s brain activity would change when stimulated with different levels of drugs or electrical impulses. This might significantly improve on existing trial-and-error approaches with real patients, and thus provide better treatments.
The general principles of brain organisation across species also offer a path for understanding how to shape the next generation of artificial intelligence. In the not-too-distant future, we may be able to construct digital twins that are more faithful in reproducing the salient features of the human brain – and potentially, AI models that are more faithful to the human mind.
Andrea Luppi receives funding from the Wellcome Trust, St John's College, Cambridge, and the Canadian Institutes of Health Research.
Gustavo Deco receives funding from the European Regional Development Fund, EU ERC Synergy Horizon Europe, and the Department of Research and Universities of the Generalitat of Catalunya.
Morten L. Kringelbach has received research funding from Pettit, Carlsberg and Cillo Foundations as well the ERC. Deco and Kringelbach are the authors of Whole-brain Modelling: Cartography of the Dynamics of Mind. This open-access title is available at https://hedonia.kringelbach.org/whole-brain-modelling/
Have you ever suddenly gone off a food you used to love? This is something people on social media have been talking about – specifically when it comes to chicken.
Users report suddenly becoming disgusted by chicken, sometimes even mid-bite – despite having been able to eat the food just fine previously. The phenomenon is commonly referred to online as the “chicken ick”.
My research is centred on how our sensory system (mainly smell and taste) affects our behaviour. When it comes to the “ick”, it’s all about how we deal with our disgust response.
There are a number of reasons why you might suddenly become “weird” about a food that you used to be fine with. If this has ever happened to you, the good news is there are ways to get over it.
The first reason relates to a change in the way the food is presented.
Maybe one time you noticed your chicken tasted, smelled or looked different than it did other times. This can lead to a mismatch in what’s expected, which can cause your feelings towards that food to suddenly change.
It might also be related to whether you prepared the chicken in a different way to normal. Adding a new ingredient which changes the smell or flavour profile of the dish can also trigger feelings of disgust.
Another possible reason has to do with what you were doing before you got the “ick.”
If you were scrolling on social media looking at unappetising meals before starting to cook your own meal, this can influence the way you subsequently feel about your own food.
Or, if you were preparing the dish near someone who expressed disgust (even if they only made a face), this can influence your own disgust response. The reason this occurs is explained by the human tendency to mimic others via mirror neurons (brain cells that are involved in empathy and imitation) and the related process of emotional contagion – the unconscious process of “catching” the emotion of others.
Some of us are also more sensitive to experiencing disgust than others.
Disgust is an emotion that protects us from things that could potentially harm us – such as foods that are spoiled or unsafe to eat.
Work has shown that people who rate themselves as being more sensitive to feelings of disgust also exhibit higher “ick” tendencies in a dating context (a sudden aversion to a romantic partner). This suggests that people with higher habitual levels of disgust might be more likely to experience the chicken “ick” phenomenon.
Another important factor is how hungry you are at the time.
If you aren’t very hungry, you might be more particular about unexpected food features – such as a different smell, texture or flavour.
On the other hand, when you’re really hungry, you understandably tend to be less sensitive to disgust and may be less likely to notice things that might otherwise have turned you away.
Interestingly, our research found a similar effect also happens when participants were given alcohol. The higher a participant’s blood alcohol level, the lower their sensitivity to disgust.
So, it could be that certain states of being make us more or less likely to experience the “chicken ick.”
Disgust is heightened during pregnancy. Nicoleta Ionescu/Shutterstock
Gender might also have an effect.
Research on disgust shows women have a higher sensitivity to disgust than men. It’s theorised that such gendered differences in disgust sensitivity developed as an evolutionary response to be choosier when selecting potential mates and protect offspring from disease.
Disgust is also heightened during pregnancy and appears to be related to immune function.
If you’re someone who has developed the chicken “ick” before, there are two key things you can try to get over this feeling:
Try preparing your chicken differently next time. Your disgust might be linked to the specific way the food was prepared. The next time chicken is on your menu, try cooking it differently (such as using a different recipe or seasoning) or use a different cut of meat (such as chicken breast instead of thighs or wings). This might help you to unlearn your disgust.
Have someone else cook for you. If the texture or smell of the chicken (particularly raw chicken) has put you off it, try having a loved one prepare the meal for you or go out to eat. This might make it easier for you to eat the cooked dish. Or, buy pre-cooked options from the supermarket that only need to be reheated so you don’t have to handle the raw chicken.
Removing the cues that cause the “ick” in the first place should act as a reset so you can enjoy the food again.
If that still does not work, it could be that you’ve formed a negative association with the food which needs to be “unlearned.”
In this case, it could take a little more time to retrain yourself. Some suggestions for doing this involve pairing food with something positive (such as a favourite food or listening to your favourite music while eating your meal) or even by changing the colour of plateware. By repeating this a number of times, you’ll condition yourself to the pleasant response – and will hopefully be over your chicken “ick.”
Lorenzo Stafford does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
On paper, the numbers look astonishing. The annual rate of inflation in Argentina has plummeted from 211% in 2023 to 31.5% by the end of 2025.
President Javier Milei is taking plenty of credit for the drop. And he spent some time on Wall Street last month, pitching his “chainsaw” approach to public spending as a triumph against inflation.
But as a political economist who has tracked the cyclical history of economic crisis in Argentina, I see a much grimmer story unfolding.
For the drop in inflation is certainly not a victory for Argentine productivity. It’s a byproduct of a deliberate and engineered collapse in people’s wages.
Milei hasn’t fixed the engine of Argentina’s economy; he has simply turned it off. Since he took office in 2023, the country’s manufacturing output has dropped dramatically, with over 2,000 businesses shutting down and 73,000 jobs lost.
In the automotive sector, factories are operating at just 24% of capacity.
These aren’t just dry statistics. Real wages have been crushed so hard that demand for Argentine goods has evaporated. If a manufacturer is using only a fraction of its machinery because nobody can afford its goods, it loses the ability to raise prices – and inflation slows.
By drastically reducing demand, Milei has not solved the inflation puzzle. He has simply removed some of the pieces, by making the population too poor to participate in the Argentine economy.
On top of this, the fear of mass unemployment means workers have no choice but to accept an ever smaller share of the nation’s economic pie. Again, low wages serve to prevent the upward spiral of prices.
So the supposed victory over inflation is actually the institutionalisation of lower wages and a lower standard of living for most people.
A recently passed law (officially named “labour modernisation”) reinforces this new reality. It has effectively increased many workers’ hours and reduced their protections, making labour both cheaper and more disposable.
The new legislation has been criticised as a return to working practices of the 19th century. Far from modernising work, it is about normalising a lower wage share of GDP and ensuring that the shrinking slice of the national income for the Argentine worker isn’t just a temporary emergency, but a permanent feature of the model.
And while the government highlights 4% GDP growth forecasts for 2026, that growth is focused in sectors like agriculture, mining and lithium, which create very few jobs. For the average urban worker the economy hasn’t recovered – it has simply bottomed out at a new, lower standard of living.
That doesn’t mean that the drop in inflation counts for nothing. There has been a genuine sense of relief after the triple-digit chaos of 2023.
The simple ability to shop at a supermarket without the price of goods changing dramatically in days will mark a deep psychological shift for many Argentinians.
But that shift is not based on solid ground. Inflation hasn’t been tamed by a more efficient economy – it has been starved into submission.
Yet remarkably, Milei’s “miracle” is already being packaged for export. From the radical fiscal cuts proposed by Trump in the US to the nationalist platforms of Orbán in Hungary and the Vox party in Spain, Milei and his model are being touted as a blueprint for other economies struggling with inflation.
But what looks like a triumph to some is, in reality, a deepening social crisis. Milei’s Argentina is not a blueprint to be followed. It is a warning of what happens when the cure for inflation is more lethal than the disease itself.
For this level of wage suppression is a stark reminder of Argentina’s economic crisis of 2001, a period of total state failure, sovereign default, bank freezes and 20% unemployment that left a permanent scar on the national psyche.
To have surpassed that level of wage suppression today is a damning indictment of Milei’s approach. But while 2001 was a sudden collapse of a monetary system, the 2026 reality is a slow, institutionalised asphyxiation.
The question for the coming years is how such a model can possibly be sustained. Milei has left the country with no economic levers to pull for a genuine recovery.
With negative net reserves, a domestic market in ruins, and multi-billion dollar IMF and private debts hanging over the country, the government’s path is now dictated entirely by a desperate need for dollars that turns every domestic policy into a plea for foreign capital.
This has created an economic vacuum in which there is no credit for small businesses, no surplus for public investment and no consumer demand to entice private capital back into the real economy.
That is why the administration’s pitch to New York investors in March was essentially a desperate plea for capital to fill this void. But Wall Street is not generally in the business of building factories or creating jobs in Argentina.
If anything, its investors will be looking for easy short-term profits in a newly deregulated market. And what emerges then is an economically divided Argentina. On one side of this will be a thriving enclave of mining and agribusiness designed for the global market, and on the other, a vast urban industrial wasteland where millions of Argentinians struggle desperately to make ends meet.
Can Cinar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
It took less than three minutes for an organised crime gang to steal paintings by Renoir, Matisse and Cézanne, collectively worth around €9 million (£7.8m), from a private museum near Parma, Italy, in March 2026. This is the second high-profile art heist in recent months, after the theft of jewellery worth €9.5 million (£8.25m) from Paris’s Louvre in October 2025.
The items stolen are clearly valuable. But, as an expert in the governance of criminal markets, I can tell you that acquiring the goods is only the first step. Turning this loot into cash is fraught with risk.
The Italian government takes the protection of its cultural heritage seriously, with a whole department of the Carabinieri (Italian police) devoted to combating the theft of art and antiquities. This department scans the global art trade for forged, stolen and illegally exported treasures, demanding their return.
There is little chance of selling the stolen masterpieces on the international art market – even at a knockdown price. Whereas in the past dealers and auction houses might have turned a blind eye to the fishy origins of an outstanding artwork, over the past two decades the norms and procedures of the market have tightened considerably.
Anyone who buys art without checking whether a former owner has registered their interest in the object fails the bona fide (good faith) test. This means that they cannot obtain a good title and so the legal property right remains with the person or institution the artwork was stolen from. Also sales of stolen art where the seller sidestepped due diligence can be voided, meaning the money must be returned.
So reputable dealers and auction houses take their duty of care very seriously. At the very least they check the freely accessible Interpol database of stolen art before the sale. However, private databases – like that of the Art Loss Register – provide greater peace of mind: they list many more lost and stolen objects and limit searches to those with a legitimate interest in an object. When a register finds that someone is trying to bring a stolen artwork into the open market, it collects and passes on all information that could lead the police to its location or the people involved in its sale or storage.
Anything fresh from a museum wall is therefore unsaleable – unless it is jewellery that can be broken up and sold as (expensive) scrap. So, what might be the financial motivation behind this theft?
A Bond-style villain ordering favourite paintings to adorn their lair is an unlikely explanation. Yes, paintings could be stolen to order, but buying art on the open market to launder money is less risky. With high rewards for information or the return of stolen artworks, security and omerta (the code of silence) would have to be completely watertight when displaying stolen treasures.
On the other hand, “rewards for information” could be a motivation for theft in itself. In the middle of the last century, insurers regularly paid “finders” with so little scrutiny that high-value art theft became a profitable low-risk occupation. Institutions like the Art Loss Register broke that cosy coexistence and instead used any leads to help the police conduct recoveries and sting operations.
Nowadays, it is only safe to negotiate a deal over a “finder’s fee” when a stolen object has changed hands so many times that the line to the original thieves is lost in the mist of time. Even so, the ultimate “finder” would be lucky to realise more than 10% of the painting’s value, which they would also likely have to share with the thieves and various shady underworld owners along the way.
However, there is a third reason to steal artworks. Organised crime groups sometimes use stolen artworks as bargaining chips to negotiate more lenient punishment. For example, the Dresden jewellery thieves kept a few pieces of their haul aside to use their recovery to negotiate shorter sentences. Pentiti (“repentant ones”) who want to leave mafia organisations also sometimes provide information on the whereabouts of missing treasures. If there is a perception that stolen artworks can be used to reduce a prison sentence or financial compensation package, their underworld value can grow far beyond the finder’s fee.
While it is difficult to verify the assertion that stolen artworks are used as collateral in drug deals, several unique treasures have indeed been retrieved from properties owned by senior mafiosi. These works have not been found in temperature controlled galleries, but rolled up in dank places that make museum curators weep with despair. Let us hope that the beautiful artworks from Parma are treated with respect until we see them again.
This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Anja Shortland does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
From the moment raw ingredients are harvested to when you cook and eat a meal, an invisible process is taking place: the growth of antimicrobial resistance. This happens when microorganisms (bacteria, fungi, and so on) stop responding to antibiotics or disinfectants.
Often described as a “silent pandemic”, antimicrobial resistance is currently one of the greatest threats to global health.
Antimicrobial compounds are routinely used in intensive livestock farming and aquaculture, both to prevent disease among animals kept in overcrowded conditions and to promote faster growth. While this latter practice is declining thanks to food hygiene and safety legislation, their widespread use has created the ideal environment for resistant microorganisms to emerge and proliferate.
This growing trend is highlighted by recent data from the European Food Safety Authority. Their March 2025 report looked at the resistance developed by zoonotic bacteria (which can be transmitted from animals to humans) and indicator bacteria (those used to indirectly study food hygiene and safety).
The report highlights high levels of resistance to ciprofloxacin, an antibiotic commonly used in human medicine, in bacteria such as Campylobacter coli. This microorganism is found in both humans and livestock, particularly chickens, turkeys, calves and fattening pigs. Similar resistance has also been detected in certain strains of Salmonella.
These findings indicate the urgent need to raise awareness, and to use antimicrobials more cautiously.
Read more: Gutter to gut: How antimicrobial-resistant microbes journey from environment to humans
Resistant “superbugs” are capable of spreading through a number of vectors – including irrigation water, soil, agricultural produce and processing plants – before ending up on our dinner tables. Recognising this web of connections between the environment, livestock and people is the first step towards devising effective strategies to ensure food safety and global health.
A European study, published in Nature Microbiology in June 2025, analysed more than 2,000 samples, including raw materials (such as fresh meat), finished products (such as cheese) and work surfaces from various food industries.
During food’s journey from field to fork, more than 70% of antimicrobial resistance genes – including those conferring resistance to antibiotics used in both human and veterinary medicine, such as penicillin and streptomycin – are transferred between the bacteria present.
The study identified the ESKAPE group of bacteria (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter spp.) as the primary cause of this transmission. The main vehicle of transmission is thought to be S. aureus, as it is present on the skin and mucous membranes of around one third of the population, making it a significant factor in food handling.
So how are the “instructions” for surviving antibiotics or disinfectants shared among microorganisms? The answer is simple: by exchanging genes, as easily as people might swap football cards. This is known as horizontal gene transfer, and works through one of three different mechanisms.
In the first, known as transformation, the bacterium takes up free genetic material directly from the environment, much like picking up a note from the ground and putting it in your pocket.
Through the second, known as transduction, the gene is transported via a bacteriophage, a bacterial virus that acts like a messenger delivering a letter.
And finally, there is conjugation, where two bacteria come into physical contact and, like two computers connected by a cable, pass information directly from one to the other.
Read more: Natural GM: how plants and animals steal genes from other species to accelerate evolution
As if that weren’t enough, the food industry also faces the problem of polymicrobial biofilm formation. These are clusters of microorganisms adhering to surfaces that are highly resistant to external agents and conventional cleaning and disinfection methods. These biofilms can harbour “persister” species which, despite being unable to multiply, endure over time and constitute real sources of contamination. They can also facilitate the transfer of resistance genes.
Biofilms therefore pose a major challenge for current control systems. New technologies in food processing and preservation focus, in part, on combating them through the use of ozone, UV-C light, metallic nanoparticles, cold plasma or even bacteria-specific viruses.
Research focusing on the search for plant-based antimicrobials, such as essential oils, offers a complementary strategy for biofilm control and food preservation. Notable among these compounds are carvacrol (found in oregano and thyme), peppermint essential oil and citral (derived from citrus fruits).
In general, these agents are less toxic than conventional antimicrobials, and are less likely to lead to the development of resistance. By effectively reducing biofilms and eliminating the bacteria that form them, they could help to curb the use of antimicrobials and the rise in resistance to these compounds.
The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond the academic appointments cited above.
Several housing developments are currently underway in Montréal incorporating community‐scale features, including walkable streets, lively commercial corridors, galleries and public spaces.
While building on infill sites already located in the heart of established cities offers many advantages, densification projects can also present complex challenges during implementation.
Drawing on my experiences working as an urban planner and teaching governance at the University of Ottawa, let’s examine emerging trends in urban development projects.
Several development projects currently underway in Montréal include multiple buildings as well as community facilities like parks, gardens, patios, terraces, playgrounds and sports or cultural centers.
So when did the practice of building several structures at once, together with shared community amenities, first begin? The goal of urban planning has always been to organize space, both public and private. But the balance between how these spaces are managed has been conceived in different ways, and has taken on different characteristics over time.
In North America, the commercial development of housing subdivisions expanded significantly after the Second World War. A new transportation technology came to the fore: the automobile.
Read more: Boomburbs: The rapid rise of Toronto’s northern suburbs
The proliferation of very low-density housing developments from those years onward was a direct consequence of the widespread availability of cars. New York’s Levittown neighborhood is frequently cited as a key example of suburban development patterns and of the rise of housing construction as an assembly-line process.
Suburban life was uniform and standardized, matching the mass-market approach to post-war housing. Homes were built the same for everyone, not customized for individual owners. Despite this sameness, people liked them, making them profitable and keeping developers focused on these types of houses for decades.
North American governments also encouraged developers to build suburbs by strictly enforcing zoning rules that separated housing from commercial areas and social spaces. The distances between these areas, and even to employment centres, often necessitated car ownership to get around.
In contrast, current developments in Montréal (including Canoë, Quartier des Lumières, Bridge-Bonaventure, Langelier, Quartier Molson, Esplanade-Cartier) emphasize their walkability and proximity to subway stations in their marketing to prospective buyers.
By the late 1970s, critics were already pointing out the weaknesses of communities overly dependent on the automobile. Their concerns included sedentary lifestyles, social fragmentation, reduced community interaction, loss of farmland and various economic and environmental costs.
Current trends toward mixed-use zoning and walkability can be traced back to the ‘80s, when North American urban planners began organizing the New Urbanism movement, whose principles include walkable streets, mixed-use zoning and human-scaled design.
Although Canadian cities have continued to build suburbs on their outskirts, several initiatives now aim to encourage urban development and infill projects.
Many municipal governments are reconsidering and rolling back the regressive zoning policies they once enforced. The consensus has shifted so much that the federal government has even offered direct funding to municipalities as an incentive to adopt policies that support greater urban density.
Read more: Dense, compact urban growth is favoured by mid-sized Canadian cities
These fundamental questions about mixed‐use development are accompanied by important design details that help make cities pleasant and human‐scaled.
Modernist developments of the “Tower on the Park” type have often included lawns with pedestrian paths. But these pedestrian facilities tended to regard walking primarily as a recreational activity alongside green space.
In contrast, today’s urban densification projects are more socially oriented, making it possible to walk or bike to shops, community centres, schools and other everyday destinations. New developments place community spaces close to commercial amenities like grocery stores and make them more accessible for people who want to walk, cycle or use public transit.
A typical example of the ‘Tower on the Park’ style, the Penn South housing co-operative (1962) in Manhattan features numerous towers surrounded by a strip of landscaped grounds so that the buildings don’t face the street directly. (Wikimedia)
The infrastructure costs associated with urban sprawl have also encouraged municipalities to recognize the development potential of unused or underused land within their cities. But planning and developing these kinds of sites can be more complex than building on peripheral greenfield land, precisely because they are located within existing urban areas.
Montréal’s Molson site, for example, benefits from historic buildings, but it’s challenging to rethink a place that previously served industrial purposes.
Urban redevelopment can entail significant costs, such as the time and resources required to remediate industrial contamination, carry out archeological assessments or consult with affected stakeholders. Redeveloping urban land parcels may also involve more complex governance arrangements, depending on former ownership and the relevant land‐use rights.
Although these infill sites are close to existing road networks, public transit, and other infrastructure, the costs of connecting them to these systems remain.
For example, in the case of the Namur‐Hippodrome project in Montréal, developers have been reluctant to build the roads, sewers and other infrastructure needed to serve the site, preferring that this work be carried out by government.
This situation reminds us that, ultimately, developers are for‐profit companies that weigh the costs and benefits of projects in terms of their own expenditures rather than broader social objectives.
For several years, cities have aimed to create more diverse and appealing urban spaces, featuring varied building heights, destinations and housing types. A thoughtful mix of public and private spaces encourages community interaction and provides natural “eyes on the street,” enhancing the sense of safety. Montreal’s new developments are designed to be vibrant, lively and community-focused.
The creation of social spaces aims to encourage positive social interactions and prevent feelings of anonymity or disconnection during the regular course of daily activities.
The Molson project includes a large park along the banks of the St. Lawrence River. The Esplanade-Cartier project has emphasized social amenities, featuring not only a pedestrian street but also a “project house” with a community garden on the rooftop terrace. Rooftop gardening is already well established commercially in Montréal, but this project is among the first community gardens on the roof of a private building in Québec.
Read more: Growing pains: An Ontario city’s urban agriculture efforts show good policy requires real capacity
Building within already developed areas can be complex. The 10-year consultation and planning process for the Namur-Hippodrome project in Montréal is one example, but there are others.
Just a stone’s throw from Parliament Hill, the redevelopment of the Lebreton Flats in Ottawa has been promised for decades. The process of developing the Ookwemin Minising residential neighbourhood on Toronto’s waterfront has been underway since 2017.
The numerous urban densification projects currently underway in Montréal indicate a growing recognition that suburban development models lacked diversity.
Today’s projects aim to place new housing near vibrant cultural and commercial corridors. While elements of the 15-minute neighborhood concept — such as mixed-use zoning that encourages walking and transit — aren’t new, stakeholders seem to be embracing New Urbanism principles in planning, design and construction. It’s clear there’s been a shift in thinking about what makes a neighbourhood desirable.
Christina Bouchard does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has disclosed no affiliations other than her research organisation.
Last summer, a wedding photographer walked begrudgingly into a physics laboratory outside Rome. Feeling uninspired by the intricate machinery around him, he decided to turn off the lights. “I wanted to create a world that was a bit more intimate,” said the photographer, Marco Donghia. He had been brought into the lab to participate in a photography contest by his sister Raffaella Donghia…
In the wake of Trans Day of Visibility, the risks of being seen are clearer than ever, from rising hate crimes and online harassment to the spread of anti-trans legislation.
But visibility alone is not enough. Trans people are still systematically under-counted or obscured in the data that shapes policy. In an era when policy and even advocacy are increasingly data-driven, counting trans people properly remains essential — without it, inequality cannot be adequately addressed.
To do so, we need to improve data collection, analysis and sharing practices.
Although governments and organizations are increasingly collecting data on trans people, current methods can lead to under-counting.
When Canada became the first country in the world to publish census data on trans and non-binary people, it collected that information using a household questionnaire. Parents of trans youth might have been the ones filling out the answers for their children.
This likely contributes to under-counting: younger people are typically more likely to identify as trans, yet the census showed a drop-off among 15- to 19-year-olds, the age group most likely to still be living with their parents.
The drop-off is lower in countries like Scotland, which use private, individual questionnaires, offering a potential model for others.
But even when trans people are included in data sets, they can disappear during analysis.
This often happens when trans people are grouped with other LGBTQ2S+ people in a single category, a pattern seen across both academia and community-based research.
For example, studies on political candidates that treat LGBTQ2S+ people as a single group often find little evidence of discrimination, yet studies examining trans candidates separately show that they face voter bias.
Similarly, while LGBTQ2S+ candidates overall raise less money than straight, cisgender candidates, the causes differ. For many sexual minority candidates, funding gaps stem from structural inequalities in incumbency, past political experience and district competitiveness, while trans candidates would still raise less money even if those inequalities disappeared.
Disaggregated analyses therefore show that targeted interventions — such as bias-reduction efforts and dedicated funds — remain necessary for trans candidates.
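The aggregation pitfall described above can be sketched with a toy calculation. All numbers below are invented purely for illustration (the `raised` figures, the group labels and the gap sizes are hypothetical, not drawn from the studies cited): pooling every LGBTQ2S+ candidate into one average hides the fact that, within each incumbency category, the hypothetical trans candidates raise less.

```python
# Hypothetical illustration of how pooling subgroups can mask a
# subgroup-specific gap. All figures are invented for demonstration.

# Campaign funds raised (arbitrary units) by candidate group and role.
raised = {
    ("cis_lgb", "incumbent"): [100, 105, 98],
    ("cis_lgb", "challenger"): [60, 62, 58],
    ("trans", "incumbent"): [80, 82, 78],
    ("trans", "challenger"): [45, 47, 43],
}

def mean(xs):
    return sum(xs) / len(xs)

# Aggregated view: pool all LGBTQ2S+ candidates into one number.
pooled = [x for vals in raised.values() for x in vals]
print(f"Pooled LGBTQ2S+ average: {mean(pooled):.1f}")

# Disaggregated view: compare trans vs cis LGB candidates *within*
# the same role, holding incumbency (a structural factor) fixed.
for role in ("incumbent", "challenger"):
    cis = mean(raised[("cis_lgb", role)])
    trans = mean(raised[("trans", role)])
    print(f"{role}: cis LGB {cis:.1f} vs trans {trans:.1f} "
          f"(gap {cis - trans:.1f})")
```

The single pooled average says nothing about who is behind; only the within-role comparison reveals that the hypothetical trans candidates trail in both categories, which is the kind of pattern disaggregated analysis is designed to surface.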
Some organizations have recognized the perils of aggregation and worked to produce research that makes trans people and their experiences visible. The Community-Based Research Centre (CBRC), Canada’s leading data collector on queer and trans health, offers a compelling example.
Initially focused on cisgender gay, bisexual and queer men, CBRC later expanded to include trans men, non-binary and Two-Spirit people. However, as samples broadened further to include all queer and trans identities, subgroup-specific findings risked being overshadowed unless data were disaggregated in reporting. In response, the organization began producing research that specifically examines trans experiences.
But even when data are collected and analyzed appropriately, access remains an obstacle.
Data-sharing practices can also create barriers to trans-specific advocacy and policymaking when data are inaccessible or released only in aggregate form.
The 2021 census highlights this issue. Apart from Statistics Canada’s original release and a report showing poorer socioeconomic outcomes, we still know very little about trans people.
Statistics Canada usually only makes gender-based data from the 2021 census publicly available under the categories “Men+” and “Women+,” randomly assigning non-binary people to either group and not indicating whether anyone is trans.
If researchers want information about trans people, they must request access to a Research Data Centre through a lengthy process involving security clearance, fingerprinting, a credit check and long wait times, making it difficult to study these communities.
A few practical and co-ordinated changes in how data are collected, analyzed and shared would improve trans visibility. Here are four ways to start:
Although we share ethical concerns around trans people’s privacy, there is often a way to share data without making individuals identifiable.
Visibility is complicated, but being counted in data is essential.
While better practices won’t fix everything, they are a good place to start. Because without better data, we cannot design effective policy or advocate for meaningful change. Let’s ensure trans people are counted, too.
Elizabeth Baisley has received funding from the Social Sciences and Humanities Research Council of Canada.
Francesco MacAllister-Caruso previously worked for the Community-Based Research Centre (CBRC) from 2020 to 2025.
Quinn M. Albaugh has received funding from the Social Sciences and Humanities Research Council of Canada.
The government’s new social cohesion action plan, Protecting What Matters, is frank about its urgency: “Social cohesion is ... not just a good in and of itself. It is also a vital front in the resilience of our national security.”
The 2024 Southport attacks and subsequent disorder, rising religious hate crime, unrest over migration policy and domestic extremism have all forced the issue of community division. Yet the government’s answer, built around integration, interfaith dialogue and civic ceremonies, mistakes the symptom for the disease.
“Cohesion” is vague, unmeasurable and elastic enough to mean whatever the government of the day needs it to mean. People describe the places they love as close-knit and safe, not “cohesive”.
A better framework would be community resilience: the measurable capacity of neighbourhoods to absorb shocks, resist divisive narratives and recover from crises. You cannot integrate people who are isolated, impoverished and without the infrastructure to bring them together. COVID laid bare what the evidence already showed: communities with stronger social infrastructure and higher levels of social capital demonstrated greater resilience to the pandemic’s social and economic shocks.
The government strategy does contain a chapter on “resilient communities”. However, it frames resilience narrowly, as emergency management of religious and political extremism, rather than as the everyday and routine fabric that makes any form of solidarity possible at all.
There is an extraordinary gap in Protecting What Matters. While there is acknowledgement of the effects of “visible deterioration of public services”, the word “poverty” does not appear once. The plan frames division through religion, identity and Islamophobia, which are outcomes and proxies, not root causes.
A study of over 15,000 residents across 839 English and Welsh neighbourhoods, validated by a 2024 analysis of the Understanding Society dataset, shows that deprivation, not diversity, erodes trust, participation and neighbourliness. Once you control for poverty, diversity is associated with higher volunteering and charitable giving. The crisis of solidarity is a crisis of resources, not cultural difference.
There is an undertone of nostalgia in the government’s plea for communities to “integrate”, a wistfulness for tight-knit mining towns where everyone knew their neighbour. But those communities were built on something material: secure jobs, union membership, working men’s clubs and shared economic fate.
More in Common’s 2025 polling finds that 44% of Britons sometimes feel like strangers in their own country – a figure that could be read as evidence of cultural division. But More in Common’s own analysis shows this alienation is concentrated in economically left-behind areas, not diverse ones. People do not feel like strangers because their neighbours look different. They feel like strangers because the institutions that once made them feel they belonged – clubs, pubs, unions and jobs – have gone.
The loss of social infrastructure has been devastating to communities across Britain. chrisdorney/Shutterstock
The argument that more homogenous communities are more cohesive is seductive, but weak. Britain’s most ethnically diverse neighbourhoods are not its least cohesive – they are, as Manchester researchers found, its healthiest. Mining towns were cohesive despite being male-dominated, often racially exclusive and economically coercive. The lesson is to replicate not their demographics, but the material conditions: jobs, institutions and shared infrastructure that give people a reason to show up.
Work provides far more than income: it furnishes identity, routine and daily social connection. Unemployment is not merely an economic condition; it is an isolating one.
A recent randomised controlled trial by the Department for Work and Pensions found that structured group job-search workshops improved both mental health and employment outcomes among benefit claimants, precisely because they restored the social support, routine and shared purpose that work normally provides. Community resilience cannot be separated from economic development. Departments such as DWP and Jobcentre Plus have a direct stake in the social capital agenda.
Research I have conducted at the Independent Commission on Neighbourhoods (ICON) and a recent Joseph Rowntree report show that social infrastructure is key to resilience, but that different communities have different needs.
New housing developments need parks and primary schools from day one: accessible spaces that create early encounters and establish trust between newcomers. Established but deprived communities need to restore what has been stripped away, whether the pub, the library or the community centre. Sports facilities build bridging connections across difference, faith buildings deepen bonds within communities and civic spaces create the linking ties between residents and institutions. The task is to match the infrastructure to the social capital gap, not apply a single template everywhere.
The real test, which my colleagues and I call the “Wet Wednesday Night Test”, is whether your investment in social infrastructure gets 14 people to turn up for football (or cub scouts, or a book group) on a wet Wednesday in February. Nobody comes to “build social capital”. They come because the pitch is free, the lights work and there are hot showers. The pint afterwards does more for integration and social capital than any strategy document ever will.
People don’t show up to the football pitch to ‘build social cohesion’. Natee K Jindakum/Shutterstock
ICON’s research, drawing on over 100 peer-reviewed studies, shows that social infrastructure generates £3.50 for every £1 invested. Every £10,000 invested prevents an estimated £105,000 in riot damages.
During the 2011 riots, 71% of incidents occurred in areas ranked among the most deprived 10% of England – the same year in which 287 community centres had closed. The government described this as a “social cohesion” problem; it was a social infrastructure problem.
The government’s £5 billion Pride in Place programme makes a start at investing in communities. But more investment is needed to address the challenges in our most deprived neighbourhoods, where people face life expectancy four years below the national average.
A serious approach would use existing schools, job centres and childcare settings as social hubs, and make public transport free for under-18s so that young people can move around their own towns. And, it would tackle the poverty, insecure work and collapse of institutions that once gave people a reason and the means to show up for each other.
Build those foundations and what politicians call “cohesion” will follow. Nobody will use that word to describe what they feel when they step outside of their front door. They will just say it is a good place to live. That is enough.
Adam Coutts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
As we emerge from a relentlessly gloomy winter in the UK, many are itching for a holiday in the sun. For some that means seeking warmer climates abroad and hopping on a plane to get there.
But as climate change brings wetter winters to the UK, flying for holidays is fuelling rapidly rising aviation emissions. And addressing this not only needs a shift towards climate-friendly travel but a reimagining of where holidays take place.
For years we’ve been sold the promise of guilt-free flying through green technologies such as sustainable aviation fuels and carbon offsetting from polluting airlines.
But all come with significant limitations and none are ready to deliver the emissions reductions we need within the time we have. Ultimately, without curbing demand, current climate policies will not deliver any major emissions reductions in aviation. That makes it more important to reduce how much we fly.
In the UK, aviation is set to become the largest-emitting sector by 2040, and this rise is being driven primarily by leisure travel. This includes vacations and visiting family or friends, with the majority of departing passengers flying for holidays.
The good news is that the growth in aviation emissions isn’t being caused by your annual holiday to Spain. Most flights are taken by a relatively small number of people who fly several times a year: 70% of flights are taken by just 15% of people. This group is also more likely to take frequent short-haul flights that could be replaced by train. Shifting this frequent-flyer group from planes to trains would have a significant impact on cutting emissions.
Trains are significantly better for the climate compared to flying, with a single flight from London to Berlin clocking up the same amount of carbon as 11 trips by train.
As a researcher focusing on how to promote flight-free holidays to reduce aviation emissions, I used to find this reassuring. We didn’t necessarily need to change where we went for holidays. We just needed to get frequent flyers on trains instead of planes.
But, sadly, it isn’t that simple. Recent research has found the majority of UK aviation emissions actually come from long-haul leisure flights. So even if all flights on routes that could be completed by rail in under 24 hours were replaced, this would only address around 14% of UK aviation emissions.
Reducing aviation emissions therefore requires not only getting frequent flyers to shift from planes to trains, but asking wider questions about where people want to go and why.
Reducing demand for flying isn’t just a structural challenge addressing cheap flights and expensive trains, but also a social one. Five minutes scrolling on Instagram bombards you with bucket list destinations and influencers implying a life well lived is a passport full of stamps.
Since the rise of budget airlines in the 1990s, flying for holidays has become increasingly normalised socially, despite largely remaining something only a relatively wealthy few do regularly. And the pull isn’t just about cost and convenience.
Research shows that if cost and time weren’t an issue, people say they would fly more. Flying has become a means to an end in reaching the exotic, unfamiliar and – crucially for British people – the sun.
Tourists associate distance with novelty, contributing to domestic holidays being less popular than those abroad. There’s almost a hierarchy of destinations where places furthest away and more novel feel more desirable. My ongoing research on how people talk about holidays reflects this – some questioned whether the UK even counts as a holiday.
I have found that holidays in far-away places seemed to impress participants more than those spent in the UK and Europe, often with responses such as “wow” and “amazing”. Destinations further afield were referred to as “grand”, “swanky”, “extravagant” and “big”, contrasting with the language used when discussing holidays closer to home with “only”, “little” and “just”. In this way, the places we visit on holiday act as social currency in conversations. Being well travelled grants us cultural capital, the accumulated knowledge and experience of the world signalling social status.
But ideas of a good holiday are open to change. In one survey, half of the respondents said they flew less because they knew someone who had given up flying due to climate change. So social influence works in both directions.
Some people, such as those in the slow travel movement, are already resisting the idea that closer destinations are somehow lesser. Participants in our ongoing research described planning trips around where they can feasibly get to by train, making the journey part of the holiday or foregrounding quality time with loved ones over the destination.
This isn’t about giving up holidays abroad and foregoing the sun, especially if you’re only flying to a European destination once or twice a year. Structural change, like fairer pricing and better rail connections, is also essential (and long overdue) if people are to make changes.
Even taking the train from London to Edinburgh costs on average 60% more than flying, and this gap will persist until airlines are taxed fairly and train tickets are priced the same as or cheaper than plane tickets. These are policies the public supports.
So as we look ahead to summer it’s worth asking if what we’re actually longing for – whether it be warmth, rest, adventure, quality time, cultural interest or a change of scenery – really requires a long-haul flight (or lots of short-haul flights). A sustainable holiday starts with asking that question before deciding where to go.
Sarah Barfield Marks does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
By Easter 2026 it was still not clear when – or how – the war initiated by Israel and the US against Iran would end. But what was already clear was that it would harm Africa in a number of ways.
Firstly, it would adversely affect the global supply and prices of oil and gas, fertilisers and food. Secondly, local currencies would be affected. More than a month after the war had started a number of African currencies had begun to lose value against the US dollar.
Thirdly, interest rates stopped falling and further rate increases were highly likely. Fourthly, access to affordable foreign financing would decline.
How should Africa respond?
African countries cannot avoid being harmed by the current Gulf war. Nevertheless, based on my work in international economic law and global economic governance, I think there are two lessons that, if followed, can help the continent emerge from the crisis in a better place.
First, governments and societies need to be pragmatic. Their first priority must be to do whatever they can to mitigate the impact of the war, particularly on their most vulnerable citizens. This will require governments to make trade-offs.
They will have to reallocate budgets to at least maintain the level of imports necessary to meet the society’s basic needs. They will need to convince their creditors to help finance their necessary imports. They will also need to persuade them to be flexible enough that they leave governments with at least some policy space.
Second, states and societies need to identify opportunities within the crisis for actions that over the medium term can help them meet their financing, economic, environmental and social challenges. This requires collaboration between the state and its non-state stakeholders. Business, labour, religious groups, civil society organisations and international organisations all have something to contribute.
The focus of Africa’s efforts in the short term must be on minimising the negative effects of the war and on managing the state’s external debts in the most sustainable and effective way.
This is easy to state, but hard to implement. This is particularly the case in the current international environment, in which it is not realistic to expect donor countries and other international sources of finance to be particularly generous.
African countries will need to convince their creditors to acknowledge that this crisis is beyond Africa’s control and that they should not compound the pain that’s being experienced. This will require, at a minimum, that the creditors agree to suspend debt payments for the next year.
Creditors have already accepted the principle that debt payments can be suspended when debt challenges arise from sources beyond the debtor’s control. Many of them have accepted clauses requiring such action under specific conditions in their most recent debt contracts. They also did this during COVID.
Second, African countries, which are already heavily indebted, should challenge their multilateral creditors to accept the consequences of being among the continent’s biggest creditors. These include the World Bank, the International Monetary Fund and the African Development Bank. By custom, these institutions are treated as preferred creditors, meaning they are paid before all other creditors. Rather than participating in debt restructurings, they make new loans to the debtor in crisis. This shifts the restructuring burden onto the debtor’s other creditors and increases the total amount owed to the multilaterals.
This cannot continue. These institutions need to be more creative in providing financing to Africa. This should include:
Third, governments should work with the Alliance of African Multilateral Financial Institutions to use these institutions more effectively to finance African development. For example:
Fourth, African governments must build on the efforts they began last year to become a more effective advocate for African development financing interests at the international level. Among these efforts was the initiative by African ministers of finance to develop common African positions on sovereign debt restructurings. Another was South Africa’s launch of the African Expert Panel that proposed a number of initiatives on African debt and development financing.
First, African countries should advocate for the IMF to review its governance arrangements so that it becomes more accountable and responsive to developing countries, including African states and societies.
They should also advocate for the IMF to use its existing resources, including its gold reserves, more creatively to support Africa.
Second, Africa should call for a debate on the preferred creditor status of multilateral financial institutions. This has become particularly relevant because the members of the Alliance of African Multilateral Financial Institutions are claiming that, like all other multilateral financial institutions, they are entitled to this status.
It is not clear that there are good arguments for excluding these institutions from preferred creditor status while protecting the position of the legacy institutions. This suggests that there is a need for some general principles that help determine which institutions should be treated as preferred creditors. These should be acceptable to all multilateral financial institutions and other market participants.
Third, African societies must make every effort to demonstrate that they are taking control of their own development. They should demand that their governments and all other actors in African development finance behave responsibly in regard to the financial, economic, environmental and social aspects of these transactions.
Another medium term objective should be to limit the illicit financial flows that are so often associated with international trade and investment. This goal would be advanced by the successful conclusion of the current efforts to agree on a UN Framework Convention on International Tax Cooperation.
Danny Bradlow is a Senior Non-Resident Fellow at the Global Development Policy Center, Boston University, and a Senior Fellow at the South African Institute of International Affairs.
During the Mau Mau uprising between 1952 and 1960, the British colonial government confined an estimated 150,000 Kenyans in a sprawling network of “emergency” detention camps.
None of those held in the camps had been found guilty in a court of law. Instead, they were detained on suspicion of supporting the uprising.
British control over Kenya was effectively declared in 1895. A distinctive feature of colonial rule was the decision to encourage white settlement. These settlers were granted vast tracts of Kenya’s most fertile land and pushed policy in an increasingly harsh and unequal direction.
By the early 1950s, many African Kenyans were facing severe land shortages in the countryside and desperate living conditions in urban areas.
In 1952, this situation erupted into the Mau Mau uprising, a broadly anti-colonial rebellion.
The British government responded with overwhelming force. It declared a state of emergency and suppressed the uprising militarily.
Revelations about the extreme violence employed in some emergency detention camps made the continuation of British rule untenable. Particularly significant was the Hola massacre of 1959, in which guards beat 11 detainees to death and the colonial government attempted to cover up the crime.
Outrage at these events shattered Britain’s grip on the colony, and Kenya achieved independence in 1963 under the leadership of Jomo Kenyatta.
A great deal is known about these detention camps. They were sites of neglect and brutal violence. Detainees were forced to go through a so-called rehabilitation system designed to make them renounce their support for Mau Mau.
In practice, they were subjected to brutal compulsory labour, were at risk of assault and lived in unhygienic conditions. Some of those who refused to cooperate ultimately faced systematic, state-sanctioned torture.
I am a historian researching punishment in Kenya, and I have been investigating the deeper history of detention camps. My research shows that this emergency detention system was shaped by an earlier network of “ordinary” detention camps. These were established in 1926 and processed more than 400,000 people before the uprising.
These camps, intended as a milder alternative to prison, evolved into a poorly regulated system characterised by exploitation, overcrowding and weak accountability.
These findings challenge the idea that the detention system of the 1950s was exceptional. Instead, it was rooted in long-standing colonial practices, shaped by economic incentives, administrative gaps and coercive labour systems.
Understanding this deeper history matters because it changes how we view the Mau Mau emergency. It proves that the brutal 1950s detention system didn’t just emerge from nowhere – it was built on a foundation of state violence and disorder that had been normalised for decades.
Influenced by a draconian-minded European settler minority, the Kenyan colonial government adopted a harsh approach to punishing the local population. Judges frequently imprisoned Africans for “technical” offences lacking criminal intent. These included failing to pay tax and minor violations of coercive labour laws.
By the 1920s, Kenya’s prisons were overcrowded and “technical” offenders inevitably mixed with hardened criminals.
In response, the colonial government introduced detention in 1926 as a supposedly milder alternative for technical offenders who had simply broken administrative rules. In theory, prisons were to be reserved for those who had committed crimes involving moral violation. In practice, however, these distinctions didn’t (or couldn’t) hold.
To visibly separate detention from imprisonment, the colonial government gave day-to-day control of detention camps to district commissioners (the powerful heads of local governments), not the prison department.
However, this separation was incomplete. Detainees were legally classified as prisoners (though they were not informed of this). The prison department retained ultimate authority over the camps.
This overlapping authority produced a gap in accountability, which ultimately proved disastrous.
In 1930, seeking to divert more people from formal prisons, government officials removed almost all sentencing restrictions on detention. Subsequently, the only limitations were that sentences had to be under six months and that those with more than one prior prison conviction were ineligible.
Numbers surged immediately, with more convicted offenders sent to detention than formal prisons almost every year until 1952.
Judges increasingly used detention for serious offences, including manslaughter. A limited criminal records system meant that individuals with prior convictions – sometimes as many as 16 – ended up in detention.
Conversely, the amendment did not stop harsh magistrates from continuing to send significant numbers of minor offenders to prisons.
This blurring of populations, combined with a lack of structural and legal separation, meant detention camps mutated into a parallel prison system: one serving a different colonial master, the district commissioners, but lacking any fundamental distinction from prisons.
Detention camp living conditions were atrocious. Most district commissioners delegated almost all duties to Kenyan African “overseers”. Overseers were under-trained. Yet they were expected to be on duty constantly and often had to guard more than 60 detainees, making meaningful supervision impossible.
Camps were generally collections of temporary wattle-and-daub huts. Over time, these decayed but were not replaced, resulting in squalid conditions.
Furthermore, overcrowding was endemic. Food rations were poor and basic facilities were often absent. Sickness rates were significant. Detainees responded by escaping at a rate of more than one a day.
In 1937, a high-level committee condemned the system as dangerous and inefficient. Calls for reform from London also grew.
But nothing changed.
Why?
The primary reason was economic. Detainees were a vital reservoir of free labour for cash-strapped district commissioners. When camps were introduced, local governments’ labour budgets were cut. This made detainee labour crucial for maintaining government stations.
In the late 1930s, penal officials sought to reintroduce stricter eligibility criteria for detention. However, they abandoned this idea as it would add to overcrowding in the prison system.
Trapped by bureaucratic gridlock, underfunding and economic dependency, Kenya’s detention system limped into the 1952 emergency – unreformed.
Ultimately, “ordinary” detention camps persisted until the 1980s, far outliving their emergency counterparts.
This history exposes stark continuities between the pre-emergency and Mau Mau penal systems. Furthermore, as they were under the control of district officials and lacked standard prison regulations, existing detention camps could, and did, easily become dumping grounds for Mau Mau suspects in the early months of the emergency. Ordinary detention was both a model and enabling mechanism for emergency detention.
Ian Caistor-Parker receives funding from the UK Economic & Social Research Council
Bobi Wine’s escape from Uganda is not just a striking episode in itself, it also offers insight into the current state of the opposition – particularly his National Unity Platform party – and into the divergences within the Yoweri Museveni regime.
The Ugandan opposition leader had been in hiding for almost two months after the January 2026 presidential election, which Museveni won with 72% of the vote. Wine came second with 25%. Museveni, 81, has been in power since 1986.
Wine, born Robert Kyagulanyi, entered formal politics in 2017 when he won a parliamentary by-election.
He soon emerged as one of the leaders of the People Power movement, a loose, generationally charged mobilisation built around the slogan “People Power, Our Power”. It took shape in the aftermath of protests against the removal of presidential age limits in 2018. At the time, the opposition appeared largely exhausted and unlikely to unseat the regime. Bobi Wine and People Power therefore brought a new energy to Uganda’s opposition.
People Power later formalised into the National Unity Platform party, which Wine used to vie for the presidency in 2021. He secured about 35% of the presidential vote against Museveni’s 59%. National Unity Platform became the largest opposition force in parliament with 57 seats.
These results also highlighted the constraints of electoral politics in the face of extensive repression.
This is a pattern that would again become apparent in the 2026 elections.
As several human rights organisations noted, the 2026 elections took place in an environment marked by widespread repression and intimidation.
After the vote, Wine went into hiding. He posted photos and videos seemingly from Kampala, triggering roadblocks and searches across the capital city. On 18 March 2026, he resurfaced in the United States.
I have researched Ugandan politics for over 20 years, and recently published an article analysing the structural challenges Wine’s political party faces in Uganda’s authoritarian context.
Drawing on this work, my reading is that Wine’s escape reveals controlled tensions within Museveni’s regime, where different factions appear to disagree on how to handle the opposition – without signalling a full split. At the same time, it exposes a deeper dilemma for Wine and his party: how to balance international advocacy with maintaining grassroots legitimacy at home.
This moment matters because it highlights the structural constraints facing opposition politics in Uganda, and raises questions about whether meaningful political change can occur within the current system.
The contrasting approaches within the Museveni regime are illustrated by events that followed the 2026 election. In the weeks following the vote, defence force chief Muhoozi Kainerugaba (Museveni’s son) issued a series of unusually explicit statements about Wine.
In a now-deleted tweet, he claimed that 22 members of the National Unity Platform – whom he labelled “terrorists” – had been killed. He added that he was praying that the next death would be Wine’s.
On 26 January, the defence chief escalated this rhetoric, stating that he wanted Wine “dead or alive”. These statements built on earlier threats, including a threat to behead Wine.
Taken together, they amount to sustained violent threats directed at the main opposition leader.
Read more: Uganda’s autocratic political system is failing its people – and threatens the region
Set against this, however, is the fact that Wine was able to evade capture for nearly two months and ultimately leave the country.
It emerged, according to regime insiders, that he did so with assistance from high-level state and security officials.
The same sources reported that intelligence services had informed Museveni of Wine’s whereabouts, and that the president chose not to act on this information.
Taken together, these events suggest differences within the regime: between factions in the security services, more broadly between Muhoozi and other centres of power, and potentially even within the first family itself.
But these differences should not be overstated.
The episode does not indicate an open or consolidated split. Criticism of Muhoozi within the regime remains tightly constrained.
What this suggests is a regime where disagreements are contained within narrow limits. Wine’s escape, therefore, points less to a rupture than to an ongoing negotiation over power and strategy within the ruling elite.
And this is becoming increasingly important in light of the anticipated transition beyond Museveni.
Wine’s political strength has always derived from his origins.
He was rooted in the ghetto, and more broadly among urban youth who had long been mobilised by opposition politics but rarely felt represented by it.
Earlier figures like Kizza Besigye could appeal to this group, but Wine embodied it. He spoke the same language and made politics feel accessible to people often treated as outsiders.
That sense of authenticity was central to the early momentum of People Power. It also mattered that Wine broke with a long-standing pattern in Ugandan politics: he did not come from the western region, the core of the ruling elite.
But this “outsider” appeal has become harder to sustain over time. As People Power turned into a political party, and as Wine himself became more embedded in formal politics and international networks, parts of that original base began to feel that something had shifted.
What once felt like a movement of “one of us” increasingly risks being seen as something closer to the political establishment it set out to challenge.
As my research shows, this is not unusual. It is a core dilemma when protest movements turn into parties, especially under repression.
The social media backlash to Wine’s appearance in the United States needs to be read through that lens.
It echoes not only Museveni’s criticism that Wine is an “agent of foreign interests”, but also radical voices within the opposition who argue that he should have stayed and faced the regime, even if that meant prison. Besigye, for instance, is facing treason charges after being abducted in Kenya and forcibly returned to Uganda in 2024.
This criticism echoes a longstanding divide within opposition politics in Uganda: should opposition leaders embody defiance on the ground, or navigate politics through institutional spaces?
Read more: The making and breaking of Uganda: an interview with scholar Mahmood Mamdani
Being in the US reinforces a growing perception that Wine is becoming more distant from the people who carried the risks on the ground.
If the party cannot connect its international advocacy and diaspora support back to the everyday struggles of its supporters in Uganda, this episode will likely deepen the feeling that the party has become more of the same.
There is an uncomfortable reality here. Wine serves a function for the regime. His presence helps maintain the appearance of political competition, particularly within the international community.
Wine now faces a choice. Engaging in electoral politics risks reinforcing the system he seeks to challenge. Stepping outside it risks isolation, repression or loss of political relevance.
How he navigates this tension will shape not only his political trajectory, but also that of his party.
Kristof Titeca does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Insects account for up to 90% of all animal species on the planet, and most of them can be found in the tropics, the regions around the equator. Yet we still know surprisingly little about how these species will cope with rising temperatures driven by climate change.
I am an animal ecologist, studying how organisms respond to climate change. My research aims to provide a better understanding about whether and how insects might be affected by heat. I was part of a team of scientists who studied more than 2,000 insect species along elevational gradients, from lowland areas up to high mountain regions, in Kenya and Peru.
Using a field experiment, we measured the heat tolerance of insects across many different groups. This is important because most previous studies either combine inconsistent datasets or focus on a single species.
Our goal was to understand how entire insect communities respond to heat. We looked at a large variety of insects, such as flies, bees, beetles, butterflies and grasshoppers, to name just a few.
We found that many are likely to face dangerous levels of heat stress. This was true even under conservative assumptions, including the possibility that species move into cooler habitats.
In parts of tropical Africa, where temperatures are already high and rising quickly, this could put large numbers of species at risk. Flies are the most vulnerable group. Our findings suggest that many tropical insects, especially those in lowland regions, are already living close to their thermal limits and may struggle to survive further warming.
Tropical regions, including much of Africa, contain the greatest diversity of insect life on Earth. The loss of these species would have far-reaching consequences for ecosystems, agriculture and human well-being.
Insects are essential to ecosystems – and to people. They play a key role in food production through pollination. While bees are the best-known pollinators, ants, flies and beetles also make major contributions. They are also nature’s recyclers. Dung beetles, for example, break down waste and carcasses, helping prevent the spread of disease and maintaining healthy soils.
Insects also form the backbone of food webs. They are both predators, such as dragonflies hunting mosquitoes, and prey for birds, reptiles and mammals. Without insects, ecosystems would quickly begin to fail.
To understand how insects respond to temperature, we used mountains in Kenya and Peru as “natural laboratories”. Despite being on different continents, these countries share key features: they lie in the tropics, have steep mountain ranges, and host exceptionally high biodiversity.
Air temperature decreases predictably with elevation, allowing us to study how species cope with different thermal conditions across short distances. In Kenya, this meant working from hot lowland savannas to cooler highland forests. The study was carried out from Watamu to Mount Kenya, including the Taita Hills. In Peru, the gradient ranged from the lowland Amazon region, including some parts of Manu National Park, to the high Andes close to Cusco.
Read more: African mountains are feeling the heat of climate change
We collected insects manually, trying to cover the entire community and include a large diversity across the major insect groups. For each single individual, we measured what is known as the critical thermal maximum: the temperature at which an insect loses motor control due to heat stress. To do this, we gradually increased temperature until each insect entered a heat coma.
We then compared these limits with real-world temperatures, using both field measurements and satellite data. This allowed us to calculate “thermal safety margins” (how close species are to the temperatures they can tolerate).
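In code, a thermal safety margin is simply the difference between a species’ heat limit and the hottest temperatures it actually experiences. A minimal sketch of that calculation, using invented temperatures rather than the study’s real field and satellite measurements:

```python
# Sketch of the "thermal safety margin" idea described above.
# All temperature values are hypothetical, for illustration only.

def thermal_safety_margin(ct_max, habitat_max_temp):
    """Margin (in degrees C) between a species' critical thermal maximum
    and the maximum temperature it experiences in its habitat.
    A small margin means the species lives close to its limit."""
    return ct_max - habitat_max_temp

# Hypothetical lowland vs. highland species
lowland = thermal_safety_margin(ct_max=42.0, habitat_max_temp=39.5)
highland = thermal_safety_margin(ct_max=40.0, habitat_max_temp=28.0)

print(f"Lowland margin:  {lowland:.1f} C")   # small buffer: higher risk
print(f"Highland margin: {highland:.1f} C")  # large buffer: lower risk
```

The pattern in the study follows directly: lowland insects tend to have small margins even when their absolute heat limits are high, because their habitats are already so hot.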
The results were striking: insects, such as dung beetles, from lowland areas already live very close to their thermal limits, while insects from higher elevations overall have a greater buffer, meaning they could possibly adjust to more heat.
We found that some insects can temporarily increase their heat tolerance through short-term physiological responses, such as producing heat shock proteins – special molecules that help protect and stabilise their cells when temperatures rise. But we found that this ability is limited in insects living at low elevations.
Insects from mid- and high elevations could slightly increase their heat tolerance after prior exposure to heat. In contrast, lowland insects showed little to no such capacity. This is worrying, because these are the species already experiencing the highest temperatures.
To understand why some insects tolerate heat better than others, we also used a deep learning model to predict the stability of their proteins.
We found that insect groups with more heat-stable proteins also had higher heat tolerance. For example, flies tended to have lower tolerance and less stable proteins, while grasshoppers showed higher tolerance and more stable proteins.
This suggests that heat limits may be constrained by fundamental protein architecture. This means that many species may not be able to evolve fast enough to keep pace with rapid climate change.
We then explored the potential consequences of future warming. Using climate projections up to 2100, we compared expected temperatures under different scenarios with the heat limits of lowland insects.
Extreme heat can kill insects within seconds. But even lower levels of heat stress can have long-term effects, reducing reproduction and causing population declines.
Insects can sometimes avoid heat by moving into cooler microhabitats, such as shaded vegetation, forest understories or soil. This makes habitat complexity critically important. In African landscapes, intact forests, savannas and shrublands can provide these thermal refuges. But habitat loss and fragmentation reduce these options.
Maintaining climate corridors, connections between cooler and warmer areas, will be essential to allow species to move and survive.
Our findings suggest that many tropical insects are already close to their limits. Without strong action to limit global warming and protect habitats, climate change could push them beyond survival. The consequences would not only be ecological but could deeply affect society on many levels.
Kim Holzmann receives funding from the Deutsche Forschungsgemeinschaft (DFG).
People in many parts of the world are worried about rocketing energy bills as the conflict in the Gulf continues. But for the majority of renters in Sweden’s apartment blocks, this is not so much of an immediate concern.
Part of the reason for this is that many buildings have communal laundries where washing machines and dryers (as well as water and heating) are provided and the cost is included in the rent.
In Sweden, nearly one third of all water and energy is consumed domestically, with two thirds of this through activities relating to cleanliness. Electricity for washing and drying clothes also accounts for a substantial share of residential electricity use.
Communal laundries are one of Sweden’s environmental success stories. They began as part of the post-war million homes project, when modern apartment blocks were equipped with shared tvättstugor (laundry rooms) instead of individual residents having to buy their own machines. These rooms usually have a handful of semi-industrial washing machines, dryers and drying rooms serving an entire building. Access is through a communal booking system, and use is free to residents of the building.
I live in the Swedish city of Lund. Since I met my husband 11 years ago he’s been the go-to person for laundry, and takes the loads down to the communal room. For our family, using this facility is convenient because someone else takes care of maintenance and servicing of the machines and we don’t pay extra for washing clothes. It’s all included in the housing association fee, which is negotiated for the building annually.
Tvättstugor rose to prominence during the Swedish government’s widespread building programme of the 1960s and 1970s, as part of a commitment to improving living conditions and creating a fairer society.
Clean running water, reasonably priced central heating and access to a laundry were part of a broader social project: raising living standards collectively, through shared infrastructure. This often meant that shared facilities such as laundry rooms and heating were included in the rent at no extra cost. As a result, many people living in apartment blocks dotted around Swedish cities don’t have to worry too much about hikes in energy costs for washing or heating if, as expected, household energy prices rise this summer due to the conflict in the Gulf.
Around 51% of Sweden’s housing is in these apartment blocks (2.3 million homes). And a survey of tenants in Sweden in 2020 found that around 53% have access to the tvättstuga.
If each household in Sweden had its own appliances, the material stock of machines – and future waste – would escalate quickly. A tvättstuga, by contrast, can serve dozens of residences with just a few semi-industrial machines that are built to last, maintained professionally and replaced strategically. It is a denser, leaner way of organising cleanliness.
Shared laundry spaces change how often we wash. Interviews and time-use data suggest that people with easy access to their own machine tend to wash more frequently, with smaller loads. If the washing machine is in the next room and energy and water are relatively cheap, it is tempting to wash “just in case”, or to avoid the minor inconvenience of airing clothes or dabbing away a single stain. When you have to book a slot, carry clothes down to the basement and work within a fixed time window, the calculation shifts. People batch their washing, fill machines properly and think twice before throwing something in after a single wear.
Communal laundries also make technological improvement easier. Upgrading a handful of machines in a shared space is far more straightforward than relying on hundreds of individual households to replace old appliances. Shared infrastructure can be a powerful lever: change the system once, and many people benefit.
But tvättstugor are also social spaces. Where I live the laundry room doubles as a small community centre. There’s a children’s book swap, a noticeboard with local events, and a steady trickle of neighbourly encounters. My husband has his gang of dads that he sees there every Sunday. They chat while folding, sharing tips about laundry liquid and life. Negotiations over booking times, cleaning lint filters and wiping benches are not always idyllic – there are passive-aggressive notes and the odd conflict – but they are also a form of everyday democracy. We learn, in a very concrete way, how to share resources, negotiate conflict, respect common rules and live together.
Two dads folding the washing in the author’s communal laundry area. Tullia Jack, CC BY
Despite the environmental and social benefits, communal laundries are disappearing from new housing schemes. Many municipal housing companies are not including tvättstugor in new builds. This is a shame because it’s not possible to solve the energy crisis individually.
We need shared infrastructures – from tvättstugor to public transport to district heating (a centralised heating system that distributes heat to a range of buildings). Sweden shows how these facilities can work in practice: these shared laundry rooms spread costs, reduce waste and nudge people towards sufficiency. Just as importantly, they give us a reason to meet, compromise and practice our negotiation skills. This can help us build the solidarity needed to tackle the climate crisis.
Tullia Jack receives funding from Formas, grant number 2024-02280.
The Iranian regime is certainly brutal. But it’s also powerful as it continues to project its might after a month of illegal air strikes by the United States and Israel.
Read more: Iran’s attacks drone on, with the U.S. at risk of losing the war
Iran is in the top 10 per cent of countries by size and population, has the third largest proven petroleum reserves and controls strategically crucial geography.
Furthermore, both the regime and many ordinary Iranians are prepared to defend the country. Since 1953, when the U.S. helped orchestrate a coup to overthrow Iran’s democratically elected Prime Minister Mohammad Mosaddegh, Iranians have understood they’re in America’s crosshairs.
This was especially true after the 1979 Islamic Revolution that overthrew the shah and during the U.S.-backed Iraq war against Iran that killed a million Iranians in the 1980s. As a result, Iran has spent decades beefing up and decentralizing its military capability.
In contrast, Dan Caine, chairman of the Joint Chiefs of Staff, warned U.S. President Donald Trump in February that the U.S. was short on both munitions and allied support for a war against Iran. Israel, America’s partner in war, is also short, especially in interceptor munitions. Trump and Israeli leader Benjamin Netanyahu dismissed the concerns, which suggests they planned a short war.
Critics have accused Trump of dragging the U.S. — or allowing it to be dragged — into a “forever war.” They include members of his MAGA base, a problem for Trump as he anticipates November’s mid-term elections.
One unconventional option that might expedite victory, discussed during Trump’s first term, is to use nuclear weapons against Iran. Trump has said nukes won’t be used, but he’s well-known for erratic reversals.
A nuclear strike might expedite surrender, but it took two strikes on Japan in 1945 before the Japanese capitulated. Failing an Iranian surrender, several strikes might be required to destroy the military capability distributed across Iran’s 31 provinces. Because many Americans would be appalled by a nuclear attack, putting the mid-terms at risk, the nuclear option is unlikely.
Much of the concern about Trump’s election machinations heading into the mid-terms is focused on the manipulation of procedures and officials. The legacy of the Jan. 6, 2021 attacks on the U.S. Capitol is one extreme possibility, as is manipulating the Iran war to achieve electoral gains.
Violent protesters, loyal to Donald Trump, storm the U.S. Capitol on Jan. 6, 2021. (AP Photo/John Minchillo)
Trump will probably lean into his rhetorical strengths and try to convince Americans the U.S. has won when it hasn’t. Claiming victory in the face of its absence is not new to him. Even in his second term, Trump continues to push the false claim that he won the 2020 election.
Consider the bizarre drama that started on March 21 when Trump and Iran exchanged dire threats. Then, out of the blue, Trump declared the existence of peace talks, which Iran denied. Perhaps they are imaginary talks on the way to an imaginary victory for Trump.
Read more: Why Donald Trump is such a relentless bullshitter
It seems clear Trump is planning to declare victory well ahead of the mid-terms — and in part because of them. Such a strategy would involve baiting opponents into “forever war” criticisms, only to ridicule them in stump speeches, generating the image of a president who finishes his wars.
A declared victory in Iran and a timely exit, in addition to the liberation of Venezuela and a possible Cuban coup, might all coalesce into potent election messaging for the Republicans.
Soon enough, Trump may echo former president George W. Bush’s premature proclamations about the Iraq War in 2003 by saying something like this:
“Major combat operations in Iran have ended. The United States and Israel have prevailed. We do not know the day of final victory, but we have seen the turning of the tide.”
If successful, he will secure two more years “like nobody’s ever seen before” of Republican congressional dominance.
In this May 2003 photo, U.S. President George W. Bush declares the end of major combat in Iraq as he speaks aboard the aircraft carrier USS Abraham Lincoln off the California coast. The war dragged on for many years after that. (AP Photo/J. Scott Applewhite)
The battle for November will feature a few competing narratives in the U.S. But there are four major hurdles for Trump in particular.
Whichever scenario prevails, Americans will likely lose. Their complete war costs could include repercussions from the unprecedented illegal bombing of Iran, as well as from unnecessarily turning regional allies into targets.
All of this is tied to what many Americans regard as increasing Israeli aggression, including the killing of 70,000 people in Gaza, which the U.S. has facilitated with funding, political cover and its widely mocked Board of Peace.
America’s democracy, economy and credibility are waning as Trump shamelessly pursues self-aggrandizement and self-enrichment.
“That makes me smart,” he might say, but only a failed leader serves his own interests at the expense of his country.
John Duncan is affiliated with Science for Peace, a charitable organization dedicated to popular education and research on the intersections of demilitarization, decarbonization and social justice.
In most bumblebee species, the queens spend their winters buried underground in a tiny cavity the size of a grape. For six to nine months, they enter a deep sleep-like state called diapause, waiting for spring.
As climate change brings more intense rainfall in many regions, these overwintering queens face increasing risks of unstable underground conditions, including flooding.
It’s a good thing, then, that these insects can survive days underwater without drowning. Remarkably, our new research reveals that they achieve this by continuing to breathe while submerged, for up to eight days.
We first discovered by accident that overwintering bumblebee queens can survive submersion.
During an experiment at the University of Guelph, some of the tubes in which queens were overwintering in the lab refrigerator inadvertently filled with water.
Initially, we assumed the queens had died. But after we emptied out the water, they began to move and soon recovered. This suggested that bumblebee queens might be able to survive submersion.
A queen bumblebee breathing underwater. (Charles-Antoine Darveau)
So, we designed a follow-up experiment involving 143 common eastern bumblebee queens (Bombus impatiens).
This confirmed it was no fluke: the queens withstood complete submersion for up to a week.
This raised an intriguing question: how can this terrestrial insect pollinator survive underwater? Answering it required a different approach — we needed to study their physiology.
The queen is the heart of a bumblebee colony and its only chance of producing the next generation. While we often hear the buzz of workers visiting flowers during the summer, queens are rarely seen. They spend much of the season inside the nest, laying eggs that will become workers and, later in the summer, males and new queens.
When winter comes, most members of the colony die and only the newly produced queens survive. After mating, these new queens disperse and burrow underground, each settling into a tiny cavity where they enter diapause.
When spring finally returns, the queens that survived their long underground sleep emerge from their burrows and begin the important task of founding a new colony.
Read more: Worker honey bees can sense infections in their queen, leading to revolt
To understand how these queens can survive submersion, we studied their breathing and metabolism in our lab at the University of Ottawa.
During diapause, queens are already in extreme energy-saving mode. The energy they need to stay alive (known as their metabolic rate) drops by more than 99 per cent. When submerged, energy needs drop even further. With such tiny oxygen demands, underwater breathing becomes possible.
But how did we determine whether queens are actually breathing underwater? One way is by measuring the gases exchanged with the surrounding water. We did this and the results were striking: queens continuously consumed oxygen and released carbon dioxide underwater throughout an eight-day period of submersion.
A bumblebee queen in her hibernaculum (underground burrow). (Sabrina Rondeau)
Many aquatic insects use a simple trick to breathe underwater. A thin layer of air clings to their body, allowing them to use their normal air-breathing system — the tracheal system. Oxygen from the surrounding water slowly diffuses into this air layer. Bumblebee queens likely rely on the same mechanism.
Still, underwater respiration alone does not fully meet the queen’s energy needs. To bridge the gap, queens also produce some energy through anaerobic metabolism — a process that does not require oxygen. This pathway produces lactic acid, which we detected in queens during submersion.
These physiological tricks allow queens to survive underwater, but come at a cost. After resurfacing, queens spend several days recovering, using far more energy than they would have if they had never taken the plunge.
Read more: Below freezing but still moving: How salamanders stay active in winter
Bumblebee queens spend the winters alone, buried underground and relying on stored energy to survive until spring. Their ability to tolerate days of submersion — and even breathe underwater — reveals an unexpected resilience to one of the hazards of life below ground.
This matters because bumblebee colonies depend entirely on the survival of overwintering queens. If a queen dies during winter, the colony she would have founded the following spring will never exist.
This ability to survive submersion could play an important — and previously overlooked — role in the resilience of threatened bumblebee populations.
Even in terms of familiar and comparatively well-studied insects like bumblebees, there is still so much to learn about the surprising ways they cope with environmental challenges.
Sabrina Rondeau received funding from the Natural Sciences and Engineering Research Council of Canada, the Fonds de recherche du Québec—Nature et technologies and the Weston Family Foundation.
Charles-Antoine Darveau receives funding from the Natural Sciences and Engineering Research Council Discovery Grants program.
Nigel Raine receives funding from the Natural Sciences and Engineering Research Council, the Horizon Europe ProPollSoil project, the Canada Foundation for Innovation Innovation Fund, the Ontario Ministry of Agriculture, Food and Agribusiness, the Canadian Wildlife Federation and the Weston Family Foundation.
About 70 per cent of the species on Earth are insects. They are fundamental components of most ecosystems: they comprise half of the biomass on the planet, pollinate flowers, decompose dead organic matter and play multiple roles in food webs. They are quite literally everywhere, including in and around our homes, but they have also been declining at alarming rates in many places.
The societal implications of this potential “insectageddon” could be catastrophic, including losses in human food production. However, confirming suspicions of global declines is difficult because we lack reliable data on insect populations in many parts of the world.
We simply don’t have the infrastructure around the planet that would allow us to track insect populations comprehensively. That means we don’t know how insect populations are responding to different global changes, and we might be failing to design effective conservation policies and to track whether current actions are working.
Efforts to rapidly generate global indicators of insect population trends are therefore crucial. In our recently published paper, colleagues and I explain how a global butterfly index could help track butterfly populations worldwide — and how we can reach this important objective.
One reason why insects have been neglected in conservation is that they are often ignored — if not feared — by many people. Many of us have been brought up to be cautious around creepy-crawlies, whether bees, spiders or other critters.
There is, on the other hand, broad interest in vertebrate species. Bird-watching has been part of human societies for hundreds of years. The fact that larger animals capture public interest has arguably stimulated global efforts to calculate indicators of trends in their populations, like the Living Planet Index by the World Wildlife Fund and other organizations.
Read more: What’s the difference between moths and butterflies? Look at their antennae
While insects have generally not benefited from the attention that other animals have received, butterflies are one exception to this rule. These insects, with their captivating patterns and colours, have long fascinated people and have been represented in many traditions across cultures.
Our love for butterflies is reflected in a substantial history of monitoring. In the 1970s, the British entomologist Ernest Pollard initiated the practice of recording butterfly populations on his butterfly walks in England. Fifty years later, hundreds of “Pollard walks” are done across Europe and in many other regions of the world.
Recording the presence of a species in an area is important work. However, equally fundamental are efforts that capture changes in insect populations over time. Nonetheless, a global synthesis of butterfly population monitoring programs has, to date, been missing.
Our recent paper fills that gap. We co-ordinated an international consortium with the goal of better understanding opportunities and challenges for calculating a global butterfly index that captures trends across butterfly populations worldwide.
Bringing together scientists from all continents except Antarctica, we were able to collate an incredible dataset including more than 45,000 population trends for over 1,000 butterfly species. We used this data set to assess how butterfly populations are changing worldwide.
Despite an unprecedented effort, we found that only populations of around five per cent of species worldwide have been monitored.
It’s important to note that the data set is mostly concentrated in Europe and North America and biased in favour of generalist species (those able to survive in diverse environments) as well as species easier to detect.
Nonetheless, we found that species are on average declining, and sensitive butterflies expected to suffer from global change tended to decline more steeply than the rest of our sample. Populations outside of Europe and North America were too sparse to support robust inferences.
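Multispecies indicators such as the Living Planet Index are commonly built as a geometric mean of individual population trends, so that a doubling in one species and a halving in another cancel out; a global butterfly index would likely aggregate trends in a similar spirit. A minimal sketch of that idea, using invented growth rates rather than our actual data:

```python
import math

def multispecies_index(growth_rates):
    """Geometric mean of per-species annual growth rates
    (1.0 = stable, below 1.0 = declining). Computed as the mean
    of logs, so halvings and doublings offset symmetrically."""
    logs = [math.log(r) for r in growth_rates]
    return math.exp(sum(logs) / len(logs))

# Invented annual growth rates for five hypothetical monitored species
rates = [0.95, 1.02, 0.88, 1.00, 0.97]
index = multispecies_index(rates)
print(f"Community index: {index:.3f}")  # below 1.0 indicates average decline
```

Working in log space is the standard design choice here: an arithmetic mean of raw rates would let a few booming species mask widespread declines.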
Read more: Butterflies declined by 22% in just 2 decades across the US – there are ways you can help save them
Developing this study left us with a few lessons learned. There is substantial work to do if we aim to calculate a truly global indicator of butterfly population trends.
For instance, many parts of the Global South will need support to swiftly develop national monitoring programs, and research in the tropics is needed to better understand what monitoring methods would work best in hyper-diverse regions.
The good news is that butterflies are already one of the most visible and monitored insect groups, which will ease the challenges associated with developing indicators of insect populations. Existing monitoring schemes can provide a template upon which new initiatives can be developed.
Ultimately, developing a global butterfly index will be key to providing long-overdue tracking of insect population changes. Crucially, it could also act as a flagship for broader insect conservation.
Governments are expected to set measurable biodiversity targets in line with their commitments under international agreements such as the Kunming-Montreal Global Biodiversity Framework. However, insects remain largely overlooked in these targets, and it’s impossible to set meaningful targets without robust indicators.
Developing a robust butterfly index is therefore fundamental to help guide conservation and to better understand the scale of the biodiversity crisis, as well as to communicate it to the public.
Butterflies carry a strong emotional value. That can help build support for conservation in a way that less appreciated insects cannot achieve.
Our consortium is helping to create such momentum: this year, members of our team are kick-starting a Global Butterfly Week and conversations around formalizing an international organization are under way.
We are hoping that colleagues interested will join us for the next iterations of these projects. Please reach out.
Federico Riva has received funding from the Horizon program.
Before dawn on March 1, 2026, Iranian Shahed drones struck two Amazon Web Services data centers in the United Arab Emirates. A third commercial data center in Bahrain was hit, though it is less clear whether it was deliberately targeted. Iran has also indicated that it considers commercial data centers to be targets.
This is the first time that a country has deliberately targeted commercial data centers during wartime. Data centers have been targets of espionage and cyberattacks in the past, notably when Ukrainian hackers destroyed data stored in a Russian military-affiliated data center in 2024. This, however, was a physical attack. Drones damaged buildings.
Advances in artificial intelligence have increased the importance of data centers. The U.S. military, in particular, has made great use of AI systems for decision support in its attacks on Iran and Venezuela. Given how important data centers are, Iranian forces could be targeting the infrastructure Iran’s leaders believe is supporting strikes on Iran.
It is not altogether clear that these particular data centers were used by the U.S. military. Instead, the attacks may have been part of a broader effort to punish the United Arab Emirates for its ties with the U.S.
In my experience as a Ph.D. candidate at Georgia Tech studying how technology drives changes in international security, I don’t think the attacks signal any significant change in the nature of warfare. But they are forcing nations to recognize that data centers are targets of war – even if they don’t directly support military operations.
The United States military is increasingly incorporating advanced AI capabilities into its decision support systems. From the operation to capture Venezuelan President Nicolás Maduro to supporting military strikes against Iran, the U.S. has been using AI, especially Anthropic’s Claude, for intelligence analysis and operational support.
AI is unlocking faster ways to carry out operations in war, but the AI tools the military uses are often not located on a plane or ship. When a service member uses Claude, the computing infrastructure that powers the model and its analysis typically resides in a secure Amazon Web Services cloud that hosts secret government data and software tools.
Commercial data centers are where the cloud lives. The next time you pull up Netflix and watch your favorite shows, you are likely streaming the programming from a data center, possibly one run by AWS. When AWS data centers go down, outages affect all sorts of entertainment, news and government functions.
With AI as a driver of economic growth, data centers are key forms of infrastructure. They ensure that AI can continue to run, as well as much of the underlying internet that governments and industry rely on. When Iran attacked the UAE’s data centers, it caused widespread disruption to the local banking system.
Commercial data centers enable most of the technology that runs the modern world, including AI systems. Disrupting them is key to disrupting the military and society of a country. Given that AWS provides and operates many of the commercial data centers where the cloud lives, it is likely that its data centers will continue to be targeted in conflict.
Researchers at Just Security noted on March 12, 2026, that the United States requires cloud-computing service providers to store government and military data within the U.S. or on Department of Defense bases: “Moving such data to Amazon data centers in the Gulf region would require special authorization; we are unaware if that has been granted.”
Nevertheless, Iran’s Islamic Revolutionary Guard Corps claimed the strikes were against data centers supporting “the enemy’s” military and intelligence activities. And 10 days after the initial attack on the data centers, an Iranian news agency claimed that major tech company data centers and other physical assets in the region were considered “enemy technology infrastructure.”
Instead of military reasons, Iran may well have targeted the UAE to rattle the global economy and garner attention. Given the prominence of the Gulf as a major recipient of U.S. technological investment, the attack may also have been a symbolic one aimed at the heart of U.S.-Gulf cooperation. AI infrastructure such as commercial data centers is a growing part of U.S. leadership in the region, and this war could jeopardize the future of AI infrastructure in the Gulf.
This model shows a massive data center, part of the Stargate project involving U.S. tech companies, currently under construction in the United Arab Emirates. Giuseppe CACACE/AFP via Getty Images

Though data centers are increasingly important for national security, the economy and society at large, it can be tempting to suggest these strikes represent a fundamental shift in the nature of war. While that is a possibility, it is important to remember that Iran launched thousands of missiles and drones at targets in the UAE. Though the vast majority were intercepted, the two that struck data centers are a small portion of the ones that got through to civilian targets in UAE territory, including strikes on airports and hotels.
The relative vulnerability of commercial data centers – they are large, relatively fragile and lack dedicated air defenses – suggests that the ones in the UAE may have been targets of opportunity or convenience. In other words, they were hit because they could be hit.
Nevertheless, it seems likely that as the use of AI tools and other cloud-based resources continues to grow in importance for countries around the world, commercial data centers will be targets in future conflicts.
Dennis Murphy is affiliated with Georgia Tech, the Georgia Tech Research Institute, the RAND Corporation, the Notre Dame International Security Center, and the Astra Fellowship. He previously was affiliated with Lawrence Livermore National Lab, Marine Corps University, and the Cambridge University ERA Fellowship.
In May 2025, the U.S. Department of Justice began sending letters to state governments demanding copies of statewide voter registration lists. The request was unprecedented: It demanded not only publicly available voter data, such as names and addresses, but also sensitive information, including driver’s license and Social Security numbers.
That data is considered highly sensitive because it can be used to commit identity theft, access financial or government records, and facilitate targeted harassment or intimidation, particularly if the data were mishandled or leaked.
Underlying these requests is the Trump administration’s stated goal of rooting out fraudulent and illegal voting. With voter data in its hands, the DOJ seeks to identify ineligible voters and mandate state election officials to remove those voters from the rolls.
States have responded in a variety of ways. Some have fully complied with the requests, some partially complied, and many outright refused to provide any voter information. For the latter states, the Trump administration has taken the fight to court and sued to get the information, claiming that federal law requires the states to hand it over.
The majority of cases are still going through the courts.
I’m an election law scholar who focuses on election administration. This battle over voter data has raised numerous questions about the Trump administration’s motives, the legality of its actions and, more generally, the role of the federal government in election administration.
The DOJ has a tough road ahead in convincing election officials and judges across the country that all of its demands in these cases are constitutionally legitimate.
States have exclusive authority to govern and administer state and local elections. The federal government, on the other hand, historically has played a much more limited role in election regulation and administration. By constitutional design, Congress may regulate only the “time, place, and manner” of federal elections – in other words, the procedural elements of elections for federal offices.
And even then, states hold concurrent authority to regulate federal elections.
Nevertheless, in his second administration President Donald Trump has sought to expand the federal government’s control over elections. In February 2026 he called on Congress to “nationalize” elections. He has also made passage of the SAVE America Act an administration priority; the bill would require states to turn away any voter who lacks documentary proof of U.S. citizenship.
Trump’s initiatives apparently stem from conspiratorial allegations that the 2020 presidential election was rigged against him, resulting in fraudulent and illegal voting that gave Joe Biden the presidency. And they are ultimately what animates the DOJ’s crusade for voter information from the states, with Attorney General Pam Bondi having recently stated that “accurate, well-maintained voter rolls are a requisite for the election integrity that the American people deserve.”
So far, the DOJ has sent requests to at least 48 states and the District of Columbia demanding their complete voter registration lists – information on every individual registered to vote in the given state.
In doing so, the DOJ has asked the states to sign onto an agreement under which they agree to remove within 45 days any voters that the DOJ flags as ineligible. But by signing this agreement, a state is effectively handing over the administration of its voter rolls to the federal government.
Only 12 states – Alaska, Arkansas, Indiana, Louisiana, Mississippi, Nebraska, Ohio, Oklahoma, South Dakota, Tennessee, Texas and Wyoming – have fully complied with the requests, handing over to the DOJ private information such as the driver’s license and Social Security numbers of their registered voters.
Five states, meanwhile, have provided publicly available voter information – name, address and party affiliation – to the DOJ while withholding more sensitive information. The remaining 31 states of the 48 to receive requests, along with the District of Columbia, have refused to give any voter list to the federal agency.
The DOJ has sued 29 states for refusing to hand over voter lists and has also sued the District of Columbia, sparing only Iowa, Alabama and South Carolina. Only one sued state – Oklahoma – has thus far capitulated to the DOJ.
In these lawsuits, the DOJ cites three legal sources that supposedly give the agency the right to request voter information from state officials.
First, the DOJ points to a provision of the National Voter Registration Act of 1993 that requires states to “make available for public inspection” all records necessary to ensure the accuracy of their voter registration lists. As critics note, though, this provision does not require states to reveal sensitive voter information. All 50 states are, in fact, currently in compliance with the act’s mandate.
Second, the DOJ invokes the Help America Vote Act of 2002 and its requirement that all states must maintain a computerized, statewide voter registration list. Nevertheless, no provision in that law provides explicit authority to the federal government to request these registration lists from state officials.
Finally, the DOJ has argued that the states have an obligation under the Civil Rights Act of 1960 to comply with the agency’s demands. Specifically, Title III of the act permits the U.S. attorney general to request for inspection “all records and papers” kept by state election officials relating to “any application, registration, payment of poll tax, or other act requisite to voting.”
While perhaps the strongest of the three arguments, that title of the Civil Rights Act goes on to require the attorney general to offer a “statement of the basis and the purpose” of their request.
In the DOJ’s requests to states, Bondi has apparently provided zero justification as to why the states must hand over sensitive voter information to the DOJ. Indeed, any stated purposes appear unrelated to the Civil Rights Act’s aims of combating racial discrimination.
Attorney General Pam Bondi wants the states’ voter information because, she says, ‘accurate, well-maintained voter rolls are a requisite for the election integrity that the American people deserve.’ J. Scott Applewhite/AP Photo

There are further legal questions regarding whether the states could even comply with the DOJ’s proposed 45-day deadline for removing declared ineligible voters.
For example, the National Voter Registration Act forbids states from removing people from the voter rolls in certain instances without first providing notice and waiting two federal election cycles – a timeline well beyond 45 days.
In the 29 targeted states, federal courts have thus far dismissed four lawsuits in California, Georgia, Michigan and Oregon. Oklahoma, as noted above, has settled its case with the DOJ. While the remaining lawsuits have yet to fully play out, the DOJ likely faces less-than-sympathetic judges in these cases.
Even if the DOJ loses in court, though, the federal government may continue attempting to receive states’ voter information through other means.
The SAVE America Act, for instance, currently under consideration in the U.S. Senate, contains a provision that incentivizes states to submit their voter registration lists to the U.S. Department of Homeland Security on a quarterly basis or otherwise subject their residents to stringent voter ID laws. Should Congress pass the act, the executive branch would have much clearer federal authority to force voter data from state election officials.
John J. Martin does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Winter is more than just a season in the western U.S. – it is a savings account to get farms and homes through the long, dry summer ahead. As the snowpack that accumulates in the mountains through winter slowly melts in late spring and summer, it feeds into rivers and reservoirs that keep communities and ecosystems functioning.
The April 1 snowpack measurement has long been the single most important number in western water management, considered a strong proxy for how much water the mountains are holding in reserve.
But in 2026, that savings account has been woefully deficient.
Across the western United States, temperatures from November through February were among the warmest on record, with many areas 5 to 10 degrees Fahrenheit (2.8 to 5.5 degrees Celsius) above the 20th-century average. March continued to break heat records, leaving California’s snowpack at just 18% of normal on April 1. At lower elevations, the higher temperatures meant a significant part of the winter’s precipitation fell as rain rather than snow. In some places, snowfall accumulated but melted quickly during warm periods.
The total area of the western U.S. with snow cover was exceptionally low compared with the rest of the 21st century. National Snow and Ice Data Center

As a result, even regions that received near- or above-normal precipitation for the season failed to build substantial snowpack. In the northern Rockies and the mountains of the Pacific Northwest, any above-average snow accumulation was largely confined to the highest elevations, while middle and lower elevations had relatively little snowpack.
This situation is a hallmark of warming winters. As global temperatures rise, the freezing line where precipitation changes from rain to snow moves up the mountains, shrinking the area capable of sustaining a seasonal snowpack.
At the vast majority of the U.S. Natural Resources Conservation Service’s snow measurement stations across the West, the snowpack’s snow-water equivalent on March 30, 2026, was less than 50% of the 1991-2020 median. Natural Resources Conservation Service

Temperatures were well above the 20th-century average across the western U.S. in winter 2025-26. National Centers for Environmental Information

The exceptionally warm winter of 2025–26 across much of the western U.S. delivered a powerful preview of what the regional water cycle in a warmer climate may increasingly look like: less snow and a fundamental reshaping of the hydrograph – the chart of how much water flows through streams across the year.
The consequences of this shift for water supplies are already visible in streamflows.
In multiple river basins in the West, streamflows were above average in winter and early spring, and some locations were approaching record-high levels. Historically, that water would have remained frozen in the snowpack until late spring. Instead, precipitation arriving as rain – along with intermittent midwinter melting events – increased the runoff.
Scientists who study natural water flows, as I do, pay attention to the hydrographs of streamflows in river basins to see when the water flow in mountain streams is strongest and how long that flow is likely to continue into summer.
This hydrograph showing two years of water flows in the St. Mary River near Babb, Mont., reflects the difference between a typical late-spring peak, as 2025 saw, and several midwinter peaks from warm temperatures and rain, as 2026 is seeing. U.S. Geological Survey

In recent years, rising temperatures have led to a redistribution of streamflows throughout the winter and early spring in ways that are fundamentally reshaping the hydrographs of snowmelt-dominated rivers. Rather than a single dominant peak during late spring or early summer, smaller peaks emerge in winter and early spring. At the same time, the traditional snowmelt pulse, relied on to fill reservoirs in late spring, weakens.
In effect, the hydrograph is flattening. The winter of 2025–26 illustrates this phenomenon: Higher early-season streamflows suggest the West will see less runoff later in the year when communities, farms and wildlife need it.
Nowhere does the convergence of record warmth, depleted snowpack and altered hydrology carry higher stakes than in the Colorado River Basin. More than 40 million people in seven states plus Mexico and 5.5 million acres of farmland depend on the river’s water, but the river’s flow is no longer meeting demand.
The April-through-July 2026 runoff into Lake Powell – the reservoir behind Glen Canyon Dam and the primary index of the Upper Colorado River Basin’s annual water budget – is currently forecast to rank among the lowest in recent decades. It has been tracking close to the grim years of 2002 and 2021, considered benchmarks of western drought.
Unless spring brings substantial late-season snowfall to the high mountains, 2026 could join those years as a marker of how thin the margin between water supply and demand has become in a river system already under sustained stress from two decades of drought and water overuse.
The low reservoir levels in the basin in 2026 and the low snowpack are adding fears of water shortages just as the seven states that rely on the Colorado River are struggling to reach a new water use agreement.
The winter of 2025–26 highlights two emerging realities.
First, temperature is increasingly dominating precipitation in determining western water supplies. Even above-normal precipitation cannot compensate for persistent warmth when it falls as rain rather than snow and accelerates snowmelt in the mountains.
Second, the nature of the West’s streamflows is shifting in ways that complicate water management.
Rain-on-snow events can produce flooding in winter, as the Seattle area saw in late December 2025. A low snowpack also means less runoff in summer, which can exacerbate water shortages and raise the wildfire risk as landscapes dry out. Even if a year has normal precipitation, if it falls as rain or there is earlier snowmelt, then evaporation through summer, in a warmer climate, will leave less water in the system.
Snowpack declines, earlier runoff, elevated winter flows and flattened hydrographs are all consistent with long-standing projections for the western United States as global temperatures rise.
What makes the winter of 2025-26 notable is how clearly these signals appeared, even in a year without widespread precipitation deficits.
This shift highlights the need for adaptive reservoir operations – the ability to adjust water storage and release decisions in real time to capture earlier runoff and preserve water for longer dry seasons, while still maintaining space in reservoirs for flood control during wetter winters. For communities across the West, it also reinforces the growing reality that the familiar seasonal rhythm of mountain water is changing.
This article, originally published April 1, 2026, has been updated with California’s April 1 measurements.
Imtiaz Rangwala receives funding from USGS, NOAA, NSF and USDA. He is affiliated with Boundless In Motion.
On a summer morning a couple of years ago, we went for a hike on the fabled Bright Angel Trail, one of the most popular trails in Grand Canyon National Park.
As scholars of tourism and outdoor recreation, our conversation inevitably turned to the visitor experience at the Grand Canyon and a question that has plagued the parks since their inception: Are the national parks overcrowded?
It’s not a simple question, but we believe that people have been answering it incorrectly for over a century, as many Americans have come to a decisive conclusion: Yes. Of course, some specific locations within parks can get overcrowded, and some people are more sensitive to crowding. But often people wrongly assume that a busy park means a crowded one, or that having other people around is inherently bad.
Old Faithful in Yellowstone National Park is a major attraction that many visitors to the park want to witness. Neal Herbert/National Park Service

As far back as 1935, one of the founders of the American wilderness movement, Aldo Leopold, described the national parks as “over-crowded hospitals trying to cope with an epidemic of esthetic rickets.” Presidents and Congress have discussed the issue, as have most major newspapers in the nation at one time or another. Magazine headlines and research articles have decried “overcrowding” and the “parks being loved to death.” Our own research fields have fueled this obsession with crowding.
However, our analysis, as well as work by others, has found that being in public parks and natural environments with other visitors is a powerful opportunity to enhance experiences in these places rather than detract from them. We note that the assumption that nature is best experienced in solitude or only within one’s own group reflects a generally western and elitist perspective and does not align with how people often actually experience parks or the quality of their experiences in the presence of others.
Watching the sunrise on Cadillac Mountain in Acadia National Park in Maine can be an enjoyable collective experience. Kent Miller/National Park Service

Hiking on the Bright Angel Trail that morning in 2023, we noticed vignettes that challenge the crowding assumption. The trail was busy. It was early, and the south wall of the canyon was still blanketed in shade. Two families, one hiking up, the other hiking down, were chatting and sharing insights at a bend in the trail.
A young hiker shared her binoculars with an elderly man, focusing them on a distant, rare desert bighorn sheep. An older couple smiled as a gaggle of kids ran uphill and past them, racing to the ice cream promised to them on the canyon rim. Another hiker pointed out a California condor sailing above. Together, we marveled at its beauty.
Despite the number of people on the trail, we did not feel crowded. Rather, we felt a spontaneous sense of community and togetherness. Anthropologist Victor Turner called this feeling “communitas,” a shared experience that a person might encounter during a pilgrimage, surrounded by other travelers with a shared journey and goal.
Others have written, too, about the value of experiencing national parks in the presence of others, finding a shared sense of awe and affirmation of values. Visitor surveys conducted in the Cleveland National Forest near San Diego in summer 2025 show that the more people experience communitas, the less likely they are to feel crowded.
In fact, in 1922, more than a decade before Leopold’s lament about “over-crowded hospitals,” conservation advocate Robert Sterling Yard described the sharing of space and time as crucial elements of the national park experience. He noted that a person “sits around the evening camp fire with a California grape grower, a locomotive engineer from Massachusetts, and a banker from Michigan.” He continued: “Here the social differences so insisted upon at home just don’t exist. Perhaps for the first time, one realizes the common America – and loves it.”
It is important to remember that each person has the power to shift how they react to the number of people around them in parks. Being crowded is not an objective state; it is a negative perception of social conditions. For instance, when people see visitors as different from themselves, they are more likely to feel crowded. But by recognizing that tendency and reframing the interaction, people are more likely to find communitas.
Communitas, too, is in the eye of the beholder. When people view social interactions in parks as opportunities to share in a larger collective experience, they open up the possibility of experiencing communitas.
This summer, we urge you to enter America’s national parks with perhaps a different perspective. Experience the collective awe of watching Old Faithful erupt with a few hundred of your fellow pilgrims in Yellowstone National Park. Find wonderment in sharing a sunrise from Cadillac Mountain in Acadia National Park or sunset from Olmsted Point in Yosemite National Park. Seek the power that comes from seeing the Grand Canyon for the first time, together.
Most importantly, be the person who points out that condor, gives a neighboring camper that extra hot dog, or takes that photo for another family. Find your communitas, help others to do the same and, together, consider the grandeur of the shared experience of America’s national parks.
Will Rice receives funding from the National Park Service and USDA Forest Service.
Bing Pan receives funding from National Park Service, Department of Interior, USA.
When Michael Jackson died in 2009, his fairly straightforward 5-page will left everything he owned to a family trust – an estate planning technique for giving away property that allows for privacy. The trust benefits Jackson’s three children and his mother, but nearly two decades later, Jackson’s estate, now worth an estimated US$2 billion, still hasn’t been fully distributed to the trust.
The most recent of many legal skirmishes to come to the public’s attention involves Paris Jackson, Michael Jackson’s daughter. She is asking a court to take a closer look at how the pop icon’s estate is being handled by its executors – the people responsible for managing it.
Paris Jackson has accused executors John Branca and John McClain of paying themselves and the estate’s lawyers too much, and of leaving $464 million owned by the estate uninvested. If that’s true, it would mean there is less money than there should be left over for her and her father’s other heirs. Branca is an entertainment lawyer, and McClain is a music executive.
Both were selected by Michael Jackson and named as executors in his will. They have repeatedly disputed Paris Jackson’s allegations and asserted that Paris has received at least $65 million in payouts from the estate.
Paris Jackson also has accused Branca of misusing his position as producer of “Michael,” an upcoming Michael Jackson biopic reportedly financed by Jackson’s estate, to cast an A-list celebrity – Miles Teller – to play the role of Branca himself in the film. According to Paris, the casting choice was costly and unlikely to increase box office revenue.
Paris Jackson has also stated that the $150 million film is a “botched production.” The executors have responded by arguing that the application of their expertise to other productions about the singer has already provided a huge payoff to the estate. The executors also recently won a court battle against Paris Jackson that ended with a judge ordering her to pay their attorney’s fees in a related dispute.
As law professors who study the transfer of property after death, we find that when disputes over inherited wealth become national news, they are often difficult to understand because this type of legal process is obscure and most people never interact directly with the probate court system.
This case illustrates what happens to property after death, even if the dispute is unusual due to the unique assets involved.
Michael Jackson poses with his friend, real estate developer Mohamed Hadid, Hadid’s children and Jackson’s children – Michael Joseph Jr., left, Paris, center, and Prince Michael II, second from right, in 2008. Mohamed Hadid via Getty Images

When someone dies, whether or not they’re a celebrity, any property they owned usually goes through a legal process called probate.
Probate is a court process that’s designed to notify everyone who may have an interest in the estate and to make sure that all property the dead person owned is handled properly. The court oversees the collection of assets, the payment of debts and taxes, and the distribution of any remaining assets to heirs.
This process can be completed in roughly one year for typical estates that do not contain unusual assets or erupt into litigation. But when the estate is large, complicated or disputed, probate can last for years or decades.
One of us, Reid Weisbord, co-authored a study of probate cases in San Francisco and found that the average estate remains open for a year and a half, and hotly contested and complex cases tended to linger in the system for two years or longer.
In one of the most extreme examples, resolving probate disputes over the estate of actress and model Marilyn Monroe took more than 40 years after her 1962 death.
When people draft their wills, they typically name one or more executors.
Most people who do that choose a child, grandchild, spouse or sibling to serve in that role. On occasion, people choose a lawyer or other professional to serve as executor. That’s what happened in Jackson’s case.
Being an executor for the man who revolutionized pop music after a successful run as a child star is even more complex than it would be for most huge estates because it includes music rights, business interests and licensing agreements that continue to earn money.
Like other executors in this situation, the men handling Jackson’s estate have hired lawyers, accountants and other professionals to assist them. The cost of paying for those professional services comes out of the estate. In this case, Paris Jackson is complaining that the compensation paid to executors of her father’s estate has been excessive. According to her legal complaint, they were paid more than $148 million through the end of 2021, a number that “dwarfs any amount distributed to Paris or her siblings.”
To be sure, the Jackson case is an extreme example of probate battles. But about 1 in 9 estates are legally disputed for a wide range of reasons.
Executors have many important responsibilities. They must find and protect the dead person’s property, pay their estate’s debts, file tax returns, manage investments and eventually distribute property to the estate’s heirs.
The law says executors must act in the best interests of the estate and its beneficiaries. This is called a fiduciary duty, meaning they must act carefully and honestly.
In real life, it’s hard for executors to be completely neutral.
If the executors do not stand to inherit anything from the estate, they usually expect to be paid for their work. Managing an estate, especially a large one, can take years and require specialized skills.
If the executor is also a beneficiary, meaning they are named in the will or an associated trust, the situation can be even more complicated because they have a personal financial stake in the outcome. Even if they act in good faith, heirs and other people named in the will may question their decisions.
This kind of conflict of interest is often unavoidable, but it is one reason why disputes over fees and decision-making are so common.
Revenue from Michael Jackson’s intellectual property, including the world-touring ‘MJ The Musical’ and a new biopic, is still flowing into his estate. Dia Dipasupil/Getty Images

Disputes over executor pay are not unusual. But this case stands out because of the type of spending being challenged.
Jackson’s estate is not just collecting his assets and then distributing them. It is actively managing a complex portfolio of intellectual property rights that includes movies, music deals, publicity rights and other business ventures.
That raises a question that can be hard to answer: Are some expenditures from the estate benefiting those managing the estate rather than those who inherit from it?
Paying top lawyers or investing in a film could increase the estate’s value. But Jackson’s relatives may see those same decisions as unnecessary or excessive.
Paris Jackson’s latest legal challenge reflects this tension. Executors get broad power to run an estate, especially one that operates like a business. But they must still justify their decisions to the people who will inherit the estate’s assets once it has settled. That’s why the choice of executor is so important.
As this dispute moves forward, the court will continue to supervise the process, which is helpful when the parties cannot agree on how to settle an estate. In the end, the case highlights a basic truth about probate: Even after death, managing wealth can be complicated, slow and deeply contested.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
When the NFL draft comes to town, the host city’s hotels, bars and restaurants fill, while its downtown gets three days of national exposure.
Detroit’s 2024 draft drew more than 775,000 fans and generated a reported US$214 million in economic impact, including $161 million in visitor spending, according to The Associated Press. Visitor spending is the money directly spent by visitors coming into the city for the event, whereas the economic impact includes the ripple effect of money circulating before, during and after the event – like restaurants buying more food from suppliers, hotels hiring extra staff, and vendors purchasing additional inventory.
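The gap between the two Detroit figures reflects a simple spending multiplier: the total economic impact is the direct visitor spending plus the ripple effects it generates. A minimal sketch using the reported numbers (the multiplier here is derived for illustration only; it is not an official methodology):

```python
# Reported Detroit 2024 NFL draft figures (via The Associated Press).
visitor_spending = 161_000_000   # direct spending by visitors to the city
economic_impact = 214_000_000    # total impact, including ripple effects

# Implied multiplier: each dollar of direct visitor spending is credited
# with generating this many dollars of total economic activity.
multiplier = economic_impact / visitor_spending

print(f"Implied multiplier: {multiplier:.2f}")  # Implied multiplier: 1.33
```

By this rough arithmetic, every visitor dollar is counted as producing about 33 cents of additional circulation among suppliers, staff and vendors.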
Pittsburgh is set to host the 2026 NFL draft April 23-25. According to Steelers executive Dan Rooney III, the event could bring the city 500,000 visitors and an economic impact of $200 million.
The arrival of the NFL draft requires coordinated planning across public safety, transportation and health systems to manage massive crowds safely. Icon Sportswire via Getty Images

Economic impact almost always leads the news when a city lands the NFL draft. The first numbers that people tend to hear come from team officials, city leaders and local boosters who project visitors, spending and exposure.
My academic work examines emergency planning and safety in different sporting environments. The question I often ask is not how much money an event like the NFL draft brings in, but what it takes to deliver it safely.
Preparing for an event of this scale requires careful planning and real public costs.
The NFL draft is not just a fan event. Like a marathon, a championship parade or a major outdoor concert, it is a mass gathering that must be planned as a public safety and emergency response operation.
Research shows that such large-scale events can increase demand for emergency services, strain local systems and require careful coordination across agencies far in advance.
Ahead of the 2026 draft, Pennsylvania officials have described months of preparation involving emergency management, law enforcement and transportation agencies. That means walk-throughs of event spaces, traffic planning, risk assessments, scenario-based drills and testing how agencies would communicate during a disruption or emergency. One public example came in January when Pennsylvania State Police landed a helicopter at Point State Park for training.
In my experience, large crowds in dense public spaces require more than a security presence. They require systems that can handle routine medical issues such as dehydration, falls or minor injuries, while preserving enough ambulance, emergency department and first-responder capacity to respond if a serious emergency occurs. They also require agencies to coordinate quickly when conditions change. Most of that work unfolds before the cameras arrive.
Protocols used in sports medicine can offer a clear example of what reliable preparedness looks like.
Emergency action planning in athletic training revolves around everyone knowing their jobs, how to communicate, where to find equipment and how to transport patients. The aim is a response that is coordinated, rehearsed and dependable under pressure.
It is this system that allows the roughly 30 medical personnel on an NFL sideline to function as one unit rather than as individual responders. Scale that logic up to a city like Pittsburgh, filled with hundreds of thousands of people, and the same questions emerge: Where will care for fans be delivered? What can be treated on-site, and what requires ambulance transport? How will patients reach a local hospital when roads are closed or crowds disrupt access? And if weather, security concerns or a serious medical incident interrupts normal city flow, who communicates what, and to whom?
Detroit’s draft showed how large and complicated that operating system can become. The event covered roughly 2 million square feet (186,000 square meters), or about 46 acres – roughly the size of 35 football fields.
At that size, the draft is better understood as a temporary urban system layered onto an existing city than as a traditional fan festival. Crowds have to be directed, access points controlled, medical teams positioned and emergency routes protected. The challenge, then, is not merely attracting visitors. It becomes designing a citywide plan capable of absorbing them without overloading the systems responsible for keeping them safe.
Hosting the NFL draft will require Pittsburgh to implement coordinated crowd control strategies. Icon Sportswire via Getty Images

For example, road closures and congestion are often treated as inconveniences or side effects of a major event, but they are also part of the emergency response environment. Gatherings of hundreds of thousands of people disrupt normal traffic, which can complicate emergency medical services operations. Transportation planning is therefore less about convenience than about clinical risk management because time to care depends in part on whether access routes remain usable when they are needed most.
A staffing plan can be excellent on paper, but if emergency vehicles cannot move efficiently, response times slow down. Traffic design can affect care delivery just as much as placement of medical teams and first responders does.
Once the draft is understood as a public safety operation, the question of how much it costs looks different, too.
Pittsburgh City Council approved $1 million in funding for the 2026 NFL draft. State and local agencies have also been planning security, transportation and emergency response operations well in advance.
Ahead of the 2025 draft in Green Bay, Wisconsin state lawmakers sought $1.25 million to reimburse local law enforcement and fire departments in the Green Bay area for part of their event-related costs. After the draft had ended, Gov. Tony Evers announced that the state would give an additional $1.8 million to the city of Green Bay, the village of Ashwaubenon and Brown County to cover security and other public safety costs associated with hosting the event.
Those figures do not cancel out the economic upside of hosting the NFL draft, but they do show that significant public resources are used to make that upside possible. Staffing, overtime, EMS staging, traffic control and interagency coordination are integral to the event, not background details.
What makes the full public cost harder to pin down is that not every expense is disclosed or presented in one place. In Pittsburgh, for example, state, county and city officials have collectively earmarked at least $14 million for the official nonprofit tourism agency responsible for planning the draft, VisitPittsburgh. The nonprofit is required to provide a $5 million match.
Detroit’s draft drew more than 775,000 fans to the city in 2024. Aaron J. Thornton via Getty Images Entertainment

Pennsylvania State Police said they, too, are coordinating security planning, traffic tactics, risk assessments and interagency exercises, while declining to provide an estimated cost for that work, citing security reasons. That means the public cost of hosting the draft may be visible only in part.
The headline economic impact figure is designed to measure the event’s upside. It is not a net figure that subtracts the full cost of security, emergency response and other public operations required to make the event possible.
In my view, the deeper story is not simply that the NFL draft brings money into a city; it is that an event of this scale depends on systems that are built, staffed and tested well before the first draft pick is announced. When those systems work, the headlines stay focused on visitors, spending and exposure. If they are strained by a medical emergency, security incident or breakdown in crowd flow, attention shifts immediately to the infrastructure underneath it.
The economic story matters. But it cannot exist on its own, without the hidden work that supports it.
Adam Annaccone does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
President Donald Trump appeared on former Deputy FBI Director Dan Bongino’s podcast in February 2026, where he stated: “The Republicans should say, ‘We want to take over, we should take over the voting.’ The Republicans ought to nationalize the voting.”
Trump’s call to nationalize elections, to transfer the constitutionally mandated control of elections from local to federal authorities, drew bipartisan opposition and added to Democratic fears that the president may attempt to interfere with upcoming midterm elections.
Despite Trump’s call to “nationalize the voting,” the U.S. Constitution clearly notes that states run elections – not the federal government.
The federal government, however, has a role to play in national elections – as an observer. Federal observation ensures that Americans cast their votes on election day without reprisal.
Initially dispatched to deter voter discrimination against Black Americans after passage of the Voting Rights Act of 1965, election observers ensured that those qualified to vote could do so without trouble.
But with its 2013 ruling in the Shelby County v. Holder case, the U.S. Supreme Court changed the federal government’s relationship to the election process. The ruling significantly weakened the federal government’s ability to send federal observers to the polls.
As a scholar of civil rights and voting rights, I know that federal oversight during elections has always been a valued part of the electoral process, even when subject to criticism.
Yet this current moment, with the Trump administration’s efforts to cast doubt on the legitimacy of the 2026 midterms, feels different. What I have noticed recently is how the public’s thinking has shifted about federal oversight of elections. Where once it was largely welcomed as a guarantor of fairness and proper procedure, now it is seen as a misuse of authority.
The key contribution of the Voting Rights Act that Americans are typically taught about in school is its abolition of racial discrimination in voting. The measure put a stop to poll taxes and literacy tests, which had disproportionately reduced Black voter registration.
But the act also created the type of federal observation of elections that is most familiar to Americans today.
The measure allows the Department of Justice to deploy federal observers to polling stations. That deployment can happen through a court order, or observers can be required in places with documented histories of voter suppression. A section of the Voting Rights Act detailed the guidelines for determining which places merited that designation.
An estimated 1,000 Black Americans wait to vote in the Democratic primary in Birmingham, Ala., on May 3, 1966, the first major Southern election after passage of the 1965 federal Voting Rights Act. AP Photo

Federal observers take notes, often beside poll monitors, and document potential unlawful practices by poll workers.
Unlike monitors, federal observers are stationed inside polling stations. They keep notes on the tallying of votes and verify those thrown out. And where the Justice Department requires permission from the respective districts to send monitors, federal observers are sent by the U.S. attorney general and do not require the same permission.
Historically, observers were also charged with registering voters at polling stations and local registrars’ offices with the specific goal of assisting disenfranchised minorities.
Determined to maintain Jim Crow laws that enforced racial segregation, several Southern Democrats opposed the Voting Rights Act.
Some Americans also criticized the act as government overreach. And they castigated the U.S. attorney general in 1965 when he dispatched federal registrars to the South following the passing of the measure, and when he sent federal observers to the South for the 1966 congressional elections.
Despite this opposition to federal observers, and just months after the Voting Rights Act’s passage, the U.S. Commission on Civil Rights wrote that federal observers received “praise from registration workers and the (voter registration) applicants.”
Within a few years of the act, roughly 1 million Black Southerners had registered to vote. Over time, federal election observers began to focus less on registering voters, practically phasing out this practice by the 1980s, and serving only as observers.
Over the decades, conservative politicians, as they gained more seats in Congress and state legislatures, developed new strategies – they filed lawsuits, rearranged voting districts – to circumvent what they argued was federal overreach in the election process. These changes helped them gain political influence and promoted their philosophy of states’ rights. They were successful.
The increase in conservative political influence gave way to an increasingly conservative Supreme Court. This was reflected in the U.S. Supreme Court’s 5-4 ruling in Shelby County v. Holder.
In that ruling, the court struck down the section in the Voting Rights Act outlining the guidelines for deciding whether a county or state needed federal oversight. With no guidelines to follow, the federal government removed most of its oversight.
After the court’s ruling, several states – Texas, Alabama and Mississippi, for example – made rapid changes to the voting process. Those included new voter ID laws, the purging of voter registration rolls and gerrymandering. These changes have resulted in further voter disenfranchisement, disproportionately affecting Black and Hispanic voters.
People wait in line outside the Supreme Court on Feb. 27, 2013, to listen to oral arguments in the Shelby County v. Holder voting rights case. AP Photo/Evan Vucci

The Voting Rights Act guidelines had also helped determine where to send federal observers. With this section revoked, the federal government’s ability to send federal observers, in the way it had done for roughly 50 years, also disappeared.
The Justice Department sent federal observers to five states during the 2016 presidential election, compared to 23 states during the 2012 presidential election.
Since Shelby, disagreements over federal oversight persist and the role of federal observers has changed.
In 2024, the Justice Department announced it planned to send out 86 monitors on Election Day, the most federal monitors in two decades, due to concerns of possible partisan interference in elections. Some Republican-led states threatened to ban them from the polls.
To send out federal observers, the Justice Department needs a court order. But during the 2024 elections, courts determined that only four states needed federal observer oversight.
During the Civil Rights Movement, federal election observers were the strongest line of defense to ensure fair voting.
Recently, however, the federal government’s election focus has shifted to what it says is voter fraud and cheating – for example, attempting to require voters to provide documentary proof of U.S. citizenship when registering to vote.
Still, one thing has remained certain. Federal observers are important. Their history, even now as they are less prevalent, can inform how we discuss the federal government’s role in elections.
Allison Mashell Mitchell does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Detroit Mayor Mary Sheffield delivered her first State of the City address on March 31, 2026, at Mumford High School on the city’s northwest side.
In the speech, Sheffield touted the accomplishments of her administration’s first 90 days, which included bringing the cash assistance program RxKids to Detroit. Sheffield also announced a new initiative called Ride to Rise, which offers free bus service to the city’s K-12 students year-round.
Sheffield stressed mandates to tackle poverty, support youth development and seniors, build more single-family homes, increase homeownership and make the city a welcoming place for small businesses to grow and thrive.
That commitment to improving the lives of Detroiters, according to Sheffield, is reflected in the US$3 billion budget she introduced on March 9, 2026.
“This budget is a statement of our priorities and our values,” Sheffield said during the address.
One thing that’s missing from her budget proposal is any mention of participatory budgeting – something that Sheffield often championed during her 12 years serving on the City Council.
On the campaign trail, Sheffield said that participatory budgeting allows “residents to feel empowered and have a direct say in how their tax dollars are spent.”
I’m a professor of political science and author of a recent book called “Budget Justice” about grassroots politics. I think Sheffield had it right on the campaign trail – communities around the country want to democratize the budget process so that local governments better address their needs and increase transparency and accountability.
I gained this perspective by serving on New York City Mayor Zohran Mamdani’s transition team on community organizing, mass governance and participatory budgeting.
Participatory budgeting is a democratic experiment that gives constituents, rather than elected officials, power to decide how to allocate a portion of public funds. Although Detroit often holds community engagement forums and open calls for grant funding, participatory budgeting differs because it puts the power of the purse into the people’s hands.
I first encountered participatory budgeting in 2011. Leaders from the grassroots organization Community Voices Heard and others helped to bring it to New York City during the Occupy Wall Street protests. Protesters who were part of that movement questioned why banks received governmental bailouts while households struggling with predatory student debt did not. I joined the rulemaking steering committee for New York’s new participatory budgeting process and stayed involved for the next decade.
New York’s process consists of four stages each year. In the fall, residents learn about the process through public service announcements, local media, door-knocking outreach or word of mouth. They then attend neighborhood assemblies where they pitch thousands of proposals for community projects. Frequently, a simple question gets them started: “How would you spend $1 million of the city’s budget?”
Meeting face to face matters. I’ve observed dozens of these assemblies, and people are much less likely to troll others in person than they are online, when they are anonymous and fueled by keyboard courage.
Over each winter, some residents volunteer to research and curate the proposals that will end up on the ballot. They also work with city agencies to develop ideas into full-fledged proposals. In New York, these projects have ranged from curb extensions at intersections identified as dangerous by local residents to summer arts camps and conflict resolution training programs.
Each spring, residents vote for the proposals that they want implemented.
Each summer, winning projects get funded.
In New York City, voting week for 2026 participatory budgeting proposals is April 11-19.
In the fiscal year 2026 budget cycle, New Yorkers allocated $30 million in public funds as part of the city’s $116 billion budget.
The nonprofit Community Development Project reported that, at the time of its survey, 68% of the 17,000 people who voted on participatory budget proposals had never worked together on a community issue before. Roughly 1 in 4 stated that they were not eligible to vote in regular elections, primarily because they were under age 18 or held undocumented immigration status.
For many, participatory budgeting helped them to understand their communities in new ways. As one participant put it, “I was able to see the needs (of) the community in a way I’ve never seen before. ... I didn’t know how bad of an asthma cluster there was in public housing. I don’t have kids, so I don’t know about needs at school. I don’t have any relatives who live in senior housing, so I didn’t know about the issues they faced.”
Participatory budgeting also produced ripple effects. Participants were 8.4% more likely to vote than those who had not participated in the process. The effects were even greater for those with lower probabilities of voting, such as low-income and Black voters.
In Detroit, only 22% of voters took part in the most recent municipal election. Participatory budgeting could be a tool for increasing turnout.
Voting makes you feel good, but only 1 in 5 voters in Detroit came out for the most recent municipal election. City of Detroit/Flickr

In my experience, participants need to feel they are doing meaningful work.
Research shows that participatory budgeting works best when communities allocate significant pots of money through the process, when residents are trained and encouraged to stay engaged beyond the process, and when combined with efforts to change practices in other parts of government, too.
In Boston, the Better Budget Alliance works to make sure projects that didn’t get funded through the city’s participatory budgeting process still get included in community demands for the larger city operating budget, and vice versa.
In New York, the Mamdani administration has just announced a new Office of Mass Engagement that aims to deepen the levels of transparency, listening and follow-through in the city.
In other words, experiments such as participatory budgeting can serve as an entry point to transformational change.
That change may look like the ambitious and growing national people’s budgets movement, which brings together local residents and community groups to protest budget cuts on essential services, articulate budget priorities and democratize the budget process. Unlike participatory budgeting, the movement’s campaigns often ask questions regarding divestments – for example, from jail expansions – as well as investments. It also concerns itself with taxes and the revenue side of the budget, and how budgetary powers should be shared by the mayor, city council, agencies and residents.
In Brazil, where participatory budgeting first began, the process was seen as an investment in working-class residents. Brazilian cities that implemented the process collected 16% more in taxes than cities that did not. Cities with participatory budgeting were seen as more legitimate, making their residents more willing to support additional taxes. These cities also had higher tax compliance rates.
Participatory budgeting also helped residents to harness the popular pressure and political will to reject development projects – such as luxury hotels – that they felt reflected business interests more than public needs. Because citizens expressed interest in providing funds for prenatal health, prominent political scientists even credit participatory budgeting with lowering infant mortality.
In American cities such as New York and Detroit, participatory budgeting processes could in time take on more challenging issues, such as universal day care or social housing.
Opaque budgets and an austerity mindset lead to distrust in government, perpetuating anti-tax sentiments.
This undermines the capacity of government to get things done. Robust participatory budgeting can help residents press for what they value most and serve as a tool to help cities such as Detroit thrive.
Celina Su served on New York Mayor Zohran Mamdani's transition team subcommittee on community organizing. She also served on the New York city-wide steering committee for participatory budgeting and advised the process for its first decade, from 2011 to 2021.
In April 2026, four astronauts are scheduled to fly around the Moon. As part of NASA’s Artemis II mission, they will become the first humans to do so in half a century. One crew member, pilot Victor Glover, will become the first Black astronaut to ever orbit the Moon.
Glover’s achievement is worth celebrating. But it’s also worth remembering that he belongs to a long and underappreciated history. America’s first Black explorer didn’t fly an Apollo rocket or sail with the U.S. Exploring Expedition. He traveled with Lewis and Clark, and he was known by a single name: York.
I’m a historian who spent five years writing a book about Lewis and Clark, and I found new documents that show York was one of the most important people on their expedition. Even in a party that could number as many as 45 men, York stood out – for his courage, his skill and his sacrifices that helped the famous captains reach the Pacific Ocean.
York was born in Virginia around 1770. Growing up, he was a creative and sociable child, unusually tall with dark hair and a dark complexion – “black as a bear,” a contemporary noted.
He was also enslaved by the Clarks. William Clark, who was around the same age, was also unusually tall, though his hair was a rusty red, and sometimes the boys played together. But the playing stopped once York turned 9 or 10. That’s when he joined the adult slaves in working full time. That’s also when he began to note the differences between his life and William’s – differences that became only clearer once William started ordering him around.
In the 1780s, the Clark household headed to Kentucky. York met a Black woman there and married her. He also became William’s “body servant.”
A body servant was a slave who stayed close to his owner and prioritized his comfort, laying out his clothes and serving his meals. When Meriwether Lewis asked Clark to join his expedition, in 1803, Clark ordered York to accompany him.
Perhaps York was excited for this adventure. Perhaps he was not – it would be punishing, and he would be separated from his wife.
Either way, York didn’t have a choice.
York proved his worth from the start. Once they reached St. Louis, the soldiers, later known as the Corps of Discovery, rushed to raise winter quarters. Working in hail and snow, York and the others built log huts. They needed rough planks for their tables and bunks, but the carpenters had only a single whipsaw to make them. They chose two men to operate this crucial tool. One of them was York.
On May 14, 1804, the corps began ascending the Missouri River. York helped row and tow the party’s barge, which was the size of a semi-truck trailer. He carried a rifle and hunted – according to the expedition’s journals, he was only the fifth named member to bring down a buffalo. York cooked for the captains. He collected scientific specimens. He nursed the sick, including several soldiers and, later on, Sacagawea, a Shoshone woman who would also prove essential to the expedition’s success.
York helped Lewis and Clark’s expedition cross rapids in the Columbia River. Carleton Watkins/Oregon Historical Society

The soldiers were not always kind in return. During this period, officers rarely brought along enslaved body servants. York’s race probably made some of the men angry or uncomfortable. One day, someone threw so much sand in his face that it nearly blinded him. Clark claimed it was “in fun,” but he also wrote that York was “very near losing his eyes,” and no one else got cruelly sprayed with sand.
That fall, during councils with Native leaders, York played a surprising and vital role. The Arikara, Mandan and Hidatsa all crowded in to see him and to touch his skin. They had never met a Black person before, and York showed off his strength and played with the Native children. Later, the Arikara said York was “the most marvelous” thing about the corps.
The next year, the expedition crossed the Rockies and the Continental Divide. York’s most important – and most overlooked – contributions came soon after. On the Columbia River and its tributaries, the party had to dig out five new canoes and then paddle them through treacherous rapids.
Lewis and Clark allowed only their best rivermen on these foaming, rock-riven waters. One of them was almost certainly York. During my research, I found an unpublished letter in which Clark praised York’s ability to “manage the boats.”
Just as important, York was a strong swimmer, a rare thing in an era when many people never learned to swim.
On the Columbia River, the corps survived a series of terrifying choke points – soggy hazards they referred to as the “Long Narrows” and the “Great Chute.” After that came the ocean. They had traveled together for more than 4,000 miles (6,400 kilometers), and when the captains asked the men to vote on where to put their final winter quarters, they made sure to ask York, too.
In his elk-skin journal, William Clark recorded York’s winter quarters vote. Missouri Historical Society

It was the latest sign that his role had changed during this epic journey. But those changes began with York. In the West, he found ways to make choices and assert himself. He sent a buffalo robe to his wife in Kentucky. When Clark told him to scale back his performances for Native people, York ignored him – because he wanted to, and because he could.
York’s vote was also evidence that, like Victor Glover today, he was an official American explorer, a key member of a sprawling, federally funded mission. From 1804 to 1806, the government devoted a larger percentage of its budget to the corps than it devotes to NASA today.
Part of that money was earmarked for York. The Army gave officers who brought along their slaves a monthly ration or its cash equivalent. When the corps made it home, the government paid US$274.57 for York’s labor, a sum similar to what the privates received. But that money didn’t go to York. It went to Clark.
There have been many Black explorers in American history. Thomas Jefferson launched other expeditions besides Lewis and Clark’s, and those expeditions also included enslaved people, though their names have not survived. Isaiah Brown served on the Wheeler Survey, which mapped the West in greater detail after the Civil War. Matthew Henson accompanied Robert Peary on his Arctic expeditions, which received some federal support. More recently, NASA has depended on Black astronauts such as Guy Bluford, Mae Jemison and Jeanette Epps, among others.
York and Victor Glover are, for now, the first and most recent examples of this inspiring tradition. But their contributions go beyond that. When the captains asked York to vote on the winter quarters, they were acknowledging in some small way that he’d proven he was more than a body servant.
Of course, York had always been more than that. It just took 4,000 miles for Lewis and Clark to see it.
Craig Fehrman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Cancer is becoming increasingly common among young people, with cases slowly and steadily rising every year for the past decade. And what type of insurance adolescents and young adults have affects at what stage of cancer they’re diagnosed and how long they survive.
As researchers who study cancer disparities in young adults, we examine the social and systemic factors that shape who survives a cancer diagnosis. In our recent review of the scientific literature – an analysis that included nearly 470,000 Americans between the ages of 15 and 39 who had been diagnosed with cancer – we found that insurance status is one of the clearest and most consequential factors.
Young people with private health insurance lived longer than those on Medicaid or without insurance. Depending on the cancer, this survival advantage ranged from a modest 8% lower risk of death for lymphoma to a drastic 2 to 2.5 times lower risk of death for melanoma and multiple other cancer types.
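Figures like these are typically reported as hazard ratios in the underlying studies. As a rough illustration of how a hazard ratio maps onto the "% lower risk" and "times lower risk" phrasing above (the specific ratio values below are assumptions chosen for the arithmetic, not numbers taken from the review):

```python
# Illustrative sketch only: hazard ratios (HRs) below 1 indicate lower risk.
# The HR values used here are hypothetical examples, not from the review.

def pct_lower_risk(hazard_ratio):
    """Express a HR below 1 as a percentage risk reduction (HR 0.92 -> 8%)."""
    return round((1 - hazard_ratio) * 100)

def times_lower_risk(hazard_ratio):
    """Express a HR as an 'x times lower risk' figure (HR 0.5 -> 2 times)."""
    return round(1 / hazard_ratio, 2)

# A HR of roughly 0.92 corresponds to the "8% lower risk" phrasing
print(pct_lower_risk(0.92))   # → 8
# HRs of roughly 0.4-0.5 correspond to "2 to 2.5 times lower risk"
print(times_lower_risk(0.5))  # → 2.0
print(times_lower_risk(0.4))  # → 2.5
```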
People between the ages of 15 and 39 have especially unstable access to health coverage in the U.S.
Young people in this age group are often finishing school or starting new jobs, including positions that don’t offer benefits. They’re also aging off a parent’s insurance plan, which happens when you turn 26 under current U.S. law. This instability leaves many young people uninsured or underinsured.
The consequences of no or insufficient health coverage go beyond inconvenience. Adolescents and young adults already tend to see smaller improvements in cancer survival over time compared to children and older adults. This gap has puzzled researchers for years.
Insurance instability appears to make this gap even wider.
Health insurance does far more than cover hospital bills. It determines whether a patient can access a specialist, how quickly treatment begins and whether they are eligible to enroll in a clinical trial.
Strikingly, patients on Medicaid and uninsured patients often had similar cancer outcomes – and both did worse than those with private insurance. This suggests that simply having some form of coverage isn’t enough if that coverage doesn’t actually open doors to quality care.
What kinds of cancer treatment a patient can access, including clinical trials, is ultimately determined by their insurance. SeventyFour/iStock via Getty Images Plus

One underdiscussed consequence of insurance status is access to clinical trials. These studies are often the pathway to the most advanced treatments available. Yet research has found that the type of insurance a young cancer patient has is a significant predictor of whether they enroll in a clinical trial, with higher enrollment rates for those with private insurance.
For cancers such as early stage Hodgkin lymphoma – a cancer more common in young adults – treatment decisions and access to newer approaches can vary significantly based on where and how a patient receives care, which is often tied to their insurance status.
The body of research we analyzed primarily tracked patterns in existing data rather than through controlled experiments. That makes it difficult to say with certainty that insurance status directly causes differences in survival.
However, the pattern we observed was consistent across many studies. Moreover, most studies recorded insurance status only at the time of diagnosis, which misses changes that happen during treatment. Patients may lose or gain coverage in the middle of their care.
Future research that tracks insurance continuously throughout treatment, standardizes how coverage is categorized and examines specific cancer types and age subgroups in greater depth could clarify the picture further.
Financial stress can force patients to choose between essential medical care and basic necessities. Jacob Wackerhausen/iStock via Getty Images Plus

The good news is that insurance is something society can change. Based on our research, a few key areas stand out.
Expanding coverage could help keep more young cancer patients insured. This might look like policies allowing young adults to stay on a parent’s plan longer, expanding Medicaid and reducing gaps in coverage after diagnosis.
Improving what Medicaid actually covers could make it easier for patients to access top cancer centers. Many doctors and cancer centers limit how many Medicaid patients they see because reimbursement rates are low.
Connecting with financial counselors, patient navigators and care coordinators could help young patients on public insurance or those who lack insurance navigate the system. This support could enable them to get timely access to the right treatments and clinical trials.
Early screening for financial barriers can prompt timely referrals to financial counseling, assistance programs or social work before patients experience treatment delays. Financial support can help patients complete treatment, make their appointments and improve their outcomes.
The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.
When I was a teenager in the early 1980s, I realized the potential of using video games in education. The same high school classmates who couldn’t pass a test at school could somehow remember what potion or scroll to use to tame dozens of kinds of monsters in video games that they loved, like “Dungeons and Dragons” and “Rogue.”
A couple of decades later, as a professor of astronomy and astrophysics at Penn State University, I became frustrated by teaching poorly attended classes. As many as 300 students had signed up to attend my lectures, but only half regularly showed up for class.
So in 2015, I decided to transform my course, “The Artistic Universe,” which explores astronomy through artistic expression, into a video game called “University of Mars.” Instead of attending lectures, students simply play the game to learn the same astronomy content. Most of the students who graduated from Penn State last year earned college credit for playing this video game course.
Because of its blending of astronomy and art, “The Artistic Universe” is one of the most popular choices to satisfy the school’s requirement that students take an interdisciplinary course in order to graduate.
Students can conduct colorful star searches in this video game. Jane Charlton, CC BY

Students taking this class study abroad, so to speak, at the fictional University of Mars. There, they contemplate what it would feel like to live on another planet. They fly from planet to planet in the solar system and return a baby alien to its home planet, which is orbiting another star. They fly out into the vastness of space, explore the collections of stars called galaxies, and assemble their own universes.
Students learn about astronomy – understanding constellations, planets, stars, galaxies and the expanding universe – all while playing this video game.
Students then do a range of assignments, including creative writing and visual art, based on what they learned in the game.
Students today learn differently than they did three decades ago when I started teaching at Penn State. Most of my students are no longer inclined to systematically read a textbook, as my old astronomy class required. Some students pick and choose whether to attend class based on whether they can get the same information from notes or slides.
By peeling off the layers of the Sun or flying through the razor-thin disk of a spiral galaxy in the video game I created, students can learn and have fun, making school feel less like work. The game helps students think about humanity’s place within a vast universe.
Playing a video game brings the students into a story. It pushes students to imagine life beyond Earth and to recognize that life is likely scattered all throughout the universe.
It is amazing to consider the potential for life in varied forms spread over the vastness of space. It is humbling to know that humans have existed for less than 500,000 years out of the more than 13 billion years since the universe began to expand.
The entire history of humanity is a microscopic piece of something so much bigger. And where did our universe come from? What will happen to it? Is it part of an even larger structure in multiple dimensions? I hope every student will reflect on what all of this means to them as they develop their own purpose in life.
The “University of Mars” begins at a colony on Mars, and players venture off into the solar system, to nearby stars and to other galaxies. It also includes many mini-games. There’s one that involves moving the Moon around the Earth to understand its phases, and another in which players manipulate light and gases to understand spectroscopy – the study of how light interacts with matter.
After playing the game, students participate in creative writing exercises, including letters, journal entries or even original songs, that speak to their experience. In one creative writing assignment, students are prompted to detail the first human encounter with an alien, as they experienced it in the game. In another, they grapple with whether to join a collective consciousness – a scientific version of a hive mind – and lose their individuality while gaining immortality.
Art projects include creating their own planet and an alien that lives there, and turning black-and-white images from the Hubble Space Telescope into colorful ones using Photoshop.
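The Hubble recolouring exercise rests on a standard false-colour technique: three greyscale exposures, each taken through a different filter, are assigned to the red, green and blue channels of one colour image. A minimal sketch of that channel mapping (in plain Python rather than Photoshop, purely to illustrate the idea; the pixel values are toy data):

```python
# Hedged sketch, not the course's actual Photoshop workflow: Hubble-style
# "false colour" images are usually built by assigning three greyscale
# exposures, each shot through a different filter, to the red, green and
# blue channels of a single colour image.

def false_color(red_filter, green_filter, blue_filter):
    """Combine three greyscale images (lists of rows of 0-255 values)
    into one RGB image, where each pixel becomes an (r, g, b) tuple."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red_filter, green_filter, blue_filter)
    ]

# Three toy 2x2 "exposures" standing in for narrow-band telescope frames
red   = [[200, 180], [160, 140]]
green = [[120, 110], [100,  90]]
blue  = [[ 40,  35], [ 30,  25]]

rgb = false_color(red, green, blue)
print(rgb[0][0])  # → (200, 120, 40)
```

In practice the same mapping is done in Photoshop by pasting each greyscale frame into one channel of an RGB document.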
“The Artistic Universe” accomplishes different things for different students. Exposure to art reconnects some with an artistic side they may have set aside in college to pursue more practical things.
They likely won’t remember the details from the game and the class a decade from now, but hopefully they will remember the feeling they had when they realized just how far away the nearest star and nearest galaxy really are.
Experiencing the story of the universe by playing a video game can bring comfort and familiarity to students. From this position of comfort, they can pause and reflect. For many, the “University of Mars” experience gives a new appreciation and perspective on life.
Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.
Jane Charlton does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
As a researcher in autism and education and a former secondary school teacher, it took me a while to realise that autistic school staff were rarely included in conversations about inclusion and diversity in schools.
With colleagues, I started the Autistic School Staff Project in 2019, focusing on the experiences, needs and aptitudes of autistic teachers and other education staff.
Our findings show that autistic school staff can experience significant sensory issues in school. These can be from bright, flickering lights, odours from the canteen, and crowding in corridors or during meetings. The greatest impact of all comes from noise: shouting from children and staff during break times, the clang of the school bell and the roar of traffic when windows are open in the summer.
Interestingly, it’s not only a question of volume levels. Whispering from children and humming from technology can also be highly distracting and contribute to feelings of fatigue and overload.
Autistic teachers also told us that the ways neurotypical colleagues communicated and interacted with them could be disorientating and exclusionary. Staff meetings that seemed to lack focus, chit-chats in the school corridor, gossip and school politics could be experienced as confusing and irrelevant.
At the same time, autistic teachers felt their own communication style of being direct and to the point could be misunderstood as rudeness. Similarly, staff social events were often not enjoyed by autistic teachers, even though neurotypical colleagues seemed to really rate them. Changes announced at the last minute by the school leadership team, with instructions that did not seem to make sense, could be highly stressful for autistic teachers. Covering for absent teachers was also found to be very unsettling.
Noisy school environments can cause sensory issues for autistic teachers. Shutterstock

Most tellingly, a number of participants felt they could not be open about being autistic. A key reason for this concerned negative and stigmatising attitudes towards autism that they had to face in school. The teachers also said that autistic children could be poorly treated. Autistic teachers sometimes had to sit through autism training, conducted on the assumption that no-one present was autistic, where the same negative attitudes were evident.
As a result, autistic school staff could be extremely wary about sharing with anyone that they were autistic. They worried that this information would have a negative impact on their careers. Suppressing an autistic identity, known as masking, has been linked with mental health issues.
While some of our participants had been able to disclose being autistic in school, and had even had a good experience of this, others said that it had made life even harder. This was because attitudes would change towards them in a negative way, or they might not even be believed.
Fortunately, a number of positives also came out of our study. Monotropism – a key autistic trait that denotes a tendency to have very intense interests – can mean that autistic teachers develop strong subject expertise and teach with passion. Even the job itself links with monotropic tendencies, as autistic teachers told us that they loved their work and were highly motivated by it. In addition, autistic teachers felt that they were very thorough and organised.
Above all, autistic teachers felt they were making a significant contribution to supporting inclusion in school. They were sensitive to the needs of neurodivergent children and others at risk of marginalisation, and were willing to try alternative approaches with children who were struggling. One teacher said:
I never gave up on a child because I think probably too many people gave up on me. I could see myself in a lot of the children.
In addition, some of those who had been open about being autistic were valued by colleagues because of their insights in relation to neurodiversity. Autistic teachers also felt that they could be a role model for autistic children and their parents.
Autistic teachers are a valuable part of the school workforce and are already making an important contribution to inclusion. However, it’s important to remove the barriers they can face across their careers.
This includes providing more flexibility and support for autistic student teachers. Making recruitment practices inclusive and accessible – such as by providing questions in advance, and offering in-person and remote options for interviews – would also benefit autistic teachers, as would developing neurodiversity-inclusive school communities.
Participants were clear that autism training should be run by autistic people, and that withdrawing to a quiet space should not be misinterpreted by colleagues as being anti-social. Addressing the sensory impacts of schools would benefit both children and staff. Providing staff with agency in decision-making can be empowering. We also need to reconsider the conventional role of the teacher, re-evaluating standard duties such as parents’ evenings and covering for absent colleagues through a neurodiversity-inclusive lens.
Rebecca Wood has received funding from the ESRC and the John and Lorna Wing Foundation.
The EU is in the process of creating a new system that will make it easier to return irregularly present migrants to their country of origin. The legislation, known as the Returns Regulation, includes measures that make it possible to detain more people – including children and families – for longer periods of time.
This marks a major shift in European migration policy, as until now EU Member States could only detain irregularly present migrants as a last resort under specific circumstances. Not all States even had detention policies, and where they did, detention centres were almost always within the EU. This meant legal safeguards could be closely monitored.
However, a recent vote has paved the way for the new Regulation to create “return hubs” in third countries outside the EU. The European Commission presents these offshore detention centres as an “innovative solution” to migration management, with assurances that they will ensure “fundamental rights”.
In practice, it will be very difficult to monitor potential human rights violations and enforce European standards in return hubs outside Europe. The Council of Europe’s Commissioner for Human Rights, Michael O'Flaherty, has warned that they could run the risk of creating “human rights black holes”.
The Returns Regulation was proposed by the European Commission in March 2025. Its general approach was agreed by the Council of Ministers in December 2025, and it was endorsed by the European Parliament on March 26.
The legislative process is now entering the last stages of negotiations. It is expected that the Returns Regulation, including the legal provisions for return hubs, will be adopted before the summer to replace the 2008 Returns Directive.
Attempts to establish return hubs date back to the 1980s, though none were successful. One of the most prominent was the UK’s 2003 proposal to the European Council to establish regional centres for the management of irregular migration. It was heavily criticised and did not obtain support among the other 14 EU Member States at the time.
More recently, in 2024, Italy established migrant detention centres in Albania. European Commission president Ursula von der Leyen has cited the centres as a model for EU-wide migration management.
The recently approved draft of the Returns Regulation allows for return hubs to be set up in third countries with which the EU has made an agreement.
Such agreements can only be concluded with a designated “third country” where international human rights law is respected. This includes the principle of non-refoulement, which prevents individuals from being sent to territories where their life or physical integrity are at risk.
However, my own research argues that for a third country agreement to be fully in compliance with States’ obligations under the 1951 UN Refugee Convention, it must always guarantee the full range of refugee rights. These include the right to asylum, as well as socio-economic rights guaranteed by the Refugee Convention. It also means providing access to public education, employment, housing, social security, and courts.
None of these rights are explicitly guaranteed in the draft Returns Regulation. Sending refugees to countries outside the EU runs the risk that they may never be granted the full range of rights they are entitled to under the Refugee Convention.
Return hubs are made possible by the combined effect of the new Returns Regulation and other instruments in the 2024 Pact on Migration and Asylum.
In particular, the “safe third country” concept in the 2024 Asylum Procedures Regulation has been expanded by a new Regulation, which was adopted in February 2026.
This concept allows Member States to reject asylum applications outright, without examining the merits of the claim, on the basis that the applicant would be safe from danger or persecution in a “safe third country” outside the EU. These are countries that have an agreement with the EU (or with its Member States), in which they commit to examine asylum applications made by individuals rejected by the EU.
However, even when an applicant does not meet the criteria to be considered a refugee or a person otherwise in need of international protection, international human rights law may still impose obligations of non-return on EU Member States.
While individuals have the right to appeal the decision that a country is “safe” for them, the new Regulation means they may not have the right to stay in the territory of the Member State concerned while the court examines that appeal.
By sending applicants abroad to non-EU countries, the new rules could make it all but impossible to exercise EU-guaranteed rights. This includes the right to asylum and the right to an effective remedy, which expressly encompasses the possibility of being advised, defended and represented in court.
The system designed by the EU will result in both individuals and families with children being deported to countries where they have no links. They will be subject to detention outside the EU, under conditions that will be difficult to monitor.
Children are of particular concern. The EU’s own data shows that thousands of minors, including those travelling with families, go missing after arriving in Europe. Many are feared to be exploited and abused for sexual or labour purposes.
If these issues arise within Europe’s borders – where immigration detention is governed by rule of law standards developed by the European Court of Human Rights and the Court of Justice of the EU – they will undoubtedly be much harder to monitor outside the EU.
With this new system, Europe is departing from its historical role in championing and establishing the international regime for the protection of human rights in the mid-20th century. As the world sees more victims of conflict, human rights violations and persecution than ever before, such a regime is now more necessary than ever.
A weekly e-mail in English featuring expertise from scholars and researchers. It provides an introduction to the diversity of research coming out of the continent and considers some of the key issues facing European countries. Get the newsletter!
María-Teresa Gil-Bazo has received research and consultancy funding from the EU and from the UN. In 2024 she was awarded a Jean Monnet Chair on Asylum and Migration in the EU by the European Commission and in 2015 she was appointed External Expert of the Asylum Agency of the EU (formerly EASO), with a five-year mandate.
Prime Minister Anthony Albanese on Thursday will announce interest-free loans for businesses hit by the fuel crisis, in a speech also promising the crisis won’t divert the government from economic reform in the budget.
The loans will help truckies, freight companies and fuel and fertiliser producers. This comes after cuts in excise and the Heavy Vehicle Road User Charge and moves to assist small business were also announced this week.
Albanese, addressing the National Press Club in a speech released in part ahead of delivery, will say the May 12 budget will be “our government’s most important budget to date and it will be our most ambitious. It has to be.”
His message will go some way to reassuring those who fear the prime minister might want to scale back the reform ambitions of treasurer Jim Chalmers for the budget, given the crisis and accompanying uncertainty.
But Albanese says: “economic reform that drives growth, boosts productivity, tackles inflation and lifts living standards is always necessary.
"And in times of uncertainty such as this, it is urgent.”
The Labor government was investing in economic resilience well before this crisis, he says.
“For our government, international uncertainty is not an excuse to delay, or hold back reform – it is the reason we must press ahead.
"Because we will not generate the same prosperity or create the same opportunities, if we continue to rely on an economic model designed in a different time and built for a more predictable world.
"Nor can we go back to those days.”
“Anyone who pretends that the solution to housing or jobs or wages or health is to somehow recreate the 1950s or 60s, or whatever time they imagine everything was hunky dory, is simply not being fair dinkum.
"Australia will not find our future security in the past. Or by copying approaches from overseas.
"We have to invest in it, build it and create it for ourselves.”
Albanese says while planning for a more resilient Australia, “our number one priority remains helping people with the cost of living”. That balance would be struck in the budget.
The interest-free loans will be provided under the government’s $1 billion Economic Resilience Program.
“No government can promise to eliminate the pressures this crisis will impose. But we can be a buffer against the worst of it,” Albanese says.
“Providing this stability and security amidst uncertainty does not mean standing still while the world changes around us.
"It means anticipating and creating change, true to Australian values and in Australia’s interests.
"Because if people feel like the economy is not working for them, if they’re putting in the effort but not seeing the reward, if planning for the future feels like a luxury, then government cannot provide stability, just by keeping things as they are.”
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
The only thing harder than losing weight is keeping it off. Many people who lose weight find themselves stuck in the cycle of “yo-yo dieting” – losing weight and gaining it all (and sometimes more) back again.
Research on yo-yo dieting has long indicated it can be harmful for your health. But a recent paper has now suggested yo-yo dieting might not be as unhealthy as we’ve been led to believe.
This recent paper, published in BMC Medicine, presents the findings of two separate weight loss trials that were conducted five years apart.
The first trial (trial 1) looked at 278 participants who were overweight or obese. Participants were randomised to follow either a low-fat or low-carb Mediterranean diet – either with or without exercise. All participants lost a comparable amount of weight at the end of the 18-month trial. But those who incorporated exercise achieved the biggest decrease in visceral fat (a dangerous type of fat that is stored around the organs).
The second trial (trial 2) was conducted five years later. Similar to trial 1, the 294 participants followed a Mediterranean-style diet for 18 months. But this time, one group ate a diet very high in polyphenol-rich foods (naturally occurring plant compounds which have been linked to health benefits such as lower risk of chronic disease). The second group ate a normal Mediterranean diet, while the third group followed normal healthy diet guidelines.
While both Mediterranean diet groups lost weight and saw improvements in their overall health, the polyphenol group lost more visceral fat.
A unique aspect of trial 2 was that it included around 80 participants from trial 1. Some of these participants weighed more than they did at the start of the first trial. Such weight recidivism is common following weight loss. This is due to various biological and physiological functions that reduce metabolism and increase hunger, causing people to regain weight and store fat.
The authors compared the people who rejoined the research project against their health and weight status at the start of trial 1. They assessed body weight and other aspects of health – including body fat and blood sugar levels. Despite the re-joiners weighing around the same as (if not more than) they did at the start of trial 1, they had lower levels of abdominal fat and visceral fat five years later.
Their metabolic health was also better than it was at the start of the first trial, based on their blood lipid (fat) levels, cardiovascular health and blood sugar control.
On the surface, this appears to be good news – suggesting participants retained some of the health benefits of the weight they lost the first time around, despite regaining the weight.
Yet, the results suggest that the very adaptations which helped the re-joiners stay healthy despite regaining weight could potentially have repercussions later. To understand why this is the case requires a grasp of how the body responds to a calorie deficit.
Our fat stores (known as adipose tissue) serve as our main energy (calorie) buffer when there’s no food to provide that fuel. These stores are sacrificed to cover the energy shortfall, causing fat cells to shrink. Visceral fat is the first to go, followed by the more beneficial fat cell stores.
But when people stop dieting, the body puts priority on regaining lost fat. Indeed, our body replenishes fat stores far more quickly than it does muscle or protein stores. More importantly, in response to this shrinkage, the body compensates by making more fat cells. It does this to help the body better cope the next time there’s a fuel crisis.
The body responds to weight loss by creating more fat cells. Spectral-Design/Shutterstock

So dieting literally makes you fatter in the long run. But thankfully, this will most likely be healthier subcutaneous fat (in the hips, thighs, buttocks and torso) instead of around the organs as harmful visceral fat.
So even though you’ll be carrying excess weight, you’ll experience fewer of the metabolic issues caused by unwanted visceral fat – such as insulin resistance and high cholesterol, which elevate your risk of cardiovascular disease and diabetes.
But with the higher capacity to store fat comes the risk of overshooting your original weight. This may also have implications for yo-yo dieting.
In the paper, the re-joiners who took part in trial 2 did manage to lose weight again. But, on average, they lost slightly less than the trial’s first-timers. That said, when all of the participants from trial 2 were followed up five years later, the re-joiners from trial 1 had regained less, too. They had also retained more of the health benefits of losing weight.
Taking stock of the whole weight-loss journey, it appears that those who regained weight and then joined trial 2 are at a comparable place at the end of ten years to those who just did trial 1.
But there are a few caveats to the trial’s findings.
First, the paper only examined body fat. It didn’t provide any information on lean tissue (such as muscle). This is important, as when we lose weight we lose both fat and muscle. Given muscle’s importance for a healthy metabolism, a lack of muscle could result in even greater weight gain.
Read more: Weight loss: why you don’t just lose fat when you’re on a diet
It’s also not clear whether regaining weight changes the nature of muscle tissue. There are two key types of muscle fibre. Type 1 is smaller and more efficient at burning fat. Type 2 is larger, faster and more powerful – important for explosive exercise.
If an overall loss of muscle results in muscle fibres changing from type 1 to type 2, this could increase risk of health problems – including sarcopenic obesity and earlier onset of age-related health issues associated with muscle loss.
Overall, the paper shows us that weight loss is still beneficial for your health – even if it requires a few attempts to get to your goal weight. But to avoid potentially gaining more weight the second time around, it’s key to establish good diet and lifestyle changes that are sustainable long-term.
Adam Collins is affiliated with Form Nutrition Ltd as consultant Head of Nutrition
Literature expresses complex and nuanced ideas – the powerful feelings that define us as human beings and the detailed observations that illuminate all aspects of our lives. It does so with words put together with consummate skill.
So, surely silence is a nothingness, an affront to the communication of both rational argument and strong emotion – literature’s opposite, even its anathema?
Well, no. In my new book Silence: A Literary History, I’ve set out to show that, over 1,200 years, English literature has spoken to us – and spoken to us eloquently – through silences as well as through words. Without silences, both formal and thematic, we wouldn’t have the exquisite hush of medieval lullabies, the suspenseful secrets of the realist novel, or the jagged fragmentation of modernist poetry.
We would lose implicitness, a good deal of ambiguity, much precision, a powerful mode of protest and a variety of moods. Iago would explain exactly why he wanted to destroy Othello in Shakespeare’s play. The dog would bark in the night time in The Hound of the Baskervilles, by Arthur Conan Doyle. And D.H. Lawrence’s sex scenes would come with a running commentary.
If silence has a starting point in English literary history, it’s a man at sea. The 9th-century poem The Wanderer, composed in the Old English language of the Anglo-Saxons, communicates the sheer strangeness of silence via an alien grey seascape in which the protagonist is utterly alone.
This silence is composed not of complete noiselessness – the hail beats on the waves and a seabird occasionally mews – but of an intense and total absence of human voices.
A reading of The Wanderer.
The poem conveys the difficulty of this silence – its wretched, aching loneliness and its perpetual reminder of lost happiness. But it also portrays silence as a duty, the mark of a seasoned warrior forged by Graeco-Roman stoicism, the Germanic hero ethos and Christian asceticism.
And it confronts readers, here at the very beginnings of English literature, with a silent inner voice: the necessary basis of an interior life.
Scroll on 1,200 years. En route, we will take in the tongue-tied silences of Renaissance love poetry, the green silences of 18th-century pastoral scenes and the dumbfounded wonder of the romantic sublime.
We will pause, awestruck, at Tennyson’s great epic of speechless grief, In Memoriam. We will relish the social silences of the Victorian novel, from the hilariously awkward to the emotionally profound.
The fascism-bordering silences of Modernism will make us shiver, before we ponder 20th-century experiments with visual, acoustic and dramatic silences. And we will arrive at the genre-defying, multimedia poetry collection that is Jay Bernard’s Surge (2019).
In 2016, Bernard took up a residency at the George Padmore Institute in London, an archive dedicated to radical Black history in Britain. The New Cross fire, which in 1981 had killed 13 young Black people, was playing on their mind. And then on June 14 2017, as Bernard puts it: “Grenfell happened”.
Bernard was sickened by the similarities: “The lack of closure, the lack of responsibility and the lack of accountability” at the centre of both conflagrations.
Surge’s response takes its title from a remark by the Black activist Darcus Howe, one of the organisers of the Black People’s Day of Action in 1981: “When you surge and you don’t deal with the question, barbarism expresses itself.”
Jay Bernard talks about their work.
Speaking over the barbarism, Surge registers a gamut of other silences as it winds between the New Cross and Grenfell fires, and historic and ongoing injustices to Black people.
There is the “muffling” of the New Cross fire by the police, and the details that were literally “tippex’d out” of the file. The silence of the media cannot dispel the weighty silences of the ghostly dead. Then there are the silences that surround transness: hiddenness, rejection and defiance of conventional categories.
With this last issue, we can scroll back up the centuries again. The 13th-century romance Silence, written in Old French by a Cornishman, Heldris de Cornualle, relates the legend of a girl-child being brought up as a boy called Silence because women are forbidden to inherit their parents’ estates. This causes a furious argument between the characters of Nature and Nurture, which anticipates our own age’s differences over transness by eight centuries.
“They have insulted me,” complains Nature, “by acting as if the work of Nurture / were superior to mine!”
But Reason, on behalf of Nurture, urges Silence to resist Nature’s blandishments, or “you will never train for knighthood afterwards. / You will lose your horse and chariot.”
Nature is the winner in the story, but the poem is able to accommodate Silence as both male and female – effortlessly embracing apparent contradictions in such lines as “he was a girl”.
Woman Reading in the Reeds, Saint-Jacut-de-la-mer by Édouard Vuillard (1909). The Fitzwilliam Museum
I believe noticing silences in literature makes us better readers. We come to recognise that some things are better left unsaid – indeed, that some things can’t be said. As a result, our antennae become attuned to literature’s stock-in-trade: the indirect and the inexplicit.
Importantly, we become aware of who hasn’t spoken. All this means we gain a better understanding of what communication is, and how we interact with other people. As our reading acquires a new, slower tempo and a new rhythm, our interpretations change.
What can silences speak to us about? Some of the profoundest aspects of our existence: our understanding of what makes a self; our sense of sacredness; our most powerful and intimate feelings; our place in the natural world; our capacity for wonder. All we have to do is notice.
The excerpt from Silence: A Thirteenth-Century French Romance was translated by Sarah Roche-Mahdi. This article features references to books that have been included for editorial reasons, and may contain links to bookshop.org. If you click on one of the links and go on to buy something from bookshop.org The Conversation UK may earn a commission.
Kate McLoughlin was awarded a Major Research Fellowship by the Leverhulme Trust to write Silence: A Literary History.
Every mission to deep space is fraught with danger. A hardware failure during launch, an equipment malfunction far from Earth, or a small space rock hitting the vehicle are all scenarios astronauts will train for.
As humans set off on the Artemis II mission, visiting the Moon for the first time in more than 50 years, one persistent threat they face is from solar radiation.
Intense bursts of radiation from the Sun, known as solar particle events, can endanger the lives of space travellers, particularly those venturing beyond low Earth orbit (LEO). During these events, high speed, charged particles stream out from the Sun and into space.
Exposure to these particles could lead to radiation sickness or, in the worst cases, prove fatal. On space stations and other crewed vehicles travelling in LEO, astronauts are afforded a high degree of protection by the magnetic bubble surrounding Earth (the magnetosphere).
But in interplanetary space, where Artemis II is headed, humans are much more exposed to outpourings of solar radiation.
Live coverage of the Artemis II mission from Nasa.
The Sun’s magnetic activity fluctuates on a cycle lasting roughly 11 years. During this cycle, sunspots (areas of reduced temperature caused by intense magnetic fields) can cause eruptions known as flares, as well as solar particle events. These rise and fall in frequency with the solar cycle.
The current solar cycle reached its maximum, when the Sun is generally at its most active, in 2024 and is now in a slowly declining phase leading to the minimum, when the Sun is quietest. The current cycle should reach the minimum in 2031.
Not all solar cycles are the same and the current one has been rather undistinguished in terms of activity, as was the previous cycle that reached maximum in 2014. Recently, however, the Sun woke from its slumber.
Solar activity (represented here by sunspot numbers) fluctuates on an 11-year cycle. Noaa
On November 11 2025, a large solar particle event increased ground level radiation by about 145% for two hours, as measured by the University of Surrey’s neutron monitor at the Met Office station at Lerwick, Shetland.
This was also detected by University of Surrey SAIRA (Smart Atmospheric Ionising Radiation) monitors installed on two transatlantic flights and on rapid response meteorological balloon flights at Lerwick, Camborne and near Utrecht in the Netherlands.
Work is in hand to unscramble this complex event to determine the radiation increases worldwide using the University of Surrey computer model MAIRE (Model for Atmospheric Ionising Radiation Effects). This calculates radiation levels at aviation altitudes for normal atmospheric conditions, as well as for enhanced radiation events caused by increased solar activity.
Three research papers are now in preparation: one describing the radiation monitors and their calibration, one summarising the flight data, and one comparing the data with available models.
The Earth’s magnetosphere acts as a shield, protecting the planet from solar particles. Esa
The solar particle event of November 11 2025 is a reminder that, whatever the probabilities might be, the Sun can always take us by surprise.
To underline the importance of such events for deep space missions, let’s rewind the clock to 1972. At the time, the Sun was in a similar declining phase in its 11-year cycle as we are today. Then, between August 2 and August 11 1972, the Sun unleashed one of the largest solar particle events of the space age.
A massive solar particle event occurred between the Apollo 16 (pictured) and Apollo 17 missions in 1972. Nasa / Charles M. Duke Jr
This gigantic release of charged particles from the Sun occurred between the Apollo 16 (April 1972) and Apollo 17 (December 1972) missions to the Moon.
This event was much bigger than the one in November 2025 – by a factor of 40. If it had taken place while astronauts were in space, the radiation dose could have caused severe illness or even death.
The Apollo crews had a lucky escape. But the solar particle event made an impact on Earth. The ensuing geomagnetic storm is thought to have caused 4,000 US-laid mines to spontaneously detonate in Hanoi harbour during the Vietnam war, causing confusion and alarm on both sides.
Travelling to the Moon means astronauts are no longer protected by the Earth’s magnetic bubble, or magnetosphere. Nasa
There are ways to prepare for similar events in future. The most dangerous aspect of this radiation is its high energy component, which can penetrate shielding on spacecraft. The Surrey Space Centre Space Environment & Protection team are currently working on a detector, called the High Energy Proton instrument, that definitively measures this high energy component of solar particle radiation.
It does this by detecting the light flashes (Cherenkov radiation) emitted when particles transit a transparent medium at velocities exceeding the speed of light in that medium. Astronauts often report seeing similar flashes, even with their eyes closed, caused by solar particles or high-energy cosmic rays passing through the retina or optic nerve.
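The physics behind such detectors is compact enough to sketch: Cherenkov light is emitted only above a threshold speed set by the medium’s refractive index. The instrument’s actual medium and design are not specified here, so the refractive index and particle choice below are purely illustrative assumptions.

```python
import math

PROTON_REST_ENERGY_MEV = 938.272  # proton rest mass energy, m*c^2, in MeV


def cherenkov_threshold_ke(n: float, rest_energy_mev: float = PROTON_REST_ENERGY_MEV) -> float:
    """Minimum kinetic energy (MeV) for a charged particle to emit Cherenkov
    light in a medium of refractive index n. The threshold is v > c/n,
    i.e. beta > 1/n, from which the Lorentz factor and kinetic energy follow."""
    beta = 1.0 / n
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * rest_energy_mev


# Example: protons in water (n ~ 1.33) need roughly 485 MeV of kinetic
# energy before they emit Cherenkov light -- only the highest-energy part
# of a solar particle event clears this bar.
print(round(cherenkov_threshold_ke(1.33)))  # prints 485
```

A denser medium (larger n) lowers the threshold, which is one reason the choice of radiator material matters for an instrument aimed specifically at the high energy component.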
Read more: Why has it taken so long to return to the Moon?
The University of Surrey radiation detectors could now fly on a lunar orbiting mission towards the end of the decade. On this mission, they will characterise the danger to lunar bases as well as to the Earth. Nasa is planning to spend US$20 billion (£15 billion) on a base at the south pole of the Moon. A separate outpost is planned by China and Russia.
Radiation warning systems can give astronauts the time they need to retreat to storm shelters within a base or spacecraft where increased and specially designed shielding is used.
Engineers use storage lockers as a radiation shelter inside a mockup of Orion. NasaIf astronauts travelling in Orion – the spacecraft used on Artemis II – receive advance word of a solar storm, they are instructed to get into storage lockers in the floor of the spacecraft. This places the crew next to Orion’s heat shield, making this area one of the most protected parts of the vehicle.
Warning systems can also help on Earth. During periods of high solar radiation, controllers could instruct aircraft to fly at lower altitudes and latitudes – and in extreme cases remain grounded.
One big difference between the Apollo and Artemis missions is in the rapid development of microelectronics since the 1960s and 70s. This has led to trillion-fold increases in computer memory density and thousand-fold improvements in speed.
The Apollo computers were pioneering, but struggled to cope with the workload as Neil Armstrong and Edwin “Buzz” Aldrin descended to the lunar surface during the Apollo 11 mission in 1969. However, this progress has a downside: the dense technology packed into modern spacecraft is vulnerable to radiation.
Read more: Heat shield safety concerns raise stakes for Nasa’s Artemis II Moon mission
The charge deposited by a single particle can exceed the amount required to change the state of a computer memory bit, and in some cases can destroy the device outright. It is now arguable whether the greater hazard from solar particle events is to astronaut health or to the flight electronics aboard spacecraft.
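One standard defence against such radiation-induced bit flips is error-correcting memory. As a loose illustration only (real flight hardware uses wider codes, dedicated ECC circuitry and memory scrubbing, none of which are detailed in the article), here is a minimal single-error-correcting Hamming(7,4) sketch:

```python
# Illustrative Hamming(7,4) code: 4 data bits are protected by 3 parity
# bits, allowing any single flipped bit in the 7-bit codeword to be
# located and corrected -- the basic idea behind ECC memory.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions: p1 p2 d0 p3 d1 d2 d3)."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]


def hamming74_correct(c):
    """Correct at most one flipped bit in codeword c and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]


word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1  # simulate a particle strike flipping one memory bit
assert hamming74_correct(code) == word  # the original data is recovered
```

A code like this corrects single-bit upsets transparently, but a particle energetic enough to flip multiple bits, or to damage the silicon itself, defeats it, which is why shielding and radiation-hardened parts remain necessary.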
In 1972, the Apollo astronauts were very lucky. In this new age of exploration, when so many nations have designs on travel to deep space, we can’t afford to leave astronaut safety to the whims of fortune.
Clive Dyer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
One firm, Taiwan Semiconductor Manufacturing Company (TSMC), produces more than 90% of the world’s most advanced semiconductor chips. These chips are essential for smartphones, artificial intelligence, high-performance computing and cutting-edge military systems.
Taiwan’s dominance of advanced chips acts as a chokepoint for the global economy. Days or weeks without their manufacturing would affect the supply and price of numerous products around the world. This is comparable to how the current disruption to shipping in the Persian Gulf due to the Iran war is affecting oil-dependent markets globally.
Taiwanese semiconductor manufacturing supremacy has transformed the island nation into what I have described in my research as a “niche superpower”. It wields outsized global influence by commanding a strategically indispensable industry.
Taiwan did not stumble into this position. In the 1970s, Taiwanese technocrats recognised that the nation could not immediately compete at the world’s electronics frontier. One of them was Kwoh-Ting Li, then minister of economic affairs, who is often referred to as the “father of Taiwan’s economic miracle”.
At that time, Taiwan lacked the financial capital and technological skills to compete with industry leaders such as Japan and the US. So rather than trying to dominate the entire semiconductor industry from design through to production, Taiwanese policymakers focused on building capabilities in precision manufacturing. This is the most operationally demanding part of the semiconductor value chain.
Established in 1973 by the Taiwanese government, the Industrial Technology Research Institute carefully acquired semiconductor process technology through licensing agreements with the now defunct US firm Radio Corporation of America (RCA). It then trained a generation of Taiwanese engineers.
TSMC produces over 90% of the world’s most advanced semiconductors. jackpress / Shutterstock
The pivotal moment came in 1987, when Morris Chang established TSMC. Chang, a US-trained engineer who had spent decades at American semiconductor multinational Texas Instruments, devised what is now known as the “pure-play foundry” model.
Rather than designing and manufacturing its own branded chips, this meant that TSMC would manufacture chips for other firms. This strategic choice was transformative because it reassured American and European semiconductor companies that TSMC would not compete with them. It allowed major tech firms such as Qualcomm and later Nvidia to outsource chip production to Taiwan without fear of intellectual property leakage or strategic rivalry.
The Taiwanese semiconductor industry grew within the Hsinchu Science Park, a major industrial cluster south of the Taiwanese capital of Taipei. By the early 1990s, Hsinchu Park hosted more than 140 chip manufacturing firms and employed around 30,000 workers. The strength of the cluster attracted legions of Taiwanese engineers back from the US, helping Taiwan become the global leader in the production of advanced semiconductors.
Taiwan’s semiconductor dominance has played an overt role in protecting the island from its existential threat – a Chinese invasion. This phenomenon was explicitly named in 2021 in an article published in Foreign Affairs magazine, where the former Taiwanese president, Tsai Ing-wen, argued that Taiwan’s semiconductor industry acts as a “silicon shield”.
The dependence of the global economy on Taiwanese-made advanced chips, she argued, means the disruption caused by a Chinese invasion would trigger catastrophic global economic consequences. Taiwan’s allies would thus be compelled to come to its defence.
In recent years, Taiwan’s silicon shield has come under threat. Following the start of US export restrictions on advanced chipmaking equipment to China in 2020, Beijing has accelerated its efforts to build indigenous capacity in chip manufacturing. It has significantly increased investment in its semiconductor industry.
Semiconductors were the underperformer in the Made in China 2025 strategy, through which China’s leadership aimed to transform the nation into a high-tech manufacturing superpower. China fell short of its goals for the localisation of semiconductor production and global market share, missing targets by the 2025 deadline.
However, Chinese chip manufacturers like HiSilicon and Semiconductor Manufacturing International Corporation have been gaining momentum. A proposal by 13 Chinese chip industry executives in March outlined aims to increase self-sufficiency to 80% by 2030. China’s semiconductor self-sufficiency is currently around 33%.
At the same time, Washington is pushing to bring semiconductor manufacturing back onshore. Biden-era initiatives such as the Chips and Science Act offered incentives for TSMC’s sprawling manufacturing facility in Arizona, which opened in 2022 as part of US efforts to boost domestic chip production.
These incentives for TSMC included up to US$6.6 billion (£5 billion) in direct investment and significant tax credits. TSMC committed an initial US$65 billion to the plan, with the Trump administration announcing in March 2025 that the company would boost its US investment by a further US$100 billion.
Elon Musk also recently announced plans for advanced chip facilities in Texas for his two companies, Tesla and SpaceX. In light of Musk’s concerns that companies like TSMC are not producing the volume of chips his companies need, the so-called “Terafab” venture aims to consolidate every stage of the semiconductor production process under one roof and is expected to cost in the range of US$25 billion. Other companies investing in chip fabrication in the US include Micron, Texas Instruments and Intel.
Despite US and Chinese efforts, replicating Taiwan’s manufacturing ecosystem is difficult. It requires not only capital and equipment, but also knowledge that has been accumulated over decades as well as dense supplier networks and an unparalleled engineering workforce.
TSMC has struggled to hire talent in Arizona, and has resorted to flying thousands of workers in from Taiwan in a bid to improve the skills of locals. And while TSMC is now producing semiconductors at the cutting edge of 2-nanometre scale, the Chinese self-sufficiency goals aim to have “entirely domestically produced equipment” for the less sophisticated 7-nanometre and 14-nanometre generations of chips.
The difference between 2nm and 7nm chips is significant – a 45% increase in performance while using 75% less power. Chips built at the smaller node are used for advanced processes such as cutting-edge AI, while those at larger nodes are used in a broader range of electronics, like smartphones, desktop processors and automobiles.
Taiwan’s semiconductor story is ultimately one of strategic foresight. By choosing manufacturing over design, embedding itself within US-led technological networks and cultivating world-class process expertise, Taiwan transformed structural vulnerability into structural power.
Through its semiconductor dominance, Taiwan stands out as the quintessential niche superpower. But history shows that superpower status, including in niches, is never permanent. The technological frontier moves, rivals learn and allies hedge.
For Taiwan, remaining indispensable to the global economy will require not only staying ahead technologically. It will also require carefully orchestrating the political, financial and human capital foundations that made its silicon shield possible in the first place.
Robyn Klingler-Vidra received a research grant from the Chiang Ching-kuo Foundation between 2019 and 2023. The grant funded research on the educational and professional background of north-east Asia's innovation policy leaders across the post-war period. The study was published in World Development in April 2025 and is available here: https://www.sciencedirect.com/science/article/pii/S0305750X25000646.
Prime Minister Anthony Albanese has warned Australians to brace for difficult months ahead, but pledged the government will protect them as well as it can, in an address to the nation on the impact of the Middle East War.
Albanese said he wanted to be “upfront” about the situation.
“The months ahead may not be easy”, he said.
“Australia is not an active participant in this war, but all Australians are paying higher prices because of it.
"And the reality is, the economic shocks caused by this war will be with us for months.”
In his address, Albanese said it was the Australian way for people to want to do their bit, and suggested “simple ways” they could.
“You should go about your business and your life, as normal. Enjoy your Easter.
"If you’re hitting the road, don’t take more fuel than you need - just fill up like you normally would.
"Think of others in your community, in the bush and in critical industries.
"And over coming weeks, if you can switch to catching the train or bus or tram to work, do so.
"That builds our reserves and it saves fuel for people who have no choice but to drive.
"Farmers and miners and tradies who need diesel, every single day.
"And all those shift workers and nurses, who do so much for our country.”
He said on Monday National Cabinet had adopted the National Fuel Security Plan to keep Australia moving and make sure it was prepared for the future “so that if the global situation gets worse and our fuel supplies are seriously disrupted over the long term, we can coordinate the next steps together”.
The government was working to bring the price of fuel down, to make more fuel here and keep it on-shore, and to get more fuel into the country, Albanese said.
“These are uncertain times.
"But I am absolutely certain of this: we will deal with these global challenges, the Australian way.
"Working together - and looking after each other.”
Michelle Grattan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
An author and freelance journalist has admitted to using AI to help him write a book review for the New York Times.
Alex Preston’s review of Jean-Baptiste Andrea’s novel Watching Over Her, published by the New York Times in January 2026, draws phrases and full paragraphs from Christobel Kent’s Guardian review. The “error” was brought to light by a reader, who alerted the New York Times to the similarities.
Preston told the Guardian he is “hugely embarrassed” and “made a huge mistake”.
Alex Preston has admitted to using AI to help write a book review. Hachette
The Times promptly dropped Preston, calling his “reliance on A.I. and his use of unattributed work by another writer” a “clear violation of the Times’s standards”. An editor’s note now precedes the review online, advising readers of the issue and providing a link to the Guardian review.
Preston’s apology to the Guardian raises more questions than it resolves. The portion quoted online seems to speak more to the issue of unattributed work than his use of AI. It reads: “I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in.” This implies that if he had removed the “overlapping” language, the issue would have been avoided.
As a literary critic and scholar, I believe the deeper question isn’t whether or not critics should do more to hide their use of AI – but the ethics of using it at all.
The role of the critic isn’t to summarise or repackage art, but to actively participate in a conversation about it. “Good criticism thrives in the complexity of its environment,” writes critic Jane Howard, who is also The Conversation’s Arts + Culture editor. “Each review sits in conversation with every other review of a piece of art, with every other review the critic has written.”
In other words, the critic is in conversation with both the artist and the audience. The critic’s emotional and intellectual engagement with art – and their translation and communication of meaning – is intrinsic to their role as mediator. That role is deeply human.
Perhaps information can be outsourced, but emotional engagement can’t. Nor can an individual perspective, filtered through one human’s reading, viewing, listening and experiences.
There are valid arguments about the functional uses of AI, as well as warnings about its significant climate repercussions. But there is also an escalating concern around the intrusion of AI into creative expression.
Shy Girl was cancelled due to AI accusations against its author.
Last month, author Mia Ballard was accused of using AI to write her horror novel, Shy Girl. It was withdrawn from publication in the UK and cancelled from scheduled publication in the US, after “readers on platforms such as Goodreads and Reddit had questioned whether sections of the text bore hallmarks of AI-generated prose”, according to the Guardian.
In 2023, German artist Boris Eldagsen sparked controversy when he revealed that his prize-winning photograph The Electrician was AI generated. In 2025, Tilly Norwood, the first fully AI-generated “actress” ignited debate around whether so-called synthetic actors were a tool for creative expression, or a threat to human creators.
In 2025, writers were “horrified” to discover that their work had been pirated by Meta to train AI systems.
If the question that underlies these examples is “what is the role of art”, this latest debacle adds “and what is the responsibility of the critic”?
Art criticism in Australia is what Howard describes as a “niche within a niche”. The sector is unbearably small, so most critics have an additional day job and are in close professional and personal proximity to the artists whose work they review.
Some critics of the critics, such as writer Gideon Haigh, have suggested this has led to a culture of what literary academic Emmett Stinson called “too-nice” criticism.
But I would argue generosity is fundamental to public-facing criticism – and that the critic reviewing in the public sphere has a responsibility to writers and readers.
The writer might safely assume that when we publish a review that assesses their book’s successes and failings against its ambition, we have, at the very least, taken the time to read and carefully consider their work, and our own response to it.
This unspoken pact is broken when the writer begins to use AI – particularly when a professional reviewer like Preston seems to outsource his assessment to it.
Such fiascos point to a disturbing future where readers’ opportunities to build community and develop empathy through engagement with literature are outsourced entirely to AI.
Australian literature academic Julieanne Lamond has said “when we write reviews we have to do it ‘naked’ – as individual readers, with a public to judge our judgements”. In other words, we sit at the middle of a pact between the writer of a book and their potential readers.
Done well, criticism is literature. As Australian author, playwright and critic Leslie Rees argued in 1946, good literary criticism is a “real and creative service to literature”.
Watching Over Her is at the centre of a controversy over the use of AI in writing a New York Times book review.
Popular criticism, written for the general public and published as journalism, might sit on a different playing field from scholarly criticism. But its obligation to readers – to convey real and honest opinions about books and bring readers into a conversation about literature – is no less significant. There is a shared obligation to be honest, and surely this honesty extends to a transparency about AI use.
French professor and essayist Philippe Lejeune, best known for his work on autobiography, used the term the “autobiographical pact” to describe the relationship between the writer of a memoir and the reader. That is, the reader accepts what the memoirist says as truth, based on the writer’s acknowledgements of their own biases and subjectivity.
We might transfer a similar pact to the reviewer and their reader. Should the reader not be able to trust that the review they’re reading is the critic’s own?
Hannah Bowman, a literary agent from Liza Dawson Associates, recently described mistrust as the book industry’s greatest peril: “it’s essential for all parties in the publishing process to have transparency and clarity in conversations about how AI tools are being used by any party, especially in the creative process”.
In failing to disclose his use of AI, Preston has not only embarrassed himself, but broken the trust of his readers.
Bec Kavanagh is a freelance critic for The Guardian.
Butter chicken has disappeared from some restaurant menus in India. Sri Lanka declared every Wednesday a public holiday. Laos cut its school week to three days. Egypt ordered shops and cafes to close by 9 pm. In Thailand, government workers were told to take the stairs instead of the elevator. And in South Korea, the president urged citizens to take shorter showers.
These are wartime policies, even though none of these countries are actually fighting a war. All of them, however, are caught in the blast radius of one being fought thousands of miles away. That’s because the closure of the Strait of Hormuz, triggered by the US-Israeli strikes on Iran that began on February 28, has detonated a crisis that reaches into kitchens, classrooms, hospitals, and fields across the Global South.
Just 21 miles wide at its narrowest point, the Strait carried, before the war, 20 percent of global oil, 20 percent of liquefied natural gas (LNG), a third of seaborne fertilizer, and nearly half of the world’s sulfur exports. Commodity shipments have since fallen by 95 percent. The Strait is, in effect, closed, and the consequences are cascading through the lives of an estimated 3.2 billion people in countries now subject to some form of fuel rationing, power cuts, or energy restrictions.
Start with food. India imports the majority of its cooking gas through the Strait, and the disruption hit almost immediately. Black-market prices for a single liquefied petroleum gas (LPG) cylinder — the kind that powers a family kitchen there — have nearly tripled. Restaurants across the country have slashed their menus; a 70-year-old Mumbai institution trimmed its elaborate multicourse Ramadan offerings to just four dishes. A chain in the same city stopped selling dosa entirely, because the dish requires an open gas flame. A handwritten sign at a Bengaluru restaurant went viral: “There will be no roti due to gas cylinder crisis (due to war between Iran and USA).” Nearly 10,000 restaurants in the state of Tamil Nadu alone face closure.
The fertilizer crisis hasn’t yet had the same level of immediate effects, but the longer-term impact looks grim. The Gulf produces roughly a third of the world’s exports of urea, a key ingredient in fertilizer, and the closure hit at the single worst moment in the agricultural calendar — just as Northern Hemisphere farmers need to apply fertilizer for spring planting.
Bangladesh has shut down four of its five state-owned urea plants. Nepal, which produces zero chemical fertilizer domestically, has seen urea prices jump 40 percent ahead of its critical paddy season. In Brazil, sugar mills are diverting their new harvest toward ethanol — which is more profitable, with oil above $100 a barrel — which could tighten global sugar supplies for months.
The World Food Programme warns that 45 million more people globally could be pushed into acute food insecurity — an increase of 15 percent on current hunger levels. As if that’s not enough, the closure of the strait has stranded vital United Nations food aid in warehouses in Dubai, crippling the ability of relief agencies to get supplies where they’re needed most.
Then there’s the environmental fallout, which may be the single most consequential long-term effect of the crisis.
The disruption of relatively clean LNG supplies has triggered a coal resurgence across Asia and beyond. Japan is planning to lift rules that required its oldest, dirtiest coal plants to run at less than 50 percent capacity, which means more carbon dioxide and other pollution spewed into the air. South Korea removed its own seasonal cap on coal power and delayed the retirement of three coal plants. Thailand, the Philippines, and Indonesia are all expanding coal operations. And in Europe, Germany is reviewing whether to restart mothballed coal plants.
Coal companies — whose product is the single-biggest contributor to climate change — are reaping the benefit. Australia’s Yancoal is up 40 percent since the war began, while Pennsylvania-based Core Natural Resources is up 30 percent. And once turned on, coal plants can be politically difficult to shut down again, which risks a longer-term carbon lock-in. Nor is it just about climate change. In India, the government has formally permitted restaurants and hotels to burn wood, dried crops, and cow dung — undoing years of clean-fuel progress in a single directive and putting more lives at risk in the process.
If you squint, there could be an eventual silver lining to all of this. In Nepal, over 70 percent of new car sales are already electric. Electric rickshaws are selling out in Pakistan. The Chinese electric car maker BYD is now projecting overseas sales 15 percent higher than its prewar forecast. One energy analyst called this “Asia’s Ukraine moment” — a shock that could accelerate the shift to renewables the way Russia’s invasion pushed Europe toward wind and solar.
Hastening the clean energy transition, however, won’t put food on the table for billions of people throughout the Global South, and more coal and other dirty fuels in the short term will endanger more lives around the globe. The world’s poor may not be fighting the Iran war, but they are surely suffering from it.
A version of this story originally appeared in the Future Perfect newsletter.
On June 5, 1981, the Centers for Disease Control and Prevention published a brief, clinical report in its Morbidity and Mortality Weekly Report about five young men in Los Angeles who had developed a rare and deadly form of pneumonia.
The write-up, barely a page long, ran between a report on dengue infections among US travelers and an assessment of measles cases. No one who read it could have known this was the opening chapter of the deadliest infectious disease epidemic since the 1918 flu — one that would kill an estimated 44 million people worldwide and reshape medicine, politics, and culture in ways we’re still reckoning with. The virus responsible would eventually be named human immunodeficiency virus, or HIV.
For the next 15 years, an HIV diagnosis was, functionally, a death sentence, as the immune system was hollowed out on a slow march to full-blown AIDS. The virus mutated so rapidly that every early attempt at treatment felt like trying to hit a moving target in the dark. And the dark was where many of the earliest victims were forced to live, stigmatized by society. It took until September 1985 for President Ronald Reagan to even say the word “AIDS” publicly, by which point some 6,000 Americans had already died.
By 1993, HIV had become the leading cause of death for all Americans aged 25 to 44. Not just gay men. Not just intravenous drug users. Everyone in the prime of their lives. In 1995, at the epidemic’s American peak, 50,628 people died of AIDS in a single year. Globally, new infections peaked the following year at around 3.4 million. In the hardest-hit cities of sub-Saharan Africa, one in five adults were HIV positive. Entire generations of parents were being wiped out. By 2000, AIDS was the leading cause of death on the African continent.
The story could have ended there: The virus had won while the world looked away. But it didn’t. What happened instead, through a combination of activist fury, scientific ingenuity, and an act of bipartisan political will that still seems improbable in hindsight, is one of the great reversals in the history of medicine. It’s a narrative that provides hope not just that we might one day get to zero and eradicate HIV, but that the world can overcome what may seem like the most hopeless challenges.
For the first decade of the epidemic, the US government’s response was defined by indifference, until activists decided to make that impossible. The group ACT UP turned unimaginable grief into political force, storming the Food and Drug Administration, shutting down Wall Street, and transforming funerals into protests. They were loud and furious and provocative — and effective: ACT UP and allied organizations pressured the FDA into creating accelerated drug approval pathways and shamed pharmaceutical companies into expanding access to experimental HIV treatments.
The clinical turning point came at the 1996 International AIDS Conference in Vancouver. Researchers including Dr. David Ho presented data on combination antiretroviral therapy — what would become known as HAART. Scientists combined multiple drugs into a cocktail that attacked HIV at different stages of its life cycle — basically surrounding the virus so it had nowhere to evolve to.
The results were staggering: 60 percent to 80 percent declines in rates of AIDS, death, and hospitalization. Patients who had been days from death recovered so dramatically that doctors called it the “Lazarus effect.” One physician’s practice went from 37 patient deaths in 1995 to zero in 1998. Nationally, AIDS deaths in the United States fell 63 percent in three years. HIV dropped from the No. 1 killer of young Americans to No. 5 by 1997 — an unprecedented decline for any leading cause of death in modern history.
But the Lazarus effect had a brutal asterisk. Early antiretroviral therapy cost $10,000 to $15,000 per patient per year. For most Americans with HIV, that was doable with a mix of insurance and government funding. For the tens of millions infected in impoverished sub-Saharan Africa — where the epidemic was orders of magnitude worse than in the West — those lifesaving drugs were all but unobtainable. In January 2003, nearly a decade after antiretrovirals had become widespread in the US, only about 50,000 people in all of sub-Saharan Africa were on the drugs. Thirty million were infected. Roughly 12 million Africans died of AIDS between 1997 and 2006 while high costs and supply bottlenecks kept the treatment that would have saved their lives out of reach.
It’s not hard to imagine an alternate history where this inequality of death persisted. After all, we implicitly accept this ingrained inequality in so many other areas, from extreme poverty to childhood mortality.
But that’s not what happened. The same activist energy that had forced the FDA’s hand in the 1990s turned its attention to the global treatment gap, joined by an unlikely alliance of evangelical Christians motivated by faith, public health officials who saw a security threat, and a president who cited the parable of the Good Samaritan.
During his 2003 State of the Union address, President George W. Bush pledged $15 billion over five years to fight AIDS abroad through what he called the President’s Emergency Plan for AIDS Relief, or PEPFAR. The House passed the legislation that created PEPFAR 375-41, a sign of just how broad the coalition behind it was.
In April 2004, a 34-year-old man in Uganda named John Robert Engole became the first person in the world to receive PEPFAR-supported antiretroviral therapy. By the end of 2005, some 400,000 people were on treatment through the program. By 2008, it was 2 million around the world — a 40-fold increase from the 50,000 Africans on ART when Bush made his speech.
PEPFAR has since invested over $120 billion and, by its own estimates, saved 26 million lives. The cost of treating one patient in a low-income country fell from roughly $1,200 a year in 2003 to $58 by 2023. As my former colleague Dylan Matthews once wrote, PEPFAR is “one of the best government programs in American history.”
The downstream effects of PEPFAR and other advances in HIV treatment and prevention are extraordinary.
Annual global AIDS deaths have fallen from a peak of 2.1 million in 2004 to 630,000 in 2024 — a 70 percent reduction. Some 30.7 million people in low- and middle-income countries are now on antiretroviral therapy worldwide, up from fewer than 400,000 just two decades ago. That’s nearly an 80-fold increase.
What this all means is that someone diagnosed with HIV today who gets on treatment can expect a near-normal lifespan, an outcome that would have been unimaginable to anyone living through the 1980s and early 1990s.
On top of far better treatment, the toolkit for preventing people from getting HIV in the first place has become far more effective, which has helped lead new infections to drop more than 60 percent from 3.4 million in 1996 to 1.3 million. PrEP — a daily pill that reduces the risk of contracting HIV by up to 99 percent — has been available since 2012, and more than 3.5 million people around the world have taken it at least once. Last year the FDA approved lenacapavir, a twice-yearly injection that Science magazine named its 2024 breakthrough of the year. In the PURPOSE 1 trial of the drug, among more than 2,100 women in South Africa and Uganda, there were zero HIV infections. Not a low number. Zero.
HIV treatment, essentially, has become so effective that it now acts as prevention as well. HIV experts call it Undetectable equals Untransmittable, or U=U. Studies encompassing over 100,000 acts of condomless sex where one partner is HIV positive and another is not have found zero linked transmissions. That means someone living with HIV who is virally suppressed cannot pass the virus on sexually, which is a step toward both normalizing a disease that was once so feared and further curtailing the epidemic. And these tools can work at scale: The SEARCH trial showed that community health workers in rural Kenya and Uganda, armed with smartphone apps and the ability to immediately provide antiretroviral treatment to anyone testing positive, cut new infections by 70 percent.
And yet, more than 630,000 people still die of AIDS every year — roughly one every minute. Some 9.2 million people who need treatment still aren’t getting it. Children are the worst off: only 55 percent of those under 14 with HIV are on therapy, compared to 78 percent of adults. And the epidemic’s burden falls hardest on the most marginalized: sex workers, men who have sex with men, people who inject drugs, and transgender people now account for over 55 percent of all new infections globally — up from 44 percent in 2010.
Two-thirds of all people living with HIV are in sub-Saharan Africa, where external funding finances around 80 percent of prevention programs. That has left them vulnerable as the global HIV response faces its gravest funding threat in decades.
PEPFAR’s statutory authorization lapsed in March 2025 without congressional reauthorization. A January 2025 stop-work order froze programs worldwide. The effective dismantling of USAID — with 90 percent of contracts canceled — has gutted the program’s infrastructure. UNAIDS modeling suggests that if these disruptions become permanent, the result could be 6 million additional infections and 4 million additional deaths by 2029. South Africa alone has already laid off some 8,000 health care workers because of funding cuts.
And the threat isn’t only abroad: More than 20 US states are now considering cuts to the AIDS Drug Assistance Program, the safety net that covers a quarter of all Americans living with HIV — including Florida, where 16,000 people briefly lost coverage before an emergency fix that lasts only through the summer. A recent Johns Hopkins study estimated that eliminating the program’s parent legislation could increase new infections in major US cities by nearly 50 percent by 2030.
For the first time in the 45-year history of this epidemic, we have genuinely effective tools to end it: drugs that treat, pills and injections that prevent, even hopes for a potential vaccine. The gap between where we are and where we need to be is no longer a question of science. It is a question of money and political will — the same forces that, two decades ago, helped produce the most effective global health program in American history.
The story of HIV is a story of what humanity can accomplish when it decides something matters enough. We’ve made that decision before. The question is whether we’ll make it again.
A version of this story originally appeared in the Good News newsletter.
I have yet to see Project Hail Mary, the buzzy space blockbuster starring Ryan Gosling. But who needs science fiction when you have...science reality?
At 6:24 pm Eastern, NASA is scheduled to launch four astronauts on a 10-day journey around the moon. The launch is part of the Artemis program, which hopes to return humans to the moon by the end of the decade and establish a bona fide base on the lunar surface.
This launch is part of a bigger, global push to return to the moon. So, this morning, we’re looking at what’s driving the new space race — and where it’ll blast off in the future.
It’s been many moons since a human last set foot on the lunar surface. But over the past five years, lunar missions, from unmanned landings to crewed flybys like Artemis II, have become markedly more frequent.
Since 2023, government space agencies, nonprofits, and private companies from Russia, India, China, and Japan have all attempted lunar landings to mixed (but generally successful) results. South Korea launched its first lunar orbiter, Danuri, in 2022. Israel also attempted an unmanned moon landing in 2019, though its craft suffered an engine failure.
America’s last lunar venture came in February 2024, when the US landed an unmanned lunar spacecraft called Odysseus near the moon’s south pole, its first lunar landing in more than 50 years. Odysseus carried six NASA experiments and six commercial items, including a Jeff Koons sculpture.
If all goes to plan, Artemis II will mark the first time humans have traveled into deep space since the Apollo program. (The astronauts will orbit — but won’t land on — the moon.) They could also set a new record for distance traveled from Earth.
“It is a fact: We’re in a space race,” former NASA administrator Bill Nelson told Politico.
The first space race was driven by geopolitical competition between nations — and there’s still an element of that, as Nelson’s full comments to Politico suggest. (He went on to warn that the Chinese could try to claim territory on the moon, though a 1967 treaty prohibits that.)
Generally speaking, however, today’s wave of lunar exploration is driven less by Cold War-style rivalry than by commercial interests. Private equity firms have poured hundreds of billions of dollars into private space companies over the past 10 years, seizing on lucrative government contracts and aiming to capture a share of a fast-growing market.
The unmanned lunar spacecraft that landed on the moon in February 2024, for instance, was produced by Intuitive Machines, a Texas-based engineering firm. For the Artemis missions, the US has relied heavily on technology developed by Boeing, Lockheed Martin, Elon Musk’s SpaceX, and Jeff Bezos’s Blue Origin.
These companies hope to supply the infrastructure for future space exploration, transportation and logistics, which will — according to plans NASA announced last week — include a $20 billion US base on the moon. Longer-term, the lunar surface could also theoretically be mined for valuable resources or used as a refueling station for longer, deep-space missions.
That is, after all, the ultimate ambition of the Artemis project: to put astronauts back on the moon, yes — but as a stepping stone to one day getting them to Mars. Manned lunar missions help scientists better understand how extended space travel affects the human body, as well as test life-support, communication, and navigation systems. Researchers have also hypothesized that the large ice deposits at the moon’s south pole — first discovered in 2008 — could be transformed into breathable air, drinkable water, or fuel for longer trips.
But those sorts of ambitions are still years away, at best. Artemis II is the second of five planned missions in the Artemis program, each intended to build on the one before it. Humans aren’t expected to return to the lunar surface until Artemis IV, currently slated for 2028. And it won’t be until Artemis V that NASA lays the planned groundwork for that permanent lunar base.
This is all the more reason to seize your chance to watch the launch tonight. Whatever the plans, we don’t know for sure when American (and Canadian!) astronauts will head towards the moon again. You can stream it on NASA’s YouTube channel or via C-SPAN.
Since it first began in 1981, the HIV epidemic has killed more than 44 million people. For a generation, a diagnosis was essentially a death sentence, and for much of the world it remains a daily threat, with some 1.3 million people newly infected in 2024 alone.
But something remarkable has happened. Deaths from HIV-caused AIDS have fallen 70 percent since their peak. Around 30 million people are on antiretroviral treatment, drugs that turned that death sentence into a manageable condition. And we’re now on the cusp of breakthroughs that would have seemed like fantasy a decade ago: long-acting drugs that can prevent infection with a single injection every six months, and even the real possibility of a vaccine.
For the first time, the end of HIV is a plausible goal. Yet this is also the moment when the global funding and political commitment that made this progress possible are being pulled back. Health programs that have saved millions of lives face deep cuts, both abroad and at home.
Over the next several months, Future Perfect will be exploring the fight against HIV, here in the US and overseas, from the political and the pharmaceutical to the personal and the painful. There’s never been a more important moment for this coverage, because the overriding question in front of us is no longer whether we can end HIV. We know we can. It’s whether we will.
This series was sponsored by Gilead. Vox had full editorial discretion over the content of this reporting.
This story appeared in The Logoff, a daily newsletter that helps you stay informed about the Trump administration without letting political news take over your life.
Welcome to The Logoff: President Donald Trump’s war with Iran has pushed US gas prices to their highest point in more than three years.
What happened? On Tuesday, the national average for a gallon of gasoline cleared $4 for the first time since August 2022, capping an increase of more than $1/gallon since the war with Iran began.
The spike is largely a consequence of Iran’s decision early in the war to close the Strait of Hormuz to most traffic. Until recently, about one-fifth of the global oil supply flowed through the strait.
Why do gas prices matter so much? While gas isn’t the biggest item in most people’s budgets, it’s one measure of the cost of living that many Americans interact with on a regular basis and is prominently posted near roads everywhere. Trump has also regularly boasted about bringing gas prices down, often offering false statistics.
They’re also an indicator of the broader energy shock wracking global markets: Oil prices are hovering somewhere above $100/barrel, diesel prices are at $5.45/gallon, and jet fuel prices have doubled.
High energy prices will soon trickle down to many other aspects of American life, making food, air travel, and consumer goods more expensive. Disruptions caused by the Strait of Hormuz won’t stop there, either: As my colleague Bryan Walsh explains, fossil fuels are an input in the fertilizers that help feed the world; a shortage will be felt acutely.
What comes next? It’s anyone’s guess. In a Truth Social post on Tuesday, Trump suggested that he could be looking to bring US involvement in the war to an end with the Strait of Hormuz still largely closed, matching reporting by the Wall Street Journal. An earlier effort by Trump to recruit US allies to help open the strait fell flat.
The longer the strait remains closed, though, the worse the global energy crisis will get — and that’s not something Trump can opt out of.
If all goes to plan, the US will send astronauts to the moon tomorrow for the first time in more than 53 years. Unlike the Apollo mission in 1972, the Artemis II crew won’t land on the lunar surface; for this trip, they’ll slingshot around it Apollo 13-style before returning to Earth.
The Artemis rocket is set to lift off tomorrow evening from Cape Canaveral, Florida, at 6:24 pm ET (assuming the weather is good). We’re covering it today too, though, because space is exciting, and that way you’ll have more time to read this great Wired story about what the mission will entail.
Have a great evening, and we’ll see you back here tomorrow!
It costs the UK economy £700m a year, and criminal gangs are operating with near impunity. Every time a lorry gets robbed, raided or hijacked, it’s Mike Dawber who investigates
In August 2021, Mike Dawber, the UK’s leading detective in cargo crime, got a call from officers in Bradford CID. They were planning to search two warehouses that contained, in their words, an awful lot of suspicious goods. This was a job that required Dawber’s expert eye. He drove an hour from his home, in the unmarked police car that doubles as his office, and arrived to discover the description barely did it justice.
As soon as he walked into the first warehouse, he noticed 17 pallets of golfing equipment. They had, he knew, been stolen three weeks before from a truck at Lymm motorway services, just outside Manchester. He reckoned they were worth about £1m. As Dawber continued his survey, he came across 18 pallets of Asics trainers, stolen three years before, at Warwick services. Then 14 pallets of lawnmowers: five years before, from a truck on the A1 at Colsterworth. He came across IT equipment, sportswear, high-end fashion, electrical goods, toasters, microwaves, beauty products. One pallet was simply labelled “Eyelash technology”. Dawber didn’t know what eyelash technology was, exactly, but he later learned that a pallet of it was worth more than £500,000.
In theory, a universe can come in any shape or size, but scientists prefer to think about three basic kinds of universes: one that’s expanding, one that’s collapsing, and one that stays the same. Out of these three simplified models, an expanding universe is the hardest for physicists to understand. Yet it’s exactly the one our real world most resembles. When physicists calculate what’s going on…
His novel was praised for giving a voice to the victims of Algeria’s brutal civil war. But one woman has accused Kamel Daoud of having stolen her story – and the ensuing legal battle has become about much more than literary ethics
By Madeleine Schwartz. Read by Kate Handford
In the summer of 1912, word reached Robert Fiske Griggs that the apocalypse had arrived on Kodiak, an inhabited island off the coast of Alaska. The following year, Griggs, a botanist at Ohio State University, led the first of several expeditions to the island, where he and a team glimpsed a disquieting sight: Kodiak was shrouded in a full foot of ash. And it wasn’t just the island.
Steeped in gaming and rightwing culture wars, Musk and his team of teenage coders set out to defeat the enemy of the United States: its people
By Ben Tarnoff and Quinn Slobodian. Read by Vincent Lai
The United States is one of the only countries on Earth that doesn’t guarantee new parents paid leave after a child is born — time to recover, bond with a newborn, and get on your feet as a new family. Only about one in four private-sector workers has access to it, and among the lowest-wage workers, virtually none do. The rest must cobble together vacation days, sick time, or unpaid leave, if they can afford to take any time off at all.
The common assumption is that America just doesn’t care enough about parents to fix this problem. But the policy has actually long drawn broad bipartisan support in the United States.
The real problem is that parental leave has been yoked to a far more ambitious federal package for over 30 years, one that faces a much steeper price tag and an uphill battle to passage. It’s a strategic political choice advocates and Congress have made, and keep making.
By contrast, virtually every other country started with basic maternity or parental protections — and built outwards over decades, eventually adding paid time off for fathers, for workers with serious health conditions, and for those caring for sick family members. In most countries, parental leave isn’t bundled with medical or caregiving leave either — it’s its own program, designed specifically around the needs of new parents.
In the US, though, efforts to provide paid leave have been aimed at addressing a wider variety of situations and needs all at once. That approach goes back decades, to when a coalition of disability rights groups, feminist organizations, seniors’ groups, and labor unions teamed up to pass the Family and Medical Leave Act of 1993. The FMLA guarantees eligible workers up to 12 weeks of unpaid, job-protected leave, and Democrats have been trying to make it paid ever since.
Their latest version, put forth in 2025, is the broadest it’s ever been, encompassing paid leave circumstances like caregiving for step-grandchildren and the spouse of a sibling, as well as leave for survivors of stalking or sexual assault. It’s an admirable effort to define all the relationships and situations that might require love and support, but one that makes the legislation even more expensive and challenging to get over the finish line.
The evidence for what paid parental leave can do is extensive: it’s been linked to lower infant mortality, fewer hospitalizations, more timely vaccinations, and better maternal mental health. When fathers take leave, researchers have found improvements in children’s school performance and mothers’ postpartum health. And employers have little to fear — studies of state programs have consistently found no adverse effects on productivity or costs.
A more realistic path to reform could start with a modest federal guarantee — say, six weeks of paid leave for parents, both mother and father, after their baby is born. That would still allow lawmakers to add other kinds of federally guaranteed medical or caregiving leave later, or extend parental leave itself over time.
That’s how virtually every other country has done it. But lawmakers in the US are reluctant to push for anything less.
“We may only have one bite at the apple, so the idea is not to undercut ourselves before negotiations even begin,” a staffer for New York Sen. Kirsten Gillibrand, the lead sponsor of the Democrats’ flagship bill, the FAMILY Act, told me. It’s a version of the same logic that guided Democrats’ push to pass child care, paid leave, elder care, pre-K, and an expanded child tax credit all at once in 2021 — but, in that case, when advocates were loath to prioritize, they ended up with nothing.
The fear that Congress only gets one shot per generation at legislating a big social policy has shaped Democratic thinking for years, but it’s increasingly at odds with recent history. Federal lawmakers have returned to the same big issues — health care, climate, economic relief — multiple times within just a few years, often building on earlier efforts rather than waiting for one perfect bill.
“Bipartisan proposals to make progress on paid leave won’t inhibit passing broader social insurance efforts,” said Curran McSwigan, an economic policy expert at Third Way, a centrist Democratic think tank. “Laying some of that groundwork now can actually help achieve that goal.”
During the pandemic, when support for federal paid leave surged, advocates believed a comprehensive bill was finally within reach. But among at least some advocates, there’s a growing realization that the window for one big paid leave package has closed.
“That coalition might no longer want to get something real done,” one longtime activist, speaking on background to talk candidly about the politics, told me. “They might want to talk about something beautiful, it’s become more a soapbox to stand on than a possible reality.”
The pursuit of federally guaranteed paid leave goes back to the mid-1980s, when advocates began drafting federal legislation to ensure women didn’t lose their jobs after having a baby. But the bill quickly evolved. Opponents of maternity-leave mandates, led by pro-business interest groups, argued that pregnancy-only leave gave women “special treatment” under federal anti-discrimination law, prompting sponsors to reframe their proposal as gender-neutral. They also expanded it to cover workers caring for sick relatives, aging parents, and their own serious health conditions. Disability groups, unions, and senior organizations like AARP signed on, and the coalition grew.
After nine years of legislative battles and two presidential vetoes, the FMLA finally passed in 1993. Compared to what was first proposed, it had a shorter leave period (12 weeks, down from 18), a higher threshold for which employers qualified (those with 50 employees, up from 15), and stricter eligibility requirements. But sponsors preserved the broad architecture that covered new parents alongside workers with their own health conditions, those caring for sick relatives, and aging family members — and that became the template Democrats have worked to establish ever since.
When states started creating their own paid leave programs — California first in 2004, then New Jersey, and eventually 14 Democratic-led states in all — they adopted the FMLA’s comprehensive framework, bundling medical, caregiving, and parental leave into a single social insurance program. Those early state programs were relatively modest, and most gradually expanded their wage replacement rates, duration, and worker eligibility as political support and funding allowed.
At the federal level, a comprehensive paid leave approach has been a harder sell. It’d cost hundreds of billions of dollars and has never come close to passing, not even when Democrats had unified control of government.
Comprehensive paid family and medical leave bills also have other problems. They treat leave as a form of worker insurance, meaning you pay in through your job earnings and draw benefits when you need time off. That works reasonably well if you’ve been steadily employed, but not if you’re young, just out of school, or between jobs. Even a parent who qualified for leave for their first child might not qualify with their second if they haven’t been back at work long enough. The 2020 version of the FAMILY Act would have excluded 30 percent of new parents from qualifying altogether.
And because the FAMILY Act is so expensive, Democrats have had to water down their proposed benefits, replacing only a portion of workers’ income. That might be manageable for an older worker earning a decent salary, but it doesn’t come close to covering the bills for a young parent making near minimum wage. The result is that many low-income parents who are eligible for paid leave don’t end up taking it because they can’t afford to live on a fraction of their already meager salaries — or they take on new debt or put off paying their other bills to do it.
Even in the states that have adopted a comprehensive model, new parents often fall through the cracks. A 2024 Niskanen analysis estimated that in six states with paid leave, between 18 and 26 percent of residents of childbearing age are ineligible because they haven’t worked recently enough to qualify.
The latest version of the FAMILY Act tries to fix some of the earlier problems by providing more generous benefits for low-income workers and loosening eligibility rules. But it still requires a recent work history to qualify, and the benefits still aren’t generous enough for many low-income parents to afford to actually take time off.
Gillibrand’s office told me that they don’t know how much their new bill would cost, or how many new parents would be eligible or likely to use the benefit. But it makes sense to hew to FMLA’s structure, they said, because people are already familiar with that framework.
Other advocates said providing the same amount of leave, regardless of the reason, helps prevent employers from discriminating against those they think are more likely to take leave, like women of childbearing age, and sends a positive message that people with different needs should be treated equally. “It’s a sound way to ensure you’re not inadvertently creating disincentives to hire particular types of people,” Vicki Shabo, a paid leave expert at New America, told me.
But other experts disagreed. Matt Bruenig of the People’s Policy Project argued that it made no sense to treat all forms of paid leave the same: They serve different purposes and different populations. And amending the FAMILY Act or some other federal proposal to focus on parents would be easy, he added. “Give me a day, and I’ll put a couple sentences in there that’ll make sure that everyone who has a newborn at least gets $800 a week,” Bruenig said.
Maya Rossin-Slater, a health economist at Stanford, added that the evidence that parental leave in the US leads to hiring discrimination is “really, really thin and limited,” largely because the leave on offer here is just not very long. In European countries where parental leave can stretch for several years, she said, there’s a clearer case — but that’s a far cry from the few months being discussed in America.
Parental leave has quietly been building real momentum in Congress over the past ten years. In 2017, a joint working group from the American Enterprise Institute, a conservative think tank, and the Brookings Institution, a liberal one, proposed eight weeks of paid parental leave that would pay new parents roughly 70 percent of their usual wages. Their idea never became legislation, but Adrienne Schweer, who leads the Paid Leave Task Force at the Bipartisan Policy Center, said the effort helped bolster more concerted bipartisan efforts.
Momentum came from unexpected places. Ivanka Trump made six weeks of paid parental leave her signature cause in her father’s first term, even getting the president to call for it in his 2019 State of the Union. Sen. Marco Rubio introduced a bill in 2018 that would let new parents draw from their own Social Security savings to cover the months around birth, delaying retirement by about six months in exchange. That proposal had critics across the spectrum, but Schweer praised it for establishing the principle that parental leave could apply to parents regardless of whether they were currently working.
“It was bold because it was creative,” she said. “If you were a stay-at-home mom who had worked in your teens and twenties and built up some Social Security credits, you could still access the benefit. You didn’t need a current employer. You didn’t need to return to work afterward.”
That principle — that parental leave should reach all new parents, not just those currently in the workforce — is central to the case for splitting it off from other kinds of worker leave.
A newborn’s need for care doesn’t depend on whether their parents had recent earnings. And most daycares won’t even accept infants younger than six weeks for liability and vaccination reasons, meaning families without paid leave have no source of income and no child care option during those early weeks that matter most.
In 2019, Congress passed 12 weeks of paid parental leave for all 2.1 million federal civilian employees, tucked inside a bipartisan defense bill signed by Trump. The same year, Republican Sen. Bill Cassidy and then-Democratic Sen. Kyrsten Sinema introduced a bill that would have given new parents up to $5,000 after the birth or adoption of a child — money they could use to cover lost wages while taking time off. Their bill was paid for by slightly reducing the child tax credit over the following decade.
The bipartisan energy hasn’t faded. More recently, in 2023, working groups on paid leave launched in both chambers, co-chaired by Rep. Chrissy Houlahan (D-PA) and Rep. Stephanie Bice (R-OK) in the House, and Cassidy and Gillibrand in the Senate. The House group rallied around draft legislation to help coordinate paid leave benefits across state borders and establish federal grants to allow states to start or expand their own programs.
The Senate group didn’t make any formal proposals, but a staffer from Gillibrand’s office told me Cassidy’s goal was to bring more Republicans into the fold, and that “the way Republicans now talk about paid leave is different from five years ago, which is a win.” This past winter, the influential and Republican-led House Education and Workforce Committee held a hearing on paid leave.
Red states have taken action too. Since the Supreme Court overturned Roe v. Wade in 2022, at least a dozen GOP-led states have granted or expanded paid parental leave for state employees — framing it as “pro-life” policy. The benefits are modest, but they show that parental leave specifically is a less partisan issue, especially in a political climate where both parties are competing to prove they’re better for families.
Still, not everyone is convinced a narrower parent-only approach would break the federal logjam. Shabo, of New America, is skeptical. “This often comes up as a possibility, but we have seen no evidence that it would garner more support from people who wouldn’t support comprehensive paid leave,” she told me.
Patrick Brown, a family policy expert at the conservative Ethics and Public Policy Center, acknowledged the challenge is convincing more Republicans to get on board. “Lots of GOP lawmakers want to provide paid leave through some way that isn’t raising taxes, mandating benefits, or creating new entitlements,” he said. “The problem is, that’s kind of an impossible circle to square.”
The pandemic briefly made the comprehensive approach seem within reach. The Families First Coronavirus Response Act, signed by Trump in March 2020, created temporary federal paid leave. The government paid people to stay home, and wage replacement at scale became politically normal. Support for paid leave also surged in polling, and employer support grew alongside it. When the Democratic-controlled Congress in 2021 began working on a massive reconciliation bill, advocates decided to go big or go home.
But as Senate holdouts forced Democrats to pare back the package, last-ditch proposals to save paid leave — including one that would have limited it to new parents only — came too late. Paid leave was cut, and then the bill itself collapsed. The result today is strategic paralysis. Democrats have shown they’re willing to make real concessions on policy — the 2025 FAMILY Act abandons the payroll tax that funded earlier versions, for example, and it still doesn’t push for full wage replacement, even though its drafters know that’s important for low-income workers. But the one thing they’ve been unwilling to seriously reconsider is the comprehensive structure that bundles several different kinds of leave.
As wonks continue to debate the details, Schweer told me she hopes her fellow advocates will be brave enough to try something different. “Don’t let great be the enemy of the good,” she said. “Look for the spaces where there’s bipartisan interest and push it as far as it can go to cover the most people — especially the people least likely to get it from their employers.”
A modest but universal paid parental leave guarantee — one that offers at minimum full wage replacement to low-income workers and a meaningful benefit to everyone else — would accomplish a raft of goals both parties claim to care about.
It would give every newborn a better start, close the gap between birth and the earliest age most daycares accept infants, and make life healthier and more affordable for all parents. It would also establish a floor that Congress could build on, just as every other country has done. It wouldn’t be the last fight Congress ever has on paid leave. But after 30 years of failing to pass a dream bill, it’s time for lawmakers to pick one they can win.
Stories of LLMs gone rogue dominated the coverage, but the chatbots had nothing to do with the targeting. Instead, it was choices made by human beings, over many years, that gave us this atrocity
On the first morning of Operation Epic Fury, 28 February 2026, American forces struck the Shajareh Tayyebeh primary school in Minab, in southern Iran, hitting the building at least twice during the morning session. The strikes killed between 175 and 180 people, most of them girls between the ages of seven and 12.
Within days, the question that organised the coverage was whether Claude, a chatbot made by Anthropic, had selected the school as a target. Congress wrote to the US secretary of defense, Pete Hegseth, about the extent of AI use in the strikes. The New Yorker magazine asked whether Claude could be trusted to obey orders in combat, whether it might resort to blackmail as a self-preservation strategy, and whether the Pentagon’s chief concern should be that the chatbot had a personality. Almost none of this had any relationship to reality. The targeting for Operation Epic Fury ran on a system called Maven. Nobody was arguing about Maven.
In ancient Greece, Euclid showed that if you agree on a small list of preliminary principles, or axioms, you can use deductive reasoning to reveal all sorts of new mathematical truths. But although these early proofs, as mathematicians call them, were derived using the laws of logic, they sometimes also contained hidden, unstated assumptions or relied on misleading intuitions. In these cases…
It’s natural to think of math as being fundamentally abstract. Whether it’s invented or discovered, its truths are so literally universal that even aliens would agree (so the thinking goes) that 2 and 2 make 4. The actual work of mathematics, though, typically involves something utterly earthbound: “making marks on paper or blackboards,” said David E. Dunning, a historian of mathematics and…
We are raiding the Guardian long read archives to bring you some classic pieces from years past, with new introductions from the authors.
This week, from 2022: A wave of bestselling authors claim that global affairs are still ultimately governed by the immutable facts of geography – mountains, oceans, rivers, resources. But the world has changed more than they realise
By Daniel Immerwahr. Read by Christopher Ragland
It’s one thing to remove a PM from office, as happened to the former cricketer in 2022. But it’s another thing to try to eradicate the most famous person in Pakistan’s history
Just so we’re clear, the following is a fact. Not opinion, not a point of view, not a hot take. Fact. There is no Pakistani – male, female, dead, alive, real, imagined – as famous as Imran Khan. Every turn in a multifarious public life has abounded in fame, first as a cricket legend, then as a beloved philanthropist who built a cancer hospital for the poor, latterly as a maverick politician who swept to power promising reform, and now, as the sole occupant of a cell in Pakistan’s most notorious jail. So famous he’s been the subject of two death hoaxes – most recently in November, when he went unseen for so long that many concluded he had died.
There have been others with greater accomplishments. There may come others in the future. But in almost 79 years of Pakistan, in the pure currency of fame, of being known and recognised, of being talked about, of being the one Pakistani everyone can name, there is nobody beyond Imran. (He is almost universally known by his first name alone.) It holds even now, two years into the state’s attempts to erase him from public life. In that time, they’ve barred TV channels from saying his name on air and stopped newspapers from publishing his picture; they’ve even scrubbed him from the footage of his greatest sporting triumph.
When Nato helped overthrow Gaddafi in 2011, there were hopes of a new beginning. More than a decade later, a former CIA asset runs the country – and Libya has become yet another lesson in the unintended consequences of foreign intervention
By Anas El Gomati. Read by Mo Ayoub
In a few isolated communities in central Nigeria, some babies are believed to be bad omens. Olusola and Chinwe Stevens run a thriving home for babies at risk. But what happens when the families want them back?
By Adaobi Tricia Nwaubani. Read by Nneka Okoye
How close are we to the sci-fi vision of autonomous humanoid robots? I visited 11 companies in five Chinese cities to find out
Chen Liang, the founder of Guchi Robotics, an automation company headquartered in Shanghai, is a tall, heavy-set man in his mid-40s with square-rimmed glasses. His everyday manner is calm and understated, but when he is in his element – up close with the technology he builds, or in business meetings discussing the imminent replacement of human workers by robots – he wears an exuberant smile that brings to mind an intern on his first day at his dream job. Guchi makes the machines that install wheels, dashboards and windows for many of the top Chinese car brands, including BYD and Nio. He took the name from the Chinese word guzhi, “steadfast intelligence”, though the fact that it sounded like an Italian luxury brand was not entirely unwelcome.
For the better part of two decades, Chen has tried to solve what, to him, is an engineering problem: how to eliminate – or, in his view, liberate – as many workers in car factories as technologically possible. Late last year, I visited him at Guchi headquarters on the western outskirts of Shanghai. Next to the head office are several warehouses where Guchi’s engineers tinker with robots to fit the specifications of their customers. Chen, an engineer by training, founded Guchi in 2019 with the aim of tackling the hardest automation task in the car factory: “final assembly”, the last leg of production, when all the composite pieces – the dashboard, windows, wheels and seat cushions – come together. At present, his robots can mount wheels, dashboards and windows on to a car without any human intervention, but 80% of the final assembly, he estimates, has yet to be automated. That is what Chen has set his sights on.
This week, from 2022: Austerity, the pandemic and now the cost of living crisis have left many schools in a parlous state. How hard do staff have to work to give kids the chances they deserve?
By Aida Edemariam. Read by Lucy Scott
Steeped in gaming and rightwing culture wars, Musk and his team of teenage coders set out to defeat the enemy of the United States: its people
In 2025, when Elon Musk joined the government as the de facto head of something called the “department of government efficiency”, he declared that governments were poorly configured “big dumb machines”. To the senator Ted Cruz, he explained that “the only way to reconcile the databases and get rid of waste and fraud is to actually look at the computers”.
Muskism came to Washington soaked in memes, adolescent boasts and sadistic victory dances over mass firings. Leading a team of teenage coders and mid-level managers drawn from his suite of companies, Musk aimed to enter the codebase and rewrite regulations and budget lines from within. He would drag the paper-pushing bureaucracy kicking and screaming into the digital 21st century, scanning the contents of cavernous rooms of filing cabinets and feeding the data into a single interoperable system. The undertaking combined features of private equity-led restructuring with startup management, shot through with the sensibility of gaming and rightwing culture war. To succeed, he would need “God mode”, an overview of the whole.
Innocent people are being frozen out of basic banking services – and it all traces back to reforms rushed through after 9/11
By Oliver Bullough. Read by Elis James
In spring 2003, exuberance at the fall of Saddam was swiftly followed by a descent into deadly chaos. Whether moving independently or embedded with troops, Guardian reporters witnessed the violence on the ground
By Ian Mayes. Read by Karl Queensborough
Once violently defended from extinction, Welsh is still a part of daily life. By learning my family’s language, I hoped to join their conversation
My maternal grandmother died 20 years ago. The funeral was held in a small Methodist chapel in the lush Conwy valley of north Wales. Her entire life – she had almost reached 100 – was spent in these hills. The drizzle that morning had slicked the trees and turned the slate of the chapel black. Our family, gathered under umbrellas, entered in order of seniority: Mum, now the family elder, with Dad on her arm, then my six aunts and uncles with their spouses, and finally the cousins, led by my brother Mark and me.
The room was austere. White walls, sturdy wooden furniture, a plain cross on the wall. Our family squeezed into box pews in the centre of the chapel. A couple of older men among the crowd reminded me of my grandfather, who had died decades earlier: similar thatches of black hair; dark, weathered complexions; history-book faces.
This week, from 2021: Growing up in Essex, my summers in Iran felt like magical interludes from reality – but it was a spell that always had to be broken
By Arianne Shahvisi. Read by Serena Manteghi
When Trump granted white South Africans refugee status, he was echoing a falsehood about Black people taking revenge for years of brutality. But no one flourishes in a repressive police state
There’s a little town in the scrub in South Africa – a full day’s drive from the country’s big cities – that has become perhaps the most scrutinised place on earth, given its size. It is 9 sq km (3.5 sq miles) of suburban-style houses harbouring about 3,000 people, with a main drag, a municipal swimming pool, one gas station and some pecan farms. Nothing of consequence ever really happens there, a fact the townspeople take as a point of pride. And yet over the past three decades, dozens of English-language news outlets have made a pilgrimage to it, often more than once. The New York Times alone has run four dedicated profiles. The essays have kept pace year after year, quoting the same people over and over, even as nothing of note occurred. There’s been no war, no disaster.
That changelessness is the point. No people of colour are allowed to live in the town, called Orania. The name is a nod to the river that runs nearby. Orania’s founders established it in 1991, the year after South Africa’s best-known Black liberation leader (and future president), Nelson Mandela, was freed following 27 years in prison.
In the 50 years since equal rights for women were enshrined in UK law, the campaigners have been reduced to caricatures, or forgotten. But their struggle is worth remembering
By Susanna Rustin. Read by Carlyss Peer
Our current approach to mental health labelling and diagnosis has brought benefits. But as a practising doctor, I am concerned that it may be doing more harm than good
By Gavin Francis. Read by Noof Ousellam
When Nato helped overthrow Gaddafi in 2011, there were hopes of a new beginning. More than a decade later, a former CIA asset runs the country – and Libya has become yet another lesson in the unintended consequences of foreign intervention
In July 2025, four of Europe’s most senior officials landed in eastern Libya for an urgent meeting. Italy’s interior minister had watched migrant arrivals surge during the previous six months. Greece’s migration chief was reeling after 2,000 people reached Crete in a single week. Malta’s home minister feared his island was next. And the EU’s migration commissioner was scrambling to rescue an agreement worth many hundreds of millions that was visibly failing to stop the boats.
Libya is a place where crises converge. Its 1,100-mile coastline, the longest Mediterranean coastline in Africa, has become the main departure point for migrants heading north. Since Muammar Gaddafi was toppled in 2011, the country has been torn apart by successive civil wars. Russia, Turkey, Egypt and the UAE arm rival factions, and the contest no longer stops at Libya’s borders. From military bases in the south, Russia and the UAE funnel weapons and fighters into Sudan’s civil war, which has driven hundreds of thousands more refugees north towards Libya’s coast.
This week, from 2022: Hu Xijin is China’s most famous propagandist. At the Global Times, he helped establish a chest-thumping new tone for China on the world stage – but can he keep up with the forces he has unleashed?
By Han Zhang. Read by Emily Woo Zeller