In 2022, the physics Nobel prize was awarded for experimental work showing that the quantum world must break some of our fundamental intuitions about how the universe works.
Many look at those experiments and conclude that they challenge “locality” — the intuition that distant objects need a physical mediator to interact. And indeed, a mysterious connection between distant particles would be one way to explain these experimental results.
Others instead think the experiments challenge “realism” — the intuition that there’s an objective state of affairs underlying our experience. After all, the experiments are only difficult to explain if our measurements are thought to correspond to something real. Either way, many physicists agree about what’s been called “the death by experiment” of local realism.
But what if both of these intuitions can be saved, at the expense of a third? A growing group of experts think that we should abandon instead the assumption that present actions can’t affect past events. Called “retrocausality”, this option claims to rescue both locality and realism.
What is causation anyway? Let’s start with the line everyone knows: correlation is not causation. Some correlations are causation, but not all. What’s the difference?
Consider two examples. (1) There’s a correlation between a barometer needle and the weather – that’s why we learn about the weather by looking at the barometer. But no one thinks that the barometer needle is causing the weather. (2) Drinking strong coffee is correlated with a raised heart rate. Here it seems right to say that the first is causing the second.
The difference is that if we “wiggle” the barometer needle, we won’t change the weather. The weather and the barometer needle are both controlled by a third thing, the atmospheric pressure – that’s why they are correlated. When we control the needle ourselves, we break the link to the air pressure, and the correlation goes away.
But if we intervene to change someone’s coffee consumption, we’ll usually change their heart rate, too. Causal correlations are those that still hold when we wiggle one of the variables.
These days, the science of looking for these robust correlations is called “causal discovery”. It’s a big name for a simple idea: finding out what else changes when we wiggle things around us.
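A toy simulation makes the "wiggle test" concrete (the numbers below are invented; only the pattern matters): when a hidden third factor drives two variables, intervening on one of them destroys their correlation, but when one variable genuinely causes the other, the correlation survives the wiggle.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause: atmospheric pressure drives both the weather and the needle.
pressure = rng.normal(size=n)
weather  = pressure + 0.3 * rng.normal(size=n)   # storminess, arbitrary units
needle   = pressure + 0.3 * rng.normal(size=n)   # barometer reading

print(np.corrcoef(needle, weather)[0, 1])        # ~0.9: strongly correlated

# Now "wiggle" the needle ourselves, severing its link to the pressure.
needle_wiggled = rng.normal(size=n)
print(np.corrcoef(needle_wiggled, weather)[0, 1])  # ~0.0: the correlation is gone

# Contrast: wiggling a genuine cause (coffee dose) still moves its effect (heart rate).
coffee     = rng.normal(size=n)                  # we set the dose at random
heart_rate = 0.8 * coffee + rng.normal(size=n)
print(np.corrcoef(coffee, heart_rate)[0, 1])     # ~0.6: survives the intervention
```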
In ordinary life, we usually take for granted that the effects of a wiggle are going to show up later than the wiggle itself. This is such a natural assumption that we don’t notice that we’re making it.
But nothing in the scientific method requires this to happen, and it is easily abandoned in fantasy fiction. Similarly in some religions, we pray that our loved ones are among the survivors of yesterday’s shipwreck, say. We’re imagining that something we do now can affect something in the past. That’s retrocausality.
The quantum threat to locality (that distant objects need a physical mediator to interact) stems from an argument by the Northern Irish physicist John Bell in the 1960s. Bell considered experiments in which two hypothetical physicists, Alice and Bob, each receive particles from a common source. Each chooses one of several measurement settings, and then records a measurement outcome. Repeated many times, the experiment generates a list of results.
Bell realised that quantum mechanics predicts that there will be strange correlations (now confirmed) in this data. They seemed to imply that Alice’s choice of setting has a subtle “nonlocal” influence on Bob’s outcome, and vice versa – even though Alice and Bob might be light years apart. Bell’s argument is said to pose a threat to Albert Einstein’s theory of special relativity, which is an essential part of modern physics.
But that’s because Bell assumed that quantum particles don’t know what measurements they are going to encounter in the future. Retrocausal models propose that Alice’s and Bob’s measurement choices affect the particles back at the source. This can explain the strange correlations, without breaking special relativity.
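To see what "strange" means quantitatively, here is the standard CHSH form of Bell's argument in a few lines of Python, using the usual textbook measurement angles rather than the settings of any particular experiment. Any model in which the particles' properties are fixed at the source, blind to the future measurement choices, caps the combined correlation score at 2; quantum mechanics predicts 2√2.

```python
import numpy as np

# Quantum prediction for a pair of spin-1/2 particles in the singlet state:
# the correlation of Alice's and Bob's outcomes is E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH settings (radians): two measurement choices each for Alice and Bob.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~2.83 = 2*sqrt(2), beyond the bound of 2 for "future-blind" local models
```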
In recent work, we’ve proposed a simple mechanism for the strange correlation – it involves a familiar statistical phenomenon called Berkson’s bias.
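Our proposal itself involves more than this, but Berkson's bias – a correlation conjured purely by selecting on a common effect – is easy to see with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent quantities: no causal link, no correlation across the whole population.
x = rng.normal(size=n)
y = rng.normal(size=n)
print(np.corrcoef(x, y)[0, 1])                       # ~0.0

# Keep only cases where their *sum* clears a threshold (selection on a common effect).
selected = (x + y) > 1.0
print(np.corrcoef(x[selected], y[selected])[0, 1])   # ~-0.6: correlation from selection alone
```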
There’s now a thriving group of scholars who work on quantum retrocausality. But it’s still invisible to some experts in the wider field. It gets confused with a different view called “superdeterminism”.
Superdeterminism agrees with retrocausality that measurement choices and the underlying properties of the particles are somehow correlated.
But superdeterminism treats it like the correlation between the weather and the barometer needle. It assumes there’s some mysterious third thing – a “superdeterminer” – that controls and correlates both our choices and the particles, the way atmospheric pressure controls both the weather and the barometer.
So superdeterminism denies that measurement choices are things we are free to wiggle at will; instead, they are predetermined. Free wiggles would break the correlation, just as in the barometer case. Critics object that superdeterminism thus undercuts core assumptions necessary to undertake scientific experiments. They also say that it means denying free will, because something is controlling both the measurement choices and particles.
These objections don’t apply to retrocausality. Retrocausalists do scientific causal discovery in the usual free, wiggly way. We say it is folk who dismiss retrocausality who are forgetting the scientific method, if they refuse to follow the evidence where it leads.
What is the evidence for retrocausality? Critics ask for experimental evidence, but that’s the easy bit: the relevant experiments just won a Nobel Prize. The tricky part is showing that retrocausality gives the best explanation of these results.
We’ve mentioned the potential to remove the threat to Einstein’s special relativity. That’s a pretty big hint, in our view, and it’s surprising it has taken so long to explore it. The confusion with superdeterminism seems mainly to blame.
In addition, we and others have argued that retrocausality makes better sense of the fact that the microworld of particles doesn’t care about the difference between past and future.
We don’t mean that it is all plain sailing. The biggest worry about retrocausation is the possibility of sending signals to the past, opening the door to the paradoxes of time travel. But to make a paradox, the effect in the past has to be measured. If our young grandmother can’t read our advice to avoid marrying grandpa, meaning we wouldn’t come to exist, there’s no paradox. And in the quantum case, it’s well known that we can never measure everything at once.
Still, there’s work to do in devising concrete retrocausal models that enforce this restriction that you can’t measure everything at once. So we’ll close with a cautious conclusion. At this stage, it’s retrocausality that has the wind in its sails, so hull down towards the biggest prize of all: saving locality and realism from “death by experiment”.
As a young professor of chemistry at Harvard in the 2000s, David Liu was trying to accelerate evolution.
This quest took place on the cellular level. Inside every cell in the body, molecules known as proteins act like tiny machines, carrying out biological functions—their efficiency honed by eons of natural selection. And scientists had discovered ways to engineer proteins that were even more efficient, or were built to fix specific problems.
But the work of changing the makeup of a protein was slow: a graduate student would chivvy evolution along by hand, painstakingly altering the experimental conditions once a week or so, and it would be a year before they’d know whether the whole thing had failed or succeeded.
Now, Liu turned a thought over in his mind: could a clever researcher press fast-forward on the process so dozens of new forms rose and fell in the space of a day?
Liu and his graduate student Kevin Esvelt envisioned a way to do this by using a phage—a type of virus that infects bacteria, whose life cycle, crucially, can be as short as 10 minutes. By giving a phage instructions to create a specific protein, then rewarding subsequent generations that successfully produced that protein, they were able to replicate the natural selection process with surprising speed.
In 2011, when the team announced their results in Nature, they could run 200 generations in about eight days. In less than one week, they evolved three new enzymes, their purposes custom-designed by Liu and his colleagues. This process, called phage-assisted continuous evolution, or PACE, has since become a powerful tool for scientists working to advance research and cure diseases. “[It] really reminds me of some of the real classics in the field,” says Jennifer Doudna, Ph.D. ’89, S.D. 2023, a professor of biochemistry at the University of California, Berkeley and a recipient of the 2020 Nobel Prize in Chemistry.
Since then, Liu—now Cabot professor of the natural sciences—has become a leading pioneer of inventive processes that operate on the molecular level in living organisms. He has also become renowned, in his lab and beyond, as a scientist who seems to be in perpetual motion: existing on little sleep, spinning off on side pursuits, and developing companies to springboard his laboratory insights into real-world use.
During the past 18 months alone, two treatments based on his research—one for a genetic lung disease, the other for a lethal metabolic disorder—have made headlines, as did advanced mouse studies focused on curing progeria, a disease that makes children age prematurely. And there’s more to come.
Today, Liu’s office at Harvard is decorated with things that move: toys and oddities that spin in the air. (Liu is also the Merkin professor at the Broad Institute and a Howard Hughes Medical Institute investigator.) Their persistent energy matches that of the man himself—the same energy that was evident from his youth. Liu grew up in Riverside, California, where orange groves and desert stand side by side. He remembers his father, an aerospace engineer, leaving at 4 a.m. for a two-hour commute to El Segundo, the aerospace hub where his company was located. Liu’s mother, a physics professor, held down the fort at home with Liu and his sister and became one of the first female tenured faculty members in her department at the University of California, Riverside.
Liu’s parents didn’t exert pressure on their son to be a scientist. It was his childhood spent outdoors, watching the natural world, that set him on that path. All day at school he looked forward to going out into his backyard, a wilderness of weeds and insects. He was particularly fascinated by ants and their ability to somehow leave trails, invisible to humans, for each other to follow. Was it possible, he wondered, to remove whatever chemical it was they sensed and lay a new path to somewhere else? “I wanted to ask and answer my own questions,” he says. “I was curious about how things worked.”
At 18, Liu participated in the Junior Science and Humanities Symposium and won a trip to the 1990 Nobel Prize award ceremony in Stockholm. There, he watched Harvard chemist E.J. Corey give his Nobel lecture. Corey is a mastermind of organic synthesis—the branch of chemistry that uses complex cascades of reactions to build molecules. Afterwards, Liu, then a freshman at Harvard, approached the Nobelist.
“I asked him a question about insect juvenile hormone,” Liu recalls. How could one part of this molecule change, while other parts, apparently identical, stayed untouched? Corey explained that he knew how the molecule would fold up, protecting some areas, exposing others to change.
“It was pretty awesome, which just made him seem even more like some kind of chemistry god to me,” Liu recalls. He exhales, a little self-consciously, and continues, “Then I asked him, while we were at this Nobel Prize lecture, if I could work in his lab.” Corey gently suggested that Liu take some organic chemistry classes first, then check in with him later.
With the help of his organic chemistry professor, Joseph Grabowski, Liu made Corey’s acquaintance again at Harvard, eventually joining his research group. With the elder scientist’s blessing, he decided that the place for him was where chemistry met biology. As a graduate student, he studied at the University of California, Berkeley with chemistry professor Peter Schultz, who had been tinkering with the genetic code to build, in a test tube, proteins that did not occur in nature.
Liu started working on getting such a system up and running in living cells, developing a process—which penalized cells with enzymes that functioned normally and rewarded cells with those that acted unusually—that would reshape protein evolution in the laboratory. Other scientists noticed: after earning his Ph.D. in 1999, he was hired at Harvard as an assistant professor.
“It was an interesting time. I had no idea what I was doing, which may have been the biggest source of awkwardness,” Liu recalls. “But also now, only five years after leaving Harvard as an undergrad, I was suddenly a colleague of all of these professors.”
He didn’t want to presume, so he defaulted to extreme deference. “I would say, ‘Oh, thank you, Professor Schreiber, for that comment. Thank you, Professor Corey.’ And I remember, after one of the faculty meetings in my first fall, one of the professors pulled me aside and said, ‘David, we think you should just call us by our first names.’”
As he got his research group off the ground, Liu wanted to continue evolving proteins. He was particularly inspired by a paper from Martin Wright and Gerald Joyce at the Scripps Research Institute, which focused on the molecules of RNA. Lab evolution of both custom proteins and custom RNAs had always taken a long time. But Wright and Joyce, remarkably, set up a system in which the RNA evolved continuously, day and night, spinning through about 300 generations in 52 hours. “I just thought the idea was so amazing,” Liu says, “and it remains one of the most beautiful papers I’ve ever read.”
He showed it to Esvelt as they brainstormed a way to evolve proteins with similar speed. Esvelt had the idea to start with a virus called the filamentous bacteriophage, which infects the bacterium E. coli. First, he put a gene for a protein he wanted to evolve into the virus. Then he took some of the machinery the virus needed to live and put it in E. coli. Finally, he engineered the E. coli so that it would only hand over what the virus needed if that targeted protein was produced at high levels.
The virus faced enormous pressure to make this protein. At the same time, each generation provided a new chance for interesting new mutations in the gene to creep in. By speeding up or slowing down the flow of fresh host cells into a vessel called the lagoon, Esvelt controlled the pace of the evolution.
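Stripped of the biochemistry, the selection loop that PACE automates is just: mutate, demand a minimum level of activity, and repeat. The toy below bears no resemblance to the real molecular machinery; it uses an invented "target motif" as a stand-in for enzyme activity, purely to show the shape of the loop.

```python
import random

random.seed(0)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino-acid letters
TARGET = "MKVLHT"                   # hypothetical motif standing in for "an active enzyme"

def activity(seq: str) -> int:
    """Toy fitness: how many positions already match the target motif."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str) -> str:
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

population = ["AAAAAA"] * 100
for _ in range(50):
    population = [mutate(s) for s in population]            # new variants every generation
    best_score = max(activity(s) for s in population)       # the selection bar
    survivors = [s for s in population if activity(s) == best_score]
    population = [random.choice(survivors) for _ in range(100)]

best = max(population, key=activity)
print(best, activity(best))   # best variant after 50 rounds, typically at or near a full match
```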
It was an intense period in Esvelt’s life—Liu, he says, seems to need only about four hours of sleep a night, and they were often trading emails and ideas at unusual times, racing to generate new data in order to apply for funding for the next phase of the PACE project.
“We never lost hope at the same time,” Esvelt says. “I persisted in the times when he had lost it, and he kept me going through the more frequent times when I was frustrated.”
With PACE, Liu had a way to evolve enzymes with custom purposes. Having done this, Liu set his sights on another ambitious target: editing the code of life itself. It was around this time that CRISPR-Cas9, a system for cutting the genome, was first being described—Doudna, one of its developers, recalls talking with Liu about it early on. CRISPR tools allow scientists to bind and snip DNA and introduce other genetic material.
CRISPR on its own is not a medicine, however. “I quickly realized that most of the genetic diseases that one might want to treat with genome editing could not be treated [only] by cutting DNA, because cutting DNA disrupts the gene,” says Liu. “Instead, they needed to be treated by correcting a mutation back to a healthy sequence.” What if, he wondered, we could make enzymes that would actually reverse a mutation chemically?
DNA can be pictured as two long ribbons of letters, running parallel to each other. Each letter stands for a particular type of nucleic acid base, and each base has its own partner on the matching ribbon: adenine (A) binds to thymine (T), and cytosine (C) binds to guanine (G). Many dangerous mutations are the result of a simple alteration of one pair of bases. Sickle cell anemia, for instance, is the result of an A turned to a T.
With postdoctoral fellow Alexis Komor, Liu discussed ways to alter a mutated base pair back to the healthy version. Komor brought three elements together: an enzyme that could alter a single C, a targeting system from CRISPR-Cas9 to aim it at the right part of the genome, and another protein that solidified the change. In 2016, she and Liu described using this setup to cleanly convert a C:G pair to a T:A pair in living cells, without the organism missing a beat; the double helix was never sliced.
A flurry of innovation filled the next few years. In 2017, Liu and graduate student Nicole Gaudelli described a technique for going the other way: making an A:T pair into a G:C pair, an effort Liu attributes to “heroic” work by Gaudelli. And in 2019, Andrew Anzalone and Liu published another method for tackling yet more mutations. Anzalone, also a graduate student, completed his complex project in a year and eight days.
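As purely conceptual bookkeeping (a made-up sequence, none of the enzymology), the logic of those edits is that changing one letter on a strand obliges its partner letter to change with it:

```python
# Conceptual bookkeeping only -- a toy model of base pairs, not the biochemistry.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the partner ribbon for a stretch of DNA letters."""
    return "".join(PAIR[base] for base in strand)

def base_edit(strand: str, position: int, new_base: str) -> tuple[str, str]:
    """Swap one letter and keep the two ribbons consistent (what a base editor achieves chemically)."""
    edited = strand[:position] + new_base + strand[position + 1:]
    return edited, complement(edited)

top = "GACCTC"                           # invented sequence
print(top, complement(top))              # GACCTC  CTGGAG

# Cytosine base editing: a C:G pair becomes a T:A pair.
print(base_edit(top, 3, "T"))            # ('GACTTC', 'CTGAAG')

# Adenine base editing: an A:T pair becomes a G:C pair.
print(base_edit(top, 1, "G"))            # ('GGCCTC', 'CCGGAG')
```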
The response from scientists exploring gene editing to correct diseases was immediate. “It’s a great tool for the field to be able to make targeted changes in one step,” Doudna reflected. There have been at least 23 clinical trials using the techniques, focusing on illnesses ranging from lung cancer to metabolic diseases.
Targeted editing could help with more than just genetic diseases. Because Liu’s techniques provide very precise, clean ways to alter DNA, they can improve existing treatments where DNA is edited, such as CAR-T-cell immunotherapy for cancer. In this treatment, a patient’s own immune cells are removed from the body and given genes that allow them to make receptors that recognize their specific type of cancer cell. When the immune cells are reintroduced into the patient’s body, they hunt down the cancer, often curing the disease altogether.
In 2021, Alyssa Tapley, a 12-year-old in England, was diagnosed with leukemia. Chemotherapy and a bone marrow transplant failed. In 2022, she had a few weeks left to live when her hospital arranged for her to join a trial using Liu’s techniques. Her immune cells were altered so they could target the cancer cells, and Tapley lived. She is still alive and cancer-free. In April 2025, when Liu was awarded the Breakthrough Prize, a major international science award, Tapley attended the ceremony.
Liu brings the same kind of intensity he displays in his research to other aspects of life. His office is lined with geological specimens, labeled with information about where they were collected, which he gives away to his students as, one by one, they graduate. “When I first joined,” Esvelt recalls, “he had taken up painting and become quite good at that. Then he took up woodworking and installed a giant lathe system in his basement and made some really amazing wooden bowls out of various pieces of wood that he found while hiking or walking along the beach.”
Next came an exploration of photography and optics. “You can sort of sense,” Esvelt says, “he wants to be able to do everything.” (“Ask him about his skills in poker,” Doudna suggests.)
That ceaseless energy shapes the environment of his lab, where he long ago stopped hiring people for specific achievements but instead focused on bringing in researchers he thought would work well with others. Although the goal was never publishing for publishing’s sake, on average, he estimates, the lab publishes a paper nearly every 17 days and has maintained this pace for about five years.
“What made it so fruitful is that he just really allowed me and others to be who we are,” says Gaudelli. “There was a sense of freedom.” She felt that there was a current of quiet confidence running through the lab; if you could imagine something, you could build it.
“He has this internal locomotive,” Anzalone adds, “that really pushes him to get the most.” He laughs, remembering the time when he and Liu were preparing to submit their gene editing paper. Liu would wake up at 4 a.m. and edit Anzalone’s draft. During the day, Anzalone would perform more experiments, chasing down final details. At night, Anzalone would add to the draft, working until about 2 a.m. Two hours later, Liu would be up to work on it again.
In March, news broke that one of Liu’s editing techniques had helped do something remarkable: researchers had used it to cure a baby boy’s lethal metabolic disorder. It was an astounding achievement that demonstrated the potential that stems from basic science.
At the same time, the Trump administration was beginning its campaign against science funding. Billions of research dollars have since been frozen or canceled nationwide; a federal judge recently ordered the administration to reinstate more than $2 billion in funding to Harvard, though the White House said it plans to appeal the ruling. Now, Liu is very worried about how to protect the next generation of scientists from the storm. Students and postdoctoral researchers need time and freedom to think, he says, to live without fear that their institutional support will just evaporate.
Even before federal funding for research was suddenly stripped from labs across the country, the frailty of the support system for young scientists concerned him. In 2020, after fruitlessly exploring other avenues for securing better pay for his lab members, Liu quietly decided to start dividing his salary among them every year.
Liu has co-founded many companies, three of which are publicly traded. Anzalone, after finishing his doctorate, went on to help lead Prime Medicine, a new start-up co-founded by Liu to commercialize the gene editing technique they’d developed together. Along with CRISPR pioneers Doudna and Feng Zhang, Liu also helped found Editas Medicine, which explores ways to use this technology to treat serious diseases. Gaudelli joined Beam Therapeutics, where she spent the past few years advancing a gene editor that may be used to treat sickle cell anemia. (She has since moved on to become an entrepreneur in residence at Google Ventures.)
“Accumulating wealth is not anywhere in the top 100 things I want to do,” Liu says. “My students and the work that we do, and the patients who reach out, and the families like Alyssa [Tapley]’s—these are all so much more meaningful than how many digits are in a bank account.”
And while Prime, Beam, and Editas are vehicles that aim to produce treatments and medicines that change lives, he notes, they draw their strength from the academic science that was their inspiration. Their vigor is linked to the freedom scientists have to imagine things that have yet to exist.
“Science matters,” Liu says. “Universities matter.” Enormous dividends are paid over decades by the funding of young scientists just getting underway, just beginning to imagine what comes next, Liu says, “who will go on to make their own discoveries, or to teach, or to work in industry to help develop the next great drug.”
Gaudelli thinks back to the way the Liu lab seemed to encourage transformative thinking, such as the idea that a letter of genetic code could be changed almost as easily as a light switch can be flipped. “I was allowed to imagine something that seemed impossible,” she says. “And then, one day, it was not.”
Hydraulic mechanics may indeed have been the driving force behind the construction of ancient Egyptian pyramids.
In a preprint paper, scientists concluded that the Step Pyramid of Djoser in Saqqara, Egypt—believed to be the oldest of the seven monumental pyramids and potentially constructed about 4,500 years ago—offers a remarkable blueprint for hydraulic engineering.
The hydraulically powered mechanism could have maneuvered the oversized stone blocks forming the pyramid, starting from the ground up. The research team says the Step Pyramid’s internal architecture is consistent with a hydraulic elevation mechanism, something that has never before been reported for that place or time.
By lifting the stones from the interior of the pyramid in what the authors call a “volcano fashion,” the water pressure from the hydraulic system could have pushed the blocks into place. If proved out, this research shows the Egyptians had a powerful understanding of advanced hydraulic systems well before modern scholars believed they did. That raises the question: Was this the first major use of the system, or had it been in play previously?
No matter the answer, pulling it off at the Step Pyramid would have been no easy feat.
Based on the mapping of nearby watersheds, the team believes that one of the massive—and as yet unexplained—Saqqara structures, known as the Gisr el-Mudir enclosure, has the features of a check dam intended to trap sediment and water. The scientists say the topography beyond the dam suggests a possible temporary lake west of the Djoser complex, with water flow surrounding it in a moat-like design.
As a Nile tributary fed the area, a dam could have created a temporary lake, potentially linking the river to a “Dry Moat” around the Djoser site, helping move materials and serving the hydraulic needs.
“The ancient architects likely raised the stones from the pyramid center in a volcano fashion using the sediment-free water from the Dry Moat’s south section,” the authors write.
In one section of the moat, the team found that a monumental linear rock-cut structure consisting of successive, deep-trench compartments meets the technical requirements of a water treatment facility—a design still often seen in modern-day water treatment plants—by including a settling basin, a retention basin, and a purification system.
“Together, the Gisr el-Mudir and the Dry Moat’s inner south section work as a unified hydraulics system that improves water quality and regulates flow for practical purposes and human needs,” the authors write. The team believes the water available in the area was sufficient to meet the needs of the project.
Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, there is almost an absolute guarantee that with any paper in any field of science that you could possibly read, a computer was involved in some step of the process.
Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.
The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.
The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end and have all the information you needed to evaluate it right there at your fingertips.
That’s now largely impossible with the modern scientific enterprise so reliant on computers.
To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.
Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.
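For readers outside academia, the h-index that keeps coming up works roughly like this (a standard definition of the metric, sketched here for illustration with invented citation counts):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 1]))   # 4: four papers each cited at least four times
```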
This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?
I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.
Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.
What’s more, the first line of defense for combatting wrongness—peer review—is fundamentally broken, with absolutely no guarantees of filtering out innocent mistakes and guilty fraud. Peer reviewers don’t have the time, effort, or inclination to pick over new research with a fine-toothed comb, let alone give it more than a long glance. And even if a referee is digging a little too deeply, whether revealing some fatal flaw or sniffing at your attempt at fraud, you can simply go journal shopping and take your work somewhere more friendly.
With the outer walls of scientific integrity breached through the failures of peer review, the community must turn to a second line of defense: replication.
Replication is the concept that another scientist—preferably a competing scientist with an axe to grind because they would love nothing more than to take down a rival—can repeat the experiment as described in the published journal article. If they get the same result, hooray! Science has progressed. If they get a different result, hooray! Science has progressed. No matter what, it’s a win.
Too bad hardly anybody bothers to replicate experiments like this, and when they do, they more often than not find that the original results don’t hold up.
This is called the “replication crisis” in science. It first reared its ugly head in the 2010s in the field of psychology, but I don’t think any field is immune.
There’s a complex swirling septic tank of problems that all contribute to the replication crisis, but the first issue is that replication isn’t sexy. You don’t get to learn new things about the world around us; you just get to confirm whether someone else learned new things about the world around us. As an added ugly bonus, nonresults often don’t even get published. Novelty is seen as a virtue, and if you run an experiment and it doesn’t provide a positive result, journals are less likely to be interested in your manuscript. Additionally, because replication isn’t seen as sexy, when it is done, it isn’t read. Replication studies do not get published in high-impact-factor journals, and authors of replication studies do not get as many citations for their work. This means that their h-index is lower, which lowers their chances of getting grants and promotions.
Second, replication is becoming increasingly hard. With sophisticated tools, mountains of data, complex statistical analyses, elaborate experimental design, and lots and lots of code, in many cases, you would have no hope of attempting to repeat someone else’s work, even if you wanted to.
The end result of all this, as I’ve said before, is that fraud is becoming increasingly common, increasingly hard to detect, and increasingly hard to correct.
The lackluster mirage that is peer review doesn’t adequately screen against mistakes, and with the increasing complexity of science (and with it of scientific discourse), fraud just slips on by, with too many layers between the fundamental research and the end product.
It looks like science is getting done, but is it really?
Sure, retractions and corrections appear here and there, but they are far too few and far between given the amount of fraud, both hard and soft, that we know exists. The conditions in every field of science are ripe for fraud to happen, and nobody has developed a decent enough policing system to root it out—in one case, a journal issued a retraction 35 years after being notified. The kind of investigations that lead to full retractions or corrections don’t come often enough, and meanwhile, there’s just too much science being published. A perusal of the Retraction Watch website reveals instance after instance of fraud, often coming too late, with little to no major repercussions.
Nobody has the time to read every important paper in their field. It’s a nonstop tidal wave of analysis, math, jargon, plots, and words, words, words. Every scientist writes as much as possible, without really digging into the existing literature, adding to the noise. Heck, take my field, astrophysics, for example. Every single day there are about forty to sixty brand-new astronomy and astrophysics papers birthed into the academic world. Every single day, even holidays! According to surveys, scientists claim that they read about five papers a week. In my not-so-humble opinion, that’s a lie, or at least a big stretch, because there are different definitions of the word “read,” varying from “reading the abstract, the conclusions, and the captions of the prettier pictures” to “absorbing this work into the fabric of my professional consciousness.”
The vast majority of science published is not being consumed by the vast majority of its intended audience. Instead, most scientists just skim new titles, looking for something that stands out, or dig up references during a literature search to use as citations. We’re just running around publishing for the sake of publishing, increasing our chances of striking gold (having a major paper stand out among the noise and get a lot of citations) or at least building a high h-index by brute force of an overwhelming number of publications.
This is exactly the kind of environment where fraud not only exists but abounds. Who cares if you fudge a number here or there—nobody’s actually going to read the thing, not even the referee! And the conclusions you draw in your paper will form the basis of yet more work—heck, maybe even an entire investigative line.
So why doesn’t it stop? We all know and agree that fraud is bad, that uncorrected mistakes are bad, that impossible-to-replicate results are bad. If we all agree that bad things are bad, why don’t we make the bad things go away?
Put simply, there’s no reason to do better. The incentive in science right now is “keep publishing, dang it,” not to waste your time doing careful peer reviews, making your code public, or replicating results.
I’m not saying that most scientists are attempting to pull off willful fraud. I do believe that most scientists are good people trying to do good things in the world. But with all the incentives aligned the way they are, it’s easy to convince yourself that what you’re doing isn’t really fraud. If you refuse to make your code public for scrutiny, are you committing fraud? Or righteously protecting your research investment? If you had to twist the analysis to get a publishable result, are you committing fraud? Or just finally arriving at the answer you knew you were going to get anyway? If you stop checking your work for mistakes, are you committing fraud? Or are you just… finally done, after months of hard work? If you withdraw your article from submission because the referee was getting too critical, are you committing fraud? Or are you protesting an unreasonable peer reviewer?
Scientists aren’t really incentivized to change any of these problems. The system is already in place and “works”; it allows winners and losers to be chosen, with the winners receiving grant funding, tenured positions, and fancy cars. Well, maybe not the fancy cars. Universities aren’t motivated to change these practices because, so long as their winners continue to bring in grants, it lets them roll around in giant taxpayer-funded money pits (I don’t know if this image is totally accurate, but you get the idea).
The very last group with no real motivation to change any of this is the publishers themselves. Most publishers make the scientists pay to get their research printed, and they don’t have to pay for the peer review that makes science all scientific. But the big bucks come from the subscriptions: in order to read the journal, you need to either cough up a few dozen bucks to access a single article or pay prohibitively high fees to get annual access. The only people who can afford this are the big universities and libraries, locking most of our modern scientific progress behind gilt-ivory doors.
Altogether, the scientific and technical publishing industry rakes in about ten billion dollars a year, often with double-digit profit margins. Of course they don’t want this ship to change course. I can’t really blame them; they’re just playing by the accepted rules of their own game.
Researchers have discovered that the brainstem, a part of the brain responsible for controlling basic functions like breathing and heart rate, also plays a crucial role in regulating the immune system. This finding, published in Nature on May 1, shows that brainstem cells can sense inflammatory molecules in response to injury and adjust their levels to prevent infections from damaging healthy tissues. This adds to the known functions of the brainstem and suggests new potential targets for treating inflammatory disorders such as arthritis and inflammatory bowel disease.
Homeostasis is the body's ability to maintain stable internal conditions despite external changes. The brain’s involvement in homeostasis is well-known, as it controls heart rate and blood pressure during stress. Now, this concept extends to the immune system, where brainstem neurons act as regulators. Using genetic techniques in mice, researchers identified brainstem cells that adjust immune responses to pathogens. These neurons function as a "volume controller" for inflammation. Experiments showed that the vagus nerve, which connects the body and brain, plays a central role in this immune control. Cutting the vagus nerve deactivated brainstem neurons, confirming this connection. By altering the activity of brainstem neurons, researchers could control inflammation levels. Increasing activity reduced inflammation, while decreasing it led to excessive inflammatory responses.
This study reveals that the brainstem’s regulation of inflammation could be leveraged to treat immune disorders. By targeting these neural circuits, new treatments for conditions such as rheumatoid arthritis and inflammatory bowel disease may be developed. Historically, the brain was viewed mainly as the center of higher functions like memory and emotion. However, this study highlights its significant role in monitoring and regulating the body's physiological states, including the immune system.
Further research could explore additional brain circuits involved in immune regulation. Understanding these pathways may lead to innovative therapies for managing inflammation in various diseases. This discovery adds a new dimension to our understanding of the brain-body connection. By identifying the brainstem's role in controlling inflammation, researchers have opened new avenues for treating immune-related disorders, potentially transforming medical approaches to managing inflammation and maintaining overall health.
The implications of this study are far-reaching. For instance, conditions like rheumatoid arthritis, inflammatory bowel disease, and other inflammatory disorders could see new treatment strategies that specifically target these neural circuits. The ability to control these circuits could also help manage inflammation in a more precise manner, reducing the side effects often associated with broad-spectrum anti-inflammatory drugs.
The study also prompts a reevaluation of the brain’s role in overall health. Previously, the brain was primarily considered the seat of memory, emotion, and other higher-order functions. However, this research underscores the brain’s integral role in monitoring and regulating vital physiological processes. This broader understanding of the brain’s functions could lead to new insights into how various diseases develop and how they can be treated.
Further exploration into the brainstem's role in immune regulation could reveal more about how the nervous system interacts with the immune system. This could uncover additional therapeutic targets and improve our ability to treat a wide range of inflammatory and autoimmune conditions. The discovery of this brain-body connection represents a significant advance in neuroscience and immunology, highlighting the importance of interdisciplinary research in uncovering complex biological systems.
In summary, the identification of brainstem neurons that regulate inflammation adds a crucial piece to the puzzle of how the body maintains balance and responds to threats. This research opens new pathways for developing targeted treatments for inflammatory diseases and enhances our understanding of the intricate connections between the brain and the immune system. For more details, the full study can be found in Nature.
The Milky Way's history is more recent and dynamic than previously believed. This revelation comes from data collected by the European Space Agency's Gaia spacecraft, which is mapping over a billion stars in the Milky Way and beyond. Gaia tracks their motion, luminosity, temperature, and composition, offering unprecedented insights into our galaxy’s past.
The Milky Way has expanded over time by absorbing smaller galaxies. Each collision left ripples in the star populations, altering their movement and behavior. By examining these ripples, researchers aim to decode the Milky Way’s history, focusing on the positions and motions of over 100,000 nearby stars—just a small sample of the roughly two billion celestial bodies Gaia observes.
Dr. Thomas Donlon from Rensselaer Polytechnic Institute and the University of Alabama explained, "We get wrinklier as we age, but our work reveals that the opposite is true for the Milky Way. It’s a sort of cosmic Benjamin Button, getting less wrinkly over time." He noted that by studying how these ripples diminish, scientists can determine when the Milky Way had its last significant merger. This event occurred billions of years later than previously estimated.
The Milky Way’s halo contains a large group of stars with unusual orbits, thought to be remnants of a major merger. This merger, known as Gaia-Sausage-Enceladus, was believed to have occurred between eight and eleven billion years ago, when the Milky Way was young. However, Gaia’s latest data suggests these stars arrived during a different event, much more recent than once thought.
Dr. Heidi Jo Newberg from Rensselaer Polytechnic Institute stated, "For the wrinkles of stars to be as clear as they appear in Gaia data, they must have joined us less than three billion years ago — at least five billion years later than was previously thought." She explained that new star ripples form as stars move through the Milky Way's center. If the stars had joined eight billion years ago, the ripples would have merged into a single feature.
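A toy phase-mixing cartoon, with invented numbers and nothing like the actual Gaia analysis, shows why fresher mergers keep their wrinkles: debris stars start in step but drift out of phase at rates set by their slightly different orbital periods, so the structure blurs as gigayears pass.

```python
import numpy as np

rng = np.random.default_rng(1)
periods = rng.normal(0.25, 0.005, 100_000)   # hypothetical radial periods, in Gyr

def coherence(t_gyr: float) -> float:
    """1 = sharp, fresh wrinkles; 0 = fully phase-mixed, no wrinkles left."""
    phases = 2 * np.pi * t_gyr / periods
    return float(abs(np.mean(np.exp(1j * phases))))

for t in (0.5, 1, 3, 8):
    print(f"{t} Gyr after the merger: coherence = {coherence(t):.3f}")
```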
The findings propose that these stars came from a more recent event named the Virgo Radial Merger, which occurred less than three billion years ago. This discovery reshapes the understanding of the Milky Way’s history.
Dr. Donlon highlighted the importance of Gaia’s contributions, "The Milky Way’s history is constantly being rewritten at the moment, in no small part thanks to new data from Gaia." He added that the picture of the Milky Way’s past has changed significantly in the last decade, and this understanding will continue to evolve.
The revelation that a large portion of the Milky Way was acquired only in the last few billion years contradicts previous models. Many astronomers had considered recent major collisions with dwarf galaxies to be rare. The Virgo Radial Merger likely brought along other smaller dwarf galaxies and star clusters, all joining the Milky Way simultaneously.
Future research will determine which smaller objects, previously associated with the ancient Gaia-Sausage-Enceladus merger, are actually linked to the more recent Virgo Radial Merger.
Unveiling the Mysteries of Smell
In the realm of senses, we've mastered the art of splitting light into colors and sounds into tones. Yet, the world of odor has long remained an enigma. Is it too complex, too personal to map? Surprisingly, the answer is no.
Recent advancements have revolutionized our understanding of smell, drawing on collaborations between neuroscientists, mathematicians, and AI experts. Unlike our intuitive grasp of colors and sounds, the world of smells has eluded easy categorization. But now, a groundbreaking 'odor map' published in Science has changed the game.
This map isn't just a catalog of smells; it's a set of rules for understanding them. Just as a geographical map tells you that Buffalo is closer to Detroit than to Boston, the odor map reveals that the smell of lily is closer to grape than to cabbage. More remarkably, it allows us to pinpoint any chemical's location on the map, predicting how it smells based on its properties. It's akin to a formula that, given a city's population size and soil composition, can precisely locate Philadelphia's coordinates.
The Evolution of Odor Perception
But how do our noses create this 'odor space'? Unlike Newton's study of light or the analysis of pitch, smell defies simple tools like tuning forks. Early attempts to categorize odors, like Linnaeus' and Haller's schemes, lacked empirical rigor. They were more about intuition than data.
One bold attempt, by Hans Henning in 1916, proposed an 'odor prism' with six vertices corresponding to primary odors. While Henning's theory was flawed, it sparked a quest for the underlying principles of smell. Later efforts, like Susan Schiffman's odor maps in the 1970s, provided valuable insights but fell short of a complete solution.
The Rise of AI in Decoding Odors
Enter the age of AI. In 2017, the DREAM challenge brought AI into the fold, leading to models that could predict odors with impressive accuracy. These 'random forests' of AI models can be complex, mimicking human judgment in convoluted ways. They can predict that a chemical smells like rose based on a multitude of factors, not just its structural properties.
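As a rough sketch of that approach (invented molecular features and odor ratings, not the DREAM challenge data, with scikit-learn's RandomForestRegressor standing in for the challenge models), the pattern is: describe each molecule as numbers, then learn a mapping from those numbers to a smelled-intensity score.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Invented stand-ins: each row is a molecule described by numeric features
# (molecular weight, counts of functional groups, etc.); each target is a
# panel-style rating of how strongly it smells of "rose" on a 0-10 scale.
X_train = rng.random((200, 5))
y_train = 10 * X_train[:, 0] * (1 - X_train[:, 3]) + rng.normal(0, 0.5, 200)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

new_molecule = rng.random((1, 5))
print(f"predicted 'rose' rating: {model.predict(new_molecule)[0]:.2f}")
```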
The Osmo Revolution: Giving Computers a Sense of Smell
Osmo, a startup born from Google Brain's digital olfaction group, is at the forefront of this revolution. Led by Alex Wiltschko, Osmo is training AI models to understand smells using simplified molecular graphs. These models, inspired by the brain's processing, can compute distances and angles in 'odor space', predicting how a chemical will smell based on its relationship to others.
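Operationally, "distance in odor space" just means distance between learned coordinate vectors. The coordinates below are hypothetical and only three-dimensional (real models use far more dimensions); they simply show what the lily-grape-cabbage comparison looks like as arithmetic.

```python
import numpy as np

# Hypothetical 3-D coordinates in an "odor space".
embeddings = {
    "lily":    np.array([0.9, 0.2, 0.1]),
    "grape":   np.array([0.8, 0.3, 0.2]),
    "cabbage": np.array([0.1, 0.9, 0.7]),
}

def distance(a: str, b: str) -> float:
    return float(np.linalg.norm(embeddings[a] - embeddings[b]))

print(distance("lily", "grape"))     # small: predicted to smell similar
print(distance("lily", "cabbage"))   # large: predicted to smell very different
```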
The Future of Odor Science
The odor space isn't a simple geometric shape like a circle or prism. It's more like a rugged landscape of chemical continents, each representing a different aspect of human ecology. Two chemicals might smell alike not because they're structurally similar, but because they play similar roles in nature.
In conclusion, the study of smell has evolved from introspective musings to data-driven AI models. While we're far from fully understanding the geometry of odor, these advancements have brought us closer than ever. Perhaps smell has been the last great sensory mystery because its mathematics are the most esoteric. But with the ongoing work of researchers like those at Osmo, we're unlocking the secrets of scent, revealing a world rich in meaning and possibility.
Fusion energy, the process that powers the sun, holds immense promise as a clean and limitless energy source. For decades, scientists have grappled with the immense technical challenges of replicating this process on Earth. However, recent breakthroughs suggest significant progress, with Europe emerging as a potential frontrunner.
From Dream to Reality: Challenges and Advancements
Fusion requires creating and containing extremely hot plasma, a state of matter where atoms are stripped of electrons. Maintaining this unstable state has been a major hurdle. However, advancements in materials science, magnets, and laser technology are paving the way.
Recent achievements highlight this progress. A UK startup achieved record pressure in a fusion reaction. Europe's Joint European Torus (JET) machine set a new record for energy output. Korean researchers sustained a 100-million-degree Celsius reaction for a record 48 seconds. These milestones, along with numerous others, indicate significant strides in pressure, energy production, and reaction duration – all crucial for viable fusion power.
The 2030s: Fusion's Breakout Decade?
Experts predict a boom in the 2030s, with many aiming for operational reactors. A recent poll suggests 65% of experts believe fusion-generated electricity will be commercially viable by 2035, rising to 90% by 2040.
Fusion's appeal lies in its potential to provide clean baseload power, complementing renewable sources like wind and solar. Unlike nuclear fission, fusion produces minimal long-term waste and requires almost no cooling water. Its fuel sources, readily available isotopes of hydrogen, are practically limitless.
The Global Race Heats Up
Governments recognize the significance of fusion. The US recently allocated a record $763 million for research. China established a consortium of leading industrial giants to develop a viable fusion reactor.
Europe: A Strong Contender
Europe boasts a robust fusion research infrastructure. EUROfusion, a collaborative effort by EU member states, spearheads research and development. Their flagship project, ITER, a €22 billion reactor under construction in France, is expected to produce its first plasma next year. Other European facilities, like Germany's Wendelstein 7-X, have been instrumental for startups like Proxima Fusion.
The UK, a longstanding leader in fusion research, plays a pivotal role. The Culham Centre for Fusion Energy is a global hub, housing the recently retired JET machine and currently developing its successor – the STEP project, a grid-connected reactor aiming for net energy production.
Challenges and Opportunities for Europe
While Europe excels in research, the US enjoys a funding advantage. American startups like Commonwealth Fusion, backed by prominent figures like Bill Gates, have secured billions of dollars. This dwarfs funding available to European counterparts. Additionally, some European startups, like Germany's Marvel Fusion, are lured to the US by faster funding opportunities.
To maintain its competitive edge, Europe needs to bolster support for its startups. "Sufficient public funding and policy incentives are crucial to attract private investment," emphasizes Cyrille Mai Thanh of the Fusion Industry Association.
A Brighter Future Powered by Fusion?
Nearly 70 years after embarking on this journey, humanity is closer than ever to harnessing the power of the sun. Competition in fusion energy, driven by the urgent need for decarbonization, can only benefit everyone. The dawn of a clean and abundant energy source may be closer than we think, with Europe potentially leading the charge.
Every August, Las Vegas hosts the notorious "hacker summer camp," comprising the Black Hat and Defcon hacker conferences. Amidst this gathering, a select group of security researchers were invited to hack a Vegas hotel room, uncovering vulnerabilities in its technology.
Ian Carroll, Lennert Wouters, and their team have revealed a technique named Unsaflok, which exploits security flaws in Saflok-brand RFID-based keycard locks by Dormakaba. These locks, installed on 3 million doors worldwide, are susceptible to a method that allows intruders to open any room with just two taps on a specially crafted keycard.
The researchers discovered weaknesses in Dormakaba's encryption and the MIFARE Classic RFID system, which Saflok keycards use. By reverse-engineering Dormakaba's front desk software, they were able to create a master key that can open any room on a property.
Although Dormakaba is working on a fix, only 36 percent of installed Safloks have been updated so far. The full fix may take months to years to roll out completely. The researchers stress the importance of hotel guests knowing the risks and suggest using the NFC Taginfo app to check if their keycard is still vulnerable.
While there have been no known exploits of Unsaflok, the researchers believe the vulnerability has existed for a long time. They urge caution, advising guests to avoid leaving valuables in their rooms and to use the deadbolt as an additional safety measure.
The discovery underscores the importance of security in hospitality technology and serves as a reminder for businesses to prioritize the security of their systems.
In 2023, global emissions hit a record high, with a significant portion of the blame falling on hydropower. Droughts around the world led to a drop in generation from hydroelectric plants, forcing a reliance on fossil fuels to fill the gap.
Hydropower, a key source of renewable electricity, faced unprecedented challenges due to weather conditions last year. The decrease in hydropower generation contributed to a 1.1% rise in total energy-related emissions in 2023, with hydropower accounting for 40% of that increase, according to a report from the International Energy Agency.
Hydroelectric power plants use dams to create reservoirs, allowing water to flow through the power plant as needed to generate electricity. This flexibility is valuable for the grid, especially compared to less controllable renewables like wind and solar. However, hydropower is still dependent on weather patterns for reservoir filling, making it vulnerable to droughts.
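The physics behind that vulnerability is simple: a plant's output scales with how much water flows through its turbines and how far that water falls. A back-of-the-envelope sketch with made-up plant numbers, using the standard hydropower formula:

```python
# Back-of-the-envelope hydropower output: P = efficiency * density * g * flow * head.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def power_mw(flow_m3_s: float, head_m: float, efficiency: float = 0.9) -> float:
    return efficiency * RHO_WATER * G * flow_m3_s * head_m / 1e6

# Hypothetical plant: 100 m head, 500 m^3/s in a normal year vs. 300 m^3/s in a drought.
print(power_mw(500, 100))   # ~441 MW
print(power_mw(300, 100))   # ~265 MW -- the shortfall is what fossil plants end up covering
```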
The world added approximately 20 gigawatts of hydropower capacity in 2023. However, weather conditions caused a decrease in the overall electricity generated from hydropower. China and North America were particularly affected by droughts, leading to increased reliance on fossil fuels to meet energy demands.
Climate change is expected to further impact hydropower generation. Rising temperatures will lead to more frequent and severe droughts, while warmer winters will reduce snowpack and ice that fill reservoirs. More variability in precipitation patterns will also affect hydropower generation, with extreme rainfall events causing flooding instead of storing water for later use.
While hydropower is not expected to disappear, the future grid will need to be resilient to weather variations. A diverse range of electricity sources, combined with robust transmission infrastructure, will help mitigate the impacts of climate change on energy generation.
In conclusion, the challenges faced by hydropower in 2023 highlight the need for a flexible and diverse energy mix to meet climate goals in the face of a changing climate.