For people who use the word “science” as a bludgeon and trumpet their strict commitment to fact and reason, the Obama administration and its supporters are strangely incapable of rational analysis of new climate-change regulations.
President Barack Obama’s Environmental Protection Agency released draft rules last week to create a vast new regulatory apparatus with no input from Congress — in other words, to govern in its accustomed highhanded, undemocratic manner. The goal is to reduce carbon emissions from existing power plants, in particular coal-fired plants, to 30 percent below 2005 levels by 2030.
The rhetoric around the rules has involved self-congratulation about how they are the inexorable result of taking climate science and the reality of dangerous global warming seriously. “Science is science,” President Obama said in an open-and-shut tautology about global warming during an interview with New York Times columnist Tom Friedman. By the same token, math is math, and the new regulations make no sense.
While the regulations are stringent enough to impose real economic costs — especially in states that produce coal or heavily use coal power, or whose economies have grown relatively robustly since 2005 — they have almost no upside in fighting global warming. That’s because the U.S. is only part of the global carbon-emissions picture, and a diminishing one at that.
We account for roughly a sixth of global emissions, and our emissions have fallen the past few years more than those of any other major country. In fact, we’ve already achieved about half of the administration’s 30 percent goal, in part through the boom in natural gas, which produces half the carbon emissions of coal….
The regulatory fight against global warming runs up against this reality: Anything we do on our own short of returning to a subsistence economy is largely meaningless, while we can’t force other countries to kneecap their economies based on a fashionable cause with no immediate bearing on the well-being of their often desperately impoverished citizens.
In an attempt to square this circle, supporters of the new EPA rules say they are an exercise of American leadership that will encourage other countries to crimp their economies, especially the world’s biggest emitter, China.
How has the power of example worked so far? We are a liberal democracy. We allow a robustly free press. We don’t imprison dissenters. We don’t steal the industrial secrets of other countries and give them to companies owned by government insiders. In all these things, we provide a model for Beijing, and have done so for a long time. Yet the Chinese Politburo stubbornly pursues what it believes is in its best interest.
Why will China be shamed by our pointlessly self-flagellating new policy on power plants into adopting economically harmful regulations of its own based on speculative models showing a far-off threat of higher temperatures?
The best policy for the U.S. is not command-and-control regulation, as economics writer Jim Manzi points out, but maintaining an environment favorable to technological innovation. No one would have predicted the fracking revolution of the past few years that has both displaced coal and benefited the broader economy. But the self-declared adherents of “science” prefer the satisfaction of pointlessly self-defeating gestures.
Assistant Professor Chao-Lin Kuo surprises Professor Andrei Linde with evidence that supports cosmic inflation theory. The discovery, made by Kuo and his colleagues at the BICEP2 experiment, represents the first images of gravitational waves, or ripples in space-time. These waves have been described as the “first tremors of the Big Bang.”
In ancient times, the notion of a flat Earth was the scientific consensus, and it was only a minority who dared question this belief. We are among today’s scientists who are skeptical about the so-called consensus on climate change. Does that make us modern-day Flat Earthers, as Mr. Kerry suggests, or are we among those who defy the prevailing wisdom to declare that the world is round?
Most of us who are skeptical about the dangers of climate change actually embrace many of the facts that people like Bill Nye, the ubiquitous TV “science guy,” say we ignore. The two fundamental facts are that carbon-dioxide levels in the atmosphere have increased due to the burning of fossil fuels, and carbon dioxide in the atmosphere is a greenhouse gas, trapping heat before it can escape into space.
What is not a known fact is by how much the Earth’s atmosphere will warm in response to this added carbon dioxide. The warming numbers most commonly advanced are created by climate computer models built almost entirely by scientists who believe in catastrophic global warming. The rate of warming forecast by these models depends on many assumptions and engineering to replicate a complex world in tractable terms, such as how water vapor and clouds will react to the direct heat added by carbon dioxide or the rate of heat uptake, or absorption, by the oceans.
We might forgive these modelers if their forecasts had not been so consistently and spectacularly wrong. From the beginning of climate modeling in the 1980s, these forecasts have, on average, always overstated the degree to which the Earth is warming compared with what we see in the real climate.
For instance, in 1994 we published an article in the journal Nature showing that the actual global temperature trend was “one-quarter of the magnitude of climate model results.” As the nearby graph shows, the disparity between the predicted temperature increases and real-world evidence has only grown in the past 20 years.
When the failure of its predictions becomes clear, the modeling industry always comes back with new models that soften its previous warming forecasts, claiming, for instance, that an unexpected increase in the human use of aerosols had skewed the results. After these changes, the models tended to agree better with the actual numbers that came in—but the forecasts for future temperatures have continued to be too warm.
The modelers insist that they are unlucky because natural temperature variability is masking the real warming. They might be right, but when a batter goes 0 for 10, he’s better off questioning his swing than blaming the umpire.
The dinosaurs, birds, and early mammals found in the fossil beds of northern China are famous—both for their exceptional preservation and for their incredible diversity. But no one knew how they died or why hundreds of creatures from different habitats were buried together on ancient lake floors.
Now researchers say they were likely killed by a series of volcanic eruptions more than 120 million years ago. The ash entombed and preserved them, much like the doomed victims of Pompeii.
After analyzing fossils and sediment, Baoyu Jiang of China’s Nanjing University and his team concluded that lethal, sudden pyroclastic volcanic eruptions marked by air blasts, hot gas, and ground-hugging clouds of fine ash likely smothered, charred, and then carried forward everything in their path to create these bone beds, according to the study published in Nature Communications.
The finding explains why so many creatures would come to be buried on lake floors, and how they remained well preserved enough to retain signs of soft tissue features, such as feathers, tens of millions of years later.
Throughout much of the 20th century, the academic community had little patience with alchemists and their vain efforts to transmute base metals into gold. Any contemporary scholar who even dared to write about alchemy, historian Herbert Butterfield warned, would “become tinctured with the kind of lunacy they set out to describe.”
But, in the 1980s, some revisionist scholars began arguing that alchemists actually made significant contributions to the development of science. Historians of science began deciphering alchemical texts—which wasn’t easy. The alchemists, obsessed with secrecy, deliberately described their experiments in metaphorical terms laden with obscure references to mythology and history. For instance, text that describes a “cold dragon” who “creeps in and out of the caves” was code for saltpeter (potassium nitrate)—a crystalline substance found on cave walls that tastes cool on the tongue.
This painstaking process of decoding allowed researchers, for the first time, to attempt ambitious alchemical experiments. Lawrence Principe, a chemist and science historian at Johns Hopkins University, cobbled together obscure texts and scraps of 17th-century laboratory notebooks to reconstruct a recipe to grow a “Philosophers’ Tree” from a seed of gold. Supposedly this tree was a precursor to the more celebrated and elusive Philosopher’s Stone, which would be able to transmute metals into gold. The use of gold to make more gold would have seemed entirely logical to alchemists, Principe explains, like using germs of wheat to grow an entire field of wheat.
Principe mixed specially prepared mercury and gold into a buttery lump at the bottom of a flask. Then he buried the sealed flask in a heated sand bath in his laboratory.
One morning, Principe came into the lab to discover to his “utter disbelief” that the flask was filled with “a glittering and fully formed tree” of gold. The mixture of metals had grown upward into a structure resembling coral or the branching canopy of a tree minus the leaves.
Changing the agricultural game is what Monsanto does. The company whose name is synonymous with Big Ag has revolutionized the way we grow food—for better or worse. Activists revile it for such mustache-twirling practices as suing farmers who regrow licensed seeds or filling the world with Roundup-resistant superweeds. Then there’s Monsanto’s reputation—scorned by some, celebrated by others—as the foremost purveyor of genetically modified commodity crops like corn and soybeans with DNA edited in from elsewhere, designed to have qualities nature didn’t quite think of.
So it’s not particularly surprising that the company is introducing novel strains of familiar food crops, invented at Monsanto and endowed by their creators with powers and abilities far beyond what you usually see in the produce section. The lettuce is sweeter and crunchier than romaine and has the stay-fresh quality of iceberg. The peppers come in miniature, single-serving sizes to reduce leftovers. The broccoli has three times the usual amount of glucoraphanin, a compound that helps boost antioxidant levels. Stark’s department, the global trade division, came up with all of them.
“Grocery stores are looking in the produce aisle for something that pops, that feels different,” Avery says. “And consumers are looking for the same thing.” If the team is right, they’ll know soon enough. Frescada lettuce, BellaFina peppers, and Beneforté broccoli—cheery brand names trademarked to an all-but-anonymous Monsanto subsidiary called Seminis—are rolling out at supermarkets across the US.
But here’s the twist: The lettuce, peppers, and broccoli—plus a melon and an onion, with a watermelon soon to follow—aren’t genetically modified at all. Monsanto created all these veggies using good old-fashioned crossbreeding, the same technology that farmers have been using to optimize crops for millennia. That doesn’t mean they are low tech, exactly. Stark’s division is drawing on Monsanto’s accumulated scientific know-how to create vegetables that have all the advantages of genetically modified organisms without any of the Frankenfoods ick factor….
[G]enetically modifying consumer crops proved to be inefficient and expensive. Stark estimates that adding a new gene takes roughly 10 years and $100 million to go from a product concept to regulatory approval. And inserting genes one at a time doesn’t necessarily produce the kinds of traits that rely on the interactions of several genes. Well before their veggie business went kaput, Monsanto knew it couldn’t just genetically modify its way to better produce; it had to breed great vegetables to begin with. As Stark phrases a company mantra: “The best gene in the world doesn’t fix dogshit germplasm.”
What does? Crossbreeding. Stark had an advantage here: In the process of learning how to engineer chemical and pest resistance into corn, researchers at Monsanto had learned to read and understand plant genomes—to tell the difference between the dogshit germplasm and the gold. And they had some nifty technology that allowed them to predict whether a given cross would yield the traits they wanted.
The key was a technique called genetic marking. It maps the parts of a genome that might be associated with a given trait, even if that trait arises from multiple genes working in concert. Researchers identify and cross plants with traits they like and then run millions of samples from the hybrid—just bits of leaf, really—through a machine that can read more than 200,000 samples per week and map all the genes in a particular region of the plant’s chromosomes.
They had more toys too. In 2006, Monsanto developed a machine called a seed chipper that quickly sorts and shaves off widely varying samples of soybean germplasm from seeds. The seed chipper lets researchers scan tiny genetic variations, just a single nucleotide, to figure out if they’ll result in plants with the traits they want—without having to take the time to let a seed grow into a plant. Monsanto computer models can actually predict inheritance patterns, meaning they can tell which desired traits will successfully be passed on. It’s breeding without breeding, plant sex in silico. In the real world, the odds of stacking 20 different characteristics into a single plant are one in 2 trillion. In nature, it can take a millennium. Monsanto can do it in just a few years.
And this all happens without any genetic engineering. Nobody inserts a single gene into a single genome.
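The screening logic described above—chip a sample from each seed, read its markers, and grow out only the seeds that carry every desired trait—can be sketched in a few lines. This is a minimal, hypothetical illustration; the locus names, seed IDs, and marker calls are invented and bear no relation to Monsanto’s actual data or software.

```python
# Hypothetical sketch of marker-assisted seed screening: each chipped seed
# yields a set of marker calls, and only seeds carrying every desired
# marker are kept for planting. All names and data are invented.

DESIRED_MARKERS = {"crunch_locus_A", "sweetness_locus_B", "freshness_locus_C"}

def carries_all_markers(marker_calls, desired=frozenset(DESIRED_MARKERS)):
    """Return True if a seed's marker calls include every desired marker."""
    return desired.issubset(marker_calls)

# Toy "seed bank": marker calls read from each chipped seed.
seed_bank = {
    "seed_001": {"crunch_locus_A", "sweetness_locus_B"},
    "seed_002": {"crunch_locus_A", "sweetness_locus_B", "freshness_locus_C"},
    "seed_003": {"freshness_locus_C"},
}

# Keep only seeds predicted to stack all desired traits.
keepers = [sid for sid, calls in seed_bank.items() if carries_all_markers(calls)]
print(keepers)  # only seed_002 carries all three markers
```

The point of the filter is the one the article makes: the screen runs on seed chips, so no field trial is needed to discard seeds that cannot stack the wanted traits.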
Flu vaccines have worked on the same principles since investigators first made them in the 1940s. Each vaccine contains flu antigens—bits of viral molecules that can trigger an immune response. The antigens used in routine flu vaccines are fragments of a mushroom-shaped protein, called a hemagglutinin, that protrudes from a flu virus’s surface and helps the pathogen cling to cells inside an infected individual. Once exposed to those bits of protein, a person’s immune system produces sentinel molecules called antibodies that will recognize any flu virus possessing the same hemagglutinin and direct an attack against it.
Flu is a rapidly evolving virus, however, and the structure of hemagglutinin in a given strain changes in small ways every season. Even a minor alteration can make it much more difficult for the immune system to identify and eliminate a flu virus that is nearly identical to its earlier version. This is why we have to get new flu shots every year.
Scientists have searched for decades for a way to outsmart the flu virus rather than always hurrying to outpace it. The first glimpse of more efficient vaccines appeared in 1993, when Japanese researchers discovered that mice sometimes generate a single antibody that blocks infection by two flu strains with different hemagglutinins. Fifteen years later several different teams demonstrated that humans occasionally make these cross-protective, or broadly neutralizing, antibodies as well. Most of these antibodies bind not to a hemagglutinin’s mushroom cap but rather to its slender stem—a region of the molecule where, as it turns out, less structural mutation takes place. Because the stem’s makeup is similar across many strains of flu, the researchers reasoned, an antibody that recognizes it could potentially protect against a range of viral strains with distinct caps.
Building on this discovery, several groups have altered the structure of hemagglutinins, creating a cap to which the immune system does not react. Animals exposed to these tweaked proteins produce cross-protective antibodies that bind to the stalk rather than strain-specific antibodies that home in on the cap. Other scientists are trying to get animals and people to make antibodies against a different viral protein, M2, which is embedded in the flu virus’s membrane and helps it enter cells. Like the hemagglutinin stalk, M2 changes little.
Additional teams are focusing on completely different strategies, such as designing a vaccine that encourages the production of T cells, the attack dogs of the immune system. T cells produce broader, longer-lasting immunity than antibodies, but classic flu vaccine formulas do not encourage their activity. Others are administering a sequence of vaccines against different flu strains so that the immune system assembles a diverse antibody artillery.
Pouncing lions, fish-swallowing seals, and even your bone-chewing family dog can all trace their roots back to a small, tree-dwelling ancestor. Bones unearthed from a 55-million-year-old fossil trove have revealed a diminutive creature at or near the root of today’s formidable lineage of carnivorous mammals.
Paleontologist Floréal Solé of the Royal Belgian Institute of Natural Sciences and a team of colleagues recently described more than 250 new teeth, jaw, and ankle bone specimens of Dormaalocyon latouri, named for the Belgian locality of Dormaal where the fossil was first found in a site long famed for early Eocene epoch remains.
Fossilized jaw bones and teeth, including baby teeth, provide valuable evidence of the ancient animal’s taste for flesh. According to Solé, Dormaalocyon is the most primitive known member of the carnivoraforms group. That group is represented by today’s 280-plus species of living carnivorous mammals, the order Carnivora, which includes lions, seals, bears, cats, dogs, and others—all of which count this creature as their ancestor.
For anyone with insomnia in the New York metro area, the ads have become ubiquitous: three middle-aged men dressed in cornflower blue lab coats, holding mysterious technical equipment, and offering the owners of haunted houses (or haunted anything, really) their unique ghost capture and removal services.
I first saw one after falling asleep to the dulcet drawl of Charlie Rose on “CBS News Nightwatch.” The spot feels like a parody of those local commercials starring used car salesmen and “crazy” warehouse owners. It ends with the team pointing their fingers at the camera, like Uncle Sam in an army recruitment poster, and shouting flatly over the din of passing traffic, “We’re ready to believe you!”
You may know of these men already. They’re the Ghostbusters.
Until the beginning of the current fall semester—when Columbia University abruptly shuttered its psychology department’s program in paranormal studies—Dr. Egon Spengler, Dr. Ray Stantz and Dr. Peter Venkman had been conducting research into extra-sensory perception and recurring manifestations of what they call vaporous apparitions and psychokinetic activity. “Psychics, ghosts, floating stuff, to the lay person. But to us it’s way more technical,” Dr. Venkman explains, half ignoring me as he rifles through the bottom drawer of a filing cabinet, then fixing me with a cold stare. “Stuff floats for a lot of different reasons.”
"Dr. Spengler and Dr. Stantz are the only people I’ve seen who have taken all these parallel dimensions proposed by Bosonic String Theory and Superstring Theory, and are attempting to correlate them to supernatural events," says Freeman Dyson, a theoretical physicist and mathematician at Princeton. "They’re the only ones actually gathering hard data on subatomic behavior during these unexplained occurrences—at the Ivy League-level anyway." Notorious among colleagues for his contrarian streak, Dyson has avidly followed Dr. Stantz and Dr. Spengler’s articles in the journal of London’s Society for Psychical Research. It is outré reading material for a winner of both the prestigious Max Planck medal and the Harvey Prize, but Dyson is effusive in his praise for Stantz and Spengler’s felicitously documented case studies.
"[But] that third name doesn’t sound familiar to me," he says.
It was Dr. Venkman, in fact, who led the charge to commodify the trio’s academic research into a for-profit enterprise, talking Stantz into mortgaging his family home to purchase a headquarters for their business in lower Manhattan’s TriBeCa neighborhood. In just a few short months, the Ghostbusters have rocketed to prominence, following a string of alleged successes in what they call the “reclamation of paranormal phenomena.” Clients, judging from press reports, have included a business at Rockefeller Plaza, a restaurant in Chinatown, the fashionable Manhattan night club The Rose, and their first widely publicized case at the five-star Sedgewick hotel.
One of the foundations of the scientific method is the reproducibility of results. In a lab anywhere around the world, a researcher should be able to study the same subject as another scientist and reproduce the same data, or analyze the same data and notice the same patterns.
This is why the findings of a study published today in Current Biology are so concerning. When a group of researchers tried to email the authors of 516 biological studies published between 1991 and 2011 and ask for the raw data, they were dismayed to find that more than 90 percent of the oldest data (from papers written more than 20 years ago) were inaccessible. In total, even including papers published as recently as 2011, they were only able to track down the data for 23 percent.
“Everybody kind of knows that if you ask a researcher for data from old studies, they’ll hem and haw, because they don’t know where it is,” says Timothy Vines, a zoologist at the University of British Columbia, who led the effort. “But there really hadn’t ever been systematic estimates of how quickly the data held by authors actually disappears.”