Why Nuclear Power Is Completely Natural

Roughly 40% of the American public opposes nuclear power,  according to some surveys. We haven't commissioned a truly new plant in 40 years. You might say that for decades, nuclear power has "felt the Bern" of a generally distrustful public glare. Disaster movies, environmentalist nitwits, and eternal doomsayers beat us up about its "unnatural" dangers.

But I wonder how many of these people know that our planet quietly ran its own naturally occurring nuclear fission reactors for more than 100,000 years.

These natural nuclear reactors were born in Earth's crust by pure chance, and yet, they ran for hundreds of thousands of years without exploding in a giant runaway chain reaction, or even melting down in a small-scale reaction, all with no oversight and no fuss. When they burned through their fuel supply, they simply died out and lay dormant until humans came along to discover them.

Scientists have discovered at least 16 natural reactors beneath the surface of the African country of Gabon. They lie in two large uranium deposits known as Oklo and Bangombe. Altogether, they produced very roughly 100,000 watts of power at any given time. Over their lifetimes, that added up to about as much energy as 15 large US reactors produce in one year.
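
If you want to sanity-check that comparison, here's a rough back-of-the-envelope in Python. The 150,000-year running time and the one-gigawatt size of a large modern reactor are assumed round numbers, not measured figures:

# Rough sanity check: ~100 kW of natural fission sustained for ~150,000 years
# versus 15 large (~1 GW) modern reactors running for one year.
SECONDS_PER_YEAR = 3.15e7

oklo_energy_joules = 100e3 * 150_000 * SECONDS_PER_YEAR   # ~5 x 10^17 J
modern_energy_joules = 15 * 1e9 * SECONDS_PER_YEAR        # ~5 x 10^17 J

print(f"Oklo, over its lifetime:   {oklo_energy_joules:.1e} J")
print(f"15 big reactors, one year: {modern_energy_joules:.1e} J")

The two totals come out within spitting distance of one another, which is all the comparison claims.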

A highly detailed description of the Oklo system, written by a scientist involved in researching it, is available online.

I'll give you an abridged version:

The reactors ran on the uranium isotope U-235. Nearly two billion years ago, when the reactors started up, this isotope was more common in Earth's crust than it is today. When enough of this material is brought into close enough proximity, a nuclear chain reaction takes off. A fissioning U-235 atom spits out neutrons. When these neutrons strike a second and a third U-235 atom, those atoms undergo fission as well, sending out even more neutrons.

The Oklo deposit contained a just-high-enough density of U-235 atoms to put them close enough together for a chain reaction to occur under the right circumstances.

To run continuously, a reactor has to reach a steady balance. For every uranium atom that fissions, exactly one of the neutrons it releases must, on average, go on to trigger another fission. If the neutrons produced by one fission cause, on average, more than one further fission, the chain reaction blooms exponentially into a runaway meltdown or explosion. If one fission's neutrons set off an average of less than one subsequent fission, the process withers and dies.
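
To see how sensitive that balance is, here is a toy Python sketch (not a reactor model; the starting neutron count and the multiplication factors are arbitrary illustrative numbers):

# Toy illustration: start with 1,000 fission neutrons and let each generation
# multiply the population by k, the average number of new fissions per fission.
def neutron_population(k, generations=10, start=1000):
    counts = [float(start)]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return [round(c) for c in counts]

print("k = 1.1 (runaway):", neutron_population(1.1))
print("k = 1.0 (steady): ", neutron_population(1.0))
print("k = 0.9 (dying):  ", neutron_population(0.9))

After just ten generations, a multiplication factor of 1.1 has grown the neutron population by more than 150 percent, while a factor of 0.9 has cut it by nearly two thirds.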

On their own, most of the neutrons produced by a fission fly out too fast to have a high probability of sparking the next fission. Getting close to that steady balance requires help, which is the job of a moderator: a substance that surrounds the uranium atoms, so that the neutrons they give off must pass through it. In passing through, the neutrons are slowed down, which drastically increases the probability that they will be absorbed by another U-235 atom, which will then fission to continue the chain reaction.

The Oklo reactors used a moderator that is extremely common in 20th century nuclear plants: water. According to research, water naturally flowed into these reactors and moderated the reaction for perhaps 30 minutes at a time. Slowly, all of those passing neutrons heated the water until it boiled off. At this point the reaction petered out, because the fission neutrons were no longer being slowed down and absorbed by other uranium atoms. The reactor cooled down, gradually allowing liquid water to flow back in. After a few hours the reactor refilled and fission proceeded again.

And so on. And on. For an eon.

As their natural supply of fissile uranium dwindled, the regulated fission gradually died down and the sites cooled off. The waste products of that reaction were sealed into the ground where they will likely never hurt a soul. The leftover uranium is actually not very useful for building nuclear weapons.

This natural design is at once fundamentally similar to our own reactor designs and totally different in its precise details. But its philosophical importance is very clear. We needn't fear that we are doing the unnatural by harnessing nuclear fission to produce power. Nature itself has run nuclear reactors for a thousand times longer than we have even known the secrets of their power.

Is the Earth Flat? Test It for Yourself

Is the Earth flat? You may THINK you know the answer. But, with rapper B.o.B saying "flat!" and physicist Neil deGrasse Tyson saying "round!" how can you really know?

Here at the highly empirical (and underground) headquarters of RealClearScience, we don't want you to take anything on faith, only on evidence. Here's how you can test whether or not the Earth is flat for yourself.

Materials:

Commitments:
You
A friend 500+ miles away
Less than 10 minutes
A piece of flat ground outside

Each of you will need:
A phone
A ruler of exactly the same length*
A tape measure
Strong tape
A protractor

*This includes any small areas between the ends of the ruler and where the numbers start! If your ruler has these, do yourself a favor and saw them off -- who wants to deal with that?

 

Procedure:

You're going to need one friend who lives at least a few hundred miles from you. The further away they are, the better it will work. You're also going to need a day with enough sunlight to cast clear shadows at both of your locations. On such a day, here's what to do:

1. Find a time that both of you can go outside for ten minutes. Then, pick a precise time, down to the minute, to make a measurement. On your phone, go to the official US atomic clock time, listed at time.gov to make sure your time is accurate.

2. Ten minutes before that time, both of you need to go outside and set up the experiment as follows:

Find a level spot. The easiest way is to choose a paved sidewalk and then see if your roll of tape will roll in any direction. If it doesn't, the ground is level. Otherwise, you can use a level.

Here's the only tricky part: Set your protractor upright on the ground. Now, take your ruler, stand it upright on the ground, and tape it to the protractor at a right angle so that the side of the ruler goes through the center point and exactly the 90-degree line at the top:

 

Now, turn your contraption so that the flat of the ruler faces into the sun.

At the exact time specified, measure the distance from the base of the ruler to the end of its shadow. Measure this shadow length very carefully and record the number.

 

Data & Calculation:

Data for this experiment consists of just two numbers: your shadow length measurement and your friend's.

Find the difference between the two measurements (subtract). If you measured 150 mm and they measured 162 mm, you have 12 mm.

 

Result:

If the Earth is indeed flat, the difference in shadow lengths should be this number:

0

If your number is not 0, the Earth is not flat.

 

Analysis:

This result applies, via something called the small-angle approximation, so long as you believe that the sun is 93 million miles from the earth. It even works if you believe that the sun is as close as a few million miles away. At these distances, all rays from the sun hit the earth at angles so close to parallel that you can't measure the difference with this simple experiment. Shadows of identical objects cast by light rays coming from the same direction can only have different lengths if the objects casting them are rotated at some angle with respect to one another. Since both of your rulers were set level to the local flat surface of the earth, the surface itself must be angled, curving between the rulers.
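
Here's a rough Python sketch of that geometry. The 300 mm ruler, the 500-mile separation, and the 40-degree sun angle are assumed example numbers, not prescriptions:

import math

SUN_DISTANCE_MI = 93e6     # distance to the sun
SEPARATION_MI   = 500.0    # distance between you and your friend
EARTH_RADIUS_MI = 3959.0   # mean radius of the earth
RULER_MM        = 300.0    # height of an upright ruler

# On a flat earth, the shadows could only differ if the sun's rays arrived at
# measurably different angles at the two sites.
ray_spread_deg = math.degrees(math.atan(SEPARATION_MI / SUN_DISTANCE_MI))
print(f"Spread in ray direction over 500 miles: {ray_spread_deg:.6f} degrees")

# On a round earth, the two rulers themselves are tilted relative to each other
# by the angle their separation subtends at the earth's center.
tilt_deg = math.degrees(SEPARATION_MI / EARTH_RADIUS_MI)
print(f"Tilt between the two rulers:            {tilt_deg:.2f} degrees")

# Example shadows if the sun sits 40 degrees from vertical at your site and
# 40 degrees plus the tilt at your friend's site.
shadow_you    = RULER_MM * math.tan(math.radians(40.0))
shadow_friend = RULER_MM * math.tan(math.radians(40.0 + tilt_deg))
print(f"Shadow lengths: {shadow_you:.0f} mm vs. {shadow_friend:.0f} mm")

The spread in ray direction works out to a few ten-thousandths of a degree -- hopelessly small -- while the tilt between the rulers is about seven degrees, enough to change the shadow lengths by tens of millimeters.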

If you got zero, you have a novel scientific finding, and you'll need to repeat it very carefully and calculate and propagate your experimental uncertainties to make sure. Then email me.

 

Extra Credit: How Big Around is the Earth?

If the Earth is indeed round, you can calculate its diameter, accurate to within a percent or two, with one modification to this experiment. (More precisely, you can calculate its radius of curvature. To establish that the Earth is roughly round all over, you would need to repeat this measurement across all time zones and find the radius of curvature to be the same everywhere.)

It's a hard modification, though. You must take your measurement at a precise time, with both of you on the same line of longitude. The time must correspond to the moment when the sun sits at perfect zenith, directly overhead, at local noon at a spot in the tropics. Your locations are constrained to be directly north or south of that tropical zenith spot.

Ancient Greek mathematician Eratosthenes did just this around 250 BC and got a reasonably accurate result.
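
If you'd like to see the arithmetic, here's a short Python sketch of Eratosthenes' method with made-up example measurements (the 38 mm shadow and the 500-mile baseline are hypothetical):

import math

RULER_MM    = 300.0   # upright ruler at the northern site
SHADOW_MM   = 38.0    # hypothetical shadow length measured there
DISTANCE_MI = 500.0   # north-south distance to the spot where the sun is at zenith

# The shadow gives the sun's angle away from vertical, which on a round earth
# equals the angle between the two sites as seen from the earth's center.
angle_deg = math.degrees(math.atan(SHADOW_MM / RULER_MM))

# That angle is the slice of a full 360-degree circle spanned by the trip.
circumference_mi = DISTANCE_MI * 360.0 / angle_deg
radius_mi = circumference_mi / (2.0 * math.pi)

print(f"Sun angle:     {angle_deg:.2f} degrees")
print(f"Circumference: {circumference_mi:,.0f} miles")
print(f"Radius:        {radius_mi:,.0f} miles")

With those example numbers the circumference comes out near 25,000 miles, within a percent or two of the accepted value.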

 

Conclusion:

You don't have to take scientific facts on face value. You can always test them and see for yourself.

(Top Image: AP)

How Thunderstorms Trigger Asthma Epidemics

June 24, 1994 began hot and humid in London. But around midday, the heat that dominated the morning began to dissipate. The humidity remained, however, and as the afternoon marched on, the sky turned ominous. A mighty storm was brewing.

The inclement weather that eventually rolled in as the sunlight started to wane was a Mesoscale Convective System. Similar in many ways to a tropical cyclone, systems like these are formed when smaller thunderstorms jumble together and loosely organize. The resulting weather mass is like a multi-headed monster, thrashing wildly in multiple directions with whipping winds, tumultuous rain, and frenetic lightning.

As sewers and streets brimmed with water, area hospitals soon grew inundated as well, with patients presenting strange symptoms that seemed to have nothing to do with the storm whatsoever. Over the next thirty hours, at least 640 patients visited emergency departments complaining of extreme wheezing and troubled breathing -- they were suffering from asthma attacks. Several thousand more may have been affected without seeking medical attention.

The incident remains the largest outbreak of asthma ever recorded, affecting even those with no history of the condition, and catalyzed scientific study into the strange link between thunderstorms and asthma.

Nearly twenty-two years later, that link is now coming to be well understood. A recent review published in the journal Clinical and Experimental Allergy asserts that there is now sufficient evidence to suggest a causal relationship between thunderstorms and asthma outbreaks.

The recipe for an epidemic seems to rest on a few key factors: an extremely elevated concentration of grass pollen, typical during late spring and early summer; a decrease in temperature (colder, denser air keeps pollen concentrated near the ground); and a sufficiently turbulent storm.

"Grass pollen is usually too large to enter the small airways of the lungs and is filtered out by the nose," writes Megan Howden, a respiratory physician in Melbourne, Australia. "But stormy winds and moisture can cause the pollen to rupture into tiny particles, which are small enough to be inhaled."

"The outflow winds of a thunderstorm then concentrate these tiny particles at ground level, where they can easily enter the small airways of the lungs and cause an acute asthma attack in those who are allergic to grass pollens."

Thunderstorm-related asthma epidemics occur predominantly in Europe and Australia. Large outbreaks have not been documented in the United States, but the condition still arises. A study examining emergency department asthma visits in Atlanta, Georgia found a slight three percent uptick in visits on days following thunderstorms.

At risk are people with pollen allergies subjected to outdoor conditions during a thunderstorm.

"Subjects... who stay indoors with the window closed during thunderstorms are not involved; further there are no observations on the involvement of asthma in non-allergic subjects," the reviewers reported.

People with mild allergies and little to no history of asthma can be particularly endangered, as they may not have access to a fast-acting inhaler. Asthma is rarely deadly in the developed world, but it is a potentially life-threatening condition that killed a quarter of a million people in 2011, the most recent year for which solid data exists.

Doctor Isabella Annesi-Maesano, the Research Director at the French Institute of Health and Medical Research and one of the co-authors of the review, believes that thunderstorm-related asthma may grow more prevalent this century.

"Such a risk is likely to increase in relation with climate change and related extreme events," she and her team concluded.

(Image: AP)

Huge Hypocrisies of the Anti-GMO Movement

The anti-GMO movement, championed by groups like GMO Free USA, Millions Against Monsanto, Just Label It, and U.S. Right To Know, is rife with hypocrisy -- so much, in fact, that it makes your head spin. But hypocrisy to this extent is exactly what happens when uninformed, rigid ideology gets entangled in a nuanced, scientific issue.

To avoid hypocrisy when discussing genetic modification, maintain an open mind and think critically. It might also help to read these four blatant examples of hypocrisy committed by opponents of genetic modification.

1. The anti-GMO movement calls for transparency, honesty, and openness, yet their arguments are full of misinformation and falsehoods.

Between trumpeting misleading studies, harassing public scientists, and ignoring science when it questions their ideology, the anti-GMO movement has no credibility in calling for "transparency."

"They have absolutely no right to call for transparency when they are lying through their teeth everyday about GMOs," Steven Novella, President of the New England Skeptical Society, recently stated on the Skeptics' Guide to the Universe, "The whole anti-GMO campaign is built on lies and misinformation. They do not have the moral authority to call for transparency."

2. Members of the anti-GMO movement frequently accuse their opponents of being "shills" for corporations, yet the movement is heavily funded by the multi-billion dollar organic and natural foods industry.

To their credit, they admit it. U.S. Right to Know's largest single donor is the Organic Consumers Association, and Just Label It is sponsored by brands like Annie's, Applegate, Organic Valley, and Stonyfield Organic. Just Label It even openly solicited for 25 bloggers to become "GMO Labeling Advocates" -- literally paid shills. All the bloggers had to do was promote Just Label It's campaign, and if their shilling was deemed to be good enough, the bloggers would be compensated $500.

Apparently, the anti-GMO movement doesn't understand why science writers who have no connection to the biotech industry whatsoever would write in favor of genetic engineering. So they've come to the conclusion that journalists and bloggers must receive money under the table. In reality, most science writers who write favorably of GMO technology do so simply because scientific evidence currently indicates that GMOs are safe and generally beneficial.

3. The anti-GMO movement calls for GMOs to be labeled, but makes no mention of labeling pesticides.

The anti-GMO movement claims that consumers have a "right to know" what's in their food. It's hard to disagree with that statement. But for the principal interests behind the movement, their desire for the "right to know" seems to stop at GMOs. Why not other things? Why not have a label for how far the food traveled from its source, or for when it was picked, or for pesticide use? Organic proponents of the anti-GMO movement specifically balk at pesticide labeling, claiming that everyone already knows that organic farming uses pesticides. Actually, they don't, and the organic industry knows it. Because if organic consumers knew those foods were produced with pesticides, they'd probably be less likely to purchase them. That's almost certainly why the anti-GMO movement, heavily funded by organic food companies, halts their labeling efforts at GMOs.

4. The anti-GMO movement is against genetic modification, but it doesn't seem to mind mutagenesis.

Though this may surprise many natural consumers, organic crops sold in the U.S. are allowed to be mutated by radiation or chemicals in order to create variants with desirable traits. The process is called mutagenesis, and it sounds like a frightening path to Frankenfoods! Yet mutagenesis is not the focus of labeling efforts...

Frankly, mutagenesis shouldn't be irksome, because foods produced by the method are quite safe to eat. What is irritating, however, is the hypocrisy in claiming that transgenesis, the primary mode of genetic modification in which a gene from one organism is transplanted to another in order to achieve a desired trait, may be unsafe, when the overwhelming majority of scientists recognize that foods produced via the process are quite safe.

(Image: AP Photo/Jacquelyn Martin)

Full Disclosure: The author owns a minority stake in a small food business.

The Incomprehensible Power of a Supernova

In a brief flash, the supernova of a single star can burn brighter than the billions of suns of an entire galaxy:

Supernova 1994D: NASA/ESA

That supernova at bottom left is not sitting in front of the galaxy NGC 4526. It's in the outer edge of that galaxy, 55 million light years away.

Last summer, astronomers found the most powerful supernova they had ever seen, an event called ASASSN-15lh. Their report, published in the journal Science last week, contained a measurement of the total power of this explosion: (2.2 +/- 0.2) x 10^45 ergs per second. That's an esoteric number phrased in unfamiliar units. What's the real meaning of this much power?

Astronomers look at a stellar object and measure its luminosity: the amount of energy it releases per second. (More precisely, this is a measure called bolometric luminosity: the total power radiated out across all frequencies of electromagnetic waves.) This sort of measurement is very familiar to us, as we use several scales that measure energy per unit time. A watt, a horsepower, and calories burned per hour are all human measures of power.

The numbers we're discussing are so big that we have to use exponential notation -- unless you want to write the supernova's power out longhand as a 22 followed by 37 zeros, in watts.

A quick review on exponential notation: 10^2 = 100; 10^4 = 10,000; 3.5 x 10^4 = 35,000.

Let's start by getting rid of ergs. An erg is one ten-millionth of a joule. ASASSN-15lh radiated 2.2 x 10^38 joules of energy per second -- and a joule per second is exactly the definition of a watt. It's like the universe turned on a couple of 10^38 W light bulbs. 10^38 W is one hundred billion, billion, billion, billion watts. Now we've got a different problem: energy scales that are hard to imagine. What can we compare this to?

Converting a chunk of uranium smaller than a pea directly to energy via E = mc^2 produced the nuclear blast that leveled Hiroshima. The energy of ASASSN-15lh is comparable to converting the entire moon to pure energy every 30 seconds. The biggest thermonuclear blast ever created released a billion trillion times less energy than one second of this supernova.

Our sun produces 3.8 x 10^26 watts of power. So, this supernova was about 580 billion times brighter than our sun. Every second, the explosion radiated as much energy as the sun has produced in total over the past 18 millennia.

The Milky Way galaxy in which we live burns with roughly 8 x 10^36 watts. For its few dying days, the supernova was nearly 30 times more luminous than our entire galaxy.

All of our smaller human endeavors are dwarfed even more thoroughly.

For example, 746 watts is equal to a single horsepower. A redlining Ferrari engine might produce 600 HP, or about 450,000 watts. Our supernova is like 5 x 10^32 Ferrari engines at full throttle.

A massive power plant produces about 10^9 watts. The total electrical energy generated by all power plants on earth is about 7.9 x 10^19 joules per year. In a single nanosecond, ASASSN-15lh expelled more energy than those power plants could produce, operating at full capacity, for 2.8 billion years.
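
For the skeptical reader, here's a quick Python cross-check of those comparisons, using the same round numbers quoted above (all approximate):

SUPERNOVA_W  = 2.2e38     # ASASSN-15lh, watts
SUN_W        = 3.8e26     # the sun, watts
MILKY_WAY_W  = 8e36       # the Milky Way, watts
FERRARI_W    = 450_000    # a ~600 HP engine, watts
WORLD_J_YEAR = 7.9e19     # all power plants on earth, joules per year
SECONDS_YEAR = 3.15e7

print("Times brighter than the sun:     ", SUPERNOVA_W / SUN_W)                    # ~5.8e11
print("Years of solar output per second:", SUPERNOVA_W / (SUN_W * SECONDS_YEAR))   # ~18,000
print("Times brighter than the galaxy:  ", SUPERNOVA_W / MILKY_WAY_W)              # ~28
print("Ferraris at full throttle:       ", SUPERNOVA_W / FERRARI_W)                # ~5e32
print("Years of world electricity per nanosecond:",
      SUPERNOVA_W * 1e-9 / WORLD_J_YEAR)                                           # ~2.8e9

Each line reproduces one of the figures in the text to within rounding.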

As intelligent as we are, our brains and our experiences don't prepare us to comprehend numbers and scales so big. Cosmic events like supernovas boggle our minds. Human works are dwarfed by the magnitudes of interstellar space like a single-celled organism by the vastness of the ocean in which it floats. Astronomy reminds us of how intelligent we are, but also how very tiny.

Why Harry Potter Doesn't Have Free Will

Harry Potter is the best-selling book series of all-time, with more than 450 million copies spread throughout the world. It also presents one of the most intriguing time travel scenarios in modern fiction.

The time travel plot (link contains spoilers), which occurred in book three of the series, Harry Potter and the Prisoner of Azkaban, was not merely compelling for its literary merits, but because it was logically consistent within the context of spacetime. When Harry and Hermione travel back in time, everything that happened from their original perspective still happens from their new perspective. No novel timelines were created. The past events caused by the time travelers from the future always occurred in the timeline. If -- against all odds -- time travel were ever invented, this would be the only way it could operate in the known universe.

Apart from being integral to realistic time travel, spacetime consistency prompts a mind-bending and seemingly unavoidable conclusion: Harry Potter has no free will. And if time travel were ever woven into our reality, it would likely demonstrate that we don't either.

To illustrate why, let's turn to Caltech cosmologist Sean Carroll. In his 2010 book, From Eternity to Here: The Quest for the Ultimate Theory of Time, Carroll set up a time travel scenario stripped down to the barebones. Imagine a gate, he wrote. When you travel through it, it takes you one day into the past.

"You walk up to the gate, where you see an older version of yourself waiting for you. The two of you exchange pleasantries. Then you leave your other self behind as you walk through the gate into yesterday. But instead of obstinately wandering off, you wait around a day to meet up with the younger version of yourself (you have now aged into the older version you saw the day before) with whom you exchange pleasantries before going on your way."

Next, Carroll introduces the idea of choice.

"Once you actually do jump backward in time, you still seem to have a choice about what to do next. You can obediently fulfill your apparent destiny, or you can cause trouble by wandering off. What is to stop you from deciding to wander? That seems like it would create a paradox. Your younger self bumped into your older self, but your older self decides not to cooperate, apparently violating the consistency of the story."

If this were, say, the mode of time travel presented in Back to the Future, you'd simply ignore the consistency... and wind up in a future where your parents are rich, your dad's former bully is now his handyman, and you somehow have a complete memory of your alternate history.

But, though very entertaining to watch, that doesn't make sense, and this isn't Back to the Future. Returning to Carroll's gate scenario, you met with your older self, so you must eventually meet with your younger self when you become your older self.

"That is because, from your personal point of view, that meet-up happened, and there is no way to make it un-happen, any more than we can change the past without any time travel complications," Carroll explained. "There may be more than one consistent set of things that could happen at the various events in space-time, but one and only one set of things actually does occur. Consistent stories happen; inconsistent ones do not."

"We would… have to abandon free will," he continued, "because witnessing part of our future history implies some amount of predestination."

Fay Dowker, a professor of theoretical physics at Imperial College London, concurs.

"When the full spacetime story is consistent… then you can’t have free will. Whatever happens has to happen and you can’t change that. You can’t make a new decision. You can’t decide not to go in the time machine and go back in time," she said recently on the BBC Radio 4 program The Infinite Monkey Cage.

Harry Potter's elaborate time travel plot in which -- spoiler -- Harry is saved by himself and Carroll's simplified gate scenario are both examples of a causal loop, a form of the more wordy and intricate closed timelike curve. Logician Kurt Gödel conjured up the idea of closed timelike curves back in 1949, and ever since, thinkers across a variety of disciplines have contemplated their existence and their potential ramifications.

The matter of free will may be the most debated, and though most who contemplate it side with Carroll and Dowker, the matter will never be settled until a time machine is actually produced. That doesn't seem likely to happen, though. To the best of our knowledge, the laws of physics do not permit time travel.

(Image: AP)

How New & Improved Incandescent Bulb Works

Incandescent bulbs give off warm, beautiful light that mimics the radiance of the sun. Unfortunately they require much more electricity than compact-fluorescent and LED bulbs. While some of us are willing to pay more for sunny light, federal regulations are trying to phase the old bulbs out.

New research proposes to bring back incandescent bulbs by drastically increasing their efficiency to compete with LED bulbs. Pairing the reduced power draw of modern LED bulbs with the simplicity and beautiful light quality of incandescent lights is a fantastic idea. How does it work?

At the heart of the tried-and-true incandescent light is a simple metal wire called the filament. The filament is encased in a glass shell. All of the air is sucked out of the shell to prevent oxidation (like the tarnishing of silver) and argon or a similar gas is pumped in so that if a metal atom boils off of the hot surface it doesn't hit the glass and coat it. The freed atom instead bounces off of an argon atom and returns to the filament.

The bulb produces light by simply running lots of electrical current through the wire filament. The electricity flowing through heats the wire so hot that it glows. Upping the current will make the filament hotter and brighter until it melts. Incandescents typically employ the element tungsten -- symbolized W, for its old name, wolfram -- because it can be heated to almost 6200 degrees F (3422 degrees C) before melting.

The glowing hot filament gives off visible light but also infrared light that we feel as heat. The filament is essentially what is called a black body: a very hot object that glows and gives off light of a very particular set of colors. These colors are solely determined by how hot the black body is. Heated to the temperature of a light bulb, the black body filament is actually giving off much more infrared light than visible light. Infrared light is completely invisible to the human eye, so it's wasted energy for lighting purposes.

All this wasted infrared light makes the bulb inefficient: only about 2% of the electricity burned in a standard household bulb is converted to visible light. Newer CFL bulbs improve this efficiency to ~5-10% and good LED bulbs can be 15% efficient. A 2% efficient light bulb requires five times more electricity to shine as brightly as a 10% efficient bulb.
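
A minimal sketch of what those percentages mean at the wall socket, assuming we want the same amount of visible light that a classic 60 W incandescent bulb gives off:

# Visible light output of a standard 60 W incandescent at ~2% efficiency.
visible_light_watts = 0.02 * 60.0   # ~1.2 W of actual visible light

# Electrical draw needed to match that output at various efficiencies.
for name, efficiency in [("incandescent", 0.02), ("CFL", 0.08),
                         ("LED", 0.15), ("proposed 40% bulb", 0.40)]:
    draw = visible_light_watts / efficiency
    print(f"{name:18s} needs about {draw:5.1f} W at the wall")

A 40% efficient filament bulb, if it could be built, would need only about 3 W to match the light of today's 60 W incandescent.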

The new research report claims that an efficiency as high as 40% could be achieved from a filament bulb with some upgraded technology inside. A closer read of the paper shows that the best efficiency estimated in a bulb of their own making was 6.6%. Well below 40%, but very good for a simplified research prototype.

The core of the new bulb design is entirely unchanged. The innovation in this research is to surround the tungsten filament in the center of the bulb with a specially designed shroud. The shroud needs to perform two functions. The first is to allow visible light to pass through and illuminate the room around the bulb; the second is to reflect all that infrared light that provides only waste heat and no illumination back into the filament. The reflected light re-heats the filament, producing more light (and more heat too).

The engineering design of the shroud is the novel success of the work. It's a two-dimensional screen that appears to be roughly the size of a postage stamp, but vastly thinner. The shroud is built of dozens of alternating thin layers, stacked like a baklava. Each layer is only about 100 nanometers thick.

This stack of layers makes a device called a photonic crystal. It's a sort of toll-gate that sends visible light out and sends infrared waves back to reheat the filament.

Photonic crystals are built of differing materials that allow light to travel more or less quickly, to force the light waves to interfere with one-another. Careful computer-aided design can tailor the shape of the photonic crystal (in this case, the thickness of its stacked layers) to pass certain colors of light through and reflect or destroy other colors. The fiery iridescence of opals is caused by a natural occurrence of nearly this exact type of structure.

To force all the light from the filament to interact with the photonic crystal, the final bulb design sandwiches a flat filament between two of these flat photonic crystals. Think baklava, filament, baklava.

While the device the researchers actually built has 90 layers in its photonic crystals, the ideal device would have 300 layers made from four different types of materials to achieve a 40% efficiency. (The 90-layer prototype alternates between just two materials -- silicon dioxide and tantalum pentoxide.)

Most of the parts of this new bulb are extremely cheap. Producing the photonic crystals with all their layers is trickier, but production would employ fabrication processes familiar to makers of computer chips. The success of a small team of researchers producing a 6% efficient bulb suggests that the future may be bright for this new technology.

(AP photo)

3 Religions Where Science Takes Center Stage

Organized religion is often viewed as inherently at-odds with science. But this is a myth. Many who believe are also able to live a life guided by logic and reason, and followers of almost any faith can contribute meaningfully to the scientific endeavor.

There are a number of religions, however, which hold science with such high regard that the scientific method is essentially a tenet of their beliefs. Here are three of the most pro-science religions:

 

1. Church of Reality

The Church of Reality espouses a simple doctrine: "If it's real, we believe in it." Based on this principle alone, you may already find yourself at least a member in spirit. The religion refers to itself as "doubt-based" and holds science as the ultimate arbiter of what is real and what isn't. If you'd like to gather other believers and hold your own gathering, the Church of Reality recommends communing over pizza.

 

2. The Circle of Reason


The Circle of Reason is a belief system which declares rationalism to be the chief conduit of knowledge. Furthermore, it asserts that everyone, theist and atheist alike, has the ability and an obligation to use reason when navigating the world.

Followers of the Circle refer to themselves as "methodologists" rather than "worldview proponents."

"We simply espouse more consistently practicing the most basic tenets of reasoning thought and action: Accepting reality, questioning assumptions, and mastering emotionality."

Sounds a bit like becoming a Vulcan... just without the pointy ears.

 

3. The Church of the Flying Spaghetti Monster


In January 2005, 24-year-old Oregon State University graduate student Bobby Henderson wrote an open letter to the Kansas State Board of Education, which at the time was considering whether intelligent design should be taught alongside the theory of evolution in biology classes. In the letter, Henderson expressed dismay, not simply that intelligent design was being considered, but that the brand of creationism he subscribed to was not. You see, Henderson adhered to the belief that an invisible Flying Spaghetti Monster created the universe.

Originally intended as satire, Henderson's letter has since inspired what its followers insist is a genuine religion: Pastafarianism. Pastafarians, as many followers call themselves, strongly believe in the scientific method and the findings it produces, with one minor asterisk, that all of it is affected by the Flying Spaghetti Monster's "Noodly Appendage."

If you're willing to concede that possibility, and to occasionally don a pasta strainer, the Church of the Flying Spaghetti Monster might just be the pro-science religion for you!

(Top Image: AP)

Update 1/20: A previous version of this post stated that the Church of Reality was started by author Michael Dowd. That is incorrect. The church was actually founded by Marc Perkel. We apologize for the error.

The Blobfish Isn't as Ugly as You Think

In 2013, Psychrolutes marcidus, otherwise known as the blobfish, garnered global attention when it was voted the world's ugliest animal. The contest, put on by the Ugly Animal Preservation Society, was not meant as a slight towards the blobfish and its rivals, but as a notice to the world that all animals, not just the cute, furry, and mammalian ones, deserve attention, too.

However many blobfish dwell near the sea floor today, at depths between 2,000 and 3,900 feet, they are undoubtedly indifferent to their surface notoriety. But if they somehow had knowledge of their slimy reputation, they would have good reason to protest. The public perception of the blobfish is based on one of the most unflattering images you can imagine, equal to the worst driver's license photo... times ten.

Mark McGrouther, the Ichthyology Collection Manager at the Australian Museum, ought to know. He was there when "Mr. Blobby's" photo (shown above) was taken back in 2003 on the deck of a research vessel just off the coast of Southern Australia. He recalled the fateful moment for Smithsonian Magazine last November.

“His mashed facial features may have resulted from being stuck at the back of the net, squeezed between all sorts of other marine life. By the time he was dumped on the deck of the Tangaroa and exposed to the air, his skin had relaxed. He would have looked a good deal less blobby on the seafloor."

There's a good reason for that. Blobfish reside at a place where ambient pressures are 60 to 120 times greater than at sea level, and they are wonderfully adapted to their crushing environment. Instead of having a gas-filled air bladder to control buoyancy, which would almost certainly implode at the depths where they dwell, blobfish simply have gelatinous skin that is ever-so-slightly less dense than water. That, coupled with their light bones and internal organs, means that the roughly foot-long blobfish just sort of hover above the ocean floor. Their big mouths, melted into frowns out of water, likely remain agape most of the time in water, sucking in floating edible morsels like tiny crustaceans.
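
The pressure figure is easy to check: water pressure rises by roughly one atmosphere for every 33 feet of depth. A tiny Python sketch using that rule of thumb:

# Ambient pressure at blobfish depths, in atmospheres above the surface.
FEET_PER_ATMOSPHERE = 33.0

for depth_ft in (2000, 3900):
    pressure_atm = depth_ft / FEET_PER_ATMOSPHERE
    print(f"{depth_ft} ft deep: about {pressure_atm:.0f} atmospheres")

That works out to roughly 60 to 120 atmospheres, just as stated above.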

All of this leads to a dispelled myth and a simple truth: The blobfish may be the ugliest fish out of water, but in water, it's not so bad. It actually looks downright dapper in this illustration, drawn by scientific artist Rachel Koning.

https://upload.wikimedia.org/wikipedia/commons/thumb/d/d0/Two_Psychrolutes_marcidus.jpg/640px-Two_Psychrolutes_marcidus.jpg

Scientific illustrations are the best insights we currently have into the blobfish's native look, as one has never actually been spotted in the wild. Photos of its close cousin P. occidentalis do suggest that the drawings are accurate, however.

So think twice before calling the blobfish "ugly." After all, a human probably wouldn't look nearly as good at 3,000 feet deep as the blobfish looks at the surface.

(Images: Kerryn Parkinson, Rachel Koning)

Can We Make a Laser Blaster?

Like every American kid who grew up with Star Wars, my ears pricked up at recent stories about Chinese laser rifles. Are we going to see wars fought with laser blasters in the near future?

The answer is simple but the explanation and details are not.

There are already plenty of weapons based on lasers as well as laser-like directed energy weapons that focus electromagnetic waves upon a target. The U.S. Military has spent quite a lot of money to design a number of these devices.

All of these are enormous, complex machines that require a ship, airplane or large truck. They are not portable. They are generally focused on either shooting down missiles, melting UAVs or causing skin irritation to disperse crowds.

So, how about the personal arms? To get some idea of their practicality we need to think about what is required of such a weapon.

A true blaster has to be able to hit a person anywhere and hurt them. It must be small enough and light enough for a single soldier to carry it. It can't be a specialized gun that can only target the eyes or a specific weak area like the genitals. That would be impractical in close quarter combat and could be easily blocked by small pieces of shielding or even simple goggles.

A general use sidearm must then knock a target down by heating up and destroying a big chunk of any skin and underlying tissue it hits. NIST research on firefighting says that tissue is completely destroyed when it reaches about 75° C (167° F). We'll model tissue as what it basically is: water. A more complex model would look at the specific heat transfer and damage properties of skin, but even medical and scientific research in this area is not complete.

Let's guess that we need to heat up 25 cm^3 of tissue (about a large finger's worth) to cause massive damage. That's roughly a hole the diameter of a .38 caliber bullet that goes three inches into the body. Heating one cubic centimeter of water by one degree C requires 4.2 joules of energy, so raising that whole volume from body temperature to 75° C takes roughly 25 x 4.2 x 40, or about 4,000 joules. A joule is a total amount of energy; lasers are most generally rated in watts, the number of joules they can deliver in one second.

Unlike a bullet that delivers all of its impact in a millisecond or two, a laser can deliver more and more damage as it stays on a target. Now we need to guess how long we have to train the laser pistol on the body to dump those 4,000 joules into the target. Of course, the shorter the time period we can hold the spot on the target, the more watts of power the laser will need to sustain.

In a chaotic combat environment, being able to hold the laser trained on a single spot for even one second seems nearly impossible. If we make a very broad estimate that perhaps one tenth of one second might be a reasonable time on target, we can start to estimate the laser power needed: 4,000 J / 0.1 s = 40,000 W of power.
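
Here's the whole back-of-the-envelope in a few lines of Python (the target volume, temperature threshold, and time on target are the same rough guesses made above):

SPECIFIC_HEAT_WATER = 4.2    # joules per cubic centimeter per degree C
volume_cm3     = 25.0        # about a large finger's worth of tissue
delta_T        = 75.0 - 37.0 # heat from body temperature to ~75 C
time_on_target = 0.1         # seconds the beam stays on one spot

energy_needed = volume_cm3 * SPECIFIC_HEAT_WATER * delta_T   # ~4,000 J
power_needed  = energy_needed / time_on_target               # ~40,000 W

print(f"Energy to destroy the tissue: {energy_needed:,.0f} J")
print(f"Laser power required:         {power_needed:,.0f} W")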

For context, a 1,000 W laser is roughly one million times more powerful than a typical small laser pointer. A laser metal cutter often uses a laser of roughly this strength; more powerful versions pack up to 10,000 W. An industrial laser demolition gun (yes, this is extremely cool to watch in action) brings roughly 5,000 W of power. These devices sure look tough enough to put some damage on some skin.

Considering all of the above, it seems that we need at least several thousand watts, and more realistically tens of thousands of watts, of power for our blaster.

Current cutting-edge handheld laser weapons are puny by comparison with those numbers. A 1990s model had about 0.015 W. That's not even enough to cause a skin burn. The power of the laser in these new Chinese weapons is unclear, but it's likely no more than 10 W: short on power by a factor of several hundred to several thousand. A laser of this strength can light a match or burn a small hole in a sheet of paper, but there's no way it can blast a hole in a person. Having worked in laser labs, I can say that while a brief exposure to a laser of this power will leave you with a small burn, it's nothing medically serious.

All of the above ideas consider that the laser hits someone directly on bare skin. Of course that's not too realistic.

Bullets are hard to stop because of their incredible kinetic energy transfer, but even they can be blunted by proper body armor. Thermal energy transfer can be stopped as well. Think of a bulletproof vest, but made of asbestos instead of Kevlar. A firefighter's vest might provide some protection.

A quick calculation shows that surrounding the body with a one-inch thick layer of water could absorb enough of a very brief laser blast to reduce the damage to the body to only the level of moderate burns. What's more, a vest that circulates water throughout could easily transfer the heat away so quickly as to possibly prevent all damage. A thick slab of metal could also absorb much of the thermal energy, though it would likely still allow some damage to tissue beneath.

All of these musings lead us to a conclusion that will likely disappoint fans looking to wield Han Solo's blaster pistol: Until the technology of compact laser mechanisms improves by leaps and bounds, we won't have a laser-powered blaster sidearm.

(Image: AP)

Faking Data for a Good Cause

Americans don't always trust scientific research (or so studies say). Sometimes they're absolutely right to be suspicious, as recent incidents concerning reproducibility, philosophy and advocacy masquerading as science, and the politicization of science demonstrate. Questionable "studies say" news stories don't help either.

Today I get to tell you about an experiment that is going above and beyond the normal call of science to double-check itself for accuracy.

The Laser Interferometer Gravitational-Wave Observatory -- LIGO -- refreshingly avoids the trap of cutesy acronyms such as SMART or AMAZING. LIGO is a project only one step below the famous Large Hadron Collider in size and difficulty. The main facilities consist of four ultra-high-vacuum tubes, each about two and a half miles long. Exquisitely sensitive interferometers within the tubes measure spacetime, hoping to see it stretch or shrink by a fraction of roughly 10^-21 (one part in a billion trillion, a total distortion hundreds of times smaller than a proton) as a gravitational wave ripples through. The mission and the incredible precision of the facility in pursuing this work are worthy of an entirely separate account.
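
To put that fraction in more concrete terms, here's a short Python sketch (the 4 km arm length and the rough proton diameter are standard textbook values):

ARM_LENGTH_M  = 4000.0    # each interferometer arm is about 4 km long
STRAIN        = 1e-21     # fractional stretch or squeeze of spacetime
PROTON_DIAM_M = 1.7e-15   # rough diameter of a proton

arm_change_m = STRAIN * ARM_LENGTH_M
print(f"Change in arm length: {arm_change_m:.1e} m")
print(f"As a fraction of a proton's width: {arm_change_m / PROTON_DIAM_M:.4f}")

The arms change length by about 4 x 10^-18 meters, a few hundred times smaller than a proton.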

Scientific details aside, I was fascinated to read about a method, effective but also brutal, that LIGO uses to check itself for scientific accuracy. It's called blind injection.

A thousand or so scientists work on LIGO. Out of this group a handful are chosen for a special mission: the injection team. They are allowed to manipulate the raw data from the LIGO instruments. They may choose to hide fake detection signals in that data without telling anyone else. The rest of the project operates entirely in the dark. Unaware, the whole collaboration may find the signal and believe that they are on the verge of a scientific breakthrough that will earn them a Nobel Prize. They may also realize that the detection is a fake or else fail to catch it at all.

The fake is introduced in a particularly smart way: the injection team is allowed to directly wiggle the mirrors in the detector to imitate the movement caused by a true gravitational wave flowing through the detector. Hence, the only way to know if a signal really came from black holes smashing together millions of light years away is to finish an entire scientific study of the event and then ask the injection team to reveal whether the data was faked.

This investigation carries out a real-world test of the facility. Can the enormous and complex analysis, carried out by hundreds of scientists all over the world, correctly identify the expected signal of a real gravitational wave? That's a very important question: an enormous experiment looking for something that has never been seen has no a priori operator's manual.

A fake signal can carefully calibrate the expectations of the operators as well. It forces them to run every single instrument through every single possible test to see if it could have accidentally introduced spurious noise. After witnessing the faster-than-light neutrino debacle of two years ago, a collaboration knows how vitally important it is to check for every single possible source of error, no matter how small. (That mistake was eventually traced to a loose cable between two machines.) Scientists must conduct extensive investigations of minuscule possibilities along the lines of: Could a stray puff of air have bumped the side of the tube? Could we have our own loose cable somewhere?

Blind injection also exercises the collaboration's ability to correlate its result with other methods of detection. Did other telescopes of different types see further evidence for the right kind of event happening in the right place in the sky? Some candidate sources, such as merging neutron stars, should also produce a massive flash of gamma rays, for example.

Does the blind injection idea work in practice?

When a potential gravitational wave signal was detected on the instrument in September of 2010, the entire LIGO team went to work. Through six months of late nights, they looked at every last detail and wrote up scientific reports of their finding. When the paper was complete, a collaboration-wide vote was held to decide whether to submit it to the eager scientific journals of the world. The vote was a unanimous yes. Then, dramatically, the team responsible for injecting events opened an envelope to reveal whether the event was real or an injected fake.

The envelope opened in March 2011 to reveal a fake. The good news was that the team correctly identified the signal. The better news was shown by comparing the final study results to what the injection team had expected them to see. There were two discrepancies. Both of these turned out to be mistakes by the blind injection team themselves, revealed by the sweat of the blind LIGO team!

There is one brutal coda to this process: experimenters work very long hours at projects like LIGO. When a massive discovery is looming on the horizon, they squeeze extra work into nights, weekends, and extra shifts away from their homes and families. Tests like this run the risk of burning out researchers who sacrifice everything only to find out that their work was for naught. The blind injection method is great science but it's extremely hard on individual scientists.

Tough as it may be, this sort of rigor is extremely pertinent in the current climate of science. Poor methods, biased intentions, and reproducibility problems undermine everyone's confidence in scientific studies. The researchers at LIGO go above and beyond to ensure the utmost accuracy and care in their results.

(Image: AP)

String Theory Has Failed as a Scientific Theory

String theory has been the darling of the theoretical physics community for decades. It has reigned as the dominant theory in prestigious US research institutions since the 1980s. Elegant books, TV shows, and grandiose TED talks have hyped it to the public. Brilliant theoretical physicists tell us that this theory is the best answer to the hardest problem that their field has ever attacked.

All that is fine, but here's the unequivocal truth: string theory has failed as a scientific theory.

What do you do when you can't succeed while playing by the rules? You can go home, or you can try to change them. And that's precisely what string theory is now doing. In case my experimentalist bias is shining too clearly, let's restate this as a (somewhat) balanced question:

Is it time to change our notions of what science is to accommodate a single spectacularly difficult problem and the decades of struggle by those theories attempting to tackle it?

Remarkably, this exact question is now being openly debated among proponents of string theory. String theorists met jointly with academic philosophers at a conference last month to talk about what we require of a theory for it to be held as correct. Do we need to test it experimentally? Or are the qualities of beauty, consistency, mathematical interest, and greater funding proof enough?

It is a debate on which of two philosophies science ought to follow: empiricism or rationalism. The choice, to this physicist, is stingingly clear.

Science has been for its entire history fantastically successful precisely because it requires experimental tests to verify and confirm its claims. That criterion can be defined simply: empiricism. Ideas are not true simply because of their logic or conceptual beauty but because they are observed by human senses -- or the extension thereof by cameras, telescopes, spectroscopes, thermometers, and so forth -- and verified. Empiricism is not necessarily the best system of philosophy for all endeavors. Moral human beings accept many ideas and laws that are not learned from observation but instead found within (or without) and supported by the heart.

A different type of philosophical system describes the new guidelines that string theorists lust for: rationalism.

Rationalism derives truth via the process of deductive logic. Rationalism is the system of the mathematician. Theorems are logically correct because they can be built logically from a dictionary of axioms followed by deductions. Mathematics, however, need not be true in the external universe that we live in and perceive. Mathematics need only make logical sense in the minds of mathematicians. That's not a dismissal: mathematical tools are fantastically useful not only in physics but in engineering, medicine, accounting and a host of other human endeavors.

The history of science is littered with theories that sounded rationally simple and logically brilliant but turned out to be utterly false empirically. Meat does not spontaneously grow maggots. The Universe does not revolve around the earth. The bumps on the human skull do not reveal the intelligence of the brain residing within. Light does not travel through a luminiferous aether. The failure of these theories was not found through rationalist logic but through careful experiment upon nature itself.

The fire igniting critics of string theory is not personal animus or professional jealousy. It's the idea that a single theory has become so entrenched and popular in its field that its failures cannot be addressed truthfully. Now, physicists ask that the rules be bent or changed just to accommodate it. To loosen the principles of our fantastically successful scientific method just to allow for one passing theoretical fad to continue would be a disaster.

(AP photo)

Is It Time for the Dietary Guidelines to Die?

The 2015 Dietary Guidelines were officially released yesterday. Put out every five years, the guidelines urged Americans to limit salt intake to under 2,300 milligrams per day, consume less than ten percent of their calories from added sugar, and restrict saturated fat intake. They also suggested eating less red and processed meat. Unlike years past, the guidelines eased up on eggs, following research showing that dietary cholesterol isn't as bad as was once thought.

Overall, the new guidelines are a shift in the right direction -- slightly more science-based -- but the difference is minimal. The guidelines still mostly ignore growing evidence that low-carbohydrate diets can be just as, if not more, healthy as so-called "balanced" diets. They also continue to tout the notion that sodium intake needs to be limited, contrary to recent evidence that it probably doesn't.

For thirty-five years, the dietary guidelines have offered middling advice. The fact that they are slightly better is little consolation to the millions of Americans who've tried to follow them and have found themselves overweight and unhealthy.

"Americans in general have been following the nutrition advice that the... US Departments of Agriculture and Health and Human Services have been issuing for more than 40 years," a team of researchers reported last year in the journal Nutrition. "Consumption of fats has dropped from 45% to 34% with a corresponding increase in carbohydrate consumption from 39% to 51% of total caloric intake."

You know what's happened over that same time period. Obesity has skyrocketed to epidemic proportions. Of course, there's no way to tell if this is due to or in spite of the guidelines, but few would disagree that the guidelines have had little effect in curbing the rise in obesity.

That raises an important question: If the dietary guidelines are so ineffectual, why even have them? Is it time for the dietary guidelines to die?

Supporting this notion is the glaring fact that nutrition science is notoriously terrible: characterized by poorly conducted research, pervasive industry influence, and a diffuse sense of confirmation bias almost akin to religion. Asking a government-selected panel to distill generally pathetic evidence into a broad set of guidelines is like asking a two-year-old to nail Jello to a wall with a plastic play hammer.

Dr. James Hamblin over at The Atlantic offers a reasonable point to the contrary, however. "Everyone agrees that having more data in the realm of nutrition science would be ideal, but this cannot paralyze us from acting to the best of our knowledge."

But here, another problem arises: who gets to decide upon which knowledge to act? Every five years, that duty falls to the dozen or so members of an advisory committee, who effectively create the guidelines. Unfortunately, they are not required to list their potential conflicts of interest, unlike authors in the vast majority of scientific journals.

"A cursory investigation shows several such possible conflicts," Nina Teicholz reported in the British Medical Journal. "One member has received research funding from the California Walnut Commission and the Tree Nut Council, as well as vegetable oil giants Bunge and Unilever. Another has received more than $10 000 from Lluminari, which produces health related multimedia content for General Mills, PepsiCo, Stonyfield Farm, Newman’s Own, and 'other companies.'"

Past panels have also had links to the dairy industry, which might help explain why the guidelines have regularly recommended drinking three cups of milk per day, despite the fact that a sizeable portion of the U.S. population has trouble digesting lactose, the primary sugar in milk, and that the benefits of milk have long been overstated.

The panel is also bombarded by lobbyists, "from MillerCoors to the US Dry Bean Council -- all vying for prime space on MyPlate," Sheila Kaplan reported for STAT. "More than 220 people registered to lobby on the guidelines during 2015, with many more unregistered."

The ultimate aim of the dietary guidelines is just as noble today as it was when they were created back in 1980: to help Americans eat and live as healthy as possible. But it doesn't appear that the guidelines are working. Ultimately, the failing isn't theirs, but ours. Subject to intense political lobbying, and the whims of our inherent biases, the guidelines are doomed to an ideological bent. It might be best for individual citizens to take their nutrition into their own hands, rather than look to the government for guidance.

(Image: AP)

Why Many Mice Studies Are Meaningless

Earlier this year, Australian researchers touted a new ultrasound treatment for Alzheimer's disease that restored memory function in 75 percent of mice treated. Professor Jürgen Götz, the study's co-author, was positively glowing in his assessment.

"The word ‘breakthrough’ is often misused, but in this case I think this really does fundamentally change our understanding of how to treat this disease, and I foresee a great future for this approach."

Götz' findings were widely disseminated in the popular press to much fanfare. But, while one hopes that his team's fantastic findings will translate to humans, odds are, they probably won't.

In 2006, a team of scientists from the University of Toronto reviewed 76 of the most highly cited animal studies published between 1980 and 2000, the vast majority published in prestigious journals like Cell, Science, and Nature. The reviewers found that only 37 percent of the works had been replicated in randomized trials on humans. Of the remaining 48 studies, 14 were contradicted in further trials and 34 remained untested more than a decade after being published.

"Patients and physicians should remain cautious about extrapolating the findings of prominent animal research to the care of human disease," the team concluded.

The outlook for successfully translating cancer treatments from animals to humans is much worse: only about eight percent make the cut. The rate for stroke may be even more abysmal. When Malcolm Macleod, a Scottish neurologist at the University of Edinburgh, went hunting for new stroke therapies back in 2003, he and his colleagues found 603 drugs that had been tested in animals. Of those, only 97 ended up being tested in humans, and just one worked.

"Animal models are limited in their ability to mimic the extremely complex process of human carcinogenesis, physiology and progression," McMaster University scientists Isabella Mak, Nathan Evaniew, and Michelle Ghert, wrote in 2014. "Therefore the safety and efficacy identified in animal studies is generally not translated to human trials."

While the systems that regulate gene activity are generally the same in mice and humans, key biological differences in other areas prevent successful results from carrying over. Transcription factor binding sites, the stretches of DNA where regulatory proteins attach to switch genes on and off, differ for between 41 and 89 percent of the genes our species share. Moreover, unlike humans, mice used in studies are often highly inbred, and the mouse immune system is drastically different from our own. Laboratory rodents are also frequently overfed and sedentary. Thus, the positive effects of some drugs might stem from correcting factors associated with an unhealthy lifestyle rather than from counteracting the disease itself.

Considering the sizable gulf between mice and men, it doesn't help that rodent studies are often poorly conducted. Reporting on the problem for Science Magazine in 2013, Jennifer Couzin-Frankel noted, "For ethical and cost reasons, researchers try to use as few animals as possible, which can mean minuscule sample sizes. Unblinded, unrandomized studies are the norm." One of the scientists she interviewed described the methodology in mice studies as "stone-age." A prime example of an outdated practice is how mice are "randomly" selected.

"You stick your hand in a cage, and pull out a rat," Ian Roberts, a professor of epidemiology at the London School of Hygiene and Tropical Medicine, told The Scientist. "The rats that are the most vigorous are hardest to catch, so when you pull out 10 rats, they're the sluggish ones, the tired ones, they're not the same as the ones still in the cage, and they're the control. Immediately there's a difference between the two groups."

Problems may also arise after the groups are selected. Research published this week in PLoS Biology suggests that as many as 42 percent of stroke and cancer studies lose animals from treatment groups during the course of experiments. This loss could lead to a treatment's effectiveness being overstated by as much as 175 percent.
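To see why losing animals mid-experiment matters so much, consider a toy simulation (the numbers below are made up for illustration and are not drawn from the PLoS Biology analysis): a drug with zero true effect starts to look beneficial once the worst-off animals are dropped from the treated group.

```python
import random

random.seed(1)

def trial(n=10, n_lost=3):
    """Simulate one trial of a drug with NO true effect.

    Both groups are drawn from the same distribution; the only
    difference is that the lowest-scoring treated animals are
    'lost' (die or are excluded) before the groups are compared.
    """
    control = [random.gauss(50, 10) for _ in range(n)]
    treated = [random.gauss(50, 10) for _ in range(n)]
    treated = sorted(treated)[n_lost:]   # attrition hits the sickest animals
    return sum(treated) / len(treated) - sum(control) / len(control)

effects = [trial() for _ in range(10_000)]
print(f"Average apparent benefit: {sum(effects) / len(effects):.1f} points")
# A do-nothing drug now appears to help, purely because of attrition.
```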

While inherent biological differences between mice and humans cannot be reconciled, methodological drawbacks can be easily remedied. Investigators can start by simply applying the same level of rigor demanded by a human clinical trial. We need to truly treat our furry friends as we would be treated.

(Image: AP)

Cancer Reporting Needs Less Hype & More Hope

Every year, hundreds of articles are published touting a "revolutionary" new treatment for cancer. Yet despite the uplifting language, cancer continues to kill over half a million people in the United States every year. Either something is wrong with the treatments, or something is wrong with the coverage.

According to a recent study, the latter seems to be the case.

This past summer, Case Western Reserve University medical student Matthew Abola and Oregon Health & Science University Assistant Professor Vinay Prasad scoured Google News for articles on cancer treatments published between June 21 and June 25, in the wake of the American Society of Clinical Oncology conference. Specifically, they wanted to examine the articles that used superlative terms like “breakthrough,” “miracle,” “cure,” “revolutionary,” “groundbreaking,” and “marvel.”
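The core of that search is easy to picture in code. Here is a minimal sketch of a superlative screen; the term list comes from the study as quoted above, but the matching logic is my own illustration, not the authors' actual method.

```python
import re

# Superlative terms examined by Abola and Prasad, as listed above.
SUPERLATIVES = ["breakthrough", "miracle", "cure", "revolutionary",
                "groundbreaking", "marvel"]

def find_superlatives(article_text: str) -> list[str]:
    """Return the superlative terms that appear in an article's text."""
    text = article_text.lower()
    return [term for term in SUPERLATIVES if re.search(rf"\b{term}\b", text)]

# A made-up headline, purely for illustration.
headline = "Revolutionary immunotherapy hailed as a breakthrough for melanoma"
print(find_superlatives(headline))   # ['breakthrough', 'revolutionary']
```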

The duo discovered 94 articles from 66 separate news outlets which used superlatives to describe cancer treatments. In the majority of cases (55%), superlatives were used by journalists, but physicians were guilty of using flowery language in a little over a quarter of the documented instances.

Prasad and Abola also found that 18 of the 36 drugs described with superlatives had not yet received FDA approval. More alarmingly, five of the 36 had no clinical data from human trials to support their effectiveness.

The most hyped treatment, a combination of ipilimumab and nivolumab, was hailed with superlatives in more than twenty instances. In a clinical trial with more than a hundred patients suffering from melanoma, the treatment improved survival by an average of 4.2 months compared to controls. That's certainly praiseworthy, but it is by no means a "revolutionary" improvement, especially considering that 43 percent of subjects assigned to the treatment discontinued use as a "result of serious adverse reactions."

The persistent disconnect between reality and reporting is problematic.

"Some of the drugs are actually very excellent drugs. It’s reasonable to feel excited about them," Prasad told the Washington Post. "But you want to balance that against reasonable expectations. You want hope, but realistic hope. That’s what we all strive for."

And there is good reason for optimism. Between early detection, preventative measures, and improved treatments, the cancer mortality rate has fallen from a peak of 215 deaths per 100,000 people in 1991 to 172 deaths per 100,000 people in 2010. Just like the tortoise in Aesop's fable, who beat the hare in a race by continually putting one reptilian foot in front of the other, we are slowly but surely winning the war on cancer.

Maybe one day scientists will stumble upon a true "miracle" cure. But until that actually happens, anyone reporting on cancer treatments has a duty to cancer patients, as well as their friends and families, to write within the bounds of evidence, spreading genuine hope, not false hope. In 2014, Oliver Childs tackled the notion of "miracle" cancer cures over at Cancer Research UK:

"We only hear about the success stories – what about the people who have tried it and have not survived? The dead can’t speak, and often people who make bold claims for ‘miracle’ cures only pick their best cases, without presenting the full picture.

This highlights the importance of publishing data from peer-reviewed, scientifically rigorous lab research and clinical trials. Firstly, because conducting proper clinical studies enables researchers to prove that a prospective cancer treatment is safe and effective. And secondly, because publishing these data allows doctors around the world to judge for themselves and use it for the benefit of their patients.

This is the standard to which all cancer treatments should be held."

Anyone reporting on cancer treatments should keep this advice in mind, and keep the hype to themselves.

Source: Matthew Abola & Vinay Prasad. The Use of Superlatives in Cancer Research. JAMA Oncol. October 29, 2015. doi:10.1001/jamaoncol.2015.3931

(Image: Shutterstock)

The World Would Be a Much Better Place If There Were Eyes Everywhere

Picture a futuristic world with eyes everywhere... It would be much cleaner, inhabited by citizens who are more prosocial, cooperative, and honest. This vision might come as a surprise to anyone who's read George Orwell's classic novel 1984, which described a surveilled society as a dystopia, filled with propaganda and distrust. But I'm not referencing a world run by Big Brother; I'm simply talking about what would happen if there were eyes everywhere -- literally -- as in pictures of them. As psychologists have repeatedly found over the years, pictures of eyes subtly slapped in public places can induce a variety of positive behaviors in passersby.

The "watching eyes effect" was originally discovered nine years ago over a matter of workplace curiosity. For many years, members of the University of Newcastle Division of Psychology had the option of donating money whenever they served themselves tea or coffee in the common area. One day, Melissa Bateson, Daniel Nettle, and Gilbert Roberts decided to turn this staple of everyday work life into an experiment. The set-up was simple. A small, 15 x 3.5 centimeter banner was placed on a nearby cupboard door at eye height. The banner displayed either flowers or a pair of eyes, and the image was alternated every week for ten weeks. Though none of the researchers' colleagues reported being aware of these subtle manipulations, their generosity changed markedly week by week. When images of eyes were shown, they paid roughly 2.8 times more for their drinks!

The annals of psychology are littered with the unreplicated remains of behavioral hypotheses and quirky effects, and if no other studies had been performed in the wake of this brief experiment, the "watching eyes effect" probably never would have garnered attention. But four years later, Bateson and Nettle rekindled their initial workplace study in a new arena. They hung up two different sorts of posters at a local restaurant, one with pictures of eyes, and the other with flowers. Both types of posters featured text asking patrons to clean up after themselves. As it turned out, the diners who ate in the presence of the eye posters were twice as likely to follow the signs' directions and throw away their leftover food and trash. What's more, when the duo repeated the experiment with posters of eyes and flowers containing messages unrelated to littering, the effect persisted.

Images of eyes have also been shown to alter behavior for the better in other contexts. In an eleven-week field experiment conducted in a supermarket, donations to charity collection buckets rose 48 percent when eyes were displayed nearby. Other studies have shown that images of eyes placed on a college campus or near bus stops lead to a reduction in litter, either because people are less likely to litter in the first place, because they are more likely to pick up others' trash, or because they spend more time doing so.

Curiously, the "watching eyes effect" is generally weaker, or even nonexistent, when explored in more controlled settings in the lab. In various studies, researchers have asked subjects to play the dictator game, in which one participant chooses whether or not to split a cash prize with another participant, sometimes in the presence of eyes and sometimes not. The psychology literature is roughly split between studies that show a positive effect and those that do not. Does this suggest that the watching eyes effect may not truly exist? Possibly. But it may also show that it's simply more powerful in public settings. Nettle and Bateson, the researchers most closely associated with the "watching eyes effect," suggest that eyes motivate prosocial behavior because they induce a feeling of being watched. It's easy to see how this perception would be more powerful in an open, social setting than in a closed, laboratory-based one.

But you can see for yourself. The "watching eyes effect" makes for fun and easy citizen science. The next time you want to induce some "honest" behavior, bring along an extra pair of eyes!

(Image: Shutterstock)

I'm Completely Fed Up with Nutrition Science. You Should Be, Too.

Nutrition science is bad for your health! Not really, of course, but if you worried about every single study that linked a certain food to a negative health outcome, you'd probably go insane.

Red meat? Cancer. Grapefruit? Cancer. Cheese? Cancer. Artificial sweeteners? Obesity. Sugar? Obesity. Milk? Bone fracture. The list could go on and on, but let's get to the meat of the article.

I'm fed up with nutrition science, and you should be, too.

It was not a single study that evoked my distaste, but a nauseating status quo that's become too much to bear.

The problems with nutrition science begin with how most of its research is conducted. The vast majority of nutrition studies are observational in nature -- scientists look at people who eat certain foods and examine how their health compares with the health of people who don't eat those foods or eat them at different frequencies. But as I reported earlier this year, these sorts of studies have a high chance of being wrong. Very wrong.

In 2011, statisticians S. Stanley Young and Alan Karr teamed up to examine twelve randomized clinical trials that tested 52 claims drawn from observational studies. Most of those observational claims held that various vitamin supplements produce positive health outcomes. The superior clinical trials disagreed.

"They all confirmed no claims in the direction of the observational claims," Young and Karr revealed in Significance Magazine. "We repeat that figure: 0 out of 52. To put it another way, 100% of the observational claims failed to replicate. In fact, five claims (9.6%) are statistically significant in the clinical trials in the opposite direction to the observational claim."

Observational studies are common in nutrition research because they are relatively cheap and easy to pull off. But you get what you pay for. These studies are often shoddy, primarily because they cannot effectively control for confounding variables. Most also suffer from another key drawback, one that may render them totally meaningless: self-reported data. Subjects report their food consumption by remembering what and how much they ate. Memory is not a recording; it is a reconstruction, making it prone to error. In fact, a 2013 study found that the majority of respondents in the CDC's National Health and Nutrition Examination Survey (NHANES), a survey program that provides data for a plethora of epidemiological studies, reported eating fewer calories than the bare minimum they would need to survive! Something is seriously flawed here.
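One simple way to see the problem is to screen self-reported intakes against a physiological floor. The sketch below uses an arbitrary 1,200-kilocalorie cutoff and made-up survey responses purely for illustration; it is not the criterion or the data used in the 2013 study.

```python
# Toy plausibility screen for self-reported daily calorie intake.
SURVIVAL_MINIMUM_KCAL = 1200   # arbitrary stand-in for a survival minimum

# Hypothetical self-reported intakes (kcal/day), for illustration only.
reported_intakes = [950, 1450, 1100, 2100, 800, 1900, 1050, 2500]

implausible = [kcal for kcal in reported_intakes if kcal < SURVIVAL_MINIMUM_KCAL]
share = len(implausible) / len(reported_intakes)
print(f"{share:.0%} of respondents reported implausibly low intakes")
```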

Unfortunately, when nutrition scientists employ the gold standard of scientific research -- randomized, controlled trials -- the quality of evidence isn't always much better. As health researcher Aaron Carroll wrote for the New York Times:

A 2011 systematic review of studies looking at the effects of artificial sweeteners on clinical outcomes identified 53 randomized controlled trials... only 13 of them lasted for more than a week and involved at least 10 participants. Ten of those 13 trials had a Jadad score — which is a scale from 0 (minimum) to 5 (maximum) to rate the quality of randomized control trials — of 1. This means they were of rather low quality... The longest trial was 10 weeks in length.
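For readers unfamiliar with the Jadad scale mentioned above, here is a simplified sketch of its standard items: one point each for reported randomization, an appropriate randomization method, reported double-blinding, an appropriate blinding method, and an accounting of withdrawals. It's a rough rendering of the rubric, not code from the review Carroll cites, and it omits the scale's point deductions for inappropriate methods.

```python
def jadad_score(randomized: bool, randomization_appropriate: bool,
                double_blind: bool, blinding_appropriate: bool,
                withdrawals_described: bool) -> int:
    """Simplified Jadad score (0-5) for a randomized controlled trial."""
    return sum([
        randomized,
        randomized and randomization_appropriate,
        double_blind,
        double_blind and blinding_appropriate,
        withdrawals_described,
    ])

# A trial merely described as "randomized", with no further detail,
# scores 1 -- the rating ten of the 13 longer sweetener trials received.
print(jadad_score(True, False, False, False, False))   # 1
```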

The dearth of high quality evidence and bounty of low quality, conflicting research has left the door open for snake oil salesmen to peddle their ineffectual and potentially dangerous products, often under the guise of scientific validity. How is the public to tell what is correct when scientists can't even agree?

Muddying the waters further is the stream of cash pouring into nutrition science from corporate interests. Nestlé funds research, as does Dannon. Coca-Cola has recently been accused of funding scientists who emphasize a lack of physical activity, rather than the copious amounts of sugar in its undeniably unhealthy soft drinks, as the primary cause of obesity. Many of the members of the advisory committee for the federal government's dietary guidelines also have ties to industry.

The ultimate point of nutrition research is to apprise the public of what they should and should not eat. What really is healthy? What isn't? But this endeavor may have been doomed from the start. As research recently published in the journal Cell showcased, what's healthy for one person may not be healthy for someone else. Tina Hesman Saey summarized the study over at ScienceNews:

"The researchers made the discovery after fitting 800 people with blood glucose monitors for a week. The people ate standard breakfasts supplied by the researchers. Although the volunteers all ate the same food, their blood glucose levels after eating those foods varied dramatically. Traits and behaviors such as body mass index, sleep, exercise, blood pressure, cholesterol levels and the kinds of microbes living in people’s intestines are associated with blood glucose responses to food, the researchers conclude."

Between poorly conducted research, pervasive corporate influence, and the simple fact that everybody reacts to specific foods differently, nutrition science as a whole must be taken with a gigantic grain of salt.

(Image: Shutterstock)

The (Ultimate) Top Ten Science Stories of 2015

As 2015 draws to a close, those who cover science look forward with anticipation to an exciting year ahead, but they also look back at the noteworthy year that was. 2015 yielded a great many discoveries, insights, and captivating stories. (It also yielded some fascinating BS.) To recall and rank the most important of these is a challenge. This year, instead of crafting our own list, we decided to try something a little different. Since aggregation is what we do, we decided to combine lists from other outlets into an "ultimate list" -- one list to rule them... you get the idea.

Methods:

We scoured the Internet for "top science stories" lists, selecting only those from sources deemed reputable. Points were awarded to each story based on its ranking. For example, on a typical top ten list the #1 story earned ten points, #2 earned nine, #3 earned eight, and so on. Lists with fewer than ten entries were normalized to a 10-point scale. For lists that did not rank their stories, each story earned 5.5 points, the average of the points available on a ranked top ten list (the numbers 1 through 10 average out to 5.5).
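In code, the scoring rule amounts to something like the sketch below. The methods note says only that shorter lists were "normalized to a 10-point scale," so the rescaling here is one plausible reading, and the outlet lists in the example are placeholders rather than our sources' actual rankings.

```python
def scores_from_list(stories: list[str], ranked: bool = True) -> dict[str, float]:
    """Turn one outlet's list into points per story.

    Ranked lists award 10 points to #1 down to 1 point to #10; shorter
    ranked lists are rescaled so the top story still earns 10 points.
    Unranked lists give every story the average value of 5.5 points.
    """
    if not ranked:
        return {story: 5.5 for story in stories}
    n = len(stories)
    return {story: (n - i) * 10 / n for i, story in enumerate(stories)}

def aggregate(all_lists: list[dict[str, float]]) -> list[tuple[str, float]]:
    """Sum each story's points across outlets and rank the totals."""
    totals: dict[str, float] = {}
    for outlet_scores in all_lists:
        for story, points in outlet_scores.items():
            totals[story] = totals.get(story, 0.0) + points
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Placeholder lists: one ranked top five, one unranked trio.
outlet_a = scores_from_list(["Pluto", "CRISPR", "Homo naledi", "Entanglement", "Paris deal"])
outlet_b = scores_from_list(["CRISPR", "Pluto", "Mars brines"], ranked=False)
print(aggregate([outlet_a, outlet_b]))   # Pluto and CRISPR lead the totals
```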

The List (as of 12/20):

1. New Horizons Reaches Pluto (61.5 points)

Pluto commanded the podium this year, as NASA's New Horizons spacecraft brought the diminutive dwarf planet into focus for the first time in human history. Among the things we learned, Pluto may still be geologically active, and it has a heart.

2. Gene Editing Takes Center Stage (48.5 points)

Thanks to a revolutionary technique called CRISPR, which was actually developed three years ago, gene editing was a major topic of discussion in 2015. CRISPR may allow scientists to effectively edit genetic diseases out of humanity. It could also potentially be used to engineer humans, themselves.

3. A New Species of Human: Homo naledi (47.5 points)

In September, scientists announced that they had discovered 1,500 fossil specimens belonging to at least fifteen individuals of a new species of human: Homo naledi. The remains were found in the Rising Star Cave System in South Africa.

4. Quantum Entanglement Confirmed (26 points)

Einstein might not like it, but "spooky action at a distance" seems to actually be real. Experiments published this fall confirmed this counterintuitive and quirky component of quantum mechanics.

5. A Climate Change Deal Is Struck (18 points)

Though it is just a little over a week old, most outlets agreed that the climate change deal struck in Paris was a monumental achievement. Nearly 200 nations committed to limiting their carbon emissions in order to prevent significant global warming later this century.

6. Salty Water Spotted on Mars (14 points)

In late September, NASA announced that the Mars Reconnaissance Orbiter had spotted signs of salty brines flowing on the Martian surface. Though it wasn't the first time signs of liquid water had been spotted on Mars, the finding is the most detailed and conclusive to date. It was also curiously timed to coincide with the release of The Martian, a science fiction movie starring Matt Damon.

7. Science's Lack of Replication (13 points)

Science has long been dogged by a surprising lack of reproducibility, but this year, a number of reports showcased the scale of the problem. The biggest eye opener: up to $28 billion is spent annually in the life sciences on research that can't be reproduced.

8. Processed Meat and Red Meat Linked to Cancer (11 points)

After reviewing hundreds of studies, the World Health Organization classified processed meats as "carcinogenic" and red meat as "probably carcinogenic." But don't worry, eating red or processed meats almost certainly won't kill you.

9. A Massive Earthquake in Nepal (9.5 points)

On April 25th, a violent earthquake rattled Nepal. All told, over 9,000 were killed and 23,000 were injured in the catastrophe.

10. ISIS Wages War on Archaeology (8 points)

This year, the Islamic terrorist group ISIS killed a number of archaeologists and destroyed dozens of ancient sites throughout the Middle East, most notably the ruins of Palmyra. According to National Geographic, ISIS uses the "destruction of cultural heritage to demonstrate their 'piety' and stoke division within local populations." They also view "the practice of archaeology as a foreign import that fans Iraqi nationalism and impedes their ultimate goal."

Sources:

Boston Museum of Science, Business Insider, Science Media Centre - New Zealand, Washington Post, Gizmodo, ScienceNews, Science Magazine

(Image: AP)

Post-Docs: Academia's Miserable Waiting Room

Congratulations, you just earned your science PhD! Are you headed into the world of private employment? Bully for you: you can expect to immediately earn a higher salary, likely receive more lifetime income, enjoy better benefits, and experience vastly greater job stability upon leaving the academic fold.

Do you want to continue searching for new knowledge and striving to advance your research field? Is your ultimate career goal to land a vaunted slot as a tenured university research professor? Then your new reality is comparatively grim: the post-doctoral ("post-doc") appointment.

A post-doc is a position that is all of these things: temporary, usually necessitating a cross-country move, extremely demanding, and extremely low-paying. A post-doc is typically hired for a term of one to three years and paid somewhere between one-third and one-half of the salary that he or she could make outside of academia. Job openings are scattered across the continent.

PhDs continuing in academia generally take two or more of these appointments. They work and wait for their chance at a faculty job that includes a shot at tenure -- the so-called tenure-track position. A strong majority of them are never hired to such a position and ultimately change career tracks after several years. Sounds painful, but you've no choice if you want a shot at becoming a professor.

The root trouble is that most established university research professors produce many PhD students over the course of a career. By the time Professor X's retirement clears an opening for a new professor, he or she may have produced 20 PhDs. Chances are that the majority of those 20 go to sleep at night dreaming of earning that slot, or one like it elsewhere.

If Professor X is often introduced as "Nobel-Prize Winner Professor X," his recommendation letters will find many of his PhD students a permanent university home. A few extremely deserving PhDs will amass such fantastic credentials that they will win tenure-track positions based on merit alone. Most applicants hired as professors will have some combination of a famous (in the research field) advisor and fabulously productive resumes.

That PhD holder who has dreamed for years of becoming a tenured faculty member faces a simple and brutal numbers game. I'll use my own field as an example. In 2008, 1,499 people earned a PhD in physics. That same year, only 342 individuals were hired for physics faculty positions in the US. Roughly speaking, between four and five PhDs are produced for every tenure-track hire.

We can look deeper into those basic numbers. About 60% of those hires were for the prestigious research-focused university positions that most post-docs desire. Further, 30% of the positions were filled by someone without a PhD granted in this country. Thus there were roughly 10 US-granted physics PhDs for every research-focused, tenure-track US faculty hire in the field. (The upside here is that if you want to teach as much as research, your chances of receiving a professorship improve substantially.)
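The arithmetic behind that "roughly 10" figure is worth making explicit. A quick sketch, using the numbers above and treating the 70% US-PhD share (the complement of the 30% figure) as applying to the research-focused positions:

```python
# Physics hiring arithmetic for 2008, using the figures quoted above.
phds_granted = 1499            # US physics PhDs awarded
faculty_hires = 342            # US physics faculty hires that year
research_focused_share = 0.60  # share of hires at research-focused universities
us_phd_share = 0.70            # share of hires holding a US-granted PhD

print(f"PhDs per faculty hire: {phds_granted / faculty_hires:.1f}")   # ~4.4

research_us_hires = faculty_hires * research_focused_share * us_phd_share
print(f"PhDs per research-focused, US-PhD hire: {phds_granted / research_us_hires:.0f}")  # ~10
```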

The reason that PhDs will accept these post-doc positions is simple. The struggle for tenure-track positions is so fierce that credentials well beyond a PhD are now required to be competitive. Holding two or three consecutive post-docs is now common. Most PhD graduates are ~31 at graduation and can expect to post-doc for 3-7 years while trying for a faculty spot.

This means that your family plans are likely on hold. Or, you're willing to try to raise a family while working 50-60 hours a week, moving across the country every 24 months and making a salary of as little as $26,000. (This is low, as ~35-40K is a more standard figure in many fields). You'd do about as well working in the service industry, despite holding a hard science PhD. Of course, your spouse has to pack up and find a new job biennially too, unless you want to live 2,000 miles apart and only have dinner over Skype for two years (depressingly common).
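To put the pay in perspective, here's a back-of-the-envelope hourly-wage calculation using the figures above; the 50-week working year is my own assumption.

```python
# Rough effective hourly wage for a low-paid post-doc.
annual_salary = 26_000    # dollars, the low-end figure cited above
hours_per_week = 55       # midpoint of the 50-60 hours cited above
weeks_per_year = 50       # assumption: two weeks off per year

hourly_wage = annual_salary / (hours_per_week * weeks_per_year)
print(f"Effective hourly wage: ${hourly_wage:.2f}")   # about $9.45 an hour
```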

This is purely a situation of supply-and-demand. Demand for tenure-track faculty jobs vastly outstrips the number of openings. And yet, thousands of new PhDs still aspire to compete for those precious spots. If you enjoy research more than anything else, a post-doc is for you. If you'd rather be broke and alone than not be a university professor, a post-doc is definitely for you. Prepare to work and sacrifice through several years of tough living for your shot at that dream.

(AP photo)

We Still Don't Know How Stars Explode

Stars explode. This we know. But it may surprise you to learn that we still don't exactly know how.

Stellar explosions -- supernovae -- are well known to scientists who gaze out into the cosmos. It is the destiny of stars that are roughly eight to twenty-five times the mass of our sun to go out in a fiery, yet silent bang: a type II supernova. When such stars use up the fuel that drives stellar fusion -- primarily hydrogen and helium -- the core progressively fuses into heavier and heavier elements until it turns into iron. At this point, fusion stops releasing energy, and the core can no longer generate the outward pressure needed to resist gravity. As a result, the star begins to collapse under its crushing gravity, forming a compact neutron star just fourteen miles in diameter. But suddenly, in the midst of this collapse, powerful shocks form and resonate throughout the infalling outer layers, until finally, in a luminous explosion, those layers are blasted away, spewing out elemental remnants that fertilize the surrounding interstellar medium. In this enriched bath of starstuff, planets may form which could one day sprout life.

The prior description forms a satisfying and seemingly complete tale of life, death, and rebirth, but there's actually a crucial detail that for years has remained a stubborn mystery: Why would a star that's imploding under the force of gravity suddenly and rapidly reverse course and spectacularly explode outward? Intuition suggests that the collapse would continue until the star fizzles out, and this is exactly what happens when massive computers are programmed to simulate supernovae!

“In a huge failure of theory, most computers can’t actually make a star explode," Dr. Fiona Harrison, a Professor of Physics and Astronomy at the California Institute of Technology, revealed in a NASA presentation at the Smithsonian National Air and Space Museum held September 30th in Washington, D.C. "The explosion halts until an ad-hoc mechanism – the sloshing around of the central part of the star – is put in by theorists to make the star fly apart."

But is this computational sleight of hand what actually happens? Another idea is that the fading stellar core starts rapidly rotating, launching streams of gas that spur the stellar blast.

As you can imagine, it has been very difficult for scientists to determine the exact explanation. Not only are supernovae exceedingly rare on human timescales, but their immense brightness -- occasionally equivalent to the light output of an entire galaxy -- makes it incredibly tough for astronomers to peer into the heart of the explosion and observe what's really going on.

However, with the launch of the Nuclear Spectroscopic Telescope Array (NuSTAR) in 2012, the tides of discovery sharply turned. The telescope (seen above), the first capable of focusing light in the high energy X-ray region of the electromagnetic spectrum, turned its gaze toward a supernova remnant called Cassiopeia A, whose light started reaching Earth a mere 300 years ago. Back on Earth, astronomers were able to pierce the dense outer layers of the stellar corpse and stare straight into the heart of the supernova remnants. And what did they see?

“The shape of the explosion was bubbly, like what you would expect if that sloshing mechanism theorists predicted really happens in life,” Harrison said.

The observation offers strong support to prevailing theory, but as it is just a single data point, more evidence is needed before we can once and for all lay to rest the tantalizing mystery of how stars explode.

(Images: NASA/JPL-Caltech)