Why Are There So Many Drones All of a Sudden?

(AP photo)

You're out walking around the park and suddenly you hear a strange buzzing sound. Your head snaps up and your eyes dart around the sky, looking for a bird, a plane, an injured man in a cape, something. After a moment of confusion, you spot the glint of an LED. Your eyes lock in on a tiny swooping object. It's like a loud, clumsy, hideous hummingbird. A drone!

Drones have become ubiquitous. People antagonize animals with them:

And spy on their neighbors:

"Personal stunt drones" capture skateboarding pre-teens:

Teenage BMX riders:

And grown men leaping off of tall things:

Five years ago, almost nobody had seen one of these tiny craft. Why the explosion in pint-sized private aircraft loaded with cameras and lights and accessories? Has some technology suddenly matured and revolutionized the industry?

Actually, yes. Smartphones and miniaturization.

In the past, small flight-control devices were the purview of the military. They were expensive, mostly illegal to sell for private use, and still a bit too bulky for aircraft as small as personal drones. The massive boom in smartphones changed that, spurring a rush to build ultra-small devices on tiny chips, capable of living inside a wafer-thin smartphone without draining its diminutive battery.

Do you play games on your phone? Every time you lean the phone sideways to slide your character across the screen, a tiny accelerometer chip in the phone measures that tilt and reacts accordingly. (Look here for an in-depth discussion of how they work.) The short of it is that a technology called MEMS (microelectromechanical systems) has matured to the point that microscopic silicon machines can be built on the same chips as the electronics that read them. Instead of a bulky mechanical gyroscope, a chip smaller than a dime can measure acceleration along all three spatial axes.
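For the curious, here's a minimal sketch in Python of the trigonometry involved. The accelerometer readings are invented for illustration, and real phone APIs differ:

```python
import math

# Hypothetical 3-axis accelerometer readings, in g's (phone held nearly flat).
ax, ay, az = 0.12, -0.03, 0.99

# At rest, gravity dominates the signal, so tilt falls out of simple trigonometry.
pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))  # forward/back tilt
roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))   # left/right tilt
print(f"pitch: {pitch:.1f} degrees, roll: {roll:.1f} degrees")
```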

And it's not just accelerometers. GPS chips that allow for precise location of the craft have undergone a similar technological revolution. These chipsets used to be too large, power-guzzling, and hot. Now the same slim smartphones house GPS on another chip the size of a dime.

That tiny high-def camera that lives in your phone is at home on a drone too, right next to the minute microSD card on which it stores images.

All these tiny devices weigh almost nothing and draw little power. That allows a drone to be flown and controlled by a single small circuit board, so the craft can weigh almost nothing beyond its motors and the batteries that power them.

The drone population has exploded, along with its capacity for troublesome use. What is the legal situation? At this point it's all up in the air, so to speak. A 2007 FAA ban on commercial use will end this year, and regulators are currently drafting new rules for both commercial and private use. Until then, your best defense is a water hose:

Five Myths About Oil

The world runs on oil. According to the United States Energy Information Administration, in 2011 the 6.965 billion people on Earth collectively used about 3,669,353,105 gallons of the stuff every day, combusting it in cars as gasoline, laying it down as asphalt, and processing it into lubricants.

Our reliance on this energy-dense liquid prompts questions. For starters, what the heck is it? Oil consists of hydrocarbons -- compounds of carbon and hydrogen -- and other carbon-containing (organic) compounds. When oil is combusted, its hydrogen-carbon bonds split apart, releasing a large amount of energy that can be harnessed.

Oil is cheered by some and maligned by others. Everyone seems to have an opinion on it. Out of the incessant discussion, myths have arisen. Here are five of them.

Myth #1: Oil is mostly dinosaur bones. Oil is a "fossil fuel," formed from the remains of organisms that died millions of years ago. Dinosaurs certainly fit this description, and we dig up their fossilized bones all the time! But though dinosaurs reigned for 135 million years, not many of them died in a position where they could be buried and crushed over the eons into coal, natural gas, or oil.

"If you took all of the dinosaurs that ever lived and... squished them up in order to get the oil out of them, we'd probably go through that oil in... a couple of days," paleontologist Jack Horner told Vsauce.

In actuality, the oil used to make the gasoline in your car almost certainly formed from oceanic microorganisms like plankton and algae that lived millions, if not billions, of years ago. When they died, they sank to the bottom of the ocean and began to decompose. Over time, they became buried. As more and more sediment formed on top of them, heat and pressure crushed them into fossil fuels. 

Myth #2: Americans use the most oil. This is only partly true. By far, the U.S. consumes more oil than any other country. But on a per capita basis, Americans aren't the world-leading gas-guzzlers. We rank 22nd, behind countries like Singapore, Kuwait, Luxembourg, Bermuda, and our neighbor to the north: Canada.

Myth #3: All crude oil is black. When you think of oil, you probably picture a black sludge. Most oil is black, but it can also be yellow, red, or even green in hue. Crude oil's color is a clear indicator of quality -- the more contaminants present, the darker it is. The highest-quality oil actually resembles the vegetable or olive oil in your kitchen: amber or golden in color.

Myth #4. The first commercial oil well was in the U.S. Though Edwin Drake's relatives might claim otherwise, their ancestor's commercial oil well in Titusville, Pennsylvania was not the first of its kind. Wells in Russia, Poland, and Romania were already in operation. Drake's well did, however, attract the first great wave of investment into oil drilling and refining.

Myth #5. The world will run out of oil very soon. Oil's demise has been greatly exaggerated for decades. There's no question that fossil fuels are finite, but predicting when they will run dry is no easy task. Proven reserves continue to increase the more we explore and as technology advances. It may be more likely that humanity will phase out the use of fossil fuels before we even run out. But with demand still increasing, nobody precisely knows when that will be, either.

(Images: AP, Niagara)

FOOF: The Chemical Most Chemists Avoid

Dioxygen difluoride sounds rather harmless: just two of what you breathe and two of what's in your toothpaste. It even has an adorable, cushy nickname: FOOF. But most sane chemists know dioxygen difluoride is not a chemical to be trifled with.

An orange-yellow solid, dioxygen difluoride melts at 109.7 K to an orange-red liquid. Note, that's kelvin, not Celsius: FOOF melts at -262.2 °F! The chemical wouldn't even solidify on the coldest day ever recorded on Earth -- July 21, 1983, when the temperature at the Soviet Vostok Station in Antarctica plummeted to −128.6 °F.
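That conversion is easy to verify yourself; here's a quick sketch (just standard unit arithmetic, nothing chemistry-specific):

```python
def kelvin_to_fahrenheit(k: float) -> float:
    # °F = (K - 273.15) × 9/5 + 32
    return (k - 273.15) * 9 / 5 + 32

print(kelvin_to_fahrenheit(109.7))  # ≈ -262.2, FOOF's melting point in °F
```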

But a frigid melting point isn't the most exciting thing about FOOF. The most exciting thing is that it reacts violently with almost anything it touches -- and by "react," I mean "explode." FOOF is one of the most furious oxidizers known to man: it rips electrons from other compounds. Oxygen does the same thing to fuel combustion, but not nearly so ferociously as FOOF.

Due to dioxygen difluoride's excitable nature, chemist Derek Lowe absolutely refuses to work with it, calling it "Satan's kimchi." He cites a 1962 paper by one A. G. Streng as evidence for his claim.

Streng was very likely the first chemist to explore and document FOOF's volatile nature. Though his report is characteristically dry, as one would expect for a paper published in the prestigious Journal of the American Chemical Society, its thesis is thrilling. As Streng discovered firsthand, FOOF explodes when mixed with just about everything, even at "cryogenic conditions." Derivatives of "violent," "vigorous," and "explosive" frequently appear throughout Streng's account of his experimental escapades, prompting the reader to wonder just how the man escaped with his life.

"If the paper weren't laid out in complete grammatical sentences and published in JACS, you'd swear it was the work of a violent lunatic. I ran out of vulgar expletives after the second page. A. G. Streng, folks, absolutely takes the corrosive exploding cake, and I have to tip my asbestos-lined titanium hat to him," Lowe remarked.

Fortunately (or unfortunately, depending upon how you look at it), you won't find FOOF in your run-of-the-mill chemistry lab. It requires storage below 100 K, and it can only be created by mixing fluorine and oxygen at very low pressure and running a current through the mixture, by combining the two elements in a stainless steel vessel at 77.1 K, or by heating them to 1,300 °F and subsequently cooling the products with liquid oxygen.

If there's one thing to remember about FOOF, it's that it goes poof!

(Image: Sassospicco)

Breakthrough Institute: Everything Modern Environmentalism Should Be

"[K]nowledge and technology, applied with wisdom, might allow for a good, or even great, Anthropocene [Age of Humans]. A good Anthropocene demands that humans use their growing social, economic, and technological powers to make life better for people, stabilize the climate, and protect the natural world."

That statement, from An Ecomodernist Manifesto, summarizes the primary guiding principle and cri de coeur of the Breakthrough Institute, an environmental think-tank whose mission is to simultaneously prioritize human flourishing and environmental responsibility. Who could possibly disagree with that?

Many mainstream environmentalists, apparently.

Last week, I was invited to be a panelist for a discussion of GMOs at the annual Breakthrough Dialogue in Sausalito, California. (Full disclosure: Breakthrough paid for my flight and hotel room, but not the funky coffee mojito I purchased at Philz Coffee.) The most interesting part of the conference was not my panel, however, but the debate that occurred on the very first night of the meeting.

Mark Lynas, the internationally renowned environmentalist and author who famously "converted" from being anti-GMO to pro-GMO, kicked off the debate with his vision of the Good Anthropocene. He believes that technology, far from being a curse, will play a vital role in healthfully shaping our planet in the coming decades and beyond.

To make his point, he said that the typical hunter-gatherer lifestyle (which, by the way, is unscientifically mythologized by paleo dieters) requires some 10 square kilometers of land per human being. This is clearly unsustainable; the population of the UK, for instance, would need to forage on a piece of land the size of North America. Modern agriculture solved this problem, yet we still act as hunter-gatherers in our oceans, where we are literally overfishing some populations to extinction. Aquaculture is an obvious fix. Such optimistic technological solutions characterize Breakthrough's vision for the Good Anthropocene.

Yet such optimistic ideas are not obvious to doomsday prophets. Responding to Mr. Lynas' positive vision, Clive Hamilton, an ethics professor and merchant of gloom, argued that mankind is essentially incapable of doing anything good in the long run. Because the shift from the Holocene (the prior geological epoch) to the Anthropocene has in many ways been catastrophic, Dr. Hamilton rejects the notion that the Anthropocene can be "good."

Glaringly absent from his diatribe, as is typical of those who adhere to such an apocalyptic worldview, was the proposal of a single solution to any problem. This was not lost upon the moderator, Oliver Morton from The Economist, who asked to much laughter: "If the Anthropocene cannot be described as 'good', then how would you describe it? Short?"

Predictably, Dr. Hamilton had no answer. He did, however, rail against "America's obsession with nuclear power," geoengineering, and other "techno-fixes." He also managed to cram in a non sequitur about the Koch Brothers and Exxon, and he flatly denied that environmentalists are largely to blame for the lack of clean energy sources, given their long refusal to embrace nuclear power. Germany, for instance, had to bring back coal because -- at the behest of environmentalists -- the government is phasing out nuclear.

In short, Clive Hamilton forcefully but unintentionally demonstrated the sheer intellectual bankruptcy of mainstream environmentalism.

With their knee-jerk pessimism and apparent belief in the parasitic nature of mankind, it is no surprise that environmentalists have failed to win over many hearts and minds. It is time for the optimistic, pro-science, and pro-humanity vision of the Breakthrough Institute to become the fresh new face of modern environmentalism.

(Photo: Alex Berezow)

Pot Is Being Legalized, and the Kids Are All Right

One of the most prevalent arguments against the legalization of medicinal and recreational marijuana is an emotional cry that's echoed for decades: "Think of the children!" But with 23 states now legalizing medicinal marijuana and four states legalizing the drug for recreational use, data is beginning to come in that deals directly with that concern. And thus far, it's telling us that "the kids will be all right."

Let's get something out of the way: No sane proponent of marijuana legalization is arguing that marijuana use is healthy for children. Though there may be some potential benefits for children with epilepsy and other chronic conditions, there are many documented short- and long-term adverse effects of marijuana use in otherwise healthy children. Psychoactive substances are terrible fodder for the adolescent brain.

So it's very good news that legalizing medical marijuana does not appear to increase teenage use. Two large studies, one from 2013 and the other published earlier this month, both returned similar findings. It's still too soon to tell if full legalization will increase use among adolescents.

Accidental ingestion of marijuana edibles is another big concern. Kids aren't likely to light up a joint, but they are considerably more inclined to eat a wayward pot-laced brownie or piece of candy. Recently, a study found that the rate of accidental exposure to marijuana among children younger than six jumped 148% between 2000 and 2013. During that time, nine states legalized medicinal marijuana and two states legalized recreational use. While that relative increase is large and almost certainly linked to legalization, the absolute numbers are still small, especially when viewed in context. Just 250 kids were exposed to marijuana in 2013, and all of them recovered. Compare that with other risks around the home.

"In 2013 alone more than 11,000 calls were made to the National Poison Data System for kids five years of age and under for exposure to alcohol. Three children died from alcohol exposure. [There were] also 45,000 calls for exposure to antihistamines, almost 28,000 calls for antimicrobials, [and] more than 25,000 calls for cough and cold medicines," pediatric health researcher Aaron Carroll reported.

Health isn't the only consideration when discussing the legalization of marijuana. In states where marijuana is illegal, teenagers caught possessing the drug can face stiff penalties, even felony charges. It's for this reason that the American Academy of Pediatrics (AAP) "strongly supports the decriminalization of marijuana use for both minors and young adults," though the organization currently opposes full legalization and opposes use by minors.

"The illegality of marijuana has resulted in the incarceration of hundreds of thousands of adolescents, with overrepresentation of minority youth," the AAP states. "A criminal record can have lifelong negative effects on an adolescent who otherwise has had no criminal justice history. These effects can include ineligibility for college loans, housing, financial aid, and certain kinds of jobs."

When considering the effects of marijuana legalization on adolescents, the benefits seem to outweigh the harms. With more data, the balance could shift. Polls indicate that marijuana use will increase if the drug is legalized, so it's up to public health officials and lawmakers to enact policies that ensure safety. The AAP recommends "strict enforcement of rules and regulations that limit access and marketing and advertising to youth," a minimum purchase age of 21 years, and reinvesting fees and taxes into cannabis research and public health campaigns.

(Image: AP)

Can Islam Come Back to the Light of Science?

Sunday was the summer solstice in the northern hemisphere. With Earth's axis at its maximum tilt toward the sun, the sun reached its highest position in the sky, bathing the upper latitudes in enduring light. Residents of Fairbanks, Alaska experienced a day lasting nearly 22 hours, while denizens of Duluth, Minnesota witnessed a more modest 16 hours.

In this, the International Year of Light, it is only fitting to mention the man who literally wrote the book on light: Ibn al-Haytham. A devout Muslim captivated by science, he believed that seeking truth and knowledge about the natural world would bring him closer to God. His quest -- one that was both scientific and spiritual -- led him to produce his masterpiece: the Book of Optics. Published roughly a thousand years ago, the tome described light more accurately than ever before and, most importantly, did so with meticulously detailed experimental evidence. Pivotally, Ibn al-Haytham outlined his experiments so that anyone could repeat them. His approach may have constituted the birth of the scientific method itself.

Ibn al-Haytham published his monumental work during a golden age for science in the Middle East. Between roughly 750 and 1258, discoveries flowed from the Islamic world like water down the Tigris and Euphrates. No other region on Earth came close to rivaling the intellectual renaissance of the Middle East. Mighty libraries were established in Cairo, Aleppo, and Baghdad. Scholars from across the world gathered in metropolitan cities to share ideas. Revolutionary inventions and processes were a dime a dozen.

Then it all ended. Crusaders from Europe and invading Mongols from Asia were the primary culprits. War left the Islamic world in tatters. For years, poverty and division reigned. Eventually, from the ashes of burned books, broken libraries, and forgotten ideas, a more medieval mindset emerged, one that de-emphasized curiosity and stressed blind faith. During the golden age, Islamic leaders read sacred passages like “The scholar’s ink is more sacred than the blood of martyrs” and “Can they not look up to the clouds, how they are created; and to the Heaven how it is upraised; and the mountains how they are rooted, and to the earth how it is outspread?” and saw science as divine. But in the new era, Muslim leaders looked on science with apathetic eyes. Life's answers were already available in the holy texts; there was no need for further investigation.

This outlook hobbled science in the Islamic world for centuries. As The Economist reported, the effects persist today:

In 2005 Harvard University produced more scientific papers than 17 Arabic-speaking countries combined. The world’s 1.6 billion Muslims have produced only two Nobel laureates in chemistry and physics. Both moved to the West: the only living one, the chemist Ahmed Hassan Zewail, is at the California Institute of Technology. By contrast Jews, outnumbered 100 to one by Muslims, have won 79. The 57 countries in the Organisation of the Islamic Conference spend a puny 0.81% of GDP on research and development, about a third of the world average. America, which has the world’s biggest science budget, spends 2.9%; Israel lavishes 4.4%.

There are signs the situation is improving. Thomson Reuters' latest global research report showed that the Arabian, Persian & Turkish Middle East is capturing an increasing share of scientific output, growing even faster than Asia and Latin America. Given recent political upheaval in the region, which has been both positive and negative, it will be interesting to see if this trend continues.

In the Reuters report, Ahmed Zewail, the first Egyptian to win a Nobel Prize in a scientific field, shared three key ingredients needed to spur science in the Islamic world:

"First is the building of human resources by eliminating illiteracy, ensuring active participation of women in society, and improving education. Second, there is a need to reform national constitutions to allow freedom of thought, the minimizing of bureaucracy, the development of merit based systems, and the creation of a credible – and enforceable – legal code. Finally, the best way to regain self-confidence is to start centers of excellence in science and technology in each Muslim country to show it can be done, to show that Muslims can indeed compete in today’s globalized economy and to instill in the youth the desire for learning."

Unfortunately, Zewail's ideas will likely face roadblocks from entrenched, oppressive regimes and the dogmatic religion upon which they operate. Many Islamic societies are steeped in religious rules that restrict freedom and expression, two ideals key to science. Moreover, brutal punishments often await those who try to change the status quo.

The rise of ISIL presents yet another barrier to science in the Islamic world. In areas under the radical group's control, teaching evolution is out of the question, and physics and chemistry are taught with a key asterisk: Allah sets all the rules. Most heartbreaking of all, students are not allowed to learn mathematics.

"They will never know that the word 'algebra' comes from a Persian mathematician, Al-Khwarizmi, who wrote the famous 9th century treatise Kitab al-Jabr Wa l-Mugabala," Karen Graham lamented.

In this, the International Year of Light, there are signs of stagnation and signs of renewal for Islamic science. One can only hope that the inhabitants of the Islamic world will look to their glorious past and see illuminated a way forward.


(Image: Spectator)

The 2,000-Year-Old Greek Artifact That May Be the World's First Analog Computer

In October 1900, a team of sponge divers led by Greek Captain Dimitrios Kondos ran afoul of a severe storm in the Aegean Sea. Rather than risk a perilous journey through roiling waters, Kondos and his crew took shelter on the island of Antikythera. Thanks to their conservative choice, the divers' tempestuous misfortune did not end in tragedy. Instead, it began decades of discovery.

While anchored offshore, the divers explored the shallow depths. Along with the usual fare -- sponges and other bottom-dwelling sea animals -- they discovered something quite extraordinary: a massive shipwreck. Rotting corpses and dead horses were strewn nearby. Subsequent dives would wash away the macabre first impression. The ship was brimming with beautiful artifacts: statues, glasswork, pots, weapons...

What the divers found, confirmed by archaeologists and scientists over the following century, was a 2,000-year-old Roman merchant vessel filled with Greek treasure. To date, it remains the largest ancient shipwreck ever discovered, "the Titanic of the ancient world," as Brendan Foley, an archaeologist at the Woods Hole Oceanographic Institution, describes it.

The Antikythera wreck yielded a bounty of artifacts when it was first discovered. By 1901, the Greek Education Ministry and the Royal Hellenic Navy had salvaged a number of exquisite statues. Relatively ignored amongst the larger objects was what can only be described as a small, 340 mm × 180 mm × 90 mm "lump." When the relics were hauled to the National Archaeological Museum in Athens, the lump was disregarded. It sat around for months until, one day, a curator saw that it had split apart. Examining the object more closely, he noticed intricate gear wheels. Clearly, this object was much more than a mere lump! The split outer casing was, in fact, a wooden box, and inside was a strange device.

Well over a century later, the Antikythera Mechanism (as the object is now called) is one of the most studied artifacts of the ancient world. Amazingly, scientists have managed to see through the wear and tear wrought by the object's 2,000-year saltwater bath and decipher its purpose. It wasn't easy. Almost every type of X-ray examination has been put to use. The Antikythera Mechanism Research Project, staffed by an international team of researchers, has brought in imaging machines the size of small trucks to scan the object inside and out.

Their proddings have revealed that the object is essentially an astronomical calculator: it predicted the positions of the Sun, Moon, and planets, forecast lunar and solar eclipses, and tracked calendar cycles, even the dates of the Greek Olympic Games. Before sinking to its watery fate, it might have looked something like this:

The mechanism's sophistication is incredible. The gear work is precise and complex, and scientists understandably can't agree on how everything was exactly arranged. Dating estimates range between 205 and 100 BC. Pergamum, Rhodes, and Syracuse have been considered as potential places where the mechanism was constructed. Though the prospect is enticing, it is highly unlikely that Archimedes, who lived in Syracuse, constructed the device.

 

Despite its mechanical brilliance, the device was flawed from the moment it was devised. Because it was based on an Earth-centered view of the cosmos, its planetary predictions were doomed to inaccuracy from the start. Even so, the device constitutes an exquisite attempt to model the workings of the solar system. As lead Antikythera researchers Tony Freeth and Alexander Jones wrote in 2012:

"In short, the Antikythera Mechanism was a machine designed to predict celestial phenomena according to the sophisticated astronomical theories current in its day, the sole witness to a lost history of brilliant engineering, a conception of pure genius, one of the great wonders of the ancient world—but it didn’t really work very well!"

(Images: Wikimedia Commons, Tony Freeth, Freeth et al.)

Yes, Cops Are Racially Biased, And So Are You

What's up with America's police these days?

Scarcely a week goes by, it seems, without news of a questionable incident. One week it's a shooting of an unarmed black man. The next, it's an overt abuse of power. Last week, it was excessive force at a raucous pool party with mostly black teens.

The impression all the media coverage gives is that, across the country, police are overly excitable, gun-toting racists. Overwhelmingly, they are not. There are a few bad apples, and they should be dealt with accordingly.

But though police may not be racist, they are biased.

In 2006, University of Colorado psychologist Joshua Correll invited hundreds of police officers and community members of all races, genders, and ages into his lab for an experiment. He and his team sat everyone down, one by one, in front of a computer and showed each subject images of black and white suspects holding either a gun or some other object. The subjects had a little over half a second to press a button and make a life-or-death decision: shoot or don't shoot.

Correll found that police were slightly more likely to deem black suspects as threatening compared to white suspects, and pull the imaginary trigger. But that wasn't the only thing Correll discovered.

It turned out that community members -- both black and white -- had the exact same racial bias! Moreover, they actually performed far worse than cops on Correll's tests, deciding to shoot unarmed suspects and black suspects much more often.

As Correll told This American Life, his research shows that Americans are generally afflicted with implicit racial biases, akin to subconscious stereotypes.

"For most of us we would say things like, 'Oh, I like black people... I don't have any negativity toward them.' That's an explicit report of an attitude. But it goes through an editing process. When I flash a picture up on a screen and ask you to respond in 630 milliseconds, you don't have time to edit. Everybody has this gut response that is 'black means threat.'"

By itself, this gut response is not racist. The bias exists because, by the numbers, blacks are more likely to commit certain types of crime -- notably homicide -- than whites. But thanks primarily to the news media and a certain never-ending reality television show, that racial bias has grown entrenched and exaggerated. Stereotyping has gone too far.

Now, for ordinary citizens, this bias doesn't have an outlet to inflict harm. But amongst armed police, it does. That's why police forces across the country need to address it. The biggest step, Las Vegas police training officer Marla Stevens told TAL, is to simply recognize that the bias exists. Once cops recognize it, they can consciously act to counter it.

Incredibly, awareness training like this seems to work. In 2012, psychologists published the results of a 12-week intervention, showing that it dramatically reduced subjects' implicit biases.

"The intervention is based on the premise that implicit bias is like a habit that can be broken through a combination of awareness of implicit bias, concern about the effects of that bias, and the application of strategies to reduce bias," the researchers wrote.

For police, implicit biases can be difficult to overturn. Such ingrained cognitive biases developed and persisted over hundreds of thousands of years because they kept us alive. In the prehistoric savannah, it was an evolutionary advantage to perceive a threat and survive rather than ignore a threat and become lunch.

However, the American justice system is predicated on morality, not evolution. We rightfully celebrate and uphold the mantra of "Innocent until proven guilty." That must apply equally to men and women of all races, in all situations, regardless of ingrained bias.

(Image: AP)

Why You Shouldn't Care That Yogurt, Mouthwash, Red Meat, Burnt Toast, and Bras Have Been Linked to Cancer

I hate to break it to you, but almost everything in daily life has been linked to cancer: burnt toast, hot dogs, poor tooth brushing, you name it!

You now have two choices: panic or continue on with your day.

I recommend the latter.

Much of the health information you read online or hear on the morning news comes from observational studies -- scientists look at people who eat certain foods, or take certain drugs, or live certain lifestyles and see how their health compares with the health of people who don't do those things. Studies like these have revealed some disconcerting links: Women who eat yogurt at least once a month have twice the risk of ovarian cancer. People who drink coffee twice a day have double the risk of pancreatic cancer. Individuals with a "Type A" personality have more heart attacks.

There is, however, a general trend with observational studies: they have a very high chance of being flat-out wrong.

Twenty-seven years ago, a trio of researchers surveyed the epidemiological literature and found 56 health claims for which published observational studies were in direct conflict. On average, 2.4 studies supported each claim while 2.3 studies did not. Unsurprisingly, most of the claims were tied to cancer risk.

In 2011, statisticians S. Stanley Young and Alan Karr analyzed twelve randomized clinical trials that tested the results of 52 observational studies. Most of the observational studies had found various vitamin supplements to produce positive health outcomes. The superior clinical trials disagreed.

"They all confirmed no claims in the direction of the observational claims," Young and Karr revealed in Significance Magazine. "We repeat that figure: 0 out of 52. To put it another way, 100% of the observational claims failed to replicate. In fact, five claims (9.6%) are statistically significant in the clinical trials in the opposite direction to the observational claim."

What has gone so wrong with observational studies? In the past, epidemiologists used them to conclusively demonstrate the grave public health risks posed by smoking, which led to much-needed regulation and oversight. Observational studies also made plain the benefits of vaccines, drinking water fluoridation, and motor vehicle safety belts.

First, the design of observational studies can be problematic. Many rely on self-reported data about eating or lifestyle behaviors, for example. Any large study population also comes with a host of potentially confounding variables that can muck up the results. Lastly, observational studies often "fish" for results, so to speak. Young and Karr cited a study which found that women who eat cereal have more baby boys -- an overtly ridiculous result that makes no biological sense.

"The data set consisted of the gender of children of 740 mothers along with the results of a food questionnaire, not of breakfast cereal alone but of 133 different food items... Breakfast cereal... was one of the few foods of the 133 to give a positive."

You see, with the typical p-value threshold of .05 used to denote a "significant" result, there's basically a 5% chance that, just by luck, any given claim will appear significant. So if researchers test enough outcomes, one is bound to hit the mark.
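To see how easily "fishing" pays off, here's a toy simulation in Python. It assumes, hypothetically, 133 food items with no real effect -- mirroring the cereal study's questionnaire -- and counts chance "hits" at the 5% significance level:

```python
import random

random.seed(1)
trials = 10_000       # simulated studies
items = 133           # food items tested per study, none with a real effect

hits_per_study = [
    sum(1 for _ in range(items) if random.random() < 0.05)  # 5% false-positive rate
    for _ in range(trials)
]

print(sum(hits_per_study) / trials)                      # ~6.7 spurious "findings" per study
print(sum(1 for h in hits_per_study if h > 0) / trials)  # ~0.999: nearly every study "finds" something
```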

Also problematic is the nature of epidemiological studies themselves. They simply aren't good at teasing out subtle risks. A 3,000% increase in the risk of lung cancer from smoking is certainly genuine, but a 38% increased risk of breast cancer due to occupational exposure to electromagnetic fields may be completely bogus.

"With epidemiology you can tell a little thing from a big thing. What's very hard to do is tell a little thing from nothing at all," Michael Thun, the former director of analytic epidemiology for the American Cancer Society told Science Magazine.

Young and Karr have a plan to fix observational studies. Their seven-step approach basically introduces peer review throughout the entire process, from data collection, to analysis, to actually writing the report. They liken their approach to a product manufacturer maintaining quality control at key steps, rather than only after the product is created.

If ever adopted, Young and Karr's method will hopefully restore some credibility to epidemiology, and, just maybe, the public will finally get to stop reading headlines that make tooth brushing out to be a life-or-death affair.

(Image: Shutterstock)

Solar Airplanes: A Flight of Fancy

Solar Impulse 2, a solar-powered airplane with room for exactly one person (the pilot), has made international headlines as it makes its historic trek around the planet. Media outlets from BBC News to Live Science have described the flight as a "revolution."

Quirky, it is; a revolution, it is not.

Consider the longest leg of its global voyage, when Solar Impulse 2 flies from Japan to Hawaii. In a commercial aircraft, the journey takes about seven and a half hours; Solar Impulse 2, on the other hand, will arrive in Hawaii after four or five days. There are two reasons for this sluggish pace: (1) jet engines require fossil fuels, so solar-powered airplanes must rely on slower electric motors and propellers; and (2) the laws of physics.

That second problem is particularly tricky because there is nothing we can do about it. Put simply, there is not enough energy in sunlight to fly a commercial aircraft.

To illustrate why, consider the Douglas DC-7, a propeller-driven airliner that cruised at 359 mph and seated 105 passengers. Its engines produced a total of 13,600 horsepower, equivalent to 10,141,520 watts. This is the amount of power the aircraft needs to take off and fly; any source of power must meet that threshold. If a DC-7 were plastered with solar panels, how much power would it get from sunlight?

The surface area of a DC-7's wings is 1,463 square feet. To be generous, we will double that number to 2,926 square feet in order to account for all upward-facing surfaces that could potentially host solar panels. At its cruising altitude at noon on a cloudless day, a DC-7 would be exposed to sunlight that provides a power of 1,200 watts per square meter, which is equivalent to 111.48 watts per square foot. Now, a calculation:

(2,926 square feet of solar panels) x (111.48 watts/square foot) = 326,190 watts

Keep in mind, that figure assumes 100% solar cell efficiency -- an absurdly optimistic assumption, since the best solar cells are only 30-40% efficient. But even with a miraculous 100% efficiency, the panels come up well short of the 10 million watts needed to operate the aircraft; they provide a measly 3.2% of the necessary power. And again, that's in perfect conditions. An 8 AM flight would be lucky to capture even 1% of the needed power due to the low angle of the sun.
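For readers who want to check the arithmetic, this short sketch reproduces the figures above; the 35% cell efficiency at the end is my own generous assumption for a best-case real panel:

```python
HP_TO_WATTS = 745.7                    # one mechanical horsepower in watts
M2_PER_FT2 = 1 / 10.7639               # square meters per square foot

engine_power = 13_600 * HP_TO_WATTS                # ~10.1 million watts needed
panel_area = 2 * 1_463                             # wing area doubled for all upward surfaces, sq ft
solar_power = panel_area * 1_200 * M2_PER_FT2      # ideal harvest, ~326,000 W

print(f"Power needed:  {engine_power:,.0f} W")
print(f"Ideal harvest: {solar_power:,.0f} W ({solar_power / engine_power:.1%})")
print(f"At 35% cell efficiency: {0.35 * solar_power / engine_power:.1%}")  # ~1.1%
```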

So, does that mean solar power is useless for flight? No, not at all. Extremely light aircraft, such as drones, could easily use solar power. But, for the foreseeable future, commercial aircraft will continue to be fuel guzzlers.

(AP photo)

Six Wordy Ways Famous Scientists Have Called for More Research

"More research is needed." No scientific publication or claim seems complete without those four words. It may be the quintessential adage of proper science: an appeal to evidence, and an admission that you might very well be wrong.

The phrase has existed for centuries in a variety of forms, and many of the most famous scientists have uttered it...

"Seeing that a great number of biological phenomena are characteristic of associations of species, it is to be hoped that this theory may receive further verification...

                                  - Leigh Van Valen, when proposing the "Red Queen Hypothesis"

 

"In the future I see open fields for far more important researches. Much light will be thrown on the origin of man and his history."

                                  - Charles Darwin, Origin of Species

 

"We can only see a short distance ahead, but we can see plenty more that needs to be done."

                                  - Alan Turing, in his 1950 paper "Computing Machinery and Intelligence"

 

"Sufficient and decisive as these arguments appear, it cannot be superfluous to seek for further confirmation."

                                  - Thomas Young, in his seminal paper arguing that light is a wave.

 

"I must confess that I have in the course of this research made myself more and more familiar with this thought, and venture to put the opinion forward, while I am quite conscious that the hypothesis advanced still requires a more solid foundation"

                                 - Wilhelm Roentgen, when announcing his discovery of X-rays

 

"In this third book I have only begun the Analysis of what remains to be discovered about Light and its Effects upon the Frame of Nature, hinting several things about it, and leaving the Hints to be examined and improved by the farther Experiments and Observations of such as are inquisitive."

                                 - Sir Isaac Newton, Opticks

H/T Proceedings of the Natural Institute of Science

Dr. Oz Hires a Clone to Be His 'Fact-Checker'

There was a brief glimmer of hope that Dr. Oz had changed. In response to criticism that received national attention, the dangerous, money-grubbing quack who makes a mockery of medical science and pollutes our national airwaves issued a press release stating that he has hired Dr. Michael Crupain, who will be "responsible for researching and vetting scripts, evaluating expert guests, ordering and editing medical animations and overseeing liaisons with the show's Medical Advisory Board. He will also lead efforts to enhance the show's ongoing dialogue with the medical community."

That job description is essentially for a fact-checker. Finally, facts matter to Dr. Oz. Right?

Well, no. Dr. Michael Crupain, as it turns out, is an anti-GMO activist. And he has made a name for himself at Consumer Reports by spreading misinformation about agriculture and biotechnology.

In 2011, he penned this diatribe against GMOs that is one untruthful statement after another. To support his claims, he relies on the "research" of junk science organizations, such as the Union of Concerned Scientists and other organic food advocacy groups. Thus, he ends up regurgitating the same myths that organic foodies use to support their very expensive, albeit science-free, lifestyles.

For instance, Dr. Crupain says that GMOs don't improve yields. That is wrong; they do. (A meta-analysis in PLoS ONE suggests crop yields are up 22%.)

He says that GMOs aren't good for farmers. That is wrong; they are. (Farmers willingly and eagerly buy GMOs because their profits skyrocket. The same meta-analysis concludes that profits are up 68%.)

He says that GMOs increase the use of pesticides. According to some research, that is true, but it is incredibly misleading. The term "pesticides" encompasses both herbicides (which are relatively harmless) and insecticides (which are relatively toxic). While overall pesticide use is up in the U.S., the use of toxic insecticides is down. And that's a big win for the environment. (The PLoS ONE meta-analysis had contradictory data; it concluded that overall global pesticide use was down 37%.)

In fact, a separate, massive review in the journal Critical Reviews in Biotechnology, which examined all aspects of GMOs, concluded that the technology posed no threat to human health or the environment.

When Dr. Crupain is not attacking biotech, he is engaging in chemophobia. For an article in Consumer Reports, Dr. Crupain said, "We’re exposed to a cocktail of chemicals from our food on a daily basis." Oh no, not chemicals! (Dihydrogen monoxide and ascorbic acid, you're on notice.)

Unfortunately, it appears as if Dr. Oz's "fact-checker" is just a clone. We could call him Mini-Oz.

Source: Klümper W, Qaim M (2014) A Meta-Analysis of the Impacts of Genetically Modified Crops. PLoS ONE 9(11): e111629. doi:10.1371/journal.pone.0111629

(AP photo)

Why Everything We 'Know' About Diet and Nutrition Is Wrong

For decades, the federal government has been advising Americans on what to eat. Those recommendations have been subject to the shifting sands of dietary science. And have those sands ever shifted. At first, fat and cholesterol were vilified, while sugar was mostly let off the hook. Now, fat is fine (saturated fat is still evil, though), cholesterol is back on the menu, and sugar is the new bogeyman.

Why the sizable shift? The answer may be "bad science."

Every five years, the Dietary Guidelines Advisory Committee, composed of nutrition and health experts from around the country, convenes to review the latest scientific and medical literature. From their learned dissection, they form the dietary guidelines.

But according to a new editorial published in Mayo Clinic Proceedings, much of the science they review is fundamentally flawed. Unlike experiments in the hard sciences of chemistry, physics, and biology, which rely on direct observational evidence, most diet studies are based on self-reported data. Study subjects are examined for height, weight, and health, then are questioned about what they eat. Their dietary choices are subsequently linked to health outcomes -- cancer, mortality, heart disease, etc.

That's a poor way of doing science, says Edward Archer, a research fellow with the Nutrition Obesity Research Center at the University of Alabama, and lead author of the report.

"The assumption that human memory can provide accurate or precise reproductions of past ingestive behavior is indisputably false," he and his co-authors write.

Two of the largest studies on nutritional intake in the United States, the CDC's NHANES and "What We Eat," are based on asking subjects to recall precisely what and how much they usually eat.

But despite all of the steps that NHANES examiners take to aid recall, such as limiting the recall period to the previous 24 hours and even offering subjects measuring guides to help them report accurate data, the information received is wildly inaccurate. An analysis conducted by Archer in 2013 found that most of the 60,000+ NHANES subjects report eating a lower amount of calories than they would physiologically need to survive, let alone to put on all the weight that Americans have gained in the past few decades.

So self-reported data based on memory recall is inaccurate, but that should come as no surprise to anyone familiar with how memory works. Memory is not a recording; it's a mental reconstruction shaped by thoughts, feelings, and everything that occurred after the event one is trying to remember. Everybody is susceptible to false memories.

And yet, again, much of epidemiological dietary research is based on asking subjects to recall the easily altered details of what they ate! No wonder the advice seems to point every which way!

"The American public deserves the best possible science. It is time to stop spending billions of health research dollars collecting pseudoscientific, anecdotal data that are essentially meaningless," Archer said in a press release.

Diet studies based on self-report are conducted because they are easy. But in this case, what's easy is not at all better. Sure, the scientific literature on nutrition is bulging with studies, but at the same time, it's watered-down with weak, meaningless information. Perhaps that's why nutrition has become rife with hucksterism.

"The greatest obstacle to scientific progress is not ignorance but the illusion of knowledge created by pseudoscientific data that are neither right nor wrong," Archer writes.

Instead of focusing on inaccurate dietary advice, Archer urges a renewed focus on physical activity as a tool for maintaining health.

Nutrition research is awash in woo. To fix that, scientists should conduct only the most rigorous studies, preferably randomized controlled trials, and funding agencies like the NIH and the CDC should only give grants to this sort of research.

The motto of the Royal Society of London, the oldest scientific society in the modern world, is Nullius in Verba.

"This phrase... is translated as “on the word of no one” or “take no one’s word for it” and suggests that scientific knowledge should be based not on authority, rhetoric, or mere words but on objective evidence," Archer writes.

Ironically, self-reported data directly contradicts the Royal Society's motto. Credulous nutrition scientists are literally taking everyone at their word. This has to end.

(Image: AP)

What's the Most Efficient Language?

It's 2025, and alien explorers from a distant planet are set to make first contact with Earth. Conveniently for us, they have a universal translator. Unfortunately, for us, they've intercepted far too many episodes of Jersey Shore and have a skewed perception of our species.

So, in an effort to put humanity's best foot forward, Earth's welcoming committee -- composed of scientists, politicians, celebrities, etc. -- decides to make first contact in a language that conveys the most information in the shortest amount of time, a symbolic gesture to show how efficient and intelligent we are. So, which spoken language should they choose? There are between 5,000 and 7,000 languages on Earth, so the decision won't be easy.

If scientists had a lot of time and resources on their hands, this is how they might go about researching the question: First, select standard written passages of decent length and word variety. Then, travel the world and record at least a dozen speakers of every language reading those passages aloud at their normal cadence. Count the overall number of syllables used for each passage and measure the time it took subjects to read their passage. Divide the syllable count by time to get the number of syllables spoken per second. Next, come up with some value for how much meaning is packed into each syllable, which will give you an average information density per syllable. Finally, use those values to derive an "information rate."
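The final step of that procedure is simple division and multiplication. Here's a hypothetical sketch; the function and the sample numbers are mine, not the Lyon team's actual data:

```python
def information_rate(syllables: int, seconds: float, density: float) -> float:
    """Syllables per second, weighted by information packed into each syllable."""
    return (syllables / seconds) * density

# A reader uses 520 syllables over 80 seconds, at an assumed information
# density of 0.91 units per syllable.
print(information_rate(520, 80.0, 0.91))  # ~5.9 information units per second
```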

University of Lyon researchers François Pellegrino, Christophe Coupé, and Egidio Marsico didn't travel the world, nor did they survey every single language, but back in 2010, they did use the process above to determine the speech information rate of seven of the world's most spoken languages: English, French, Italian, Japanese, Spanish, Mandarin, and German.

English came out on top, but not by much. Most of the languages grouped pretty closely together; Japanese, however, lagged behind the rest.

Interestingly, the languages that conveyed the least amount of information per syllable, like Spanish, Japanese, and French, tended to be spoken at a faster rate. This allowed these languages (apart from Japanese) to deliver a similar amount of information compared to more meaning-dense languages like Mandarin and English.

Of course, if Earth's welcoming committee really wanted to be ambitious, they could create a new, artificial language, one whose aim "is the highest possible degree of logic, efficiency, detail, and accuracy in cognitive expression via spoken human language, while minimizing the ambiguity, vagueness, illogic, redundancy, polysemy (multiple meanings) and overall arbitrariness that is seemingly ubiquitous in natural human language."

Conveniently enough, there's already a language like that: Ithkuil, created by John Quijada, a middle manager at the California D.M.V. Don't let Quijada's humble background fool you; Ithkuil is surprisingly brilliant, although painfully complex. Nobody, not even Quijada, can speak it fluently. (Here is a sample.)

We wouldn't need to speak it fluently, however. We'd just need to deliver a brief welcome to our extraterrestrial visitors, enough to impress them, apprise them of our species and our myriad languages, and inform them that we are interested in peace.

Hopefully their universal translator will be able to comprehend Ithkuil...

(Image: AP)

Can You Propel Yourself in Space by Farting?

The subreddit AskScience is an intellectual gold mine. Curious folk of all ages pose rousing scientific questions to be answered by scientists and experts. Often, the queries deal with peculiarities of how the world and universe function, or the psychological oddities of human behavior. Other times, they detour into the ridiculous. Last month, user sleepwalken asked a question that was both hilarious and thought provoking:

"If you farted hard enough in space, could you move yourself around?"

The slightly sophomoric, yet stimulating question quickly rose to the top of the forum, and received a serious response from user VeryLittle, a physicist and moderator of the Physics subreddit.

"Essentially, farts are rocket fuel," he (or she) replied. "Gas diffusing will carry a small amount of momentum backwards, so it must exert a force on the person, pushing them forward."

However, that force would be very small, he calculated. The average person produces roughly one liter of flatulence a day, composed mostly of hydrogen, carbon dioxide, and methane, along with smaller amounts of nitrogen and oxygen. "That comes out to 0.5 g of flatulence every day for a normal person," VeryLittle wrote.

"Now, let's guess that a fart leaves the butthole at about 1 m/s - again, not entirely unreasonable. So putting all this together, we can find that a day's worth of farts carries backwards momentum equal to

(1 m/s)(0.5 grams) = 0.0005 kg m/s

So for momentum to be conserved, the astronaut will now be traveling 7.7x10^-6 m/s forward, which is only about 1000x faster than hair grows. If an astronaut in space farted every day, it would take 10,000 years for him to get up to a normal highway speed."
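VeryLittle's arithmetic is easy to check. The sketch below redoes it; note that the astronaut's mass isn't given in the thread, so the 65 kg here is an assumption chosen to illustrate the quoted figures:

```python
gas_mass = 0.0005        # kg of flatus expelled per day
gas_speed = 1.0          # m/s exhaust velocity (the quote's guess)
astronaut_mass = 65.0    # kg, assumed

momentum = gas_mass * gas_speed          # 0.0005 kg m/s per day
daily_gain = momentum / astronaut_mass   # ~7.7e-6 m/s gained per day

highway_speed = 30.0                     # m/s, roughly 67 mph
years = highway_speed / daily_gain / 365
print(f"Daily speed gain: {daily_gain:.2e} m/s")
print(f"Years to reach highway speed: {years:,.0f}")  # on the order of 10,000
```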

VeryLittle's analysis does seem to dash one's hopes for a "natural" anal jetpack. However, as a great many match-wielding high school boys have discovered, there's a simple way to get more energy out of one's flatulence. VeryLittle continues:

"The gases I listed above are combustible - specifically methane. Just spewing the gas backwards to get a push forward would be like putting your SUV in neutral and trying to propel it forward with a supersoaker that sprays gasoline backwards. Instead of throwing it backwards, you can explode it backwards to generate thrust, like a real rocket."

"If we take the methane to be about 1% of our flatulence, and the energy of combustion to be 890 kJ/mole, then we find that the chemical potential energy of the gas is about 100 million times greater than the kinetic energy backwards. If we had one of those fancy gas backpacks that they put on cows to harvest the methane from their farts and a jetpack to burn it, then this gas would be enough to get a particularly flatulent astronaut up to highway speed in a day."

That's on paper, at least. But how would it work in practice? Astrophysicist Ethan Siegel notes that the combustion would require some oxygen to oxidize the reactants. Moreover, most of the energy produced would turn into heat, not momentum, he says. So the acceleration would be slower than predicted.

Incidentally, NASA has been well aware of the explosive power of flatus for some time now. Back in 1969, researchers at the University of California, Berkeley sought ways to minimize the fire hazard posed by farting aboard spacecraft by feeding subjects diets designed to minimize the output of bodily gases.

(Image: Shutterstock)

Time to Get Rid of Science Textbooks?

For the average college science major, a trip to the bookstore can be a backbreaking affair. Shelling out hundreds of dollars for a hefty stack of dry, often incomprehensible books is not exactly the best way to start off a semester... then you have to carry all of them home.

Nobel Prize-winning physicist Richard Feynman shared a similar distaste for science textbooks, but from a different perspective. In the early 1960s, Feynman was asked to serve on California's State Curriculum Commission, which advised the State Board of Education on which textbooks to assign to students. He quickly garnered a reputation for being difficult to please.

"...the books were so lousy," Feynman recalled in Surely You're Joking, Mr. Feynman. "They were false. They were hurried... The definitions weren't accurate... They were teaching something they didn't understand, and which was, in fact, useless, at that time, for the child... Everything was written by somebody who didn't know what the hell he was talking about, so it was a little bit wrong, always!"

Feynman wasn't being critical and contradictory just because he enjoyed being critical and contradictory (though, in many instances, he did). He was intensely passionate about sharing the joys of science and found most of the textbooks woefully deficient at doing so.

"How anybody can learn science from these books, I don't know, because it's not science," he lamented.

As you might've guessed, Feynman's tenure on the commission was very brief.

Decades later, science textbooks have improved, but fundamental flaws remain. Chiefly, they are products of politically tainted standards that demand broad but superficial coverage of each discipline. Thus, wrote Bruce Alberts, former editor-in-chief of the journal Science, textbooks encourage "skin-deep" learning.

"Take, for example, my 12-year-old grandson’s life science textbook. Approved by the State of California, it is filled with elaborate drawings and covers an astonishingly broad range of biology. But the text is largely incomprehensible for its student audience, reminding me of a commercial exam-cramming guide that proudly states: 'We'll show you that you don't really have to understand anything. You just have to make a couple of simple associations, like these. Aerobic respiration with: presence of oxygen, more ATP produced . . . Anaerobic respiration with: absence of oxygen, less ATP produced.' When my grandson and his classmates successfully complete that book and the class based on it, it is clear that they will know nothing of the kind of biology that inspires passion in the souls of the scientists..."

Unless new methods of teaching are tried and established, science education may soon fail to break the skin at all, easily washed away and forgotten. Compounding the problem, science textbooks, unlike those of more static disciplines, require constant updating as new knowledge pours in. And so the books grow thicker, or the details grow thinner.

The simple fact is that basing science education on boring tomes is no longer tenable.

Professors Sally G. Hoskins and Leslie M. Stevens are just two of the educators with innovative ideas about how to teach science. Their method takes the emphasis off mere memorization of scientific facts and focuses instead on teaching students how to think scientifically. In their college classes, undergraduate biology students work directly with a small selection of published experiments to actively learn how science is done. The focus is narrower and more in-depth.

This approach likely won't fit well in grade school or high school settings, but another simple change would fit: open-book tests. As Hoskins and Stevens wrote:

Open-book tests reflect the reality that no working biologist we know walks into their laboratory and carries out a series of experiments based exclusively on memorized information without looking at any written material, logging on to a computer, or conversing with anyone. Research science is an open-book activity. The exam questions, often short-essay questions or requests for critical analysis of data, reflect issues previously discussed in class. Successful answers are ones that demonstrate that students have the data-decoding skills and analytical ability that lead to genuine understanding of the article's findings.

An open testing format would permit teachers to spend more time on select topics and less on drilling a wide selection of facts into students' brains. Memorization is mundane and oftentimes overwhelming. It's what scares prospective scientists away from science.

Though we may be stuck with science textbooks for a while longer, educators can start taking steps to minimize their influence. Science was never meant to be restricted to microscopic text. Its realm is the universe.

Further Reading: Michael Klymkowsky and Melanie Cooper. "Now for the hard part: The path to coherent curricular design."

(Image: AP)

Why Creationism Belongs in Science Class

Creationism is not science, but science class is the perfect place to discuss creationism.

In the past, I've espoused the opposite belief: that merely mentioning creationism in a scientific setting inadvertently lends credence to it. But after reading an excellent op-ed by MacEwan University Professor P. Lynne Honey, published last week in the journal Frontiers in Psychology, I have changed my mind.

Honey, who teaches critical thinking and evolutionary behavior, once avoided discussing creationism in class, but eventually realized it could be harnessed for scientific learning.

"When I began teaching I did not teach creationism, as I focused instead on my areas of expertise. Over time it became clear that students had questions about creationism, and did not understand the difference between a scientific approach to knowledge and non-scientific approaches. This led me to wonder whether ignoring supernatural views allowed them to remain as viable ‘alternatives’ to scientific hypotheses in the minds of students... I began to explain creationism in my classes, and to model the scientific thought process that leads to a rejection of creationism."

Honey's methods, which she has refined over many years, are simple, yet powerful:

"Creationism is presented as a sociopolitical controversy rather than a scientific controversy. I emphasize that there is no question about the validity of evolution as an explanatory model, and I present creationism as a political or ‘denialist’ movement rather than a competing theory with its own strengths and evidence. I then present several common assertions from creationism (e.g., that there are no transitional fossils), and refute them using scientific evidence. At the same time, I explain several of the common logical fallacies that are evident in creationist arguments. I encourage students to ask questions, and force me to defend my statements. I then ask them to attempt to generate hypotheses and tests of creationism. Their struggles with this task lead them, logically, to the conclusion that many creationist assertions are unfalsifiable and therefore nonscientific."

Honey's head-on approach is far superior to the strategy of simply ignoring creationism, as touted by atheist scientists Richard Dawkins and Jerry Coyne. When 42% of Americans persistently believe that "God created humans in their present form 10,000 years ago," it behooves science educators to engage them, not ignore or ridicule them; the latter serves only to further entrench unscientific beliefs.

"Dawkins and Coyne argued that creationism ‘no more belongs in a biology class than alchemy belongs in a chemistry class’," Honey writes. "I argue that alchemy could belong in chemistry classrooms, if it demonstrated why some methods of gathering knowledge are more valid than others."

Moreover, Honey writes, discussing creationism in science class imparts vital critical thinking skills.

"If I simply state that creationism is not scientific, then I ask my students to take my word for it because I am the authority as a scientist. They may adopt my views, because I’m an authority, but not because they have internalized the logic that led to my views. When I allow them to apply the scientific method to creationism, they practice being scientists themselves."

Honey urges science educators to bring not only creationism into the classroom, but other forms of pseudo- and non-science as well.

"As educators, we can take the opportunity to tackle topics that students may see in the media, on social media, or around the dinner table, and model our thought processes as we explain how scientists come to conclusions. We can emphasize that not all statements are equally valid, not all ‘authorities’ are equally authoritative, and not all hypotheses are equally testable. We can also allow students to practice their logic skills, and apply them to new topics that arise with each poorly informed Facebook meme, or celebrity fad. Non-science and anti-science views do have a place in the science classroom, because they can be used to train students in the logic associated with scientific thought."

Source: Honey P. (2015) Why I teach the controversy: Using creationism to teach critical thinking. Front. Psychol. 6:793. doi:10.3389/fpsyg.2015.00793

(Image: AP)

Obese and Pregnant: Bad for Mother and Baby

For decades, doctors, nonprofits, websites, and advice gurus have had no qualms with telling pregnant women what they should and shouldn't do. Topping the list of no-no's are smoking and drinking (though the latter seems to be safe in moderation). In a perfect world, expecting mothers would have the best scientific evidence at their fingertips and the wisdom to be able to make up their own minds about the risks associated with certain activities and lifestyles. But in the real world, pregnant women may have to juggle work, commuting, errands, and children, all while experiencing the nontrivial pains of pregnancy. They rely on health professionals to deliver helpful and sometimes frank advice based on good science. Today, there's one area where experts have been far too quiet: obesity.

Obesity is often deemed too sensitive to even discuss, but it may be time to make it the new pregnancy taboo. Scientists and health experts have long warned about the personal health risks of obesity, yet they've stayed relatively mum on the risks of maternal obesity. The consequences are considerable for both mother and baby.

That much was made plain in a systematic review of reviews published last week. A team of European researchers led by Trinity College Dublin's Jamille Marchi explored all of the published research on obesity and its effects on mothers and babies, focusing on the highest-quality studies and reviews.

They found that the risk of developing gestational diabetes was four times higher in obese women, and nine times higher in severely obese women. Gestational diabetes, which is similar to normal diabetes but usually ends after delivery, puts babies at increased risk of heavier birth weight, preterm birth, and respiratory distress syndrome.

Marchi and colleagues also discovered that severely obese women -- those with a BMI above 35, or roughly 210 pounds for a woman who is 5'5" tall -- nearly doubled their risk of delivering before 33 weeks. Preterm birth affects roughly one out of every nine infants born in the U.S. and is associated with a variety of complications. According to the CDC, preterm-related causes of death accounted for 35% of all infant deaths in 2010. Moreover, preterm birth is a leading cause of long-term neurological disabilities in children.

The review further revealed that fetal death was 34% more likely for obese women (above 180 pounds for a 5'5" woman) and twice as likely for severely obese women (above 210 pounds). The rate of fetal death in the U.S. was 6.26 per 1,000 births in 2011.
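
As an aside, those pound figures follow directly from the BMI formula -- weight in kilograms divided by the square of height in metres. Here's a quick, illustrative Python sketch; the 5'5" height is the example used above, and the rest is standard unit conversion:

```python
# BMI = weight (kg) / height (m)^2, so weight = BMI * height^2.
# Converting the review's BMI cutoffs into pounds for a 5'5" woman.

KG_PER_LB = 0.453592
M_PER_INCH = 0.0254

height_m = 65 * M_PER_INCH   # 5'5" is 65 inches
for bmi in (30, 35):         # obese and severely obese cutoffs
    weight_lb = bmi * height_m ** 2 / KG_PER_LB
    print(f"BMI {bmi} at 5'5\" is about {weight_lb:.0f} lb")
# BMI 30 -> ~180 lb, BMI 35 -> ~210 lb, matching the figures above
```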

Obese expecting mothers also face an increased risk of preeclampsia, a potentially dangerous pregnancy complication characterized by high blood pressure and bodily organ damage. Moreover, their babies are more likely to suffer congenital anomalies and excessive weight at delivery.

The kicker of all these consequences is that obesity begets obesity.

"Maternal obesity is the most significant factor leading to obesity in offspring and, coupled with excess weight gain in pregnancy, also results in long-term obesity for women," the reviewers write.

Roughly one-third of women in the United States aged 20-39 -- the prime age range for pregnancy -- are obese. More than half of all pregnant women are overweight or obese.

Doctors working in delivery rooms across America are noticing. Delivering a baby to an obese mother presents very real challenges to mother, baby, and doctor. In a March editorial published in the New York Times, Claire A. Putnam, an obstetrician and gynecologist at a Kaiser Permanente Hospital, wrote that we should consider creating "special labor and delivery centers for severely obese patients that are equipped with automatic lifts, specially designed monitors and appropriately trained teams."

Above all, she said, "We need to end the taboo against talking frankly about obesity. Doctors need to be sensitive and nonjudgmental, and patients should not take offense, especially when their health, and their children’s health, is at stake."

"It is crucial to reduce the burden of adverse maternal and foetal/child outcomes caused by maternal obesity," Marche and her colleagues write. "Women with obesity need support to lose weight before they conceive, and to minimize their weight gain in pregnancy."

(Image: Shutterstock)

Why You're Fat (Compared to a Chimpanzee)

Teach him to pose and forgive the lack of spray tan, and a chimpanzee would wipe the floor with the competition at a bodybuilding show. Male bodybuilders might measure in at five percent body fat when they strut their stuff, but the untrained male chimpanzee averages 0.005%!

The difference is even starker when you compare apples to apples, so to speak. Female chimpanzees are roughly 3.6 percent body fat; the average woman runs 24 to 31 percent. Male chimps, as noted, measure 0.005%; human males carry 12 to 20 percent.

For how similar humans and chimpanzees are -- we share about 99% of our DNA -- these disparities are remarkable, especially when you consider that human males generally require at least 4% body fat to remain in good health, and women require about twice that. So what accounts for the marked difference in leanness between our two species? Put more simply, why are humans so much fatter than chimps? Anthropologists Adrienne Zihlman and Debra Bolter, respectively based out of UC-Santa Cruz and Modesto Junior College, presented an idea in yesterday's issue of the Proceedings of the National Academy of Sciences.

As a genus, humans, from Homo sapiens (that's us) to our extinct relatives Homo neanderthalensis and Homo erectus, are wanderers. Over the vast majority of our history, which spans some two million years, we have roved from place to place, inhabiting a wide range of habitats. We moved with the seasons, we moved to find food, we moved -- perhaps -- just to move. Our adaptability was our key adaptation, an evolutionary leg-up on the competition. The ability to store fat was vital to this lifestyle. Body fat cushions internal organs, but it also serves as a repository of energy that can be readily broken down to power muscles. Humans might fatten up in one environment, then move on to another. When food was scarce, we could count on our fat to sustain us, at least temporarily.

Chimpanzees, on the other hand, are localized to specific environments where food is often plentiful, primarily the forests of West and Central Africa. Fatty stores of energy aren't required, but strength to climb food-bearing trees is. Natural selection favored brawn, causing chimps to shed fat as unnecessary weight.

Interestingly, this may have hindered chimpanzees' brain development. Human brains are about three times larger than chimp brains, and this may be because we exchanged muscle for fat. Muscles and brains are metabolically expensive, requiring gobs of energy to function. With less muscle and more fat, humans had more energy to dedicate to brains.

While we hunger for information about our origins, good data can be hard to find. Zihlman and Bolter substantiated their idea by measuring the body compositions of thirteen naturally deceased bonobos, one of two species of chimpanzees. That the body fat of the bonobos was so low was "unexpected," they say. Pregnant or lactating females had higher levels of fat, as much as 8.6%, in order to have enough energy to nurse their babies. But regardless of age, body mass, or captivity, all males measured in at below 1/100th of a percent of body fat. Under no conditions did they store an appreciable amount of fat.

While fat storage aided our ancestors on the savannah, it's not doing denizens of the modern world much good. Copious food and a pampered lifestyle push many humans to a body fat composition of 35% or higher. Such a reserve might've been beneficial ten thousand years ago; today it's of limited use.

Source: Adrienne L. Zihlman and Debra R. Bolter. "Body composition in Pan paniscus compared with Homo sapiens has implications for changes during human evolution." June 2015. PNAS. www.pnas.org/cgi/doi/10.1073/pnas.1505071112

(Image: AP)

The Biggest Myth About Buying Local Food

One of the most common reasons consumers buy local food is to protect the environment. Often implicit in this rationale is the idea that eating local reduces one's carbon impact, and thus helps to slow climate change. The evidence for this seems pretty clear cut. The average food product travels more than 1,500 miles to reach your plate. That transport, whether by boat, plane, train, or truck, emits pollution. So buying food produced closer to home is almost always better. Right?

Wrong. Let's delve a little deeper.

Proponents of local food are absolutely correct in saying that closer production reduces the carbon impact of transportation, but transportation is not the only facet of food production that releases greenhouse gases, which contribute to climate change. In fact, it's a pretty paltry one. Production accounts for the vast majority of agriculture's carbon footprint, while transport to the grocery store, and finally your table, accounts for less than a tenth! This share varies depending upon the foodstuff, ranging from a low of 1% for red meat to a high of 11% for fruits and vegetables.

This means that buying local really doesn't have much of an impact on climate change. In fact, in a 2008 paper, Carnegie Mellon University's Christopher Weber and H. Scott Matthews calculated that if a family reduced all of their "food miles" to zero -- basically meaning they grew all of their own food in their kitchen (good luck with that mess) -- the reduction in carbon impact would be equivalent to driving a 25-mile-per-gallon car 1,000 fewer miles per year. An SUV driver who simply trimmed a thousand miles off their annual driving would be making an equally eco-conscious sacrifice.
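
To put that figure in perspective, here's a rough Python estimate of the carbon those 1,000 miles represent. The mileage and fuel economy come from the comparison above; the kilograms-of-CO2-per-gallon value is a commonly cited approximation we're assuming, not a number from the paper:

```python
# Rough scale of the Weber & Matthews equivalence: the CO2 emitted by
# driving a 25 mpg car 1,000 miles in a year.

MILES_AVOIDED = 1_000
MILES_PER_GALLON = 25
KG_CO2_PER_GALLON = 8.9   # assumption: typical figure for gasoline

gallons_saved = MILES_AVOIDED / MILES_PER_GALLON
co2_saved_kg = gallons_saved * KG_CO2_PER_GALLON
print(f"{gallons_saved:.0f} gallons, roughly {co2_saved_kg:.0f} kg of CO2 per year")
# ~40 gallons, ~360 kg of CO2 per year
```

Roughly 360 kilograms of CO2 per year, in other words -- and that's the entire "food miles" slice of a typical household's food footprint.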

Considering that production accounts for such a significant chunk of the energy that goes into food, there are many circumstances where buying local can actually be worse for the climate.

"For example, an acre of land in Idaho can produce about 50% more potatoes than an acre of land in Kansas," Steven Novella, president of the New England Skeptic Society, pointed out on a recent episode of The Skeptics' Guide to the Universe. Buying a local potato in this circumstance would be quite inefficient.

Moreover, for eco-conscious lamb consumers in the United Kingdom, it actually makes more sense to purchase lamb raised 11,000 miles away in New Zealand than lamb raised down the street. Why?

"New Zealand sheep are generally pastured and raised on farms using hydroelectric power," wrote Gary Adamkiewicz, an environmental scientist at Harvard.

The simple fact is that certain climates and soils are more suited to certain crops. It's more economically efficient and climate-friendly to mass produce in those locations and distribute across thousands of miles.

But just because buying local isn't necessarily better for the climate doesn't mean it's not beneficial. There are many reasons to buy from your local farmers. For example, freshly harvested food that hasn't been warehoused for weeks retains more of its nutrients. You're also supporting local families and building a balanced community.

Perhaps the most important reason to buy local is that it supports farmers striving to preserve genetic diversity. Food that travels has been bred and optimized for shelf life, size, appeal, and hardiness. These cookie-cutter changes sometimes sacrifice taste and nutrition.

"Smaller local farms, in contrast, often grow many different varieties of crops to provide a long harvest season, an array of colors, and the best flavors," the University of Vermont's Vern Grubinger wrote.

Feel free to buy local, but bear in mind that it may not be best for the climate.

(Image: AP)