Great Chefs Are Highly Skilled Food Chemists

Cooking and organic chemistry have a lot in common. Both employ esoteric language and precise, step-by-step instructions. Chefs and chemists alike put strange ingredients into fancy glassware, apply heat, and create something new. The only real difference is that, at the end of the project, a chef can eat his creation, while only a psychotic chemist would consider doing the same.

The parallels between the two professions were not lost upon the authors of Modernist Cuisine: The Art and Science of Cooking, a five-volume set that describes, among many other things, how to concoct bizarre meals using equipment typically found in a laboratory. (Pea butter and drinkable bagels, anyone?) Similarly, a new commentary by Michael Brenner and Pia Sorensen in the journal Cell highlights molecular gastronomy, a burgeoning field that seeks to apply the principles of chemistry to food for the purpose of expanding textures and flavors.

One of the more striking examples that the authors discuss is how changing the temperature at which an egg is cooked can have radically different outcomes. When an egg is placed in boiling water (100 deg C.), the high temperature causes the proteins inside to lose their natural shape (a process called "denaturation") and to form different bonds. It is for these chemical reasons that the texture of the egg changes. However, if the egg is cooked at a lower temperature, say 65 deg C., the transformation at the molecular level will not be nearly as extensive, and an entirely different texture arises. Amazingly, as the authors write, "a well-trained chef can predict the temperature of the water bath within a half a degree based on the texture of the egg, and eggs cooked even just a couple degrees apart have very different culinary applications."

Another culinary peculiarity is "hot ice cream." Most people are familiar with gelling agents, such as gelatin, which can be found in products like Jell-O. Unfortunately, it is not possible to eat "hot Jell-O," as the gelatin would melt. There are gelling agents, however, which form a gel at higher temperatures (50-90 deg C.), and adding them to flavored cream allows for the enjoyment of hot ice cream, which apparently feels and tastes just like its colder counterpart.

The authors then turn their attention to flavor, and they discuss three main ways for chef-chemists to enhance or create new ones:

Concentration. A centrifuge is a device that spins containers of liquid at high velocity. The centrifugal force causes suspended particles to fall out of solution, forming a sludge at the bottom. This process can concentrate flavor molecules, and it is precisely how pea butter is made. As Popular Science describes:

Fresh peas are blended to a puree, then spun in a centrifuge at 13 times the force of gravity. The force separates the puree into three discrete layers: on the bottom, a bland puck of starch; on the top, vibrant-colored, seductively sweet pea juice; and separating the two, a thin layer of the pea's natural fat, pea-green and unctuous.

The centrifuge isn't the only tool available for concentrating flavors. The rotovap evaporates, condenses, and collects volatile aroma compounds by applying a partial vacuum. One Spanish chef uses the rotovap to collect scents from eucalyptus leaves and forest soil.

Chemical reaction. Any process in which chemical bonds are broken or formed is a chemical reaction, during which the fundamental nature of the reactants is changed. Melting an ice cube is not a chemical reaction because the same molecule, water, is present before and after the transformation. Boiling an egg, however, is an example of a chemical reaction. So is the browning of meat and breads in a mouth-watering process called the Maillard reaction (which, incidentally, may be partially responsible for triggering peanut allergies).

Fermentation. Fermentation is a special type of chemical reaction that involves microbes. Bacteria and yeast can convert various organic molecules into waste products, deriving energy in the process. Remember, though, that one organism's waste product is another's feast. From the yeast's point of view, the alcohol in beer and wine, the world's most popular fermented beverages, is nothing more than a useless waste product. But for brewmasters and vintners, it is liquid gold. Intrepid chefs are now trying their hands at microbiology, mixing unorthodox foods and microbes in the hopes of inventing the next culinary phenom.

The authors argue that, increasingly, chefs have taken science into the kitchen. Thus, it is imperative to gain a more thorough understanding of the chemistry of cooking in order to fully unleash the modernist revolution. Aspiring chefs, therefore, ought to pay close attention in their high school science classes.

Source: Michael P. Brenner and Pia M. Sorensen. "Biophysics of Molecular Gastronomy." Cell 161 (1): 5–8. Published: 26-March-2015. DOI:

(AP photo)

It's Time to Talk About 'Male Menopause'

Do women have a monopoly on menopause? Well, it depends on whom you ask.

Purveyors of testosterone supplements would likely say "no." As men age, production of the "male hormone" drops off, resulting in reduced muscle mass, depression, insomnia, lowered libido, erectile dysfunction, and -- according to questionable advertisements -- a marked decline in manliness. These symptoms, they claim, constitute a male form of menopause: "andropause."

It's true that testosterone production slows as men age, but the decrease is gradual (roughly 1% per year after age thirty), and it's not the same for every man. This means the onset of symptoms is spread out over time, if indeed they occur at all. That's nothing like genuine menopause, which all women eventually endure. Usually between the ages of 45 and 55, women cease to produce the reproductive hormones estradiol and progesterone over a relatively short timespan. The change halts menstruation, ending a woman's ability to have children, and gives rise to symptoms ranging from hot flashes to back pain to mood disturbance. Men, on the other hand, remain fertile even to very old age.
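To put that gradual decline in perspective, here is a back-of-the-envelope sketch. It assumes a steady, compounding 1% drop per year from an age-30 baseline, which is a simplification of my own, not a model from any study:

```python
# Rough illustration (an assumption, not data from the article): if
# testosterone falls a compounding 1% per year after age 30, what
# fraction of the age-30 baseline remains at later ages?

def remaining_fraction(age, baseline_age=30, annual_decline=0.01):
    """Fraction of baseline testosterone remaining at a given age."""
    years = max(0, age - baseline_age)
    return (1 - annual_decline) ** years

for age in (40, 55, 70):
    print(f"age {age}: {remaining_fraction(age):.0%} of baseline")
```

Even at 70, roughly two-thirds of the baseline remains under this assumption, which helps explain why any symptoms tend to creep in slowly rather than arrive all at once.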

While all respected experts agree on the definition of menopause, there isn't even an accepted term for andropause. Some doctors instead refer to "partial androgen deficiency of the aging male" or "late-onset hypogonadism." As for its definition, some clinicians use "andropause" as an umbrella term for the gradual decrease in testosterone levels that all men face. Others insist that's far too broad a definition. To say that all men experience andropause is to pathologize male aging, they argue. While that may be good for pharmaceutical companies marketing testosterone replacement therapies, it's not in keeping with scientific evidence.

In 2010, researchers from Imperial College London and Manchester University studied 3,369 adult men between the ages of 40 and 79 and found that just 2% of them experienced true andropause, which they defined by the presence of four specific markers: the absence of morning erections, decreased sex drive, erectile dysfunction, and a total testosterone level below 3.2 nanograms per milliliter.

For these men, testosterone replacement may be a viable option, but many researchers still urge caution. Long-term studies to determine the safety of testosterone therapies are still lacking. In the past, menopausal women were regularly given hormone therapies until a landmark study found that older women prescribed estrogen and progesterone actually faced increased risks of blood clots, heart attack, stroke, and breast cancer.

To conclude, old age itself is not a cause for alarm: male menopause is likely oversold, and its treatments are overhyped.

(Image: Shutterstock)

The Most Endangered Elements

Since its inception, the periodic table has come in a variety of forms. The modern version, created by Nobel Prize-winning chemist Glenn Seaborg in 1944, is elegant in its simplicity and powerful in its ability to enlighten. Not only does the table organize the elements that comprise it, it can also be used to accurately predict the properties of elements that have yet to be discovered.

In 2011, Mike Pitts and his colleagues at the Chemistry Innovation Knowledge Transfer Network augmented the periodic table for a different purpose: to show which elements are at risk of becoming endangered. Their analysis revealed 44 elements facing limited supply or under threat of becoming scarce or inaccessible. Included among them are all of the rare earth elements (prominently used in cell phones and consumer electronics), as well as zinc, gallium, germanium, helium, silver, and even phosphorus.

"It is quite a sobering table," Martyn Poliakoff, a newly knighted University of Nottingham chemist and Foreign Secretary of the Royal Society, remarked last May. "Most chemists would not expect zinc to be more endangered than platinum in terms of supply."

The story of how each of the 44 endangered elements received its classification differs, but the reasons are broadly tied to issues of supply versus demand, inefficient or nonexistent recycling practices, difficulty of extraction or purification, or rarity in Earth's crust.

Particularly at risk are rare earth elements widely used in cell phones and green technologies. Each iPhone contains dozens of these. Every new megawatt of wind power installed requires nearly a ton of rare earth permanent magnets. The battery of the world's most popular hybrid, the Toyota Prius, contains 10 to 15 kilograms of lanthanum (a rare earth) alone.

The world received a pricey wake-up call to the issue in 2011, when China -- which controls a gigantic proportion of the rare earth metal supply -- doubled or even tripled the prices of many rare earth minerals over a span of just three weeks. Prices have since calmed, but they still remain above their pre-spike levels.

Rare earths aren't the only elements threatened. Helium, the second most abundant element in the universe, is one of the nine most endangered on the table and may be subject to serious supply issues by the end of the century. Its scarcity is due to wanton usage and its ephemeral nature: helium that reaches the atmosphere is light enough to escape into space. The United States, which produces 75% of the world's helium and maintains one of the largest stores, sets prices for the element, and these prices need to rise, scientists say, by as much as fiftyfold. That way, helium would be used more efficiently and reserved for scientific endeavors like cryogenics and medical research.

Phosphorus, a vital fertilizer for modern agriculture, is also endangered, listed as a "potential future risk." What's needed in the case of phosphorus is just a little reallocation, however. Every year, humans "produce" 3 billion kilograms of phosphorus via urine and feces. We just have to figure out how to get it to the proper places where it can be of use.

If you're worried about humanity exhausting the supply of the endangered elements on the table and winding the clocks back to a pre-technological age, fret not: that won't happen.

"Unlike petroleum, the elements cannot run out, because, apart from helium which can escape into space and uranium which is fissile, the elements are essentially indestructible. Therefore, it is a question of human activity taking elements from relatively concentrated deposits and distributing them so thinly over the planet that they are no longer easily recoverable," Poliakoff said.

Chemists, many of whom view the situation as an awesome opportunity to flex their mental muscles, are already working to catalyze solutions. Chief among these efforts are innovative chemical methods to recycle the elements in consumer electronics.

Market forces will also spur action, Roderick Eggert, Deputy Director of the Critical Materials Institute, said at the American Chemical Society's Green Chemistry and Engineering Conference last year. Exploration will increase, research and development will vastly improve efficiency, and certain elements may be substituted in products. Government can facilitate these processes by providing funding and streamlining regulation, he added.

The original periodic table immortalized the elements for all to see. The periodic table of endangered elements reminds us that we still must use them responsibly.

(Image: Chemistry Innovation Knowledge Transfer Network)

Was the Agricultural Revolution a Massive Fraud?

For almost all of Homo sapiens' estimated 200,000-year history, our species lived as hunter-gatherers, uprooting wild plants, gathering wild nuts, picking wild fruit, and hunting wild game. And life wasn't that bad. Granted, it was nomadic and occasionally uncertain. We lived off the land and were at the land's mercy. But more often than not, nature provided all the sustenance we needed.

Around 12,000 years ago, the human way of life began to change drastically. We stopped moving around and foraging for food and instead brought plants and animals to us. This change, known as the Neolithic, or Agricultural, Revolution, heralded the beginning of agriculture as we know it. Generally, it's considered an unquestionable advancement that led to improved living conditions, increased lifespan, and ultimately to the development of technology and all the perks of modern life.

But many anthropologists and historians now question whether the advent of agriculture was the pure progress that we denizens of the developed world all presume it to be. In his new book, Sapiens: A Brief History of Humankind, historian Yuval Noah Harari of the Hebrew University of Jerusalem even goes so far as to call the Agricultural Revolution "history's biggest fraud."

"The Agricultural Revolution certainly enlarged the sum total of food at the disposal of humankind, but the extra food did not translate into a better diet or more leisure. Rather, it translated into population explosions and pampered elites. The average farmer worked harder than the average forager, and got a worse diet in return," he writes.

Harari's arguments aren't without basis. From studying the skeletons of Native Americans who transitioned from foraging to farming, Emory University paleoanthropologist George Armelagos found a 50% increase in tooth enamel defects indicative of malnutrition, a fourfold increase in iron-deficiency anemia, and a threefold rise in bone lesions. Another study examined the stature of early Europeans who lived between 40,000 and 8,600 years ago. Before the Agricultural Revolution, men stood roughly 5'10" tall and women 5'6". Afterwards, average male height dropped to 5'5" and average female height fell to 5'1". Those heights have since recovered, but researchers suspect that a dietary shift toward less protein and more refined carbohydrates may have been to blame for the decline.

In an article in Discover Magazine, UCLA physiologist and popular science writer Jared Diamond gave three reasons why agriculture may have hampered health:

First, hunter-gatherers enjoyed a varied diet, while early farmers obtained most of their food from one or a few starchy crops. The farmers gained cheap calories at the cost of poor nutrition. (Today just three high-carbohydrate plants -- wheat, rice, and corn -- provide the bulk of the calories consumed by the human species, yet each one is deficient in certain vitamins or amino acids essential to life.) Second, because of dependence on a limited number of crops, farmers ran the risk of starvation if one crop failed. Finally, the mere fact that agriculture encouraged people to clump together in crowded societies, many of which then carried on trade with other crowded societies, led to the spread of parasites and infectious disease.

While a switch to agriculture was both a blessing and a curse for mankind, the crops we domesticated received only benefits.

"Ten thousand years ago wheat was just a wild grass, one of many... Suddenly, within a few short millennia, it was growing all over the world," Harari writes.

Think about it: Does wheat serve us? Or, do we serve wheat?

"Wheat didn't like rocks and pebbles, so Sapiens broke their backs clearing fields. Wheat didn't like sharing its space, water, and nutrients with other plants, so men and women laboured long days weeding under the scorching sun. Wheat got sick, so Sapiens had to keep a watch out for worms and blight. Wheat was defenceless against other organisms that liked to eat it... so the farmers had to guard and protect it."

Growing wheat and other food crops allowed humanity to multiply as never before, but, at the time, it did little to improve the life of the average human.

"This is hard for people in today's prosperous societies to appreciate. Since we enjoy affluence and security, and since our affluence and security are built on foundations laid by the Agricultural Revolution, we assume that the Agricultural Revolution was a wonderful improvement," Harari says.

The Agricultural Revolution moved much of humanity forward, but it also left a sizable portion in the dust.

Even today, when technology has allowed us to improve and streamline food production, 805 million people remain undernourished and 1.9 billion adults are overweight or obese. There's little question that the Agricultural Revolution ultimately benefited humanity, but it was not without costs, some that we are still paying.

(Image: Shutterstock)

The New York Times Should Seriously Consider Not Writing About Science Anymore

G.K. Chesterton once quipped, "If a thing is worth doing, it is worth doing badly." He was speaking of the most important things in life, such as faith, volunteering, and parenting. He was not speaking of journalism. That, if done badly, should simply stop.

Enter The New York Times. America's so-called "newspaper of record," the once proud Gray Lady, has seen better days. Its circulation is dwarfed by that of its crosstown rival, The Wall Street Journal. USA Today, founded merely 33 years ago, has seen its circulation and influence skyrocket. And The Economist, a weekly British newspaper, has grown to become perhaps the most influential print publication in the world.

What has gone so wrong for the NYT? Many things are to blame. The paper's leftish editorial page is out of step with a large portion of the American public. A high-profile scandal, in which journalist Jayson Blair was caught fabricating articles, damaged its credibility. The biggest factor, however, is the rise of credible challengers -- both print and digital -- that simply do better journalism. There is little incentive to spend money to read the NYT when superior news coverage (and more sensible editorializing) can be found elsewhere.

The NYT's science coverage is particularly galling. While the paper does employ a staff of decent journalists (including several excellent writers, such as Carl Zimmer and John Tierney), its overall science coverage is trite. Other outlets cover the same stories (and many more), in ways that are both more in-depth and more interesting. (They are also usually free to read.) Worst of all, too much of NYT's science journalism is egregiously wrong.

Cell Phones and Cancer

The most recent example of NYT malfeasance comes in an article that cites Joseph Mercola -- an anti-vaxxer, a quack purveyor of fraudulent alternative "remedies," and a Dr. Oz groupie -- as a source for a story on the safety of wearable electronic devices. Dr. Mercola believes that cell phones are dangerous to the human body. They are not. Yet, the NYT's award-winning (!!!) technology journalist naively regurgitates all of this nonsense, even going so far as to claim that while Wi-Fi signals are probably safe, 3G signals may not be.

How embarrassing. Cell phones do not and cannot cause cancer. As Michael Shermer explains, the photons involved in telecommunications are of insufficient energy to break chemical bonds, which is necessary for them to cause cancer. Wi-Fi and 3G signals both come from the same region of the electromagnetic spectrum, alongside TV and radio waves. So, not only is it unscientific, it is simply illogical to conclude that 3G signals pose some sort of unique risk.

NYT Foodies

Reliance on fringe, pseudoscientific sources has become something of a trend at the NYT. Its most deplorable reportage involves the science of food, particularly GMOs. Henry Miller, the former founding director of the FDA's Office of Biotechnology, reprimands anti-GMO foodie Mark Bittman for "journalistic sloppiness" and "negligence" in his "[inability] to find reliable sources."

Furthermore, in a damning exposé, Jon Entine reveals that Michael Pollan, a food activist and frequent NYT contributor, "has a history of promoting discredited studies and alarmist claims about GMOs." Even worse, Mr. Entine writes that Mr. Pollan "candidly says he manipulated the credulous editors at the New York Times... by presenting only one side of food and agriculture stories." Mr. Pollan was also chided by plant scientist Steve Savage for disseminating inaccurate information on potato agriculture and fearmongering about McDonald's French fries.

On many matters concerning nutrition or health, the NYT endorses the unscientific side of the debate. For instance, The Atlantic criticized a New York Times Magazine essay on the supposed toxicity of sugar. At Science 2.0, Hank Campbell mocked an NYT writer's endorsement of gluten-free diets, and chemist Josh Bloom dismantled a painfully inaccurate editorial on painkillers.

Teach the Controversy

It gets worse. Brian Palmer, a journalist at Slate, details how the NYT was "dupe[d]" into reporting on a fictitious condition known as "post-treatment Lyme disease syndrome." While it is certainly possible that a subset of patients are suffering from some sort of post-infection immunological sequelae, the majority of patients are likely suffering from a different ailment altogether (chronic fatigue, perhaps, or that insidious process known as aging). Yet, as Mr. Palmer notes, the NYT piece cites a fringe medical doctor and describes how antibiotic treatment works for some patients, even though actual data refutes that claim.

Mr. Palmer rightly concludes that the author and "the editors at the New York Times ought to have known better."

Science journalist Deborah Blum, in a scathing post for PLoS Blogs, denounces NYT columnist Nicholas Kristof for his infamous chemophobic rants. Mr. Kristof, who appears to believe that chemical is a four-letter word, routinely engages in "sloppy" and "less than thorough" journalism, as Ms. Blum writes. She concludes with this exhortation: "I wish he would focus and do it right. Or not do it at all."

The NYT Makes All Journalists Look Bad

As a whole, too many of the NYT's science articles take a pro-fearmongering, anti-technology viewpoint that is buttressed with dubious research. Even though interested readers can easily find plenty of thoughtful science journalism elsewhere, the NYT's shoddy reporting still matters. The journalistic malpractice that regularly stains the pages of that once great paper besmirches the reputation of all journalists.

For our sake as well as their own, the NYT ought to restrict science writing to only those staff members capable of it. However, if the NYT fails to rectify this problem and continues to demonstrate an inability to meet the minimal standards of acceptable scientific discourse, then maybe it ought to consider axing its science coverage completely.

Either do it right, or don't do it at all.

(AP photo)

Want to Get to Know Someone? Make 'em Laugh.

We don't laugh just because something is funny. For that matter, we rarely laugh just because something is funny.

In the early 1980s, Robert Provine, a psychology professor at the University of Maryland, Baltimore County, and a few collaborators observed -- or, more accurately, creepily eavesdropped on -- 1,200 real-world episodes of laughter. They found that people laughed at something even remotely funny in only a fifth of instances.

"Most of the laughter seemed to follow rather banal remarks, such as 'Look, it's Andre,' 'Are you sure?' and 'It was nice meeting you too,'" Provine described in American Scientist. He continued:

Even... the funniest of the 1,200 pre-laugh comments were not necessarily howlers: "You don't have to drink, just buy us drinks," "She's got a sex disorder - she doesn't like sex," and "Do you date within your species?" Mutual playfulness, in-group feeling and positive emotional tone - not comedy - mark the social settings of most naturally occurring laughter.

"Provine’s discoveries suggest that laughter is inherently social, that at its core it’s a form of communication and not just a byproduct of finding something funny," Peter McGraw and Joel Warner remarked last year in Slate.

And that makes perfect sense. After all, the chuckle and guffaw were around long before sitcoms ruled weeknight television or comedians cracked jokes in front of howling audiences.

More than likely, laughter serves as an evolved bonding mechanism, a "social vocalization of the human animal," as Provine put it. We laugh, especially in social settings, to subtly broadcast our desire to be part of the group.

This educated postulation provides context for a new study published last week in the journal Human Nature. Researchers based out of the Institute of Cognitive Neuroscience at University College London as well as Oxford's Department of Experimental Psychology found that laughter makes people more likely to disclose intimate information.

Humans reveal personal information about themselves every day, but exactly what is revealed depends on a variety of factors: who we're talking to, our mood, our level of intoxication, for example. Laughing, it seems, loosens the tongue as well.

The researchers recruited 112 subjects and randomly assigned them to watch either a performance by stand-up comedian Michael McIntyre (whom the Brits apparently find hilarious), an excerpt from the "Jungles" episode of the BBC series Planet Earth, or a golf instruction video. Subjects viewed the 10-minute videos in groups of four. Those who watched the comedy clip laughed vastly more than those who watched the other clips.

Here's what happened next:

In an ostensibly unrelated experiment on “social communication,” participants were instructed to sit in separate corners of the room and were given a (randomly assigned) piece of colored card (red, blue, green, or yellow). They were asked to show the card to the other participants and to remember which individuals held which colored cards. They were then asked to face away from each other and to complete a questionnaire which instructed them to compose a message for one of the members of the group...

In the message, subjects were told to write down five pieces of information which would be shared with one of the other three participants. They would then be able to interact with the person. Later on, two independent observers read all of the pieces of information and rated them for intimacy on a scale of 1 to 10.

On average, the information provided by those who watched the comedy video was rated as more intimate (5.24) than the information given by those who watched the golf (4.04) or the Planet Earth (4.54) videos.

Examples of more intimate disclosure statements were "In January I broke my collarbone falling off a pole while pole dancing," and "I'm currently living in squalor (with mice!)." Less intimate statements went something like "I am at Worcester College in my first year," or “I love eating different foods from around the world."

The researchers theorize that endorphins -- morphine-like chemicals released by the body's pituitary gland -- may play a role in laughter's apparent intimacy-boosting effects.

"Given laughter’s ability to trigger endorphin activation and the role of endorphins in the formation of social bonds, laughter may increase willingness to disclose intimate information because the opioid effect of endorphins makes individuals more relaxed about what they communicate," they write.

Interestingly, the laughing subjects didn't seem to be aware that they were more willing to disclose.

"It might be that the endorphin release triggered by laughing relaxes people into feeling like they are revealing little and this in fact encourages them to reveal a lot," lead researcher Alan Gray explained to RCScience.

It seems that softening a situation with laughter may be a good way to get to know somebody better, or, for the more mischievous, a potential key to coaxing out unflattering information.

Source: Alan W. Gray, Brian Parkinson, Robin I. Dunbar. "Laughter's Influence on the Intimacy of Self-Disclosure." Human Nature. March 2015. DOI: 10.1007/s12110-015-9225-8

Giant Plastic Island: Fact or Fiction?

Have you heard of the giant plastic island in the Pacific Ocean? Several times in casual conversation, I've been told that mankind is ruining the oceans to such an extent that there are now entire islands of plastic waste. Daily Kos tells us that this "island" is twice the size of Texas!

This struck me as incredible, in the most literal sense of the word, so I decided to look into the claim.

First, we can do a quick feasibility calculation. The mass of polyethylene terephthalate (PET), the plastic from which most water bottles are made, required to create a two-Texas-sized island just one foot thick would be roughly 9 trillion pounds. That's 15 times more than the world's annual production of plastic. Even if a year's worth of the world's spent plastic bottles could be airlifted out over the ocean and dropped directly in one spot, this island could not be made.
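For the curious, that feasibility calculation can be sketched in a few lines of Python. The Texas area figure and the bulk density of loosely packed bottles (about 10 kg per cubic meter, since a bottle is mostly trapped air, far below solid PET's roughly 1,380 kg per cubic meter) are my own assumptions, not figures given in the article:

```python
# Back-of-the-envelope check of the "island twice the size of Texas,
# one foot thick" claim. Assumed inputs, not from the article:
TEXAS_AREA_KM2 = 695_662      # approximate area of Texas
BULK_DENSITY_KG_M3 = 10       # loosely packed bottles (mostly air)
THICKNESS_M = 0.3048          # one foot in meters
KG_TO_LB = 2.20462

area_m2 = 2 * TEXAS_AREA_KM2 * 1e6    # two Texases, in square meters
volume_m3 = area_m2 * THICKNESS_M
mass_kg = volume_m3 * BULK_DENSITY_KG_M3
mass_lb = mass_kg * KG_TO_LB

print(f"mass of the island: {mass_lb:.1e} lb")  # on the order of 9e12 lb
```

Under these assumptions the total lands in the trillions of pounds, in line with the figure quoted above; using solid PET's density instead would make the number even more absurd.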

That's a rough calculation, so I decided to look more into peer-reviewed research on plastic accumulation in the oceans. The global network of ocean currents sweeps up floating debris and pushes it towards certain locations. This debris accumulates in regions called gyres, where a current vortex tends to push suspended matter into a central area and trap it. There are five major gyres: in the North and South Atlantic, the North and South Pacific, and the Indian Ocean.

Outside of those five major gyre regions, plastic density is very low and sometimes effectively zero when measured.

Scientific studies measure the amount of plastic present in the gyres by trawling through with enormous nets. The method is painstaking and subject to variation due to differences in mesh sizes of nets, trawling speed of the boat, the diligence of the measuring researchers and other variables.

A well-known 2001 study measured the density of plastic debris in the world's largest gyre, in the North Pacific. The researchers found roughly 5 kg of plastic per square kilometer. That abstract figure is easier to grasp when broken down into something more fathomable: in an area the size of an Olympic regulation swimming pool, it amounts to roughly two bottle caps' worth of plastic.
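The conversion from kilograms per square kilometer to "bottle caps per pool" is easy to verify. The pool footprint (a 50 m by 25 m Olympic pool) and the mass of a single cap (about 3 grams) are my assumptions:

```python
# Convert 5 kg of plastic per square kilometer into bottle caps per
# Olympic-pool-sized patch of ocean surface. Assumed inputs:
DENSITY_KG_PER_KM2 = 5        # figure from the 2001 North Pacific study
POOL_AREA_M2 = 50 * 25        # Olympic pool footprint, square meters
CAP_MASS_G = 3                # assumed mass of one plastic bottle cap

grams_per_m2 = DENSITY_KG_PER_KM2 * 1000 / 1e6
grams_per_pool = grams_per_m2 * POOL_AREA_M2
caps_per_pool = grams_per_pool / CAP_MASS_G

print(f"{grams_per_pool:.2f} g per pool ≈ {caps_per_pool:.1f} caps")
```

That works out to about 6 grams, or roughly two caps, per pool-sized area of the worst gyre on Earth.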

A 2013 study conducted in the South Pacific gyre found that plastic density there was 100 times lower than in the North Pacific. That's one plastic bottle cap in roughly 50 Olympic pools.

This bottle cap analogy is not entirely correct either. Most of the plastic is actually broken down into microscopic bits or films suspended in the seawater at varying depths, mostly invisible to the naked eye. It's as if you took the bottle caps and ground them into particles finer than sand.

So, here are the facts. Much of the ocean contains little to no plastic at all. In the smaller ocean gyres, there is roughly one bottle cap of plastic per 50 Olympic swimming pools' worth of water. In the worst spot on earth, there is about two plastic caps' worth of plastic per swimming pool of ocean. The majority of the plastic is ground into tiny grains or small thin films, interspersed with occasional fishing debris such as monofilament line or netting. Nothing remotely like a large island exists.

Clearly, the scale and magnitude of this problem is vastly exaggerated by environmental groups and media reports. Some researchers in the field agree, explicitly pointing out that these scare-stories "undermine the credibility of scientists."

(AP photo)

Ummmm... Here are Seven Facts You... uh... May Have Not Known About 'Um' and 'Uh'

"Uhhhhh." "Ummmm."

Listen to any speech or prolonged conversation and you'll likely find it peppered with one or both of these two filler words. One of my professors in college probably uttered "um" over one hundred times per fifty-minute lecture.

Filler words are generally thought to signal that the speaker has paused to think but still has more to say, allowing a thought to continue. Linguists, psychologists, anthropologists, and other scientists have studied their use for decades. Here are some of the most interesting facts and findings on filler words:

1. They may not technically be words. While some scientists, like Professors Herbert Clark and Jean Fox Tree, have argued that, "uh and um are conventional English words, and speakers plan for, formulate, and produce them just as they would any word," others, like the University of Edinburgh's Martin Corley and Oliver Stewart, contend that "there is little evidence to suggest that they are intentionally produced, or should be considered to be words in the conventional sense."

2. Fillers come in many languages. In Arabic, speakers often say "ya'ni" ("I mean") or wallāh ("by God"). Filipinos say "ah," "eh," "ay," and "ano." The French utter "euh" (which sounds like a very French thing to say). American Spanish speakers say "este" and "o sea." Koreans say "eung," "eo," "ge," and "eum." The Japanese say "eeto," "etto," "ano," "anoo," etc.

3. Men prefer "uh." Women prefer "um." While fillers constitute roughly 1% of all words spoken by American men and women, the two sexes differ in their favorites. According to research published in 2011 from Stanford's Eric Acton, um was "the 24th most spoken word among women, and the 43rd among men," while uh was "ranked 25th for men and 62nd for women."

4. "Um" is not a flattering thing to say. In 2002, Professor Jean Fox Tree examined people's responses to hearing "um" during a conversation. Subjects deemed speakers who used "um" to be less honest and more uncomfortable than those who did not.

5. But it's not all bad to say "um" when speaking. According to a study conducted in 1995, if you're going to pause during a speech, it's better to say "um" or "uh" rather than remain silent. Subjects rated speakers who used "um" as more competent. However, speakers who didn't pause at all were judged to be the best.

6. "Uh" and "um" are not the same. "Uh" is used during briefer delays in speech, while "um" is used for longer delays. Uhs also seem to heighten listeners' attention and make it easier for them to identify subsequent words, while ums do not.

7. "Uh" and "um" on drugs. Many are probably aware of the anesthetic effects of the drug ketamine, but fewer probably know how it alters the use of filler words. In a small, unpublished, placebo-controlled, double-blind study, researchers at the University of Georgia gave subjects small amounts of ketamine (much smaller than an anesthetic dose) and examined their speech. When subjects were under the influence of ketamine, they used significantly more ums and uhs than when sober.

H/T Improbable

(Image: Shutterstock)

A Bright Future for One of the Most Hated Plants

Of the estimated 400,000 plant species, tobacco may be the most maligned. But for that unenviable reputation, the blame lies squarely with the hucksters who packaged it with poison and dishonestly sold it to the public. By itself, tobacco -- scientifically dubbed Nicotiana -- is rather pretty. Many species are grown solely for ornamental purposes. Nicotine, the addictive substance for which the plant is named and which suffuses the leaves, stem, roots, and flowers, actually evolved to ward off insects.

Tobacco isn't the only plant to contain nicotine, but it does have more than most and is easy to cultivate, which is enough to explain why roughly 6.7 million tons of tobacco are produced throughout the world every year, most of which will go into cigarettes.

But recently, as tobacco cultivation for cigarettes has started to slow, thanks in part to a worldwide anti-smoking effort, there's a rising trend in using the plant to do scientific good.

The most publicized example came last year, when scientists utilized genetically modified tobacco to produce ZMapp, perhaps the most promising experimental treatment for Ebola. ZMapp was used to treat seven individuals infected with the deadly virus, two of whom died. A company called Kentucky BioProcessing manufactures ZMapp by immersing young tobacco plants in a liquid containing a specific gene. The plants take up that gene, which tells them to make disease-specific antibodies. CBS News' Bob Simon referred to the plants as "Xerox machines for antibodies."

Years ago, tobacco plants were at the forefront of genetic modification. In 1982, scientists inserted and expressed bacterial genes in a tobacco plant, producing the first genetically modified plant. One-tenth of all crops are now genetically modified.

Tobacco is now among "the most often used model plants for research in the field of physiology, biochemistry, molecular biology and genetic engineering." The international research database PubMed shows 5,771 papers on transgenic tobacco.

Julian Ma, chair of the Hotung Molecular Immunology Unit at St George's, University of London, explained why tobacco is such a terrific tool.

"It is very easy to work with from a biotechnology point of view. Perhaps more importantly, when thinking about future production, it is not a food crop (so we do not have to be concerned about our genes or antibodies flowing into the food chain), but it is a major world crop, for which a considerable amount of horticultural expertise has already been developed."

Ma is growing human antibodies inside tobacco plants. With his "plantibodies," he's taking aim at diseases like rabies and HIV. Scientists have also engineered tobacco plants which can remove environmental pollutants like lead and mercury from the soil, a process called bioremediation. Other groups want to develop tobacco as a biofuel, which seems to make more sense than producing heavily subsidized biofuels from corn.

Last year, in an innovative feat of engineering, researchers added two genes from cyanobacteria to tobacco plants to make them photosynthesize more efficiently. Photosynthesis is the fundamental biochemical process by which plants convert energy from sunlight into chemical energy in the form of sugar. If crop plants could be made to photosynthesize more efficiently, yields could be drastically improved.

Though tobacco has indirectly killed millions, the stage is set for the plant to be reborn as a scientific tool for good.

(Image: AP)

Chris Matthews' Sick Obsession with Racism

If you disagree with me, you're a racist.

Those of you unfortunate enough to watch more than 30 seconds of Hardball with Chris Matthews have likely learned that the host, like the American left-wing in general, has a peculiar obsession with racism. It is not an exaggeration to say that "Republican racism" has been a major theme (if not the major theme) of his show for the past seven years. In his latest rant, he concludes that Republican opposition to President Obama, from Day One, is largely explained by racism. In his own toxic words: "The age of Jim Crow managed to find a new habitat in the early 21st Century Republican Party."

In 21st Century America, accusing somebody of racism is one of the most odious charges one can bring. And though Mr. Matthews clearly does not understand this, our dedicated readership knows fully well that extraordinary claims require extraordinary evidence. So, is there extraordinary evidence to conclude that today's Republican Party is motivated primarily by racism?

No, not at all.

In my opinion, the best way to determine what a person's true feelings are toward another race is to ask him about interracial marriage. It is very difficult to claim that a person is racist if he believes that marrying a person of another race is perfectly acceptable. As it turns out, polling data from Gallup in 2011 exists on this very question, broken down by age, region, and political party. (See chart. Note that there is an updated Gallup poll from 2013, but this poll did not break down the data by political party.)

A few points stand out: (1) Interracial marriage is approved nearly unanimously by millennials; (2) People over 65 are sort of racist, but elderly racists are still outnumbered 2-to-1; (3) The supposedly racist South overwhelmingly approves of interracial marriage; (4) Self-identified Republicans overwhelmingly approve of interracial marriage (77%), but more self-identified Democrats approve of it (88%).

The 11-point gap between Republicans and Democrats hardly proves that the Republican Party is the new home of Jim Crow, as Mr. Matthews puts it. Though I do not have access to the original data, I would suggest that the primary predictor of whether a person is racist is age, not political affiliation.

This notion is further supported by two facts: First, the elderly are more likely to vote Republican than Democrat. (Importantly, many of those elderly Republican voters once voted Democrat. The Gallup article notes, "Seniors move from a reliably Democratic to a reliably Republican group.") Second, racism is practically non-existent among millennials. (Notably, a pre-election poll by Harvard in 2014 showed that among "definite" voters, 51% of millennials preferred a Republican Congress.)

Thus, the data strongly suggests that racism is NOT an ideological phenomenon, but an old person phenomenon.

Instead of poisoning our public dialogue with baseless accusations, it would be nice if more talking heads spoke responsibly and backed up their opinions with facts. Sadly, that simply may be asking too much of them.

(AP photo)

Hilariously Stupid Science Questions: We Swear This Is the Last One! Maybe...

While it's questionable whether hilariously stupid science questions will make you laugh, or even chuckle, they are clinically proven to debase your intelligence. Which is why one has to exclaim... five times?!

Yes. Five times we've shared questions like these. Will they ever end? Or will we keep sharing them until our cognitive functions -- and maybe even yours -- dwindle to those of an amoeba?

While you endlessly agonize over that hypothetical, here are ten more hilariously stupid science questions courtesy of the one section of Reddit where there's absolutely no logic allowed.

My girlfriend says she needs time and distance. Is she calculating velocity?

I was told to set my clock back an hour when it showed 2AM on November 1st. I've done this 8 times now. When can I stop setting the clock back?

My chemistry homework is asking me to rank the bonds by relative strength. Could Pierce Brosnan or Daniel Craig beat Sean Connery in a fight?

Statistics show that 1 out of 5 traffic deaths are caused by drunk drivers. Does this mean sober drivers are the real menace?

If 4 out of 5 people suffer from diarrhea does that mean the fifth one enjoys it?

I have autism. Does that mean I'm immune to vaccines?

How can there be organic molecules on Mars when the nearest Whole Foods is no closer than 54.6 million kilometers away?

How much carbon dating should I do before I'm ready for carbon marriage?

Why didn't anyone take anti-depressants to prevent the Great Depression?

Did the Cold War end because of global warming?

via Reddit

(Image: Shutterstock)

Why the Drake Equation Is Useless

If you like science fiction, you're probably familiar with the Drake equation. This famous one-line formula solves for the number of intelligent alien civilizations within our galaxy with whom we might be able to communicate. Supporters of the search for extraterrestrial intelligence (SETI) often refer to the expression to bolster their case.

There's just one BIG problem with the Drake equation. It's completely useless! In fact, I believe it may actually misrepresent the search for ET and limit our ideas about it. Here's why.

N = R × fp × ne × fl × fi × fc × L

That's the entire Drake equation. As far as formulae go, it's very simple. We want to know N, the number of civilizations we might hope to detect by telescope. The problem is in the parameters all multiplied together -- all those f's and n's and so forth. Of those seven variables, we know the exact value of none.

We have a very rough estimate of the first variable and a foothold on the second; the remaining five are posed in such a way that they are essentially impossible to measure.
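To see why this matters, here is a minimal sketch in Python. Every parameter value below is an illustrative guess, not a measurement -- the point is how wildly N swings when the unmeasurable factors are nudged:

```python
from math import prod

def drake(R, fp, ne, fl, fi, fc, L):
    """N = R * fp * ne * fl * fi * fc * L -- a plain product of seven factors."""
    return prod([R, fp, ne, fl, fi, fc, L])

# An "optimistic" and a "pessimistic" set of guesses. Both are defensible
# given our ignorance of the last five parameters, yet the answers differ
# by roughly eleven orders of magnitude.
optimistic = drake(R=3, fp=0.9, ne=1.0, fl=0.5, fi=0.5, fc=0.5, L=10_000)
pessimistic = drake(R=1, fp=0.2, ne=0.1, fl=0.001, fi=0.001, fc=0.01, L=100)

print(optimistic)   # ~3375 civilizations
print(pessimistic)  # ~2e-08 -- effectively zero
```

An equation whose output swings from thousands of civilizations to effectively none, depending entirely on which guesses you feed it, tells you nothing about the galaxy and everything about the guesser.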

R is the rate of star formation in the galaxy. Telescopes can tell us about star formation. The problem is that there are many different types of stars which form at different rates and evolve into further types throughout their lifetime. An expanding giant star may swallow or melt planets while a dying dwarf star may slowly freeze them. Formation rates vary across the galaxy and through time as well, but an educated estimate of R is possible.

The fraction of stars with planets, fp, is another number we might vaguely guess at. We currently discover hundreds of exoplanets every year, but most of our detections require a planet with a lucky orbit passing directly between us and the star. This condition and limitations on other exoplanet hunting strategies mean that we can only see some fraction of all existing planets. The smaller or further away a planet is from its star, the less likely we are to detect it.

These first two variables may be roughly estimated with decades of painstaking astronomical research. Now we are stuck with five truly useless numbers.

How can we measure the "habitability," ne, of a planet? Is a planet always habitable, or does it cycle through periods of brutal sterility and lush calm over millennia, millions of years or eons? A planet with extreme conditions might be inhospitable to new life, but life that began in better times might adapt and eventually flourish. Can life thrive in boiling hot water? We used to think not, but we were wrong. We now know that life can survive at far below freezing temperatures too. Is an atmosphere necessary and if so, what sort?

The main question in discussions of habitability is: "How many exoplanets have water?" This question dodges a vastly larger one: Does life even require water, or can life totally different from us form without it? We have no idea what the limitations of life are. In lieu of any scientific criteria for this answer, we just guess: conditions might need to be like those on Earth. We are fundamentally limited in any attempt at scientific study by having one single example of life to observe.

This leads directly to the next parameter, fl. This is the fraction of hospitable planets which go on to develop life. There is no way to measure this without directly visiting many exoplanets, which is practically impossible.

fi and fc are, respectively, the likelihoods of any existing life forms developing intelligence and that intelligent life in turn releasing high-powered electromagnetic radiation into space. Suppose something we recognize as living does evolve. How likely is "intelligent" life? Well, it depends on how you define intelligence. The Drake equation reduces intelligence to "it can produce strong EM waves." Now you need to know how many of these "intelligent" life forms will choose to beam things around with those waves. This is not a question addressable by science.

The final parameter, L, is the average number of years a civilization will survive. This number, too, comes purely from science fiction, not science fact.

The worst thing about the Drake equation is that it gives us a false sense of grasping the problem we are trying to solve. A mathematical equation connotes some scientific study or understanding of a subject. But this is misleading: SETI is simply NOT a scientific endeavor. It's entirely a leap of faith, albeit a leap that uses tools devised by science. It's like searching for paranormal activity with an electronic sound recorder.

Further, I think the Drake equation takes some of the wonder and imagination out of the idea of alien life. Contemplating the vast number of places life could develop and the myriad forms it might take is a beautiful exercise. It shouldn't be constrained by grandiose limiting assumptions and the search for useless percentages. Life need not be just like us or be found by our most obvious searches. Extraterrestrial life today is the realm of science fiction and imagination, of dreams and endless possibility.

The popular web comic XKCD satirizes Drake's equation by adding a new term: Bs, "the amount of BS you are willing to buy from Frank Drake."

(AP photo)

Review: 'Strongest' Research Shows No Link Between Gun Ownership Rates and Higher Crime

Researching the link between gun prevalence and crime is inherently tricky. When society itself is your laboratory, it's almost impossible to properly account for confounding variables that might skew the results. All sorts of factors, ranging from unemployment to alcohol use, can get in the way.

Acknowledging the limitations of current research on the link between gun ownership and crime, Florida State University criminologist Gary Kleck sorted through dozens of studies to first separate the best from the worst, and then determine what the strongest studies tell us. His efforts were recently published in the Journal of Criminal Justice.

"All research is flawed, and all bodies of research are incomplete," Kleck noted, "but that does not mean we cannot distinguish the less flawed work from the more flawed, and draw tentative conclusions based on the best available research conducted so far."

Kleck included 41 studies that examined the association between measured gun levels and crime rate in his analysis, then used three specific criteria to gauge the strength of the studies.

First, he looked for a validated measure of gun ownership. In-depth surveys and percent of suicides with guns were two of the few acceptable measures. Second, he checked to see if confounding variables were properly controlled for and how many were included. Third, he checked to see whether the researchers used procedures that would rule out reverse causality, i.e. whether crime rates actually caused gun ownership to increase. (Past studies have shown that when crime rises in an area, gun ownership often increases, likely for purposes of self-defense.)

In all, the 41 studies produced 90 findings on gun ownership and various crime rates. Of these, 64% found no statistically significant positive association between gun ownership and crime. However, 52% did identify a link between gun ownership and homicide.

When Kleck applied his three methodological criteria (valid measure of gun ownership, causality procedures, controlled for >5 confounding variables) to the studies, he found that the more criteria they met, the more likely they were to show no link between gun ownership and crime. The reversal was particularly noticeable for homicide. While 65% of the studies that met none of the criteria found a link between gun ownership and homicide, the three studies that met all of the criteria did not.

"The overall pattern is very clear – the more methodologically adequate research is, the less likely it is to support the more guns-more crime hypothesis," Kleck remarked.

Kleck offered two explanations for the finding.

"The most likely explanation is that most guns are possessed by noncriminals whose only involvement in crime is as victims, and defensive gun use by crime victims is both common and effective in preventing the offender from injuring the victim."

Kleck has been researching crime for over thirty years. In the past, he's published studies showing that capital punishment has no effect on homicide rates and that victims who resist rape attempts are more likely to escape and no more likely to suffer worse injuries. He's best known for the hotly debated National Self-Defense Survey published in 1994, which estimated that there were 2.5 million incidents of defensive gun use in 1993.

Kleck's current review indirectly conflicts with a widely cited review published last year by researchers from the University of California. They found that access to firearms in the home is linked to a 3.2x higher risk of suicide and a 2.0x higher risk of homicide.

Understandably, scientists have had a difficult time nailing down a correlation between gun prevalence and crime. In all likelihood, the relationship is not as simple as politicians and ideologues would like us to believe.

Source: Gary Kleck. "The Impact of Gun Ownership Rates on Crime Rates: A Methodological Review of the Evidence." Journal of Criminal Justice 43 (1): 40-48. Jan-Feb 2015. doi:10.1016/j.jcrimjus.2014.12.002

(AP photo)

Why People Think They've Spoken to God

In the Biblical story told in Genesis 28, Jacob settled down for the night while trekking through the deserts of Canaan and fell asleep:

“And he dreamed that there was a ladder set up on the earth, and the top of it reached to heaven; and behold, the angels of God were ascending and descending on it! And behold, the Lord stood above it, and said, ‘I am the Lord, the God of Abraham your father and the God of Isaac. The land on which you lie I will give to you and to your descendants; and your descendants shall be like the dust of the earth, and you shall spread abroad to the west and to the east and to the north and to the south; and by you and your descendants shall all the families of the earth bless themselves.’”

Jacob thought he received a message from God, but most modern cognitive scientists would say that's unlikely. Instead, they would proffer that his mind had manufactured vivid images, emotions, ideas, and sensations. Jacob's apparent theophany was a dream, nothing more.

In Ancient Greece and Rome, people believed that dreams were direct messages from deities. While the precise mechanism for dreams remains unknown, scientists have narrowed down their origin to somewhere in the brain, not the beyond.

Sleep is divided into four distinct stages. In the final stage, called rapid-eye movement (REM) sleep, the brain lights up with electrical activity, almost as if it were awake. REM sleep is when most dreaming occurs and when dreams are most vivid. During this phase, in our sleeping minds, we are transported. We see, hear, and feel. We experience things we would never experience in real life. We face the otherworldly and the supernatural.

It is no wonder, then, that dreams have played a major role in the historical evolution of religions, say Patrick Mcnamara, a neurologist at Boston University, and Kelly Bulkeley, a Visiting Scholar at the Graduate Theological Union in Berkeley, California, in a new paper published in Frontiers in Psychology.

Researchers working in many different parts of the world have found that people in traditional societies treat dreams as the sources of their religious ideas, including their concepts of their gods and other supernatural beings. It is likely that ancestral populations also treated them as such. Dreams were considered proof of the gods and a spirit realm since dreams were involuntary and emotionally vivid experiences that involved the dreamer’s soul encountering other beings including long deceased relatives and so on.

Why do people occasionally ascribe sacred status to figures in their dreams? One theory from evolutionary psychology holds that humans are inclined to presume intelligence or agency in unlikely places, as doing so might help recognize patterns that would otherwise be missed, thus granting a better chance of survival. As the intuitive argument goes, it's better to assume the sound of a snapping twig was caused by a bear or a tiger, rather than just a falling branch. And so in dreams, we might be more likely to assume that advice or a warning from a talking tree or a burning bush comes from a higher being who's looking out for us -- it's not merely meaningless drivel.

Dreams offer, perhaps, the perfect setting to "converse" with a deity, Bulkeley and Mcnamara say, for three major reasons. First, dreams bring forth "mental simulations of alternate realities." Second, they are replete with theory of mind attributions, in which we attribute mental states to other beings or entities. And third, they allow us to give value or significance to our experiences. "The neurobiology of... sleep states is now understood to involve forebrain mesocortical dopaminergic systems that directly compute value and dis-value," Bulkeley and Mcnamara write.

Considering these factors, Bulkeley and Mcnamara reach an intriguing conclusion.

"All humans are endowed with brains innately primed to daily generate god concepts in dreaming."

Source: Mcnamara P and Bulkeley K (2015). Dreams as a source of supernatural agent concepts. Front. Psychol. 6:283. doi: 10.3389/fpsyg.2015.00283

(Image: Shutterstock)

A Bias in Field Goals and Penalty Kicks?

Seattle Seahawks fans are just now mentally recovering from the heartbreaking Super Bowl loss to the New England Patriots. (Honestly, even Captain Picard knew to give the ball to Beast Mode.) However, the fact that Seattle recently won a Super Bowl in 2014 relieves a bit of the sting.

Not so for the Buffalo Bills. They have zero Super Bowl wins in four appearances. (To add insult to injury, they made the Super Bowl four years in a row, from 1991 to 1994, and lost every single one of them. Ouch.) In what became possibly the most infamous Super Bowl loss of all time, the Buffalo Bills had the opportunity to clinch victory in the closing seconds of Super Bowl XXV (see video). But their field goal kicker, Scott Norwood, missed. Wide Right.

If it makes him feel any better, Mr. Norwood is not alone. A study in PLoS ONE showed that, from 2005-2009, kickers in Australian football (which is more like rugby than American football) tended to miss to the right. But why?

Previous studies have indicated that humans have attentional asymmetry, which ultimately means that we have a hard time judging the location of true center. When attempting to bisect objects that are within reach, we tend to have a leftward bias; for objects further away, we tend to have a rightward bias.

So, the researchers, who hail from Australia, examined this phenomenon in greater detail. They determined how accurately 212 students could kick a soccer ball between goal posts that were placed four meters away. Just like the professional footballers, their kicks tended to skew to the right. Furthermore, when the students were asked to touch the center of the goal with a long stick, they missed the true center by about six millimeters to the right. The authors attributed this error to attentional asymmetry.

It would be very interesting to determine whether the same rightward bias exists among field goal attempts in the NFL and penalty kicks in soccer. In the meantime, athletes should bear in mind that their eyes and brains are slightly deceiving them.

Source: Nicholls MER, Loetscher T, Rademacher M (2010) Miss to the Right: The Effect of Attentional Asymmetries on Goal-Kicking. PLoS ONE 5(8): e12363. doi:10.1371/journal.pone.0012363

(AP photo)

Can Porn Give You Erectile Dysfunction?

If articles in the popular media can be believed, a sizable portion of young men in the developed world are becoming addicted to online pornography and desensitized to real sex. When faced with flesh and blood women, they find themselves unable to perform.

Writing in the Globe and Mail last November, columnist Leah Mclaren argued that porn-induced erectile dysfunction is a big problem facing what she called "Gen-XXX." Anecdotes led the way in her controversial piece, but she also claimed the support of science.

"Porn-induced erectile dysfunction is now well documented by the mainstream medical community. Dr. Oz devoted a show to the topic last year..." she wrote.

Ah. Yes. Doctor Oz. In light of his past quackery and a recent study showing that four out of every ten claims he makes aren't based on evidence, I am leery to trust "America's Doctor." What about the rest of the medical community?

As it turns out, there are precisely zero published studies on porn-induced erectile dysfunction. Surprised by the dearth of information, I emailed sexologist Dr. Jill McDevitt, who has a Ph.D. in human sexuality. She replied that she'd never heard of the condition and, upon investigation, could find no research addressing it.

What she did find, just like me, were a host of entertainment and anti-pornography sites all too willing to fill the information void with scientific-sounding explanations. Leading the way is Gary Wilson's Your Brain on Porn.

Wilson, a science teacher, explains porn-induced erectile dysfunction in the context of an overarching addiction to pornography, the existence of which is hotly contested. It goes a little something like this: Pornography is far more attainable and stimulating than monogamous sex. Masturbating to porn triggers the release of neurotransmitters tied to the reward system of the brain, like dopamine, granting a euphoric rush of good feelings. Repeatedly masturbating to porn puts this reward system into overdrive, which eventually alters the structure of the brain, ultimately numbing the pleasure response and making it harder to get aroused by real world sex.

Wilson pairs his intuitive account with all sorts of testimonials on his website from men who quit masturbating to pornography. "My skin and eyes looks alive for the first time in years," one user states. "ED cured. I am now engaged to an awesome girl," says another. The result is a narrative extremely compelling to a host of young men.

Many scientists at the Kinsey Institute, perhaps the leading scientific organization on sexual health, are not convinced, however. When a popular science YouTube channel posted a video touting Wilson's theory on sex addiction, Kinsey Research Fellow Debby Herbenick responded bluntly:

"Most sex researchers don't recognize 'porn addiction' as a true addiction (nor do most of us recognize 'sex addiction' as a true addiction). This is a common topic of conversation among scientists in my field. The video is mostly speculation; empirical data to back up the statements in the video are enormously lacking."

Herbenick also added that "many of the professionals who think it can be treated are selling treatment programs for porn/sex addiction." Indeed, Wilson recently published a book.

Wilson's approaches may border on hucksterism, but it seems he honestly believes the ideas he's touting and thinks the science is there to back them. In an interview with Vice, he cited a recent study from Cambridge University which found that "pornography triggers brain activity in people with compulsive sexual behaviour similar to that triggered by drugs in the brains of drug addicts." The researchers found that many of these people also had erectile troubles.

He does, however, neglect to mention a key caveat mentioned by the Cambridge researchers.

“Whilst these findings are interesting, it’s important to note, however, that they could not be used to diagnose the condition. Nor does our research necessarily provide evidence that these individuals are addicted to porn – or that porn is inherently addictive," they wrote.

It's Wilson's tendencies to neglect nuance and oversimplify that irk a great many of the scientists studying pornography's effect on the brain and sexual health. While the Cambridge study lent support to the notion that porn is addictive, others have found the opposite.

Pornography, like food, can be consumed in both healthy and unhealthy ways, and impart both positive and negative effects. There are some who claim that people can be addicted to food, but few would say that it's inherently addictive.


(Image: Shutterstock)

Gluten Sensitivity Has Not Just Been Proven

According to certain media reports, a new study has conclusively demonstrated that non-celiac gluten sensitivity, also known as gluten intolerance, is real.

The authors of the study disagree.

"It does not represent crucial evidence in favor of the existence of this new syndrome," they write.

In the social media-driven era of "share first, ask questions later," it's easy to see why this mix-up occurred. The study, "Small Amounts of Gluten in Subjects with Suspected Nonceliac Gluten Sensitivity: a Randomized, Double-Blind, Placebo-Controlled, Cross-Over Trial," sounds extremely rigorous. It's even published in a respectable journal, Clinical Gastroenterology and Hepatology, which is run by the American Gastroenterological Association. Moreover, the research team is based at the University of Pavia, a top-notch university in Italy established over 600 years ago. So, when you read the study's abstract and learn that subjects given a gluten pill experienced higher levels of abdominal pain, bloating, and even depression compared to those given a placebo, you might think that non-celiac gluten sensitivity (NCGS) is indeed what's causing their uncomfortable symptoms and that, despite what RCS reported last May, the condition -- which supposedly affects more than 18 million Americans -- does in fact exist.

NCGS may very well be real, but science has yet to conclusively show it. And the current study doesn't alter the situation one bit.

The researchers recruited a total of 61 individuals suspected of having NCGS -- meaning they reported experiencing deleterious symptoms like headaches, abdominal pain, and bloating after eating gluten but were confirmed not to have celiac disease or wheat allergy. The subjects were then put on a gluten-free diet for one week. Afterwards, they were split into two groups. Subjects in one group received 10 pills containing a total of 4.75 grams of gluten to ingest over one week (no more than two per day), while subjects in the other received 10 pills containing 4.75 grams of rice starch. Both groups then returned to a gluten-free diet for a week, after which they crossed over and spent another week taking either the placebo or the gluten pills. Throughout the process, subjects recorded the severity of 28 different symptoms -- 15 gastrointestinal and 13 extraintestinal (e.g. anxiety, headache, rash) -- on a scale of 0 (absent) to 3 (severe). Both the subjects and the researchers giving them the pills were blinded to the identity of the pills.

When the researchers tallied the data, they found that subjects reported a higher overall symptom score when taking the gluten pill.

"The median overall score after 1-week of gluten consumption was 48 (range 1-156), while after 1-week of placebo it was 34 (range 0-178)," they detailed.

Individually, only six symptoms were statistically worse on the gluten pill compared to the placebo, including abdominal pain, bloating, and depression. 

Upon closer examination, however, the researchers noticed that 50 of the 59 subjects who completed the study (two dropped out) either responded equally negatively to gluten and placebo (likely due to the nocebo effect), showed no statistically significant differences, or actually reported a worse response to the rice starch pill! In other words, the researchers identified only nine subjects who they suspected might actually be sensitive to gluten.
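To make that responder tally concrete, here is a minimal sketch of how a crossover comparison classifies subjects. The scores and the five-point margin below are entirely hypothetical, invented for illustration; the paper reports only aggregate results.

```python
# Hypothetical illustration of the crossover tally described above.
# Each subject has one overall symptom score (28 items, each rated 0-3,
# summed over a week) for the gluten week and one for the placebo week.
# These numbers are invented -- they are not the study's raw data.
subjects = {
    "S01": {"gluten": 48, "placebo": 12},  # worse on gluten: possible NCGS
    "S02": {"gluten": 30, "placebo": 31},  # roughly equal: nocebo/no effect
    "S03": {"gluten": 10, "placebo": 44},  # worse on the rice starch pill!
}

def classify(scores, margin=5):
    """Crude rule: a subject 'responds to gluten' only if the gluten-week
    score exceeds the placebo-week score by more than `margin` points."""
    diff = scores["gluten"] - scores["placebo"]
    if diff > margin:
        return "worse on gluten"
    if diff < -margin:
        return "worse on placebo"
    return "no meaningful difference"

for sid, scores in subjects.items():
    print(sid, classify(scores))
```

Under any rule of this shape, only subjects in the first category count as suspected gluten responders -- which is how 59 completers can shrink to just nine candidates.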

A result like this could easily occur due to random chance. It could also be explained by the study design: after one week on a gluten-free diet, even people without gluten sensitivity would likely experience adverse gastrointestinal symptoms when reintroduced to gluten. Unfortunately, the researchers did not include a control group of subjects without suspected NCGS, a limitation they acknowledged.

Despite one report that the study offers a "physical explanation" for NCGS, the authors very clearly refute this.

"Our study does not provide any progress in identifying possible biomarkers of NCGS."

Far from proving that gluten sensitivity is real, the current study at best tells us that only a small fraction of people who think they have NCGS may actually have the condition, if indeed it exists at all.

Source: Di Sabatino, Volta, Salvatore, Biancheri, Caio, De Giorgio, Di Stefano, Corazza GR. "Small Amounts of Gluten in Subjects with Suspected Nonceliac Gluten Sensitivity: a Randomized, Double-Blind, Placebo-Controlled, Cross-Over Trial." Clin Gastroenterol Hepatol. 2015 Feb 19. pii: S1542-3565(15)00153-6. doi: 10.1016/j.cgh.2015.01.029

(AP photo)

The Bullet Ant Sting May Be the Worst Pain Known to Man. It Can Also Make You Feel Great.

Over his career, adventurer and naturalist Steve Backshall has endured all sorts of pain, but perhaps the worst came from a creature "no bigger than a fuse."

Meet the bullet ant.

Native to the western rainforests of South America, this insect is among the largest ants in the world, reaching over an inch in length. It also packs a sting thirty times more painful than a bee's.

Despite its ferocious traits, the ant is normally quite docile towards humans and other larger animals, reserving the use of its mighty sting solely for defensive purposes. But you won't be thinking about the critter's motives should you find yourself stung. You'll only be thinking about the all-encompassing pain.

"With a bullet ant sting, the pain is throughout your whole body," Backshall described on a recent episode of the BBC's Infinite Monkey Cage. "You start shaking. You start sweating… It goes through your whole body… Your heart rate goes up, and if you have quite a few of them, you will be passing in and out of consciousness. There will be nothing in your world apart from pain for at least three or four hours."

The ant's torturous sting is the most painful in existence according to entomologist Justin Schmidt, who's masochistically experienced and chronicled more insect bites and stings than any other human.

With all due respect to Schmidt, the true experts on bullet ant stings may be the Satere-Mawe of Brazil. As part of their warrior initiation rites, teenage boys are required to don gloves filled with bullet ants for ten minutes, and to endure the unbelievable pain with (mostly) calm composure. Boys looking to become men must repeat the feat a total of twenty times. After each trial, the initiates' hands are left blackened, paralyzed, and swollen.

Their disturbing injuries don't last, however. One of the amazing things about the sting of the bullet ant is that there are little to no lasting effects. There are no scientifically documented reports of deaths, perhaps because, according to some estimates, it would take 2,250 stings to kill a 165-pound human. That's a difficult number to reach, even for the Satere-Mawe. Moreover, after twenty-four hours, the injected neurotoxin, called poneratoxin, is entirely flushed from the body.
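That 2,250-sting figure is easy to sanity-check. A minimal sketch, assuming (simplistically, as the estimate itself implies) that lethality scales linearly with body mass:

```python
# Back-of-the-envelope scaling of the estimate quoted above: roughly
# 2,250 stings to kill a 165-pound human. The linear-with-body-mass
# assumption is a simplification made for illustration only.
LBS_PER_KG = 2.20462

def stings_to_kill(weight_lbs, ref_stings=2250, ref_weight_lbs=165):
    """Scale the reference estimate linearly to another body weight."""
    return ref_stings * (weight_lbs / ref_weight_lbs)

stings_per_lb = 2250 / 165                  # roughly 13.6 stings per pound
stings_per_kg = stings_per_lb * LBS_PER_KG  # roughly 30 stings per kilogram
```

By that arithmetic, even an initiate's ant-filled glove delivers only a small fraction of a lethal dose.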

"It’s an almost completely pure neurotoxin," Backshall said. "One of the reasons why people can use it for tribal initiation ceremonies is because although it causes extraordinary pain, it’s not dangerous. There’s almost no allergens. There’s no danger of a histamine reaction to the venom."

And once the toxin is gone, you feel fantastic, Backshall added.

"You have such a massive overdose of adrenaline that you feel like a god. For a week afterwards I felt like if I leapt off a cliff I could have flown."

An adrenaline boost isn't the only potential benefit of the bullet ant's poneratoxin. It's being explored for use as an insecticide. And ironically, very low doses of the toxin may actually serve to block pain.

The microgram injection from a bullet ant sting isn't quite low enough, however, as Australian comedian Hamish Blake recently found out.

Poor guy...

(Image: Smartse)

Widely Reported Study Showing Dangers of E-Cigs Has One Little Problem...

E-cigarettes are a topic of contentious debate. Are they an effective way to wean smokers off of traditional cigarettes? Are their dangers understated? Are they "gateway" devices to tobacco products? In short, all things considered, do e-cigs benefit or harm public health?

Earlier this month, a new PLoS ONE study entered into this contentious fray, and its results were fairly damning to claims that e-cigarettes are relatively safe.

To sum the study up: A research team primarily based at Johns Hopkins University exposed mice to e-cigarette vapor in a small chamber for three hours a day for two weeks, so that the levels of cotinine -- a metabolite of nicotine and a biomarker for tobacco smoke exposure -- in their blood were roughly similar to the levels seen in the blood of e-cig users. They found that mice which reached these blood cotinine levels after exposure to e-cig vapor had compromised immune systems and were more susceptible to viral and bacterial infections than mice exposed only to normal air.

"Despite the common perception that E-cigs are safe, this study clearly demonstrates that E-cig use, even for relatively brief periods, may have significant consequences to respiratory health in an animal model; and hence, E-cigs need to be tested more rigorously, especially in susceptible populations," the researchers concluded.

Their cautionary message was blared across the web, in outlets like Discovery News, The Guardian, the BBC, and The Verge, and was widely shared on social media.

It seems, however, that there's a "little" problem with the study that escaped the scrutiny of early reporting. Mice metabolize cotinine vastly faster than humans do. While the half-life of cotinine in mice is roughly 35 to 50 minutes, in humans it is approximately 20 hours! This massive disparity means that mice must be exposed to much higher amounts of e-cigarette vapor, at faster rates, in order to reach cotinine levels comparable to humans'.

In a detailed comment posted to the original study last Thursday, Drs. Alexey Mukhin and Jed Rose from the Center for Smoking Cessation at Duke University Medical Center pointed out this key piece of information and, based on the study's methods, calculated how much nicotine the mice were exposed to. They then translated that amount into a daily exposure for a human, which ended up being between 300 mg and 370 mg per day.

That's a lot. To reach that level of nicotine, a human would have to take between 3,600 and 4,600 e-cig puffs per day. And that's just the low-end estimate. As Mukhin and Rose further explained:

"It should be noted that in our calculations we postulated that the blood for cotinine measurement was taken immediately after the end of 90 min of exposure. In the results section the authors stated that “Blood was collected … within 1 h of the final exposure” but in the methods section they stated that “exposure was assessed by measuring serum cotinine at 1 h after exposure.” If the last statement is correct, because of the fast elimination of cotinine in mice the level of exposure in the study was 3 times higher (2^(60/37.5) ≈ 3) than the above-calculated values. In other words, to obtain the same exposure in humans the e-cig user should take 11000 – 13000 puffs per day. Assuming 8 hours of sleep per day, in order to acquire such a high number of puffs e-cig users would need to take 11-13 puffs per minute and thus practically take an e-cig puff with each breath."
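Mukhin and Rose's correction falls straight out of exponential decay. A minimal sketch using the half-life values quoted above; the per-puff nicotine dose is an assumption introduced here purely to illustrate the puff arithmetic:

```python
# Exponential decay: fraction of cotinine remaining after a delay.
def fraction_remaining(minutes, half_life_min):
    """Fraction of cotinine left after `minutes`, given its half-life."""
    return 0.5 ** (minutes / half_life_min)

# In mice (half-life ~37.5 min), waiting an hour before the blood draw
# leaves only about a third of the cotinine, so the true exposure was
# roughly 3x what the measurement implies: 2^(60/37.5) is about 3.
correction = 1 / fraction_remaining(60, 37.5)   # about 3.03

# In humans (half-life ~20 hours), the same one-hour delay barely matters.
human_left = fraction_remaining(60, 20 * 60)    # about 0.97 still present

# Rough puff arithmetic: 300-370 mg/day divided by an ASSUMED dose of
# about 0.083 mg of nicotine per puff yields thousands of puffs per day.
puffs_low = 300 / 0.083                         # roughly 3,600 puffs
```

The asymmetry in the two half-lives is the whole story: a one-hour measurement delay is negligible in humans but triples the implied dose in mice.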

That much vaping would undoubtedly compromise anybody's immune system!

In the wake of their convincing debunking, Mukhin and Rose stated, with considerable understatement, "We recommend that the results of the discussed study should be interpreted with caution and that more studies with more realistic levels of e-liquid exposure should be conducted."

Rose admitted that he does have a patent purchase agreement with Philip Morris International, which has an obvious stake in the e-cig market. But despite the apparent conflict of interest, his and Mukhin's analysis does seem sound.

RealClearScience reached out to Dr. Thomas Sussan and Dr. Shyam Biswal, the lead authors of the study, offering them a chance to respond to Mukhin and Rose's comment. *Sussan replied on March 1st:

For decades, animal studies have provided the basis for human studies, and this current study should provide rationale for future human studies. The mice in our study were allowed to breathe freely inside a chamber containing 20% e-cigarette vapor for 3 hours per day. This exposure is comparable to what a human e-cig user may be exposed to, after accounting for differences in lung capacity, and is well below one puff per breath.

*Article updated to include Sussan's reply

(Image: AP)

Net Neutrality: Obamacare for the Internet?

Fox News is in a tizzy over net neutrality. (This topic has been covered extensively in the technology press; see the archive at RealClearTechnology.) In a nutshell, net neutrality requires internet service providers (ISPs, such as Comcast, Cox, and Verizon) to treat all data equally. An ISP would not be allowed to commit high-tech acts of extortion by, for instance, threatening websites with slower internet speeds unless they fork over extra cash.

Unfortunately, this is not merely hypothetical. An epic battle, described by The Oatmeal in entertaining comic book format, between Comcast and Netflix has shown how powerful ISPs can bully websites. Briefly, Netflix customers, who stream movies online, make up a large portion of internet traffic. So, Comcast decided to slow down Netflix users' internet speeds unless Netflix paid them a lot of money. (See chart.)

Credit: The Oatmeal via Washington Post/Netflix

If you are a Netflix customer, having your movies stream more slowly because Comcast doesn't get along with Netflix hardly seems fair. If you aren't a Netflix customer, keep in mind that Comcast could choose to throttle whatever other websites it doesn't like. Net neutrality is meant to prevent that.

Enter Fox News. Special Report, which is quite frankly one of the better political news shows on television, presented a very biased report by correspondent Peter Doocy. The tone of the report was largely conspiratorial. Mr. Doocy bizarrely compared net neutrality to "Obamacare" -- even going so far as to rechristen it "Obamanet" -- and claimed without any evidence that net neutrality will "slow down the internet." (See video.) Notably, the report does not even mention the apocalyptic battle between Comcast and Netflix.

Then, in a follow-up report (video), Mr. Doocy says: "If the internet is regulated like a road or a utility, people will notice. First, 'slower broadband,' then 'less investment,' which means 'fewer broadband choices' -- that's according to critics of the plan -- using similar rules in Europe as a model."

Mr. Doocy is implying that over-regulation has caused those silly Europeans to have slower internet connections than Americans. But, that is demonstrably untrue. As reported in Xconomy, Akamai, a cloud services provider, ranks the U.S. #10 in the world for internet speed. Who beats us? A lot of European countries: Ireland (#9), Latvia (#8), Sweden (#7), Czech Republic (#6), Switzerland (#5), and Netherlands (#3). South Korea, at #1, continues to humiliate everybody else.

Furthermore, Mr. Doocy's claim that net neutrality will lead to "fewer broadband choices" is dubious. Most Americans already have little choice in ISP. Quartz reported on a study by Softbank, a Japanese telecom, that says 67% of Americans have two or fewer ISPs (providing at least 10 Mbps download speed) from which to choose. A solid 30% have access to either one or zero ISPs. For all practical purposes, when it comes to internet service, most Americans are living under a de facto state of monopoly or duopoly. Even with increased regulation, it is difficult to see how net neutrality rules could make that worse.

The Fox News segment is irritating for two additional reasons. First, the network appears to oppose, by default, any and all of President Obama's policies, regardless of their merit; the technology community, by contrast, is in near-unanimous support of net neutrality. Second, information on net neutrality is rather easy to find. Yet it is as if Mr. Doocy did almost no research whatsoever before producing his report.

Ideological affiliations aside, Americans simply deserve better journalism than that. For a much better explanation of net neutrality, see the following WSJ video: