RealClearScience Journal Club

Science Figures Interpreted and Analyzed by RealClearScience

Beware of Possible Cuisine-Drug Interactions

Alex B. Berezow - April 20, 2015

Taking drugs, either the legal or illegal sort, creates problems. One of them is that drugs can interact with each other, often in bad ways. Frustratingly, some drugs are known to interact even with the various foods we like to eat. From the viewpoint of food-drug interactions, the most problematic food may be the humble grapefruit, which is known to interact with about 85 drugs, ranging from antidepressants and statins to clot-busters and Viagra.

Grapefruit isn't the only troublemaker. Dairy, garlic, caffeine, and several other foods -- or compounds found within them -- have been implicated in adverse food-drug interactions. This fact led a team of Macedonian researchers to ask a much broader question: Are certain cuisines prone to adverse food-drug interactions and, if so, with which drugs?

To determine this, the team assembled data on known food-drug interactions and linked it to data on common ingredients found in the world's cuisines. (See figure. The drugs are listed by ATC code, an international classification system. Also, note that the data is presented per mille, not percent.)

Panel A shows that Asian, Latin American, and southern European cuisines will probably have adverse interactions with drug categories B, C, and V -- those that affect blood and blood-forming organs, the cardiovascular system, and "various" other targets, respectively. The authors specifically blame garlic and ginger as the problematic ingredients.

Panel B shows that drug categories D (dermatologicals), G (drugs affecting the genito-urinary system and sex hormones), and J (systemic anti-infectives, such as antibiotics and antivirals) will likely cause trouble for North Americans and most Europeans. Milk is the primary culprit.

Panel C shows that category R (respiratory system) drugs are irksome throughout much of the world, mostly because of their interaction with coffee, tea, and grapefruit. (However, note that the scale on Panel C is much smaller than on Panels A and B, meaning there are far fewer possible cuisine-drug interactions for this category.)

Analyzing all the potential food-drug interactions, the authors determine that milk and garlic are the two most problematic ingredients in the world. (See figure. Panel A depicts where milk is commonly used, while Panel B depicts where garlic is commonly used.)

The authors point out that the spread of culture around the world -- due both to globalization and travel -- will bring people into contact with foods with which they have had little prior experience. Since about 70% of Americans are taking at least one prescription drug (and 20% are taking five or more), it would be quite useful to know how various cuisines interact with them.

Source: Milos Jovanovik, Aleksandra Bogojeska, Dimitar Trajanov & Ljupco Kocarev. "Inferring Cuisine - Drug Interactions Using the Linked Data Approach." Scientific Reports 5, Article number: 9346. Published: 20-March-2015. doi:10.1038/srep09346

(AP photo)

The First Ever Video of a Cracking Joint

Ross Pomeroy - April 15, 2015

Scientists at the University of Alberta have -- for the first time -- imaged a joint cracking in real time, putting to rest a decades-long debate in the process. They revealed their success in the journal PLoS ONE.

Doubtless you've experienced the physiological wonder that is a cracking knuckle. The audible pop it makes can sometimes be heard across an entire room, making many bystanders wince. But they probably have nothing to cringe about. While joint cracking may sound painful, it's not associated with any adverse health effects -- arthritis, for example.

Everyone knows that bending or stretching a joint is what causes it to crack, but what's going on under the skin? First off, a joint is where two bones meet. At the end of each bone is soft, cushioning cartilage. Connecting the cartilage -- and thus the bones -- is a synovial membrane that's filled with a thick, lubricating fluid. Bending the joint can cause the membrane to stretch, which in turn causes the pressure inside it to drop and a bubble of dissolved gas to form within the fluid. The whole process is called tribonucleation.

“It’s a little bit like forming a vacuum,” says Professor Greg Kawchuk, the lead researcher. “As the joint surfaces suddenly separate, there is no more fluid available to fill the increasing joint volume, so a cavity is created...”

For decades, prevailing wisdom has held that the popping noise is tied to these bubbles, but scientists have debated whether the sound is caused by the bubble's formation or its collapse.

Thanks to Kawchuk and his team, we now know it's the former. When they watched a volunteer's knuckles crack inside an MRI machine in real time, the pop clearly occurred when the bubble formed. Moreover, the bubble persisted well after the sound was heard.

"This work provides the first in-vivo demonstration of tribonucleation on a macroscopic scale and as such, provides a new theoretical framework to investigate health outcomes associated with joint cracking," the researchers say.

Source: Kawchuk GN, Fryer J, Jaremko JL, Zeng H, Rowe L, Thompson R (2015) Real-Time Visualization of Joint Cavitation. PLoS ONE 10(4): e0119470. doi:10.1371/journal.pone.0119470

Women Preferred 2:1 in Academic Science Jobs

Alex B. Berezow - April 13, 2015

The lack of women in science and engineering has long been a sore spot in academia. Even though girls are just as good as boys (if not better) in science and math, men greatly outnumber women in academic science jobs. Why?

This is not an easy question to answer, partly because many people who legitimately try to answer it are branded as "sexists." At least one person actually lost his job trying to answer this question. Lawrence Summers, the former president of Harvard, proposed that IQ might vary more widely among men than among women, i.e., a difference in standard deviation. His idea, based on more than mere speculation, was that geniuses and idiots are both more likely to be men, which would explain why it is men who tend to be professors or criminals. For that suggestion, he was essentially fired.

There are other, more conventional hypotheses. The politically correct one is gender bias and discrimination. This hypothesis was supported by a damning 2012 PNAS study, which showed that science professors preferred male applicants over female ones for a job as laboratory manager. Even worse, the men were believed to be more competent and were offered more money, despite the fact that the applications were identical in every respect except, of course, gender.

But there is also evidence of less malicious forces at play. Cornell University Professors Wendy Williams and Stephen Ceci have presented evidence that women choose to avoid math-intensive fields. This isn't due to a lack of ability, but because they prefer to be doctors or veterinarians. The same authors also argue in American Scientist that motherhood plays an enormous role:

"Our own findings as well as research by others show that the effect of children on women's academic careers is so remarkable that it eclipses other factors in contributing to women's underrepresentation in academic science."

Now, Professors Williams and Ceci are back with yet another study, this time in PNAS, that will certainly add more gasoline to the fire of public debate.

The authors had 363 faculty members read and rate narratives of prospective job candidates for an assistant professorship. To avoid tipping their hand about the purpose of the experiment, they varied details of the candidates' lifestyles, e.g., whether or not they were married or had children. Their results showed that both male and female professors of biology, engineering, psychology, and economics preferred women over men who shared the same lifestyle by a margin of roughly two-to-one. Furthermore, the same 2:1 preference existed across all lifestyles. The only exception was male economists, who showed no statistically significant gender preference. (See figure.)

The researchers offer a remarkable conclusion:

"Our research suggests that the mechanism resulting in women's underrepresentation today may lie more on the supply side, in women's decisions not to apply, than on the demand side, in antifemale bias in hiring. The perception that STEM fields continue to be inhospitable male bastions can become self-reinforcing by discouraging female applicants, thus contributing to continued underrepresentation, which in turn may obscure underlying attitudinal changes."

In other words, the authors suggest that telling women that male professors are sexist pigs is not only untrue but also discourages them from applying for academic jobs.

That, if correct, is good news, because it means that the "glass ceiling" for women in science is based on perception rather than actual discrimination. That should make it easier to shatter.

Source: Wendy M. Williams and Stephen J. Ceci. "National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track." PNAS. Published online before print: 13-Apr-2015. doi: 10.1073/pnas.1418878112

(AP photo)

These Five Chemicals Cause the Most Injuries

Ross Pomeroy - April 13, 2015

There are many compounds that can cause serious bodily harm. Some of the deadliest you can probably name: arsenic trioxide, sodium cyanide, strychnine. But those compounds don't actually injure that many people, mostly because they're so rare.

As a new Centers for Disease Control (CDC) report shows, the most injurious chemicals are encountered in everyday life, at home, at work, or at school.

Between 1999 and 2008, nine states -- Colorado, Iowa, Minnesota, New York, North Carolina, Oregon, Texas, Washington, and Wisconsin -- collected data on chemical-related health incidents. A total of 57,975 incidents were reported, producing 15,506 injuries and 354 deaths.

The top five chemicals associated with injury were carbon monoxide, ammonia, chlorine, hydrochloric acid, and sulfuric acid. They collectively accounted for 3% of single chemical releases but 37% of all injured persons, the CDC announced.

Carbon monoxide (CO) by far caused the most injuries. A colorless and odorless gas created during combustion, it's released in small, safe amounts by many consumer appliances. Problems arise when those appliances malfunction in some way and release higher levels of the compound. When inhaled in higher concentrations, CO can cause dizziness, headaches, nausea, and even death. All homeowners would be wise to outfit their homes with CO detectors.

Combined, the four other chemicals roughly equaled CO in injuries. In gaseous form, ammonia can burn the skin, throat and lungs. Most ammonia injuries occurred in agriculture and food manufacturing. Chlorine is a chemical used in many industrial products and to disinfect water. Chlorine inflicted many of its injuries at swimming pools and print manufacturers. Hydrochloric acid and sulfuric acid are both highly corrosive and did much of their damage in industrial settings.

Many of the injuries occurred as a result of equipment failure, human error, inadequate safety precautions, or some other kind of accident. Most were treated without lasting health ramifications.

"Understanding the nature of the top five chemicals that resulted in injuries can help researchers effectively target reductions in morbidity and mortality," the CDC said. "Carbon monoxide and ammonia by far caused the most injuries, deaths, and evacuations and therefore need more attention toward prevention."

Source: CDC

(Image: AP)

Pathogen Jumped from Humans to Rabbits in 1976

Alex B. Berezow - April 6, 2015

Zoonotic diseases, such as the plague and Ebola virus, jump from animals to humans. Often, but not always, such interspecies transmission occurs following mutations in the pathogen's genome that make it more suitable for targeting a new host. But, infectious disease is not a one-way street. This same evolutionary process also makes possible "reverse zoonosis" (more properly dubbed zooanthroponosis) -- i.e., the transmission of disease from humans to animals.

Now, an international team of researchers has discovered that a strain of Staphylococcus aureus, which previously only infected humans, jumped into rabbits about 40 years ago. Shockingly, it only took a single mutation for such a dramatic change to occur. The authors report their findings in the journal Nature Genetics.

The researchers examined a particular strain of S. aureus called ST121. The strain is associated with disease in both humans and rabbits, specifically causing skin abscesses and mastitis (inflammation of the mammary glands) in the latter. By injecting bacteria under the skin of rabbits, the authors showed that three different kinds of rabbit-derived ST121 caused abscesses, while three different kinds of nearly identical human-derived ST121 did nothing. (See panel B in the figure below.)

Notably, Panel A shows that as few as 300 bacteria from the rabbit-derived ST121 strains were sufficient to cause infections in 70% to 80% of the rabbits. However, a gigantic dose of 100,000 bacteria from the human-derived ST121 strains was completely harmless. The authors then set out to determine why this was the case.

A phylogenetic analysis of several ST121 whole-genome sequences, combined with an estimate of how quickly mutations occurred, suggested that ST121 jumped from humans to rabbits sometime around 1976. The authors identified several candidate genes that they believed to be responsible for the human-to-rabbit transition. Their subsequent investigation led them to a gene called dltB, which encodes a somewhat enigmatic membrane protein. 
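The 1976 estimate comes from molecular-clock reasoning: count the mutations that separate the human and rabbit lineages, divide by the rate at which such mutations accumulate, and you get the approximate time since the split. Here is a minimal sketch of that arithmetic in Python; every number below is a made-up placeholder chosen for illustration, not a figure from the paper:

```python
# Toy molecular-clock calculation. All values are hypothetical placeholders.
snp_differences = 218            # SNPs separating the human and rabbit clades (hypothetical)
rate_per_site_per_year = 1.0e-6  # substitutions per site per year (hypothetical)
core_genome_sites = 2.8e6        # aligned core-genome positions (hypothetical)

# Mutations accumulate along both branches, so the clock ticks at twice the per-lineage rate.
subs_per_year = 2 * rate_per_site_per_year * core_genome_sites
years_since_split = snp_differences / subs_per_year
print(f"Estimated year of the host jump: ~{2015 - round(years_since_split)}")  # ~1976
```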

A "slam-dunk" experiment then definitively linked the dltB mutation with the ability of ST121 to infect rabbits. The authors introduced a single mutation into a human-derived strain of ST121 that could not infect rabbits. (The mutation changed the amino acid threonine to the amino acid lysine at position #113 in the protein encoded by dltB.) That single change was sufficient to convert ST121 into a strain that could infect rabbits. (See figure.)

Direct your attention to "Human strain F." Note that the natural human-derived strain (called wild-type and listed as "WT" in the figure) is unable to infect rabbits, whereas the strain carrying the threonine-to-lysine mutation (T113K) infects them readily.
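For readers unfamiliar with the "T113K" shorthand, it packs the whole mutation into one token: the original amino acid (T, threonine), its 1-based position (113), and the replacement (K, lysine). A hypothetical Python helper makes the notation concrete; the sequence below is a stand-in, not the real DltB protein:

```python
def apply_point_mutation(protein: str, mutation: str) -> str:
    """Apply shorthand like 'T113K': T at 1-based position 113 becomes K."""
    ref, pos, alt = mutation[0], int(mutation[1:-1]), mutation[-1]
    if protein[pos - 1] != ref:
        raise ValueError(f"expected {ref} at position {pos}, found {protein[pos - 1]}")
    return protein[:pos - 1] + alt + protein[pos:]

# Toy sequence with threonine at position 113 (not the real DltB sequence):
wild_type = "A" * 112 + "T" + "A" * 50
mutant = apply_point_mutation(wild_type, "T113K")
print(mutant[112])  # prints 'K'
```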

The authors then tried to determine why this mutation had such an effect. Unfortunately, their investigation hit a brick wall, though they did rule out several possible explanations.

The takeaway message from their research is that the ease with which pathogens can hop between species is rather unsettling. We should always remember that as humans continue to expand our footprint around the globe, we may be only one mutation away from being infected with a strange and dangerous new pathogen.

Source: David Viana, María Comos, Paul R McAdam, Melissa J Ward, Laura Selva, Caitriona M Guinane, Beatriz M González-Muñoz, Anne Tristan, Simon J Foster, J Ross Fitzgerald & José R Penadés. "A single natural nucleotide mutation alters bacterial pathogen host tropism." Nature Genetics 47, 361–366. Published: 16-Feb-2015. doi:10.1038/ng.3219

(AP photo)

A New Species of Frog Can Dramatically Transform in Just Five Minutes

Ross Pomeroy - April 2, 2015

An international team of researchers has identified a new species of frog that can radically alter its skin texture in a matter of minutes. They describe the shape-shifting amphibian in a study appearing in the April issue of the Zoological Journal of the Linnean Society.

Case Western Reserve University scientists Katherine and Tim Krynak originally found the frog, dubbed Pristimantis mutabilis, in the summer of 2009, during an evening stroll on the Santa Rosa River Trail in Reserva Las Gralarias, a forest sanctuary on the western slope of the Andes Mountains in Ecuador. When captured, the fingernail-sized amphibian was pockmarked with spiky, wart-like growths on its skin. The Krynaks quickly placed the critter in a plastic container with some moss. The next day, when the discoverers brought the animal into the lab to take pictures, they watched in surprise as a radical transformation took place. Over roughly five minutes, the nodules on the skin, called tubercles, retreated, leaving the animal almost completely smooth to the touch. They could hardly tell it was the same frog!

More of the tiny amphibians, commonly called Mutable Rainfrogs, were found and studied in the years after the 2009 discovery. The researchers theorize that the transformation serves as camouflage: the frogs adopt a spiky texture when in mossy, wet environments.

"In these habitats, skin texture that has the appearance of moss or detritus likely conceals the individual from visual predators, such as birds and arachnids," they write.

How the frogs accomplish the change is still unknown, but the researchers put forth an educated guess.

"We speculate that it could involve allocation of more or less water to existing small structures (e.g. warts and tubercles) on the skin."

Many animals -- most notably octopuses -- can rapidly alter their skin color to blend in with their surroundings, but rapid changes in skin texture are far rarer. Other species of frogs, as well as some mammals, can change their appearance in response to climate or mating cues, but those transformations usually take weeks or months.

"We can only speculate on how widespread this ability to modify skin texture is in anurans (frogs)," the researchers write, "as to our knowledge our study is the first to describe this ability."

Source: Guayasamin, J. M., Krynak, T., Krynak, K., Culebras, J. and Hutter, C. R. (2015), Phenotypic plasticity raises questions for taxonomically important traits: a remarkable new Andean rainfrog (Pristimantis) with the ability to change skin texture. Zoological Journal of the Linnean Society, 173: 913–928. doi: 10.1111/zoj.12222

(Images: Guayasamin et al. / Wiley)

Need to Recover from a Workout? Fast Food Is Just as Effective as Supplements

Ross Pomeroy - April 1, 2015

After a strenuous workout, top athletes and everyday exercisers regularly reach for energy bars, protein powders, or recovery drinks, thinking that these dietary supplements provide boosts that normal foods do not.

A new study, however, finds that -- when it comes to exercise recovery -- supplements are no better than fast food.

The multi-billion-dollar sports supplement industry is a true behemoth. With catchy taglines and sparkling testimonials from top athletes, it has convinced millions of people to use its products. University of Montana graduate student Michael Cramer decided to find out whether its claims of superiority stood the test of science, so he pitted some of the most oft-used supplements, including Gatorade, PowerBar, and Cytomax "energy" powder, against a few of McDonald's most vaunted contenders: hotcakes, hash browns, hamburgers, and fries.

Cramer invited eleven highly trained male athletes to take part in the study. After fasting for 12 hours, all of them completed a rigorous 90-minute endurance workout. Subsequently, subjects assigned to fast food were given hotcakes, orange juice, and a hash brown, while subjects assigned to supplements were given Gatorade, organic peanut butter, and Clif Shot Bloks. Two hours later, the fast food group consumed a hamburger, Coke, and fries, while the supplement group scarfed down Cytomax powder and PowerBar products. Two hours after their second meal, all subjects rode 20 kilometers on a stationary bike as quickly as possible.

Both the supplement and fast food meals were roughly equal in calories, carbohydrate, and protein, though, as one might guess, the fast food had much more sodium and slightly more fat. At various times, subjects underwent muscle biopsies and blood work to gauge blood glucose, lipid, insulin, and glycogen levels.

A week later, subjects came back into the lab and repeated the experiment, this time eating the diet they weren't assigned to previously.

Upon analysis, Cramer found that athletes completed the time trial just as quickly after eating fast food as after taking supplements. Moreover, levels of muscle glycogen were actually higher for the fast food group than for the supplement group, though the difference was not statistically significant. Glycogen is a key energy source in muscles that's primarily replenished through carbohydrate intake; think of it as your muscles' fuel -- when it's depleted, exercise performance suffers. Furthermore, Cramer found no differences in insulin, glucose, cholesterol, or triglyceride levels, and subjects reported equal amounts of stomach discomfort.

Though the research was solidly controlled, the findings are limited by the small number of subjects. Moreover, the results may not apply to less-trained individuals.

While it probably isn't wise to completely replace your post-workout supplements with chicken nuggets and cheeseburgers, there's little doubt that many exercise supplements aren't all they're cracked up to be. To the body, a simple carbohydrate is a simple carbohydrate, whether it comes from a pricey powder or a french fry. So if you're faced with choosing a Muscle Milk or a Happy Meal after a workout, don't feel bad about dining at the Golden Arches every once in a while.

Source: Cramer MJ, Dumke CL, Hailes WS, Cuddy JS, Ruby BC. "Post-exercise Glycogen Recovery and Exercise Performance is Not Significantly Different Between Fast Food and Sport Supplements." Int J Sport Nutr Exerc Metab. 2015 Mar 26.

(Image: AP)

Monkeys Suffer Human-Like Depression

Alex B. Berezow - March 30, 2015

It's no secret that depression is a major problem worldwide. A map published by the Washington Post depicts the prevalence of depression around the world. Every country on Earth, rich or poor, has citizens who experience depression. Coping with sadness is a part of the human condition, but why some of us suffer in near perpetuity is not understood. Decades of research have produced little more than expensive pills that work only slightly better than a placebo.

There are many possible reasons why depression research has produced such disappointing results. One of them may be the fact that most depression research utilizes rodents in conditions that do not properly imitate human life. For instance, forcing a mouse to swim stresses it out, but this hardly replicates the social and cultural conditions in which humans develop depression. Besides, stress and depression, while related, are not the same. 

In an effort to address this problem, a team of researchers at Chongqing Medical University, in collaboration with Wake Forest University, examined depression in cynomolgus macaque monkeys living in social colonies. Each colony lived in its own enclosure and contained two males and roughly 20 females (which reflects the natural male-female ratio), as well as their offspring. Fifty-two such colonies were observed, and a total of 1,007 female monkeys were screened for depression. Twenty females exhibiting frequent depression symptoms (e.g., lack of interest in eating, mating, and grooming) were selected, and they were matched with 20 healthy controls as well as with 10 monkeys that exhibited symptoms of depression due to experimental social isolation.

The authors found that the naturally depressed and the isolated monkeys exhibited some similar behaviors, specifically nursing infants for shorter durations and sitting on the floor for longer ones. Most importantly, just as occurs in humans, they found a whole host of metabolic differences between the naturally depressed monkeys and the healthy controls.

The authors conclude that their model of depression using cynomolgus macaques mimics "real life" with all its psychosocial stressors and, hence, is superior to other models. They are probably right, but there are some major caveats.

First, the metabolic profiles of depressed monkeys were not identical to those of humans. Second, the authors failed to reverse symptoms of depression in the macaques when they were treated with ketamine, an antidepressant. Third, monkeys are expensive and research that utilizes them will tend to have smaller sample sizes. It would be incredibly inconvenient, for instance, to raise 100 monkeys with the hope that 5 of them might develop depression. To increase the number of test subjects, researchers would likely have to force monkeys into social isolation, which would be ethically dubious.

Having said that, studying depression in laboratory animals is important in its own right. The authors show rather convincingly that depression occurs naturally in macaques, and the disease manifests in a way that would be familiar to humans. Not only does this have ethical implications for primate research, it also raises questions about the reliability of biomedical data collected using depressed animals. Surely, such physiological changes must be factored in.

Source: Fan Xu et al. "Macaques Exhibit a Naturally-Occurring Depression Similar to Humans." Scientific Reports 5, Article number: 9220. doi:10.1038/srep09220  Published: 18-March-2015

(AP photo)

A New Treatment for Criminal Psychopaths?

Ross Pomeroy - March 25, 2015

German researchers have identified a potential treatment for criminal psychopathy. They outline their method and results in a paper published March 24th in Nature's Scientific Reports.

Criminal psychopaths are notoriously difficult to rehabilitate. Their personality disorder, which renders them antisocial, callous, and disinhibited, makes them three to four times more likely to reoffend after release from prison compared to non-psychopaths. For many, prison is neither a punishment nor a deterrent, making it an altogether ineffective remedy.

Psychopathy is often said to be incurable, but that's partially because scientists have yet to conclusively identify the underlying causes. You can't fix something until you know how it's broken. The predicament hasn't stopped behaviorists from trying to treat psychopathy, though. Behavioral modification focused on positive reinforcement has shown promise among young psychopaths, resulting in a marked drop in recidivism and violent behavior.

Adult criminal psychopaths are harder to treat, however. Neuroplasticity, the tendency of the brain to change, significantly diminishes with age, making adults far less malleable.

Undeterred, lead researcher Dr. Lilian Konicar and her team recruited 14 criminal psychopaths -- all with long and severely violent histories -- for an intensive program designed to teach them to control their brain activity. The method, called neurofeedback, has been used preliminarily in the treatment of Attention Deficit Hyperactivity Disorder with generally positive results. It involves hooking up a patient to an EEG machine to monitor brain activity. The activity is then represented on a computer screen by a graphical object. Patients then try to move the object by controlling their brain activity, receiving positive feedback for moving it in an indicated direction, like a video game.
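Stripped to its essentials, the training loop is: read the brain signal, compare it to a baseline, and reward shifts in the cued direction. The Python sketch below is a deliberately toy illustration of that reward contingency, not the study's actual SCP protocol; the random "signal" merely stands in for a live EEG feed:

```python
import random

def run_trial(cued_direction: int, baseline: float, read_signal) -> bool:
    """Reward the patient if activity shifts from baseline in the cued direction."""
    shift = read_signal() - baseline
    return (shift > 0) if cued_direction > 0 else (shift < 0)

read_signal = lambda: random.gauss(0.0, 1.0)  # toy stand-in for an EEG amplifier
rewards = sum(run_trial(random.choice([-1, 1]), 0.0, read_signal) for _ in range(100))
print(f"Rewarded on {rewards} of 100 trials")  # ~50 for an uncontrolled signal
```

In the real task, the patient watches the signal as an object on a screen and, over many such trials, learns strategies that push the reward rate above chance.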

Konicar had the psychopaths perform neurofeedback training during 25 sessions spread over 3 months. After the study, subjects demonstrated improved control of their brain activity. They also reported reduced levels of impulsivity and aggression, assessed via an in-depth questionnaire before and after the intervention. Crucially, Konicar found that the subjects better able to control their brain activity reported larger reductions in aggression.

The results of the study are preliminary but promising. Studies with more subjects, adequate control groups, and measurements that don't rely on self-report will be needed to further validate neurofeedback's potential as a treatment for criminal psychopathy. Konicar is cautiously optimistic.

"This study demonstrates improvements on the neurophysiological, behavioral and subjective level in severe psychopathic offenders after SCP-neurofeedback training and could constitute a novel neurobiologically-based treatment for a seemingly change-resistant group of criminal psychopaths."

Source: Konicar, L. et al. Brain self-regulation in criminal psychopaths. Sci. Rep. 5, 9426; DOI:10.1038/srep09426 (2015).

(Top Image: Silence of the Lambs)


Literature Review Links Coffee & Bladder Cancer

Alex B. Berezow - March 23, 2015

The most interesting man in the world has nothing on coffee, which is the most interesting beverage in the world. Coffee continues to be the subject of countless studies, some more serious than others. Thanks both to science and intrepid entrepreneurs, for instance, we have learned the chemistry of perfect coffee, the best time of day to partake, and why drip coffee is likelier to spill than a latte. You may think that coffee and pooping have nothing in common, but you would be wrong. Again, thanks to science, we know why it is worthwhile to pluck beans out of elephant dung, why coffee makes you poop, and why coffee should go in your mouth, not your butt. If all of that isn't enough, you can now take coffee classes at some universities.

Of course, coffee has also been the subject of intense epidemiological investigation. Overall, while the literature seems to indicate that coffee is more beneficial than harmful, it does have its downsides. And researchers from China have now highlighted one of those disadvantages.

Every year, there are around 330,000 new cases of bladder cancer worldwide, and 130,000 people die from the disease. According to the National Cancer Institute, nearly 75,000 Americans are diagnosed with bladder cancer annually, making the U.S. incidence rate 20.5 per 100,000 people. Because the incidence rate for all cancers in the U.S. is 460.4 per 100,000, roughly 4% of new cancers in the U.S. are cancers of the bladder.
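For anyone who wants to check that arithmetic, the 4% figure is just the ratio of the two incidence rates:

```python
# Bladder cancer's share of all new U.S. cancers, from the rates cited above.
bladder_rate = 20.5      # new bladder cancer cases per 100,000 people per year
all_cancer_rate = 460.4  # new cancers of all types per 100,000 people per year
print(f"{bladder_rate / all_cancer_rate:.1%}")  # 4.5%
```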

The cause of bladder cancer is not well understood, but some studies have fingered coffee as a risk factor. In order to clarify this, the Chinese research team conducted a meta-analysis of the literature. They started with 1,788 articles and, using various criteria, whittled the list down to 40 studies. They found that, on average, coffee drinkers were 33% more likely to develop bladder cancer than those who refrained from drinking coffee. (See figure.)

The figure shows the odds ratios (OR) for each study. (The odds ratio measures the association between a particular exposure, in this case coffee, and a particular outcome, in this case bladder cancer. If OR = 1, the exposure is not linked to the outcome; if OR < 1, the exposure may help prevent the outcome; if OR > 1, the exposure may help cause the outcome.) For coffee and bladder cancer, the researchers found that OR = 1.33, meaning that coffee raises the risk of bladder cancer by 33%. (See note at the end of this article.)
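The formula itself is simple: divide the odds of the outcome among the exposed by the odds among the unexposed. A quick Python illustration, using counts invented purely to show the formula at work (these are not data from the meta-analysis):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """OR = (a/b) / (c/d) = a*d / (b*c) for a standard 2x2 table."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical: 40 cases among 1,000 coffee drinkers, 31 among 1,000 abstainers.
print(round(odds_ratio(40, 960, 31, 969), 2))  # 1.3, i.e., ~30% higher odds
```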

Importantly, the authors also showed a dose-response relationship: The more coffee a person drinks, the more likely he is to develop bladder cancer. (See figure.)

Graphs A and B depict OR plotted against daily cups of coffee. (Graph A represents data from case-control studies, while Graph B represents data from cohort studies.) Graph A shows that the OR increases by 0.05 with each additional daily cup of coffee; Graph B shows that the OR increases by 0.03 with each additional daily cup.

Translated, that means each additional daily cup of coffee you drink raises your risk of bladder cancer by 3% to 5%. If, for instance, you drink three daily cups of coffee, your risk of developing bladder cancer will be 9% to 15% higher than for those people who do not drink coffee.
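That back-of-the-envelope math follows directly from the slopes above, assuming the linear dose-response holds:

```python
def dose_response_or(cups, slope):
    """Linear dose-response: OR = 1 + slope * cups."""
    return 1 + slope * cups

for slope in (0.03, 0.05):  # cohort and case-control slopes, respectively
    print(f"slope {slope}: OR at 3 cups/day = {dose_response_or(3, slope):.2f}")
# slope 0.03: OR at 3 cups/day = 1.09  ->  ~9% higher risk
# slope 0.05: OR at 3 cups/day = 1.15  ->  ~15% higher risk
```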

Should that prevent you from drinking coffee? Nope. As mentioned earlier, coffee has several health benefits. But, if you are a heavy coffee drinker, you might want to take it down a notch. However, most of you should not worry and should continue to enjoy your daily coffee (or two) with reckless abandon... or, in my case, an iced, triple-shot, vanilla latte.

[Note: The epidemiology sticklers out there will know that this interpretation is not exactly correct. Technically, odds ratios do not allow for a direct comparison of risks. However, because bladder cancer is infrequent, the odds ratio is a good approximation of relative risk. For a deeper discussion on how odds ratios can be misleading, see this BMJ paper (PDF). Epidemiology can be quite tricky, which is why science/health journalists need to treat such studies with care.]

Source: Weixiang Wu, Yeqing Tong, Qiang Zhao, Guangxia Yu, Xiaoyun Wei & Qing Lu. "Coffee consumption and bladder cancer: a meta-analysis of observational studies." Scientific Reports 5, Article number: 9051. Published: 12-Mar-2015. doi:10.1038/srep09051

(AP photo)
