The vegetative state is a medical and legal gray area like few others. A disorder of consciousness, it occurs following severe brain damage, leaving the victim locked in a state of "wakefulness without awareness." Are they little more than a husk, mentally absent and stripped of humanity? Or are they covertly aware, trapped inside a bodily prison? It can be immensely difficult to tell.
But with the widespread advent of functional magnetic resonance imaging (fMRI), doctors may have a new way of determining awareness in vegetative patients. By placing the patient inside a brain scanner and providing him or her with verbal mental imagery tasks, doctors can detect changes in brain activity that could signal awareness.
This was first demonstrated in 2006 on high-powered fMRI machines. Adrian Owen, a neuroscientist at The Brain and Mind Institute of The University of Western Ontario, detected signs of awareness in a patient who had previously been diagnosed as in a vegetative state.
Now, in a new study published in PLoS ONE, Owen and two colleagues, Davinia Fernández-Espejo and Loretta Norton, sought to determine whether the commercial brain scanners found at many hospitals could register brain activity that signals awareness.
Fifteen right-handed, healthy volunteers took turns in an fMRI machine and were asked to imagine swinging an arm to hit a ball in a tennis match. They were also instructed to imagine walking from room to room in their house, visualizing all the objects they would encounter. In both mental imagery tasks, the lower-powered 1.5-Tesla scanners common in clinical settings registered brain activity at statistically significant levels, indicating that aware individuals can respond mentally to specific verbal prompts.
"Our findings justify a broader use of mental imagery fMRI paradigms, which will make them available to a larger number of patients to ensure they receive an accurate diagnosis," the authors write.
Just yesterday, a study published in The Lancet found that a brain scanning method called positron emission tomography could be used to predict whether people in vegetative states will regain consciousness. Coupled with Owen's study, the findings advance our understanding of awareness, and could give hope -- or some measure of closure -- to the friends and families of those locked in persistent vegetative states.
Source: Fernández-Espejo D, Norton L, Owen AM (2014) The Clinical Utility of fMRI for Identifying Covert Awareness in the Vegetative State: A Comparison of Sensitivity between 3T and 1.5T. PLoS ONE 9(4): e95082. doi:10.1371/journal.pone.0095082
"The path to a woman's heart is through her stomach."
I ignored a great many things my wonderful father said, but this old maxim stuck with me. Women were a colossal mystery to me in my teens (now they're just a mystery), so I clung to anything that would grant me even a meager chance of scoring a date.
Now, my dad's advice has a study published in a prestigious science journal to back it up.
Reporting in the Proceedings of the National Academy of Sciences, an international team of scientists has found that low blood glucose levels correlate with greater amounts of aggression and anger between spouses. Translation: both men and women are happier with their significant others when they're satiated with food.
Glucose, one of the simplest sugars, is the human body's preferred source of energy. The brain particularly craves glucose, and when it doesn't get enough of the stuff, mental processes like decision-making and self-control fall by the wayside. Deficits in self-control, in turn, spiral into other deleterious effects.
"People have more difficulty controlling their attention, regulating their emotions, and overriding their aggressive impulses," the researchers say.
With a bounty of prior evidence in tow, the research team, led by Ohio State University psychologist Brad Bushman, sought to test whether low glucose levels contribute to violent tendencies between intimate partners.
For 21 days, 214 participants from 107 heterosexual couples measured their glucose levels each morning before breakfast and each evening before bedtime. To gauge their aggression, every participant was given a voodoo doll and instructed to stick between 0 and 51 pins in it each evening depending upon how angry they were with their spouse. The quirkily macabre procedure was to be completed in private and the results recorded.
Once the 21 days concluded, participants were brought into the lab and told that they would be competing with their spouse in a very basic computer game involving pressing a button as fast as possible. Winners were allowed to blast the loser with a cacophonous mixture of sounds like fingernails scratching on a chalkboard, dentist drills, and ambulance sirens, at an intensity and duration of their choice.
"Basically, within the ethical limits of the laboratory, participants controlled a weapon that could be used to blast their spouse with unpleasant noise," the researchers described.
In reality, subjects were competing against a computer and randomly lost 13 of the 25 trials. When debriefed at the end, only 3 of the participants suspected the ruse.
The results were revealing. After controlling for relationship satisfaction and gender, the researchers found that higher daily evening glucose levels predicted sticking fewer pins into the voodoo dolls. In other words, on nights when a subject's glucose levels were lower, he or she stuck in more pins, signaling greater anger toward the spouse.
For the computer-based aggression test, the researchers averaged subjects' evening glucose levels, finding that lower levels predicted higher aggression in the task. Subjects with lower average glucose levels punished their spouses with louder volumes and longer durations of noise.
A strength of the study's approach was that it tested subjects in their own homes as they carried out their normal routines. But such a design also allows confounding variables to seep in. Could alcohol intake have affected the voodoo doll results? Or what about the time of night at which participants performed their nightly "stabbing"?
Overall, the research suggests that partner aggression can be curbed by maintaining blood glucose at normal levels. When you're feeling especially angry towards your spouse for no apparent reason, your blood sugar may be low. So reach for a small glass of orange juice or a snack-sized candy bar, not a frying pan.
Source: Brad J. Bushman, C. Nathan DeWall, Richard S. Pond, Jr., and Michael D. Hanus. "Low glucose relates to greater aggression in married couples." PNAS. April 2014. www.pnas.org/cgi/doi/10.1073/pnas.1400619111
Back in the '90s, Keystone Beer ran a very popular commercial warning against the dangers of "bitter beer face."
Of course, Keystone's solution was to drink Keystone beer. I disagree. Mass-produced American beers suck. They taste like fizzy water with a couple of rusty nails thrown in for flavor. There's got to be a way of enjoying superior beers while simultaneously avoiding the social stigma associated with bitter beer face.
Thankfully, a new study in PLoS ONE by Dutch scientists provides hope for the future. The researchers have discovered that some compounds called flavanones can block ("antagonize") the bitter taste receptors found on our tongues.
The researchers conducted the experiments in vitro using a human cell line called HEK293, which is easily manipulated in the lab. They expressed a human bitter taste receptor called hTAS2R39 in the cells. The cells could then be activated by adding a bitter compound found in green tea, called epicatechin gallate (ECG). The bitter taste receptor would bind to ECG, and the cells would then trigger an intracellular release of calcium ions. In the experimental system used by the researchers, this calcium release would cause fluorescence, which could be detected. Any flavanone that, in the presence of ECG, reduced the amount of fluorescence was therefore blocking the bitter taste receptor.
One of their experiments is shown below. (Ignore the "non-induced" conditions.)
The experiment shows that bitter compound ECG activates the cells, but compound #6 (4'-fluoro-6-methoxyflavanone) successfully blocked the bitter receptor. At even higher concentrations of 4'-fluoro-6-methoxyflavanone, the bitter receptor was completely blocked. The implication is that, if fed to humans, 4'-fluoro-6-methoxyflavanone may be able to greatly reduce the perception of bitterness in foods.
There are limitations to this study. First and foremost, it was conducted in cell culture, not in actual humans. Second, there are many different kinds of bitter taste receptors on our tongues, so blocking only one of them will not totally eliminate bitter tastes. (Besides, the specific receptor in this study is not involved in detecting the bitter molecules found in beer.) Third, it is unknown if 4'-fluoro-6-methoxyflavanone is actually safe for human consumption. And fourth, there is no known natural source of 4'-fluoro-6-methoxyflavanone, and producing it would require costly chemical synthesis.
Still, these findings are promising, not just for bitter foods but for medications, as well. And some day, perhaps "bitter blockers" could be added to your favorite beer, lest you acquire the dreaded bitter beer face.
Source: Roland WSU, Gouka RJ, Gruppen H, Driesse M, van Buren L, et al. (2014) 6-Methoxyflavanones as Bitter Taste Receptor Blockers for hTAS2R39. PLoS ONE 9(4): e94451. doi:10.1371/journal.pone.0094451
In a new study, both male and female subjects were able to accurately evaluate the intelligence of men simply by viewing photographs of their faces.
While many avow that you can't judge a book by its cover, researchers Karel Kleisner, Veronika Chvátalová, and Jaroslav Flegr, all based out of Charles University in the Czech Republic, showed that if that book is a man, you probably can.
For the study, which is published in PLoS ONE, 80 science students from Charles University -- 40 men and 40 women -- took an in-depth exam to gauge their IQ and were subsequently photographed with neutral facial expressions. Another 160 participants assessed the photographs, judging the subjects' attractiveness and intelligence on a scale of 1 (highest) to 7 (lowest).
When Kleisner, Chvátalová, and Flegr tore into the data, they uncovered an intriguing finding.
"Raters were able to estimate intelligence with an accuracy higher than chance from static facial photographs of men but not from photos of women."
Next, the team placed the photographed faces on computerized grids in an attempt to glean whether any facial features were associated with perceived intelligence.
"Faces that are perceived as highly intelligent are rather prolonged with a broader distance between the eyes, a larger nose, a slight upturn to the corners of the mouth, and a sharper, pointing, less rounded chin," the authors described. "By contrast, the perception of lower intelligence is associated with broader, more rounded faces with eyes closer to each other, a shorter nose, declining corners of the mouth, and a rounded and massive chin." (See the image above.)
But when the researchers checked to see if actual intelligence (indicated by measured IQ) was associated with specific facial features, they found no significant correlations.
"This means that our raters accurately assessed intelligence from faces of men based on visual cues that simply are not explicable from shape variability in men’s faces."
So why could both men and women accurately predict the intelligence of men, but not women, based on their looks?
One explanation the researchers put forth is that physical cues of intelligence may be sexually dimorphic. So while intelligence may be physically plastered on the faces of men, women may signal it in other ways.
While the study's principal finding merits a booze-bolstered conversation at a dinner party, it will need to be replicated among men of broader age and cultural backgrounds before we take it too seriously.
Source: Kleisner K, Chvátalová V, Flegr J (2014) Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women. PLoS ONE 9(3): e81237. doi:10.1371/journal.pone.0081237
Europeans are rather proud of how environmentally superior they believe themselves to be. For instance, Europeans were indignant when the U.S. decided to not ratify the Kyoto Protocol. In Germany, environmentalism is trendy, and the Green Party is actually a viable political organization, holding 10% of the seats in Parliament. Europeans also unscientifically (and ironically) reject nuclear power and GMOs, all over supposed concern for the environment.
But, Europe has a dirty little secret. Actually, it's not all that little. Seven of the ten biggest "dead zones" in the world are in the Baltic Sea, right in Europe's backyard. And new research in PNAS says it's mostly their fault.
Dead zones form all over the world (see map) but are most concentrated around the U.S., Europe, Japan and Korea. They form when an excess of nutrients (e.g., agricultural fertilizer or processed sewage) makes its way into the ocean or other body of water. The spike in nutrients causes algal blooms, part of a process known as eutrophication. When the algae (which are often toxic) die, they are decomposed by bacteria that consume oxygen in the process, leaving little behind for fish and other organisms. Large areas become uninhabitable for marine animals. Obviously, this negatively affects the entire ecosystem, as well as local economies that rely on seafood and tourism.
Because of its geography, which limits its ability to exchange water with the Atlantic Ocean, the Baltic Sea is naturally prone to developing dead zones. The authors examined oxygen concentrations on the bottom of the Baltic Sea over a 115-year period. (See figure.) Red areas are hypoxic (less than 2 mg oxygen per liter), and black areas are anoxic (no oxygen). The model before 1960 is less certain due to sparse data collection.
As shown, the authors believe that hypoxic and anoxic zones have greatly increased. They believe that the major factor at play is an excess of anthropogenic nutrients. The same problem is to blame for the dead zones in the Gulf of Mexico and Chesapeake Bay. The solution is straightforward, though not necessarily easy to implement: Stop polluting so much.
But, the authors identify another factor: warmer temperatures. The temperature at the bottom of the Baltic Sea has increased by around 2 degrees Celsius over the past century. This has two effects. First, oxygen is less soluble in warmer water. Second, warmer temperatures speed up bacterial metabolism, causing the bacteria to consume oxygen more quickly. Warmer water, therefore, exacerbates the problem of oxygen depletion.
The takeaway lesson is that a combination of natural and man-made factors are slowly destroying some of our most precious resources. We can reverse some of this damage if we act now. So, instead of continuing with business-as-usual (by pointing fingers and assigning blame, but doing very little), perhaps it's time that we all just pitch in and clean things up.
Source: Jacob Carstensen, Jesper H. Andersen, Bo G. Gustafsson, and Daniel J. Conley. "Deoxygenation of the Baltic Sea during the last century." PNAS. Published online before print: 31-March-2014. doi: 10.1073/pnas.1323156111
A vegetarian diet is associated with higher rates of allergies, cancer, and mental illness, as well as a poorer quality of life compared to carnivorous diets, according to a new study.
The research, published in February in the journal PLoS ONE, surveyed 1,320 Austrians, evenly divided among four nutritional groups: a vegetarian diet, a carnivorous diet rich in fruits and vegetables, a carnivorous diet less rich in meat, and a carnivorous diet rich in meat. Subjects were matched on age, sex, income, education, and occupation. All information was obtained through face-to-face interviews.
The results were bleak for vegetarians.
"Overall, vegetarians are in a poorer state of health compared to the other dietary habit groups," the authors reported.
Vegetarians suffered from higher rates of allergies, cancer, anxiety, and depression. They were also vaccinated less often than all of the other groups, and visited the doctor for preventative check-ups less frequently than subjects eating a carnivorous diet rich in fruits and vegetables.
But despite the findings, American meat-eaters should resist the urge to pester their plant-eating brothers and sisters. Why? For starters, the study has a host of inescapable limitations. All of the data, including diet information, is self-reported. Thus, we have no idea precisely what the vegetarians or the various meat-consuming groups were actually eating. The data is also cross-sectional. "Therefore, no statements can be made whether the poorer health in vegetarians in our study is caused by their dietary habit or if they consume this form of diet due to their poorer health status," the authors admit. Moreover, the study was based in Austria, and the Austrian diet and lifestyle differ significantly from their American counterparts.
Even if the study weren't severely limited, it wouldn't be enough to overturn prior evidence. In a 2009 review, the Academy of Nutrition and Dietetics (formerly the American Dietetic Association), the largest organization of food and nutrition professionals in the U.S., declared that "appropriately planned vegetarian diets, including total vegetarian or vegan diets, are healthful, nutritionally adequate, and may provide health benefits in the prevention and treatment of certain diseases." More recently, a 2012 review published in the journal Public Health Nutrition found that vegetarian diets have not shown any adverse effects on health.
Source: Burkert NT, Muckenhuber J, Großschädl F, Rásky É, Freidl W (2014) Nutrition and Health – The Association between Eating Behavior and Various Health Parameters: A Matched Sample Study. PLoS ONE 9(2): e88278. doi:10.1371/journal.pone.0088278
Fortunately for the zebras, they have evolved black and white stripes as camouflage to confuse lions. That's the conventional wisdom, anyway. But, as is so often the case, the conventional wisdom appears to be wrong.
For their study, the authors collected stripe pattern data on seven wild equids: plains zebra, Grévy's zebra, mountain zebra, African wild ass, Przewalski's horse, kiang, and Asiatic wild ass. They then overlaid a map of these equids' ranges with information on predators' ranges, temperature, biomes, tsetse fly ranges, and tabanid ranges (which were determined by proxy using measurements on temperature and humidity). They found that the presence of equid stripes most closely correlated with the ranges of biting flies, such as tabanids and tsetse flies. In other words, equids are more likely to have stripes if they live in an area that also has biting flies.
Simultaneously, the authors' map mostly eliminated the other possible explanations that exist for zebra stripes: camouflage, predator confusion, body temperature control, and social identification. However, the presence of rump and leg stripes correlated with the range of hyenas, so it is possible that the stripes play a role in keeping hyenas at bay. The authors argue, though, that adult zebras are generally too big and powerful for hyenas to hunt, so it is unlikely that striping patterns play an important role here. Also, Asian equids that were once hunted by wolves and tigers do not have stripes.
Thus, the evidence from this and previous studies indicates that zebras have stripes to avoid flies, not lions. Now, they need to find a way to avoid humans.
Source: Tim Caro, Amanda Izzo, Robert C. Reiner Jr, Hannah Walker & Theodore Stankowich. "The function of zebra stripes." Nature Communications 5:3535 (2014). DOI: 10.1038/ncomms4535
There's a controversy surrounding breastfeeding. No, not the controversy about whether it's appropriate for mothers to breastfeed in public, say, in front of a class of college students. Science can't help with that one.
I'm talking about the other breastfeeding controversy: How do babies get milk out of those things, anyway? Apparently, the answer to that question has been the subject of a century-long debate. Who knew?!
You might think that the breastfeeding process is relatively straightforward: The baby latches on and sucks like a little Hoover vacuum cleaner. But, you would be wrong. The biomechanics of suckling are rather complicated. True, the baby does suck by lowering the atmospheric pressure in its oral cavity. However, it also "mouths" (i.e., squeezes the nipple and areola between its gums) and engages in "tongue peristalsis" (i.e., moving its tongue in a wavelike manner). See? Complicated. (If you are in need of a visual reminder of how suckling works, see this video clip of Jim Carrey from the movie Liar Liar.)
The scientific issue at the heart of this research, therefore, is, "Which of these suckling mechanisms is most responsible for extracting milk?" Finally, we may now have an answer to that question. Israeli and American scientists placed an imaging device under the chin of a breastfeeding mother and recorded ultrasound videos. And, thankfully, the prestigious journal PNAS found this research sufficiently compelling to publish in its gilded pages.
A screenshot from one of the videos is shown below.
They even went to the trouble of constructing a computer model.
After analyzing the videos and running simulations on their computer model, the scientists declared triumphantly: "We have resolved a century-long scientific controversy and demonstrated with a 3D biophysical model that infants suck breast milk by subatmospheric pressures and not by mouthing the nipple–areola complex to induce a peristaltic-like extraction mechanism."
Translation: Babies suck.
Source: David Elad et al. "Biomechanics of milk extraction during breast-feeding." PNAS. Published online before print: 24-March-2014. doi: 10.1073/pnas.1319798111
Through movies and other forms of popular culture, everybody knows that suppressing memories of tragedies is bad for your mental health. Completely unbeknownst to you, those tragic images that are buried deep inside your unconscious can fester for years -- decades even -- affecting everything from your behavior to your innermost thoughts.
Or maybe not. New research published in PNAS suggests that this commonly held belief from pop psychology may not actually be true.
The authors had 24 participants go through a three-stage experiment. (See first figure.)
In the first stage ("learning"), volunteers learned to associate words with pictures. For instance, the word "duty" was associated with a picture of binoculars. In the second stage ("TNT," meaning "think/no-think"), participants were shown the cue word and told to either think about the associated object or to not think about the associated object. In the third stage, blurry images of the objects they studied or brand new objects were displayed. When the object became recognizable, the participants were told to press a button. Stages #2 and #3 were performed inside of an fMRI machine in order to determine which parts of the brain were active.
The researchers made two very interesting discoveries. The first was that volunteers took longer to recognize objects that they were told to not think about. (See second figure.)
As shown, it took volunteers 2,482 milliseconds (ms) to recognize objects they hadn't seen before ("unprimed"). Objects that the volunteers learned but did not use in Stage #2 of the experiment served as a "baseline" control. It took them 2,249 ms to recognize these objects, and 2,269 ms to recognize objects that they actively thought about. However, the volunteers took 2,310 ms to identify the objects they were told specifically not to think about. This implies that suppressing visual memories of an object affects a person's ability to later perceive the very same object.
The fMRI data the researchers collected led them to a second discovery. Suppressing images activated the right middle frontal gyrus (MFG). The authors used Bayesian modeling to conclude that the right MFG was suppressing the brain region involved in visual perception, called the fusiform cortex. They believe this explains why it took longer for volunteers to recognize objects that they had actively suppressed in their minds.
Of course, this research does not disprove the existence or effect of unconscious memories. But, it certainly challenges the notion that suppressing memories is bad. On the contrary, suppressing traumatic images may make it harder to recall or experience them again in the future.
So go ahead and suppress all those horrible childhood memories you have of being bullied on the playground or accidentally walking in on your parents. It might just work.
Source: Pierre Gagnepain, Richard N. Henson, and Michael C. Anderson. "Suppressing unwanted memories reduces their unconscious influence via targeted cortical inhibition." PNAS. Published online before print: 17-Mar-2014. doi: 10.1073/pnas.1311468111
A team of researchers has identified multiple genes linked to musical aptitude, providing further evidence that musical ability is heritable.
The team, led by Irma Järvelä of the University of Helsinki, analyzed the genomes of 767 people, representing 76 families. Fifteen of the families were selected for having several professional musicians, while the rest were simply recruited through advertisements. Subjects older than seven were assessed for musical aptitude. In three separate tests, subjects were asked to discern small differences in tone pitch and duration, and to detect minute differences in sound patterns from various musical sequences. Armed with this abundance of genomes and test results, the researchers then searched for specific genes associated with musical aptitude.
The most notable link was found on the third chromosome, near the gene GATA2. GATA2 produces a transcription factor key to the development of the inner ear. Another gene, PCDH7, was also strongly linked to musical aptitude. PCDH7 is known to be expressed in the cochlea, the auditory portion of the inner ear, and the amygdaloid complex in the brain.
"In neuroscientific studies, [the] amygdala has been linked to emotional interpretation of musical sounds." the researchers explained.
Currently, scientists are unable to precisely pinpoint how much of musical ability is tied to genetics and how much is tied to training. Putting a percentage on this complicated interplay would almost certainly be an oversimplification.
What we do know is that musical training, especially at an early age when the brain is most malleable, boosts the growth of neurons and makes the brain more "plastic" -- able to change. This can positively affect many cognitive abilities, but especially musical ones. A study conducted in 1998 found that 40% of musicians who initiated training at 4 years of age or younger developed absolute pitch -- the ability to effortlessly name a note or a group of notes upon hearing them -- whereas only 3% of those who started training at or after 9 years of age did.
As studies like the current one become more commonplace, bolstering our knowledge of innate musical ability, it may become possible to sequence a baby's genome and determine if he or she is predisposed to musical stardom.
Source: J Oikkonen, Y Huang, P Onkamo, L Ukkola-Vuoti, P Raijas, K Karma, VJ Vieland, and I Järvelä. "A genome-wide linkage and association study of musical aptitude identifies loci containing genes related to inner ear development and neurocognitive functions." Molecular Psychiatry, advance online publication, 11 March 2014. doi:10.1038/mp.2014.8