
December 2012 Archives

An Easy Way to Reduce Traffic Congestion?

The holidays are here, and everybody knows what that means: last-minute gift-buying, restless children, and sitting in traffic while shuttling back and forth between your parents' house and your in-laws'. There isn't much we can do about the first two problems, but scientists are trying to figure out how to fix the last one.

In a new traffic study, researchers for the first time used cell phone data to track the locations of drivers in Boston and the San Francisco Bay Area. Traffic studies typically rely on driver diaries, but the new method allowed the collection of far larger quantities of data that are also far more accurate. Using cell phone data, the researchers could pinpoint "major driver sources" (MDSs) and their relationship to traffic flow.

What the researchers found surprised them. Most traffic congestion is due to just a handful of MDSs, which have much higher than average commute times.

The authors modeled what would happen if overall traffic was reduced by 0.1% to 1%. They specifically focused on reducing trips from the handful of MDSs with the greatest congestion. (In Boston, that was 15 MDSs; in the Bay Area, 12.) The model predicted that targeting just this handful of sites would cause a major reduction in congestion and travel time.

It should be pointed out that in order to achieve such a reduction in congestion and travel time, the authors reduced traffic from the major MDSs by anywhere from 2.5% to 25% in Boston and 2.7% to 27% in the Bay Area. That's a lot of cars to take off the road. Most likely, tolling drivers from these high-congestion MDSs would be the best way to get them out of their cars, but that's rarely a popular solution.
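The intuition behind the result can be seen in a toy model (my own illustration, not the study's actual method): because delay on a road grows steeply and nonlinearly with traffic volume, removing a few trips from the busiest sources buys more relief than spreading the same cut evenly across all drivers. The sketch below uses the standard Bureau of Public Roads (BPR) delay curve; all trip volumes and capacities are hypothetical.

```python
# Toy model (illustrative, NOT the study's method): delay grows steeply
# with volume, so trimming trips from the busiest driver sources removes
# more delay than a uniform trim of the same total size.
# Delay follows the standard BPR curve: t = t0 * (1 + 0.15 * (v/c)**4)

def bpr_time(volume, capacity, free_flow_time=10.0):
    """Travel time in minutes on one road via the BPR formula."""
    return free_flow_time * (1 + 0.15 * (volume / capacity) ** 4)

def total_delay(volumes, capacity=1000.0):
    """Total person-minutes of delay beyond free-flow travel time."""
    return sum(v * (bpr_time(v, capacity) - 10.0) for v in volumes)

# Hypothetical trip volumes from ten driver sources, one road each.
volumes = [1500, 1400, 1300, 900, 800, 700, 600, 500, 400, 300]
trips = sum(volumes)

# Option A: cut 1% of trips uniformly across every source.
uniform = [v * 0.99 for v in volumes]

# Option B: remove the same total number of trips (1% of all trips),
# but take them only from the three busiest sources.
cut = trips * 0.01 / 3
targeted = [v - cut if i < 3 else v for i, v in enumerate(volumes)]

print(round(total_delay(volumes)))   # baseline delay
print(round(total_delay(uniform)))   # uniform 1% cut
print(round(total_delay(targeted)))  # targeted cut: lowest delay
```

With the same number of trips removed, the targeted cut reduces total delay substantially more than the uniform one, which is the study's central point in miniature.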

And which cities could benefit the most from this research? Here's a list of American cities with the worst traffic:

1. Honolulu
2. Los Angeles
3. San Francisco
4. New York City
5. Bridgeport, CT
6. Washington, DC
7. Seattle
8. Austin
9. Boston
10. Chicago

Perhaps city planners in these cities should take note of this study.

Source: Pu Wang, Timothy Hunter, Alexandre M. Bayen, Katja Schechtner & Marta C. González. "Understanding Road Usage Patterns in Urban Areas." Scientific Reports 2, Article number: 1001 (2012). doi:10.1038/srep01001

(Image: Traffic via Shutterstock)


Ebola Decoy Protein Fools Immune Response


Biosafety Level 4 (BSL-4) lab where Ebola research is conducted. (Photo: Wikipedia)

The Ebola virus simultaneously elicits fear and fascination. It causes a horrifying death for its victims, who suffer from hemorrhagic fever. Even though Ebola is primarily transmitted via direct contact with infected organisms or their fluids, recent research suggests that it may also be transmitted through the air. As creepy as all that is, new research demonstrates that Ebola may have yet another trick up its sleeve.

But first, a little background:

The surface of the Ebola virus is covered with glycoprotein (GP), which binds host cells, allowing the virus to enter. Ebola is an RNA-based virus, which means its genetic information is stored in RNA, not DNA. Thus, it has to use a special enzyme (an RNA-dependent RNA polymerase) to create mRNA (the molecule which is translated into protein). This error-prone enzyme is vital to generating GP, which comes in two forms. The first is a full-length version, which is generated by a mysterious process known as RNA editing. Essentially, the polymerase enzyme purposefully mis-transcribes the RNA genome, about 20% of the time, to generate an "edited" mRNA from which the full-length version of GP is translated. The other 80% of the time, editing does not occur, and a shorter GP is produced. This short GP is secreted outside of the virus-infected cell and is known as secretory GP (sGP). 
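The 20/80 editing split described above can be sketched as a simple coin-flip process. The proportions below are the ones quoted in this post, not measurements from the paper:

```python
# Sketch of stochastic RNA editing (illustrative rates from the post):
# each transcription event yields the edited, full-length GP mRNA about
# 20% of the time and the shorter sGP mRNA the other ~80%.
import random

def transcribe_gp(rng, edit_rate=0.20):
    """Return the GP form produced by a single transcription event."""
    return "GP" if rng.random() < edit_rate else "sGP"

rng = random.Random(42)  # seeded so the run is repeatable
products = [transcribe_gp(rng) for _ in range(10_000)]
frac_full = products.count("GP") / len(products)
print(f"full-length GP fraction: {frac_full:.2f}")  # lands near 0.20
```

The upshot: for every full-length GP transcript displayed on the virus, roughly four sGP transcripts are produced and secreted, which matters for what follows.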

Now, researchers have demonstrated that sGP is a clever invention. The Ebola virus appears to subvert the host's immune response by using sGP as a decoy to divert away precious antibodies.

Full-length GP is prominently displayed on the surface of the virus, and a successful human antibody response needs to target this protein. But, Ebola doesn't want that to happen, as it would neutralize the virus and prevent it from infecting cells. To counter this immune assault, Ebola deploys sGP, which subverts the immune response in two ways.

The first way is by passively absorbing antibodies directed at the full-length GP. Full-length GP and sGP share many structural features, so antibodies directed against the full-length GP can be tricked into binding the decoy sGP.

But, there's a second, more insidious way that sGP can subvert the immune response. It can trigger the proliferation of B-cells (antibody-producing cells) that preferentially bind sGP. In other words, sGP redirects the immune response to produce antibodies more suitable for binding the decoy sGP, not the full-length GP. Because so much sGP is produced by Ebola, this decoy protein skews the immune response toward attacking it, leaving the full-length GP on its surface free of neutralizing antibodies and, hence, free to attack host cells. The authors refer to this new pathogenic mechanism as "antigenic subversion."
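The arithmetic of the decoy can be illustrated with a back-of-the-envelope calculation (hypothetical numbers, assuming antibodies bind the two GP forms with similar affinity and distribute in simple proportion to antigen abundance):

```python
# Toy decoy arithmetic (hypothetical abundances, simple proportional
# binding): the more sGP secreted relative to virion-bound GP, the
# smaller the share of cross-reactive antibody left to neutralize.

def neutralizing_fraction(gp, sgp):
    """Share of cross-reactive antibody that ends up on virion-surface GP,
    assuming binding in proportion to relative antigen abundance."""
    return gp / (gp + sgp)

for sgp_excess in (1, 4, 9, 99):
    frac = neutralizing_fraction(1.0, float(sgp_excess))
    print(f"{sgp_excess}x sGP excess -> {frac:.2f} of antibody neutralizes")
```

At a ninefold sGP excess, only a tenth of the antibody pool is available to hit the virus itself; the active B-cell skewing described above makes the real situation even worse for the host.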

The researchers caution that such viral treachery may have implications for vaccine design.

Source: Mohan GS, Li W, Ye L, Compans RW, Yang C (2012). "Antigenic Subversion: A Novel Mechanism of Host Immune Evasion by Ebola Virus." PLoS Pathog 8(12): e1003065. doi:10.1371/journal.ppat.1003065.




Why America's Best Scientists Don't Get Funded

John Ioannidis doesn't mind stirring the pot. He is famous in the scientific community for very candidly explaining why scientists are getting many things wrong. He has gone so far as to claim that perhaps 90% of the studies in medical journals are incorrect, and he has the research to back it up.

A possible explanation for this is publication bias: the tendency of scientists to submit, and/or journals to accept, positive results (i.e., data which supports a particular hypothesis and is thus "exciting") instead of negative results (i.e., data which essentially says, "Nothing interesting happened here"). As a consequence, journals are crammed full of studies which are never replicated. Moreover, follow-up studies which fail to replicate previous ones often go unpublished.

Writing in The Atlantic, David Freedman summarizes the problem well:

Imagine, though, that five different research teams test an interesting theory that's making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly "proves" it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news?
Other explanations include poor or biased experimental design, emphasizing statistical significance over biological relevance, or -- in extreme cases -- outright fraud.

As if all of that weren't discouraging enough, Dr. Ioannidis has just struck again. This time, he has shown that America's best scientists aren't necessarily the ones who receive funding.

To arrive at this conclusion, he and his co-author Joshua Nicholson examined the most highly cited papers (1,000+ citations) written by U.S.-based biomedical scientists. A random sample of these papers showed that 60% of these exceptional scientists were not currently funded by the National Institutes of Health (NIH) as principal investigators (i.e., "lead scientists").

Next, they examined professors in "study sections." These are groups of scientists who have been awarded NIH grants; because of their expertise, they are invited by the NIH to determine which of their colleagues also deserve NIH funding. As expected, most (83%) of study section scientists were currently receiving funds from the NIH.

But, here's the rub: How many of the study section scientists had written highly cited papers? A mere 0.8%. In other words, the scientists deciding which of their colleagues deserve funding are not themselves exceptional scientists. (They may be good scientists, but they aren't the best.) They also tend to award funding to research that is similar to their own.

Combined with the fact that a whopping 60% of exceptional scientists were not receiving NIH funding, Nicholson and Ioannidis lament that "not only do the most highly cited authors not get funded, but worse, those who influence the funding process are not among those who drive the scientific literature."

They believe that the current funding system has a built-in conflict of interest. Though the NIH is supposed to award funding based solely on merit, that clearly is not always happening. Why? Sometimes it is for benign reasons, such as scientists leaving academia or being in graduate school at the time of their high-impact publication.

But there is also a more malevolent reason, which the authors possibly hint at, but don't explicitly state: Study section scientists do not have a strong incentive to fund exceptional, creative colleagues. Why? Because, very likely, they are also competitors.

Nicholson and Ioannidis propose several solutions, one of which is essentially automatic funding for the truly exceptional scientists. Perhaps this could break the perception that the NIH is rewarding conformity over ingenuity.

Source: Joshua M. Nicholson & John P. A. Ioannidis. "Research grants: Conform and be funded." Nature 492, 34-36 (06 December 2012). doi:10.1038/492034a

Image: Pile of Money via Shutterstock