Scientists have discovered a diet that ensures you will live cancer free for the rest of your life. It's being hailed as a "miracle," a "marvel," a "breakthrough," and even a "quantum leap in nutrition."
Past studies have indicated that pretty much everything in life causes cancer. To circumvent this unpleasant truth, scientists had to think outside the box. The diet they created is nothing short of revolutionary.
There's no cooking, no grocery shopping, and no annoying delivery drivers. In fact, there's no food or water whatsoever! You don't consume anything!
Nine subjects took part in the study, which was not published in a peer-reviewed journal. All of the participants abstained from eating or drinking for the duration of the study. When examining the results, the researchers were utterly amazed to find that none of the subjects showed any signs of cancer.
"The 'no-nutrient' diet was not associated with any form of cancer," the researchers reported. "Moreover, the three children that took part in the study showed no signs of autism."
Vindicated by the new research is popular health blogger, Vani Hari, also known as "The Food Babe." For years, Hari has urged her followers to avoid all toxins and chemicals (the 'no-nutrient' diet has none), an effort that has provoked harsh scrutiny from the scientific community. That independent scientists have now proven her correct beyond a shadow of a doubt is a delightful piece of irony.
The researchers don't plan to perform a follow-up study, but they do intend to lobby Congress to revise the recently-released 2015-2020 U.S. Dietary Guidelines to include the 'no-nutrient' diet.
"We expect the Guidelines to be updated very quickly in light of our incontrovertible findings," they said in a press release.
The researchers admitted one minor side effect of their diet: All of the subjects passed away after five days.
Author's Note: This article is satire and all of its content is completely fabricated. The author does not actually recommend depriving yourself of food and water. Nor does he seek to trivialize the plight of cancer patients or denigrate those diagnosed with autism. The article is intended to highlight -- in an utterly absurd fashion -- the often ridiculous nature of nutrition science and how it is reported in the popular press.
(Image: Salimfadhley/Wikimedia Commons)
5:30 A.M., Monday July 16th, 1945: The day dawned brighter than ever before over the New Mexico desert. But it was not the Sun's soothing rays that set the landscape alight; it was the radiant flash of the very first atomic bomb.
Trinity, the nuclear offspring of the Manhattan Project, detonated with the force of 21,000 tons of TNT. The accompanying fireball reached a temperature of 8,430 kelvins, hotter than the surface of the Sun, and sent a mushroom cloud of smoke and debris soaring more than seven miles into the sky.
That day, every human on the planet was reborn into a nuclear era, one where mankind now held the power to end its existence. Also born that day was an otherworldly, greenish glass, a physical reminder of the cataclysmic explosion. Scientists dubbed the strange material trinitite.
The ghostly glass littered the ground for hundreds of meters around the blast site, though it might be more accurate to say that it "transformed" the ground. The sand, which blanketed the desert the day before, had been replaced by this new material. Walking on it was like setting foot on the surface of an alien world.
Trinity's atomic blast catalyzed the transformation. Amidst destructive turbulence and searing heat, sand was thrown up into the fireball, where it melted, reformed, and rained down upon the ground. Scientists discovered evidence of this storm strewn all over in the form of trinitite beads -- molten drops that solidified before they hit the ground. Years later, these pebbles are still surfacing as ants excavate them from their tunnels.
Despite its distinctly eerie appearance, trinitite really isn't that much different from sand. The glass is composed of silicon dioxide, the same compound that forms quartz, the second-most abundant mineral in Earth's continental crust. Closer inspection, however, reveals a material tainted with trace amounts of forty different elements, many of them radioactive.
In fact, to this very day, trinitite remains radioactive, buzzing with activity from isotopes of cobalt, barium, europium, uranium, and plutonium. It's safe to handle, but one would be ill advised to make jewelry out of it.
Much of the trinitite created on that fateful July day more than sixty years ago has now been bulldozed and buried, but rare specimens do reside in the hands of collectors. Rarer still, is red trinitite, which gets its color from the presence of copper. When scientists examined samples of red trinitite under a microscope, they found metallic, round blobs within the glass. These "chondrules" were melted pieces of iron and lead from the bomb itself, mementos encased in an atomic glass.
Primary Source & Images: Eby, N., Hermes, R., Charnley, N. and Smoliga, J. A. (2010), Trinitite—the atomic rock. Geology Today, 26: 180–185. doi: 10.1111/j.1365-2451.2010.00767.x
(Top Image: Shaddack)
Most everyone has a pretty good idea of what an atomic explosion looks like. Through images and video, we know the flash, the fireball, the mushroom cloud. Seeing it all in person is quite different, however.
One of the few firsthand accounts immortalized to paper comes courtesy of the inimitable Richard Feynman, who was present for the very first detonation of a nuclear weapon. The test, codenamed "Trinity," was carried out on July 16, 1945 in the Jornada del Muerto desert of New Mexico. The 20-kiloton blast was the culmination of years of work by the scientists of the Manhattan Project. One of those scientists, the 27-year-old Feynman, sought to view his handiwork with his own eyes:
They gave out dark glasses that you could watch it with. Dark glasses! Twenty miles away, you couldn't see a damn thing through dark glasses. So I figured the only thing that could really hurt your eyes (bright light can never hurt your eyes) is ultraviolet light. I got behind a truck windshield, because the ultraviolet can't go through glass, so that would be safe, and so I could see the damn thing.
Time comes, and this tremendous flash out there is so bright that I duck, and I see this purple splotch on the floor of the truck. I said, "That's not it. That's an after-image." So I look back up, and I see this white light changing into yellow and then into orange. Clouds form and disappear again--from the compression and expansion of the shock wave.
Finally, a big ball of orange, the center that was so bright, becomes a ball of orange that starts to rise and billow a little bit and get a little black around the edges, and then you see it's a big ball of smoke with flashes on the inside of the fire going out, the heat.
All this took about one minute. It was a series from bright to dark, and I had seen it. I am about the only guy who actually looked at the damn thing--the first Trinity test. Everybody else had dark glasses, and the people at six miles couldn't see it because they were all told to lie on the floor. I'm probably the only guy who saw it with the human eye.
Actually, Feynman wasn't the only person who chose not to don their safety glasses that day. Ralph Carlisle Smith, the future assistant director of Los Alamos Scientific Laboratory, also observed the explosion with the naked eye. Here's what he saw:
"I was staring straight ahead with my open left eye covered by a welders glass and my right eye remaining open and uncovered. Suddenly, my right eye was blinded by a light which appeared instantaneously all about without any build up of intensity. My left eye could see the ball of fire start up like a tremendous bubble or nob-like mushroom. I Dropped the glass from my left eye almost immediately and watched the light climb upward. The light intensity fell rapidly hence did not blind my left eye but it was still amazingly bright. It turned yellow, then red, and then beautiful purple. At first it had a translucent character but shortly turned to a tinted or colored white smoke appearance. The ball of fire seemed to rise in something of toadstool effect. Later the column proceeded as a cylinder of white smoke; it seemed to move ponderously. A hole was punched through the clouds but two fog rings appeared well above the white smoke column."
There are other accounts, of course, from those who did not actually see an atomic explosion, but felt its effects infinitely more than either Feynman or Smith. Over 100,000 people lost their lives when atomic bombs were dropped on Hiroshima and Nagasaki. Here are a few of their stories.
(Image: Jack Aeby)
Religion is declining in America.
This is actually something fairly new. For decades, religion has been on the wane in developed countries worldwide, with statistical models going so far as to predict its eventual extinction in nine countries: Australia, Austria, Canada, the Czech Republic, Finland, Ireland, the Netherlands, New Zealand and Switzerland. America was pretty much the sole country bucking the trend to nonbelief. No longer.
In 1998, 62 percent of Americans said they were “moderately” or “very” religious. In 2014, that number dropped to 54 percent. According to a recent study, irreligion is particularly pronounced amongst younger Americans.
"Nearly a third of Millennials were secular not merely in religious affiliation but also in belief in God, religiosity, and religious service attendance, many more than Boomers and Generation X’ers at the same age," the authors wrote. "Eight times more 18- to 29-year-olds never prayed in 2014 versus the early 1980s."
In light of the new data, it seems inevitable that as demographics change over a matter of decades, religious practitioners will become a minority group in the United States. What's driving the decline?
While a variety of factors are likely at play, I'd like to focus on what may be the most significant contributor: science.
We are perhaps the first generation of humans to truly possess a factually accurate understanding of our world and ourselves. In the past, this knowledge was only in the hands and minds of the few, but with the advent of the Internet, evidence and information have never been so widespread and accessible. Beliefs can be challenged with the click of a button. We no longer live in closed, insular environments where a single dogmatic worldview can dominate.
As scientific evidence questions the tenets of religion, so too does it provide a worldview to follow, one that's infinitely more coherent.
Sir James George Frazer, often considered one of the founding fathers of modern anthropology, wrote that -- when stripped down to the core -- religion, science, and magic are similar conceptions, providing a framework for how the world works and guiding our actions. He also noted that humanity moved through an Age of Magic before entering an Age of Religion. Is an Age of Science finally taking hold?
Bemidji State University psychology professor Nigel Barber expounds upon Frazer's thoughts even further.
"[He] proposed that scientific prediction and control of nature supplants religion as a means of controlling uncertainty in our lives. This hunch is supported by data showing that more educated countries have higher levels of non belief and there are strong correlations between atheism and intelligence."
Frazer's hunch is also supported by a recent study published in the journal Personality and Individual Differences. Querying 1,500 Dutch citizens, a team of researchers led by Dr. Olga Stavrova of the University of Cologne found that belief in scientific-technological progress was positively associated with life satisfaction. This association was significantly larger than the link between religion and life satisfaction. Moreover, using the World Values Survey, they extrapolated their findings worldwide. As Ronald Bailey reported in Reason:
Stavrova and company concluded that the "correlation between a belief in scientific–technological progress and life satisfaction was positive and significant in 69 of the 72 countries." On the other hand, the relationship between religiosity and life satisfaction was positive in only 28 countries and actually negative in 5 countries.
"Believing that science is or will prospectively grant... mastery of nature imbues individuals with the belief that they are in control of their lives," Stavrova concluded.
So not only does science dispel religious belief, it also serves as an effective substitute for it. Science will never drive religion completely extinct, but religion may be marginalized to a small minority bereft of influence.
One of science's primary aims is to seek out knowledge that will hopefully better our world and the lives of all who live on it. That's something we all can believe in.
At least 14 million people in the United States are currently diagnosed with cancer, and around half of them, at one time or another, have pursued alternative therapies for their disease. While some of these therapies can help alleviate the debilitating side effects associated with cancer, none are effective at treating the disease or curing it outright.
But facts and evidence haven't stopped snake oil salesmen from pushing their ineffective panaceas. Here are six of the strangest "cancer cures" ever sold:
1. Emu Oil. Some products that FDA regulators examine present a genuine challenge to classify as legitimate or bogus. "Pure emu oil" was not one of them. Its claim to "eliminate skin cancer in days" was particularly specious.
Harvested from the adipose tissue of the emu, a large flightless bird, emu oil may impart some medicinal benefits, but they are thus far unconfirmed.
2. Electrohomeopathy. In the 19th century, Count Cesare Mattei found a way to capitalize on the burgeoning practice of homeopathy. First, add "electro" to its name. Second, sell custom electric devices to bolster traditional homeopathic treatments. The scheme worked brilliantly. The practice, itself, did not. Despite its ineffectiveness, it is still practiced today, particularly in bastions of naturopathic medicine like India, Pakistan, and Bangladesh.
Homeopathy is bunk. Providing a spark of electricity doesn't change that.
3. The Grape Cure. Grapes make for a delicious, nutritious snack and even produce a remarkable burst of plasma when microwaved! But while the multifaceted fruit is good for a great many situations, it isn't effective at curing cancer.
Tell that to Johanna Brandt, who pioneered a grape-only diet for curing cancer. Dr. Stephen Barrett dispels her quackery.
"There is no scientific evidence that Johanna Brandt's 'Grape Cure' has any value. Even worse, her recommended diet is deficient in most essential nutrients and can cause constipation, diarrhea, cramps, and weight loss that is undesirable for cancer patients."
4. Germanic New Medicine. According to Ryke Geerd Hamer, the founder of Germanic New Medicine, severe diseases like cancer result from shocking events that trigger psychological conflict. This conflict manifests physically as disease. To cure the disease, you simply need to resolve the conflict.
Doubling down on crazy, Hamer claims that evidence-based medicine is a Jewish conspiracy designed to kill non-Jews. The German Cancer Society and the German Medical Association strongly disagree.
5. Zap Away the Parasites. For decades, Hulda Regehr Clark claimed to have "The Cure for All Cancers.” The "cure" of which she spoke and wrote so glowingly, was a "zapper" device that supposedly removed disease-causing parasites from the body and resulted in a 95 percent cure rate. Sales of her books and unfounded treatments earned her millions of dollars.
Of course, her cure has never been substantiated by any sort of evidence, nor could it save her. Clark died of cancer in 2009.
6. Venus Flytrap Extract. Due to its carnivorous nature and quirky looks, the Venus flytrap is one of the few plants that is actively poached, so much so, in fact, that it is at risk of extinction. No doubt contributing to the plant's desirability are dubious claims that it can "eat cancer." Venus flytrap extract is sold in the form of an herbal remedy called Carnivora. Despite its fantastic name, no clinical studies have shown the supplement to be effective in the treatment of cancer.
IN 2005, one day before the comet Tempel 1 made its closest approach to the Sun, NASA scientists got a chance to embrace their inner Hulks. Like rambunctious schoolchildren giddy to cause a little mayhem, they smashed an 820-pound impactor into the comet at tremendous speed, and then -- undoubtedly with large grins plastered upon their faces -- watched what happened.
Almost instantly, a massive cloud of dust began spewing from the 72-trillion-kilogram comet. Subsequent analysis from the nearby Deep Impact probe revealed the presence of silicates, carbonates, metal sulfides, amorphous carbon, and hydrocarbons, as well as water ice, within the plume -- in short, the stuff that life is made of. When the enriched dust cloud dissipated, scientists were able to view their handiwork: a crater 328 feet wide and 98 feet deep.
In the wake of NASA's Deep Impact mission, interest in comets grew by orders of magnitude. Scientists had their first concrete evidence that the frozen hunks of water, rock, and various gases contained the building blocks of life. No longer mere objects to be charted by astronomers and ogled by sky watchers, comets now demanded an existential reverence.
TRAVEL BACK 4 billion years and you might find yourself in the middle of a storm of cataclysmic proportions. At this time, when the planets of the young Solar System weren't neatly synced into their elliptical orbits, it has been theorized that Uranus and Neptune rammed into a reservoir of icy comets, sending asteroidal and cometary debris raining down on the inner planets. During the Late Heavy Bombardment, as the event is called, the Earth was getting slammed, so much so, in fact, that if life existed, the surface may have been sterilized. As many as 22,000 objects rocked our home over a period of 300 million years. However, in subsurface cracks created by the pummeling, life could have been boosted, or even seeded. Recent research suggests that impacts of comets containing organic compounds could generate peptides, the building blocks of proteins. The Solar System's most cataclysmic storm could very well have been a drizzle of life.
EVEN MORE ASTOUNDING, some of the comets that struck Earth could have already contained life. The chances are remote, but it is possible. According to recent research published in the journal Astrobiology, large comets with a radius of over 10 kilometers could contain liquid water at their cores. The decay of radioactive isotopes of aluminum or iron could supply the heat necessary to melt the inner ice. Katharina Bosiek, along with her colleagues Michael Hausmann and Georg Hildenbrand, suggest that a thick layer of dust could protect the core's liquid environment from solar radiation, echoing learned speculations found in prior research. Their findings make the hopeful words of Nalin Chandra Wickramasinghe, the Cardiff University astrobiologist who was one of the earliest proponents of panspermia, believable.
"Supposing comets were seeded with microbes at the time of their formation from pre-solar material, there would be plenty of time for exponential amplification and evolution within the liquid interior," he wrote in 2009.
In this view, large comets could be seen as enchanting snow globes just waiting to be smashed upon fertile ground, thus releasing the microbial life contained inside. It's not inconceivable. Some of Earth's extremophiles display surprising resilience to the inhospitable conditions of space, and they didn't even evolve there.
Skepticism is called for, however. Given the sometimes transient nature of comets and the harsh conditions of space, it's hard to imagine that life, if it ever existed inside them, could still exist today. Still, the tantalizing notion makes a mission to the Solar System's Kuiper Belt or Oort Cloud, where as many as 100,000 comets reside, that much more tempting.
Reference: Bosiek Katharina, Hausmann Michael, and Hildenbrand Georg. "Perspectives on Comets, Comet-like Asteroids, and Their Predisposition to Provide an Environment That Is Friendly to Life." Astrobiology. March 2016, ahead of print. doi:10.1089/ast.2015.1354.
It seems odd to say that scientists were ecstatic about the opportunity to shoot a critically endangered whale, but that was exactly how Katie Jackson and her colleagues at the North Atlantic Right Whale Program felt when they saw Whale 1334 on a mild February day in 2013 off the coast of Jacksonville, Florida.
The weapon of choice was a harmless one, of course. A bolt from the large crossbow would certainly harm or kill a human, but it would be little more than a pinprick to an animal the size of a school bus, and a valuable pinprick at that. A mechanism at the end of the bolt would collect a tiny piece of blubber from 1334, enough for biologists to sample and study her DNA. When Jackson's partner Tom Pitchford connected with the shot, the duo was elated.
For decades, 1334's genetic information had been prized more than any other right whale's. Over a timespan of thirty years, she had been the most productive mother of all North Atlantic right whales, giving birth nine times. Yet her comings and goings were puzzling to say the least. She did not show up in regions where the whales typically congregated, and she disappeared for years at a time. In her recent book Resurrection Science, journalist M.R. O'Connor expounded upon the mystery.
"She was first seen off the southern coast back in the early 1980s and reappeared there periodically. But unlike the others, 1334 did not show up in the Bay of Fundy [off the coast of Maine] in the summers with the rest of the right whales. No one saw her again, until she appeared in Florida three years later with a new calf. And then the same thing happened three years later... 1334 gave birth during years when biologists saw calving rates stall and even decline in the general population. In 2000, she was the only right whale to give birth to a calf."
Considering that just five hundred right whales remain in the world, 1334's mysterious, yet prolific procreating was instrumental in keeping the species alive. Could there be secrets in her DNA that might prevent their extinction?
As O'Connor described, a right whale pregnancy is a monumental task. Pregnant females must consume as many as 4 million calories a day in the form of minuscule zooplankton. The binging doesn't stop even when the calf is born after a yearlong gestation, for that's when the nursing begins, which roughly lasts another year. Due to the great expense of reproduction, female right whales are able to delay pregnancy until they've stored up enough energy in the form of blubber to afford giving birth.
Thus, when Trent University geneticist Brad White started examining 1334's DNA in spring 2014, he had a hunch that her genotype permitted her to birth calves regardless of good or poor nutrition. That hunch is still being explored.
"Nothing has jumped out yet about the DNA profile," he told RCS in an email.
Of course, 1334's success could be more attributed to her behavior than to her hard-wiring. The massive whale goes her own way, and it's entirely possible that she's stumbled upon more hospitable grounds to mate and rear a calf. Scientists still don't know exactly where she travels.
That an animal weighing well north of 100,000 pounds can disappear so easily is remarkable, especially considering that right whales were once hunted and killed like cattle.
O'Connor simultaneously laments and appreciates that the mystery of 1334 remains unsolved.
"As much as I want to know where 1334 goes, I cheered her elusiveness and hoped that the ocean is still big enough for her to escape the forces threatening her kind."
Primary Source: M. R. O'Connor. Resurrection Science: Conservation, De-Extinction and the Precarious Future of Wild Things, St. Martin's Press, 2015
Though difficult to fathom, just 1,500 years ago, English was a wisp of a language, spoken by a smattering of Germanic tribes as they migrated from mainland Europe to the island of Britain. Today, linguists whisper and wonder: will English eradicate all other languages?
To do so would be a tall task. English's 339 million native speakers are outnumbered by those who speak Spanish (427 million) and Mandarin Chinese (897 million).* What's more, English's native-speaking population has been decreasing steadily. While this situation seems to suggest that English is on the way out, globally, it's actually ascending. That's because 510 million people from all over the world have elected to learn English as a second language, and more start learning every day. No other language comes close.
In science, business, and the media, English dominates. Learning the language is a cheap price of admission to join an increasingly interconnected world.
A side effect is that other languages are starting to fall by the wayside. Prominent linguist David Graddol estimates that as many as 90 percent of the world's 6,000 to 7,000 languages will go extinct this century. His learned guess is echoed by John McWhorter, a linguistics professor at Columbia University. Backing them both is evidence from a study published in 2014. Researchers modeled declines in hundreds of languages and found that, on average, a language is going extinct every two weeks. If this trend continues to play out over the next century, 2,600 languages will be gone. The researchers suggested that a burning desire to benefit from economic growth is what's causing lesser-spoken languages to go up in smoke. More and more, education and employment hinge upon being able to communicate in modern society. This means that parents are not passing on rarer, obsolete languages to their children.
Writing in the Wall Street Journal, McWhorter had this to say on the situation:
"It is easy for speakers to associate larger languages with opportunity and smaller ones with backwardness, and therefore to stop speaking smaller ones to their children. But unless the language is written, once a single generation no longer passes it on to children whose minds are maximally plastic, it is all but lost. We all know how much harder it is to learn a language well as adults."
So as esoteric tongues die, vastly fewer will remain. But will English emerge on top?
"Some may protest that it is not English but Mandarin Chinese that will eventually become the world’s language, because of the size of the Chinese population and the increasing economic might of their nation," McWhorter wrote. "But that’s unlikely. For one, English happens to have gotten there first. It is now so deeply entrenched in print, education and media that switching to anything else would entail an enormous effort... Also, the tones of Chinese are extremely difficult to learn beyond childhood, and truly mastering the writing system virtually requires having been born to it."
While Chinese may remain the most spoken language on account of the large and growing native population that speaks it, English certainly isn't going anywhere. One of the chief reasons is that it has cemented itself as the defining cosmopolitan language of our time. In a 2010 study, Gary Lupyan of the University of Pennsylvania and Rick Dale of the University of Memphis found data to suggest that as more and more non-native speakers learn a language, they inadvertently hack away at the extraneous edges. Over time, the language grows more streamlined and simple to learn. There's no question that English has evolved considerably over the years. Just compare the flowing prose of John Adams and Abraham Lincoln to the simplified speech of Hillary Clinton or Donald Trump.
Of course, linguistic evolution could be completely shaken up by technological advancement. A Star Trek-style universal translator is one of the holy grails of science fiction, and companies like Google are hard at work trying to craft it. If such a device ever enters the realm of reality, it could dismantle the Tower of Babel for good.
*Sentence updated 3/21 to reflect 2016 statistics from Ethnologue.
Last week at the Newton Blog, my colleague Ross Pomeroy discussed a famous puzzle known as one of Zeno's Paradoxes. He presented the resolution of the problem of fitting infinitely many things into a finite space through the understanding of fractals: shapes that repeat the same pattern infinitely many times.
Such an approach might resonate with the original Greek mathematicians who worked on this very problem. They primarily solved physical problems through methods relying upon the geometry of shapes and lines; most high school geometry was discovered by famous Greeks such as Euclid.
Let's crack this venerable nut using the methods of a physicist.
First, a brief recapitulation of the problem: A sprinter completing the 100 meter race has finished one half of his race as he passes the 50 meter (m) mark. He's completed one half of the remaining distance at 75 m, and one half of the 25 m then remaining when he runs 12.5 m further and reaches 87.5 m. As he continues, he finishes one half of each remaining distance, but at each distance, a small amount further remains.
Continuing to divide each remaining half into another small half, we never find the final distance to finish equal to zero -- the point of the finish line. It just keeps getting smaller forever. How can the runner ever reach a line that requires him to run an infinite number of distances?
An answer is provided by the application of the calculus, as first described by the patron saint of inventing mathematics for physical problems -- and the greatest physicist of all time by count of hypothetical Nobel Prizes -- Isaac Newton.
The physical picture is simple: as the distance remaining gets smaller, the rate at which the runner covers it grows faster. The first 50 meters might take 5 seconds, the next 25 meters take 2.5 seconds, the next 12.5 m takes 1.25 s, the next 6.25 m takes 0.625 s, and so on. The smaller the distance, the faster it's covered. As distances approach infinitely small, the time it takes to run them approaches infinitely small as well.
The total time to run the 100 m is all of these smaller and smaller times, added together.
What's not clear, until we possess the powerful ideas of Newton's calculus, is whether all of those small times add up to some finite number (say 9.58 seconds for Usain Bolt, or 15.58 seconds for a more normal person) or whether they add up to infinity -- an infinitely long time to run the 100 m!
Calculus can handle this sort of problem by using a tool called a limit. Running at a steady and world-class speed of 10 m per second, we would find our times to cover each successive remaining half of the course to be 5 s + 2.5 s + 1.25 s + 0.625 s + ... and so on. You can check that this is equal to: 10s*(1/2 + 1/4 + 1/8 + 1/16 + ...). Zeno's paradox boils down to simply what the numbers in those parentheses add up to: infinity or some actual number.
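To see that convergence numerically, here's a minimal Python sketch (the function name and the steady 10 m/s pace are illustrative assumptions, not race data) that adds up the first n split times and watches them close in on 10 seconds:

```python
# Sum the runner's first n "Zeno legs" at a steady 10 m/s.
# The splits 5 s + 2.5 s + 1.25 s + ... should approach 10 s.

def total_time(n_legs, speed=10.0, distance=100.0):
    """Total time to run the first n_legs halvings of the course."""
    remaining = distance
    total = 0.0
    for _ in range(n_legs):
        leg = remaining / 2      # run half of what's left
        total += leg / speed     # time = distance / speed
        remaining -= leg
    return total

for n in (1, 5, 10, 30):
    print(n, total_time(n))
# 1 leg gives 5.0 s; by 30 legs the total is within a billionth
# of a second of 10 s, and it never exceeds 10 s.
```

The partial sums grow, but they are trapped below 10 seconds, which is exactly the behavior the limit formalizes.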
Using the limit, we can subtract out the infinite number of terms, perform some mop-up algebra, and corner the resulting troublesome infinity under the bottom of a fraction where 1/infinity = 0 to kill it. Then we get our answer: 1/2 + 1/4 + 1/8 + ... = 1. (Here are more rigorous and simpler proofs if you're so inclined.) Those infinitely many fractions all add up to a small whole, giving us a finite time when we put the speed multiplier back in. This is why runners break the tape just like they expect to.
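The "subtract out the infinite terms" maneuver is the standard geometric-series trick; a sketch of the algebra, writing $S_n$ for the sum of the first $n$ fractions, looks like this:

```latex
S_n = \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^n},
\qquad
2S_n = 1 + \frac{1}{2} + \cdots + \frac{1}{2^{n-1}} = 1 + S_n - \frac{1}{2^n}
```

Subtracting $S_n$ from both sides gives $S_n = 1 - 1/2^n$, and as $n$ grows without bound the leftover $1/2^n$ term -- our troublesome "1/infinity" -- vanishes, leaving exactly 1.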
That may still sound a touch arcane, but the idea is right at the heart of much of physics. The calculus provides us with ways of analyzing the rate of change of a great many natural processes. It does so by fighting off the infinities that appear to stymie our understanding as distances and times grow smaller and smaller.
This summer, sprinters from around the world will gather in Rio to compete in the 100-meter dash. Should you choose to tune in, you'll be treated to electrifying race after electrifying race. With each crossing of the finish line, you'll also witness something seemingly impossible: a runner completing an infinite number of tasks in roughly ten seconds flat. Compared to such a monumental achievement, who cares about a gold medal? An athlete has just made infinity occur within a finite frame!
How is such a thing possible? To find out, we must first travel back to around 450 BC, when the great Greek philosopher Zeno of Elea was wowing his compatriots -- including a young Socrates -- with his keen intellect, and in particular, his playful paradoxes. In one of these paradoxes, Zeno described a race and a runner, noting that before the runner completes his goal, he must first travel half the distance. Once halfway, he must then travel halfway again, and again, and again. If this were applied to a 100-meter race, our sprinter would run 50 meters (1/2), 25 meters (1/4), 12.5 meters (1/8), 6.25 meters (1/16), and so on until he passes the finish line.
Since one can technically always travel half of some set distance, that would mean the sprinter completes an infinite number of tasks! Zeno argued that this is impossible, and thus concluded that movement must be an illusion.
Since each leg of the 100-meter dash is exactly half the remaining distance to the finish line, it makes sense that the more legs we add up, the closer we'll get to the full 100 meters. Writing Sn for the sum of the first n legs, we would expect S1000, say, to be bigger than S10, and therefore closer to 100, but not quite equal to 100. Pushing this reasoning a step further, in some sense the sum of all the numbers in the sequence must be equal to 100.
Here we have found a point of contact between the finite and the infinite: the sum of infinitely many numbers adding up to something finite. In the right context it seems to make perfect sense: if you split up 100 meters into infinitely many shorter pieces, then of course the sum of the lengths of all the pieces should be equal to the total length of 100.
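That intuition checks out numerically. A small Python sketch (the helper `S` is my own naming, mirroring the Sn above), taking leg k of the race to be 100/2**k meters:

```python
# Partial sums of the race legs: S(n) = 50 + 25 + 12.5 + ... (n terms)
def S(n):
    return sum(100 / 2 ** k for k in range(1, n + 1))

print(S(10))    # -> 99.90234375
print(S(1000))  # indistinguishable from 100 in floating point
```

S(1000) is indeed bigger than S(10) and closer to 100, and the full infinite sum is 100 on the nose.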
Outside of fancy philosophical musings, there's a far simpler way to make something infinite fit within a finite space: Make a fractal, a mathematical set that exhibits a repeating pattern at every scale to infinity!
Perhaps the most basic example of a fractal is the Koch snowflake, built from Swedish mathematician Helge von Koch's curve: a straight line is divided into three equal segments, and the middle segment is replaced by two sides of an equilateral triangle of the same length as the segment removed. This step is then repeated on every resulting straight line, an infinite number of times.
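The bookkeeping behind that construction is easy to sketch. Below is a hedged Python example under the standard setup of a snowflake grown from an equilateral triangle with unit-length sides (the function name `koch_stats` is mine):

```python
# Each Koch iteration replaces every segment with 4 segments,
# each 1/3 as long, so the perimeter grows by a factor of 4/3 per step.
def koch_stats(iterations):
    segments, seg_len = 3, 1.0  # the starting triangle
    for _ in range(iterations):
        segments *= 4
        seg_len /= 3
    return segments, segments * seg_len  # (segment count, total perimeter)

for n in range(6):
    count, perimeter = koch_stats(n)
    print(n, count, round(perimeter, 3))
```

The perimeter diverges to infinity as the iterations pile up, yet the whole curve never escapes a finite patch of the plane: an infinitely long boundary enclosing a finite area.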
Zoom in on any edge of the fractal, and you'll see the same jagged pattern repeating, no matter how far you magnify.
So as counterintuitive as it may sound, it is quite possible to contain an infinite number of things within a finite space!
Most people spend between 30 and 47 percent of their waking hours spacing out or lost in thought. But for a small percentage of these daydreamers, their airy fantasies and idle ruminations transform into a powerful compulsion that crowds out reality.
Though not formally recognized as a psychological disorder, researchers are starting to study and characterize this potentially debilitating condition. Maladaptive daydreaming, they've dubbed it.
Eli Somer, a Professor of Clinical Psychology at the University of Haifa and the scientist who first reported on maladaptive daydreaming more than a decade ago, is not trying to pathologize everyday imaginations. Such reveries are normal and even beneficial. Somer simply wants to acknowledge and hopefully find ways to treat those daydreamers whose dreams literally dominate their days. To that end, he has spearheaded a sizable chunk of the thus far scant research on the condition, even helping to create a tool to diagnose it.
In light of his published efforts, hundreds of maladaptive daydreamers have contacted Somer volunteering to take part in research. With their help, he has just published a new study that describes the condition in empirical detail.
Somer and his colleagues, Liora Somer & Daniela Jopp, interviewed 21 maladaptive daydreamers from across the world. From these in-depth discussions, the team gleaned a number of commonalities.
First and foremost, subjects described maladaptive daydreaming as harmful, time-consuming, and isolating. All desperately sought information and support.
"I spend most of the day at home daydreaming... I feel like a ghost that misses out on life."
"Oh Gosh, nothing gets done! Homework, studying, cleaning, sometimes I would lie there, my stomach will be growling and hurting and I won't get out of bed because I'm trying to daydream. It's that bad..."
"For myself, I just want a life, not just stories about a life."
Somer also found that every single interviewee had a ritualized process to induce vivid daydreams. This process invariably involved listening to music while performing some sort of repetitive activity, such as rocking their head back and forth or pacing for hours on end.
"This set of conditions sounds similar to the focusing of attention described as an induction process for hypnosis and has been observed among indigenous communities as part of ritualized kinetic trance induction," he and his co-authors noted.
Almost every subject lamented that socializing was incompatible with daydreaming. To truly become immersed within their fantasies, they required solitude.
In this solitude, subjects said that their daydreams sprung to life in vibrant and vivid detail. Some described entering a dream-like state.
"It is visual like an actual dream with a tunnel vision on the person I am talking to and without much detail about the surrounding. I can hear the voices of the people in my daydream... When they talk to me it is not like my own voice coming back to me or voices from outside. It's theirs."
Others insisted that their daydreams seemed real.
"It is like a reality with colors, smells and tastes."
Inside their fantasies, subjects wove engrossing narratives that played out very much like parallel realities, in which they had enhanced social status and fulfilling relationships. The unfortunate irony is that these idealized visions inhibited their ability to achieve those things in real life.
Now that maladaptive daydreaming is gaining scientific legitimacy, Somer hopes that future research will explore what causes the condition. He and his co-authors suggest that irregularities with the neurotransmitter serotonin may be to blame. Latent obsessive-compulsive disorder may also factor in.
Once the causes are nailed down, and potential treatments are explored, many maladaptive daydreamers may find some relief and be able to live their lives fully awake.
Source: Eli Somer, Liora Somer & Daniela S. Jopp (2016): Parallel Lives: A Phenomenological Study of the Lived Experience of Maladaptive Daydreaming, Journal of Trauma & Dissociation
We are all aware that the cost of tuition has been rising for decades. Furious political pandering is pushing a similar bloom in the number of headlines concerning the economics of higher education.
Flourishing administrative overhead at universities is often named as a major culprit for this inflation. Hacking down some of this overgrowth is one way to address the problem.
Here's a more radical idea: let's plant some competition for the universities' business. Instead of restricting the teaching of accredited courses to colleges, why not let individual instructors gain accreditation for particular courses?
The philosophy is simple. The most important qualification for a job is qualification itself, not the calligraphed paper that represents it.
If a job requires some knowledge of biology, it often demands a degree in the subject. Why not, instead, ask for just the particular set of biological knowledge germane to the task?
One applicant to a job opening could present an entire preciously expensive degree showing their breadth of knowledge. Individual accreditation would allow a second applicant to instead present a smaller, leaner, more targeted package of professionally certified skills to compete with the first at a much lower cost.
The important machinery behind this idea would be a new process for higher education accreditation that certifies individuals or small groups of individuals to teach particular courses and subjects. As far as I am aware, only such large bodies as universities, colleges, junior colleges and online universities are given accreditation in the US.
A new accreditation process--and bodies to administer it--would be necessary to distinguish these new post-secondary educators from tutoring programs and awful online diploma mills. Courses would be taught in-person and graded like university-level classes. Students would be expected to work through the same textbooks and perform on the same in-person paper-based, hand-graded exams. Professional teachers working without research burdens and administrative committee drudgery would possess more one-on-one time for students and their work too.
Savings for students arise from two broad areas. The first is in university overhead. One instructor or five instructors can earn accreditation and open a teaching business. They can rent or buy a small classroom space. Maybe they hire a secretary or maybe they handle that work themselves. Their costs mostly amount to rent, electricity, blackboards, and chalk.
A lab course might cost significantly more, but some lab supplies last a long time, spreading cost between many different classes of students. Consumable supplies such as chemicals or dissection animals would of course have costs passed directly to students.
Here are a few of the budgetary liabilities they don't have to pass on to their pupils: academic administrators, non-academic administrators, human resources staff, admissions officers, interns, police, janitorial staff, professional staff, provosts, head deans, mid-deans, low-deans, assistant deans, aspiring deans, computer labs, IT staffs, maintenance departments and equipment, lawn care services, golf carts, 12-seater golf carts, special counselors, alumni schmoozers, outreach coordinators, chief diversity officers...
A second creator of major savings for the student would be a reduction in the necessary number of courses. Cutting out all the "Gen Ed" fat from the meat of the curriculum saves a significant fraction. Further, a strong track of education in one particular specialty might replace a full degree. Electives, generally put in place to broaden the experience of a student in different areas of a subject, could be eliminated in favor of targeted knowledge. A student who only needed the most basic fundamentals might only take a handful of courses.
A broad and general education is certainly a good thing. But it's also expensive, time-consuming, and not necessarily essential for many jobs. Let students and employers decide what they want most. Offer them a path to education that circumvents the increasingly expensive and overgrown trail through the university system.
American politics are more polarized than ever.
Many of us have seen, felt, or experienced the division firsthand. Scientific research and public opinion polls show that it is indeed real. Many openly lament its existence, and wonder exactly how a country built on compromise reached such a sorry state. Increasing education rates and a tendency to choose likeminded mates have been offered as explanations, as has the technology-afforded ability to choose how and where we digest our news and information.
But whatever the causes, there is a solution: science. Or, more specifically, a scientific way of thinking.
Most importantly, this means being willing to admit that we do not know. Today, showing uncertainty can spell political doom for an elected official, but as the Great Explainer Richard Feynman reminded the audience at the 1955 autumn meeting of the National Academy of Sciences, uncertainty is an old and essential practice. It is also one that we must re-embrace in our modern era:
"This is not a new idea; this is the idea of the age of reason. This is the philosophy that guided the men who made the democracy that we live under. The idea that no one really knew how to run a government led to the idea that we should arrange a system by which new ideas could be developed, tried out, and tossed out if necessary, with more new ideas brought in – a trial-and-error system."
Feynman also noted what can happen when uncertainty, what he called a "satisfactory philosophy of ignorance," is thoroughly abandoned and replaced with emphatic certainty.
"Looking back at the worst times, it always seems that they were times in which there were people who believed with absolute faith and absolute dogmatism in something. And they were so serious in this matter that they insisted that the rest of the world agree with them. And then they would do things that were directly inconsistent with their own beliefs in order to maintain that what they said was true."
Will America repeat the mistakes of past societies? Of Nazi Germany? Of Stalinist Russia?
"The only way that we will make a mistake is that in the impetuous youth of humanity we will decide we know the answer... And we will jam," Feynman said. "We will confine man to the limited imagination of today's human beings."
We can commit to unshackling ourselves here and now. Freedom from certainty leads to liberation in thought, and returns us to the politics of democracy envisioned by our founding fathers. Those wise men disagreed vehemently yet compromised rationally to create the greatest country on Earth.
If you asked people on the street for the most interesting fact about bonobos, those with knowledge of the species would probably answer something like this:
"They have a lot of sex with each other."
While that tidbit of information is certainly true and undeniably stimulating, it's not ultimately what fascinates Brian Hare, an assistant professor at the Duke Institute for Brain Sciences and one of the world's eminent bonobo researchers.
"The number one reason they are interesting is that they don’t kill each other," he told the New York Times.
Bonobos are highly intelligent primates that reside in a 190,000-square-mile area of the Congo Basin in the Democratic Republic of the Congo. Their peaceful nature is even more remarkable when compared to that of chimpanzees. Though the two species share the same genus and, to an untrained eye, are almost physically identical, their temperaments couldn't be more dissimilar. Over fifty-four years of study, scientists have witnessed chimpanzee killings on 152 occasions. During that same time period, there has been only one suspected murder amongst bonobos.
Why chimpanzees exhibit violence while bonobos rarely do is one of the most intriguing questions in the field of behavioral science. And considering that both bonobos and chimpanzees are humans' closest living relatives, each sharing roughly 99% of their DNA with us, it is a question whose answer could also reveal a lot about ourselves.
Two overt differences between chimps and bonobos could start to answer the question. The first large difference is who is in charge. While chimp societies are dominated by aggressive alpha males, bonobo communities are led by teams of females. The females maintain order fairly well, only rarely resorting to violence in order to control unruly males.
The second difference was mentioned earlier: bonobos have a lot of sex. Group sex, hand sex, oral sex, genital-on-genital rubbing, and good old vaginal sex are all in the bonobos' repertoire, and they exercise their sexual abilities frequently, so much so that Hare refers to sex as the "bonobo handshake." Bonobos also are not particularly selective about whom they have sex with. The young will have sex with the old, and males and females frequently engage in same-sex activity. Chimps, on the other hand, are much more reserved, mating primarily for procreation.
For bonobos, sex seems to fulfill the role that competition plays in chimpanzees. For example, when a chimpanzee group stumbles upon a food source, the most aggressive males will often eat their fill first, frightening off all others, and leaving only scraps. When a group of bonobos discovers a cache of food, they often have an orgy, and then everyone shares.
Bonobos are actually physically built for this sexual problem solving. The clitorises of female bonobos are extremely large for the animals' size and are likely very sensitive. While more modest in size, males' genitalia may also be similarly sensitive and thus primed to deliver "good feelings."
More striking are the differences in how chimps' and bonobos' brains are wired. A 2011 study showed that bonobo brains are more developed in regions associated with empathy. Moreover, bonobo brains feature a thick connection between the amygdala -- the brain's fear center -- and the ventral anterior cingulate cortex, a region associated with rational functions like decision-making and impulse control. Chimps lack this developed connection, meaning that they are likely less in control of their fear and aggression.
Physical and behavioral differences between chimpanzees and bonobos evolved over millions of years. Roughly two million years ago, the two species were almost certainly one. But when the Congo River formed, the population was divided. To the north were the animals that eventually became chimpanzees. Hardened by living in relatively open, dry habitats, they competed for food and resources. To the south were the animals that became bonobos. There, in humid forests with abundant food and cover, they did not need to compete. So traits that fostered cooperation and generosity grew more prevalent.
Interestingly, though bonobos seem to have more "evolved" sensibilities, they may actually be less evolved than chimpanzees.
"If this evolutionary scenario of ecological continuity is true, the bonobo may have undergone less transformation than either humans or chimpanzees," ethologist Frans de Waal wrote in a 1995 issue of Scientific American.
Evolution does not make species more advanced, it simply makes them better suited to their environment. If there's one thing that the divide between chimpanzees and bonobos illustrates, it's that all lifeforms are molded by where they live.
(Image: AP Photo/The Commercial Appeal, Kyle Kurlick)
Archaeologists have unearthed thousands of skeletons belonging to ancient humans, and with every bone recovered, they usually learn something new. But just as often, the remains reveal curiosities that pile into a mounting mystery. Such is the case with trephined skulls -- skulls with holes in them.
Mind you, these are not fracture holes caused by blunt trauma, nor are they holes wrought by wear and tear over many millennia of entombment under earth and rock. These are holes that were purposely cut. Thousands of years ago, our ancestors were excising pieces of skull from the heads of men, women, and children who were likely alive at the time of the procedure.
More than 1,500 trephined skulls have been uncovered throughout the world, from Europe and Asia to North America and South America. So common are these craniums that five to ten percent of all skulls from the Neolithic Period -- ranging from roughly 4,000 to 12,000 years ago -- have holes cut into them. Miguel Faria, a retired Clinical Professor of Neurosurgery and medical historian from Mercer University, calls trephination the "oldest documented surgical procedure performed by man."
But why did our ancestors do it?
"The mystery continues to perplex medical historians," Faria noted last spring in an editorial published to the journal Surgical Neurology International. But he believes the mystery has actually been solved for more than twenty-five years, hidden in plain sight within an expansive, seven-part tome on the history of medicine written by medical historian Plinio Prioreschi.
Considering the tome's length, a daunting 4,292 pages, it's easy to see how such a fascinating finding could be lost amongst Prioreschi's intellectual ocean of words. But Faria conquered the literary kraken in its entirety, and was delighted to discover that Prioreschi had singlehandedly conducted an extensive analysis of skull trephination and had arrived at a fascinating hypothesis, one that Faria believes is almost certainly correct.
Prioreschi suggested that Neolithic humans cut holes into skulls in an attempt to bring injured or sickened individuals back to life. His reasoning is as follows: In the act of hunting, our ancestors would undoubtedly notice that penetrating injuries to the chest or abdomen would commonly result in death. They would also notice that blows to the head would cause a victim to enter a death-like state (what we know to be unconsciousness) and then miraculously awaken some time later. For primitive man, who -- as evidenced in cave paintings -- likely believed that death resulted from sorcery, evil spirits, or the otherwise supernatural, it might seem that the head was the key to making an individual become undead.
"More blows would not accomplish the ritual, but an opening in the head, trephination, could be 'the activating element,' the act that could allow the demon to leave the body or the good spirit to enter it, for the necessary 'undying' process to take place," Faria summarized.
"Since most Neolithic skulls were not trephined, Prioreschi further hypothesized the procedure was reserved for the most prominent male members of the group and their families," Faria added.
Faria believes that Prioreschi's deductive reasoning is sound, although confirming his hypothesis beyond a shadow of a doubt may never be possible.
Source: Faria MA. Neolithic trepanation decoded- A unifying hypothesis: Has the mystery as to why primitive surgeons performed cranial surgery been solved?. Surg Neurol Int 2015;6:72.
(Image: San Diego Museum of Man)
Nature recently published an article on the forthcoming end of Moore's Law. In case you don't keep up with tech buzz, this refers to Intel co-founder Gordon Moore's empirical observation that the number of transistors in a microchip doubled every 24 months, running from roughly 1965 to the present. To the layman, this meant that computers roughly doubled in power every two years and became 1000 times more powerful every two decades.
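The arithmetic behind that layman's version is just compounding. A tiny sketch, assuming one doubling per 24 months:

```python
# One doubling every 2 years -> ten doublings over two decades.
years = 20
doublings = years // 2
growth = 2 ** doublings
print(growth)  # -> 1024, i.e. roughly the "1000 times" of the text
```

Ten doublings yield a factor of 1024, which is where the "1000 times more powerful every two decades" figure comes from.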
Moore's Law is more than just geek arcana. It's a philosophical statement about the progress of technology. We live in times with such rapid growth in scientific knowledge, manufacturing capability, and global economic power that we have come to expect technology to improve so fast we can't even keep up. We're used to our computers and phones and entertainment systems and internet resources upgrading at a blinding tempo. So much so that we take it for granted.
Well, things are about to change.
Many experts and writers have forecast that computer power will grow exponentially forever, eventually bringing about the replacement of human intelligence by computational power. Instead we may soon be reminded that computers, too, are natural systems, subject to natural laws. Processes based on harnessing the resources of nature don't grow exponentially forever; they plateau. Mushrooming growth gives way to a more measured progress.
We witness this exact trend in many areas of human technology.
The first rocket carrying a human being to orbit launched in 1961. By 1969 a man was on the moon. The space shuttle first flew in 1981 and had logged 10 missions by early 1984. NASA planned to put men on Mars by 1983, or 1987, or possibly 1999. Colonization would follow. Technology, politics and economics came together to slow things down. Now we pay Russia to put people in orbit.
Orville Wright's first flight (1903) traveled 120 feet and failed to escape the chasing Wilbur. By 1912, planes hit 100 mph. Lindbergh's 1927 flight covered 3600 miles. In 1947 we broke the sound barrier; by 1953 we doubled it. The 1960s ushered in the mighty SR-71 Blackbird, capable of cruising at 85,000 feet at velocities topping 2200 mph, and the Boeing 707, carrying nearly 200 passengers across a continent at 500 knots. But then the 60-year revolution in speed, range, and size tailed off.
Today's planes are better, but not radically more capable. A modern passenger jet looks just like a 707. It's no faster either, but it's lighter, more powerful, and can carry passengers further in greater luxury. No production plane has ever bested the SR-71's speed and altitude capabilities; warplanes now advance more subtly in other ways.
Other inventions that make our modern lives so easy have followed a similar path. Refrigerators don't get any colder than they used to. But they are more efficient, a little bigger, and they fire ice cubes and spray water at you on demand. Cars are not capable of massively greater speed or range than they were decades ago, but they are moderately more efficient, more comfortable, more luxurious, and safer. Those are just the practical improvements. Cars are also more powerful, more sporty, more blingy and more electronically sophisticated.
Computers will likely face something similar: a period of measured growth and increasing refinement in their finer aspects.
Having sprinted nearly to the limits of speed and power within current technology, devices may now consolidate their gains by growing in sophistication instead of sheer power. A smartphone won't be so much faster as it will more carefully pack more features in, with better integration, while using less battery power. Personal computers have already begun to do this: they are less about doing one task with greater speed than they are about doing more tasks at once, running longer on a battery charge, and doing more things in a more integrated fashion.
This isn't entirely bad news. A gentler pace of change may give us time to breathe and become more used to what we have. Perhaps we won't have to throw out our 8K TVs that replaced our 4K TVs that replaced our HD TVs that replaced our tube sets quite so soon.
The technical reasons to expect this slowdown are clear: silicon is nearing its limits both scientific and economic, and there is no heir.
The heart of a modern chip is made of crystalline silicon, carved into nanometer-sized transistors and topped with copper connections and wiring. This technology has been so fruitful that the investments behind it -- in scientific research, specialized equipment, specialized techniques, and specialized chemicals, amounting to trillions of dollars -- have advanced it vastly beyond any competitor. In fact, most competing schemes have never made it past the stage of primitive research prototypes.
If a successor to silicon is coming to re-fuel Moore's Law, it's nowhere to be seen today. And given the decades of monumentally successful work behind silicon, it's probably not going to pop up overnight.
Last summer, a team of researchers led by Stanford University conservation biologist Paul Ehrlich released a study confirming the worst: "Earth is on the brink of a sixth mass extinction," and it's our fault. By polluting the environment and altering habitats, humans are killing off our earthly neighbors.
As disconcerting as this news is, it unfortunately came as little surprise. The notion that humans are erasing species off the face of the Earth at near unprecedented levels is a perennial story that has been blared in the media for more than two decades. In the year 2000, the United Nations' Millennium Ecosystem Assessment estimated that species are going extinct 1,000 times faster than they naturally do, and that this rate could increase to 10,000 times. These rates translate to between 17,000 and 140,000 species going extinct each year by some estimates.
As an undergraduate majoring in zoology and conservation biology at the University of Wisconsin-Madison (the institution where Aldo Leopold pioneered the field of wildlife management), I had these depressing numbers ingrained within my psyche, along with a desire to do something about them. That desire remains, but my acceptance of those estimates was recently shaken when I came across a simple fact: over the last five hundred years, there have been just 875 confirmed extinctions. Why so few, when some scientists have insisted there should have been millions?
A large reason for the disparity is that we almost certainly have not come close to identifying all of the species alive on Earth. Between one and 1.5 million species have been discovered, but there may be five to 14 million in total, perhaps more. And so, conservationists assume that many of these undescribed species face similar extinction risks as a result of human activity.
This may be a flawed extrapolation, as the business of predicting species extinction is not a very certain science.
"No proven direct methods or reliable data exist for verifying extinctions," scientists Fangliang He and Stephen Hubbell noted in a paper published to the journal Nature in 2011. And the indirect method conservationists primarily use -- the species–area accumulation curve -- likely overestimates species extinction by 160 percent or more.
While their results countered the prevailing dogma that the world is undergoing a sixth mass extinction, He and Hubbell made plain that species extinction is a serious issue that must be addressed.
"Although we conclude that extinctions caused by habitat loss require greater loss of habitat than previously thought, our results must not lead to complacency about extinction due to habitat loss, which is a real and growing threat."
There's no doubt that humans have caused and are causing animals to go extinct, but to compare the current situation to previous mass extinctions is misleading. As Sarah Kaplan reported in the Washington Post last year:
The losses of the past century account for only about 1 percent of the roughly 40,000 known vertebrate species — a statistic that pales in comparison to the level of destruction seen during previous mass extinction events. Even in the least of them, between 60 and 70 percent of species were killed off. During the end-Permian event about 250 million years ago, known as “the Great Dying,” that number was more than 90 percent.
In her recent book Resurrection Science, journalist M.R. O'Connor offered a reason why the hyped notion of a sixth mass extinction persists.
"The field of conservation biology is a crisis discipline," she wrote, suggesting that the field is inclined to forecast doom and gloom in order to promote needed environmental protections.
"Conservationists themselves have said that the field breeds a culture of despair," she continued. "And at times, their pessimism threatens to undermine the cause. 'A society that is habituated to the urgency of environmental destruction by a constant stream of dire messages from scientists and the media will require bigger and bigger hits of catastrophe to be spurred into action,' wrote biologists Ronald Swaisgood and James Sheppard in 2010."
It definitely seems that those hits of catastrophe are growing more forceful. With the release of his study last year, Ehrlich issued a pressing warning, suggesting that humanity itself may even be threatened by the current mass extinction.
"We are now moving into another one of these events that could easily, easily ruin the lives of everybody on the planet," he said.
Bananas are radioactive. That tidbit of information has likely crossed your path before. The yellow, phallic fruit is filled with potassium, a naturally radioactive element. So iconic are bananas as a source of radiation that they inspired their own unit of measurement, the banana equivalent dose, which is roughly 0.1 microsieverts of ionizing radiation.
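For scale, here's a hedged back-of-envelope in Python (the ~3,000 microsieverts/year figure for typical annual background exposure is my assumption, not from the article; the banana dose is from the text):

```python
BANANA_DOSE_USV = 0.1             # one banana equivalent dose, from the text
BACKGROUND_USV_PER_YEAR = 3000.0  # assumed typical annual background dose

banana_a_day = 365 * BANANA_DOSE_USV          # dose from a banana every day
share = banana_a_day / BACKGROUND_USV_PER_YEAR
print(f"{banana_a_day:.1f} uSv/yr, about {share:.1%} of background")
```

Under those assumptions, even a banana-a-day habit adds only about one percent to the radiation dose you absorb anyway.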
But did you know your coffee is radioactive, too? It's true. In fact, weight for weight, coffee powder is roughly three times as radioactive as bananas! If you feel like you're glowing and abuzz after your morning brew, it may not be due to the caffeine...
Other foods are radioactive as well. Carrots, white potatoes, red meat, and lima beans all contain roughly the same amount of potassium per kilogram -- and thus radiation -- as bananas.
However, all of these foods pale in comparison to the mighty Brazil nut. Not only does this tasty nut deliver significantly more potassium than bananas, it also contains a surprising amount of radium -- more than 1,000 times as much as most common foods. The radium builds up in the nut thanks to the Brazil nut tree's incredibly extensive root system.
So does this mean that lovers of Brazil nuts are at risk of their jaws falling off just like the Radium Girls of a century past?
Actually, you've little to worry about from the radioactivity in your food. Even at the often obscene levels at which Americans drink coffee, there's no danger whatsoever. And for reference, you'd have to consume between one and two million kilograms of Brazil nuts to reach a potentially lethal dose of radiation. It goes without saying that you'd succumb to a bursting stomach long before radiation poisoning.
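The coffee claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is illustrative only: the banana mass, the grams of coffee per cup, and the three-fold ratio are rough assumptions, not measured values.

```python
BED_SV = 0.1e-6  # one banana equivalent dose, roughly 0.1 microsievert

# Assume a typical ~150 g banana delivers roughly one BED.
banana_dose_per_kg = BED_SV / 0.150  # sieverts per kilogram of banana

# The article's claim: coffee powder is ~3x as radioactive per unit weight.
coffee_dose_per_kg = 3 * banana_dose_per_kg

# Assume a typical cup uses ~15 g of ground coffee.
cup_dose_in_bananas = coffee_dose_per_kg * 0.015 / BED_SV
print(f"One cup of coffee is roughly {cup_dose_in_bananas:.1f} banana equivalent doses")
```

Even coffee that is three times as radioactive per gram works out to a fraction of a banana equivalent dose per cup, simply because so little powder goes into each brew.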
As an unseen threat, radiation understandably evokes fear. But consider this: humans -- and indeed all animals -- have lived on Earth for many, many millions of years, and on our beautiful blue-green planet, we bathe in radiation every day thanks to the great ball of fire in the sky that endows the Earth with life. The Sun showers us with radiation in the form of powerful, warming light, and yet we go on living. That's because all surface-dwelling animals have evolved exquisite bodily mechanisms to mitigate and repair radiation's damage.
On average, background radiation causes the DNA inside roughly 100 million of your cells to suffer double-strand breaks every hour, meaning both strands of the DNA double helix are severed. When this happens, the genetic information encoded inside can get garbled. But despite this constant breakdown of DNA, we don't fall apart or automatically develop cancer.
The simple fact is that life itself is radioactive. That isn't scary. It's just life.
Albert Einstein is once again getting his due for the depth and quality of his contributions to science. The recent discovery of gravitational waves further highlights his extraordinary imagination and unparalleled ability to reenvision the universe.
One fun way to describe his success is to look at how many Nobel Prizes his discoveries would have won, had different physicists separately made them all. Physicist A. Douglas Stone wrote a great piece about this. He counts seven. (For the curious: Special Relativity, General Relativity, photons, work on energy quantization, spontaneous and stimulated emission, DeBroglie waves, Bose-Einstein Condensates.)
Stone continues by envisioning a "fantasy scientist draft" analogous to how fantasy sports players statistically rank athletes and build teams to compete with other fans' teams. His conclusion is that Einstein would be the greatest physicist of all time, chosen number one.
Would you take Einstein first, as the most accomplished, successful, Nobels-per-lifetime statistical leader? You might. But you might instead take his only true peer in the history of science: Isaac Newton.
Let's compare these two scientific goliaths. How many Nobels could Newton have won?
Einstein is renowned for his imagination and ability to intuitively lay out new conceptual models of the universe. Newton's talents were different. His unparalleled logical and mathematical genius allowed him to formulate observations into laws and to prove ideas through rigorous mathematics. When the mathematical machinery he needed didn't fully exist, he invented it. That's what largely inspired RealClearScience Editor Alex Berezow to name Newton the smartest person who ever lived.
While Einstein's physics is still being confirmed today, Newton's is so monumental, so important, so fundamental, so proven within its realm of validity, that scientists of every sort take it for granted every day. The laws of gravity and motion that Einstein reenvisioned were edits of the commandments first called down from the ether by Newton's blinding brilliance.
Let's enumerate Newton's discoveries.
First, there are the three laws of motion. That's a Nobel Prize. Second, there's the law of universal gravitation. That's another.
Next up, Newton combined these first two discoveries and applied them to understanding the orbits of celestial bodies. Kepler, using Brahe's data, produced his laws empirically. Newton essentially derived Kepler's laws from his own*, showing how these orbital laws came from simple physical principles. He went on to use his findings to explain numerous astronomical observations. I think this earns him another Prize.
Newton also devoted time to optics: the study of light and the instruments that utilize it. He was the first person to describe a modern color theory. Using prisms and simple lenses, he refuted the longstanding idea that white light is pure and colorless, and that prisms merely transform it into colored light. He correctly concluded that all colors of light consist of combinations of the seven basic colors: red, orange, yellow, green, blue, indigo, and violet. And he described the perceived color of light as the result of physical processes like transmission and refraction. That's four calls from Stockholm.
Describing light as made of particles was in vogue before it was out of vogue before it was in vogue again. Newton wrote the book on light particles, or corpuscles, that was used for the next century. Wave theories of light then displaced it via their ability to explain more phenomena, including interference and diffraction. Along came Einstein another century later to bring back light particles. This one could go either way. However, holding the dominant theory in a field for a century probably earns you the Nobel.
Newton created the first law that mathematically described heating and cooling, known as Newton's Law of Cooling. This law says that the rate of cooling (within certain limits) is proportional to how much hotter the object is than its environs. The cooler it gets, the slower it continues to cool. My Swedish committee would give this another Prize.
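Newton's Law of Cooling has a simple closed form: temperature decays exponentially toward that of the surroundings, so each successive drop is smaller than the last. Here is a minimal sketch; the drink and room temperatures and the cooling constant k are made-up illustrative values.

```python
import math

def temperature(t, T0, T_env, k):
    # Newton's Law of Cooling: dT/dt = -k * (T - T_env),
    # whose solution is T(t) = T_env + (T0 - T_env) * exp(-k * t).
    return T_env + (T0 - T_env) * math.exp(-k * t)

# A 90-degree drink in a 20-degree room, with an illustrative k = 0.05 per minute.
temps = [temperature(t, 90.0, 20.0, 0.05) for t in (0, 10, 20, 30)]
drops = [a - b for a, b in zip(temps, temps[1:])]
print([round(T, 1) for T in temps])  # each 10-minute drop is smaller than the last
```

The shrinking gaps between successive readings are exactly the law's content: the cooler the object gets, the slower it continues to cool.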
Another thing you may not know about Newton: he invented the telescope design that is ubiquitous in astronomy today. The reflecting telescope circumvents the limitations of telescopes that rely on enormous lenses by collecting and focusing light with a mirror instead. This is an enormous invention, as deserving of a Nobel Prize as any ever produced. That's seven.
Now, we can look beyond science as well. Nobel Prizes are of course awarded in the field of economics. For 30 years beginning in the 1690s, Newton ran the Royal Mint. He embarked on an enormous program to stop the counterfeiting that was debasing English currency -- 10% of English money was funny at the time! He redesigned the coins, standardized the system, and ushered in the gold standard. Later he performed the same service for Scotland. I think that is worth an economics Nobel Prize.
Newton 8: Einstein 7. That's close.
Given that total, and given that much of Einstein's work was to update the fundamental work of Newton, I'm giving this to Newton by a nose. Call Einstein the Tom Brady of physicists, and Newton the slightly greater Peyton Manning of physicists.
*All three of the monumental results above were published in that single book (The Principia).
In 2014 and 2015, DNA testing companies 23andMe, Ancestry, and Family Tree DNA all reported having more than one million customers. Chief among the companies' offerings are tests to reveal your genetic history, dating back hundreds or even thousands of years. A hundred bucks and a simple cheek swab can show you your true ethnicity and uncover past relations you never knew you had!
Ancestry summarizes the offer's intuitive appeal prominently on their website: “Who knew a kid from Queens was descended from royalty?”
But while testing one's DNA to uncover ancient family links may be popular, that doesn't make it accurate. Many scientists say the tests are about as meaningful as a horoscope.
Think about it. As you travel back in time through your family history, the number of ancestors you have roughly doubles with every generation. Using the most conservative estimate of generation time -- 32 years -- in the year 1152, you had as many as 134,217,728 potential ancestors. And since genes are scrambled with every generation, it's very likely you share little to no genetic relation to most of them. They might as well be strangers!
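The arithmetic behind that figure is simple doubling: count the generations back to 1152 and raise two to that power. A quick sketch (the 2016 reference year is an assumption based on when the article appeared):

```python
GENERATION_YEARS = 32   # the article's conservative generation time
REFERENCE_YEAR = 2016   # assumed present-day year

generations = (REFERENCE_YEAR - 1152) // GENERATION_YEARS  # 864 years back
potential_ancestors = 2 ** generations
print(generations, potential_ancestors)  # 27 generations, 134,217,728 ancestors
```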
DNA companies use two lineage tests: a Y-chromosome DNA test, which provides information about your male-line ancestry, and a mitochondrial DNA (mtDNA) test, which provides information about your female-line ancestry. These tests supposedly yield more accurate information, but they still suffer from major pitfalls. For example, if two males have similar DNA on their Y-chromosomes, they likely share a more recent common ancestor than individuals with dissimilar DNA, but any estimate of when that common ancestor lived and who they were is almost entirely speculative. Mitochondrial DNA tests are similarly limited. The rate of mutation across the whole mtDNA genome is one to three percent per generation, so the time gap between mutations could be as many as 100 generations. This means that a lot of people share the same mtDNA, and their common ancestor could be as close as one generation back or as far as fifty or more.
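That generation gap follows directly from the mutation rate: if a new mtDNA mutation arises with probability p in any given generation, the expected wait between mutations is 1/p generations. A quick sketch using the article's one-to-three-percent range:

```python
# Expected generations between mtDNA mutations for the article's rates.
waits = {p: 1 / p for p in (0.01, 0.03)}
for p, w in sorted(waits.items()):
    print(f"mutation rate {p:.0%}: one mutation every ~{w:.0f} generations")
```

At the low end of the range, two people with identical mtDNA could easily be separated by dozens of generations on each side of their shared ancestor.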
DNA testing companies often take this ambiguity and fill in the blanks with impressive stories that you can show your friends and relatives. Though fascinating, these tales share more in common with astrological horoscopes than historical accounts.
Mark Thomas, a Professor of Evolutionary Genetics at University College London, is one of the most vocal proponents of this criticism. On a recent episode of the BBC radio show The Infinite Monkey Cage, he said that the appeal of both horoscopes and genetic ancestry tests arises from the Forer effect.
“If you tell somebody something that seems like it’s highly personalized but in fact is very generic -- you can apply it to anybody -- then people are much, much more likely to believe it.”
The effect was revealed nearly seventy years ago. In 1948, psychologist Bertram R. Forer gave subjects a psychology test, then presented them with a number of personality traits that the test supposedly revealed. Unbeknownst to the subjects, they all received the same set of personality traits, containing vague observations like "You have found it unwise to be too frank in revealing yourself to others" and "While you have some personality weaknesses, you are generally able to compensate for them." Yet when asked to rate the accuracy of the "personalized" evaluation, the subjects on average scored it 4.26 on a five-point scale.
Thomas is not a fan of genetic ancestry testing for a variety of reasons, most of all because it makes science look like a buzzkill.
"It costs unwitting customers of the genetic ancestry industry a substantial amount of hard-earned cash, and it disillusions them about science and scientists when they learn the truth, which is almost always disappointing relative to the story they were told," he wrote in The Guardian.
"Exaggerated claims from the consumer ancestry industry can also undermine the results of serious research about human genetic history, which is cautiously and slowly building up a clearer picture of the human past for all of us."
In 2007, veteran science writer Richard Conniff wrote an excellent piece for Smithsonian explaining why genealogy is bunk. In it, he seemed to relish puncturing the inflated ego of humanity.
"Almost everyone... has Julius Caesar as a common ancestor. Half of you can probably claim Charlemagne, too. That’s because they lived a long time ago and went about the business of forefathering con gusto. You are probably also descended from every sniveling peasant who ever managed to replicate in ancient times."
Stick that in your cheek and swab it.