Numerous problems plague politics. For starters, there's too much money, not enough openness, and a current disdain for compromise.
Complications like these come and go, but there is one that has remained prevalent for decades: Politics is far too unscientific.
Mind you, I don't mean "unscientific" for lack of citing studies or funding scientific endeavors (though our elected officials can and should do more of both). I mean "unscientific" by Carl Sagan's fundamental definition of science, as a "way of thinking".
Come election season, candidates do their best to distill all nuance out of complicated issues, instead campaigning with an unscientific mixture of absolutism, oversimplification, and straw man arguments. Even worse, when politicians actually attempt to talk issues, the media simplifies their speeches and comments to sound bites, usually the most controversial. Reasoned ideas simply don't stand up to an entertaining narrative.
The outspoken and enlightening theoretical physicist Richard Feynman was keenly aware of this disconcerting situation back in April 1963, when he gave a series of public lectures. In one of those lectures, "This Unscientific Age," Feynman described a troubling -- and all too often true -- scenario:
Suppose two politicians are running for president, and one... is asked, "What are you going to do about the farm question?" And he knows right away— bang, bang, bang.
The next presidential candidate is asked the same question, yet his reply is more honest and thoughtful:
"Well, I don't know. I used to be a general, and I don't know anything about farming. But it seems to me it must be a very difficult problem, because for twelve, fifteen, twenty years people have been struggling with it, and people say that they know how to solve the farm problem... So the way that I intend to solve the farm problem is to gather around me a lot of people who know something about it, to look at all the experience that we have had with this problem before, to take a certain amount of time at it, and then to come to some conclusion in a reasonable way about it. Now, I can't tell you ahead of time the conclusion, but I can give you some of the principles I'll try to use..."
According to Feynman, the authenticity and rationality embodied in this answer almost always doom a politician.
"Such a man would never get anywhere in this country, I think... This is in the attitude of mind of the populace, that they have to have an answer and that a man who gives an answer is better than a man who gives no answer, when the real fact of the matter is, in most cases, it is the other way around. And the result of this of course is that the politician must give an answer. And the result of this is that political promises can never be kept... The result of that is that nobody believes campaign promises. And the result of that is a general disparaging of politics, a general lack of respect for the people who are trying to solve problems, and so forth. It's all generated from the very beginning (maybe—this is a simple analysis). It's all generated, maybe, by the fact that the attitude of the populace is to try to find the answer instead of trying to find a man who has a way of getting at the answer."
It is ironic that in this age of information we still take our politics simplified, spoon-fed, and unscientific. This election, let's demand a more scientific way of thinking from our politicians, our media sources, and ourselves.
Source: Richard Feynman. The Meaning of It All: Thoughts of a Citizen-Scientist. 2005
(Image: Tamiko Thiel)
THE UNITED STATES' victory over the British Empire in the Revolutionary War is our country's quintessential tale of David overcoming Goliath. Birthed as an underdog, we embrace that mindset still.
Winning independence was not an easy task. It was a success that hinged on pivotal moments. The Continental Army's narrow escape across the Delaware River, the British surrender at Saratoga, and foreign intervention from Spain and France are a few of those moments. But lesser known is George Washington's bold decision to vaccinate the entire Continental Army against smallpox. It was the first mass inoculation in military history, and was vital to ensuring an American victory in the War of Independence.
GEORGE WASHINGTON'S first brush with smallpox came long before he was a military commander. At the age of nineteen, he was infected with the disease while traveling in Barbados with his brother. For twenty-six days, Washington battled headache, chills, backache, high fever, and vomiting. He developed the horrific rash and pungent pustules that are the hallmarks of smallpox. At times, his brother wasn't sure he'd make it. In those days, smallpox mortality rates ranged from 15 to 50 percent.
Washington did not succumb, but the infection left his face permanently pockmarked. The scars lent Washington the rugged look that would later contribute to his image as a leader. They also served as a constant reminder of the danger of smallpox.
As a result, the disease was never far from Washington's mind after he took command of the Continental Army in the summer of 1775. Making the matter all the more pressing was the fact that a smallpox epidemic was just beginning to take hold. It would rage for seven more years in the nascent United States, eventually reaching the Pacific. Tens of thousands would die.
Over the ensuing months, Washington witnessed the great burden of disease upon his men. None was worse than smallpox. As a result, Washington went to great lengths to minimize its spread, particularly during the nine-month siege of Boston in 1775 and 1776. At the time, the city was reeling from the epidemic.
In an even clearer example, when Washington ordered one of his generals to take the heights outside the city in March 1776, he specified that every single one of the thousand soldiers in the attacking force must have already survived smallpox, and thus have immunity.
Washington's cautious approach to smallpox absolutely helped keep his army healthy and functional. To the north, however, American patriots experienced what happens when smallpox is left to run amok.
AN OFT-FORGOTTEN fact: during the Revolutionary War, American forces invaded Canada. Their aims were to drive British troops from Quebec and even convince Quebec's citizens to bring their province into the American colonies. The effort, however, ended in miserable disaster, which is perhaps why it doesn't linger in public memory. The chief reason for the defeat? Smallpox.
Approximately ten thousand American troops marched on Canada in the fall of 1775, and at one point, nearly three thousand of them were sick. Severely hobbled, the invasion never stood a chance. Officers fell victim, too. Major General John Thomas died of smallpox during the retreat the following spring.
"By spring [of 1776] the condition of the American soldiers in Canada had deteriorated severely due to continuous outbreaks of smallpox... Approximately half of the soldiers were ill. The majority of the new recruits were not immune to the disease, and reinforcements sent to Canada sickened quickly," historian Ann Becker recounted. "Contemporary evidence is overwhelming: smallpox destroyed the Northern Army and all hope of persuading the Canadians to join the Revolution."
"Our misfortunes in Canada are enough to melt a heart of stone," John Adams wrote in June 1776. "The small-pox is ten times more terrible than Britons, Canadians, and Indians together."
FULLY AWARE of the disaster in the north, George Washington realized that merely evading smallpox would no longer suffice; he wanted to prevent it altogether. Inoculation was already available, although the procedure -- called variolation -- was not without risks. The vaccines we're accustomed to today had not yet been invented, so doctors would simply make a small incision in the patient's arm, then introduce pus from the pustules of an infected victim into the wound. Variolation often resulted in a minor smallpox infection with a speedier recovery and vastly lower fatality rates, around two percent. Survivors were granted lifelong immunity.
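Variolation's appeal becomes obvious from a back-of-envelope comparison. Here's a quick sketch using the article's fatality figures; the epidemic attack rate and the army size are my own illustrative assumptions, not numbers from the historical record:

```python
# Back-of-envelope comparison of variolation versus an unchecked
# epidemic, for a notional army of 40,000 men. Fatality rates come
# from the article; the attack rate is an assumption for illustration.
troops = 40_000
variolation_fatality = 0.02   # ~2 percent, per the article
natural_fatality = 0.30       # mid-range of the cited 15-50 percent
attack_rate = 0.50            # assumed fraction infected in an epidemic

deaths_variolated = troops * variolation_fatality
deaths_epidemic = troops * attack_rate * natural_fatality

print(f"variolation: ~{deaths_variolated:.0f} deaths")  # ~800
print(f"epidemic:    ~{deaths_epidemic:.0f} deaths")    # ~6000
```

Even with a modest assumed attack rate, deliberately infecting everyone comes out far ahead of letting the disease run its course.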
At first, Washington simply required new recruits to be inoculated. Then, in February 1777, he committed to inoculating the entire army.
"Finding the smallpox to be spreading much and fearing that no precaution can prevent it from running thro' the whole of our Army, I have determined that the Troops shall be inoculated. This Expedient may be attended with some inconveniences and some disadvantages, but yet I trust, in its consequences will have the most happy effects."
This was a bold move. At the time, variolation was technically outlawed by the Continental Congress, so Washington was openly flouting the law. Whole divisions were inoculated and quarantined en masse, a process that would continue for months. Strict secrecy was maintained to prevent the British from uncovering the program, lest they launch an attack upon the recovering troops. By year's end, 40,000 soldiers were immunized.
The results were stunning. The smallpox infection rate in the Continental Army rapidly fell from 17 percent to 1 percent, prompting the Continental Congress to legalize variolation across the states.
With the threat of smallpox vanquished, George Washington and the Continental Army were able to entirely focus on the real enemy: the British. As Becker summed up:
"Due in large part to [Washington's] perseverance and dedication to controlling smallpox, the Continental Army was able to survive and develop into an effective and reliable fighting force, unhampered by recurring epidemics of that disease."
(Image: Auguste Couder, Currier & Ives, John Trumbull)
Forty-six years ago, a young graduate student named Terry Turner examined a slide under a scanning electron microscope. He witnessed a sight that provoked overwhelming feelings of excitement and discovery.
"There before me was the view of a grassy canyon stretching away into the distance, its walls steep, with boulders strewn in the grass."
He would later compare his emotions to those described by Vasco Núñez de Balboa when, cresting a mountain in Panama, the Spanish explorer gazed upon the Pacific Ocean for the very first time. But Turner had not witnessed an infinite sea of blue. He had seen the inside of the long, tightly coiled tube within the male scrotum: the epididymis.
"Somehow, the sample's orientation during preparation and its orientation in the scope were such that I was getting a 'sperm's eye' view of the tubule lumen, the microvilli of epididymal epithelial cells being the 'grass' and the 'boulders' being what I suspect we today would call epididymosomes. I could alter the focus and orientation of the sample to achieve an impression of zooming along the tubule interior just above the level of the microvilli, and I thought, 'Wow, look at that!' I was convinced my eyes were the first to see the terrain of the epididymal lumen."
Turner shared his awe-inspiring experience with attendees of the Sixth International Conference on the Epididymis, which took place in Shanghai back in 2014. Laypersons unaware of the enigmatic organ might be surprised that the comparatively tiny body part has its very own conference, but you can bet that the attending researchers know full well that the epididymis deserves the attention.
Packed tightly inside the scrotum, just above the testes, the epididymis is a long tube through which sperm travel from the testes to the vas deferens. It is present in male mammals, birds, and reptiles. Amazingly, the epididymis measures over one hundred times a mouse's body length in mice, some eighty meters in stallions, and over six meters in men. It is through this "least-investigated organ in a man's body" that the sperm makes its "lengthiest, uncharted voyage," researchers from the University of Pittsburgh recently remarked.
"The purpose of the epididymis remains mysterious and the reasons for the sperm’s extensive journey are perplexing. If a sperm were human-sized, its week or two trek through the epididymis would be ~2,750 kilometers."
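The researchers' comparison checks out with simple scaling. Here's a quick sketch; the sperm head length and human height are my own assumptions, not inputs stated in the quote:

```python
# Scale the sperm's ~6 m epididymal journey up to human size.
# Assumed figures (not stated in the quote): a sperm head is roughly
# 4 micrometers long, and an average human is about 1.7 m tall.
sperm_head_m = 4e-6
human_height_m = 1.7
journey_m = 6.0  # uncoiled epididymis length in men, per the article

body_lengths = journey_m / sperm_head_m
human_equivalent_km = body_lengths * human_height_m / 1000

print(f"{human_equivalent_km:,.0f} km")  # on the order of the quoted ~2,750 km
```

With these rough inputs the trek comes out around 2,500 kilometers, which is in the same ballpark as the researchers' figure.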
It is during this rite of passage that sperm mature. We know this because researchers have extracted sperm from different areas of the epididymis and tested their ability to fertilize eggs. The farther along a sperm was in its journey, the better chance it had of successfully fertilizing an egg.
Sperm aren't yet capable of swimming during their journey, which is why the epididymis is filled with fluid and lined with stereocilia, the "grass" that Turner witnessed under the microscope all those years back. These stereocilia absorb the fluid, thus creating powerful currents that sweep the sperm along.
Eventually, sperm reach the tail end of the epididymis, known as the cauda. There, they can be stored for up to three months. It is from this reservoir that sperm are siphoned into the vas deferens to be ejaculated from the penis.
The cauda has been closely studied of late. Sperm are best stored at cool temperatures, but the advent of warm, tight clothing over the past centuries has elevated the temperature of the male groin by as much as seven degrees Fahrenheit. Human epididymides are overheating, likely lowering male fertility.
Researchers are also starting to consider whether the epididymis may be the location where epigenetic changes are written into sperm cells. Such a finding would further elevate the organ's status.
No doubt scientists will have much to discuss when the Seventh International Conference on the Epididymis convenes sometime in the next few years!
(Images: KDS444, Terry Turner)
In the early 1600s, pioneering astronomer Johannes Kepler put forth his three laws of planetary motion, which, for the first time, provided an accurate and evidence-based description of the movement of the Solar System's planets around the Sun. By the end of the century, Isaac Newton followed Kepler's example with three laws of his own, describing the relationship between an object and the forces acting on it, thus laying the foundations for classical mechanics. Almost exactly three hundred years later, Carlo M. Cipolla, a professor of economic history at the University of California, Berkeley, introduced a set of laws no less revelatory than those of Kepler or Newton: The Basic Laws of Human Stupidity.
While these laws are not taught in grade school, they do hold lessons worthy of reflection in this modern era. Stupidity today is on display more than ever before -- on TV, YouTube, and the city streets you frequent each and every day. To better react to and avoid such dimwitted behavior, one must first understand it. Cipolla's insightful set of five laws is a helpful guide.
His first law sets the stage.
"Always and inevitably everyone underestimates the number of stupid individuals in circulation."
Glaringly pessimistic, the first law is meant to prepare you for what's out there, and what's out there are hordes of people who do stupid things, often without notice. And there are always more of them than you think.
Contributing to the first law is Cipolla's second law.
"The probability that a certain person will be stupid is independent of any other characteristic of that person."
Anybody, whether intellectual or ignorant, blue-collar or white collar, book smart or street smart, can be stupid. Moreover, idiocy persists at roughly equal proportions at all levels of society. The rate of stupidity amongst Nobel laureates is just as high as it is amongst male swimmers on the U.S. Olympic team.
"[The Second Basic Law’s] implications are frightening," Cipolla wrote. "The Law implies that whether you move in distinguished circles or you take refuge among the head-hunters of Polynesia, whether you lock yourself into a monastery or decide to spend the rest of your life in the company of beautiful and lascivious women, you always have to face the same percentage of stupid people -- which (in accordance with the First Law) will always surpass your expectations."
How can this be? Well, it might make more sense in light of the definition of stupidity, which Cipolla provides in his third law. Understandably, given his background, he tackles the term from an economic perspective. (See the figure below for a visual explanation of the definition.)
"A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses."
The brute who starts a bar fight; the tailgating driver; the football player who commits a flagrant personal foul; the video gamer throwing a temper tantrum and deciding to sabotage his team; all of these are "stupid" people. Their actions are so utterly thoughtless and unreasonable that reasonable individuals have trouble fathoming how these people can function, Cipolla insists.
"Our daily life is mostly made of cases in which we lose money and/or time and/or energy and/or appetite, cheerfulness and good health because of the improbable action of some preposterous creature who has nothing to gain and indeed gains nothing from causing us embarrassment, difficulties or harm. Nobody knows, understands or can possibly explain why that preposterous creature does what he does. In fact there is no explanation - or better there is only one explanation: the person in question is stupid."
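Cipolla's definition amounts to a two-axis scheme: the sign of an action's payoff to the actor, and its payoff to everyone else. A minimal sketch of that taxonomy -- the four labels (intelligent, helpless, bandit, stupid) are Cipolla's, while the function and its handling of boundary cases are my own illustration:

```python
def cipolla_category(gain_to_self: float, gain_to_others: float) -> str:
    """Classify an action on Cipolla's two economic axes.

    Positive values are benefits, negative values are losses. The four
    labels come from Cipolla's essay; how zero values are resolved is
    my own convention, not his.
    """
    if gain_to_others >= 0:
        return "intelligent" if gain_to_self >= 0 else "helpless"
    return "bandit" if gain_to_self >= 0 else "stupid"

# The brute who starts a bar fight: others lose, and he gains nothing.
print(cipolla_category(gain_to_self=-1, gain_to_others=-1))  # → stupid
```

A bandit, by contrast, at least lands in the quadrant where he profits from the losses he inflicts; the stupid person's quadrant is pure waste.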
With his next law, Cipolla admonishes the members of society who tacitly encourage stupidity. Most of us are guilty.
"Non-stupid people always underestimate the damaging power of stupid individuals. In particular non-stupid people constantly forget that at all times and places and under any circumstances to deal and/or associate with stupid people always turns out to be a costly mistake."
When we have a good idea of who stupid individuals are, we still hang out with them, even if it's to our detriment, Cipolla laments.
"Through centuries and millennia, in public as in private life, countless individuals have failed to take account of the Fourth Basic Law and the failure has caused mankind incalculable losses."
Cipolla's fifth law of stupidity is unequivocal.
"A stupid person is the most dangerous type of person."
Yes, more dangerous even than a bandit (refer back to the figure above), who inflicts losses upon others but at least reaps benefits for himself. Stupid people drag down society as a whole, Cipolla insists.
"Stupid people cause losses to other people with no counterpart of gains on their own account. Thus the society as a whole is impoverished."
It's the great and burdensome responsibility of everyone else, particularly the intelligent, to keep them in check.
Source: Cipolla, Carlo M. "The Basic Laws of Human Stupidity." Whole Earth Review, Spring 1987, pp. 2-7.
The football field is hallowed ground in American sports culture. It's a place where legends are forged and everlasting memories are made, where timeless tradition is honored and kept. But alas, this reverence for convention is not always rational. There are a number of accepted actions football teams take that may actually be detrimental to their performance.
One is about as common as they come: punting. Football teams are afforded four downs to travel ten yards. If they surpass that distance, they are awarded another set of four downs. If they don't, they turn the ball over to their opponents. But overwhelmingly, if teams do not cover the ten yards in their first three tries, they elect to punt the ball to their opponents, effectively surrendering their final try in order to push their opponents farther away from the end zone, where points are scored.
Punting is an everyday tactic. In fact, do the math and you'll find that punters are paid, on average, tens of thousands of dollars for every appearance they make. But in 2002, economist David Romer at the University of California, Berkeley studied National Football League (NFL) games and punt data between 1998 and 2000, and found that teams would almost always be better off "going for it" on fourth down if the distance to go was four yards or less.
"Even on its own 10-yard-line -- 90 yards from the end zone -- a team within three yards of a first down is marginally better off, on average, going for it," ESPN's Greg Garber reported.
When Romer updated his data through 2004, the conclusion only solidified. Punting was often a mistake, such a big one, in fact, that it might be costing teams that regularly do it an average of one and a half wins per year! In a sixteen-game season, where a swing of two wins can make the difference between the postseason and the offseason, that's huge!
In his analysis, Romer also found that teams would be far better off forgoing field goals on fourth down when within five yards of the end zone. Football fans are well aware that coaches often elect the more conservative route of an almost-assured three points, but a more exciting "do or die" approach is actually supported by statistics.
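Romer's argument is, at bottom, an expected-points comparison. Here's a toy sketch of that logic; every number below is an illustrative assumption of mine, not one of Romer's actual estimates:

```python
# Toy expected-points comparison for fourth-and-short, in the spirit
# of Romer's analysis. All values are illustrative assumptions.
p_convert = 0.55       # assumed chance of gaining the needed yards
ep_if_convert = 2.5    # assumed expected points after a fresh set of downs
ep_if_fail = -1.5      # assumed expected points after a turnover on downs
ep_after_punt = -0.8   # assumed expected points after punting

ep_go_for_it = p_convert * ep_if_convert + (1 - p_convert) * ep_if_fail
print(f"go for it: {ep_go_for_it:+.2f} points, punt: {ep_after_punt:+.2f} points")
# Under these assumptions, going for it is worth ~1.5 more expected points.
```

The real analysis estimates those probabilities and point values from thousands of game situations, but the decision rule is the same: compare the two expected values and pick the larger one.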
Scientists like Romer have also scrutinized commonsense actions off the field. The most notable instance pertains to the NFL Draft. Each year, all thirty-two teams get to choose new players in a grown-up recess spectacle spanning seven rounds. The team with the worst record the preceding year gets to pick first, the team with the second-worst record gets to pick second, and so on and so forth. First round players are by far the most coveted, especially players considered in the top ten. So alluring is their supposed value that teams will trade a great deal of their picks in the later rounds for picks in the first round.
But when economists Cade Massey and Richard Thaler analyzed the value of drafted players from a cost and performance standpoint, they found that players taken in the later rounds offered -- far and away -- much better bang for the buck!
"The implication of Thaler and Massey's work is that teams should trade away their first-round picks. They should stockpile players in the second and third rounds, who can be paid a lot less and are nearly as good. This is how you build a winning football team," author Malcolm Gladwell summarized on a recent episode of This American Life.
"Indeed, the irony of our results is that the supposed benefit bestowed on the worst team in the league, the right to pick first in the draft, is really not a benefit at all, unless the team trades it away. The first pick in the draft is the loser’s curse," Massey and Thaler wrote.
Massey, Thaler, and Romer's findings have been public for more than a decade, but NFL coaches and owners still show near-universal hesitance to adopt their advice. For all its high stakes, pro football is very much a conservative sport. Tradition trumps reason. Football is powerful, irrational, and instinctual, with an almost animal magnetism. Perhaps that's what keeps us watching.
Renowned physicist Edward Witten recently suggested that consciousness might forever remain a mystery. But his words haven't discouraged other physicists from trying to unravel it.
In the past, consciousness was almost entirely relegated to the musings of philosophers; it was too ethereal to be studied materially. But as science advanced, so too did our ability to examine the wispy intricacies of the waking mind. Biologists joined the pursuit, followed by neuroscientists with brain scanners in tow. It was only recently that select physicists shifted their attentions from concepts like the Big Bang, quantum information, and electrodynamics and instead began tendering their two cents on consciousness.
Sir Roger Penrose, a mathematical physicist at Oxford University, has openly wondered if the minute interactions taking place within the subatomic world of quantum mechanics might give rise to consciousness.
UC-Santa Barbara theoretical physicist and Nobel laureate David Gross has offered other ideas. As Ker Than wrote for LiveScience in 2005, Gross "speculated that consciousness might be similar to what physicists call a phase transition, an abrupt and sudden large-scale transformation resulting from several microscopic changes. The emergence of superconductivity in certain metals when cooled below a critical temperature is an example of a phase transition."
Gross might be on to something. One of the leading theories of consciousness comes from neuroscientist Giulio Tononi at the University of Wisconsin. Similar to Gross' concept of a phase transition, Tononi suggests that as the brain integrates more and more information, a threshold is crossed. Suddenly, a new and emergent state arises: consciousness. According to the theory, only certain parts of the brain integrate all that information. Together, these regions constitute the seat of consciousness.
Recently, Nir Lahav, a physicist at Bar-Ilan University in Israel, went searching for this nucleus of conscious activity. He and his interdisciplinary team, which also included neuroscientists and mathematicians, used detailed scans of six brains to assemble an information map (or network) of the human cortex, the brain's outer layer of neural tissue. With the map, they observed and recorded how certain parts of the cortex were connected to other parts. They charted regions of high connectivity and regions of low connectivity. The map approximated how information "flows" within the cortex, and showed where that flow is concentrated. The region with the highest traffic may very well be the seat of consciousness.
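The team's method, per the paper's title, was a k-shell decomposition: repeatedly peel away the network's most weakly connected nodes until only the densest core remains. Here's a minimal sketch of that peeling on a toy graph of my own invention (the real study, of course, ran it on cortical connectivity maps):

```python
# Toy k-shell decomposition. Nodes are pruned in rounds: first every
# node with degree below 1, then below 2, and so on; a node's shell
# index records the last round it survived. The graph is invented:
# a dense 4-clique "nucleus" with a sparsely attached periphery.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),  # dense core
         (0, 4), (1, 5), (5, 6)]                          # periphery

adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

shell = {}  # node -> k-shell index
k = 1
while adj:
    # Remove every node whose degree has fallen below k; repeat until stable.
    while True:
        low = [n for n, nbrs in adj.items() if len(nbrs) < k]
        if not low:
            break
        for node in low:
            shell[node] = k - 1
            for nbr in adj.pop(node):
                adj[nbr].discard(node)
    k += 1

max_shell = max(shell.values())
nucleus = sorted(n for n, s in shell.items() if s == max_shell)
print(nucleus)  # → [0, 1, 2, 3]: the densely interconnected core
```

The peripheral nodes fall away in the early rounds, while the clique survives to the innermost shell -- exactly the sense in which Lahav's "nucleus" is the most deeply embedded part of the cortical network.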
The region with the highest number of connections, which Lahav dubbed the "nucleus", was primarily composed of the superior frontal gyrus, the cingulate cortex, Wernicke's area, and Broca's area. Though these areas are scattered across the brain, they were highly interconnected.
"This unique hierarchy is a single, highly interconnected component, which enables high levels of data integration and processing, probably involved in the highest cognitive functions," Lahav and his colleagues wrote.
It may also be the seat of consciousness within the brain, they suggest.
"Indeed, all of the regions in the nucleus have been previously correlated to consciousness activities," the researchers write. "The nucleus... is therefore a perfect candidate to be the high integrative, global work space region in which consciousness can emerge."
Lahav next plans to analyze the whole brain, not only the cortex. Beyond this line of research, he has even grander ambitions.
"Physics tries to uncover the basic laws of nature by constructing general mathematical equations that can describe as many natural phenomena as possible," he told RealClearScience. "These mathematical equations reveal fundamental aspects of reality. If we really want to understand what is consciousness and how the brain works we have to develop the mathematical equations of our brain and our conscious mind. We are not there yet, in fact we are quite far away from this goal, but I feel that this should be our 'holy grail' and we already started the process to get there."
Source: Nir Lahav, Baruch Ksherim, Eti Ben-Simon, Adi Maron-Katz, Reuven Cohen and Shlomo Havlin. K-shell decomposition reveals hierarchical cortical organization of the human brain. New Journal of Physics. 2 Aug 2016. http://dx.doi.org/10.1088/1367-2630/18/8/083013
In the past, we here at RealClearScience have been very critical of chiropractic. The reasons are manifold, but they can be neatly summarized by saying that scientific evidence does not support the profession's tenets or treatments for any condition, even back pain.
But, as RealClearScience is guided by a science-based worldview, we forever strive to keep an open mind. If evidence or circumstances change, we are ready and willing to re-evaluate our positions.
It is in this spirit of curiosity that a recent article published in the journal Chiropractic & Manual Therapies has left us intrigued, and even slightly hopeful, for the future of chiropractic. Associate Professor Bruce Walker, the head of the chiropractic program at Murdoch University in Australia, offers a ten-point plan to revitalize the profession and bring it in line with evidence-based medicine.
"By embracing this plan the profession can be set on a new path, a new beginning and a new direction," he writes. It will be the "New Chiropractic."
Here are a few of Walker's key points:
Walker's recommendations are absolutely worthwhile, and we wish him all the best in his efforts to implement them. However, a dash of nuanced skepticism is warranted. Walker admits that his plans may take a "generation" to succeed, but realistically, chiropractic might not be capable of re-making itself, no matter how much time is allotted for the transformation. "Science-based chiropractic" won't be chiropractic at all. Instead it will probably be a simpler and less effective version of physical therapy. Will faithful patrons keep going back for their "adjustments" when all of the profession's woo is cast out? I'm not so sure.
Clearly, Walker is a supporter of chiropractic. He makes that very plain.
"I contend that the global 'good' produced by the profession far outweighs the 'bad'," he writes. "The 'good' can be summed by recognising over a century of improvement to public health by improving pain and disability in countries where chiropractic is practised. It can be asserted that this has provided significant economic savings and improved productivity."
But as Mark Crislip noted over at Science-Based Medicine, this sweeping assertion is unreferenced. That's telling. The blunt truth is that if Walker and his chiropractic colleagues transition their profession to evidence-based practice and do so with intellectual honesty, they will almost certainly find that their profession is not all it's cracked up to be.
Source: Walker, Bruce. "The new chiropractic." Chiropractic & Manual Therapies. 2016 24:26 DOI: 10.1186/s12998-016-0108-9
(Image: Butch Comegys/AP)
Popular culture perpetuates a lot of poor information about intimate relationships. This steady stream of misinformation can, and does, lead people astray -- not just into hilariously awkward situations (see almost any romantic comedy), but into genuine misery. In fact, as social psychologist Matt Johnson notes on the very first page of his new book, Great Myths of Intimate Relationships, the largest predictor of life satisfaction is relationship satisfaction. "So, we had better pay attention to those relationships!"
In his book, Johnson tried his best to pay more than mere lip service to intimate relationships. Sorting through a boatload of scientific evidence, he dispelled twenty-five myths on topics ranging from online dating, to sex, to divorce. Here are four of those myths.
1. Men have a much stronger libido than women. Stronger? Probably. But the difference is much narrower than common thinking dictates. In his book, Johnson points to pioneering studies by Meredith Chivers of Queen's University. In a series of experiments that have been repeatedly replicated, Chivers had both men and women watch various sexually stimulating videos and asked participants to report their levels of arousal. Participants were also equipped with devices to measure blood flow to their genitalia, a physiological sign of arousal. Men's self-reports of arousal closely matched their physiological signs of arousal, but women's did not.
"Women -- straight and lesbian -- seemed to be pansexual," Johnson summarized. "The women had blood flow when watching the sexual videos regardless of who was with whom... Clearly, there's a large gap between the arousal that women report and the arousal they feel."
This divide may be societally constructed, Johnson suggests. If women had not had their sexuality systematically repressed for centuries, their libido might be more on par with men's.
2. Opposites attract. More than 8 in 10 individuals desire a partner with opposite traits that complement theirs. Fueling this situation is the widespread myth that "opposites attract." But scientific evidence does not bear this belief out.
"There's essentially no evidence that differences lead to greater attraction or improved relationship outcomes," Johnson reports. Similarity, however, does predict attraction and relationship success. Honestly, this makes sense. While scientific fact is often counterintuitive, in this case, what's intuitive seems to be correct. People aren't magnets, after all.
3. You should live together before marriage. Roughly seven out of ten high school seniors and young adults agree that it's usually a good idea for couples to live together before getting married. This majority opinion certainly seems like wisdom -- a couple should probably make sure they can successfully cohabitate before deciding to spend the rest of their lives together. Intriguingly, however, there's no evidence that premarital cohabitation improves marriage quality or reduces divorce rates. When Penn State social psychologist Catherine Cohan reviewed over 100 studies on the topic in 2013, she found "no benefits of cohabitation in terms of... personal well-being and relationship adjustment, satisfaction, and stability." If anything, there was actually a small negative effect!
4. Children bring couples closer together. Newborns are often dubbed "bundles of joy". In reality, they are sacks of discord. As Johnson reveals, the general consensus amongst social scientists is that children cause a drop in marital and relationship satisfaction. Moreover, marital satisfaction usually doesn't begin to recover until children "leave the nest". Raising kids is certainly worthwhile, but that doesn't change the fact that it's immensely difficult.
"Even with careful planning, bringing a new child into a family is a sudden and jarring experience that will permanently change the dynamics of a relationship," Johnson writes.
Primary Source: Matthew D. Johnson. Great Myths of Intimate Relationships: Dating, Sex, and Marriage. 2016. Wiley-Blackwell.
This weekend RealClearScience ran an editorial bemoaning the public’s distrust of science. The author explains away the problem by insisting that the public is naive, rooked by Republican "pandering," and uninterested in a scientific establishment “too old, too male,” and not "representative.”
In essence, she answers her own question. Reasonable people distrust science because of the way scientists treat them.
The condescension is the worst part. Many research scientists who fret and wail about public ignorance live off the public dime. Our microscopes, our labs, our pipettes and our particle colliders are bought with taxpayers’ money. They pay our salaries too.
So when we tell them how stupid they are, how ignorant and backward and wrong they are, why shouldn’t they be angry? Belittling someone in an argument never wins his support. How much more arrogant and foolish is it to belittle the people who write your paycheck?
If the public doesn’t understand why we believe certain things, that’s our fault. We owe them better communication. We need to demonstrate the value of our work and ideas, not demean theirs.
The other big problems in scientist-citizen relations arise from politics.
Scientists have increasingly become overtly political rather than maintaining distance. Worse, they usually fight for one side only. Then they turn around and act shocked when the enemies they have unnecessarily made don’t trust them! There’s condescension here too: how dare these ignorant little people not fall in line behind their intellectual superiors!
The public trusts those with strong moral codes that sit above politics. Four of the five most trusted institutions in the United States are the military, the police, the church, and medicine, all of which rest on apolitical moral backbones.
Conversely, we distrust those possessing no morals above the political. Among the least popular institutions in this country: Congress, TV news, organized labor, and newspapers.
Every time science picks sides in politics, it slips away from the trusted group and sinks toward the disreputable one. Science is selling away its considerable moral stock by choosing to fight for the momentary goals of its political favorites. That’s a terrible, shortsighted bargain.
Just as science needs a system of morals outside of politics, it needs a system of discerning merit outside of politics too. Scientists should be choosing and rewarding their leaders and workers by quality of output and scientific ability. When we trade this honest system of merit for politically correct affirmative action, we throw away the confidence of a huge swath of the public.
“Too old, too male, out of touch.” These are politically correct cries that endear scientists to the press, to the bureaucracy, and to a subset of the public. But they create many more needless enemies. Here again we sell out our moral stock for a shallow bit of momentary political and media affection.
If the public does not trust science, I say many scientists have given them good reason. Scientists need to be more trustworthy. We need to stop looking down on the public. We need to stop playing politics and taking up the banner of political correctness. Otherwise we become just another snooty political faction, trusted fleetingly by our friends of convenience but permanently loathed by our growing legion of self-inflicted enemies.
What happened before the Big Bang? From a cosmic perspective, it's impossible to know for sure (for now, at least). But there is a similar question we can answer! What was there before the Big Bang theory?
For half a century, the Big Bang model of the Universe has stood out as the dominant theory for how everything came to be. But before coming to prominence, the Big Bang struggled to take hold in the minds of cosmologists. Instead, many preferred Steady State theory. Championed by English astronomer Sir Fred Hoyle, the theory holds that the observable Universe remains essentially the same at every location in time and space. Stars and planets form, die, and form again, but the density of matter does not change. This means that the Universe had no beginning. It also means it will have no end. The Universe simply is.
During Steady State's heyday in the 1940s and 50s, Hoyle and his contemporaries defended the theory with intellectual vigor, winning over a significant portion of the scientific community. To reconcile their idea with Edwin Hubble's observations that the Universe was expanding, they suggested that matter was created in between distant galaxies. As long as a single atom of hydrogen per cubic meter were squeezed out of the stretching fabric of space every ten billion years, their theory would check out with observational data.
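Hoyle's requirement sounds exotic, but the arithmetic behind it is simple. Here is a back-of-the-envelope sketch of the required creation rate (the hydrogen mass is a standard physical constant, not a figure from Siegel's book):

```python
# Steady State bookkeeping: one hydrogen atom per cubic meter
# every ten billion years, expressed as a mass-creation rate.
HYDROGEN_MASS = 1.67e-27                       # kg per hydrogen atom
SECONDS_PER_YEAR = 365.25 * 24 * 3600
TEN_BILLION_YEARS = 1e10 * SECONDS_PER_YEAR    # in seconds

rate = HYDROGEN_MASS / TEN_BILLION_YEARS       # kg created per m^3 per second
print(f"Required creation rate: {rate:.1e} kg/m^3/s")   # about 5e-45
```

A rate that tiny could never be caught in the act by any laboratory experiment, which is part of why the theory was so hard to rule out directly.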
"If matter were spontaneously created at this meager rate, then each and every location in the Universe -- on average -- would always contain the same number of galaxies, the same population of stars, and the same abundance of elements," astrophysicist Ethan Siegel described in his recent book Beyond the Galaxy.
Compared to the notion of a fiery, primeval creation event from which the entire Universe sprouted -- what Hoyle snarkily dubbed the "Big Bang" -- this was not that far-fetched. Hoyle derided the Big Bang model as an "irrational process" that "can't be described in scientific terms," akin to creationism.
But in 1965, Steady State theory lost a decisive scientific battle. Radio astronomers Arno Penzias and Robert Woodrow Wilson realized that the faint, blanketing noise emanating from their antenna's receiver at Bell Labs originated from the Universe itself. It was the afterglow of the Big Bang, the cosmic microwave background radiation! Hoyle and his fellow Steady State proponents scrambled to account for the new observations.
"Perhaps this was not radiation left over from the Big Bang, but rather was very old starlight, emitted from stars and galaxies strewn across the Universe," Siegel described. This light could have been scattered and re-emitted by the atoms constantly popping into existence.
Alas, further observations discounted this explanation, and the Big Bang model swiftly ascended to take its place as the leading theory accounting for the Universe.
Like the slow, creeping heat death of the universe predicted by the Big Bang model, in which the Universe expands forever until what's left is too devoid of energy to sustain life, the Steady State theory gradually fizzled out in scientific circles. Hoyle, however, defended his theory until his dying day in 2001, never accepting the overwhelming evidence for the Big Bang.
For the first 600 million years of Earth's 4.54 billion-year history, our planet was a hellish place. The rampant volcanism and frequent collisions that wracked our world rendered the surface unforgiving and purportedly inhospitable to life. While water was probably present, the oceans of the time may instead have been rolling seas of magma. The name for this period, the Hadean, is borrowed from Hades, the Greek god of the underworld. The moniker's meaning is obvious: early Earth was a place of death.
Yet it was on this comparatively cursed landscape that -- against all odds -- life might have emerged. The controversial clue to this incredible notion was made public last fall, when scientists from UCLA showed off apparently biogenic carbon that had been locked away inside a near-impenetrable crystal for 4.1 billion years.
The oldest rocks on Earth don't even date back that far, but peculiar minerals called zircons do. The oldest-known zircons, discovered in the Jack Hills of Western Australia, originally crystallized 4.4 billion years ago! It was within one of these zircons that geochemist Elizabeth Bell and her team discovered the carbon they think was produced by life. Life that old, whatever it was, would not have had bones, or even a clearly defined shape, so a true fossil find will probably never be unearthed. Instead, whatever carbon-based life existed back in the Hadean would simply have left traces of itself in the form of carbon. Bell's co-author, Mark Harrison, referred to the stuff as "the gooey remains of biotic life."
But not all carbon comes from living organisms, so how did Bell and Harrison conclude that the carbon originated from life? Well, biological processes typically concentrate the lighter, more common form of carbon, an isotope called carbon-12, while taking up less of the heavier isotope carbon-13. The carbon they discovered within the zircon had far more carbon-12 than one would expect from typical stores of the element. (Figure below: The arrows indicate the concentration of Bell's carbon compared to the typical carbon concentrations of certain life forms. Note that it has far less carbon-13.)
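Geochemists quantify this carbon-12 enrichment with the standard delta-13C notation: the per-mil deviation of a sample's 13C/12C ratio from a reference standard. A minimal sketch, using an illustrative sample ratio rather than Bell's actual measurements:

```python
# delta-13C: negative values mean the sample is enriched in light
# carbon-12 relative to the reference -- the signature biology leaves.
R_VPDB = 0.011237   # 13C/12C ratio of the VPDB reference standard

def delta13C(r_sample):
    """Per-mil (parts-per-thousand) deviation from the standard ratio."""
    return (r_sample / R_VPDB - 1) * 1000

# A hypothetical biogenic sample, depleted in carbon-13:
print(delta13C(0.01096))   # roughly -25 per mil, typical of biogenic carbon
```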
"The simplest explanation for this is that we had biogenic carbon at or before 4.1 billion years," Bell said in a talk earlier this year at the SETI Institute. The second-most likely explanation, she says, is that the carbon came from the meteorites that supposedly peppered the planet around the same time.
Astrobiologists are big fans of Bell's finding. If true, it serves as a testament to the hardiness of life, and fuels hopes that we may find it elsewhere in our very own solar system.
Bell and her team have now set about searching for carbon in more ancient zircons from the Jack Hills, raising the possibility that they may find even earlier signs of life on Earth.
(Images: MarioProtIV, Elizabeth Bell)
For the past eight months, physicists held their breath anticipating further news about the "bump" seen at the Large Hadron Collider (LHC). We just found out that this bump was a statistical anomaly, but it’s still of great consequence to the particle physics community. The future of the field depends on bumps like these. The departure of this bump back to the beyond brings us closer to the decision looming at the end of the accelerator physics road.
We can argue about how many TeV we'd need before we could hope to see some supersymmetric particle, but that's missing the forest for the trees. The scientific motivation is far too weak to justify spending tens of billions of dollars. High energy particle physics has been wildly successful, but things are now changing.
For half a century we beat down the path of experimental particle physics. Along the way, we verified that the field’s leading theory was indeed guiding us in the right direction. The Standard Model map was validated to stunning precision by the detection of bump after bump in the road, right about where they all should have been. The Higgs Boson, found in 2012, was the crowning achievement. It was also the last marked destination on that road.
Now we’re at a dead-end; the road has flattened out. String Theory—the new roadmap—has so far proved an utter failure. The excitement over this possible new detection was that it might signal some big discovery, maybe even open a whole new uncharted land of physics to explore. Unfortunately we weren’t so lucky. We’re back to being lost.
There's always one good reason to build big experiments: they may spot something totally new and unexpected. But that's a tough sell when there may be nothing new to see with a $10 billion machine and your next pitch is for a $20 billion-plus machine. Perhaps someone will pick up this tremendous tab, but without any solid theoretical predictions—or astounding unexpected discoveries—I wouldn't bet on it. There probably won't be any more LHCs.
So, what do we do with what we've got? Two purposes come to mind for the future of the colossal machines built to bushwhack into uncharted fundamental physics.
Science shouldn't be mental masturbation. The ultimate goal of burning ten figures' worth of public dollars is to find facts that can be used to better everyone's lives. When you've got facilities as powerful as the LHC, you can focus their enormous capabilities onto the problems of medicine, energy, and designing new materials.
Many fundamental physics machines ply this trade. Well-designed, well-built, and well-run accelerator facilities can live on into their golden years by finding new uses to bend their beams to.
The machines themselves aren't the sole accomplishments. America has built its commanding superiority in the scientific world with the help of thousands upon thousands of highly trained and exceptionally skilled scientists, many of whom have worked at places like the LHC, SLAC, Fermilab's Tevatron, Berkeley's Bevatron, the failed SSC project, and others.
It’s hard to directly appraise this resource but its secondary value is incalculable. You can’t buy this kind of talent; you can only develop it.
Directing its energy toward applied science while developing generations of new talent looks like the foreseeable future of America’s particle physics community. CERN and the Large Hadron Collider should now follow that path. Unless another bump comes along.
[Image: BEBC Experiment]
Richard Feynman imagined writing the Encyclopedia Britannica on the head of a pin. A decade ago scientists inscribed the entire Hebrew text of the Old Testament into a pinhead of silicon. Digging holes only one ten-thousandth of an inch across is easy with today's technology.
A study published in Applied Physics Letters this week describes a different process: building a tiny Matterhorn on a pinhead.
A team of German scientists led by Gerald Göring used the concept of 3-D printing, miniaturized to the nanometer scale, to build tiny sculptures. But they aren't doing it for art connoisseurs. They made needle-point tips for an atomic force microscope (AFM).
An AFM drags its needle tip across a surface, feeling for contours. When it hits a bump as small as a single nanometer, it is deflected. A system watching the reflection of a laser off of the needle tip records the amount of tip movement from the change in the angle of the laser reflection. The tip gently swipes back and forth across the entire sample like the beam in an old CRT television, measuring the height along the way. In this manner it builds up a topographic map of the surface.
The goal of this incredibly delicate instrument is to map surfaces far smaller than any optical microscope can see. But the finest detail an AFM can resolve is limited by the size of its needle tip.
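That resolution limit is easy to see in a toy model. The sketch below (my own illustration, not from the paper) treats the tip as a rigid block that cannot descend into features narrower than itself, so the recorded height at each position is the highest point of the true surface beneath the tip:

```python
def afm_scan(surface, tip_width):
    """Toy AFM: record, at each position, the highest point the tip touches."""
    half = tip_width // 2
    return [max(surface[max(0, i - half): i + half + 1])
            for i in range(len(surface))]

# A flat surface with a single one-sample-wide trench:
surface = [5, 5, 5, 0, 5, 5, 5]

print(afm_scan(surface, 1))  # sharp tip resolves the trench
print(afm_scan(surface, 3))  # blunt tip smears it away entirely
```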
Hence, the German researchers used their miniaturized 3-D printer to produce tips of a wide variety of shapes and sizes. They made tips that are extremely long and thin, for probing deep and narrow canyons. They made tips with extremely precise shapes so that they could factor the shape back out of a surface map more accurately.
They also experimented with printing modifiable tips. Most AFM tips need to vibrate at a certain frequency to perform their scans. By building and then knocking out tiny struts along the AFM tip, the researchers can tune the tip's vibration, like adjusting the tension of a buzzing guitar string.
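The guitar-string analogy can be put in one formula: a resonator's natural frequency scales as the square root of stiffness over mass, so knocking out struts (lowering the effective stiffness k) lowers the tip's pitch. The numbers below are illustrative round figures at a typical AFM scale, not values from the Göring paper:

```python
import math

def resonant_frequency(k, m):
    """Natural frequency (Hz) of a simple harmonic oscillator."""
    return math.sqrt(k / m) / (2 * math.pi)

k, m = 40.0, 1e-11                      # stiffness (N/m) and mass (kg)
f_full = resonant_frequency(k, m)       # all struts intact
f_cut = resonant_frequency(k / 2, m)    # half the stiffness after cutting struts

# Halving the stiffness lowers the pitch by a factor of sqrt(2).
print(f"{f_full / 1e3:.0f} kHz -> {f_cut / 1e3:.0f} kHz")
```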
Building a mountain on a pinhead is an exercise in incredible control of tiny things. It's also a gorgeous demonstration of how far miniaturized technology has progressed and expanded in new directions.
Is religion necessary for morality? The answer is almost certainly "no". But slightly to the chagrin of passionate atheists, religious people do seem to behave more morally than irreligious people. Through population-level surveys and laboratory studies designed to gauge prosocial behavior, psychologists have found a robust link between religiosity and beneficial actions like helping, sharing, donating, co-operating, and volunteering.
Not to be upstaged in this foundational philosophical feud, the irreligious have fired back, arguing that religious morality arises not from altruism, but from selfishness. In this view, religious followers are more inclined to do good because they are tempted by divine rewards and threatened with divine punishments.
Turns out, atheists may have a point.
Psychologists James Saleam and Ahmed Moustafa of Western Sydney University scoured the psychological literature in search of evidence for and against this controversial contention. They just published their findings in the journal Frontiers in Psychology.
Saleam and Moustafa begin by acknowledging that the balance of empirical evidence shows that religious people do tend to behave more prosocially than irreligious people. However, they note that the evidence is rarely accompanied by explanations.
"If there is a link between religious belief and prosociality, then there must be an underlying reason for this," they write.
The attempt to uncover this reason has led a small number of psychologists to focus on two intertwining ideas, the "supernatural monitoring hypothesis" and the "supernatural punishment hypothesis". Taken together, the notions suggest that religious people behave more prosocially because they feel they are being watched by divine entities and that their actions will be punished or rewarded by those entities.
A number of studies support this controversial idea. In one, religious people primed with notions of salvation and Heaven were more generous when participating in the Dictator game, a classic psychology experiment in which one subject gets to decide how much money to share with another subject. In another study focused on the Muslim population, subjects who read passages of the Qur'an related to divine punishment showed more prosocial actions than subjects who read more neutral passages focused on Allah's mercy. In a third study conducted in Canada, participants who viewed God as "forgiving" and "gentle" were more likely to cheat on a math test than subjects who viewed God as "vengeful" and "terrifying".
It will take a significant number of studies conducted across the world to firmly establish the link between moral behavior and religious incentives. After all, human belief is just as diverse and nuanced as human beings themselves. To say that all believers are more moral than nonbelievers out of fear and selfishness is a gross oversimplification.
"It is unlikely that any single hypothesis will provide a comprehensive account of the religion-prosociality link," Saleam and Moustafa write, "...but now there is a growing body of empirical literature supporting the notion that... divine incentives do influence prosociality."
Source: Saleam J and Moustafa AA (2016) The Influence of Divine Rewards and Punishments on Religious Prosociality. Front. Psychol. 7:1149. doi: 10.3389/fpsyg.2016.01149
Saturday is the 71st anniversary of the atomic bombing of Hiroshima. The culmination of the greatest science project in the history of the human race incinerated tens of thousands of Japanese and left thousands more with fatal radiation poisoning.
Widely considered at the time a necessary measure to finish the war with Japan, the bombing draws mixed opinions today. With the brutal reality of all-out global conflict a distant memory, many view it as a tragic mistake.
I'm alive and writing today because President Truman made the right decision in 1945. Millions of other so-called millennials can count the same blessing.
Historical facts strongly support the decision to bomb. The two atomic bombs killed roughly 200,000. Japanese military officers estimated that as many as 20,000,000 Japanese would have lost their lives in defense of the Japanese mainland. America estimated its own deaths to number in the hundreds of thousands. Wounded would have been in the millions on both sides.
But there is another angle to this story. It's very personal to me.
Beside my desk is an old leather case with leather handles and a brass catch. Gently easing open the worn top reveals a large pair of metal binoculars. The lenses are still clear; a careful focusing procedure brings into relief tiny reticles etched onto the glass for measuring distances. They are painted a dark green color, maybe to provide some meager camouflage in the high bows of a tree.
My grandfather was a forward observer. His job was to go in to the beach first, climb a tree, and call in directions for the artillery that would bombard the defenses at the Japanese landing beach. Ahead of the invading army, these binoculars would have picked out the targets of the first artillery barrages, softening the beachhead before the landing. Picture the opening scene of Saving Private Ryan; the planned invasion of Japan would have been an amphibious assault on the scale of D-Day.
Artillery spotters like my grandfather had just about the lowest life expectancy of any troops in ground combat. He very likely would have died up in that tree, calling artillery directions into his radio.
Thankfully, he never had to go in first, to face the 2.3 million defending Japanese. The US command at the time made the right decision: finish the war as quickly as possible, with the fewest deaths on both sides.
All-out war is often a choice between something horrific and something even more horrific. Making these decisions surely weighs upon the conscience of wartime leaders. Harry Truman struggled with the decision but ultimately believed he had made the correct call. Oppenheimer, the physicist who headed the weapon design lab at Los Alamos during the war, was a staunch leftist who later became a peace activist. Yet he too went to his grave supporting the creation of the bomb.
Today, looking back at history from a privileged position as beneficiaries of the bombing, many who grew up only in its aftermath wish that Hiroshima and Nagasaki had been spared. This is a fantastically naïve thought; yet it is held by even some of our most prominent leaders today.
Earlier this summer our President minced words with the Japanese Prime Minister at Hiroshima. While stopping short of apologizing, he expressed sadness about the event. How quickly we forget our history and our blessing never to have faced such a difficult choice.
Fortunately, we made the right decision. Without that momentous blast shaping human history I might not be alive today. Millions of you can count the same blessing. Remember that.
Over the next few weeks, sprinters from all over the world will gather in Rio to show their speed on the sport's grandest stage: the Summer Olympics.
Sometimes glossed over whilst watching the galloping strides of the world's top athletes is a simple truth: Sprinting events are not only tests of speed; they are also tests of reaction. To have a chance of winning the 100-meter dash, modern sprinters must finish in less than ten seconds. To win the 200 meters, they have to finish in under twenty.
The overwhelming majority of an athlete's race time is spent sprinting, but a significant sliver is reserved for reaction time. Races are traditionally started with gunshots. At the sound of the bang, sprinters leap off the blocks and accelerate down the track at breathtaking speeds. But the seemingly negligible interval between the gunshot and the start is actually a sizable gap. As Stanford neuroscientist David Eagleman explained in his recent PBS documentary The Brain:
"[Sprinters] may train to make this gap as small as possible, but their biology imposes limits. Processing that sound, then sending out signals to the muscles to move will take around two-tenths of a second. And that time really can't be improved on. In a sport where thousandths of a second can be the difference between winning and losing, it seems surprisingly slow."
In light of this situation, Eagleman posed an intriguing question: Why not start races with a flash of light? After all, light travels roughly 874,029 times faster than sound. By starting races with a gunshot, we may actually be handicapping sprinters!
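The arithmetic behind that comparison is straightforward. A quick sketch with standard textbook speeds (the five-meter pistol-to-sprinter distance is my own illustrative choice):

```python
SPEED_OF_LIGHT = 299_792_458   # m/s, in vacuum
SPEED_OF_SOUND = 343           # m/s, in air at about 20 C

ratio = SPEED_OF_LIGHT / SPEED_OF_SOUND
print(f"Light is ~{ratio:,.0f}x faster than sound")   # ~874,000x

# Time for each signal to cover 5 m from starter's pistol to sprinter:
d = 5.0
print(f"Sound: {d / SPEED_OF_SOUND * 1000:.1f} ms")   # about 15 ms
print(f"Light: {d / SPEED_OF_LIGHT * 1e9:.0f} ns")    # effectively instant
```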
To answer the question, Eagleman lined up a group of sprinters. In one case, they were triggered by a flash of light. In another case, they were triggered by a gunshot. Their starts were filmed in slow motion and compared. (Note: In the image below, the gun fired and the light flashed at the same time.)
As you can clearly see, when sprinters are triggered by a gunshot they actually react faster, despite the fact that light reaches their eyes well before sound reaches their ears! There's an incredible reason why this is the case.
"[Visual cues] takes forty milliseconds longer to process," Eagleman explained. "Why? Because the visual system is more complex. It's bigger, and involves almost a third of the brain. So while all of the electrical signals inside the brain travel at the same speed, the ones related to sight go through more complex processing, and that takes time."
As you sit down with friends and family to watch sprinting during this summer Olympics, feel free to enlighten them with this fascinating factoid!
(Images: AP, The Brain with David Eagleman)
On July 30th, 1999, the U.S. House of Representatives passed House Concurrent Resolution 107 by a vote of 355 to zero. The vote was staggering, but not for its bipartisan appeal in an era of hyperpartisanship. It was staggering because it was the first and only time that a scientific study was condemned by an act of Congress.
"Resolved by the House of Representatives (the Senate concurring), That Congress... condemns and denounces all suggestions in the article 'A Meta-Analytic Examination of Assumed Properties of Child Sexual Abuse Using College Samples'' that indicate that sexual relationships between adults and 'willing' children are less harmful than believed and might be positive for 'willing' children..."
With wording like that, it's plain to see why the resolution passed as it did. A vote against would be seen as a vote for pedophilia! Understandably, no politician would ever want that on their voting record.
Representative Brian Baird was one of the few representatives who opposed the resolution, electing to do so by voting "present." As a licensed clinical psychologist and the former chairman of the Department of Psychology at Pacific Lutheran University, Baird knew that the study had been blatantly mischaracterized. But one didn't need a doctorate in clinical psychology (like Baird) to recognize that, one simply needed to read the study.
Any politician who bothered to do so would have realized that the paper did not endorse pedophilia at all. In her recent book, Galileo's Middle Finger, historian of science Alice Dreger described the paper and its findings:
"Bruce Rind, Philip Tromovitch, and Robert Bauserman performed a meta-analysis of studies of childhood sexual abuse... They took a series of existing studies on college students who as children had been targets of sexual advances by adults and looked to see what patterns they could find."
The authors uncovered a plethora of patterns, but none more controversial than the fact that childhood sexual abuse did not always inflict lasting harm upon the victim. The degree of harm seemed to depend on a variety of factors. Two of the most notable included forced abuse and incest. As Dreger recounted:
"Rind, Tromovitch, and Bauserman were saying something very politically incorrect: some people grow up to be psychologically pretty healthy even after having been childhood sexual abuse victims. In fact, Rind and company... [suggested] that the term childhood sexual abuse seemed to imply that child-adult sex always led to great and lasting harm, whereas the data seemed to show it did not in a surprising proportion of cases."
Aware that their findings would be controversial, the authors concluded their study by saying that they in no way advocate pedophilia.
"The findings of the current review do not imply that moral or legal definitions of or views on behaviors currently classified as childhood sexual abuse should be abandoned or even altered. The current findings are relevant to moral and legal positions only to the extent that these positions are based on the presumption of psychological harm."
Their erudite invocation of nuance did not work. The North American Man/Boy Love Association quickly praised the paper, which, in turn, prompted righteous condemnation from religious, conservative, and family groups. Conservative radio host Dr. Laura Schlessinger even went so far as to assert "the point of the article is to allow men to rape male children." (It wasn't.) Representative Joseph Pitts claimed that "the authors write that pedophilia is fine... as long as it is enjoyed." (They wrote no such thing.)
With the fires of controversy blazing and ideological battle lines drawn, the paper soon landed on the desks of Republican congressmen Tom Delay and Matt Salmon, who swiftly moved to score political points. Salmon drafted the aforementioned resolution condemning the study and Delay pushed it through the House.
Laypersons and politicians weren't the only people to criticize the study; other scientists did, too, particularly questioning the methodology and the manner in which the findings were reported. However, the American Association for the Advancement of Science commented that the paper did not demonstrate questionable methodology or author impropriety.
To anyone even remotely connected to the scientific enterprise, the "Rind et al. controversy" (as it is called) offers a chance not only for ample face-palming but also for reflection. Science sometimes questions common sense and does not adhere to political correctness. It deals only in what is true and what isn't. All that's left to us is to decide whether to accept that truth and try to understand it... or deny it.
As Dreger noted in her book, the truth presented in Rind's study is not as alarming as the caricature drawn of it more than fifteen years ago.
"It seemed to me that the Rind paper contained a bit of good news for survivors, namely that psychological devastation need not always be a lifelong sequela to having been sexually used as a child by an adult in search of his own gratification."
"But simple stories of good and evil sell better," she added.
Primary Source: Galileo's Middle Finger: Heretics, Activists, and One Scholar's Search for Justice, by Alice Dreger. 2015. Penguin Books.
(Image: Public Domain)
Lake Huron's Manitoulin Island is the largest lake island in the world. It's so large, in fact, that it has 108 lakes of its own. Three of Manitoulin's largest lakes -- Lake Manitou, Lake Kagawong and Lake Mindemoya -- even have islands of their own, some of which have ponds of their own.
Pockmarked, picturesque Manitoulin is also home to an enduring archaeological mystery. In 1951, Thomas Edward Lee, an archaeologist for the National Museum of Canada, discovered an incredible array of artifacts on a rocky hill near the island's northeastern shore. The site was dubbed Sheguiandah, after the small village nearby.
Four years of excavations would eventually unearth rudimentary scrapers, drills, hammerstones, and projectile points, apparently constructed out of the quarried bedrock jutting from the hill. Blades were in particular abundance. Many thousands of all shapes and sizes were found, some preserved in the cool forest soils, others drowned in peat bogs, and even more littering the open ground. Documenting the expedition in 1954, Lee imagined prehistoric "quarrying operations of staggering proportions." Clearly, Sheguiandah was a hub of industry.
"It would not be surprising that people who carried out the enormous amount of work done here should also have lived on the site," Lee wrote.
But while Lee and his colleagues uncovered a great many blades and other objects, they didn't turn up a single human bone. Signs of human activity were etched, scraped, and smashed in stone (quartzite, to be specific), but there were no human remains. The men and women who worked the quarry at Sheguiandah may have come from more permanent settlements elsewhere. If that's the case, those settlements have yet to be unearthed.
It is partly due to the lack of remains that dating Sheguiandah has proven extremely difficult. When Lee concluded his excavations in the 1950s, he created quite a buzz by insisting that some of the artifacts were found in deposits dating back to the Ice Age, as many as 30,000 years ago! Suddenly, Manitoulin Island may have been home to the oldest traces of humankind in North America!
The story Lee wove was fascinating. As summarized by his son:
"Early peoples lived and left their stone tools on the Sheguiandah hilltop in a warm period before the last major glacial advance. The returning glaciers caught up and moved those tools - but only a few yards or tens of yards. The tools stayed locked up under the ice for tens of thousands of years, until it melted away. Then a succession of Paleo-Indian and Archaic groups migrated along the north shore of a subarctic Great Lake. Each stopped briefly at Sheguiandah, leaving a scattering of spearpoints and other stone tools on top of the glacial till deposits. About 5000 years ago, the lake basin filled again, temporarily turning the hill into an island in Great Lakes Nipissing. A tremendous stone-quarrying industry sprang up, covering parts of the site at least five feet deep in broken rock."
But the archaeological community generally disagreed. Lee's timeline challenged popular theories insisting that humans didn't reach the heart of North America until after the Laurentide Ice Sheet receded from what is now the northern United States. This glacial retreat heralded the end of the Ice Age roughly 20,000 years ago. Soon, major journals refused to publish Lee's controversial papers on Sheguiandah, and as the discussion faded from public forums, the site itself was slowly forgotten. As Lee's son Robert noted in 2005, "In 1987 neutral observers characterized it as 'Canada's most neglected major site of the past 30 years.'"
In 1991, archaeologists Patrick Julig of Laurentian University and Peter Storck of the Royal Ontario Museum in Toronto led new expeditions to Sheguiandah, and, armed with advances in dating techniques, re-evaluated Lee's claims. They determined that the site was roughly 10,500 years old, instead envisioning Paleo-Indians as the site's first and only inhabitants after the conclusion of the Ice Age. At the time, Manitoulin Island was almost certainly connected to what is now mainland Ontario, and the inhabitants likely traded their stone tools far and wide.
Manitoulin Island is today a quiet place, home to roughly 12,600 permanent residents who adore the island's untrammeled wilderness and timeless, glacier-worn topography. The controversy surrounding Sheguiandah has settled down. Now, tourists can take guided walks through the site and still find scores of artifacts strewn across the landscape. We may never know who they once belonged to.
How do we know if something is true?
It seems like a simple enough question. We know something is true if it is in accordance with measurable reality. But just five hundred years ago, this seemingly self-evident premise was not common thinking.
Instead, for much of recorded history, truth was rooted in scholasticism. We knew something was true because great thinkers and authorities said it was true. At the insistence of powerful institutions like the Catholic Church, dogma was defended as the ultimate source of wisdom.
But by the 1500s, this mode of thinking was increasingly being questioned, albeit quietly. Anatomists were discovering that the human body did not function as early physicians described. Astronomers were finding it hard to reconcile their measurements and observations with the notion that the Sun revolves around the Earth. A select few alchemists were starting to wonder if everything really was composed of earth, water, air, fire, and aether.
Then, a man came along who refused to question quietly. When Italian academic Galileo Galilei looked through his homemade telescope and saw mountains on the moon, objects orbiting around Jupiter, and phases of Venus showing the Sun's reflected light -- all sights that weren't in line with what authorities were teaching -- he decided to speak out, regardless of the consequences.
In The Starry Messenger, published in 1610, Galileo shared his initial astronomical discoveries. He included drawings and encouraged readers to gaze up at the sky with their own telescopes. Thirteen years later, in The Assayer, Galileo went even further, directly attacking ancient theories and insisting that it was evidence wrought through experimentation that yielded truth, not authoritarian assertion. Finally, in 1632, Galileo penned the treatise that would land him under house arrest and brand him a heretic. In Dialogue Concerning the Two Chief World Systems, Galileo cleverly constructed a conversation between two fictional philosophers concerning Copernicus' heliocentric model of the Solar System. One philosopher, Salviati, argued convincingly for the sun-centered model, while the other philosopher, Simplicio, stumbled and bumbled while arguing against it. At the time, "Simplicio" was commonly taken to mean "simpleton," and Simplicio used many of the same arguments the Pope employed against heliocentrism. The Catholic Church was not opposed to researching the topic, but it did have a problem with teaching it. Thus, the Vatican banned the book and placed Galileo under house arrest.
By stubbornly refusing to be silent, Galileo irrevocably altered the very definition of truth. Scientists today forge breakthroughs in all sorts of fields, but their successes can ultimately be attributed to Galileo's breakthrough in thought. In her recent book, Galileo's Middle Finger, historian of science Alice Dreger paid tribute to the legendary astronomer.
"Galileo actively argued for a bold new way of knowing, openly insisting that what mattered was not what the authorities... said was true but what anyone with the right tools could show was true. As no one before him had, he made the case for modern science -- for finding truth together through the quest for facts."
Primary Source: Galileo's Middle Finger: Heretics, Activists, and One Scholar's Search for Justice, by Alice Dreger. 2015. Penguin Books.
Did you know that humans, mice, monkeys, and other mammals experience two puberties?
It's the second you're no doubt aware of. In a process that initiates in the early teens, boys and girls are flooded with hormones produced by their sex glands. The result is a mental and physical transformation. Boys grow facial hair, develop a deeper voice, and double their skeletal muscle. Girls' breasts enlarge and their hips widen. Both boys and girls grow pubic hair, become sexually fertile, take on an "adult" body odor, and generally annoy their parents.
But more than a decade before this second puberty, the one chronicled again and again in coming-of-age movies, there is a first puberty, a so-called "mini-puberty." First described back in the 1970s, it is a process scientists are still working to understand today. Here's what they know: Roughly one to two weeks after birth, newborn boys and girls endure a rush of hormones, a process known as the postnatal endocrine surge. Luteinizing hormone and testosterone dominate male mini-puberty, while follicle-stimulating hormone and estradiol (estrogen) mark female mini-puberty. The process lasts around four to six months in boys and a little longer in girls, concluding with hormone levels abating to typical childhood levels.
Exactly why mini-puberty happens is unclear, but the bodily changes it causes are quite clear. In a commentary recently published to the journal Pediatrics, University of Oklahoma pediatricians Kenneth Copeland and Steven Chernausek described a few of them.
"Resultant effects on the reproductive organs include testicular, penile, and prostate growth in boys, uterine and breast enlargement in girls, and sebaceous gland and acne development in both sexes."
According to a recent study, the testosterone surge that boys experience during this phase likely accounts for why adolescent boys are a little taller than adolescent girls. The surge also explains roughly fifteen percent of the height difference between adult men and women.
Research is increasingly showing that mini-puberty is a sensitive time of development, acting as a "stress test" of sorts for the endocrine system, revving it up to ensure lifelong function. It also primes target tissues -- particularly in the reproductive system -- for growth and maturation later in life. Lastly, and more controversially, mini-puberty may be a time where sexual orientation and behavior are imprinted, with hormones playing a defining role. Multiple studies evince this claim in animals, but research in humans is lacking.
Mini-puberty is not nearly as conspicuous as its older, more mature sibling, but its effects may be just as consequential. Further research will undoubtedly reveal its outsized role in human development.
(Sandra J. Milburn/The Hutchinson News via AP)