We See What isn’t There

I have long quipped that “humans are so good at seeing patterns that we find them where none exist.”  But it wasn’t until sometime in graduate school that I learned the technical term for it: “apophenia.”  (A visual apophenia is a pareidolia.)  I still share this one with my students at some appropriate juncture almost every term.

Humans see figures in clouds, constellations in stars, the man in the moon, deeper significance in particular numbers (e.g., 3, 7, 12, 23, 666), and faces in geological formations.  We are more prone to seeing relatively simple patterns in randomness than we are to missing simple patterns that do exist.  We are also prone to asserting the reality of specific kinds of pattern: linearities, periodicities, and those that mimic natural forms like the human face.

The Cydonia Mensae region of Mars.  Many see the hill in the top center of this photo (taken by Viking 1 in 1976) as a face.  Inset: the same region photographed in 2006 by the HiRISE camera aboard the Mars Recon Orbiter.

I contend that apophenia was adaptive.  In fact, one of the key characteristics defining our shared experience is that humans are active, adaptive, meaning-makers.  Our strength and our weakness is that we are so very good at identifying perceived patterns in all forms of stimuli: visual, aural, tactile, conceptual, cultural, social, and others.  This is a characteristic humans share with all life, but we are particularly facile at it and this ability is one of the roots of our evolutionary success.  It is also a source of many of our human problems.

Our ancestors who were more likely to discern patterns in the world were more likely to detect potentially harmful patterns, and so more likely to survive.  They lived in the midst of nature, life was precarious, and mistakes could often be deadly.  If they reacted to a potentially negative sensory pattern and it turned out to be false, they could survive to laugh it off as an overreaction.  But if they failed to notice those environmental signals, chances are that one of those missed cues would eventually kill them.

Consider Og and Froosh, two early human ancestors hunting on the savannah.  They grunt to one another in their proto-language…

Og: “Oh, look, there’s a rustling in the underbrush.  Ya think it might be a sabertooth tiger?”

Froosh: “Nah, it’s just the wind.”

Og: “I dunno.  I think I’m gettin’ outta here.”

Og runs away.  Froosh, laughing at his timidity, is eaten by the sabertooth tiger that emerges from the brush.  This may have happened many times before with Og running from shadows and the wind while Froosh mocked his jumpy friend.  But the result was that, the one time Og was right, Froosh died.  Cautious Og lives on to have a family and dies peacefully on his pallet at the ripe old age of 40.  We learned, over many generations, that it is better to err on the side of apophenia.

There are only two ways we can be wrong: we can think we’ve detected a pattern when, in fact, there is none (Type I error, alpha error, or false positive), or we can miss detecting a pattern that really exists (Type II error, beta error, or false negative).  Type I error happens when our detection gear is too sensitive; Type II error is the result of it not being sensitive enough.
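To make the trade-off concrete, here is a minimal simulation sketch (my own illustration, not from the original post) of a “rustle detector” like Og’s: a real predator adds a signal on top of random noise, and the detector fires whenever the combined rustle crosses a threshold.  The function name, signal strength, and probabilities are all illustrative assumptions.

    # Illustrative sketch only: a noisy "predator detector" whose threshold
    # controls the trade-off between Type I and Type II errors.
    import random

    random.seed(42)

    def simulate(threshold, trials=100_000, p_predator=0.05):
        """Return (Type I rate, Type II rate) for a detector that fires
        whenever a noisy 'rustle' signal exceeds `threshold`."""
        false_pos = false_neg = 0
        noise_only = predator_present = 0
        for _ in range(trials):
            predator = random.random() < p_predator
            # The rustle is noise alone, or noise plus a real signal.
            rustle = random.gauss(0.0, 1.0) + (2.0 if predator else 0.0)
            detected = rustle > threshold
            if predator:
                predator_present += 1
                if not detected:
                    false_neg += 1   # Type II: missed a pattern that was real
            else:
                noise_only += 1
                if detected:
                    false_pos += 1   # Type I: "saw" a pattern in pure noise
        return false_pos / noise_only, false_neg / predator_present

    for threshold in (0.5, 1.0, 1.5, 2.0):
        fpr, fnr = simulate(threshold)
        print(f"threshold={threshold:.1f}  Type I rate={fpr:.2f}  Type II rate={fnr:.2f}")

Running it shows the point of the Og and Froosh story: a jumpier (lower) threshold produces far more false alarms but rarely misses a real predator, while a more skeptical (higher) threshold does the reverse.  You cannot drive both error rates to zero at once; you can only choose which one to favor.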

The scientific method is inherently conservative — not in the political sense, but in the sense of requiring exceptional evidence for us to provisionally accept a newly identified pattern as probably real.  It is designed to guard especially against those false positives.  This is particularly important because humans are adapted to err on the side of Type I, and not Type II, error.  All things being equal, we are far more likely to “see” something that does not exist than to not notice something that is real.

But being prone to Type I error is not as adaptive now as it long was.  We have created an environment that no longer rewards this tendency as it once did.  Our world is busier and chock full of signals, many of them neither meaningful nor relevant, and their speed and density are confusing, further encouraging us to find meaning where we can make it.  We have also created a world where a greater proportion of the signals we experience are human-created cultural products, reinforcing our default position of finding patterns (because they now really are everywhere) and so increasing our likelihood of falling prey to apophenia (not everything is pattern).

Extreme cases of apophenia may be debilitating, and so usefully categorized as pathological, but such cases are rare.  Its everyday forms are far more common and often result in poor decisions.  Apophenia is also doubtless related to human creativity, which is a necessary trait.  Innovation entertains us, enriches us, and allows us to progress, individually and collectively.  But too much creativity incorrectly applied leads us down blind alleys, wastes human effort, and even destroys lives.

As always, the trick is to discern which patterns are real and which are (sometimes even well-meant) illusion: to find the balance that allows us to recognize the reality facing us while still seeing possibilities not yet recognized.  It is easy to spot apophenia in those with whom we disagree.  Our continuing challenge is to see it in ourselves, and in those we love and respect.

[I particularly recommend this Digital Bits Skeptic article for an in-depth exploration of apophenia.]

“Yeah, But Is It Science?”

Random lab scientists, in their stereotypical uniforms, from days gone by.

Maybe the most difficult issue faced by social scientists is the perception that, somehow, what we do is not “science.”  In fairness, that perception doesn’t seem as widespread or blatant now as when I first got into the game: it is almost unthinkable today that a business, agency, or organization would implement a new policy without provision for its social scientific study, and funding for US social science research continues in the billions of dollars every year.

One reason we face this challenge is that everyone is so familiar with what social scientists study that they think they already know what’s really happening.  But that is not the case.  If it were, we would all be living better lives and our social systems would operate far more smoothly.

Natural scientists have the advantage here because, even though we all live in the physical and biological world, those domains feel more removed from our direct experience.  Distance between observer and observed, easily mistaken for objectivity, is built into their process.  Natural scientists study many phenomena that are relatively simple (that is, reducible to simple mathematical models), while almost everything social scientists study is complex (in the formal, mathematical sense).  And natural scientists are more readily forgiven their invention of necessary technical language to precisely describe what they study, while social scientists are questioned for introducing unnecessary complications to something that is, after all, patently obvious, right?

So what makes something “science”?  Several distinguishing characteristics have been suggested.  Here are a few of the most popular:

  • “Science” can be objectively described.
  • “Science” can be quantified.
  • “Science” can be directly observed.
  • “Science” can be manipulated experimentally.
  • “Science” can be falsified.

And yet it is as easy to find examples of respected, influential natural science that violate each of these criteria as it is to find examples of social science that hold to them.  My point is not to start yet another squabble over who has the more difficult time of it; it is simply that “social” scientists sometimes adhere to all of these ideals and “natural” scientists sometimes do not.  Good science is a challenge, no matter what we study.

So if “science,” in practice, sometimes violates each of these stereotypical principles, what does make something “science”?

  • “Science” is systematic.
  • “Science” is empirical.
  • “Science” is theoretical.
  • “Science” is capable of being compared to other explanations.
  • “Science” is self-correcting over time.

Science is about pattern recognition: separating signal from noise and identifying whether the patterns we think we see are real and meaningful.  We seek the simplest defensible explanation for a phenomenon, but not an oversimplification.

Science, in all its forms, is a human endeavor.  That means that it is sometimes messy, always imperfect, and seldom bears much resemblance in practice to what it looks like in the ideal.

I’ll return to each of these points in the coming weeks.  As a research methodologist, it is a subject near and dear to my heart.  But for now, I’ll step down off my digital soapbox.