Monday, November 23, 2020

What is the opposite of bias?

Introduction

Back in Module 3 of this course, Dr. Hawley gave us a quick, experiential crash course in cognitive bias. With a handful of simple questions, she demonstrated a variety of memory and attentional effects -- primacy and recency effects, self-serving bias, among others, if I remember correctly. Soon a question popped into my head: what is the opposite of bias? Honesty? Objectivity? Rationality? Precise calibration? Fairness? Justice? Does the psychological community have a specific term for whatever isn’t bias or biased?

My initial concern and hunch was that the rhetoric of bias research implicitly supported a western-white-male-serving ideal of rationality. I bet all this talk about bias is just another way to put down the “unenlightened” or brow-beat the “overly-emotional” with the bible of “reason.” Blah blah about our “higher” nature! Whose status is supported by this research? Those who happen to be the ones defining and maintaining what’s “higher.” So on and so forth, such went my thought process. I figured this hunch would be easy to confirm (confirmation bias!), and I had an axe to grind (motivation bias!).

It didn’t take long in my reading to realize that I was in way over my head, that bias is a very rich term with lots of contrast, and that I really enjoyed this bias research stuff -- to the extent that I could understand what I was reading! I temporarily suspended my confirmation search and decided to use my initial question -- what is the opposite of bias -- in a less pointed way. My attempt in this paper is to use the gestalt idea of a figure against a ground to discuss the way “bias” is used in my small sampling of psychological literature. The three overlapping grounds I present are history, rationality, and fairness, and I conclude with reflections for the counseling setting.


Who’s to blame for bias? The 1970’s or Bowling?

What if Amos Tversky and Daniel Kahneman had titled their famous 1974 paper, “Judgment under uncertainty: heuristics and skew” (Kahneman 2011)? Or, “imbalance,” or “partiality,” or “common errors?” Would psychology be as biased toward ‘bias’ as it is now? Perhaps bias would still be important in psychological vocabulary, but its use would be more specific to methodological research errors rather than cognitive effects generally; perhaps it would not be quite the catch-all term that it seems to be. However, in hindsight (bias), it is hard to imagine any other word so nicely covering the phenomena Tversky and Kahneman describe in their paper: representativeness, availability, and anchoring. By many accounts this paper was the flag bearer in the vanguard of an army of bias papers (Welsh 2018; Krueger, Funder 2004; Lilienfeld, Ammirati, Landfield 2009). If you are looking for someone to blame -- and I am -- Kahneman is a nice target. But how about the zeit and its geist...

David Chavalarias and John Ioannidis, in their mind-boggling paper mapping 235 biases mentioned in the PubMed database, present a handy list of the 40 most commonly mentioned biases (Chavalarias, Ioannidis 2010). Of those 40, 19 were first mentioned in the 1970’s, compared to only 8 in the 40’s, 50’s, and 60’s combined. Unfortunately, I’ve failed to find a similar word-search-study or usage-history of bias specific to psychological literature. Surely someone, somewhere has written a history of bias research.

On catalogofbias.org, a project of the Centre for Evidence-Based Medicine at Oxford, Jeff Aronson has several wonderful blog posts on the word and its various definitions. He cites David Sackett in 1979 as the first to publish a categorization of biases, and he notes that Sackett drew on the definition laid out in E.A. Murphy’s The Logic of Medicine, 1976 (Aronson 2018).

Joachim Krueger and David Funder, in their 2004 call for a “more balanced social psychology,” see the 1970’s as the beginning of their field’s cognitive shift, specifically a shift toward studying bias in social perception and judgment (Krueger, Funder 2004). Scott Lilienfeld, Rachel Ammirati, and Kristin Landfield do not define the “modern” time period, but they cite the research into cognitive fallibility as one of modern psychology’s “crowning achievements” (Lilienfeld, Ammirati, Landfield 2009).

All this to say: the bias kudzu seems to have first flowered in the 1970’s, and ever since the academic environment has provided fertile soil for its spread. Particularly good fertilizer has been the increased statistical sophistication in psychology and the social sciences. Bias has been a key term in statistics longer than in psychology. In Aronson’s blog his earliest cited technical definition of the term is from a 1926 paper on probability theory (Aronson 2018). Kahneman and Tversky’s initial research question was, “are people intuitively good statisticians?” Another factor involved in a bias toward statistical terminology in psychology could be a shift toward more computational metaphors for the mind and away from literary-humanistic language.

Of course, bias has non-statistical usage as well, which preceded and informed game and probability theories. Pragya Agarwal, in the introduction to her new book Sway: Unravelling Unconscious Bias, provides an overview of the history of the word, beginning with its hypothetical Indo-European root “sker-,” to turn or bend, and continuing in that tradition with 13th century French and Greek words meaning “at an angle or crosswise” and “to cut crosswise,” respectively (Agarwal 2020, p 11-12). In 16th century English the word appears meaning “an oblique or slanting line” and begins to be used to refer to bowling balls weighted unequally to one side, which produced curved trajectories when bowled (Agarwal 2020). Shakespeare uses the word both literally and metaphorically (Agarwal 2020). 

In Thinking, Fast and Slow, Kahneman often uses this language of weight (verb) -- to overweight, to underweight -- to explain or substitute for the word bias (Kahneman 2011). In this sense, bias is the tendency to or result of over/under-weighting, or over/under-valuing, or over/under-attending to something, relative to some ideal of balance, value, or objectivity. Before beginning this research I would have defined bias as a diminishment of objectivity because of prior commitments or allegiance, more in line with what might be called “motivation bias.” In other words, I would have said, bias is the skewing effect a committed belief, association, or goal has on arguments or perspectives. If we overlay the pointed, statistically informed use of bias onto the long, glorious tradition of admitting to or accusing others of using loaded bowling balls (metaphorically speaking), then we have a word with a lot going for it! 

What do these two general historical grounds -- modern psychology and the tradition of argument/debate -- suggest as contrasting backdrops for bias? In statistical psychology, I am guessing (I am statistically uneducated) it would be calibration and validity. Kahneman defines bias as a “systematic error,” so it is a reliable effect, just not a valid one (Kahneman 2011). In the colloquial-argumentative sense, I think objectivity and fairness capture the anti-bias spirit. All those terms, especially the first three -- calibration, validity, and objectivity -- hover close to the idea of “rationality.”
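That statistical sense of bias -- a reliable effect that is systematically off target -- can be shown in a few lines of code. Here is a minimal sketch (my own illustration, not drawn from any of the sources above), using the textbook example of a sample-variance estimator that divides by n instead of n - 1:

```python
import random

random.seed(42)
true_var = 1.0  # variance of a standard normal population

def sample_variance(xs, denominator):
    """Average squared deviation from the sample mean, with a chosen denominator."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / denominator

n, trials = 5, 20000
biased, unbiased = [], []
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    biased.append(sample_variance(xs, n))        # divides by n   -> systematically low
    unbiased.append(sample_variance(xs, n - 1))  # divides by n-1 -> unbiased

avg_biased = sum(biased) / trials      # lands near (n-1)/n = 0.8, not 1.0
avg_unbiased = sum(unbiased) / trials  # lands near the true value, 1.0
print(f"biased estimator averages   {avg_biased:.3f}")
print(f"unbiased estimator averages {avg_unbiased:.3f}")
```

The biased estimator is perfectly reliable -- rerun the experiment and it lands in the same place -- but that place sits below the true value every time, which is exactly the reliable-but-not-valid pattern Kahneman’s “systematic error” definition points at.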


Rational and irrational monsters

Matthew Welsh, in his field guide to bias, cites a 1996 paper by Gigerenzer and Goldstein, in which they argue that in a real world situation, with so many variables and possibilities, people would have to have the mental abilities of a “Laplacian demon” to follow the rules of rational behavior set out by economists (Welsh 2018). Laplace was a French mathematician who helped develop probability and statistics. A less frightening cousin of the Laplacian demon is the “homo economicus,” the fully rational “man.” Welsh and Kahneman enjoy poking fun at this “homo economicus,” while simultaneously promoting the benefits of his decision making abilities. Kahneman’s favorite nickname for him is Richard Thaler’s term, “Econ” (Kahneman 2011).

There are plenty of irrational monsters as well, many more than of the rational type. Lilienfeld et al. consider the demon of “ideological extremism” in their paper “Giving Debiasing Away,” and they propose confirmation bias as its main source of power (Lilienfeld et al. 2009). Jan De Houwer suspects that “implicit bias” is a threatening construct because it is conceptualized as an “unobservable structure” or “hidden force” in the mind; he recommends taming this monster by framing implicit bias as a behavior (De Houwer 2019). Krueger and Funder, citing Dawes (1976), explain that, traditionally, “capricious emotional” monsters have been the main threat to rationality; modern psychology, however, has identified the monster within the conscious mind itself (Krueger, Funder 2004). The irrational monsters have breached the walls!

Most of the authors I have encountered see bias as irrational by force of definition. Rationality functions as a strong contrasting backdrop for bias. At the same time, however, these authors also see bias as incorporated into some kind of long-view rationality: a bias may not be rational in a specific instance, yet still be rational because it leads to health or has led to evolutionary fitness. Megan Hughes et al., in the Handbook of Applied Cognition, write that, “It appears that some level of positive cognitive distortion is present in healthy individuals and may lead to improved functioning, health, and happiness” (Hughes et al. 2007, p 649). In the same vein, Lisa Bortolotti and Magdalena Antrobus compare recent studies on “depressive realism” -- which show that people with depressed mood answer certain types of questions more realistically and accurately than average -- with studies on unrealistic optimism -- which show that overconfidence and unrealistically optimistic views are prevalent in nonclinical populations (Bortolotti, Antrobus 2015). In certain circumstances, irrational confidence or optimistic bias may be more beneficial than clear, “rational” judgment, in which case it may be more “rational” not to listen to your inner Econ.

Another way to put this is that irrational beliefs may lead to, or have led to, rational behavior. This is the tack taken by James Marshall et al. in their paper analyzing two different evolutionary theories of cognitive bias (Marshall et al. 2013). They distinguish the ability to rationally assess a situation from the ability to behave rationally in that same situation; the two abilities are not perfectly correlated. Dominic Johnson et al. explain this dynamic well using Error Management Theory (Johnson et al. 2013). Cognitive biases may not “maximize expected payoffs” of food or other goods, but they have maximized Darwinian fitness by helping us avoid very costly errors across a lifespan. Johnson et al. use the “smoke alarm analogy,” as does Agarwal, to explain how asymmetric costs of false positives and false negatives can encourage bias. Technological limits mean that smoke detectors can make mistakes, and falsely detecting smoke (annoyance) is less costly than failing to detect real smoke (house burns down). Therefore, engineers work harder to avoid false negatives than false positives.
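The smoke-alarm logic can be put in expected-cost terms. A toy sketch follows; the cost and probability numbers are invented for illustration and do not come from Johnson et al.’s models:

```python
# Error-management logic: when a miss (false negative) costs far more than a
# false alarm, the detector that minimizes expected cost is biased toward
# crying "smoke!". All numbers below are made up for illustration.

COST_FALSE_ALARM = 1   # annoyance: alarm sounds, no fire
COST_MISS = 1000       # catastrophe: fire, no alarm
P_FIRE = 0.01          # prior probability that an ambiguous signal is real fire

def expected_cost(p_alarm):
    """Expected cost of a detector that sounds the alarm with probability p_alarm."""
    false_alarm_cost = (1 - P_FIRE) * p_alarm * COST_FALSE_ALARM
    miss_cost = P_FIRE * (1 - p_alarm) * COST_MISS
    return false_alarm_cost + miss_cost

# Compare a cautious detector with increasingly "jumpy" ones.
for p in (0.1, 0.5, 1.0):
    print(f"alarm probability {p:.1f}: expected cost {expected_cost(p):.2f}")
```

Because a miss is a thousand times costlier than a false alarm here, the maximally jumpy detector wins (expected cost about 0.99 at p = 1.0 versus about 9.10 at p = 0.1): a built-in bias toward false positives is the cost-minimizing engineering choice, not a failure of reason.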

According to Lilienfeld et al., the psychological community generally agrees that cognitive biases are “basically adaptive processes” (Lilienfeld et al. 2009). The next question is whether they will continue to lead to rational behavior in our current or future contexts. Everyone seems clear that biases lead to specific irrational understandings and decisions, but can we comment on their current or future long-view rationality? Are they still adaptive in a broad sense, and how could we possibly answer that question? “Adaptive” may be a word best reserved for hindsight. Plus, we seem to have the ability to recognize bias in each other and mitigate it socially -- how and why did that evolve? And with all these biases front and center in my mind (availability bias), it is hard for me to conceptualize a “rational” cognitive process; it is easier to think of interacting biases, facilitating or inhibiting each other. This is a rabbit hole!


A bad feeling about bias

Overlapping and undergirding the bias/rationality discussion is the bias/fairness discussion. The most sobering parts of Agarwal and Kahneman’s books are their examples of bias in the judicial system. Kahneman mentions a study of the anchoring effect on German judges who simply rolled a die loaded to 3 or 9 before estimating jail time for a specific case (Kahneman 2011, p 125-126). For judges who rolled the 3, the average jail-sentence estimate was 5 months. For those who rolled the 9, the average estimate was 8 months. In her chapter on biases built into technology, Agarwal mentions a risk assessment algorithm used in many state courts to predict reoffending rates and “inform decisions about who can be set free at what stage of the criminal justice system” (Agarwal 2020, p 378). A 2017 ProPublica report exposed the algorithm as unreliable and extremely biased against black defendants “even when controlling for prior crimes, actual future reoffending, age and gender” (Agarwal 2020, p 379).

Agarwal’s discussion of unconscious bias packs a direct moral punch because she relates biases to social inequalities and injustice. But, even the less morally potent decision-making-theory context of Welsh contains a social justice/fairness backdrop. He describes how biases have led to unfair hiring practices in academia, and he is particularly interested in how bias functions in the spread and maintenance of socially harmful “factoids,” like the link between the MMR vaccine and autism (Welsh 2018). The way Kahneman frequently describes bias as over/under-weighting itself suggests the scales of justice and fairness. It is hard to say “bias” without evoking some unfairness connotations or generally negative feelings.

This is in part the basis for De Houwer’s argument that implicit bias should be framed as “implicit group based behavior” rather than as a “latent mental construct” (De Houwer 2019). If bias is something bad or unfair, and if it is something we have hidden inside us, then, “Being told that we are implicitly biased can threaten core beliefs about who we think we are and aspire to be” (De Houwer 2019). Being told I am bad feels quite different from being told I have behaved badly in specific instances. Agarwal makes a similar argument regarding the use and interpretation of the Implicit Association Test. Because it has been difficult to correlate test results with specific biased behaviors, she cautions against using the test to say anything conclusive about an individual person, to label them biased or unbiased (Agarwal 2020). While Agarwal does not refrain from trying to investigate implicit beliefs and associations, she is ultimately concerned with debiasing behavior.


Conclusion

What is the opposite of bias? And does that have anything to do with mental health counseling?

While the word bias has no distinct opposite, it does have a rich meaning with lots of contrast. The two strongest contrasting backdrops may be rationality -- accurate, objective, goal-oriented thinking and decision making -- and fairness, with an emphasis on social justice. The word also has a rich history, from bowling to bell-bottom-wearing 70’s psych professors and beyond. Today the word has a “buzz” quality to it; as Agarwal says, “there is a real danger of unconscious bias being reduced to a ‘trend’ or ‘fluff word’” (Agarwal 2020, p 11). At the same time its negative edge may be sharper now than when it first established itself in the psychological literature.

It is an important word in the world of cognitive therapies, and it is also common in the political and social rhetoric of today. There is a good chance that therapists and clients will discuss bias, and therapists might want to 1) consider beforehand how they will frame the word, and 2) give time to the client to reflect on the term and how it makes them feel and think. 

What backdrop does the therapist use? What backdrop does the client use? For example, let us say that a therapist wants to briefly describe negativity bias to a client with depression. The therapist may be thinking of this bias simply as an unhelpful tendency at this particular moment for this particular person; the therapist may not want to imply anything about their client’s rationality, objectivity, fairness, et cetera. However, the client may see this bias as a negative part of their character, or as a failure of their intelligence (negativity bias!).

Also, therapists may be involved in psychoeducational efforts to reduce prejudice and discrimination and increase inclusion and justice. In these contexts bias will likely be used frequently. Again it may be helpful for the therapist to examine how they intend to use and frame the word. “What is the opposite of bias?” could be a fruitful question for implicit bias or unconscious bias workshops. Bias is such an interesting and attention-catching subject (negativity bias?), it can be easy to lose sight of the end goal of most implicit bias workshops: increasing inclusion and justice. To achieve that end we probably need as much or more broaden-and-build-inclusion work as we need diagnose-and-debias work.

References


Agarwal, Pragya (2020). Sway: Unravelling Unconscious Bias. Bloomsbury Sigma.

Aronson, Jeff (2018). A Word About Evidence: 4. Bias - etymology and usage [Blog post]. https://catalogofbias.org/2018/04/10/a-word-about-evidence-4-bias-etymology-and-usag/

Aronson, Jeff (2018). A Word About Evidence: 5. Bias - previous definitions [Blog post]. https://catalogofbias.org/2018/04/20/a-word-about-evidence-5-bias-previous-definitions/

Aronson, Jeff (2018). A Word About Evidence: 6. Bias - a proposed definition [Blog post]. https://catalogofbias.org/2018/06/15/a-word-about-evidence-6-bias-a-proposed-definition/

Bortolotti, Lisa, & Antrobus, Magdalena (2015). Costs and benefits of realism and optimism. Current Opinion in Psychiatry, 28(2), 194-198.

Chavalarias, David, & Ioannidis, John P.A. (2010). Science mapping analysis characterizes 235 biases in biomedical research. Journal of Clinical Epidemiology, 63(11), 1205-1215.

De Houwer, Jan (2019). Implicit Bias Is Behavior: A Functional-Cognitive Perspective on Implicit Bias. Perspectives on Psychological Science, 14(5), 835-840.

Hughes, Megan E., Panzarella, Catherine, Alloy, Lauren B., & Abramson, Lyn Y. (2007). Mental Illness and Mental Health. In Handbook of Applied Cognition (pp. 629-658). Chichester, UK: John Wiley & Sons.

Johnson, Dominic D.P., Blumstein, Daniel T., Fowler, James H., & Haselton, Martie G. (2013). The evolution of error: Error management, cognitive constraints, and adaptive decision-making biases. Trends in Ecology & Evolution, 28(8), 474-481.

Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Krueger, Joachim I., & Funder, David C. (2004). Towards a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition. The Behavioral and Brain Sciences, 27(3), 313-327.

Lilienfeld, Scott O., Ammirati, Rachel, & Landfield, Kristin (2009). Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare? Perspectives on Psychological Science, 4(4), 390-398.

Marshall, James A.R., Trimmer, Pete C., Houston, Alasdair I., & McNamara, John M. (2013). On evolutionary explanations of cognitive biases. Trends in Ecology & Evolution, 28(8), 469-473.

Welsh, Matthew (2018). Bias in Science and Communication: A Field Guide. IOP Publishing Ltd.


