Monday, November 23, 2020

What is the opposite of bias?

 Introduction

Back in Module 3 of this course, Dr. Hawley gave us a quick, experiential crash course in cognitive bias. With a handful of simple questions, she demonstrated a variety of memory and attentional effects -- primacy and recency effects, self-serving bias, among others, if I remember correctly. Soon a question popped into my head: what is the opposite of bias? Honesty? Objectivity? Rationality? Precise calibration? Fairness? Justice? Does the psychological community have a specific term for whatever isn’t bias or biased?

My initial concern and hunch was that the rhetoric of bias research implicitly supported a western-white-male-serving ideal of rationality. I bet all this talk about bias is just another way to put down the “unenlightened” or brow-beat the “overly-emotional” with the bible of “reason.” Blah blah about our “higher” nature! Whose status is supported by this research? Those who happen to be the ones defining and maintaining what’s “higher.” So on and so forth, such went my thought process. I figured this hunch would be easy to confirm (confirmation bias!), and I had an axe to grind (motivation bias!).

It didn’t take long in my reading to realize that I was in way over my head, that bias is a very rich term with lots of contrast, and that I really enjoyed this bias research stuff - to the extent that I could understand what I was reading! I temporarily suspended my confirmation search and decided to use my initial question -- what is the opposite of bias -- in a less pointed way. My attempt in this paper is to use the gestalt idea of a figure against a ground to discuss the way “bias” is used in my small sampling of psychological literature. The three overlapping grounds I present are history, rationality, and fairness, and I conclude with reflections for the counseling setting.


Who’s to blame for bias? The 1970’s or Bowling?

What if Amos Tversky and Daniel Kahneman had titled their famous 1974 paper, “Judgment under uncertainty: heuristics and skew” (Kahneman 2011)? Or, “imbalance,” or “partiality,” or “common errors?” Would psychology be as biased toward ‘bias’ as it is now? Perhaps bias would still be important in psychological vocabulary, but its use would be more specific to methodological research errors rather than cognitive effects generally; perhaps it would not be quite the catch-all term that it seems to be. However, in hindsight (bias), it is hard to imagine any other word so nicely covering the phenomena Tversky and Kahneman describe in their paper: representativeness, availability, and anchoring. By many accounts this paper was the flag bearer in the vanguard of an army of bias papers (Welsh 2018; Krueger, Funder 2004; Lilienfeld, Ammirati, Landfield 2009). If you are looking for someone to blame -- and I am -- Kahneman is a nice target. But how about the zeit and its geist...

David Chavalarias and John Ioannidis, in their mind-boggling paper mapping 235 biases mentioned in the PubMed database, present a handy list of the 40 most commonly mentioned biases (Chavalarias, Ioannidis 2010). Of those 40, 19 were first mentioned in the 1970’s, compared to only 8 in the 40’s, 50’s, and 60’s combined. Unfortunately, I’ve failed to find a similar word-search study or usage history of bias specific to psychological literature. Surely someone, somewhere has written a history of bias research.

On catalogofbias.org, a project of the Centre for Evidence-Based Medicine at Oxford, Jeff Aronson has several wonderful blog posts on the word and its various definitions. He cites David Sackett in 1979 as the first to publish a categorization of biases, and he notes that Sackett drew on the definition laid out in E.A. Murphy’s The Logic of Medicine, 1976 (Aronson 2018).

Joachim Krueger and David Funder, in their 2004 call for a “more balanced social psychology,” see the 1970’s as the beginning of their field’s cognitive shift, specifically a shift toward studying bias in social perception and judgment (Krueger, Funder 2004). Scott Lilienfeld, Rachel Ammirati, and Kristin Landfield do not define the “modern” time period, but claim the research into cognitive fallibility as one of modern psychology’s “crowning achievements” (Lilienfeld, Ammirati, Landfield 2009).

All this to say: the bias kudzu seems to have first flowered in the 1970’s, and ever since the academic environment has provided fertile soil for its spread. Particularly good fertilizer has been the increased statistical sophistication in psychology and the social sciences. Bias has been a key term in statistics longer than in psychology. In Aronson’s blog his earliest cited technical definition of the term is from a 1926 paper on probability theory (Aronson 2018). Kahneman and Tversky’s initial research question was, “are people intuitively good statisticians?” Another factor involved in a bias toward statistical terminology in psychology could be a shift toward more computational metaphors for the mind and away from literary-humanistic language.

Of course, bias has non-statistical usage as well, which preceded and informed game and probability theories. Pragya Agarwal, in the introduction to her new book Sway: Unravelling Unconscious Bias, provides an overview of the history of the word, beginning with its hypothetical Indo-European root “sker-,” to turn or bend, and continuing in that tradition with 13th century French and Greek words meaning “at an angle or crosswise” and “to cut crosswise,” respectively (Agarwal 2020, p 11-12). In 16th century English the word appears meaning “an oblique or slanting line” and begins to be used to refer to bowling balls weighted unequally to one side, which produced curved trajectories when bowled (Agarwal 2020). Shakespeare uses the word both literally and metaphorically (Agarwal 2020). 

In Thinking, Fast and Slow, Kahneman often uses this language of weight (verb) -- to overweight, to underweight -- to explain or substitute for the word bias (Kahneman 2011). In this sense, bias is the tendency to or result of over/under-weighting, or over/under-valuing, or over/under-attending to something, relative to some ideal of balance, value, or objectivity. Before beginning this research I would have defined bias as a diminishment of objectivity because of prior commitments or allegiance, more in line with what might be called “motivation bias.” In other words, I would have said, bias is the skewing effect a committed belief, association, or goal has on arguments or perspectives. If we overlay the pointed, statistically informed use of bias onto the long, glorious tradition of admitting to or accusing others of using loaded bowling balls (metaphorically speaking), then we have a word with a lot going for it! 

What do these two general historical grounds - modern psychology and the tradition of argument/debate - suggest as contrasting backdrops for bias? In statistical psychology, I am guessing (I am statistically uneducated) it would be calibration and validity. Kahneman defines bias as a “systematic error,” so it is a reliable effect, just not valid (Kahneman 2011). In the colloquial-argumentative sense, I think objectivity and fairness capture the anti-bias spirit. All those terms, especially the first three - calibration, validity, and objectivity - hover close to the idea of “rationality.”


Rational and irrational monsters

Matthew Welsh, in his field guide to bias, cites a 1996 paper by Gigerenzer and Goldstein, in which they argue that in a real-world situation, with so many variables and possibilities, people would have to have the mental abilities of a “Laplacian demon” to follow the rules of rational behavior set out by economists (Welsh 2018). Laplace was a French mathematician who helped develop probability and statistics. A less frightening cousin of the Laplacian demon is the “homo economicus,” the fully rational “man.” Welsh and Kahneman enjoy poking fun at this “homo economicus,” while simultaneously promoting the benefits of his decision-making abilities. Kahneman’s favorite nickname for him is Richard Thaler’s term, “Econ” (Kahneman 2011).

There are plenty of irrational monsters as well, many more than of the rational type. Lilienfeld et al. consider the demon of “ideological extremism” in their paper “Giving Debiasing Away,” and they propose confirmation bias as its main source of power (Lilienfeld et al., 2009). Jan De Houwer suspects that “implicit bias” is a threatening construct because it is conceptualized as an “unobservable structure” or “hidden force” in the mind; he recommends taming this monster by framing implicit bias as a behavior (De Houwer, 2019). Krueger and Funder, citing Dawes (1976), explain that, traditionally, “capricious emotional” monsters have been the main threat to rationality; modern psychology, however, has identified the monster within the conscious mind itself (Krueger, Funder 2004). The irrational monsters have breached the walls!

Most of the authors I have encountered see bias as irrational by force of definition. Rationality functions as a strong contrasting backdrop for bias. However, at the same time, these authors also see bias as incorporated into some kind of long-view rationality: a bias may not be rational in a specific instance, yet be rational in the sense that it leads to health or has led to evolutionary fitness. Megan Hughes et al., in the Handbook of Applied Cognition, write that “It appears that some level of positive cognitive distortion is present in healthy individuals and may lead to improved functioning, health, and happiness” (Hughes et al., 2007, p 649). In the same vein Lisa Bortolotti and Magdalena Antrobus compare recent studies on “depressive realism” -- which show that people with depressed mood answer certain types of questions more realistically and accurately than average -- and studies on unrealistic optimism -- which show that overconfidence or unrealistically optimistic views are prevalent in nonclinical populations (Bortolotti, Antrobus 2015). In certain circumstances, irrational confidence or optimistic bias may be more beneficial than clear, “rational” judgment, in which case it may be more “rational” not to listen to your inner Econ.

Another way to put this is that irrational beliefs may lead to, or have led to, rational behavior. This is the tack taken by James Marshall et al. in their paper analyzing two different evolutionary theories of cognitive bias (Marshall et al., 2013). They distinguish the ability to rationally assess a situation from the ability to behave rationally in that same situation; the two abilities are not perfectly correlated. Dominic Johnson et al. explain this dynamic well using Error Management Theory (Johnson et al., 2013). Cognitive biases may not “maximize expected payoffs” of food or other goods, but they have maximized Darwinian fitness by helping us avoid very costly errors across a lifespan. Johnson et al. use the “smoke alarm analogy,” as does Agarwal, to explain how asymmetric costs of false positives and false negatives can encourage bias. Technological limits of smoke detectors mean that they can make mistakes, and falsely detecting smoke (annoyance) is less costly than failing to detect smoke (house burns down). Therefore, engineers work harder to avoid false negatives than false positives.
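The asymmetric-cost logic can be made concrete with a little arithmetic. Here is a minimal sketch in Python -- with entirely made-up numbers, not figures from Johnson et al. or Agarwal -- comparing a "paranoid" detector that errs toward false alarms with a "calm" one that misses more fires:

```python
# Hypothetical cost comparison for a smoke-alarm-style detector.
# Illustrative assumptions: a false alarm costs 1 unit of annoyance,
# a missed fire costs 1000 units, and fires are rare.

p_fire = 0.001           # probability there really is a fire
cost_false_alarm = 1     # alarm sounds, no fire
cost_missed_fire = 1000  # fire, no alarm

def expected_cost(p_alarm_given_no_fire, p_alarm_given_fire):
    """Expected cost of a detector with the given error rates."""
    false_alarms = (1 - p_fire) * p_alarm_given_no_fire * cost_false_alarm
    misses = p_fire * (1 - p_alarm_given_fire) * cost_missed_fire
    return false_alarms + misses

# A "paranoid" detector: many false alarms, almost no misses.
paranoid = expected_cost(p_alarm_given_no_fire=0.10, p_alarm_given_fire=0.999)

# A "calm" detector: few false alarms, but it misses 10% of fires.
calm = expected_cost(p_alarm_given_no_fire=0.01, p_alarm_given_fire=0.90)

print(paranoid, calm)  # the paranoid detector has the lower expected cost
```

Even though the paranoid detector is wrong far more often, its expected cost comes out lower, because the rare error it avoids is the catastrophic one -- the same logic Error Management Theory applies to evolved biases.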

According to Lilienfeld et al., the psychological community generally agrees that cognitive biases are “basically adaptive processes” (Lilienfeld et al., 2009). The next question is whether or not they will continue to lead to rational behavior in our current or future contexts. Everyone seems to be clear that bias leads to specific irrational understandings and decisions, but can we comment on their current or future long-view rationality? Are they still adaptive in a broad sense, and how could we possibly answer that question? Adaptive may be a word best reserved for hindsight. Plus, we seem to have the ability to recognize bias in each other and mitigate it socially. How and why did that evolve? And with all these biases front and center in my mind (availability bias), it is hard for me to conceptualize a “rational” cognitive process; it is easier to think of interacting biases, facilitating or inhibiting each other. This is a rabbit hole!


A bad feeling about bias

Overlapping and undergirding the bias/rationality discussion is the bias/fairness discussion. The most sobering parts of Agarwal and Kahneman’s books are their examples of bias in the judicial system. Kahneman mentions a study of the anchoring effect on German judges who simply rolled a die loaded to land on 3 or 9 before estimating jail time for a specific case (Kahneman 2011, p 125-126). For judges who rolled the 3, the average jail-sentence estimate was 5 months. For those who rolled the 9, the average estimate was 8 months. In her chapter on biases built into technology, Agarwal mentions a risk assessment algorithm used in many state courts to predict reoffending rates and “inform decisions about who can be set free at what stage of the criminal justice system” (Agarwal 2020, p 378). A 2017 ProPublica report exposed the algorithm as unreliable and extremely biased against black defendants “even when controlling for prior crimes, actual future reoffending, age and gender” (Agarwal 2020, p 379).

Agarwal’s discussion of unconscious bias packs a direct moral punch because she relates biases to social inequalities and injustice. But, even the less morally potent decision-making-theory context of Welsh contains a social justice/fairness backdrop. He describes how biases have led to unfair hiring practices in academia, and he is particularly interested in how bias functions in the spread and maintenance of socially harmful “factoids,” like the link between the MMR vaccine and autism (Welsh 2018). The way Kahneman frequently describes bias as over/under-weighting itself suggests the scales of justice and fairness. It is hard to say “bias” without evoking some unfairness connotations or generally negative feelings.

This is in part the basis for De Houwer’s argument that implicit bias should be framed as “implicit group based behavior” rather than as a “latent mental construct” (De Houwer 2019). If bias is something bad or unfair, and if it is something we have hidden inside us, then, “Being told that we are implicitly biased can threaten core beliefs about who we think we are and aspire to be” (De Houwer 2019). Being told I am bad feels quite different from being told I have behaved badly in specific instances. Agarwal makes a similar argument regarding the use and interpretation of the Implicit Association Test. Because it has been difficult to correlate test results with specific biased behaviors, she cautions against using the test to say anything conclusive about an individual person, to label them biased or unbiased (Agarwal 2020). While Agarwal does not refrain from trying to investigate implicit beliefs and associations, she is ultimately concerned with debiasing behavior.


Conclusion

What is the opposite of bias? And does that have anything to do with mental health counseling?

While the word bias has no distinct opposite, it does have a rich meaning with lots of contrast. The two strongest contrasting backdrops may be rationality -- accurate, objective, goal-oriented thinking and decision making -- and fairness, with an emphasis on social justice. The word also has a rich history, from bowling to bell-bottom-wearing 70’s psych professors and beyond. Today the word has a “buzz” quality to it; as Agarwal says, “there is a real danger of unconscious bias being reduced to a ‘trend’ or ‘fluff word’” (Agarwal 2020, p 11). At the same time its negative edge may be sharper now than when it first established itself in the psychological literature.

It is an important word in the world of cognitive therapies, and it is also common in the political and social rhetoric of today. There is a good chance that therapists and clients will discuss bias, and therapists might want to 1) consider beforehand how they will frame the word, and 2) give time to the client to reflect on the term and how it makes them feel and think. 

What backdrop does the therapist use? What backdrop does the client use? For example, let us say that a therapist wants to briefly describe negativity bias to a client with depression. The therapist may be thinking of this bias simply as an unhelpful tendency at this particular moment for this particular person; the therapist may not want to imply anything about their client’s rationality, objectivity, fairness, et cetera. However, the client may see this bias as a negative part of their character, or as a failure of their intelligence (negativity bias!).

Also, therapists may be involved in psychoeducational efforts to reduce prejudice and discrimination and increase inclusion and justice. In these contexts bias will likely be used frequently. Again, it may be helpful for the therapist to examine how they intend to use and frame the word. “What is the opposite of bias?” could be a fruitful question for implicit bias or unconscious bias workshops. Bias is such an interesting and attention-catching subject (negativity bias?) that it can be easy to lose sight of the end goal of most implicit bias workshops: increasing inclusion and justice. To achieve that end we probably need as much or more broaden-and-build-inclusion work as we need diagnose-and-debias work.

References


Agarwal, Pragya (2020). Sway: Unravelling Unconscious Bias. Bloomsbury Sigma.

Aronson, Jeff (2018). A Word About Evidence: 4. Bias - etymology and usage [Blog post]. https://catalogofbias.org/2018/04/10/a-word-about-evidence-4-bias-etymology-and-usag/

Aronson, Jeff (2018). A Word About Evidence: 5. Bias - previous definitions [Blog post]. https://catalogofbias.org/2018/04/20/a-word-about-evidence-5-bias-previous-definitions/

Aronson, Jeff (2018). A Word About Evidence: 6. Bias - a proposed definition [Blog post]. https://catalogofbias.org/2018/06/15/a-word-about-evidence-6-bias-a-proposed-definition/

Bortolotti, Lisa, & Antrobus, Magdalena (2015). Costs and benefits of realism and optimism. Current Opinion in Psychiatry, 28(2), 194-198.

Chavalarias, David, & Ioannidis, John P.A. (2010). Science mapping analysis characterizes 235 biases in biomedical research. Journal of Clinical Epidemiology, 63(11), 1205-1215.

De Houwer, Jan (2019). Implicit Bias is Behavior: A Functional-Cognitive Perspective on Implicit Bias. Perspectives on Psychological Science, 14(5), 835-840.

Hughes, Megan E., Panzarella, Catherine, Alloy, Lauren B., & Abramson, Lyn Y. (2007). Mental Illness and Mental Health. In Handbook of Applied Cognition (pp. 629-658). Chichester, UK: John Wiley & Sons.

Johnson, Dominic D.P., Blumstein, Daniel T., Fowler, James H., & Haselton, Martie G. (2013). The evolution of error: Error management, cognitive constraints, and adaptive decision-making biases. Trends in Ecology & Evolution, 28(8), 474-481.

Kahneman, Daniel (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.

Krueger, Joachim I., & Funder, David C. (2004). Towards a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition. The Behavioral and Brain Sciences, 27(3), 313-327.

Lilienfeld, Scott O., Ammirati, Rachel, & Landfield, Kristin (2009). Giving Debiasing Away: Can Psychological Research on Correcting Cognitive Errors Promote Human Welfare? Perspectives on Psychological Science, 4(4), 390-398.

Marshall, James A.R., Trimmer, Pete C., Houston, Alasdair I., & McNamara, John M. (2013). On evolutionary explanations of cognitive biases. Trends in Ecology & Evolution, 28(8), 469-473.

Welsh, Matthew (2018). Bias in Science and Communication: A Field Guide. IOP Publishing Ltd.



Friday, November 13, 2020

Bias in Science and Communication

 Bias in Science and Communication: A Field Guide - Matthew Welsh

chp 1 - quiz

chp 2 - anchors aweigh

- decision making theories springing from probability theory, game theory, mathematicians trying to help maximize decisions in games/gambling

-Laplace - "common sense reduced to calculus", expected value (value of outcomes multiplied by likelihood)

-Nicolas and Daniel Bernoulli, St Petersburg Paradox

-homo economicus, rules for rational decision making (Von Neumann and Morgenstern), transitivity (A>B and B>C, means A>C), independence (preferences don't change), completeness

-Herbert Simon, bounded rationality, scissors metaphor, cognitive abilities and structure of world (how much info is readily available)

-Tversky and Kahneman - heuristics and biases

-two systems theory

    -Stanovich and West - system 1 as TASS (the autonomous set of systems)

    -metacognitive process of switching systems, error recognition, etc?

chp 3 - on message: reasons for and types of communication

-cognitive biases vs motivational biases

-implicit vs explicit bias

-elicitation of uncertainty

-wisdom of crowds effect (Galton) - have to be independent
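Galton's effect is easy to reproduce with simulated guessers. A toy sketch (all numbers invented for illustration): average many independent, noisy guesses and compare the crowd's error to a typical individual's.

```python
# Toy wisdom-of-crowds demo: many independent noisy guesses, averaged.
import random

random.seed(42)
truth = 1000                                    # the quantity being guessed
guesses = [truth + random.gauss(0, 200) for _ in range(500)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - truth)
typical_error = sum(abs(g - truth) for g in guesses) / len(guesses)

# The averaged guess lands far closer to the truth than a typical
# individual guess -- provided the errors are independent.
print(crowd_error, typical_error)
```

If the guesses are correlated (say, everyone anchors on the same rumor), the averaging advantage shrinks -- hence the "have to be independent" caveat in the note above.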

chp 4 - Improbable interpretations: misunderstanding statistics and probability

-risk - multiple definitions, 1- probability of negative outcome, 2- product of likelihood and magnitude, 3 - dangerous and uncertain behavior/event

-variability - measurements of same parameter vary

-uncertainty - multiple definitions, 1 unable to measure, 2 - not knowing whether true or not

    -common error is for people to assume that past variability is an adequate measure of possible future values, use variability as measure of uncertainty

-monty hall problem, Let's Make a Deal, Marilyn vos Savant, many people responded angrily to her

-common problems in probability

    -sample size invariance

    -base rate neglect
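The Monty Hall answer that so angered vos Savant's readers -- switch, and you win about 2/3 of the time -- can be checked with a quick simulation (an illustrative sketch, not code from Welsh's book):

```python
# Monty Hall simulation: compare always-switching with always-staying.
import random

def play(switch, trials=100_000):
    """Fraction of games won with the given strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=True), play(switch=False))  # roughly 0.67 vs 0.33
```

Switching wins exactly when the first pick was wrong, which happens 2/3 of the time -- the simulation just makes that argument visible.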

chp 5 - truth seeking? biases in search strategies

-how search for info and update beliefs

-Laplacian demon, so many options and parameters

-"secretary problem"...an optimal stopping problem

-Simon - satisficing (when optimisation is too demanding)

-Naturalistic decision making (Klein and Zsambok) study how experts make decisions

-recognition and other selection heuristics

    -confirmation bias, confirming rather than testing

Chp 6 - same but different: unexpected effects of format changes

- people interpret percentages and natural frequencies differently

    -base rate neglect

    -twice as bad, etc (when risks are super low)

-nudge (default decisions, organ donors in europe)

-framing effects

-prospect theory: gains and losses, loss aversion

-order effects

    -primacy effect (remember first items, anchoring?)

    -recency effect - remember more recent items

-comparisons and preferences - single evaluation vs joint evaluation

Chp 7 - I'm confident, you're biased: accuracy and calibration of predictions

-positive correlations in confidence across fields of activity (higher level of competence or trait of confidence?)

-can someone accurately recognize whether the confidence they feel is justified or not? no

-Dunning-Kruger effect (ignorant of ignorance)

-hard-easy effect - overconfident in hard questions, underconfident in easy questions

-three different forms of overconfidence

    -overplacement (more than 80% rate themselves above the median)

    -overprecision (predict too precisely, especially experts)

    -overestimation (overestimate the likelihood of getting things right)

-underestimate uncertainty!!!!

-planning fallacy

-reducing overconfidence, awareness helps but not solves problem

    -format, process changes (elicitation tools)

    -taking the outside view (use data from other similar situations)

Chp 8. - sub-total recall: nature of memory processes, their limitations and resultant biases

-long-term vs short-term memory

-forgetting

-availability bias, strong even when probability described, planning fallacy

-faulty trees (unpacking effect); hard to account for implicit categories (unavailable) - probability predictions change when you unpack the categories

-anchoring searches, still strong even after been explained

    -break dependence on a single anchoring value

-hindsight bias, causal explanations, once formed, difficult to imagine things having turned out differently, or etc, changing hypotheses

Chp 9 - Angels and demons: biases from categorization and fluency

-heightism

-halo effect

-pretty/good

-Matthew effect

-stereotypical thinking

    -fuzzy boundaries

    -probabilistic membership, predictive power

    -making predictions with limited info

        -combination of stereotypes, halo effect, confirmation bias - view a group as good or bad

        -implicit bias

            -"people are faster at completing a categorisation task with items that are stereotypically related, and slower with less stereotypically related"

-Groupthink

-Easy to believe: fluency leads to believability, less likely to be checked

Chp 10 - us and them: scientists vs lay-people and individual differences in decision bias

-situational awareness developed by certain experts in certain fields (regular feedback, regular environment)

    -meteorologists: good calibration

-intelligence only weakly correlated to bias susceptibility, how about personality? (big 5)

-high conscientiousness more susceptible to hindsight bias

-cognitive reflection test

-"need for cognition" (NFC)

    -less prone to system one biases (stereotyping and halo effect)

    -more prone to system 2 biases (hindsight, confirmation bias)

Chp 11 - Warp and weft: publication bias example to weave it all together

-publication bias - tendency to decide whether to pursue publication based on what was found and how interesting, far more likely to pursue publication with positive results

    -bias toward novelty, toward confirmation, less replication

    -HARKing effect - hypothesising after results are known (works its way into papers)

    -too much dependence on small sample sizes

    -less re-test and replication

    -editorial decisions, who the author is, who you know, gender bias,

Chp 12 - Felicitous elicitation: reducing biases through better elicitation processes

    -outside view, calculate, transparency, decision making procedures, elicitation tools (MOLE, more-or-less elicitation - people are better at and prefer making relative judgments; reduces anchoring effect, wisdom of crowds effect)

Chp 13 - A river in Egypt: denial, scepticism, and debunking false beliefs

    -gullibility: makes sense socially and evolutionarily, have to learn a lot of info, makes sense to trust others

    -echo: more often factoid repeated

    -trust: trust the messenger first, then the message

Chp 14 - spotters guide to bias

Wednesday, November 11, 2020

Sway

 Sway: Unravelling Unconscious Bias - Pragya Agarwal


Intro

- 13th c french, biais - at an angle or crosswise

- english 16th c, oblique or slanting line, bowling balls that were weighted on one side

Chp 1 Gut Instinct

- Antoine Bechara, USC, studying people w/ brain damage not able to use intuitions, decision making very difficult and time consuming

- satisficing

- Herbert Simon: scissors metaphor: one blade task environment and other blade computation capabilities (context and cognition)

- inattentional blindness - lack of attention b/c focused on something else

- cognitive illusion - unconscious inferences

- conformity: informational, normative, internalized

- confirmation bias

- default bias

- association and affiliation, category membership, in-group/out-group

Chp 2 - dawn of time

- affordances - what the environment offers the individual

- out-group bias stronger than in-group favoritism

- more reliance on cognitive shortcuts when uncertain

- smoke alarm analogy

- three theories of how implicit biases formed

    - heuristics, shortcuts, kahneman and tversky

    - error management theory (haselton and buss), judgments about opportunities and threats consistently deviate toward extreme response; in case of threat, false negative highly costly, while false positive not that costly

        - propensity to avoid costly false negatives may be the root

    - artefact theory, biases are product of applying wrong strategy in wrong context, these are artefacts of hunter-gatherer days, etc

- easier to process hierarchical relationships more fluently than egalitarian

- implicit egotism theory - favor objects that they associate with self

- name-letter effect (Jozef Nuttin) - tendency to like appearance of first letter in name

- normative determinism, aptronym, name-is-fitting bias

Chp 3 - all in your head

- us vs them

-study of kids with Williams Syndrome (tend to be friendly and have less fear of others)

-damaged amygdala, less likely to do risk assessments, more likely to trust and approach

-parochial altruism - more likely to help ingroup

-different brain patterns when reacting to ingroup member in pain vs outgroup member

-more regret from negative result of non-default actions as opposed to routine actions

-default bias

-negativity bias - react more strongly and remember negative info

    -negative info processed more quickly

-truth bias - tendency to judge statements as truthful more often than they actually are

-people are better at recognising own race faces and own-gender faces

-halo effect and horn effect

-different process for people (global) vs object (local) recognition; women more often perceived as objects

    -objectification theory

-stereotype threat - fear of being stereotyped, impairs performance, increased anxiety, etc

-social fear not necessary for gender stereotypes but impt for racial stereotypes

-frequent contact, exposure may decrease salience of racial stereotypes

-accent bias

-affective processing theory - positive bias exhibited toward others who speak with own accent

-partisan bias

-reflective system (2) vs reflexive system (1)

Chp 4 - back in your box

-constantly assigning people membership

-more likely to rely on stereotypes when cognitive load is high

-kernel of truth hypothesis

-stereotype endorsement, activation, categorisation, application

-out-group homogeneity effect - tendency to see out-group members as more alike than in-group

    -even in gender stereotypes, so exposure not necessarily issue

-higher in the hierarchy more likely to stereotype lower

-Patricia Devine, UW-Madison

-more likely to remember info that confirms our stereotypes

-intersectional invisibility model - less likely to recognize people with multiple identities as full members of groups

-"Indianness is a national heritage...everyone owns them...right to use Indians"

-perpetual foreigner

-Bhabha and Chow - stereotypes work through repetition and ambivalence, easily shifting b/w contradictory meanings

-positive stereotypes - create competition and division between groups

    -create misconception that negative stereotypes have been neutralized

    -tend to lead to stronger negative stereotypes

    -tend to be prescriptive

-Double-bind bias

-stereotype lift - improve performance based on denigration of out-group

-stereotype boost - improved performance based on activation of positive stereotypes

-resume of a mother rated less competent than father

-self-objectification theory - women and girls internalize sexual objectification

Chp 5 - Bobbsey Twins

-confirmation bias

-Schelling segregation model, or tipping model

    - small preferences lead to large effects (segregation)

-disconfirmation bias (denigrating arguments counter to our own)

-social contagion theory

-more likely to act on biases and prejudices when surrounded by others with same

-social media fostering homophilic environments / echo chamber

-filter bubble (Pariser)

-availability cascade

-position bias (pay more attention to things at beginning and end)

-frequency bias (Baader-Meinhof phenomenon)

-association of nationality and social category membership

-threat and fear: more deference to authority, aggression toward out-group, more rigid hierarchical view of world

-political parties more identity than policy

-partisan political bias based on morality, more socially acceptable to be biased/prejudiced, etc.

-representation and role-models

Chp 6 - Hindsight is 20/20

-present bias, procrastination, delayed gratification with a hyperbolic curve
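Present bias falls out of the hyperbolic curve mentioned here. A small sketch (illustrative k and r values, my own numbers) showing the classic preference reversal that exponential discounting never produces:

```python
def hyperbolic(amount, delay, k=0.05):
    """Hyperbolic discounting: value falls off as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, r=0.01):
    """Exponential discounting: a constant per-period rate."""
    return amount * (1 - r) ** delay

# $100 now beats $110 in 10 days for the hyperbolic discounter...
hyperbolic(100, 0) > hyperbolic(110, 10)       # True (100.0 vs ~73.3)
# ...but push both 100 days out and the preference flips (present bias):
hyperbolic(100, 100) < hyperbolic(110, 110)    # True (~16.7 vs ~16.9)
# The exponential discounter's ranking never reverses:
exponential(100, 0) > exponential(110, 10)     # True
exponential(100, 100) > exponential(110, 110)  # True
```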

-loss aversion bias

-endowment effect

-choice can be demotivating

-familiarity bias, mere exposure effect

-evaluation of trustworthiness precedes eval of competence

-happiness makes novelty attractive, sadness prefers familiarity

-accumulated advantage - high status evaluated more positively

    -the Matthew effect (rich get richer, poor get poorer)

    -higher status pitchers getting more calls

    -halo effect

    -hindsight bias, false memory, recall bias

        -believe that it appeared more likely after the fact

        -Roese and Vohs (memory distortion, inevitability, foreseeability): myopic attention to a single causal understanding of the past (to the neglect of other reasonable explanations) as well as general overconfidence in the certainty of one's judgments

-hindsight bias greater when outcome negative

-just deserts bias

-conjunction fallacy, also have inherent bias that detailed statements more likely than general ones

-narrative fallacy

Chp 7 - Sugar and Spice

-gender stereotypes formed very early on

-binary bias, cultural or biological, or both?

-solve stereotype inconsistent riddles with more difficulty

-male-firstness bias

-linguistic determinism

-"she" appears only once in The Hobbit!

-very minor differences in number of words boys and girls speak

-women speak less than men in mixed groups, interrupted more

    -interruptions associated with dominance

-ambivalent sexism, positive stereotypes

-implicit sexist bias in women as well as men

-more within-group difference in spatial abilities than between-group

-creeping determinism, connecting the dots

-women in e.r. less likely to be taken seriously or given same amount of pain med

-most trials carried out on male mice or human males

Chp 8 - It's not black and white

-2015 Guardian study - 102 of 464 people killed by police were unarmed; black Americans twice as likely to be unarmed when killed

-black people way more likely to be arrested for drug crimes despite similar rates of use

-study of when to shoot/not shoot in video game setting, different results black vs white priming info

-implicit bias in language of judicial process

-young boys of color perceived as older and less innocent

-black patients receive less pain medication

-minority ethnic maternal mortality rates increased in us 2000-2014

-"snow capping" organization white at top and black at bottom

-Claire Jean Kim - racial triangulation theory

-model minorities

-colourism, especially prominent at intersection of race and gender

-using white actresses to calibrate lighting, color, contrast in media, "Shirley card"

-aversive racism (?)

-imposter syndrome

-those who believe they are not racist or sexist more likely to show implicit bias

-hypodescent rule - assigns a person the status of the subordinate group in their lineage

Chp 9 - Swipe right for a match

-beauty bias, halo and horn effects

-physical attribute stereotypes

-association of attractiveness and intelligence

-mate choice theory

-attractiveness a perpetual anchor

-effects of attractiveness on infant gaze/attention

-faces evoke trust

-man can be competent but not likeable, women need to be likeable to be competent

-weight bias, size and shape bias, weight discrimination, more socially acceptable

-internalized negativity, shame, bias

-heightism, verticality and power

-ageism, harder on women

-assign positive or negative evaluation of a cue within seconds

-metaphors we live by

-out-group favoritism (old favoring young)

Chp 10 - I hear you, I say

-besides faces, often react to accents first

-voice like a second signature, linguistic first impressions

-language association with nationality starts earlier than race

-villains with foreign accents

-trust and belief

-standard language ideology, native-speakerism

-associated with education, honesty, intelligence, criminality

-linguistic accommodation, chameleon effect, convergence or divergence

-code-switching, bi- or multi-dialecticism

-gender stereotypes and voice

-names playing into racism, sexism, ethnocentrism (familiarity and name bias)

-ease of name pronunciation

Chp 11 - I'd blush if I could

-technology often being developed by and for white men (tested mostly on whites and men)

    -voice recognition, facial recognition, virtual reality games

    -algorithms, trained on biased data sets, or data sets not representative of population

    -risk assessments, medical assessments

    -default settings for cameras, media equipment, set up for whites or lighter skin tones

    -Joy Buolamwini, Algorithmic Justice League, the "coded gaze"

Chp 12 - Good intentions

-diversity does not equate to inclusivity or equality or equal opportunity

-some criticisms of the IAT

    -hard to show connection between IAT scores and specific behaviors

    -doesn't show test-re-test reliability

    -hard to prove that it's measuring implicit biases

Epilogue - De-biasing 101

-taking more time

-awareness

-criticize behavior rather than person

-reduce essentialist tendencies and stereotypes

-name and gender blind reviewing (technique)

Tuesday, November 10, 2020

Thinking Fast and Slow

 Thinking Fast and Slow - Daniel Kahneman


Intro

  • Biases - systematic errors

  • Using heuristics - usually substitute an easier question for a harder one

  • System 1 more influential than your experience tells you

Chp 1

  • Control of attention shared by 1 and 2

  • Limited budget of attention (invisible gorilla)

  • System 1 constantly suggesting, feeding info to system 2

  • Cannot turn off system 1

  • Cognitive illusions

  • System 2 cannot substitute for 1, need to compromise

  • System 1 - really likes stories, cause-effect, active agents

Chp 2

  • 2 is “lazy”

  • Measuring cognitive effort by pupil dilation

  • Mental effort is distinct from emotional arousal

  • We have a maximum mental effort, above which we “give up”

  • High mental effort, hard to pay attention to other things

  • Gravitate to least demanding course of action

  • Switching from one task to another is effortful

Chp 3

  • Self-control and deliberate thought draw on same limited budget of effort

  • Flow

  • Busy and depleted system 2 - less self control, yield more easily to temptation, more selfish and prejudiced

  • All variants of voluntary effort draw at least partly on shared pool of mental energy

  • “Ego depletion”

  • Motivation

  • Overconfidence

  • If system 1 involved, conclusion comes first and arguments follow

  • Connection between ability to control attention and ability to control emotions

  • Stanovich and West (system 1 and 2, or type 1 and 2 process)

    • Stanovich says 2 parts to system 2

      • Algorithmic and “rational,” or engaged or reflective

Chp 4 - Associative Machine

  • Associative activation, cascade

  • Coherent, good story, causal story

  • Priming effect - relies on unconscious association

  • Ideomotor effect - influence of an action by an idea (“florida/old age” experiment), also works in reverse

  • Reciprocal priming effects - circular

  • Money priming

  • “Lady Macbeth effect”

Chp 5 - Cognitive Ease

  • Fluency level, familiar

  • Memory illusion

  • Predictable illusions occur if a judgment is based on an impression of cognitive ease or strain. Anything that makes it easier for the associative machine to run smoothly will also bias belief

  • Truth illusions (maximize legibility, visual contrast, easy to read, memorable)

  • Repetition induces cognitive ease/availability

    • Mere exposure effect (usually stronger for stimuli that individual never consciously sees)

  • Mednick - creativity is associative memory that works very well, RAT (remote association test)

    • Powerful effect of mood on this test

    • Happy mood loosens control of system 2

Chp 6 - Norms, surprises, and causes

  • System 1 assessing what is normal, maintain model of normality

  • “Norm theory” - recruit original episode and interpret in conjunction with it

  • Seeing causes and intentions, need for coherence

  • Perception of intention and emotions is irresistible (except to some extent for folks on spectrum)

  • Experience of freely willed action is different from physical causality

    • Paul Bloom - inborn readiness to separate physical and intentional causality explains religious beliefs; perceive “world of objects as essentially separate from world of minds”

Chp 7 - machine for jumping to conclusions

  • System 1 doesn’t keep track of alternatives it doesn’t select

    • System 2 has to decide whether to “unbelieve it”

  • Confirmation bias, positive test strategy, search for confirming evidence, is how system 2 tests hypothesis

  • Exaggerated emotional coherence (halo effect)

    • Suppressed ambiguity

    • Halo increased weight of initial impressions

    • Decorrelate error!

  • What you see is all there is (WYSIATI)

    • Asymmetry between ways our mind treat info that is available vs info we don’t have (availability bias)

    • System 1 radically insensitive to quality and quantity of information that gives rise to impressions and intuitions

  • Consistency and coherence of info more important than completeness;

    • Knowing less is often better/easier than knowing more

  • Overconfidence

  • Framing effects - how present info

  • Base rate neglect

Chp 8 - How Judgments Happen

  • Directing attention and searching memory

  • System 1 able to translate values across dimensions

  • Mental shotgun

  • Basic assessments

    • Friend or foe, approach avoid

    • Dominance, attractiveness

    • Similarity, representativeness

    • System 1 does well with averages but poorly with sums

      • Deals with exemplars or prototype

      • Almost complete neglect of quantity in certain situations

  • Intensity matching

    • Prediction by matching not accurate, but comes naturally

  • Mental shotgun - system 1 computes more than want or need for particular judgment

Chp 9 - Answering an Easier Question

  • Have intuitive feelings and opinions about almost everything

  • If question or judgment difficult or not enough time, system 1 find an easier related question (“substitution”)

  • Heuristics

  • Judgment based on heuristic will be biased in predictable ways

  • Present state of mind looms very large when people evaluate their happiness “WYSIATI”

  • Affect Heuristic - dominance of conclusions over arguments most pronounced where emotions are involved

    • Paul Slovic - let likes and dislikes determine beliefs

    • System 2 more of apologist than critic, endorser rather than enforcer

Chp 10 - the Law of Small Numbers

  • System 1 inept when faced with “merely statistical” facts

    • Sampling effects, misunderstand or ignore the importance of sample size

    • Artifacts - observations produced entirely by method of research

  • Bias of confidence over doubt

    • Focus on the story rather than reliability of results

    • Sustaining doubt harder work

  • Always looking for causes, make it hard to see randomness (vigilance)

  • “To the untrained eye, randomness appears as regularity or tendency to cluster”

  • “No such thing as a hot hand” in basketball; satisfies test of randomness

  • Illusions of patterns (e.g. Gates Foundation donating for more ‘small schools’ based on misinterpretation of data [small schools show more variability])
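The small-schools misreading is easy to reproduce: give every school the same true pass rate and the small ones still dominate the extremes, purely because small samples are noisier. A quick sketch with made-up sizes and cutoff:

```python
import random

def extreme_share(school_size, n_schools=2000, cutoff=0.55, seed=1):
    """Every 'student' passes with the same p=0.5; a school's score is its
    pass rate. Returns the share of schools scoring above `cutoff` — any
    excess of small schools at the top is a pure sample-size artifact."""
    rng = random.Random(seed)
    scores = [sum(rng.random() < 0.5 for _ in range(school_size)) / school_size
              for _ in range(n_schools)]
    return sum(s > cutoff for s in scores) / n_schools
```

`extreme_share(25)` comes out far larger than `extreme_share(400)` even though no school is actually better — small schools over-populate both tails.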

Chp 11 - Anchors

  • Anchoring effect - occurs when people consider a value for an unknown quantity before estimating that quantity

    • People’s judgments influenced by obviously uninformative numbers

    • System 2 anchoring - adjustment

    • System 1 anchoring - priming

  • Anchor and adjust

    • Start with anchoring number, then adjust, but usually not far enough, b/c stop at edge of uncertainty

    • People adjust less when mental resources depleted

  • Priming - evoke info compatible with it

  • Powerful anchoring effects in choices about money

  • Random anchors can be as effective as informative ones

    • Far more suggestible than we would like to be

    • Negotiations

    • Deliberately think the opposite

    • Assume the anchoring effect

Chp 12 - Science of Availability

  • Impression of the ease with which instances come to mind

  • Salient events, dramatic events, personal experiences

  • Fluent retrieval

    • If asked for more examples (cognitive strain), tend to believe or estimate lower amounts

  • Some resistance to availability bias if more vigilant

  • When feel good or powerful, more likely to rely on intuitions, system 1

Chp 13 - availability, emotion, and risk

  • Protective actions usually designed based on worst past experience, rather than worst possible

  • TV coverage biased toward novelty and poignancy

  • Damasio - emotional evaluations of outcomes

  • Haidt - the emotional tail wags the rational dog

  • Slovic - people have rich view of risk

  • Risk exercise in power

  • Availability cascade

  • Emotional reaction becomes a story in itself

  • Importance judged by fluency and emotional charge

  • Hard to deal with small risks: either ignore or drastically overweigh

  • Protect public from fear as well as danger

Chp 14 - Tom W’s Specialty

  • Base rate

  • Stereotype

  • Unaffected by size of group (predicting by representativeness, substitution)

  • Representative heuristic

    • Often more accurate than chance guess

    • Excessive willingness to predict unlikely events

    • Think like statistician vs think like clinician

    • Insensitivity to quality of evidence

    • Bayesian statistics
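The Tom W lesson — don't let representativeness swamp the base rate — is just Bayes' rule. A sketch with made-up numbers (3% base rate, evidence four times more likely for the stereotyped group):

```python
def posterior(prior, p_ev_given_h, p_ev_given_not_h):
    """Bayes' rule: weigh the evidence by the base rate (prior)."""
    num = prior * p_ev_given_h
    return num / (num + (1 - prior) * p_ev_given_not_h)

# Tom W "looks like" a computer-science student (0.8 vs 0.2), but if only
# 3% of grad students are in CS, the posterior stays modest:
posterior(0.03, 0.8, 0.2)  # ≈ 0.11, not the ~0.8 representativeness suggests
```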

Chp 15 - Linda: less is more

  • Linda fits the idea of “feminist bank teller” better than “bank teller”

  • Conjunction fallacy - judge conjunction of events more probable than one of the events

    • Substitution of plausibility for probability

    • Coherence

  • System 1 averages rather than adding (sometimes less is more)

  • Single evaluation vs joint evaluation
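The conjunction fallacy violates a rule any simulated population makes obvious: a conjunction can never be more probable than either conjunct. A toy check using the Linda-problem labels with invented, independent frequencies:

```python
import random

rng = random.Random(42)
# Invented population: each person is (bank_teller?, feminist?),
# drawn independently here purely for simplicity.
people = [(rng.random() < 0.05, rng.random() < 0.30) for _ in range(100_000)]

p_teller   = sum(t for t, f in people) / len(people)
p_feminist = sum(f for t, f in people) / len(people)
p_both     = sum(t and f for t, f in people) / len(people)
# p_both counts a subset of each conjunct's cases, so it can't exceed
# either — exactly what the intuitive "feminist bank teller" ranking violates.
```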

Chp 16 - Causes Trump Statistics

  • Statistical base rates

  • Causal base rates

  • Stereotypes are how we think about categories

  • Hard to change mind about human nature

  • People tend to exempt themselves from statistics or scientific conclusions (hard to learn from the general)

  • But easier to learn from examples, specific to the general

Chp 17 - Regression to the Mean

  • Illusory effect of punishment for below standard performance and reward for above standard

  • Talent and luck

  • Regression strange to the human mind

    • It always occurs when correlation between two things less than perfect

    • Correlation coefficient

    • Correlation and regression two ways of looking at same concept

  • Our mind strongly biased to causal explanations

  • Often confuse correlation with causation
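Regression to the mean stops feeling strange once you simulate it: score = talent + luck, and the day-1 stars give back their luck on day 2. A sketch with made-up normal noise:

```python
import random
import statistics

def regression_demo(n=10_000, seed=3):
    """Score = talent + luck. Take the day-1 top decile; their day-2 mean
    falls back toward the population mean because luck doesn't repeat."""
    rng = random.Random(seed)
    talent = [rng.gauss(0, 1) for _ in range(n)]
    day1 = [t + rng.gauss(0, 1) for t in talent]
    day2 = [t + rng.gauss(0, 1) for t in talent]
    cut = sorted(day1)[int(0.9 * n)]
    top = [i for i in range(n) if day1[i] >= cut]
    return (statistics.mean(day1[i] for i in top),
            statistics.mean(day2[i] for i in top))
```

No punishment or reward is involved: the top decile's day-2 average sits roughly halfway back to zero simply because the two scores are imperfectly correlated.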

Chp 18 - taming intuitive predictions

  • Insensitive to predictive quality of evidence

  • Prediction often matches evaluation

  • Correlation b/w two measures equal to proportion of shared factors among their determinants

Chp 19 - The illusion of understanding

  • Taleb - narrative fallacy, focus on few striking events

  • Halo effect (or negative, horn effect)

  • Compelling narrative fosters illusion of inevitability

  • Ultimate test of an explanation is whether it would have made the event predictable in advance

  • The human mind does not deal well with non-events

  • Unlimited ability to ignore our past, believe we understand past, hindsight bias

  • Once adopt new view, lose much of your ability to recall what believed before (hindsight bias) - i knew it all along

  • Outcome bias (blame and praise agents based on outcome)

  • In the presence of randomness, patterns are illusions

Chp 20 - Illusion of validity

  • Reluctant to infer the particular from general

  • Stock market, picking stocks - an illusion of skill, rewarding luck as if it was skill

  • Expert pundits, bad record of predictions

  • The more famous the forecaster, the more flamboyant the forecasts

Chp 21 - intuitions vs formulas

  • Meehl - clinical vs statistical prediction

    • Low validity environments; accuracy of experts matched or exceeded by simple algorithm

    • Experts try to be clever, too specific and bold

    • Use algorithm except for “broken leg rule”

  • Gawande - a checklist manifesto

    • Horror/fear of algorithmic mistakes, the cause of a mistake matters

Chp 22 - Expert intuition: when can we trust it

  • Klein - Naturalistic Decision Making, how experienced professionals develop skills

  • Recognition primed decision

  • Great facility to learn when to be afraid

  • Expertise takes time to develop, need regular environmental feedback

    • An environment regular enough to be predictable

    • Time to learn these regularities through practice

Chp 23 - the outside view

  • Start public discussion by confidentially collecting each person’s judgment

  • Proud emphasis on uniqueness of each case

  • The planning fallacy - unrealistically close to best case scenarios, could be improved by consulting statistics of similar cases

    • Reference class forecasting, overcome base rate neglect

  • Optimistic bias, delusional optimism

  • Sunk cost fallacy

Chp 24 - the engine of capitalism

  • Optimism both a blessing and a risk, largely inherited, less depression, better immune system, feel healthier, promotes action

  • Starting small businesses

  • People generally feel that they are superior to most others on most desirable traits

  • Optimistic risk taking - helps drive economic dynamism

  • Competition neglect

  • Anchor on our plan, illusion of control

  • Cognitive ease vs cognitive strain - judge better than vs worse than

  • CFOs way overconfident in predictions, paid to be knowledgeable, not acceptable to account for all the uncertainty

  • Generally a sign of weakness and vulnerability for clinicians to appear unsure

  • Premortem, what will we do if this crashes and fails, legitimize doubts

Chp 25 - Bernoulli’s error

  • The agent of economic theory is rational, selfish, and his tastes do not change

  • Thaler - “Econs”

  • Prospect Theory - value the change in status, not just value of outcome, so need to know reference points

  • Disbelieving is hard work, and system 2 is often tired

Chp 26 - Prospect Theory

  • Dislike losing more than like winning, threats more urgent than opportunities, asymmetrical s-curve

  • Diminishing sensitivity

  • Loss aversion ratio (1.5 - 2.5), but all bets are off if the loss is potentially ruinous

  • Doesn’t deal with disappointment, anticipation of regret
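The asymmetric S-curve has a standard closed form; a sketch using the commonly cited Tversky-Kahneman parameter estimates (alpha ≈ 0.88, lambda ≈ 2.25):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: x is a change from the reference
    point; concave for gains, convex and ~2.25x steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

abs(value(-100)) > value(100)  # True: losses loom larger than gains
value(200) < 2 * value(100)    # True: diminishing sensitivity
```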

Chp 27 - endowment effect

  • Indifference map/curve, combination of two goods

  • Loss aversion induces a bias that favors status quo

  • Owning an object appears to increase its value (endowment effect)

    • Held for use vs held for future exchange

  • For econs, buying price is irrelevant history, not so for humans

  • Decision making under poverty

    • Living below one’s reference point

    • Always in the “losses”, so improvement is “reduced loss” rather than a gain, all choices between losses

Chp 28 - bad events

  • Negativity and escape dominate positivity and approach

    • Golfers putt more accurately for par than birdie

    • Amygdala - threat center; threats can bypass visual cortex (react before “seeing”)

    • Some distinctions between good and bad hardwired

  • Goals are reference points

  • Animals also fight harder to avoid losses than achieve gains

  • Loss aversion in law (fairness)

    • Asymmetrical effects on well-being

Chp 29 - Fourfold Pattern

  • Possibility effect: large impact of changes between 0%-5%

    • Overweigh small risks

  • Certainty effect: large impact of changes between 95%-100%

    • Certainty at a hefty price

  • Inadequate sensitivity to intermediate probabilities

  • Probability vs decision weight

  • Almost completely insensitive to variations of risk among small probabilities

  • High probability of gain? Risk averse, fear of disappointment, accept settlement

  • Low prob of gain? Hope of large gain, risk seeking, lottery ticket

  • High prob of loss? Hope to avoid loss, risk seeking, reject favorable settlement

  • Low prob of loss? Fear of large loss, risk averse, accept settlement

  • Hard to cut losses

Chp 30 - rare events

  • Terrorism: availability cascade

  • Lotteries and terrorism - same kind of mechanism

  • Rare events ignored or overweighted

  • Plausibility, can you imagine it

  • Probability likely to be overestimated if alternative not fully specified

  • Vivid and emotional

  • Denominator neglect

    • Unlikely events more heavily weighted when stated in relative frequencies (1 out of 100) vs abstract risk (1%), people tend to take more “seriously”

  • Decisions from global impressions

    • Decision from experience - usually don’t overweight, often underweigh

Chp 31 - risk policies

  • Gains and losses combined or deconstructed, different preferences

    • Narrow (separate decisions) vs broad (single comprehensive decision) framing

      • Humans more likely to narrow frame

    • Look at gamble as part of bundle of gambles, shield yourself from pain of losses with broad framing

    • Create a risk policy based on broad framing

Chp 32 - keeping score

  • Money as proxy for points on a scale

  • Mental accounting

  • Massive preference for selling winners than losers - disposition effect

  • Sunk cost fallacy

  • Regret, self-administered punishment

  • Stronger reaction to sins of commission vs omission, more regret and blame

  • Loss aversion higher for health

  • Taboo tradeoff (can’t accept increase in risk)

  • Be explicit about anticipation of regret

  • Regret and hindsight bias

  • Gilbert - people tend to overestimate amount of regret they will feel

Chp 33 - reversals

  • Discrepancy between joint and single evaluation

  • Joint eval can focus attention on different aspect

  • Categories, if eval is across categories, can cause reversal

  • Intensity matching

  • Hsee’s evaluability hypothesis: some things not evaluable on their own

  • Awards to victims of personal injury were more than twice as large in joint than in single eval

  • Joint eval usually broader, but be wary of sales technique or manipulation of joint eval

Chp 34 - frames and reality

  • Meaning - associative machinery

  • Framing effects - losses hurt more than costs

  • Cash discount vs credit surcharge

  • Physicians just as susceptible to framing effects

  • Frame-bound vs reality-bound

  • System 2 has no moral intuitions of its own

  • Descriptions vs substance

  • MPG frame is wrong, should be replaced by gallons per mile

  • Opt-out vs opt-in

  • Not how we experience the workings of our mind

Chp 35 - two selves

  • Experienced utility (J. Bentham)

  • Decision utility (wantability, economics)

  • Peak-end rule - avg level of pain at worst moment and end

  • Duration neglect - tend to forget/neglect duration

  • Experiencing self vs remembering self

  • Maximize future memories

  • Intensity more impt to memory than duration
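The peak-end rule and duration neglect fit in one line; a sketch mirroring the cold-hand style experiments (made-up pain scores on a 0-10 scale):

```python
def remembered_pain(trace):
    """Peak-end rule: the retrospective rating tracks the average of the
    worst moment and the last moment; duration barely registers."""
    return (max(trace) + trace[-1]) / 2

short  = [2, 8]         # brief, but ends at its peak
longer = [2, 8, 4, 1]   # strictly more total pain, but a mild ending
remembered_pain(longer) < remembered_pain(short)  # True: 4.5 < 8.0
```

The experiencing self clearly suffers more in the longer episode; the remembering self prefers it anyway.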

Chp 36 - life as a story

  • Rules of narratives and plot

  • Significant events and memorable moments, progress, gains and losses

  • Peak-end rule

  • Amnesic vacations

Chp 37 - experience well-being

  • Experience sampling (Csikszentmihalyi)

  • Day reconstruction method

  • Extent of inequality of emotional pain

  • Being poor is miserable; being rich may enhance life satisfaction reporting but doesn’t enhance experienced well-being

  • Satiation level

Chp 38 - thinking about life

  • Gilbert and Wilson: Affective forecasting

    • Miswanting - bad choices from affective forecasting

    • Focalism - rich source of miswanting

  • Mood heuristic one way to answer life-satisfaction question

  • Small sample of highly available ideas

  • Focusing illusion: substitute small part for the whole (synecdoche)

  • Attention to new situations withdrawn over time as it becomes more familiar

    • Except for: chronic pain, exposure to loud noise, severe depression

  • Bias in favor of goods and experiences that are initially exciting

Conclusions

  • Don’t consider humans irrational, have rational capabilities

  • Chicago school of economics, faith in human rationality, freedom to choose, Milton Friedman

  • Libertarian paternalism (Nudge, Thaler and Sunstein), defaults, etc

  • Marvel of system 1 - maintaining rich and detailed model of world

  • System 2 - can’t distinguish b/w info from skills vs heuristic

  • Richer language, diagnostic ability