Every once in a while, I try to read about research that diverges from my usual interests. I find it gives me some interesting perspectives, often directly applicable to user experience and human factors. Last night was a good example. I read a paper from a psychotherapy journal that compared the effectiveness of interventions with patients of high or low self-esteem.
The psychotherapy-related finding is that patients with either low or high self-esteem responded well to therapists (or friends) who showed emotional empathy. Communicating that "I understand that you feel bad" had positive results across the board. We like it when people "feel our pain."
But cognitive intervention showed a difference. A standard cognitive intervention has the therapist suggest reframing the situation, along the lines of "if you think about your situation this way instead, you can see that it is not so bad, or you can even get something positive from it." This resonated with patients who had high self-esteem. Their self-image was positive, so the idea of gaining an advantage fit how they saw the world: it was supposed to give them opportunities to gain from whatever came their way, even the bad stuff.
But the same intervention was dissonant for patients who had low self-esteem. Their self-image was that their situation was supposed to be negative, so getting something good out of it was not how the world was supposed to work. They rejected the therapy. They just wanted a therapist (or friends) who would validate their low self-esteem. All they wanted was "You have every right to feel bad; your situation really sucks." That's it. They didn't want to hear about how to feel better.
Of course, I wasn’t reading the paper to learn how to become a cognitive behavioral therapist. But some interesting applications to human factors jumped out at me. Users of any system come in with a variety of psychographic profiles. They might have low self-esteem, or any number of other individual differences. We need to make sure that our designs align with their self-identity, or at least do not create identity-dissonance. This is especially true for domains like warning design and risk communication, where identity issues can be particularly relevant. We can’t just give people the facts and expect them to react appropriately. Even a logically consistent and rational procedure will be rejected if it goes against some trait of the user's identity.
Because the paper was not about human factors (its authors may never have heard of the field), it didn't suggest specifics for warning design or risk communication. We will have to figure those details out on a case-by-case basis. But I think it is worth some real consideration, don't you?