I was kind of interested in reading about this study because I had never really thought about the topic before. Andrew Howell at Grant MacEwan University (Canada) studied people's general willingness to apologize and then compared it against a whole battery of personality assessments.
The obvious connections held: people with more compassion and more agreeableness apologize more. More surprising was the effect of self-esteem. People with very low self-esteem are hesitant to apologize, even if they feel bad about what they did, because they direct the shame internally. They feel bad, but they may also feel sorry for themselves, and therefore never get around to apologizing to others.
Those with really high self-esteem (e.g. egocentrics, narcissists) were also less likely to apologize. Perhaps they felt entitled to do whatever they did and therefore saw no need to apologize for it.
So how can we use this information? First, think about how you feel when you do something that most people would generally apologize for. If you apologize more than others, then your self-esteem is right in the sweet spot and you should feel good. But if you apologize less, think about which tail of the distribution you fall in. You may need to work on your self-esteem, or you may need to come down off your pedestal.
Wednesday, November 30, 2011
Tuesday, November 15, 2011
Consistency v flexibility in rule enforcement
Kristen Laurin from the University of Waterloo has just published some research findings that I think have important implications for Human Factors, especially in shop floor management (any behavior related to safety, productivity, quality, etc.). Even though her work was actually focused on emigration, it is amazing how it relates to our work. Hey, no thanks necessary. Linking disparate areas of research is what I am here for.
What she found is that when rules are absolute (no chance for either changing them or getting away with violating them) we tend to accept them as a fact. By being attached to our "fact" schema, our subconscious rationalizes some justifications for why we wanted to comply anyway, to maintain our feelings of internal consistency. We don’t like to feel 100% forced into things, so our subconscious plays tricks on us. It tells us that it must be a good rule. That the leaders must be smarter than we thought.
On the other hand, when there is leeway, we associate the rule with a fuzzy likelihood schema. If it's not absolute, then we can find an exception. We don't have a 95%, 99%, or 50% schema; either a rule is absolute or it's not. So if it's not, our brain is always looking for loopholes. Can we get away with violating the rule? Will there eventually be new leaders who change the rule? Can we convince the leaders that we deserve an exception to the rule? If we think there is a chance, then our brain keeps working on solutions. And because of that, the rationalization never happens.
Dr. Laurin’s research focuses on emigration policies. She found that if you have no chance of leaving a country because of a dictator, then your subconscious rationalizes that the dictatorship isn’t as bad as it seems. We have stability, don’t we? Security? This is somewhat similar to the Stockholm syndrome experienced by kidnap victims. But if there is some leeway (a weak dictator, someone to bribe, etc.), the rationalization never happens.
So the general conclusion is that if you are going to implement unpopular rules, you need to make them absolute. You will get that subconscious rationalization and buy-in. But if there is any leeway, people will keep trying to find loopholes. And no buy-in.
So why do I think that research on emigration in dictatorships is related to shop floor management? Well, we always have some rules and policies that are unpopular. Hopefully not many, but there are always a few. The key to having them accepted is that they have to be absolute: 100% enforcement on 100% of the relevant employees. No turning a blind eye when the rule would delay shipment to a good customer. No exceptions for pet workers. You might think this is being the corporate asshole Robert Sutton writes about. But if you really are 100% consistent, then you get that subconscious rationalization effect. Workers can’t help but think the rule must be better than they thought, and that you must be smarter than they thought for having it. But it only works if you enforce it 100%.
The results can bleed into the corporate culture. We can enforce all of our rules and regs 100%. If a rule needs exceptions, put them in writing or change the rule. Then we get an entire culture of acceptance and compliance. If all rules are 100%, you must really be smart and competent. The rules must be really appropriate and in our best interest. Consistency in rule enforcement creates this rationalization effect that pervades the entire culture and reinforces itself over time.
But having even just a few rules that are acceptable to violate erodes the compliance culture. Maybe one or two rules get 100% compliance on their own merits because they obviously make sense (like wearing a respirator when coal mining). But the rest would be hit or miss.
My recommendation: make the best set of rules you can, and write the exceptions and exclusions into the official rule. Then don’t make exceptions on the fly. The rare occasion when an ad hoc exception might be an advantage is more than compensated for by the extra compliance and buy-in you get from consistency.
Tuesday, November 08, 2011
More happiness research
Warning, this is a little random - I am having trouble focusing today.
As you know, I am an avid reader of the happiness literature. Not the psychobabble type, but legitimate research using rigorous and valid methods. Not that there is anything wrong with that . . . .
Anyway, here are a few recent findings I thought I would share, and then brainstorm a few Human Factors implications.
One study found that:
- Happiness is composed of roughly 50% genetic predisposition, 10% circumstances, and 40% mental and behavioral coping strategies.
- Varying the coping strategies is better than using the same one repetitively.
So what is the HF implication? First, it means that individual differences are the most important thing when it comes to attitude (in this case happiness). So when we are measuring user satisfaction or branding strength, a lot of what we want to impact through design is already pretty set in stone.
The impact of coping strategies is also telling. Many studies find that user performance is as much a function of factors like self-efficacy as of the design's ease of use. Confidence makes you good at things. And in a self-fulfilling cycle, being good at things makes you more confident. This has incredible implications for training and education. Techniques like scaffolding, gradually increasing difficulty, quick feedback, and user-customized pacing all create a virtuous cycle of confidence and performance.
And happiness.
Another finding is that varied thinking increases happiness. Whether it is varied and fast (like brainstorming) or varied and slow (like daydreaming), varied thinking increases happiness. On the other hand, focused thinking does the opposite, leading to depression in extreme cases. Focused and slow (obsession) or focused and repetitive (panic) decrease happiness.
A related finding is that whether or not you are good at multi-tasking (which I have blogged about before), doing it makes you less happy. There is something soothing about focusing on one thing at a time (like meditation) that makes you happier. Of course, keeping in mind what we learned in the previous study, we need enough variety within that one focused task to keep it interesting. Boring isn’t good, but confusion isn’t either.