Friday, December 30, 2011

Pragmatic subjectivism

Interesting On Point last night (a rebroadcast from last month, plus a few text essays from James).  The subject was the philosophy of William James, and his religious philosophy in particular.  I have read a lot of his more psychological writings, but not his philosophic work.  I was multi-tasking during the show, so I didn’t catch all of it, but I liked the basic gist I got from the discussion.

He kind of mixes his ontology and epistemology.  Truth is not a noun or an adjective; it is a verb.  Truth is a process, not a thing, and it involves two components – subjective experience and pragmatism.

Subjective experience means that anything can be true for me if I experience it to be true.  The example they used is that if I see a goddess in a tree, then that goddess exists for me.  If you don’t see her, then she doesn’t exist for you.  This isn’t my own epistemology, but I like it because it has a strong libertarian quality to it.  I have no reason to feel insecure about my beliefs based on whether others agree with me or not.  So why pressure anyone to believe it also?  In fact, if no one else believes in my goddess, then I get her all to myself.  It is kind of the opposite of the atheist arguments of Christopher Hitchens. He rejects religion because it can’t be proven and it is improbable on its face.  But the opposite is true of the radical subjective experience.  It is both proven and obvious that our beliefs can have a huge impact on our lives. 

But he also adds on the requirement of pragmatism.  His theory of truth has a utilitarian streak: a belief is only true if it has a positive result on society.  So if something you think you believe has a net negative impact, then you must have misunderstood it and it’s time to go back to the drawing board.  This way, you can’t just believe that theft is good and have it suddenly be so.  Otherwise, psychopaths would rule the world.

And then the last thing they discussed was his belief in an objective reality.  What James said was that only when everyone believed something to be true did it become objectively true.  And because truth was a process, it truly did “become” true.  It wasn’t true before that.

A caller made a really interesting point: because James was more of a psychologist than a philosopher (or at least his philosophy emanated from his psychology), he wasn’t describing any ontology, but rather he was saying that this is how our minds work.  So if our brains are wired so that a majority of people believe something, then it doesn’t really matter whether it is objectively true; you are not going to convince the world of something they don’t want to believe.  I wish this weren’t the case, but my own cognitive behavioral research tells me that it has a large ring of truth to it.  That is why we have so much obesity, climate change denial, conspiracy theories, flat earth societies, predictions of the apocalypse, etc.  Our brains are pretty much wired to believe what we prefer to believe.  James puts some interesting philosophical meat behind this finding.

Wednesday, December 28, 2011

Defining a "customer" in the social media age.

Anyone in the sales or human resources industries probably knows a lot about how proprietary their contact lists are.  If you leave the company, the company owns your rolodex (electronically speaking of course).  Those names could be worth more than the company’s entire product inventory, especially if you are a B2B large volume dealer.  Or an executive search firm.  Most contracts prohibit you from taking the client files with you, or even contacting them for a certain period.

So here is a new twist.  Noah Kravitz was hired to tweet for Phonedog.  He even had the company name in his username (@Phonedog_Noah).  He left the company, changed his username (to remove Phonedog), and kept the 20,000 followers.  Phonedog sued to get the account (and followers) back.

So this is a great example of the difference between the letter and spirit of the law.  Noah’s contract didn’t say anything about Twitter followers, just customer lists.  The court can decide that where the contract says “customer list”, that includes Twitter followers implicitly.  Twitter followers are a “kind” of customer in the social media age.  Or it can decide that Twitter followers weren’t included because they are not the same as customers.  Or that Twitter followers were not part of the “original intent” of the contract, to quote one of Justice Scalia’s favorite concepts.

But the contract was signed before Twitter even existed.  You could argue that they should have updated the contract, but with technology changing as fast as it is, that would be unworkable.  Every two months they would have to add new social media.  Or they could define it so generically that it would be unenforceable in court as too vague.   In light of this argument, it seems the court has to go with the company.  And yet . . .  I could make an argument the other way too.

But I will let you do that in the comments, just to get a discussion going.

Monday, December 26, 2011

HBR forecasts and saving the world through Human Factors.

At this time of year, everyone and their cousin puts out their Top Predictions for Next Year.  Most of them are hardly worth a second glance.  Or maybe one of them is worth reading, but the others put you to sleep. 

This article from Harvard Business Review lists only six.  That alone should tell you that they aren’t going to waste your time.  But some of them are really insightful.  Here are the three I like best:

Slacktivism.  We are all getting lazier and lazier, but we also care about more and more global and local causes. So how do we reconcile our desire to do the least amount of work possible but also help the most causes?  It’s easy.  Just hit the share button.  See an article on world hunger?  Just share it with all your friends on FB, contacts on LI, tweet it, etc. etc.  If you have a social media dashboard, you can do this all in one click.  And then you can feel really good about yourself for doing your part to solve world hunger and only spend a few seconds doing it.

What can we do about this trend to make it stronger and more effective?  That’s what human factors is all about, right?  Well, we can make the sharing easier and more powerful, and make it actually work toward solving the problem, not just talking about it.  I have a million ideas for this, but I will let you think about it for a while first and go on to the next one . . . .

Self-quantification.  We hear all the time about identity theft and privacy concerns.  And yet the iGeneration puts more and more of their personal information online.  Last year, they used their GPS phones to check in from every restaurant, museum, airport, or event they attended.  A few niche markets emerged as well to record food eaten, TV shows watched, miles run, and more.  As these grow in ease of use and popularity, we will know everything about everyone.  This can be used for good – by using population data to predict trends in public health, drug interactions, and other issues of national concern.  But it can also be used for microtargeting in insidious ways that are not obvious to the targeted consumer.  I am sure you have heard of neuromarketing, but it can get orders of magnitude more detailed than that when your whole life is available for full review.  What you ate, where you went, who you talked to, what brands you mentioned in conversation, and so on.

What can we do here?  Well, we can create some privacy platforms that facilitate the positive and prevent the negative.  Again, I will let you ruminate on this for a while before answering it myself.

Gerontabletification.  The world is getting older.  This is not new.  But how can we leverage 2011’s new technologies to address it?  How about using tablets like the iPad?  We can distribute all kinds of apps that help older people remain productive members of society, or at least minimize the burdens when we lose our health.  Medical apps to make sure older people stick to their medication regimens.  Mental games to keep Alzheimer’s at bay.  eReaders to present books and news in large print.  Easy multi-media communication with the grandkids.  All of these would be better presented through a tablet.  I am sure you can think of (and develop) your own ideas.  And become a millionaire while also saving the world from crushing health care costs.

The next three are good too.  I will leave you to read those on your own.

Friday, December 23, 2011

The Future of the Internet

I am reading Jonathan Zittrain’s 2008 book on “The Future of the Internet” in preparation for my Spring web innovation class.  He doesn’t talk about futuristic technology or specific products and services like most predictors do.  The book doesn’t have flying cars or 3-D printing of fully functional automobiles at home, or any of that.  He spends the entire 250 pages discussing what he sees as the key issue of the future.

When the Internet was young, there was no business case for doing anything bad, so most users were trustworthy and trusted.  Even the hackers mostly did their work just to get street cred, not to actually hurt anything.  They would break into the CIA server just to be able to boast about the accomplishment.  No one was stealing credit card numbers and identities.

Then the Internet spread to the masses.  This led to three very significant changes.  First, there were many more users (volume).  Second, the advent of e-commerce meant that sensitive information was now there for the taking (value).  Third, the typical user was pretty clueless about how the system actually worked, so they didn’t know how to secure their networks and PCs (efficiency).  Any business that has a high volume, high value product and can produce it efficiently has a great business.  So even if philosophically we would like to have an open Internet, net neutrality, the long tail, open source . . . this just might not be sustainable.

He sees the Internet evolving toward one of two ends of a spectrum.  On one end is the tethered information appliance.  In this case, everybody makes a small piece of the puzzle (like the toaster, blender, oven, microwave, silverware, plates, etc. in your kitchen).  They are mostly cross-compatible (most food containers are microwavable nowadays), and consumers don’t have the ability to hack anything because none of the pieces is particularly complex.  There is not much to hack with.  In this model, our word processor, browser, camera, and so on would all be very minimally functional; you would buy the ones you want and plug them all together.  They would not truly “integrate” like they can now.

On the other end of the spectrum is the old IBM model where they leased you the hardware, software, training, maintenance plan, and even customized programming for your entire system.  So everything was fully integrated and fully compatible, but you weren’t connected to the outside.  It was all locked down, so no hacking was possible and even if you could break through you couldn’t spread it to others.

The problem with both ends of the spectrum is that we destroy the innovative culture of the Internet.  Mashups and mods, and 5th party apps on top of 4th party APIs on 3rd party software on 2nd party hardware on a 1st party network, disappear.  No more Angry Birds.

So he proposes something down the middle.  A hybrid of the two that combines technological, regulatory, process, and communication innovations.  It’s not perfect, but given where we are going now it is much better than the alternatives.  It balances some amount of security and safety with a moderate freedom to hack, mashup, and innovate.

All three of these futures are still in the future and none of us has a crystal ball to know what is possible or what will happen.  Maybe we will hit a technological singularity and all of this will be moot.  Or maybe the bad guys will win before we have a chance to get to one of these futures.  But if we want the future to reflect a balance of freedom and security, we can’t leave it to the governments of the world or to the Microsofts, Apples, and Googles.  Nobody’s best interests are quite aligned with the general public’s.  Look at what happened when we let the banks redesign the mortgage industry.  One thing we know is that incentives matter.

So I started thinking: who would make up the best team to figure out and design the future?  It seems to me that what Zittrain describes requires a combination of many of our disciplines.  A team of brilliant innovators working through the technological, political, cultural, regulatory, psychological, and other issues to create a vision for the future would be a great read (maybe a White Paper, or short book) and could be the roadmap that the future really follows.

Anyone interested?  Let me know.

Monday, December 12, 2011

Participatory goal setting for carbon footprints?

I have always been a big advocate of participatory goal setting.  This is when the employee and the boss get together at least once a year to have a deep discussion about what that person’s performance objectives are in terms of personal productivity, teamwork, innovation, self-improvement, and so forth.  This conversation is used to set more specific goals for the employee for the year.  Specific, relevant milestones can be set throughout the year to monitor progress toward the goals.  This way, neither the employer nor the employee gets a big surprise at year end.

This tends to work best with employees who have unique job responsibilities (CEO, football quarterback).  When one is a member of a team whose members share similar responsibilities, it is hard to customize different performance goals for different people.  Based on their hopes for the future, one person can push for more advanced skills training while another could push for more current production bonuses.  But it does add a political complication if one of them feels underappreciated and underrewarded.

So I was very interested to hear a summary of the Durban Conference on Climate Change last night.  I have not read the full documents, but this is how the news summarized the agreement.  Over the next year, every country needs to set a specific goal for its 2020 carbon footprint.  These goals can be different for every country because they all have different needs, environments, economies, cultures, and so on.  So just like custom performance goals work for employees, they are expected to work for countries on their environmental interventions.  One might focus on stopping deforestation or wetland protection.  Another might switch to renewable fuels.  But whatever goals they set for themselves will be legally enforceable (I’m not sure how this part works).

But what incentive is there for any country to set tough goals for itself?  Employees usually get a salary bonus, promotion, or at least an improved resume.  What does a country get?  From what I can see, the conference attendees are hoping that public pressure from each country’s internal population as well as peer pressure from other countries will push them to set appropriate goals.  It is an interesting application of a traditional management technique.  

Monday, December 05, 2011

Do we have to be sad?

Some new research challenges a myth that has been around since Freud.  When someone important to us passes away, or some other major traumatic event happens, we are expected to have a rush of sadness, distress, and grief.  If we don’t, then everyone around us warns that it is going to come out eventually, so we should just “let it out.”  Or even seek therapy to help deal with these buried emotions.

Well, it turns out that some people are just more emotionally resilient than others.  It doesn’t mean we didn’t care about the person who passed away, it just means that our emotional constitution doesn’t bend as much in the face of adversity.  You don’t need therapy, you don’t need medication, you don’t need to “let it out.”  You just need these people to leave you alone.

If you need a reference to prove this to the people always getting on your case, there is an article in the November/December 2011 issue of Scientific American Mind that cites research by Camille Wortman of Stony Brook University, Kathrin Boerner of Mount Sinai School of Medicine, and George Bonanno of Columbia University.  Plenty of ammunition.  It’s behind a paywall, so you will have to Google it.