Friday, March 02, 2012

Ethical consequences of Augmented Cognition

A very hot topic today in the Human Factors discipline is augmented cognition.  There are so many ways we can enhance sensation, perception, memory, attention, decision making, psychomotor coordination, and more.  The military was the original customer for this kind of technology and is still the biggest, but now the mobile web is enabling an incredible assortment of "augcog".

I recently read an article in the Atlantic on military uses of augcog that got me thinking.  The article was about the ethics of augmenting soldiers.  Right now, it is against the Geneva Conventions to keep military prisoners awake for extended periods.  But if we can engineer soldiers not to need sleep (borrowing the genes that let dolphins sleep one half of their brain at a time, so the other half can make sure they surface for air), does sleep deprivation then become ethical?  If we engineer them not to feel fear (by manipulating gene expression in the amygdala), does waterboarding become ethical because its only real consequence is making them believe they are going to drown?  What if we engineer them not to feel pain by dulling the brain's pain receptors?  Are other kinds of torture now ethical?

And then there are the ethics of widening the digital divide.  What if high end smart phones become so powerful that they give their owners all kinds of benefits not available to others?  Those benefits could be used to widen an income gap that is already too large.

Does our discipline have a duty to put an equal amount of time and resources into designing significant benefits into low end technology, to mitigate the widening of the digital divide?  We can't guarantee equal outcomes, but should we at least try for equal opportunities?  Technology can be a powerful source of differentiation, but it can also be a powerful equalizer.  If we do the former, do we have an ethical duty to also do the latter?
