This is an interesting design idea. Basically, your clothes are fitted with physiological sensors, and data mining over the sensor readings models your emotional state. Based on the detected emotion, your iPod or other device can serve up a matching song or image (or maybe even a smell someday). When you are sad, you get a happy song or a funny video. When you are angry, you get a soothing song or the image of a cute baby. Depending on how good the modeling is, perhaps it can even sense when you are enjoying the experience of wallowing in your sorrow (e.g. after a relationship breakup) and amplify it with a sad song. There are many directions to go with this.
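To make the loop concrete, here is a minimal sketch of how sensing, modeling, and selection might fit together. Everything in it is a labeled assumption: the sensor fields, the threshold rules standing in for the data-mining model, and the playlist mapping are purely illustrative, not how any real product works.

```python
# Hypothetical sketch of the sense -> model -> select loop.
# All names and thresholds here are invented for illustration.
from dataclasses import dataclass
import random

@dataclass
class SensorReading:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens; rises with arousal

def classify_emotion(r: SensorReading) -> str:
    """Toy stand-in for the data-mining model: crude threshold rules."""
    if r.heart_rate > 100 and r.skin_conductance > 8.0:
        return "angry"
    if r.heart_rate < 70 and r.skin_conductance < 2.0:
        return "sad"
    return "neutral"

# Counter-programming: respond to the detected state with its opposite.
PLAYLISTS = {
    "sad": ["happy song", "funny video"],
    "angry": ["soothing song", "cute baby photo"],
    "neutral": ["whatever was already playing"],
}

def pick_content(reading: SensorReading) -> str:
    return random.choice(PLAYLISTS[classify_emotion(reading)])

# Example: a spiking reading gets classified as anger and countered.
print(pick_content(SensorReading(heart_rate=110.0, skin_conductance=9.5)))
```

The real modeling problem is of course much harder than two thresholds, but the shape of the loop (continuous readings in, an emotion label out, content keyed to the label) would be the same.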
But the question is whether the user would accept the software deciding for them. How hard is it to select your own song to fit what you are feeling? Is this the kind of automation we need? It may depend on whether we are talking about mood or emotion. A mood is a diffuse feeling that lasts for hours or days; an emotion is a sharp reaction that hits you fast. When you are experiencing road rage after some idiot driver cuts you off, perhaps automation is a good idea. But when you are just feeling sad, selecting the sad songs is half the fun and probably just as therapeutic as hearing them.
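One way to turn that distinction into a design rule: only let the software take over when the signal spikes suddenly (an emotion, like road rage), and leave slow-drifting states (a mood) to the user. A toy sketch of that policy, with entirely made-up window and threshold values:

```python
# Hypothetical "automate only for acute emotions" gate.
# WINDOW and SPIKE_THRESHOLD are invented numbers, not tuned values.
from collections import deque

WINDOW = 30           # keep the last 30 samples (say, one per second)
SPIKE_THRESHOLD = 25  # a jump of 25+ bpm across the window counts as sudden

history = deque(maxlen=WINDOW)

def should_automate(heart_rate: float) -> bool:
    history.append(heart_rate)
    if len(history) < WINDOW:
        return False  # not enough data to judge onset speed yet
    # A fast rise over a short window looks like an emotion; a level that
    # drifted there over hours is a mood, so leave the user in control.
    return history[-1] - history[0] > SPIKE_THRESHOLD
```

Under this rule, ordinary sadness never trips the automation, so picking your own sad songs stays in your hands.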
This is new, so no company actually offers the product yet. But I hope whoever builds it does some ethnography first to make sure they are targeting the right user contexts.