Music and human emotion go hand in hand, and the tedious manual chore of choosing a series of musical pieces to fit a mood may soon be drawing to a close. For music lovers with only rudimentary computer skills, acquiring, encoding, and selecting a seemingly endless stream of content can be daunting. To ease some of that burden, a team at the University of Maryland, Baltimore County, has been working on an interesting project called XPod, a player that could turn music selection into a whole-body experience.
Bill Christensen of Live Science reports that research on the XPod, a music player that senses its user's activity and emotion, was presented at the Proceedings of the International Conference on Mobile Technology, Applications and Systems as early as 2006. ("XPod Would Sense Your Emotions Then Pick Music," Live Science, Jan. 2006)
The XPod concept is based on automating much of the interaction between the music player and its user: a "smart" player learns its user's preferences, emotions, and activity, then tailors its music selections accordingly.
The device monitors a number of external variables to gauge its user's activity, motion, and physical state, building a model that predicts the genre of music most appropriate at the moment. The user trains the player on which music is preferred under which conditions. After this initial training, the XPod uses its internal algorithms to make an educated selection of the song that best fits its user's emotion and situation. A key component of the system is a physiological sensor called the SenseWear from BodyMedia (www.bodymedia.com), an armband designed to monitor a variety of parameters such as skin temperature, heart rate, and galvanic skin response. Such data can be used to determine emotional states (sadness, anger, surprise, fear, frustration, and amusement) with a high degree of accuracy (in the 70-90% range). The SenseWear armband also contains an accelerometer, whose readings are used to determine the user's level of activity: sitting, walking, running, and so on.
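To make the idea concrete, here is a minimal, hypothetical sketch of how a windowed stream of accelerometer readings might be mapped to a coarse activity level. The thresholds and categories are invented for illustration; the UMBC team's actual algorithms and values are not published in this form.

```python
# Hypothetical sketch: classifying activity level from a window of
# acceleration magnitudes (in g). Thresholds are illustrative only.
from statistics import mean, stdev

def classify_activity(accel_magnitudes):
    """Map a window of acceleration magnitudes to a coarse activity level."""
    avg = mean(accel_magnitudes)
    variability = stdev(accel_magnitudes)
    if variability < 0.05 and avg < 1.1:
        return "resting"   # lying down or sitting still (near 1 g, little change)
    elif variability < 0.5:
        return "passive"   # light movement, e.g. walking
    else:
        return "active"    # vigorous movement, e.g. running

# A quiet window hovering around 1 g reads as resting.
print(classify_activity([1.0, 1.02, 0.99, 1.01, 1.0, 0.98]))  # resting
```

In practice a classifier like this would run over short sliding windows, combined with the armband's physiological signals to estimate emotional state as well.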
The XPod system "learns" the user's preferences with the help of the armband. Using a client-server configuration, the server processes the incoming data from the SenseWear, combines that with the information entered by the user about what music they prefer under which conditions, and selects the tunes that would best fit the current situation.
The experimental setup includes a Windows laptop, which wirelessly receives the data from the SenseWear, executes all processing, stores all song data, and sends the selected songs to a PDA client that plays the music and provides the user interface. The laptop and PDA communicate via Wi-Fi, which also lets the user rate the music, skip to the next song, and otherwise control playback.
According to Scott Wilkinson, writer for Electronic Musician, "When the user presses Play, the server begins examining the data from the SenseWear. Using a series of algorithms, it determines the average and standard deviation of the incoming values and compares the results with predetermined ranges that correspond to active, passive, and resting states. Once the user's state is known, that information is passed to a neural-network engine, which compares the user's current state to states for which song preferences have been specified. Finally, it makes a musical selection and sends the data to the PDA." ("Mind Reader," Electronic Musician, Nov. 1, 2006)
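The state-detection step Wilkinson describes can be sketched as follows: compute the mean and standard deviation of a sensor window and match them against predetermined ranges. The ranges and sample values below are invented for demonstration (loosely modeled on heart-rate data); the real system's thresholds are not public.

```python
# Illustrative sketch of range-based state detection. Ranges are
# hypothetical, keyed here to a heart-rate-like signal in beats per minute.
from statistics import mean, stdev

# (state, (mean_low, mean_high), (std_low, std_high)) -- assumed values
STATE_RANGES = [
    ("resting", (60, 80),   (0, 5)),
    ("passive", (80, 110),  (0, 10)),
    ("active",  (110, 200), (0, 40)),
]

def detect_state(samples):
    """Compare a window's mean and standard deviation against preset ranges."""
    m, s = mean(samples), stdev(samples)
    for state, (m_lo, m_hi), (s_lo, s_hi) in STATE_RANGES:
        if m_lo <= m < m_hi and s_lo <= s <= s_hi:
            return state
    return "unknown"

print(detect_state([72, 70, 74, 71, 73]))  # a calm window: resting
```

In the actual system, the detected state is then handed to a neural-network engine that matches it against states for which the user has expressed song preferences.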
Users can continually update the system by indicating satisfaction with its selections. That feedback can be applied to the song being played as well as to the artist and genre identified by the song's metadata. The neural network learns the user's preferences by monitoring which songs are skipped under which conditions, eventually leading the system to skip songs it believes the user would skip anyway.
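A toy version of this feedback loop might look like the following. The actual XPod uses a neural network; here a simple per-song, per-artist, and per-genre score stands in to show how skip feedback could propagate through a song's metadata.

```python
# Minimal, hypothetical sketch of skip-based preference learning.
# Class and field names are invented for illustration.
from collections import defaultdict

class PreferenceModel:
    def __init__(self):
        # Keys: (user state, metadata level, value); positive means liked.
        self.scores = defaultdict(float)

    def record(self, state, song, skipped):
        """Apply feedback to the song itself and, via metadata, to its artist and genre."""
        delta = -1.0 if skipped else 1.0
        for level, value in (("song", song["title"]),
                             ("artist", song["artist"]),
                             ("genre", song["genre"])):
            self.scores[(state, level, value)] += delta

    def would_skip(self, state, song):
        """Predict a skip when accumulated feedback for this state is negative."""
        total = sum(self.scores[(state, level, value)]
                    for level, value in (("song", song["title"]),
                                         ("artist", song["artist"]),
                                         ("genre", song["genre"])))
        return total < 0

model = PreferenceModel()
track = {"title": "Song A", "artist": "Artist X", "genre": "ambient"}
model.record("active", track, skipped=True)
print(model.would_skip("active", track))  # True: skipped once while active
```

Note that the prediction is state-specific: a track skipped while running says nothing about whether the same track suits a resting listener.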
Initial experiments by the UMBC team included monitoring several test subjects of different genders, ethnicities, and athletic abilities while the subjects were lying down, sitting at a desk, walking, and running. The values obtained during those trials allowed the team to develop algorithms that accurately determine any user's activity level.
This research is in its infancy, and the XPod currently selects music based on activity rather than emotion. Future versions could expand the selection criteria and become more sensitive to the user's state, automatically supplying just the right music for any situation. In addition, advances in miniaturization could allow the XPod to become a small mobile music player.