
Listening With Machines: A shared approach

Published online by Cambridge University Press:  05 March 2015

Simon Emmerson*
Affiliation:
Music, Technology and Innovation Research Centre, De Montfort University, Leicester

Abstract

The aim of this article is to review the last twenty years of ‘machine listening’ to sound and music, and to suggest a balanced approach to the human–machine relationship for the future. How might machine listening, and ideas of data storage, retrieval and presentation drawn from MIR (music information retrieval), enhance both our embodied experience of the music and its more reflective study (analysis)? While the issues raised may be pertinent to almost any music, the focus remains on electroacoustic music in its many forms, whether for interactive composition, performance or analytical endeavour. I suggest a model of listening with – that is, alongside – machines in such a way that our skills may be enhanced. What can we share with machines to mutual advantage?

Type
Articles
Copyright
© Cambridge University Press 2015 

